aid: string
mid: string
abstract: string
related_work: string
ref_abstract: dict
title: string
text_except_rw: string
total_words: int64
1907.04824
2962342247
It is well known that size-based scheduling policies, which take into account job size (i.e., the time it takes to run a job), can perform very desirably in terms of both response time and fairness. Unfortunately, the requirement of knowing a priori the exact job size is a major obstacle which is frequently insurmountable in practice. Often, it is possible to get a coarse estimation of job size, but unfortunately analytical results with inexact job sizes are challenging to obtain, and simulation-based studies show that several size-based algorithms are severely impacted by job size estimation errors. For example, Shortest Remaining Processing Time (SRPT), which yields optimal mean sojourn time when job sizes are known exactly, can drastically underperform when it is fed inexact job sizes. Some algorithms have been proposed to better handle size estimation errors, but they are somewhat complex and this makes their analysis challenging. We consider Shortest Processing Time (SPT), a simplification of SRPT that skips the update of "remaining" job size and results in a preemptive algorithm that simply schedules the job with the shortest estimated processing time. When job size is inexact, SPT performs comparably to the best known algorithms in the presence of errors, while being definitely simpler. In this work, SPT is evaluated through simulation, showing near-optimal performance in many cases, with the hope that its simplicity can open the way to analytical evaluation even when inexact inputs are considered.
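The policy just described is easy to sketch. Below is a minimal, illustrative simulator (not code from the paper) of preemptive SPT on a single server: at every arrival or completion it runs the pending job with the smallest estimated size, and that estimate is never revised while the job runs. The toy workload at the bottom is made up.

```python
import heapq

def spt_mean_sojourn(jobs):
    """Preemptive SPT: always run the pending job with the smallest
    *estimated* size (the estimate is never updated while the job runs).
    `jobs` is a list of (arrival_time, true_size, estimated_size)."""
    jobs = sorted(jobs)                      # by arrival time
    pending = []                             # heap of (estimated_size, job index)
    remaining = {}                           # job index -> true work left
    t, i, sojourns = 0.0, 0, []
    while len(sojourns) < len(jobs):
        if not pending:                      # server idle: jump to next arrival
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(pending, (jobs[i][2], i))
            remaining[i] = jobs[i][1]
            i += 1
        est, j = pending[0]
        # run job j until it finishes or the next arrival may preempt it
        next_arrival = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(remaining[j], next_arrival - t)
        t += run
        remaining[j] -= run
        if remaining[j] <= 1e-12:
            heapq.heappop(pending)
            sojourns.append(t - jobs[j][0])  # sojourn time of job j
    return sum(sojourns) / len(sojourns)

# toy workload: (arrival, true size, estimated size)
workload = [(0.0, 10.0, 12.0), (1.0, 2.0, 1.5), (1.5, 3.0, 4.0), (2.0, 1.0, 0.8)]
print(spt_mean_sojourn(workload))
```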
While many works have studied scheduling for single-server queues, most of them focused on the extreme cases where job size is either completely unknown ( algorithms) or known perfectly. When the job size distribution is skewed---meaning that resources are occupied most of the time by a minority of large jobs---algorithms such as Least Attained Service (LAS) @cite_34 or multi-level queues @cite_17 @cite_18 can still perform well by prioritizing new jobs. LAS is evaluated experimentally in this work; for more details about it, see .
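For contrast with the size-based policies above, LAS needs no size information at all: at each instant it serves the job that has received the least service so far, which automatically favors newly arrived jobs. A coarse, time-stepped sketch of that rule (the fixed time step and the toy jobs are assumptions of this illustration, not taken from the cited works):

```python
def las_mean_sojourn(jobs, dt=0.01):
    """Least Attained Service: at every step, serve the arrived, unfinished
    job with the least attained service so far. Returns the mean sojourn time.
    `jobs` is a list of (arrival_time, size); no size estimates are needed."""
    attained = [0.0] * len(jobs)
    finish = [None] * len(jobs)
    t = 0.0
    while any(f is None for f in finish):
        active = [i for i, (a, s) in enumerate(jobs)
                  if a <= t and finish[i] is None]
        if active:
            i = min(active, key=lambda k: attained[k])   # LAS rule
            attained[i] += dt
            if attained[i] >= jobs[i][1]:
                finish[i] = t + dt
        t += dt
    return sum(finish[i] - jobs[i][0] for i in range(len(jobs))) / len(jobs)

print(las_mean_sojourn([(0.0, 5.0), (0.5, 1.0), (0.6, 1.0)]))
```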
{ "abstract": [ "Previous job scheduling studies indicate that providing rapid response to interactive jobs which place frequent but small demands, can reduce the overall system average response time [1], especially when the job size distribution is skewed (see [2] and references therein). Since the distribution of Internet flows is skewed, it is natural to design a network system that favors short file transfers through service differentiation. However, to maintain system scalability, detailed per-flow state such as flow length is generally not available inside the network. As a result, we usually resort to a threshold-based heuristic to identify and give preference to short flows. Specifically, packets from a new flow are always given the highest priority. However, the priority is reduced once the flow has transferred a certain amount of packets.In this paper, we use the MultiLevel (ML) feedback queue [3] to characterize this discriminatory system. However, the solution given in [3] is in the form of an integral equation, and to date the equation has been solved only for job size distribution that has the form of mixed exponential functions. We adopt an alternative approach, namely using a conservation law by Kleinrock [1], to solve for the average response time in such system. To that end, we approximate the average response time of jobs by a linear function in the job size and solve for the stretch (service slowdown) factors. We show by simulation that such approximation works well for job (flow) size distributions that possess the heavy-tailed property [2], although it does not work so well for exponential distributions.Due to the limited space available, in Section 2 we briefly describe the queueing model and summarize our approximation approach to solving for the average response time of the M G 1 ML queueing system. We conclude our paper in Section 3.", "Recent studies of Internet traffic have shown that flow size distributions often exhibit a high variability property in the sense that most of the flows are short and more than half of the total load is constituted by a small percentage of the largest flows. In the light of this observation, it is interesting to revisit scheduling policies that are known to favor small jobs in order to quantify the benefit for small and the penalty for large jobs. Among all scheduling policies that do not require knowledge of job size, the least attained service (LAS) scheduling policy is known to favor small jobs the most. We investigate the M G 1 LAS queue for both, load ? < 1 and ? = 1. Our analysis shows that for job size distributions with a high variability property, LAS favors short jobs with a negligible penalty to the few largest jobs, and that LAS achieves a mean response time over all jobs that is close to the mean response time achieved by SRPT.Finally, we implement LAS in the ns-2 network simulator to study its performance benefits for TCP flows. When LAS is used to schedule packets over the bottleneck link, more than 99 of the shortest flows experience smaller mean response times under LAS than under FIFO and only the largest jobs observe a negligible increase in response time. The benefit of using LAS as compared to FIFO is most pronounced at high load.", "" ], "cite_N": [ "@cite_18", "@cite_34", "@cite_17" ], "mid": [ "2018852485", "2150013062", "2325187468" ] }
Scheduling With Inexact Job Sizes: The Merits of Shortest Processing Time First
0
1907.04824
2962342247
It is well known that size-based scheduling policies, which take into account job size (i.e., the time it takes to run a job), can perform very desirably in terms of both response time and fairness. Unfortunately, the requirement of knowing a priori the exact job size is a major obstacle which is frequently insurmountable in practice. Often, it is possible to get a coarse estimation of job size, but unfortunately analytical results with inexact job sizes are challenging to obtain, and simulation-based studies show that several size-based algorithms are severely impacted by job size estimation errors. For example, Shortest Remaining Processing Time (SRPT), which yields optimal mean sojourn time when job sizes are known exactly, can drastically underperform when it is fed inexact job sizes. Some algorithms have been proposed to better handle size estimation errors, but they are somewhat complex and this makes their analysis challenging. We consider Shortest Processing Time (SPT), a simplification of SRPT that skips the update of "remaining" job size and results in a preemptive algorithm that simply schedules the job with the shortest estimated processing time. When job size is inexact, SPT performs comparably to the best known algorithms in the presence of errors, while being definitely simpler. In this work, SPT is evaluated through simulation, showing near-optimal performance in many cases, with the hope that its simplicity can open the way to analytical evaluation even when inexact inputs are considered.
were the first to consider estimation errors for size-based scheduling, and observed that existing algorithms perform well only when job sizes were rather accurately estimated. Further work @cite_4 @cite_31 has shown that most problems happen when the job size distribution is skewed and large jobs' sizes are under-estimated: in that case, these jobs eventually reach a very high priority and are not preempted when smaller jobs arrive, clogging the system. PSBS @cite_31 and MCSS @cite_11 are proposals that perform better on estimated job sizes; both are evaluated and compared to SPT in this work.
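The under-estimation issue described above is easy to reproduce. Size estimates are often modeled in this line of work as the true size multiplied by a log-normally distributed error factor; the distribution and the sigma value below are assumptions of this sketch, not taken from the cited papers. Feeding such estimates to an SPT-like simulator (e.g., the one sketched earlier) illustrates the clogging effect the paragraph above describes.

```python
import random, math

def estimate_sizes(true_sizes, sigma=1.0, seed=0):
    """Multiplicative log-normal error model (an assumption of this sketch):
    the estimate is the true size times exp(X), with X ~ Normal(0, sigma^2).
    sigma controls how coarse the estimates are; sigma = 0 gives exact sizes."""
    rng = random.Random(seed)
    return [s * math.exp(rng.gauss(0.0, sigma)) for s in true_sizes]

true_sizes = [1.0, 2.0, 1.5, 100.0, 1.2]        # one large job among small ones
estimates = estimate_sizes(true_sizes, sigma=2.0)
for s, e in zip(true_sizes, estimates):
    tag = "under-estimated" if e < s else "over-estimated"
    print(f"true={s:7.1f}  estimate={e:9.2f}  ({tag})")
```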
{ "abstract": [ "Size-based schedulers have very desirable performance properties: optimal or near-optimal response time can be coupled with strong fairness. Despite this, however, such systems are rarely implemented in practical settings, because they require knowing a priori the amount of work needed to complete jobs: this assumption is difficult to satisfy in concrete systems. It is definitely more likely to inform the system with an estimate of the job sizes, but existing studies point to somewhat pessimistic results if size-based policies use imprecise job size estimations. We take the goal of designing scheduling policies that explicitly deal with inexact job sizes . First, we prove that, in the absence of errors, it is always possible to improve any scheduling policy by designing a size-based one that dominates it: in the new policy, no jobs will complete later than in the original one. Unfortunately, size-based schedulers can perform badly with inexact job size information when job sizes are heavily skewed; we show that this issue, and the pessimistic results shown in the literature, are due to problematic behavior when large jobs are underestimated. Once the problem is identified, it is possible to amend size-based schedulers to solve the issue. We generalize FSP—a fair and efficient size-based scheduling policy—to solve the problem highlighted above; in addition, our solution deals with different job weights (that can be assigned to a job independently from its size). We provide an efficient implementation of the resulting protocol, which we call Practical Size-Based Scheduler (PSBS). Through simulations evaluated on synthetic and real workloads, we show that PSBS has near-optimal performance in a large variety of cases with inaccurate size information, that it performs fairly and that it handles job weights correctly. We believe that this work shows that PSBS is indeed pratical, and we maintain that it could inspire the design of schedulers in a wide array of real-world use cases.", "We study size-based schedulers, and focus on the impact of inaccurate job size information on response time and fairness. Our intent is to revisit previous results, which allude to performance degradation for even small errors on job size estimates, thus limiting the applicability of size-based schedulers. We show that scheduling performance is tightly connected to workload characteristics: in the absence of large skew in the job size distribution, even extremely imprecise estimates suffice to outperform size-oblivious disciplines. Instead, when job sizes are heavily skewed, known size-based disciplines suffer. In this context, we show - for the first time - the dichotomy of over-estimation versus under-estimation. The former is, in general, less problematic than the latter, as its effects are localized to individual jobs. Instead, under-estimation leads to severe problems that may affect a large number of jobs. We present an approach to mitigate these problems: our technique requires no complex modifications to original scheduling policies and performs very well. To support our claim, we proceed with a simulation-based evaluation that covers an unprecedented large parameter space, which takes into account a variety of synthetic and real workloads. 
As a consequence, we show that size-based scheduling is practical and outperforms alternatives in a wide array of use-cases, even in presence of inaccurate size information.", "When scheduling single server systems, Shortest Remaining Processing Time (SRPT) minimizes the number of jobs in the system at every point in time. However, a major limitation of SRPT is that it requires job processing times a priori. In practice, it is likely that only estimates of job processing times are available. This paper proposes a policy that schedules jobs with estimated job processing times. The proposed Modified Comparison Splitting Scheduling (MCSS) policy is compared to SRPT when scheduling both single and multi-server systems. In the single server system we observe from simulations that the proposed scheduling policy provides robustness that is crucial for achieving good performance. In contrast, in a multi-server system we observe that robustness to estimation errors is not dependent on the scheduling policy. However, as the number of servers grows, SRPT becomes preferable." ], "cite_N": [ "@cite_31", "@cite_4", "@cite_11" ], "mid": [ "1919895129", "2109380537", "2762457728" ] }
Scheduling With Inexact Job Sizes: The Merits of Shortest Processing Time First
0
1907.04824
2962342247
It is well known that size-based scheduling policies, which take into account job size (i.e., the time it takes to run a job), can perform very desirably in terms of both response time and fairness. Unfortunately, the requirement of knowing a priori the exact job size is a major obstacle which is frequently insurmountable in practice. Often, it is possible to get a coarse estimation of job size, but unfortunately analytical results with inexact job sizes are challenging to obtain, and simulation-based studies show that several size-based algorithms are severely impacted by job size estimation errors. For example, Shortest Remaining Processing Time (SRPT), which yields optimal mean sojourn time when job sizes are known exactly, can drastically underperform when it is fed inexact job sizes. Some algorithms have been proposed to better handle size estimation errors, but they are somewhat complex and this makes their analysis challenging. We consider Shortest Processing Time (SPT), a simplification of SRPT that skips the update of "remaining" job size and results in a preemptive algorithm that simply schedules the job with the shortest estimated processing time. When job size is inexact, SPT performs comparably to the best known algorithms in the presence of errors, while being definitely simpler. In this work, SPT is evaluated through simulation, showing near-optimal performance in many cases, with the hope that its simplicity can open the way to analytical evaluation even when inexact inputs are considered.
In the literature, some systems use job size estimation to drive scheduling. For batch computation systems, a part of each job is run to estimate its running time @cite_0 @cite_15; web servers use file size to estimate serving time @cite_5. More elaborate approaches predict the size of database queries @cite_2, MapReduce jobs @cite_24 @cite_29 @cite_20, deep learning training @cite_16, and the length of call-center calls @cite_27: approaches such as these can be used to inform size-based schedulers.
{ "abstract": [ "", "", "MapReduce and Hadoop represent an economically compelling alternative for efficient large scale data processing and advanced analytics in the enterprise. A key challenge in shared MapReduce clusters is the ability to automatically tailor and control resource allocations to different applications for achieving their performance goals. Currently, there is no job scheduler for MapReduce environments that given a job completion deadline, could allocate the appropriate amount of resources to the job so that it meets the required Service Level Objective (SLO). In this work, we propose a framework, called ARIA, to address this problem. It comprises of three inter-related components. First, for a production job that is routinely executed on a new dataset, we build a job profile that compactly summarizes critical performance characteristics of the underlying application during the map and reduce stages. Second, we design a MapReduce performance model, that for a given job (with a known profile) and its SLO (soft deadline), estimates the amount of resources required for job completion within the deadline. Finally, we implement a novel SLO-based scheduler in Hadoop that determines job ordering and the amount of resources to allocate for meeting the job deadlines. We validate our approach using a set of realistic applications. The new scheduler effectively meets the jobs' SLOs until the job demands exceed the cluster resources. The results of the extensive simulation study are validated through detailed experiments on a 66-node Hadoop cluster.", "", "We present an adaptive, random sampling algorithm for estimating the size of general queries. The algorithm can be used for any query D over a database D such that (1) for some n, the answer to Q can be partitioned into n disjoint subsets Q 1 , Q 2 , Q n , and (2) for 1≤i≤n, the size of Q i is bounded by some function b(D, Q), and (3) there is some algorithm by which we can compute the size of Q i , where i is chosen randomly. We consider the performance of the algorithm on three special cases of the algorithm: join queries, transitive closure queries, and general recursive Datalog queries.", "This article provides a detailed implementation study on the behavior of web serves that serve static requests where the load fluctuates over time (transient overload). Various external factors are considered, including WAN delays and losses and different client behavior models. We find that performance can be dramatically improved via a kernel-level modification to the web server to change the scheduling policy at the server from the standard FAIR (processor-sharing) scheduling to SRPT (shortest-remaining-processing-time) scheduling. We find that SRPT scheduling induces no penalties. In particular, throughput is not sacrificed and requests for long files experience only negligibly higher response times under SRPT than they did under the original FAIR scheduling.", "Size-based scheduling with aging has, for long, been recognized as an effective approach to guarantee fairness and near-optimal system response times. We present HFSP, a scheduler introducing this technique to a real, multi-server, complex and widely used system such as Hadoop. Size-based scheduling requires a priori job size information, which is not available in Hadoop: HFSP builds such knowledge by estimating it on-line during job execution. 
Our experiments, which are based on realistic workloads generated via a standard benchmarking suite, pinpoint at a significant decrease in system response times with respect to the widely used Hadoop Fair scheduler, and show that HFSP is largely tolerant to job size estimation errors.", "Deep learning workloads are common in today's production clusters due to the proliferation of deep learning driven AI services (e.g., speech recognition, machine translation). A deep learning training job is resource-intensive and time-consuming. Efficient resource scheduling is the key to the maximal performance of a deep learning cluster. Existing cluster schedulers are largely not tailored to deep learning jobs, and typically specifying a fixed amount of resources for each job, prohibiting high resource efficiency and job performance. This paper proposes Optimus, a customized job scheduler for deep learning clusters, which minimizes job training time based on online resource-performance models. Optimus uses online fitting to predict model convergence during training, and sets up performance models to accurately estimate training speed as a function of allocated resources in each job. Based on the models, a simple yet effective method is designed and used for dynamically allocating resources and placing deep learning tasks to minimize job completion time. We implement Optimus on top of Kubernetes, a cluster manager for container orchestration, and experiment on a deep learning cluster with 7 CPU servers and 6 GPU servers, running 9 training jobs using the MXNet framework. Results show that Optimus outperforms representative cluster schedulers by about 139 and 63 in terms of job completion time and makespan, respectively.", "We consider MapReduce workloads that are produced by analytics applications. In contrast to ad hoc query workloads, analytics applications are comprised of fixed data flows that are run over newly arriving data sets or on different portions of an existing data set. Examples of such workloads include document analysis indexing, social media analytics, and ETL (Extract Transform Load). Motivated by these workloads, we propose a technique that predicts the runtime performance for a fixed set of queries running over varying input data sets. Our prediction technique splits each query into several segments where each segment’s performance is estimated using machine learning models. These per-segment estimates are plugged into a global analytical model to predict the overall query runtime. Our approach uses minimal statistics about the input data sets (e.g., tuple size, cardinality), which are complemented with historical information about prior query executions (e.g., execution time). We analyze the accuracy of predictions for several segment granularities on both standard analytical benchmarks such as TPC-DS [17], and on several real workloads. We obtain less than 25 prediction errors for 90 of predictions." ], "cite_N": [ "@cite_29", "@cite_0", "@cite_24", "@cite_27", "@cite_2", "@cite_5", "@cite_15", "@cite_16", "@cite_20" ], "mid": [ "", "2623179677", "2002472616", "", "1972223681", "1986166139", "2090259550", "2798515322", "2241020437" ] }
Scheduling With Inexact Job Sizes: The Merits of Shortest Processing Time First
0
1907.04666
2956124191
Traditionally, the automatic recognition of human activities is performed with supervised learning algorithms on limited sets of specific activities. This work proposes to recognize recurrent activity patterns, called routines, instead of precisely defined activities. The modeling of routines is defined as a metric learning problem, and an architecture, called SS2S, based on sequence-to-sequence models is proposed to learn a distance between time series. This approach relies only on inertial data and is thus non-intrusive and preserves privacy. Experimental results show that a clustering algorithm provided with the learned distance is able to recover daily routines.
The traditional approach to compute distances between sequences (or time series, or trajectories) is to perform Dynamic Time Warping (DTW) @cite_5, which was introduced in 1978. Since then, several improvements of the algorithm have been published, notably a fast version by Salvador @cite_0. DTW is considered one of the best metrics to use for sequence classification @cite_21 when combined with @math -nearest neighbors. Recently, Abid @cite_26 proposed a neural network architecture to learn the parameters of a warping distance according to the Euclidean distances in a projection space. However, DTW, like other shape-based distances @cite_9, is only able to retrieve local similarities, when time series are relatively short and are merely shifted or not well aligned.
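For concreteness, the classic quadratic-time dynamic-programming form of DTW mentioned above can be written in a few lines (FastDTW @cite_0 replaces it with a multilevel approximation); this sketch is only meant to make the recurrence explicit:

```python
import numpy as np

def dtw_distance(x, y):
    """Plain O(len(x) * len(y)) dynamic time warping between two
    sequences of feature vectors (2D arrays: time x features)."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(x[i - 1] - y[j - 1])     # local distance
            cost[i, j] = d + min(cost[i - 1, j],        # insertion
                                 cost[i, j - 1],        # deletion
                                 cost[i - 1, j - 1])    # match
    return cost[n, m]

a = np.sin(np.linspace(0.0, 3.0, 50))[:, None]
b = np.sin(np.linspace(0.3, 3.3, 60))[:, None]          # shifted copy
print(dtw_distance(a, b))
```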
{ "abstract": [ "Measuring similarities between unlabeled time series trajectories is an important problem in domains as diverse as medicine, astronomy, finance, and computer vision. It is often unclear what is the appropriate metric to use because of the complex nature of noise in the trajectories (e.g. different sampling rates or outliers). Domain experts typically hand-craft or manually select a specific metric, such as dynamic time warping (DTW), to apply on their data. In this paper, we propose Autowarp, an end-to-end algorithm that optimizes and learns a good metric given unlabeled trajectories. We define a flexible and differentiable family of warping metrics, which encompasses common metrics such as DTW, Euclidean, and edit distance. Autowarp then leverages the representation power of sequence autoencoders to optimize for a member of this warping distance family. The output is a metric which is easy to interpret and can be robustly learned from relatively few trajectories. In systematic experiments across different domains, we show that Autowarp often outperforms hand-crafted trajectory similarity metrics.", "In almost every scientific field, measurements are performed over time. These observations lead to a collection of organized data called time series. The purpose of time-series data mining is to try to extract all meaningful knowledge from the shape of data. Even if humans have a natural capacity to perform these tasks, it remains a complex problem for computers. In this article we intend to provide a survey of the techniques applied for time-series data mining. The first part is devoted to an overview of the tasks that have captured most of the interest of researchers. Considering that in most cases, time-series task relies on the same components for implementation, we divide the literature depending on these common aspects, namely representation techniques, distance measures, and indexing methods. The study of the relevant literature has been categorized for each individual aspects. Four types of robustness could then be formalized and any kind of distance could then be classified. Finally, the study submits various research trends and avenues that can be explored in the near future. We hope that this article can provide a broad and deep understanding of the time-series data mining research field.", "Many algorithms have been proposed for the problem of time series classification. However, it is clear that one-nearest-neighbor with Dynamic Time Warping (DTW) distance is exceptionally difficult to beat. This approach has one weakness, however; it is computationally too demanding for many realtime applications. One way to mitigate this problem is to speed up the DTW calculations. Nonetheless, there is a limit to how much this can help. In this work, we propose an additional technique, numerosity reduction, to speed up one-nearest-neighbor DTW. While the idea of numerosity reduction for nearest-neighbor classifiers has a long history, we show here that we can leverage off an original observation about the relationship between dataset size and DTW constraints to produce an extremely compact dataset with little or no loss in accuracy. We test our ideas with a comprehensive set of experiments, and show that it can efficiently produce extremely fast accurate classifiers.", "Dynamic Time Warping (DTW) has a quadratic time and space complexity that limits its use to small time series. 
In this paper we introduce FastDTW, an approximation of DTW that has a linear time and space complexity. FastDTW uses a multilevel approach that recursively projects a solution from a coarser resolution and refines the projected solution. We prove the linear time and space complexity of FastDTW both theoretically and empirically. We also analyze the accuracy of FastDTW by comparing it to two other types of existing approximate DTW algorithms: constraints (such as Sakoe-Chiba Bands) and abstraction. Our results show a large improvement in accuracy over existing methods.", "This paper reports on an optimum dynamic progxamming (DP) based time-normalization algorithm for spoken word recognition. First, a general principle of time-normalization is given using time-warping function. Then, two time-normalized distance definitions, called symmetric and asymmetric forms, are derived from the principle. These two forms are compared with each other through theoretical discussions and experimental studies. The symmetric form algorithm superiority is established. A new technique, called slope constraint, is successfully introduced, in which the warping function slope is restricted so as to improve discrimination between words in different categories. The effective slope constraint characteristic is qualitatively analyzed, and the optimum slope constraint condition is determined through experiments. The optimized algorithm is then extensively subjected to experimental comparison with various DP-algorithms, previously applied to spoken word recognition by different research groups. The experiment shows that the present algorithm gives no more than about two-thirds errors, even compared to the best conventional algorithm." ], "cite_N": [ "@cite_26", "@cite_9", "@cite_21", "@cite_0", "@cite_5" ], "mid": [ "2952853797", "2081028405", "2039260438", "2144994235", "2128160875" ] }
Routine Modeling with Time Series Metric Learning
Human Activity Recognition (HAR) is a key part of several intelligent systems interacting with humans: smart home services [10], actigraphy and telemedicine, sport applications [3], etc. It is particularly useful for developing eHealth services and monitoring a person in their everyday life. It has so far mainly been performed in supervised contexts, with data annotated by experts or with the help of video recordings [8]. Not only is this approach time-consuming, but it also restricts the number of activities that can be recognized. It is associated with scripted datasets where subjects are asked to perform sequences of predefined tasks. This approach is thus unrealistic and difficult to set up for real environments where people do a vast variety of specific activities every day and can diverge from a preestablished behavior in many different ways (e.g., falls, accidents, contingencies of life, etc.).

Besides, most people present some kind of habitual behavior, called routines in this paper: the time they go to sleep, the morning ritual before going to work, meal times, etc. Results from behavioral psychology show that habits are hard and slow to form but also hard to break once well established [20]. From a data-driven perspective, Gonzalez et al. [14] observed the high regularity of human trajectories thanks to localization data and showed that "humans follow simple reproducible patterns". Routines produce distinguishable patterns in the data which, if not identifiable semantically, could be retrieved over time and so produce a relevant signature of the daily life of a person.

In this paper, we advocate for the modeling of such routines instead of activity recognition, and we propose a machine learning model able to identify routines in the daily life of a person. We want this system to be unintrusive and to respect people's privacy, and therefore to rely only on inertial data that can be gathered by a mobile phone or a smart watch. Moreover, routines do not need to be semantically characterized, and the model does not have to use any activity labels. The daily routines of a person may present characteristics of almost-periodic functions, i.e., periodic similarity with respect to a certain metric, which we propose to learn. To do so, we adapted the siamese neural network architecture proposed by Bromley et al. [7] to learn a distance from pairs of sequences, and we propose experiments to evaluate the quality of the learned metric on the problem of routine modeling.

The contributions of this paper are threefold:
1. a formulation of routine modeling as a metric learning problem by defining routines as almost-periodic functions,
2. an architecture to jointly learn a representation and a metric for time series using siamese sequence-to-sequence models, and an improvement of the loss functions to minimize,
3. results showing that the proposed architecture is effectively able to recover human routines from inertial data without using any activity labels.

The remainder of the paper is organized as follows. Section 2 is dedicated to the definition of routine modeling. Section 3 gives an overview of time series metrics. The proposed approach to recognize routines is presented in Section 4 and Section 5 presents experimental protocols and results. Finally, conclusions and perspectives are drawn in the last section.

Routine Modeling

A routine can be seen as a recurrent behavior of an individual's daily life. For example, a person roughly does the same thing in the same order when waking up or going to work.
These sequences of activities should produce distinguishable patterns in the data and can thus be used to monitor the life of an individual without knowing what he or she is doing exactly. The purpose of this work is to design an intelligent system which is able to recognize routines. To tackle routines with machine learning, we propose a starting principle similar to the one used in natural language processing: similar words appear in similar contexts. The context surrounding a word designates, for example, the previous and following words of the sentence. The context of a routine corresponds here to the moment of the day or of the week, etc., at which it generally happens. From this principle, we now seek to propose a mathematical formulation of routines which would include the notions of periodicity and similarity. The almost-periodic functions defined by Bohr [6] show similar properties:

Definition 1. Let $f : \mathbb{R} \to \mathbb{C}$ be a continuous function. $f$ is an almost-periodic function with respect to the uniform norm if $\forall \epsilon > 0$ there exists $T > 0$, called an $\epsilon$-almost period of $f$, such that:

$$\sup_{t} |f(t + T) - f(t)| \le \epsilon. \tag{1}$$

Obviously, the practical issue of routine modeling presents several divergences from this canonical definition: data are discrete time series and the periodicity of activities cannot be evaluated point-wise. Nevertheless, it is possible to adapt it to our problem. Let $S : \mathbb{N} \to \mathbb{R}^n$ be an ordered discrete sequence of vectors of dimension $n$. If the sampling frequency of $S$ is sufficiently high, it is possible to get a continuous approximation of it, by interpolation for example. We now consider a function $f_S$ of the following form, with a fixed interval length $l$:

$$f_S : \mathbb{R}^+ \to \mathbb{R}^{n \times l}, \quad t \mapsto [S(t) : S(t + l)[, \tag{2}$$

where $[S(t) : S(t + l)[$ is the set of vectors between $S(t)$ and $S(t + l)$ sampled at a certain frequency from the continuous approximation. $l$ is typically one or several hours: a sufficiently long period of time to absorb the little changes from one day to another (e.g., waking up a little earlier or later, etc.). The objective is to define almost-periodicity with respect to a distance $d$ between sequences, such that $\forall \epsilon > 0$, $\exists T > 0$:

$$d(f_S(t), f_S(t + T)) \le \epsilon. \tag{3}$$

The parameter $T$ can be a day, a week or a sufficiently long period of time to observe repetitions of behavior. The metric $d$ must be sufficiently flexible to handle the high variability of activities, which can be similar but somewhat different in their execution while exhibiting a similar pattern. We therefore postulate that $d$ may be learned for a specific user from his or her data, and we will now show that $f_S$ respects the condition established in Eq. (3) with respect to $d$. To learn $d$ when pairs of similar and dissimilar sequences are known, a Recurrent Neural Network (RNN) encoder parametrized by $W$, called $G_W$, can encode the sequences into vector representations and the contrastive loss [15] can be used to learn the metric from pairs of sequence encodings:

$$L(W, Y_1, Y_2, y) = (1 - y)\,\frac{1}{2}\,d(Y_1, Y_2)^2 + y\,\frac{1}{2}\,\max(0, m - d(Y_1, Y_2))^2, \tag{4}$$

where $y$ is equal to zero or one depending on whether the sequences are respectively similar or not, $Y_1$ and $Y_2$ are the last outputs of the RNN for both sequences, and $m > 0$ is a margin that defines the minimal distance between dissimilar samples. Several justifications arise for the use of a margin in metric learning. It is necessary to prevent a flat energy surface, according to energy-based learning theory [21], a situation where the energy is low for every input/output association, not only those in the training set. It also ensures that metric learning models are robust to noise [29].
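A direct transcription of the contrastive loss of Eq. (4), keeping the same convention (y = 0 for similar pairs, y = 1 for dissimilar ones); the random batch at the end only exercises the function and is not data from the paper:

```python
import torch

def contrastive_loss(y1, y2, y, margin=1.0):
    """Eq. (4): y = 0 for similar pairs, y = 1 for dissimilar pairs.
    y1, y2 are the encodings of the two sequences (batch x dim)."""
    d = torch.norm(y1 - y2, dim=1)                        # Euclidean distance
    similar_term = (1 - y) * 0.5 * d ** 2
    dissimilar_term = y * 0.5 * torch.clamp(margin - d, min=0.0) ** 2
    return (similar_term + dissimilar_term).mean()

# random batch of 8 pairs of 100-dimensional encodings
y1, y2 = torch.randn(8, 100), torch.randn(8, 100)
y = torch.randint(0, 2, (8,)).float()
print(contrastive_loss(y1, y2, y).item())
```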
As the learning process aims to minimize the distances between similar sequences, which are, by definition, shifted by a period $T$, we get, for a fixed $T > 0$ and $\forall t \in \mathbb{R}^+$:

$$d(G_W(f_S(t)), G_W(f_S(t + T))) \le m. \tag{5}$$

The margin $m$ can be chosen as close to zero as possible, and thus Eq. (5) identifies itself with Eq. (3). In practice, this optimization is only possible up to some point, depending on the model and the data. This argumentation suggests the interest of modeling routines with metric learning as, in this case, the main property of almost-periodic functions is fulfilled.

Siamese Sequence to Sequence Model

Feature Extraction Approach

The time series data obtained from inertial sensors may be very noisy and certainly vary for the same general activity (e.g., cooking). Robust feature representations of time series should therefore be learned before learning a metric. We thus propose (Fig. 1) to map each sequence to a vector using a Sequence to Sequence model [1,9,27]. The sequence is given as input to the first LSTM network (the encoder) to produce an output sequence; the last output vector is considered as the learned representation. This representation is then given to the second LSTM (the decoder), which tries to reconstruct the input sequence. Typically, an autoencoder is trained to reconstruct the original sequence with the Mean Squared Error (MSE):

$$\mathrm{MSE}(S, \hat{S}) = \frac{1}{l} \sum_{t=0}^{l-1} (S(t) - \hat{S}(t))^2, \tag{6}$$

where $S$ is the sequence and $\hat{S}$ the output sequence produced by the autoencoder from the vector. Similarly, we propose a new Reconstruction Loss (RL) based on cosine similarity, the Cosine Reconstruction Loss (CRL):

$$\mathrm{CRL}(S, \hat{S}) = l - \sum_{t=0}^{l-1} \cos(S(t), \hat{S}(t)). \tag{7}$$

CRL is close to 0 when the cosine similarity between each pair of vectors is close to one, i.e., when the vectors are collinear.

Metric Learning

Our architecture is a siamese network [7], that is to say it is constituted of two subnetworks sharing the same parameters $W$ (see Fig. 1). It takes as input pairs of similar or dissimilar sequences constituted with what are called equivalence constraints. The objective of our architecture is therefore to learn a metric which brings similar elements close and separates the dissimilar ones in the projection space. Three metric forms can generally be used: Euclidean, cosine or Mahalanobis [15,32,12]. The first two are not parametric and only a projection is learned. Learning a Mahalanobis-like metric implies not only learning the projection but also the matrix which will be used to compute the metric. A different Metric Loss (MeL) is proposed to learn each metric form. $Y_1$ and $Y_2$ are the representations learned by the autoencoder from the inputs of the siamese network. The first is the contrastive loss [15] (see Eq. (4)) to learn a Euclidean distance. The second is a cosine loss to learn a cosine distance:

$$L(W, Y_1, Y_2, y) = \begin{cases} 1 - \cos(Y_1, Y_2), & \text{if } y = 1 \\ \max(0, \cos(Y_1, Y_2) - m), & \text{if } y = -1. \end{cases} \tag{8}$$

Finally, Mahalanobis metric learning can be performed with the KISSME algorithm [19], which can be integrated into a NN [12]. This algorithm aims to maximize the dissimilarity log-likelihood of dissimilar pairs and conversely for similar pairs. The model learns a mapping under the form of a matrix $W$ and an associated metric matrix $M$ of the dimension of the projection space. $W$ is integrated into the network as a linear layer (just after the recurrent encoding layers in SS2S) trained with backpropagation, while $M$ is learned in a closed-form manner and updated after a fixed number of epochs with the following formula:

$$M = \mathrm{Proj}\big((W^T \Sigma_S W)^{-1} - (W^T \Sigma_D W)^{-1}\big). \tag{9}$$

$\Sigma_S$ and $\Sigma_D$ are the covariance matrices of similar and dissimilar elements in the projection space and $\mathrm{Proj}$ is the projection onto the positive semidefinite cone. We propose a modified version of the KISSME loss proposed in [12], which we found was easier to train, based on the contrastive loss (Eq. (4)):

$$L(W, Y_1, Y_2, y) = (1 - y)\,\frac{1}{2}\,(Y_1 - Y_2) M (Y_1 - Y_2)^T + y\,\frac{1}{2}\,\max\big(0, m - (Y_1 - Y_2) M (Y_1 - Y_2)^T\big). \tag{10}$$
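The closed-form update of Eq. (9) amounts to two inverse covariance matrices and a projection onto the positive semidefinite cone. A NumPy sketch follows; computing the covariances from pair differences (as in the original KISSME algorithm), the eigenvalue clipping used for the projection and the small regularization term are assumptions of this sketch, not details taken from the paper:

```python
import numpy as np

def kissme_metric(z_sim_a, z_sim_b, z_dis_a, z_dis_b, eps=1e-6):
    """Eq. (9)-style update: M = Proj(Sigma_S^-1 - Sigma_D^-1), computed from
    the projected encodings of similar and dissimilar pairs.
    Each argument is an array of shape (num_pairs, dim)."""
    def pair_cov(a, b):
        d = a - b
        return d.T @ d / len(d)
    dim = z_sim_a.shape[1]
    sigma_s = pair_cov(z_sim_a, z_sim_b) + eps * np.eye(dim)
    sigma_d = pair_cov(z_dis_a, z_dis_b) + eps * np.eye(dim)
    m = np.linalg.inv(sigma_s) - np.linalg.inv(sigma_d)
    # projection onto the positive semidefinite cone: clip negative eigenvalues
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.clip(w, 0.0, None)) @ v.T

def mahalanobis_sq(y1, y2, metric_matrix):
    """Squared Mahalanobis-like distance (Y1 - Y2) M (Y1 - Y2)^T used in Eq. (10)."""
    d = y1 - y2
    return d @ metric_matrix @ d.T
```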
Training Process

Two training processes can be considered for this architecture. Train the autoencoder and then "freeze" the network parameters to learn the metric, if it is parametric. Or, add the metric loss to the reconstruction loss and learn both tasks jointly. In this case, several difficulties could appear. Both losses must have similar magnitudes to have similar influences on the training process. The interaction between the two must also be considered. Both tasks could eventually have divergent or not completely compatible objectives. Indeed, we proposed the CRL with the a priori assumption that it should interact better with the learning of a cosine metric than MSE, due to the similar form of the two. This leads to our first hypothesis (H1):

Hypothesis 1. Learning a cosine distance along with a representation using CRL gives better results than with MSE.

Despite the possible issues, we hope that learning both tasks jointly should lead to the learning of more appropriate representations and thus to better results. This leads to our second hypothesis (H2):

Hypothesis 2. Jointly learning a metric and a representation with a sequence to sequence model gives better results than learning both separately.

Experiments

Experimental Setup

Dataset Presentation. Long-term unscripted data from wearable sensors are difficult to gather. The only dataset we found that could fit our requirements was obtained by Weiss et al. [30] and is called the Long Term Movement Monitoring dataset (LTMM). This dataset contains recordings of 71 elderly people who wore an accelerometer and a gyroscope for three days with no instructions. The dataset contains no labels. Fig. 2.a presents two days of data coming from one axis of the accelerometer: similar profiles can be observed at similar moments. Fig. 2.b presents the autocorrelation of the accelerometer signal: the maximum of 0.4 is reached for a lag of 24 h. These figures show the interest of this dataset, as the data show a periodic nature while presenting major visual differences. That said, the definition of periodicity that our algorithm is made to achieve is stronger, as it is based on a metric between extracted feature vectors, not just correlations of signal measurements. To constitute our dataset, we selected from the original dataset a user who did not remove the sensor during the three days, to avoid missing values. We set up a data augmentation process to artificially increase the quantity of data while preserving its characteristic structure. The dataset is sampled at 100 Hz and thus, to multiply the number of days by ten, each vector measurement at the same index modulo 10 is assigned to a new day (the order is respected). This new dataset has a sampling rate of 10 Hz, which means that one hour of data is a sequence of size 36000; we consider only non-overlapping sequences. Thus, to make the computation more tractable, polyphase filtering is applied to resample each sequence of one hour to a size of 100.
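The day-expansion and resampling steps just described might look roughly as follows; the array layout of the recordings and the use of SciPy's polyphase resampler are assumptions of this sketch, not details taken from the paper:

```python
import numpy as np
from scipy.signal import resample_poly

def expand_days(day, factor=10):
    """Split one day sampled at 100 Hz into `factor` surrogate days at
    100/factor Hz: surrogate day j keeps the samples whose index is j modulo
    `factor`, order preserved. `day` has shape (num_samples, num_channels)."""
    return [day[j::factor] for j in range(factor)]

def hourly_sequences(day_10hz, seq_len=100, samples_per_hour=36000):
    """Cut a 10 Hz day into non-overlapping one-hour windows and resample
    each window from 36000 time steps down to `seq_len` with polyphase filtering."""
    hours = []
    for start in range(0, len(day_10hz) - samples_per_hour + 1, samples_per_hour):
        window = day_10hz[start:start + samples_per_hour]
        hours.append(resample_poly(window, up=seq_len, down=samples_per_hour, axis=0))
    return hours

day = np.random.randn(2 * 3600 * 100, 6)        # fake 2 h of 6-channel data at 100 Hz
surrogate_days = expand_days(day)
hours = hourly_sequences(surrogate_days[0])
print(len(hours), hours[0].shape)               # 2 windows of shape (100, 6)
```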
Finally, equivalence constraints need to be defined in order to make similar and dissimilar pairs: two sequences of one hour, not from the same day but recorded at the same time, are considered similar; all other combinations are considered dissimilar. This approach therefore does not require semantic labels.

Model Parameters and Training Details. We describe here the hyperparameters used to train the models. The autoencoders are constituted of one layer of 100 LSTM neurons for the encoder and the decoder. For the KISSME version, the encodings are then projected into a 50-dimensional space, and the distance matrix, which thus also has dimension 50, was updated with the closed-form solution every 30 epochs. These parameters were determined after preliminary tests where deeper architectures and higher dimensional spaces were tested. Models are trained with 20 similar pairs for each time slot and the same total number of dissimilar pairs, for a total of 960 training pairs coming from 12 different days of data. The training was stopped based on the loss computed on the validation set, which contains three days of data, i.e., 72 sequences. The testing set is composed of 15 days, or 360 sequences. The data in the training set were rescaled between -1 and 1 and the same parameters were applied to the validation and testing sets. A learning rate of 0.001 was used and divided by 10 if the loss did not decrease anymore during 10 epochs. A batch size of 50, a margin of 1 for the contrastive loss and of 0.5 for the cosine loss were chosen. We also observed that changing 30% of the values of the training sequences to zero slightly improved the results, as suggested in [28].

Experimental Results and Discussion

Since the only available labels are time indications and to keep minimal supervision, the evaluation metrics rely on clustering. We report average values over 20 tests for 4 clustering evaluation metrics. Completeness assesses whether sequences produced at the same hour are in the same clusters. Silhouette describes the cluster shapes, i.e., whether they are dense and well-separated. Normalized Mutual Information (NMI) is a classical metric for clustering and measures how two clustering assignments concur, the second being the time slots. Adjusted Mutual Information (AMI) is the adjusted-for-chance version of NMI. A spectral clustering into 5 clusters is performed, with the goal not to find the precise number of clusters maximizing the metrics but to choose a number which will make coherent and interpretable routines of the day appear, namely sleep moments, meals and other daily activities performed every day. Finally, to make our distances usable by the spectral clustering, they are converted to kernel functions. The following transformation was applied to the Euclidean, Mahalanobis and DTW distances: $\exp(-\mathrm{dist} \cdot \gamma)$, where $\gamma$ is the inverse of the length of an encoding vector (respectively, the number of features times the time length of the sequence for DTW). 1 was added to the cosine similarity so it becomes a kernel.

(Fig. 2: (a) two days of accelerometer data; (b) input signal autocorrelation for accelerometer data.)
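Putting the equivalence constraints, the kernel conversion and the evaluation metrics together, a minimal scikit-learn evaluation loop could look like the sketch below; the encodings and the day/hour bookkeeping are placeholder data, and only the kernel transformation and the gamma convention follow the text above:

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import (completeness_score, silhouette_score,
                             normalized_mutual_info_score, adjusted_mutual_info_score)

def make_pairs(encodings, days, hours, pairs_per_slot=20, seed=0):
    """Equivalence constraints: same hour on different days -> similar (y=0),
    anything else -> dissimilar (y=1). `days` and `hours` are integer labels
    per sequence."""
    rng = np.random.default_rng(seed)
    idx = np.arange(len(encodings))
    pairs = []
    for h in np.unique(hours):
        same_hour = idx[hours == h]
        for _ in range(pairs_per_slot):
            i, j = rng.choice(same_hour, size=2, replace=False)
            if days[i] != days[j]:
                pairs.append((i, j, 0))                   # similar pair
            k = rng.choice(idx[hours != h])
            pairs.append((i, k, 1))                       # dissimilar pair
    return pairs

def evaluate_clustering(encodings, hours, n_clusters=5):
    """exp(-gamma * d) kernel from Euclidean distances, spectral clustering,
    then scoring against the hour-of-day labels."""
    gamma = 1.0 / encodings.shape[1]                      # inverse encoding length
    d = np.linalg.norm(encodings[:, None] - encodings[None, :], axis=-1)
    kernel = np.exp(-gamma * d)
    labels = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                                random_state=0).fit_predict(kernel)
    return {"completeness": completeness_score(hours, labels),
            "NMI": normalized_mutual_info_score(hours, labels),
            "AMI": adjusted_mutual_info_score(hours, labels),
            "silhouette": silhouette_score(d, labels, metric="precomputed")}

enc = np.random.randn(72, 100)                            # 3 days x 24 hourly encodings
days, hours = np.repeat(np.arange(3), 24), np.tile(np.arange(24), 3)
print(len(make_pairs(enc, days, hours)))
print(evaluate_clustering(enc, hours))
```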
Evaluation of Cosine Reconstruction Loss. The performance of the CRL on LTMM is first evaluated. An experiment was performed by jointly training models for Euclidean or cosine distances with CRL or MSE. The results are reported in Table 1. An asterisk means the average results are significantly higher according to a Welch's test. The results demonstrate a significant improvement of the proposed CRL over MSE when trained with the cosine similarity for Completeness, NMI and AMI. For the Silhouette score, better results are obtained with the MSE. However, the standard deviations are large, and this improvement is thus not significant. With the Euclidean distance, the same improvement is not realized, with a slight advantage of MSE over CRL. These results confirm our hypothesis H1 that it is more appropriate to learn a cosine distance with CRL. They also suggest a positive interaction between the two, as the same effect could not be observed with the Euclidean distance. We then use CRL in the remainder of the paper.

Evaluation of the SS2S Architecture. Next, we investigated the benefit of the SS2S architecture over DTW and Siamese LSTM (SLSTM) [24], as well as the interest of jointly learning the encoder-decoder and the metric, on the LTMM dataset. Results are presented in Table 2. To test DTW, the best radius was selected on the validation set and the spectral clustering was performed using DTW as the kernel. Although Completeness, NMI and AMI are higher than for every SS2S architecture except one, we observe a negative silhouette value, which indicates a poor quality of the clustering and seems to confirm that shape-based distances are indeed not suitable for this type of data. Concerning the encoding architecture, SS2S gives overall better results than SLSTM, and the best results are achieved by using the disjoint version of KISSME, with a completeness of 0.983 and an NMI of 0.619. These results are not surprising, as KISSME uses a parametric distance which can therefore be more adapted to the data. For the silhouette score, cosine distances performed best, i.e., they learned more compact and well-defined clusters. We also note that disjoint versions of the architectures performed better than the joint versions, thus invalidating our hypothesis H2. To investigate the reasons for this difference, which could be due to the autoencoder not being learned properly, Table 3 reports the average best Reconstruction Errors on the Validation set (REV). The lowest errors are systematically achieved when the encoder is learned alone before the metric, therefore supporting the hypothesis that learning the metric prevents the autoencoder from being trained to its full potential. This explains why the joint learning does not perform best. For the CRL, results are closer than for MSE, suggesting why this reconstruction loss is easier to learn jointly. Finally, Fig. 3 shows clustering representations for two approaches: DTW and disjoint KISSME. The clusterings reflect the sequences of one hour that were found similar across the days on the testing set. If these sequences are at the same hour or cover the same time slots, we can argue it is a recurrent activity (or succession of activities) and therefore a routine. The disjoint KISSME version exhibits a more coherent discrimination of routines, which, according to the 4 evaluation metrics reported, was predictable. Several misclassified situations seem to appear for DTW, which is coherent with the negative silhouette score.
High regularities can be observed, and it is actually possible to make interpretations: yellow probably corresponds to sleeping moments and nights, and purple to activities during the day. Other clusters seem to correspond to activities in the evening or during meal times. Consequently, the SS2S architecture is able to learn a metric which clusters and produces a modeling of the daily routines of a person without labels. In this example the clusters are coarse; the granularity of this analysis could be improved simply by working with sequences of half an hour or even shorter and producing more clusters.

(Table 3: Average reconstruction errors on the validation set of LTMM. Fig. 3: (a) DTW [26]; (b) SS2S and KISSME, disjoint learning.)

Conclusions and perspectives

We presented a metric learning model to cluster routines in the daily behavior of individuals. By defining routines as almost-periodic functions, we have been able to study them in a metric learning framework. We thus proposed an approach which combines metric learning and representation learning of sequences. Our proposed architecture relies on no labels and is learned only from time slots. A new reconstruction loss was also proposed to be learned jointly with a cosine metric, and it showed better results than MSE in this case. Our SS2S architecture with KISSME and a disjoint learning process achieved stimulating results, with a completeness of 0.983 and an NMI of 0.619. A visual evaluation analysis makes it possible to interpret the recurrent behaviors discovered by the architecture. However, these results invalidate in this case our second hypothesis that combining metric learning and sequence to sequence learning would give better results. In the future, we will investigate joint learning of representations and metrics more deeply. Several architecture improvements could also be made, for example: working with triplets instead of pairs, or replacing the LSTM with a convolutional neural network [13] or an echo state network [17]. This last approach works quite differently from a normal neural network and would require subsequent modifications of the architecture. Finally, we will study in further detail the link between almost-periodic functions and metric learning.
3,858
1901.00439
2907303408
Twitter has been a prominent social media platform for mining population-level health data and accurate clustering of health-related tweets into topics is important for extracting relevant health insights. In this work, we propose deep convolutional autoencoders for learning compact representations of health-related tweets, further to be employed in clustering. We compare our method to several conventional tweet representation methods including bag-of-words, term frequency-inverse document frequency, Latent Dirichlet Allocation and Non-negative Matrix Factorization with 3 different clustering algorithms. Our results show that the clustering performance using proposed representation learning scheme significantly outperforms that of conventional methods for all experiments of different number of clusters. In addition, we propose a constraint on the learned representations during the neural network training in order to further enhance the clustering performance. All in all, this study introduces utilization of deep neural network-based architectures, i.e., deep convolutional autoencoders, for learning informative representations of health-related tweets.
Devising efficient representations of tweets, i.e., features, for performing clustering has been studied extensively. The most frequently used features for representing the text in tweets as numerical vectors are bag-of-words (BoWs) and term frequency-inverse document frequency (tf-idf) features @cite_2 @cite_42 @cite_61 @cite_5 @cite_43. Both of these feature extraction methods are based on word occurrence counts and eventually result in a sparse (most elements being zero) document-term matrix. Proposed algorithms for clustering tweets into topics include variants of hierarchical, density-based and centroid-based clustering methods, the k-means algorithm being the most frequently used one @cite_42 @cite_43 @cite_24.
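As a point of reference for the pipeline described above (not code from any of the cited works), a tf-idf representation followed by k-means takes only a few lines with scikit-learn; the example tweets are placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tweets = ["new study links sleep loss to heart disease",
          "flu season hits earlier than expected this year",
          "how to keep your heart healthy after 50",
          "cdc reports rise in flu cases across the state"]

tfidf = TfidfVectorizer(stop_words="english")      # sparse document-term matrix
x = tfidf.fit_transform(tweets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(x)
print(labels)
```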
{ "abstract": [ "Abstract Social media data carries abundant hidden occurrences of real-time events. In this paper, a novel methodology is proposed for detecting and trending events from tweet clusters that are discovered by using locality sensitive hashing (LSH) technique. Key challenges include: (1) construction of dictionary using incremental term frequency–inverse document frequency (TF–IDF) in high-dimensional data to create tweet feature vector, (2) leveraging LSH to find truly interesting events, (3) trending the behavior of event based on time, geo-locations and cluster size, and (4) speed-up the cluster-discovery process while retaining the cluster quality. Experiments are conducted for a specific event and the clusters discovered using LSH and K-means are compared with group average agglomerative clustering technique.", "In the emerging field of micro-blogging and social communication services, users post millions of short messages every day. Keeping track of all the messages posted by your friends and the conversation as a whole can become tedious or even impossible. In this paper, we presented a study on automatically clustering and classifying Twitter messages, also known as “tweets”, into different categories, inspired by the approaches taken by news aggregating services like Google News. Our results suggest that the clusters produced by traditional unsupervised methods can often be incoherent from a topical perspective, but utilizing a supervised methodology that utilize the hash-tags as indicators of topics produce surprisingly good results. We also offer a discussion on temporal effects of our methodology and training set size considerations. Lastly, we describe a simple method of finding the most representative tweet in a cluster, and provide an analysis of the results.", "Accurate depression diagnosis is a very complex long-term research problem. The current conversation oriented depression diagnosis between a medical doctor and a person is not accurate due to the limited number of known symptoms. To discover more depression symptoms, our research work focuses on extracting entity related to depression from social media such as social networks and web blogs. There are two major advantages of applying text mining tools to new depression symptoms extraction. Firstly, people share their feelings and knowledge on social medias. Secondly, social media produce big volume of data that can be used for research purpose. In our research, we collect data from social media initially, pre-process and analyze the data, finally extract depression symptoms.", "An unsupervised multilingual approach to identify topics on Twitter is proposed.Localised language can be leveraged for identifying relevant and important topics.Joint term ranking coupled with DPMM clustering consistently performed well.Multilingual sentiment analysis is essential to understand sentiment on the ground.Topics coverage of social media and main stream media does not always stay the same. Social media data can be valuable in many ways. However, the vast amount of content shared and the linguistic variants of languages used on social media are making it very challenging for high-value topics to be identified. In this paper, we present an unsupervised multilingual approach for identifying highly relevant terms and topics from the mass of social media data. 
This approach combines term ranking, localised language analysis, unsupervised topic clustering and multilingual sentiment analysis to extract prominent topics through analysis of Twitter's tweets from a period of time. It is observed that each of the ranking methods tested has their strengths and weaknesses, and that our proposed Joint ranking method is able to take advantage of the strengths of the ranking methods. This Joint ranking method coupled with an unsupervised topic clustering model is shown to have the potential to discover topics of interest or concern to a local community. Practically, being able to do so may help decision makers to gauge the true opinions or concerns on the ground. Theoretically, the research is significant as it shows how an unsupervised online topic identification approach can be designed without much manual annotation effort, which may have great implications for future development of expert and intelligent systems.", "As microblogging grows in popularity, services like Twitter are coming to support information gathering needs above and beyond their traditional roles as social networks. But most users’ interaction with Twitter is still primarily focused on their social graphs, forcing the often inappropriate conflation of “people I follow” with “stuff I want to read.” We characterize some information needs that the current Twitter interface fails to support, and argue for better representations of content for solving these challenges. We present a scalable implementation of a partially supervised learning model (Labeled LDA) that maps the content of the Twitter feed into dimensions. These dimensions correspond roughly to substance, style, status, and social characteristics of posts. We characterize users and tweets using this model, and present results on two information consumption oriented tasks.", "Abstract Depression is a common chronic disorder. It often goes undetected due to limited diagnosis methods and brings serious results to public and personal health. Former research detected geographic pattern for depression using questionnaires or self-reported measures of mental health, this may induce same-source bias. Recent studies use social media for depression detection but none of them examines the geographic patterns. In this paper, we apply GIS methods to social media data to provide new perspectives for public health research. We design a procedure to automatically detect depressed users in Twitter and analyze their spatial patterns using GIS technology. This method can improve diagnosis techniques for depression. It is faster at collecting data and more promptly at analyzing and providing results. Also, this method can be expanded to detect other major events in real-time, such as disease outbreaks and earthquakes." ], "cite_N": [ "@cite_61", "@cite_42", "@cite_24", "@cite_43", "@cite_2", "@cite_5" ], "mid": [ "2015503141", "2486235263", "2618742246", "2597761542", "2137958601", "2082609157" ] }
Deep Representation Learning for Clustering of Health Tweets
Social media plays an important role in health informatics, and Twitter has been one of the most influential social media channels for mining population-level health insights [1]-[3]. These insights range from forecasting influenza epidemics [4] to predicting adverse drug reactions [5]. A notable challenge posed by the short length of Twitter messages is the categorization of tweets into topics, both in a supervised manner, i.e., topic classification, and in an unsupervised manner, i.e., clustering. Classification of tweets into topics has been studied extensively [6]-[8]. Even though text classification algorithms can reach significant accuracy levels, supervised machine learning approaches require annotated data, i.e., topic categories to learn from for classification. On the other hand, annotated data is not always available, as the annotation process is burdensome and time-consuming. In addition, discussions in social media evolve rapidly with recent trends, rendering Twitter a dynamic environment with ever-changing topics. Therefore, unsupervised approaches are essential for mining health-related information from Twitter. Proposed methods for clustering tweets employ conventional text clustering pipelines involving preprocessing applied to raw text strings, followed by feature extraction, which is in turn followed by a clustering algorithm [9]-[11]. Performance of such approaches depends highly on the feature extraction step, for which careful engineering and domain knowledge are required [12]. Recent advancements in machine learning research, i.e., deep neural networks, enable efficient representation learning from raw data in a hierarchical manner [13], [14]. Several natural language processing (NLP) tasks involving Twitter data have benefited from deep neural network-based approaches, including sentiment classification of tweets [15], predicting potential suicide attempts from Twitter [16] and simulating epidemics from Twitter [17]. In this work, we propose deep convolutional autoencoders (CAEs) for obtaining efficient representations of health-related tweets in an unsupervised manner. We validate our approach on a publicly available Twitter dataset by comparing the performance of our approach and conventional feature extraction methods on 3 different clustering algorithms. Furthermore, we propose a constraint on the learned representations during neural network training in order to further improve the clustering performance. We show that the proposed deep neural network-based representation learning method outperforms conventional methods in terms of clustering performance in experiments with varying numbers of clusters. III. METHODS A. Dataset For this study, a publicly available dataset is used [46]. The dataset of tweets was collected using the Twitter API and was initially introduced by Karami et al. [47]. The earliest tweet dates back to 13 June 2011, while the latest one has a timestamp of 9 April 2015. The dataset consists of 63,326 tweets in the English language, collected from the Twitter channels of 16 major health news agencies. The list of health news channels and the number of tweets in the dataset from each channel can be examined in Table I. The outlook of a typical tweet from the dataset can be examined in Figure 1.
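A minimal loading sketch along the following lines could reproduce the per-channel tweet and word counts summarized in Table I. The file layout assumed here (one pipe-delimited text file per news agency in a folder named health_tweets, each line holding a tweet id, a timestamp and the tweet text) is an assumption about the public dataset, not something stated in the text.

```python
from pathlib import Path

def load_channel(path):
    """Read one channel file; assumes 'id|timestamp|tweet text' per line."""
    tweets = []
    for line in path.read_text(encoding="utf-8", errors="ignore").splitlines():
        parts = line.split("|", 2)  # split at most twice: the tweet text may itself contain '|'
        if len(parts) == 3:
            tweets.append(parts[2].strip())
    return tweets

corpus = {f.stem: load_channel(f) for f in sorted(Path("health_tweets").glob("*.txt"))}

for channel, tweets in corpus.items():
    words = [w for t in tweets for w in t.split()]
    print(f"{channel:20s} tweets={len(tweets):6d} words={len(words):7d} "
          f"unique={len(set(words)):6d} mean_words={len(words) / max(len(tweets), 1):.1f}")
```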
For every tweet, the raw data consists of the tweet text, in most cases followed by a URL to the original news article of the particular news source. This URL string, if present, is removed from each tweet as it does not carry any natural language information. As Twitter allows several ways for users to interact, such as retweeting or mentioning, these actions appear in the raw text as well. For retweets, an indicator string "RT" appears as a prefix in the raw data, and for user mentions, a string of the form "@username" appears in the raw data. These two tokens are removed as well. In addition, hashtags are converted to plain tokens by removing the "#" sign appearing before them (e.g. <#pregnancy> becomes <pregnancy>). The number of words, number of unique words and mean word counts for each Twitter channel can also be examined in Table I. The longest tweet consists of 27 words. B. Conventional Representations For representing tweets, 5 conventional representation methods are used as baselines. 1) Word frequency features: For word occurrence-based representations of tweets, conventional tf-idf and BoWs are used to obtain the document-term matrix of size $N \times P$, in which each row corresponds to a tweet and each column corresponds to a unique word/token, i.e., $N$ data points and $P$ features. As the document-term matrix obtained from tf-idf or BoWs features is extremely sparse and consequently redundant across many dimensions, dimensionality reduction and topic modeling to a lower dimensional latent space are performed by the methods below. 2) Principal Component Analysis (PCA): PCA is used to map the word frequency representations from the original feature space to a lower dimensional feature space by an orthogonal linear transformation in such a way that the first principal component has the highest possible variance and, similarly, each succeeding component has the highest variance possible while being orthogonal to the preceding components. Our PCA implementation has a time complexity of $O(NP^2 + P^3)$. 3) Truncated Singular Value Decomposition (t-SVD): Standard SVD and t-SVD are commonly employed dimensionality reduction techniques in which a matrix is reduced or approximated by a low-rank decomposition. The time complexities of SVD and t-SVD for $S$ components are $O(\min(NP^2, N^2P))$ and $O(N^2 S)$, respectively (depending on the implementation). Contrary to PCA, t-SVD can be applied to sparse matrices efficiently as it does not require mean-centering of the data. When the data matrix is obtained from BoWs or tf-idf representations, as in our case, the technique is also known as Latent Semantic Analysis. 4) LDA: Our LDA implementation employs the online variational Bayes algorithm introduced by Hoffman et al., which uses stochastic optimization to maximize the objective function of the topic model [48]. 5) NMF: NMF finds two non-negative matrices whose product approximates the non-negative document-term matrix, and its objective allows regularization. Our implementation did not employ any regularization, and the divergence function is set to the squared error, i.e., the Frobenius norm. C. Representation Learning We propose 2D convolutional autoencoders for extracting compact representations of tweets from their raw form in a highly non-linear fashion. In order to turn a given tweet into a 2D structure to be fed into the CAE, we extract the word vectors of each word using word embedding models, i.e., for a given tweet, $t$, consisting of $W$ words, the 2D input is $I_t \in \mathbb{R}^{W \times D}$, where $D$ is the embedding vector dimension.
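The preprocessing steps and word-frequency baselines described above can be sketched with scikit-learn as follows. The regular expressions, the placeholder variable raw_tweets and the choice of fitting t-SVD/NMF on tf-idf and LDA on raw counts are illustrative assumptions, not the authors' exact implementation.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import TruncatedSVD, NMF, LatentDirichletAllocation

def clean(tweet):
    """Apply the preprocessing described above: drop URLs, the 'RT' prefix, @mentions and '#'."""
    tweet = re.sub(r"http\S+", "", tweet)   # remove URLs
    tweet = re.sub(r"^RT\s+", "", tweet)    # remove retweet indicator
    tweet = re.sub(r"@\w+", "", tweet)      # remove user mentions
    return tweet.replace("#", "").strip()   # keep hashtag words as plain tokens

texts = [clean(t) for t in raw_tweets]      # raw_tweets: list of tweet strings, e.g. loaded as in the previous sketch

tfidf = TfidfVectorizer(lowercase=True)
X_tfidf = tfidf.fit_transform(texts)        # sparse N x P document-term matrix
X_bow = CountVectorizer(lowercase=True).fit_transform(texts)

n_features = 24                             # fixed feature size used for all compared methods
X_svd = TruncatedSVD(n_components=n_features).fit_transform(X_tfidf)              # LSA
X_nmf = NMF(n_components=n_features).fit_transform(X_tfidf)                       # Frobenius loss by default
X_lda = LatentDirichletAllocation(n_components=n_features,
                                  learning_method="online").fit_transform(X_bow)  # online variational Bayes
```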
We compare 4 different word embeddings, namely word2vec, GloVe, fastText and BERT, with embedding vector dimensions of 300, 300, 300 and 768, respectively. We set the maximum sequence length to 32, i.e., for tweets with fewer words, the input matrix is padded with zeros. As word2vec and GloVe embeddings cannot handle out-of-vocabulary words, such cases are represented as a vector of zeros. The process of extracting word vector representations of a tweet to form the 2D input matrix can be examined in Figure 1. The CAE architecture can be considered as consisting of 2 parts, i.e., the encoder and the decoder. The encoder, $f_{enc}(\cdot)$, is the part of the network that compresses the input, $I$, into a latent space representation, $U$, and the decoder, $f_{dec}(\cdot)$, aims to reconstruct the input from the latent space representation (see equation 1). In essence, $U = f_{enc}(I) = f_L(f_{L-1}(\dots f_1(I)))$ (1), where $L$ is the number of layers in the encoder part of the CAE. The encoder in the proposed architecture consists of three 2D convolutional layers with 64, 32 and 1 filters, respectively. The decoder follows the same symmetry with three convolutional layers with 1, 32 and 64 filters, respectively, and an output convolutional layer with a single filter (see Figure 1). All convolutional layers have a kernel size of (3×3) and a Rectified Linear Unit (ReLU) activation function, except the output layer, which employs a linear activation function. Each convolutional layer in the encoder is followed by a 2D MaxPooling layer and, similarly, each convolutional layer in the decoder is followed by a 2D UpSampling layer, serving as an inverse operation (with the same parameters). The pooling sizes are (2×5), (2×5) and (2×2), respectively, for the architectures in which word2vec, GloVe and fastText embeddings are employed. With this configuration, an input tweet of size 32 × 300 (corresponding to maximum sequence length × embedding dimension, $D$) is downsampled to a size of 4 × 6 at the output of the encoder (bottleneck layer). As BERT word embeddings have word vectors of fixed size 768, the pooling sizes are chosen to be (2×8), (2×8) and (2×2), respectively, in that case. In summary, a representation of 4 × 6 = 24 values is learned for each tweet through the encoder, e.g., for fastText embeddings the flow of dimensions after each encoder block is: 32 × 300 → 16 × 60 → 8 × 12 → 4 × 6. In numerous NLP tasks, an Embedding Layer is employed as the first layer of the neural network, which can be initialized with the word embedding matrix in order to incorporate the embedding process into the architecture itself instead of extracting the embeddings manually. In our case, this was not possible because no inverse of an embedding layer exists for the decoder (unlike the pairing of MaxPooling and UpSampling layers), as an embedding layer is not differentiable. Training of autoencoders aims to minimize the reconstruction error/loss, i.e., the deviation of the reconstructed output from the input. The $L_2$-loss, or mean squared error (MSE), is chosen as the loss function. In autoencoders, minimizing the $L_2$-loss is equivalent to maximizing the mutual information between the reconstructed inputs and the original ones [49]. In addition, from a probabilistic point of view, minimizing the $L_2$-loss under a Gaussian noise assumption is the same as maximizing the likelihood of the data given the parameters, corresponding to a maximum likelihood estimator.
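A hedged Keras sketch of the encoder-decoder described above for the 300-dimensional embeddings is given below; the filter counts, kernel sizes and pooling sizes come from the text, while the 'same' padding is an assumption needed to obtain the stated 32 × 300 → 4 × 6 dimension flow.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, EMB_DIM = 32, 300  # word2vec / GloVe / fastText case

inputs = layers.Input(shape=(SEQ_LEN, EMB_DIM, 1))
# Encoder: three convolutional blocks with 64, 32 and 1 filters.
x = layers.Conv2D(64, (3, 3), activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D((2, 5))(x)            # 32x300 -> 16x60
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
x = layers.MaxPooling2D((2, 5))(x)            # 16x60  -> 8x12
x = layers.Conv2D(1, (3, 3), activation="relu", padding="same")(x)
encoded = layers.MaxPooling2D((2, 2))(x)      # 8x12   -> 4x6 bottleneck (24 values per tweet)
# Decoder: mirror of the encoder plus a single-filter output layer.
x = layers.Conv2D(1, (3, 3), activation="relu", padding="same")(encoded)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
x = layers.UpSampling2D((2, 5))(x)
x = layers.Conv2D(64, (3, 3), activation="relu", padding="same")(x)
x = layers.UpSampling2D((2, 5))(x)
outputs = layers.Conv2D(1, (3, 3), activation="linear", padding="same")(x)

cae = models.Model(inputs, outputs)
encoder = models.Model(inputs, encoded)       # used later to extract the 24-d representations
cae.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5), loss="mse")
```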
The optimizer for the autoencoder training is chosen to be Adam due to its fast convergence [50]. The learning rate for the optimizer is set to $10^{-5}$ and the batch size for the training is set to 32. A random split into 80% training and 20% validation sets is performed for monitoring convergence. The maximum number of training epochs is set to 50. D. $L_2$-norm Constrained Representation Learning Certain constraints on neural network weights are commonly employed during training in order to reduce overfitting, also known as regularization. Such constraints include $L_1$ regularization, $L_2$ regularization, orthogonal regularization, etc. Even though regularization is a common practice, standard training of neural networks does not inherently impose any constraints on the learned representations (activations), $U$, other than the ones imposed by the activation functions (e.g. ReLUs resulting in non-negative outputs). Recent advancements in computer vision research show that constraining the learned representations can enhance the effectiveness of representation learning, consequently increasing the clustering performance [51], [52]. minimize $L = \frac{1}{N}\lVert I - f_{dec}(f_{enc}(I)) \rVert_2^2$ subject to $\lVert f_{enc}(I) \rVert_2^2 = 1$ (2) We propose an $L_2$-norm constraint on the learned representations out of the bottleneck layer, $U$. Essentially, this is a hard constraint introduced during neural network training that results in learned features with unit $L_2$ norm out of the bottleneck layer (see equation 2, where $N$ is the number of data points). Training a deep convolutional autoencoder with such a constraint has been shown to be much more effective for image data than applying $L_2$ normalization on the learned representations after training [52]. To the best of our knowledge, this is the first study to incorporate an $L_2$-norm constraint in a task involving text data. E. Evaluation In order to fairly compare and evaluate the proposed methods in terms of effectiveness in representing tweets, we fix the number of features to 24 for all methods and feed these representations as input to 3 different clustering algorithms, namely k-means, Ward and spectral clustering, with cluster numbers of 10, 20 and 50. The distance metric for k-means clustering is chosen to be Euclidean, and the linkage criterion for Ward clustering is chosen to be minimizing the sum of squared differences within all clusters, i.e., recursively merging the pair of clusters that minimally increases the within-cluster variance in a hierarchical manner. For spectral clustering, a Gaussian kernel has been employed for constructing the affinity matrix. We also run experiments with tf-idf and BoWs representations without further dimensionality reduction, as well as with a concatenation of all word embeddings into a long feature vector. For evaluating clustering performance, we use the Calinski-Harabasz score [43], also known as the variance ratio criterion. The CH score is defined as the ratio of the between-cluster dispersion to the within-cluster dispersion. The CH score has a range of [0, +∞) and a higher CH score corresponds to a better clustering. The computational complexity of calculating the CH score is $O(N)$.
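One possible way to realize the hard constraint of equation (2) is to L2-normalize the flattened bottleneck activations inside the network; the exact mechanism used by the authors is not specified beyond the equation, so the following continuation of the previous sketch is only illustrative, and the placeholder data array is hypothetical.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def l2_constrained_bottleneck(encoded):
    """Force unit L2 norm on the 4x6 bottleneck before it is passed to the decoder."""
    flat = layers.Flatten()(encoded)                                       # 24 values per tweet
    unit = layers.Lambda(lambda z: tf.math.l2_normalize(z, axis=1))(flat)  # ||u||_2 = 1
    return layers.Reshape((4, 6, 1))(unit)

# Training setup reported in the text: Adam with lr 1e-5, batch size 32,
# random 80/20 train-validation split, at most 50 epochs.
# `cae` is the (optionally constrained) autoencoder from the previous sketch;
# X stands for the (N, 32, 300, 1) array of embedded tweets and is a random placeholder here.
X = np.random.rand(256, 32, 300, 1).astype("float32")
np.random.shuffle(X)  # shuffle first so that validation_split yields a random 20% split
cae.fit(X, X, batch_size=32, epochs=50, validation_split=0.2)
```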
For a given dataset $X$ consisting of $N$ data points, i.e., $X = \{x_1, x_2, \dots, x_N\}$, and a given set of disjoint clusters $C$ with $K$ clusters, i.e., $C = \{c_1, c_2, \dots, c_K\}$, the Calinski-Harabasz score, $S_{CH}$, is defined as $S_{CH} = \frac{N-K}{K-1} \cdot \frac{\sum_{c_k \in C} N_k \lVert \bar{c}_k - \bar{X} \rVert_2^2}{\sum_{c_k \in C} \sum_{x_i \in c_k} \lVert x_i - \bar{c}_k \rVert_2^2}$ (3), where $N_k$ is the number of points belonging to cluster $c_k$, $\bar{X}$ is the centroid of the entire dataset, $\frac{1}{N}\sum_{x_i \in X} x_i$, and $\bar{c}_k$ is the centroid of cluster $c_k$, $\frac{1}{N_k}\sum_{x_i \in c_k} x_i$. For visual validation, we also plot and inspect the t-Distributed Stochastic Neighbor Embedding (t-SNE) [53] and Uniform Manifold Approximation and Projection (UMAP) [54] mappings of the learned representations. The implementation of this study is done in Python (version 3.6) using the scikit-learn and TensorFlow libraries [55], [56] on a 64-bit Ubuntu 16.04 workstation with 128 GB RAM. Training of the autoencoders is performed with a single NVIDIA Titan Xp GPU. IV. RESULTS The performance of the representations tested with 3 different clustering algorithms, i.e., the CH scores, for 3 different cluster numbers can be examined in Table II. The $L_2$-norm constrained CAE is simply referred to as $L_2$-CAE in Table II. The same table shows the number of features used for each method as well. The document-term matrix extracted by BoWs and tf-idf features results in a sparse matrix of size 63,326 × 13,026 with a sparsity of 0.9994733. Similarly, concatenation of word embeddings results in a high number of features: 32 × 300 = 9,600 for word2vec, GloVe and fastText, and 32 × 768 = 24,576 for BERT embeddings. In summary, the proposed method of learning representations of tweets with CAEs outperforms all of the conventional algorithms. When the representations are compared with Hotelling's $T^2$ test (the multivariate version of the t-test), every representation distribution learned by CAEs is shown to be statistically significantly different from every conventional representation distribution, with p < 0.001. In addition, introducing the $L_2$-norm constraint on the learned representations during training enhances the clustering performance further (again p < 0.001 when comparing, for example, fastText+CAE vs. fastText+$L_2$-CAE). An example learning curve for CAE and $L_2$-CAE with fastText embeddings as input can also be seen in Figure 2. A detailed inspection of tweets that are clustered into the same cluster, as well as a visual analysis of the formed clusters, is also performed. Figure 3 shows the t-SNE and UMAP mappings (onto the 2D plane) of the 10 clusters formed by the k-means algorithm for the LDA, CAE and $L_2$-CAE representations. Below are several examples of tweets sampled from one of the clusters formed by k-means in the 50-cluster case (fastText embeddings fed into $L_2$-CAE): • <Suicide risk falls after talk therapy> • <Air pollution may be tied to anxiety> • <Stress, depression boost risks for heart patients> • <Nearly 1 in 5 Americans who has been out of work for at least 1 year is clinically depressed.> • <Study shows how exercise protects the brain against depression> V. DISCUSSION Overall, we show that deep convolutional autoencoder-based feature extraction, i.e., representation learning, from health-related tweets significantly enhances the performance of clustering algorithms when compared to conventional text feature extraction and topic modeling methods (see Table II). This statement holds true for 3 different clustering algorithms (k-means, Ward, spectral) as well as for 3 different numbers of clusters.
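Equation (3) corresponds to the Calinski-Harabasz score implemented in scikit-learn, so the evaluation protocol described above can be sketched as follows; the random placeholder features and any hyperparameters not stated in the text are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering
from sklearn.metrics import calinski_harabasz_score

def evaluate(features, n_clusters):
    """Cluster N x 24 tweet representations and report the CH score for each algorithm."""
    algorithms = {
        "k-means":  KMeans(n_clusters=n_clusters, n_init=10),
        "ward":     AgglomerativeClustering(n_clusters=n_clusters, linkage="ward"),
        "spectral": SpectralClustering(n_clusters=n_clusters, affinity="rbf"),  # Gaussian kernel
    }
    return {name: calinski_harabasz_score(features, algo.fit_predict(features))
            for name, algo in algorithms.items()}

# `features` would be the N x 24 matrix produced by any of the compared methods;
# a random placeholder is used here purely for illustration.
features = np.random.rand(500, 24)
for k in (10, 20, 50):
    print(k, evaluate(features, k))
```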
In addition, the proposed constrained training ($L_2$-norm constraint) is shown to further improve the clustering performance in each experiment as well (see Table II). A Calinski-Harabasz score of 4,304 has been achieved with constrained representation learning by the CAE in the experiment with 50 clusters formed by k-means clustering. The highest CH score achieved in the same experimental setting by the conventional algorithms was 638, obtained by LDA applied to tf-idf features. Visualizations of the t-SNE and UMAP mappings in Figure 3 show that $L_2$-norm constrained training results in higher separability of clusters. The benefit of this constraint is especially significant for the performance of k-means clustering (see Table II). This phenomenon is not unexpected, as k-means clustering is based on the $L_2$ distance as well. The difference in learning curves for regular and constrained CAE training is also expected. Constrained CAE training converges to a local minimum slightly later than the unconstrained CAE, i.e., training of $L_2$-CAE is slightly slower than that of CAE due to the introduced constraint (see Figure 2). When it comes to the comparison between word embeddings, fastText and BERT word vectors result in the highest CH scores, whereas word2vec and GloVe embeddings result in significantly lower performance. This observation can be explained by the nature of word2vec and GloVe embeddings, which cannot handle out-of-vocabulary tokens. Numerous tweets include names of certain drugs which are more likely to be absent from the vocabulary of these models, consequently resulting in vectors of zeros as embeddings. However, fastText embeddings are based on character n-grams, which enables handling of out-of-vocabulary tokens, e.g., fastText word vectors of the tokens <acetaminophen> and <paracetamol> are closer to each other simply due to the shared character sequence, <acetam>, even if one of them is not in the vocabulary. Note that <acetaminophen> and <paracetamol> are different names for the same drug. Using tf-idf or BoWs features directly results in very poor performance. Similarly, concatenating word embeddings to create thousands of features results in significantly lower performance compared to methods that reduce these features to 24. The main reason is that the bias-variance trade-off is dominated by the bias in high-dimensional settings, especially in Euclidean spaces [57]. Due to the very high number of features (relative to the number of observations), the radius of a given region varies with respect to the $n$th root of its volume, whereas the number of data points in the region varies roughly linearly with the volume [57]. This phenomenon is known as the curse of dimensionality. As topic models such as LDA and NMF are designed to be used on documents that are sufficiently long to extract robust statistics from, the extracted topic vectors also fall short in performance on short texts such as tweets. The main limitation of this study is the absence of topic labels in the dataset. As a result, the internal clustering measure of the Calinski-Harabasz score was used for evaluating the quality of the formed clusters instead of accuracy or normalized mutual information. Even though the CH score is shown to be able to capture clusters of different densities and the presence of subclusters, it has difficulties capturing highly noisy data and skewed distributions [58]. In addition, the clustering algorithms used, i.e., k-means, Ward and spectral clustering, are hard clustering algorithms, which results in non-overlapping clusters.
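The subword behaviour described above can be illustrated with gensim's fastText loader; the model file name below is a hypothetical placeholder and the resulting similarity value depends on the pretrained vectors used.

```python
import numpy as np
from gensim.models.fasttext import load_facebook_vectors

vectors = load_facebook_vectors("cc.en.300.bin")  # placeholder path to pretrained fastText vectors

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

v1 = vectors["acetaminophen"]  # composed from character n-grams even if the token is out-of-vocabulary
v2 = vectors["paracetamol"]
print(cosine(v1, v2))          # expected to be relatively high due to the shared <acetam> n-grams
```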
However, a given tweet can have several topical labels. Future work includes representation learning of health-related tweets using deep neural network architectures that can inherently learn the sequential nature of textual data, such as recurrent neural networks, e.g., Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks. Sequence-to-sequence autoencoders are prime examples of such architectures and have been shown to be effective in encoding paragraphs from Wikipedia and other corpora to lower dimensions [59]. Furthermore, encodings out of a bidirectional GRU will be tested for clustering performance, as such architectures have been employed to represent a given tweet in other studies [60]-[62]. VI. CONCLUSION In summary, we show that deep convolutional autoencoders can effectively learn compact representations of health-related tweets in an unsupervised manner. The conducted analyses show that the proposed representation learning scheme outperforms conventional feature extraction methods with three different clustering algorithms. In addition, we propose a constraint on the learned representations in order to further increase the clustering performance. Future work includes comparison of our model with recurrent neural architectures for clustering of health-related tweets. We believe this study serves as an advancement in the field of natural language processing for health informatics, especially in clustering of short-text social media data.
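For illustration only, a recurrent sequence autoencoder of the kind mentioned as future work could be sketched in Keras as below; none of the layer sizes come from the paper, and the 24-dimensional code merely mirrors the bottleneck size used in this study.

```python
from tensorflow.keras import layers, models

SEQ_LEN, EMB_DIM, LATENT = 32, 300, 24

inp = layers.Input(shape=(SEQ_LEN, EMB_DIM))
# Encoder: a bidirectional GRU summarizing the tweet into a 24-d code.
h = layers.Bidirectional(layers.GRU(64))(inp)
code = layers.Dense(LATENT)(h)
# Decoder: repeat the code and reconstruct the sequence of word embeddings.
d = layers.RepeatVector(SEQ_LEN)(code)
d = layers.GRU(64, return_sequences=True)(d)
out = layers.TimeDistributed(layers.Dense(EMB_DIM))(d)

seq_ae = models.Model(inp, out)
seq_ae.compile(optimizer="adam", loss="mse")
```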
3,514
1901.00439
2907303408
Twitter has been a prominent social media platform for mining population-level health data, and accurate clustering of health-related tweets into topics is important for extracting relevant health insights. In this work, we propose deep convolutional autoencoders for learning compact representations of health-related tweets, to be further employed in clustering. We compare our method to several conventional tweet representation methods, including bag-of-words, term frequency-inverse document frequency, Latent Dirichlet Allocation and Non-negative Matrix Factorization, with 3 different clustering algorithms. Our results show that the clustering performance using the proposed representation learning scheme significantly outperforms that of conventional methods for all experiments with different numbers of clusters. In addition, we propose a constraint on the learned representations during neural network training in order to further enhance the clustering performance. All in all, this study introduces the utilization of deep neural network-based architectures, i.e., deep convolutional autoencoders, for learning informative representations of health-related tweets.
Numerous works on topic modeling of tweets are available as well. Topic models are generative models, relying on the idea that a given tweet is a mixture of topics, where a topic is a probability distribution over words @cite_45 . Even though the objective in topic modeling is slightly different from that of pure clustering, representing each tweet as a topic vector is essentially a form of dimensionality reduction or feature extraction and can further be followed by a clustering algorithm. Proposed topic modeling methods include conventional approaches or variants of them, such as Latent Dirichlet Allocation (LDA) @cite_8 @cite_2 @cite_42 @cite_32 @cite_20 @cite_48 @cite_54 @cite_27 @cite_43 @cite_40 @cite_29 and Non-negative Matrix Factorization (NMF) @cite_22 @cite_5 . Note that topic models such as LDA are based on the notion that words belonging to a topic are more likely to appear in the same document, and they do not assume a distance metric between discovered topics.
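As an illustration of the topic-vector-then-cluster idea described above, a minimal sketch with gensim and scikit-learn might look as follows; the placeholder variable raw_tweets, the number of topics and the clustering parameters are assumptions.

```python
from gensim import corpora, models
from sklearn.cluster import KMeans

tokenized = [t.lower().split() for t in raw_tweets]   # raw_tweets: preprocessed tweet strings
dictionary = corpora.Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(tokens) for tokens in tokenized]

lda = models.LdaModel(bow_corpus, num_topics=24, id2word=dictionary, passes=5)
# Dense topic vector per tweet (the dimensionality reduction / feature extraction step).
topic_vectors = [[prob for _, prob in lda.get_document_topics(bow, minimum_probability=0.0)]
                 for bow in bow_corpus]

labels = KMeans(n_clusters=10, n_init=10).fit_predict(topic_vectors)
```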
{ "abstract": [ "Non-negative matrix factorization (NMF) has been successfully applied in document clustering. However, experiments on short texts, such as microblogs, Q&A documents and news titles, suggest unsatisfactory performance of NMF. An major reason is that the traditional term weighting schemes, like binary weight and tfidf, cannot well capture the terms' discriminative power and importance in short texts, due to the sparsity of data. To tackle this problem, we proposed a novel term weighting scheme for NMF, derived from the Normalized Cut (Ncut) problem on the term affinity graph. Different from idf, which emphasizes discriminability on document level, the Ncut weighting measures terms' discriminability on term level. Experiments on two data sets show our weighting scheme significantly boosts NMF's performance on short text clustering.", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.", "Social media platforms such as Twitter are becoming increasingly mainstream which provides valuable user-generated information by publishing and sharing contents. Identifying interesting and useful contents from large text-streams is a crucial issue in social media because many users struggle with information overload. Retweeting as a forwarding function plays an important role in information propagation where the retweet counts simply reflect a tweet's popularity. However, the main reason for retweets may be limited to personal interests and satisfactions. In this paper, we use a topic identification as a proxy to understand a large number of tweets and to score the interestingness of an individual tweet based on its latent topics. Our assumption is that fascinating topics generate contents that may be of potential interest to a wide audience. We propose a novel topic model called Trend Sensitive-Latent Dirichlet Allocation (TS-LDA) that can efficiently extract latent topics from contents by modeling temporal trends on Twitter over time. The experimental results on real world data from Twitter demonstrate that our proposed method outperforms several other baseline methods.", "By aggregating self-reported health statuses across millions of users, we seek to characterize the variety of health information discussed in Twitter. We describe a topic modeling framework for discovering health topics in Twitter, a social media website. This is an exploratory approach with the goal of understanding what health topics are commonly discussed in social media. This paper describes in detail a statistical topic model created for this purpose, the Ailment Topic Aspect Model (ATAM), as well as our system for filtering general Twitter data based on health keywords and supervised classification. 
We show how ATAM and other topic models can automatically infer health topics in 144 million Twitter messages from 2011 to 2013. ATAM discovered 13 coherent clusters of Twitter messages, some of which correlate with seasonal influenza (r = 0.689) and allergies (r = 0.810) temporal surveillance data, as well as exercise (r = .534) and obesity (r = −.631) related geographic survey data in the United States. These results demonstrate that it is possible to automatically discover topics that attain statistically significant correlations with ground truth data, despite using minimal human supervision and no historical data to train the model, in contrast to prior work. Additionally, these results demonstrate that a single general-purpose model can identify many different health topics in social media.", "In the emerging field of micro-blogging and social communication services, users post millions of short messages every day. Keeping track of all the messages posted by your friends and the conversation as a whole can become tedious or even impossible. In this paper, we presented a study on automatically clustering and classifying Twitter messages, also known as “tweets”, into different categories, inspired by the approaches taken by news aggregating services like Google News. Our results suggest that the clusters produced by traditional unsupervised methods can often be incoherent from a topical perspective, but utilizing a supervised methodology that utilize the hash-tags as indicators of topics produce surprisingly good results. We also offer a discussion on temporal effects of our methodology and training set size considerations. Lastly, we describe a simple method of finding the most representative tweet in a cluster, and provide an analysis of the results.", "Although there are millions of transgender people in the world, a lack of information exists about their health issues. This issue has consequences for the medical field, which only has a nascent understanding of how to identify and meet this population's health-related needs. Social media sites like Twitter provide new opportunities for transgender people to overcome these barriers by sharing their personal health experiences. Our research employs a computational framework to collect tweets from self-identified transgender users, detect those that are health-related, and identify their information needs. This framework is significant because it provides a macro-scale perspective on an issue that lacks investigation at national or demographic levels. Our findings identified 54 distinct health-related topics that we grouped into 7 broader categories. Further, we found both linguistic and topical differences in the health-related information shared by transgender men (TM) as com-pared to transgender women (TW). These findings can help inform medical and policy-based strategies for health interventions within transgender communities. Also, our proposed approach can inform the development of computational strategies to identify the health-related information needs of other marginalized populations.", "Public health-related topics are difficult to identify in large conversational datasets like Twitter. This study examines how to model and discover public health topics and themes in tweets. Tobacco use is chosen as a test case to demonstrate the effectiveness of topic modeling via LDA across a large, representational dataset from the United States, as well as across a smaller subset that was seeded by tobacco-related queries. 
Topic modeling across the large dataset uncovers several public health-related topics, although tobacco is not detected by this method. However, topic modeling across the tobacco subset provides valuable insight about tobacco use in the United States. The methods used in this paper provide a possible toolset for public health researchers and practitioners to better understand public health problems through large datasets of conversational data.", "An unsupervised multilingual approach to identify topics on Twitter is proposed.Localised language can be leveraged for identifying relevant and important topics.Joint term ranking coupled with DPMM clustering consistently performed well.Multilingual sentiment analysis is essential to understand sentiment on the ground.Topics coverage of social media and main stream media does not always stay the same. Social media data can be valuable in many ways. However, the vast amount of content shared and the linguistic variants of languages used on social media are making it very challenging for high-value topics to be identified. In this paper, we present an unsupervised multilingual approach for identifying highly relevant terms and topics from the mass of social media data. This approach combines term ranking, localised language analysis, unsupervised topic clustering and multilingual sentiment analysis to extract prominent topics through analysis of Twitter's tweets from a period of time. It is observed that each of the ranking methods tested has their strengths and weaknesses, and that our proposed Joint ranking method is able to take advantage of the strengths of the ranking methods. This Joint ranking method coupled with an unsupervised topic clustering model is shown to have the potential to discover topics of interest or concern to a local community. Practically, being able to do so may help decision makers to gauge the true opinions or concerns on the ground. Theoretically, the research is significant as it shows how an unsupervised online topic identification approach can be designed without much manual annotation effort, which may have great implications for future development of expert and intelligent systems.", "", "", "Abstract Social media provide a platform for users to express their opinions and share information. Understanding public health opinions on social media, such as Twitter, offers a unique approach to characterizing common health issues such as diabetes, diet, exercise, and obesity (DDEO); however, collecting and analyzing a large scale conversational public health data set is a challenging research task. The goal of this research is to analyze the characteristics of the general public's opinions in regard to diabetes, diet, exercise and obesity (DDEO) as expressed on Twitter. A multi-component semantic and linguistic framework was developed to collect Twitter data, discover topics of interest about DDEO, and analyze the topics. From the extracted 4.5 million tweets, 8 of tweets discussed diabetes, 23.7 diet, 16.6 exercise, and 51.7 obesity. The strongest correlation among the topics was determined between exercise and obesity ( p p p", "As microblogging grows in popularity, services like Twitter are coming to support information gathering needs above and beyond their traditional roles as social networks. 
But most users’ interaction with Twitter is still primarily focused on their social graphs, forcing the often inappropriate conflation of “people I follow” with “stuff I want to read.” We characterize some information needs that the current Twitter interface fails to support, and argue for better representations of content for solving these challenges. We present a scalable implementation of a partially supervised learning model (Labeled LDA) that maps the content of the Twitter feed into dimensions. These dimensions correspond roughly to substance, style, status, and social characteristics of posts. We characterize users and tweets using this model, and present results on two information consumption oriented tasks.", "Abstract Depression is a common chronic disorder. It often goes undetected due to limited diagnosis methods and brings serious results to public and personal health. Former research detected geographic pattern for depression using questionnaires or self-reported measures of mental health, this may induce same-source bias. Recent studies use social media for depression detection but none of them examines the geographic patterns. In this paper, we apply GIS methods to social media data to provide new perspectives for public health research. We design a procedure to automatically detect depressed users in Twitter and analyze their spatial patterns using GIS technology. This method can improve diagnosis techniques for depression. It is faster at collecting data and more promptly at analyzing and providing results. Also, this method can be expanded to detect other major events in real-time, such as disease outbreaks and earthquakes.", "Twitter has become a significant means by which people communicate with the world and describe their current activities, opinions and status in short text snippets. Tweets can be analyzed automatically in order to derive much potential information such as, interesting topics, social influence, user’s communities, etc. Community extraction within social networks has been a focus of recent work in several areas. Different from the most community discovery methods focused on the relations between users, we aim to derive user’s communities based on common topics from user’s tweets. For instance, if two users always talk about politic in their tweets, thus they can be grouped in the same community which is related to politic topic. To achieve this goal, we propose a new approach called CETD: Community Extraction based on Topic-Driven-Model. This approach combines our proposed model used to detect topics of the user’s tweets based on a semantic taxonomy together with a community extraction method based on the hierarchical clustering technique. Our experimentation on the proposed approach shows the relevant of the users communities extracted based on their common topics and domains." ], "cite_N": [ "@cite_22", "@cite_8", "@cite_48", "@cite_54", "@cite_42", "@cite_29", "@cite_32", "@cite_43", "@cite_27", "@cite_45", "@cite_40", "@cite_2", "@cite_5", "@cite_20" ], "mid": [ "2058846446", "1880262756", "2076959242", "2079591709", "2486235263", "2887267870", "1630939116", "2597761542", "", "2334889010", "2963685694", "2137958601", "2082609157", "183527652" ] }
Deep Representation Learning for Clustering of Health Tweets
Social media plays an important role in health informatics and Twitter has been one of the most influential social media channel for mining population-level health insights [1]- [3]. These insights range from forecasting of influenza epidemics [4] to predicting adverse drug reactions [5]. A notable challenge due to the short length of Twitter messages is categorization of tweets into topics in a supervised manner, i.e., topic classification, as well as in an unsupervised manner, i.e., clustering. Classification of tweets into topics has been studied extensively [6]- [8]. Even though text classification algorithms can reach significant accuracy levels, supervised machine learning approaches require annotated data, i.e, topic categories to learn from for classification. On the other hand, annotated data is not always available as the annotation process is burdensome and time-consuming. In addition, discussions in social media evolve rapidly with recent trends, rendering Twitter a dynamic environment with ever-changing topics. Therefore, unsupervised approaches are essential for mining health-related information from Twitter. Proposed methods for clustering tweets employ conventional text clustering pipelines involving preprocessing applied to raw text strings, followed by feature extraction which is then followed by a clustering algorithm [9]- [11]. Performance of O. Gencoglu is with Faculty of Medicine and Health Technology, Tampere University, Tampere, 33014, Finland e-mail: (oguzhangencoglu90@gmail.com). such approaches depend highly on feature extraction in which careful engineering and domain knowledge is required [12]. Recent advancements in machine learning research, i.e., deep neural networks, enable efficient representation learning from raw data in a hierarchical manner [13], [14]. Several natural language processing (NLP) tasks involving Twitter data have benefited from deep neural network-based approaches including sentiment classification of tweets [15], predicting potential suicide attempts from Twitter [16] and simulating epidemics from Twitter [17]. In this work, we propose deep convolutional autoencoders (CAEs) for obtaining efficient representations of health-related tweets in an unsupervised manner. We validate our approach on a publicly available dataset from Twitter by comparing the performance of our approach and conventional feature extraction methods on 3 different clustering algorithms. Furthermore, we propose a constraint on the learned representations during neural network training in order to further improve the clustering performance. We show that the proposed deep neural network-based representation learning method outperforms conventional methods in terms of clustering performance in experiments of varying number of clusters. III. METHODS A. Dataset For this study, a publicly available dataset is used [46]. The dataset consisting of tweets has been collected using Twitter API and was initially introduced by Karami et al. [47]. Earliest tweet dates back to 13 June 2011 where the latest one has a timestamp of 9 April 2015. The dataset consists of 63,326 tweets in English language, collected from Twitter channels of 16 major health news agencies. List of health news channels and the number of tweets in the dataset from each channel can be examined from Table I. The outlook of a typical tweet from the dataset can be examined from Figure 1. 
For every tweet, the raw data consists of the tweet text and in most cases followed by a url to the original news article of the particular news source. This url string, if available, is removed from each tweet as it does not possess any natural language information. As Twitter allows several ways for users to interact such as retweeting or mentioning, these actions appear in the raw text as well. For retweets, an indicator string of "RT" appears as a prefix in the raw data and for user mentions, a string of form "@username" appears in the raw data. These two tokens are removed as well. In addition, hashtags are converted to plain tokens by removal of the "#" sign appearing before them (e.g. <#pregnancy> becomes <pregnancy>). Number of words, number of unique words and mean word counts for each Twitter channel can also be examined from Table I. Longest tweet consists of 27 words. B. Conventional Representations For representing tweets, 5 conventional representation methods are proposed as baselines. 1) Word frequency features: For word occurrence-based representations of tweets, conventional tf-idf and BoWs are used to obtain the document-term matrix of N × P in which each row corresponds to a tweet and each column corresponds to a unique word/token, i.e., N data points and P features. As the document-term matrix obtained from tf-idf or BoWs features is extremely sparse and consequently redundant across many dimensions, dimensionality reduction and topic modeling to a lower dimensional latent space is performed by the methods below. 2) Principal Component Analysis (PCA): PCA is used to map the word frequency representations from the original feature space to a lower dimensional feature space by an orthogonal linear transformation in such a way that the first principal component has the highest possible variance and similarly, each succeeding component has the highest variance possible while being orthogonal to the preceding components. Our PCA implementation has a time complexity of O(N P 2 + P 3 ). 3) Truncated Singular Value Decomposition (t-SVD): Standard SVD and t-SVD are commonly employed dimensionality reduction techniques in which a matrix is reduced or approximated into a low-rank decomposition. Time complexity of SVD and t-SVD for S components are O(min(N P 2 , N 2 P )) and O(N 2 S), respectively (depending on the implementation). Contrary to PCA, t-SVD can be applied to sparse matrices efficiently as it does not require data normalization. When the data matrix is obtained by BoWs or tf-idf representations as in our case, the technique is also known as Latent Semantic Analysis. 4) LDA: Our LDA implementation employs online variational Bayes algorithm introduced by Hoffman et al. which uses stochastic optimization to maximize the objective function for the topic model [48]. 5) NMF: As NMF finds two non-negative matrices whose product approximates the non-negative document-term matrix, it allows regularization. Our implementation did not employ any regularization and the divergence function is set to be squared error, i.e., Frobenius norm. C. Representation Learning We propose 2D convolutional autoencoders for extracting compact representations of tweets from their raw form in a highly non-linear fashion. In order to turn a given tweet into a 2D structure to be fed into the CAE, we extract the word vectors of each word using word embedding models, i.e., for a given tweet, t, consisting of W words, the 2D input is I t ∈ R W ×D where D is the embedding vector dimension. 
We compare 4 different word embeddings namely word2vec, GloVe, fastText and BERT with embedding vector dimensions of 300, 300, 300 and 768, respectively. We set the maximum sequence length to 32, i.e., for tweets having less number of words, the input matrix is padded with zeros. As word2vec and GloVe embeddings can not handle out-of-vocabulary words, such cases are represented as a vector of zeros. The process of extracting word vector representations of a tweet to form the 2D input matrix can be examined from Figure 1. The CAE architecture can be considered as consisting of 2 parts, ie., the encoder and the decoder. The encoder, f enc (·), is the part of the network that compresses the input, I, into a latent space representation, U , and the decoder, f dec (·) aims to reconstruct the input from the latent space representation (see equation 1). In essence, U = f enc (I) = f L (f L−1 (...f 1 (I)))(1) where L is the number of layers in the encoder part of the CAE. The encoder in the proposed architecture consists of three 2D convolutional layers with 64, 32 and 1 filters, respectively. The decoder follows the same symmetry with three convolutional layers with 1, 32 and 64 filters, respectively and an output convolutional layer of a single filter (see Figure 1). All convolutional layers have a kernel size of (3×3) and an activation function of Rectified Linear Unit (ReLU) except the output layer which employs a linear activation function. Each convolutional layer in the encoder is followed by a 2D MaxPooling layer and similarly each convolutional layer in the decoder is followed by a 2D UpSampling layer, serving as an inverse operation (having the same parameters). The pooling sizes for pooling layers are (2×5), (2×5) and (2×2), respectively for the architectures when word2vec, GloVe and fastText embeddings are employed. With this configuration, an input tweet of size 32 × 300 (corresponding to maximum sequence length × embedding dimension, D) is downsampled to size of 4 × 6 out of the encoder (bottleneck layer). As BERT word embeddings have word vectors of fixed size 768, the pooling layer sizes are chosen to be (2×8), (2×8) and (2×2), respectively for that case. In summary, a representation of 4 × 6 = 24 values is learned for each tweet through the encoder, e.g., for fastText embeddings the flow of dimensions after each encoder block is as such : 32 × 300 → 16 × 60 → 8 × 12 → 4 × 6. In numerous NLP tasks, an Embedding Layer is employed as the first layer of the neural network which can be initialized with the word embedding matrix in order to incorporate the embedding process into the architecture itself instead of manual extraction. In our case, this was not possible because of nonexistence of an inversed embedding layer in the decoder (as in the relationship between MaxPooling layers and UpSampling layers) as an embedding layer is not differentiable. Training of autoencoders tries to minimize the reconstruction error/loss, i.e., the deviation of the reconstructed output from the input. L 2 -loss or mean square error (MSE) is chosen to be the loss function. In autoencoders, minimizing the L 2 -loss is equivalent to maximizing the mutual information between the reconstructed inputs and the original ones [49]. In addition, from a probabilistic point of view, minimizing the L 2 -loss is the same as maximizing the probability of the parameters given the data, corresponding to a maximum likelihood estimator. 
The optimizer for the autoencoder training is chosen to be Adam due to its faster convergence abilities [50]. The learning rate for the optimizer is set to 10 −5 and the batch size for the training is set to 32. Random split of 80% training-20% validation set is performed for monitoring convergence. Maximum number of training epochs is set to 50. D. L 2 -norm Constrained Representation Learning Certain constraints on neural network weights are commonly employed during training in order to reduce overfitting, also known as regularization. Such constraints include L 1 regularization, L 2 regularization, orthogonal regularization etc. Even though regularization is a common practice, standard training of neural networks do not inherently impose any constraints on the learned representations (activations), U , other than the ones compelled by the activation functions (e.g. ReLUs resulting in non-negative outputs). Recent advancements in computer vision research show that constraining the learned representations can enhance the effectiveness of representation learning, consequently increasing the clustering performance [51], [52]. minimize L = 1/ N I − f dec (f enc (I)) 2 2 subject to f enc (I) 2 2 = 1(2) We propose an L 2 norm constraint on the learned representations out of the bottleneck layer, U . Essentially, this is a hard constraint introduced during neural network training that results in learned features with unit L 2 norm out of the bottleneck layer (see equation 2 where N is the number of data points). Training a deep convolutional autoencoder with such a constraint is shown to be much more effective for image data than applying L 2 normalization on the learned representations after training [52]. To the best of our knowledge, this is the first study to incorporate L 2 norm constraint in a task involving text data. E. Evaluation In order to fairly compare and evaluate the proposed methods in terms of effectiveness in representation of tweets, we fix the number of features to 24 for all methods and feed these representations as an input to 3 different clustering algorithms namely, k-means, Ward and spectral clustering with cluster numbers of 10, 20 and 50. Distance metric for kmeans clustering is chosen to be euclidean and the linkage criteria for Ward clustering is chosen to be minimizing the sum of differences within all clusters, i.e., recursively merging pairs of clusters that minimally increases the within-cluster variance in a hierarchical manner. For spectral clustering, Gaussian kernel has been employed for constructing the affinity matrix. We also run experiments with tf-idf and BoWs representations without further dimensionality reduction as well as concatenation of all word embeddings into a long feature vector. For evaluation of clustering performance, we use Calinski-Harabasz score [43], also known as the variance ratio criterion. CH score is defined as the ratio between the within-cluster dispersion and the between-cluster dispersion. CH score has a range of [0, +∞] and a higher CH score corresponds to a better clustering. Computational complexity of calculating CH score is O(N ). 
For a given dataset X consisting of N data points, i.e., X = x 1 , x 2 , ..., x N and a given set of disjoint clusters C with K clusters, i.e., C = c 1 , c 2 , ..., c K , Calinski-Harabasz score, S CH , is defined as S CH = N − K K − 1 c k ∈C N k c k − X 2 2 c k ∈C xi∈c k x i − c k 2 2(3) where N k is the number of points belonging to the cluster c k , X is the centroid of the entire dataset, 1 N xi∈X x i and c k is the centroid of the cluster c k , 1 N k xi∈c k x i . For visual validation, we plot and inspect the t-Distributed Stochastic Neighbor Embedding (t-SNE) [53] and Uniform Manifold Approximation and Projection (UMAP) [54] mappings of the learned representations as well. Implementation of this study is done in Python (version 3.6) using scikit-learn and TensorFlow libraries [55], [56] on a 64-bit Ubuntu 16.04 workstation with 128 GB RAM. Training of autoencoders are performed with a single NVIDIA Titan Xp GPU. IV. RESULTS Performance of the representations tested on 3 different clustering algorithms, i.e., CH scores, for 3 different cluster numbers can be examined from Table II. L 2 -norm constrained CAE is simply referred as L 2 -CAE in Table II. Same table shows the number of features used for each method as well. Document-term matrix extracted by BoWs and tf-idf features result in a sparse matrix of 63, 326 × 13, 026 with a sparsity of 0.9994733. Similarly, concatenation of word embeddings result in a high number of features with 32 × 300 = 9, 600 for word2vec, GloVe and fastText, 32 × 768 = 24, 576 for BERT embeddings. In summary, the proposed method of learning representations of tweets with CAEs outperform all of the conventional algorithms. When representations are compared with Hotelling's T 2 test (multivariate version of ttest), every representation distribution learned by CAEs are shown to be statistically significantly different than every other conventional representation distribution with p < 0.001. In addition, introducing the L 2 -norm constraint on the learned representations during training enhances the clustering performance further (again p < 0.001 when comparing for example fastText+CAE vs. fastText+L 2 -CAE). An example learning curve for CAE and L 2 -CAE with fastText embeddings as input can also be seen in Figure 2. Detailed inspection of tweets that are clustered into the same cluster as well as visual analysis of the formed clusters is also performed. Figure 3 shows the t-SNE and UMAP mappings (onto 2D plane) of the 10 clusters formed by kmeans algorithm for LDA, CAE and L 2 -CAE representations. Below are several examples of tweets sampled from one of the clusters formed by k-means in the 50 clusters case (fastText embeddings fed into L 2 -CAE): • <Suicide risk falls after talk therapy> • <Air pollution may be tied to anxiety> • <Stress, depression boost risks for heart patients> • <Nearly 1 in 5 Americans who has been out of work for at least 1 year is clinically depressed.> • <Study shows how exercise protects the brain against depression> V. DISCUSSION Overall, we show that deep convolutional autoencoderbased feature extraction, i.e., representation learning, from health related tweets significantly enhances the performance of clustering algorithms when compared to conventional text feature extraction and topic modeling methods (see Table II). This statement holds true for 3 different clustering algorithms (k-means, Ward, spectral) as well as for 3 different number of clusters. 
In addition, proposed constrained training (L 2norm constraint) is shown to further improve the clustering performance in each experiment as well (see Table II). A Calinski-Harabasz score of 4,304 has been achieved with constrained representation learning by CAE for the experiment of 50 clusters formed by k-means clustering. The highest CH score achieved in the same experiment setting by conventional algorithms was 638 which was achieved by LDA applied of tf-idf features. Visualizations of t-SNE and UMAP mappings in Figure 3 show that L 2 -norm constrained training results in higher separability of clusters. The benefit of this constraint is especially significant in the performance of k-means clustering (see Table II). This phenomena is not unexpected as k-means clustering is based on L 2 distance as well. The difference in learning curves for regular and constrained CAE trainings is also expected. Constrained CAE training converges to local minimum slightly later than unconstrained CAE, i.e., training of L 2 -CAE is slightly slower than that of CAE due to the introduced contraint (see Figure 2). When it comes to comparison between word embeddings, fastText and BERT word vectors result in the highest CH scores whereas word2vec and GloVe embeddings result in significantly lower performance. This observation can be explained by the nature of word2vec and GloVe embeddings which can not handle out-of-vocabulary tokens. Numerous tweets include names of certain drugs which are more likely to be absent in the vocabulary of these models, consequently resulting in vectors of zeros as embeddings. However, fastText embeddings are based on character n-grams which enables handling of out-of-vocabulary tokens, e.g., fastText word vectors of the tokens <acetaminophen> and <paracetamol> are closer to each other simply due to shared character sequence, <acetam>, even if one of them is not in the vocabulary. Note that, <acetaminophen> and <paracetamol> are different names for the same drug. Using tf-idf or BoWs features directly results in very poor performance. Similarly, concatenating word embeddings to create thousands of features results in significantly low performance compared to methods that reduce these features to 24. The main reason is that the bias-variance trade-off is dominated by the bias in high dimensional settings especially in Euclidean spaces [57]. Due to very high number of features (relative to the number of observations), the radius of a given region varies with respect to the nth root of its volume, whereas the number of data points in the region varies roughly linearly with the volume [57]. This phenomena is known as curse of dimensionality. As topic models such as LDA and NMF are designed to be used on documents that are sufficiently long to extract robust statistics from, extracted topic vectors fall short in performance as well when it comes to tweets due to short texts. The main limitation of this study is the absence of topic labels in the dataset. As a result, internal clustering measure of Calinski-Harabasz score was used for evaluating the performance of the formed clusters instead of accuracy or normalized mutual information. Even though CH score is shown to be able to capture clusters of different densities and presence of subclusters, it has difficulties capturing highly noisy data and skewed distributions [58]. In addition, used clustering algorithms, i.e., k-means, Ward and spectral clustering, are hard clustering algorithms which results in non-overlapping clusters. 
However, a given tweet can have several topical labels. Future work includes representation learning of health-related tweets using deep neural network architectures that can inherently learn the sequential nature of the textual data, such as recurrent neural networks, e.g., Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU) etc. Sequence-to-sequence autoencoders are the main examples of such architectures, and they have been shown to be effective in encoding paragraphs from Wikipedia and other corpora to lower dimensions [59]. Furthermore, encodings out of a bidirectional GRU will be tested for clustering performance, as such architectures have been employed to represent a given tweet in other studies [60]-[62].

VI. CONCLUSION

In summary, we show that deep convolutional autoencoders can effectively learn compact representations of health-related tweets in an unsupervised manner. The conducted analysis shows that the proposed representation learning scheme outperforms conventional feature extraction methods with three different clustering algorithms. In addition, we propose a constraint on the learned representations in order to further increase the clustering performance. Future work includes comparison of our model with recurrent neural architectures for clustering of health-related tweets. We believe this study serves as an advancement in the field of natural language processing for health informatics, especially in clustering of short-text social media data.
3,514
1901.00439
2907303408
Twitter has been a prominent social media platform for mining population-level health data, and accurate clustering of health-related tweets into topics is important for extracting relevant health insights. In this work, we propose deep convolutional autoencoders for learning compact representations of health-related tweets, further to be employed in clustering. We compare our method to several conventional tweet representation methods including bag-of-words, term frequency-inverse document frequency, Latent Dirichlet Allocation and Non-negative Matrix Factorization, with 3 different clustering algorithms. Our results show that the clustering performance using the proposed representation learning scheme significantly outperforms that of conventional methods for all experiments with different numbers of clusters. In addition, we propose a constraint on the learned representations during neural network training in order to further enhance the clustering performance. All in all, this study introduces the utilization of deep neural network-based architectures, i.e., deep convolutional autoencoders, for learning informative representations of health-related tweets.
Contrary to the abovementioned feature extraction methods, which are not specific to the representation of tweets but rather generic in natural language processing, various works propose custom feature extraction methods for certain health-related information retrieval tasks from Twitter. For instance, sentiment analysis features were engineered to discover latent infectious diseases from Twitter @cite_7. In order to track public health condition trends from Twitter, specific features were proposed by Parker et al., employing a Wikipedia article index, i.e., treating the retrieval of medically-related Wikipedia articles as an indicator of a health-related condition @cite_1. Custom user similarity features calculated from tweets were also proposed for building a framework for recommending health-related topics @cite_27.
{ "abstract": [ "", "Traditional public health surveillance requires regular clinical reports and considerable effort by health professionals to analyze data. Therefore, a low cost alternative is of great practical use. As a platform used by over 500 million users worldwide to publish their ideas about many topics, including health conditions, Twitter provides researchers the freshest source of public health conditions on a global scale. We propose a framework for tracking public health condition trends via Twitter. The basic idea is to use frequent term sets from highly purified health-related tweets as queries into a Wikipedia article index -- treating the retrieval of medically-related articles as an indicator of a health-related condition. By observing fluctuations in frequent term sets and in turn medically-related articles over a series of time slices of tweets, we detect shifts in public health conditions and concerns over time. Compared to existing approaches, our framework provides a general a priori identification of emerging public health conditions rather than a specific illness (e.g., influenza) as is commonly done.", "Abstract Introduction The authors of this work propose an unsupervised machine learning model that has the ability to identify real-world latent infectious diseases by mining social media data. In this study, a latent infectious disease is defined as a communicable disease that has not yet been formalized by national public health institutes and explicitly communicated to the general public. Most existing approaches to modeling infectious-disease-related knowledge discovery through social media networks are top-down approaches that are based on already known information, such as the names of diseases and their symptoms. In existing top-down approaches, necessary but unknown information, such as disease names and symptoms, is mostly unidentified in social media data until national public health institutes have formalized that disease. Most of the formalizing processes for latent infectious diseases are time consuming. Therefore, this study presents a bottom-up approach for latent infectious disease discovery in a given location without prior information, such as disease names and related symptoms. Methods Social media messages with user and temporal information are extracted during the data preprocessing stage. An unsupervised sentiment analysis model is then presented. Users’ expressions about symptoms, body parts, and pain locations are also identified from social media data. Then, symptom weighting vectors for each individual and time period are created, based on their sentiment and social media expressions. Finally, latent-infectious-disease-related information is retrieved from individuals’ symptom weighting vectors. Datasets and results Twitter data from August 2012 to May 2013 are used to validate this study. Real electronic medical records for 104 individuals, who were diagnosed with influenza in the same period, are used to serve as ground truth validation. The results are promising, with the highest precision, recall, and F 1 score values of 0.773, 0.680, and 0.724, respectively. Conclusion This work uses individuals’ social media messages to identify latent infectious diseases, without prior information, quicker than when the disease(s) is formalized by national public health institutes. 
In particular, the unsupervised machine learning model using user, textual, and temporal information in social media data, along with sentiment analysis, identifies latent infectious diseases in a given location." ], "cite_N": [ "@cite_27", "@cite_1", "@cite_7" ], "mid": [ "", "2164912194", "2565943263" ] }
Deep Representation Learning for Clustering of Health Tweets
Social media plays an important role in health informatics, and Twitter has been one of the most influential social media channels for mining population-level health insights [1]-[3]. These insights range from forecasting of influenza epidemics [4] to predicting adverse drug reactions [5]. A notable challenge due to the short length of Twitter messages is the categorization of tweets into topics in a supervised manner, i.e., topic classification, as well as in an unsupervised manner, i.e., clustering. Classification of tweets into topics has been studied extensively [6]-[8]. Even though text classification algorithms can reach significant accuracy levels, supervised machine learning approaches require annotated data, i.e., topic categories to learn from for classification. On the other hand, annotated data is not always available, as the annotation process is burdensome and time-consuming. In addition, discussions in social media evolve rapidly with recent trends, rendering Twitter a dynamic environment with ever-changing topics. Therefore, unsupervised approaches are essential for mining health-related information from Twitter. Proposed methods for clustering tweets employ conventional text clustering pipelines involving preprocessing applied to raw text strings, followed by feature extraction, which is then followed by a clustering algorithm [9]-[11]. Performance of such approaches depends highly on feature extraction, in which careful engineering and domain knowledge are required [12]. Recent advancements in machine learning research, i.e., deep neural networks, enable efficient representation learning from raw data in a hierarchical manner [13], [14]. Several natural language processing (NLP) tasks involving Twitter data have benefited from deep neural network-based approaches, including sentiment classification of tweets [15], predicting potential suicide attempts from Twitter [16] and simulating epidemics from Twitter [17]. In this work, we propose deep convolutional autoencoders (CAEs) for obtaining efficient representations of health-related tweets in an unsupervised manner. We validate our approach on a publicly available dataset from Twitter by comparing the performance of our approach and conventional feature extraction methods on 3 different clustering algorithms. Furthermore, we propose a constraint on the learned representations during neural network training in order to further improve the clustering performance. We show that the proposed deep neural network-based representation learning method outperforms conventional methods in terms of clustering performance in experiments of varying numbers of clusters. (O. Gencoglu is with the Faculty of Medicine and Health Technology, Tampere University, Tampere, 33014, Finland; e-mail: oguzhangencoglu90@gmail.com.)

III. METHODS

A. Dataset

For this study, a publicly available dataset is used [46]. The dataset, consisting of tweets, has been collected using the Twitter API and was initially introduced by Karami et al. [47]. The earliest tweet dates back to 13 June 2011, while the latest one has a timestamp of 9 April 2015. The dataset consists of 63,326 tweets in the English language, collected from the Twitter channels of 16 major health news agencies. The list of health news channels and the number of tweets in the dataset from each channel can be examined from Table I. The outlook of a typical tweet from the dataset can be examined from Figure 1.
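The per-channel statistics summarized in Table I (tweet counts, mean words per tweet, unique words) can be reproduced with a few lines of pandas once the tweets are cleaned. The sketch below is illustrative only: the channel names and the assumed two-column layout (channel, cleaned text) are placeholders, not taken from the original dataset description.

```python
import pandas as pd

# Hypothetical cleaned data: one row per tweet with its source channel (placeholder values).
df = pd.DataFrame({
    "channel": ["bbchealth", "bbchealth", "goodhealth"],
    "text": ["suicide risk falls after talk therapy",
             "air pollution may be tied to anxiety",
             "stress depression boost risks for heart patients"],
})

tokens = df["text"].str.split()          # whitespace tokenization of the cleaned text
df["n_words"] = tokens.str.len()

per_channel = df.assign(words=tokens).groupby("channel").agg(
    tweets=("n_words", "size"),                                   # tweets per channel
    mean_words=("n_words", "mean"),                                # mean word count
    unique_words=("words", lambda s: len({w for tw in s for w in tw})),  # vocabulary size
)
print(per_channel.sort_values("tweets", ascending=False))
```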
For every tweet, the raw data consists of the tweet text, in most cases followed by a URL to the original news article of the particular news source. This URL string, if available, is removed from each tweet as it does not possess any natural language information. As Twitter allows several ways for users to interact, such as retweeting or mentioning, these actions appear in the raw text as well. For retweets, an indicator string "RT" appears as a prefix in the raw data, and for user mentions, a string of the form "@username" appears in the raw data. These two tokens are removed as well. In addition, hashtags are converted to plain tokens by removal of the "#" sign appearing before them (e.g., <#pregnancy> becomes <pregnancy>). The number of words, the number of unique words and the mean word counts for each Twitter channel can also be examined from Table I. The longest tweet consists of 27 words.

B. Conventional Representations

For representing tweets, 5 conventional representation methods are proposed as baselines.

1) Word frequency features: For word occurrence-based representations of tweets, conventional tf-idf and BoWs are used to obtain the document-term matrix of size $N \times P$, in which each row corresponds to a tweet and each column corresponds to a unique word/token, i.e., $N$ data points and $P$ features. As the document-term matrix obtained from tf-idf or BoWs features is extremely sparse and consequently redundant across many dimensions, dimensionality reduction and topic modeling to a lower dimensional latent space are performed by the methods below.

2) Principal Component Analysis (PCA): PCA is used to map the word frequency representations from the original feature space to a lower dimensional feature space by an orthogonal linear transformation, in such a way that the first principal component has the highest possible variance and, similarly, each succeeding component has the highest variance possible while being orthogonal to the preceding components. Our PCA implementation has a time complexity of $O(NP^2 + P^3)$.

3) Truncated Singular Value Decomposition (t-SVD): Standard SVD and t-SVD are commonly employed dimensionality reduction techniques in which a matrix is reduced or approximated into a low-rank decomposition. The time complexities of SVD and t-SVD for $S$ components are $O(\min(NP^2, N^2P))$ and $O(N^2 S)$, respectively (depending on the implementation). Contrary to PCA, t-SVD can be applied to sparse matrices efficiently as it does not require data normalization. When the data matrix is obtained by BoWs or tf-idf representations, as in our case, the technique is also known as Latent Semantic Analysis.

4) LDA: Our LDA implementation employs the online variational Bayes algorithm introduced by Hoffman et al., which uses stochastic optimization to maximize the objective function for the topic model [48].

5) NMF: As NMF finds two non-negative matrices whose product approximates the non-negative document-term matrix, it allows regularization. Our implementation did not employ any regularization, and the divergence function is set to be the squared error, i.e., the Frobenius norm.

C. Representation Learning

We propose 2D convolutional autoencoders for extracting compact representations of tweets from their raw form in a highly non-linear fashion. In order to turn a given tweet into a 2D structure to be fed into the CAE, we extract the word vectors of each word using word embedding models, i.e., for a given tweet, $t$, consisting of $W$ words, the 2D input is $I_t \in \mathbb{R}^{W \times D}$, where $D$ is the embedding vector dimension.
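A minimal scikit-learn sketch of the five conventional baselines of Section III-B is given below. All reducers are set to 24 components to match the CAE bottleneck; any hyper-parameter not stated in the text is left at the library default, and the tiny repeated corpus only stands in for the 63,326 cleaned tweets.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import PCA, TruncatedSVD, LatentDirichletAllocation, NMF

# Placeholder corpus (repeated so that the sample and vocabulary sizes exceed 24).
tweets = ["suicide risk falls after talk therapy",
          "air pollution may be tied to anxiety",
          "stress depression boost risks for heart patients",
          "nearly 1 in 5 americans out of work for a year is clinically depressed",
          "study shows how exercise protects the brain against depression"] * 50

bows = CountVectorizer().fit_transform(tweets)    # bag-of-words counts (sparse N x P)
tfidf = TfidfVectorizer().fit_transform(tweets)   # tf-idf weights (sparse N x P)

n_components = 24
lsa = TruncatedSVD(n_components=n_components).fit_transform(tfidf)       # t-SVD / LSA on sparse input
pca = PCA(n_components=n_components).fit_transform(tfidf.toarray())      # PCA needs a dense matrix
lda = LatentDirichletAllocation(n_components=n_components,
                                learning_method="online").fit_transform(bows)  # online variational Bayes
nmf = NMF(n_components=n_components).fit_transform(tfidf)                # Frobenius loss, no regularization
```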
We compare 4 different word embeddings, namely word2vec, GloVe, fastText and BERT, with embedding vector dimensions of 300, 300, 300 and 768, respectively. We set the maximum sequence length to 32, i.e., for tweets having fewer words, the input matrix is padded with zeros. As word2vec and GloVe embeddings cannot handle out-of-vocabulary words, such cases are represented as a vector of zeros. The process of extracting word vector representations of a tweet to form the 2D input matrix can be examined from Figure 1.

The CAE architecture can be considered as consisting of 2 parts, i.e., the encoder and the decoder. The encoder, $f_{enc}(\cdot)$, is the part of the network that compresses the input, $I$, into a latent space representation, $U$, and the decoder, $f_{dec}(\cdot)$, aims to reconstruct the input from the latent space representation (see Equation 1). In essence,

$$U = f_{enc}(I) = f_L(f_{L-1}(...f_1(I))) \qquad (1)$$

where $L$ is the number of layers in the encoder part of the CAE. The encoder in the proposed architecture consists of three 2D convolutional layers with 64, 32 and 1 filters, respectively. The decoder follows the same symmetry with three convolutional layers with 1, 32 and 64 filters, respectively, and an output convolutional layer of a single filter (see Figure 1). All convolutional layers have a kernel size of (3×3) and an activation function of Rectified Linear Unit (ReLU), except the output layer, which employs a linear activation function. Each convolutional layer in the encoder is followed by a 2D MaxPooling layer and, similarly, each convolutional layer in the decoder is followed by a 2D UpSampling layer, serving as an inverse operation (having the same parameters). The pooling sizes for the pooling layers are (2×5), (2×5) and (2×2), respectively, for the architectures in which word2vec, GloVe and fastText embeddings are employed. With this configuration, an input tweet of size 32 × 300 (corresponding to maximum sequence length × embedding dimension, $D$) is downsampled to a size of 4 × 6 out of the encoder (bottleneck layer). As BERT word embeddings have word vectors of fixed size 768, the pooling layer sizes are chosen to be (2×8), (2×8) and (2×2), respectively, for that case. In summary, a representation of 4 × 6 = 24 values is learned for each tweet through the encoder, e.g., for fastText embeddings the flow of dimensions after each encoder block is as follows: 32 × 300 → 16 × 60 → 8 × 12 → 4 × 6.

In numerous NLP tasks, an Embedding Layer is employed as the first layer of the neural network, which can be initialized with the word embedding matrix in order to incorporate the embedding process into the architecture itself instead of manual extraction. In our case, this was not possible because of the nonexistence of an inverse embedding layer in the decoder (as in the relationship between MaxPooling layers and UpSampling layers), as an embedding layer is not differentiable. Training of autoencoders tries to minimize the reconstruction error/loss, i.e., the deviation of the reconstructed output from the input. The L2-loss, or mean square error (MSE), is chosen to be the loss function. In autoencoders, minimizing the L2-loss is equivalent to maximizing the mutual information between the reconstructed inputs and the original ones [49]. In addition, from a probabilistic point of view, minimizing the L2-loss is the same as maximizing the probability of the parameters given the data, corresponding to a maximum likelihood estimator.
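A tf.keras sketch of the described encoder-decoder, assuming fastText-style 32 × 300 inputs, is shown below. Filter counts, kernel sizes, pooling sizes and activations follow the text; details the text does not specify (e.g., 'same' padding) are our assumptions, and the optimizer settings anticipate the training details given next.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

inp = layers.Input(shape=(32, 300, 1))               # 32 tokens x 300-dim word vectors

# Encoder: 32x300 -> 16x60 -> 8x12 -> 4x6 (a 24-value bottleneck)
x = layers.Conv2D(64, (3, 3), activation="relu", padding="same")(inp)
x = layers.MaxPooling2D((2, 5))(x)
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
x = layers.MaxPooling2D((2, 5))(x)
x = layers.Conv2D(1, (3, 3), activation="relu", padding="same")(x)
encoded = layers.MaxPooling2D((2, 2))(x)

# Decoder mirrors the encoder, with UpSampling2D as the inverse of pooling
x = layers.Conv2D(1, (3, 3), activation="relu", padding="same")(encoded)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
x = layers.UpSampling2D((2, 5))(x)
x = layers.Conv2D(64, (3, 3), activation="relu", padding="same")(x)
x = layers.UpSampling2D((2, 5))(x)
out = layers.Conv2D(1, (3, 3), activation="linear", padding="same")(x)   # linear output layer

autoencoder = models.Model(inp, out)
encoder = models.Model(inp, encoded)                  # used afterwards to extract representations
autoencoder.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5), loss="mse")
autoencoder.summary()
```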
The optimizer for the autoencoder training is chosen to be Adam due to its faster convergence abilities [50]. The learning rate for the optimizer is set to $10^{-5}$ and the batch size for the training is set to 32. A random split of 80% training-20% validation set is performed for monitoring convergence. The maximum number of training epochs is set to 50.

D. L2-norm Constrained Representation Learning

Certain constraints on neural network weights are commonly employed during training in order to reduce overfitting, also known as regularization. Such constraints include L1 regularization, L2 regularization, orthogonal regularization, etc. Even though regularization is a common practice, standard training of neural networks does not inherently impose any constraints on the learned representations (activations), $U$, other than the ones compelled by the activation functions (e.g., ReLUs resulting in non-negative outputs). Recent advancements in computer vision research show that constraining the learned representations can enhance the effectiveness of representation learning, consequently increasing the clustering performance [51], [52].

$$\begin{aligned} \text{minimize} \quad & L = \frac{1}{N} \left\lVert I - f_{dec}(f_{enc}(I)) \right\rVert_2^2 \\ \text{subject to} \quad & \left\lVert f_{enc}(I) \right\rVert_2^2 = 1 \end{aligned} \qquad (2)$$

We propose an L2-norm constraint on the learned representations out of the bottleneck layer, $U$. Essentially, this is a hard constraint introduced during neural network training that results in learned features with unit L2 norm out of the bottleneck layer (see Equation 2, where $N$ is the number of data points). Training a deep convolutional autoencoder with such a constraint is shown to be much more effective for image data than applying L2 normalization on the learned representations after training [52]. To the best of our knowledge, this is the first study to incorporate an L2-norm constraint in a task involving text data.

E. Evaluation

In order to fairly compare and evaluate the proposed methods in terms of effectiveness in the representation of tweets, we fix the number of features to 24 for all methods and feed these representations as input to 3 different clustering algorithms, namely k-means, Ward and spectral clustering, with cluster numbers of 10, 20 and 50. The distance metric for k-means clustering is chosen to be Euclidean, and the linkage criterion for Ward clustering is chosen to be minimizing the sum of differences within all clusters, i.e., recursively merging pairs of clusters that minimally increase the within-cluster variance in a hierarchical manner. For spectral clustering, a Gaussian kernel has been employed for constructing the affinity matrix. We also run experiments with tf-idf and BoWs representations without further dimensionality reduction, as well as with the concatenation of all word embeddings into a long feature vector. For evaluation of clustering performance, we use the Calinski-Harabasz score [43], also known as the variance ratio criterion. The CH score is defined as the ratio of the between-cluster dispersion to the within-cluster dispersion. The CH score has a range of $[0, +\infty)$ and a higher CH score corresponds to a better clustering. The computational complexity of calculating the CH score is $O(N)$.
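One way the unit-L2-norm constraint of Eq. (2) could be realized in practice is to L2-normalize the flattened bottleneck activations inside the forward pass, so that every learned representation has unit norm during training. The sketch below illustrates this idea with a deliberately simplified dense stand-in for the encoder and decoder (the exact mechanism used in [52] and in this work may differ in detail) and then feeds the resulting codes to k-means and the CH score as in the evaluation protocol.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

def l2_constrained_bottleneck(encoded):
    # Flatten the 4x6x1 bottleneck, rescale to unit L2 norm, restore the shape.
    flat = layers.Flatten()(encoded)
    unit = layers.Lambda(lambda t: tf.math.l2_normalize(t, axis=1))(flat)
    return layers.Reshape((4, 6, 1))(unit)

# Toy autoencoder: dense layers stand in for the convolutional blocks, for brevity only.
inp = layers.Input(shape=(32, 300, 1))
enc = layers.Reshape((4, 6, 1))(layers.Dense(24)(layers.Flatten()(inp)))
code = l2_constrained_bottleneck(enc)
dec = layers.Reshape((32, 300, 1))(layers.Dense(32 * 300)(layers.Flatten()(code)))
l2_cae = models.Model(inp, dec)
l2_cae.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5), loss="mse")

# After (hypothetical) training, the unit-norm codes feed the clustering evaluation.
encoder = models.Model(inp, layers.Flatten()(code))
U = encoder.predict(np.random.rand(256, 32, 300, 1))        # placeholder tweet matrices
labels = KMeans(n_clusters=10, random_state=0).fit_predict(U)
print(calinski_harabasz_score(U, labels))
```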
3,514
1901.00439
2907303408
Metrics for evaluating the performance of clustering algorithms vary depending on whether ground truth topic categories are available or not. If so, frequently used metrics are accuracy and normalized mutual information. In the absence of ground truth labels, one has to use internal clustering criteria such as the Calinski-Harabasz (CH) score @cite_58 and the Davies-Bouldin index @cite_4. An extensive comparative study of cluster validity indices is provided in @cite_28.
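For reference, all of the mentioned criteria are available in scikit-learn; the short sketch below shows an external metric usable when topic labels exist (normalized mutual information) next to the internal indices usable without labels (Calinski-Harabasz and Davies-Bouldin), with random placeholders standing in for representations and cluster assignments.

```python
import numpy as np
from sklearn.metrics import (normalized_mutual_info_score,
                             calinski_harabasz_score,
                             davies_bouldin_score)

X = np.random.rand(500, 24)                   # placeholder representations
pred = np.random.randint(0, 10, size=500)     # placeholder cluster assignments
true = np.random.randint(0, 10, size=500)     # placeholder ground-truth topics

print(normalized_mutual_info_score(true, pred))   # external: needs ground truth
print(calinski_harabasz_score(X, pred))           # internal: higher is better
print(davies_bouldin_score(X, pred))              # internal: lower is better
```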
{ "abstract": [ "The validation of the results obtained by clustering algorithms is a fundamental part of the clustering process. The most used approaches for cluster validation are based on internal cluster validity indices. Although many indices have been proposed, there is no recent extensive comparative study of their performance. In this paper we show the results of an experimental work that compares 30 cluster validity indices in many different environments with different characteristics. These results can serve as a guideline for selecting the most suitable index for each possible application and provide a deep insight into the performance differences between the currently available indices.", "A method for identifying clusters of points in a multidimensional Euclidean space is described and its application to taxonomy considered. It reconciles, in a sense, two different approaches to the investigation of the spatial relationships between the points, viz., the agglomerative and the divisive methods. A graph, the shortest dendrite of . (1951a), is constructed on a nearest neighbour basis and then divided into clusters by applying the criterion of minimum within cluster sum of squares. This procedure ensures an effective reduction of the number of possible splits. The method may be applied to a dichotomous division, but is perfectly suitable also for a global division into any number of clusters. An informal indicator of the \"best number\" of clusters is suggested. It is a\"variance ratio criterion\" giving some insight into the structure of the points. The method is illustrated by three examples, one of which is original. The results obtained by the dendrite method are compared with those...", "A measure is presented which indicates the similarity of clusters which are assumed to have a data density which is a decreasing function of distance from a vector characteristic of the cluster. The measure can be used to infer the appropriateness of data partitions and can therefore be used to compare relative appropriateness of various divisions of the data. The measure does not depend on either the number of clusters analyzed nor the method of partitioning of the data and can be used to guide a cluster seeking algorithm." ], "cite_N": [ "@cite_28", "@cite_58", "@cite_4" ], "mid": [ "1985059878", "2085487226", "2051224630" ] }
3,514
1907.04112
2957945053
When studying multi-body protein complexes, biochemists use computational tools that can suggest hundreds or thousands of their possible spatial configurations. However, it is not feasible to experimentally verify more than only a very small subset of them. In this paper, we propose a novel multiscale visual drilldown approach that was designed in tight collaboration with proteomic experts, enabling a systematic exploration of the configuration space. Our approach takes advantage of the hierarchical structure of the data -- from the whole ensemble of protein complex configurations to the individual configurations, their contact interfaces, and the interacting amino acids. Our new solution is based on interactively linked 2D and 3D views for individual hierarchy levels and at each level, we offer a set of selection and filtering operations enabling the user to narrow down the number of configurations that need to be manually scrutinized. Furthermore, we offer a dedicated filter interface, which provides the users with an overview of the applied filtering operations and enables them to examine their impact on the explored ensemble. This way, we maintain the history of the exploration process and thus enable the user to return to an earlier point of the exploration. We demonstrate the effectiveness of our approach on two case studies conducted by collaborating proteomic experts.
Most of the currently available computational tools for protein-protein interactions focus on protein pairs; a comprehensive overview was published by Huang @cite_4 . Some of the existing approaches, such as ArDock @cite_29 , already combine the computational method with a basic visual representation of the predictions. There are even solutions, such as DockingShop @cite_28 , which enable the user to interactively design an initial configuration for a protein docking prediction process through a molecular graphics interface.
{ "abstract": [ "The molecular docking problem is to determine how molecules interact with other molecules and plays a key role in understanding how cells function. DockingShop is an integrated environment far interactively steering molecular docking by navigating a ligand or protein to the receptor's estimated binding site. This tool provides a graphical interface for molecular modeling featuring real-time visual guides, interactive manipulation, navigation, optimization, and dynamic visualization enabling users to apply their biological knowledge to steer the docking process.", "", "Protein–protein docking is attracting increasing attention in drug discovery research targeting protein–protein interactions, owing to its potential in predicting protein–protein interactions and identifying ‘hot spot’ residues at the protein–protein interface. Given the relative lack of information about binding sites and the fact that proteins are generally larger than ligand, the search algorithms and evaluation methods for protein–protein docking differ somewhat from those for protein–ligand docking and, hence, require different research strategies. Here, we review the basic concepts, principles and advances of current search strategies and evaluation methods for protein–protein docking. We also discuss the current challenges and limitations, as well as future directions, of established approaches." ], "cite_N": [ "@cite_28", "@cite_29", "@cite_4" ], "mid": [ "2106082849", "2808404822", "1979284950" ] }
0
1907.04112
2957945053
When studying multi-body protein complexes, biochemists use computational tools that can suggest hundreds or thousands of their possible spatial configurations. However, it is not feasible to experimentally verify more than only a very small subset of them. In this paper, we propose a novel multiscale visual drilldown approach that was designed in tight collaboration with proteomic experts, enabling a systematic exploration of the configuration space. Our approach takes advantage of the hierarchical structure of the data -- from the whole ensemble of protein complex configurations to the individual configurations, their contact interfaces, and the interacting amino acids. Our new solution is based on interactively linked 2D and 3D views for individual hierarchy levels and at each level, we offer a set of selection and filtering operations enabling the user to narrow down the number of configurations that need to be manually scrutinized. Furthermore, we offer a dedicated filter interface, which provides the users with an overview of the applied filtering operations and enables them to examine their impact on the explored ensemble. This way, we maintain the history of the exploration process and thus enable the user to return to an earlier point of the exploration. We demonstrate the effectiveness of our approach on two case studies conducted by collaborating proteomic experts.
One of the first tools designed primarily for multi-body docking was CombDock @cite_10 . The algorithm works on the principle of hierarchically constructing the complex from smaller subunits and greedily selecting the best-ranking subunits. The combinatorial step is followed by a reduction of the solutions based on RMSD and a scoring function. Multi-LZerD @cite_24 uses a genetic algorithm to generate complexes from initial pairwise docks and applies an energy-minimization structure refinement procedure for ranking the solutions. @cite_13 proposed an ant colony optimization approach to solve the combinatorial problem. DockStar @cite_33 formulates the task of detecting the spatial conformation of a protein complex as an Integer Linear Program. Unlike other methods, it also integrates experimental data from mass spectrometry into the scoring of the solutions. Another tool reusing pairwise docks in combination with experimental data is PRISM-EM @cite_14 . It uses density maps from cryo-electron microscopy to guide the placement of subunits.
{ "abstract": [ "Many biological processes are governed by large assemblies of protein molecules. However, it is often very difficult to determine the three-dimensional structures of these assemblies using experimental biophysical techniques. Hence there is a need to develop computational approaches to fill this gap. This article presents an ant colony optimization approach to predict the structure of large multi-component protein complexes. Starting from pair-wise docking predictions, a multi-graph consisting of vertices representing the component proteins and edges representing candidate interactions is constructed. This allows the assembly problem to be expressed in terms of searching for a minimum weight spanning tree. However, because the problem remains highly combinatorial, the search space cannot be enumerated exhaustively and therefore heuristic optimisation techniques must be used. The utility of the ant colony based approach is demonstrated by re-assembling known protein complexes from the Protein Data Bank. The algorithm is able to identify near-native solutions for five of the six cases tested. This demonstrates that the ant colony approach provides a useful way to deal with the highly combinatorial multi-component protein assembly problem.", "A revised Table 6 and Supporting Information are provided for the article by [(2016), Acta Cryst. D72, 1137–1148].", "", "The tertiary structures of protein complexes provide a crucial insight about the molecular mechanisms that regulate their functions and assembly. However, solving protein complex structures by experimental methods is often more difficult than single protein structures. Here, we have developed a novel computational multiple protein docking algorithm, Multi-LZerD, that builds models of multimeric complexes by effectively reusing pairwise docking predictions of component proteins. A genetic algorithm is applied to explore the conformational space followed by a structure refinement procedure. Benchmark on eleven hetero-multimeric complexes resulted in near native conformations for all but one of them (a root mean square deviation smaller than 2.5A). We also show that our method copes with unbound docking cases well, outperforming the methodology that can be directly compared to our approach. Multi-LZerD was able to predict near native structures for multimeric complexes of various topologies.", "The majority of proteins function when associated in multimolecular assemblies. Yet, prediction of the structures of multimolecular complexes has largely not been addressed, probably due to the magnitude of the combinatorial complexity of the problem. Docking applications have traditionally been used to predict pairwise interactions between molecules. We have developed an algorithm that extends the application of docking to multimolecular assemblies. We apply it to predict quaternary structures of both oligomers and multi-protein complexes. The algorithm predicted well a near-native arrangement of the input subunits for all cases in our data set, where the number of the subunits of the different target complexes varied from three to ten. In order to simulate a more realistic scenario, unbound cases were tested. In these cases the input conformations of the subunits are either unbound conformations of the subunits or a model obtained by a homology modeling technique. 
The successful predictions of the unbound cases, where the input conformations of the subunits are different from their conformations within the target complex, suggest that the algorithm is robust. We expect that this type of algorithm should be particularly useful to predict the structures of large macromolecular assemblies, which are difficult to solve by experimental structure determination." ], "cite_N": [ "@cite_13", "@cite_14", "@cite_33", "@cite_24", "@cite_10" ], "mid": [ "2159653313", "2522679778", "2155286731", "2101980405", "2005303043" ] }
0
1907.04112
2957945053
When studying multi-body protein complexes, biochemists use computational tools that can suggest hundreds or thousands of their possible spatial configurations. However, it is not feasible to experimentally verify more than only a very small subset of them. In this paper, we propose a novel multiscale visual drilldown approach that was designed in tight collaboration with proteomic experts, enabling a systematic exploration of the configuration space. Our approach takes advantage of the hierarchical structure of the data -- from the whole ensemble of protein complex configurations to the individual configurations, their contact interfaces, and the interacting amino acids. Our new solution is based on interactively linked 2D and 3D views for individual hierarchy levels and at each level, we offer a set of selection and filtering operations enabling the user to narrow down the number of configurations that need to be manually scrutinized. Furthermore, we offer a dedicated filter interface, which provides the users with an overview of the applied filtering operations and enables them to examine their impact on the explored ensemble. This way, we maintain the history of the exploration process and thus enable the user to return to an earlier point of the exploration. We demonstrate the effectiveness of our approach on two case studies conducted by collaborating proteomic experts.
Although these schematic representations convey information about a single configuration, they do not support the comparison and interactive filtering of entire ensembles of configurations. This issue is addressed in the CoCoMaps @cite_22 and COZOID @cite_25 tools. Both tools come with linked visualizations, aiding the users in analyzing and comparing interactions between protein pairs. CoCoMaps and its successor CONS-COCOMAPS @cite_1 make it possible to measure and visualize the consensus among multiple docking solutions and display the conservation of residue contacts using intermolecular contact maps. The COZOID tool uses a set of linked views for the interactive exploration of large ensembles of protein pairs, supporting a visual drilldown approach for narrowing down the set of possibly relevant configurations. The main limitation of these approaches is that they operate only on protein pairs (i.e., single ) and cannot be directly applied to multi-body complexes. The multiscale aspect in molecular visualization can be explored on different granularity levels, as shown in the recent survey of @cite_6 .
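The consensus idea behind CONS-COCOMAPS can be made concrete with a small sketch: given residue-residue contact maps computed for several docking solutions, averaging the boolean maps yields, for every residue pair, the fraction of solutions in which that contact occurs. The Python/NumPy code below is only an illustration of this idea under assumed inputs (random coordinates, an 8 Angstrom cutoff); it is not the published tool.

import numpy as np

def contact_map(coords_a, coords_b, cutoff=8.0):
    # Boolean residue-residue contact map between two chains.
    # coords_a, coords_b: (n_res, 3) arrays of representative atom
    # coordinates; the cutoff in Angstrom is an illustrative assumption.
    dists = np.linalg.norm(coords_a[:, None, :] - coords_b[None, :, :], axis=-1)
    return dists < cutoff

def consensus_map(solutions, cutoff=8.0):
    # Fraction of docking solutions in which each residue contact appears.
    maps = [contact_map(a, b, cutoff) for a, b in solutions]
    return np.mean(maps, axis=0)

# Toy example: 5 random "docking solutions" of a 30- and a 40-residue chain.
rng = np.random.default_rng(1)
solutions = [(rng.uniform(0, 50, (30, 3)), rng.uniform(0, 50, (40, 3)))
             for _ in range(5)]
conservation = consensus_map(solutions)
print(conservation.shape, conservation.max())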
{ "abstract": [ "Background The development of accurate protein-protein docking programs is making this kind of simulations an effective tool to predict the 3D structure and the surface of interaction between the molecular partners in macromolecular complexes. However, correctly scoring multiple docking solutions is still an open problem. As a consequence, the accurate and tedious screening of many docking models is usually required in the analysis step.", "", "Summary: Herein we present COCOMAPS, a novel tool for analyzing, visualizing and comparing the interface in protein-protein and protein-nucleic acids complexes. COCOMAPS combines traditional analyses and 3D visualization of the interface with the effectiveness of intermolecular contact maps. Availability: COCOMAPS is accessible as a public web tool at http: www.molnac.unisa.it BioTools cocomaps", "Abstract We provide a high-level survey of multiscale molecular visualization techniques, with a focus on application-domain questions, challenges, and tasks. We provide a general introduction to molecular visualization basics and describe a number of domain-specific tasks that drive this work. These tasks, in turn, serve as the general structure of the following survey. First, we discuss methods that support the visual analysis of molecular dynamics simulations. We discuss, in particular, visual abstraction and temporal aggregation. In the second part, we survey multiscale approaches that support the design, analysis, and manipulation of DNA nanostructures and related concepts for abstraction, scale transition, scale-dependent modeling, and navigation of the resulting abstraction spaces. In the third part of the survey, we showcase approaches that support interactive exploration within large structural biology assemblies up to the size of bacterial cells. We describe fundamental rendering techniques as well as approaches for element instantiation, visibility management, visual guidance, camera control, and support of depth perception. We close the survey with a brief listing of important tools that implement many of the discussed approaches and a conclusion that provides some research challenges in the field." ], "cite_N": [ "@cite_1", "@cite_25", "@cite_22", "@cite_6" ], "mid": [ "2015934148", "", "2129562311", "2890964348" ] }
0
1907.04112
2957945053
When studying multi-body protein complexes, biochemists use computational tools that can suggest hundreds or thousands of their possible spatial configurations. However, it is not feasible to experimentally verify more than only a very small subset of them. In this paper, we propose a novel multiscale visual drilldown approach that was designed in tight collaboration with proteomic experts, enabling a systematic exploration of the configuration space. Our approach takes advantage of the hierarchical structure of the data -- from the whole ensemble of protein complex configurations to the individual configurations, their contact interfaces, and the interacting amino acids. Our new solution is based on interactively linked 2D and 3D views for individual hierarchy levels and at each level, we offer a set of selection and filtering operations enabling the user to narrow down the number of configurations that need to be manually scrutinized. Furthermore, we offer a dedicated filter interface, which provides the users with an overview of the applied filtering operations and enables them to examine their impact on the explored ensemble. This way, we maintain the history of the exploration process and thus enable the user to return to an earlier point of the exploration. We demonstrate the effectiveness of our approach on two case studies conducted by collaborating proteomic experts.
In our case, we were concerned not only with designing proper visual representations of the individual hierarchy levels of large ensembles of multi-body complexes, but also with how to interactively explore and filter these ensembles to support the identification of the biochemically most relevant instances. @cite_34 focus on the problem of interactive visual steering of hierarchical simulation ensembles. In their substantially different application case, they also deal with linking representations on different levels of detail, as well as with the challenge that the ensemble can grow during the exploration process.
{ "abstract": [ "Multi-level simulation models, i.e., models where different components are simulated using sub-models of varying levels of complexity, belong to the current state-of-the-art in simulation. The existing analysis practice for multi-level simulation results is to manually compare results from different levels of complexity, amounting to a very tedious and error-prone, trial-and-error exploration process. In this paper, we introduce hierarchical visual steering, a new approach to the exploration and design of complex systems. Hierarchical visual steering makes it possible to explore and analyze hierarchical simulation ensembles at different levels of complexity. At each level, we deal with a dynamic simulation ensemble — the ensemble grows during the exploration process. There is at least one such ensemble per simulation level, resulting in a collection of dynamic ensembles, analyzed simultaneously. The key challenge is to map the multi-dimensional parameter space of one ensemble to the multi-dimensional parameter space of another ensemble (from another level). In order to support the interactive visual analysis of such complex data we propose a novel approach to interactive and semi-automatic parameter space segmentation and comparison. The approach combines a novel interaction technique and automatic, computational methods — clustering, concave hull computation, and concave polygon overlapping — to support the analysts in the cross-ensemble parameter space mapping. In addition to the novel parameter space segmentation we also deploy coordinated multiple views with standard plots. We describe the abstract analysis tasks, identified during a case study, i.e., the design of a variable valve actuation system of a car engine. The study is conducted in cooperation with experts from the automotive industry. Very positive feedback indicates the usefulness and efficiency of the newly proposed approach." ], "cite_N": [ "@cite_34" ], "mid": [ "2186401311" ] }
0
1901.00363
2907819829
Most text detection methods hypothesize texts are horizontal or multi-oriented and thus define quadrangles as the basic detection unit. However, text in the wild is usually perspectively distorted or curved, which can not be easily tackled by existing approaches. In this paper, we propose a deep character embedding network (CENet) which simultaneously predicts the bounding boxes of characters and their embedding vectors, thus making text detection a simple clustering task in the character embedding space. The proposed method does not require strong assumptions of forming a straight line on general text detection, which provides flexibility on arbitrarily curved or perspectively distorted text. For character detection task, a dense prediction subnetwork is designed to obtain the confidence score and bounding boxes of characters. For character embedding task, a subnet is trained with contrastive loss to project detected characters into embedding space. The two tasks share a backbone CNN from which the multi-scale feature maps are extracted. The final text regions can be easily achieved by a thresholding process on character confidence and embedding distance of character pairs. We evaluated our method on ICDAR13, ICDAR15, MSRA-TD500, and Total-Text. The proposed method achieves state-of-the-art or comparable performance on all these datasets, and shows substantial improvement in the irregular-text datasets, i.e. Total-Text.
MSER @cite_5 and SWT @cite_21 are classical text component extraction methods. In the era of deep learning, CTPN @cite_17 extracts horizontal text components of fixed width using a modified Faster R-CNN framework. Horizontal text lines are easily generated, since CTPN adjusted the Faster R-CNN @cite_7 framework to output dense text components. SegLink @cite_36 proposed a kind of oriented text component (i.e., segment) and a component-pair connection structure (i.e., link). A link indicates which two segments should be connected. Naturally, SegLink deals better with multi-oriented texts than CTPN. PixelLink @cite_1 provided an instance-segmentation-based solution that detects text pixels and their linkage with neighboring pixels. Positive pixels with positive links are grouped into connected components. Besides, the Markov Clustering Network @cite_11 regards detected text pixels as nodes and associates them with computed attractors through a designed Markov clustering network. The above-mentioned methods provided inspiring ideas on text detection. However, the regions between characters are sometimes indistinguishable from the background, especially in text lines where the distances between characters are large.
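The grouping step shared by these component-and-link detectors can be illustrated with a short union-find sketch in Python: detected components (segments or pixels) are merged whenever the predicted link between a pair is positive, and each resulting connected component becomes one text instance. The pair scores, threshold, and data layout below are illustrative assumptions, not the published implementations of SegLink or PixelLink.

def group_components(num_components, link_scores, threshold=0.5):
    # Union-find grouping of detected components.
    # link_scores maps a pair (i, j) to the predicted link probability;
    # pairs, scores, and the threshold are illustrative.
    parent = list(range(num_components))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx != ry:
            parent[ry] = rx

    for (i, j), score in link_scores.items():
        if score >= threshold:
            union(i, j)

    groups = {}
    for i in range(num_components):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Toy example: 5 components; links connect 0-1-2 and 3-4.
print(group_components(5, {(0, 1): 0.9, (1, 2): 0.8, (3, 4): 0.7, (2, 3): 0.1}))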
{ "abstract": [ "A novel framework named Markov Clustering Network (MCN) is proposed for fast and robust scene text detection. MCN predicts instance-level bounding boxes by firstly converting an image into a Stochastic Flow Graph (SFG) and then performing Markov Clustering on this graph. Our method can detect text objects with arbitrary size and orientation without prior knowledge of object size. The stochastic flow graph encode objects' local correlation and semantic information. An object is modeled as strongly connected nodes, which allows flexible bottom-up detection for scale-varying and rotated objects. MCN generates bounding boxes without using Non-Maximum Suppression, and it can be fully parallelized on GPUs. The evaluation on public benchmarks shows that our method outperforms the existing methods by a large margin in detecting multi-oriented text objects. MCN achieves new state-of-art performance on challenging MSRA-TD500 dataset with precision of 0.88, recall of 0.79 and F-score of 0.83. Also, MCN achieves realtime inference with frame rate of 34 FPS, which is 1.5 A— speedup when compared with the fastest scene text detection algorithm.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "Most state-of-the-art text detection methods are specific to horizontal Latin text and are not fast enough for real-time applications. We introduce Segment Linking (SegLink), an oriented text detection method. The main idea is to decompose text into two locally detectable elements, namely segments and links. A segment is an oriented box covering a part of a word or text line; A link connects two adjacent segments, indicating that they belong to the same word or text line. Both elements are detected densely at multiple scales by an end-to-end trained, fully-convolutional neural network. Final detections are produced by combining segments connected by links. Compared with previous methods, SegLink improves along the dimensions of accuracy, speed, and ease of training. It achieves an f-measure of 75.0 on the standard ICDAR 2015 Incidental (Challenge 4) benchmark, outperforming the previous best by a large margin. It runs at over 20 FPS on 512x512 images. Moreover, without modification, SegLink is able to detect long lines of non-Latin text, such as Chinese.", "We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. 
The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect texts in many fonts and languages.", "Most state-of-the-art scene text detection algorithms are deep learning based methods that depend on bounding box regression and perform at least two kinds of predictions: text non-text classification and location regression. Regression plays a key role in the acquisition of bounding boxes in these methods, but it is not indispensable because text non-text prediction can also be considered as a kind of semantic segmentation that contains full location information in itself. However, text instances in scene images often lie very close to each other, making them very difficult to separate via semantic segmentation. Therefore, instance segmentation is needed to address this problem. In this paper, PixelLink, a novel scene text detection algorithm based on instance segmentation, is proposed. Text instances are first segmented out by linking pixels within the same instance together. Text bounding boxes are then extracted directly from the segmentation result without location regression. Experiments show that, compared with regression-based methods, PixelLink can achieve better or comparable performance on several benchmarks, while requiring many fewer training iterations and less training data.", "An end-to-end real-time scene text localization and recognition method is presented. The real-time performance is achieved by posing the character detection problem as an efficient sequential selection from the set of Extremal Regions (ERs). The ER detector is robust to blur, illumination, color and texture variation and handles low-contrast text. In the first classification stage, the probability of each ER being a character is estimated using novel features calculated with O(1) complexity per region tested. Only ERs with locally maximal probability are selected for the second stage, where the classification is improved using more computationally expensive features. A highly efficient exhaustive search with feedback loops is then applied to group ERs into words and to select the most probable character segmentation. Finally, text is recognized in an OCR stage trained using synthetic fonts. The method was evaluated on two public datasets. On the ICDAR 2011 dataset, the method achieves state-of-the-art text localization results amongst published methods and it is the first one to report results for end-to-end text recognition. On the more challenging Street View Text dataset, the method achieves state-of-the-art recall. The robustness of the proposed method against noise and low contrast of characters is demonstrated by “false positives” caused by detected watermark text in the dataset.", "We propose a novel Connectionist Text Proposal Network (CTPN) that accurately localizes text lines in natural image. The CTPN detects a text line in a sequence of fine-scale text proposals directly in convolutional feature maps. We develop a vertical anchor mechanism that jointly predicts location and text non-text score of each fixed-width proposal, considerably improving localization accuracy. The sequential proposals are naturally connected by a recurrent neural network, which is seamlessly incorporated into the convolutional network, resulting in an end-to-end trainable model. 
This allows the CTPN to explore rich context information of image, making it powerful to detect extremely ambiguous text. The CTPN works reliably on multi-scale and multi-language text without further post-processing, departing from previous bottom-up methods requiring multi-step post filtering. It achieves 0.88 and 0.61 F-measure on the ICDAR 2013 and 2015 benchmarks, surpassing recent results [8, 35] by a large margin. The CTPN is computationally efficient with 0.14 s image, by using the very deep VGG16 model [27]. Online demo is available: http: textdet.com ." ], "cite_N": [ "@cite_11", "@cite_7", "@cite_36", "@cite_21", "@cite_1", "@cite_5", "@cite_17" ], "mid": [ "2963161243", "2613718673", "2950143680", "2142159465", "2781529199", "2061802763", "2519818067" ] }
Detecting Text in the Wild with Deep Character Embedding Network
Optical Character Recognition (OCR) is a long-standing problem that attracts the interest of many researchers with its recent focus on scene text. It enables computers to extract text from images, which facilitates various applications, such as scene text translation, scanned document reading, etc. As the first step of OCR, the flexibility and robustness of text detection significantly affect the overall performance of OCR system. The goal for text detection algorithms is to generate bounding boundaries of text units as tight as possible. these authors contribute equally in this work. arXiv:1901.00363v1 [cs.CV] 2 Jan 2019 When dealing with different kinds of text, different text unit should be defined in advance. When detecting text in Latin, the text unit is usually "word"; while if in Asian language, it is "text line" instead. Words or lines have a strong prior by their nature. The characters in them tend to usually cluster as straight lines. Therefore, it is natural to define rectangles or quadrangles that wrap text as the objective of detection. This prior has been widely used in many text detection works and achieved promising results [41,31,12,32,24,18,17,5,25]. However, when text appears in the wild, it often suffers from severe deformation and distortion. Even worse, some text are curved as designed. In such scenario, this strong prior does not hold. Fig. 1 shows curved text with quadrangle bounding boxes and curved tight bounding boundaries. It can be easily observed the quadrangle bounding box inevitably contains extra background, making it more ambiguous than curved polygon boundaries. We realized that if characters can be detected and a flexible way to group them into text can be found, tight bounding boundaries will be easily generated with the boundary of characters. Characters are also fundamental elements of text, this idea can be naturally extended to irregular text. In early attempts [31,36,37], scholars turned to use a heuristic clustering method with hand-crafted features to link detected character parts into text lines. The non data-driven heuristic clustering methods are fragile, requiring a thorough check on corner cases manually. Also, the hand-crafted features ignore large parts of visual context information of text, making it less discriminative to determine the closeness between characters. Thereby, we propose a Character Embedding Network (CENet) in a fully data-driven way. The model detects characters as the first step. After characters being detected, they are projected into an embedding space by the same model where characters belonging to the same text unit are close to each other, and characters belonging to different text units are far from each other. During the training stage, the network is jointly trained with a character detection loss and a character embedding loss. During the inference stage, a single forward pass could produce character candidates as well as their representation in the embedding space. A simple distance thresholding is then applied to determine connected character pairs. Connected character pairs further form text groups by chaining the characters together. After the connection relationships are properly learned, the text units could be detected regardless of text length or distortion the text suffers. To the best of our knowledge, the proposed CENet is the first to model text grouping problem as a character embedding learning problem. 
It does not rely on strong priors, making it capable of detecting arbitrarily curved or distorted text. Moreover, since both character detection and character grouping tasks are based on local patch of images, our model could be directly expand from "word" detection to "line" detection without modifying the backbone model for larger receptive field. Our model also avoids complicated heuristic grouping rules or hand-crafted features. At last, our single model performs two tasks with a single forward pass, only adding minimal overhead over character detection network. The contributions of this paper are three-fold: -We propose a multi-task network to detect arbitrarily curved text in the wild. The character detection subnet is trained to detect character proposals, and the character embedding subnet learns a way to project characters into embedding space. Complicated post-processing steps, e.g. character grouping and word partition, are then be simplified as a simple distance thresholding step in the embedding space. -We adopt a weakly supervised method to train character detector with only word-level polygon annotations, without the strong hypothesis that text should appear in a straight line. -We conduct extensive experiments on several benchmarks to detect horizontal words, multi-oriented words, multi-oriented lines and curved words, demonstrating the superior performance of of our method over the existing methods. Method There are two tasks that our model is supposed to solve. One is to detect characters and the other is to project characters into an embedding space where characters belonging to the same group are close, and characters belonging to different groups are far from each other. Sharing a backbone CNN, the two tasks are implemented by separate subnets, i.e., a character detection subnet and a character embedding subnet. To put it another way, our framework is a single backbone network with two output heads. With the calculated character candidates and their corresponding embedding vectors, the post processing removes false positive and groups characters in an efficient and effective manner. Network design We use ResNet-50 [7] as the backbone network of our model. Following recent network design practices [31,19,12], we concatenate semantic features from three different layers of the backbone ResNet-50 network. After deconvolutional operations, the features are concatenated as shared feature maps which are 1/4 of the original image in size. A character detection subnet and a character embedding subnet are stacked on top of the shared feature maps. The character detection subnet is a convolutional network that produces 5 channels as the final output. The channels are offsets ∆x tl , ∆y tl , ∆x br , ∆y br and confidence score, where tl means top left and br means bottom right. The top left and bottom right bounding box coordinates of detected character candidates could be calculated by (x − ∆x tl , y − ∆y tl ) and (x + ∆x br , y + ∆y br ), where x and y are coordinates of pixel whose confidence score greater than a threshold s. The bounding boxes further serve as RoIs of characters. The character embedding subnet takes the residual convolution unit (RCU) as the basic blocks which is simplified residual block without batch normalization. 
The design was inspired by [31] where the authors showed that the scores and bounding box sizes of character proposals offer strong clues on whether they belong to the same group, and the feature maps extracted by the backbone network contains such information. Therefore, residual units were chosen to preserve score and bounding box information from feature maps, directly passing them to top layers by skip connection. On top of the RCU blocks, we employ a 1x1 convolution layer with linear activation function to output a 128-channel final embedding map. RoI pooing with 1 × 1 kernel is applied on the embedding maps extracting embedding vectors for each character. During inference, we extract confidence map, offset maps and embedding maps from the two heads of the model. After thresholding on the score map and performing NMS on character proposals, the embedding vectors are extracted by 1×1 RoI pooling on embedding map. In the end, we output character candidates with the format of {score, coordinates(x, y) of character center, width, height, 128D embedding vector}. Characters are finally clustered into text blocks as the last post-processing step. The overall structure of the model and pipeline are shown in Fig. 2. Res Training character detector Loss definition The character detector consists of two tasks that include text/non-text classification and box regression. The loss can be formulated as L char = L cls + λ 1 L reg ,(1) where L cls denotes the binary classification loss, L reg represents the box regression loss, and λ 1 is a factor to balance the two losses. In this paper, we use pixel-wise hinge-loss as classification cost. Some measures for class balance or boosting (e.g., OHEM [29]) are adopted in our experiments. Usually, we set the sampling ratio of 1 : 3 to balance the positive and negative samples, where 30% of negative samples selected from the top hardest in a training batch. Here, IoUloss [38] is adopted as the regression cost which handles the problem of bounding box accuracy bias between large and small objects instead of L2-loss. Learning character detector from coarse annotation Since it is laborintensive to annotate character-level boxes, most of public benchmarks like IC-DAR15 [15] and Total-Text [2] provide only quadrangle or polygon annotations for words, and MSRA-TD500 provides annotations for sentences. Those annotations are all coarse annotations. Inspired by WordSup [12], which recursively rectifies character-level supervisions and updates the model parameters with the rectified character-level supervision, a new rectification rule is designed for producing character-level supervision. This rule is capable of training character detector from bounding boundary annotations with polygon format, while WordSup may fail. Our design follows the general observation that the short side of a nearly horizontal (or vertical) text is approximately equal to the heights (or width) of characters in it. The short side could be used to rectify the imprecise predicted characters with the following pipeline. Firstly, each annotated quadrangle or polygon is uniformly divided into N bounding boxes along the center line, where N denotes the character number of the text transcript. We call the preliminary bounding box segmentations as coarse-char boxes. After one forward pass, some candidate character boxes (namely pred-char boxes) with high confidence are collected. 
Finer character boxes (namely fine-char boxes) are produced from coarse-char boxes and their corresponding matched pred-char boxes. If no matched pred-char is founded, the coarse-char box is used as a fine-char box directly. Otherwise, if the annotated text is more horizontal, the width of the fine-char box is set to be the width of pred-char box, and the height is set to the height of the coarse-char box; if more vertical, the width is the width of coarse-char box, and the height is the height of pred-char box. The obtained fine-char boxes are used as "ground truth" in Equ. 1 to update model. The matched pred-char box p of a coarse-char box c should meet the following constraints: S(p) > t 1 IoU (p, c) > t 2 ,(2) where S(p) denotes the confidence score of pred-char box p, IoU (p, c) means Intersection over Union between the pred-char box and coarse-char box. t 1 and t 2 are predefined to 0.2 and 0.5 in our experiments. The visualization of the rectification procedure is shown in Fig. 3. Learning character embedding The character embedding subnet is another crucial part in our model. In an ideal case, we hope the subnet projects the characters into an embedding space. Distances between characters among the same text unit are small in the learned space, and that between those belong to different units to be large. Therefore we can group characters into text blocks by performing clustering in the embedding space. This case resembles the objective of metric learning, which aims to learn a distance function to measure similarities between samples. Inspired by previous works in metric learning, we select the most straightforward contrastive loss to train our model. Contrastive loss takes pairs of characters into calculation. Let i and j denote the index of character candidates in a pair, v i and v j denote their embedding vectors that are extracted by the embedding subnet, and l i,j denotes whether they belong to the same text unit. If they do, we name pair (i, j) to be positive pair and l i,j = 1. Otherwise, pair (i, j) is defined as negative pair and l i,j = 0 . The Contrastive Loss is defined as J(i, j) = l i,j [D(v i , v j )] 2 + (1 − l i,j )max(0, 1 − D(v i , v j )) 2 ,(3) where D denotes the distance measure. In training, v i and v j are pulled close to each other if l i,j = 1. If l i,j = 0, v j and v i are pushed away from each other until D(v i , v j ) > 1. Constructing Local Character Pairs It is worth-noting that in every definition of text, characters in the same text unit are naturally close in the image. Two small characters are unlikely from the same text unit if they are too far from each other in the image. However, if they are on the endpoints of a line of characters, the probability of their belonging to same text line are significantly increased. The key difference is whether there are closely scattered characters, namely local character pairs, that connect individual characters in one text unit. In addition, it is unnecessary to train models with all possible character pairs. Instead, when all the local character pairs are correctly labeled, all of the text units would be correctly detected. Working with local character pairs also reduces the requirement of large receptive field when detecting long text. In this work, we employ k nearest neighbor with radius (r-KNN) to incorporate such information. When producing possible pairs, each character was selected as anchor in turn. 
With an anchor selected, at most k characters which are closest to anchor in the image were taken form pairs. Another useful heuristic rule is that a character is more likely to be connected with characters with similar box size. Therefore, only characters within radius were kept. To formalize this empirical pair sampling method, we define c i , w i , and h i as the center coordinates, width, and height of character i in image respectively; and KNN(i) be a function that generates the set of k nearest neighbors of character i in the image. Then j ∈ r-KNN(i, βr(i)) represents j is in KNN(i) and the spatial distance D(c i , c j ) < β w 2 i + h 2 i . Both k and β were set to 5 in our experiments. When j ∈ r-KNN(i), we call i and j produces a locally connected pair. Here we define the set of all locally connected pairs as LCP = {(i, j), i ∈ M, j ∈ r-KNN(i)}, where M is the total number of character candidates in one image. With r-KNN preprocessing, there are only O(kM ) locally connected pairs remaining, reducing the size of character pairs to a reasonable level. We noticed that the positive pairs are redundant. The minimum requisite for error-less positive pairs is that at least one chain connects all characters in a text unit. Positive pairs with large embedding distances do not contribute any text level error as long as the minimum requisite is satisfied. However, a negative pair with small embedding distance will certainly mis-connect two text units and generate text level error. Meanwhile, we found there are about 3/4 of local character pairs are positive. According to the above analysis, we assume the negative pairs should be weighted more than the positive pairs in training. Therefore, we sample R pairs from LCP of batch images so that there are α pairs are negative in a batch. Let's denote the sampled pairs set as SP , the final re-weighted loss for learning embedding is defined as Equ. 4. We found R = 1024 and α = 60% work well in our experiments. L emb = i,j∈SP J(i, j) R .(4) The loss function to train the whole network then becomes L = L cls + λ 1 L reg + λ 2 L emb ,(5) where λ 1 and λ 2 control the balance among the losses. We set both λ 1 and λ 2 to 1 in our experiments. Post-processing In testing, we employ two threshold values (s and d) to filter false character candidates and group characters into text units. After a forward pass, the proposed model would provide a set of character candidates and their corresponding embedding vectors. Then, the character candidates with confidence scores greater than s are kept. Next, r-KNN is performed on each character, outputting the local character pairs in whole image. To address the character grouping problem, we simply cut down the connected pairs whose embedding distances are over d. Following the steps above, we can quickly find characters from the same groups. The final step is to represent the character groups in a suitable way. In this paper, we adopted the piecewise linear method that used in WordSup [12] to format the boundary of character groups. This method provides various configurable boundary formats, which meet the requirements of different benchmarks. On ICDAR15, a filtering strategy that removes short words with less than two detected characters are applied. This strategy aims to further remove false alarm from the detection results. Experiments We conduct experiments on ICDAR13, ICDAR15, MSRA-TD500, and Total-Text datasets, to explore how the proposed approach performs in different sce-narios. 
The four chosen datasets focus on horizontal-oriented text, multi-oriented text, sentence-level long text, as well as curved-oriented text respectively. Experiments on synthetic data are also conducted for structural search and pretraining. We also list recent state-of-art methods for comparison. Datasets and Evaluation Five datasets are used in the experiments: -VGG 50k. The VGG SynthText dataset [6] consists of 800,000 images, where the synthesized text are rendered in various background images. The dataset provides detailed character-level, word-level and line-level annotations. For the experimental efficiency, we randomly select 50,000 images for training and 500 images for validation. This subset is referred as VGG 50k. -ICDAR13. The ICDAR13 dataset [16] is from ICDAR 2013 Robust Reading Competition. The texts are well focused and horizontal oriented. Annotations on character-level and word-level bounding boxes are both provided. There are 229 training images and 233 testing images. -ICDAR15. The ICDAR15 dataset [15] is from ICDAR 2015 Robust Reading Competition. The images are captured in an incidental way with Google Glass. Only word-level quadrangles annotations are provided in ICDAR15. There are 1000 natural images for training and 500 for testing. Experiments under this dataset shows our method's performance in word-level Latin text detection task. -MSRA-TD500. The MSRA-TD500 dataset [35] is a dataset comprises of 300 training images and 200 test images. Text regions are arbitrarily orientated and annotated at sentence level. Different from the other datasets, it contains both English and Chinese text. We test our method on this dataset to show it is scalability across different languages and different detection level (line level in this dataset). -Total-Text. The Total-Text dataset [2] is recently released in ICDAR2017. Unlike the ICDAR datasets, there are plenty of curved-oriented text as well as horizontal and multi-oriented text in Total-Text. There are 1255 images in training set, and 300 images in test set. Two kinds of annotations are provided: one is word level polygon bounding regions that bind ground-truth words tightly, and word level rectangular bounding boxes as other datasets provided. Since many of the words in this datasets are curved or distorted, it is adopted to validate the generalization ability of our method on irregular text detection tasks. Implementation details Since the training samples are not abundant in these available datasets, we use VGG 50k data to pretrain a base model, and then finetune the base model on other benchmark datasets accordingly. Two models are trained with the wordlevel annotation and line-level annotation of VGG 50k data respectively. The backbone ResNet-50 model was first pretrained on ImageNet dataset. Then the models are trained on VGG 50k dataset for character detection and further finetuned with both character detection and character embedding loss. The converged models are used as pretrained models for training other benchmarks. We have not adopted any more data augmentation when training models with VGG 50k data. For the remaining benchmark datasets, we perform multi scale data augmentation by resizing the image to [0.65, 0.75, 1, 1.2] scales of the original image respectively, and cropped with a sliding window of size 512 × 512 with stride 256 to generate images for training. 
During training, we randomly rotate the cropped image to 90 o , 180 o or 270 o , and distort brightness, contrast and color on all three benchmark datasets. When training with data without character level annotation, the supervision for character detection comes from the weak supervision mechanism depicted above. Boxes used to train character embedding are the same coarse-char box used for character detection. We found a "mixing batch" trick helps. In practice, a half of the mixing batch are sampled from benchmark data, and the other half are from VGG 50k which provide character-level annotation. Character supervision for data from VGG 50k comes from their character annotation. The optimizer is SGD with momentum in all the model training. We train the models 50k iteration at learning rate of 0.001, 30K iterations at 0.0001, and 20K iterations at 0.00001. The momentum was set to 0.9 in all the experiments. The two threshold for post-processing, i.e. s and g, are tuned by grid search on training set. All the experiments are carried out on a shared server with a NVIDIA Tesla P40 GPU. Training a batch takes about 2s. Inference was done on original images. The average inference time cost is 276ms per image with size 768 × 1280, the forward pass, r-KNN search, NMS, and other operations cost 181ms, 21ms, 51ms and 23ms, respectively. Ablation Study As shown in Tab. 1, ablation experiments have been done on ICDAR15 dataset. Three key components in our pipeline are evaluated. Specifically, the mixing batch trick used in weak supervision, the positive-negative pair reweighting strategy, and short word removal strategy are added progressively to show their impact on overall performance. Without bells and whistles, the model trained merely with weak character supervision and local character pairs converges successfully but gives mediocre results (73% in Recall). The character detection subnet was more likely overfitted on text components instead of characters. With "mixing batch" trick, word recall is improved strikingly by about 4% with similar precision. The finding here may imply that this trick, as a regularization measure, prevents the weak character supervision from prevailing. In other words, weak character supervision tends to results in a certain amount of "soft" ground truths while the precise character supervision can pull the trained model to its correct position. If we further add positive-negative pair reweighting trick in character embedding, performances in both precision and recall increase by 2%. In accordance to our previous analysis in Sec.3.3, more balanced positive-negative pairs are behind the improvement. In addition, a detected word is error-prone if it is too short. Removal of the word less than 2 characters is adopted, which indicates 3.8% improvement in precision without hurting recall. Tab. 2 lists the results on ICDAR13 dataset of various state-of-art methods. Our model presents a competitive performance on this scenario. The demonstrated that the proposed CENet is capable of learning horizontal text line. Note that WordSup adopted the horizontal nature of text directly when grouping characters into text lines, and the data-driven CENet could achieve a similar performance without utilizing that strong prior. s, d are set to 0.4 and 0.45 in this dataset. Experiments on Scene Text Benchmarks (a) (b) (c) (d) (e) (f) (g) (h) We conduct experiments on ICDAR15 dataset, comparing the results of the proposed CENet with other state-of-the-art methods. As shown in Tab. 
3, our single scale CENet outperforms most of the existing approaches in terms of F-measure. This shows that character detection and character embedding together can handle most cases in regular text word detection. Our model learns both the character proposals and their relationship in terms of grouping, reducing wrongly-grouped and wrongly-split characters compared with word based methods [41,10]. s, d are set to 0.35 and 0.38 in this dataset. Tab. 4 lists the results on MSRA-TD500 dataset. Our model achieve best result w.r.t F-measure on this dataset. The dataset is multilingual and is a good test-bed for generalization. For our model, it is basic unit is character which is dependent on local patch and character embedding connects neighboring units by propagation. Therefore it escapes from the large receptive field requirement of one stage methods. s, d are set to 0.4 and 0.3 in this dataset. On the most challenging Total-text dataset, the proposed method presents an overwhelming advantage over other methods in comparison, as is shown in Tab 4. The baseline comes from DeconveNet that predicts a score map of text followed by connected component analysis. VGG 50K dataset contains some curved text, Table 4. Results of different methods on MSRA-TD500. Method Recall Precision F-measure Zhang et al. [39] 67 83 74 EAST [41] 67.43 87.28 76.08 He et al. [10] 70 77 74 PixelLink [4] 83.0 73.2 77.8 CENet(VGG 50k+MSRA TD500 finetune) 75.26 85.88 80.21 We visualize detection results of our model on four benchmarks, illustrates in Fig. 4. Results show our model can tackle text detection in various scenarios, especially on curved texts. Future Works Our model predicts rich information including text level boundaries as well as character bounding boxes. With a view to these advantages, we hope to incorporate the acquired detection information into the follow-up text recognition. For instance, we may use the predicted character position to align the attention weight or boost CTC based recognition. Conclusion Observing the demerits of previous text detection methods, we present a novel scene text detection model. The model is more flexible to detect texts that captured unconstrained, the curved or severely distorted texts in particular. it is completely data-driven in an end-to-end way and thus makes little use of heuristic rules or handcrafted features. it is also trained with two correlated tasks, i.e., the character detection and character embedding, which is unprecedented. To train the network smoothly, we also propose several measures, i.e. weak supervision mechanism for training character detector and positive-negative pair reweighting, to facilitate training and boost the performance. Extensive experiments on benchmarks show that the proposed framework could achieve superior performances even though texts are displayed in multi-orientated, line-level or curved ways.
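A minimal NumPy illustration of the character-embedding objective and the test-time grouping rule described in the text above: the contrastive loss of Equation 3 for a single character pair, and the rule that connects two detected characters when their embedding distance falls below the threshold d (0.38 is the value the text reports for ICDAR15). The toy two-dimensional vectors are made up for illustration; the paper uses 128-dimensional embeddings, and this sketch is not the authors' code.

import numpy as np

def contrastive_loss(v_i, v_j, same_text):
    # Contrastive loss for one pair of character embeddings (Eq. 3):
    # same_text = 1 pulls the pair together; same_text = 0 pushes the
    # pair apart until their distance exceeds the margin of 1.
    d = np.linalg.norm(v_i - v_j)
    return same_text * d ** 2 + (1 - same_text) * max(0.0, 1.0 - d) ** 2

def connected(v_i, v_j, d_threshold=0.38):
    # Test-time rule: link two detected characters if their embedding
    # distance is below d (0.38 is the value reported for ICDAR15).
    return np.linalg.norm(v_i - v_j) < d_threshold

# Toy embeddings: a and b should belong to the same word, c should not.
a, b, c = np.array([0.1, 0.2]), np.array([0.12, 0.21]), np.array([0.9, -0.4])
print(contrastive_loss(a, b, 1), contrastive_loss(a, c, 0))
print(connected(a, b), connected(a, c))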
4,337
1901.00363
2907819829
Most text detection methods hypothesize texts are horizontal or multi-oriented and thus define quadrangles as the basic detection unit. However, text in the wild is usually perspectively distorted or curved, which can not be easily tackled by existing approaches. In this paper, we propose a deep character embedding network (CENet) which simultaneously predicts the bounding boxes of characters and their embedding vectors, thus making text detection a simple clustering task in the character embedding space. The proposed method does not require strong assumptions of forming a straight line on general text detection, which provides flexibility on arbitrarily curved or perspectively distorted text. For character detection task, a dense prediction subnetwork is designed to obtain the confidence score and bounding boxes of characters. For character embedding task, a subnet is trained with contrastive loss to project detected characters into embedding space. The two tasks share a backbone CNN from which the multi-scale feature maps are extracted. The final text regions can be easily achieved by a thresholding process on character confidence and embedding distance of character pairs. We evaluated our method on ICDAR13, ICDAR15, MSRA-TD500, and Total-Text. The proposed method achieves state-of-the-art or comparable performance on all these datasets, and shows substantial improvement in the irregular-text datasets, i.e. Total-Text.
Recently, quite a few works @cite_9 @cite_38 @cite_27 @cite_0 @cite_15 @cite_31 @cite_22 @cite_32 have put emphasis on adjusting popular object detection frameworks, including Faster R-CNN @cite_7 , SSD @cite_23 and DenseBox @cite_10 , to detect word boundaries. In contrast to general objects, text appearing in the real world exhibits a larger variety of aspect ratios and orientations. @cite_27 and @cite_38 directly added more anchor boxes of large aspect ratio to cover a wider range of texts. @cite_9 and @cite_15 added an angle property to the bounding box to deal with multiple orientations, while EAST @cite_0 and @cite_31 provided a looser representation, namely a quadrangle. These methods achieve high performance on benchmarks with word-level annotations, but not on non-Latin scripts or on curved text with polygon-level annotations.
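A small Python sketch of the anchor-generation tweak mentioned above: on top of the default aspect ratios, anchors with much larger width-to-height ratios are added so that long horizontal words are covered by at least one default box. The base size, scales, and ratio list below are illustrative assumptions, not the exact settings of the cited detectors.

import numpy as np

def text_anchors(base_size=16, scales=(1, 2, 4),
                 aspect_ratios=(1, 2, 3, 5, 7, 10)):
    # Generate (width, height) anchor shapes centred at the origin.
    # Long aspect ratios (5, 7, 10, ...) are the typical addition for
    # text detection, letting a single anchor cover a wide word.
    anchors = []
    for s in scales:
        area = (base_size * s) ** 2
        for ar in aspect_ratios:
            w = np.sqrt(area * ar)   # width grows with the ratio
            h = np.sqrt(area / ar)   # height shrinks so the area is kept
            anchors.append((w, h))
    return np.array(anchors)

print(text_anchors().round(1))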
{ "abstract": [ "In this paper, we develop a new approach called DeepText for text region proposal generation and text detection in natural images via a fully convolutional neural network (CNN). First, we propose the novel inception region proposal network (Inception-RPN), which slides an inception network with multi-scale windows over the top of convolutional feature maps and associates a set of text characteristic prior bounding boxes with each sliding position to generate high recall word region proposals. Next, we present a powerful text detection network that embeds ambiguous text category (ATC) information and multi-level region-of-interest pooling (MLRP) for text and non-text classification and accurate localization refinement. Our approach achieves an F-measure of 0.83 and 0.85 on the ICDAR 2011 and 2013 robust text detection benchmarks, outperforming previous state-of-the-art results.", "In this paper, we first provide a new perspective to divide existing high performance object detection methods into direct and indirect regressions. Direct regression performs boundary regression by predicting the offsets from a given point, while indirect regression predicts the offsets from some bounding box proposals. In the context of multioriented scene text detection, we analyze the drawbacks of indirect regression, which covers the state-of-the-art detection structures Faster-RCNN and SSD as instances, and point out the potential superiority of direct regression. To verify this point of view, we propose a deep direct regression based method for multi-oriented scene text detection. Our detection framework is simple and effective with a fully convolutional network and one-step post processing. The fully convolutional network is optimized in an end-to-end way and has bi-task outputs where one is pixel-wise classification between text and non-text, and the other is direct regression to determine the vertex coordinates of quadrilateral text boundaries. The proposed method is particularly beneficial to localize incidental scene texts. On the ICDAR2015 Incidental Scene Text benchmark, our method achieves the F-measure of 81 , which is a new state-ofthe-art and significantly outperforms previous approaches. On other standard datasets with focused scene texts, our method also reaches the state-of-the-art performance.", "We present a novel single-shot text detector that directly outputs word-level bounding boxes in a natural image. We propose an attention mechanism which roughly identifies text regions via an automatically learned attentional map. This substantially suppresses background interference in the convolutional features, which is the key to producing accurate inference of words, particularly at extremely small sizes. This results in a single model that essentially works in a coarse-to-fine manner. It departs from recent FCN-based text detectors which cascade multiple FCN models to achieve an accurate prediction. Furthermore, we develop a hierarchical inception module which efficiently aggregates multi-scale inception features. This enhances local details, and also encodes strong context information, allowing the detector to work reliably on multi-scale and multi-orientation text with single-scale images. Our text detector achieves an F-measure of 77 on the ICDAR 2015 benchmark, advancing the state-of-the-art results in [18, 28]. Demo is available at: http: sstd.whuang.org .", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. 
Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "In this paper we introduce a new method for text detection in natural images. The method comprises two contributions: First, a fast and scalable engine to generate synthetic images of text in clutter. This engine overlays synthetic text to existing background images in a natural way, accounting for the local 3D scene geometry. Second, we use the synthetic images to train a Fully-Convolutional Regression Network (FCRN) which efficiently performs text detection and bounding-box regression at all locations and multiple scales in an image. We discuss the relation of FCRN to the recently-introduced YOLO detector, as well as other end-toend object detection systems based on deep learning. The resulting detection network significantly out performs current methods for text detection in natural images, achieving an F-measure of 84.2 on the standard ICDAR 2013 benchmark. Furthermore, it can process 15 images per second on a GPU.", "Previous deep learning based state-of-the-art scene text detection methods can be roughly classified into two categories. The first category treats scene text as a type of general objects and follows general object detection paradigm to localize scene text by regressing the text box locations, but troubled by the arbitrary-orientation and large aspect ratios of scene text. The second one segments text regions directly, but mostly needs complex post processing. In this paper, we present a method that combines the ideas of the two types of methods while avoiding their shortcomings. We propose to detect scene text by localizing corner points of text bounding boxes and segmenting text regions in relative positions. In inference stage, candidate boxes are generated by sampling and grouping corner points, which are further scored by segmentation maps and suppressed by NMS. Compared with previous methods, our method can handle long oriented text naturally and doesn't need complex post processing. The experiments on ICDAR2013, ICDAR2015, MSRA-TD500, MLT and COCO-Text demonstrate that the proposed algorithm achieves better or comparable results in both accuracy and efficiency. Based on VGG16, it achieves an F-measure of 84.3 on ICDAR2015 and 81.5 on MSRA-TD500.", "Previous approaches for scene text detection have already achieved promising performances across various benchmarks. 
However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.", "This paper presents an end-to-end trainable fast scene text detector, named TextBoxes, which detects scene text with both high accuracy and efficiency in a single network forward pass, involving no post-process except for a standard non-maximum suppression. TextBoxes outperforms competing methods in terms of text localization accuracy and is much faster, taking only 0.09s per image in a fast implementation. Furthermore, combined with a text recognizer, TextBoxes significantly outperforms state-of-the-art approaches on word spotting and end-to-end text recognition tasks.", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "Detecting incidental scene text is a challenging task because of multi-orientation, perspective distortion, and variation of text size, color and scale. Retrospective research has only focused on using rectangular bounding box or horizontal sliding window to localize text, which may result in redundant background noise, unnecessary overlap or even information loss. 
To address these issues, we propose a new Convolutional Neural Networks (CNNs) based method, named Deep Matching Prior Network (DMPNet), to detect text with tighter quadrangle. First, we use quadrilateral sliding windows in several specific intermediate convolutional layers to roughly recall the text with higher overlapping area and then a shared Monte-Carlo method is proposed for fast and accurate computing of the polygonal areas. After that, we designed a sequential protocol for relative regression which can exactly predict text with compact quadrangle. Moreover, a auxiliary smooth Ln loss is also proposed for further regressing the position of text, which has better overall performance than L2 loss and smooth L1 loss in terms of robustness and stability. The effectiveness of our approach is evaluated on a public word-level, multi-oriented scene text database, ICDAR 2015 Robust Reading Competition Challenge 4 Incidental scene text localization. The performance of our method is evaluated by using F-measure and found to be 70.64 , outperforming the existing state-of-the-art method with F-measure 63.76 .", "How can a single fully convolutional neural network (FCN) perform on object detection? We introduce DenseBox, a unified end-to-end FCN framework that directly predicts bounding boxes and object class confidences through all locations and scales of an image. Our contribution is two-fold. First, we show that a single FCN, if designed and optimized carefully, can detect multiple different objects extremely accurately and efficiently. Second, we show that when incorporating with landmark localization during multi-task learning, DenseBox further improves object detection accuray. We present experimental results on public benchmark datasets including MALF face detection and KITTI car detection, that indicate our DenseBox is the state-of-the-art system for detecting challenging objects such as faces and cars." ], "cite_N": [ "@cite_38", "@cite_31", "@cite_22", "@cite_7", "@cite_9", "@cite_32", "@cite_0", "@cite_27", "@cite_23", "@cite_15", "@cite_10" ], "mid": [ "2704256938", "2604735854", "2963977642", "2613718673", "2343052201", "2963840241", "2951285986", "2962773189", "2193145675", "2604243686", "2129987527" ] }
Detecting Text in the Wild with Deep Character Embedding Network
Optical Character Recognition (OCR) is a long-standing problem that attracts the interest of many researchers with its recent focus on scene text. It enables computers to extract text from images, which facilitates various applications, such as scene text translation, scanned document reading, etc. As the first step of OCR, the flexibility and robustness of text detection significantly affect the overall performance of OCR system. The goal for text detection algorithms is to generate bounding boundaries of text units as tight as possible. these authors contribute equally in this work. arXiv:1901.00363v1 [cs.CV] 2 Jan 2019 When dealing with different kinds of text, different text unit should be defined in advance. When detecting text in Latin, the text unit is usually "word"; while if in Asian language, it is "text line" instead. Words or lines have a strong prior by their nature. The characters in them tend to usually cluster as straight lines. Therefore, it is natural to define rectangles or quadrangles that wrap text as the objective of detection. This prior has been widely used in many text detection works and achieved promising results [41,31,12,32,24,18,17,5,25]. However, when text appears in the wild, it often suffers from severe deformation and distortion. Even worse, some text are curved as designed. In such scenario, this strong prior does not hold. Fig. 1 shows curved text with quadrangle bounding boxes and curved tight bounding boundaries. It can be easily observed the quadrangle bounding box inevitably contains extra background, making it more ambiguous than curved polygon boundaries. We realized that if characters can be detected and a flexible way to group them into text can be found, tight bounding boundaries will be easily generated with the boundary of characters. Characters are also fundamental elements of text, this idea can be naturally extended to irregular text. In early attempts [31,36,37], scholars turned to use a heuristic clustering method with hand-crafted features to link detected character parts into text lines. The non data-driven heuristic clustering methods are fragile, requiring a thorough check on corner cases manually. Also, the hand-crafted features ignore large parts of visual context information of text, making it less discriminative to determine the closeness between characters. Thereby, we propose a Character Embedding Network (CENet) in a fully data-driven way. The model detects characters as the first step. After characters being detected, they are projected into an embedding space by the same model where characters belonging to the same text unit are close to each other, and characters belonging to different text units are far from each other. During the training stage, the network is jointly trained with a character detection loss and a character embedding loss. During the inference stage, a single forward pass could produce character candidates as well as their representation in the embedding space. A simple distance thresholding is then applied to determine connected character pairs. Connected character pairs further form text groups by chaining the characters together. After the connection relationships are properly learned, the text units could be detected regardless of text length or distortion the text suffers. To the best of our knowledge, the proposed CENet is the first to model text grouping problem as a character embedding learning problem. 
It does not rely on strong priors, making it capable of detecting arbitrarily curved or distorted text. Moreover, since both character detection and character grouping tasks are based on local patch of images, our model could be directly expand from "word" detection to "line" detection without modifying the backbone model for larger receptive field. Our model also avoids complicated heuristic grouping rules or hand-crafted features. At last, our single model performs two tasks with a single forward pass, only adding minimal overhead over character detection network. The contributions of this paper are three-fold: -We propose a multi-task network to detect arbitrarily curved text in the wild. The character detection subnet is trained to detect character proposals, and the character embedding subnet learns a way to project characters into embedding space. Complicated post-processing steps, e.g. character grouping and word partition, are then be simplified as a simple distance thresholding step in the embedding space. -We adopt a weakly supervised method to train character detector with only word-level polygon annotations, without the strong hypothesis that text should appear in a straight line. -We conduct extensive experiments on several benchmarks to detect horizontal words, multi-oriented words, multi-oriented lines and curved words, demonstrating the superior performance of of our method over the existing methods. Method There are two tasks that our model is supposed to solve. One is to detect characters and the other is to project characters into an embedding space where characters belonging to the same group are close, and characters belonging to different groups are far from each other. Sharing a backbone CNN, the two tasks are implemented by separate subnets, i.e., a character detection subnet and a character embedding subnet. To put it another way, our framework is a single backbone network with two output heads. With the calculated character candidates and their corresponding embedding vectors, the post processing removes false positive and groups characters in an efficient and effective manner. Network design We use ResNet-50 [7] as the backbone network of our model. Following recent network design practices [31,19,12], we concatenate semantic features from three different layers of the backbone ResNet-50 network. After deconvolutional operations, the features are concatenated as shared feature maps which are 1/4 of the original image in size. A character detection subnet and a character embedding subnet are stacked on top of the shared feature maps. The character detection subnet is a convolutional network that produces 5 channels as the final output. The channels are offsets ∆x tl , ∆y tl , ∆x br , ∆y br and confidence score, where tl means top left and br means bottom right. The top left and bottom right bounding box coordinates of detected character candidates could be calculated by (x − ∆x tl , y − ∆y tl ) and (x + ∆x br , y + ∆y br ), where x and y are coordinates of pixel whose confidence score greater than a threshold s. The bounding boxes further serve as RoIs of characters. The character embedding subnet takes the residual convolution unit (RCU) as the basic blocks which is simplified residual block without batch normalization. 
The design was inspired by [31], where the authors showed that the scores and bounding box sizes of character proposals offer strong clues on whether they belong to the same group, and that the feature maps extracted by the backbone network contain such information. Therefore, residual units were chosen to preserve score and bounding box information from the feature maps, directly passing them to the top layers by skip connections. On top of the RCU blocks, we employ a 1×1 convolution layer with a linear activation function to output a 128-channel final embedding map. RoI pooling with a 1 × 1 kernel is applied on the embedding maps to extract embedding vectors for each character. During inference, we extract the confidence map, offset maps and embedding maps from the two heads of the model. After thresholding on the score map and performing NMS on character proposals, the embedding vectors are extracted by 1×1 RoI pooling on the embedding map. In the end, we output character candidates in the format of {score, coordinates (x, y) of the character center, width, height, 128D embedding vector}. Characters are finally clustered into text blocks as the last post-processing step. The overall structure of the model and pipeline are shown in Fig. 2. Training character detector Loss definition The character detector consists of two tasks, text/non-text classification and box regression. The loss can be formulated as $L_{char} = L_{cls} + \lambda_1 L_{reg}$, (1) where $L_{cls}$ denotes the binary classification loss, $L_{reg}$ represents the box regression loss, and $\lambda_1$ is a factor to balance the two losses. In this paper, we use a pixel-wise hinge loss as the classification cost. Some measures for class balance or boosting (e.g., OHEM [29]) are adopted in our experiments. Usually, we set a sampling ratio of 1:3 to balance the positive and negative samples, where 30% of the negative samples are selected from the hardest examples in a training batch. Here, IoU loss [38] is adopted as the regression cost instead of L2 loss, as it handles the bias in bounding box accuracy between large and small objects. Learning character detector from coarse annotation Since it is labor-intensive to annotate character-level boxes, most public benchmarks like ICDAR15 [15] and Total-Text [2] provide only quadrangle or polygon annotations for words, and MSRA-TD500 provides annotations for sentences. Those annotations are all coarse annotations. Inspired by WordSup [12], which recursively rectifies character-level supervision and updates the model parameters with the rectified supervision, a new rectification rule is designed for producing character-level supervision. This rule is capable of training the character detector from bounding boundary annotations in polygon format, where WordSup may fail. Our design follows the general observation that the short side of a nearly horizontal (or vertical) text is approximately equal to the height (or width) of the characters in it. The short side can be used to rectify the imprecisely predicted characters with the following pipeline. Firstly, each annotated quadrangle or polygon is uniformly divided into N bounding boxes along the center line, where N denotes the number of characters in the text transcript. We call these preliminary bounding box segments coarse-char boxes. After one forward pass, some candidate character boxes (namely pred-char boxes) with high confidence are collected.
Finer character boxes (namely fine-char boxes) are produced from the coarse-char boxes and their corresponding matched pred-char boxes. If no matched pred-char box is found, the coarse-char box is used as a fine-char box directly. Otherwise, if the annotated text is more horizontal, the width of the fine-char box is set to the width of the pred-char box and the height is set to the height of the coarse-char box; if more vertical, the width is the width of the coarse-char box and the height is the height of the pred-char box. The obtained fine-char boxes are used as "ground truth" in Equ. 1 to update the model. The matched pred-char box $p$ of a coarse-char box $c$ should meet the following constraints: $S(p) > t_1$ and $\mathrm{IoU}(p, c) > t_2$, (2) where $S(p)$ denotes the confidence score of pred-char box $p$ and $\mathrm{IoU}(p, c)$ denotes the Intersection over Union between the pred-char box and the coarse-char box. $t_1$ and $t_2$ are set to 0.2 and 0.5 in our experiments. The visualization of the rectification procedure is shown in Fig. 3. Learning character embedding The character embedding subnet is another crucial part of our model. In an ideal case, we hope the subnet projects the characters into an embedding space where distances between characters within the same text unit are small and distances between characters belonging to different units are large. Therefore we can group characters into text blocks by performing clustering in the embedding space. This objective resembles that of metric learning, which aims to learn a distance function to measure similarities between samples. Inspired by previous works in metric learning, we select the most straightforward contrastive loss to train our model. The contrastive loss is computed over pairs of characters. Let $i$ and $j$ denote the indices of the character candidates in a pair, $v_i$ and $v_j$ denote their embedding vectors extracted by the embedding subnet, and $l_{i,j}$ denote whether they belong to the same text unit. If they do, we call pair $(i, j)$ a positive pair and set $l_{i,j} = 1$. Otherwise, pair $(i, j)$ is a negative pair and $l_{i,j} = 0$. The contrastive loss is defined as $J(i, j) = l_{i,j}\,[D(v_i, v_j)]^2 + (1 - l_{i,j})\,[\max(0, 1 - D(v_i, v_j))]^2$, (3) where $D$ denotes the distance measure. In training, $v_i$ and $v_j$ are pulled close to each other if $l_{i,j} = 1$. If $l_{i,j} = 0$, $v_j$ and $v_i$ are pushed away from each other until $D(v_i, v_j) > 1$. Constructing Local Character Pairs It is worth noting that, in every definition of text, characters in the same text unit are naturally close in the image. Two small characters are unlikely to be from the same text unit if they are too far from each other in the image. However, if they lie on the endpoints of a line of characters, the probability of their belonging to the same text line is significantly increased. The key difference is whether there are closely scattered characters, namely local character pairs, that connect individual characters in one text unit. In addition, it is unnecessary to train models with all possible character pairs. Instead, when all the local character pairs are correctly labeled, all of the text units will be correctly detected. Working with local character pairs also reduces the requirement of a large receptive field when detecting long text. In this work, we employ k-nearest-neighbors with radius (r-KNN) to incorporate such information. When producing possible pairs, each character is selected as an anchor in turn.
With an anchor selected, at most $k$ characters that are closest to the anchor in the image are taken to form pairs. Another useful heuristic rule is that a character is more likely to be connected with characters of similar box size. Therefore, only characters within a radius are kept. To formalize this empirical pair sampling method, we define $c_i$, $w_i$, and $h_i$ as the center coordinates, width, and height of character $i$ in the image, respectively, and let $\mathrm{KNN}(i)$ be a function that generates the set of $k$ nearest neighbors of character $i$ in the image. Then $j \in \text{r-KNN}(i, \beta r(i))$ means that $j$ is in $\mathrm{KNN}(i)$ and the spatial distance $D(c_i, c_j) < \beta \sqrt{w_i^2 + h_i^2}$. Both $k$ and $\beta$ were set to 5 in our experiments. When $j \in \text{r-KNN}(i)$, we say that $i$ and $j$ produce a locally connected pair. Here we define the set of all locally connected pairs as $LCP = \{(i, j) \mid i \in \{1, \dots, M\}, j \in \text{r-KNN}(i)\}$, where $M$ is the total number of character candidates in one image. With r-KNN preprocessing, there are only $O(kM)$ locally connected pairs remaining, reducing the number of character pairs to a reasonable level. We noticed that the positive pairs are redundant. The minimum requirement for error-free positive pairs is that at least one chain connects all characters in a text unit. Positive pairs with large embedding distances do not contribute any text-level error as long as this minimum requirement is satisfied. However, a negative pair with a small embedding distance will certainly mis-connect two text units and generate a text-level error. Meanwhile, we found that about 3/4 of the local character pairs are positive. According to the above analysis, we assume the negative pairs should be weighted more than the positive pairs in training. Therefore, we sample $R$ pairs from the $LCP$ of the images in a batch so that a proportion $\alpha$ of the pairs in the batch are negative. Denoting the sampled pair set as $SP$, the final re-weighted loss for learning the embedding is defined as in Equ. 4. We found $R = 1024$ and $\alpha = 60\%$ work well in our experiments. $L_{emb} = \frac{1}{R} \sum_{(i,j) \in SP} J(i, j)$. (4) The loss function to train the whole network then becomes $L = L_{cls} + \lambda_1 L_{reg} + \lambda_2 L_{emb}$, (5) where $\lambda_1$ and $\lambda_2$ control the balance among the losses. We set both $\lambda_1$ and $\lambda_2$ to 1 in our experiments. Post-processing In testing, we employ two threshold values ($s$ and $d$) to filter false character candidates and group characters into text units. After a forward pass, the proposed model provides a set of character candidates and their corresponding embedding vectors. Then, the character candidates with confidence scores greater than $s$ are kept. Next, r-KNN is performed on each character, outputting the local character pairs in the whole image. To address the character grouping problem, we simply cut the connected pairs whose embedding distances are over $d$. Following the steps above, we can quickly find the characters belonging to the same group. The final step is to represent the character groups in a suitable way. In this paper, we adopt the piecewise linear method used in WordSup [12] to format the boundary of the character groups. This method provides various configurable boundary formats, which meet the requirements of different benchmarks. On ICDAR15, a filtering strategy that removes short words with fewer than two detected characters is applied. This strategy aims to further remove false alarms from the detection results. Experiments We conduct experiments on the ICDAR13, ICDAR15, MSRA-TD500, and Total-Text datasets, to explore how the proposed approach performs in different scenarios.
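To make the pair sampling behind Equ. 3 and Equ. 4 concrete, the following is a minimal NumPy sketch. It assumes the locally connected pairs and their labels are already available, uses our own function names, and computes plain (non-differentiable) values purely for illustration; in the actual model the loss is evaluated on the differentiable outputs of the embedding subnet.

```python
import numpy as np

def contrastive_loss(v_i, v_j, same_text, margin=1.0):
    """J(i, j) from Equ. 3: pull positives together, push negatives beyond the margin."""
    dist = np.linalg.norm(np.asarray(v_i) - np.asarray(v_j))
    if same_text:                                   # positive pair, l_ij = 1
        return float(dist ** 2)
    return float(max(0.0, margin - dist) ** 2)      # negative pair, l_ij = 0

def reweighted_embedding_loss(embeddings, lcp_pairs, labels, R=1024, alpha=0.6, seed=0):
    """L_emb from Equ. 4: mean loss over R sampled pairs, a fraction alpha of them negative."""
    rng = np.random.default_rng(seed)
    pos = [p for p, l in zip(lcp_pairs, labels) if l == 1]
    neg = [p for p, l in zip(lcp_pairs, labels) if l == 0]
    n_neg = min(len(neg), int(round(alpha * R)))
    n_pos = min(len(pos), R - n_neg)

    def sample(pairs, n, label):
        idx = rng.choice(len(pairs), size=n, replace=False) if n > 0 else []
        return [(pairs[i], label) for i in idx]

    sampled = sample(pos, n_pos, 1) + sample(neg, n_neg, 0)
    total = sum(contrastive_loss(embeddings[i], embeddings[j], lab == 1)
                for (i, j), lab in sampled)
    return total / max(len(sampled), 1)
```

For example, with `embeddings` of shape (M, 128) and the (pair, label) lists produced by r-KNN over one batch, the returned scalar corresponds to the $L_{emb}$ term that is added to the detection losses in Equ. 5.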
The four chosen datasets focus on horizontally oriented text, multi-oriented text, sentence-level long text, and curved text, respectively. Experiments on synthetic data are also conducted for structural search and pretraining. We also list recent state-of-the-art methods for comparison. Datasets and Evaluation Five datasets are used in the experiments: -VGG 50k. The VGG SynthText dataset [6] consists of 800,000 images, where synthesized text is rendered on various background images. The dataset provides detailed character-level, word-level and line-level annotations. For experimental efficiency, we randomly select 50,000 images for training and 500 images for validation. This subset is referred to as VGG 50k. -ICDAR13. The ICDAR13 dataset [16] is from the ICDAR 2013 Robust Reading Competition. The texts are well focused and horizontally oriented. Annotations for both character-level and word-level bounding boxes are provided. There are 229 training images and 233 testing images. -ICDAR15. The ICDAR15 dataset [15] is from the ICDAR 2015 Robust Reading Competition. The images are captured in an incidental way with Google Glass. Only word-level quadrangle annotations are provided in ICDAR15. There are 1000 natural images for training and 500 for testing. Experiments on this dataset show our method's performance on the word-level Latin text detection task. -MSRA-TD500. The MSRA-TD500 dataset [35] comprises 300 training images and 200 test images. Text regions are arbitrarily oriented and annotated at sentence level. Different from the other datasets, it contains both English and Chinese text. We test our method on this dataset to show its scalability across different languages and a different detection level (line level in this dataset). -Total-Text. The Total-Text dataset [2] was recently released at ICDAR 2017. Unlike the ICDAR datasets, there is plenty of curved text as well as horizontal and multi-oriented text in Total-Text. There are 1255 images in the training set and 300 images in the test set. Two kinds of annotations are provided: one is word-level polygon bounding regions that bind ground-truth words tightly, and the other is word-level rectangular bounding boxes as provided by the other datasets. Since many of the words in this dataset are curved or distorted, it is adopted to validate the generalization ability of our method on irregular text detection tasks. Implementation details Since the training samples are not abundant in these available datasets, we use the VGG 50k data to pretrain a base model, and then finetune the base model on the other benchmark datasets accordingly. Two models are trained with the word-level annotation and line-level annotation of the VGG 50k data, respectively. The backbone ResNet-50 model was first pretrained on the ImageNet dataset. Then the models are trained on the VGG 50k dataset for character detection and further finetuned with both the character detection and character embedding losses. The converged models are used as pretrained models for training on the other benchmarks. We did not adopt any additional data augmentation when training models with the VGG 50k data. For the remaining benchmark datasets, we perform multi-scale data augmentation by resizing the image to [0.65, 0.75, 1, 1.2] scales of the original image, and then crop with a sliding window of size 512 × 512 with stride 256 to generate images for training.
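The multi-scale sliding-window cropping just described can be sketched as follows; the PIL usage, constants and function names are our own illustration, and the corresponding shifting and clipping of the box annotations is omitted.

```python
# Sketch of the multi-scale sliding-window crop augmentation
# (scales [0.65, 0.75, 1, 1.2], 512x512 windows, stride 256).
from PIL import Image

SCALES = (0.65, 0.75, 1.0, 1.2)
CROP, STRIDE = 512, 256

def multi_scale_crops(image_path):
    """Yield 512x512 training crops of an image at several scales."""
    img = Image.open(image_path).convert("RGB")
    for s in SCALES:
        w, h = int(img.width * s), int(img.height * s)
        scaled = img.resize((w, h), Image.BILINEAR)
        for top in range(0, max(h - CROP, 0) + 1, STRIDE):
            for left in range(0, max(w - CROP, 0) + 1, STRIDE):
                yield scaled.crop((left, top, left + CROP, top + CROP))
```

In a full pipeline, each yielded crop would be paired with the character and word annotations that fall inside the window, translated into crop coordinates.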
During training, we randomly rotate the cropped image by 90°, 180° or 270°, and distort brightness, contrast and color on all three benchmark datasets. When training with data without character-level annotation, the supervision for character detection comes from the weak supervision mechanism depicted above. Boxes used to train character embedding are the same coarse-char boxes used for character detection. We found a "mixing batch" trick helps. In practice, half of the mixing batch is sampled from benchmark data, and the other half from VGG 50k, which provides character-level annotations. Character supervision for the data from VGG 50k comes from its character annotations. The optimizer is SGD with momentum in all model training. We train the models for 50K iterations at a learning rate of 0.001, 30K iterations at 0.0001, and 20K iterations at 0.00001. The momentum was set to 0.9 in all the experiments. The two thresholds for post-processing, i.e., s and d, are tuned by grid search on the training set. All the experiments are carried out on a shared server with an NVIDIA Tesla P40 GPU. Training a batch takes about 2s. Inference was done on original images. The average inference time is 276 ms per image of size 768 × 1280; the forward pass, r-KNN search, NMS, and other operations cost 181 ms, 21 ms, 51 ms and 23 ms, respectively. Ablation Study As shown in Tab. 1, ablation experiments have been done on the ICDAR15 dataset. Three key components in our pipeline are evaluated. Specifically, the mixing batch trick used in weak supervision, the positive-negative pair reweighting strategy, and the short-word removal strategy are added progressively to show their impact on overall performance. Without bells and whistles, the model trained merely with weak character supervision and local character pairs converges successfully but gives mediocre results (73% in Recall). The character detection subnet was more likely overfitted to text components instead of characters. With the "mixing batch" trick, word recall is improved strikingly by about 4% with similar precision. This finding may imply that the trick, as a regularization measure, prevents the weak character supervision from prevailing. In other words, weak character supervision tends to result in a certain amount of "soft" ground truths, while the precise character supervision can pull the trained model to its correct position. If we further add the positive-negative pair reweighting trick in character embedding, both precision and recall increase by 2%. In accordance with our previous analysis in Sec. 3.3, more balanced positive-negative pairs are behind the improvement. In addition, a detected word is error-prone if it is too short. Removing words with fewer than 2 characters is therefore adopted, which yields a 3.8% improvement in precision without hurting recall. Tab. 2 lists the results of various state-of-the-art methods on the ICDAR13 dataset. Our model presents a competitive performance in this scenario. This demonstrates that the proposed CENet is capable of learning horizontal text lines. Note that WordSup adopted the horizontal nature of text directly when grouping characters into text lines, and the data-driven CENet achieves a similar performance without utilizing that strong prior. s, d are set to 0.4 and 0.45 in this dataset. Experiments on Scene Text Benchmarks We conduct experiments on the ICDAR15 dataset, comparing the results of the proposed CENet with other state-of-the-art methods. As shown in Tab. 3,
our single-scale CENet outperforms most of the existing approaches in terms of F-measure. This shows that character detection and character embedding together can handle most cases in regular text word detection. Our model learns both the character proposals and their grouping relationship, reducing wrongly-grouped and wrongly-split characters compared with word-based methods [41,10]. s, d are set to 0.35 and 0.38 in this dataset. Tab. 4 lists the results on the MSRA-TD500 dataset. Our model achieves the best result w.r.t. F-measure on this dataset. The dataset is multilingual and is a good test-bed for generalization. For our model, the basic unit is the character, which depends only on a local patch, and character embedding connects neighboring units by propagation. Therefore it escapes the large receptive field requirement of one-stage methods. s, d are set to 0.4 and 0.3 in this dataset. On the most challenging Total-Text dataset, the proposed method presents an overwhelming advantage over the other methods in comparison, as is shown in Tab. 4. The baseline comes from DeconvNet, which predicts a score map of text followed by connected component analysis. The VGG 50k dataset contains some curved text,

Table 4. Results of different methods on MSRA-TD500.
Method                                  Recall   Precision   F-measure
Zhang et al. [39]                       67       83          74
EAST [41]                               67.43    87.28       76.08
He et al. [10]                          70       77          74
PixelLink [4]                           83.0     73.2        77.8
CENet (VGG 50k + MSRA-TD500 finetune)   75.26    85.88       80.21

We visualize detection results of our model on the four benchmarks, as illustrated in Fig. 4. The results show our model can tackle text detection in various scenarios, especially on curved texts. Future Works Our model predicts rich information including text-level boundaries as well as character bounding boxes. In view of these advantages, we hope to incorporate the acquired detection information into the follow-up text recognition. For instance, we may use the predicted character positions to align the attention weights or to boost CTC-based recognition. Conclusion Observing the demerits of previous text detection methods, we present a novel scene text detection model. The model is more flexible in detecting text captured in unconstrained conditions, curved or severely distorted text in particular. It is completely data-driven in an end-to-end way and thus makes little use of heuristic rules or handcrafted features. It is also trained with two correlated tasks, i.e., character detection and character embedding, which is unprecedented. To train the network smoothly, we also propose several measures, i.e., a weak supervision mechanism for training the character detector and positive-negative pair reweighting, to facilitate training and boost performance. Extensive experiments on benchmarks show that the proposed framework achieves superior performance even when text is displayed in multi-oriented, line-level or curved ways.
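As an illustration of the weak-supervision rectification described in the training section above, the sketch below implements the coarse-char to fine-char update with the matching constraints of Equ. 2. Boxes are treated as axis-aligned (x1, y1, x2, y2) rectangles for brevity, the fine-char box is re-centred on the coarse-char box (an assumption, since the text does not spell this out), and the match is taken to be the highest-IoU candidate; all names are ours, not the authors'.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def rectify_char_boxes(coarse_boxes, pred_boxes, pred_scores,
                       horizontal=True, t1=0.2, t2=0.5):
    """Return fine-char boxes used as 'ground truth' for the detection loss (Equ. 1)."""
    fine = []
    for c in coarse_boxes:
        # candidate pred-char boxes satisfying the Equ. 2 constraints
        cands = [p for p, s in zip(pred_boxes, pred_scores) if s > t1 and iou(p, c) > t2]
        if not cands:
            fine.append(list(c))                 # no match: fall back to the coarse-char box
            continue
        p = max(cands, key=lambda b: iou(b, c))  # assumed: keep the highest-IoU match
        cx, cy = (c[0] + c[2]) / 2, (c[1] + c[3]) / 2
        if horizontal:                           # width from prediction, height from annotation
            w, h = p[2] - p[0], c[3] - c[1]
        else:                                    # more vertical text: the roles swap
            w, h = c[2] - c[0], p[3] - p[1]
        fine.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return fine
```

In the real pipeline the coarse-char boxes come from uniformly dividing the annotated quadrangle or polygon along its center line, so the rectangles above stand in for arbitrarily oriented segments.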
4,337
1901.00363
2907819829
Most text detection methods hypothesize texts are horizontal or multi-oriented and thus define quadrangles as the basic detection unit. However, text in the wild is usually perspectively distorted or curved, which can not be easily tackled by existing approaches. In this paper, we propose a deep character embedding network (CENet) which simultaneously predicts the bounding boxes of characters and their embedding vectors, thus making text detection a simple clustering task in the character embedding space. The proposed method does not require strong assumptions of forming a straight line on general text detection, which provides flexibility on arbitrarily curved or perspectively distorted text. For character detection task, a dense prediction subnetwork is designed to obtain the confidence score and bounding boxes of characters. For character embedding task, a subnet is trained with contrastive loss to project detected characters into embedding space. The two tasks share a backbone CNN from which the multi-scale feature maps are extracted. The final text regions can be easily achieved by a thresholding process on character confidence and embedding distance of character pairs. We evaluated our method on ICDAR13, ICDAR15, MSRA-TD500, and Total-Text. The proposed method achieves state-of-the-art or comparable performance on all these datasets, and shows substantial improvement in the irregular-text datasets, i.e. Total-Text.
The goal of metric learning or embedding methods @cite_3 @cite_18 @cite_4 is to learn a function that measures how similar two samples are. There are many successful applications of metric learning @cite_26 @cite_3 @cite_18 @cite_4 , such as ranking, image retrieval, face verification, speaker verification and so on. So far, applications of metric learning to document analysis or text reading have been limited to the problem of word spotting and verification @cite_39 @cite_7 @cite_30 . In this work, we verify the effectiveness of deep metric learning in the text detection task. Based on character candidates, we provide an end-to-end trainable network that outputs the character bounding boxes and their embedding vectors simultaneously. Text regions can then be easily detected by grouping characters whose embedding distances are small.
{ "abstract": [ "", "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.", "Relevant Component Analysis (RCA) has been proposed for learning distance metrics with contextual constraints for image retrieval. However, RCA has two important disadvantages. One is the lack of exploiting negative constraints which can also be informative, and the other is its incapability of capturing complex nonlinear relationships between data instances with the contextual information. In this paper, we propose two algorithms to overcome these two disadvantages, i.e., Discriminative Component Analysis (DCA) and Kernel DCA. Compared with other complicated methods for distance metric learning, our algorithms are rather simple to understand and very easy to solve. We evaluate the performance of our algorithms on image retrieval in which experimental results show that our algorithms are effective and promising in learning good quality distance metrics for image retrieval.", "The modern image search system requires semantic understanding of image, and a key yet under-addressed problem is to learn a good metric for measuring the similarity between images. While deep metric learning has yielded impressive performance gains by extracting high level abstractions from image data, a proper objective loss function becomes the central issue to boost the performance. In this paper, we propose a novel angular loss, which takes angle relationship into account, for learning better similarity metric. Whereas previous metric learning methods focus on optimizing the similarity (contrastive loss) or relative similarity (triplet loss) of image pairs, our proposed method aims at constraining the angle at the negative point of triplet triangles. Several favorable properties are observed when compared with conventional methods. First, scale invariance is introduced, improving the robustness of objective against feature variance. Second, a third-order geometric constraint is inherently imposed, capturing additional local structure of triplet triangles than contrastive loss or triplet loss. Third, better convergence has been demonstrated by experiments on three publicly available datasets.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. 
For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "We present a method for training a similarity metric from data. The method can be used for recognition or verification applications where the number of categories is very large and not known during training, and where the number of training samples for a single category is very small. The idea is to learn a function that maps input patterns into a target space such that the L sub 1 norm in the target space approximates the \"semantic\" distance in the input space. The method is applied to a face verification task. The learning process minimizes a discriminative loss function that drives the similarity metric to be small for pairs of faces from the same person, and large for pairs from different persons. The mapping from raw to the target space is a convolutional network whose architecture is designed for robustness to geometric distortions. The system is tested on the Purdue AR face database which has a very high degree of variability in the pose, lighting, expression, position, and artificial occlusions such as dark glasses and obscuring scarves.", "This paper addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding and attributes learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low dimensional, and is very fast to compute and, especially, to compare. We test our approach on four public datasets of both handwritten documents and natural images showing results comparable or better than the state-of-the-art on spotting and recognition tasks." ], "cite_N": [ "@cite_30", "@cite_18", "@cite_26", "@cite_4", "@cite_7", "@cite_3", "@cite_39" ], "mid": [ "", "2096733369", "2137736727", "2963988212", "2613718673", "2157364932", "2053317383" ] }
Detecting Text in the Wild with Deep Character Embedding Network
Optical Character Recognition (OCR) is a long-standing problem that attracts the interest of many researchers with its recent focus on scene text. It enables computers to extract text from images, which facilitates various applications, such as scene text translation, scanned document reading, etc. As the first step of OCR, the flexibility and robustness of text detection significantly affect the overall performance of OCR system. The goal for text detection algorithms is to generate bounding boundaries of text units as tight as possible. these authors contribute equally in this work. arXiv:1901.00363v1 [cs.CV] 2 Jan 2019 When dealing with different kinds of text, different text unit should be defined in advance. When detecting text in Latin, the text unit is usually "word"; while if in Asian language, it is "text line" instead. Words or lines have a strong prior by their nature. The characters in them tend to usually cluster as straight lines. Therefore, it is natural to define rectangles or quadrangles that wrap text as the objective of detection. This prior has been widely used in many text detection works and achieved promising results [41,31,12,32,24,18,17,5,25]. However, when text appears in the wild, it often suffers from severe deformation and distortion. Even worse, some text are curved as designed. In such scenario, this strong prior does not hold. Fig. 1 shows curved text with quadrangle bounding boxes and curved tight bounding boundaries. It can be easily observed the quadrangle bounding box inevitably contains extra background, making it more ambiguous than curved polygon boundaries. We realized that if characters can be detected and a flexible way to group them into text can be found, tight bounding boundaries will be easily generated with the boundary of characters. Characters are also fundamental elements of text, this idea can be naturally extended to irregular text. In early attempts [31,36,37], scholars turned to use a heuristic clustering method with hand-crafted features to link detected character parts into text lines. The non data-driven heuristic clustering methods are fragile, requiring a thorough check on corner cases manually. Also, the hand-crafted features ignore large parts of visual context information of text, making it less discriminative to determine the closeness between characters. Thereby, we propose a Character Embedding Network (CENet) in a fully data-driven way. The model detects characters as the first step. After characters being detected, they are projected into an embedding space by the same model where characters belonging to the same text unit are close to each other, and characters belonging to different text units are far from each other. During the training stage, the network is jointly trained with a character detection loss and a character embedding loss. During the inference stage, a single forward pass could produce character candidates as well as their representation in the embedding space. A simple distance thresholding is then applied to determine connected character pairs. Connected character pairs further form text groups by chaining the characters together. After the connection relationships are properly learned, the text units could be detected regardless of text length or distortion the text suffers. To the best of our knowledge, the proposed CENet is the first to model text grouping problem as a character embedding learning problem. 
It does not rely on strong priors, making it capable of detecting arbitrarily curved or distorted text. Moreover, since both character detection and character grouping tasks are based on local patch of images, our model could be directly expand from "word" detection to "line" detection without modifying the backbone model for larger receptive field. Our model also avoids complicated heuristic grouping rules or hand-crafted features. At last, our single model performs two tasks with a single forward pass, only adding minimal overhead over character detection network. The contributions of this paper are three-fold: -We propose a multi-task network to detect arbitrarily curved text in the wild. The character detection subnet is trained to detect character proposals, and the character embedding subnet learns a way to project characters into embedding space. Complicated post-processing steps, e.g. character grouping and word partition, are then be simplified as a simple distance thresholding step in the embedding space. -We adopt a weakly supervised method to train character detector with only word-level polygon annotations, without the strong hypothesis that text should appear in a straight line. -We conduct extensive experiments on several benchmarks to detect horizontal words, multi-oriented words, multi-oriented lines and curved words, demonstrating the superior performance of of our method over the existing methods. Method There are two tasks that our model is supposed to solve. One is to detect characters and the other is to project characters into an embedding space where characters belonging to the same group are close, and characters belonging to different groups are far from each other. Sharing a backbone CNN, the two tasks are implemented by separate subnets, i.e., a character detection subnet and a character embedding subnet. To put it another way, our framework is a single backbone network with two output heads. With the calculated character candidates and their corresponding embedding vectors, the post processing removes false positive and groups characters in an efficient and effective manner. Network design We use ResNet-50 [7] as the backbone network of our model. Following recent network design practices [31,19,12], we concatenate semantic features from three different layers of the backbone ResNet-50 network. After deconvolutional operations, the features are concatenated as shared feature maps which are 1/4 of the original image in size. A character detection subnet and a character embedding subnet are stacked on top of the shared feature maps. The character detection subnet is a convolutional network that produces 5 channels as the final output. The channels are offsets ∆x tl , ∆y tl , ∆x br , ∆y br and confidence score, where tl means top left and br means bottom right. The top left and bottom right bounding box coordinates of detected character candidates could be calculated by (x − ∆x tl , y − ∆y tl ) and (x + ∆x br , y + ∆y br ), where x and y are coordinates of pixel whose confidence score greater than a threshold s. The bounding boxes further serve as RoIs of characters. The character embedding subnet takes the residual convolution unit (RCU) as the basic blocks which is simplified residual block without batch normalization. 
The design was inspired by [31], where the authors showed that the scores and bounding box sizes of character proposals offer strong clues on whether they belong to the same group, and that the feature maps extracted by the backbone network contain such information. Therefore, residual units were chosen to preserve score and bounding box information from the feature maps, directly passing them to the top layers by skip connections. On top of the RCU blocks, we employ a 1×1 convolution layer with a linear activation function to output a 128-channel final embedding map. RoI pooling with a 1 × 1 kernel is applied on the embedding maps to extract embedding vectors for each character. During inference, we extract the confidence map, offset maps and embedding maps from the two heads of the model. After thresholding on the score map and performing NMS on character proposals, the embedding vectors are extracted by 1×1 RoI pooling on the embedding map. In the end, we output character candidates in the format of {score, coordinates (x, y) of the character center, width, height, 128D embedding vector}. Characters are finally clustered into text blocks as the last post-processing step. The overall structure of the model and pipeline are shown in Fig. 2. Training character detector Loss definition The character detector consists of two tasks, text/non-text classification and box regression. The loss can be formulated as $L_{char} = L_{cls} + \lambda_1 L_{reg}$, (1) where $L_{cls}$ denotes the binary classification loss, $L_{reg}$ represents the box regression loss, and $\lambda_1$ is a factor to balance the two losses. In this paper, we use a pixel-wise hinge loss as the classification cost. Some measures for class balance or boosting (e.g., OHEM [29]) are adopted in our experiments. Usually, we set a sampling ratio of 1:3 to balance the positive and negative samples, where 30% of the negative samples are selected from the hardest examples in a training batch. Here, IoU loss [38] is adopted as the regression cost instead of L2 loss, as it handles the bias in bounding box accuracy between large and small objects. Learning character detector from coarse annotation Since it is labor-intensive to annotate character-level boxes, most public benchmarks like ICDAR15 [15] and Total-Text [2] provide only quadrangle or polygon annotations for words, and MSRA-TD500 provides annotations for sentences. Those annotations are all coarse annotations. Inspired by WordSup [12], which recursively rectifies character-level supervision and updates the model parameters with the rectified supervision, a new rectification rule is designed for producing character-level supervision. This rule is capable of training the character detector from bounding boundary annotations in polygon format, where WordSup may fail. Our design follows the general observation that the short side of a nearly horizontal (or vertical) text is approximately equal to the height (or width) of the characters in it. The short side can be used to rectify the imprecisely predicted characters with the following pipeline. Firstly, each annotated quadrangle or polygon is uniformly divided into N bounding boxes along the center line, where N denotes the number of characters in the text transcript. We call these preliminary bounding box segments coarse-char boxes. After one forward pass, some candidate character boxes (namely pred-char boxes) with high confidence are collected.
Finer character boxes (namely fine-char boxes) are produced from the coarse-char boxes and their corresponding matched pred-char boxes. If no matched pred-char box is found, the coarse-char box is used as the fine-char box directly. Otherwise, if the annotated text is more horizontal, the width of the fine-char box is set to the width of the pred-char box and the height to the height of the coarse-char box; if more vertical, the width is the width of the coarse-char box and the height is the height of the pred-char box. The obtained fine-char boxes are used as "ground truth" in Equ. 1 to update the model. The matched pred-char box p of a coarse-char box c should meet the following constraints: S(p) > t_1 and IoU(p, c) > t_2, (2) where S(p) denotes the confidence score of the pred-char box p and IoU(p, c) is the Intersection over Union between the pred-char box and the coarse-char box. t_1 and t_2 are set to 0.2 and 0.5 in our experiments. A visualization of the rectification procedure is shown in Fig. 3. Learning character embedding The character embedding subnet is another crucial part of our model. Ideally, the subnet projects characters into an embedding space in which distances between characters of the same text unit are small and distances between characters of different units are large. We can then group characters into text blocks by clustering in the embedding space. This resembles the objective of metric learning, which aims to learn a distance function that measures similarities between samples. Inspired by previous work in metric learning, we select the most straightforward contrastive loss to train our model. The contrastive loss takes pairs of characters into calculation. Let i and j denote the indices of the character candidates in a pair, v_i and v_j their embedding vectors extracted by the embedding subnet, and l_ij whether they belong to the same text unit. If they do, we call (i, j) a positive pair and set l_ij = 1; otherwise, (i, j) is a negative pair and l_ij = 0. The contrastive loss is defined as J(i, j) = l_ij · D(v_i, v_j)^2 + (1 − l_ij) · max(0, 1 − D(v_i, v_j))^2, (3) where D denotes the distance measure. In training, v_i and v_j are pulled close to each other if l_ij = 1; if l_ij = 0, they are pushed away from each other until D(v_i, v_j) > 1. Constructing Local Character Pairs It is worth noting that, by any definition of text, characters in the same text unit are naturally close in the image. Two small characters that are far apart in the image are unlikely to come from the same text unit; however, if they are the endpoints of a line of characters, the probability that they belong to the same text line increases significantly. The key difference is whether there are closely scattered characters, namely local character pairs, that connect individual characters within one text unit. In addition, it is unnecessary to train the model with all possible character pairs: if all local character pairs are correctly labeled, all text units can be correctly detected. Working with local character pairs also reduces the requirement of a large receptive field when detecting long text. In this work, we employ k nearest neighbors with radius (r-KNN) to incorporate such information. When producing possible pairs, each character is selected as an anchor in turn.
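As a concrete reading of Equ. 3, the sketch below computes the contrastive loss for a single pair of embedding vectors. Using the Euclidean norm for the distance D is an assumption made for illustration, since the paper does not fix the concrete distance measure; the margin of 1 follows the equation above.

```python
import numpy as np

def contrastive_loss(v_i, v_j, same_unit):
    """Pairwise contrastive loss of Equ. 3.

    v_i, v_j: 128-D embedding vectors of two character candidates.
    same_unit: 1 if the pair belongs to the same text unit (positive pair),
    0 otherwise (negative pair). Euclidean distance stands in for D here.
    """
    d = np.linalg.norm(np.asarray(v_i) - np.asarray(v_j))
    return same_unit * d ** 2 + (1 - same_unit) * max(0.0, 1.0 - d) ** 2
```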
With an anchor selected, at most k characters closest to the anchor in the image are taken to form pairs. Another useful heuristic is that a character is more likely to be connected to characters of similar box size; therefore, only characters within a radius are kept. To formalize this empirical pair sampling method, we define c_i, w_i, and h_i as the center coordinates, width, and height of character i in the image, respectively, and let KNN(i) be the set of the k nearest neighbors of character i in the image. Then j ∈ r-KNN(i) means that j ∈ KNN(i) and the spatial distance D(c_i, c_j) < β · sqrt(w_i^2 + h_i^2). Both k and β are set to 5 in our experiments. When j ∈ r-KNN(i), we say that i and j form a locally connected pair. We define the set of all locally connected pairs as LCP = {(i, j) | i ∈ {1, ..., M}, j ∈ r-KNN(i)}, where M is the total number of character candidates in one image. With r-KNN preprocessing, only O(kM) locally connected pairs remain, reducing the number of character pairs to a reasonable level. We noticed that the positive pairs are redundant. The minimum requisite for error-free positive pairs is that at least one chain connects all characters in a text unit. Positive pairs with large embedding distances do not contribute any text-level error as long as this minimum requisite is satisfied. However, a negative pair with a small embedding distance will certainly mis-connect two text units and generate a text-level error. Meanwhile, we found that about 3/4 of the local character pairs are positive. According to the above analysis, we assume the negative pairs should be weighted more than the positive pairs in training. Therefore, we sample R pairs from the LCP of the images in a batch such that a fraction α of the pairs in the batch are negative. Denoting the sampled pair set as SP, the final re-weighted loss for learning the embedding is defined in Equ. 4; we found that R = 1024 and α = 60% work well in our experiments. L_emb = (1/R) · Σ_{(i,j) ∈ SP} J(i, j). (4) The loss function to train the whole network then becomes L = L_cls + λ_1 L_reg + λ_2 L_emb, (5) where λ_1 and λ_2 control the balance among the losses. We set both λ_1 and λ_2 to 1 in our experiments. Post-processing In testing, we employ two threshold values (s and d) to filter false character candidates and group characters into text units. After a forward pass, the proposed model provides a set of character candidates and their corresponding embedding vectors. The character candidates with confidence scores greater than s are kept. Next, r-KNN is performed on each character, outputting the local character pairs in the whole image. To address the character grouping problem, we simply cut the connected pairs whose embedding distance exceeds d. Following the steps above, we can quickly find characters of the same group. The final step is to represent the character groups in a suitable way. In this paper, we adopt the piecewise linear method used in WordSup [12] to format the boundaries of character groups. This method provides various configurable boundary formats, which meet the requirements of different benchmarks. On ICDAR15, a filtering strategy that removes short words with fewer than two detected characters is applied; this further removes false alarms from the detection results. Experiments We conduct experiments on ICDAR13, ICDAR15, MSRA-TD500, and Total-Text datasets, to explore how the proposed approach performs in different scenarios.
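Before turning to the experiments, the following sketch illustrates the r-KNN pair construction and the embedding-distance grouping of the post-processing step described above. The brute-force neighbor search, the union-find grouping and all helper names are illustrative assumptions, not the authors' code; the defaults follow the paper (k = β = 5).

```python
import numpy as np

def rknn_pairs(centers, sizes, k=5, beta=5.0):
    """Local character pairs via r-KNN.

    centers: (M, 2) array of character centers; sizes: (M, 2) array of (w, h).
    For each anchor i, keep at most k nearest characters j whose spatial
    distance is below beta * sqrt(w_i^2 + h_i^2).
    """
    M = len(centers)
    pairs = set()
    for i in range(M):
        dists = np.linalg.norm(centers - centers[i], axis=1)
        dists[i] = np.inf                              # exclude the anchor itself
        radius = beta * np.hypot(*sizes[i])
        for j in np.argsort(dists)[:k]:
            if dists[j] < radius:
                pairs.add((i, int(j)))
    return pairs

def group_characters(pairs, embeddings, dist_thresh):
    """Post-processing: cut pairs whose embedding distance exceeds dist_thresh,
    then connected components (union-find) give the text units."""
    parent = list(range(len(embeddings)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in pairs:
        if np.linalg.norm(embeddings[i] - embeddings[j]) <= dist_thresh:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(embeddings)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```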
The four chosen datasets focus on horizontal text, multi-oriented text, sentence-level long text, and curved text, respectively. Experiments on synthetic data are also conducted for structural search and pretraining. We also list recent state-of-the-art methods for comparison. Datasets and Evaluation Five datasets are used in the experiments:
- VGG 50k. The VGG SynthText dataset [6] consists of 800,000 images in which synthesized text is rendered on various background images. The dataset provides detailed character-level, word-level and line-level annotations. For experimental efficiency, we randomly select 50,000 images for training and 500 images for validation. This subset is referred to as VGG 50k.
- ICDAR13. The ICDAR13 dataset [16] is from the ICDAR 2013 Robust Reading Competition. The texts are well focused and horizontally oriented. Annotations of both character-level and word-level bounding boxes are provided. There are 229 training images and 233 testing images.
- ICDAR15. The ICDAR15 dataset [15] is from the ICDAR 2015 Robust Reading Competition. The images are captured in an incidental way with Google Glass, and only word-level quadrangle annotations are provided. There are 1000 natural images for training and 500 for testing. Experiments on this dataset show our method's performance on word-level Latin text detection.
- MSRA-TD500. The MSRA-TD500 dataset [35] comprises 300 training images and 200 test images. Text regions are arbitrarily oriented and annotated at sentence level. Different from the other datasets, it contains both English and Chinese text. We test our method on this dataset to show its scalability across different languages and detection levels (line level in this dataset).
- Total-Text. The Total-Text dataset [2] was recently released at ICDAR 2017. Unlike the ICDAR datasets, it contains plenty of curved text in addition to horizontal and multi-oriented text. There are 1255 images in the training set and 300 images in the test set. Two kinds of annotations are provided: word-level polygon regions that bound ground-truth words tightly, and word-level rectangular bounding boxes as provided by the other datasets. Since many of the words in this dataset are curved or distorted, it is adopted to validate the generalization ability of our method on irregular text detection tasks.
Implementation details Since the training samples in the available benchmark datasets are not abundant, we use VGG 50k data to pretrain a base model and then finetune it on the other benchmark datasets. Two models are trained with the word-level and line-level annotations of the VGG 50k data, respectively. The backbone ResNet-50 model is first pretrained on ImageNet. The models are then trained on VGG 50k for character detection and further finetuned with both the character detection and the character embedding loss. The converged models are used as pretrained models for training on the other benchmarks. We did not adopt any further data augmentation when training on VGG 50k data. For the remaining benchmark datasets, we perform multi-scale data augmentation by resizing each image to [0.65, 0.75, 1, 1.2] of its original scale and cropping with a sliding window of size 512 × 512 and stride 256 to generate training images.
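A minimal sketch of this multi-scale sliding-window augmentation is given below. It only enumerates the crop windows per scale; the actual resizing/cropping backend and the handling of images smaller than the window are left out and would be assumed details, not the authors' pipeline.

```python
def training_crops(img_h, img_w, scales=(0.65, 0.75, 1.0, 1.2),
                   crop=512, stride=256):
    """Enumerate (scale, x, y) crop windows used for data augmentation.

    Each image is resized to every scale and covered with crop x crop
    windows at the given stride.
    """
    windows = []
    for s in scales:
        h, w = int(img_h * s), int(img_w * s)
        for y in range(0, max(h - crop, 0) + 1, stride):
            for x in range(0, max(w - crop, 0) + 1, stride):
                windows.append((s, x, y))
    return windows
```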
During training, we randomly rotate the cropped images by 90°, 180° or 270°, and distort brightness, contrast and color on all three benchmark datasets. When training with data that lacks character-level annotation, the supervision for character detection comes from the weak supervision mechanism described above. The boxes used to train the character embedding are the same coarse-char boxes used for character detection. We found that a "mixing batch" trick helps: half of the mixing batch is sampled from benchmark data, and the other half from VGG 50k, which provides character-level annotation. Character supervision for the VGG 50k data comes from its character annotations. The optimizer is SGD with momentum in all model training. We train the models for 50K iterations at a learning rate of 0.001, 30K iterations at 0.0001, and 20K iterations at 0.00001. The momentum is set to 0.9 in all experiments. The two thresholds for post-processing, i.e., s and d, are tuned by grid search on the training set. All experiments are carried out on a shared server with an NVIDIA Tesla P40 GPU. Training a batch takes about 2s. Inference is done on the original images. The average inference time is 276 ms per image of size 768 × 1280; the forward pass, r-KNN search, NMS, and other operations cost 181 ms, 21 ms, 51 ms and 23 ms, respectively. Ablation Study As shown in Tab. 1, ablation experiments have been done on the ICDAR15 dataset. Three key components of our pipeline are evaluated: the mixing batch trick used in weak supervision, the positive-negative pair reweighting strategy, and the short word removal strategy are added progressively to show their impact on overall performance. Without bells and whistles, the model trained merely with weak character supervision and local character pairs converges successfully but gives mediocre results (73% in recall); the character detection subnet was more likely overfitted to text components instead of characters. With the "mixing batch" trick, word recall improves strikingly by about 4% at similar precision. This finding may imply that the trick, as a regularization measure, prevents the weak character supervision from prevailing. In other words, weak character supervision tends to result in a certain amount of "soft" ground truth, while the precise character supervision can pull the trained model to its correct position. If we further add the positive-negative pair reweighting trick to the character embedding, both precision and recall increase by 2%. In accordance with our previous analysis in Sec. 3.3, more balanced positive-negative pairs are behind this improvement. In addition, a detected word is error-prone if it is too short. Removing words with fewer than 2 detected characters yields a 3.8% improvement in precision without hurting recall. Tab. 2 lists the results of various state-of-the-art methods on the ICDAR13 dataset. Our model shows competitive performance in this scenario. The results demonstrate that the proposed CENet is capable of learning horizontal text lines. Note that WordSup exploits the horizontal nature of text directly when grouping characters into text lines, while the data-driven CENet achieves similar performance without utilizing that strong prior. s, d are set to 0.4 and 0.45 on this dataset. Experiments on Scene Text Benchmarks We conduct experiments on the ICDAR15 dataset, comparing the results of the proposed CENet with other state-of-the-art methods. As shown in Tab.
3, our single-scale CENet outperforms most of the existing approaches in terms of F-measure. This shows that character detection and character embedding together can handle most cases of regular word detection. Our model learns both the character proposals and their grouping relationships, reducing wrongly-grouped and wrongly-split characters compared with word-based methods [41,10]. s, d are set to 0.35 and 0.38 on this dataset. Tab. 4 lists the results on the MSRA-TD500 dataset, where our model achieves the best F-measure. This dataset is multilingual and is a good test-bed for generalization. In our model, the basic unit is the character, which depends only on a local patch, and the character embedding connects neighboring units by propagation; it therefore escapes the large receptive field requirement of one-stage methods. s, d are set to 0.4 and 0.3 on this dataset. On the most challenging Total-Text dataset, the proposed method presents an overwhelming advantage over the other methods in comparison, as shown in Tab. 4. The baseline comes from DeconvNet, which predicts a text score map followed by connected component analysis. The VGG 50k dataset contains some curved text.
Table 4. Results of different methods on MSRA-TD500.
Method | Recall | Precision | F-measure
Zhang et al. [39] | 67 | 83 | 74
EAST [41] | 67.43 | 87.28 | 76.08
He et al. [10] | 70 | 77 | 74
PixelLink [4] | 83.0 | 73.2 | 77.8
CENet (VGG 50k + MSRA-TD500 finetune) | 75.26 | 85.88 | 80.21
We visualize detection results of our model on the four benchmarks in Fig. 4. The results show that our model can tackle text detection in various scenarios, especially for curved text. Future Work Our model predicts rich information, including text-level boundaries as well as character bounding boxes. In view of these advantages, we hope to incorporate the acquired detection information into follow-up text recognition; for instance, the predicted character positions may be used to align attention weights or to boost CTC-based recognition. Conclusion Observing the demerits of previous text detection methods, we present a novel scene text detection model. The model is more flexible for detecting text captured in unconstrained conditions, curved or severely distorted text in particular. It is completely data-driven in an end-to-end way and thus makes little use of heuristic rules or hand-crafted features. It is also trained with two correlated tasks, i.e., character detection and character embedding, which is unprecedented. To train the network smoothly, we also propose several measures, i.e., a weak supervision mechanism for training the character detector and positive-negative pair reweighting, to facilitate training and boost performance. Extensive experiments on benchmarks show that the proposed framework achieves superior performance even when text appears in multi-oriented, line-level or curved form.
1901.00306
2949972480
Recommender systems have become important tools to support users in identifying relevant content in an overloaded information space. To ease the development of recommender systems, a number of recommender frameworks have been proposed that serve a wide range of application domains. Our TagRec framework is one of the few examples of an open-source framework tailored towards developing and evaluating tag-based recommender systems. In this paper, we present the current, updated state of TagRec, and we summarize and reflect on four use cases that have been implemented with TagRec: (i) tag recommendations, (ii) resource recommendations, (iii) recommendation evaluation, and (iv) hashtag recommendations. To date, TagRec has served the development and/or evaluation process of tag-based recommender systems in two large-scale European research projects, which have been described in 17 research papers. Thus, we believe that this work is of interest for both researchers and practitioners of tag-based recommender systems.
A considerable contribution to this area is LibRec (http://wiki.librec.net/doku.php), a Java-based library that, so far, comprises around 70 resource recommendation algorithms and evaluation modules @cite_6 . Another Java-based, open-source framework is RankSys (http://ranksys.org), which focuses on the evaluation of ranking problems and supports the investigation of novelty as well as diversity for academic research @cite_29 ; this focus is reflected in its design (e.g., data input interfaces work with triples of user, item and features).
{ "abstract": [ "Novelty and diversity have been identified, along with accuracy, as foremost properties of useful recommendations. Considerable progress has been made in the field in terms of the definition of methods to enhance such properties, as well as methodologies and metrics to assess how well such methods work. In this chapter we give an overview of the main contributions to this area in the field of recommender systems, and seek to relate them together in a unified view, analyzing the common elements underneath the different forms under which novelty and diversity have been addressed, and identifying connections to closely related work on diversity in other fields.", "The large array of recommendation algorithms proposed over the years brings a challenge in reproducing and comparing their performance. This paper introduces an open-source Java library that implements a suite of state-of-the-art algorithms as well as a series of evaluation metrics. We empirically find that LibRec performs faster than other such libraries, while achieving competitive evaluative performance." ], "cite_N": [ "@cite_29", "@cite_6" ], "mid": [ "2213191543", "2405923393" ] }
The TagRec Framework as a Toolkit for the Development of Tag-Based Recommender Systems
Recommender systems aim to predict the probability that a specific user will like a specific resource. Therefore, recommender systems utilize past user behavior (e.g., resources previously consumed by this user) in order to generate a personalized list of potentially relevant resources [32]. Popular application domains of recommender systems include online marketplaces (e.g., Amazon and Zalando), movie and music streaming services (e.g., Netflix and Spotify), job portals (e.g., LinkedIn and Xing), and social tagging systems (e.g., BibSonomy and CiteULike). Social tagging systems bear particularly great potential for recommender systems as, by nature, they produce a vast amount of user-generated resource annotations (i.e., tags). Thus, possible use cases of these tag-based recommender systems include the suggestion of resources to extend a user's set of bookmarks [4,38] and the suggestion of tags to assist in the annotation of these bookmarks. The latter is known as the field of tag recommendations [13]. Over the past years, various recommendation frameworks and libraries have been developed in order to support the development and evaluation of recommender systems (see Section 4). While these frameworks cover a wide range of application domains, to the best of our knowledge, an open-source recommendation framework to design and evaluate tag-based recommender systems was still lacking. Therefore, in 2014, we started developing TagRec, a standardized tag recommender benchmarking framework [19]. In the initial development phase of the framework, we mainly focused on evaluating tag recommendation algorithms. In 2015, the framework was extended with resource recommendation algorithms that are based on social tagging data [36]. The aim of this paper, however, is to present the current, updated state of TagRec. This includes the extension of the framework for (i) the analysis of tag reuse practices [21], (ii) the evaluation of tag recommendations in real-world folksonomy and Technology Enhanced Learning settings [16,20], and (iii) hashtag recommendations in Twitter [22]. Apart from that, we provide an updated framework description (see Section 2) as well as a summary of use cases in the field of recommender research that have been completed using TagRec (see Section 3). Research areas encompass tag recommendations, resource recommendations, recommendation evaluation and hashtag recommendations. To date, TagRec has served the recommender development and/or evaluation processes in two large-scale European research projects, and the results have been published in 17 research papers. We conclude the paper with a discussion of future work and potential improvements of the framework (see Section 5). We believe that our work contributes to the rich portfolio of technical frameworks in the area of recommender systems.
Furthermore, this paper presents an overview of use cases which can be realized with TagRec, and should be of interest for both researchers and developers of tag-based recommender systems. TAGREC TagRec is a Java-based recommendation framework for tag-based information retrieval settings. It is open-source software and freely available via our GitHub repository. The GitHub page also contains a detailed technical description of how to use the framework. Figure 1 illustrates TagRec's system architecture. The framework consists of (i) a data processing component, which processes data sources, (ii) a data model and analytics component, which enables access to the processed data, (iii) recommendation algorithms, which calculate recommendations, (iv) an evaluation engine, which evaluates the algorithms, and (v) recommendation results, which can be passed to a client application.
Figure 1: System architecture of TagRec. Here, the data processing component processes data sources in order to create a data model and data analytics. Then, this data model is used by recommendation algorithms to create recommendation results that are either forwarded to an evaluation engine or to a client application.
The mentioned components are described in more detail in the remainder of this section. Apart from that, we describe practical aspects of the framework that should be helpful when implementing and/or evaluating a recommendation algorithm. Finally, Table 1 provides an overview of the supported datasets, recommendation algorithms and evaluation metrics. Data Processing. The data processing component is responsible for parsing and processing external data sources. Currently supported datasets are listed in Table 1. These datasets serve a wide range of application domains such as social bookmarking systems, learning environments, microblogging tools and music/movie sharing portals. The set of datasets can easily be extended by implementing custom data pre-processing strategies. Furthermore, this component supports various data enrichment and transformation methods such as p-core pruning [8], topic modeling [25], training/test set splitting [20] and data conversion into related formats (e.g., for MyMediaLite [9]). Data Model and Analytics. The data model is created based on the described data processing steps and provides an object-oriented representation of the data in order to ease the implementation of a novel recommendation algorithm. Thus, it enables easy access to the entities in the datasets via powerful query functionality (e.g., get the set of tags a user has used in the past). Furthermore, the data model of TagRec is connected to Apache Solr and thus enables fast access to content-based data of entities. Another role of this component is the provision of basic data analytics functionality to get a better understanding of the dataset characteristics. For example, dataset statistics, such as the total number of distinct tags or the average number of bookmarks per user, can be retrieved. Recommendation Algorithms. TagRec contains a wide range of recommendation algorithms (see Table 1). As later described in Section 3, algorithms for tag recommendations, resource recommendations and hashtag recommendations are provided. These algorithms can be used as baseline approaches for a newly implemented algorithm. The complete list of all variants of these algorithms is provided on TagRec's GitHub page.
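To illustrate the kind of data model query and baseline algorithm mentioned above, the sketch below shows a toy folksonomy of (user, resource, tag, timestamp) assignments together with a MostPopular-style baseline. This is deliberately generic: it is not TagRec's actual Java API, and all class and method names are assumptions made for illustration only.

```python
from collections import Counter, defaultdict

class Folksonomy:
    """Toy folksonomy data model holding (user, resource, tag, timestamp) assignments.

    Illustrates the kind of query functionality described above
    (e.g. "get the set of tags a user has used in the past").
    """
    def __init__(self):
        self.assignments = []                    # (user, resource, tag, time)
        self.by_user = defaultdict(Counter)      # user -> tag usage counts

    def add(self, user, resource, tag, time):
        self.assignments.append((user, resource, tag, time))
        self.by_user[user][tag] += 1

    def tags_of_user(self, user):
        return set(self.by_user[user])

def most_popular_tags(folksonomy, user, n=10):
    """A MostPopular-style baseline: rank the user's own tags by usage frequency."""
    return [tag for tag, _ in folksonomy.by_user[user].most_common(n)]
```

In TagRec itself, the corresponding functionality is provided by the data model and analytics component and by the implemented baseline algorithms described next.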
A key contribution of TagRec is that, next to well-established approaches such as Collaborative Filtering, MostPopular and Factorization Machines, it also encompasses approaches based upon cognitive models of information retrieval, human memory theory and category learning. In [20], it has been shown that these cognitive-inspired approaches achieve high prediction accuracy in comparison to classic recommendation algorithms. Besides, the algorithms have demonstrated their suitability for sparse datasets such as narrow folksonomies. Evaluation Engine. The evaluation engine quantifies the quality of the implemented recommendation strategies by applying a rich set of evaluation metrics, as listed in Table 1. One drawback of most recommendation evaluation frameworks is their focus on accuracy and ranking estimates, which restricts the evaluation to the performance of recommender systems [2]. To fill this gap, TagRec supports a variety of evaluation metrics to also offer indicators for diversity, novelty, runtime performance and memory consumption of algorithms. For evaluating an algorithm, TagRec has to be provided with three parameters: the first one specifies the algorithm, the second one the dataset directory, and the third one the file name of the dataset sample. For example, java -jar tagrec.jar cf bib bib_sample runs Collaborative Filtering on a sample of the BibSonomy dataset. The calculated metrics are then either written to a "metrics" file or printed to the console. Recommendation Results. As indicated in Figure 1, the algorithms' recommendation results can either be forwarded to the evaluation engine to retrieve evaluation metrics or to a client application for further processing (e.g., visualization). The KnowBrain tool [7] is an example of such a client application. It is an open-source social bookmarking tool, which has been extended to cater to the requirements of tag recommender evaluations in online settings. A screenshot of KnowBrain's graphical user interface is shown in Figure 2. It enables the bookmarking of Web links and their annotation by (i) selecting from a pre-defined set of categories, and (ii) assigning a variable number of tags. The user's tagging process is supported by a list of recommended tags that are selected based on algorithms of the TagRec framework. The elicitation of categories allows for semantic context-based recommendation algorithms such as 3Layers [35]. Furthermore, the comparison of actually used tags with recommended tags gives insights into the online performance (i.e., user acceptance) of recommendation strategies.
Table 2: Use cases realized with TagRec and the corresponding research papers.
Tag recommendations | Model of human categorization [17,23,35]; Activation processes in human memory [18,21,24,37]; Informal learning settings [5][6][7]
Resource recommendations | Attention-interpretation dynamics [15,34]; Tag and time information [27,28]
Recommendation evaluation | Real-world folksonomies [20]; Technology enhanced learning settings [16]
Hashtag recommendations | Temporal effects on hashtag reuse [22]
USE CASES In this section, we describe use cases that have been implemented using the TagRec framework. To date, TagRec has supported the recommender development and/or evaluation processes in two large-scale European research projects. Results have been published in 17 research papers (see Table 2). Tag Recommendations Tag recommendation systems assist users in finding descriptive tags to annotate resources.
In other words, given a specific user and a specific resource, a tag recommendation algorithm predicts a set of tags the user is likely to apply when annotating the resource [13]. Within this context, TagRec was used for (i) the creation of cognitive-inspired algorithms, and (ii) the evaluation of approaches suitable for formal and informal learning settings. Tag Recommendations Using a Model of Human Categorization. In [17,23,35], the authors introduced a tag recommendation algorithm based on the human categorization models ALCOVE [26] and MINERVA2 [12]. This algorithm is called 3Layers and simulates categorization processes in human memory. Therefore, the categories assigned to a given resource, which a user is going to annotate, are matched against already annotated resources of this user. Based on this matchmaking process, a set of tags associated with semantically related resources is recommended. Since TagRec enables linking a list of categories to a resource, it supported the development of 3Layers by providing functions for analyzing and deriving category information of resources (e.g., via LDA topic modeling [25]). Utilizing Activation Processes in Human Memory. Activation processes in human memory describe the general and context-dependent usefulness of information. It was shown that these processes (especially usage frequency, recency and semantic context) greatly influence the reuse probability of tags [21]. Based on this, a set of time-aware tag recommendation approaches (see [18,24,37]) was developed that utilize the activation equation of the cognitive architecture ACT-R [1]. Therefore, TagRec was used to analyze the timestamps of tag assignments and to calculate tag co-occurrences for reflecting the semantic context of social tagging. Furthermore, TagRec enabled the hybrid combination of the components of the model (e.g., combining time-aware and context-aware recommendations). Tag Recommendations in Informal Learning Settings. In the course of the European-funded project Learning Layers, which aims at supporting informal learning at the workplace, tag recommendations were used to support the individual user in finding descriptive tags and the collective in consolidating a shared tag vocabulary. These tag recommendations were used in two tools: (i) the Dropbox-like environment KnowBrain [7], and (ii) the sensemaking interface Bits & Pieces [6]. To achieve this, TagRec was integrated as a tag recommendation library into the Social Semantic Server [5], which was used as the technical back-end for KnowBrain and Bits & Pieces. This shows that TagRec can be used not only as a standalone tool but also as a programming library (or toolkit) to include recommendation functionality in existing software. A similar approach will be followed in another European-funded project called AFEL, which engages in the design and development of analytics for everyday learning. Resource Recommendations Resource recommender systems suggest potentially relevant web items (e.g., movies, books, learning resources, URLs, etc.) to users. Most of these recommender systems are based on Collaborative Filtering (CF) techniques, which aim to calculate similarities between users to suggest the most suitable web resources to them [33]. TagRec was applied to support and improve the development of CF approaches in tag-based online environments. Mimicking Attention-Interpretation Dynamics. Seitlinger et al.
[34] introduced the first version of a CF-based recommendation approach that takes into consideration non-linear user-artifact dynamics, modeled by means of SUSTAIN. SUSTAIN (Supervised and Unsupervised STratified Adaptive Incremental Network) is a flexible network model of human category learning that is thoroughly discussed in [29]. It assumes that learning is a dynamic process that takes place through the encounter of new examples (e.g., Web resources). Throughout the learning trajectory, categories emerge and learners' attention foci shift. In [15], an advanced, adapted version of the initial approach was presented and analyzed in detail. The resulting approach SUSTAIN+CF first applies CF to calculate the most suitable resources for a user, and then re-ranks this list depending on a user's category learning model (i.e., SUSTAIN's user model). The algorithm has been implemented and developed within the TagRec framework. Features of the framework allowed for continuous evaluation and analysis of single factors of the model and, with it, the associated change in recommendation performance. With this data, it was possible to gain deeper insight into the algorithmic approach and its parameters and thus to further adapt the model to the requirements of our application area. Resource Recommendations Using Tag and Time Information. In [27], the Collaborative Item Ranking Using Tag and Time Information (CIRTT) approach was presented. CIRTT uses Collaborative Filtering to identify a set of candidate resources and re-ranks these candidates by incorporating tag and time information. This is achieved via the Base-Level Learning (BLL) equation, which is one component of ACT-R's activation equation [1]. Since TagRec contains a full implementation of the activation equation, it could easily be adapted for the task of resource recommendations as well. Apart from that, TagRec was used to compare CIRTT to other related resource recommendation methods (e.g., [40]). Another study of recency effects in Collaborative Filtering recommender systems was provided in [28]. Recommendation Evaluation One of the most challenging tasks in the area of recommender systems is the reproducible evaluation of recommendation results [11]. TagRec aims to support this process by providing standardized data processing methods, baseline algorithm implementations, evaluation protocols and metrics. Evaluating Tag Recommendations in Real-World Folksonomies. Because of the sparse nature of social tagging systems, most tag recommendation evaluation studies were conducted using p-core pruned datasets. This means that all users, resources and tags that do not appear at least p times in the dataset are removed. This clearly does not reflect a real-world folksonomy setting, as shown by [8]. To overcome this problem, TagRec was used in [20] to compare a rich set of tag recommendation algorithms using a wide range of evaluation metrics on six unfiltered social tagging datasets (i.e., Flickr, CiteULike, BibSonomy, Delicious, LastFM and MovieLens). The results showed that the efficacy of a recommendation algorithm greatly depends on the given dataset characteristics, and that cognitive-inspired approaches provide the most robust results, even in sparse data folksonomy settings. Comparing Recommendation Algorithms in Technology Enhanced Learning Settings. Kopeinik et al. [16] is another example of using TagRec for the evaluation of a variety of algorithms on different offline datasets.
The paper focused on technology-enhanced formal and informal learning environments where, due to fast-changing domains and characteristic group learning settings, data is typically sparse. The evaluation was divided into two settings: the performance of (i) resource recommendation strategies, and (ii) tag recommendation strategies. In both cases, the authors compared the recommendation accuracy of a number of computationally inexpensive recommendation algorithms on six offline datasets retrieved from various educational settings (i.e., social bookmarking systems, social learning environments and massive open online courses). The investigated approaches are either state-of-the-art recommendation approaches or strategies that have been explicitly suggested in the context of TEL systems. To address the goals of this study, the TagRec framework already provided a wide range of required functionality, such as the implemented data processing component, evaluation metrics and state-of-the-art algorithms. In the context of this research paper, it was further extended by a couple of algorithms that are considered particularly relevant to learning settings and by additional statistics, which were needed to interpret the evaluation results properly. Hashtag Recommendations Over the past years, hashtags have become very popular in systems such as Twitter, Instagram and Facebook. Similar to social tags, hashtags are freely-chosen keywords to categorize resources such as Twitter posts (i.e., tweets). One of the biggest advantages of hashtags is that they can easily be used by integrating them into the tweet text. Unsurprisingly, this has led to the development of hashtag recommendation algorithms that aim to support users in applying the most descriptive hashtags to their tweets [39]. Temporal Effects on Hashtag Reuse. In [22], a time-dependent and cognitive-inspired hashtag recommendation approach was proposed. In this paper, temporal effects on hashtag reuse in Twitter were analyzed with the help of TagRec in order to design a hashtag recommendation approach that utilizes the BLL equation of the cognitive architecture ACT-R [1]. Therefore, TagRec was extended with functions to access Apache Solr (see Section 2), which enables the content-based analysis of tweets using TF-IDF (see [22]). CONCLUSION AND FUTURE WORK In this paper, we presented the TagRec framework as a toolkit for the development and evaluation of tag-based recommender systems. TagRec is open-source software written in Java and can be freely downloaded from GitHub. The framework consists of five components: (i) a data processing component, which processes data sources, (ii) a data model and analytics component, which enables access to the processed data, (iii) recommendation algorithms, which calculate recommendations, (iv) an evaluation engine, which evaluates the algorithms, and (v) recommendation results, which can be passed to client applications. Apart from that, we summarized various use cases realized with TagRec from the fields of tag recommendations, resource recommendations, recommendation evaluation and hashtag recommendations. To date, TagRec has supported the development and/or evaluation process described in 17 research papers. Specifically, our framework was used for the realization of recommendation algorithms based on models of cognitive science. In these papers, it was shown that the cognitive-inspired approaches provided the most robust results, even in sparse data folksonomy settings.
We believe that TagRec extends the already rich portfolio of recommender frameworks with a toolkit that is specifically tailored to fit tag-based settings. Furthermore, the presentation of TagRec's use cases should be of interest for both researchers and developers of tag-based recommender systems. Limitations & future work. Currently, one limitation of TagRec is that data access is not standardized. Thus, social tagging data is accessed from folksonomy files, whereas resource-related metadata (e.g., tweet content) is accessed from Apache Solr. Our first plan for future work is therefore to implement a mechanism that integrates all data into Apache Solr. Apart from that, we want to further work on the stability and code quality of the framework. For example, we want to enhance the build and dependency management of the software using Apache Maven.
1901.00306
2949972480
Other examples of open-source recommender software are MyMediaLite (http://www.mymedialite.net), an item recommender library that focuses on rating and ranking predictions in collaborative filtering approaches @cite_31 , CARSKit (https://github.com/irecsys/CARSKit), a recommendation library specifically designed for context-aware recommendations, and the tag recommender available at http://www.libfm.org/tagrec.html, a software component that implements Tensor Factorization models for personalized tag recommendations in C++ @cite_26 .
{ "abstract": [ "MyMediaLite is a fast and scalable, multi-purpose library of recommender system algorithms, aimed both at recommender system researchers and practitioners. It addresses two common scenarios in collaborative filtering: rating prediction (e.g. on a scale of 1 to 5 stars) and item prediction from positive-only implicit feedback (e.g. from clicks or purchase actions). The library offers state-of-the-art algorithms for those two tasks. Programs that expose most of the library's functionality, plus a GUI demo, are included in the package. Efficient data structures and a common API are used by the implemented algorithms, and may be used to implement further algorithms. The API also contains methods for real-time updates and loading storing of already trained recommender models. MyMediaLite is free open source software, distributed under the terms of the GNU General Public License (GPL). Its methods have been used in four different industrial field trials of the MyMedia project, including one trial involving over 50,000 households.", "Tagging plays an important role in many recent websites. Recommender systems can help to suggest a user the tags he might want to use for tagging a specific item. Factorization models based on the Tucker Decomposition (TD) model have been shown to provide high quality tag recommendations outperforming other approaches like PageRank, FolkRank, collaborative filtering, etc. The problem with TD models is the cubic core tensor resulting in a cubic runtime in the factorization dimension for prediction and learning. In this paper, we present the factorization model PITF (Pairwise Interaction Tensor Factorization) which is a special case of the TD model with linear runtime both for learning and prediction. PITF explicitly models the pairwise interactions between users, items and tags. The model is learned with an adaption of the Bayesian personalized ranking (BPR) criterion which originally has been introduced for item recommendation. Empirically, we show on real world datasets that this model outperforms TD largely in runtime and even can achieve better prediction quality. Besides our lab experiments, PITF has also won the ECML PKDD Discovery Challenge 2009 for graph-based tag recommendation." ], "cite_N": [ "@cite_31", "@cite_26" ], "mid": [ "1965355809", "2089349245" ] }
The TagRec Framework as a Toolkit for the Development of Tag-Based Recommender Systems
Recommender systems aim to predict the probability that a speci c user will like a speci c resource. erefore, recommender systems utilize the past user behavior (e.g., resources previously consumed by this user) in order to generate a personalized list of potentially relevant resources [32]. Popular application domains of recommender systems include online marketplaces (e.g., Amazon and Zalando), movie and music streaming services (e.g., Net ix and Spotify), job portals (e.g., LinkedIn and Xing), and social tagging systems (e.g., BibSonomy and CiteULike). Social tagging systems bear particularly great potential for recommender systems as, by nature, they produce a vast amount of user-generated resource-annotations (i.e., tags). us, possible use cases of these tag-based recommender systems include the suggestion of resources to extend a user's set of bookmarks [4,38] and the Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for pro t or commercial advantage and that copies bear this notice and the full citation on the rst page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permi ed. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior speci c permission and/or a fee. Request permissions from permissions@acm.org. Conference'17, Washington, DC, USA © 2016 ACM. 978-x-xxxx-xxxx-x/YY/MM. . . $15.00 DOI: 10.1145/nnnnnnn.nnnnnnn suggestion of tags to assist in the annotation of these bookmarks. e la er one is known as the eld of tag recommendations [13]. Over the past years, various recommendation frameworks and libraries have been developed in order to support the development and evaluation of recommender systems (see Section 4). While these frameworks cover a wide range of application domains, to the best of our knowledge, an open-source recommendation framework to design and evaluate tag-based recommender systems was still lacking. erefore, in 2014, we have started developing TagRec, a standardized tag recommender benchmarking framework [19]. In the initial development phase of the framework, we mainly focused on evaluating tag recommendation algorithms. In 2015, the framework was extended by including resource recommendation algorithms that are based on social tagging data [36]. e aim of this paper, however, is to present the current, updated state of TagRec. is includes the extension of the framework for (i) the analysis of tag reuse practices [21], (ii) the evaluation of tag recommendations in real-world folksonomy and Technology Enhanced Learning se ings [16,20], and (iii) hashtag recommendations in Twi er [22]. Apart from that, we provide an updated framework description (see Section 2) as well as a summary of use cases in the eld of recommender research that have been completed using TagRec (see Section 3). Research areas encompass tag recommendations, resource recommendations, recommendation evaluation and hashtag recommendations. To date, TagRec has served the recommender development and/or evaluation processes in two large-scale European research projects, which have been published in 17 research papers. We conclude the paper with a discussion on future work and potential improvements of the framework (see Section 5). We believe that our work contributes to the rich portfolio of technical frameworks in the area of recommender systems. 
Furthermore, this paper presents an overview of use cases which can be realized with TagRec, and should be of interest for both researchers and developers of tag-based recommender systems. TAGREC TagRec is a Java-based recommendation framework for tag-based information retrieval se ings. It is open-source so ware and freely available via our Github repository 1 . e Github page also contains a detailed technical description on the usage of the framework. Figure 1 illustrates TagRec's system architecture. e framework consists of (i) a data processing component, which processes data sources, (ii) a data model and analytics component, which enables access to the processed data, (iii) recommendation algorithms, 1 Figure 1: System architecture of TagRec. Here, the data processing component processes data sources in order to create a data model and data analytics. en, this data model is used by recommendation algorithms to create recommendation results that are either forwarded to an evaluation engine or to a client application. which calculate recommendations, (iv) an evaluation engine, which evaluates the algorithms, and (v) recommendation results, which can be passed to a client application. e mentioned components are described in more detail in the remainder of this section. Apart from that, we describe practical aspects of the framework that should be helpful when implementing and/or evaluating a recommendation algorithm. Finally, Table 1 provides an overview of the supported datasets, recommendation algorithms and evaluation metrics. Data Processing. e data processing component is responsible for parsing and processing external data sources. Currently supported datasets are listed in Table 1. ese datasets serve a wide range of application domains such as social bookmarking systems, learning environments, microblogging tools and music/movie sharing portals. e set of datasets can easily be extended by implementing custom data pre-processing strategies. Furthermore, this component supports various data enrichment and transformation methods such as p-core pruning [8], topic modeling [25], training/test set spli ing [20] and data conversion into related formats (e.g., for MyMediaLite [9]). Data Model and Analytics. e data model is created based on described data processing steps and provides an object-oriented representation of the data in order to ease the implementation process of a novel recommendation algorithm. us, it enables easy access to the entities in the datasets via powerful query functionality (e.g., get the set of tags a user has used in the past). Furthermore, the data model of TagRec is connected to Apache Solr 2 and thus, enables fast access to content-based data of entities. Another role of this component is the provision of basic data analytics functionality to get a be er understanding of the dataset characteristics. For example, dataset statistics, such as the total number of distinct tags or the average number of bookmarks per user, can be retrieved. Recommendation Algorithms. TagRec contains a wide range of recommendation algorithms (see Table 1). As later described in 2 Section 3, algorithms for tag recommendations, resource recommendations and hashtag recommendations are provided. ese algorithms can be used as baseline approaches for a newly implemented algorithm. e complete list of all variants of these algorithms is provided on the TagRec's Github page. 
A key contributions of TagRec is that next to well-established approaches, such as Collaborative Filtering, MostPopular and Factorization machines, it also encompasses approaches based upon cognitive models of information retrieval, human memory theory and category learning. In [20], it has been shown that these cognitive-inspired approaches achieve high prediction accuracy estimates in comparison to classic recommendation algorithms. Besides, the algorithms have demonstrated their suitability for sparse datasets such and narrow folksonomies. Evaluation Engine. e evaluation engine quanti es the quality of implemented recommendation strategies by applying a rich set of evaluation metrics as listed in Table 1. One drawback of most recommendation evaluation frameworks is their focus on accuracy and ranking estimates, which restricts the evaluation to the performance of recommender systems [2]. To ll this gap, TagRec supports a variety of evaluation metrics to also o er indicators for diversity, novelty, runtime performance and memory consumption of algorithms. For evaluating an algorithm, TagRec has to be provided with three parameters, where the rst one speci es the algorithm, the second one speci es the dataset directory and the third one speci es the le name of the dataset sample. For example, java jar tagrec.jar cf bib bib sample runs Collaborative Filtering on a sample of the BibSonomy dataset. e calculated metrics are then either wri en to a "metrics" le or printed to the console. Recommendation Results. As indicated in Figure 1, the algorithms' recommendation results can be either forwarded to the evaluation engine to retrieve evaluation metrics or to a client application for further processing (e.g., visualization). e KnowBrain tool [7] is an example of such a client application. It is an open source social bookmarking tool, which has been extended to cater the requirements of tag recommender evaluations in online se ings. A screenshot of KnowBrain's graphical user interface is shown in Figure 2. It enables the bookmarking of Web links and their annotations by (i) selecting from a pre-de ned set of categories, and by (ii) assigning a variable number of tags. e user's tagging process is supported by a list of recommended tags that are selected based on Tag recommendations Research papers Model of human categorization [17,23,35] Activation processes in human memory [18,21,24,37] Informal learning se ings [5][6][7] Resource recommendations Research papers A ention-interpretation dynamics [15,34] Tag and time information [27,28] Recommendation evaluation Research papers Real-world folksonomies [20] Technology enhanced learning se ings [16] Hashtag recommendations Research papers Temporal e ects on hashtag reuse [22] algorithms of the TagRec framework. e elicitation of categories allows for semantic context-based recommendation algorithms such as 3Layers [35]. Furthermore, the comparison of actually used tags with recommended tags gives insights into the online performance (i.e., user acceptance) of recommendation strategies. USE CASES In this section, we describe use cases that have been implemented using the TagRec framework. To date, TagRec supported the recommender development and/or evaluation processes in two large-scale European research projects. Results have been published in 17 research papers (see Table 2). Tag Recommendations Tag recommendation systems assist users in nding descriptive tags to annotate resources. 
In other words, given a speci c user and a speci c resource, a tag recommendation algorithm predicts a set of tags a user is likely to apply in annotating the resource [13]. Within this context, TagRec was used for the creation of (i) cognitive-inspired algorithms, and (ii) the evaluation of approaches suitable for formal and informal learning se ings. Tag Recommendations Using a Model of Human Categorization. In [17,23,35], the authors introduced a tag recommendation algorithm based on the human categorization models ALCOVE [26] and MINERVA2 [12]. is algorithm is called 3Layers and simulates categorization processes in human memory. erefore, the categories assigned to a given resource, which a user is going to annotate, are matched against already annotated resources of this user. Based on this matchmaking process, a set of tags associated with semantically related resources is recommended. Since TagRec enables to link a list of categories to a resource, it supported the development of 3Layers by providing functions for analyzing and deriving category information of resources (e.g., via LDA topic modeling [25]). Utilizing Activation Processes in Human Memory. Activation processes in human memory describe the general and contextdependent usefulness of information. It was shown that these processes (especially usage frequency, recency and semantic context) greatly in uence the reuse probability of tags [21]. Based on this, a set of time-aware tag recommendation approaches (see [18,24,37]) was developed that utilize the activation equation of the cognitive architecture ACT-R [1]. erefore, TagRec was used to analyze the timestamps of tag assignments and to calculate tag co-occurrences for re ecting the semantic context of social tagging. Furthermore, TagRec enabled the hybrid combination of the components of the model (e.g., combining time-aware and context-aware recommendations). Tag Recommendations in Informal Learning Settings. In the course of the European-funded project Learning Layers 3 , which aims at supporting informal learning at the workplace, tag recommendations were used to support the individual user in nding descriptive tags and the collective in consolidating a shared tag vocabulary. ese tag recommendations were used in two tools: (i) the Dropbox-like environment KnowBrain [7], and (ii) the Sensemaking interface Bits & Pieces [6]. To achieve this, TagRec was integrated as a tag recommendation library into the Social Semantic Server [5], which was used as the technical back-end for KnowBrain and Bits & Pieces. is shows that TagRec cannot only be used as a standalone tool but also as a programming library (or toolkit) to include recommendation functionality in existing so ware. A similar approach will be followed in another European-funded project called AFEL 4 , which engages in design and development of analytics for everyday learning. Resource Recommendations Resource recommender systems suggest potentially relevant web items (e.g., movies, books, learning resources, URLs, etc.) to users. Most of these recommender systems are based on Collaborative Filtering (CF) techniques, which aim to calculate similarities between users to suggest the most suitable web resources to them [33]. TagRec was applied to support and improve the development of CF approaches in tag-based online environments. Mimicking Attention-Interpretation Dynamics. Seitlinger et al. 
[34] introduced the first version of a CF-based recommendation approach that takes into consideration non-linear user-artifact dynamics, modeled by means of SUSTAIN. SUSTAIN (Supervised and Unsupervised STratified Adaptive Incremental Network) is a flexible network model of human category learning that is thoroughly discussed in [29]. It assumes that learning is a dynamic process that takes place through the encounter of new examples (e.g., Web resources). Throughout the learning trajectory, categories emerge and learners' attention foci shift. In [15], an advanced, adapted version of the initial approach was presented and analyzed in detail. The resulting approach, SUSTAIN+CF, first applies CF to calculate the most suitable resources for a user, and then re-ranks this list depending on a user's category learning model (i.e., SUSTAIN's user model). The algorithm has been implemented and developed within the TagRec framework. Features of the framework allowed for continuous evaluation and analysis of single factors of the model and, with it, the associated change in recommendation performance. With this data, it was possible to gain deeper insight into the algorithmic approach and its parameters and thus to further adapt the model to the requirements of our application area. Resource Recommendations Using Tag and Time Information. In [27], the Collaborative Item Ranking Using Tag and Time Information (CIRTT) approach was presented. CIRTT uses Collaborative Filtering to identify a set of candidate resources and re-ranks these candidate resources by incorporating tag and time information. This is achieved via the Base-Level Learning (BLL) equation, which is one component of ACT-R's activation equation [1] (see the short sketch below). Since TagRec contains a full implementation of the activation equation, it could be easily adapted for the task of resource recommendations as well. Apart from that, TagRec was used to compare CIRTT to other related resource recommendation methods (e.g., [40]). Another study of recency effects in Collaborative Filtering recommender systems was provided in [28]. Recommendation Evaluation One of the most challenging tasks in the area of recommender systems is the reproducible evaluation of recommendation results [11]. TagRec aims to support this process by providing standardized data processing methods, baseline algorithm implementations, evaluation protocols and metrics. Evaluating Tag Recommendations in Real-World Folksonomies. Because of the sparse nature of social tagging systems, most tag recommendation evaluation studies were conducted using p-core pruned datasets. This means that all users, resources and tags which do not appear at least p times in the dataset are removed. This clearly does not reflect a real-world folksonomy setting, as shown by [8]. To overcome this problem, TagRec was used in [20] to compare a rich set of tag recommendation algorithms using a wide range of evaluation metrics on six unfiltered social tagging datasets (i.e., Flickr, CiteULike, BibSonomy, Delicious, LastFM and MovieLens). The results showed that the efficacy of a recommendation algorithm greatly depends on the given dataset characteristics, and that cognitive-inspired approaches provide the most robust results, even in sparse-data folksonomy settings. Comparing Recommendation Algorithms in Technology Enhanced Learning Settings. Kopeinik et al. [16] is another example of using TagRec for the evaluation of a variety of algorithms on different offline datasets.
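Since the BLL equation recurs in several of the use cases above, the following is a minimal Python sketch of its base-level activation, B_i = ln(sum_j (t_now - t_j)^(-d)), applied to one user's tagging history. The decay d = 0.5, the exp-normalization and all names are illustrative assumptions; TagRec's actual implementation is in Java and may differ in detail.

import math
from collections import defaultdict

def bll_scores(tag_history, now, d=0.5):
    # tag_history: list of (tag, timestamp) pairs from one user's past tag assignments;
    # `now` is the reference time at which the recommendation is requested.
    decayed = defaultdict(float)
    for tag, ts in tag_history:
        # each past usage of a tag decays as a power law of its age (ACT-R typically uses d = 0.5)
        decayed[tag] += (now - ts) ** (-d)
    # base-level activation is the log of the summed, decayed usages
    activation = {tag: math.log(v) for tag, v in decayed.items()}
    # exp-normalize so the scores can be mixed with other recommender components
    z = sum(math.exp(a) for a in activation.values())
    return {tag: math.exp(a) / z for tag, a in activation.items()}

# toy usage: frequently and recently used tags obtain the highest scores
history = [("recipe", 10.0), ("python", 120.0), ("python", 180.0)]
print(bll_scores(history, now=200.0))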
The paper focused on technology-enhanced formal and informal learning environments where, due to fast-changing domains and characteristic group learning settings, data is typically sparse. The evaluation was divided into two settings: the performance of (i) resource recommendation strategies, and (ii) tag recommendation strategies. In both cases, the authors compared the recommendation accuracy of a number of computationally inexpensive recommendation algorithms on six offline datasets retrieved from various educational settings (i.e., social bookmarking systems, social learning environments and massive open online courses). The investigated approaches are either state-of-the-art recommendation approaches or strategies that have been explicitly suggested in the context of TEL systems. To address the goals of this study, the TagRec framework already provided a wide range of required functionality such as the implemented data processing component, evaluation metrics and state-of-the-art algorithms. In the context of this research paper, it was further extended by a couple of algorithms that are considered particularly relevant to learning settings and by additional statistics, which were needed to interpret evaluation results properly. Hashtag Recommendations Over the past years, hashtags have become very popular in systems such as Twitter, Instagram and Facebook. Similar to social tags, hashtags are freely chosen keywords to categorize resources such as Twitter posts (i.e., tweets). One of the biggest advantages of hashtags is that they can be easily used by integrating them into the tweet text. Unsurprisingly, this has led to the development of hashtag recommendation algorithms that aim to support users in applying the most descriptive hashtags to their tweets [39]. Temporal Effects on Hashtag Reuse. In [22], a time-dependent and cognitive-inspired hashtag recommendation approach was proposed. In this paper, temporal effects on hashtag reuse in Twitter were analyzed with the help of TagRec in order to design a hashtag recommendation approach that utilizes the BLL equation of the cognitive architecture ACT-R [1]. Therefore, TagRec was extended with functions to access Apache Solr (see Section 2), which enables the content-based analysis of tweets using TF-IDF (see [22]). CONCLUSION AND FUTURE WORK In this paper, we presented the TagRec framework as a toolkit for the development and evaluation of tag-based recommender systems. TagRec is open-source software written in Java and can be freely downloaded from GitHub. The framework consists of five components: (i) a data processing component, which processes data sources, (ii) a data model and analytics component, which enables access to the processed data, (iii) recommendation algorithms, which calculate recommendations, (iv) an evaluation engine, which evaluates the algorithms, and (v) recommendation results, which can be passed to client applications. Apart from that, we summarized various use cases realized with TagRec from the fields of tag recommendations, resource recommendations, recommendation evaluation and hashtag recommendations. To date, TagRec supported the development and/or evaluation process described in 17 research papers. Specifically, our framework was used for the realization of recommendation algorithms based on models of cognitive science. In these papers, it was shown that the cognitive-inspired approaches provided the most robust results, even in sparse-data folksonomy settings.
We believe that TagRec extends the already rich portfolio of recommender frameworks with a toolkit that is specifically tailored to fit tag-based settings. Furthermore, the presentation of TagRec's use cases should be of interest to both researchers and developers of tag-based recommender systems. Limitations & future work. Currently, one limitation of TagRec is that the data access is not standardized. Thus, social tagging data is accessed from folksonomy files, whereas resource-related metadata (e.g., tweet content) is accessed from Apache Solr. Hence, our first plan for future work is to implement a mechanism that integrates all data into Apache Solr. Apart from that, we want to further work on the stability and code quality of the framework. For example, we want to enhance the build and dependency management of the software using Apache Maven.
3,236
1907.03792
2961566780
Semi-supervised learning (SSL) uses unlabeled data for training and has been shown to greatly improve performances when compared to a supervised approach on the labeled data available. This claim depends both on the amount of labeled data available and on the algorithm used. In this paper, we compute analytically the gap between the best fully-supervised approach on labeled data and the best semi-supervised approach using both labeled and unlabeled data. We quantify the best possible increase in performance obtained thanks to the unlabeled data, i.e. we compute the accuracy increase due to the information contained in the unlabeled data. Our work deals with a simple high-dimensional Gaussian mixture model for the data in a Bayesian setting. Our rigorous analysis builds on recent theoretical breakthroughs in high-dimensional inference and a large body of mathematical tools from statistical physics initially developed for spin glasses.
Using exact but non-rigorous methods from statistical physics, @cite_17 @cite_27 determine the critical values for @math and @math at which it becomes information-theoretically possible to reconstruct the membership into clusters better than chance. Rigorous results on this model are given in @cite_10 , where bounds on the critical values are obtained. The precise thresholds were then determined in @cite_2 . Our analysis builds on the techniques derived in this last reference with two main modifications: additional work is required to compute the classification accuracy (as opposed to the mean squared error) and to incorporate the side information.
{ "abstract": [ "We consider the high-dimensional inference problem where the signal is a low-rank matrix which is corrupted by an additive Gaussian noise. Given a probabilistic model for the low-rank matrix, we compute the limit in the large dimension setting for the mutual information between the signal and the observations, as well as the matrix minimum mean square error, while the rank of the signal remains constant. This allows to locate the information-theoretic threshold for this estimation problem, i.e. the critical value of the signal intensity below which it is impossible to recover the low-rank matrix.", "We consider the problem of Gaussian mixture clustering in the high-dimensional limit where the data consists of m points in n dimensions, n,m → ∞ and α = m n stays finite. Using exact but non-rigorous methods from statistical physics, we determine the critical value of α and the distance between the clusters at which it becomes information-theoretically possible to reconstruct the membership into clusters better than chance. We also determine the accuracy achievable by the Bayes-optimal estimation algorithm. In particular, we find that when the number of clusters is sufficiently large, r > 4+2√α, there is a gap between the threshold for information-theoretically optimal performance and the threshold at which known algorithms succeed.", "", "Estimating the density of data generated by Gaussian mixtures, using the maximum-likelihood criterion, is investigated. Solving the statistical mechanics of this problem we evaluate the quality of the estimation as a function of the number of data points, P= N, N being the dimensionality of the points, in the limit of large N. Below a critical value of , the estimated density consists of Gaussian centers that have zero overlap with the structure of the true mixture. We show numerically that estimating the centers by slowly reducing the estimated Gaussian width yields a good agreement with the theory even in the presence of many local minima." ], "cite_N": [ "@cite_2", "@cite_27", "@cite_10", "@cite_17" ], "mid": [ "2605304986", "2531902758", "", "2053881994" ] }
Asymptotic Bayes risk for Gaussian mixture in a semi-supervised setting
Semi-supervised learning (SSL) has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. The goal of SSL is to leverage large amounts of unlabeled data to improve the performance of supervised learning over small datasets. For unlabeled examples to be informative, assumption has to be made. The cluster assumption states that if two samples belong to the same cluster in the input distribution, then they are likely to belong to the same class. The cluster assumption is the same as the low-density separation assumption: the decision boundary should lie in the low-density region. In this paper, we explore analytically the simplest possible parametric model for the cluster assumption: the two clusters are modeled by mixture of Gaussians in high dimension so that the optimal decision boundary is a hyperplane. Our model can be seen as a classification problem in a semi-supervised setting. Our aim here is to define a model simple enough to be mathematically tractable while being practically relevant and capturing the main properties of a high-dimensional statistical inference problem. Our model has three parameters: the high-dimensionality of the data is captured by α the ratio of the number of samples divided by the ambient dimension; the fraction of labeled data point η and the amount of overlap between the clusters σ 2 . As a function of these three parameters, we compute the best possible accuracy (the Bayes risk) when only labeled data are used or when unlabeled data are also used. As a result, we obtain the added value due to the unlabeled data for the best possible algorithm. In particular, we observe a very clear diminishing return of the labeled data, i.e. the first labeled data points bring much more information than the last ones. Hence the regime with very few labeled data points is a priori a regime favorable to SSL. But in this case, we face in practice the problem of small validation sets [23] which makes hyperparameter tuning impossible. We find that the range of parameters for which SSL clearly outperforms either unsupervised learning or supervised learning on the labeled data is rather narrow. In a case with large overlap between the clusters (σ 2 → ∞), unsupervised learning fails and supervised learning on the labeled data is almost optimal. In a case with small overlap between the clusters (σ 2 → 0), unsupervised learning achieves performances very close to supervised learning with all labels available and supervised learning on the labeled dataset only fails. From a practical perspective, we can try to draw parallels between our results and the state of the art in SSL but we need to keep in mind that our results only give best achievable performances on our toy model. In particular, even in a setting where our results predict that unsupervised learning achieves roughly the same performances as supervised learning with all labels, it might be very useful in practice to use a few labels in addition to all unlabeled data. Such an approach is presented in [8] where extremely good performances are achieved for image classification with only a few labeled data per class and a new SSL algorithm: MixMatch. For example on CIFAR-10, with only 250 labeled images, MixMatch achieves an error rate of 11.08% and with 4000 labeled images, an error rate of 6.24% (to be compared with the 4.17% error rate for the fully supervised training on all 50000 samples). 
These results are aligned with our finding about diminishing returns of labeled data points. We make the following contributions: Bayes risk: to the best of our knowledge, our work is the first analytic computation of the Bayes risk in a high-dimensional Gaussian model in a semi-supervised setting. Rigorous analysis: our analysis builds on a series of recent works [11,4,18,22,5] with tools from information theory and mathematical physics originally developed for the analysis of spin glasses [24,26]. The rest of the paper is organized as follows. Our model and the main result is presented in Section 2. Related work is presented in Section 3. In Section 4, we give an heuristic derivation of the main result and in Section 5, we give a proof sketch while the more technical details are presented in the supplementary material Section 7. We conclude in Section 6 Model and main results We now define our classification problem with two classes. The points Y 1 , . . . , Y N of the dataset are in R D and given by the following process: Y j = V j U + σZ j , 1 ≤ j ≤ N, where U ∼ Unif(S D−1 ), V = (V 1 , . . . , V N ) i.i.d. ∼ Unif(−1, 1) and Z 1 , . . . , Z N i.i.d. ∼ N (0, Id D ) are all independent. In words, the dataset is composed of N points in R D divided into two classes with roughly equal sizes. The points with label V j = +1 are centered around +U ∈ R D and the points with label V j = −1 are centered around −U ∈ R D . The parameter σ controls the level of Gaussian noise around these centers. In a semi-supervised setting, the statistician has access to some labels. We consider a case where each label is revealed with probability η ∈ [0, 1] independently of everything else. To fix notation, the side information is given by the following process: S j = V j with probability η 0 with probability 1 − η. If S j = 0, then the label of the j-th data point is unknown whereas if S j = ±1, it corresponds to the label of the j-th data point. Finally, we consider the high-dimensional setting and all our results will be in a regime where N, D → ∞ while the ratio N/D tends towards a constant α > 0. Note that we are in a high noise regime since the squared norm of the signal is one whereas the squared norm of the noise is σ 2 D ≈ σ 2 N/α where N is the number of observations. To summarize, the three parameters of our model are: σ 2 > 0 the variance of the noise in the dataset, η ∈ [0, 1] the fraction of revealed labels and α > 0 the ratio between the number of data points (both labeled and unlabeled) and the dimension of the ambient space. We also assume that the statistician knows the priors, i.e. the distributions of U , V and Z. The task of the statistician is to use the dataset (Y , S) in order to make a prediction about the label of a new (unseen) data point. More formally, we define: Y new = V new U + σZ new , where V new ∼ Unif(−1, +1), Z new ∼ N (0, Id D ) . We are interested in the minimal achievable error in our model, i.e. the Bayes risk: R * D (η) = inf v P v(Y , S, Y new ) = V new where the infimum is taken over all estimators (measurable functions of Y , S, Y new ). Our main mathematical achievement is an analytic formula for the Bayes risk R * D in the large D limit, see Theorem 1 below. In order to state it, we need to introduce some additional notations. We start with some easy facts about our model. Oracle risk Assume that the statistician knows the center of the clusters, i.e. has access to the "oracle" vector U . 
Then the best classification error would be achieved thanks to the simple thresholding rule sign( U , Y new ), where ., . denotes the Euclidean dot product. In this case, the risk is given by: R oracle = P σ U , Z new > 1 = P σZ > 1) = 1 − Φ 1 σ ,(1) where Φ is the standard Gaussian cumulative distribution function. We have of course R oracle ≤ R * D (η). Fully supervised case Another instructive and simple case is the supervised case where η = 1. Since all the V j 's are known, we can assume wlog that they are all equal to one (multiply each Y j by V j ). More importantly, if we slightly modify the distribution of U by taking U = (U 1 , . . . U D ) i.i.d. ∼ N (0, 1/D), this will not change the results for our model and makes the analysis easier by decorrelating each component. Indeed, denote by Y j (resp. Z j ) the first component of Y j (resp. Z j ) and by U 1 the first component of U . Then we have N scalar noisy observations of the first component of U : Y j = U 1 + σZ j for 1 ≤ j ≤ N , so that we can construct an estimate for U 1 by taking the average of the observations. We get: Y 1 = 1 N N j=1 Y j = U 1 + σ √ N N (0, 1). Doing this for each component of U , we get an estimate of the vector U and we now use it to get an estimate of V new . First define Y = (Y 1 , . . . , Y D ) and consider Y new , Y = V new U , Y + σ Z new , Y , and note that as D → ∞, we have U , Y ≈ DE[U 1 Y 1 ] = DE U 2 1 = 1 and Z new , Y ≈ E Y 2 Z ≈ √ α + σ 2 / √ αN (0, 1), so that we get: Y new , Y ≈ V new + σ √ α + σ 2 √ α N (0, 1). Our main result will actually show that estimating V new with the sign of Y new , Y is optimal so that we get: lim N,D→∞ R * D (1) = P σ √ α + σ 2 √ α Z > 1 = 1 − Φ √ α σ √ α + σ 2 .(2) Unsupervised case In this paper, we concentrate on the case where η > 0. When η = 0, there is no side information and we are in an unsupervised setting studied in [22]. Due to the symmetry of our model, we have R * D = 1/2 because there is no way to guess the right classes ±1. In order to have a well-posed problem, the risk should be redefined as follows: R * D (0) = E Y min s=±1 inf v P (sv(Y , Y new ) = V new |Y ) Although, this measure of performance is not the one studied in [22], we can adapt the argument to show that: lim η→0 lim N,D→∞ R * D (η) = lim N,D→∞ R * D (0).(3) Main result We now state our main result: Theorem 1. Let us define, for α, σ > 0, η ∈ (0, 1], f α,σ,η (q) def = α(1 − η)i v (q/σ 2 ) + α 2σ 2 (1 − q) − 1 2 q + log(1 − q) .(4) Here (5) where a fraction η of the data points have labels and are used with all unlabeled data points. The supervised on full curve corresponds to (2) where all the labels are used. The supervised on labeled curve corresponds to (2) with the parameter α replaced by αη and is the best possible performance when only a fraction η of the data points having labels are used. The unsupervised curve corresponds to (3) where all the data points are used but without any label. Finally, the oracle curve corresponds to (1) where the centers of the clusters are known (corresponding to the case α → ∞). In the left of Figure 1, we clearly see that the first labeled data points (i.e. when η is small) decreases greatly the risk of semi-supervised learning. This corresponds to the diminishing return of the labeled data. In the right plot of Figure 1, we see that in the high-noise regime, unsupervised learning fails and that its risk decreases as soon as σ 2 < 1. i v (γ) = γ − E log cosh( √ γZ 0 + γ) where Z 0 ∼ N (0, 1). 
The function f α,σ,η admits a unique minimizer q * (α, σ, η) on [0, 1) and R * D (η) − −−−−− → N,D→∞ 1 − Φ( q * (α, σ, η)/σ).(5) This phenomena is known as the BBP phase transition [1,2,25]. We see that below this transition, the unlabeled data are of little help as the performance of SSL almost match the performance of supervised learning on labeled data only. Moreover after the transition, unsupervised learning reaches quite quickly the performance of SSL. In other words, the regime most favorable to SSL in term of noise corresponds precisely to the regime around the BBP phase transition where unsupervised learning is still not very good while supervised learning on labeled data saturates. Heuristic derivation of the main result We present now an heuristic derivation of our results, based on the "cavity method" [21] from statistical physics. Let s u = E[U |Y , S] and s v = E[V |Y , S] be the optimal estimators (in term of mean squared error) for estimating U and V . A natural hypothesis is to assume that the correlation s u, U converges as N, D → ∞ to some deterministic limit q * u ∈ [0, 1] and that 1 N s v, V → q * v ∈ R. The conditional expectation s u = E[U |Y , S] is the orthogonal projection (in L 2 sense) of the random vector U onto the subspace of Y , S-measurable random variables. The L 2 norm squared of the projection s u is equal to the scalar product of the vector U with its projection s u: E s u 2 = E s u, U . Assuming that s u 2 also admits a deterministic limit, this limits is then equal to q * u . We get for large N and D, s u 2 s u, U q * u . Analogously we have 1 N s v 2 1 N s v, V q * v . We will show below that q * u and q * v obey some fixed points equations that allow to determine them. As seen above, if we aim at estimating a label V i that we did not observe (i.e. S i = 0) given Y , S and the "oracle" U , we compute the sufficient statistic σN (0, 1). The estimator that minimizes the probability of error Y i = Y i , U = V i +P( v = V i ) is simply v i = sign( Y i ). The one that minimizes the mean squared error (MSE) is v i = E[V i | Y i ] which achieves a MSE of E[(V i − v i ) 2 ] = mmse v (1/σ 2 ) where we define for (V, Z) ∼ Unif(−1, +1) ⊗ N (0, 1) and γ > 0 (see Section 7.1 for more details): mmse v (γ) def = E (V − E[V | √ γV + Z]) 2 . In the case where we do not have access to the oracle U , one can still use s u as a proxy. We repeat the same procedure assuming that s u, Y i is a sufficient statistic for estimating V i . Although this is not strictly true, we shall see, that this leads to the correct fixed points equations for q * u , q * v . Compute s u, Y i = s u, U V i + σ s u, Z i q * u V i + σ s u, Z i . The posterior mean s u is not expected to depend much on the particular point Y i and therefore on Z i . This gives that the random vectors s u and Z i are approximately independent. Hence the distribution of s u, Z i is roughly N (0, q * u ), we recall that s u 2 q * u . We get 1 σ √ q * u s u, Y i q * u /σ 2 V i + Z(6) in law, where Z ∼ N (0, 1). The best (in terms of MSE) estimator v i one can then constructs using s u, Y i achieves a MSE of E[(V i − v i ) 2 ] mmse v (q * u /σ 2 ). We assumed that s u, Y i is a sufficient statistic for estimating V i , therefore v i = s v i . For all the ηN indices i such that S i = V i we have obviously s v i = V i . Hence 1 N E V − s v 2 = 1 N i|Si =0 E (V i − s v i ) 2 = 1 N i|Si =0 E (V i − v i ) 2 1 N i|Si =0 mmse v (q * u /σ 2 ) (1 − η)mmse v (q * u /σ 2 ). 
Since we have 1 N E V − s v 2 1 − q * v , we get 1 − q * v (1 − η)mmse v (q * u /σ 2 ). Doing the same reasoning with s u instead of s v leads to 1 − q * u = mmse u (αq * v /σ 2 ), where mmse u (γ) = E[(U − E[U | √ γU + Z]) 2 ] for U, Z i.i.d. ∼ N (0, 1). As shown in Section 7.1, we have mmse u (γ) = 1 1 + γ . We conclude that (q * u , q * v ) satisfies the following fixed point equations: q * v = 1 − (1 − η)mmse v (q * u /σ 2 )(7)q * u = αq * v σ 2 + αq * v .(8) We introduce the following mutual information i v (γ) = I(V 0 ; √ γV 0 + Z 0 )(9) where V 0 ∼ Unif(−1, +1) and Z 0 ∼ N (0, 1) are independent. An elementary computation leads to (see Section 7.1) i v (γ) = γ − E log cosh( √ γZ 0 + γ).(10) By the "I-MMSE" Theorem from [15], i v is related to mmse v : i v (γ) = mmse v (γ).(11) Let us compute the derivative of f α,σ,η defined by (4), using (11): f α,σ,η (q) = α 2σ 2 (1 − η)mmse v (q/σ 2 ) − α 2σ 2 + q 2(1 − q) . Using (7)- (8), one verifies easily that f α,σ,η (q * u ) = 0. By Proposition 1, f α,σ,η admits a unique critical point on [0, 1) which is its unique minimizer: q * u is therefore the minimizer of f α,σ,η . If we now want to estimate V new from Y , S and Y new we assume, as above that s u, Y new is a sufficient statistic. As for (6), we have 1 σ √ q * u s u, Y new q * u /σ 2 V new + Z in law, where Z ∼ N (0, 1) is independent of V new . The Bayes classifier is then v = sign( u, Y new ),(12) hence R * D (η) = P(V new = v) 1 − Φ( q * u /σ) , which is the statement of our main Theorem 1 above. Proof sketch Theorem 1 follows from Theorem 2 below. From now we simply write q * instead of q * (α, σ, η). The next theorem computes the limit of the log-likelihood ratio. Theorem 2. Conditionally on V new = ±1, log P (V new = +1|Y , S, Y new ) P (V new = −1|Y , S, Y new ) (d) − −−−−− → N,D→∞ N (±2q * /σ 2 , 4q * /σ 2 ). Proof. Let us look at the posterior distribution of V new , U given Y , S, Y new , i.e. From Bayes rule we get P (V new = +1|Y , S, Y new ) P (V new = −1|Y , S, Y new ) = exp − 1 2σ 2 Y new − u 2 dP (u|Y , S) exp − 1 2σ 2 Y new + u 2 dP (u|Y , S) = exp 1 σ 2 Y new , u dP (u|Y , S) exp − 1 σ 2 Y new , u dP (u|Y , S) Let s u = E[U |Y , S]. The following lemma is proved in the supplementary material, see Section 7.3. (1) , u (2) For v ∈ {−1, +1} we define Lemma 1. Let u A N (v) = exp v σ 2 Y new , u dP (u|Y , S) B N (v) = exp v σ s u, Z new + vV new σ 2 q * . Using Lemma 1, we prove the following lemma in Section 7.3 Lemma 2. For v = ±1, A N (v) − B N (v) L 2 − −−−−− → N,D→∞ 0. Since | log A N (v) − log B N (v)| ≤ (A N (v) −1 + B N (v) −1 )(A N (v) − B N (v)) , we have by Cauchy-Schwarz inequality: E| log A N (v)−log B N (v)| ≤ √ 2 E A N (v) −2 +B N (v) −2 1/2 E (A N (v)−B N (v)) 2 1/2 − −−−−− → N,D→∞ 0, using Lemma 2 (one can verify easily that the first term of the product above is O(1)). We get log A N (v) − log B N (v) L 1 − −−−−− → N,D→∞ 0, hence log P (V new = +1|Y , S, Y new ) P (V new = −1|Y , S, Y new ) − 2 σ s u, Z new + 2 σ 2 q * V new L 1 − −−−−− → N,D→∞ 0. s u is independent of (V new , Z new ) and by Lemma 1 we have s u 2 → q * . Consequently s u, Z new (d) − −−−−− → N,D→∞ N (0, q * ) and we conclude: log P (V new = +1|Y , S, Y new ) P (V new = −1|Y , S, Y new ) (d) − −−−−− → N,D→∞ 2 σ Z 0 + 2 σ 2 q * V new where Z 0 ∼ N (0, q * ) is independent of V new . Conclusion We analyzed a simple high-dimensional Gaussian mixture model in a semi-supervised setting and computed the associated Bayes risk. 
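As a complement, here is a minimal numerical sketch of how the asymptotic risks can be evaluated: the SSL risk of Theorem 1 (via plain fixed-point iteration of equations (7)-(8)), the oracle risk (1), and the risk of supervised learning on the labeled data only (equation (2) with α replaced by αη). This is a non-authoritative illustration assuming NumPy, Gauss-Hermite quadrature for the Gaussian expectation inside mmse_v, naive iteration converging to the unique fixed point, and illustrative parameter values.

import numpy as np
from math import erf, sqrt

def Phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Gauss-Hermite nodes/weights to evaluate E[f(Z)] for Z ~ N(0, 1)
_nodes, _weights = np.polynomial.hermite.hermgauss(80)
def gauss_expect(f):
    return float(np.sum(_weights * f(sqrt(2.0) * _nodes)) / sqrt(np.pi))

def mmse_v(gamma):
    # mmse_v(g) = 1 - E[tanh(g + sqrt(g) Z)] for the +/-1 prior (Section 7.1)
    if gamma == 0.0:
        return 1.0
    return 1.0 - gauss_expect(lambda z: np.tanh(gamma + np.sqrt(gamma) * z))

def ssl_risk(alpha, sigma2, eta, iters=500):
    # iterate the fixed-point equations (7)-(8) and plug q_u* into Theorem 1
    q_u = 0.5
    for _ in range(iters):
        q_v = 1.0 - (1.0 - eta) * mmse_v(q_u / sigma2)   # eq. (7)
        q_u = alpha * q_v / (sigma2 + alpha * q_v)        # eq. (8)
    return 1.0 - Phi(sqrt(q_u) / sqrt(sigma2))

def labeled_only_risk(alpha, sigma2, eta):
    # equation (2) with alpha replaced by alpha * eta (only labeled points used)
    a = alpha * eta
    return 1.0 - Phi(sqrt(a) / (sqrt(sigma2) * sqrt(a + sigma2)))

alpha, sigma2, eta = 2.0, 0.5, 0.1
print("oracle risk      :", 1.0 - Phi(1.0 / sqrt(sigma2)))   # eq. (1)
print("labeled-only risk:", labeled_only_risk(alpha, sigma2, eta))
print("SSL risk         :", ssl_risk(alpha, sigma2, eta))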
In our model, we are able to compute the best possible accuracy of semi-supervised learning using both labeled and unlabeled data as well as the best possible performances of supervised learning using only the labeled data and unsupervised learning using all data but without any label. This allows us to quantify the added value of unlabeled data. When the clusters are well separated (probably the most realistic setting), we find that the value of unlabeled data is dominating. Labeled data can almost be ignored as unsupervised learning achieved roughly the same performance as semi-supervised learning. Nevertheless, using a few labeled data is often very helpful in practice as shown by the recent MixMatch algorithm [8]. We believe our main Theorem 1 gives new insights for semi-supervised learning and we designed our model with a focus on simplicity. However, our proof technique is very general and can handle a much more complex model. For example, we can deal with classes of different sizes by changing the prior of V new . Another extension for which our proof carries over consists in modifying the channel for the side information. Here, we considered the erasure channel corresponding to the standard SSL setting but our proof will still work for other channel like the binary symmetric channel or the Z channel corresponding to a setting with noisy labels. Supplementary material Gaussian channel We give here some easy computation for the Gaussian channel: Y = √ γU + Z, where Z ∼ N (0, 1) is independent of U . We first consider the case where U ∼ N (0, 1). We define mmse u (γ) = E (U − E [U |Y ]) 2 . Since, we are dealing with Gaussian random variables, E [U |Y ] is simply the orthogonal projection of U on Y : E [U |Y ] = E [U Y ] E [Y 2 ] Y = √ γ 1 + γ Y. Hence, we have mmse u (γ) = E U − γ 1 + γ U − √ γ 1 + γ Z 2 = 1 1 + γ . Thanks to the I-MMSE relation [15], we have 1 2 mmse u (γ) = ∂ ∂γ I(U ; Y ). For γ = 0, U and Y are independent: I(U ; Y ) γ=0 = 0, so that we get I(U ; Y ) = 1 2 log(1 + γ). We now consider the case where U ∼ Unif(−1, +1). We define i v (γ) = I(U ; Y ). Recall that I(U ; Y ) = E log dP (U,Y ) dP U ⊗ dP Y (U, Y ). And here, we have dP (U,Y ) dP U ⊗ dP Y (U, Y ) = e −1/2(Y − √ γU ) 2 e −1/2(Y − √ γu) 2 dP U (u) . Hence, we have i v (γ) = −E log dP U (u) exp ( √ γ(u − U )Y ) = √ γE[U Y ] − E log cosh( √ γY ) = γ − E log cosh ( √ γZ + γ) . Thanks to the I-MMSE relation, we have: 1 2 mmse v (γ) = i v (γ) = 1 − E 1 2 √ γ Z + 1 tanh ( √ γZ + γ) = 1 − E tanh √ λZ + λ − 1 2 E tanh √ λZ + λ = 1 2 − E tanh √ λZ + λ + 1 2 E tanh 2 √ λZ + λ = 1 2 1 − E tanh √ λZ + λ , so that we have mmse v (γ) = 1 − E tanh √ λZ + λ . Convergence of the mutual information Theorem 3. For all α, σ > 0, η ∈ (0, 1], 1) f α,σ,η (q). (13) Further, this minimum is achieved at a unique point q * (α, σ, η) and 1 N I U , V ; Y S − −−−−− → N,D→∞ min q∈[0,u, U − −−−−− → N,D→∞ q * (α, σ, η),(14) where u is a sample from the posterior distribution of U given Y , S, independently of everything else. Proof. The limit (13) was proved in [22] in the case η = 0. The proof can however be straightforwardly adapted to the case η = 0 and leads to 1 N I U , V ; Y S − −−−−− → N,D→∞ inf qu∈[0,1] sup qv∈[0,1] α(1−η)i v (q u /σ 2 )+i u (αq v /σ 2 )+ α 2σ 2 (1−q u )(1−q v ), where i u (γ) = 1 2 log(1 + γ). The supremum in q v can be easily computed, leading to: sup qv∈[0,1] α(1 − η)i v (q u /σ 2 ) + i u (αq v /σ 2 ) + α 2σ 2 (1 − q u )(1 − q v ) = α(1 − η)i v (q u /σ 2 ) + α 2σ 2 (1 − q u ) − 1 2 q u + log(1 − q u ) . 
This proves (13). The fact that f α,σ,η admits a unique minimizer q * u (α, σ, η) comes from Proposition 1. From the limit of the mutual information, one gets the limits of minimal mean squared errors (MMSE) using the "I-MMSE" relation [15]: E U U T − E[U U T |Y , S] 2 − −−−−− → N,D→∞ 1 − q * u (α, σ, η) 2 . Let u be a sample from the posterior distribution of U given Y , S, independently of everything else. Then we deduce This can be done (as in [5]) by adding a small amount of additional side-information to the model of the form Y = √ DU ⊗4 + W , where the entries of the tensor W are i.i.d. standard Gaussian: (W i1,i2,i3,i4 ) 1≤i1,i2,i3,i4≤D i.i.d. ∼ N (0, 1). We then apply the I-MMSE relation with respect to to obtain (15). Technical lemmas We now give the proof of Lemma 1 Proof. Notice that, by Bayes rule, we have (U , u (1) ) (d) = (u (2) , u (1) ). So we have by (14) U , u (1) , u (1) , u (2) − −−−−− → N,D→∞ q * . Now, by Jensen's inequality: (1) , u (2) −q * Y , S 2 ≤ E E u (1) , u (2) −q * Y , S, u (1) 2 ≤ E ( u (1) , u (2) −q * ) 2 . E E u Since E[ u (1) , u (2) (1) , u (2) |Y , S, u (1) ] = u (1) , E[u (2) |Y , S] = u (1) , s u , this leads to E ( s u 2 − q * ) 2 ≤ E ( s u, u (1) − q * ) 2 ≤ E ( u (1) , u (2) − q * ) 2 . We now give a proof of Lemma 2. Proof. In order to prove that A N (v) − B N (v) (1) , . . . u (2) be i.i.d. samples from the posterior distribution of U given Y , S, independently of everything else. Using Lemma 1, we compute:
5,126
1907.03792
2961566780
Semi-supervised learning (SSL) uses unlabeled data for training and has been shown to greatly improve performances when compared to a supervised approach on the labeled data available. This claim depends both on the amount of labeled data available and on the algorithm used. In this paper, we compute analytically the gap between the best fully-supervised approach on labeled data and the best semi-supervised approach using both labeled and unlabeled data. We quantify the best possible increase in performance obtained thanks to the unlabeled data, i.e. we compute the accuracy increase due to the information contained in the unlabeled data. Our work deals with a simple high-dimensional Gaussian mixture model for the data in a Bayesian setting. Our rigorous analysis builds on recent theoretical breakthroughs in high-dimensional inference and a large body of mathematical tools from statistical physics initially developed for spin glasses.
To the best of our knowledge, there are far fewer theoretical works dealing with a semi-supervised setting. @cite_18 studies a mixture model where the estimation problem is essentially reduced to that of estimating the mixing parameter, and shows that the information content of unlabeled examples decreases as classes overlap. More closely related to our work, @cite_3 provides the first information-theoretically tight analysis for inference of latent community structure given a dense graph along with high-dimensional node covariates correlated with the same latent communities. @cite_20 studies a class of graph-oriented semi-supervised learning algorithms in the limit of large and numerous data, a setting similar to ours.
{ "abstract": [ "We observe a training set Q composed of l labeled samples (X_1, θ_1), ..., (X_l, θ_l) and u unlabeled samples X'_1, ..., X'_u. The labels θ_i are independent random variables satisfying Pr{θ_i = 1} = η, Pr{θ_i = 2} = 1 − η. The labeled observations X_i are independently distributed with conditional density f_{θ_i}(·) given θ_i. Let (X_0, θ_0) be a new sample, independently distributed as the samples in the training set. We observe X_0 and we wish to infer the classification θ_0. In this paper we first assume that the distributions f_1(·) and f_2(·) are given and that the mixing parameter is unknown. We show that the relative value of labeled and unlabeled samples in reducing the risk of optimal classifiers is the ratio of the Fisher informations they carry about the parameter η. We then assume that two densities g_1(·) and g_2(·) are given, but we do not know whether g_1(·) = f_1(·) and g_2(·) = f_2(·) or if the opposite holds, nor do we know η. Thus the learning problem consists of both estimating the optimum partition of the observation space and assigning the classifications to the decision regions. Here, we show that labeled samples are necessary to construct a classification rule and that they are exponentially more valuable than unlabeled samples.", "", "We provide the first information theoretical tight analysis for inference of latent community structure given a sparse graph along with high dimensional node covariates, correlated with the same latent communities. Our work bridges recent theoretical breakthroughs in detection of latent community structure without nodes covariates and a large body of empirical work using diverse heuristics for combining node covariates with graphs for inference. The tightness of our analysis implies in particular, the information theoretic necessity of combining the different sources of information. Our analysis holds for networks of large degrees as well as for a Gaussian version of the model." ], "cite_N": [ "@cite_18", "@cite_20", "@cite_3" ], "mid": [ "2131775048", "2963720057", "2883384506" ] }
Asymptotic Bayes risk for Gaussian mixture in a semi-supervised setting
Semi-supervised learning (SSL) has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. The goal of SSL is to leverage large amounts of unlabeled data to improve the performance of supervised learning over small datasets. For unlabeled examples to be informative, assumption has to be made. The cluster assumption states that if two samples belong to the same cluster in the input distribution, then they are likely to belong to the same class. The cluster assumption is the same as the low-density separation assumption: the decision boundary should lie in the low-density region. In this paper, we explore analytically the simplest possible parametric model for the cluster assumption: the two clusters are modeled by mixture of Gaussians in high dimension so that the optimal decision boundary is a hyperplane. Our model can be seen as a classification problem in a semi-supervised setting. Our aim here is to define a model simple enough to be mathematically tractable while being practically relevant and capturing the main properties of a high-dimensional statistical inference problem. Our model has three parameters: the high-dimensionality of the data is captured by α the ratio of the number of samples divided by the ambient dimension; the fraction of labeled data point η and the amount of overlap between the clusters σ 2 . As a function of these three parameters, we compute the best possible accuracy (the Bayes risk) when only labeled data are used or when unlabeled data are also used. As a result, we obtain the added value due to the unlabeled data for the best possible algorithm. In particular, we observe a very clear diminishing return of the labeled data, i.e. the first labeled data points bring much more information than the last ones. Hence the regime with very few labeled data points is a priori a regime favorable to SSL. But in this case, we face in practice the problem of small validation sets [23] which makes hyperparameter tuning impossible. We find that the range of parameters for which SSL clearly outperforms either unsupervised learning or supervised learning on the labeled data is rather narrow. In a case with large overlap between the clusters (σ 2 → ∞), unsupervised learning fails and supervised learning on the labeled data is almost optimal. In a case with small overlap between the clusters (σ 2 → 0), unsupervised learning achieves performances very close to supervised learning with all labels available and supervised learning on the labeled dataset only fails. From a practical perspective, we can try to draw parallels between our results and the state of the art in SSL but we need to keep in mind that our results only give best achievable performances on our toy model. In particular, even in a setting where our results predict that unsupervised learning achieves roughly the same performances as supervised learning with all labels, it might be very useful in practice to use a few labels in addition to all unlabeled data. Such an approach is presented in [8] where extremely good performances are achieved for image classification with only a few labeled data per class and a new SSL algorithm: MixMatch. For example on CIFAR-10, with only 250 labeled images, MixMatch achieves an error rate of 11.08% and with 4000 labeled images, an error rate of 6.24% (to be compared with the 4.17% error rate for the fully supervised training on all 50000 samples). 
These results are aligned with our finding about diminishing returns of labeled data points. We make the following contributions: Bayes risk: to the best of our knowledge, our work is the first analytic computation of the Bayes risk in a high-dimensional Gaussian model in a semi-supervised setting. Rigorous analysis: our analysis builds on a series of recent works [11,4,18,22,5] with tools from information theory and mathematical physics originally developed for the analysis of spin glasses [24,26]. The rest of the paper is organized as follows. Our model and the main result is presented in Section 2. Related work is presented in Section 3. In Section 4, we give an heuristic derivation of the main result and in Section 5, we give a proof sketch while the more technical details are presented in the supplementary material Section 7. We conclude in Section 6 Model and main results We now define our classification problem with two classes. The points Y 1 , . . . , Y N of the dataset are in R D and given by the following process: Y j = V j U + σZ j , 1 ≤ j ≤ N, where U ∼ Unif(S D−1 ), V = (V 1 , . . . , V N ) i.i.d. ∼ Unif(−1, 1) and Z 1 , . . . , Z N i.i.d. ∼ N (0, Id D ) are all independent. In words, the dataset is composed of N points in R D divided into two classes with roughly equal sizes. The points with label V j = +1 are centered around +U ∈ R D and the points with label V j = −1 are centered around −U ∈ R D . The parameter σ controls the level of Gaussian noise around these centers. In a semi-supervised setting, the statistician has access to some labels. We consider a case where each label is revealed with probability η ∈ [0, 1] independently of everything else. To fix notation, the side information is given by the following process: S j = V j with probability η 0 with probability 1 − η. If S j = 0, then the label of the j-th data point is unknown whereas if S j = ±1, it corresponds to the label of the j-th data point. Finally, we consider the high-dimensional setting and all our results will be in a regime where N, D → ∞ while the ratio N/D tends towards a constant α > 0. Note that we are in a high noise regime since the squared norm of the signal is one whereas the squared norm of the noise is σ 2 D ≈ σ 2 N/α where N is the number of observations. To summarize, the three parameters of our model are: σ 2 > 0 the variance of the noise in the dataset, η ∈ [0, 1] the fraction of revealed labels and α > 0 the ratio between the number of data points (both labeled and unlabeled) and the dimension of the ambient space. We also assume that the statistician knows the priors, i.e. the distributions of U , V and Z. The task of the statistician is to use the dataset (Y , S) in order to make a prediction about the label of a new (unseen) data point. More formally, we define: Y new = V new U + σZ new , where V new ∼ Unif(−1, +1), Z new ∼ N (0, Id D ) . We are interested in the minimal achievable error in our model, i.e. the Bayes risk: R * D (η) = inf v P v(Y , S, Y new ) = V new where the infimum is taken over all estimators (measurable functions of Y , S, Y new ). Our main mathematical achievement is an analytic formula for the Bayes risk R * D in the large D limit, see Theorem 1 below. In order to state it, we need to introduce some additional notations. We start with some easy facts about our model. Oracle risk Assume that the statistician knows the center of the clusters, i.e. has access to the "oracle" vector U . 
Then the best classification error would be achieved thanks to the simple thresholding rule sign( U , Y new ), where ., . denotes the Euclidean dot product. In this case, the risk is given by: R oracle = P σ U , Z new > 1 = P σZ > 1) = 1 − Φ 1 σ ,(1) where Φ is the standard Gaussian cumulative distribution function. We have of course R oracle ≤ R * D (η). Fully supervised case Another instructive and simple case is the supervised case where η = 1. Since all the V j 's are known, we can assume wlog that they are all equal to one (multiply each Y j by V j ). More importantly, if we slightly modify the distribution of U by taking U = (U 1 , . . . U D ) i.i.d. ∼ N (0, 1/D), this will not change the results for our model and makes the analysis easier by decorrelating each component. Indeed, denote by Y j (resp. Z j ) the first component of Y j (resp. Z j ) and by U 1 the first component of U . Then we have N scalar noisy observations of the first component of U : Y j = U 1 + σZ j for 1 ≤ j ≤ N , so that we can construct an estimate for U 1 by taking the average of the observations. We get: Y 1 = 1 N N j=1 Y j = U 1 + σ √ N N (0, 1). Doing this for each component of U , we get an estimate of the vector U and we now use it to get an estimate of V new . First define Y = (Y 1 , . . . , Y D ) and consider Y new , Y = V new U , Y + σ Z new , Y , and note that as D → ∞, we have U , Y ≈ DE[U 1 Y 1 ] = DE U 2 1 = 1 and Z new , Y ≈ E Y 2 Z ≈ √ α + σ 2 / √ αN (0, 1), so that we get: Y new , Y ≈ V new + σ √ α + σ 2 √ α N (0, 1). Our main result will actually show that estimating V new with the sign of Y new , Y is optimal so that we get: lim N,D→∞ R * D (1) = P σ √ α + σ 2 √ α Z > 1 = 1 − Φ √ α σ √ α + σ 2 .(2) Unsupervised case In this paper, we concentrate on the case where η > 0. When η = 0, there is no side information and we are in an unsupervised setting studied in [22]. Due to the symmetry of our model, we have R * D = 1/2 because there is no way to guess the right classes ±1. In order to have a well-posed problem, the risk should be redefined as follows: R * D (0) = E Y min s=±1 inf v P (sv(Y , Y new ) = V new |Y ) Although, this measure of performance is not the one studied in [22], we can adapt the argument to show that: lim η→0 lim N,D→∞ R * D (η) = lim N,D→∞ R * D (0).(3) Main result We now state our main result: Theorem 1. Let us define, for α, σ > 0, η ∈ (0, 1], f α,σ,η (q) def = α(1 − η)i v (q/σ 2 ) + α 2σ 2 (1 − q) − 1 2 q + log(1 − q) .(4) Here (5) where a fraction η of the data points have labels and are used with all unlabeled data points. The supervised on full curve corresponds to (2) where all the labels are used. The supervised on labeled curve corresponds to (2) with the parameter α replaced by αη and is the best possible performance when only a fraction η of the data points having labels are used. The unsupervised curve corresponds to (3) where all the data points are used but without any label. Finally, the oracle curve corresponds to (1) where the centers of the clusters are known (corresponding to the case α → ∞). In the left of Figure 1, we clearly see that the first labeled data points (i.e. when η is small) decreases greatly the risk of semi-supervised learning. This corresponds to the diminishing return of the labeled data. In the right plot of Figure 1, we see that in the high-noise regime, unsupervised learning fails and that its risk decreases as soon as σ 2 < 1. i v (γ) = γ − E log cosh( √ γZ 0 + γ) where Z 0 ∼ N (0, 1). 
The function f α,σ,η admits a unique minimizer q * (α, σ, η) on [0, 1) and R * D (η) − −−−−− → N,D→∞ 1 − Φ( q * (α, σ, η)/σ).(5) This phenomena is known as the BBP phase transition [1,2,25]. We see that below this transition, the unlabeled data are of little help as the performance of SSL almost match the performance of supervised learning on labeled data only. Moreover after the transition, unsupervised learning reaches quite quickly the performance of SSL. In other words, the regime most favorable to SSL in term of noise corresponds precisely to the regime around the BBP phase transition where unsupervised learning is still not very good while supervised learning on labeled data saturates. Heuristic derivation of the main result We present now an heuristic derivation of our results, based on the "cavity method" [21] from statistical physics. Let s u = E[U |Y , S] and s v = E[V |Y , S] be the optimal estimators (in term of mean squared error) for estimating U and V . A natural hypothesis is to assume that the correlation s u, U converges as N, D → ∞ to some deterministic limit q * u ∈ [0, 1] and that 1 N s v, V → q * v ∈ R. The conditional expectation s u = E[U |Y , S] is the orthogonal projection (in L 2 sense) of the random vector U onto the subspace of Y , S-measurable random variables. The L 2 norm squared of the projection s u is equal to the scalar product of the vector U with its projection s u: E s u 2 = E s u, U . Assuming that s u 2 also admits a deterministic limit, this limits is then equal to q * u . We get for large N and D, s u 2 s u, U q * u . Analogously we have 1 N s v 2 1 N s v, V q * v . We will show below that q * u and q * v obey some fixed points equations that allow to determine them. As seen above, if we aim at estimating a label V i that we did not observe (i.e. S i = 0) given Y , S and the "oracle" U , we compute the sufficient statistic σN (0, 1). The estimator that minimizes the probability of error Y i = Y i , U = V i +P( v = V i ) is simply v i = sign( Y i ). The one that minimizes the mean squared error (MSE) is v i = E[V i | Y i ] which achieves a MSE of E[(V i − v i ) 2 ] = mmse v (1/σ 2 ) where we define for (V, Z) ∼ Unif(−1, +1) ⊗ N (0, 1) and γ > 0 (see Section 7.1 for more details): mmse v (γ) def = E (V − E[V | √ γV + Z]) 2 . In the case where we do not have access to the oracle U , one can still use s u as a proxy. We repeat the same procedure assuming that s u, Y i is a sufficient statistic for estimating V i . Although this is not strictly true, we shall see, that this leads to the correct fixed points equations for q * u , q * v . Compute s u, Y i = s u, U V i + σ s u, Z i q * u V i + σ s u, Z i . The posterior mean s u is not expected to depend much on the particular point Y i and therefore on Z i . This gives that the random vectors s u and Z i are approximately independent. Hence the distribution of s u, Z i is roughly N (0, q * u ), we recall that s u 2 q * u . We get 1 σ √ q * u s u, Y i q * u /σ 2 V i + Z(6) in law, where Z ∼ N (0, 1). The best (in terms of MSE) estimator v i one can then constructs using s u, Y i achieves a MSE of E[(V i − v i ) 2 ] mmse v (q * u /σ 2 ). We assumed that s u, Y i is a sufficient statistic for estimating V i , therefore v i = s v i . For all the ηN indices i such that S i = V i we have obviously s v i = V i . Hence 1 N E V − s v 2 = 1 N i|Si =0 E (V i − s v i ) 2 = 1 N i|Si =0 E (V i − v i ) 2 1 N i|Si =0 mmse v (q * u /σ 2 ) (1 − η)mmse v (q * u /σ 2 ). 
1907.03792
2961566780
Semi-supervised learning (SSL) uses unlabeled data for training and has been shown to greatly improve performance compared to a supervised approach trained only on the available labeled data. This claim depends both on the amount of labeled data available and on the algorithm used. In this paper, we analytically compute the gap between the best fully supervised approach using only labeled data and the best semi-supervised approach using both labeled and unlabeled data. We quantify the best possible increase in performance obtained thanks to the unlabeled data, i.e., the accuracy increase due to the information contained in the unlabeled data. Our work deals with a simple high-dimensional Gaussian mixture model for the data in a Bayesian setting. Our rigorous analysis builds on recent theoretical breakthroughs in high-dimensional inference and on a large body of mathematical tools from statistical physics initially developed for spin glasses.
In contrast, there are a number of practical works and proposed algorithms for semi-supervised learning based on transductive models @cite_4 , graph-based methods @cite_11 or generative modeling @cite_1 ; see the surveys @cite_26 and @cite_5 . SSL methods that train a neural network with an additional loss term enforcing consistency regularization are presented in @cite_19 , @cite_12 , @cite_6 . We refer in particular to the recent work @cite_8 for an overview of these SSL methods (currently the state of the art for SSL on image classification datasets). The MixMatch algorithm introduced in @cite_15 obtains impressive results on all standard image benchmarks. Given these recent improvements, natural questions arise: what is the best achievable performance? To what extent can these improvements be generalized to other domains? We believe that our work is a first step toward a theoretical understanding of these questions.
{ "abstract": [ "Door lock apparatus in which a door latch mechanism is operated by inner and outer door handles coupled to a latch shaft extending through the latch mechanism. Handles are coupled to ends of latch shaft by coupling devices enabling door to be locked from the inside to prevent entry from the outside but can still be opened from the inside by normal operation of outside handle. Inside coupling device has limited lost-motion which is used to operate cam device to unlock the door on actuation of inner handles.", "We present a new method for transductive learning, which can be seen as a transductive version of the k nearest-neighbor classifier. Unlike for many other transductive learning methods, the training problem has a meaningful relaxation that can be solved globally optimally using spectral methods. We propose an algorithm that robustly achieves good generalization performance and that can be trained efficiently. A key advantage of the algorithm is that it does not require additional heuristics to avoid unbalanced splits. Furthermore, we show a connection to transductive Support Vector Machines, and that an effective Co-Training algorithm arises as a special case.", "Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. SSL algorithms based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that these algorithms would face in real-world applications. After creating a unified reimplementation of various widely-used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that the performance of simple baselines which do not use unlabeled data is often underreported, that SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and that performance can degrade substantially when the unlabeled dataset contains out-of-class examples. To help guide SSL research towards real-world applicability, we make our unified reimplemention and evaluation platform publicly available.", "Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on a manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low dimensional manifold embedded in a higher dimensional space. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality preserving properties and a natural connection to clustering. Several applications are considered.", "We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets.", "We consider the semi-supervised learning problem, where a decision rule is to be learned from labeled and unlabeled data. 
In this framework, we motivate minimum entropy regularization, which enables to incorporate unlabeled data in the standard supervised learning. Our approach includes other approaches to the semi-supervised problem as particular or limiting cases. A series of experiments illustrates that the proposed solution benefits from unlabeled data. The method challenges mixture models when the data are sampled from the distribution class spanned by the generative model. The performances are definitely in favor of minimum entropy regularization when generative models are misspecified, and the weighting of unlabeled data provides robustness to the violation of the \"cluster assumption\". Finally, we also illustrate that the method can also be far superior to manifold learning in high dimension spaces.", "", "Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. In this work, we unify the current dominant approaches for semi-supervised learning to produce a new algorithm, MixMatch, that works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp. We show that MixMatch obtains state-of-the-art results by a large margin across many datasets and labeled data amounts. For example, on CIFAR-10 with 250 labels, we reduce error rate by a factor of 4 (from 38 to 11 ) and by a factor of 2 on STL-10. We also demonstrate how MixMatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy. Finally, we perform an ablation study to tease apart which components of MixMatch are most important for its success.", "The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new models that allow for effective generalisation from small labelled data sets to large unlabelled ones. Generative approaches have thus far been either inflexible, inefficient or non-scalable. We show that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.", "An approach to semi-supervised learning is proposed that is based on a Gaussian random field model. Labeled and unlabeled data are represented as vertices in a weighted graph, with edge weights encoding the similarity between instances. The learning problem is then formulated in terms of a Gaussian random field on this graph, where the mean of the field is characterized in terms of harmonic functions, and is efficiently obtained using matrix methods or belief propagation. The resulting learning algorithms have intimate connections with random walks, electric networks, and spectral graph theory. We discuss methods to incorporate class priors and the predictions of classifiers obtained by supervised learning. We also propose a method of parameter learning by entropy minimization, and show the algorithm's ability to perform feature selection. Promising experimental results are presented for synthetic data, digit classification, and text classification tasks." 
], "cite_N": [ "@cite_26", "@cite_4", "@cite_8", "@cite_1", "@cite_6", "@cite_19", "@cite_5", "@cite_15", "@cite_12", "@cite_11" ], "mid": [ "2136504847", "2111557120", "2963956526", "2156718197", "2921087533", "2145494108", "", "2943865428", "2108501770", "2139823104" ] }
Asymptotic Bayes risk for Gaussian mixture in a semi-supervised setting
Semi-supervised learning (SSL) has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. The goal of SSL is to leverage large amounts of unlabeled data to improve the performance of supervised learning over small datasets. For unlabeled examples to be informative, assumption has to be made. The cluster assumption states that if two samples belong to the same cluster in the input distribution, then they are likely to belong to the same class. The cluster assumption is the same as the low-density separation assumption: the decision boundary should lie in the low-density region. In this paper, we explore analytically the simplest possible parametric model for the cluster assumption: the two clusters are modeled by mixture of Gaussians in high dimension so that the optimal decision boundary is a hyperplane. Our model can be seen as a classification problem in a semi-supervised setting. Our aim here is to define a model simple enough to be mathematically tractable while being practically relevant and capturing the main properties of a high-dimensional statistical inference problem. Our model has three parameters: the high-dimensionality of the data is captured by α the ratio of the number of samples divided by the ambient dimension; the fraction of labeled data point η and the amount of overlap between the clusters σ 2 . As a function of these three parameters, we compute the best possible accuracy (the Bayes risk) when only labeled data are used or when unlabeled data are also used. As a result, we obtain the added value due to the unlabeled data for the best possible algorithm. In particular, we observe a very clear diminishing return of the labeled data, i.e. the first labeled data points bring much more information than the last ones. Hence the regime with very few labeled data points is a priori a regime favorable to SSL. But in this case, we face in practice the problem of small validation sets [23] which makes hyperparameter tuning impossible. We find that the range of parameters for which SSL clearly outperforms either unsupervised learning or supervised learning on the labeled data is rather narrow. In a case with large overlap between the clusters (σ 2 → ∞), unsupervised learning fails and supervised learning on the labeled data is almost optimal. In a case with small overlap between the clusters (σ 2 → 0), unsupervised learning achieves performances very close to supervised learning with all labels available and supervised learning on the labeled dataset only fails. From a practical perspective, we can try to draw parallels between our results and the state of the art in SSL but we need to keep in mind that our results only give best achievable performances on our toy model. In particular, even in a setting where our results predict that unsupervised learning achieves roughly the same performances as supervised learning with all labels, it might be very useful in practice to use a few labels in addition to all unlabeled data. Such an approach is presented in [8] where extremely good performances are achieved for image classification with only a few labeled data per class and a new SSL algorithm: MixMatch. For example on CIFAR-10, with only 250 labeled images, MixMatch achieves an error rate of 11.08% and with 4000 labeled images, an error rate of 6.24% (to be compared with the 4.17% error rate for the fully supervised training on all 50000 samples). 
These results are aligned with our finding about diminishing returns of labeled data points. We make the following contributions: Bayes risk: to the best of our knowledge, our work is the first analytic computation of the Bayes risk in a high-dimensional Gaussian model in a semi-supervised setting. Rigorous analysis: our analysis builds on a series of recent works [11,4,18,22,5] with tools from information theory and mathematical physics originally developed for the analysis of spin glasses [24,26]. The rest of the paper is organized as follows. Our model and the main result is presented in Section 2. Related work is presented in Section 3. In Section 4, we give an heuristic derivation of the main result and in Section 5, we give a proof sketch while the more technical details are presented in the supplementary material Section 7. We conclude in Section 6 Model and main results We now define our classification problem with two classes. The points Y 1 , . . . , Y N of the dataset are in R D and given by the following process: Y j = V j U + σZ j , 1 ≤ j ≤ N, where U ∼ Unif(S D−1 ), V = (V 1 , . . . , V N ) i.i.d. ∼ Unif(−1, 1) and Z 1 , . . . , Z N i.i.d. ∼ N (0, Id D ) are all independent. In words, the dataset is composed of N points in R D divided into two classes with roughly equal sizes. The points with label V j = +1 are centered around +U ∈ R D and the points with label V j = −1 are centered around −U ∈ R D . The parameter σ controls the level of Gaussian noise around these centers. In a semi-supervised setting, the statistician has access to some labels. We consider a case where each label is revealed with probability η ∈ [0, 1] independently of everything else. To fix notation, the side information is given by the following process: S j = V j with probability η 0 with probability 1 − η. If S j = 0, then the label of the j-th data point is unknown whereas if S j = ±1, it corresponds to the label of the j-th data point. Finally, we consider the high-dimensional setting and all our results will be in a regime where N, D → ∞ while the ratio N/D tends towards a constant α > 0. Note that we are in a high noise regime since the squared norm of the signal is one whereas the squared norm of the noise is σ 2 D ≈ σ 2 N/α where N is the number of observations. To summarize, the three parameters of our model are: σ 2 > 0 the variance of the noise in the dataset, η ∈ [0, 1] the fraction of revealed labels and α > 0 the ratio between the number of data points (both labeled and unlabeled) and the dimension of the ambient space. We also assume that the statistician knows the priors, i.e. the distributions of U , V and Z. The task of the statistician is to use the dataset (Y , S) in order to make a prediction about the label of a new (unseen) data point. More formally, we define: Y new = V new U + σZ new , where V new ∼ Unif(−1, +1), Z new ∼ N (0, Id D ) . We are interested in the minimal achievable error in our model, i.e. the Bayes risk: R * D (η) = inf v P v(Y , S, Y new ) = V new where the infimum is taken over all estimators (measurable functions of Y , S, Y new ). Our main mathematical achievement is an analytic formula for the Bayes risk R * D in the large D limit, see Theorem 1 below. In order to state it, we need to introduce some additional notations. We start with some easy facts about our model. Oracle risk Assume that the statistician knows the center of the clusters, i.e. has access to the "oracle" vector U . 
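As a brief aside before the oracle-risk computation that follows, the data-generating process and erasure side information just defined are easy to simulate. The NumPy sketch below is only illustrative; the function name, seed and parameter values are our own choices and not part of the paper.

```python
import numpy as np

def simulate_mixture(N, D, sigma, eta, seed=0):
    """Simulate the model defined above: Y_j = V_j * U + sigma * Z_j with
    U uniform on the unit sphere, and reveal each label with probability eta."""
    rng = np.random.default_rng(seed)
    U = rng.normal(size=D)
    U /= np.linalg.norm(U)                     # U ~ Unif(S^{D-1})
    V = rng.choice([-1.0, 1.0], size=N)        # V_j ~ Unif({-1, +1})
    Y = V[:, None] * U[None, :] + sigma * rng.normal(size=(N, D))
    S = np.where(rng.random(N) < eta, V, 0.0)  # side information; 0 means "label erased"
    return Y, S, V, U

# alpha = N / D fixes the high-dimensional regime considered in the text.
Y, S, V, U = simulate_mixture(N=2000, D=1000, sigma=1.0, eta=0.1)
```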
Then the best classification error would be achieved thanks to the simple thresholding rule sign( U , Y new ), where ., . denotes the Euclidean dot product. In this case, the risk is given by: R oracle = P σ U , Z new > 1 = P σZ > 1) = 1 − Φ 1 σ ,(1) where Φ is the standard Gaussian cumulative distribution function. We have of course R oracle ≤ R * D (η). Fully supervised case Another instructive and simple case is the supervised case where η = 1. Since all the V j 's are known, we can assume wlog that they are all equal to one (multiply each Y j by V j ). More importantly, if we slightly modify the distribution of U by taking U = (U 1 , . . . U D ) i.i.d. ∼ N (0, 1/D), this will not change the results for our model and makes the analysis easier by decorrelating each component. Indeed, denote by Y j (resp. Z j ) the first component of Y j (resp. Z j ) and by U 1 the first component of U . Then we have N scalar noisy observations of the first component of U : Y j = U 1 + σZ j for 1 ≤ j ≤ N , so that we can construct an estimate for U 1 by taking the average of the observations. We get: Y 1 = 1 N N j=1 Y j = U 1 + σ √ N N (0, 1). Doing this for each component of U , we get an estimate of the vector U and we now use it to get an estimate of V new . First define Y = (Y 1 , . . . , Y D ) and consider Y new , Y = V new U , Y + σ Z new , Y , and note that as D → ∞, we have U , Y ≈ DE[U 1 Y 1 ] = DE U 2 1 = 1 and Z new , Y ≈ E Y 2 Z ≈ √ α + σ 2 / √ αN (0, 1), so that we get: Y new , Y ≈ V new + σ √ α + σ 2 √ α N (0, 1). Our main result will actually show that estimating V new with the sign of Y new , Y is optimal so that we get: lim N,D→∞ R * D (1) = P σ √ α + σ 2 √ α Z > 1 = 1 − Φ √ α σ √ α + σ 2 .(2) Unsupervised case In this paper, we concentrate on the case where η > 0. When η = 0, there is no side information and we are in an unsupervised setting studied in [22]. Due to the symmetry of our model, we have R * D = 1/2 because there is no way to guess the right classes ±1. In order to have a well-posed problem, the risk should be redefined as follows: R * D (0) = E Y min s=±1 inf v P (sv(Y , Y new ) = V new |Y ) Although, this measure of performance is not the one studied in [22], we can adapt the argument to show that: lim η→0 lim N,D→∞ R * D (η) = lim N,D→∞ R * D (0).(3) Main result We now state our main result: Theorem 1. Let us define, for α, σ > 0, η ∈ (0, 1], f α,σ,η (q) def = α(1 − η)i v (q/σ 2 ) + α 2σ 2 (1 − q) − 1 2 q + log(1 − q) .(4) Here (5) where a fraction η of the data points have labels and are used with all unlabeled data points. The supervised on full curve corresponds to (2) where all the labels are used. The supervised on labeled curve corresponds to (2) with the parameter α replaced by αη and is the best possible performance when only a fraction η of the data points having labels are used. The unsupervised curve corresponds to (3) where all the data points are used but without any label. Finally, the oracle curve corresponds to (1) where the centers of the clusters are known (corresponding to the case α → ∞). In the left of Figure 1, we clearly see that the first labeled data points (i.e. when η is small) decreases greatly the risk of semi-supervised learning. This corresponds to the diminishing return of the labeled data. In the right plot of Figure 1, we see that in the high-noise regime, unsupervised learning fails and that its risk decreases as soon as σ 2 < 1. i v (γ) = γ − E log cosh( √ γZ 0 + γ) where Z 0 ∼ N (0, 1). 
The function f α,σ,η admits a unique minimizer q * (α, σ, η) on [0, 1) and R * D (η) − −−−−− → N,D→∞ 1 − Φ( q * (α, σ, η)/σ).(5) This phenomena is known as the BBP phase transition [1,2,25]. We see that below this transition, the unlabeled data are of little help as the performance of SSL almost match the performance of supervised learning on labeled data only. Moreover after the transition, unsupervised learning reaches quite quickly the performance of SSL. In other words, the regime most favorable to SSL in term of noise corresponds precisely to the regime around the BBP phase transition where unsupervised learning is still not very good while supervised learning on labeled data saturates. Heuristic derivation of the main result We present now an heuristic derivation of our results, based on the "cavity method" [21] from statistical physics. Let s u = E[U |Y , S] and s v = E[V |Y , S] be the optimal estimators (in term of mean squared error) for estimating U and V . A natural hypothesis is to assume that the correlation s u, U converges as N, D → ∞ to some deterministic limit q * u ∈ [0, 1] and that 1 N s v, V → q * v ∈ R. The conditional expectation s u = E[U |Y , S] is the orthogonal projection (in L 2 sense) of the random vector U onto the subspace of Y , S-measurable random variables. The L 2 norm squared of the projection s u is equal to the scalar product of the vector U with its projection s u: E s u 2 = E s u, U . Assuming that s u 2 also admits a deterministic limit, this limits is then equal to q * u . We get for large N and D, s u 2 s u, U q * u . Analogously we have 1 N s v 2 1 N s v, V q * v . We will show below that q * u and q * v obey some fixed points equations that allow to determine them. As seen above, if we aim at estimating a label V i that we did not observe (i.e. S i = 0) given Y , S and the "oracle" U , we compute the sufficient statistic σN (0, 1). The estimator that minimizes the probability of error Y i = Y i , U = V i +P( v = V i ) is simply v i = sign( Y i ). The one that minimizes the mean squared error (MSE) is v i = E[V i | Y i ] which achieves a MSE of E[(V i − v i ) 2 ] = mmse v (1/σ 2 ) where we define for (V, Z) ∼ Unif(−1, +1) ⊗ N (0, 1) and γ > 0 (see Section 7.1 for more details): mmse v (γ) def = E (V − E[V | √ γV + Z]) 2 . In the case where we do not have access to the oracle U , one can still use s u as a proxy. We repeat the same procedure assuming that s u, Y i is a sufficient statistic for estimating V i . Although this is not strictly true, we shall see, that this leads to the correct fixed points equations for q * u , q * v . Compute s u, Y i = s u, U V i + σ s u, Z i q * u V i + σ s u, Z i . The posterior mean s u is not expected to depend much on the particular point Y i and therefore on Z i . This gives that the random vectors s u and Z i are approximately independent. Hence the distribution of s u, Z i is roughly N (0, q * u ), we recall that s u 2 q * u . We get 1 σ √ q * u s u, Y i q * u /σ 2 V i + Z(6) in law, where Z ∼ N (0, 1). The best (in terms of MSE) estimator v i one can then constructs using s u, Y i achieves a MSE of E[(V i − v i ) 2 ] mmse v (q * u /σ 2 ). We assumed that s u, Y i is a sufficient statistic for estimating V i , therefore v i = s v i . For all the ηN indices i such that S i = V i we have obviously s v i = V i . Hence 1 N E V − s v 2 = 1 N i|Si =0 E (V i − s v i ) 2 = 1 N i|Si =0 E (V i − v i ) 2 1 N i|Si =0 mmse v (q * u /σ 2 ) (1 − η)mmse v (q * u /σ 2 ). 
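Before the derivation continues below, note that the scalar quantity mmse_v(γ) introduced here is simple to estimate numerically. The sketch uses the standard fact that, for a uniform ±1 prior observed through √γ V + Z, the posterior mean is tanh(√γ y) (consistent with the supplementary computations later in the text); sample sizes and the seed are arbitrary choices of ours.

```python
import numpy as np

def mmse_v(gamma, n_samples=200_000, seed=0):
    """Monte Carlo estimate of mmse_v(gamma) = E[(V - E[V | sqrt(gamma) V + Z])^2]
    for V ~ Unif{-1, +1} and Z ~ N(0, 1).  For this binary prior the posterior
    mean given an observation y is tanh(sqrt(gamma) * y)."""
    rng = np.random.default_rng(seed)
    V = rng.choice([-1.0, 1.0], size=n_samples)
    Y = np.sqrt(gamma) * V + rng.normal(size=n_samples)
    post_mean = np.tanh(np.sqrt(gamma) * Y)
    return np.mean((V - post_mean) ** 2)

print(mmse_v(0.0))   # ~1.0: no information, the posterior mean is 0
print(mmse_v(10.0))  # close to 0: the observation essentially reveals the label
```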
Since we have 1 N E V − s v 2 1 − q * v , we get 1 − q * v (1 − η)mmse v (q * u /σ 2 ). Doing the same reasoning with s u instead of s v leads to 1 − q * u = mmse u (αq * v /σ 2 ), where mmse u (γ) = E[(U − E[U | √ γU + Z]) 2 ] for U, Z i.i.d. ∼ N (0, 1). As shown in Section 7.1, we have mmse u (γ) = 1 1 + γ . We conclude that (q * u , q * v ) satisfies the following fixed point equations: q * v = 1 − (1 − η)mmse v (q * u /σ 2 )(7)q * u = αq * v σ 2 + αq * v .(8) We introduce the following mutual information i v (γ) = I(V 0 ; √ γV 0 + Z 0 )(9) where V 0 ∼ Unif(−1, +1) and Z 0 ∼ N (0, 1) are independent. An elementary computation leads to (see Section 7.1) i v (γ) = γ − E log cosh( √ γZ 0 + γ).(10) By the "I-MMSE" Theorem from [15], i v is related to mmse v : i v (γ) = mmse v (γ).(11) Let us compute the derivative of f α,σ,η defined by (4), using (11): f α,σ,η (q) = α 2σ 2 (1 − η)mmse v (q/σ 2 ) − α 2σ 2 + q 2(1 − q) . Using (7)- (8), one verifies easily that f α,σ,η (q * u ) = 0. By Proposition 1, f α,σ,η admits a unique critical point on [0, 1) which is its unique minimizer: q * u is therefore the minimizer of f α,σ,η . If we now want to estimate V new from Y , S and Y new we assume, as above that s u, Y new is a sufficient statistic. As for (6), we have 1 σ √ q * u s u, Y new q * u /σ 2 V new + Z in law, where Z ∼ N (0, 1) is independent of V new . The Bayes classifier is then v = sign( u, Y new ),(12) hence R * D (η) = P(V new = v) 1 − Φ( q * u /σ) , which is the statement of our main Theorem 1 above. Proof sketch Theorem 1 follows from Theorem 2 below. From now we simply write q * instead of q * (α, σ, η). The next theorem computes the limit of the log-likelihood ratio. Theorem 2. Conditionally on V new = ±1, log P (V new = +1|Y , S, Y new ) P (V new = −1|Y , S, Y new ) (d) − −−−−− → N,D→∞ N (±2q * /σ 2 , 4q * /σ 2 ). Proof. Let us look at the posterior distribution of V new , U given Y , S, Y new , i.e. From Bayes rule we get P (V new = +1|Y , S, Y new ) P (V new = −1|Y , S, Y new ) = exp − 1 2σ 2 Y new − u 2 dP (u|Y , S) exp − 1 2σ 2 Y new + u 2 dP (u|Y , S) = exp 1 σ 2 Y new , u dP (u|Y , S) exp − 1 σ 2 Y new , u dP (u|Y , S) Let s u = E[U |Y , S]. The following lemma is proved in the supplementary material, see Section 7.3. (1) , u (2) For v ∈ {−1, +1} we define Lemma 1. Let u A N (v) = exp v σ 2 Y new , u dP (u|Y , S) B N (v) = exp v σ s u, Z new + vV new σ 2 q * . Using Lemma 1, we prove the following lemma in Section 7.3 Lemma 2. For v = ±1, A N (v) − B N (v) L 2 − −−−−− → N,D→∞ 0. Since | log A N (v) − log B N (v)| ≤ (A N (v) −1 + B N (v) −1 )(A N (v) − B N (v)) , we have by Cauchy-Schwarz inequality: E| log A N (v)−log B N (v)| ≤ √ 2 E A N (v) −2 +B N (v) −2 1/2 E (A N (v)−B N (v)) 2 1/2 − −−−−− → N,D→∞ 0, using Lemma 2 (one can verify easily that the first term of the product above is O(1)). We get log A N (v) − log B N (v) L 1 − −−−−− → N,D→∞ 0, hence log P (V new = +1|Y , S, Y new ) P (V new = −1|Y , S, Y new ) − 2 σ s u, Z new + 2 σ 2 q * V new L 1 − −−−−− → N,D→∞ 0. s u is independent of (V new , Z new ) and by Lemma 1 we have s u 2 → q * . Consequently s u, Z new (d) − −−−−− → N,D→∞ N (0, q * ) and we conclude: log P (V new = +1|Y , S, Y new ) P (V new = −1|Y , S, Y new ) (d) − −−−−− → N,D→∞ 2 σ Z 0 + 2 σ 2 q * V new where Z 0 ∼ N (0, q * ) is independent of V new . Conclusion We analyzed a simple high-dimensional Gaussian mixture model in a semi-supervised setting and computed the associated Bayes risk. 
In our model, we are able to compute the best possible accuracy of semi-supervised learning using both labeled and unlabeled data as well as the best possible performances of supervised learning using only the labeled data and unsupervised learning using all data but without any label. This allows us to quantify the added value of unlabeled data. When the clusters are well separated (probably the most realistic setting), we find that the value of unlabeled data is dominating. Labeled data can almost be ignored as unsupervised learning achieved roughly the same performance as semi-supervised learning. Nevertheless, using a few labeled data is often very helpful in practice as shown by the recent MixMatch algorithm [8]. We believe our main Theorem 1 gives new insights for semi-supervised learning and we designed our model with a focus on simplicity. However, our proof technique is very general and can handle a much more complex model. For example, we can deal with classes of different sizes by changing the prior of V new . Another extension for which our proof carries over consists in modifying the channel for the side information. Here, we considered the erasure channel corresponding to the standard SSL setting but our proof will still work for other channel like the binary symmetric channel or the Z channel corresponding to a setting with noisy labels. Supplementary material Gaussian channel We give here some easy computation for the Gaussian channel: Y = √ γU + Z, where Z ∼ N (0, 1) is independent of U . We first consider the case where U ∼ N (0, 1). We define mmse u (γ) = E (U − E [U |Y ]) 2 . Since, we are dealing with Gaussian random variables, E [U |Y ] is simply the orthogonal projection of U on Y : E [U |Y ] = E [U Y ] E [Y 2 ] Y = √ γ 1 + γ Y. Hence, we have mmse u (γ) = E U − γ 1 + γ U − √ γ 1 + γ Z 2 = 1 1 + γ . Thanks to the I-MMSE relation [15], we have 1 2 mmse u (γ) = ∂ ∂γ I(U ; Y ). For γ = 0, U and Y are independent: I(U ; Y ) γ=0 = 0, so that we get I(U ; Y ) = 1 2 log(1 + γ). We now consider the case where U ∼ Unif(−1, +1). We define i v (γ) = I(U ; Y ). Recall that I(U ; Y ) = E log dP (U,Y ) dP U ⊗ dP Y (U, Y ). And here, we have dP (U,Y ) dP U ⊗ dP Y (U, Y ) = e −1/2(Y − √ γU ) 2 e −1/2(Y − √ γu) 2 dP U (u) . Hence, we have i v (γ) = −E log dP U (u) exp ( √ γ(u − U )Y ) = √ γE[U Y ] − E log cosh( √ γY ) = γ − E log cosh ( √ γZ + γ) . Thanks to the I-MMSE relation, we have: 1 2 mmse v (γ) = i v (γ) = 1 − E 1 2 √ γ Z + 1 tanh ( √ γZ + γ) = 1 − E tanh √ λZ + λ − 1 2 E tanh √ λZ + λ = 1 2 − E tanh √ λZ + λ + 1 2 E tanh 2 √ λZ + λ = 1 2 1 − E tanh √ λZ + λ , so that we have mmse v (γ) = 1 − E tanh √ λZ + λ . Convergence of the mutual information Theorem 3. For all α, σ > 0, η ∈ (0, 1], 1) f α,σ,η (q). (13) Further, this minimum is achieved at a unique point q * (α, σ, η) and 1 N I U , V ; Y S − −−−−− → N,D→∞ min q∈[0,u, U − −−−−− → N,D→∞ q * (α, σ, η),(14) where u is a sample from the posterior distribution of U given Y , S, independently of everything else. Proof. The limit (13) was proved in [22] in the case η = 0. The proof can however be straightforwardly adapted to the case η = 0 and leads to 1 N I U , V ; Y S − −−−−− → N,D→∞ inf qu∈[0,1] sup qv∈[0,1] α(1−η)i v (q u /σ 2 )+i u (αq v /σ 2 )+ α 2σ 2 (1−q u )(1−q v ), where i u (γ) = 1 2 log(1 + γ). The supremum in q v can be easily computed, leading to: sup qv∈[0,1] α(1 − η)i v (q u /σ 2 ) + i u (αq v /σ 2 ) + α 2σ 2 (1 − q u )(1 − q v ) = α(1 − η)i v (q u /σ 2 ) + α 2σ 2 (1 − q u ) − 1 2 q u + log(1 − q u ) . 
This proves (13). The fact that f α,σ,η admits a unique minimizer q * u (α, σ, η) comes from Proposition 1. From the limit of the mutual information, one gets the limits of minimal mean squared errors (MMSE) using the "I-MMSE" relation [15]: E U U T − E[U U T |Y , S] 2 − −−−−− → N,D→∞ 1 − q * u (α, σ, η) 2 . Let u be a sample from the posterior distribution of U given Y , S, independently of everything else. Then we deduce This can be done (as in [5]) by adding a small amount of additional side-information to the model of the form Y = √ DU ⊗4 + W , where the entries of the tensor W are i.i.d. standard Gaussian: (W i1,i2,i3,i4 ) 1≤i1,i2,i3,i4≤D i.i.d. ∼ N (0, 1). We then apply the I-MMSE relation with respect to to obtain (15). Technical lemmas We now give the proof of Lemma 1 Proof. Notice that, by Bayes rule, we have (U , u (1) ) (d) = (u (2) , u (1) ). So we have by (14) U , u (1) , u (1) , u (2) − −−−−− → N,D→∞ q * . Now, by Jensen's inequality: (1) , u (2) −q * Y , S 2 ≤ E E u (1) , u (2) −q * Y , S, u (1) 2 ≤ E ( u (1) , u (2) −q * ) 2 . E E u Since E[ u (1) , u (2) (1) , u (2) |Y , S, u (1) ] = u (1) , E[u (2) |Y , S] = u (1) , s u , this leads to E ( s u 2 − q * ) 2 ≤ E ( s u, u (1) − q * ) 2 ≤ E ( u (1) , u (2) − q * ) 2 . We now give a proof of Lemma 2. Proof. In order to prove that A N (v) − B N (v) (1) , . . . u (2) be i.i.d. samples from the posterior distribution of U given Y , S, independently of everything else. Using Lemma 1, we compute:
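Stepping back from the technical lemmas, the limiting formulas above can be evaluated numerically. The sketch below minimizes the function f of eq. (4) on a grid, estimating i_v by Monte Carlo, and compares the semi-supervised risk of Theorem 1 with the supervised-on-labeled-data risk (eq. (2) with α replaced by αη); the grid resolution, sample size and parameter values are illustrative choices of ours, not the paper's.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
Z = rng.normal(size=200_000)          # Monte Carlo samples for E log cosh(.)

def i_v(gamma):
    """i_v(gamma) = gamma - E log cosh(sqrt(gamma) Z + gamma), cf. eq. (10)."""
    return gamma - np.mean(np.log(np.cosh(np.sqrt(gamma) * Z + gamma)))

def f(q, alpha, sigma, eta):
    """The potential of eq. (4), whose minimizer q* gives the limiting Bayes risk."""
    return (alpha * (1 - eta) * i_v(q / sigma**2)
            + alpha / (2 * sigma**2) * (1 - q)
            - 0.5 * (q + np.log(1 - q)))

def limiting_risks(alpha, sigma, eta, grid=np.linspace(0.0, 0.999, 2000)):
    q_star = grid[np.argmin([f(q, alpha, sigma, eta) for q in grid])]
    ssl = 1 - norm.cdf(np.sqrt(q_star) / sigma)        # Theorem 1 / eq. (5)
    a = alpha * eta                                    # supervised on labeled data only
    sup_lab = 1 - norm.cdf(np.sqrt(a) / (sigma * np.sqrt(a + sigma**2)))
    return ssl, sup_lab

print(limiting_risks(alpha=2.0, sigma=1.0, eta=0.1))
```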
5,126
1907.03670
2953433119
In this paper, we propose the part-aware and aggregation neural network (Part-A^2 net) for 3D object detection from point clouds. The framework consists of a part-aware stage and a part-aggregation stage. First, the part-aware stage learns to simultaneously predict coarse 3D proposals and accurate intra-object part locations using the free-of-charge supervision derived from 3D ground-truth boxes. The predicted intra-object part locations within the same proposal are grouped by our newly designed RoI-aware point cloud pooling module, which results in an effective representation for encoding the features of 3D proposals. The part-aggregation stage then learns to re-score each box and refine its location based on the pooled part locations. We present extensive experiments on the KITTI 3D object detection dataset, which demonstrate that both the predicted intra-object part locations and the proposed RoI-aware point cloud pooling scheme benefit 3D object detection, and that our Part-A^2 net outperforms state-of-the-art methods while using only point cloud data.
Point cloud feature learning for 3D object detection. There are generally three ways of learning features from point clouds for 3D detection. @cite_32 @cite_27 @cite_0 @cite_8 @cite_12 projected the point cloud onto a bird's-eye-view map and applied 2D CNNs for feature extraction. @cite_34 @cite_42 @cite_41 applied PointNet @cite_1 @cite_31 to learn point cloud features directly from the raw point cloud. @cite_9 proposed VoxelNet, and @cite_20 applied sparse convolution to speed up VoxelNet for feature learning. Inspired by VoxelNet, we design a UNet-like @cite_39 backbone network with sparse convolution and deconvolution to extract discriminative point features for predicting intra-object part locations and for 3D object detection.
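To make the first family of methods concrete, a bird's-eye-view projection can be as simple as scattering points into a 2D grid before running a 2D CNN on it. The NumPy sketch below is a generic illustration of that idea, not the exact encoding used by any of the cited detectors; the ranges, cell size and channels are our own choices.

```python
import numpy as np

def points_to_bev(points, x_range=(0.0, 70.0), y_range=(-40.0, 40.0), cell=0.1):
    """Scatter an (N, 3) point cloud into a bird's-eye-view grid with two
    channels (occupancy, max height); a 2D CNN can then run on this map."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[keep], y[keep], z[keep]
    H = int((x_range[1] - x_range[0]) / cell)
    W = int((y_range[1] - y_range[0]) / cell)
    ix = ((x - x_range[0]) / cell).astype(int)
    iy = ((y - y_range[0]) / cell).astype(int)
    occ = np.zeros((H, W), dtype=np.float32)
    occ[ix, iy] = 1.0
    height = np.full((H, W), -np.inf, dtype=np.float32)
    np.maximum.at(height, (ix, iy), z)         # per-cell maximum z
    height[occ == 0] = 0.0                     # empty cells get a neutral value
    return np.stack([occ, height])

bev = points_to_bev(np.random.rand(10000, 3) * [70.0, 80.0, 3.0] + [0.0, -40.0, -1.0])
```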
{ "abstract": [ "", "", "Accurate detection of objects in 3D point clouds is a central problem in many applications, such as autonomous navigation, housekeeping robots, and augmented virtual reality. To interface a highly sparse LiDAR point cloud with a region proposal network (RPN), most existing efforts have focused on hand-crafted feature representations, for example, a bird's eye view projection. In this work, we remove the need of manual feature engineering for 3D point clouds and propose VoxelNet, a generic 3D detection network that unifies feature extraction and bounding box prediction into a single stage, end-to-end trainable deep network. Specifically, VoxelNet divides a point cloud into equally spaced 3D voxels and transforms a group of points within each voxel into a unified feature representation through the newly introduced voxel feature encoding (VFE) layer. In this way, the point cloud is encoded as a descriptive volumetric representation, which is then connected to a RPN to generate detections. Experiments on the KITTI car detection benchmark show that VoxelNet outperforms the state-of-the-art LiDAR based 3D detection methods by a large margin. Furthermore, our network learns an effective discriminative representation of objects with various geometries, leading to encouraging results in 3D detection of pedestrians and cyclists, based on only LiDAR.", "", "Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.", "This paper aims at high-accuracy 3D object detection in autonomous driving scenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LIDAR point cloud and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the birds eye view representation of 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25 and 30 AP on the tasks of 3D localization and 3D detection. In addition, for 2D detection, our approach obtains 14.9 higher AP than the state-of-the-art on the hard data among the LIDAR-based methods.", "There is large consent that successful training of deep networks requires many thousand annotated training samples. 
In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net .", "We address the problem of real-time 3D object detection from point clouds in the context of autonomous driving. Speed is critical as detection is a necessary component for safety. Existing approaches are, however, expensive in computation due to high dimensionality of point clouds. We utilize the 3D data more efficiently by representing the scene from the Bird's Eye View (BEV), and propose PIXOR, a proposal-free, single-stage detector that outputs oriented 3D object estimates decoded from pixel-wise neural network predictions. The input representation, network architecture, and model optimization are specially designed to balance high accuracy and real-time efficiency. We validate PIXOR on two datasets: the KITTI BEV object detection benchmark, and a large-scale 3D vehicle detection benchmark. In both datasets we show that the proposed detector surpasses other state-of-the-art methods notably in terms of Average Precision (AP), while still runs at 10 FPS.", "We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios. The proposed neural network architecture uses LIDAR point clouds and RGB images to generate features that are shared by two subnetworks: a region proposal network (RPN) and a second stage detector network. The proposed RPN uses a novel architecture capable of performing multimodal feature fusion on high resolution feature maps to generate reliable 3D object proposals for multiple object classes in road scenes. Using these proposals, the second stage detection network performs accurate oriented 3D bounding box regression and category classification to predict the extents, orientation, and classification of objects in 3D space. Our proposed architecture is shown to produce state of the art results on the KITTI 3D object detection benchmark [1] while running in real time with a low memory footprint, making it a suitable candidate for deployment on autonomous vehicles. Code is available at", "", "In this work, we study 3D object detection from RGB-D data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. However, a key challenge of this approach is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). 
Instead of solely relying on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects. Benefited from learning directly in raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability.", "", "In this paper, we propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encode both discrete-state image features as well as continuous geometric information. This enables us to design a novel, reliable and efficient end-to-end learnable 3D object detector based on multiple sensors. Our experimental evaluation on both KITTI as well as a large scale 3D object detection benchmark shows significant improvements over the state of the art." ], "cite_N": [ "@cite_8", "@cite_41", "@cite_9", "@cite_42", "@cite_1", "@cite_32", "@cite_39", "@cite_0", "@cite_27", "@cite_31", "@cite_34", "@cite_20", "@cite_12" ], "mid": [ "2899302124", "", "2963727135", "", "2560609797", "2555618208", "1901129140", "2798965597", "2963400571", "", "2769205412", "2897529137", "2894705404" ] }
0
1812.11740
2906722931
Recommender systems can help companies persuade users to visit or consume at a particular place, and they have traditionally been built on methods such as collaborative filtering algorithms. Most research discusses model design or feature engineering methods to minimize the root mean square error (RMSE) of rating prediction, but rarely explores ways to generate the reasons behind recommendations. This paper proposes an integrated neural network based model which combines rating score prediction and explainable word generation. Based on the experimental results, this model achieves lower RMSE than traditional methods and generates explanations of the recommendations to convince customers to visit the recommended place.
Much empirical research focuses either on rating stars, treating the scores as numerical data, or on explainable text generation describing the features of the POIs, and those studies adopt different models, features and evaluation methods @cite_25 @cite_21 @cite_7 @cite_14 @cite_24 @cite_0 . This project is motivated by these empirical studies on rating star prediction, and we propose an integrated model which can predict rating stars and generate explainable opinion-aspect pairs for users.
{ "abstract": [ "Location-based social networks (LBSNs) offer researchers rich data to study people's online activities and mobility patterns. One important application of such studies is to provide personalized point-of-interest (POI) recommendations to enhance user experience in LBSNs. Previous solutions directly predict users' preference on locations but fail to provide insights about users' preference transitions among locations. In this work, we propose a novel category-aware POI recommendation model, which exploits the transition patterns of users' preference over location categories to improve location recommendation accuracy. Our approach consists of two stages: (1) preference transition (over location categories) prediction, and (2) category-aware POI recommendation. Matrix factorization is employed to predict a user's preference transitions over categories and then her preference on locations in the corresponding categories. Real data based experiments demonstrate that our approach outperforms the state-of-the-art POI recommendation models by at least 39.75 in terms of recall.", "As location-based social networks (LBSNs) rapidly grow, it is a timely topic to study how to recommend users with interesting locations, known as points-of-interest (POIs). Most existing POI recommendation techniques only employ the check-in data of users in LBSNs to learn their preferences on POIs by assuming a user's check-in frequency to a POI explicitly reflects the level of her preference on the POI. However, in reality users usually visit POIs only once, so the users' check-ins may not be sufficient to derive their preferences using their check-in frequencies only. Actually, the preferences of users are exactly implied in their opinions in text-based tips commenting on POIs. In this paper, we propose an opinion-based POI recommendation framework called ORec to take full advantage of the user opinions on POIs expressed as tips. In ORec, there are two main challenges: (i) detecting the polarities of tips (positive, neutral or negative), and (ii) integrating them with check-in data including social links between users and geographical information of POIs. To address these two challenges, (1) we develop a supervised aspect-dependent approach to detect the polarity of a tip, and (2) we devise a method to fuse tip polarities with social links and geographical information into a unified POI recommendation framework. Finally, we conduct a comprehensive performance evaluation for ORec using two large-scale real data sets collected from Foursquare and Yelp. Experimental results show that ORec achieves significantly superior polarity detection and POI recommendation accuracy compared to other state-of-the-art polarity detection and POI recommendation techniques.", "The popularity of location-based social networks provide us with a new platform to understand users' preferences based on their location histories. In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range with the consideration of both: 1) User preferences, which are automatically learned from her location history and 2) Social opinions, which are mined from the location histories of the local experts. This recommender system can facilitate people's travel not only near their living areas but also to a city that is new to them. 
As a user can only visit a limited number of locations, the user-locations matrix is very sparse, leading to a big challenge to traditional collaborative filtering-based location recommender systems. The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system, which consists of two main parts: offline modeling and online recommendation. The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different category of locations according to their location histories using an iterative learning model. The online recommendation part selects candidate local experts in a geospatial range that matches the user's preferences using a preference-aware candidate selection algorithm and then infers a score of the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than baselines, while having a good efficiency of providing location recommendations.", "Predicting the popularity of Point of Interest (POI) has become increasingly crucial for location-based services, such as POI recommendation. Most of the existing methods can seldom achieve satisfactory performance due to the scarcity of POI's information, which tendentiously confines the recommendation to popular scenic spots, and ignores the unpopular attractions with potentially precious values. In this paper, we propose a novel approach, termed Hierarchical Multi-Clue Fusion (HMCF), for predicting the popularity of POIs. Specifically, we devise an effective hierarchy to comprehensively describe POI by integrating various types of media information (e.g., image and text) from multiple social sources. For each individual POI, we simultaneously inject semantic knowledge as well as multi-clue representative power. We collect a multi-source POI dataset from four widely-used tourism platforms. Extensive experimental results show that the proposed method can significantly improve the performance of predicting the attractions' popularity as compared to several baselines.", "Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. 
The results show the importance of optimizing models for the right criterion.", "Point-of-interest (POI) recommendation is an important service to Location-Based Social Networks (LBSNs) that can benefit both users and businesses. In recent years, a number of POI recommender systems have been proposed, but there is still a lack of systematical comparison thereof. In this paper, we provide an all-around evaluation of 12 state-of-the-art POI recommendation models. From the evaluation, we obtain several important findings, based on which we can better understand and utilize POI recommendation models in various scenarios. We anticipate this work to provide readers with an overall picture of the cutting-edge research on POI recommendation." ], "cite_N": [ "@cite_14", "@cite_7", "@cite_21", "@cite_24", "@cite_0", "@cite_25" ], "mid": [ "2072609015", "2077480106", "2139809240", "2739709122", "2140310134", "2728796024" ] }
A Neural Network Based Explainable Recommender System
User-generated content such as rating and reviews about a place is gradually becoming a key factor for other people making a decision. Some platforms such as Yelp utilize recommender system to analyze the user preferences based on the reviews and rating to recommend a new place for other customers. According to (Liu et al., 2017), they introduce a phrase point-of-interests (POIs) to represent a set of places such as restaurant and tourist attractions where users are interested. In this pa-per, we will continue to use this phrase and focus on restaurant rating prediction and explanation generation. Currently, much empirical research focuses on model design or feature engineering methods to improve model performance rather than discussing the ways to generate recommended reasons. This project proposed an integrated neural network based explainable recommender system, instead of using traditional methods such as user-based models, and Matrix Factorization, to generate explainable opinion-aspect pairs for a user, helping them explore their unfamiliar places, and predict rating score from that user for each POI and recommend POI with high predicted rating to each user. Our model could generate user latent preference from embedding layer in neural network after training rating predictions for POIs Instead of training models on user-POI rating matrix directly. The main contributions of this paper are summarized as follows: • Predict rating stars for POIs based on user-POI rating matrix using neuralbased model. • Generate opinion-aspect pairs for users to explain reasons for recommended POI. • Compare the model performance with the classical recommender system methods. • Propose a way to evaluate the prediction of explainable pairs. In the rest of the paper, we will discuss the related work in Section 2. Provide the de-tails of data preprocessing, feature extraction, and data exploration in section 3. We will introduce the proposed model and other traditional models in Section 4. In Section 5 and Section 6, We discuss the method to evaluate the RMSE of predicting rating stars and the fscore of predicting explainable tuples. Then, we present the results including comparison with other baseline models. Finally, we conclude findings, limitation, and future work of this paper in section 7. Collaborative Filtering for Rating Prediction Collaborative Filtering is commonly used in recommender system (Chen et al., 2014;Konstas et al., 2009;Su and Khoshgoftaar, 2009;Wang and Blei, 2011) -exploiting similarity among the preference of users to generate recommendations. Research is done using users historical data to predict ratings, and deliver the results of recommendation to users (Balabanovi and Shoham, 1997;Rennie and Srebro, 2005). The traditional methods in rating prediction are the user-based model, and matrix factorization(MF). According to (Ricci et al., 2015), they introduced the user-based method which predicts the rating scores based on the other users who have similar preferences. Regarding matrix factorization, Ricci et al. mentioned the applications of two commonly used methods: Singular Value Decomposition(SVD) and Non-negative Matrix Factorization(NMF) (Ricci et al., 2015). Ad-ditionally, many researchers utilized neuralbased collaborative filtering model for rating prediction (van den Oord et al., 2013;. However, there are many limitations for the traditional algorithms such as scalability problems, and lacking bias terms. 
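As a reference point for the matrix factorization baselines just described, the following is a minimal NumPy sketch of SGD-trained matrix factorization on (user, POI, stars) triples. It is the bias-free variant whose limitation is noted above; hyperparameters, function names and the toy data are illustrative, not the paper's.

```python
import numpy as np

def mf_sgd(ratings, n_users, n_items, k=16, lr=0.01, reg=0.05, epochs=20, seed=0):
    """Bias-free matrix factorization baseline: approximate r_ui by <p_u, q_i>,
    trained by SGD over observed (user, item, rating) triples.  User and item
    bias terms, whose absence is noted above, could be added inside the loop."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.normal(size=(n_users, k))   # user latent factors
    Q = 0.1 * rng.normal(size=(n_items, k))   # item (POI) latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            p_u = P[u].copy()
            err = r - p_u @ Q[i]
            P[u] += lr * (err * Q[i] - reg * p_u)
            Q[i] += lr * (err * p_u - reg * Q[i])
    return P, Q

# Toy usage: predict the stars user 0 would give POI 3 after training.
P, Q = mf_sgd([(0, 3, 5.0), (1, 3, 4.0), (0, 1, 2.0)], n_users=2, n_items=4)
print(P[0] @ Q[3])
```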
As for neural collaborative filtering, it performs better on rating prediction, but there is no description of generating explainable text in empirical works. Explainable Recommendation System Explainable recommendation system (ERS) gradually attracts more researchers to explore the ways to generate convincing explanations for users. According to (Hou et al., 2018), they state two forms of ERS to output explanations. One is to extract noun words or phrases to represent item features, and there are many works utilizes this form for explainable recommendation. However, Hou et al. claim that it lacks consideration of sentiment for each aspect summarized from other visited customers which is less convincing for future users. Another method is to generate a set of words containing opinion and aspect phrases from reviews to explain fine-grained aspect information for future users (Hou et al., 2018). In this project, we investigated the second form of ERS, and summarized many empirical works about this type as follows: Baccianella et al. introduced a method using SentiWordNet in extracting opinion words and quantifying the sentiment polarity and strength of each word to the POI (Baccianella et al., 2010). Pero et al. proposed a recommendation system by integrating opinion information with the rating score for the POI (Pero and Horvth, 2013). Chen et al. demonstrated an algorithm about tensor decomposition to predict rating based on opinion-aspect pairs in review over multiple domains (Chen et al., 2014). However, these method lacks of fine-grained sentiment differences based on some specific aspect of POIs (Hou et al., 2018). Though given plenty of research focusing on POI rating prediction in recommender systems, less research has investigated the recommender system offering the fine-grained explanation for specific aspects of POIs. Former Collaborative Filtering research focuses on improving the precision of recommendation but lacks information to help target users better understand the proposed recommendation. Our proposed model could resolve the mentioned limitations from empirical works, which integrate rating prediction and explainable pairs generation, and provide better performance compared with other traditional models. 3 Data Data Preprocessing and Feature Extraction The data is composed of business ID, user ID, rating score (range from 1 to 5), reviews, and date in the city Pittsburgh from Yelp (Developers, 2018) challenge dataset. Considering the impact of time,we set 2017 (full year) as the training dataset and 2018 (until June) as test dataset which contains 26918 reviews (from 13225 users to 450 restaurants). After tokenizing and POS tagging, review texts are transformed into syntax relations (shown in Figure 1) by utilizing the spaCy CNN dependency parsing model (Kiperwasser and Goldberg, 2016;Goldberg and Nivre, 2012), and these relation pairs containing users opinions towards aspects are extracted in the form of tuples liked<opinion, aspect>. Additionally, SentiWordNet (Baccianella et al., 2010) is incorporated to quantify the opinion sentiment polarity and strength, which will be described in detail in the evaluation section. Finally, after lemmatizing and lowercasing the words, we use 100-dimension Glove word embedding pretrained by twitter corpus (Pennington et al., 2014) , and then horizontally concatenate the word embedding to tuple embedding. 
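A minimal spaCy sketch of the opinion-aspect extraction step described above: the paper does not specify which dependency relations or pipeline it uses, so restricting to adjectival modifiers (amod) of nouns with the small English model is our own assumption, and the exact output depends on the parser.

```python
import spacy

# "en_core_web_sm" must be installed; the amod-only pattern is a simplification.
nlp = spacy.load("en_core_web_sm")

def opinion_aspect_pairs(review):
    """Extract <opinion, aspect> tuples as (adjective, noun) pairs linked by an
    adjectival-modifier dependency, lemmatized and lowercased."""
    doc = nlp(review)
    pairs = []
    for tok in doc:
        if tok.dep_ == "amod" and tok.pos_ == "ADJ" and tok.head.pos_ == "NOUN":
            pairs.append((tok.lemma_.lower(), tok.head.lemma_.lower()))
    return pairs

print(opinion_aspect_pairs("Great service and amazing beef in a cozy place."))
# e.g. [('great', 'service'), ('amazing', 'beef'), ('cozy', 'place')]
```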
Data Exploration Analysis
After preprocessing the data, the star distribution is visualized in Figure 2, which indicates that users rarely give 1 or 2 stars to POIs. We then explore the opinion-aspect pair distribution in the space of frequency and average rating. In Figure 3, pairs with frequency greater than 700 are plotted against the average stars of the reviews in which they appear. Many examples have comparably high average stars that accord with their opinion words, such as 'best restaurant' and 'best food'. On the other hand, many tuples such as 'good service' carry a positive opinion but fall below the average-stars line. This case is discussed in the limitations section.
Method
Many traditional model frameworks are used for rating prediction. In this section, we experiment with four algorithms: a neural-based model, K nearest neighbors, SVD matrix factorization, and NMF matrix factorization, to predict the rating stars that a user would give to a POI based on the ratings of reviews posted in the past. We then describe how our model generates opinion-aspect pairs for target users after the rating model has been trained to minimize RMSE.
Baseline Model for Rating Prediction
User-based collaborative filtering
The first baseline is a user-based collaborative filtering algorithm built on the K nearest neighbors (KNN) algorithm. According to Ricci et al. (2015), this method uses the ratings matrix to compute centered cosine similarity (Pearson correlation) between users. It then predicts the rating score for a POI the target user has never visited, based on similarity weights and the scores from the top-K users who are most similar to the target user (Ricci et al., 2015). This method can provide decent performance in rating prediction. However, since it is based on the KNN algorithm, it suffers serious scalability problems as the numbers of users and items increase (Shani and Gunawardana, 2011; Ricci et al., 2015).
Matrix Factorization
Matrix factorization is a commonly used collaborative filtering method (Koren, 2010) that decomposes the user-restaurant rating matrix into two latent feature matrices, one for users and one for restaurants, and can be used to predict the rating stars of restaurants that users have not visited. Singular value decomposition (SVD) and non-negative matrix factorization (NMF) are two commonly used matrix factorization methods and are used as baseline models in this paper. Matrix factorization algorithms are known for high computational efficiency and for producing user and item embeddings that contain information about user preferences and item features, which can be extracted for further analysis. However, traditional matrix factorization methods underperform neural-based models because they ignore user and item bias terms.
Integrated Neural Collaborative Filtering
Our proposed neural-based model is motivated by (He et al., 2017) and is comprised of two parts: rating prediction and explainable pair generation (shown in Figure 4). For rating prediction, the model is trained on the user-POI rating matrix and predicts the rating stars of each user for each POI. For explainable pair generation, we use the user embedding learned in the embedding layer to find the top-K users whose preferences are most similar to the target user's, and then generate explanations based on their reviews of each specific POI.
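As a concrete reference for the user-based baseline described above, the following sketch computes mean-centered cosine (Pearson-style) similarities on a dense user-POI rating matrix and predicts a missing rating from the top-K most similar raters. The dense-matrix simplification and all names are assumptions for illustration; a production system would use sparse structures.

```python
import numpy as np

def predict_rating(ratings, user, item, k=20):
    """ratings: (n_users, n_items) array with 0 for missing entries."""
    mask = ratings > 0
    counts = np.maximum(mask.sum(axis=1), 1)
    means = ratings.sum(axis=1) / counts                      # per-user mean rating
    centered = np.where(mask, ratings - means[:, None], 0.0)  # mean-centered ratings

    # Centered cosine (Pearson-style) similarity between the target user and all users.
    target = centered[user]
    norms = np.linalg.norm(centered, axis=1) * np.linalg.norm(target) + 1e-9
    sims = centered @ target / norms

    # Keep the k most similar users who actually rated this item.
    rated = np.where(mask[:, item])[0]
    rated = rated[rated != user]
    neighbors = rated[np.argsort(-sims[rated])][:k]
    if len(neighbors) == 0:
        return means[user]

    # Weighted deviation from each neighbor's own mean, added to the target user's mean.
    weights = sims[neighbors]
    return means[user] + weights @ centered[neighbors, item] / (np.abs(weights).sum() + 1e-9)
```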
Our model resolves the issues mentioned for the baseline models and accounts for bias terms when computing user and POI embeddings. Additionally, the most similar users can be found directly from the user embedding instead of computing correlations with separate formulas.
Rating Prediction
We predict ratings for POIs that a user has not visited before. In the rating prediction part (shown in Figure 4, right block), our model takes the user-POI matrix as input and predicts rating stars for the user over each POI. After training, the model encodes the latent preferences of users about POIs in the embedding layers (He et al., 2017), which are used as the input for explanation generation (shown in Figure 4, left block). Once the score of each POI is predicted, the top-K POIs with the highest ratings that the user has not visited in the past are offered as recommendations to the user. Our neural-based model is implemented using Keras in Python. It is trained on the review data of restaurants in Pittsburgh from 2017, validated on the 2018 data, and uses RMSE as the loss. We trained the model for 200 epochs, converging at a loss of around 0.17 (shown in Figure 5).
Explainable Pairs Generation
After the neural model completes the training phase, the user embedding layer, which contains information about the latent preferences of each user, is extracted. After determining the POIs that will be recommended to each target user, we use the extracted user embedding to find the top-K users most similar to the target user who have been to the same POIs. The similarity between users is measured by the cosine similarity of their user embeddings. Then, we extract pairs from the reviews that the top-K users wrote for those POIs using dependency parsing and sample from these pairs as the explanation provided to the target user.
Model Evaluation
Evaluation of Rating Prediction
The proposed recommendation strategy relies on predicting ratings for unvisited POIs. To evaluate the rating prediction model, we use the root mean square error (RMSE) (Lu et al., 2015). As mentioned in Section 3, we treat 2017 as the training set and 2018 as the test set, so we predict the rating stars that a user will give to an unvisited POI in 2018 and use the ratings in the test set to measure the RMSE of the result:
$$RMSE = \sqrt{\frac{\sum_{i=1}^{n}(P_i - R_i)^2}{n}} \qquad (1)$$
where $P_i$ and $R_i$ denote the predicted and observed ratings.
Evaluation of Explanation Generation
To evaluate the quality of the generated explanations, we calculate the cosine similarity between the pairs that we predict for a POI and the pairs extracted from the user's review of the corresponding POI in the test set. Since many opinion-aspect pairs express the same meaning with different words, this project treats a cosine similarity greater than 0.8 as a true match and anything else as false. We then calculate the median number of pairs per user in the training dataset and predict the same number of pairs according to that median. The evaluation method we propose introduces some problems, including false positives and false negatives. However, exact matching would ignore tuples that are semantically similar but use different words, such as 'great beef' and 'great steak'. Since we focus more on recommending related pairs to users, we decided to evaluate explainable pairs in this way.
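To make the rating-prediction and explanation-selection steps above more concrete, here is a minimal Keras sketch in the spirit of the described model. The embedding size, the MLP head, the use of an MSE training loss with RMSE monitored as a metric, and all variable names are illustrative assumptions rather than the authors' exact configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_ncf(n_users, n_pois, dim=32):
    """Embedding-based rating predictor with user/POI bias terms."""
    u_in = layers.Input(shape=(1,), name="user_id")
    p_in = layers.Input(shape=(1,), name="poi_id")

    u_vec = layers.Flatten()(layers.Embedding(n_users, dim, name="user_embedding")(u_in))
    p_vec = layers.Flatten()(layers.Embedding(n_pois, dim, name="poi_embedding")(p_in))
    u_bias = layers.Flatten()(layers.Embedding(n_users, 1)(u_in))  # per-user bias term
    p_bias = layers.Flatten()(layers.Embedding(n_pois, 1)(p_in))   # per-POI bias term

    h = layers.Dense(64, activation="relu")(layers.Concatenate()([u_vec, p_vec]))
    rating = layers.Dense(1)(h)
    rating = layers.Add()([rating, u_bias, p_bias])  # bias terms absent from plain MF

    model = Model([u_in, p_in], rating)
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model

def top_k_similar_users(model, target_user, k=5):
    """Cosine similarity over the learned user embedding, used to pick the reviewers
    whose <opinion, aspect> pairs are sampled as explanations."""
    vecs = model.get_layer("user_embedding").get_weights()[0]
    vecs = vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9)
    sims = vecs @ vecs[target_user]
    ranked = np.argsort(-sims)
    return [int(u) for u in ranked if u != target_user][:k]

# Hypothetical usage: model = build_ncf(13225, 450)
# model.fit([user_ids, poi_ids], ratings, epochs=200, batch_size=256)
```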
After examining many tuples in the dataset, we found that setting the cosine similarity threshold to 0.8 generally distinguishes relevant pairs from irrelevant ones (shown in Table 1). Furthermore, we found that many pairs share the same aspect but have different opinion words, such as 'good' or 'bad', which cannot be effectively distinguished using GloVe vectors alone. Therefore, this paper introduces a cosine-similarity-based F-score with a sentiment penalty and uses it as the evaluation metric. In this method, if two words have a similarity greater than 0.8 but opposite sentiment polarity, we consider the match false (shown in equation 2). In Table 1, a cosine similarity marked with (+) means the two opinion words have the same sentiment polarity and (-) means they differ. Although 'good service' and 'bad service' have a very high similarity (0.952), this case is labelled False because the sentiment penalty is (-). We discuss the impact of omitting the sentiment penalty in Section 6. To calculate the F-score, we first determine the median number of pairs per user generated from reviews for each POI in the training set. We set this median count as the number of explanation pairs generated by our model for each user. Shani and Gunawardana (2011) list ways to calculate recall (shown in equation 3), precision (shown in equation 4), and F-score (shown in equation 5), and we follow these definitions for our case. The evaluation method accommodates the error caused by using GloVe vectors during explanation evaluation. The modified F-score incorporates the sentiment penalty, requiring a match to have a cosine similarity above 0.8 and both opinion words to have the same sentiment polarity, which can be written as follows:
$$n_i = \begin{cases} 1, & \text{if } s_i \cdot \hat{s}_i > 0 \text{ and similarity} > 0.8 \\ 0, & \text{otherwise} \end{cases} \qquad (2)$$
$$recall = \frac{\text{number of true predicted pairs}}{\text{total number of pairs in reviews}} \qquad (3)$$
$$precision = \frac{\text{number of true predicted pairs}}{\text{total number of pairs predicted}} \qquad (4)$$
$$F_1 = \frac{2 \cdot recall \cdot precision}{recall + precision} \qquad (5)$$
where $s_i$ is the sentiment score of the opinion in a pair generated by our model, computed using SentiWordNet (Baccianella et al., 2010), and $\hat{s}_i$ is the sentiment score of the opinion in the pair extracted from the user's review.
Results Discussion and Analysis
We apply the same dataset to the three baseline methods, KNN, SVD matrix factorization, and NMF matrix factorization, following the algorithms in (Ricci et al., 2015; Luo et al., 2014). The results show that our proposed integrated neural collaborative filtering method (NCF) produces the lowest RMSE of all methods (shown in Table 2).
NCF      KNN      MF-SVD   MF-NMF
0.1733   0.2555   0.3032   0.3890
We then evaluate our model in terms of explainable pairs on this dataset. The baseline we introduce randomly samples pairs extracted from all users' reviews of the predicted POIs, which corresponds to a human random guess. The random sampling method provides information extracted from the historical reviews of the recommended POI but does not consider users' preferences. By using it as a baseline, we can validate how much improvement our model achieves by incorporating information about users' preferences. To collect unbiased results, we use ten-fold cross-validation for each experiment, and the results show (Table 3) that NCF with the sentiment penalty achieves an F-score of 0.5088, which is higher than the 0.0349 of random sampling.
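The sentiment-penalized evaluation in equations (2)-(5) can be sketched as follows; the sentiment() callable standing in for SentiWordNet lookups, the greedy matching strategy, and the function names are assumptions for illustration.

```python
import numpy as np

def pair_vec(pair, glove, dim=100):
    """Concatenate GloVe vectors of the opinion and aspect words."""
    zero = np.zeros(dim, dtype=np.float32)
    return np.concatenate([glove.get(w, zero) for w in pair])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def f1_with_sentiment_penalty(predicted, reference, glove, sentiment, threshold=0.8):
    """predicted / reference: lists of (opinion, aspect) tuples.
    sentiment: callable mapping an opinion word to a signed polarity score
    (e.g. from SentiWordNet). A predicted pair counts as true only if it matches
    some reference pair with cosine similarity > threshold AND the two opinion
    words have the same sentiment sign, as in equation (2)."""
    true_hits = 0
    for p in predicted:
        for r in reference:
            same_sign = sentiment(p[0]) * sentiment(r[0]) > 0
            if same_sign and cosine(pair_vec(p, glove), pair_vec(r, glove)) > threshold:
                true_hits += 1
                break
    recall = true_hits / max(len(reference), 1)     # equation (3)
    precision = true_hits / max(len(predicted), 1)  # equation (4)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)  # equation (5)
```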
As mentioned above, the sentiment penalty used when evaluating our explainable results helps correct the labeling. Since 'bad service' and 'good service' have a similarity of 0.952 but opposite sentiment, the sentiment penalty labels such cases as False. Therefore, after incorporating the sentiment penalty, the F-score drops from 0.5696 to 0.5088 (shown in Table 3).
Conclusion
This paper introduces a neural-based explainable recommender system comprised of two parts: providing recommendations to users based on rating prediction, and pairing the recommendations with explanations that can convince users to visit the recommended places. We deploy a neural collaborative filtering model to predict ratings for the target user. Further, we use the user embedding layer, which contains information about users' latent preferences, to generate explanations based on users' historical review data. The neural rating prediction model is evaluated on the Yelp dataset and compared with commonly used collaborative filtering methods. Finally, we introduce a cosine-similarity-based F-score with a sentiment penalty to evaluate our explanation generation method. The results show that our model achieves better rating prediction performance than classical collaborative filtering models and a significant improvement over the random sampling baseline in explanation generation.
Limitation and Future Work
The evaluation results for rating and explanation show that our model is useful for these problems and could be generalized to other types of POI recommendation, such as hotels and tourist attractions. However, some limitations were discovered during our experiments. We list them here and discuss ways to improve model performance in future work. In opinion-aspect pair extraction, we only explore pairs in the form of an adjectival modifier and a noun obtained through dependency parsing. The current method therefore excludes POI features that are not expressed as opinion-aspect pairs, such as 'cafe near the river' and 'restaurant in the city center'. If we consider higher-order n-grams (n > 2) when extracting opinion-aspect pairs with dependency parsing, more noisy pairs will also be extracted. Hence, we consider using a deep learning model for explanation generation in the future. The rating prediction phase only considers the rating scores between users and POIs. Since customer reviews also relate to the rating stars, we will incorporate other information about users, such as social links, to improve model performance in future work. Review information may be incorporated by adding another embedding layer that encodes users' reviews and takes the reviews as input. Our model assumes that users have prior visiting records and continue to use a website containing user-generated content about POIs. Therefore, we will explore methods for users who do not have any visiting records.
3,173
1812.11740
2906722931
Recommender systems can help companies persuade users to visit or consume at a particular place, and they have traditionally been built on methods such as collaborative filtering algorithms. Most research discusses model design or feature engineering methods to minimize the root mean square error (RMSE) of rating prediction, but little work explores ways to generate reasons for the recommendations. This paper proposes an integrated neural-network-based model which combines rating score prediction with explainable word generation. In our experiments, this model achieves a lower RMSE than traditional methods and generates explanations of the recommendations that can convince customers to visit the recommended place.
Collaborative filtering is commonly used in recommender systems @cite_12 @cite_11 @cite_16 @cite_2 , exploiting similarity among the preferences of users to generate recommendations. Prior research uses users' historical data to predict ratings and deliver recommendation results to users @cite_26 @cite_9 . The traditional methods for rating prediction are the user-based model and matrix factorization (MF). The user-based method, introduced in @cite_18 , predicts rating scores based on other users who have similar preferences. Regarding matrix factorization, the same work @cite_18 describes two commonly used methods: Singular Value Decomposition (SVD) and Non-negative Matrix Factorization (NMF). Additionally, many researchers have used neural-based collaborative filtering models for rating prediction @cite_1 @cite_23 . However, the traditional algorithms have many limitations, such as scalability problems and the lack of bias terms. Neural collaborative filtering performs better on rating prediction, but existing empirical works do not describe how to generate explainable text.
{ "abstract": [ "Recommender Systems (RSs) are software tools and techniques that provide suggestions for items that are most likely of interest to a particular user. In this introductory chapter, we briefly discuss basic RS ideas and concepts. Our main goal is to delineate, in a coherent and structured way, the chapters included in this handbook. Additionally, we aim to help the reader navigate the rich and detailed content that this handbook offers.", "", "Maximum Margin Matrix Factorization (MMMF) was recently suggested (, 2005) as a convex, infinite dimensional alternative to low-rank approximations and standard factor models. MMMF can be formulated as a semi-definite programming (SDP) and learned using standard SDP solvers. However, current SDP solvers can only handle MMMF problems on matrices of dimensionality up to a few hundred. Here, we investigate a direct gradient-based optimization method for MMMF and demonstrate it on large collaborative prediction problems. We compare against results obtained by Marlin (2004) and find that MMMF substantially outperforms all nine methods he tested.", "Automatic music recommendation has become an increasingly relevant problem in recent years, since a lot of music is now sold and consumed digitally. Most recommender systems rely on collaborative filtering. However, this approach suffers from the cold start problem: it fails when no usage data is available, so it is not effective for recommending new and unpopular songs. In this paper, we propose to use a latent factor model for recommendation, and predict the latent factors from music audio when they cannot be obtained from usage data. We compare a traditional approach using a bag-of-words representation of the audio signals with deep convolutional neural networks, and evaluate the predictions quantitatively and qualitatively on the Million Song Dataset. We show that using predicted latent factors produces sensible recommendations, despite the fact that there is a large semantic gap between the characteristics of a song that affect user preference and the corresponding audio signal. We also show that recent advances in deep learning translate very well to the music recommendation setting, with deep convolutional neural networks significantly outperforming the traditional approach.", "Collaborative filtering (CF) is a successful approach commonly used by many recommender systems. Conventional CF-based methods use the ratings given to items by users as the sole source of information for learning to make recommendation. However, the ratings are often very sparse in many applications, causing CF-based methods to degrade significantly in their recommendation performance. To address this sparsity problem, auxiliary information such as item content information may be utilized. Collaborative topic regression (CTR) is an appealing recent method taking this approach which tightly couples the two components that learn from two different sources of information. Nevertheless, the latent representation learned by CTR may not be very effective when the auxiliary information is very sparse. To address this problem, we generalize recently advances in deep learning from i.i.d. input to non-i.i.d. (CF-based) input and propose in this paper a hierarchical Bayesian model called collaborative deep learning (CDL), which jointly performs deep representation learning for the content information and collaborative filtering for the ratings (feedback) matrix. 
Extensive experiments on three real-world datasets from different domains show that CDL can significantly advance the state of the art.", "Researchers have access to large online archives of scientific articles. As a consequence, finding relevant papers has become more difficult. Newly formed online communities of researchers sharing citations provides a new way to solve this problem. In this paper, we develop an algorithm to recommend scientific articles to users of an online community. Our approach combines the merits of traditional collaborative filtering and probabilistic topic modeling. It provides an interpretable latent structure for users and items, and can form recommendations about both existing and newly published articles. We study a large subset of data from CiteULike, a bibliography sharing service, and show that our algorithm provides a more effective recommender system than traditional collaborative filtering.", "As one of the most successful approaches to building recommender systems, collaborative filtering (CF) uses the known preferences of a group of users to make recommendations or predictions of the unknown preferences for other users. In this paper, we first introduce CF tasks and their main challenges, such as data sparsity, scalability, synonymy, gray sheep, shilling attacks, privacy protection, etc., and their possible solutions. We then present three main categories of CF techniques: memory-based, modelbased, and hybrid CF algorithms (that combine CF with other recommendation techniques), with examples for representative algorithms of each category, and analysis of their predictive performance and their ability to address the challenges. From basic techniques to the state-of-the-art, we attempt to present a comprehensive survey for CF techniques, which can be served as a roadmap for research and practice in this area.", "", "Social network systems, like last.fm, play a significant role in Web 2.0, containing large amounts of multimedia-enriched data that are enhanced both by explicit user-provided annotations and implicit aggregated feedback describing the personal preferences of each user. It is also a common tendency for these systems to encourage the creation of virtual networks among their users by allowing them to establish bonds of friendship and thus provide a novel and direct medium for the exchange of data. We investigate the role of these additional relationships in developing a track recommendation system. Taking into account both the social annotation and friendships inherent in the social graph established among users, items and tags, we created a collaborative recommendation system that effectively adapts to the personal information needs of each user. We adopt the generic framework of Random Walk with Restarts in order to provide with a more natural and efficient way to represent social networks. In this work we collected a representative enough portion of the music social network last.fm, capturing explicitly expressed bonds of friendship of the user as well as social tags. We performed a series of comparison experiments between the Random Walk with Restarts model and a user-based collaborative filtering method using the Pearson Correlation similarity. The results show that the graph model system benefits from the additional information embedded in social knowledge. In addition, the graph model outperforms the standard collaborative filtering method." 
], "cite_N": [ "@cite_18", "@cite_26", "@cite_9", "@cite_1", "@cite_23", "@cite_2", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "2200988052", "2043403353", "1976618413", "2137028279", "2157881433", "2135790056", "2100235918", "", "1976320242" ] }
A Neural Network Based Explainable Recommender System
1812.11852
2908069757
The vast majority of photos today are taken with mobile phones. While their quality is rapidly improving, due to physical limitations and cost constraints, mobile phone cameras struggle to match the quality of DSLR cameras. This motivates us to computationally enhance these images. We extend the results of Ignatov et al., who translate images from compact mobile cameras into images of comparable quality to high-resolution photos taken by DSLR cameras. However, the neural models employed require large amounts of computational resources and are not lightweight enough to run on mobile devices. We build upon this prior work and explore different network architectures targeting an increase in image quality and speed. With an efficient network architecture that does most of its processing at a lower spatial resolution, we achieve a significantly higher mean opinion score (MOS) than the baseline while speeding up the computation by a factor of 6.3 on a consumer-grade CPU. This suggests a promising direction for neural-network-based photo enhancement on the phone hardware of the future.
A considerable body of work is dedicated to automatic photo enhancement. However, it has traditionally focused on specific subproblems, such as super-resolution, denoising, deblurring, or colorization. All of these subproblems must be tackled simultaneously when generating plausible high-quality photos from low-end ones. Furthermore, these older works commonly train on artifacts that have been artificially applied to the target image dataset. Recreating and simulating all the flaws of one camera given a picture from another is close to impossible; therefore, in order to achieve real-world photo enhancement, we use the photos simultaneously captured by the capture rig of Ignatov et al. @cite_7 . Despite their limitations, the related works contain many useful ideas, which we briefly review in this section.
{ "abstract": [ "Despite a rapid rise in the quality of built-in smartphone cameras, their physical limitations – small sensor size, compact lenses and the lack of specific hardware, – impede them to achieve the quality results of DSLR cameras. In this work we present an end-to-end deep learning approach that bridges this gap by translating ordinary photos into DSLR-quality images. We propose learning the translation function using a residual convolutional neural network that improves both color rendition and image sharpness. Since the standard mean squared loss is not well suited for measuring perceptual image quality, we introduce a composite perceptual error function that combines content, color and texture losses. The first two losses are defined analytically, while the texture loss is learned in an adversarial fashion. We also present DPED, a large-scale dataset that consists of real photos captured from three different phones and one high-end reflex camera. Our quantitative and qualitative assessments reveal that the enhanced image quality is comparable to that of DSLR-taken photos, while the methodology is generalized to any type of digital camera." ], "cite_N": [ "@cite_7" ], "mid": [ "2607202125" ] }
Fast Perceptual Image Enhancement
The compact camera sensors found in low-end devices such as mobile phones have come a long way in the past few years. Given adequate lighting conditions, they are able to reproduce unprecedented levels of detail and color. Despite their ubiquity, being used for the vast majority of all photographs taken worldwide, they struggle to come close in image quality to DSLR cameras. These professional-grade instruments have many advantages, including better color reproduction, less noise due to larger sensor sizes, and better automatic tuning of shooting parameters. Furthermore, many photographs taken in the past decade used significantly inferior hardware, for example early digital cameras or early-2010s smartphones. These do not hold up well to contemporary tastes and are limited in artistic quality by their technical shortcomings. The previous work by Ignatov et al. [8] that this paper builds upon proposes a neural-network-powered solution to the aforementioned problems. They use a dataset comprised of image patches from various outdoor scenes simultaneously taken by cell phone cameras and a DSLR. They pose an image translation problem in which the low-quality phone image is fed into a residual convolutional neural network (CNN) model that generates an output image which, once the network is trained, should be perceptually close to the high-quality DSLR target image. In this work, we take a closer look at the problem of translating poor-quality photographs from an iPhone 3GS into high-quality DSLR photos, since this is the most dramatic increase in quality attempted by Ignatov et al. [8]. The computational requirements of this baseline model, however, are quite high (20 s on a high-end CPU and 3.7 GB of RAM for an HD-resolution image). Using a modified generator architecture, we propose a way to decrease this cost while maintaining or improving the resulting image quality.
General Purpose Image-to-Image Translation and Enhancement
The use of GANs has progressed towards general-purpose image-to-image translation. Isola et al. [11] propose a conditional GAN architecture for paired data, where the discriminator is conditioned on the input image. Zhu et al. [28] relax this requirement, introducing the cycle consistency loss, which allows the GAN to train on unpaired data. These two approaches work on many surprising datasets; however, the image quality is too low for our purpose of photo-realistic image enhancement. This is why Ignatov et al. introduce paired [8] and unpaired [9] GAN architectures that are specially designed for this purpose.
Dataset
The DPED dataset [8] consists of photos taken simultaneously by three different cell phone cameras and a Canon 70D DSLR camera. The photographs are aligned and cut into 100x100 pixel patches, which are compared such that patches that differ too much are rejected. In this work, only the iPhone 3GS data is considered. This results in 160k pairs of images.
Baseline
As a baseline, the residual network with 4 blocks and 64 channels from Ignatov et al. [8] is used. Since a simple pixel-wise distance metric does not yield the intended perceptual quality, the output of the network is evaluated using four carefully designed loss functions. The generated image is compared to the target high-quality DSLR image using the color loss and the content loss. We use the same four losses and training setup as the baseline in this work.
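For orientation, a minimal Keras sketch of a 4-block, 64-channel residual generator in the spirit of the DPED baseline is shown below; the exact kernel sizes, the sigmoid output, and the absence of normalization layers are assumptions rather than a faithful reimplementation of [8].

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def res_block(x, filters=64):
    """Two 3x3 convolutions with an identity shortcut."""
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.Add()([x, y])

def build_baseline_generator(channels=64, blocks=4):
    inp = layers.Input(shape=(None, None, 3))  # RGB phone photo, any resolution
    x = layers.Conv2D(channels, 9, padding="same", activation="relu")(inp)
    for _ in range(blocks):
        x = res_block(x, channels)
    x = layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(3, 9, padding="same", activation="sigmoid")(x)  # enhanced image in [0, 1]
    return Model(inp, out)
```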
Color Loss.
The color loss is computed by applying a Gaussian blur to both the source and target images, followed by an MSE function. Let X and Y be the original images; then X_b and Y_b are their blurred versions, obtained as
$$X_b(i,j) = \sum_{k,l} X(i+k, j+l) \cdot G(k,l), \qquad (1)$$
where G is the 2D Gaussian blur operator
$$G(k,l) = A \exp\left(-\frac{(k-\mu_x)^2}{2\sigma_x} - \frac{(l-\mu_y)^2}{2\sigma_y}\right). \qquad (2)$$
The color loss can then be written as
$$\mathcal{L}_{color}(X,Y) = \|X_b - Y_b\|_2^2. \qquad (3)$$
We use the same parameters as defined in [8], namely A = 0.053, µ_{x,y} = 0, and σ_{x,y} = 3.
Fig. 1. The overall architecture of the DPED baseline [8].
Content Loss.
The content loss is computed by comparing the two images after they have been processed by a certain number of layers of VGG-19. This is superior to a pixel-wise loss such as per-pixel MSE because it closely resembles human perception [8, 26], abstracting away negligible details such as a small shift in pixels. It is also important because it helps preserve the semantics of the image. It is defined as
$$\mathcal{L}_{content} = \frac{1}{C_j H_j W_j} \left\| \psi_j(F_W(I_s)) - \psi_j(I_t) \right\|, \qquad (4)$$
where ψ_j(·) is the feature map of the VGG-19 network after its j-th convolutional layer, C_j, H_j, and W_j are the number, height, and width of these maps, and F_W(I_s) denotes the enhanced image.
Texture Loss.
One important loss, which technically makes this network a GAN, is the texture loss [8]. Here, the output images are not directly compared to the targets; instead, a discriminator network is tasked with telling apart real DSLR images from fake, generated ones. During training, its weights are optimized for maximum discriminator accuracy, while the generator's weights are optimized in the opposite direction, to minimize the discriminator's accuracy and thus produce convincing fake images. Before being fed in, the image is first converted to grayscale, as this loss specifically targets texture. It can be written as
$$\mathcal{L}_{texture} = -\sum_i \log D(F_W(I_s), I_t), \qquad (5)$$
where F_W and D denote the generator and discriminator networks, respectively.
Total Variation Loss.
A total variation loss is also included to encourage the output image to be spatially smooth and to reduce noise:
$$\mathcal{L}_{tv} = \frac{1}{CHW} \left( \left\| \nabla_x F_W(I_s) \right\| + \left\| \nabla_y F_W(I_s) \right\| \right). \qquad (6)$$
Again, C, H, and W are the number of channels, height, and width of the generated image F_W(I_s). It is given a low weight overall.
Total Loss.
The total loss is a weighted sum of all the losses mentioned above:
$$\mathcal{L}_{total} = \mathcal{L}_{content} + 0.4 \cdot \mathcal{L}_{texture} + 0.1 \cdot \mathcal{L}_{color} + 400 \cdot \mathcal{L}_{tv}. \qquad (7)$$
Ignatov et al. [8] use the relu5_4 layer of the VGG-19 network and mention that the above coefficients were chosen in experiments run on the DPED dataset.
Experiments and Results
Experiments
Adjusting Residual CNN Parameters. In order to understand the performance properties of the DPED model [8], the baseline's residual CNN was modified in the number of filters (or channels) in each layer, the size of each filter's kernel, and the total number of residual blocks. While reducing the number of blocks was effective at increasing performance, and decreasing the number of feature maps even more so, this came at a large cost in image quality. Kernel sizes of 5 × 5 were also attempted instead of 3 × 3, but did not provide quality improvements sufficient to justify their computational cost. In Fig. 2 and Table 1, a frontier can be seen beyond which this simple architecture tuning cannot reach. More sophisticated improvements must therefore be explored.
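A sketch of the color and total-variation terms and the weighted total of equation (7) is given below in TensorFlow. Two simplifying assumptions are made: the Gaussian kernel is normalized to sum to one instead of using the constant A, and the content and texture terms (which need a VGG network and a discriminator) are passed in as precomputed values.

```python
import numpy as np
import tensorflow as tf

def gaussian_kernel(size=21, sigma=3.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma))  # shape of eq. (2); A replaced by normalization
    g = (g / g.sum()).astype(np.float32)
    return tf.constant(g[:, :, None, None])       # (size, size, 1, 1)

_KERNEL = tf.tile(gaussian_kernel(), [1, 1, 3, 1])  # one blur filter per RGB channel

def blur(img):
    """img: (batch, H, W, 3) float tensor."""
    return tf.nn.depthwise_conv2d(img, _KERNEL, strides=[1, 1, 1, 1], padding="SAME")

def color_loss(fake, real):
    # Eq. (3), using a mean instead of a sum over pixels (differs only by a constant factor).
    return tf.reduce_mean(tf.square(blur(fake) - blur(real)))

def tv_loss(fake):
    # Eq. (6); tf.image.total_variation returns summed absolute gradients (an L1 variant).
    return tf.reduce_mean(tf.image.total_variation(fake)) / tf.cast(tf.size(fake[0]), tf.float32)

def total_loss(fake, real, content, texture):
    # Eq. (7); content and texture are computed elsewhere (VGG features, discriminator).
    return content + 0.4 * texture + 0.1 * color_loss(fake, real) + 400.0 * tv_loss(fake)
```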
Parametric ReLU. Parametric ReLU [6] is an activation function defined as
$$PReLU(y_i) = \begin{cases} y_i, & \text{if } y_i > 0 \\ a_i y_i, & \text{if } y_i \le 0, \end{cases} \qquad (8)$$
where y_i is the i-th element of the feature vector and a_i is the i-th element of the learned PReLU parameter vector. This permits the network to learn a slope for the ReLU activation function instead of leaving it at a constant 0 for negative inputs. In theory, this should make the network learn faster, prevent ReLUs from going dormant, and overall provide more capacity at a small performance cost. In practice, though (see an example in Table 2), this cost was higher than hoped, and it did not perceptibly increase image quality.
Strided and Transposed Convolutions. In order to reduce the computation time more drastically, a change to the original architecture was implemented, in which the spatial resolution of the feature maps is halved, and subsequently halved again, using strided convolutional layers. At the same time, each of these strided layers doubles the number of feature maps, as suggested by Johnson et al. [12]. This down-sampling operation is followed by two residual blocks at the new, 4x-reduced resolution, which are in turn followed by transposed (fractionally strided) convolution layers that scale the feature maps back up to the original resolution using trainable up-sampling convolutions. At each resolution, the previous feature maps of the same resolution are added to the new maps through skip connections, to help the network learn simple, non-destructive transformations such as the identity function. This new architecture introduced slight checkerboard artifacts related to the upscaling process, but overall it allowed for a much faster model without the loss in quality associated with the more straightforward approaches described previously. Table 2 summarizes the quantitative results for several configurations.
Table 2. Average PSNR/SSIM results on DPED test images, using the proposed strided architecture with varying parameters. The best configuration we propose (line 3) was chosen as a compromise between quality and speed.
The best result we achieved was with this new strided approach. The generator architecture is shown in Fig. 3. We chose a kernel size of 3 × 3, except in the strided convolutional layers, where we opted for 4 × 4 in order to mitigate the checkerboard artifacts. The number of feature maps starts at 16 and increases up to 64 in the middle of the network. We trained the network for 40k iterations using an Adam optimizer and a batch size of 50.
Results
Our network takes only 3.2 s of CPU time to enhance a 1280 × 720 px image, compared to the baseline's 20.5 s. This represents a 6.3-fold speedup. Additionally, the amount of RAM required is reduced from 3.7 GB to 2.3 GB. As part of the PIRM 2018 challenge on perceptual image enhancement on smartphones [10], a user study was conducted in which 2000 people were asked to rate the visual results (photos) of the solutions submitted by challenge participants. Users rated each photo with a score of 1, 2, 3, or 4, corresponding to low- through high-quality visual results. The average of all user ratings was then computed and taken as the MOS score of each solution. With a MOS of 2.6523, our submission (see Table 3) scored significantly higher than the DPED baseline (2.4411) and was second only to the winning submission, which scored 2.6804.
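The strided encoder-decoder described above might look roughly as follows in Keras; the exact layer counts, activations, and skip-connection placement are assumptions guided by the description, not the submitted model.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_strided_generator():
    # DPED training patches are 100x100; height and width should be divisible by 4.
    inp = layers.Input(shape=(100, 100, 3))

    e0 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)             # full resolution
    e1 = layers.Conv2D(32, 4, strides=2, padding="same", activation="relu")(e0)   # 1/2 resolution
    e2 = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(e1)   # 1/4 resolution

    x = e2
    for _ in range(2):  # residual blocks at the reduced resolution
        y = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
        y = layers.Conv2D(64, 3, padding="same")(y)
        x = layers.Add()([x, y])

    d1 = layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu")(x)
    d1 = layers.Add()([d1, e1])   # skip connection at 1/2 resolution
    d0 = layers.Conv2DTranspose(16, 4, strides=2, padding="same", activation="relu")(d1)
    d0 = layers.Add()([d0, e0])   # skip connection at full resolution

    out = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(d0)  # enhanced image in [0, 1]
    return Model(inp, out)
```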
The submission was tested against a different test set, which partially explains its lower PSNR and MS-SSIM scores. It should be noted that the submission shares the same architecture as this paper's main result, but was trained for only 33k iterations.

Differences between the DPED baseline and our result are somewhat subtle. Our model produces noticeably fewer colored artifacts around hard edges (e.g. Fig. 4, first row, first zoom box), more accurate colors (e.g. the sky in the first row, second box), and reduced noise in smooth shadows (last row, second box); in dense foliage (middle row, first box), it produces more realistic textures than the baseline. Contrast, especially in vertical features (middle row, third box), is often less pronounced, but this comes with the advantage of fewer grid-like artifacts. For more visual results of our method we refer the reader to the Appendix. While these subjective evaluations are clearly in favor of our method, the PSNR and MS-SSIM scores comparing the generated images to the target DSLR photos are less conclusive. PSNR and MS-SSIM appear to be only weakly correlated with MOS [10]. Better perceptual quality metrics, including no-reference ones, could be a promising component of future work.

Conclusion

Thanks to strided convolutions, a promising architecture was found in the quest for efficient photo enhancement on mobile hardware. Our model produces clear, detailed images exceeding the quality of the baseline while requiring only 16% as much computation time. Even though, as evidenced by the PIRM 2018 challenge results [10], further speed improvements will certainly appear in future work, it is reassuring to conclude that convolutional neural network-based image enhancement can already produce high-quality results with performance acceptable for mobile devices.
2,126
1812.11852
2908069757
The vast majority of photos taken today are by mobile phones. While their quality is rapidly growing, due to physical limitations and cost constraints, mobile phone cameras struggle to compare in quality with DSLR cameras. This motivates us to computationally enhance these images. We extend upon the results of , where they are able to translate images from compact mobile cameras into images with comparable quality to high-resolution photos taken by DSLR cameras. However, the neural models employed require large amounts of computational resources and are not lightweight enough to run on mobile devices. We build upon the prior work and explore different network architectures targeting an increase in image quality and speed. With an efficient network architecture which does most of its processing in a lower spatial resolution, we achieve a significantly higher mean opinion score (MOS) than the baseline while speeding up the computation by 6.3 times on a consumer-grade CPU. This suggests a promising direction for neural-network-based photo enhancement using the phone hardware of the future.
Image super-resolution is the task of increasing the resolution of an image; models are usually trained with down-scaled versions of the target image as inputs. Many prior works have tackled it using progressively larger and more complex CNNs @cite_17 @cite_0 @cite_22 @cite_3 @cite_8 @cite_24. Initially, a simple pixel-wise mean squared error (MSE) loss was often used to guarantee high fidelity of the reconstructed images, but this often led to blurry results due to uncertainty in pixel-intensity space. Recent works @cite_25 aim at perceptual quality and employ losses based on VGG layers @cite_13, as well as generative adversarial networks (GANs) @cite_11 @cite_16, which seem to be well suited to generating plausible-looking, realistic high-frequency details.
{ "abstract": [ "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and deconvolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises corruptions. Deconvolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and deconvolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, the skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to deconvolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than recent state-of-the-art methods.", "This paper reviews the 2nd NTIRE challenge on single image super-resolution (restoration of rich details in a low resolution image) with focus on proposed solutions and results. The challenge had 4 tracks. Track 1 employed the standard bicubic downscaling setup, while Tracks 2, 3 and 4 had realistic unknown downgrading operators simulating camera image acquisition pipeline. The operators were learnable through provided pairs of low and high resolution train images. The tracks had 145, 114, 101, and 113 registered participants, resp., and 31 teams competed in the final testing phase. They gauge the state-of-the-art in single image super-resolution.", "Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. 
In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.", "We present a highly accurate single-image superresolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [19]. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (104 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable.", "", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. 
The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "This paper reports on the 2018 PIRM challenge on perceptual super-resolution (SR), held in conjunction with the Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018. In contrast to previous SR challenges, our evaluation methodology jointly quantifies accuracy and perceptual quality, therefore enabling perceptual-driven methods to compete alongside algorithms that target PSNR maximization. Twenty-one participating teams introduced algorithms which well-improved upon the existing state-of-the-art methods in perceptual SR, as confirmed by a human opinion study. We also analyze popular image quality measures and draw conclusions regarding which of them correlates best with human opinion scores. We conclude with an analysis of the current trends in perceptual SR, as reflected from the leading submissions.", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage." ], "cite_N": [ "@cite_11", "@cite_22", "@cite_8", "@cite_3", "@cite_0", "@cite_24", "@cite_16", "@cite_13", "@cite_25", "@cite_17" ], "mid": [ "2099471712", "2964046669", "2915130236", "2476548250", "2242218935", "", "2523714292", "2331128040", "2964060609", "54257720" ] }
Fast Perceptual Image Enhancement
The compact camera sensors found in low-end devices such as mobile phones have come a long way in the past few years. Given adequate lighting conditions, they are able to reproduce unprecedented levels of detail and color. Despite their ubiquity, being used for the vast majority of all photographs taken worldwide, they struggle to come close in image quality to DSLR cameras. These professional-grade instruments have many advantages, including better color reproduction, less noise due to larger sensor sizes, and better automatic tuning of shooting parameters. Furthermore, many photographs were taken in the past decade using significantly inferior hardware, for example with early digital cameras or early-2010s smartphones. These do not hold up well to contemporary tastes and are limited in artistic quality by their technical shortcomings.

The previous work by Ignatov et al. [8] that this paper is based upon proposes a neural-network-powered solution to the aforementioned problems. They use a dataset composed of image patches from various outdoor scenes taken simultaneously by cell phone cameras and a DSLR. They pose an image translation problem, where the low-quality phone image is fed into a residual convolutional neural network (CNN) model that generates a target image, which, once the network is trained, is hopefully perceptually close to the high-quality DSLR target image. In this work, we take a closer look at the problem of translating poor-quality photographs from an iPhone 3GS into high-quality DSLR photos, since this is the most dramatic increase in quality attempted by Ignatov et al. [8]. The computational requirements of this baseline model, however, are quite high (20 s on a high-end CPU and 3.7 GB of RAM for an HD-resolution image). Using a modified generator architecture, we propose a way to decrease this cost while maintaining or improving the resulting image quality.

General Purpose Image-to-Image Translation and Enhancement. The use of GANs has progressed towards the development of general-purpose image-to-image translation. Isola et al. [11] propose a conditional GAN architecture for paired data, where the discriminator is conditioned on the input image. Zhu et al. [28] relax this requirement, introducing the cycle consistency loss, which allows the GAN to train on unpaired data. These two approaches work on many surprising datasets; however, the image quality is too low for our purpose of photo-realistic image enhancement. This is why Ignatov et al. introduce paired [8] and unpaired [9] GAN architectures that are specially designed for this purpose.

Dataset. The DPED dataset [8] consists of photos taken simultaneously by three different cell phone cameras, as well as a Canon 70D DSLR camera. These photographs are aligned and cut into 100 × 100 pixel patches, and compared such that patches that differ too much are rejected. In this work, only the iPhone 3GS data is considered, which results in 160k pairs of images.

Baseline. As a baseline, the residual network with 4 blocks and 64 channels from Ignatov et al. [8] is used. Since a simple pixel-wise distance metric does not yield the intended perceptual quality, the output of the network is evaluated using four carefully designed loss functions. The generated image is compared to the target high-quality DSLR image using the color loss and the content loss. The same four losses and training setup as the baseline are also used by us in this work.
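For orientation, the baseline generator can be sketched roughly as follows. Apart from "four residual blocks with 64 channels", which is stated above, every detail of this sketch (the 9 × 9 head and tail convolutions, batch normalization inside the blocks, and the tanh output mapped to [0, 1]) is an assumption made for illustration rather than a fact taken from [8].

```python
# Rough illustration of a 4-block, 64-channel residual generator in the spirit of the
# DPED baseline; head/tail kernels, normalization and output range are assumptions.
import torch
import torch.nn as nn

class BaselineResBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)

class BaselineGenerator(nn.Module):
    def __init__(self, blocks=4, ch=64):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, ch, 9, padding=4), nn.ReLU(inplace=True))
        self.body = nn.Sequential(*[BaselineResBlock(ch) for _ in range(blocks)])
        self.tail = nn.Conv2d(ch, 3, 9, padding=4)

    def forward(self, x):
        # map the tanh output into [0, 1] to match image intensities
        return torch.tanh(self.tail(self.body(self.head(x)))) * 0.5 + 0.5

# e.g. enhanced = BaselineGenerator()(torch.rand(1, 3, 100, 100))
```

Because every layer here runs at full spatial resolution with 64 channels, this design is the computational reference point that the strided architecture discussed earlier is meant to improve upon.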
2,126
1812.11852
2908069757
The vast majority of photos taken today are by mobile phones. While their quality is rapidly growing, due to physical limitations and cost constraints, mobile phone cameras struggle to compare in quality with DSLR cameras. This motivates us to computationally enhance these images. We extend upon the results of , where they are able to translate images from compact mobile cameras into images with comparable quality to high-resolution photos taken by DSLR cameras. However, the neural models employed require large amounts of computational resources and are not lightweight enough to run on mobile devices. We build upon the prior work and explore different network architectures targeting an increase in image quality and speed. With an efficient network architecture which does most of its processing in a lower spatial resolution, we achieve a significantly higher mean opinion score (MOS) than the baseline while speeding up the computation by 6.3 times on a consumer-grade CPU. This suggests a promising direction for neural-network-based photo enhancement using the phone hardware of the future.
In colorization, the aim is to hallucinate a color for each pixel, given only its luminosity. Models are trained on images with their color artificially removed. Isola et al. @cite_15 achieve state-of-the-art performance using a GAN to solve the more general problem of image-to-image translation.
{ "abstract": [ "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either." ], "cite_N": [ "@cite_15" ], "mid": [ "2552465644" ] }
Fast Perceptual Image Enhancement
2,126
1812.11852
2908069757
The vast majority of photos taken today are by mobile phones. While their quality is rapidly growing, due to physical limitations and cost constraints, mobile phone cameras struggle to compare in quality with DSLR cameras. This motivates us to computationally enhance these images. We extend upon the results of , where they are able to translate images from compact mobile cameras into images with comparable quality to high-resolution photos taken by DSLR cameras. However, the neural models employed require large amounts of computational resources and are not lightweight enough to run on mobile devices. We build upon the prior work and explore different network architectures targeting an increase in image quality and speed. With an efficient network architecture which does most of its processing in a lower spatial resolution, we achieve a significantly higher mean opinion score (MOS) than the baseline while speeding up the computation by 6.3 times on a consumer-grade CPU. This suggests a promising direction for neural-network-based photo enhancement using the phone hardware of the future.
Deblurring and dehazing aim to remove optical distortions from photos that were taken out of focus, while the camera was moving, or of faraway geographical or astronomical features. The neural models employed are CNNs, typically trained on images with artificially added blur or haze using an MSE loss function @cite_26 @cite_23 @cite_5 @cite_14 @cite_9. Recently, datasets with both hazy and haze-free images were introduced @cite_4, and solutions such as the one of Ki @cite_6 were proposed, which use a GAN in addition to L1 and perceptual losses. Similar techniques are also effective for related restoration tasks such as denoising and rain removal @cite_27 @cite_18 @cite_19 @cite_10.
{ "abstract": [ "The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.", "In this work we address the problem of blind deconvolution and denoising. We focus on restoration of text documents and we show that this type of highly structured data can be successfully restored by a convolutional neural network. The networks are trained to reconstruct high-quality images directly from blurry inputs without assuming any specific blur and noise models. We demonstrate the performance of the convolutional networks on a large set of text documents and on a combination of realistic de-focus and camera shake blur kernels. On this artificial data, the convolutional networks significantly outperform existing blind deconvolution methods, including those optimized for text, in terms of image quality and OCR accuracy. In fact, the networks outperform even state-of-the-art non-blind methods for anything but the lowest noise levels. The approach is validated on real photos taken by various devices.", "The performance of existing image dehazing methods is limited by hand-designed features, such as the dark channel, color disparity and maximum contrast, with complex fusion schemes. In this paper, we propose a multi-scale deep neural network for single-image dehazing by learning the mapping between hazy images and their corresponding transmission maps. The proposed algorithm consists of a coarse-scale net which predicts a holistic transmission map based on the entire image, and a fine-scale net which refines results locally. To train the multi-scale deep network, we synthesize a dataset comprised of hazy images and corresponding transmission maps based on the NYU Depth dataset. Extensive experiments demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of quality and speed.", "This paper reviews the first challenge on image dehazing (restoration of rich details in hazy image) with focus on proposed solutions and results. The challenge had 2 tracks. Track 1 employed the indoor images (using I-HAZE dataset), while Track 2 outdoor images (using O-HAZE dataset). The hazy images have been captured in presence of real haze, generated by professional haze machines. 
I-HAZE dataset contains 35 scenes that correspond to indoor domestic environments, with objects with different colors and specularities. O-HAZE contains 45 different outdoor scenes depicting the same visual content recorded in haze-free and hazy conditions, under the same illumination parameters. The dehazing process was learnable through provided pairs of haze-free and hazy train images. Each track had 120 registered participants and 21 teams competed in the final testing phase. They gauge the state-of-the-art in image dehazing.", "Single image haze removal is a challenging ill-posed problem. Existing methods use various constraints priors to get plausible dehazing solutions. The key to achieve haze removal is to estimate a medium transmission map for an input hazy image. In this paper, we propose a trainable end-to-end system called DehazeNet, for medium transmission estimation. DehazeNet takes a hazy image as input, and outputs its medium transmission map that is subsequently used to recover a haze-free image via atmospheric scattering model. DehazeNet adopts convolutional neural network-based deep architecture, whose layers are specially designed to embody the established assumptions priors in image dehazing. Specifically, the layers of Maxout units are used for feature extraction, which can generate almost all haze-relevant features. We also propose a novel nonlinear activation function in DehazeNet, called bilateral rectified linear unit, which is able to improve the quality of recovered haze-free image. We establish connections between the components of the proposed DehazeNet and those used in existing methods. Experiments on benchmark images show that DehazeNet achieves superior performance over existing methods, yet keeps efficient and easy to use.", "A receptive field is defined as the region in an input image space that an output image pixel is looking at. Thus, the receptive field size influences the learning of deep convolution neural networks. Especially, in single image dehazing problems, larger receptive fields often show more effective dehazying by considering the brightness and color of the entire input hazy image without additional information (e.g. scene transmission map, depth map, and atmospheric light). The conventional generative adversarial network (GAN) with small-sized receptive fields cannot be effective for hazy images of ultra-high resolution. Thus, we proposed a fully end-to-end learning based conditional boundary equilibrium generative adversarial network (BEGAN) with the receptive field sizes enlarged for single image dehazing. In our conditional BEGAN, its discriminator is trained ultra-high resolution conditioned on downscale input hazy images, so that the haze can effectively be removed with the original structures of images stably preserved. From this, we can obtain the high PSNR performance (Track 1 - Indoor: top 4th-ranked) and fast computation speeds. Also, we combine an L1 loss, a perceptual loss and a GAN loss as the generator's loss of the proposed conditional BEGAN, which allows to obtain stable dehazing results for various hazy images.", "In this paper, we address a rain removal problem from a single image, even in the presence of heavy rain and rain accumulation. Our core ideas lie in our new rain image models and a novel deep learning architecture. We first modify the commonly used model, which is a linear combination of a rain streak layer and a background layer, by adding a binary map that locates rain streak regions. 
Second, we create a model consisting of a component representing rain accumulation (where individual streaks cannot be seen, and thus visually similar to mist or fog), and another component representing various shapes and directions of overlapping rain streaks, which normally happen in heavy rain. Based on the first model, we develop a multi-task deep learning architecture that learns the binary rain streak map, the appearance of rain streaks, and the clean background, which is our ultimate output. The additional binary map is critically beneficial, since its loss function can provide additional strong information to the network. In many cases though, rain streaks can be dense and large in their size, thus to obtain the clean background, we need spatial contextual information. For this, we utilize the dilated convolution. To handle rain accumulation (again, a phenomenon visually similar to mist or fog) and various shapes and directions of overlapping rain streaks, we propose an iterative information feedback (IIF) network that removes rain streaks and clears up the rain accumulation iteratively and progressively. Overall, this multi-task learning and iterative information feedback benefits each other and constitutes a network that is end-to-end trainable. Our extensive evaluation on real images, particularly on heavy rain, shows the effectiveness of our novel models and architecture, outperforming the state-of-the-art methods significantly.", "We propose a depth image denoising and enhancement framework using a light convolutional network. The network contains three layers for high dimension projection, missing data completion and image reconstruction. We jointly use both depth and visual images as inputs. For the gray image, we design a pre-processing procedure to enhance the edges and remove unnecessary detail. For the depth image, we propose a data augmentation strategy to regenerate and increase essential training data. Further, we propose a weighted loss function for network training to adaptively improve the learning efficiency. We tested our algorithm on benchmark data and obtained very promising visual and quantitative results at real-time speed.", "", "State-of-the-art single image dehazing algorithms have some challenges to deal with images captured under complex weather conditions because their assumptions usually do not hold in those situations. In this paper, we develop a deep transmission network for robust single image dehazing. This deep transmission network simultaneously copes with three color channels and local patch information to automatically explore and exploit haze-relevant features in a learning framework. We further explore different network structures and parameter settings to achieve tradeoffs between performance and speed, which shows that color channels information is the most useful haze-relevant feature rather than local information. Experiment results demonstrate that the proposed algorithm outperforms state-of-the-art methods on both synthetic and real-world datasets.", "This paper shows that it is possible to train large and deep convolutional neural networks (CNN) for JPEG compression artifacts reduction, and that such networks can provide significantly better reconstruction quality compared to previously used smaller networks as well as to any other state-of-the-art methods. We were able to train networks with 8 layers in a single step and in relatively short time by combining residual learning, skip architecture, and symmetric weight initialization. 
We provide further insights into convolution networks for JPEG artifact reduction by evaluating three different objectives, generalization with respect to training dataset size, and generalization with respect to JPEG quality level." ], "cite_N": [ "@cite_18", "@cite_14", "@cite_26", "@cite_4", "@cite_9", "@cite_6", "@cite_19", "@cite_27", "@cite_23", "@cite_5", "@cite_10" ], "mid": [ "2508457857", "2319561215", "2519481857", "2899026108", "2256362396", "2899857045", "2525037006", "2402704303", "", "2508992006", "2345337169" ] }
Fast Perceptual Image Enhancement
The compact camera sensors found in low-end devices such as mobile phones have come a long way in the past few years. Given adequate lighting conditions, they are able to reproduce unprecedented levels of detail and color. Despite their ubiquity, being used for the vast majority of all photographs taken worldwide, they struggle to come close in image quality to DSLR cameras. These professional grade instruments have many advantages including better color reproduction, less noise due to larger sensor sizes, and better automatic tuning of shooting parameters. Furthermore, many photographs were taken in the past decade using significantly inferior hardware, for example with early digital cameras or early 2010s smartphones. These do not hold up well to our contemporary tastes and are limited in artistic quality by their technical shortcomings. The previous work by Ignatov et al. [8] that this paper is based upon proposes a neural-network powered solution to the aforementioned problems. They use a dataset comprised of image patches from various outdoor scenes simultaneously taken by cell phone cameras and a DSLR. They pose an image translation problem, where they feed the low-quality phone image into a residual convolutional neural net (CNN) model that generates a target image, which, when the network is trained, is hopefully perceptually close to the high-quality DSLR target image. In this work, we take a closer look at the problem of translating poor quality photographs from an iPhone 3GS phone into high-quality DSLR photos, since this is the most dramatic increase in quality attempted by Ignatov et al. [8]. The computational requirements of this baseline model, however, are quite high (20 s on a high-end CPU and 3.7 GB of RAM for a HD-resolution image). Using a modified generator architecture, we propose a way to decrease this cost while maintaining or improving the resulting image quality. General Purpose Image-to-Image Translation and Enhancement The use of GANs has progressed towards the development of general purpose image-to-image translation. Isola et al. [11] propose a conditional GAN architecture for paired data, where the discriminator is conditioned on the input image. Zhu et al. [28] relax this requirement, introducing the cycle consistency loss which allows the GAN to train on unpaired data. These two approaches work on many surprising datasets, however, the image quality is too low for our purpose of photo-realistic image enhancement. This is why Ignatov et al. introduce paired [8] and unpaired [9] GAN architectures that are specially designed for this purpose. Dataset The DPED dataset [8] consists of photos taken simultaneously by three different cell phone cameras, as well as a Canon 70D DSLR camera. In addition, these photographs are aligned and cut into 100x100 pixel patches, and compared such that patches that differ too much are rejected. In this work, only the iPhone 3GS data is considered. This results in 160k pairs of images. Baseline As a baseline, the residual network with 4 blocks and 64 channels from Ignatov et al. [8] is used. Since using a simple pixel-wise distance metric does not yield the intended perceptual quality results, the output of the network is evaluated using four carefully designed loss functions. The generated image is compared to the target high-quality DSLR image using the color loss and the content loss. The same four losses and training setup as the baseline are also used by us in this work. Color Loss. 
The color loss is computed by applying a Gaussian blur to both source and target images, followed by an MSE function. Let X and Y be the original images; then $X_b$ and $Y_b$ are their blurred versions, obtained as $X_b(i,j) = \sum_{k,l} X(i+k, j+l) \cdot G(k,l)$ (1), where G is the 2D Gaussian blur operator $G(k,l) = A \exp\left(-\frac{(k-\mu_x)^2}{2\sigma_x} - \frac{(l-\mu_y)^2}{2\sigma_y}\right)$ (2). The color loss can then be written as $\mathcal{L}_{color}(X,Y) = \|X_b - Y_b\|_2^2$ (3). We use the same parameters as defined in [8], namely A = 0.053, $\mu_{x,y} = 0$, and $\sigma_{x,y} = 3$. Fig. 1. The overall architecture of the DPED baseline [8]. Content Loss. The content loss is computed by comparing the two images after they have been processed by a certain number of layers of VGG-19. This is superior to a pixel-wise loss such as per-pixel MSE, because it closely resembles human perception [8,26], abstracting away negligible details such as a small shift in pixels. It is also important because it helps preserve the semantics of the image. It is defined as $\mathcal{L}_{content} = \frac{1}{C_j H_j W_j} \|\psi_j(F_W(I_s)) - \psi_j(I_t)\|$ (4), where $\psi_j(\cdot)$ is the feature map of the VGG-19 network after its j-th convolutional layer, $C_j$, $H_j$, and $W_j$ are the number, height, and width of this map, and $F_W(I_s)$ denotes the enhanced image. Texture Loss. One important loss, which technically makes this network a GAN, is the texture loss [8]. Here, the output images are not directly compared to the targets; instead, a discriminator network is tasked with telling apart real DSLR images from fake, generated ones. During training, its weights are optimized for maximum discriminator accuracy, while the generator's weights are optimized in the opposite direction, to try to minimize the discriminator's accuracy, therefore producing convincing fake images. Before the image is fed in, it is first converted to grayscale, as this loss specifically targets texture processing. It can be written as $\mathcal{L}_{texture} = -\sum_i \log D(F_W(I_s), I_t)$ (5), where $F_W$ and D denote the generator and discriminator networks, respectively. Total Variation Loss. A total variation loss is also included, so as to encourage the output image to be spatially smooth and to reduce noise: $\mathcal{L}_{tv} = \frac{1}{CHW} \|\nabla_x F_W(I_s) + \nabla_y F_W(I_s)\|$ (6). Again, C, H, and W are the number of channels, height, and width of the generated image $F_W(I_s)$. It is given a low weight overall. Total Loss. The total loss is a weighted sum of all the losses mentioned above: $\mathcal{L}_{total} = \mathcal{L}_{content} + 0.4 \cdot \mathcal{L}_{texture} + 0.1 \cdot \mathcal{L}_{color} + 400 \cdot \mathcal{L}_{tv}$ (7). Ignatov et al. [8] use the relu5_4 layer of the VGG-19 network, and mention that the above coefficients were chosen in experiments run on the DPED dataset. Experiments and Results Experiments Adjusting Residual CNN Parameters. In order to gain an understanding of the performance properties of the DPED model [8], the baseline's residual CNN was modified in the number of filters (or channels) in each layer, the size of each filter's kernel, and the total number of residual blocks. While reducing the number of blocks was effective at increasing the performance, and decreasing the number of features even more so, this came at a large cost in image quality. Kernel sizes of 5 × 5 were also attempted instead of 3 × 3, but did not provide the quality improvements necessary to justify their computational costs. In Fig. 2 and Table 1, a frontier can be seen beyond which this simple architecture tuning cannot reach. More sophisticated improvements must therefore be explored. 
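Returning to the color loss defined in Eqs. (1)-(3) above, the following is a minimal NumPy/SciPy sketch of how it can be computed. This is not the authors' implementation; the kernel size and the use of a per-pixel mean (rather than a plain sum) are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d  # used only for the 2D blur in this sketch


def gaussian_kernel(size=21, sigma=3.0, amplitude=0.053):
    """Blur kernel G(k, l) from Eq. (2); the kernel size is an assumption."""
    ax = np.arange(size) - size // 2
    kx, ky = np.meshgrid(ax, ax)
    return amplitude * np.exp(-(kx ** 2) / (2 * sigma) - (ky ** 2) / (2 * sigma))


def blur(image, kernel):
    """Eq. (1): convolve each color channel with G."""
    return np.stack(
        [convolve2d(image[..., c], kernel, mode="same") for c in range(image.shape[-1])],
        axis=-1,
    )


def color_loss(x, y, kernel):
    """Eq. (3): squared distance between the blurred images (per-pixel mean here)."""
    return float(np.mean((blur(x, kernel) - blur(y, kernel)) ** 2))


# Usage on two random 100x100 RGB patches (the DPED patch size):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, y = rng.random((100, 100, 3)), rng.random((100, 100, 3))
    print(color_loss(x, y, gaussian_kernel()))
```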
Parametric ReLU. Parametric ReLU [6] is an activation function defined as $\text{PReLU}(y_i) = \begin{cases} y_i, & \text{if } y_i > 0 \\ a_i y_i, & \text{if } y_i \le 0 \end{cases}$ (8), where $y_i$ is the i-th element of the feature vector, and $a_i$ is the i-th element of the learned PReLU parameter vector. This permits the network to learn a slope for the ReLU activation function instead of leaving it at a constant 0 for negative inputs. In theory, this should cause the network to learn faster, prevent ReLUs from going dormant, and overall provide more power for the network at a small performance cost. In practice, though (see an example in Table 2), this cost was higher than hoped, and it did not perceptibly increase the image quality. Strided and Transposed Convolutions. In order to reduce the computation time requirements more drastically, a change to the original architecture was implemented, where the spatial resolution of the feature maps is halved, and subsequently halved again, using strided convolutional layers. At the same time, each of these strided layers doubles the number of feature maps, as suggested by Johnson et al. [12]. This down-sampling operation is followed by two residual blocks at this new, 4× reduced resolution, which is then followed by transposed (fractionally strided) convolution layers, which scale the feature map back up to its original resolution using a trainable up-sampling convolution. At each resolution, the previous feature maps of the same resolution are added to the new maps through skip connections, in order to help the network learn simple, non-destructive transformations like the identity function. This new architecture introduced slight checkerboard artifacts related to the upscaling process, but overall, it allowed for a much faster model without the loss in quality associated with the more straightforward approaches previously described. Table 2 summarizes the quantitative results for several configurations. Table 2. Average PSNR/SSIM results on DPED test images, using the proposed strided architecture with varying parameters. The best configuration we propose, line 3, was chosen as a compromise between quality and speed. The best result we achieved was with this new strided approach. The generator architecture is shown in Fig. 3. We chose a kernel size of 3 × 3, except in the strided convolutional layers, where we opted for 4 × 4 instead, in order to mitigate the checkerboard artifacts. The number of feature maps starts at 16 and increases up to 64 in the middle of the network. We trained the network for 40k iterations using an Adam optimizer and a batch size of 50. Results Our network takes only 3.2 s of CPU time to enhance a 1280 × 720 px image, compared to the baseline's 20.5 s. This represents a 6.3-fold speedup. Additionally, the amount of RAM required is reduced from 3.7 GB to 2.3 GB. As part of the PIRM 2018 challenge on perceptual image enhancement on smartphones [10], a user study was conducted where 2000 people were asked to rate the visual results (photos) of the solutions submitted by challenge participants. The users were able to rate each photo with scores of 1, 2, 3 and 4, ranging from low to high visual quality. The average of all user ratings was then computed and considered as the MOS score of each solution. With a MOS of 2.6523, our submission (see Table 3) scored significantly higher than the DPED baseline (2.4411) and was second only to the winning submission, which scored 2.6804. 
The submission was tested against a different test set, which partially explains its lower PSNR and MS-SSIM scores. It should be noted that the submission shares the same architecture as this paper's main result, but was trained for only 33k iterations. Differences between the DPED baseline and our result are somewhat subtle. Our model produces noticeably fewer colored artifacts around hard edges (e.g. Fig. 4, first row, first zoom box), more accurate colors (e.g. the sky in first row, second box), as well as reduced noise in smooth shadows (last row, second box), and in dense foliage (middle row, first box), it produces more realistic textures than the baseline. Contrast, especially in vertical features (middle row, third box), is often less pronounced. However, this comes with the advantage of fewer grid-like artifacts. For more visual results of our method we refer the reader to the Appendix. While these subjective evaluation methods are clearly in favor of our method, the PSNR and MS-SSIM scores comparing the generated images to the target DSLR photos are less conclusive. PSNR and MS-SSIM seem to be only weakly correlated with MOS [10]. Better perceptual quality metrics including ones requiring no reference images might be a promising component of future works. Conclusion Thanks to strided convolutions, a promising architecture was found in the quest for efficient photo enhancement on mobile hardware. Our model produces clear, detailed images exceeding the quality of the baseline, while only requiring 16 % as much computation time. Even though, as evidenced by the PIRM 2018 challenge results [10], further speed improvements will definitely be seen in future works, it is reassuring to conclude that convolutional neural network-based image enhancement can already produce high quality results with performance acceptable for mobile devices.
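As a concrete recap of the strided generator described in the experiments above, here is a minimal PyTorch sketch. It is not the authors' released code: the 16 to 32 to 64 channel progression, the 4×4 stride-2 kernels, and the two residual blocks at 1/4 resolution follow the text, while the exact layer counts, padding choices, and the sigmoid output are assumptions.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """3x3 residual block used at the 4x-reduced resolution."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)


class StridedGenerator(nn.Module):
    """Encoder-decoder generator: two stride-2 downsampling convs (16 -> 32 -> 64 maps),
    two residual blocks at 1/4 resolution, two transposed convs back up, and additive
    skip connections at each resolution."""
    def __init__(self):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True))
        self.down1 = nn.Sequential(nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.down2 = nn.Sequential(nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.body = nn.Sequential(ResidualBlock(64), ResidualBlock(64))
        self.up1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.up2 = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.tail = nn.Conv2d(16, 3, 3, padding=1)

    def forward(self, x):
        f0 = self.head(x)                    # full resolution, 16 maps
        f1 = self.down1(f0)                  # 1/2 resolution, 32 maps
        f2 = self.down2(f1)                  # 1/4 resolution, 64 maps
        u1 = self.up1(self.body(f2)) + f1    # skip connection at 1/2 resolution
        u0 = self.up2(u1) + f0               # skip connection at full resolution
        return torch.sigmoid(self.tail(u0))


# Shape check on an HD input (720 and 1280 are divisible by 4):
# StridedGenerator()(torch.randn(1, 3, 720, 1280)).shape -> torch.Size([1, 3, 720, 1280])
```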
2,126
1812.11852
2908069757
The vast majority of photos taken today are by mobile phones. While their quality is rapidly growing, due to physical limitations and cost constraints, mobile phone cameras struggle to compare in quality with DSLR cameras. This motivates us to computationally enhance these images. We extend upon the results of , where they are able to translate images from compact mobile cameras into images with comparable quality to high-resolution photos taken by DSLR cameras. However, the neural models employed require large amounts of computational resources and are not lightweight enough to run on mobile devices. We build upon the prior work and explore different network architectures targeting an increase in image quality and speed. With an efficient network architecture which does most of its processing in a lower spatial resolution, we achieve a significantly higher mean opinion score (MOS) than the baseline while speeding up the computation by 6.3 times on a consumer-grade CPU. This suggests a promising direction for neural-network-based photo enhancement using the phone hardware of the future.
The use of GANs has progressed towards the development of general purpose image-to-image translation. Isola @cite_15 propose a conditional GAN architecture for paired data, where the discriminator is conditioned on the input image. Zhu @cite_20 relax this requirement, introducing the cycle consistency loss which allows the GAN to train on unpaired data. These two approaches work on many surprising datasets, however, the image quality is too low for our purpose of photo-realistic image enhancement. This is why Ignatov introduce paired @cite_7 and unpaired @cite_12 GAN architectures that are specially designed for this purpose.
{ "abstract": [ "Low-end and compact mobile cameras demonstrate limited photo quality mainly due to space, hardware and budget constraints. In this work, we propose a deep learning solution that translates photos taken by cameras with limited capabilities into DSLR-quality photos automatically. We tackle this problem by introducing a weakly supervised photo enhancer (WESPE) - a novel image-to-image Generative Adversarial Network-based architecture. The proposed model is trained by under weak supervision: unlike previous works, there is no need for strong supervision in the form of a large annotated dataset of aligned original enhanced photo pairs. The sole requirement is two distinct datasets: one from the source camera, and one composed of arbitrary high-quality images that can be generally crawled from the Internet - the visual content they exhibit may be unrelated. Hence, our solution is repeatable for any camera: collecting the data and training can be achieved in a couple of hours. In this work, we emphasize on extensive evaluation of obtained results. Besides standard objective metrics and subjective user study, we train a virtual rater in the form of a separate CNN that mimics human raters on Flickr data and use this network to get reference scores for both original and enhanced photos. Our experiments on the DPED, KITTI and Cityscapes datasets as well as pictures from several generations of smartphones demonstrate that WESPE produces comparable or improved qualitative results with state-of-the-art strongly supervised methods.", "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.", "Despite a rapid rise in the quality of built-in smartphone cameras, their physical limitations – small sensor size, compact lenses and the lack of specific hardware, – impede them to achieve the quality results of DSLR cameras. In this work we present an end-to-end deep learning approach that bridges this gap by translating ordinary photos into DSLR-quality images. We propose learning the translation function using a residual convolutional neural network that improves both color rendition and image sharpness. Since the standard mean squared loss is not well suited for measuring perceptual image quality, we introduce a composite perceptual error function that combines content, color and texture losses. The first two losses are defined analytically, while the texture loss is learned in an adversarial fashion. We also present DPED, a large-scale dataset that consists of real photos captured from three different phones and one high-end reflex camera. 
Our quantitative and qualitative assessments reveal that the enhanced image quality is comparable to that of DSLR-taken photos, while the methodology is generalized to any type of digital camera.", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach." ], "cite_N": [ "@cite_12", "@cite_15", "@cite_7", "@cite_20" ], "mid": [ "2753918352", "2552465644", "2607202125", "2962793481" ] }
The DPED dataset @cite_7 consists of photos taken simultaneously by three different cell phone cameras, as well as a Canon 70D DSLR camera. In addition, these photographs are aligned and cut into 100x100 pixel patches, and compared such that patches that differ too much are rejected. In this work, only the iPhone 3GS data is considered. This results in 160k pairs of images.
{ "abstract": [ "Despite a rapid rise in the quality of built-in smartphone cameras, their physical limitations – small sensor size, compact lenses and the lack of specific hardware, – impede them to achieve the quality results of DSLR cameras. In this work we present an end-to-end deep learning approach that bridges this gap by translating ordinary photos into DSLR-quality images. We propose learning the translation function using a residual convolutional neural network that improves both color rendition and image sharpness. Since the standard mean squared loss is not well suited for measuring perceptual image quality, we introduce a composite perceptual error function that combines content, color and texture losses. The first two losses are defined analytically, while the texture loss is learned in an adversarial fashion. We also present DPED, a large-scale dataset that consists of real photos captured from three different phones and one high-end reflex camera. Our quantitative and qualitative assessments reveal that the enhanced image quality is comparable to that of DSLR-taken photos, while the methodology is generalized to any type of digital camera." ], "cite_N": [ "@cite_7" ], "mid": [ "2607202125" ] }
1812.11326
2800824532
The development of self-interference (SI) cancelation technology makes full-duplex (FD) communication possible. Considering the quality of service (QoS) of flows in small cells densely deployed scenario with limited time slot resources, this paper introduces the FD communication into the concurrent scheduling problem of millimeter-wave wireless backhaul network. We propose a QoS-aware FD concurrent scheduling algorithm to maximize the number of flows with their QoS requirements satisfied. Based on the contention graph, the algorithm makes full use of the FD condition. Both residual SI and multi-user interference are considered. Besides, it also fully considers the QoS requirements of flows and ensures the flows can be transmitted at high rates. Extensive simulations at 60 GHz demonstrate that with high SI cancelation level and appropriate contention threshold, the proposed FD algorithm can achieve superior performance in terms of the number of flows with their QoS requirements satisfied and the system throughput compared with other state-of-the-art schemes.
Compared with the serial TDMA scheme, concurrent transmission scheduling can significantly increase the system throughput, and thus has been extensively studied @cite_7 , @cite_0 - @cite_19 . Cai @cite_0 proposed a scheduling algorithm based on exclusive regions to support concurrent transmissions. To maximize the number of flows scheduled in the network so that the QoS requirement of each flow is satisfied, Qiao @cite_4 proposed a flip-based scheduling algorithm. In @cite_7 , Zhu proposed a Maximum QoS-aware Independent Set (MQIS) based scheduling algorithm for mmWave backhaul networks to maximize the number of flows with their QoS requirements satisfied. In MQIS, concurrent transmission and the QoS-aware priority are exploited to achieve more successfully scheduled flows and higher network throughput. In @cite_18 , based on a Stackelberg game, Li proposed a distributed transmission power control solution for concurrent transmission scheduling between interfering D2D links to further enhance the network throughput. Niu @cite_19 proposed an energy-efficient scheduling scheme for the mmWave backhaul network, which exploits concurrent transmissions to achieve higher energy efficiency. However, all the above scheduling algorithms assume the devices are HD.
{ "abstract": [ "Millimeter wave (mmWave) communication has been a promising technology of future fifth generation (5G) cellular networks. Due to the tremendous propagation loss of mmWave communication, device-to-device (D2D) communications are widely used over directional mmWave networks to improve the network throughput. In this paper, a new time resource sharing scheme is proposed based on Stackelberg game for interference D2D links to further enhance the network throughput. The D2D links causing interference can access to the time resource by paying higher price, while the D2D links causing no interference can also be scheduled in the scheme. Concurrent transmission scheduling between D2D links causing interference is formulated as a non-cooperative game, which achieves a distributed transmission power control solution among the interference D2D links. Moreover, the price strategy can be adjusted by setting the interference threshold such that the transmission quality can be guaranteed. The simulation results show that the proposed scheme can achieve significant network throughput gain compared with traditional concurrent transmission scheme.", "In this paper, a concurrent transmission scheduling algorithm is proposed to enhance the resource utilization efficiency for multi-Gbps millimeter-wave (mmWave) networks. Specifically, we exploit spatial-time division multiple access (STDMA) to improve the system throughput by allowing both non-interfering and interfering links to transmit concurrently, considering the high propagation loss at mmWave band and the utilization of directional antenna. Concurrent transmission scheduling in mmWave networks is formulated as an optimization model to maximize the number of flows scheduled in the network such that the quality of service (QoS) requirement of each flow is satisfied. We further decompose the optimization problem and propose a flip-based heuristic scheduling algorithm with low computational complexity to solve the problem. Extensive simulations demonstrate that the proposed algorithm can significantly improve the network performance in terms of network throughput and the number of supported flows.", "With the explosive growth of mobile data demand, small cells densely deployed underlying the homogeneous macro-cells are emerging as a promising candidate for the fifth generation (5G) mobile network. The backhaul communication for small cells poses a significant challenge, and with huge bandwidth available in the mmWave band, the wireless backhaul at mmWave frequencies can be a promising backhaul solution for small cells. In this paper, we propose the Maximum QoS-aware Independent Set (MQIS) based scheduling algorithm for the mmWave backhaul network of small cells to maximize the number of flows with their QoS requirements satisfied. In the algorithm, concurrent transmissions and the QoS aware priority are exploited to achieve more successfully scheduled flows and higher network throughput. Simulations in the 73 GHz band are conducted to demonstrate the superior performance of our algorithm in terms of the number of successfully scheduled flows and the system throughput compared with other existing schemes.", "Millimeter-wave (mmWave) transmissions are promising technologies for high data rate (multi-Gbps) Wireless Personal Area Networks (WPANs). In this paper, we first introduce the concept of exclusive region (ER) to allow concurrent transmissions to explore the spatial multiplexing gain of wireless networks. 
Considering the unique characteristics of mmWave communications and the use of omni-directional or directional antennae, we derive the ER conditions which ensure that concurrent transmissions can always outperform serial TDMA transmissions in a mmWave WPAN. We then propose REX, a randomized ER based scheduling scheme, to decide a set of senders that can transmit simultaneously. In addition, the expected number of flows that can be scheduled for concurrent transmissions is obtained analytically. Extensive simulations are conducted to validate the analysis and demonstrate the effectiveness and efficiency of the proposed REX scheduling scheme. The results should provide important guidelines for future deployment of mmWave based WPANs.", "Heterogeneous cellular networks (HCNs) are emerging as a promising candidate for the fifth-generation (5G) mobile network. With base stations (BSs) of small cells densely deployed, the cost-effective, flexible, and green backhaul solution has become one of the most urgent and critical challenges. With vast amounts of spectrum available, wireless backhaul in the millimeter-wave (mmWave) band is able to provide transmission rates of several gigabits per second. The mmWave backhaul utilizes beamforming to achieve directional transmission, and concurrent transmissions under low interlink interference can be enabled to improve network capacity. To achieve an energy-efficient solution for mmWave backhauling, we first formulate the problem of minimizing the energy consumption via concurrent transmission scheduling and power control into a mixed integer nonlinear program (MINLP). Then, we develop an energy-efficient and practical mmWave backhauling scheme, which consists of the maximum independent set (MIS)-based scheduling algorithm and the power control algorithm. We also theoretically analyze the conditions that our scheme reduces energy consumption, as well as the choice of the interference threshold. Through extensive simulations under various traffic patterns and system parameters, we demonstrate the superior performance of our scheme in terms of energy efficiency and analyze the choice of the interference threshold under different traffic loads, BS distributions, and the maximum transmission power." ], "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_0", "@cite_19" ], "mid": [ "2741082453", "2042822440", "2501770904", "2118060203", "2234862916" ] }
QoS-aware Full-duplex Concurrent Scheduling for Millimeter Wave Wireless Backhaul Networks
In the fifth generation (5G) mobile cellular network, due to the densification of small cells, the massive backhaul traffic becomes a significant problem [1], [2]. Compared with the fiber-based backhaul network, the wireless backhaul network in millimeter-wave (mmWave) bands also has huge bandwidth, and can provide a more cost-effective and flexible solution to this problem [3]. In the mmWave wireless backhaul network, directional antennas and beamforming techniques are often used to compensate for the high path loss [4], [5]. The directional communication can reduce the interference between different flows, and thus concurrent transmissions (i.e. spatial reuse) of flows become possible. Concurrent transmissions can significantly increase the system throughput [6]. However, the concurrent transmissions of multiple flows result in higher mutual interference, which will conversely degrade the system performance. Therefore, how to efficiently schedule the flows transmitted concurrently is worth studying and has attracted considerable interest from researchers [2], [7]-[10]. Most existing concurrent scheduling schemes [2], [7]-[10] in mmWave bands hold the assumption of half-duplex (HD). Recently, with the development of self-interference (SI) cancelation technology [11]-[15], it has become possible to enable full-duplex (FD) communication in mmWave wireless backhaul networks [16]. Here, the SI means the transmitted signal received by the local receiver at the same base station (BS) [15], which is shown in Figure 1. It seriously affects the performance of the FD system [17]. By transmitting and receiving information simultaneously at the same BS over the same frequency [11], [17], FD communication may theoretically double the spectral efficiency [18], which brings an important opportunity for the concurrent scheduling problem in mmWave wireless backhaul networks. However, the SI can't be completely eliminated in practice, and there is still residual self-interference (RSI) in the system. Therefore, for the FD backhaul system, the interference we need to consider is more complex than that in the HD system: not only multi-user interference (MUI), but also RSI. This is a big challenge for the concurrent scheduling problem in mmWave backhaul networks. Moreover, in the future 5G mmWave backhaul network, many applications are bandwidth-intensive (e.g. uncompressed video streaming), and should be provided with multi-Gbps throughput [8]. The data flows of these applications all have their own minimum throughput requirements. In the rest of this paper, the minimum throughput requirements will be referred to as the quality of service (QoS) requirements. To guarantee the required quality of service, the QoS requirements of flows need to be satisfied [2]. Although in [16] the FD communication was introduced into the scheduling scheme for 5G mmWave backhaul networks, the scheduling solution was designed for the case with sufficient time slot (TS) resources. The QoS requirements were not specially considered in [16]. Therefore, for the case where the TS resources are limited compared with the intensive traffic demands of users [2], [8], how to satisfy the QoS requirements of as many flows as possible is still a challenge. The above opportunities and challenges motivate us to investigate a QoS-aware FD concurrent scheduling scheme for the mmWave wireless backhaul network with limited TS resources. The contributions of this paper can be summarized as follows. 
• We innovatively introduce the FD technology into the concurrent scheduling problem of mmWave wireless backhaul networks with a limited number of TSs. Both RSI and MUI are simultaneously taken into account so that the advantages of the FD technology and the concurrent transmission can be brought into full play. • The QoS requirements of flows in the case where the TS resources are limited are specially considered. We formulate a nonlinear integer programming (NLIP) problem aiming at maximizing the number of flows with their QoS requirements satisfied. Then, a QoS-aware FD scheduling algorithm is proposed, which can keep the flow rate high and satisfy the QoS requirements of as many flows as possible. • We evaluate the proposed algorithm in the 60 GHz mmWave wireless backhaul network with limited TS resources. The extensive simulations demonstrate that compared with other state-of-the-art algorithms, the proposed QoS-aware FD algorithm can significantly improve the number of flows with their QoS requirements satisfied and the total system throughput. Furthermore, we also analyze the impact of the SI cancelation level and the contention threshold on the performance improvement. The structure of this paper is organized as follows. Section II introduces the related work. Section III introduces the system overview and assumptions. In Section IV, the optimal concurrent scheduling problem in FD mmWave wireless backhaul networks with limited TSs is formulated as an NLIP. In Section V, a QoS-aware FD concurrent scheduling algorithm is proposed. In Section VI, we conduct extensive simulations, and in Section VII we conclude this paper. III. SYSTEM OVERVIEW AND ASSUMPTION In this paper, we consider a typical FD mmWave wireless backhaul network in the small cells densely deployed scenario. As shown in Figure 2, the network includes N BSs. The BSs are connected through backhaul links in the mmWave band. When there are some traffic demands from one BS to another, we say there is a flow between them. As shown in Figure 3, each BS operates in FD mode and is equipped with two steerable directional antennas: one for transmitting and another for receiving. Therefore, a BS can simultaneously support at most two flows. It can simultaneously serve as the transmitter of one flow and the receiver of another, but it can't simultaneously serve as the transmitter or the receiver of both flows. There are one or more BSs connected to the backbone network via the macrocell, which are called gateway(s) [10]. A backhaul network controller (BNC) resides on one of the gateways, which can synchronize the network, receive the QoS requirements of flows and obtain the locations of BSs [19]. A. The Received Power Since non-line-of-sight (NLOS) transmissions suffer from high attenuation, we use the line-of-sight (LOS) path loss model for mmWave as described in [2]. For flow f, the received signal power at its receiver $r_f$ from its transmitter $t_f$ can be expressed as $P_r(t_f, r_f) = k P_t G_t(t_f, r_f) G_r(t_f, r_f) d_{t_f r_f}^{-n}$ (1), where k is a factor proportional to $\left(\frac{\lambda}{4\pi}\right)^2$, with $\lambda$ denoting the wavelength; $P_t$ denotes the transmission power of the transmitter; $G_t(t_f, r_f)$ denotes the transmitter antenna gain in the direction from $t_f$ to $r_f$, and $G_r(t_f, r_f)$ denotes the receiver antenna gain in the direction from $t_f$ to $r_f$; $d_{t_f r_f}$ denotes the distance between $t_f$ and $r_f$, and n is the path loss exponent [8]. 
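As an illustration of Eq. (1), the following Python sketch evaluates the LOS received power in dB form. The carrier frequency, transmit power, antenna gains, and path loss exponent used in the example are hypothetical values, not parameters taken from this paper.

```python
import math


def received_power_dbm(pt_dbm, gt_db, gr_db, distance_m, wavelength_m, n=2.0):
    """Eq. (1) in dB form: Pr = Pt + Gt + Gr + 10*log10(k) - 10*n*log10(d),
    with k = (wavelength / (4*pi))**2."""
    k_db = 20 * math.log10(wavelength_m / (4 * math.pi))
    return pt_dbm + gt_db + gr_db + k_db - 10 * n * math.log10(distance_m)


# Example: 60 GHz carrier (wavelength ~5 mm), 30 dBm transmit power,
# 20 dB main-lobe gains at both ends, 100 m link, n = 2 (all assumed values).
wavelength = 3e8 / 60e9
print(received_power_dbm(30, 20, 20, 100, wavelength, n=2.0))
```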
According to the FD assumption mentioned above, two flows scheduled simultaneously either have no common node or one's transmitter is the receiver of the other. Therefore, the interference between different flows can be divided into two cases: 1) the interference between two flows without any common node, namely, MUI; 2) the RSI after SI cancelation. The MUI caused by the transmitter t_l of flow l to the receiver r_f of flow f is defined as
P_r(t_l, r_f) = ρ k P_t G_t(t_l, r_f) G_r(t_l, r_f) d_{t_l r_f}^{-n},  (2)
where ρ is the MUI factor between different flows, which is related to the cross correlation of signals from different flows [2]. According to [11], after SI cancelation, the effect of RSI can be modeled in terms of the SNR loss. Therefore, we can use β_n N_0 W to denote the RSI, where the non-negative parameter β_n represents the SI cancelation level of the nth BS: the smaller β_n, the higher the level of SI cancelation. Due to various factors, we assume the parameters of different BSs are different. N_0 is the one-sided power spectral density of white Gaussian noise, and W is the channel bandwidth.
B. Data Rate
With the reduction of the multipath effect, the mmWave channel can be approximated as a Gaussian channel. With the interference from other flows, the data rate of flow f can be estimated according to Shannon's channel capacity [10].
C. Antenna Model
In this paper, we adopt the realistic antenna model in [10]. The gain of a directional antenna in units of dB can be expressed as
G(θ) = G_0 − 3.01 × (2θ / θ_{-3dB})^2,  for 0° ≤ θ ≤ θ_ml/2;   G(θ) = G_sl,  for θ_ml/2 < θ ≤ 180°,  (3)
where θ denotes an angle within the range [0°, 180°]. The maximum antenna gain G_0 can be calculated as G_0 = 10 log(1.6162 / sin(θ_{-3dB}/2))^2, where θ_{-3dB} is the angle of the half-power beamwidth. The main lobe width θ_ml in units of degrees can be calculated as θ_ml = 2.6 × θ_{-3dB}. The sidelobe gain is G_sl = −0.4111 × ln(θ_{-3dB}) − 10.579 [10].
IV. PROBLEM FORMULATION
In this paper, we consider a QoS-aware FD concurrent scheduling problem when the time is limited. System time is divided into a series of non-overlapping frames. As shown in Figure 4, each frame consists of a scheduling phase, where a transmission schedule S is computed by the BNC, and a transmission phase, where the BSs start concurrent transmissions following the schedule [6]. The transmission phase is further divided into M equal TSs. It is assumed that there are F flows in the network and each flow f has its QoS requirement q_f. For each flow f, we define a binary variable a_f^i to indicate whether flow f is scheduled in the ith TS: if so, a_f^i = 1; otherwise, a_f^i = 0. Since different flows may be transmitted in different TSs, we denote the actual transmission rate of flow f in the ith TS by R_f^i. According to Shannon's channel capacity [10], R_f^i can be calculated as
R_f^i = η W log_2 ( 1 + a_f^i P_r(t_f, r_f) / ( N_0 W + Σ_h a_h^i β_{t_h} N_0 W + Σ_l a_l^i P_r(t_l, r_f) ) ),  (4)
where η ∈ (0, 1) is a factor describing the efficiency of the transceiver design, W is the bandwidth, and N_0 is the one-sided power spectral density of white Gaussian noise. h denotes a flow whose transmitter is the receiver of flow f, and β_{t_h} is the SI cancelation level parameter at BS t_h; l denotes a flow without any common node with f.
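The antenna gain in (3) and the per-slot rate in (4) can be prototyped directly. The sketch below is a simplified illustration under assumed conventions (angles in degrees, powers in watts, and the RSI contributions passed in as already-computed β N_0 W values); the function names and the way the interference terms are grouped are ours, not the paper's.

```python
import math

def antenna_gain_db(theta_deg, theta_3db_deg):
    """Directional antenna gain in dB for off-boresight angle theta, following eq. (3)."""
    g0 = 10.0 * math.log10((1.6162 / math.sin(math.radians(theta_3db_deg / 2.0))) ** 2)
    theta_ml = 2.6 * theta_3db_deg                        # main-lobe width in degrees
    g_sl = -0.4111 * math.log(theta_3db_deg) - 10.579     # side-lobe gain in dB
    if 0.0 <= theta_deg <= theta_ml / 2.0:
        return g0 - 3.01 * (2.0 * theta_deg / theta_3db_deg) ** 2
    return g_sl

def slot_rate(scheduled, p_signal, rsi_watts, mui_watts, eta, bw, n0):
    """Per-slot rate of one flow following eq. (4).
    scheduled: 1 if the flow is scheduled in this TS, else 0.
    rsi_watts / mui_watts: lists of interference powers from co-scheduled flows."""
    interference = sum(rsi_watts) + sum(mui_watts)
    sinr = scheduled * p_signal / (n0 * bw + interference)
    return eta * bw * math.log2(1.0 + sinr)
```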
Then we can define the actual throughput of flow f under the schedule S as
T_f = ( Σ_{i=1}^{M} R_f^i t ) / ( T_s + M t ),  (5)
where T_s is the duration of the scheduling phase and t is the duration of one TS. When the actual throughput T_f of flow f is no less than its QoS requirement q_f, we say flow f has satisfied its QoS requirement and call it a completed flow. A binary variable I_f is used to indicate whether flow f is completed: I_f = 1 indicates f is completed, while I_f = 0 indicates f is not completed.
Since we investigate QoS-aware scheduling for a backhaul network with limited time, given the QoS requirements of flows and the limited number of TSs in the transmission phase, the optimal schedule should accommodate as many flows as possible [2]. In other words, we should aim at maximizing the number of flows that satisfy their QoS requirements (i.e. the number of completed flows). Therefore, the objective function can be formulated as
max Σ_{f=1}^{F} I_f,  (6)
and the first constraint is
I_f = 1 if T_f ≥ q_f;  I_f = 0 otherwise.  (7)
Next, we analyze the other constraints. Firstly, we use f_n to denote a flow whose transmitter or receiver is the nth BS B_n, such as the transmitting flow and the receiving flow in Figure 1; thus a_{f_n}^i indicates whether flow f_n is scheduled in the ith TS, that is, whether f_n actually uses B_n in the ith TS. According to the FD assumption described in Section III, because each BS is equipped with only two steerable directional antennas, the number of flows that simultaneously use the same BS B_n can't exceed two. This constraint can be expressed as
Σ_{f_n} a_{f_n}^i ≤ 2,  ∀ i, n.  (8)
Then we use f_n^1 and f_n^2 to denote the two flows that simultaneously use B_n; we also use T(B_n) and R(B_n) to denote the sets of wireless links with B_n as the transmitter and as the receiver, respectively. As assumed in Section III, of the two antennas of an FD BS, one is a transmitting antenna and the other is a receiving antenna. Therefore, when two flows simultaneously use the same BS, the BS can only serve as the transmitter for one flow and as the receiver for the other, which can be expressed as
f_n^1 ∈ T(B_n) & f_n^2 ∈ R(B_n), or f_n^1 ∈ R(B_n) & f_n^2 ∈ T(B_n), if Σ_{f_n} a_{f_n}^i = 2.  (9)
In summary, the optimal scheduling problem (P1) can be formulated as follows:
max Σ_{f=1}^{F} I_f   s.t. constraints (7)-(9).
This is a nonlinear integer programming (NLIP) problem and is NP-hard [2]. The optimization problem is similar to that in [2]; however, [2] considers the HD scenario while ours considers the FD scenario, so our problem has more constraints and is obviously more complex. Since the problem in [2] is NP-hard, our optimization problem is also NP-hard. In each TS, every flow is either scheduled or unscheduled. Therefore, when the number of TSs is M and the number of flows is F, the computational complexity of an exhaustive search is 2^{MF}, which is exponential. In the densely deployed small cell scenario, the number of flows may be large, and thus it would be time-consuming to solve P1 by exhaustive search. The computational time is unacceptable for practical mmWave small cells, where the duration of one TS is only a few microseconds [20]. Consequently, a heuristic algorithm with low complexity is desired to solve it in practice.
V. QOS-AWARE FULL-DUPLEX SCHEDULING ALGORITHM
In this section, we propose a QoS-aware full-duplex concurrent scheduling algorithm for problem P1.
Borrowing the idea of the contention graph from [10], the algorithm makes full use of the FD condition and satisfies the QoS requirements of as many flows as possible. In the following, we first describe how to construct the contention graph and then describe the proposed algorithm in detail.
A. The Construction of the Contention Graph
In FD mmWave wireless backhaul networks, not all pairs of flows can be concurrently scheduled. In the contention graph [10], when two flows can't be concurrently scheduled, we say there is a contention between them. In this paper, based on the assumptions and analysis above, the pairs of flows that can't be concurrently scheduled fall into the following two cases.
Firstly, according to the FD assumption described in Section III, of the two antennas of an FD BS, one is a transmitting antenna and the other is a receiving antenna. Therefore, two flows that simultaneously use the same BS as their transmitters (or as their receivers) can't be concurrently scheduled. This case is shown in Figure 5: Figure 5 (a) shows two flows that simultaneously use the same BS as their transmitters, and Figure 5 (b) shows two flows that simultaneously use the same BS as their receivers. Accordingly, the pairs of flows that can be concurrently scheduled fall into the following three cases. 1) As shown in Figure 6 (a), the transmitter of flow f is the receiver of flow l, but the receiver of flow f is not the transmitter of flow l. 2) As shown in Figure 6 (b), each flow's transmitter is the other flow's receiver, i.e., the two flows are transmitted in opposite directions between the same pair of BSs. 3) As shown in Figure 6 (c), the two flows have no common node.
Secondly, considering the QoS requirements of flows, to guarantee the flow rates and the system throughput, two flows whose relative interference (RI) [2] with each other is large can't be concurrently scheduled. When the RI between two flows is large, the rates of the flows become low, and the low rates result in inefficient resource utilization: the TS resources are allocated to the flows, but their QoS requirements are hard to satisfy, and thus they can't support the specific applications [8]. For the three cases in Figure 6, we now define the RI, respectively. 1) For the case in Figure 6 (a), the interference from flow f to flow l is RSI. Therefore, the RI from flow f to flow l can be defined as
RI_{f,l} = ( N_0 W + β_{t_f} N_0 W ) / P_r(t_l, r_l),  (10)
where P_r(t_l, r_l) is calculated as in (1). The interference from flow l to flow f is MUI, so the RI from flow l to flow f is defined as
RI_{l,f} = ( N_0 W + P_r(t_l, r_f) ) / P_r(t_f, r_f),  (11)
where P_r(t_l, r_f) is calculated as in (2) and P_r(t_f, r_f) is calculated as in (1). 2) For the case in Figure 6 (b), the interference from flow f to l and that from flow l to f are both RSI, so the RIs between the two flows are both defined similarly to (10). 3) For the case in Figure 6 (c), the interference from flow f to l and that from flow l to f are both MUI, so the RIs between the two flows are both defined similarly to (11).
Next, we construct the contention graph. In the contention graph, each vertex represents a flow. If two flows can't be concurrently scheduled (i.e., there is a contention between them), an edge is inserted between the two corresponding vertices. For example, as shown in Figure 7, there is a contention between flow 1 and flow 2; in contrast, there is no contention between flow 1 and flow 3. Specifically, for each of the two pairs of flows in Figure 5, an edge is inserted between the two corresponding vertices.
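As a small illustration of how (10) and (11) might be evaluated when building the graph, the helpers below compute the two RI values for a pair of flows. The function names and argument conventions (linear powers in watts) are ours, and the threshold σ that these values are compared against is the rule introduced in the next paragraph.

```python
def ri_rsi(beta_tx, n0, bw, p_victim):
    """RI of eq. (10): residual self-interference seen by the victim flow,
    normalized by its received power.  beta_tx is the SI cancelation parameter
    of the BS whose transmission causes the RSI."""
    return (n0 * bw + beta_tx * n0 * bw) / p_victim

def ri_mui(p_mui, n0, bw, p_victim):
    """RI of eq. (11): MUI power at the victim's receiver, normalized by its received power."""
    return (n0 * bw + p_mui) / p_victim

def in_contention(ri_fl, ri_lf, sigma):
    """Edge test used when building the contention graph: the pair contends
    if the larger of the two RIs exceeds the contention threshold sigma."""
    return max(ri_fl, ri_lf) > sigma
```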
In addition, for the three pairs of flows in Figure 6, we should examine whether the RI between the flows is too large. When the RI between two flows is larger than a contention threshold σ, we say there is a contention between them. In other words, if max(RI_{f,l}, RI_{l,f}) > σ, an edge is inserted between the two corresponding vertices.
B. The QoS-aware Full-duplex Scheduling Algorithm
Based on the contention graph, we now describe the proposed algorithm concretely. Its pseudo code is shown in Algorithm 1. To begin with, line 1 is preparation work: the BNC obtains the BS locations (Loc), the SI cancelation level (β_n) of each BS and the QoS requirement (q_f) of each flow. Next, in line 2, we calculate the number of TSs that each flow needs to meet its QoS requirement when there is no interference from others. This number of TSs is calculated as
ξ_f = q_f (T_s + M t) / (R_f t),  (12)
where R_f is the rate of flow f without interference from others, which can be calculated as
R_f = η W log_2 ( 1 + P_r(t_f, r_f) / (N_0 W) ).  (13)
Since the scheduling problem we investigate has limited time, i.e., M TSs, a flow whose ξ_f is greater than M should be removed. In the actual scheduling there is interference from other flows, so the flow rates will be further reduced and the number of TSs needed will further increase; scheduling such flows is meaningless for our optimization goal. Removing these flows (represented by the set D) not only reduces the complexity of subsequent scheduling, but also saves TSs for more worthwhile flows, that is, flows that can be completed within M TSs. The pseudo code for this step is shown in line 3.
Then, as shown in lines 4-5, we sort the remaining flows in non-decreasing order of ξ_f and call the set of sorted flows the "pre-scheduling set" P. Next, we construct the contention graph G for all the flows in P. Then we make the scheduling decision slot by slot. In lines 9-12, to complete more flows within the limited TSs, we first consider the flow with the smallest ξ; in other words, we examine the flows in P one by one from the beginning. If flow f has never been scheduled and has no contention with the flow(s) that are ongoing, the profit of scheduling it is evaluated: if scheduling it can increase the total system throughput, we schedule it; otherwise, we skip the flow and examine the next one. These rules help to guarantee the flow rates and the system throughput, so the algorithm is more QoS-aware. In every TS, as shown in lines 14-16, it is necessary to check whether some flow(s) have completed their QoS requirements. If so, the corresponding S_i(f) are set to -1, which means those flows will never be scheduled again. Once a flow is completed, allocating more resources to it does little to further improve its QoS; therefore, we stop scheduling it and save the TSs to serve other flows. At the same time, change is set to 1. In fact, as shown in line 8, only when it is the first TS or some flow(s) are newly completed, that is, when change = 1, do we need to make a new scheduling decision. If change = 0, the scheduling vector is the same as in the previous TS, as shown in line 13. In this way, the scheduling complexity is greatly reduced. The algorithm is repeated until the M TSs are over, and we finally obtain the scheduling vector for each TS. Obviously, in the worst case, the variable change is 1 in every TS, that is, some flow(s) are newly completed in every TS. A simplified sketch of this scheduling loop is given below.
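For readers who prefer code to pseudo code, the following Python sketch reproduces the main ideas of the loop just described (pre-filtering by ξ, sorting, the never-scheduled and contention checks, the throughput-profit test, and re-deciding only when a flow completes). It is our own simplified rendering under assumed interfaces, not the authors' Algorithm 1: rate_fn is a caller-supplied function returning per-TS bits for a set of concurrently active flows, and demand[f] is the number of bits flow f needs within the frame.

```python
def qos_aware_fd_schedule(flows, xi, demand, contention, rate_fn, M):
    """Simplified sketch of the scheduling loop described above.
    flows: iterable of flow ids; xi[f]: TSs flow f needs alone (eq. 12);
    demand[f]: bits flow f must receive in the frame to meet its QoS;
    contention: set of frozenset({f, l}) pairs that may not run together;
    rate_fn(active): dict {f: bits delivered per TS} for the active set;
    M: number of TSs in the transmission phase."""
    cand = sorted((f for f in flows if xi[f] <= M), key=lambda f: xi[f])  # lines 3-5
    never_scheduled = set(cand)
    delivered = {f: 0.0 for f in cand}
    active, done, schedule, change = [], set(), [], True
    for _ in range(M):
        if change:                                    # re-decide only at TS 1 or after a completion
            for f in cand:
                if f not in never_scheduled:
                    continue                          # already scheduled (ongoing or completed)
                if any(frozenset({f, l}) in contention for l in active):
                    continue                          # contends with an ongoing flow
                # profit check: add f only if total throughput does not decrease
                if sum(rate_fn(active + [f]).values()) >= sum(rate_fn(active).values()):
                    active.append(f)
                    never_scheduled.discard(f)
            change = False
        rates = rate_fn(active)
        schedule.append(list(active))
        for f in list(active):
            delivered[f] += rates.get(f, 0.0)
            if delivered[f] >= demand[f]:             # QoS requirement satisfied
                done.add(f)
                active.remove(f)
                change = True
    return schedule, done
```

In this sketch the only state carried across TSs is the active set, the delivered counters and the change flag, which mirrors how the algorithm avoids recomputing the schedule in every slot.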
For the M TSs and F flows, the worst-case computational complexity of Algorithm 1 is therefore O(MF).
VI. PERFORMANCE EVALUATION
A. Simulation Setup
In the simulations, we evaluate the performance of the proposed algorithm in a 60GHz mmWave wireless backhaul network in which 10 BSs are uniformly distributed in a 100m × 100m square area. Every BS has the same transmission power P_t. The transmitters and receivers of flows are randomly selected, and the QoS requirements of flows are uniformly distributed between 1Gbps and 3Gbps. The SI cancelation parameters β of different BSs are uniformly distributed in a certain range. To be realistic, the other parameters are listed in Table I. Because we focus on the QoS of flows, in line with our optimization goal we use the number of completed flows and the system throughput as evaluation metrics. When a flow achieves its QoS requirement, it is called a completed flow. The system throughput represents the throughput of all flows in the network per slot. To show the advantages of the proposed QoS-aware FD concurrent scheduling algorithm (Proposed-FD) in a network with limited TS resources, we compare it with the following four schemes.
1) TDMA: In TDMA, the flows are transmitted serially. We use TDMA as the baseline for evaluating performance without concurrent transmissions.
2) MQIS: MQIS [2] is an HD concurrent scheduling algorithm based on the maximum QoS-aware independent set. It first schedules the flow with the smallest degree in the contention graph. It neither removes the flows that need too many slots, nor evaluates the profit when adding a new flow. To the best of our knowledge, among the existing scheduling algorithms, MQIS achieves the best performance in a network with limited TS resources in terms of the number of completed flows and system throughput. Therefore, we use it as the baseline for evaluating performance without FD communication.
3) Proposed-HD: It uses the same scheduling algorithm as Proposed-FD, but it only allows HD communication. We also use it as a baseline without FD communication.
4) FDP: The Full-Duplex (FDP) scheme [16] is designed for a system where the TS resources are sufficient. It aims at accomplishing all of the transmissions within the minimum time. In every phase, higher priority is given to the flow that occupies the most TSs. If another flow is qualified to be transmitted together in the current phase, i.e., the number of flows simultaneously using the same BS doesn't exceed the number of RF chains and the SINR is larger than a certain threshold, the corresponding flow is also scheduled. Only when all the flows scheduled together in one phase are completed can the next phase start and a new scheduling decision be made. We use it as a baseline for FD communication. Each simulation is performed 100 times to obtain a reliable average result.
B. Simulation Results
1) Under different numbers of flows: In this case, the contention threshold σ is set to 0.001, and the SI cancelation level parameter β is uniformly distributed between 2 and 4. The simulation results are shown in Figure 8. We can see that the Proposed-FD algorithm always shows superior performance compared with the other algorithms, and the larger the number of flows, the more obvious the advantage of Proposed-FD. Compared with the HD algorithms, Proposed-FD allows simultaneous transmission and reception at the same BS. In fact, the Proposed-HD algorithm also performs better than MQIS.
This is because when deciding whether or not to schedule a flow, we consider whether adding the flow can improve the system throughput. This lets each flow be scheduled at a higher rate, and thus the QoS requirements of flows can be achieved more quickly. As for the FDP algorithm, its number of completed flows is not large. This is mainly because the problem it investigates and its optimization goal are different from ours: it targets a network where the TS resources are sufficient and aims at accomplishing all the transmissions within the minimum time. Therefore, it is not suitable for the problem investigated in this paper, i.e., maximizing the number of completed flows in limited time. Moreover, as the total number of flows increases, the number of completed flows for FDP doesn't increase significantly. This is because in FDP, only when all the flows scheduled together in one phase are completed can the next phase start and a new scheduling decision be made. As a result, when some flows are completed quickly, a large amount of TS resources is wasted because no new flows are scheduled; thus, in limited time, the number of completed flows barely changes. However, the system throughput of FDP is higher than that of the HD algorithms. This is because FDP prefers the flows that occupy more TSs; these flows usually have higher QoS requirements (i.e. minimum throughput requirements), so even though the number of completed flows is small, the system throughput is still high. In particular, when the number of flows is 90, Proposed-FD improves the number of completed flows by 30.1% compared with Proposed-HD and improves the system throughput by 34.1% compared with FDP.
2) Under different SI cancelation levels: For the two FD algorithms (Proposed-FD and FDP), the SI cancelation level β has an obvious impact on the performance. Thus, we simulate the performance under different magnitudes of β, as shown in Figure 9. The abscissa x is the magnitude of β; for example, when x = 2, β is uniformly distributed in [2 × 10^2, 4 × 10^2]. In this case, the total number of flows is 90, and σ = 0.001. We can see that the performance of Proposed-FD is better when β is smaller, that is, when the SI cancelation level is higher. As β becomes larger, the performance of Proposed-FD gradually deteriorates. In particular, when β reaches the 10^4 magnitude, Proposed-FD has the same performance as Proposed-HD. This tells us that FD communication cannot improve the system performance in every case, and better SI cancelation techniques are needed. The performance trend of FDP is similar to that of Proposed-FD; however, because its applicable scenario and optimization goal differ from ours, the performance of FDP is relatively poor.
3) Under different contention thresholds: To study the impact of the contention threshold σ on the performance, we simulate the two metrics under different σ, as shown in Figure 10. The abscissa x represents the magnitude of σ; for example, x = -3 means σ = 10^{-3}. In this case, the number of flows is 90, and β is uniformly distributed between 2 and 4. We can observe that as σ increases, the performance of the four solutions other than TDMA first increases, then degrades, and finally remains almost unchanged. This is because a small σ is not conducive to concurrent transmissions.
When σ is greater than a certain threshold (e.g., 10^{-3} for Proposed-FD), there is severe interference between concurrent flows, which reduces the flow rates and makes it harder to satisfy the QoS requirements. Therefore, to achieve the best performance, we should choose an appropriate threshold; under the simulation conditions in this paper, we choose σ = 10^{-3}. Specifically, when σ = 10^{-3}, Proposed-FD improves the number of completed flows by 29.9% compared with Proposed-HD and improves the system throughput by 35.9% compared with FDP. Although FDP doesn't use the contention graph, we convert its SINR threshold into the contention threshold, so its performance also varies with σ.
VII. CONCLUSION
In this paper, we propose a QoS-aware full-duplex concurrent scheduling algorithm for mmWave wireless backhaul networks. Considering the FD characteristics and the QoS requirements of flows in a system with limited TS resources, the proposed algorithm exploits the contention graph to find the flows that can be concurrently scheduled and maximizes the number of completed flows. Extensive simulations show that the proposed FD scheduling algorithm can significantly increase the number of completed flows and the system throughput compared with other scheduling schemes. In addition, the effects of the SI cancelation level and the contention threshold on the performance are also simulated to guide better scheduling. In future work, we will also take the blockage problem of mmWave communications into account and propose a robust scheme for the mmWave full-duplex backhaul network.
5,247
1812.11326
2800824532
The development of self-interference (SI) cancelation technology makes full-duplex (FD) communication possible. Considering the quality of service (QoS) of flows in the densely deployed small cell scenario with limited time slot resources, this paper introduces FD communication into the concurrent scheduling problem of millimeter-wave wireless backhaul networks. We propose a QoS-aware FD concurrent scheduling algorithm to maximize the number of flows with their QoS requirements satisfied. Based on the contention graph, the algorithm makes full use of the FD condition; both residual SI and multi-user interference are considered. Besides, it also fully considers the QoS requirements of flows and ensures that the flows can be transmitted at high rates. Extensive simulations at 60 GHz demonstrate that, with a high SI cancelation level and an appropriate contention threshold, the proposed FD algorithm achieves superior performance in terms of the number of flows with their QoS requirements satisfied and the system throughput compared with other state-of-the-art schemes.
Recently, the development of SI cancelation technology has made FD communication possible. Jain @cite_5 proposed signal inversion and adaptive cancelation; combining signal inversion cancelation with digital cancelation can reduce SI by up to 73dB. Everett @cite_8 showed that the BS could exploit directional diversity by using directional antennas to achieve additional passive suppression of the SI. Besides, Miura @cite_3 proposed a novel node architecture introducing directional antennas into FD wireless technology. Rajagopal @cite_11 proved that enabling backhaul transmission on one panel while simultaneously receiving backhaul on an adjacent panel is attainable for next-generation backhaul designs. In @cite_12, Xiao showed that the configuration with separate Tx/Rx antenna arrays appears more flexible in SI suppression, and proposed beamforming cancelation for FD mmWave communication.
{ "abstract": [ "The use of directional antennas in wireless networks has been widely studied with two main motivations: 1) decreasing interference between devices and 2) improving power efficiency. We identify a third motivation for utilizing directional antennas: pushing the range limitations of full-duplex wireless communication. A characterization of full-duplex performance in the context of a base station transmitting to one device while receiving from another is presented. In this scenario, the base station can exploit “directional diversity” by using directional antennas to achieve additional passive suppression of the self-interference. The characterization shows that at 10 m distance and with 12 dBm transmit power the gains over half-duplex are as high as 90% and no lower than 60% as long as the directional antennas at the base station are separated by 45° or more. At 15 m distance the gains are no lower than 40% for separations of 90° and larger. Passive suppression via directional antennas also allows full-duplex to achieve significant gains over half-duplex even without resorting to the use of extra hardware for performing RF cancellation as has been required in the previous work.", "In this paper, we propose a novel node architecture introducing directional antennas into full duplex wireless (FDW) technology. In the proposed architecture, each element of switched-beam antenna is connected to a software switch for directional antenna selection. A node needs only one set of digital and analog cancellation circuits, which is almost the same circuit scale and complexity of the conventional omnidirectional FDW node. In addition, we propose a MAC protocol for the proposed node architecture with FDW and directional antennas for avoiding collisions and obtaining the advantages of both techniques. We evaluate the performance of the proposed protocol via computer simulations, and show that the proposed protocol can improve end-to-end throughput performance up to 114 percent in a line-type multihop network. Moreover, we extend the proposed MAC protocol for mitigating the performance degradation in two-way traffic in line-type multihop networks, and confirm its effectiveness. To the best of our knowledge, this is the first work that attempts to use directional antennas in FDW networks.", "This paper presents a full duplex radio design using signal inversion and adaptive cancellation. Signal inversion uses a simple design based on a balanced unbalanced (Balun) transformer. This new design, unlike prior work, supports wideband and high power systems. In theory, this new design has no limitation on bandwidth or power. In practice, we find that the signal inversion technique alone can cancel at least 45dB across a 40MHz bandwidth. Further, combining signal inversion cancellation with cancellation in the digital domain can reduce self-interference by up to 73dB for a 10MHz OFDM signal. This paper also presents a full duplex medium access control (MAC) design and evaluates it using a testbed of 5 prototype full duplex nodes. Full duplex reduces packet losses due to hidden terminals by up to 88%. Full duplex also mitigates unfair channel allocation in AP-based networks, increasing fairness from 0.85 to 0.98 while improving downlink throughput by 110% and uplink throughput by 15%. These experimental results show that a redesign of the wireless network stack to exploit full duplex capability can result in significant improvements in network performance.", "The potential of doubling the spectrum efficiency of FD transmission motivates us to investigate FD-mmWave communication. To realize FD transmission in the mmWave band, we first introduce possible antenna configurations for FD-mmWave transmission. It is shown that, different from the cases in microwave band FD communications, the configuration with separate Tx/Rx antenna arrays appears more flexible in SI suppression while it may increase some cost and area versus that with the same array. We then model the mmWave SI channel with separate Tx/Rx arrays, where a near-field propagation model is adopted for the LOS path, and it is found that the established LOS-SI channel with separate Tx/Rx arrays also shows spatial sparsity. Based on the SI channel, we further explore approaches to mitigate SI by signal processing, and we focus on a new cancellation approach in FD-mmWave communication, that is, beamforming cancellation. Centered on the CA constraint of the beamforming vectors, we propose several candidate solutions. Lastly, we consider an FD-mmWave multi-user scenario, and show that even if there are no FD users in an FD-mmWave cellular system, the FD benefit can still be exploited in the FD base station. Candidate solutions are also discussed to mitigate both SI and MUI simultaneously.", "" ], "cite_N": [ "@cite_8", "@cite_3", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "2545232960", "2008689146", "2128938148", "2963903846", "2058707124" ] }
QoS-aware Full-duplex Concurrent Scheduling for Millimeter Wave Wireless Backhaul Networks
In the fifth generation (5G) mobile cellular network, due to the densification of small cells, the massive backhaul traffic becomes a significant problem [1], [2]. Compared with the fiber based backhaul network, the wireless backhaul network in millimeter-wave (mmWave) bands also has huge bandwidth, and can provide a more cost-effective and flexible solution to this problem [3]. In the mmWave wireless backhaul network, directional antennas and beamforming techniques are often used to compensate for the high path loss [4], [5]. The directional communication can reduce the interference between different flows, and thus concurrent transmissions (i.e. spatial reuse) of flows become possible. Concurrent transmissions can significantly increase the system throughput [6]. However, the concurrent transmissions of multiple flows result in higher W. Ding, Y. Niu mutual interference, which will conversely degrade the system performance. Therefore, how to efficiently schedule the flows transmitted concurrently is worth to study and thus has attracted considerable interest from researchers [2], [7]- [10]. Most existing concurrent scheduling schemes [2], [7]- [10] in mmWave bands hold the assumption of half-duplex (HD). Recently, with the development of self interference (SI) cancelation technology [11]- [15], it becomes possible to enable the full-duplex (FD) communication in mmWave wireless backhaul networks [16]. Here, the SI means the transmitted signal received by the local receiver at the same base station (BS) [15], which is shown in Figure 1. It seriously affects the performance of FD system [17]. By transmitting and receiving information simultaneously at the same BS over the same frequency [11], [17], the FD communication may theoretically double the spectral efficiency [18], which brings an important opportunity for the concurrent scheduling problem in mmWave wireless backhaul networks. However, the SI can't be completely eliminated in practice. There is still residual self interference (RSI) in the system. Therefore, for the FD backhaul system, the interference we need to consider is more complex than that in HD system: not only multi-user interference (MUI), but also RSI. This is a big challenge for the concurrent scheduling problem in mmWave backhaul networks. Moreover, in the future 5G mmWave backhaul network, many applications are bandwidth-intensive (e.g. uncompressed video streaming), and should be provided with multi-Gbps throughput [8]. The data flows of these applications all have their own minimum throughput requirements. In the following paper, the minimum throughput requirements will be referred to as the quality of service (QoS) requirements. To guarantee the required quality of service, the QoS requirements of flows need to be satisfied [2]. Although in [ duced the FD communication into the scheduling scheme for 5G mmWave backhaul networks, the scheduling solution was designed for the case with sufficient time slot (TS) resources. The QoS requirements were not specially considered in [16]. Therefore, for the case where the TS resources are limited compared with the intensive traffic demands of users [2], [8], how to satisfy the QoS requirements of flows as many as possible is still a challenge. The above opportunities and challenges motivate us to investigate a QoS-aware FD concurrent scheduling scheme for the mmWave wireless backhaul network with limited TS resources. The contributions of this paper can be summarized as follows. 
• We innovatively introduce the FD technology into the concurrent scheduling problem of mmWave wireless backhaul networks with limited number of TSs. Both RSI and MUI are simultaneously taken into account so that the advantages of the FD technology and the concurrent transmission can be brought into full play. • The QoS requirements of flows in the case where the TS resources are limited are specially considered. We formulate a nonlinear integer programming (NLIP) problem aiming at maximizing the number of flows with their QoS requirements satisfied. Then, a QoS-aware FD scheduling algorithm is proposed, which can keep the flow rate high and satisfy the QoS requirements of flows as many as possible. • We evaluate the proposed algorithm in the 60GHz mmWave wireless backhaul network with limited TS resources. The extensive simulations demonstrate that compared with other state-of-the-art algorithms, the proposed QoS-aware FD algorithm can significantly improve the number of flows with their QoS requirements satisfied and the total system throughput. Furthermore, we also analyze the impact of SI cancelation level and contention threshold on the performance improvement. The structure of this paper is organized as follows. Section II introduces the related work. Section III introduces the system overview and assumption. In Section IV, the optimal concurrent scheduling problem in FD mmWave wireless backhaul networks with limited TSs is formulated into an NLIP. In Section V, a QoS-aware FD concurrent scheduling algorithm is proposed. In Section VI, we conduct extensive simulations, and in Section VII we conclude this paper. III. SYSTEM OVERVIEW AND ASSUMPTION In this paper, we consider a typical FD mmWave wireless backhaul network in the small cells densely deployed scenario. As shown in Figure 2, the network includes N BSs. The BSs are connected through backhaul links in the mmWave band. When there are some traffic demands from one BS to another, we say there is a flow between them. As shown in Figure 3, each BS operates in FD mode and is equipped with two steerable directional antennas: one for transmitting and another for receiving. Therefore, a BS can at most simultaneously support two flows. It can simultaneously serve as the transmitter of one flow and the receiver of another, but it can't simultaneously serve as the transmitters or receivers of both two flows. There are one or more BSs connected to the backbone network via the macrocell, which is (are) called gateway(s) [10]. A backhaul network controller (BNC) resides on one of the gateways, which can synchronize the network, receive the QoS requirements of flows and obtain the locations of BSs [19]. A. The Received Power Since non-line-of-sight (NLOS) transmissions suffer from high attenuation, we use the line-of-sight (LOS) path loss model for mmWave as described in [2]. For flow f , the received signal power at its receiver r f from its transmitter t f can be expressed as P r (t f , r f ) = kP t G t (t f , r f )G r (t f , r f )d −n t f r f . (1) k is a factor that is proportional to λ 4π 2 , where λ denotes the wave length; P t denotes the transmission power of the transmitter; G t (t f , r f ) denotes the transmitter antenna gain in the direction of from t f to r f , and G r (t f , r f ) denotes the receiver antenna gain in the direction of from t f to r f ; d t f r f denotes the distance between t f and r f and n is the path loss exponent [8]. 
According to the FD assumption mentioned above, the two flows scheduled simultaneously either have no common node or one's transmitter is the receiver of the other. Therefore, the interference between different flows can be divided into two cases: 1) the interference between two flows without any common node, namely, MUI; 2) the RSI after SI cancelation. The MUI caused by the transmitter t l of flow l to the receiver r f of flow f is defined as P r (t l , r f ) = ρkP t G t (t l , r f )G r (t l , r f )d −n t l r f ,(2) where ρ is the MUI factor between different flows, which is related to the cross correlation of signals from different flows [2]. According to [11], after SI cancelation, the effect of RSI can be modeled in terms of the SNR loss. Therefore, we can use β n N 0 W to denote the RSI, where the non-negative parameter β n represents the SI cancelation level of the nth BS. The smaller β n , the higher the level of SI cancelation. Due to various factors, we assume the parameters for different BSs are different. N 0 is the onesided power spectral density of white Gaussian noise; W is the channel bandwidth. B. Data Rate With the reduction of multipath effect, mmWave channel can be approximated as Gaussian channel. With the interference from other flows, the data rate of flow f can be estimated according to the Shannon's channel capacity [10]. C. Antenna Model In this paper, we adopt the realistic antenna model in [10]. The gain of a directional antenna in units of dB can be expressed as G(θ) =    G 0 − 3.01 × 2θ θ-3dB 2 , 0 • ≤ θ ≤ θ ml /2 G sl . θ ml /2 < θ ≤ 180 •(3) θ denotes an angle within the range [0 • , 180 • ]. The maximum antenna gain G 0 can be calculated as G 0 = 10log(1.6162/sin(θ -3dB /2)) 2 . θ -3dB is the angle of the halfpower beamwidth. The main lobe width θ ml in units of degrees can be calculated as θ ml = 2.6 × θ -3dB . The sidelobe gain G sl = −0.4111 × ln(θ -3dB ) − 10.579 [10]. IV. PROBLEM FORMULATION In this paper, we consider a QoS-aware FD concurrent scheduling problem when the time is limited. System time is divided into a series of non-overlapping frames. As shown in Figure 4, each frame consists of a scheduling phase, where a transmission schedule S is computed by the BNC, and a transmission phase, where the BSs start concurrent transmissions following the schedule [6]. The transmission Scheduling Phase Transmission Phase phase is further divided into M equal TSs. It's assumed that there are F flows in the network and each flow f has its QoS requirement q f . For each flow f , we define a binary variable a i f to indicate whether flow f is scheduled in the ith TS. If so, a i f = 1; otherwise, a i f = 0. Since there may be different flows to be transmitted in different TSs, we denote the actual transmission rate of flow f in the ith TS by R i f . According to the Shannon's channel capacity [10], R i f can be calculated as R i f = ηW log 2 (1+ a i f P r (t f , r f ) N 0 W + h a i h β t h N 0 W + l a i l P r (t l , r f ) ). (4) where η is the factor that describes the efficiency of the transceiver design, which is in the range of (0, 1). W is the bandwidth, and N 0 is the one-sided power spectra density of white Gaussian noise. h denotes the flow whose transmitter is the same as the receiver of flow f . β t h is the parameter of SI cancelation level at BS t h . l denotes the flow without any common node with f . 
Then we can define the actual throughput of flow f based on the schedule S as T f = M i=1 R i f t T s + M t ,(5) where T s is the time of scheduling phase and t is the time of one TS. When the actual throughput T f of flow f is greater than its QoS requirement q f , we say flow f has satisfied its QoS requirement, and call the flow a completed flow. A binary variable I f is used to indicate whether flow f is completed. I f = 1 indicates f is completed, while I f = 0 indicates f is not completed. As we investigate a QoS-aware scheduling for a backhaul network with limited time, given the QoS requirements of flows, with the limited number of TSs in the transmission phase, the optimal schedule should accommodate as many flows as possible [2]. In other words, we should aim at maximizing the number of flows that satisfy their QoS requirements (i.e. the number of flows that are completed). Therefore, the objective function can be formulated as max F f =1 I f ,(6) and the first constraint is I f = 1, T f ≥ q f ; 0, otherwise.(7) Next, we analyze the other constraints. Firstly, we use variable f n to denote the flow whose transmitter or receiver is the nth BS B n , such as the transmitting flow and the receiving flow in Figure 1; thus a i fn indicates whether flow f n is scheduled in the ith TS, that is, whether f n does use B n in the ith TS. According to our FD assumption described in section III, because each BS is just equipped with two steerable directional antennas, the number of flows that simultaneously use the same BS B n can't exceed two; this constraint can be expressed as fn a i fn ≤ 2, ∀i, n. Then we use f 1 n and f 2 n stand for the two flows that simultaneously use B n ; we also use T (B n ) and R(B n ) stand for the wireless links with B n as the transmitter and the receiver, respectively. As assumed in section III, for the two antennas of a FD BS, one of them is a transmitting antenna and the other is a receiving antenna. Therefore, when two flows simultaneously use the same BS, the BS can only serve as the transmitter for one flow and as the receiver for the other, which can be expressed as: f 1 n ∈ T (B n )&f 2 n ∈ R(B n ) or f 1 n ∈ R(B n )&f 2 n ∈ T (B n ), if fn a i fn = 2.(9) In summary, the problem of optimal scheduling (P1) can be formulated as follows: max F f =1 I f s.t. Constraints (7) -(9) This is a nonlinear integer programming (NLIP) problem and is NP-hard [2]. The optimization problem is similar to that in [2]. [2] is for the HD scenario while ours is for the FD scenario. Compared with [2], the number of constraints for our optimization problem is more, and the problem is obviously more complex than that in [2]. [2] is NP-hard, and thus our optimization problem is also NP-hard. In each TS, every flow is either scheduled or unscheduled. Therefore, when the number of TSs is M and the number of flows is F , the computational complexity using exhaustive search algorithm is 2 M F , which is exponential. In the small cells densely deployed scenario, the number of flows may be large, and thus it will be time-consuming if we use exhaustive algorithm to solve P1. The computational time is unacceptable for practical mmWave small cells where the duration of one TS is only a few microseconds [20]. Consequently, a heuristic algorithm with low complexity is desired to solve it in practice. V. QOS-AWARE FULL-DUPLEX SCHEDULING ALGORITHM In this section, we propose a QoS-aware full-duplex concurrent scheduling algorithm for problem P1. 
Borrowing the idea of contention graph from [10], the algorithm makes full use of the FD condition and satisfies the QoS requirements of flows as many as possible. Next, we first describe how to construct the contention graph and then describe the proposed algorithm in detail. A. The Construction of Contention Graph In FD mmWave wireless backhaul networks, not all pairs of flows can be concurrently scheduled. In contention graph [10], when the two flows can't be concurrently scheduled, we define there is a contention between them. In this paper, based on the assumption and analysis mentioned above, we define the flows that can't be concurrently scheduled into the following two cases. Firstly, according to the FD assumption described in section III, for the two antennas of a FD BS, one of them is a transmitting antenna and the other is a receiving antenna. Therefore, the two flows that simultaneously use the same BS as their transmitters (or receivers) can't be concurrently scheduled. This case is shown in Figure 5. Figure 5 (a) shows that two flows simultaneously use the same BS as their transmitters. Similarly, Figure 5 (b) shows that two flows simultaneously use the same BS as their receivers. Accordingly, based on the analysis for this case, the flows that can be concurrently scheduled are divided into following three cases. 1) As shown in Figure 6 (a), the transmitter of flow f is the receiver of flow l, but the receiver of flow f is not the transmitter of flow l. 2) As shown in Figure 6 Secondly, considering the QoS requirements of flows, to guarantee the flow rate and the system throughput, the two flows whose relative interference (RI) [2] between each other is large can't be concurrently scheduled. When the RI between two flows is large, the rates of the flows become low. The low rates result in inefficient resource utilization. In other words, the TS resources are allocated to the flows, but the QoS requirements of them are hard to be satisfied, and thus they can't support the specific applications [8]. For the three cases in Figure 6, we now define their RI, respectively. 1) For the case in Figure 6 (a), the interference from flow f to flow l is RSI. Therefore, the RI from flow f to flow l can be defined as RI f,l = N 0 W + β t f N 0 W P r (t l , r l ) ,(10) where P r (t l , r l ) is calculated as (1). The interference from flow l to flow f is MUI; so the RI from flow l to flow f is defined as RI l,f = N 0 W + P r (t l , r f ) P r (t f , r f ) ,(11) where P r (t l , r f ) is calculated as (2) and P r (t f , r f ) is calculated as (1). 2) For the case in Figure 6 (b), both the interference from flow f to l and that from flow l to f is RSI. Therefore, the RI between the two flows are both similar to (10). 3) For the case in Figure 6 (c), both the interference from flow f to l and that from flow l to f is MUI. Therefore, the RI between the two flows are both similar to (11). Next, let's construct the contention graph. In the contention graph, each vertex represents a flow. If two flows can't be concurrently scheduled (i.e., there is a contention between them), an edge is inserted between the two corresponding vertices. For example, as shown in Figure 7, there is a contention between flow 1 and flow 2. In contrast, there is no contention between flow 1 and flow 3. Specifically, for the two pairs of flows in Figure 5, there is an edge between the two corresponding vertices, respectively. 
In addition, for the three pairs of flows in Figure 6, we should examine whether the RI between the flows is too large. When the RI between two flows is larger than a contention threshold σ, we say there is a contention between them. In other words, if max(RI f,l , RI l,f ) > σ, an edge is inserted into the two corresponding vertices. B. The QoS-aware Full-duplex Scheduling Algorithm Based on the contention graph, we now concretely describe the proposed algorithm. The pseudo code for it is shown in Algorithm 1. To begin with, line 1 is some preparation work. The BNC obtains the BS location (Loc), the SI cancelation level (β n ) at each BS and the QoS requirement (q f ) of each flow. Next, in line 2, we calculate the number of TSs that each flow spends to complete its QoS requirement when there is no interference from others. The number of TSs is calculated as ξ f = q f * (T s + M t) R f * t . (12) R f is the rate of flow f without interference from others, which can be calculated as Since the scheduling problem we investigate is in limited time, i.e., in M TSs, the flow whose ξ f is greater than M should be removed. In the actual scheduling, there exists interference from other flows; so the flow rates will be further reduced and the spent number of TS will be further increased. The judgment for these flows becomes meaningless to our optimization goal. Removing the flows (represented by set D) can not only reduce the complexity of subsequent scheduling, but also save more TSs to schedule more worthwhile flows, that is, the flows that can be completed in M TSs. Pseudo code for this step is shown in line 3. R f = ηW log 2 (1 + P r (t f , r f ) N 0 W ).(13) Then, as shown in lines 4-5, we sort the remaining F flows in non-decreasing order according to ξ f and call the set of the sorted flows "pre-scheduling set" P. Next, we construct the contention graph G for all the flows in P. Then we make the scheduling decision slot by slot. In lines 9-12, to complete more flows in the limited TSs, we first determine the flow with the smallest ξ. In other words, we determine the flows in P one by one from the beginning. If flow f has never been scheduled and has no contention with the flow(s) that is(are) ongoing, then the profit of scheduling the flow is evaluated: if scheduling it can increase the total system throughput, we schedule it; otherwise, skip the flow and determine the next. These rules help to guarantee the flow rate and the system throughput; so it's more QoS-aware. In every TS, as shown in lines 14-16, it is necessary to check whether some flow(s) has(have) completed its(their) QoS requirement. If so, the corresponding S i (f ) is(are) set to -1, which means the flow(s) will never be scheduled later. When one flow is completed, allocating resources to it is of little significance to further improve its QoS. Therefore, we should stop scheduling it and save the TSs to serve more other flows. At the same time, change is set to 1. In fact, as shown in line 8, only when it's the 1th TS or some flow(s) is(are) newly completed, that is, when change = 1, we need to make new scheduling decision. If change = 0, the scheduling vector is the same as the previous TS, which is shown in line 13. In this way, the scheduling complexity is greatly reduced. The algorithm is repeated until M TSs are over, and we finally obtain the scheduling vector for each TS. Obviously, in the worst case, the variable change is 1 in every TS, that is, there is(are) some flow(s) to be newly completed in every TS. 
Therefore, for the M TSs and F flows, the worst computational complexity of Algorithm 1 is O(M F ). VI. PERFORMANCE EVALUATION A. Simulation Setup In the simulations, we evaluate the performance of the proposed algorithm in a 60GHz mmWave wireless backhaul network that 10 BSs are uniformly distributed in a 100m × 100m square area. Every BS has the same transmission power P t . The transmitters and receivers of flows are randomly selected, and the QoS requirements of flows are uniformly distributed between 1Gbps and 3Gbps. The SI cancelation parameters β for different BSs are uniformly distributed in a certain range. To be more realistic, other parameters are shown in Table I. Because we focus on the QoS of flows, according to our optimization goal, we use the number of completed flows and the system throughput as evaluation metrics. When one flow achieves its QoS requirement, it is called a completed flow. System throughput represents the throughput of all flows in the network per slot. To show the advantages of the proposed QoS-aware FD concurrent scheduling algorithm (Proposed-FD) in the network system with limited TS resources, we compare it with the following four schemes. 1) TDMA: In TDMA, the flows are transmitted serially. We use TDMA as the baseline for evaluating performance without concurrent transmissions. 2) MQIS: MQIS [2] is a HD concurrent scheduling algorithm based on the maximum QoS-aware independent set. It first schedules the flow with the smallest degree in contention graph. It doesn't remove the flow(s) spent too much slots, nor does it evaluate the profits when adding a new flow. To the best of our knowledge, MQIS achieves the best performance in the network system with limited TS resources in terms of the number of completed flows and system throughput among the existing scheduling algorithms. Therefore, we use it as the baseline for evaluating performance without FD communication. 3) Proposed-HD: It uses the same scheduling algorithm with Proposed-FD, but it only allows the HD communication. We also use it as a baseline without the FD communication. 4) FDP: Full-Duplex (FDP) scheme [16] is for the system where the TS resources are sufficient. It aims at accomplishing all of the transmissions with the minimum time. In every phase, higher priority is given to the flow that occupies the most TSs. If another flow is qualified to be transmitted together in the current phase, i.e., the number of flows simultaneously using the same BS doesn't exceed the number of RF chains and the SINR is larger than a certain threshold, the corresponding flow is also scheduled. Only when all the flows scheduled together in one phase are completed, can we start the next phase and make a new scheduling decision. We use it as a baseline for FD communication. Each simulation performs 100 times to get a more reliable average result. B. Simulation Results 1) Under different numbers of flows: In this case, the contention threshold σ is set to 0.001, and the SI cancelation level parameter β is uniformly distributed between 2 − 4. The simulation results are shown in Figure 8. We can find that the Proposed-FD algorithm always shows superior performance compared with other algorithms, and the more the number of flows, the more obvious the advantages of the Proposed-FD. Compared with the HD algorithms, the Proposed-FD allows simultaneous transmission and reception at the same BS. In fact, the Proposed-HD algorithm also performs better than MQIS. 
This is because when deciding whether or not to schedule a flow, we consider whether adding the flow can improve the system throughput. This makes each flow be scheduled at a higher rate, and thus the QoS requirements of flows can be achieved more quickly. As for the FDP algorithm, the number of completed flows for it is not large enough. This is mainly because the issue they investigate and the optimization goal are different from ours. It is for the network where the TS resources are sufficient and aims at accomplishing all the transmissions with the minimum time. Therefore, it is not suitable for the investigated problem in this paper that maximizing the completed flows in limited time. Moreover, with the increase of the total number of flows, the number of completed flows for FDP doesn't increase significantly. This is because in FDP, only when all the flows scheduled together in one phase are completed, can we start the next phase and make a new scheduling decision. As a result, when some flows are completed quickly, due to the lack of scheduling of new flows, a large amount of TS resources are wasted. Thus, in limited time, the number of completed flows has almost no change. However, the system throughput of FDP is higher than other HD algorithms. This is because FDP prefers the flows that occupy more TSs. These flows usually have higher QoS requirements (i.e. the minimum throughput requirement), so even the number of completed flows is small, the system throughput is still high. In particular, when the number of flows is 90, the Proposed-FD improves the number of completed flows by 30.1% compared with Proposed-HD and improves the system throughput by 34.1% compared with FDP. 2) Under different SI cancelation levels: For the two FD algorithms (the Proposed-FD and FDP), the SI cancelation level β has an obvious impact on the performance. Thus, we simulate the performance under different magnitudes of β, as shown in Figure 9. The abscissa x is the magnitude of β. For example, when x = 2, β is uniformly distributed in 2 × 10 2 − 4 × 10 2 . In this case, the total number of flows is 90, and σ = 0.001. We can find that the performance of the Proposed-FD is better when β is smaller, that is, when the SI cancelation level is higher. As β becomes larger, the performance of the Proposed-FD gradually deteriorates. In particular, when β reaches 10 4 magnitude, the Proposed-FD has the same performance as the Proposed-HD. This tells us that not in any case can the FD communication improve the system performance, and better SI cancelation techniques are needed. The trend of the performance for FDP is similar to the Proposed-FD. However, due to the applicable scenario and optimization goal are different from ours, the performance of FDP is relatively poor. 3) Under different contention thresholds: To study the impact of contention threshold σ on the performance, we simulate the two metrics under different σ, as shown in Figure 10. The abscissa x represents the magnitude of σ. For example, x = -3 means σ = 10 (−3) . In this case, the number of flows is 90, and β is uniformly distributed between 2 − 4. We can observe that as σ increases, in addition to TDMA, the performance of the other four solutions first increases, then degrades and finally almost keep unchanged. This is because when σ is small, it is not conducive to concurrent transmissions. 
When σ is greater than a certain threshold (e.g., $10^{-3}$ for Proposed-FD), there is severe interference between concurrent flows, which reduces the rates and makes it harder to satisfy the QoS requirements. Therefore, to achieve the best performance, we should choose an appropriate threshold. Under the simulation conditions in this paper, we choose σ = $10^{-3}$. Specifically, when σ = $10^{-3}$, Proposed-FD improves the number of completed flows by 29.9% compared with Proposed-HD and improves the system throughput by 35.9% compared with FDP. Although FDP doesn't use the contention graph, we convert its SINR threshold into the contention threshold, so its performance also varies with σ. VII. CONCLUSION In this paper, we propose a QoS-aware full-duplex concurrent scheduling algorithm for mmWave wireless backhaul networks. Considering the FD characteristics and the QoS requirements of flows in a system with limited TS resources, the proposed algorithm exploits the contention graph to find the concurrently scheduled flows and maximize the number of completed flows. Extensive simulations show that the proposed FD scheduling algorithm can significantly increase the number of completed flows and the system throughput compared with other scheduling schemes. In addition, the effects of the SI cancelation level and the contention threshold on the performance are also simulated to guide better scheduling. In future work, we will also take the blockage problem of mmWave communications into account and propose a robust scheme for the mmWave full-duplex backhaul network.
5,247
1812.11326
2800824532
The development of self-interference (SI) cancelation technology makes full-duplex (FD) communication possible. Considering the quality of service (QoS) of flows in densely deployed small-cell scenarios with limited time slot resources, this paper introduces FD communication into the concurrent scheduling problem of millimeter-wave wireless backhaul networks. We propose a QoS-aware FD concurrent scheduling algorithm to maximize the number of flows with their QoS requirements satisfied. Based on the contention graph, the algorithm makes full use of the FD condition. Both residual SI and multi-user interference are considered. Besides, it also fully considers the QoS requirements of flows and ensures that the flows can be transmitted at high rates. Extensive simulations at 60 GHz demonstrate that, with a high SI cancelation level and an appropriate contention threshold, the proposed FD algorithm can achieve superior performance in terms of the number of flows with their QoS requirements satisfied and the system throughput compared with other state-of-the-art schemes.
Considering the potential of FD communication to increase network performance, Feng et al. @cite_10 proposed a design framework for 5G mmWave backhaul, which combined FD transmissions and hybrid beamforming with routing and scheduling schemes. However, the scheduling solution in @cite_10 was for a system with sufficient TS resources and aimed at accomplishing all of the transmissions in the minimum time. Thus, there was no special consideration of the QoS requirements of flows in limited time. Therefore, for mmWave backhaul networks with limited TS resources, a more QoS-favorable FD scheduling algorithm is needed.
{ "abstract": [ "The trend for dense deployment in future 5G mobile communication networks makes current wired backhaul infeasible owing to the high cost. Millimetre-wave (mm-wave) communication, a promising technique with the capability of providing a multi-gigabit transmission rate, offers a flexible and cost-effective candidate for 5G backhauling. By exploiting highly directional antennas, it becomes practical to cope with explosive traffic demands and to deal with interference problems. Several advancements in physical layer technology, such as hybrid beamforming and full duplexing, bring new challenges and opportunities for mm-wave backhaul. This article introduces a design framework for 5G mm-wave backhaul, including routing, spatial reuse scheduling and physical layer techniques. The associated optimization model, open problems and potential solutions are discussed to fully exploit the throughput gain of the backhaul network. Extensive simulations are conducted to verify the potential benefits of the proposed method for the 5G mm-wave backhaul design." ], "cite_N": [ "@cite_10" ], "mid": [ "2427477319" ] }
QoS-aware Full-duplex Concurrent Scheduling for Millimeter Wave Wireless Backhaul Networks
In the fifth generation (5G) mobile cellular network, due to the densification of small cells, the massive backhaul traffic becomes a significant problem [1], [2]. Compared with the fiber-based backhaul network, the wireless backhaul network in millimeter-wave (mmWave) bands also has huge bandwidth, and can provide a more cost-effective and flexible solution to this problem [3]. In the mmWave wireless backhaul network, directional antennas and beamforming techniques are often used to compensate for the high path loss [4], [5]. The directional communication can reduce the interference between different flows, and thus concurrent transmissions (i.e. spatial reuse) of flows become possible. Concurrent transmissions can significantly increase the system throughput [6]. However, the concurrent transmissions of multiple flows result in higher mutual interference, which will conversely degrade the system performance. Therefore, how to efficiently schedule the flows transmitted concurrently is worth studying and has attracted considerable interest from researchers [2], [7]-[10]. Most existing concurrent scheduling schemes [2], [7]-[10] in mmWave bands hold the assumption of half-duplex (HD). Recently, with the development of self-interference (SI) cancelation technology [11]-[15], it has become possible to enable full-duplex (FD) communication in mmWave wireless backhaul networks [16]. Here, the SI means the transmitted signal received by the local receiver at the same base station (BS) [15], which is shown in Figure 1. It seriously affects the performance of an FD system [17]. By transmitting and receiving information simultaneously at the same BS over the same frequency [11], [17], FD communication may theoretically double the spectral efficiency [18], which brings an important opportunity for the concurrent scheduling problem in mmWave wireless backhaul networks. However, the SI can't be completely eliminated in practice. There is still residual self-interference (RSI) in the system. Therefore, for the FD backhaul system, the interference we need to consider is more complex than that in an HD system: not only multi-user interference (MUI), but also RSI. This is a big challenge for the concurrent scheduling problem in mmWave backhaul networks. Moreover, in the future 5G mmWave backhaul network, many applications are bandwidth-intensive (e.g. uncompressed video streaming), and should be provided with multi-Gbps throughput [8]. The data flows of these applications all have their own minimum throughput requirements. In the remainder of this paper, the minimum throughput requirements will be referred to as the quality of service (QoS) requirements. To guarantee the required quality of service, the QoS requirements of flows need to be satisfied [2]. Although the FD communication was introduced into the scheduling scheme for 5G mmWave backhaul networks in [16], the scheduling solution was designed for the case with sufficient time slot (TS) resources. The QoS requirements were not specially considered in [16]. Therefore, for the case where the TS resources are limited compared with the intensive traffic demands of users [2], [8], how to satisfy the QoS requirements of as many flows as possible is still a challenge. The above opportunities and challenges motivate us to investigate a QoS-aware FD concurrent scheduling scheme for the mmWave wireless backhaul network with limited TS resources. The contributions of this paper can be summarized as follows.
• We innovatively introduce the FD technology into the concurrent scheduling problem of mmWave wireless backhaul networks with a limited number of TSs. Both RSI and MUI are simultaneously taken into account so that the advantages of the FD technology and the concurrent transmission can be brought into full play. • The QoS requirements of flows in the case where the TS resources are limited are specially considered. We formulate a nonlinear integer programming (NLIP) problem aiming at maximizing the number of flows with their QoS requirements satisfied. Then, a QoS-aware FD scheduling algorithm is proposed, which can keep the flow rates high and satisfy the QoS requirements of as many flows as possible. • We evaluate the proposed algorithm in a 60 GHz mmWave wireless backhaul network with limited TS resources. The extensive simulations demonstrate that, compared with other state-of-the-art algorithms, the proposed QoS-aware FD algorithm can significantly improve the number of flows with their QoS requirements satisfied and the total system throughput. Furthermore, we also analyze the impact of the SI cancelation level and the contention threshold on the performance improvement. The structure of this paper is organized as follows. Section II introduces the related work. Section III introduces the system overview and assumptions. In Section IV, the optimal concurrent scheduling problem in FD mmWave wireless backhaul networks with limited TSs is formulated as an NLIP. In Section V, a QoS-aware FD concurrent scheduling algorithm is proposed. In Section VI, we conduct extensive simulations, and in Section VII we conclude this paper. III. SYSTEM OVERVIEW AND ASSUMPTION In this paper, we consider a typical FD mmWave wireless backhaul network in a densely deployed small-cell scenario. As shown in Figure 2, the network includes N BSs. The BSs are connected through backhaul links in the mmWave band. When there are traffic demands from one BS to another, we say there is a flow between them. As shown in Figure 3, each BS operates in FD mode and is equipped with two steerable directional antennas: one for transmitting and another for receiving. Therefore, a BS can simultaneously support at most two flows. It can simultaneously serve as the transmitter of one flow and the receiver of another, but it cannot simultaneously serve as the transmitter (or the receiver) of both flows. One or more BSs are connected to the backbone network via the macrocell and are called gateway(s) [10]. A backhaul network controller (BNC) resides on one of the gateways; it can synchronize the network, receive the QoS requirements of flows and obtain the locations of BSs [19]. A. The Received Power Since non-line-of-sight (NLOS) transmissions suffer from high attenuation, we use the line-of-sight (LOS) path loss model for mmWave as described in [2]. For flow f, the received signal power at its receiver $r_f$ from its transmitter $t_f$ can be expressed as $P_r(t_f, r_f) = k P_t G_t(t_f, r_f) G_r(t_f, r_f) d_{t_f r_f}^{-n}$ (1), where k is a factor proportional to $(\frac{\lambda}{4\pi})^2$, λ denotes the wavelength, $P_t$ denotes the transmission power of the transmitter, $G_t(t_f, r_f)$ denotes the transmitter antenna gain in the direction from $t_f$ to $r_f$, $G_r(t_f, r_f)$ denotes the receiver antenna gain in the direction from $t_f$ to $r_f$, $d_{t_f r_f}$ denotes the distance between $t_f$ and $r_f$, and n is the path loss exponent [8].
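As a concrete illustration of the link budget in (1), the short Python sketch below evaluates the LOS received power; the numeric carrier frequency, transmit power, antenna gains, distance and path-loss exponent are illustrative assumptions, not values taken from Table I.

```python
import math

def received_power(p_t, g_t, g_r, dist, n, wavelength):
    """LOS received power of eq. (1): P_r = k * P_t * G_t * G_r * d^(-n),
    with the constant k taken proportional to (wavelength / (4*pi))^2."""
    k = (wavelength / (4 * math.pi)) ** 2
    return k * p_t * g_t * g_r * dist ** (-n)

# Illustrative numbers (hypothetical, not the paper's Table I):
# 60 GHz carrier, 1 W transmit power, unit linear antenna gains,
# a 50 m backhaul link and path-loss exponent n = 2.
wavelength = 3e8 / 60e9          # roughly 5 mm at 60 GHz
p_r = received_power(p_t=1.0, g_t=1.0, g_r=1.0, dist=50.0, n=2.0,
                     wavelength=wavelength)
print(f"received power: {p_r:.3e} W")
```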
According to the FD assumption mentioned above, two flows scheduled simultaneously either have no common node or one's transmitter is the receiver of the other. Therefore, the interference between different flows can be divided into two cases: 1) the interference between two flows without any common node, namely MUI; 2) the RSI after SI cancelation. The MUI caused by the transmitter $t_l$ of flow l at the receiver $r_f$ of flow f is defined as $P_r(t_l, r_f) = \rho k P_t G_t(t_l, r_f) G_r(t_l, r_f) d_{t_l r_f}^{-n}$ (2), where ρ is the MUI factor between different flows, which is related to the cross correlation of signals from different flows [2]. According to [11], after SI cancelation, the effect of RSI can be modeled in terms of the SNR loss. Therefore, we use $\beta_n N_0 W$ to denote the RSI, where the non-negative parameter $\beta_n$ represents the SI cancelation level of the nth BS. The smaller $\beta_n$, the higher the level of SI cancelation. Due to various factors, we assume the parameters for different BSs are different. $N_0$ is the one-sided power spectral density of white Gaussian noise; W is the channel bandwidth. B. Data Rate With the reduction of the multipath effect, the mmWave channel can be approximated as a Gaussian channel. With the interference from other flows, the data rate of flow f can be estimated according to Shannon's channel capacity [10]. C. Antenna Model In this paper, we adopt the realistic antenna model in [10]. The gain of a directional antenna in units of dB can be expressed as $G(\theta) = G_0 - 3.01 \times (\frac{2\theta}{\theta_{-3dB}})^2$ for $0^\circ \le \theta \le \theta_{ml}/2$, and $G(\theta) = G_{sl}$ for $\theta_{ml}/2 < \theta \le 180^\circ$ (3), where θ denotes an angle within the range $[0^\circ, 180^\circ]$. The maximum antenna gain $G_0$ can be calculated as $G_0 = 10\log(1.6162/\sin(\theta_{-3dB}/2))^2$. $\theta_{-3dB}$ is the angle of the half-power beamwidth. The main lobe width $\theta_{ml}$ in units of degrees can be calculated as $\theta_{ml} = 2.6 \times \theta_{-3dB}$. The sidelobe gain is $G_{sl} = -0.4111 \times \ln(\theta_{-3dB}) - 10.579$ [10]. IV. PROBLEM FORMULATION In this paper, we consider a QoS-aware FD concurrent scheduling problem when the time is limited. System time is divided into a series of non-overlapping frames. As shown in Figure 4, each frame consists of a scheduling phase, where a transmission schedule S is computed by the BNC, and a transmission phase, where the BSs start concurrent transmissions following the schedule [6]. The transmission phase is further divided into M equal TSs. It is assumed that there are F flows in the network and each flow f has its QoS requirement $q_f$. For each flow f, we define a binary variable $a_f^i$ to indicate whether flow f is scheduled in the ith TS. If so, $a_f^i = 1$; otherwise, $a_f^i = 0$. Since different flows may be transmitted in different TSs, we denote the actual transmission rate of flow f in the ith TS by $R_f^i$. According to Shannon's channel capacity [10], $R_f^i$ can be calculated as $R_f^i = \eta W \log_2\left(1 + \frac{a_f^i P_r(t_f, r_f)}{N_0 W + \sum_h a_h^i \beta_{t_h} N_0 W + \sum_l a_l^i P_r(t_l, r_f)}\right)$ (4), where η is the factor that describes the efficiency of the transceiver design, which is in the range (0, 1), W is the bandwidth, and $N_0$ is the one-sided power spectral density of white Gaussian noise. h ranges over the flows whose transmitter is the receiver of flow f, $\beta_{t_h}$ is the SI cancelation level parameter at BS $t_h$, and l ranges over the flows without any common node with f.
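To show how the RSI and MUI terms enter the per-slot rate (4), the sketch below combines the directional gain model (3) with the Shannon-capacity expression. The bandwidth, noise density, efficiency η and the interference inputs are illustrative assumptions, and the unit convention for $\theta_{-3dB}$ in the sidelobe formula is also an assumption of this sketch rather than something stated in the text.

```python
import math

def antenna_gain_db(theta_deg, theta_3db_deg):
    """Directional antenna gain of eq. (3), in dB (angles assumed in degrees)."""
    g0 = 10 * math.log10((1.6162 / math.sin(math.radians(theta_3db_deg) / 2)) ** 2)
    theta_ml = 2.6 * theta_3db_deg                     # main lobe width
    g_sl = -0.4111 * math.log(theta_3db_deg) - 10.579  # sidelobe gain
    if theta_deg <= theta_ml / 2:
        return g0 - 3.01 * (2 * theta_deg / theta_3db_deg) ** 2
    return g_sl

def slot_rate(scheduled, p_signal, rsi_betas, mui_powers,
              eta=0.5, bandwidth=2.16e9, n0=1e-17):
    """Per-slot rate of eq. (4): Shannon capacity with noise, RSI and MUI in the
    denominator. rsi_betas are the beta parameters of FD BSs that transmit while
    receiving flow f; mui_powers are received powers of flows with no common node."""
    noise = n0 * bandwidth
    denom = noise + sum(b * noise for b in rsi_betas) + sum(mui_powers)
    return eta * bandwidth * math.log2(1 + (1 if scheduled else 0) * p_signal / denom)

# Illustrative usage: boresight gain of a 30-degree beam, and one rate estimate.
print(antenna_gain_db(0.0, 30.0))
print(slot_rate(True, p_signal=1e-9, rsi_betas=[3.0], mui_powers=[1e-12]))
```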
Then we can define the actual throughput of flow f based on the schedule S as $T_f = \frac{\sum_{i=1}^{M} R_f^i \, t}{T_s + M t}$ (5), where $T_s$ is the duration of the scheduling phase and t is the duration of one TS. When the actual throughput $T_f$ of flow f is greater than its QoS requirement $q_f$, we say flow f has satisfied its QoS requirement, and call the flow a completed flow. A binary variable $I_f$ is used to indicate whether flow f is completed. $I_f = 1$ indicates f is completed, while $I_f = 0$ indicates f is not completed. As we investigate QoS-aware scheduling for a backhaul network with limited time, given the QoS requirements of flows and the limited number of TSs in the transmission phase, the optimal schedule should accommodate as many flows as possible [2]. In other words, we should aim at maximizing the number of flows that satisfy their QoS requirements (i.e. the number of completed flows). Therefore, the objective function can be formulated as $\max \sum_{f=1}^{F} I_f$ (6), and the first constraint is $I_f = 1$ if $T_f \ge q_f$, and $I_f = 0$ otherwise (7). Next, we analyze the other constraints. Firstly, we use the variable $f_n$ to denote a flow whose transmitter or receiver is the nth BS $B_n$, such as the transmitting flow and the receiving flow in Figure 1; thus $a_{f_n}^i$ indicates whether flow $f_n$ is scheduled in the ith TS, that is, whether $f_n$ uses $B_n$ in the ith TS. According to our FD assumption described in Section III, because each BS is equipped with only two steerable directional antennas, the number of flows that simultaneously use the same BS $B_n$ cannot exceed two; this constraint can be expressed as $\sum_{f_n} a_{f_n}^i \le 2, \ \forall i, n$ (8). Then we use $f_n^1$ and $f_n^2$ to stand for the two flows that simultaneously use $B_n$; we also use $T(B_n)$ and $R(B_n)$ to stand for the wireless links with $B_n$ as the transmitter and the receiver, respectively. As assumed in Section III, for the two antennas of an FD BS, one of them is a transmitting antenna and the other is a receiving antenna. Therefore, when two flows simultaneously use the same BS, the BS can only serve as the transmitter for one flow and as the receiver for the other, which can be expressed as: $f_n^1 \in T(B_n) \,\&\, f_n^2 \in R(B_n)$ or $f_n^1 \in R(B_n) \,\&\, f_n^2 \in T(B_n)$, if $\sum_{f_n} a_{f_n}^i = 2$ (9). In summary, the problem of optimal scheduling (P1) can be formulated as follows: $\max \sum_{f=1}^{F} I_f$ s.t. Constraints (7)-(9). This is a nonlinear integer programming (NLIP) problem and is NP-hard [2]. The optimization problem is similar to that in [2]; however, [2] addresses the HD scenario while ours addresses the FD scenario. Compared with [2], our optimization problem has more constraints and is obviously more complex. Since the problem in [2] is NP-hard, our optimization problem is also NP-hard. In each TS, every flow is either scheduled or unscheduled. Therefore, when the number of TSs is M and the number of flows is F, the computational complexity of an exhaustive search algorithm is $2^{MF}$, which is exponential. In the densely deployed small-cell scenario, the number of flows may be large, and thus it would be time-consuming to solve P1 with an exhaustive algorithm. The computational time is unacceptable for practical mmWave small cells, where the duration of one TS is only a few microseconds [20]. Consequently, a heuristic algorithm with low complexity is desired to solve it in practice. V. QOS-AWARE FULL-DUPLEX SCHEDULING ALGORITHM In this section, we propose a QoS-aware full-duplex concurrent scheduling algorithm for problem P1.
Borrowing the idea of the contention graph from [10], the algorithm makes full use of the FD condition and satisfies the QoS requirements of as many flows as possible. Next, we first describe how to construct the contention graph and then describe the proposed algorithm in detail. A. The Construction of Contention Graph In FD mmWave wireless backhaul networks, not all pairs of flows can be concurrently scheduled. In the contention graph [10], when two flows cannot be concurrently scheduled, we say there is a contention between them. In this paper, based on the assumptions and analysis mentioned above, we classify the flows that cannot be concurrently scheduled into the following two cases. Firstly, according to the FD assumption described in Section III, for the two antennas of an FD BS, one of them is a transmitting antenna and the other is a receiving antenna. Therefore, two flows that simultaneously use the same BS as their transmitters (or receivers) cannot be concurrently scheduled. This case is shown in Figure 5. Figure 5 (a) shows two flows that simultaneously use the same BS as their transmitters. Similarly, Figure 5 (b) shows two flows that simultaneously use the same BS as their receivers. Accordingly, based on the analysis of this case, the flows that can be concurrently scheduled are divided into the following three cases. 1) As shown in Figure 6 (a), the transmitter of flow f is the receiver of flow l, but the receiver of flow f is not the transmitter of flow l. 2) As shown in Figure 6 (b), the transmitter of flow f is the receiver of flow l, and the receiver of flow f is the transmitter of flow l. 3) As shown in Figure 6 (c), flow f and flow l have no common node. Secondly, considering the QoS requirements of flows, to guarantee the flow rates and the system throughput, two flows whose relative interference (RI) [2] with each other is large cannot be concurrently scheduled. When the RI between two flows is large, the rates of the flows become low. The low rates result in inefficient resource utilization. In other words, the TS resources are allocated to the flows, but their QoS requirements are hard to satisfy, and thus they cannot support the specific applications [8]. For the three cases in Figure 6, we now define their RI, respectively. 1) For the case in Figure 6 (a), the interference from flow f to flow l is RSI. Therefore, the RI from flow f to flow l can be defined as $RI_{f,l} = \frac{N_0 W + \beta_{t_f} N_0 W}{P_r(t_l, r_l)}$ (10), where $P_r(t_l, r_l)$ is calculated as in (1). The interference from flow l to flow f is MUI, so the RI from flow l to flow f is defined as $RI_{l,f} = \frac{N_0 W + P_r(t_l, r_f)}{P_r(t_f, r_f)}$ (11), where $P_r(t_l, r_f)$ is calculated as in (2) and $P_r(t_f, r_f)$ is calculated as in (1). 2) For the case in Figure 6 (b), both the interference from flow f to l and that from flow l to f are RSI. Therefore, the RIs between the two flows are both similar to (10). 3) For the case in Figure 6 (c), both the interference from flow f to l and that from flow l to f are MUI. Therefore, the RIs between the two flows are both similar to (11). Next, let us construct the contention graph. In the contention graph, each vertex represents a flow. If two flows cannot be concurrently scheduled (i.e., there is a contention between them), an edge is inserted between the two corresponding vertices. For example, as shown in Figure 7, there is a contention between flow 1 and flow 2. In contrast, there is no contention between flow 1 and flow 3. Specifically, for the two pairs of flows in Figure 5, there is an edge between the two corresponding vertices, respectively.
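A minimal sketch of how such a contention graph could be assembled from the pairwise relative interference of (10)-(11) is given below; the Flow record, the ri lookup table and the threshold argument (the contention threshold σ that the next paragraph compares against) are hypothetical placeholders, not the paper's data structures.

```python
import itertools
from collections import namedtuple

Flow = namedtuple("Flow", "id tx rx")   # hypothetical flow record: id, transmitter BS, receiver BS

def relative_interference(noise, interference, p_victim_signal):
    """RI of eqs. (10)-(11): (noise + interference) / victim signal power, where the
    interference term is beta * N0 * W for an RSI pair or the received MUI power."""
    return (noise + interference) / p_victim_signal

def build_contention_graph(flows, ri, sigma):
    """Insert an edge between two flows that cannot be scheduled together: either they
    share a BS in the same role (the Figure 5 cases), or the larger of their two RIs
    exceeds the contention threshold sigma."""
    edges = set()
    for f, l in itertools.combinations(flows, 2):
        same_role = (f.tx == l.tx) or (f.rx == l.rx)
        high_ri = max(ri[(f.id, l.id)], ri[(l.id, f.id)]) > sigma
        if same_role or high_ri:
            edges.add((f.id, l.id))
    return edges
```

In the full algorithm, these edges are what the slot-by-slot procedure described below consults when deciding whether a new flow can join the currently active set.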
In addition, for the three pairs of flows in Figure 6, we should examine whether the RI between the flows is too large. When the RI between two flows is larger than a contention threshold σ, we say there is a contention between them. In other words, if $\max(RI_{f,l}, RI_{l,f}) > \sigma$, an edge is inserted between the two corresponding vertices. B. The QoS-aware Full-duplex Scheduling Algorithm Based on the contention graph, we now describe the proposed algorithm concretely. The pseudo code for it is shown in Algorithm 1. To begin with, line 1 is preparation work. The BNC obtains the BS locations (Loc), the SI cancelation level ($\beta_n$) of each BS and the QoS requirement ($q_f$) of each flow. Next, in line 2, we calculate the number of TSs that each flow needs to complete its QoS requirement when there is no interference from others. This number of TSs is calculated as $\xi_f = \frac{q_f (T_s + M t)}{R_f \, t}$ (12), where $R_f$ is the rate of flow f without interference from others, which can be calculated as $R_f = \eta W \log_2\left(1 + \frac{P_r(t_f, r_f)}{N_0 W}\right)$ (13). Since the scheduling problem we investigate is in limited time, i.e., in M TSs, the flows whose $\xi_f$ is greater than M should be removed. In the actual scheduling there is interference from other flows, so the flow rates will be further reduced and the number of TSs spent will further increase. Scheduling these flows is therefore meaningless for our optimization goal. Removing the flows (represented by the set D) not only reduces the complexity of subsequent scheduling, but also saves more TSs to schedule more worthwhile flows, that is, the flows that can be completed in M TSs. Pseudo code for this step is shown in line 3. Then, as shown in lines 4-5, we sort the remaining flows in non-decreasing order of $\xi_f$ and call the set of the sorted flows the "pre-scheduling set" P. Next, we construct the contention graph G for all the flows in P. Then we make the scheduling decision slot by slot. In lines 9-12, to complete more flows in the limited TSs, we first consider the flow with the smallest ξ. In other words, we examine the flows in P one by one from the beginning. If flow f has never been scheduled and has no contention with the flow(s) that are ongoing, then the profit of scheduling the flow is evaluated: if scheduling it can increase the total system throughput, we schedule it; otherwise, we skip the flow and consider the next one. These rules help to guarantee the flow rates and the system throughput, so the algorithm is more QoS-aware. In every TS, as shown in lines 14-16, it is necessary to check whether some flow(s) have completed their QoS requirements. If so, the corresponding $S_i(f)$ is set to -1, which means the flow will never be scheduled again. When a flow is completed, allocating further resources to it is of little significance for improving its QoS. Therefore, we should stop scheduling it and save the TSs to serve other flows. At the same time, change is set to 1. In fact, as shown in line 8, only in the first TS or when some flow(s) have newly been completed, that is, when change = 1, do we need to make a new scheduling decision. If change = 0, the scheduling vector is the same as in the previous TS, which is shown in line 13. In this way, the scheduling complexity is greatly reduced. The algorithm is repeated until the M TSs are over, and we finally obtain the scheduling vector for each TS. Obviously, in the worst case, the variable change is 1 in every TS, that is, some flow(s) are newly completed in every TS.
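The following Python sketch mirrors the slot-by-slot logic just described (drop hopeless flows, sort by $\xi_f$, admit a contention-free flow only if it raises total throughput, retire completed flows, and re-decide only when something completes). It is a simplified reading of the description above, not the paper's Algorithm 1 verbatim; the rate_of callback, the units of qos and the container choices are assumptions.

```python
def qos_aware_fd_schedule(flows, xi, edges, M, rate_of, qos):
    """flows: flow ids; xi[f]: TSs needed without interference (eq. (12));
    edges: contention-graph edge set of (f, l) pairs; M: TSs per frame;
    rate_of(active): dict {f: per-TS service} for a candidate active set (eq. (4)),
    returning {} for an empty set; qos[f]: required service per frame, same units."""
    pre = sorted((f for f in flows if xi[f] <= M), key=lambda f: xi[f])   # lines 2-5
    delivered = {f: 0.0 for f in flows}
    done, ever_scheduled, active, schedule = set(), set(), [], []
    change = True
    for _ in range(M):
        if change:                                            # line 8: re-decide only when needed
            change = False
            for f in pre:                                     # lines 9-12
                if f in ever_scheduled or f in done:
                    continue
                if any((f, a) in edges or (a, f) in edges for a in active):
                    continue                                  # contention with an ongoing flow
                if sum(rate_of(active + [f]).values()) > sum(rate_of(active).values()):
                    active.append(f)                          # adding f raises total throughput
                    ever_scheduled.add(f)
        rates = rate_of(active)
        schedule.append(list(active))                         # scheduling vector of this TS
        for f in list(active):                                # lines 14-16
            delivered[f] += rates[f]
            if delivered[f] >= qos[f]:
                done.add(f)                                   # completed: never schedule again
                active.remove(f)
                change = True
    return schedule, done
```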
Therefore, for the M TSs and F flows, the worst-case computational complexity of Algorithm 1 is O(MF). VI. PERFORMANCE EVALUATION A. Simulation Setup In the simulations, we evaluate the performance of the proposed algorithm in a 60 GHz mmWave wireless backhaul network in which 10 BSs are uniformly distributed in a 100m × 100m square area. Every BS has the same transmission power $P_t$. The transmitters and receivers of flows are randomly selected, and the QoS requirements of flows are uniformly distributed between 1 Gbps and 3 Gbps. The SI cancelation parameters β for different BSs are uniformly distributed in a certain range. To be more realistic, the other parameters are shown in Table I. Because we focus on the QoS of flows, in line with our optimization goal we use the number of completed flows and the system throughput as evaluation metrics. When a flow achieves its QoS requirement, it is called a completed flow. System throughput is the throughput of all flows in the network per slot. To show the advantages of the proposed QoS-aware FD concurrent scheduling algorithm (Proposed-FD) in a network with limited TS resources, we compare it with the following four schemes. 1) TDMA: In TDMA, the flows are transmitted serially. We use TDMA as the baseline for evaluating performance without concurrent transmissions. 2) MQIS: MQIS [2] is an HD concurrent scheduling algorithm based on the maximum QoS-aware independent set. It first schedules the flow with the smallest degree in the contention graph. It neither removes flows that would spend too many slots nor evaluates the profit of adding a new flow. To the best of our knowledge, among the existing scheduling algorithms MQIS achieves the best performance in terms of the number of completed flows and system throughput in a network with limited TS resources. Therefore, we use it as the baseline for evaluating performance without FD communication. 3) Proposed-HD: It uses the same scheduling algorithm as Proposed-FD, but only allows HD communication. We also use it as a baseline without FD communication. 4) FDP: The Full-Duplex (FDP) scheme [16] is designed for a system where the TS resources are sufficient. It aims at accomplishing all of the transmissions in the minimum time. In every phase, higher priority is given to the flow that occupies the most TSs. If another flow is qualified to be transmitted together in the current phase, i.e., the number of flows simultaneously using the same BS doesn't exceed the number of RF chains and the SINR is larger than a certain threshold, the corresponding flow is also scheduled. Only when all the flows scheduled together in one phase are completed can the next phase start with a new scheduling decision. We use it as a baseline for FD communication. Each simulation is repeated 100 times to obtain a reliable average result. B. Simulation Results 1) Under different numbers of flows: In this case, the contention threshold σ is set to 0.001, and the SI cancelation level parameter β is uniformly distributed between 2 and 4. The simulation results are shown in Figure 8. We find that the Proposed-FD algorithm always shows superior performance compared with the other algorithms, and its advantage becomes more obvious as the number of flows grows. Compared with the HD algorithms, Proposed-FD allows simultaneous transmission and reception at the same BS. In fact, the Proposed-HD algorithm also performs better than MQIS.
This is because when deciding whether or not to schedule a flow, we consider whether adding the flow can improve the system throughput. This ensures that each flow is scheduled at a higher rate, and thus the QoS requirements of flows can be achieved more quickly. As for the FDP algorithm, the number of flows it completes is not large. This is mainly because the problem it investigates and its optimization goal are different from ours: FDP targets a network where the TS resources are sufficient and aims at accomplishing all the transmissions in the minimum time. Therefore, it is not well suited to the problem investigated in this paper, namely maximizing the number of completed flows in limited time. Moreover, with the increase of the total number of flows, the number of completed flows for FDP doesn't increase significantly. This is because in FDP, only when all the flows scheduled together in one phase are completed can the next phase start with a new scheduling decision. As a result, when some flows are completed quickly, a large amount of TS resources is wasted because no new flows are scheduled. Thus, in limited time, the number of completed flows remains almost unchanged. However, the system throughput of FDP is higher than that of the HD algorithms. This is because FDP prefers the flows that occupy more TSs. These flows usually have higher QoS requirements (i.e., minimum throughput requirements), so even though the number of completed flows is small, the system throughput is still high. In particular, when the number of flows is 90, Proposed-FD improves the number of completed flows by 30.1% compared with Proposed-HD and improves the system throughput by 34.1% compared with FDP. 2) Under different SI cancelation levels: For the two FD algorithms (Proposed-FD and FDP), the SI cancelation level β has an obvious impact on the performance. Thus, we simulate the performance under different magnitudes of β, as shown in Figure 9. The abscissa x is the magnitude of β. For example, when x = 2, β is uniformly distributed between $2\times10^{2}$ and $4\times10^{2}$. In this case, the total number of flows is 90, and σ = 0.001. We find that the performance of Proposed-FD is better when β is smaller, that is, when the SI cancelation level is higher. As β becomes larger, the performance of Proposed-FD gradually deteriorates. In particular, when β reaches the order of $10^{4}$, Proposed-FD has the same performance as Proposed-HD. This shows that FD communication does not improve the system performance in every case, and better SI cancelation techniques are needed. The trend of the performance of FDP is similar to that of Proposed-FD. However, because its applicable scenario and optimization goal differ from ours, the performance of FDP is relatively poor. 3) Under different contention thresholds: To study the impact of the contention threshold σ on the performance, we simulate the two metrics under different σ, as shown in Figure 10. The abscissa x represents the magnitude of σ. For example, x = −3 means σ = $10^{-3}$. In this case, the number of flows is 90, and β is uniformly distributed between 2 and 4. We observe that as σ increases, the performance of all solutions except TDMA first improves, then degrades, and finally remains almost unchanged. This is because when σ is small, it is not conducive to concurrent transmissions.
When σ is greater than a certain threshold (e.g., $10^{-3}$ for Proposed-FD), there is severe interference between concurrent flows, which reduces the rates and makes it harder to satisfy the QoS requirements. Therefore, to achieve the best performance, we should choose an appropriate threshold. Under the simulation conditions in this paper, we choose σ = $10^{-3}$. Specifically, when σ = $10^{-3}$, Proposed-FD improves the number of completed flows by 29.9% compared with Proposed-HD and improves the system throughput by 35.9% compared with FDP. Although FDP doesn't use the contention graph, we convert its SINR threshold into the contention threshold, so its performance also varies with σ. VII. CONCLUSION In this paper, we propose a QoS-aware full-duplex concurrent scheduling algorithm for mmWave wireless backhaul networks. Considering the FD characteristics and the QoS requirements of flows in a system with limited TS resources, the proposed algorithm exploits the contention graph to find the concurrently scheduled flows and maximize the number of completed flows. Extensive simulations show that the proposed FD scheduling algorithm can significantly increase the number of completed flows and the system throughput compared with other scheduling schemes. In addition, the effects of the SI cancelation level and the contention threshold on the performance are also simulated to guide better scheduling. In future work, we will also take the blockage problem of mmWave communications into account and propose a robust scheme for the mmWave full-duplex backhaul network.
5,247
1812.11325
2906796011
We prove an invariance principle for a random Lorentz-gas particle in 3 dimensions under the Boltzmann-Grad limit and simultaneous diffusive scaling. That is, for the trajectory of a point-like particle moving among infinite-mass, hard-core, spherical scatterers of radius @math , placed according to a Poisson point process of density @math , in the limit @math , @math , @math up to time scales of order @math . To our knowledge this represents the first significant progress towards solving this problem in classical nonequilibrium statistical physics, since the groundbreaking work of Gallavotti (1970), Spohn (1978) and Boldrighini-Bunimovich-Sinai (1983). The novelty is that the diffusive scaling of particle trajectory and the kinetic (Boltzmann-Grad) limit are taken simulataneously. The main ingredients are a coupling of the mechanical trajectory with the Markovian random flight process, and probabilistic and geometric controls on the efficiency of this coupling.
In the case of infinite horizon (e.g. the plain @math arrangement of the spherical scatterers of diameter less than the lattice spacing) the free flight distribution of a particle flying in a uniformly sampled random direction has a heavy tail which causes a different type of long time behaviour of the particle displacement. The arguments of @cite_20 indicated that in the two-dimensional case super-diffusive scaling of order @math is expected. A central limit theorem with this anomalous scaling was proved with full rigour in @cite_7 , for the Lorentz-particle displacement in the @math -dimensional periodic case with infinite horizon. The periodic infinite horizon case in dimensions @math remains open.
{ "abstract": [ "As Bleher (J. Stat. Phys. 66(1):315–373, 1992) observed the free flight vector of the planar, infinite horizon, periodic Lorentz process S n ∣n=0,1,2,… belongs to the non-standard domain of attraction of the Gaussian law—actually with the ( n n ) scaling. Our first aim is to establish his conjecture that, indeed, ( S_ n n n ) converges in distribution to the Gaussian law (a Global Limit Theorem). Here the recent method of Balint and Gouezel (Commun. Math. Phys. 263:461–512, 2006), helped us to essentially simplify the ideas of our earlier sketchy proof (Szasz, D., Varju, T. in Modern dynamical systems and applications, pp. 433–445, 2004). Moreover, we can also derive (a) the local version of the Global Limit Theorem, (b) the recurrence of the planar, infinite horizon, periodic Lorentz process, and finally (c) the ergodicity of its infinite invariant measure.", "We study the asymptotic statistical behavior of the 2-dimensional periodic Lorentz gas with an infinite horizon. We consider a particle moving freely in the plane with elastic reflections from a periodic set of fixed convex scatterers. We assume that the initial position of the particle in the phase space is random with uniform distribution with respect to the Liouville measure of the periodic problem. We are interested in the asymptotic statistical behavior of the particle displacement in the plane as the timet goes to infinity. We assume that the particle horizon is infinite, which means that the length of free motion of the particle is unbounded. Then we show that under some natural assumptions on the free motion vector autocorrelation function, the limit distribution of the particle displacement in the plane is Gaussian, but the normalization factor is (t logt)1 2 and nott1 2 as in the classical case. We find the covariance matrix of the limit distribution." ], "cite_N": [ "@cite_7", "@cite_20" ], "mid": [ "2011017194", "2002700608" ] }
Invariance Principle for the Random Lorentz Gas -Beyond the Boltzmann-Grad Limit
We consider the Lorentz gas with randomly placed spherical hard core scatterers in $\mathbb{R}^d$. That is, place spherical balls of radius r and infinite mass centred on the points of a Poisson point process of intensity ρ in $\mathbb{R}^d$, where $\rho r^d$ is sufficiently small so that with positive probability there is free passage out to infinity, and define $t \mapsto X^{r,\rho}(t) \in \mathbb{R}^d$ to be the trajectory of a point particle starting with randomly oriented unit velocity, performing free flight in the complement of the scatterers and scattering elastically on them. A major problem in mathematical statistical physics is to understand the diffusive scaling limit of the particle trajectory $t \mapsto \frac{X^{r,\rho}(Tt)}{\sqrt{T}}$, as $T \to \infty$. (1) Indeed, the Holy Grail of this field of research would be to prove an invariance principle (i.e. weak convergence to a Wiener process with nondegenerate variance) for the sequence of processes in (1) in either the quenched or annealed setting (discussed in section 1.1). For extensive discussion and historical background see the surveys [18,7,14] and the monograph [19]. The same problem in the periodic setting, when the scatterers are placed in a periodic array and randomness comes only with the initial conditions of the moving particle, is much better understood, due to the fact that in the periodic case the problem is reformulated as the diffusive limit of particular additive functionals of billiards in compact domains, and thus the heavy artillery of hyperbolic dynamical systems theory is efficiently applicable. In order to put our results in context, we will summarize very succinctly the existing results in section 1.4. There has been, however, no progress in the study of the random Lorentz gas informally described above since the ground-breaking work of Gallavotti [9,10], Spohn [17,18] and Boldrighini-Bunimovich-Sinai [3], where weak convergence of the process $t \mapsto X^{r,\rho}(t)$ to a continuous time random walk $t \mapsto Y(t)$ (called the Markovian flight process) was established in the Boltzmann-Grad (a.k.a. low density) limit $r \to 0$, $\rho \to \infty$, $\rho r^{d-1} \to 1$, in compact time intervals $t \in [0, T]$, with $T < \infty$, in the annealed [9,10,17,18], respectively quenched [3], setting. Our main result (see Theorem 2 in subsection 1.3) proves an invariance principle in the annealed setting if we take the Boltzmann-Grad and diffusive limits simultaneously: $r \to 0$, $\rho \to \infty$, $\rho r^{d-1} \to 1$ and $T = T(r) \to \infty$. Thus, while the diffusive limit (1) with fixed r and ρ remains open, this is the first result proving convergence for infinite times in the setting of randomly placed scatterers, and hence it is a significant step towards the full resolution of the problem in the annealed setting.
Otherwise, the dynamics of the moving particle is fully deterministic, governed by classical Newtonian laws. With probability 1 (with respect to both sources of randomness) the trajectory $t \mapsto X^{r,\rho}(t)$ is well defined. Due to elementary scaling and percolation arguments, $P(\text{the moving particle is not trapped in a compact domain}) = \vartheta_d(\rho r^d)$, where $\vartheta_d : \mathbb{R}_+ \to [0, 1]$ is a percolation probability which is (i) monotone non-increasing; (ii) continuous except for one possible jump at a positive and finite critical value $u_c = u_c(d) \in (0, \infty)$; (iii) vanishing for $u \in (u_c, \infty)$ and positive for $u \in (0, u_c)$; (iv) $\lim_{u \to 0} \vartheta_d(u) = 1$. We assume that $\rho r^d < u_c$. In fact, in the Boltzmann-Grad limit considered in this paper (see (3) below) we will have $\rho r^d \to 0$. As discussed above, the Holy Grail of this field is a mathematically rigorous proof of an invariance principle for the processes (1) in either one of the following two settings. (Q) Quenched limit: For almost all (i.e. typical) realizations of the underlying Poisson point process, with averaging over the random initial velocity of the particle. In this case, it is expected that the variance of the limiting Wiener process is deterministic, not depending on the realization of the underlying Poisson point process. (AQ) Averaged-quenched (a.k.a. annealed) limit: Averaging over the random initial velocity of the particle and the random placements of the scatterers. The Boltzmann-Grad limit The Boltzmann-Grad limit is the following low (relative) density limit of the scatterer configuration: $r \to 0$, $\rho \to \infty$, $\rho r^{d-1} \to v_{d-1}$, (3) where $v_{d-1}$ is the area of the (d − 1)-dimensional unit disc. In this limit the expected free path length between two successive collisions will be 1. Other choices of $\lim \rho r^{d-1} \in (0, \infty)$ are equally legitimate and would change the limit only by a time (or space) scaling factor. It is not difficult to see that in the averaged-quenched setting and under the Boltzmann-Grad limit (3) the distribution of the first free flight length starting at any deterministic time converges to EXP(1), and the jump in velocity after the free flight happens in a Markovian way with transition kernel $P(v_{out} \in dv' \mid v_{in} = v) = \sigma(v, v')\,dv'$, (4) where $dv'$ is the surface element on $S^{d-1}$ and $\sigma : S^{d-1} \times S^{d-1} \to \mathbb{R}_+$ is the normalised differential cross section of a spherical hard core scatterer, computable as $\sigma(v, v') = \frac{1}{4 v_{d-1}} |v - v'|^{3-d}$. (5) Note that in 3 dimensions the transition probability (4) of velocity jumps is uniform. That is, the outgoing velocity $v_{out}$ is uniformly distributed on $S^2$, independently of the incoming velocity $v_{in}$.
It is intuitively compelling, but far from easy to prove, that under the Boltzmann-Grad limit (3) and in compact time intervals, the Lorentz process converges weakly to the Markovian flight process: $(t \mapsto X^{r,\rho}(t)) \Rightarrow (t \mapsto Y(t))$. (6) The limiting Markovian flight process $t \mapsto Y(t)$ is a continuous time random walk. Therefore, by taking a second, diffusive limit after the Boltzmann-Grad limit (6), Donsker's theorem (see [1]) indeed yields the invariance principle $(t \mapsto T^{-1/2} Y(Tt)) \Rightarrow (t \mapsto W(t))$, (7) as $T \to \infty$, where $t \mapsto W(t)$ is a Wiener process in $\mathbb{R}^d$ of nondegenerate variance. The variance of the limiting Wiener process W can be explicitly computed, but its concrete value has no importance. The natural question arises whether one could somehow interpolate between the double limit of taking first the Boltzmann-Grad limit (6) and then the diffusive limit (7), and the plain diffusive limit for the Lorentz process, (1). Our main result, Theorem 2 formulated in section 1.3, gives a positive partial answer in dimension 3. Since our results are proved in three dimensions, from now on we formulate all statements in d = 3 rather than in general dimension. Results In the rest of the paper we assume $\rho = \rho(r) = \pi r^{-2}$ and drop the superscript ρ from the notation of the Lorentz process. Our results (Theorems 1 and 2 formulated below) refer to a coupling - a joint realisation on the same probability space - of the Markovian random flight process $t \mapsto Y(t)$ and the averaged-quenched (annealed) Lorentz process $t \mapsto X^r(t)$. The coupling is informally described later in this section and constructed with full formal rigour in section 2.2. The first theorem states that in our coupling, up to time $T \ll r^{-1}$, the Markovian flight and Lorentz exploration processes stay together. Theorem 1. Let T = T(r) be such that $\lim_{r \to 0} T(r) = \infty$ and $\lim_{r \to 0} r T(r) = 0$. Then $\lim_{r \to 0} P(\inf\{t : X^r(t) \neq Y(t)\} \le T) = 0$. (8) Although this result is subsumed by our main result, it shows the strength of the coupling method employed in this paper. In particular, with some elementary arguments it provides a much stronger result than that of Gallavotti and Spohn [9,10,17], which states the weak limit (6) (which follows from (8)) for any fixed $T < \infty$. On the other hand, the proof of this "naïve" result sheds some light on the structure of the proof of the more sophisticated Theorem 2, which is our main result. Theorem 2. Let T = T(r) be such that $\lim_{r \to 0} T(r) = \infty$ and $\lim_{r \to 0} r^2 |\log r|^2 T(r) = 0$. Then, for any δ > 0, $\lim_{r \to 0} P(\sup_{0 \le t \le T} |X^r(t) - Y(t)| > \delta \sqrt{T}) = 0$, (9) and hence $(t \mapsto T^{-1/2} X^r(Tt)) \Rightarrow (t \mapsto W(t))$, (10) as $r \to 0$, in the averaged-quenched sense. On the right hand side of (10), W is a standard Wiener process of variance 1 in $\mathbb{R}^3$. Indeed, the invariance principle (10) readily follows from the invariance principle for the Markovian flight process, (7), and the closeness of the two processes quantified in (9). So, it remains to prove (9). This will be the content of the larger part of this paper, sections 4-7. The point of Theorem 2 is that the Boltzmann-Grad limit of the scatterer configuration (3) and the diffusive scaling of the trajectory are done simultaneously, and not consecutively. The memory effects due to recollisions are controlled up to the time scale $T = T(r) = o(r^{-2}|\log r|^{-2})$. Remarks on dimension: (1) Our proof is not valid in 2 dimensions for two different reasons: (a) Probabilistic estimates at the core of the proof are valid only in the transient dimensions of random walk, $d \ge 3$. (b) A subtle geometric argument, which will show up in sections 6.4-6.6 below, is valid only in $d \ge 3$ as well. This is unrelated to the recurrence/transience dichotomy and it is crucial in controlling the short range recollision events in the Boltzmann-Grad limit (3).
(2) The fact that in d = 3 the differential cross section of hard spherical scatterers is uniform on $S^2$, c.f. (4), (5), facilitates our arguments, since in this case the successive velocities of the random flight process Y(t) form an i.i.d. sequence. However, this is not of crucial importance. The same arguments could also be carried out for other differential cross sections, at the expense of more extensive arguments. We do not pursue these generalisations here. Therefore the proofs presented in this paper are valid exactly in d = 3. The proof will be based on a coupling (that is: a joint realisation on the same probability space) of the Markovian flight process $t \mapsto Y(t)$ and the averaged-quenched realisation of the Lorentz process $t \mapsto X^r(t)$, such that the maximum distance of their positions up to time T is of small order compared with $\sqrt{T}$. The Lorentz process $t \mapsto X^r(t)$ is realised as an exploration of the environment of scatterers. That is, as time goes on, more and more information is revealed about the positions of the scatterers. As long as $X^r(t)$ traverses yet unexplored territories, it behaves just like the Markovian flight process Y(t), discovering new, yet-unseen scatterers with rate 1 and scattering on them. However, unlike the Markovian flight process it has long memory: the discovered scatterers are placed forever, and if the process $X^r(t)$ returns to these positions, recollisions occur. Likewise, the area swept in the past by the Lorentz exploration process $X^r(t)$ - that is, a tube of radius r around its past trajectory - is recorded as a domain where new collisions can not occur. For a formal definition of the coupling see section 2.2. Let their velocity processes be $U(t) := \dot{Y}(t)$ and $V^r(t) := \dot{X}^r(t)$. These are almost surely piecewise constant jump processes. The coupling is realized in such a way that (A) At the very beginning the two velocities coincide, $V^r(0) = U(0)$. (B) Occasionally, with typical frequency of order r, mismatches of the two velocity processes occur. These mismatches are caused by two possible effects: • Recollisions of the Lorentz exploration process with a scatterer placed in the past. This causes a collision event when $V^r(t)$ changes while U(t) does not. • Scatterings of the Markovian flight process Y(t) at a moment when the Lorentz exploration process is in the explored tube, where it can not encounter a not-yet-seen new scatterer. In these moments the process U(t) has a jump discontinuity, while the process $V^r(t)$ stays unchanged. We will call these events shadowed scatterings of the Markovian flight process. (C) However, shortly after the mismatch events described in item (B) above, a new jointly realised scattering event of the two processes occurs, recoupling the two velocity processes to identical values. These recouplings occur typically at an EXP(1)-distributed time after the mismatches. Figure 1: The image shows a recollision (left) and a shadowing event (right). Note that after each event U and $V^r$ are no longer coupled. However, at the next scattering, if possible, the velocities are recoupled. Summarizing: The coupled velocity processes $t \mapsto (U(t), V^r(t))$ are realized in such a way that they assume the same values except for typical time intervals of length of order 1, separated by typical intervals of lengths of order $r^{-1}$. Other, more complicated mismatches of the two processes occur only at time scales of order $r^{-2}|\log r|^{-2}$.
If all these are controlled (this will be the content of the proof) then the following hold: Up to $T = T(r) = o(r^{-1})$, with high probability there is no mismatch whatsoever between U(t) and $V^r(t)$. That is, $\lim_{r \to 0} P(\inf\{t : V^r(t) \neq U(t)\} < T) = \lim_{r \to 0} P(\inf\{t : X^r(t) \neq Y(t)\} < T) = 0$. (11) In particular, the invariance principle (10) also follows, with $T = T(r) = o(r^{-1})$, rather than $T = T(r) = o(r^{-2}|\log r|^{-2})$. As a by-product of this argument, a new and handier proof of the theorem (6) of Gallavotti [9,10] and Spohn [17,18] also drops out. Going up to $T = T(r) = o(r^{-2}|\log r|^{-2})$ needs more argument. The ideas exposed in the outline (A), (B), (C) above lead to the following chain of bounds: $\max_{0 \le t \le 1} \left| \frac{X^r(Tt)}{\sqrt{T}} - \frac{Y(Tt)}{\sqrt{T}} \right| = \frac{1}{\sqrt{T}} \max_{0 \le t \le 1} \left| \int_0^{Tt} (V^r(s) - U(s))\,ds \right| \le \frac{1}{\sqrt{T}} \int_0^{T} |V^r(s) - U(s)|\,ds \approx \frac{1}{\sqrt{T}}\, T r = \sqrt{T}\, r$. In the step marked ≈ we use the arguments (B) and (C). Finally, choosing in the end $T = T(r) = o(r^{-2})$, we obtain a tightly close coupling of the diffusively scaled processes $t \mapsto X^r(Tt)/\sqrt{T}$ and $t \mapsto Y(Tt)/\sqrt{T}$, (9), and hence the invariance principle (10), for this longer time scale. This hand-waving argument should, however, be taken with a grain of salt: it does not show the logarithmic factor, which arises in the fine-tuning. Scaling limit of the periodic Lorentz gas As already mentioned, diffusion in the periodic setting is much better understood than in the random setting. This is due to the fact that diffusion in the periodic Lorentz gas can be reduced to the study of limit theorems of particular additive functionals of billiard flows in compact domains. Heavy tools of hyperbolic dynamics provide the technical arsenal for the study of these problems. The first breakthrough was the fully rigorous proof of the invariance principle (diffusive scaling limit) for the Lorentz particle trajectory in a two-dimensional periodic array of spherical scatterers with finite horizon [4]. (Finite horizon means that the length of the straight path segments not intersecting a scatterer is bounded from above.) This result was extended to higher dimensions in [6], under a still-not-proved technical assumption on the singularities of the corresponding billiard flow. In the case of infinite horizon (e.g. the plain $\mathbb{Z}^d$ arrangement of spherical scatterers of diameter less than the lattice spacing) the free flight distribution of a particle flying in a uniformly sampled random direction has a heavy tail, which causes a different type of long time behaviour of the particle displacement. The arguments of [2] indicated that in the two-dimensional case super-diffusive scaling of order $\sqrt{t \log t}$ is expected. A central limit theorem with this anomalous scaling was proved with full rigour in [20], for the Lorentz-particle displacement in the 2-dimensional periodic case with infinite horizon. The periodic infinite horizon case in dimensions $d \ge 3$ remains open. Boltzmann-Grad limit of the periodic Lorentz gas The Boltzmann-Grad limit in the periodic case means spherical scatterers of radii $r \ll 1$ placed on the points of the hypercubic lattice $r^{(d-1)/d}\mathbb{Z}^d$. The particle starts with random initial position and velocity sampled uniformly and collides elastically on the scatterers. For a full exposition of the long and complex history of this problem we quote the surveys [11,14] and recall only the final, definitive results.
In [5] and [15] it is proved that in the Boltzmann-Grad limit the trajectory of the Lorentz particle in any compact time interval $t \in [0, T]$, with $T < \infty$ fixed, converges weakly to a non-Markovian flight process which has, however, a complete description in terms of a Markov chain of the successive collision impact parameters and, conditionally on this random sequence, independent flight lengths. (For a full description in these terms see [16].) As a second limit, an invariance principle is proved in [16] for this non-Markovian random flight process, with superdiffusive scaling $\sqrt{t \log t}$. Note that in this case the second limit doesn't just drop out from Donsker's theorem as it did in the random scatterer setting. The results of [5] are valid in d = 2, while those of [15] and [16] hold in arbitrary dimension. Interpolating between the plain scaling limit in the infinite horizon case (open in $d \ge 3$) and the kinetic limit, by simultaneously taking the Boltzmann-Grad limit and scaling the trajectory by $\sqrt{T \log T}$, where $T = T(r) \to \infty$ with some rate, would be the problem analogous to our Theorem 1 or Theorem 2. This is widely open. Miscellaneous The quantum analogue of the problem of the Boltzmann-Grad limit for the random Lorentz gas was considered in [8], where the long time evolution of a quantum particle interacting with a random potential in the Boltzmann-Grad limit is studied. It is proved that the phase space density of the quantum evolution converges weakly to the solution of the linear Boltzmann equation. This is the precise quantum analogue of the classical problem solved by Gallavotti and Spohn in [9,10,17,18]. Looking into the future: Liverani investigates the periodic Lorentz gas with finite horizon with local random perturbations in the cells of periodicity: a basic periodic structure with spherical scatterers centred on $\mathbb{Z}^d$ with extra scatterers placed randomly and independently within the cells of periodicity [12]. This is an interesting mixture of the periodic and random settings which could succumb to a mixture of dynamical and probabilistic methods, so-called deterministic walks in random environment. Structure of the paper The rest of the paper is devoted to the rigorous statement and proof of the arguments exposed in (A), (B), (C) above. Its overall structure is as follows: -Section 2: We construct the Markovian flight process and the Lorentz exploration and thus lay out the coupling argument which is essential moving forward. Moreover, we will introduce an auxiliary process, Z, which will be simpler to work with than X. -Section 3: We prove Theorem 1. We go through the proof of this result as it is both informative for the dynamics, and the proof of Theorem 2 in its full strength will follow partially similar lines, however with substantial differences. Sections 4-7 are fully devoted to the proof of Theorem 2, as follows: -Section 4: We break up the process Z into independent legs. From here we state two propositions which are central to the proof. They state that (i) with high probability the process X does not differ from Z in each leg; (ii) with high probability, the different legs of the process Z do not interact (up to times of our time scales). -Section 5: We prove the proposition concerning interactions between legs. -Section 6: We prove the proposition concerning coincidence, with high probability, of the processes X and Z within a single leg. This section is longer than the others, due to the subtle geometric arguments and estimates needed in this proof.
-Section 7: We finish off the proof of Theorem 2. Construction Ingredients and the Markovian flight process Let ξ j ∈ R + and u j ∈ R 3 , j = −2, −1, 0, 1, 2, . . . , be completely independent random variables (defined on an unspecified probability space (Ω, F , P)) with distributions: ξ j ∼ EXP (1), u j ∼ U N I(S 2 ),(12) and let y j := ξ j u j ∈ R 3 .(13) For later use we also introduce the sequence of indicators j := 1{ξ j < 1},(14) and the corresponding conditional exponential distributions EXP (1|1) := distrib(ξ | = 1), respectively, EXP (1|0) = distrib(ξ | = 0), with distribution densities (e − 1) −1 e 1−x 1{0 ≤ x < 1}, respectively, e 1−x 1{1 ≤ x < ∞}. We will also use the notation := ( j ) j≥0 and call the sequence the signature of the i.i.d. EXP (1)-sequence (ξ j ) j≥0 . The variables ξ j and u j will be, respectively, the consecutive flight length/flight times and flight velocities of the Markovian flight process t → Y (t) ∈ R 3 defined below. Denote, for n ∈ Z + , t ∈ R + , τ n := n j=1 ξ j , ν t := max{n : τ n ≤ t}, {t} := t − τ νt .(15) That is: τ n denotes the consecutive scattering times of the flight process, ν t is the number of scattering events of the flight process Y occurring in the time interval (0, t], and {t} is the length of the last free flight before time t. Finally let Y n := n j=1 ξ j u j = n j=1 y j , Y (t) := Y νt + {t}u νt+1 . We shall refer to the process t → Y (t) as the Markovian flight process. This will be our fundamental probabilistic object. All variables and processes will be defined in terms of this process, and adapted to the natural continuous time filtration (F t ) t≥0 of the flight process: F t := σ(u 0 , (Y (s)) 0≤s≤t ). Note that the processes n → Y n , t → Y (t) and their respective natural filtrations (F n ) n≥0 , (F t ) t≥0 , do not depend on the parameter r. We also define, for later use, the virtual scatterers of the flight process t → Y (t). For n ≥ 0, let Y k := Y k + r u k − u k+1 |u n − u k+1 | = Y k + rẎ (τ − k ) −Ẏ (τ + k ) Ẏ (τ − k ) −Ẏ (τ + k ) , k ≥ 0, S Y n := {Y k ∈ R 3 : 0 ≤ k ≤ n}, n ≥ 0. Here and throughout the paper we use the notation f (t ± ) := lim ε↓0 f (t ± ε). The points Y n ∈ R 3 are the centres of virtual spherical scatterers of radius r which would have caused the nth scattering event of the flight process. They do not have any influence on the further trajectory of the flight process Y , but will play role in the forthcoming couplings. The Lorentz exploration process Let r > 0, and = (r) = πr −2 . We define the Lorentz exploration process t → X(t) = X r (t) ∈ R 3 , coupled with the flight process t → Y (t), adapted to the filtration (F t ) t≥0 . The process t → X(t) and all upcoming random variables related to it do depend on the choice of the parameter r (and ), but from now on we will suppress explicit notation of dependence upon these parameters. The construction goes inductively, on the successive time intervals [τ n−1 , τ n ), n = 1, 2, . . . . Start with [Step 1] and then iterate indefinitely [Step 2] and [ Step 3] below. [ Step 1] Start with X(0) = X 0 = 0, V (0 + ) = u 1 , X 0 := r u 0 − u 1 |u 0 − u 1 | S X 0 = {X 0 }. Note that the trajectory of the exploration process X begins with a collision at time t = 0. This is not exactly as described previously but is of no consequence and aids the later exposition. Go to [ Step 2]. 
[ Step 2] This step starts with given X(τ n−1 ) = X n−1 ∈ R 3 , V (τ + n−1 ) ∈ S 2 and S X n−1 = {X k : 0 ≤ k ≤ n − 1} ⊂ R 3 ∪ { }, where • is a fictitious point at infinity, with inf x∈R 3 |x − | = ∞, introduced for bookkeeping reasons; • |X n−1 − X k | ∈ (r, ∞] for 0 ≤ k < n − 1, and X n−1 − X n−1 ∈ {r, ∞}. The trajectory t → X(t), t ∈ [τ n−1 , τ n ), is defined as free motion with elastic collisions on fixed spherical scatterers of radius r centred at the points in S X n−1 . At the end of this time interval the position and velocity of the Lorentz exploration process are X(τ n ) =: X n , respectively, V (τ − n ). Go to [Step 3]. [Step 3] Let X n := X n + r V (τ − n ) − u n+1 V (τ − n ) − u n+1 , d n := min 0≤s<τn X(s) − X n . Note that d n ≤ r. • If d n < r then let X n := , and V (τ + n ) = V (τ − n ). • If d n = r then let X n := X n ,and V (τ + n ) = u n+1 . Set S X n = S X n−1 ∪ {X n }. Go back to [Step 2]. The process t → X(t) is indeed adapted to the filtration (F t ) 0≤t<∞ and indeed has the averagedquenched distribution of the Lorentz process. Our notation is fully consistent with the one used for the markovian process Y : X n := X(τ n ) and X k :=        X k + rẊ (τ − k ) −Ẋ(τ + k ) Ẋ (τ − k ) −Ẋ(τ + k ) ifẊ(τ − k ) =Ẋ(τ + k ), ifẊ(τ − k ) =Ẋ(τ + k ), k ≥ 0, S X n := {X k ∈ R 3 : 0 ≤ k ≤ n}, n ≥ 0. Mechanical consistency and compatibility of piece-wise linear trajectories in R 3 The key notion in the exploration construction of section 2.2 was mechanical r-consistency, and r-compatibility of finite segments of piece-wise linear trajectories in R 3 , which we are going to formalize now, for later reference. Let n ∈ N, τ 0 ∈ R, Z 0 ∈ R 3 , v 0 , . . . , v n+1 ∈ S 2 t 1 , . . . , t n ∈ R + , be given and define for j = 0, . . . , n, τ j := τ 0 + j k=1 t k , Z j := Z 0 + j k=1 t k v k , Z j :=    Z j + r v j − v j+1 |v j − v j+1 | if v j = v j+1 , if v j = v j+1 , and for t ∈ [τ j , τ j+1 ], j = 0, . . . , n, Z(t) := Z j + (t − τ j )v j+1 . We call the piece-wise linear trajectory Z(t) : τ − 0 < t < τ + n mechanically r-consistent or r-inconsistent, if min τ 0 ≤t≤τn min 0≤j≤n Z(t) − Z j = r, respectively, min τ 0 ≤t≤τn min 0≤j≤n Z(t) − Z j < r(16) Note, that by formal definition the minimum distance on the left hand side can not be strictly larger than r. Given two finite pieces of mechanically r-consistent trajectories Z a (t) : τ − a,0 < t < τ + a,na and Z b (t) : τ − b,0 < t < τ + b,n b , defined over non-overlapping time intervals: [τ a,0 , τ a,na ] ∩ [τ b,0 , τ b,n b ] = ∅, with τ a,na ≤ τ b,0 , we will call them mechanically r-compatible or r-incompatible if min{ min τ a,0 ≤t≤τa,n a min 0<j≤n b Z a (t) − Z b,j , min τ b,0 ≤t≤τ b,n b min 0≤j<na Z b (t) − Z a,j } ≥ r, min{ min τ a,0 ≤t≤τa,n a min 0<j≤n b Z a (t) − Z b,j , min τ b,0 ≤t≤τ b,n b min 0≤j<na Z b (t) − Z a,j } < r,(17) respectively. It is obvious that given a mechanically r-consistent trajectory, any non-overlapping parts of it are pairwise mechanically r-compatible, and given a finite number of non-overlapping mechanically r-consistent pieces of trajectories which are also pair-wise mechanically r-compatible their concatenation (in the most natural way) is mechanically r-consistent. An auxiliary process It will be convenient to introduce a third, auxiliary process t → Z(t) ∈ R 3 , and consider the joint realization of all three processes t → (Y (t), X(t), Z(t)) on the same probability space. This construction will not be needed until section 4, but this is the optimal logical point to introduce it. 
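For readers who find a concrete realisation helpful, the sampling step behind the Markovian flight process of section 2.1 can be summarised in a few lines of code. The following Python sketch is purely illustrative: the use of numpy, the function names unit_vector and markovian_flight, and the parameter n (the number of scattering events) are ours and not part of the construction above. It samples the flight times ξ_j ~ EXP(1) and the velocities u_j ~ UNI(S^2), and returns the scattering times τ_n, the collision points Y_n and the virtual scatterer centres Ȳ_k of radius r. It does not implement the Lorentz exploration process itself, which in addition requires the mechanical consistency checks of [Step 2] and [Step 3].

import numpy as np

def unit_vector(rng):
    # u ~ UNI(S^2): normalise a 3-dimensional standard Gaussian vector
    v = rng.standard_normal(3)
    return v / np.linalg.norm(v)

def markovian_flight(n, r, rng=None):
    # Sample flight times xi_1,...,xi_n ~ EXP(1) and velocities u_0,...,u_n ~ UNI(S^2).
    # Returns the scattering times tau_1,...,tau_n, the collision points
    # Y_0,...,Y_n (with Y_0 = 0) and the virtual scatterer centres
    # Ybar_k = Y_k + r*(u_k - u_{k+1})/|u_k - u_{k+1}| for k = 0,...,n-1.
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.exponential(1.0, size=n)
    u = np.array([unit_vector(rng) for _ in range(n + 1)])
    tau = np.cumsum(xi)
    Y = np.vstack([np.zeros(3), np.cumsum(xi[:, None] * u[1:], axis=0)])
    d = u[:-1] - u[1:]   # u_k - u_{k+1}; coinciding velocities have probability zero
    Ybar = Y[:-1] + r * d / np.linalg.norm(d, axis=1, keepdims=True)
    return tau, Y, Ybar

# Example: one trajectory with 1000 scattering events and scatterer radius r = 0.01.
tau, Y, Ybar = markovian_flight(n=1000, r=0.01, rng=np.random.default_rng(0))

Note that neither the sampled flight data nor the virtual scatterers depend on r except through the centres Ȳ_k, in agreement with the remark that the flight process and its filtration do not depend on the parameter r.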
The reader may safely skip to section 3 and come back here before turning to section 4. The process t → Z(t) will be a forgetful version of the true physical process t → X(t) in the sense that in its construction only memory effects by the last seen scatterers are taken into account. That is: only direct recollisions with the last seen scatterer and shadowings by the last straight flight segment are incorporated, disregarding more complex memory effects. It will be shown that (a) up to times T = T (r) = o(r −2 |log r| −2 ) the trajectories of the forgetful process Z(t) and the true physical process X(t) coincide, and (b) the forgetful process Z(t) and the Markovian process Y (t) stay sufficiently close together with probability tending to 1 (as r → 0). Thus, the invariance principle (7) can be transferred to the true physical process X(t), thus yielding the invariance principle (10). Define the following indicator variables: η j = η(y j−2 , y j−1 , y j ) := 1 |y j−1 | < 1 and min 0≤t≤ξ j−2 y j−1 + r u j−1 − u j |u j−1 − u j | + tu j−2 < r , η j = η(y j−2 , y j−1 , y j ) := 1 |y j−1 | < 1 and min 0≤t≤ξ j y j−1 + r u j−1 − u j−2 |u j−1 − u j−2 | + tu j < r , η j := max{ η j , η j }.(18) Before constructing the auxiliary process t → Z(t) we prove the following Lemma 1. There exists a constant C < ∞ such that for any sequence of signatures = ( j ) j≥1 the following bounds hold E η j ≤ Cr,(19)E η j η k ≤ Cr 2 |log r| if |j − k| = 1, Cr 2 if |j − k| > 1.(20) Proof of Lemma 1. Define the following auxiliary, and simpler, indicators: η j := 1 ∠(−u j−1 , u j−2 ) < 2r ξ j−1 , η j := 1 ∠(−u j−1 , u j ) < 2r ξ j−1 . Here, and in the rest of the paper we use the notation ∠ : S 2 × S 2 → [0, π], ∠(u, v) := arccos(u · v). Then, clearly, η j ≤ η j , η j ≤ η j . It is straightforward that the indicators η j : 1 ≤ j < ∞ , and likewise, the indicators η j : 1 ≤ j < ∞ , are independent among themselves and one-dependent across the two sequences. This holds even if conditioned on the sequence of signatures . Therefore, the following simple computations prove the claim of the lemma. E η j ≤ Cr 2 ∞ 0 e −y min{y −2 , r −2 }dy ≤ Cr, E η j ≤ Cr 2 ∞ 0 e −y min{y −2 , r −2 }dy ≤ Cr, E η j+1 η j ≤ Cr 2 ∞ 0 ∞ 0 e −y e −z min{y −2 , z −2 , r −2 }dydz ≤ Cr 2 |log r| . We omit the elementary computational details. Lemma 1 assures that, as r → 0, with probability tending to 1, up to time of order T = T (r) = o(r −2 |log r| −1 ) it will not occur that two neighbouring or next-neighbouring η-s happen to take the value 1 which would obscure the following construction. The process t → Z(t) is constructed on the successive intervals [τ j−1 , τ j ), j = 1, 2, . . . , as follows: • (No interference with the past.) If η j = 0 then for τ j−1 ≤ t ≤ τ j , Z(t) = Z(τ j−1 ) + {t}u j . • (Direct shadowing.) If η j = 1, then for τ j−1 ≤ t ≤ τ j , Z(t) = Z(τ j−1 ) + {t}u j−1 . • (Direct recollision with the last seen scatterer.) If η j = 0 and η j = 1 then, in the time interval τ j−1 ≤ t ≤ τ j the trajectory t → Z(t) is defined as that of a mechanical particle starting with initial position Z(τ j−1 ), initial velocityŻ(τ + j−1 ) = u j and colliding elastically with two infinite-mass spherical scatterers of radius r centred at the points Z(τ j−1 ) + r u j−1 − u j |u j−1 − u j | , respectively Z(τ j−2 ) − r u j−1 − u j−2 |u j−1 − u j−2 | . Consistently with the notations adopted for the processes Y (t) and X(t), we denote (1), and therefore the coupling bound of Theorem 1 holds. 
On the way we establish various bounds to be used in later sections. This section is purely classical-probabilistic. It also prepares the ideas (and notation) for section 5 where a similar argument is explored in more complex form. Z k := Z(τ k ) for k ≥ 0. Y (t) Z(t) X(t) (a) (a) (b) (b) Interferences Let t → Y (t) and t → Y * (t) be two independent Markovian flight processes. Think about Y (t) as running forward and Y * (t) as running backwards in time. (Note, that the Markovian flight process has invariant law under time reversal.) Define the following events W j := {min{ Y (t) − Y j : 0 < t < τ j−1 } < r}, W j := {min{ Y k − Y (t) : 0 ≤ k < j − 1, τ j−1 < t < τ j } < r}, W * j := {min{ Y * (t) − Y 1 : 0 < t < τ j−1 } < r}, W * j := {min{ Y * k − Y (t) : 0 < k ≤ j − 1, 0 < t < τ 1 } < r}, W * ∞ := {min{ Y * (t) − Y 1 : 0 < t < ∞} < r}, W * ∞ := {min{ Y * k − Y (t) : 0 < k < ∞, 0 < t < τ 1 } < r}, In words W j is the event that the virtual collision at Y j is shadowed by the past path. While W j is the event that in the time interval (τ j−1 , τ j ) there is a virtual recollision with a past scatterer. It is obvious that P W j = P W * j ≤ P W * j+1 ≤ P W * ∞ , P W j = P W * j ≤ P W * j+1 ≤ P W * ∞ .(21) On the other hand, by union bound and independence P W * ∞ ≤ z∈Z 3 P {1 < k < ∞ : Y * k ∈ B zr,2r } = ∅ P {0 < t ≤ ξ : Y (t) ∈ B zr,2r } = ∅ ≤ z∈Z 3 (2r) −1 E |{1 < k < ∞ : Y * k ∈ B zr,2r }| E |{0 < t ≤ ξ : Y (t) ∈ B zr,3r }| P W * ∞ ≤ z∈Z 3 P {0 < t < ∞ : Y * (t) ∈ B zr,2r } = ∅ P Y 1 ∈ B zr,2r ≤ z∈Z 3 (2r) −1 E |{0 < t < ∞ : Y * (t) ∈ B zr,3r }| P Y 1 ∈ B zr,2r(22) Here and in the rest of the paper we use the notation |{· · · }| for either cardinality or Lebesgue measure of the set {· · · }, depending on context. Occupation measures (Green's functions) Define the following occupation measures (Green's functions): for A ⊂ R 3 g(A) := P Y 1 ∈ A h(A) := E |{0 < t ≤ ξ 1 : Y (t) ∈ A}| G(A) := E |{1 ≤ k < ∞ : Y k ∈ A}| H(A) := E |{0 < t < ∞ : Y (t) ∈ A}| . Obviously, G(A) = g(A) + R 3 g(A − x)G(dx) H(A) = h(A) + R 3 h(A − x)G(dx).(23) Bounds Lemma 2. The following identities and upper bounds hold: h(dx) = g(dx) ≤ L(dx) (24) H(dx) = G(dx) ≤ K(dx) + L(dx)(25) where K(dx) := C min{1, |x| −1 }dx, L(dx) := Ce −c|x| |x| −2 dx,(26) with appropriately chosen C < ∞ and c > 0. Proof of Lemma 2. The identity h = g is a direct consequence of the flight length ξ being EXP (1)-distributed. The distribution g has the explicit expression g(dx) = C |x| −2 e −|x| dx from which the the upper bound (24) follows. (25) then follows from (23) and standard Green's function estimate for a random walk with step distribution g. For later use we introduce the conditional versions -conditioned on the sequence (see (14)) -of the bounds (24), (25). In this order we define the conditional versions of the Green's functions, given ∈ {0, 1}, respectively ∈ {0, 1} N : g (A) := P Y 1 ∈ A h (A) := E |{0 < t ≤ ξ 1 : Y (t) ∈ A}| G (A) := E |{1 ≤ k < ∞ : Y k ∈ A}| H (A) := E |{0 < t < ∞ : Y (t) ∈ A}| , and state the conditional version of Lemma 2: Lemma 3. The following upper bounds hold uniformly in ∈ {0, 1} N : g (dx) ≤ L(dx), h (dx) ≤ L(dx),(27)G (dx) ≤ K(dx) + L(dx), H (dx) ≤ K(dx) + L(dx),(28) with K(x) and L(x) as in (26), with appropriately chosen constants C < ∞ and c > 0. Proof of Lemma 3. Noting that g (dx) ≤ C |x| −2 e −|x| dx, h (dx) ≤ C |x| −2 e −|x| dx, the proof of Lemma 3 follows very much the same lines as the proof of Lemma 2. We omit the details. Computation According to (21), (22), for every j = 1, 2, . . . 
P W j ≤ P W * ∞ ≤ (2r) −1 z∈Z 3 G(B zr,2r )h(B zr,3r ), P W j ≤ P W * ∞ ≤ (2r) −1 z∈Z 3 H(B zr,3r )g(B zr,2r ). Moreover, straightforward computations yield Proof of Lemma 4. The bounds (29) readily follow from explicit computations. We omit the details. We conclude this section with the following consequence of the above arguments and computations. Corollary 1. There exists a constant C < ∞ such that for any j ≥ 1: P W j ≤ Cr, P W j ≤ Cr.(30) 3.5 No mismatching -up to T ∼ o(r −1 ) Define the stopping time σ := min{j > 0 : max{1 W j , 1 W j } = 1}, and note that by construction inf{t > 0 : X(t) = Y (t)} ≥ τ σ−1 .(31) Lemma 5. Let T = T (r) be such that lim r→0 T (r) = ∞ and lim r→0 rT (r) = 0. Then lim r→0 P τ σ−1 < T = 0.(32) Proof of Lemma 5. P τ σ−1 < T ≤ P σ ≤ 2T + P 2T −1 j=1 ξ j < T ≤ CrT + Ce −cT ,(33) where C < ∞ and c > 0. The first term in the middle expression of (33) is bounded by union bound and (30) of Corollary 1. In bounding the second term we use a large deviation upper bound for the sum of independent EXP (1)-distributed ξ j -s. Finally, (32) readily follows from (33). (8) follows directly from (31) and (32), and this concludes the proof of Theorem 1. Beyond the naïve coupling The forthcoming parts of the paper rely on the joint realization (coupling) of the three processes t → Y (t), X(t), Z(t) as described in section 2. In particular, recall the construction of the process t → Z(t) from section 2.4. Breaking Z into legs Let Γ 0 := 0, Θ 0 = 0 and for n ≥ 1 Γ n := min{j ≥ Γ n−1 + 2 : min{ξ j−1 , ξ j , ξ j+1 , ξ j+2 } > 1}, γ n := Γ n − Γ n−1 , Θ n := τ Γn , θ n := Θ n − Θ n−1 ,(34) and denote ξ n,j := ξ Γ n−1 +j , u n,j := u Γ n−1 +j , y n,j := y Γ n−1 +j , 1 ≤ j ≤ γ n , Y n (t) := Y (Θ n−1 + t) − Y (Θ n−1 ), 0 ≤ t ≤ θ n , Z n (t) := Z(Θ n−1 + t) − Z(Θ n−1 ), 0 ≤ t ≤ θ n . Then, it is straightforward that the packs of random variables n := (γ n ; (ξ n,j , u n,j ) : 1 ≤ j ≤ γ n ) , n ≥ 0,(35) are fully independent (for n ≥ 0), and also identically distributed for n ≥ 1. (The zeroth pack is deficient if min{ξ 0 , ξ 1 } < 1.) It is also straightforward that the legs of the Markovian flight process (θ n ; Y n (t) : 0 ≤ t ≤ θ n ) , n ≥ 0, are fully independent, and identically distributed for n ≥ 1. A key observation is that due to the rules of construction of the process t → Z(t) exposed in section 2.4, the legs (θ n ; Z n (t) : 0 ≤ t ≤ θ n ) , n ≥ 0, of the auxiliary process t → Z(t) are also independently constructed from the packs (35), following the rules in section 2.4. Note, that the restrictions |y j−1 | < 1 in (18) were imposed exactly in order to ensure this independence of the legs (36). Therefore we will construct now the auxiliary process t → Z(t) and its time reversal t → Z * (t) from an infinite sequence of independent packs (35). In order to reduce unnecessary complications of notation from now on we assume min{ξ 0 , ξ 1 } > 1. Remark: In order to break up the auxiliary process t → Z(t) into independent legs the choice of simpler stopping times Γ n := min{j ≥ Γ n−1 + 1 : min{ξ j , ξ j+1 } > 1}, would work. However, we need the slightly more complicated stoppings Γ n , given in (34), for some other reasons which will become clear towards the end of section 4.2 and in the statement and proof of Lemma 6. One leg Let ξ j , u j , j ≥ 1, be fully independent random variables with the distributions (12), conditioned to min{ξ 1 , ξ 2 } > 1. and y j as in (13). Let γ := min{j ≥ 2 : min{ξ j−1 , ξ j , ξ j+1 , ξ j+2 } > 1} ∈ {2} ∪ {5, 6, . . . 
}.(37) Note that γ can not assume the values {1, 3, 4}. Call := (γ; (ξ j , u j ) : 1 ≤ j ≤ γ)(38) a pack, and keep the notation τ j := j k=1 ξ k , and θ := τ γ . The forward leg (θ; Z(t) : 0 ≤ t ≤ θ) is constructed from the pack according to the rules given in section 2.4. We will also denote Z j := Z(τ j ), 0 ≤ j ≤ γ; Z := Z γ = Z(θ). These are the discrete steps, respectively, the terminal position of the leg. It is easy to see that the distributions of γ and θ are exponentially tight: there exist constants C < ∞ and c > 0 such that for any s ∈ [0, ∞) P γ > s ≤ Ce −cs , P θ > s ≤ Ce −cs .(39) The backwards leg (θ; Z * (t) : 0 ≤ t ≤ θ) is constructed from the pack as Z * (t, ) := Z(θ − t, * ) − Z( * ), where the backwards pack * := (γ; (ξ γ−j , −u γ−j ) : 0 ≤ j ≤ γ) is the time reversion of the pack . Note that the forward and backward packs, and * , are identically distributed but the forward and backward processes t → Z(t) : 0 ≤ t ≤ θ and t → Z * (t) : 0 ≤ t ≤ θ are not. The backwards process t → Z * (t) could also be defined in stepwise terms, similar (but not identical) to those in section 2.4, but we will not rely on these step-wise rules and therefore omit their explicit formulation. Consistent with the previous notation, we denote Z * j := Z * (τ j ), 0 ≤ j ≤ γ; Z * := Z * γ = Z * (θ) = −Z. Note, that due to the construction rules of the forward and backward legs, their beginning, middle and ending parts (τ 1 ; Z(t) : 0 ≤ t ≤ τ 1 ) , (τ γ−1 − τ 1 ; Z(τ 1 + t) − Z(τ 1 ) : 0 ≤ t ≤ τ γ−1 − τ 1 ) , (τ γ − τ γ−1 ; Z(τ γ−1 + t) − Z(τ γ−1 ) : 0 ≤ t ≤ τ γ − τ γ−1 ) ,(40) are independent, and likewise for the backwards process Z * , (τ 1 ; Z * (t) : 0 ≤ t ≤ τ 1 ) , (τ γ−1 − τ 1 ; Z * (τ 1 + t) − Z * (τ 1 ) : 0 ≤ t ≤ τ γ−1 − τ 1 ) , (τ γ − τ γ−1 ; Z * (τ γ−1 + t) − Z * (τ γ−1 ) : 0 ≤ t ≤ τ γ − τ γ−1 ) .(41) This fact will be of crucial importance in the proof of Proposition 2, section 5.2 below. This is the reason (alluded to in the remark at the end of section 4.1) we chose the somewhat complicated stopping time as defined in (37). Multi-leg concatenation Let n = (γ n ; (ξ n,j , u n,j ) : 1 ≤ j ≤ γ n ), n ≥ 1, be a sequence of i.i.d packs (38), and denote θ n , (Z n (t) : 0 ≤ t ≤ θ n ), (Z n,j : 1 ≤ j ≤ γ n ), (Z * n (t) : 0 ≤ t ≤ θ n ), (Z * n,j : 1 ≤ j ≤ γ n ), Z n , Z * n the various objects defined in section 4.2, specified for the n-th independent leg. In order to construct the concatenated forward and backward processes t → Z(t), t → Z * (t), 0 ≤ t < ∞, we first define for n ∈ Z + , respectively t ∈ R + Γ n := Note that Ξ n and Ξ * n are random walks with independent steps; t → Z(t), 0 ≤ t < ∞, is exactly the Z-process constructed in section 2.4, with Z n = Z(τ n ), 0 ≤ n < ∞. Similarly, t → Z * (t), 0 ≤ t < ∞, is the time reversal of the Z-process and Z * n = Z * (τ n ), 0 ≤ n < ∞. Theorem 2 will follow from Propositions 1 and 2 of the next two sections. Mismatches within one leg Given a pack = (γ; (ξ j , u j ) : 1 ≤ j ≤ γ) (38), and arbitrary incoming and outgoing velocities u 0 , u γ+1 ∈ S 2 let (Y (t), X (t), Z(t)) : 0 − < t < θ + , be the triplet of Markovian flight process, Lorentz exploration process and auxiliary Z-process jointly constructed with these data, as described in sections 2.1, 2.2, respectively, 2.4. By 0 − < t < θ + we mean that the incoming velocities at 0 − are given asẎ (0 − ) =Ẋ (0 − ) =Ż(0 − ) = u 0 and the outgoing velocities at θ + arė Y (θ + ) =Ż(θ + ) = u γ+1 , whileẊ (θ + ) is determined by the construction from section 2.2. 
That is,Ẋ (θ + ) = u γ+1 if this last scattering is not shadowed by the trajectory X (t) : 0 ≤ t ≤ θ andẊ (θ + ) =Ẋ (θ − ) if it is shadowed. Proposition 1. There exists a constant C < ∞ such that for any u 0 , u γ+1 ∈ S 2 P X (t) ≡ Z(t) : 0 − < t < θ + ≤ Cr 2 |log r| 2 . (43) The proof of this Proposition relies on controlling the geometry of mismatchings, and is postponed until Section 6. Inter-leg mismatches Let t → Z(t) be a forward Z-process built up as concatenation of legs, as exposed in section 4.3 and define the following events W j := min{ Z(t) − Z k : 0 < t < Θ j−1 , Γ j−1 < k ≤ Γ j } < r , W j := min{ Z k − Z(t) : 0 ≤ k < Γ j−1 , Θ j−1 < t < Θ j } < r ,(44) In words W j is the event that a collision occuring in the j-th leg is shadowed by the past path. While W j is the event that within the j-th leg the Z-trajectory bumps into a scatterer placed in an earlier leg. That is, W j ∪ W j is precisely the event that the concatenated first j − 1 legs and the j-th leg are mechanically r-incompatible (see section 2.3). The following proposition indicates that on our time scales there are no "inter-leg mismatches": Proposition 2. There exists a constant C < ∞ such that for all j ≥ 1 P W j ≤ Cr 2 , P W j ≤ Cr 2(45) The proof of Proposition 2 is the content of Section 5 Proof of Proposition 2 This section is purely probabilistic and of similar spirit as section 3. The notation used is also similar. However, similar is not identical. The various Green's functions used here, although denoted g, h, G, H, as in section 3, are similar in their rôle but not the same. The estimates on them are also different. Occupation measures (Green's functions) Let now t → Z * (t), 0 ≤ t < ∞, be a backward Z * -process and t → Z(t), 0 ≤ t ≤ θ, a forward one-leg Z-process, assumed independent. In analogy with the events W j and W j defined in (44) we define W * j := min{ Z * (t) − Z k : 0 < t < Θ j−1 , 0 < k ≤ γ} < r , W * j := min{ Z * k − Z(t) : 0 < k ≤ Γ j−1 , 0 < t < θ} < r , W * ∞ := min{ Z * (t) − Z k : 0 < t < ∞, 0 < k ≤ γ} < r , W * ∞ := min{ Z * k − Z(t) : 0 < k < ∞, 0 < t < θ} < r . It is obvious that P W j = P W * j ≤ P W * j+1 ≤ P W * ∞ , P W j = P W * j ≤ P W * j+1 ≤ P W * ∞ .(46) On the other hand, by the union bound and independence we have P W * ∞ ≤ z∈Z 3 P {0 < t < ∞ : Z * (t) ∈ B zr,2r } = ∅ P {1 ≤ k ≤ γ : Z k ∈ B zr,2r } = ∅ ≤ z∈Z 3 (2r) −1 E |{0 < t < ∞ : Z * (t) ∈ B zr,3r }| E |{1 ≤ k ≤ γ : Z k ∈ B zr,2r }| P W * ∞ ≤ z∈Z 3 P {1 < k < ∞ : Z * k ∈ B zr,2r } = ∅ P {0 < t ≤ θ : Z(t) ∈ B zr,2r } = ∅ ≤ z∈Z 3 (2r) −1 E |{1 < k < ∞ : Z * k ∈ B zr,2r }| E |{0 < t ≤ θ : Z(t) ∈ B zr,3r }|(47) Therefore, in view of (46) we have to control the mean occupation time measures appearing on the right hand side of (47). Define the following mean occupation measures (Green's functions): for A ⊂ R 3 let g(A) := E |{1 ≤ k ≤ γ : Z k ∈ A}| , g * (A) := E |{1 ≤ k ≤ γ : Z * k ∈ A}| , h(A) := E |{0 < t ≤ θ : Z(t) ∈ A}| , h * (A) := E |{0 < t ≤ θ : Z * (t) ∈ A}| , R * (A) := E |{1 ≤ n < ∞ : Ξ * n ∈ A}| , G * (A) := E |{1 ≤ k < ∞ : Z * k ∈ A}| , H * (A) := E |{0 < t < ∞ : Z * (t) ∈ A}| . It is obvious that G * (A) = g * (A) + R 3 g * (A − x)R * (dx), H * (A) = h * (A) + R 3 h * (A − x)R * (dx). (48) Bounds Lemma 6. The following upper bounds hold: Proof of Lemma 6. The proof of the bounds (49) hinges on the decompositions (40) and (41) of the forward and backward legs into independent parts. 
Let max{g(dx), g * (dx)} ≤ M (dx), max{h(dx), h * (dx)} ≤ L(dx),(49)R * (dx) ≤ K(dx),(50)G * (dx) ≤ K(dx), H * (dx) ≤ K(dx) + L(dx),(51)g 1 (A) := P Z 1 ∈ A = P Z * 1 ∈ A = C A 1(|x| > 1)e −|x| dx, h 1 (A) := E |{t ≤ τ 1 : Z(t) ∈ A}| = E |{t ≤ τ 1 : Z * (t) ∈ A}| = C A |x| −2 e − max{1,|x|} dx,(52) and g 2 (A) := E |{1 ≤ k ≤ γ : Z k − Z 1 ∈ A}| , g * 2 (A) := E |{1 ≤ k ≤ γ : Z * k − Z * 1 ∈ A}| , h 2 (A) := E |{0 < t ≤ θ − τ 1 : Z(τ 1 + t) − Z 1 ∈ A}| , h * 2 (A) := E |{0 < t ≤ θ − τ 1 : Z * (τ 1 + t) − Z * 1 ∈ A}| . Due to the exponential tail of the distribution of γ and θ, (39), there are constants C < ∞ and c > 0 such that for any s < ∞ max{g 2 ({x : |x| > s}), g * 2 ({x : |x| > s})} ≤ Ce −cs , max{h 2 ({x : |x| > s}), h * 2 ({x : |x| > s})} ≤ Ce −cs ,(53) and furthermore, g 2 (R 3 ) = g * 2 (R 3 ) = E γ < ∞, h 2 (R 3 ) = h * 2 (R 3 ) = E θ − τ 1 < ∞.(54) From the independent decompositions (41) and (40) it follows that g(A) = R 3 g 2 (A − x)g 1 (dx), g * (A) = R 3 g * 2 (A − x)g 1 (dx), h(A) = R 3 h 2 (A − x)g 1 (dx) + h 1 (A), h * (A) = R 3 h * 2 (A − x)g 1 (dx) + h 1 (A).(55) The bounds (49) readily follow from the explicit expressions (52), the convolutions (55) and the bounds (53) and (54). The bound (50) is a straightforward Green's function bound for the the random walk Ξ * n defined in (42), by noting that the distribution of the i.i.d. steps Z * k of this random walk has bounded density and exponential tail decay. Finally, the bounds (51) follow from the convolutions (48) and the bounds (49), (50). Remark: On the difference between Lemmas 2 and 6. Note the difference between the upper bounds for g in (24), respectively, (49), and on G in (25), respectively, (51). These are important and are due to the fact that the length first step in a Zor Z * -leg is distributed as (ξ | ξ > 1) ∼ EXP (1|0) rather than ξ ∼ EXP (1). Computation According to (47) P W j ≤ P W * ∞ ≤ (2r) −1 z∈Z 3 H * (B zr,3r )g(B zr,2r ), P W j ≤ P W * ∞ ≤ (2r) −1 z∈Z 3 G * (B zr,2r )h r (B zr,3r ).(56) Lemma 7. In dimension d = 3 the following bounds hold, with some C < ∞ Proof of Lemma 7. The bounds (57) (similarly to the bounds (29)) readily follow from explicit computations which we omit. Proof of Proposition 2. Proposition 2 now follows by inserting the bounds (57) and one of the bounds in (29) into equations (56). Proof of Proposition 1 Given a pack = (γ; (ξ j , u j ) : 1 ≤ j ≤ γ) (38), and arbitrary u 0 , u γ+1 ∈ S 2 , let (Y (t), X (t), Z(t)) : 0 ≤ t ≤ θ be the triplet of Markovian flight process, Lorentz exploration process and auxiliary Z-process jointly constructed with these data. We will prove the following bounds, stated in increasing order of difficulty/complexity. P {X (t) ≡ Z(t) : 0 − ≤ t ≤ θ + } ∩ { γ j=1 η j > 1} ≤ Cr 2 |log r| ,(58)P {X (t) ≡ Z(t) : 0 − ≤ t ≤ θ + } ∩ { γ j=1 η j = 0} ≤ Cr 2 |log r| ,(59)P {X (t) ≡ Z(t) : 0 − ≤ t ≤ θ + } ∩ { γ j=1 η j = 1} ≤ Cr 2 |log r| 2 .(60) Note that by construction η 1 = η 2 = η 3 = η γ = 0, so the sums on the left hand side go actually from 4 to γ − 1 . We stated and prove these bounds in their increasing order of complexity: (58) (proved in section 6.1) and (59) (proved in section 6.2) are of purely probabilistic nature while (60) (proved in sections 6.3-6.7) also relies on the the finer geometric understanding of the mismatch events η j = 1 and η j = 1. Proof of (58) This follows directly from Lemma 1. 
Indeed, given γ and = ( j ) 1≤j≤γ , due to (20), P γ j=1 η j > 1 ≤ γ max j P η j = η j+1 = 1 + γ 2 2 max j,k:|j−k|>1 P η j = η k = 1 ≤ Cγr 2 |log r| + Cγ 2 r 2 , and hence, due to the exponential tail bound (39) we get P γ−1 j=4 η j > 1 = E P γ−1 j=4 η j > 1 ≤ Cr 2 |log r| . which concludes the proof of (58). Proof of (59) First note that by construction of the processes (X (t), Z(t)) : 0 − < t < θ + the following identities hold: {X (t) ≡ Z(t) : 0 − ≤ t ≤ θ + } ∩ { γ j=1 η j = 0} = {X (t) ≡ Y (t) : 0 − ≤ t ≤ θ + } ∩ { γ j=1 η j = 0} {X (t) ≡ Y (t) : 0 − ≤ t ≤ θ + } = 0<j<γ min τ j ≤t≤θ Y j−1 − Y (t) < r ∪ min 0≤t≤τ j Y j+1 − Y (t) < r And, hence {X (t) ≡ Z(t) : 0 − ≤ t ≤ θ + } ∩ { γ j=1 η j = 0} (61) = 0<j<γ min τ j ≤t≤τ j+1 Y j−1 − Y (t) < r ∪ min τ j−1 ≤t≤τ j Y j+1 − Y (t) < r ∩ {ξ j > 1} ∪ 0<j<γ min τ j+1 ≤t≤θ Y j−1 − Y (t) < r ∪ min 0≤t≤τ j−1 Y j+1 − Y (t) < r ⊂ 0<j<γ min τ j ≤t≤τ j+1 |Y j−1 − Y (t)| < 2r ∪ min τ j−1 ≤t≤τ j |Y j+1 − Y (t)| < 2r ∩ {ξ j > 1} ∪ 0<j<γ min τ j+1 ≤t≤θ |Y j−1 − Y (t)| < 2r ∪ min 0≤t≤τ j−1 |Y j+1 − Y (t)| < 2r By simple geometric inspection we see min τ j ≤t≤τ j+1 |Y j−1 − Y (t)| < 2r ∩ {ξ j > 1} ⊂ {∠(−u j−1 , u j ) < 4r} , min τ j−1 ≤t≤τ j |Y j+1 − Y (t)| < 2r ∩ {ξ j > 1} ⊂ {∠(−u j+1 , u j ) < 4r} . And therefore, max P min τ j ≤t≤τ j+1 |Y j−1 − Y (t)| < 2r ∩ {ξ j > 1} ≤ Cr 2 max P min τ j−1 ≤t≤τ j |Y j+1 − Y (t)| < 2r ∩ {ξ j > 1} ≤ Cr 2 .(62) On the other hand, from the conditional Green's function computations of section 3, in particular from Lemma 3, we get max P min τ j+1 ≤t≤θ |Y j−1 − Y (t)| < 2r ≤ sup P min τ 2 ≤t<∞ |Y (t)| < 2r ≤ Cr 2 |log r| , max P min 0≤t≤τ j−1 |Y j+1 − Y (t)| < 2r ≤ sup P min τ 2 ≤t<∞ |Y (t)| < 2r ≤ Cr 2 |log r| .(63) Putting (61), (62) and (63) together yields P {X (t) ≡ Z(t) : 0 − ≤ t ≤ θ + } ∩ { γ−1 j=4 η j = 0} ≤ Cγr 2 |log r| , and hence, taking expectation over , we get (59). Proof of (60) -preparations Let γ ∈ {2} ∪ {5, 6, . . . }, and = ( j ) 1≤j≤γ ∈ {0, 1} γ compatible with the definition of a pack, and 3 < k < γ be fixed. Given a pack with signature we define yet another auxiliary process Z (k) (t) : 0 − < t < θ + as follows: • On 0 − < t ≤ τ k−1 , Z (k) (t) = Y (t). • On τ k−1 < t ≤ τ k , Z (k) (t) is constructed according to the rules of the Z-process, given in section 2.4. • On τ k < t < θ + , Z (k) (t) = Z (k) (τ k ) + Y (t) − Y (τ k ). Note that on the event {η j = δ j,k : 1 ≤ j ≤ γ} we have Z (k) (t) ≡ Z(t), 0 − < t < θ + . We will show that max ,k P {X (t) ≡ Z (k) (t) : 0 − ≤ t ≤ θ + } ∩ {η j = δ j,k : 1 ≤ j ≤ γ} ≤ max ,k P {X (t) ≡ Z (k) (t) : 0 − ≤ t ≤ θ + } ∩ {η k = 1} ≤ Cγ 2 r 2 |log r| 2 ,(64) and hence max P {X (t) ≡ Z(t) : 0 − ≤ t ≤ θ + } ∩ { γ k=1 η k = 1} ≤ γ max ,k P {X (t) ≡ Z(t) : 0 − ≤ t ≤ θ + } ∩ {η j = δ j,k : 1 ≤ j ≤ γ} ≤ Cγ 3 r 2 |log r| 2 . Then, taking expectation over we get (60). In order to prove (64) first write P {X (t) ≡ Z (k) (t) : 0 − ≤ t ≤ θ + } ∩ {η j = δ j,k : 1 ≤ j ≤ γ} ≤ P {X (t) ≡ Z (k) (t) : 0 − ≤ t ≤ θ + } ∩ {η k = 1} = P {X (t) ≡ Z (k) (t) : 0 − ≤ t ≤ θ + } ∩ { η k = 1} + P {X (t) ≡ Z (k) (t) : 0 − ≤ t ≤ θ + } ∩ { η k = 1} ∩ { η k = 0} , and note that the three parts Z (k) (t) : 0 − < t < τ k−3 = Y (t) : 0 − < t < τ k−3 , Z (k) (τ k−3 + t) − Z (k) (τ k−3 ) : 0 ≤ t ≤ τ k − τ k−3 , Z (k) (τ k ) + t) − Z (k) (τ k ) : 0 ≤ t < θ + − τ k = Y (τ k ) + t) − Y (τ k ) : 0 ≤ t < θ + − τ k ,(65) are independent -even if the events { η k = 1}, respectively, { η k = 1} ∩ { η k = 0} are specified. 
From the construction of the processes (X (t), Z (k) (t)) : 0 − < t < θ + it follows that if Z (k) (t) : 0 − < t < θ + is mechanically r-consistent then X (t) ≡ Z (k) (t) : 0 − < t < θ + . Denote by A (k) a,a , 1 ≤ a ≤ 3, the event that the a-th part of the decomposition (65) is mechanically r-inconsistent, and by A a,b = A b,a , 1 ≤ a, b ≤ 3, a = b, the event that the a-th and b-th parts of the decomposition (65) are mechanically r-incompatible -in the sense of the definitions (16) and (17) in section 2.3. In order to prove (64) we will have to prove appropriate upper bounds on the conditional probabilities P { η k = 1} ∩ A (k) a,b , P { η k = 1} ∩ { η k = 0} ∩ A (k) a,b , a, b = 1, 2, 3.(66) These are altogether 12 bounds. However, some of them are formally very similar. A (k) 1,1 , A(k) 3,3 and A (k) 1,3 do not involve the middle part and therefore do not rely on the geometric arguments of the forthcoming sections 6.4-6.6. Applying directly (19), (27), (29) and similar procedures as in section 3.4, without any new effort we get P { η k = 1} ∩ A (k) a,b ≤ Cγ 2 r 2 , P { η k = 1} ∩ { η k = 0} ∩ A (k) a,b ≤ Cγ 2 r 2 , a, b = 1, 3.(67) We omit the repetition of these details. The remaining six bounds rely on the geometric arguments of sections 6.4-6.6 and, therefore, are postponed to section 6.7 Geometric estimates We analyse the middle segment of the process Z (k) , presented in (65), restricted to the events { η k = 1}, respectively, { η k = 1} ∩ { η k = 0}. Since everything done in this analysis is invariant under time and space translations and also under rigid rotations of R 3 it will be notationally convenient to place the origin of space-time at (τ k−2 , Z(τ k−2 )) and choose u k−2 = e = (1, 0, 0), a fixed element of S 2 . So, the ingredient random variables are (ξ − , u, ξ, v, ξ + ), fully independent and distributed as ξ − ∼ EXP (1| k−2 ), ξ ∼ EXP (1| k−1 ) = EXP (1|1), ξ + ∼ EXP (1| k ), u, v ∼ U N I(S 2 ). It will be enlightening to group the ingredient variables as (ξ − , (u, ξ, v), ξ + ), and accordingly write the sample space of this reduced context as R + × D × R + , where D := S 2 × R + × S 2 , with the probability measure EXP (1| k−2 ) × µ × EXP (1| k ) where, on D, µ = U N I(S 2 ) × EXP (1|1) × U N I(S 2 ).(68) For r < 1, let σ r , σ r : D → R + ∪ {∞} be σ r (u, ξ, v) := inf{t : ξu + r u − v |u − v| + te < r}, σ r (u, ξ, v) := inf{t : ξu + r u − e |u − e| + tv < r}, (with the usual convention inf ∅ = ∞), and A r := {(u, ξ, v) ∈ D : σ r < ∞}, A r := {(u, ξ, v) ∈ D : σ r < ∞}. We define the process Z r (t) : −∞ < t < ∞ and Z r (t) : −∞ < t < ∞ in terms of (u, ξ, v) ∈ A r , respectively, (u, ξ, v) ∈ A r as follows. Strictly speaking, these are deficient processes, since µ( A r ) < 1, and µ( A r ) < 1. • On −∞ < t ≤ 0, Z r (t) = Z r (t) = te. • On 0 ≤ t ≤ ξ, Z r (t) = Z r (t) = tu, • On ξ ≤ t < ∞, •• Z r (t) = Z r (ξ) + (t − ξ)u, •• Z r (t) is the trajectory of a mechanical particle, with initial position Z r (ξ) and initial velocity˙ Z r (ξ + ) = v, bouncing elastically between two infinite-mass spherical scatterers centred at r e−u |e−u| , respectively, ξu + r u−v |u−v| , and, eventually, flying indefinitely with constant terminal velocity. The trapping time β r , β r ∈ R + and escape (terminal) velocity w r , w r ∈ S 2 of the process Z r (t), respectively, Z r (t), are β r := 0, w r := u, β r := sup{s < ∞ :˙ Z r (ξ + s + ) =˙ Z r (ξ + s − )}, w r :=˙ Z r (ξ + β + r ). (69) Note that β r ≥ σ r . 
The relation of the middle segment of (65) to Z r and Z r is the following: { η k = 1}, Z (k) (τ k−2 + t) − Z (k) (τ k−2 ) : −ξ k−2 ≤ t ≤ ξ k−1 + ξ k ∼ {ξ − > σ r }, Z r (t) : −ξ − ≤ t ≤ ξ + ξ + , { η k = 0} ∩ { η k = 1}, Z (k) (τ k−2 + t) − Z (k) (τ k−2 ) : −ξ k−2 ≤ t ≤ ξ k−1 + ξ k ∼ {ξ − ≤ σ r } ∩ {ξ + > σ r }, Z r (t) : −ξ − ≤ t ≤ ξ + ξ + ,(70) where ∼ stands for equality in distribution. So, in order to prove (64) we have to prove some subtle estimates for the processes Z r amd Z r . The main estimates are collected in Proposition 3 below Proposition 3. There exists a constant C < ∞, such that for all r < 1 and s ∈ (0, ∞), the following bounds hold: µ (u, h, v) ∈ A r : ∠(−e, w r ) < s ≤ Cr min{s, 1},(71)µ (u, h, v) ∈ A r : ∠(−e, w r ) < s ≤ Cr min{s(|log s| ∨ 1), 1} (72) µ (u, h, v) ∈ A r : r −1 β r > s ≤ Cr min{s −1 (|log s| ∨ 1), 1}.(73) Remarks: The bound (71) is sharp in the sense that a lower bound of the same order can be proved. In contrast, we think that the upper bound in (72) is not quite sharp. However, it is sufficient for our purposes so we don't strive for a better estimate. The following consequence of Proposition 3 will be used to prove (60). Corollary 2. There exists a constant C < ∞ such that the following bounds hold: P { η k = 1} ∩ { min τ k−2 ≤t≤τ k Z (k) (t) − Z (k) (τ k−3 ) < s} ≤ Crs(|log s| ∨ 1),(74)P { η k = 1} ∩ { min τ k−3 ≤t≤τ k−1 Z (k) (t) − Z (k) (τ k ) < s} ≤ Crs(|log s| ∨ 1),(75)P { η k = 0} ∩ { η k = 1} ∩ { min τ k−2 ≤t≤τ k Z (k) (t) − Z (k) (τ k−3 ) < s} (76) ≤ Cr max{s |log s| 2 , r |log r| 2 } P { η k = 0} ∩ { η k = 1} ∩ { min τ k−3 ≤t≤τ k−1 + β Z (k) (t) − Z (k) (τ k ) < s}(77) ≤ Cr max{s |log s| 2 , r |log r| 2 } Proposition 3 and its Corollary 2 are proved in sections 6.5, respectively, 6.6. 6.5 Geometric estimates ctd: Proof of Proposition 3 Preparations Beside the probability measure µ (see (68)) we will also need the flat Lebesgue measure on D, λ = U N I(S 2 ) × LEB(R + ) × U N I(S 2 ), so that dµ(u, h, v) = e 1−h e − 1 1{0 ≤ h < 1}dλ(u, h, v). For r > 0 we define the dilation map D r : D → D as D r (u, h, v) = (u, rh, v), and note that A r = D r A 1 A r = D r A 1 . In the forthcoming steps all events in A r and A r will be mapped by the inverse dilation D −1 r = D r −1 into A 1 , respectively, A 1 . Therefore, in order to simplify notation we will use A := A 1 and A := A 1 . The dilation D r transforms the measures µ as follows. Given an event E ⊂ D, µ(D r E) = DrE e 1−h e − 1 1{0 ≤ h ≤ 1}dλ(u, h, v) = r E e 1−rh e − 1 1{0 ≤ h ≤ r −1 }dλ(u, h, v),(78) and hence, for any event E ⊂ D and anyh < ∞ e 1−rh e − 1 rλ(E ∩ {h ≤h}) ≤ µ(D r E) ≤ e e − 1 rλ(E).(79) The following simple observation is of paramount importance in the forthcoming arguments: Proposition 4. In dimension 3 (and more) λ( A) = λ( A) < ∞.(80) Proof of Proposition 4. Obviously, A ⊂ A := {(u, h, v) ∈ D : ∠(−e, u) ≤ 2h −1 }, A ⊂ A := {(u, h, v) ∈ D : ∠(−u, v) ≤ 2h −1 }. Since, in dimension 3, {(u, v) ∈ S 2 × S 2 : ∠(−e, u) < 2h −1 } = {(u, v) ∈ S 2 × S 2 : ∠(−u, v) < 2h −1 } ≤ C min{h −2 , 1}, the claim follows by integrating over h ∈ R + . Remark: In 2-dimension, the corresponding sets A, A have infinite Lebesgue measure and, therefore, a similar proof would fail. 
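To make the dimension dependence behind Proposition 4 and the preceding remark explicit, here is the integral computation in slightly more detail (a side computation, with C the constant from the proof above). In d = 3 the angular sets appearing in the proof have measure at most C min{h^{-2}, 1} for fixed flight length h, so integrating over h ∈ R_+ gives ∫_0^∞ C min{h^{-2}, 1} dh = C (1 + ∫_1^∞ h^{-2} dh) = 2C < ∞, which is the finiteness claimed in (80). In d = 2 the analogous angular sets are arcs of S^1 of angular width of order h^{-1}, so they only satisfy the bound C min{h^{-1}, 1}, and ∫_0^∞ min{h^{-1}, 1} dh = 1 + ∫_1^∞ h^{-1} dh = ∞, so the flat measures of the corresponding sets are infinite. This is exactly the failure pointed out in the remark.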
Due to (80) in 3-dimensions the following conditional probability measures make sense λ A (·) = λ(· A) := λ(· ∩ A) λ( A) , λ A (·) = λ(· A) := λ(· ∩ A) λ( A) , and, moreover, due to (79) and (80), for any event E ∈ D lim r→0 µ(D r E | A r ) = λ A (E), lim r→0 µ(D r E | A r ) = λ A (E), In a technical sense, we will only use the upper bound in (79), and (80). In view of the upper bound in (79), in order to prove (71), (72) and (73) we need, in turn, λ (u, h, v) ∈ A : ∠(−e, w) ≤ s ≤ C min{s, 1},(81)λ (u, h, v) ∈ A : ∠(−e, w) ≤ s ≤ C min{s(|log s| ∨ 1), 1},(82)λ (u, h, v) ∈ A : β > s ≤ C min{s −1 (|log s| ∨ 1), 1}.(83) Here, and in the rest of this section, we use the simplified notation w := w 1 , w := w 1 , β := β 1 . Proof of (81) Proof. This is straightforward. Recall (69): w(u, h, v) = u. For easing notation let ϑ := ∠(−e, u) and note that for any t ∈ R + {u ∈ S 2 : 0 ≤ ϑ ≤ t} ≤ C min{t 2 , 1}, with some explicit C < ∞. Then, Figure 3: Above we show a 3 dimensional example of the geometric labelling used in this section. The Z trajectory enters with velocity e from beneath the relevant plane (the dotted line represents motion below the plane). After which the particle remains above the plane. λ (u, h, v) ∈ A : ∠(−e, w)) ≤ s ≤ λ (u, h, v) ∈ A : ϑ ≤ s ≤ λ (u, h, v) ∈ D : ϑ ≤ min{s, 2h −1 } = λ (u, h, v) ∈ D : {h ≤ 2s −1 } ∩ {ϑ ≤ s} + λ (u, h, v) ∈ D : {h ≥ 2s −1 } ∩ {ϑ ≤ 2h −1 } ≤ Cs. Let a and b be the vectors in R 3 pointing from the origin to the centre of the spherical scatterers of radius 1, on which the first, respectively, the second collision occurs: a = e − u |e − u| , b = hu + u − v |u − v| , and n the unit vector orthogonal to the plane determined by a and b, pointing so, that e · n > 0: n := a × b |a| |b| sin(∠(a, b)) , with a × b = (h + 1 |u − v| ) 1 |e − u| e × u − 1 |e − u| |u − v| e × v + 1 |e − u| |u − v| u × v,(84)|a| = 1, h − 1 ≤ |b| ≤ h + 1, 0 ≤ sin(∠(a, b)) ≤ 1.(85) are independent and distributed as w ∼ U N I(S 2 ), ϑ ∼ 1 {0≤t≤1} (1 − t 2 ) −1/2 tdt. Therefore, λ (u, h, v) ∈ A : |e · (u × v)| < 4s = ∞ 0 dh S 2 dw min{2/h,1} 0 (1 − t 2 ) −1/2 tdt1{|e · w| ≤ 4s t } = ∞ 0 dh min{2/h,1} 0 (1 − t 2 ) −1/2 dt min{4s, t} ≤ C min{s |log s| ∨ 1), 1}.(89) The last step follows from explicit computations which we omit. Finally, (87), (88) and (89) yield (82). Proof of (83). We proceed with the first (sharper) bound in (86) (the second (weaker) bound would yield only upper bound of order s −1/2 on the right hand side of (82)): λ (u, h, v) ∈ A : β > s ≤ λ (u, h, v) ∈ A : h > s 2 + λ (u, h, v) ∈ A : |v · n| < 2 s .(90) Bounding the first term on the right hand side of (90) is straightforward: λ (u, h, v) ∈ A : h > s 2 = ∞ s/2 {(u, v) ∈ S 2 × S 2 : ∠(−u, v) < 2h −1 } dh ≤ C ∞ s/2 min{h −2 , 1}dh ≤ C min{s −1 , 1}.(91) Concerning the second term on the right hand side of (90), this has exactly been done in the proof of (82) above, ending in (89) -with the rôle of s and s −1 swapped. (90), (91) and (89) yield (73). Geometric estimates ctd: Proof of Corollary 2 We start with the following straightforward geometric fact. Lemma 8. Let e, w ∈ S 2 and x ∈ R 3 . Then {t > 0 : min t≥0 x + t w + te < s} = {t > 0 : min t≥0 x + tw + t e < s} ≤ 4s ∠(−e, w) .(92) Proof of Lemma 8. This is elementary 3-dimensional geometry. We omit the details. Proof of (74) and (75). 
On { η k = 1} min τ k−2 ≤t≤τ k Z (k) (t) − Z (k) (τ k−3 ) ≥ min 0≤t |tu k−1 + ξ k−2 u k−2 | min τ k−3 ≤t≤τ k−1 Z (k) (t) − Z (k) (τ k ) ≥ min{min 0≤t |ξ k−1 u k−1 + tu k−2 + ξ k u k−1 | , ξ k }.(93) The bounds in (74) and (75) follow from applying (92) and (71), bearing in mind that the distribution density of ξ k−2 and ξ k is bounded. Since these are very similar we will only prove (74) here. P { η k = 1} ∩ { min τ k−2 ≤t≤τ k Z (k) (t) − Z (k) (τ k−3 ) < s} ≤ P { η k = 1} ∩ {min t≥0 |tu k−1 + ξ k−2 u k−2 | < s} = Ar P ξ − ∈ {t : min t≥0 tu + t e < s} dµ(u, h, v) ≤ C Ar min{ s ∠(−e, u) , 1}dµ(u, h, v) ≤ Crs(|log s| ∨ 1). In the first step we used (93). The second step follows from the representation (70). The third step relies on (92) and on uniform boundedness of the distribution density of ξ − (which is either EXP (1|1) or EXP (1|0), depending on the value of k−2 ). Finally, the last calculation is based on (71). Proof of (76). min τ k−2 ≤t≤τ k Z (k) (t) − Z (k) (τ k−3 )(94) = min min τ k−2 ≤t≤τ k−1 + β Z (k) (t) − Z (k) (τ k−3 ) , min τ k−1 + β≤t≤τ k Z (k) (t) − Z (k) (τ k−3 ) . Here, and in the rest of this proof, β and w denote the trapping time and escape direction of the recollision sequence: β := max{s ≤ ξ k :Ż (k) (τ k−1 + s − ) =Ż (k) (τ k−1 + s + )} w :=Ż (k) (τ k−1 + β + ). To bound the first expression on the right hand side of (94) we first observe that by the triangle inequality min τ k−2 ≤t≤τ k−1 + β Z (k) (t) − Z (k) (τ k−3 ) ≥ ξ k−2 − ξ k−1 − 4r(95) Applying the representation and bounds developed in sections 6.4, 6.5, P { η k = 0} ∩ { η k = 1} ∩ { min τ k−2 ≤t≤τ k−1 + β Z (k) (t) − Z (k) (τ k−3 ) < s} ≤ P { η k = 0} ∩ { η k = 1} ∩ {ξ k−2 ≤ ξ k−1 + 4r + s} = Ar P ξ − < h + 4r + s dµ(u, h, v) ≤ C Ar (min{h, 1} + 4r + s)dµ(u, h, v) ≤ Cr 2 + Crs + Cr 2 |log r| . In the first step we used (95). The second step follows from the representation (70). The third step relies on on uniform boundedness of the distribution density of ξ − (which is either EXP (1|1) or EXP (1|0), depending on the value of k−2 ). Finally, the last step follows from explicit calculation, using (79). To bound the second term on the right hands side of (94) we proceed as in the proof of (74) above. First note that min τ k−1 + β≤t≤τ k Z (k) (t) − Z (k) (τ k−3 ) ≥ min 0≤t (Z (k) (τ k−2 ) − Z (k) (τ k−1 + β)) + t w + ξ k−2 u k−2 .(97) Using in turn (97), (70), (92) and uniform boundedness of the distribution density of ξ − (which is either EXP (1|1) or EXP (1|0), depending on the value of k−2 ), and finally (72), we obtain: P { η k = 0} ∩ { η k = 1} ∩ min τ k−1 + β≤t≤τ k Z (k) (t) − Z (k) (τ k−3 ) < s ≤ P { η k = 0} ∩ { η k = 1} ∩ {min 0≤t (Z (k) (τ k−2 ) − Z (k) (τ k−1 + β)) + t w + ξ k−2 u k−2 < s} = Ar P ξ − ∈ {t : min 0≤t Z r ( β r ) + t w r + t e < s} dµ(u, h, v) ≤ C Ar min{ s ∠(−e, w r ) , 1}dµ(u, h, v) ≤ Crs(|log s| 2 ∨ 1).(98) From (94), (96) and (98) we obtain (76). Proof of (77). We proceed very similarly as in the proof of (76). min τ k−3 ≤t≤τ k−1 + β Z (k) (t) − Z (k) (τ k )(99) ≥ min min τ k−2 ≤t≤τ k−1 + β Z (k) (t) − Z (k) (τ k ) , min τ k−3 ≤t≤τ k−2 Z (k) (t) − Z (k) (τ k ) . 
To bound the first expression on the right hand side of (99) we first observe that by the triangle inequality min τ k−2 ≤t≤τ k−1 + β Z (k) (t) − Z (k) (τ k ) ≥ ξ k − 2 β − 4r(100) Using in turn (100), (70), (73) and explicit computation based on uniform boundedness of the distribution density of ξ + (which is either EXP (1|1) or EXP (1|0), depending on the value of k ) we write P { η k = 0} ∩ { η k = 1} ∩ { min τ k−2 ≤t≤τ k−1 + β Z (k) (t) − Z (k) (τ k) < s} ≤ P { η k = 0} ∩ { η k = 1} ∩ {ξ k < 8r + 2s} + P { η k = 0} ∩ { η k = 1} ∩ {ξ k < 4 β} = P ξ + < 8r + 2s µ( A r ) + E µ((u, h, v) ∈ A r : ξ + ≤ 4 β r ) ≤ Cr(r + s) + CrE min{ ξ + 2r −1 log ξ + 2r ∨ 1 , 1} ≤ Cr 2 + Crs + Cr 2 |log r| 2 . The second term on the right hand side of (99) is bounded in a very similar way as the analogous second term on the right hand side of (94), see (97)-(98). Without repeating these details we state that P { η k = 0} ∩ { η k = 1} ∩ min τ k−2 ≤t≤τ k−1 Z (k) (t) − Z (k) (τ k ) < s ≤ Crs |log s| 2 .(102) Eventually, from (99), (101) and (102) we obtain (77). Proof of (60) -concluded Recall the events A P { η k = 1} ∩ A (k) 2,2 ≤ Cγr 2 |log r| , P { η k = 1} ∩ { η k = 0} ∩ A (k) 2,2 ≤ Cγr 2 |log r| 2 .(103) It remains to prove P { η k = 1} ∩ A (k) b,2 ≤ Cγr 2 |log r| , P { η k = 1} ∩ { η k = 0} ∩ A (k) b,2 ≤ Cγr 2 |log r| 2 , b = 1, 3.(104) Since the cases b = 1 and b = 3 are formally identical we will go through the steps of proof with b = 3 only. In order to do this we first define the necessary occupation time measures (Green's functions). For A ⊂ R 3 , define the following occupation time measures for the last part of (65) G (k) (A) :=E #{1 ≤ j ≤ γ − k : Y (τ j ) ∈ A} k+j : 1 ≤ j ≤ γ − k =E #{k + 1 ≤ j ≤ γ : Z (k) (τ j ) − Z (k) (τ k ) ∈ A} ∩ { η k = 1} =E #{k + 1 ≤ j ≤ γ : Z (k) (τ j ) − Z (k) (τ k ) ∈ A} ∩ { η k = 1} ∩ { η k = 0} , H (k) (A) :=E |{0 ≤ t ≤ τ γ−k : Y (t) ∈ A}| k+j : 1 ≤ j ≤ γ − k =E {τ k ≤ t ≤ θ : Z (k) (t) − Z (k) (τ k ) ∈ A} ∩ { η k = 1} =E {τ k ≤ t ≤ θ : Z (k) (t) − Z (k) (τ k ) ∈ A} ∩ { η k = 1} ∩ { η k = 0} . Similarly, define the following occupation time measures for the middle part of (65) G (k) (A) := E #{1 ≤ j ≤ 3 : Z (k) (τ k−j ) − Z (k) (τ k ) ∈ A} · η k H (k) (A) := E {τ k−3 ≤ t ≤ τ k : Z (k) (t) − Z (k) (τ k ) ∈ A} · η k G (k) (A) := E #{1 ≤ j ≤ 3 : Z (k) (τ k−j ) − Z (k) (τ k ) ∈ A} · η k · (1 − η k ) H (k) (A) := E {τ k−3 ≤ t ≤ τ k : Z (k) (t) − Z (k) (τ k ) ∈ A} · η k · (1 − η k ) . Using the independence of the middle and last parts in the decomposition (65), similarly as (22) or (47), following bounds are obtained P { η k = 1} ∩ A (k) 3,2 ≤ Cr −1 R 3 G (k) (B x,2r ) H (k) (dx) + Cr −1 R 3 H (k) (B x,3r ) G (k) (dx) P { η k = 1} ∩ { η k = 0} ∩ A (k) 3,2 ≤ ≤ Cr −1 R 3 G (k) (B x,2r ) H (k) (dx) + Cr −1 R 3 H (k) (B x,3r ) G (k) (dx)(105) Due to (28) of Lemma 3 by direct computations the following upper bounds hold G (k) (B x,2r ) ≤ CF (|x|), H (k) (B x,3r ) ≤ CF (|x|),(106) where C < ∞ is an appropriately chosen constant and F : R + → R, F (u) := r1{0 ≤ u < r} + r 3 u 2 1{r ≤ u < 1} + Finally, we also have the global bounds G (k) (R 3 ) = 3E η k ≤ Cr, H (k) (R 3 ) = E η k · k j=k−2 ξ j ≤ Cr, G (k) (R 3 ) = 3E η k · (1 − η k ) ≤ Cr, H (k) (R 3 ) = E η k · (1 − η k ) · k j=k−2 ξ j ≤ Cr.(108) We will prove the upper bound (104) for the first term on the right hand side of the first line in (105). The other four terms are done in very similar way. 
First we split the integral as R 3 G (k) (B x,2r ) H (k) (dx) = |x|<1 G (k) (B x,2r ) H (k) (dx) + |x|≥1 G (k) (B x,2r ) H (k) (dx)(109) and note that due to (106) and (108) the second term on the right hand side is bounded as |x|≥1 G (k) (B x,2r ) H (k) (dx) ≤ Cr 4 .(110) To bound the first term on the right hand side of (109) we proceed as follows In the first step we have used (106). The second step is an integration by parts. In the third step we use (107), (108) and the explicit form of the function F . The last step is explicit integration. Finally, (109), (110), (111) and identical comoputations for the second term on the right hand side of the first line in (105) yield the first inequality in (104). The second line of (104) for b = 3 is proved in an identical way, which we omit to repeat. The cases b = 1 is done in a formally identical way. Finally, (60) follows from (67), (103) and (104). Proof of Theorem 2 -concluded As in section 4.3 let n = (γ n ; (ξ n,j , u n,j ) : 1 ≤ j ≤ γ n ), n ≥ 1, be a sequence of i.i.d packs. Denote θ n , ((Y n (t), Z n (t)) : 0 ≤ t ≤ θ n ) the pair of Y and (forward) Z-processes constructed from them and Y (t) = νt k=1 Y (θ n ) + Y νt+1 ({t}), Z(t) = νt k=1 Z(θ n ) + Z νt+1 ({t}). Beside these two we now define yet another auxiliary process t → X (t) as follows: (X n (t) : 0 ≤ t ≤ θ n ) is the Lorentz exploration process constructed with data from (Y n (t) : 0 ≤ t ≤ θ n ) and incoming velocity u n,0 = u 0 if n = 1, X n−1 (θ − n−1 ) if n > 1. Finally, from these legs concatenate X (t) = νt k=1 X (θ n ) + X νt+1 ({t}). Note that the auxiliary process (X (t) : 0 ≤ t < ∞) is not identical with the Lorentz exploration process (X(t) : 0 ≤ t < ∞), constructed with data from (Y (t) : 0 ≤ t ≤ ∞) and initial incoming velocity u 0 , since the former one does not takes into account memory effects caused by earlier legs. However, based on Propositions 1 and 2, we will prove that until time T = T (r) = o(r −2 |log r| −2 ) the processes t → X(t), t → X (t), and t → Z(t) coincide with high probability. For this, we define the (discrete) stopping times ρ := min{n : X n (t) ≡ Z n (t), 0 ≤ t ≤ θ n } σ := min{n : max{1 Wn , 1 Wn > 0} = 1}, and note that by construction inf{t : Z(t) = X(t)} ≥ Θ min{ρ,σ}−1 . Remark: Actually, (113) holds under the much weaker condition lim r→∞ r log log T = 0. This can be achieved by applying the LIL rather than a WLLN type of argument to bound max 0≤t≤T |Y (t) − Z(t)| in the proof of Lemma 10, below. However, since the condition of Lemma 9 can not be much relaxed, in the end we would not gain much with the extra effort. Proof of Lemma 9. P Θ min{ρ,σ}−1 < T ≤ P ρ ≤ 2E θ −1 T + P σ ≤ 2E θ −1 T + P 2E θ −1 T j=1 θ j < T ≤ Cr 2 |log r| T + Cr 2 T + Ce −cT ,(114) where C < ∞ and c > 0. The first term on the right hand side of (114) is bounded by union bound and (43) from Proposition 1. Likewise, the second term is bounded by union bound and (45) of Propositions 2. In bounding the third term we use a large deviation upper bound for the sum of independent θ j -s. Finally, (112) readily follows from (114). Proof of Lemma 10. Note first that max 0≤t≤T |Y (t) − Z(t)| ≤ ν T +1 j=1 η j ξ j , with ν T and η j defined in (15), respectively, (18). Hence, P max 0≤t≤T |Y (t) − Z(t)| > δ √ T ≤ P 2T j=1 η j ξ j > δ √ T + P ν T > 2T ≤ Cδ −1 √ T r + e −cT ,(115) with C < ∞ and c > 0. The first term on the right hand side of (115) is bounded by Markov's inequality and the straightforward bound E η j ξ j ≤ Cr. 
The bound on the second term follows from a straightforward large deviation estimate on ν T ∼ P OI(T ). Finally, (113) readily follows from (115). (9) is direct consequence of Lemmas 9 and 10 and this concludes the proof of Theorem 2.
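For the reader's convenience we spell out the routine converging-together step by which (9) yields the invariance principle (10) from (7). For 0 ≤ t ≤ 1 write T^{-1/2} X_r(Tt) = T^{-1/2} Y(Tt) + T^{-1/2} ( X_r(Tt) − Y(Tt) ). By (7) the first term on the right hand side converges weakly, as a process on t ∈ [0, 1], to the Wiener process W, while by (9) the supremum over t ∈ [0, 1] of the norm of the second term converges to 0 in probability as r → 0. Hence the left hand side converges weakly to W as well (by the converging-together theorem, see [1]), which is exactly (10).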
17,087
1812.11325
2906796011
We prove an invariance principle for a random Lorentz-gas particle in 3 dimensions under the Boltzmann-Grad limit and simultaneous diffusive scaling. That is, for the trajectory of a point-like particle moving among infinite-mass, hard-core, spherical scatterers of radius @math , placed according to a Poisson point process of density @math , in the limit @math , @math , @math up to time scales of order @math . To our knowledge this represents the first significant progress towards solving this problem in classical nonequilibrium statistical physics since the groundbreaking work of Gallavotti (1970), Spohn (1978) and Boldrighini-Bunimovich-Sinai (1983). The novelty is that the diffusive scaling of the particle trajectory and the kinetic (Boltzmann-Grad) limit are taken simultaneously. The main ingredients are a coupling of the mechanical trajectory with the Markovian random flight process, and probabilistic and geometric controls on the efficiency of this coupling.
In @cite_12 and @cite_14 it is proved that in the Boltzmann-Grad limit the trajectory of the Lorentz particle in any compact time interval @math with @math fixed, converges weakly to a non-Markovian flight process which has, however, a complete description in terms of a Markov chain of the successive collision impact parameters and, conditionally on this random sequence, independent flight lengths. (For a full description in these terms see @cite_21 .) As a second limit, an invariance principle is proved in @cite_21 for this non-Markovian random flight process, with superdiffusive scaling @math . Note that in this case the second limit doesn't just drop out from Donsker's theorem as it did in the random scatterer setting. The results of @cite_12 are valid in @math while those of @cite_14 and @cite_21 in arbitrary dimension.
{ "abstract": [ "We study the dynamics of a point particle in a periodic array of spherical scatterers and construct a stochastic process that governs the time evolution for random initial data in the limit of low scatterer density (BoltzmannGrad limit). A generic path of the limiting process is a piecewise linear curve whose consecutive segments are generated by a Markov process with memory two.", "We prove a superdiffusive central limit theorem for the displacement of a test particle in the periodic Lorentz gas in the limit of large times t and low scatterer densities (Boltzmann–Grad limit). The normalization factor is ( t log t ), where t is measured in units of the mean collision time. This result holds in any dimension and for a general class of finite-range scattering potentials. We also establish the corresponding invariance principle, i.e., the weak convergence of the particle dynamics to Brownian motion.", "Abstract The periodic Lorentz gas is the dynamical system corresponding to the free motion of a point particle in a periodic system of fixed spherical obstacles of radius r centered at the integer points of the Euclidian plane, assuming all collisions of the particle with the obstacles to be elastic. In this Note, we study this motion on time intervals of order 1 r as r → 0 + . To cite this article: E. Caglioti, F. Golse, C. R. Acad. Sci. Paris, Ser. I 346 (2008)." ], "cite_N": [ "@cite_14", "@cite_21", "@cite_12" ], "mid": [ "2962964462", "2964292971", "2053462581" ] }
Invariance Principle for the Random Lorentz Gas - Beyond the Boltzmann-Grad Limit
We consider the Lorentz gas with randomly placed spherical hard core scatterers in R d . That is, place spherical balls of radius r and infinite mass centred on the points of a Poisson point process of intensity in R d , where r d is sufficiently small so that with positive probability there is free passage out to infinity, and define t → X r, (t) ∈ R d to be the trajectory of a point particle starting with randomly oriented unit velocity, performing free flight in the complement of the scatterers and scattering elastically on them. A major problem in mathematical statistical physics is to understand the diffusive scaling limit of the particle trajectory t → X r, (T t) √ T , as T → ∞.(1) Indeed, the Holy Grail of this field of research would be to prove an invariance principle (i.e. weak convergence to a Wiener process with nondegenerate variance) for the sequence of processes in (1) in either the quenched or annealed setting (discussed in section 1.1). For extensive discussion and historical background see the surveys [18,7,14] and the monograph [19]. The same problem in the periodic setting, when the scatterers are placed in a periodic array and randomness comes only with the initial conditions of the moving particle, is much better understood, due to the fact that in the periodic case the problem is reformulated as diffusive limit of particular additive functionals of billiards in compact domains and thus heavy artillery of hyperbolic dynamical systems theory is efficiently applicable. In order to put our results in context, we will summarize very succinctly the existing results, in section 1. 4. There has been, however, no progress in the study of the random Lorentz gas informally described above, since the ground-breaking work of Gallavotti [9,10], Spohn [17,18] and Boldrighini-Bunimovich-Sinai [3] where weak convergence of the process t → X r, (t) to a continuous time random walk t → Y (t) (called Markovian flight process) was established in the Boltzmann-Grad (a.k.a. low density) limit r → 0, → ∞, r d−1 → 1, in compact time intervals t ∈ [0, T ], with T < ∞, in the annealed [9,10,17,18], respectively, quenched [3] setting. Our main result (see Theorem 2 in subsection 1.3) proves an invariance principle in the annealed setting if we take the Boltzmann-Grad and diffusive limits simultaneously: r → 0, → ∞, r d−1 → 1 and T = T (r) → ∞. Thus while the diffusive limit (1) with fixed r and remains open, this is the first result proving convergence for infinite times in the setting of randomly placed scatterers, and hence it is a significant step towards the full resolution of the problem in the annealed setting. The random Lorentz gas We define now more formally the random Lorentz process. Place spherical balls of radius r and infinite mass centred on the points of a Poisson point process of intensity in R d , and define the trajectory t → X r, (t) ∈ R d of a particle moving among these scatterers as follows: -If the origin is covered by a scatterer then X r, (t) ≡ 0. -If the origin is not covered by a scatterer then t → X r, (t) is the trajectory of a point-like particle starting from the origin with random velocity sampled uniformly from the unit sphere S d−1 and flying with constant speed between successive elastic collisions on any one of the fixed, infinite mass scatterers. The randomness of the trajectory t → X r, (t) (when not identically 0) is due to two sources: the random placement of the scatterers and the random choice of initial velocity of the moving particle. 
Otherwise, the dynamics of the moving particle is fully deterministic, governed by classical Newtonian laws. With probability 1 (with respect to both sources of randomness) the trajectory t → X r, (t) is well defined. Due to elementary scaling and percolation arguments P the moving particle is not trapped in a compact domain = ϑ d ( r d ), where ϑ d : R + → [0, 1] is a percolation probability which is (i) monotone non-increasing; (ii) continuous except for one possible jump at a positive and finite critical value u c = u c (d) ∈ (0, ∞); (iii) vanishing for u ∈ (u c , ∞) and positive for u ∈ (0, u c ); (iv) lim u→0 ϑ d (u) = 1. We assume that r d < u c . In fact, in the Boltzmann-Grad limit considered in this paper (see (3) below) we will have r d → 0. As discussed above, the Holy Grail of this field is a mathematically rigorous proof of invariance principle of the processes (1) in either one of the following two settings. (Q) Quenched limit: For almost all (i.e. typical) realizations of the underlying Poisson point process, with averaging over the random initial velocity of the particle. In this case, it is expected that the variance of the limiting Wiener process is deterministic, not depending on the realization of the underlying Poisson point process. (AQ) Averaged-quenched (a.k.a. annealed ) limit: Averaging over the random initial velocity of the particle and the random placements of the scatterers. The Boltzmann-Grad limit The Boltzmann-Grad limit is the following low (relative) density limit of the scatterer configuration: r → 0, → ∞, r d−1 → v d−1 ,(3) where v d−1 is the area of the (d − 1)-dimensional unit disc. In this limit the expected free path length between two successive collisions will be 1. Other choices of lim r d−1 ∈ (0, ∞) are equally legitimate and would change the limit only by a time (or space) scaling factor. It is not difficult to see that in the averaged-quenched setting and under the Boltzmann-Grad limit (3) the distribution of the first free flight length starting at any deterministic time, converges to an EXP (1) and the jump in velocity after the free flight happens in a Markovian way with transition kernel P v out ∈ dv v in = v = σ(v, v )dv ,(4) where dv is the surface element on S d−1 and σ : S d−1 × S d−1 toR + is the normalised differential cross section of a spherical hard core scatterer, computable as σ(v, v ) = 1 4v d−1 v − v 3−d .(5) Note that in 3-dimensions the transition probability (4) of velocity jumps is uniform. That is, the outgoing velocity v out is uniformly distributed on S 2 , independently of the incoming velocity v in . It is intuitively compelling but far from easy to prove that under the Boltzmann-Grad limit (3) t → X r, (t) ⇒ t → Y (t) ,(6) where the symbol ⇒ stands for weak convergence (of probability measures) on the space of continuous trajectories in R d , see [1]. The process t → Y (t) on the right hand side is the Markovian random flight process consisting of independent free flights of EXP (1)-distributed length, with Markovian velocity changes according to the scattering transition kernel (4). A formal construction of the process t → Y (t) is given in section 2.1. The limit (6), valid in any compact time interval t ∈ [0, T ], T < ∞, is rigorously established in the averaged-quenched setting in [9,10,17,18], and in the quenched setting in [3]. In [17] more general point processes of the scatterer positions, with sufficiently strong mixing properties are considered. 
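Returning for a moment to the scattering kernel (4)-(5), the following elementary check (spelled out here only for the reader's convenience; it is not part of the argument) confirms the uniformity claim made above for d = 3. Since v_2 = π is the area of the 2-dimensional unit disc,
\[
\sigma(v,v') \;=\; \frac{1}{4 v_{2}}\,|v-v'|^{\,3-3} \;=\; \frac{1}{4\pi},
\qquad\text{and so}\qquad
\int_{S^{2}} \sigma(v,v')\,dv' \;=\; \frac{|S^{2}|}{4\pi} \;=\; 1 .
\]
Thus (5) is indeed a probability density on S^2, and the outgoing velocity v_out is uniformly distributed on S^2, independently of v_in.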
The limiting Markovian flight process t → Y (t) is a continuous time random walk. Therefore, by taking a second, diffusive limit after the Boltzmann-Grad limit (6), Donsker's theorem (see [1]) yields indeed the invariance principle, t → T −1/2 Y (T t) ⇒ t → W (t) ,(7) as T → ∞, where t → W (t) is the Wiener process in R d of nondegenerate variance. The variance of the limiting Wiener process W can be explicitly computed but its concrete value has no importance. The natural question arises whether one could somehow interpolate between the double limit of taking first the Boltzmann-Grad limit (6) and then the diffusive limit (7) and the plain diffusive limit for the Lorentz process, (1). Our main result, Theorem 2 formulated in section 1.3 gives a positive partial answer in dimension 3. Since our results are proved in three-dimensions from now on we formulate all statements in d = 3 rather than general dimension. Results In the rest of the paper we assume = (r) = πr −2 and drop the superscript from the notation of the Lorentz process. Our results (Theorems 1 and 2 formulated below) refer to a coupling -joint realisation on the same probability space -of the Markovian random flight process t → Y (t), and the quenched-averaged (annealed) Lorentz process t → X r (t). The coupling is informally described later in this section and constructed with full formal rigour in section 2.2. The first theorem states that in our coupling, up to to time T r −1 , the Markovian flight and Lorentz exploration processes stay together. Theorem 1. Let T = T (r) be such that lim r→0 T (r) = ∞ and lim r→0 rT (r) = 0. Then lim r→0 P inf{t : X r (t) = Y (t)} ≤ T = 0.(8) Although, this result is subsumed by our main result, it shows the strength of the coupling method employed in this paper. In particular, with some elementary arguments it provides a much stronger result than Gallavotti and Spohn [9,10,17] which states the weak limit (6) (which follows from (8)) for any fixed T < ∞. On the other hand the proof of this "naïve" result sheds some light on the structure of proof of the more sophisticated Theorem 2, which is our main result. Theorem 2. Let T = T (r) be such that lim r→0 T (r) = ∞ and lim r→0 r 2 |log r| 2 T (r) = 0. Then, for any δ > 0, lim r→0 P sup 0≤t≤T |X r (t) − Y (t)| > δ √ T = 0,(9) and hence t → T −1/2 X r (T t) ⇒ t → W (t) ,(10) as r → 0, in the averaged-quenched sense. On the right hand side of (10) W is a standard Wiener process of variance 1 in R 3 . Indeed, the invariance principle (10) readily follows from the invariance principle for the Markovian flight process, (7), and the closeness of the two processes quantified in (9). So, it remains to prove (9). This will be the content of the larger part of this paper, sections 4-7. The point of Theorem 2 is that the Boltzmann-Grad limit of scatterer configuration (3) and the diffusive scaling of the trajectory are done simultaneously, and not consecutively. The memory effects due to recollisions are controlled up to the time scale T = T (r) = o(r −2 |log r| −2 ). Remarks on dimension: (1) Our proof is not valid in 2-dimensions for two different reasons: (a) Probabilistic estimates at the core of the proof are valid only in the transient dimensions of random walk, d ≥ 3. (b) A subtle geometric argument which will show up in sections 6.4-6.6 below, is valid only in d ≥ 3, as well. This is unrelated to the recurrence/transience dichotomy and it is crucial in controlling the short range recollision events in the Boltzmann-Grad limit (3). 
(2) The fact that in d = 3 the differential cross section of hard spherical scatterers is uniform on S 2 , c.f. (4), (5), facilitates our arguments, since, in this case, the successive velocities of the random flight process Y (t) form an i.i.d. sequence. However, this is not of crucial importance. The same arguments could also be carried out for other differential cross sections, at the expense of more extensive arguments. We are not going to these generalisations here. Therefore the proofs presented in this paper are valid exactly in d = 3. The proof will be based on a coupling (that is: a joint realisation on the same probability space) of the Markovian flight process t → Y (t) and the averaged-quenched realisation of the Lorentz process t → X r (t), such that the maximum distance of their positions up to time T be small order of √ T . The Lorentz process t → X r (t) is realised as an exploration of the environment of scatterers. That is, as time goes on, more and more information is revealed about the position of the scatterers. As long as X r (t) traverses yet unexplored territories, it behaves just like the Markovian flight process Y (t), discovering new, yet-unseen scatterers with rate 1 and scattering on them. However, unlike the Markovian flight process it has long memory, the discovered scatterers are placed forever and if the process X r (t) returns to these positions, recollisions occur. Likewise, the area swept in the past by the Lorentz exploration process X r (t) -that is: a tube of radius r around its past trajectory -is recorded as a domain where new collisions can not occur. For a formal definition of the coupling see section 2.2. Let their velocity processes be U (t) :=Ẏ (t) and V r (t) :=Ẋ r (t). These are almost surely piecewise constant jump processes. The coupling is realized in such a way, that (A) At the very beginning the two velocities coincide, V r (0) = U (0). (B) Occasionally, with typical frequency of order r mismatches of the two velocity processes occur. These mismatches are caused by two possible effects: • Recollisions of the Lorentz exploration process with a scatterer placed in the past. This causes a collision event when V r (t) changes while U (t) does not. • Scatterings of the Markovian flight process Y (t) in a moment when the Lorentz exploration process is in the explored tube, where it can not encounter a not-yet-seen new scatterer. In these moments the process U (t) has a jump discontinuity, while the process V r (t) stays unchanged. We will call these events shadowed scatterings of the Markovian flight process. (C) However, shortly after the mismatch events described in item (B) above, a new jointly realised scattering event of the two processes occurs, recoupling the two velocity processes to identical values. These recouplings occur typically at an EXP (1)-distributed time after the mismatches. Figure 1: V r (t) V r (t) U (t) U (t) The above image shows a recollision (left) and a shadowing event (right). Note that after each event U and V r are no longer coupled. However at the next scattering, if possible, the velocities are recoupled. Summarizing: The coupled velocity processes t → (U (t), V r (t)) are realized in such a way that they assume the same values except for typical time intervals of length of order 1, separated by typical intervals of lengths of order r −1 . Other, more complicated mismatches of the two processes occur only at time scales of order r −2 |log r| −2 . 
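The heuristic content of (B) and (C) can be made slightly more quantitative by the following back-of-envelope computation (a sketch only, not part of the formal proof, which occupies sections 4-7). Each scattering of the Markovian flight process triggers a mismatch (a recollision or a shadowing event) with probability of order r, as quantified in Lemma 1 and Corollary 1 below, and up to time T there are of order T scatterings; so the expected number of mismatch intervals is of order rT. Each such interval lasts a time of order 1 by (C), and during it the two unit velocities U(t) and V^r(t) differ by at most 2. Hence
\[
\mathbf{E}\int_0^T |V^r(s)-U(s)|\,\mathrm{d}s \;=\; O(rT)\cdot O(1) \;=\; O(rT),
\qquad
\frac{1}{\sqrt{T}}\,O(rT)\;=\;O(r\sqrt{T})\;\longrightarrow\;0
\quad\text{as soon as } T=T(r)=o(r^{-2}).
\]
This is the computation behind the approximate step in the chain of bounds displayed below; the logarithmic corrections appearing in Theorem 2 arise in the finer tuning of the rarer, more complicated mismatch events.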
If all these are controlled (this will be the content of the proof) then the following hold: Up to T = T (r) = o(r −1 ), with high probability there is no mismatch whatsoever between U (t) and V r (t). That is, lim r→0 P inf{t : V r (t) = U (t)} < T = lim r→0 P inf{t : X r (t) = Y (t)} < T = 0.(11) In particular, the invariance principle (10) also follows, with T = T (r) = o(r −1 ), rather than T = T (r) = o(r −2 |log r| −2 ). As a by-product of this argument a new and handier proof of the theorem (6) of Gallavotti [9,10] and Spohn [17,18] also drops out. Going up to T = T (r) = o(r −2 |log r| −2 ) needs more argument. The ideas exposed in the outline (A), (B), (C) above lead to the following chain of bounds: max 0≤t≤1 X r (T t) √ T − Y (T t) √ T = 1 √ T max 0≤t≤1 T t 0 (V r (s) − U (s)) ds ≤ 1 √ T T 0 |V r (s) − U (s)| ds 1 √ T T r = √ T r. In the step we use the arguments (B) and (C). Finally, choosing in the end T = T (r) = o(r −2 ) we obtain a tightly close coupling of the diffusively scaled processes t → X r (T t)/ √ T and t → Y (T t)/ √ T , (9), and hence the invariance principle (10), for this longer time scale. This hand-waving argument should, however, be taken with a grain of salt: it does not show the logarithmic factor, which arises in the fine-tuning. Scaling limit of the periodic Lorentz gas As already mentioned, diffusion in the periodic setting is much better understood than in the random setting. This is due to the fact that diffusion in the periodic Lorentz gas can be reduced to study the of limit theorems of some particular additive functionals of billiard flows in compact domains. Heavy tools of hyperbolic dynamics provide the technical arsenal for the study of these problems. The first breakthrough was the fully rigorous proof of the invariance principle (diffusive scaling limit) for the Lorentz particle trajectory in a two-dimensional periodic array of spherical scatterers with finite horizon, [4]. (Finite horizon means that the length of the straight path segments not intersecting a scatterer is bounded from above.) This result was extended to higher dimensions in [6], under a still-not-proved technical assumption on singularities of the corresponding billiard flow. In the case of infinite horizon (e.g. the plain Z d arrangement of the spherical scatterers of diameter less than the lattice spacing) the free flight distribution of a particle flying in a uniformly sampled random direction has a heavy tail which causes a different type of long time behaviour of the particle displacement. The arguments of [2] indicated that in the twodimensional case super-diffusive scaling of order √ t log t is expected. A central limit theorem with this anomalous scaling was proved with full rigour in [20], for the Lorentz-particle displacement in the 2-dimensional periodic case with infinite horizon. The periodic infinite horizon case in dimensions d ≥ 3 remains open. Boltzmann-Grad limit of the periodic Lorentz gas The Boltzmann-Grad limit in the periodic case means spherical scatterers of radii r 1 placed on the points of the hypercubic lattice r (d−1)/d Z d . The particle starts with random initial position and velocity sampled uniformly and collides elastically on the scatterers. For a full exposition of the long and complex history of this problem we quote the surveys [11,14] and recall only the final, definitive results. 
In [5] and [15] it is proved that in the Boltzmann-Grad limit the trajectory of the Lorentz particle in any compact time interval t ∈ [0, T ], with T < ∞ fixed, converges weakly to a non-Markovian flight process which has, however, a complete description in terms of a Markov chain of the successive collision impact parameters and, conditionally on this random sequence, independent flight lengths. (For a full description in these terms see [16].) As a second limit, an invariance principle is proved in [16] for this non-Markovian random flight process, with superdiffusive scaling √(t log t). Note that in this case the second limit does not just drop out from Donsker's theorem as it did in the random scatterer setting. The results of [5] are valid in d = 2, while those of [15] and [16] hold in arbitrary dimension. Interpolating between the plain scaling limit in the infinite horizon case (open in d ≥ 3) and the kinetic limit, by simultaneously taking the Boltzmann-Grad limit and scaling the trajectory by √(T log T), where T = T (r) → ∞ at some rate, would be the problem analogous to our Theorem 1 or Theorem 2. This is widely open.

Miscellaneous

The quantum analogue of the problem of the Boltzmann-Grad limit for the random Lorentz gas was considered in [8], where the long time evolution of a quantum particle interacting with a random potential in the Boltzmann-Grad limit is studied. It is proved that the phase space density of the quantum evolution converges weakly to the solution of the linear Boltzmann equation. This is the precise quantum analogue of the classical problem solved by Gallavotti and Spohn in [9,10,17,18].

Looking into the future: Liverani investigates the periodic Lorentz gas with finite horizon with local random perturbations in the cells of periodicity: a basic periodic structure with spherical scatterers centred on Z d, with extra scatterers placed randomly and independently within the cells of periodicity, [12]. This is an interesting mixture of the periodic and random settings which could succumb to a mixture of dynamical and probabilistic methods, so-called deterministic walks in random environment.

Structure of the paper

The rest of the paper is devoted to the rigorous statement and proof of the arguments exposed in (A), (B), (C) above. Its overall structure is as follows:

-Section 2: We construct the Markovian flight process and the Lorentz exploration process, and thus lay out the coupling argument which is essential moving forward. Moreover, we will introduce an auxiliary process, Z, which will be simpler to work with than X.

-Section 3: We prove Theorem 1. We go through the proof of this result as it is both informative for the dynamics, and the proof of Theorem 2 in its full strength will follow partially similar lines, however with substantial differences.

Sections 4-7 are fully devoted to the proof of Theorem 2, as follows:

-Section 4: We break up the process Z into independent legs. From here we state two propositions which are central to the proof. They state that (i) with high probability the process X does not differ from Z in each leg; (ii) with high probability, the different legs of the process Z do not interact (up to times of our time scales).

-Section 5: We prove the proposition concerning interactions between legs.

-Section 6: We prove the proposition concerning coincidence, with high probability, of the processes X and Z within a single leg. This section is longer than the others, due to the subtle geometric arguments and estimates needed in this proof.
-Section 7: We finish off the proof of Theorem 2. Construction Ingredients and the Markovian flight process Let ξ j ∈ R + and u j ∈ R 3 , j = −2, −1, 0, 1, 2, . . . , be completely independent random variables (defined on an unspecified probability space (Ω, F , P)) with distributions: ξ j ∼ EXP (1), u j ∼ U N I(S 2 ),(12) and let y j := ξ j u j ∈ R 3 .(13) For later use we also introduce the sequence of indicators j := 1{ξ j < 1},(14) and the corresponding conditional exponential distributions EXP (1|1) := distrib(ξ | = 1), respectively, EXP (1|0) = distrib(ξ | = 0), with distribution densities (e − 1) −1 e 1−x 1{0 ≤ x < 1}, respectively, e 1−x 1{1 ≤ x < ∞}. We will also use the notation := ( j ) j≥0 and call the sequence the signature of the i.i.d. EXP (1)-sequence (ξ j ) j≥0 . The variables ξ j and u j will be, respectively, the consecutive flight length/flight times and flight velocities of the Markovian flight process t → Y (t) ∈ R 3 defined below. Denote, for n ∈ Z + , t ∈ R + , τ n := n j=1 ξ j , ν t := max{n : τ n ≤ t}, {t} := t − τ νt .(15) That is: τ n denotes the consecutive scattering times of the flight process, ν t is the number of scattering events of the flight process Y occurring in the time interval (0, t], and {t} is the length of the last free flight before time t. Finally let Y n := n j=1 ξ j u j = n j=1 y j , Y (t) := Y νt + {t}u νt+1 . We shall refer to the process t → Y (t) as the Markovian flight process. This will be our fundamental probabilistic object. All variables and processes will be defined in terms of this process, and adapted to the natural continuous time filtration (F t ) t≥0 of the flight process: F t := σ(u 0 , (Y (s)) 0≤s≤t ). Note that the processes n → Y n , t → Y (t) and their respective natural filtrations (F n ) n≥0 , (F t ) t≥0 , do not depend on the parameter r. We also define, for later use, the virtual scatterers of the flight process t → Y (t). For n ≥ 0, let Y k := Y k + r u k − u k+1 |u n − u k+1 | = Y k + rẎ (τ − k ) −Ẏ (τ + k ) Ẏ (τ − k ) −Ẏ (τ + k ) , k ≥ 0, S Y n := {Y k ∈ R 3 : 0 ≤ k ≤ n}, n ≥ 0. Here and throughout the paper we use the notation f (t ± ) := lim ε↓0 f (t ± ε). The points Y n ∈ R 3 are the centres of virtual spherical scatterers of radius r which would have caused the nth scattering event of the flight process. They do not have any influence on the further trajectory of the flight process Y , but will play role in the forthcoming couplings. The Lorentz exploration process Let r > 0, and = (r) = πr −2 . We define the Lorentz exploration process t → X(t) = X r (t) ∈ R 3 , coupled with the flight process t → Y (t), adapted to the filtration (F t ) t≥0 . The process t → X(t) and all upcoming random variables related to it do depend on the choice of the parameter r (and ), but from now on we will suppress explicit notation of dependence upon these parameters. The construction goes inductively, on the successive time intervals [τ n−1 , τ n ), n = 1, 2, . . . . Start with [Step 1] and then iterate indefinitely [Step 2] and [ Step 3] below. [ Step 1] Start with X(0) = X 0 = 0, V (0 + ) = u 1 , X 0 := r u 0 − u 1 |u 0 − u 1 | S X 0 = {X 0 }. Note that the trajectory of the exploration process X begins with a collision at time t = 0. This is not exactly as described previously but is of no consequence and aids the later exposition. Go to [ Step 2]. 
[ Step 2] This step starts with given X(τ n−1 ) = X n−1 ∈ R 3 , V (τ + n−1 ) ∈ S 2 and S X n−1 = {X k : 0 ≤ k ≤ n − 1} ⊂ R 3 ∪ { }, where • is a fictitious point at infinity, with inf x∈R 3 |x − | = ∞, introduced for bookkeeping reasons; • |X n−1 − X k | ∈ (r, ∞] for 0 ≤ k < n − 1, and X n−1 − X n−1 ∈ {r, ∞}. The trajectory t → X(t), t ∈ [τ n−1 , τ n ), is defined as free motion with elastic collisions on fixed spherical scatterers of radius r centred at the points in S X n−1 . At the end of this time interval the position and velocity of the Lorentz exploration process are X(τ n ) =: X n , respectively, V (τ − n ). Go to [Step 3]. [Step 3] Let X n := X n + r V (τ − n ) − u n+1 V (τ − n ) − u n+1 , d n := min 0≤s<τn X(s) − X n . Note that d n ≤ r. • If d n < r then let X n := , and V (τ + n ) = V (τ − n ). • If d n = r then let X n := X n ,and V (τ + n ) = u n+1 . Set S X n = S X n−1 ∪ {X n }. Go back to [Step 2]. The process t → X(t) is indeed adapted to the filtration (F t ) 0≤t<∞ and indeed has the averagedquenched distribution of the Lorentz process. Our notation is fully consistent with the one used for the markovian process Y : X n := X(τ n ) and X k :=        X k + rẊ (τ − k ) −Ẋ(τ + k ) Ẋ (τ − k ) −Ẋ(τ + k ) ifẊ(τ − k ) =Ẋ(τ + k ), ifẊ(τ − k ) =Ẋ(τ + k ), k ≥ 0, S X n := {X k ∈ R 3 : 0 ≤ k ≤ n}, n ≥ 0. Mechanical consistency and compatibility of piece-wise linear trajectories in R 3 The key notion in the exploration construction of section 2.2 was mechanical r-consistency, and r-compatibility of finite segments of piece-wise linear trajectories in R 3 , which we are going to formalize now, for later reference. Let n ∈ N, τ 0 ∈ R, Z 0 ∈ R 3 , v 0 , . . . , v n+1 ∈ S 2 t 1 , . . . , t n ∈ R + , be given and define for j = 0, . . . , n, τ j := τ 0 + j k=1 t k , Z j := Z 0 + j k=1 t k v k , Z j :=    Z j + r v j − v j+1 |v j − v j+1 | if v j = v j+1 , if v j = v j+1 , and for t ∈ [τ j , τ j+1 ], j = 0, . . . , n, Z(t) := Z j + (t − τ j )v j+1 . We call the piece-wise linear trajectory Z(t) : τ − 0 < t < τ + n mechanically r-consistent or r-inconsistent, if min τ 0 ≤t≤τn min 0≤j≤n Z(t) − Z j = r, respectively, min τ 0 ≤t≤τn min 0≤j≤n Z(t) − Z j < r(16) Note, that by formal definition the minimum distance on the left hand side can not be strictly larger than r. Given two finite pieces of mechanically r-consistent trajectories Z a (t) : τ − a,0 < t < τ + a,na and Z b (t) : τ − b,0 < t < τ + b,n b , defined over non-overlapping time intervals: [τ a,0 , τ a,na ] ∩ [τ b,0 , τ b,n b ] = ∅, with τ a,na ≤ τ b,0 , we will call them mechanically r-compatible or r-incompatible if min{ min τ a,0 ≤t≤τa,n a min 0<j≤n b Z a (t) − Z b,j , min τ b,0 ≤t≤τ b,n b min 0≤j<na Z b (t) − Z a,j } ≥ r, min{ min τ a,0 ≤t≤τa,n a min 0<j≤n b Z a (t) − Z b,j , min τ b,0 ≤t≤τ b,n b min 0≤j<na Z b (t) − Z a,j } < r,(17) respectively. It is obvious that given a mechanically r-consistent trajectory, any non-overlapping parts of it are pairwise mechanically r-compatible, and given a finite number of non-overlapping mechanically r-consistent pieces of trajectories which are also pair-wise mechanically r-compatible their concatenation (in the most natural way) is mechanically r-consistent. An auxiliary process It will be convenient to introduce a third, auxiliary process t → Z(t) ∈ R 3 , and consider the joint realization of all three processes t → (Y (t), X(t), Z(t)) on the same probability space. This construction will not be needed until section 4, but this is the optimal logical point to introduce it. 
The reader may safely skip to section 3 and come back here before turning to section 4. The process t → Z(t) will be a forgetful version of the true physical process t → X(t) in the sense that in its construction only memory effects by the last seen scatterers are taken into account. That is: only direct recollisions with the last seen scatterer and shadowings by the last straight flight segment are incorporated, disregarding more complex memory effects. It will be shown that (a) up to times T = T (r) = o(r −2 |log r| −2 ) the trajectories of the forgetful process Z(t) and the true physical process X(t) coincide, and (b) the forgetful process Z(t) and the Markovian process Y (t) stay sufficiently close together with probability tending to 1 (as r → 0). Thus, the invariance principle (7) can be transferred to the true physical process X(t), thus yielding the invariance principle (10). Define the following indicator variables: η j = η(y j−2 , y j−1 , y j ) := 1 |y j−1 | < 1 and min 0≤t≤ξ j−2 y j−1 + r u j−1 − u j |u j−1 − u j | + tu j−2 < r , η j = η(y j−2 , y j−1 , y j ) := 1 |y j−1 | < 1 and min 0≤t≤ξ j y j−1 + r u j−1 − u j−2 |u j−1 − u j−2 | + tu j < r , η j := max{ η j , η j }.(18) Before constructing the auxiliary process t → Z(t) we prove the following Lemma 1. There exists a constant C < ∞ such that for any sequence of signatures = ( j ) j≥1 the following bounds hold E η j ≤ Cr,(19)E η j η k ≤ Cr 2 |log r| if |j − k| = 1, Cr 2 if |j − k| > 1.(20) Proof of Lemma 1. Define the following auxiliary, and simpler, indicators: η j := 1 ∠(−u j−1 , u j−2 ) < 2r ξ j−1 , η j := 1 ∠(−u j−1 , u j ) < 2r ξ j−1 . Here, and in the rest of the paper we use the notation ∠ : S 2 × S 2 → [0, π], ∠(u, v) := arccos(u · v). Then, clearly, η j ≤ η j , η j ≤ η j . It is straightforward that the indicators η j : 1 ≤ j < ∞ , and likewise, the indicators η j : 1 ≤ j < ∞ , are independent among themselves and one-dependent across the two sequences. This holds even if conditioned on the sequence of signatures . Therefore, the following simple computations prove the claim of the lemma. E η j ≤ Cr 2 ∞ 0 e −y min{y −2 , r −2 }dy ≤ Cr, E η j ≤ Cr 2 ∞ 0 e −y min{y −2 , r −2 }dy ≤ Cr, E η j+1 η j ≤ Cr 2 ∞ 0 ∞ 0 e −y e −z min{y −2 , z −2 , r −2 }dydz ≤ Cr 2 |log r| . We omit the elementary computational details. Lemma 1 assures that, as r → 0, with probability tending to 1, up to time of order T = T (r) = o(r −2 |log r| −1 ) it will not occur that two neighbouring or next-neighbouring η-s happen to take the value 1 which would obscure the following construction. The process t → Z(t) is constructed on the successive intervals [τ j−1 , τ j ), j = 1, 2, . . . , as follows: • (No interference with the past.) If η j = 0 then for τ j−1 ≤ t ≤ τ j , Z(t) = Z(τ j−1 ) + {t}u j . • (Direct shadowing.) If η j = 1, then for τ j−1 ≤ t ≤ τ j , Z(t) = Z(τ j−1 ) + {t}u j−1 . • (Direct recollision with the last seen scatterer.) If η j = 0 and η j = 1 then, in the time interval τ j−1 ≤ t ≤ τ j the trajectory t → Z(t) is defined as that of a mechanical particle starting with initial position Z(τ j−1 ), initial velocityŻ(τ + j−1 ) = u j and colliding elastically with two infinite-mass spherical scatterers of radius r centred at the points Z(τ j−1 ) + r u j−1 − u j |u j−1 − u j | , respectively Z(τ j−2 ) − r u j−1 − u j−2 |u j−1 − u j−2 | . Consistently with the notations adopted for the processes Y (t) and X(t), we denote (1), and therefore the coupling bound of Theorem 1 holds. 
On the way we establish various bounds to be used in later sections. This section is purely classical-probabilistic. It also prepares the ideas (and notation) for section 5 where a similar argument is explored in more complex form. Z k := Z(τ k ) for k ≥ 0. Y (t) Z(t) X(t) (a) (a) (b) (b) Interferences Let t → Y (t) and t → Y * (t) be two independent Markovian flight processes. Think about Y (t) as running forward and Y * (t) as running backwards in time. (Note, that the Markovian flight process has invariant law under time reversal.) Define the following events W j := {min{ Y (t) − Y j : 0 < t < τ j−1 } < r}, W j := {min{ Y k − Y (t) : 0 ≤ k < j − 1, τ j−1 < t < τ j } < r}, W * j := {min{ Y * (t) − Y 1 : 0 < t < τ j−1 } < r}, W * j := {min{ Y * k − Y (t) : 0 < k ≤ j − 1, 0 < t < τ 1 } < r}, W * ∞ := {min{ Y * (t) − Y 1 : 0 < t < ∞} < r}, W * ∞ := {min{ Y * k − Y (t) : 0 < k < ∞, 0 < t < τ 1 } < r}, In words W j is the event that the virtual collision at Y j is shadowed by the past path. While W j is the event that in the time interval (τ j−1 , τ j ) there is a virtual recollision with a past scatterer. It is obvious that P W j = P W * j ≤ P W * j+1 ≤ P W * ∞ , P W j = P W * j ≤ P W * j+1 ≤ P W * ∞ .(21) On the other hand, by union bound and independence P W * ∞ ≤ z∈Z 3 P {1 < k < ∞ : Y * k ∈ B zr,2r } = ∅ P {0 < t ≤ ξ : Y (t) ∈ B zr,2r } = ∅ ≤ z∈Z 3 (2r) −1 E |{1 < k < ∞ : Y * k ∈ B zr,2r }| E |{0 < t ≤ ξ : Y (t) ∈ B zr,3r }| P W * ∞ ≤ z∈Z 3 P {0 < t < ∞ : Y * (t) ∈ B zr,2r } = ∅ P Y 1 ∈ B zr,2r ≤ z∈Z 3 (2r) −1 E |{0 < t < ∞ : Y * (t) ∈ B zr,3r }| P Y 1 ∈ B zr,2r(22) Here and in the rest of the paper we use the notation |{· · · }| for either cardinality or Lebesgue measure of the set {· · · }, depending on context. Occupation measures (Green's functions) Define the following occupation measures (Green's functions): for A ⊂ R 3 g(A) := P Y 1 ∈ A h(A) := E |{0 < t ≤ ξ 1 : Y (t) ∈ A}| G(A) := E |{1 ≤ k < ∞ : Y k ∈ A}| H(A) := E |{0 < t < ∞ : Y (t) ∈ A}| . Obviously, G(A) = g(A) + R 3 g(A − x)G(dx) H(A) = h(A) + R 3 h(A − x)G(dx).(23) Bounds Lemma 2. The following identities and upper bounds hold: h(dx) = g(dx) ≤ L(dx) (24) H(dx) = G(dx) ≤ K(dx) + L(dx)(25) where K(dx) := C min{1, |x| −1 }dx, L(dx) := Ce −c|x| |x| −2 dx,(26) with appropriately chosen C < ∞ and c > 0. Proof of Lemma 2. The identity h = g is a direct consequence of the flight length ξ being EXP (1)-distributed. The distribution g has the explicit expression g(dx) = C |x| −2 e −|x| dx from which the the upper bound (24) follows. (25) then follows from (23) and standard Green's function estimate for a random walk with step distribution g. For later use we introduce the conditional versions -conditioned on the sequence (see (14)) -of the bounds (24), (25). In this order we define the conditional versions of the Green's functions, given ∈ {0, 1}, respectively ∈ {0, 1} N : g (A) := P Y 1 ∈ A h (A) := E |{0 < t ≤ ξ 1 : Y (t) ∈ A}| G (A) := E |{1 ≤ k < ∞ : Y k ∈ A}| H (A) := E |{0 < t < ∞ : Y (t) ∈ A}| , and state the conditional version of Lemma 2: Lemma 3. The following upper bounds hold uniformly in ∈ {0, 1} N : g (dx) ≤ L(dx), h (dx) ≤ L(dx),(27)G (dx) ≤ K(dx) + L(dx), H (dx) ≤ K(dx) + L(dx),(28) with K(x) and L(x) as in (26), with appropriately chosen constants C < ∞ and c > 0. Proof of Lemma 3. Noting that g (dx) ≤ C |x| −2 e −|x| dx, h (dx) ≤ C |x| −2 e −|x| dx, the proof of Lemma 3 follows very much the same lines as the proof of Lemma 2. We omit the details. Computation According to (21), (22), for every j = 1, 2, . . . 
P W j ≤ P W * ∞ ≤ (2r) −1 z∈Z 3 G(B zr,2r )h(B zr,3r ), P W j ≤ P W * ∞ ≤ (2r) −1 z∈Z 3 H(B zr,3r )g(B zr,2r ). Moreover, straightforward computations yield Proof of Lemma 4. The bounds (29) readily follow from explicit computations. We omit the details. We conclude this section with the following consequence of the above arguments and computations. Corollary 1. There exists a constant C < ∞ such that for any j ≥ 1: P W j ≤ Cr, P W j ≤ Cr.(30) 3.5 No mismatching -up to T ∼ o(r −1 ) Define the stopping time σ := min{j > 0 : max{1 W j , 1 W j } = 1}, and note that by construction inf{t > 0 : X(t) = Y (t)} ≥ τ σ−1 .(31) Lemma 5. Let T = T (r) be such that lim r→0 T (r) = ∞ and lim r→0 rT (r) = 0. Then lim r→0 P τ σ−1 < T = 0.(32) Proof of Lemma 5. P τ σ−1 < T ≤ P σ ≤ 2T + P 2T −1 j=1 ξ j < T ≤ CrT + Ce −cT ,(33) where C < ∞ and c > 0. The first term in the middle expression of (33) is bounded by union bound and (30) of Corollary 1. In bounding the second term we use a large deviation upper bound for the sum of independent EXP (1)-distributed ξ j -s. Finally, (32) readily follows from (33). (8) follows directly from (31) and (32), and this concludes the proof of Theorem 1. Beyond the naïve coupling The forthcoming parts of the paper rely on the joint realization (coupling) of the three processes t → Y (t), X(t), Z(t) as described in section 2. In particular, recall the construction of the process t → Z(t) from section 2.4. Breaking Z into legs Let Γ 0 := 0, Θ 0 = 0 and for n ≥ 1 Γ n := min{j ≥ Γ n−1 + 2 : min{ξ j−1 , ξ j , ξ j+1 , ξ j+2 } > 1}, γ n := Γ n − Γ n−1 , Θ n := τ Γn , θ n := Θ n − Θ n−1 ,(34) and denote ξ n,j := ξ Γ n−1 +j , u n,j := u Γ n−1 +j , y n,j := y Γ n−1 +j , 1 ≤ j ≤ γ n , Y n (t) := Y (Θ n−1 + t) − Y (Θ n−1 ), 0 ≤ t ≤ θ n , Z n (t) := Z(Θ n−1 + t) − Z(Θ n−1 ), 0 ≤ t ≤ θ n . Then, it is straightforward that the packs of random variables n := (γ n ; (ξ n,j , u n,j ) : 1 ≤ j ≤ γ n ) , n ≥ 0,(35) are fully independent (for n ≥ 0), and also identically distributed for n ≥ 1. (The zeroth pack is deficient if min{ξ 0 , ξ 1 } < 1.) It is also straightforward that the legs of the Markovian flight process (θ n ; Y n (t) : 0 ≤ t ≤ θ n ) , n ≥ 0, are fully independent, and identically distributed for n ≥ 1. A key observation is that due to the rules of construction of the process t → Z(t) exposed in section 2.4, the legs (θ n ; Z n (t) : 0 ≤ t ≤ θ n ) , n ≥ 0, of the auxiliary process t → Z(t) are also independently constructed from the packs (35), following the rules in section 2.4. Note, that the restrictions |y j−1 | < 1 in (18) were imposed exactly in order to ensure this independence of the legs (36). Therefore we will construct now the auxiliary process t → Z(t) and its time reversal t → Z * (t) from an infinite sequence of independent packs (35). In order to reduce unnecessary complications of notation from now on we assume min{ξ 0 , ξ 1 } > 1. Remark: In order to break up the auxiliary process t → Z(t) into independent legs the choice of simpler stopping times Γ n := min{j ≥ Γ n−1 + 1 : min{ξ j , ξ j+1 } > 1}, would work. However, we need the slightly more complicated stoppings Γ n , given in (34), for some other reasons which will become clear towards the end of section 4.2 and in the statement and proof of Lemma 6. One leg Let ξ j , u j , j ≥ 1, be fully independent random variables with the distributions (12), conditioned to min{ξ 1 , ξ 2 } > 1. and y j as in (13). Let γ := min{j ≥ 2 : min{ξ j−1 , ξ j , ξ j+1 , ξ j+2 } > 1} ∈ {2} ∪ {5, 6, . . . 
}.(37) Note that γ can not assume the values {1, 3, 4}. Call := (γ; (ξ j , u j ) : 1 ≤ j ≤ γ)(38) a pack, and keep the notation τ j := j k=1 ξ k , and θ := τ γ . The forward leg (θ; Z(t) : 0 ≤ t ≤ θ) is constructed from the pack according to the rules given in section 2.4. We will also denote Z j := Z(τ j ), 0 ≤ j ≤ γ; Z := Z γ = Z(θ). These are the discrete steps, respectively, the terminal position of the leg. It is easy to see that the distributions of γ and θ are exponentially tight: there exist constants C < ∞ and c > 0 such that for any s ∈ [0, ∞) P γ > s ≤ Ce −cs , P θ > s ≤ Ce −cs .(39) The backwards leg (θ; Z * (t) : 0 ≤ t ≤ θ) is constructed from the pack as Z * (t, ) := Z(θ − t, * ) − Z( * ), where the backwards pack * := (γ; (ξ γ−j , −u γ−j ) : 0 ≤ j ≤ γ) is the time reversion of the pack . Note that the forward and backward packs, and * , are identically distributed but the forward and backward processes t → Z(t) : 0 ≤ t ≤ θ and t → Z * (t) : 0 ≤ t ≤ θ are not. The backwards process t → Z * (t) could also be defined in stepwise terms, similar (but not identical) to those in section 2.4, but we will not rely on these step-wise rules and therefore omit their explicit formulation. Consistent with the previous notation, we denote Z * j := Z * (τ j ), 0 ≤ j ≤ γ; Z * := Z * γ = Z * (θ) = −Z. Note, that due to the construction rules of the forward and backward legs, their beginning, middle and ending parts (τ 1 ; Z(t) : 0 ≤ t ≤ τ 1 ) , (τ γ−1 − τ 1 ; Z(τ 1 + t) − Z(τ 1 ) : 0 ≤ t ≤ τ γ−1 − τ 1 ) , (τ γ − τ γ−1 ; Z(τ γ−1 + t) − Z(τ γ−1 ) : 0 ≤ t ≤ τ γ − τ γ−1 ) ,(40) are independent, and likewise for the backwards process Z * , (τ 1 ; Z * (t) : 0 ≤ t ≤ τ 1 ) , (τ γ−1 − τ 1 ; Z * (τ 1 + t) − Z * (τ 1 ) : 0 ≤ t ≤ τ γ−1 − τ 1 ) , (τ γ − τ γ−1 ; Z * (τ γ−1 + t) − Z * (τ γ−1 ) : 0 ≤ t ≤ τ γ − τ γ−1 ) .(41) This fact will be of crucial importance in the proof of Proposition 2, section 5.2 below. This is the reason (alluded to in the remark at the end of section 4.1) we chose the somewhat complicated stopping time as defined in (37). Multi-leg concatenation Let n = (γ n ; (ξ n,j , u n,j ) : 1 ≤ j ≤ γ n ), n ≥ 1, be a sequence of i.i.d packs (38), and denote θ n , (Z n (t) : 0 ≤ t ≤ θ n ), (Z n,j : 1 ≤ j ≤ γ n ), (Z * n (t) : 0 ≤ t ≤ θ n ), (Z * n,j : 1 ≤ j ≤ γ n ), Z n , Z * n the various objects defined in section 4.2, specified for the n-th independent leg. In order to construct the concatenated forward and backward processes t → Z(t), t → Z * (t), 0 ≤ t < ∞, we first define for n ∈ Z + , respectively t ∈ R + Γ n := Note that Ξ n and Ξ * n are random walks with independent steps; t → Z(t), 0 ≤ t < ∞, is exactly the Z-process constructed in section 2.4, with Z n = Z(τ n ), 0 ≤ n < ∞. Similarly, t → Z * (t), 0 ≤ t < ∞, is the time reversal of the Z-process and Z * n = Z * (τ n ), 0 ≤ n < ∞. Theorem 2 will follow from Propositions 1 and 2 of the next two sections. Mismatches within one leg Given a pack = (γ; (ξ j , u j ) : 1 ≤ j ≤ γ) (38), and arbitrary incoming and outgoing velocities u 0 , u γ+1 ∈ S 2 let (Y (t), X (t), Z(t)) : 0 − < t < θ + , be the triplet of Markovian flight process, Lorentz exploration process and auxiliary Z-process jointly constructed with these data, as described in sections 2.1, 2.2, respectively, 2.4. By 0 − < t < θ + we mean that the incoming velocities at 0 − are given asẎ (0 − ) =Ẋ (0 − ) =Ż(0 − ) = u 0 and the outgoing velocities at θ + arė Y (θ + ) =Ż(θ + ) = u γ+1 , whileẊ (θ + ) is determined by the construction from section 2.2. 
That is,Ẋ (θ + ) = u γ+1 if this last scattering is not shadowed by the trajectory X (t) : 0 ≤ t ≤ θ andẊ (θ + ) =Ẋ (θ − ) if it is shadowed. Proposition 1. There exists a constant C < ∞ such that for any u 0 , u γ+1 ∈ S 2 P X (t) ≡ Z(t) : 0 − < t < θ + ≤ Cr 2 |log r| 2 . (43) The proof of this Proposition relies on controlling the geometry of mismatchings, and is postponed until Section 6. Inter-leg mismatches Let t → Z(t) be a forward Z-process built up as concatenation of legs, as exposed in section 4.3 and define the following events W j := min{ Z(t) − Z k : 0 < t < Θ j−1 , Γ j−1 < k ≤ Γ j } < r , W j := min{ Z k − Z(t) : 0 ≤ k < Γ j−1 , Θ j−1 < t < Θ j } < r ,(44) In words W j is the event that a collision occuring in the j-th leg is shadowed by the past path. While W j is the event that within the j-th leg the Z-trajectory bumps into a scatterer placed in an earlier leg. That is, W j ∪ W j is precisely the event that the concatenated first j − 1 legs and the j-th leg are mechanically r-incompatible (see section 2.3). The following proposition indicates that on our time scales there are no "inter-leg mismatches": Proposition 2. There exists a constant C < ∞ such that for all j ≥ 1 P W j ≤ Cr 2 , P W j ≤ Cr 2(45) The proof of Proposition 2 is the content of Section 5 Proof of Proposition 2 This section is purely probabilistic and of similar spirit as section 3. The notation used is also similar. However, similar is not identical. The various Green's functions used here, although denoted g, h, G, H, as in section 3, are similar in their rôle but not the same. The estimates on them are also different. Occupation measures (Green's functions) Let now t → Z * (t), 0 ≤ t < ∞, be a backward Z * -process and t → Z(t), 0 ≤ t ≤ θ, a forward one-leg Z-process, assumed independent. In analogy with the events W j and W j defined in (44) we define W * j := min{ Z * (t) − Z k : 0 < t < Θ j−1 , 0 < k ≤ γ} < r , W * j := min{ Z * k − Z(t) : 0 < k ≤ Γ j−1 , 0 < t < θ} < r , W * ∞ := min{ Z * (t) − Z k : 0 < t < ∞, 0 < k ≤ γ} < r , W * ∞ := min{ Z * k − Z(t) : 0 < k < ∞, 0 < t < θ} < r . It is obvious that P W j = P W * j ≤ P W * j+1 ≤ P W * ∞ , P W j = P W * j ≤ P W * j+1 ≤ P W * ∞ .(46) On the other hand, by the union bound and independence we have P W * ∞ ≤ z∈Z 3 P {0 < t < ∞ : Z * (t) ∈ B zr,2r } = ∅ P {1 ≤ k ≤ γ : Z k ∈ B zr,2r } = ∅ ≤ z∈Z 3 (2r) −1 E |{0 < t < ∞ : Z * (t) ∈ B zr,3r }| E |{1 ≤ k ≤ γ : Z k ∈ B zr,2r }| P W * ∞ ≤ z∈Z 3 P {1 < k < ∞ : Z * k ∈ B zr,2r } = ∅ P {0 < t ≤ θ : Z(t) ∈ B zr,2r } = ∅ ≤ z∈Z 3 (2r) −1 E |{1 < k < ∞ : Z * k ∈ B zr,2r }| E |{0 < t ≤ θ : Z(t) ∈ B zr,3r }|(47) Therefore, in view of (46) we have to control the mean occupation time measures appearing on the right hand side of (47). Define the following mean occupation measures (Green's functions): for A ⊂ R 3 let g(A) := E |{1 ≤ k ≤ γ : Z k ∈ A}| , g * (A) := E |{1 ≤ k ≤ γ : Z * k ∈ A}| , h(A) := E |{0 < t ≤ θ : Z(t) ∈ A}| , h * (A) := E |{0 < t ≤ θ : Z * (t) ∈ A}| , R * (A) := E |{1 ≤ n < ∞ : Ξ * n ∈ A}| , G * (A) := E |{1 ≤ k < ∞ : Z * k ∈ A}| , H * (A) := E |{0 < t < ∞ : Z * (t) ∈ A}| . It is obvious that G * (A) = g * (A) + R 3 g * (A − x)R * (dx), H * (A) = h * (A) + R 3 h * (A − x)R * (dx). (48) Bounds Lemma 6. The following upper bounds hold: Proof of Lemma 6. The proof of the bounds (49) hinges on the decompositions (40) and (41) of the forward and backward legs into independent parts. 
Let max{g(dx), g * (dx)} ≤ M (dx), max{h(dx), h * (dx)} ≤ L(dx),(49)R * (dx) ≤ K(dx),(50)G * (dx) ≤ K(dx), H * (dx) ≤ K(dx) + L(dx),(51)g 1 (A) := P Z 1 ∈ A = P Z * 1 ∈ A = C A 1(|x| > 1)e −|x| dx, h 1 (A) := E |{t ≤ τ 1 : Z(t) ∈ A}| = E |{t ≤ τ 1 : Z * (t) ∈ A}| = C A |x| −2 e − max{1,|x|} dx,(52) and g 2 (A) := E |{1 ≤ k ≤ γ : Z k − Z 1 ∈ A}| , g * 2 (A) := E |{1 ≤ k ≤ γ : Z * k − Z * 1 ∈ A}| , h 2 (A) := E |{0 < t ≤ θ − τ 1 : Z(τ 1 + t) − Z 1 ∈ A}| , h * 2 (A) := E |{0 < t ≤ θ − τ 1 : Z * (τ 1 + t) − Z * 1 ∈ A}| . Due to the exponential tail of the distribution of γ and θ, (39), there are constants C < ∞ and c > 0 such that for any s < ∞ max{g 2 ({x : |x| > s}), g * 2 ({x : |x| > s})} ≤ Ce −cs , max{h 2 ({x : |x| > s}), h * 2 ({x : |x| > s})} ≤ Ce −cs ,(53) and furthermore, g 2 (R 3 ) = g * 2 (R 3 ) = E γ < ∞, h 2 (R 3 ) = h * 2 (R 3 ) = E θ − τ 1 < ∞.(54) From the independent decompositions (41) and (40) it follows that g(A) = R 3 g 2 (A − x)g 1 (dx), g * (A) = R 3 g * 2 (A − x)g 1 (dx), h(A) = R 3 h 2 (A − x)g 1 (dx) + h 1 (A), h * (A) = R 3 h * 2 (A − x)g 1 (dx) + h 1 (A).(55) The bounds (49) readily follow from the explicit expressions (52), the convolutions (55) and the bounds (53) and (54). The bound (50) is a straightforward Green's function bound for the the random walk Ξ * n defined in (42), by noting that the distribution of the i.i.d. steps Z * k of this random walk has bounded density and exponential tail decay. Finally, the bounds (51) follow from the convolutions (48) and the bounds (49), (50). Remark: On the difference between Lemmas 2 and 6. Note the difference between the upper bounds for g in (24), respectively, (49), and on G in (25), respectively, (51). These are important and are due to the fact that the length first step in a Zor Z * -leg is distributed as (ξ | ξ > 1) ∼ EXP (1|0) rather than ξ ∼ EXP (1). Computation According to (47) P W j ≤ P W * ∞ ≤ (2r) −1 z∈Z 3 H * (B zr,3r )g(B zr,2r ), P W j ≤ P W * ∞ ≤ (2r) −1 z∈Z 3 G * (B zr,2r )h r (B zr,3r ).(56) Lemma 7. In dimension d = 3 the following bounds hold, with some C < ∞ Proof of Lemma 7. The bounds (57) (similarly to the bounds (29)) readily follow from explicit computations which we omit. Proof of Proposition 2. Proposition 2 now follows by inserting the bounds (57) and one of the bounds in (29) into equations (56). Proof of Proposition 1 Given a pack = (γ; (ξ j , u j ) : 1 ≤ j ≤ γ) (38), and arbitrary u 0 , u γ+1 ∈ S 2 , let (Y (t), X (t), Z(t)) : 0 ≤ t ≤ θ be the triplet of Markovian flight process, Lorentz exploration process and auxiliary Z-process jointly constructed with these data. We will prove the following bounds, stated in increasing order of difficulty/complexity. P {X (t) ≡ Z(t) : 0 − ≤ t ≤ θ + } ∩ { γ j=1 η j > 1} ≤ Cr 2 |log r| ,(58)P {X (t) ≡ Z(t) : 0 − ≤ t ≤ θ + } ∩ { γ j=1 η j = 0} ≤ Cr 2 |log r| ,(59)P {X (t) ≡ Z(t) : 0 − ≤ t ≤ θ + } ∩ { γ j=1 η j = 1} ≤ Cr 2 |log r| 2 .(60) Note that by construction η 1 = η 2 = η 3 = η γ = 0, so the sums on the left hand side go actually from 4 to γ − 1 . We stated and prove these bounds in their increasing order of complexity: (58) (proved in section 6.1) and (59) (proved in section 6.2) are of purely probabilistic nature while (60) (proved in sections 6.3-6.7) also relies on the the finer geometric understanding of the mismatch events η j = 1 and η j = 1. Proof of (58) This follows directly from Lemma 1. 
Indeed, given γ and = ( j ) 1≤j≤γ , due to (20), P γ j=1 η j > 1 ≤ γ max j P η j = η j+1 = 1 + γ 2 2 max j,k:|j−k|>1 P η j = η k = 1 ≤ Cγr 2 |log r| + Cγ 2 r 2 , and hence, due to the exponential tail bound (39) we get P γ−1 j=4 η j > 1 = E P γ−1 j=4 η j > 1 ≤ Cr 2 |log r| . which concludes the proof of (58). Proof of (59) First note that by construction of the processes (X (t), Z(t)) : 0 − < t < θ + the following identities hold: {X (t) ≡ Z(t) : 0 − ≤ t ≤ θ + } ∩ { γ j=1 η j = 0} = {X (t) ≡ Y (t) : 0 − ≤ t ≤ θ + } ∩ { γ j=1 η j = 0} {X (t) ≡ Y (t) : 0 − ≤ t ≤ θ + } = 0<j<γ min τ j ≤t≤θ Y j−1 − Y (t) < r ∪ min 0≤t≤τ j Y j+1 − Y (t) < r And, hence {X (t) ≡ Z(t) : 0 − ≤ t ≤ θ + } ∩ { γ j=1 η j = 0} (61) = 0<j<γ min τ j ≤t≤τ j+1 Y j−1 − Y (t) < r ∪ min τ j−1 ≤t≤τ j Y j+1 − Y (t) < r ∩ {ξ j > 1} ∪ 0<j<γ min τ j+1 ≤t≤θ Y j−1 − Y (t) < r ∪ min 0≤t≤τ j−1 Y j+1 − Y (t) < r ⊂ 0<j<γ min τ j ≤t≤τ j+1 |Y j−1 − Y (t)| < 2r ∪ min τ j−1 ≤t≤τ j |Y j+1 − Y (t)| < 2r ∩ {ξ j > 1} ∪ 0<j<γ min τ j+1 ≤t≤θ |Y j−1 − Y (t)| < 2r ∪ min 0≤t≤τ j−1 |Y j+1 − Y (t)| < 2r By simple geometric inspection we see min τ j ≤t≤τ j+1 |Y j−1 − Y (t)| < 2r ∩ {ξ j > 1} ⊂ {∠(−u j−1 , u j ) < 4r} , min τ j−1 ≤t≤τ j |Y j+1 − Y (t)| < 2r ∩ {ξ j > 1} ⊂ {∠(−u j+1 , u j ) < 4r} . And therefore, max P min τ j ≤t≤τ j+1 |Y j−1 − Y (t)| < 2r ∩ {ξ j > 1} ≤ Cr 2 max P min τ j−1 ≤t≤τ j |Y j+1 − Y (t)| < 2r ∩ {ξ j > 1} ≤ Cr 2 .(62) On the other hand, from the conditional Green's function computations of section 3, in particular from Lemma 3, we get max P min τ j+1 ≤t≤θ |Y j−1 − Y (t)| < 2r ≤ sup P min τ 2 ≤t<∞ |Y (t)| < 2r ≤ Cr 2 |log r| , max P min 0≤t≤τ j−1 |Y j+1 − Y (t)| < 2r ≤ sup P min τ 2 ≤t<∞ |Y (t)| < 2r ≤ Cr 2 |log r| .(63) Putting (61), (62) and (63) together yields P {X (t) ≡ Z(t) : 0 − ≤ t ≤ θ + } ∩ { γ−1 j=4 η j = 0} ≤ Cγr 2 |log r| , and hence, taking expectation over , we get (59). Proof of (60) -preparations Let γ ∈ {2} ∪ {5, 6, . . . }, and = ( j ) 1≤j≤γ ∈ {0, 1} γ compatible with the definition of a pack, and 3 < k < γ be fixed. Given a pack with signature we define yet another auxiliary process Z (k) (t) : 0 − < t < θ + as follows: • On 0 − < t ≤ τ k−1 , Z (k) (t) = Y (t). • On τ k−1 < t ≤ τ k , Z (k) (t) is constructed according to the rules of the Z-process, given in section 2.4. • On τ k < t < θ + , Z (k) (t) = Z (k) (τ k ) + Y (t) − Y (τ k ). Note that on the event {η j = δ j,k : 1 ≤ j ≤ γ} we have Z (k) (t) ≡ Z(t), 0 − < t < θ + . We will show that max ,k P {X (t) ≡ Z (k) (t) : 0 − ≤ t ≤ θ + } ∩ {η j = δ j,k : 1 ≤ j ≤ γ} ≤ max ,k P {X (t) ≡ Z (k) (t) : 0 − ≤ t ≤ θ + } ∩ {η k = 1} ≤ Cγ 2 r 2 |log r| 2 ,(64) and hence max P {X (t) ≡ Z(t) : 0 − ≤ t ≤ θ + } ∩ { γ k=1 η k = 1} ≤ γ max ,k P {X (t) ≡ Z(t) : 0 − ≤ t ≤ θ + } ∩ {η j = δ j,k : 1 ≤ j ≤ γ} ≤ Cγ 3 r 2 |log r| 2 . Then, taking expectation over we get (60). In order to prove (64) first write P {X (t) ≡ Z (k) (t) : 0 − ≤ t ≤ θ + } ∩ {η j = δ j,k : 1 ≤ j ≤ γ} ≤ P {X (t) ≡ Z (k) (t) : 0 − ≤ t ≤ θ + } ∩ {η k = 1} = P {X (t) ≡ Z (k) (t) : 0 − ≤ t ≤ θ + } ∩ { η k = 1} + P {X (t) ≡ Z (k) (t) : 0 − ≤ t ≤ θ + } ∩ { η k = 1} ∩ { η k = 0} , and note that the three parts Z (k) (t) : 0 − < t < τ k−3 = Y (t) : 0 − < t < τ k−3 , Z (k) (τ k−3 + t) − Z (k) (τ k−3 ) : 0 ≤ t ≤ τ k − τ k−3 , Z (k) (τ k ) + t) − Z (k) (τ k ) : 0 ≤ t < θ + − τ k = Y (τ k ) + t) − Y (τ k ) : 0 ≤ t < θ + − τ k ,(65) are independent -even if the events { η k = 1}, respectively, { η k = 1} ∩ { η k = 0} are specified. 
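Let us spell out why the independence claimed after (65) survives the conditioning; this is an elementary remark, recorded here only for completeness. Given γ and the signature ε, the ingredient variables (ξ_j , u_j), 1 ≤ j ≤ γ, are independent (the ξ_j being EXP(1|ε_j)-distributed and the u_j uniform on S^2), and the three parts in (65) are measurable functions of the disjoint index blocks
\[
\{\,j\le k-3\,\},\qquad \{\,k-2\le j\le k\,\},\qquad \{\,j\ge k+1\,\},
\]
respectively, together with the deterministic boundary data u_0 and u_{γ+1}. On the other hand, by (18) both indicators entering the events in question are functions of (y_{k−2}, y_{k−1}, y_k) only, that is, of the middle block. Hence the conditioning affects only the middle part of (65) and leaves the product structure of the three parts intact.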
From the construction of the processes (X (t), Z (k) (t)) : 0 − < t < θ + it follows that if Z (k) (t) : 0 − < t < θ + is mechanically r-consistent then X (t) ≡ Z (k) (t) : 0 − < t < θ + . Denote by A (k) a,a , 1 ≤ a ≤ 3, the event that the a-th part of the decomposition (65) is mechanically r-inconsistent, and by A a,b = A b,a , 1 ≤ a, b ≤ 3, a = b, the event that the a-th and b-th parts of the decomposition (65) are mechanically r-incompatible -in the sense of the definitions (16) and (17) in section 2.3. In order to prove (64) we will have to prove appropriate upper bounds on the conditional probabilities P { η k = 1} ∩ A (k) a,b , P { η k = 1} ∩ { η k = 0} ∩ A (k) a,b , a, b = 1, 2, 3.(66) These are altogether 12 bounds. However, some of them are formally very similar. A (k) 1,1 , A(k) 3,3 and A (k) 1,3 do not involve the middle part and therefore do not rely on the geometric arguments of the forthcoming sections 6.4-6.6. Applying directly (19), (27), (29) and similar procedures as in section 3.4, without any new effort we get P { η k = 1} ∩ A (k) a,b ≤ Cγ 2 r 2 , P { η k = 1} ∩ { η k = 0} ∩ A (k) a,b ≤ Cγ 2 r 2 , a, b = 1, 3.(67) We omit the repetition of these details. The remaining six bounds rely on the geometric arguments of sections 6.4-6.6 and, therefore, are postponed to section 6.7 Geometric estimates We analyse the middle segment of the process Z (k) , presented in (65), restricted to the events { η k = 1}, respectively, { η k = 1} ∩ { η k = 0}. Since everything done in this analysis is invariant under time and space translations and also under rigid rotations of R 3 it will be notationally convenient to place the origin of space-time at (τ k−2 , Z(τ k−2 )) and choose u k−2 = e = (1, 0, 0), a fixed element of S 2 . So, the ingredient random variables are (ξ − , u, ξ, v, ξ + ), fully independent and distributed as ξ − ∼ EXP (1| k−2 ), ξ ∼ EXP (1| k−1 ) = EXP (1|1), ξ + ∼ EXP (1| k ), u, v ∼ U N I(S 2 ). It will be enlightening to group the ingredient variables as (ξ − , (u, ξ, v), ξ + ), and accordingly write the sample space of this reduced context as R + × D × R + , where D := S 2 × R + × S 2 , with the probability measure EXP (1| k−2 ) × µ × EXP (1| k ) where, on D, µ = U N I(S 2 ) × EXP (1|1) × U N I(S 2 ).(68) For r < 1, let σ r , σ r : D → R + ∪ {∞} be σ r (u, ξ, v) := inf{t : ξu + r u − v |u − v| + te < r}, σ r (u, ξ, v) := inf{t : ξu + r u − e |u − e| + tv < r}, (with the usual convention inf ∅ = ∞), and A r := {(u, ξ, v) ∈ D : σ r < ∞}, A r := {(u, ξ, v) ∈ D : σ r < ∞}. We define the process Z r (t) : −∞ < t < ∞ and Z r (t) : −∞ < t < ∞ in terms of (u, ξ, v) ∈ A r , respectively, (u, ξ, v) ∈ A r as follows. Strictly speaking, these are deficient processes, since µ( A r ) < 1, and µ( A r ) < 1. • On −∞ < t ≤ 0, Z r (t) = Z r (t) = te. • On 0 ≤ t ≤ ξ, Z r (t) = Z r (t) = tu, • On ξ ≤ t < ∞, •• Z r (t) = Z r (ξ) + (t − ξ)u, •• Z r (t) is the trajectory of a mechanical particle, with initial position Z r (ξ) and initial velocity˙ Z r (ξ + ) = v, bouncing elastically between two infinite-mass spherical scatterers centred at r e−u |e−u| , respectively, ξu + r u−v |u−v| , and, eventually, flying indefinitely with constant terminal velocity. The trapping time β r , β r ∈ R + and escape (terminal) velocity w r , w r ∈ S 2 of the process Z r (t), respectively, Z r (t), are β r := 0, w r := u, β r := sup{s < ∞ :˙ Z r (ξ + s + ) =˙ Z r (ξ + s − )}, w r :=˙ Z r (ξ + β + r ). (69) Note that β r ≥ σ r . 
The relation of the middle segment of (65) to Z r and Z r is the following: { η k = 1}, Z (k) (τ k−2 + t) − Z (k) (τ k−2 ) : −ξ k−2 ≤ t ≤ ξ k−1 + ξ k ∼ {ξ − > σ r }, Z r (t) : −ξ − ≤ t ≤ ξ + ξ + , { η k = 0} ∩ { η k = 1}, Z (k) (τ k−2 + t) − Z (k) (τ k−2 ) : −ξ k−2 ≤ t ≤ ξ k−1 + ξ k ∼ {ξ − ≤ σ r } ∩ {ξ + > σ r }, Z r (t) : −ξ − ≤ t ≤ ξ + ξ + ,(70) where ∼ stands for equality in distribution. So, in order to prove (64) we have to prove some subtle estimates for the processes Z r amd Z r . The main estimates are collected in Proposition 3 below Proposition 3. There exists a constant C < ∞, such that for all r < 1 and s ∈ (0, ∞), the following bounds hold: µ (u, h, v) ∈ A r : ∠(−e, w r ) < s ≤ Cr min{s, 1},(71)µ (u, h, v) ∈ A r : ∠(−e, w r ) < s ≤ Cr min{s(|log s| ∨ 1), 1} (72) µ (u, h, v) ∈ A r : r −1 β r > s ≤ Cr min{s −1 (|log s| ∨ 1), 1}.(73) Remarks: The bound (71) is sharp in the sense that a lower bound of the same order can be proved. In contrast, we think that the upper bound in (72) is not quite sharp. However, it is sufficient for our purposes so we don't strive for a better estimate. The following consequence of Proposition 3 will be used to prove (60). Corollary 2. There exists a constant C < ∞ such that the following bounds hold: P { η k = 1} ∩ { min τ k−2 ≤t≤τ k Z (k) (t) − Z (k) (τ k−3 ) < s} ≤ Crs(|log s| ∨ 1),(74)P { η k = 1} ∩ { min τ k−3 ≤t≤τ k−1 Z (k) (t) − Z (k) (τ k ) < s} ≤ Crs(|log s| ∨ 1),(75)P { η k = 0} ∩ { η k = 1} ∩ { min τ k−2 ≤t≤τ k Z (k) (t) − Z (k) (τ k−3 ) < s} (76) ≤ Cr max{s |log s| 2 , r |log r| 2 } P { η k = 0} ∩ { η k = 1} ∩ { min τ k−3 ≤t≤τ k−1 + β Z (k) (t) − Z (k) (τ k ) < s}(77) ≤ Cr max{s |log s| 2 , r |log r| 2 } Proposition 3 and its Corollary 2 are proved in sections 6.5, respectively, 6.6. 6.5 Geometric estimates ctd: Proof of Proposition 3 Preparations Beside the probability measure µ (see (68)) we will also need the flat Lebesgue measure on D, λ = U N I(S 2 ) × LEB(R + ) × U N I(S 2 ), so that dµ(u, h, v) = e 1−h e − 1 1{0 ≤ h < 1}dλ(u, h, v). For r > 0 we define the dilation map D r : D → D as D r (u, h, v) = (u, rh, v), and note that A r = D r A 1 A r = D r A 1 . In the forthcoming steps all events in A r and A r will be mapped by the inverse dilation D −1 r = D r −1 into A 1 , respectively, A 1 . Therefore, in order to simplify notation we will use A := A 1 and A := A 1 . The dilation D r transforms the measures µ as follows. Given an event E ⊂ D, µ(D r E) = DrE e 1−h e − 1 1{0 ≤ h ≤ 1}dλ(u, h, v) = r E e 1−rh e − 1 1{0 ≤ h ≤ r −1 }dλ(u, h, v),(78) and hence, for any event E ⊂ D and anyh < ∞ e 1−rh e − 1 rλ(E ∩ {h ≤h}) ≤ µ(D r E) ≤ e e − 1 rλ(E).(79) The following simple observation is of paramount importance in the forthcoming arguments: Proposition 4. In dimension 3 (and more) λ( A) = λ( A) < ∞.(80) Proof of Proposition 4. Obviously, A ⊂ A := {(u, h, v) ∈ D : ∠(−e, u) ≤ 2h −1 }, A ⊂ A := {(u, h, v) ∈ D : ∠(−u, v) ≤ 2h −1 }. Since, in dimension 3, {(u, v) ∈ S 2 × S 2 : ∠(−e, u) < 2h −1 } = {(u, v) ∈ S 2 × S 2 : ∠(−u, v) < 2h −1 } ≤ C min{h −2 , 1}, the claim follows by integrating over h ∈ R + . Remark: In 2-dimension, the corresponding sets A, A have infinite Lebesgue measure and, therefore, a similar proof would fail. 
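To make the computation behind Proposition 4, and the dichotomy of the remark, explicit (an elementary estimate, included only for the reader's convenience): each of the two sets in (80) is contained in a set of the form {(u, h, v) ∈ D : ∠(w, w') ≤ 2h^{−1}} with (w, w') = (−e, u), respectively (−u, v), and on S^2 the uniform measure of a cap of angular radius t is of order min{t^2, 1}. Hence
\[
\lambda(\widehat A)\vee\lambda(\widetilde A)\;\le\;C\int_0^\infty \min\{h^{-2},1\}\,\mathrm{d}h
\;=\;C\Big(1+\int_1^\infty h^{-2}\,\mathrm{d}h\Big)\;=\;2C\;<\;\infty .
\]
In d = 2 the corresponding arc on S^1 has uniform measure of order min{t, 1}, and the analogous integral ∫_0^∞ min{h^{−1}, 1} dh diverges, so the very same bound fails; this is the content of the remark above.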
Due to (80) in 3-dimensions the following conditional probability measures make sense λ A (·) = λ(· A) := λ(· ∩ A) λ( A) , λ A (·) = λ(· A) := λ(· ∩ A) λ( A) , and, moreover, due to (79) and (80), for any event E ∈ D lim r→0 µ(D r E | A r ) = λ A (E), lim r→0 µ(D r E | A r ) = λ A (E), In a technical sense, we will only use the upper bound in (79), and (80). In view of the upper bound in (79), in order to prove (71), (72) and (73) we need, in turn, λ (u, h, v) ∈ A : ∠(−e, w) ≤ s ≤ C min{s, 1},(81)λ (u, h, v) ∈ A : ∠(−e, w) ≤ s ≤ C min{s(|log s| ∨ 1), 1},(82)λ (u, h, v) ∈ A : β > s ≤ C min{s −1 (|log s| ∨ 1), 1}.(83) Here, and in the rest of this section, we use the simplified notation w := w 1 , w := w 1 , β := β 1 . Proof of (81) Proof. This is straightforward. Recall (69): w(u, h, v) = u. For easing notation let ϑ := ∠(−e, u) and note that for any t ∈ R + {u ∈ S 2 : 0 ≤ ϑ ≤ t} ≤ C min{t 2 , 1}, with some explicit C < ∞. Then, Figure 3: Above we show a 3 dimensional example of the geometric labelling used in this section. The Z trajectory enters with velocity e from beneath the relevant plane (the dotted line represents motion below the plane). After which the particle remains above the plane. λ (u, h, v) ∈ A : ∠(−e, w)) ≤ s ≤ λ (u, h, v) ∈ A : ϑ ≤ s ≤ λ (u, h, v) ∈ D : ϑ ≤ min{s, 2h −1 } = λ (u, h, v) ∈ D : {h ≤ 2s −1 } ∩ {ϑ ≤ s} + λ (u, h, v) ∈ D : {h ≥ 2s −1 } ∩ {ϑ ≤ 2h −1 } ≤ Cs. Let a and b be the vectors in R 3 pointing from the origin to the centre of the spherical scatterers of radius 1, on which the first, respectively, the second collision occurs: a = e − u |e − u| , b = hu + u − v |u − v| , and n the unit vector orthogonal to the plane determined by a and b, pointing so, that e · n > 0: n := a × b |a| |b| sin(∠(a, b)) , with a × b = (h + 1 |u − v| ) 1 |e − u| e × u − 1 |e − u| |u − v| e × v + 1 |e − u| |u − v| u × v,(84)|a| = 1, h − 1 ≤ |b| ≤ h + 1, 0 ≤ sin(∠(a, b)) ≤ 1.(85) are independent and distributed as w ∼ U N I(S 2 ), ϑ ∼ 1 {0≤t≤1} (1 − t 2 ) −1/2 tdt. Therefore, λ (u, h, v) ∈ A : |e · (u × v)| < 4s = ∞ 0 dh S 2 dw min{2/h,1} 0 (1 − t 2 ) −1/2 tdt1{|e · w| ≤ 4s t } = ∞ 0 dh min{2/h,1} 0 (1 − t 2 ) −1/2 dt min{4s, t} ≤ C min{s |log s| ∨ 1), 1}.(89) The last step follows from explicit computations which we omit. Finally, (87), (88) and (89) yield (82). Proof of (83). We proceed with the first (sharper) bound in (86) (the second (weaker) bound would yield only upper bound of order s −1/2 on the right hand side of (82)): λ (u, h, v) ∈ A : β > s ≤ λ (u, h, v) ∈ A : h > s 2 + λ (u, h, v) ∈ A : |v · n| < 2 s .(90) Bounding the first term on the right hand side of (90) is straightforward: λ (u, h, v) ∈ A : h > s 2 = ∞ s/2 {(u, v) ∈ S 2 × S 2 : ∠(−u, v) < 2h −1 } dh ≤ C ∞ s/2 min{h −2 , 1}dh ≤ C min{s −1 , 1}.(91) Concerning the second term on the right hand side of (90), this has exactly been done in the proof of (82) above, ending in (89) -with the rôle of s and s −1 swapped. (90), (91) and (89) yield (73). Geometric estimates ctd: Proof of Corollary 2 We start with the following straightforward geometric fact. Lemma 8. Let e, w ∈ S 2 and x ∈ R 3 . Then {t > 0 : min t≥0 x + t w + te < s} = {t > 0 : min t≥0 x + tw + t e < s} ≤ 4s ∠(−e, w) .(92) Proof of Lemma 8. This is elementary 3-dimensional geometry. We omit the details. Proof of (74) and (75). 
On { η k = 1} min τ k−2 ≤t≤τ k Z (k) (t) − Z (k) (τ k−3 ) ≥ min 0≤t |tu k−1 + ξ k−2 u k−2 | min τ k−3 ≤t≤τ k−1 Z (k) (t) − Z (k) (τ k ) ≥ min{min 0≤t |ξ k−1 u k−1 + tu k−2 + ξ k u k−1 | , ξ k }.(93) The bounds in (74) and (75) follow from applying (92) and (71), bearing in mind that the distribution density of ξ k−2 and ξ k is bounded. Since these are very similar we will only prove (74) here. P { η k = 1} ∩ { min τ k−2 ≤t≤τ k Z (k) (t) − Z (k) (τ k−3 ) < s} ≤ P { η k = 1} ∩ {min t≥0 |tu k−1 + ξ k−2 u k−2 | < s} = Ar P ξ − ∈ {t : min t≥0 tu + t e < s} dµ(u, h, v) ≤ C Ar min{ s ∠(−e, u) , 1}dµ(u, h, v) ≤ Crs(|log s| ∨ 1). In the first step we used (93). The second step follows from the representation (70). The third step relies on (92) and on uniform boundedness of the distribution density of ξ − (which is either EXP (1|1) or EXP (1|0), depending on the value of k−2 ). Finally, the last calculation is based on (71). Proof of (76). min τ k−2 ≤t≤τ k Z (k) (t) − Z (k) (τ k−3 )(94) = min min τ k−2 ≤t≤τ k−1 + β Z (k) (t) − Z (k) (τ k−3 ) , min τ k−1 + β≤t≤τ k Z (k) (t) − Z (k) (τ k−3 ) . Here, and in the rest of this proof, β and w denote the trapping time and escape direction of the recollision sequence: β := max{s ≤ ξ k :Ż (k) (τ k−1 + s − ) =Ż (k) (τ k−1 + s + )} w :=Ż (k) (τ k−1 + β + ). To bound the first expression on the right hand side of (94) we first observe that by the triangle inequality min τ k−2 ≤t≤τ k−1 + β Z (k) (t) − Z (k) (τ k−3 ) ≥ ξ k−2 − ξ k−1 − 4r(95) Applying the representation and bounds developed in sections 6.4, 6.5, P { η k = 0} ∩ { η k = 1} ∩ { min τ k−2 ≤t≤τ k−1 + β Z (k) (t) − Z (k) (τ k−3 ) < s} ≤ P { η k = 0} ∩ { η k = 1} ∩ {ξ k−2 ≤ ξ k−1 + 4r + s} = Ar P ξ − < h + 4r + s dµ(u, h, v) ≤ C Ar (min{h, 1} + 4r + s)dµ(u, h, v) ≤ Cr 2 + Crs + Cr 2 |log r| . In the first step we used (95). The second step follows from the representation (70). The third step relies on on uniform boundedness of the distribution density of ξ − (which is either EXP (1|1) or EXP (1|0), depending on the value of k−2 ). Finally, the last step follows from explicit calculation, using (79). To bound the second term on the right hands side of (94) we proceed as in the proof of (74) above. First note that min τ k−1 + β≤t≤τ k Z (k) (t) − Z (k) (τ k−3 ) ≥ min 0≤t (Z (k) (τ k−2 ) − Z (k) (τ k−1 + β)) + t w + ξ k−2 u k−2 .(97) Using in turn (97), (70), (92) and uniform boundedness of the distribution density of ξ − (which is either EXP (1|1) or EXP (1|0), depending on the value of k−2 ), and finally (72), we obtain: P { η k = 0} ∩ { η k = 1} ∩ min τ k−1 + β≤t≤τ k Z (k) (t) − Z (k) (τ k−3 ) < s ≤ P { η k = 0} ∩ { η k = 1} ∩ {min 0≤t (Z (k) (τ k−2 ) − Z (k) (τ k−1 + β)) + t w + ξ k−2 u k−2 < s} = Ar P ξ − ∈ {t : min 0≤t Z r ( β r ) + t w r + t e < s} dµ(u, h, v) ≤ C Ar min{ s ∠(−e, w r ) , 1}dµ(u, h, v) ≤ Crs(|log s| 2 ∨ 1).(98) From (94), (96) and (98) we obtain (76). Proof of (77). We proceed very similarly as in the proof of (76). min τ k−3 ≤t≤τ k−1 + β Z (k) (t) − Z (k) (τ k )(99) ≥ min min τ k−2 ≤t≤τ k−1 + β Z (k) (t) − Z (k) (τ k ) , min τ k−3 ≤t≤τ k−2 Z (k) (t) − Z (k) (τ k ) . 
To bound the first expression on the right hand side of (99) we first observe that by the triangle inequality min τ k−2 ≤t≤τ k−1 + β Z (k) (t) − Z (k) (τ k ) ≥ ξ k − 2 β − 4r(100) Using in turn (100), (70), (73) and explicit computation based on uniform boundedness of the distribution density of ξ + (which is either EXP (1|1) or EXP (1|0), depending on the value of k ) we write P { η k = 0} ∩ { η k = 1} ∩ { min τ k−2 ≤t≤τ k−1 + β Z (k) (t) − Z (k) (τ k) < s} ≤ P { η k = 0} ∩ { η k = 1} ∩ {ξ k < 8r + 2s} + P { η k = 0} ∩ { η k = 1} ∩ {ξ k < 4 β} = P ξ + < 8r + 2s µ( A r ) + E µ((u, h, v) ∈ A r : ξ + ≤ 4 β r ) ≤ Cr(r + s) + CrE min{ ξ + 2r −1 log ξ + 2r ∨ 1 , 1} ≤ Cr 2 + Crs + Cr 2 |log r| 2 . The second term on the right hand side of (99) is bounded in a very similar way as the analogous second term on the right hand side of (94), see (97)-(98). Without repeating these details we state that P { η k = 0} ∩ { η k = 1} ∩ min τ k−2 ≤t≤τ k−1 Z (k) (t) − Z (k) (τ k ) < s ≤ Crs |log s| 2 .(102) Eventually, from (99), (101) and (102) we obtain (77). Proof of (60) -concluded Recall the events A P { η k = 1} ∩ A (k) 2,2 ≤ Cγr 2 |log r| , P { η k = 1} ∩ { η k = 0} ∩ A (k) 2,2 ≤ Cγr 2 |log r| 2 .(103) It remains to prove P { η k = 1} ∩ A (k) b,2 ≤ Cγr 2 |log r| , P { η k = 1} ∩ { η k = 0} ∩ A (k) b,2 ≤ Cγr 2 |log r| 2 , b = 1, 3.(104) Since the cases b = 1 and b = 3 are formally identical we will go through the steps of proof with b = 3 only. In order to do this we first define the necessary occupation time measures (Green's functions). For A ⊂ R 3 , define the following occupation time measures for the last part of (65) G (k) (A) :=E #{1 ≤ j ≤ γ − k : Y (τ j ) ∈ A} k+j : 1 ≤ j ≤ γ − k =E #{k + 1 ≤ j ≤ γ : Z (k) (τ j ) − Z (k) (τ k ) ∈ A} ∩ { η k = 1} =E #{k + 1 ≤ j ≤ γ : Z (k) (τ j ) − Z (k) (τ k ) ∈ A} ∩ { η k = 1} ∩ { η k = 0} , H (k) (A) :=E |{0 ≤ t ≤ τ γ−k : Y (t) ∈ A}| k+j : 1 ≤ j ≤ γ − k =E {τ k ≤ t ≤ θ : Z (k) (t) − Z (k) (τ k ) ∈ A} ∩ { η k = 1} =E {τ k ≤ t ≤ θ : Z (k) (t) − Z (k) (τ k ) ∈ A} ∩ { η k = 1} ∩ { η k = 0} . Similarly, define the following occupation time measures for the middle part of (65) G (k) (A) := E #{1 ≤ j ≤ 3 : Z (k) (τ k−j ) − Z (k) (τ k ) ∈ A} · η k H (k) (A) := E {τ k−3 ≤ t ≤ τ k : Z (k) (t) − Z (k) (τ k ) ∈ A} · η k G (k) (A) := E #{1 ≤ j ≤ 3 : Z (k) (τ k−j ) − Z (k) (τ k ) ∈ A} · η k · (1 − η k ) H (k) (A) := E {τ k−3 ≤ t ≤ τ k : Z (k) (t) − Z (k) (τ k ) ∈ A} · η k · (1 − η k ) . Using the independence of the middle and last parts in the decomposition (65), similarly as (22) or (47), following bounds are obtained P { η k = 1} ∩ A (k) 3,2 ≤ Cr −1 R 3 G (k) (B x,2r ) H (k) (dx) + Cr −1 R 3 H (k) (B x,3r ) G (k) (dx) P { η k = 1} ∩ { η k = 0} ∩ A (k) 3,2 ≤ ≤ Cr −1 R 3 G (k) (B x,2r ) H (k) (dx) + Cr −1 R 3 H (k) (B x,3r ) G (k) (dx)(105) Due to (28) of Lemma 3 by direct computations the following upper bounds hold G (k) (B x,2r ) ≤ CF (|x|), H (k) (B x,3r ) ≤ CF (|x|),(106) where C < ∞ is an appropriately chosen constant and F : R + → R, F (u) := r1{0 ≤ u < r} + r 3 u 2 1{r ≤ u < 1} + Finally, we also have the global bounds G (k) (R 3 ) = 3E η k ≤ Cr, H (k) (R 3 ) = E η k · k j=k−2 ξ j ≤ Cr, G (k) (R 3 ) = 3E η k · (1 − η k ) ≤ Cr, H (k) (R 3 ) = E η k · (1 − η k ) · k j=k−2 ξ j ≤ Cr.(108) We will prove the upper bound (104) for the first term on the right hand side of the first line in (105). The other four terms are done in very similar way. 
First we split the integral as R 3 G (k) (B x,2r ) H (k) (dx) = |x|<1 G (k) (B x,2r ) H (k) (dx) + |x|≥1 G (k) (B x,2r ) H (k) (dx)(109) and note that due to (106) and (108) the second term on the right hand side is bounded as |x|≥1 G (k) (B x,2r ) H (k) (dx) ≤ Cr 4 .(110) To bound the first term on the right hand side of (109) we proceed as follows In the first step we have used (106). The second step is an integration by parts. In the third step we use (107), (108) and the explicit form of the function F . The last step is explicit integration. Finally, (109), (110), (111) and identical comoputations for the second term on the right hand side of the first line in (105) yield the first inequality in (104). The second line of (104) for b = 3 is proved in an identical way, which we omit to repeat. The cases b = 1 is done in a formally identical way. Finally, (60) follows from (67), (103) and (104). Proof of Theorem 2 -concluded As in section 4.3 let n = (γ n ; (ξ n,j , u n,j ) : 1 ≤ j ≤ γ n ), n ≥ 1, be a sequence of i.i.d packs. Denote θ n , ((Y n (t), Z n (t)) : 0 ≤ t ≤ θ n ) the pair of Y and (forward) Z-processes constructed from them and Y (t) = νt k=1 Y (θ n ) + Y νt+1 ({t}), Z(t) = νt k=1 Z(θ n ) + Z νt+1 ({t}). Beside these two we now define yet another auxiliary process t → X (t) as follows: (X n (t) : 0 ≤ t ≤ θ n ) is the Lorentz exploration process constructed with data from (Y n (t) : 0 ≤ t ≤ θ n ) and incoming velocity u n,0 = u 0 if n = 1, X n−1 (θ − n−1 ) if n > 1. Finally, from these legs concatenate X (t) = νt k=1 X (θ n ) + X νt+1 ({t}). Note that the auxiliary process (X (t) : 0 ≤ t < ∞) is not identical with the Lorentz exploration process (X(t) : 0 ≤ t < ∞), constructed with data from (Y (t) : 0 ≤ t ≤ ∞) and initial incoming velocity u 0 , since the former one does not takes into account memory effects caused by earlier legs. However, based on Propositions 1 and 2, we will prove that until time T = T (r) = o(r −2 |log r| −2 ) the processes t → X(t), t → X (t), and t → Z(t) coincide with high probability. For this, we define the (discrete) stopping times ρ := min{n : X n (t) ≡ Z n (t), 0 ≤ t ≤ θ n } σ := min{n : max{1 Wn , 1 Wn > 0} = 1}, and note that by construction inf{t : Z(t) = X(t)} ≥ Θ min{ρ,σ}−1 . Remark: Actually, (113) holds under the much weaker condition lim r→∞ r log log T = 0. This can be achieved by applying the LIL rather than a WLLN type of argument to bound max 0≤t≤T |Y (t) − Z(t)| in the proof of Lemma 10, below. However, since the condition of Lemma 9 can not be much relaxed, in the end we would not gain much with the extra effort. Proof of Lemma 9. P Θ min{ρ,σ}−1 < T ≤ P ρ ≤ 2E θ −1 T + P σ ≤ 2E θ −1 T + P 2E θ −1 T j=1 θ j < T ≤ Cr 2 |log r| T + Cr 2 T + Ce −cT ,(114) where C < ∞ and c > 0. The first term on the right hand side of (114) is bounded by union bound and (43) from Proposition 1. Likewise, the second term is bounded by union bound and (45) of Propositions 2. In bounding the third term we use a large deviation upper bound for the sum of independent θ j -s. Finally, (112) readily follows from (114). Proof of Lemma 10. Note first that max 0≤t≤T |Y (t) − Z(t)| ≤ ν T +1 j=1 η j ξ j , with ν T and η j defined in (15), respectively, (18). Hence, P max 0≤t≤T |Y (t) − Z(t)| > δ √ T ≤ P 2T j=1 η j ξ j > δ √ T + P ν T > 2T ≤ Cδ −1 √ T r + e −cT ,(115) with C < ∞ and c > 0. The first term on the right hand side of (115) is bounded by Markov's inequality and the straightforward bound E η j ξ j ≤ Cr. 
The bound on the second term follows from a straightforward large deviation estimate on ν_T ∼ POI(T). Finally, (113) readily follows from (115). (9) is a direct consequence of Lemmas 9 and 10, and this concludes the proof of Theorem 2.
17,087
1907.03228
2891409106
The problem of entity-typing has been studied predominantly in supervised learning fashion, mostly with task-specific annotations (for coarse types) and sometimes with distant supervision (for fine types). While such approaches have strong performance within datasets, they often lack the flexibility to transfer across text genres and to generalize to new type taxonomies. In this work we propose a zero-shot entity typing approach that requires no annotated data and can flexibly identify newly defined types. Given a type taxonomy defined as Boolean functions of FREEBASE "types", we ground a given mention to a set of type-compatible Wikipedia entries and then infer the target mention's types using an inference algorithm that makes use of the types of these entries. We evaluate our system on a broad range of datasets, including standard fine-grained and coarse-grained entity typing datasets, and also a dataset in the biological domain. Our system is shown to be competitive with state-of-the-art supervised NER systems and outperforms them on out-of-domain datasets. We also show that our system significantly outperforms other zero-shot fine typing systems.
Named Entity Recognition (NER), in which the goal is to discover mention boundaries in addition to typing, often with a small set of mutually exclusive types, has received a considerable amount of attention @cite_37 @cite_21 @cite_0 @cite_10 @cite_7 .
{ "abstract": [ "We have recently completed the sixth in a series of \"Message Understanding Conferences\" which are designed to promote and evaluate research in information extraction. MUC-6 introduced several innovations over prior MUCs, most notably in the range of different tasks for which evaluations were conducted. We describe some of the motivations for the new format and briefly discuss some of the results of the evaluations.", "We analyze some of the fundamental design challenges and misconceptions that underlie the development of an efficient and robust NER system. In particular, we address issues such as the representation of text chunks, the inference approach needed to combine local NER decisions, the sources of prior knowledge and how to use them within an NER system. In the process of comparing several solutions to these challenges we reach some surprising conclusions, as well as develop an NER system that achieves 90.8 F1 score on the CoNLL-2003 NER shared task, the best reported result for this dataset.", "It is often claimed that Named Entity recognition systems need extensive gazetteers---lists of names of people, organisations, locations, and other named entities. Indeed, the compilation of such gazetteers is sometimes mentioned as a bottleneck in the design of Named Entity recognition systems.We report on a Named Entity recognition system which combines rule-based grammars with statistical (maximum entropy) models. We report on the system's performance with gazetteers of different types and different sizes, using test material from the MUC-7 competition. We show that, for the text type and task of this competition, it is sufficient to use relatively small gazetteers of well-known names, rather than large gazetteers of low-frequency names. We conclude with observations about the domain independence of the competition and of our experiments.", "We describe the CoNLL-2003 shared task: language-independent named entity recognition. We give background information on the data sets (English and German) and the evaluation method, present a general overview of the systems that have taken part in the task and discuss their performance.", "This paper presents a classifier-combination experimental framework for named entity recognition in which four diverse classifiers (robust linear classifier, maximum entropy, transformation-based learning, and hidden Markov model) are combined under different conditions. When no gazetteer or other additional training resources are used, the combined system attains a performance of 91.6F on the English development data; integrating name, location and person gazetteers, and named entity systems trained on additional, more general, data reduces the F-measure error by a factor of 15 to 21 on the English data." ], "cite_N": [ "@cite_37", "@cite_7", "@cite_21", "@cite_0", "@cite_10" ], "mid": [ "2068882115", "2004763266", "1982982698", "2144578941", "2056451646" ] }
Zero-Shot Open Entity Typing as Type-Compatible Grounding
Entity type classification is the task of connecting an entity mention to a given set of semantic types. The commonly used type sets range in size and level of granularity, from a small number of coarse-grained types (Tjong Kim Sang and De Meulder, 2003) to over a hundred fine-grained types (Ling and Weld, 2012). It is understood that semantic typing is a key component in many natural language understanding tasks, including Question Answering (Toral et al., 2005; Li and Roth, 2005) and Textual Entailment (Dagan et al., 2010, 2013). Consequently, the ability to type mentions semantically across domains and text genres, and to use a flexible type hierarchy, is essential for solving many important challenges. Nevertheless, most commonly used approaches and systems for semantic typing (e.g., CORENLP (Manning et al., 2014), COGCOMPNLP (Khashabi et al., 2018), NLTK (Loper and Bird, 2002), SPACY) are trained in a supervised fashion and rely on high-quality, task-specific annotation. Scaling such systems to other domains and to a larger set of entity types faces fundamental restrictions. Coarse typing systems, which are mostly fully supervised, are known to fit a single dataset very well. However, their performance drops significantly on different text genres and even on new datasets. Moreover, adding a new coarse type requires manual annotation and retraining. For fine-typing systems, people have adopted a distant-supervision approach. Nevertheless, the number of types used is small: the distantly-supervised FIGER dataset covers only 113 types, a small fraction of the most conservative estimates of the number of types in the English language (the FREEBASE (Bollacker et al., 2008) and WORDNET (Miller, 1995) hierarchies consist of more than 1k and 1.5k unique types, respectively). More importantly, adapting these systems, once trained, to new type taxonomies cannot be done flexibly. As was argued in Roth (2017), there is a need to develop new training paradigms that support scalable semantic processing; specifically, there is a need to scale semantic typing to flexible type taxonomies and to multiple domains. In this work, we introduce ZOE, a zero-shot entity typing system with open type definitions. Given a mention in a sentence and a taxonomy of entity types with their definitions, ZOE identifies a set of types that are appropriate for the mention in this context.
Figure 1: ZOE maps a given mention to its type-compatible entities in Wikipedia and infers a collection of types using this set of entities. While the mention "Oarnniwsf," a football player in the U. of Washington, does not exist in Wikipedia, we ground it to other entities with approximately the same types ( §3).
ZOE does not require any training, and it makes use of existing data resources (e.g., Wikipedia) and tools developed without any task-specific annotation. The key idea is to ground each mention to a set of type-compatible Wikipedia entities. The benefit of using a set of Wikipedia titles as an intermediate representation for a mention is that there is much human-curated information in Wikipedia - categories associated with each page, FREEBASE types, and DBpedia types. These were put there independently of the task at hand and can be harnessed for many tasks: in particular, for determining the semantic types of a given mention in its context. In this grounding step, the guiding principle is that type-compatible entities often appear in similar contexts.
We rely on contextual signals and, when available, surface forms, to rank Wikipedia titles and choose those that are more compatible with a given mention. Importantly, our algorithm does not require a given mention to be in Wikipedia; in fact, in many cases (such as nominal mentions) the mentions are not available in Wikipedia. We hypothesize that any entity possible in English corresponds to some type-compatible entities in Wikipedia. We can then rely mostly on the context to reveal a set of compatible titles, those that are likely to share semantic types with the target mention. The fact that our system is not required to ground to the exact concept is a key difference between our grounding and "standard" Wikification approaches (Mihalcea and Csomai, 2007; Ratinov et al., 2011). As a consequence, while entity linking approaches rely heavily on priors associated with the surface forms and do not consider those that do not link to Wikipedia titles, our system mostly relies on context, regardless of whether the grounding actually exists or not. Figure 1 shows a high-level visualization of our system. Given a mention, our system grounds it into type-compatible entries in Wikipedia. The target mention "Oarnniwsf" is not in Wikipedia, yet it is grounded to entities with approximately correct types. In addition, while some of the grounded Wikipedia entries are inaccurate in terms of entity types, the resulting aggregated decision is correct. ZOE is an open type system, since it is not restricted to a closed set of types. In our experiments, we build on FREEBASE types as primitive types and use them to define types across seven different datasets. Note, however, that our approach is not fundamentally restricted to FREEBASE types; in particular, we allow types to be defined as Boolean formulae over these primitives (considering a type to be a set of entities). Furthermore, we support other primitives, e.g., DBPedia or Wikipedia entries. Consequently, our system can be used across type taxonomies; there is no need to restrict to previously observed types or retrain with annotations of new types. If one wants to use types that are outside our current vocabulary, one only needs to define the target type taxonomy in terms of the primitives used in this work. In summary, our contributions are as follows: • We propose a zero-shot open entity typing framework 1 that does not require training on entity-typing-specific supervised data. • The proposed system outperforms existing zero-shot entity typing systems. • Our system is competitive with fully-supervised systems in their respective domains across a broad range of coarse- and fine-grained typing datasets, and it outperforms these systems in out-of-domain settings.
Table 1: Comparison of recent work on entity typing. Our system does not require any labeled data for entity typing; therefore it works on new datasets without retraining.
A critical step in the design of zero-shot systems is the characterization of the output space. For supervised systems, the output representations are trivial, as they are just indices. For zero-shot systems, the output space is often represented in a high-dimensional space that encodes the semantics of the labels. In OTYPER (Yuan and Downey, 2018), each type embedding is computed by averaging the word embeddings of the words comprising the type.
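As a concrete illustration of this label-embedding idea used by OTYPER-style systems, the sketch below averages word vectors to build a type embedding; the tiny embedding table and the example type name are invented for illustration and are not taken from OTYPER.

```python
import numpy as np

# Toy word-embedding table; in practice these would be pre-trained vectors
# (the values below are made up for illustration).
word_vec = {
    "athlete": np.array([0.9, 0.1, 0.0]),
    "coach":   np.array([0.8, 0.2, 0.1]),
    "person":  np.array([0.7, 0.0, 0.3]),
}

def type_embedding(type_name: str) -> np.ndarray:
    """Embed a type label by averaging the embeddings of its component words."""
    words = type_name.strip("/").replace("/", " ").replace("_", " ").split()
    vecs = [word_vec[w] for w in words if w in word_vec]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

print(type_embedding("/person/athlete"))   # average of "person" and "athlete"
```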
The same idea is also used in PROTOLE (Ma et al., 2016), except that averaging is done only for a few prototypical instances of each type. In our work, we choose to define types using information in Wikipedia. This flexibility allows our system to perform well across several datasets without retraining. On a conceptual level, the work of Lin et al. (2012) and others is close to our approach. The governing idea in these works is to cluster mentions, followed by propagating type information from representative mentions. Table 1 compares our proposed system with several recently proposed models. Zero-Shot Open Entity Typing Types are conceptual containers that bind entities together to form a coherent group. Among the entities of the same type, type-compatibility creates a network of loosely connected entities: Definition 1 (Weak Type Compatibility) Two entities are type-compatible if they share at least one type with respect to a type taxonomy and the contexts in which they appear. In our approach, given a mention in a sentence, we aim to discover type-compatible entities in Wikipedia and then infer the mention's types using all the type-compatible entities together. The advantage of using Wikipedia entries is that the rich information associated with them allows us to infer the types more easily. Note that this problem is different from the standard entity linking or Wikification problem, in which the goal is to find the corresponding entity in Wikipedia. Wikipedia does not contain all entities in the world, but an entity is likely to have at least one type-compatible entity in Wikipedia. In order to find the type-compatible entities, we use the context of mentions as a proxy. Defining it formally: Definition 2 (Context Consistency) A mention m (in a context sentence s) is context-consistent with another well-defined mention m', if m can be replaced by m' in the context s, and the new sentence still makes logical sense. Hypothesis 1 Context consistency is a strong proxy for type compatibility. Based on this hypothesis, given a mention m in a sentence s, we find other context-consistent mentions in a Wikified corpus. Since the mentions in the Wikified corpus are linked to the corresponding Wikipedia entries, we can infer m's types by aggregating information associated with these Wikipedia entries. Figure 2 shows the high-level architecture of our proposed system. The inputs to the system are a mention m in a sentence s, and a type definition T. The output of the system is a set of types {t_Target} ⊆ T in the target taxonomy that best represents the given mention. The type definitions characterize the target entity-type space. In our experiments, we choose to use FREEBASE types to define the types across 7 datasets; that is, T is a mapping from the set of FREEBASE types to the set of target types: T : {t_FB} → {t_Target}. This definition comprises many atomic definitions; for example, we can define the type location as the disjunction of FREEBASE types like FB.location and FB.geography (i.e., location := FB.location ∨ FB.geography). The type definitions of a dataset reflect the understanding of a domain expert and the assumptions made in dataset design. Such definitions are often much cheaper to provide than annotating a full-fledged supervised dataset. For notational simplicity, we define a few conventions for the rest of the paper.
The notation t ∈ T simply means that t is a member of the image of the map T (i.e., t is a member of the target types). For a fixed concept c, the notation T(c) is the application of T(.) to the FREEBASE types attached to the concept c. For a collection of concepts C, T(C) is defined as ∪_{c∈C} T(c). We use T_coarse(.) to refer to the subset of coarse types of T(.), while T_fine(.) defines the fine type subset. Components in Figure 2 are described in the following sections. Initial Concept Candidate Generation Given a mention, the goal of this step is to quickly generate a set of Wikipedia entries based on other words in the sentence. Since there are millions of entries in Wikipedia, it is extremely inefficient to go through all entries for each mention. We adopt ideas from explicit semantic analysis (ESA) (Gabrilovich and Markovitch, 2007), an approach to representing words with a vector of Wikipedia concepts and to providing fast retrieval of the relevant Wikipedia concepts via inverted indexing. In our construction we use the WIKILINKS (Singh et al., 2012) corpus, which contains a total of 40 million mentions over 3 million concepts. Each mention in WIKILINKS is associated with a Wikipedia concept. To characterize it formally, in the WIKILINKS corpus, for each concept c, there are example sentences sent(c) = {s_i}. Offline computation: The first step is to construct an ESA representation for each word in the WIKILINKS corpus. We create a mapping from each word in the corpus to the relevant concepts associated with it. The result is a map S from tokens to concepts: S : w → {(c, score(c|w))} (see Figure 3), where score(c|w) denotes the association of the word w with concept c, calculated as the sum of the TF-IDF values of the word w in the sentences describing c: score(c|w) := Σ_{s ∈ sent(c) : w ∈ s} tf-idf(w, s). That is, we treat each sentence as a document and compute TF-IDF scores for the words in it. Online computation: For a given mention m and its sentence context s, we use our offline word-concept map S to find the concepts associated with each word, and aggregate them to create a single list of weighted concepts; i.e., Σ_{w∈s} S(w). The resulting concepts are sorted by the corresponding weights, and the top-ranked ESA candidates form a set C_ESA which is passed to the next step. Context-Consistent Re-Ranking After quick retrieval of the initial concept candidates, we re-rank concepts in C_ESA based on context consistency between the input mention and concept mentions in WIKILINKS. For this step, assume we have a representation that encodes the sentential information anchored on the mention. We denote this mention-aware context representation as SentRep(s|m). We define a measure of consistency between a concept c and a mention m in a sentence s: Consistency(c, s, m) = cosine(SentRep(s|m), ConceptRep(c)), (1) where ConceptRep(c) is the representation of a concept: ConceptRep(c) := avg{ SentRep(s|c) : s ∈ WIKILINKS, c ∈ s }, which is the average vector of the representations of all the sentences in WIKILINKS that describe the given concept. We use pre-trained ELMO (Peters et al., 2018), a state-of-the-art contextual and mention-aware word representation. In order to generate SentRep(s|m), we run ELMO on sentence s, where the tokens of the mention m are concatenated into a single token, and retrieve its ELMO vector as SentRep(s|m). According to the consistency measure, we select the top-ranked concepts for each mention. We call this set of concepts C_ELMO.
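To make the candidate-generation step concrete, here is a minimal sketch of the offline word-to-concept map S (TF-IDF over per-concept sentences, treating each sentence as a document) and the online aggregation that produces C_ESA. The toy concepts and sentences are invented; a real implementation would run over the full WIKILINKS corpus and would follow this step with the ELMO-based cosine re-ranking of Equation (1).

```python
import math
from collections import Counter, defaultdict

# Hypothetical toy "WIKILINKS-like" data: concept -> example sentences.
sent = {
    "Quarterback": ["he threw for three touchdowns in the game",
                    "the quarterback led the winning drive"],
    "University_of_Washington": ["the university campus sits in seattle",
                                 "students at the university study engineering"],
}

# Offline step: build S(w) = {concept: score(c|w)}, where score(c|w) is the
# sum of TF-IDF values of w over the sentences describing c.
docs = [(c, s.split()) for c, ss in sent.items() for s in ss]
df = Counter(w for _, toks in docs for w in set(toks))   # document frequencies
N = len(docs)
S = defaultdict(lambda: defaultdict(float))
for c, toks in docs:
    tf = Counter(toks)
    for w, f in tf.items():
        S[w][c] += (f / len(toks)) * math.log(N / df[w])

# Online step: aggregate S(w) over the words of the context sentence.
def esa_candidates(sentence, top_k=5):
    scores = defaultdict(float)
    for w in sentence.split():
        for c, v in S.get(w, {}).items():
            scores[c] += v
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

print(esa_candidates("he led the team to three touchdowns"))
```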
Surface-Based Concept Generation While context often is a key signal for typing, one should not ignore the information included in the surface form of the mentions. If the corresponding concept or entity exists in Wikipedia, many mentions can be accurately grounded using only the trivial prior probability Pr(concept|surface). The prior distribution is pre-computed by calculating the frequency of the times a certain surface string refers to different concepts within Wikipedia. At test time, for a given mention m, we use the pre-computed probability distribution to obtain the most likely concept, c_surf = argmax_c Pr(c|m). Type Inference Our inference algorithm starts with selection of concepts, followed by inference of coarse and fine types. Our approach is outlined in Algorithm 1 and explained below. Concept inference. To integrate surface-based and context-based concepts, we follow a simple rule: if the prior probability of the surface-based concept (c_surf) has confidence below a threshold λ, we ignore it; otherwise we include it among the concepts selected from context (C_ELMO), and only choose coarse and fine types from c_surf. To map the selected concepts to the target entity types, we retrieve the FREEBASE types of each concept and then apply the type definition T (defined just before §3.1). In Algorithm 1, the set of target types of a concept c is denoted as T(c). This is followed by an aggregation step for selection of a coarse type t_coarse ∈ T_coarse(.), and ends with the selection of a set of fine types {t_fine} ⊆ T_fine(.). Coarse type inference. Our type inference algorithm follows a relatively simple confidence-analysis procedure. To this end, we define Count(t; C) to be the number of occurrences of type t in the collection of concepts C: Count(t; C) := |{c : c ∈ C and t ∈ T(c)}|. In theory, for a sensible type t, the relative count of context-consistent concepts that have this type should be higher than that of the initial concept candidates. In other words, [Count(t; C_ELMO)/|C_ELMO|] / [Count(t; C_ESA)/|C_ESA|] > 1. We select the first concept (in the C_ELMO ranking) which has some coarse type that matches this criterion. If there is no such concept, we use the coarse types of the highest scoring concept. To select one of the coarse types of the selected concept, we let each concept of C_ELMO vote based on its consistency score. We name this voting-based procedure SelectCoarse(c), which selects one coarse type from a given concept: SelectCoarse(c) := argmax_{t ∈ T_coarse(c)} Σ_{c' ∈ C_ELMO : t ∈ T_coarse(c')} Consistency(c', s, m), where consistency is defined in Equation (1).
Algorithm 1: Type inference algorithm
Input: mention m in sentence s, retrieved concepts C_ESA, C_ELMO, c_surf, and type definition T
Output: inferred types t_coarse and {t_fine}
Define r(t, t'; C, C') := [Count(t; C)/|C|] / [Count(t'; C')/|C'|], r(t; C, C') := r(t, t; C, C'), and r(t, t'; C) := r(t, t'; C, C).
τ_surf ← {t | t ∈ T_coarse(c_surf), r(t; C_ELMO, C_ESA) > 1}
if Pr(c_surf|m) ≥ λ and τ_surf ≠ ∅ then
  t_coarse ← SelectCoarse(c_surf)
  C̃ ← {c_surf} ∪ C_ELMO
  {t_fine} ← {t_f | t_f ∈ T_fine(c_surf), compatible with t_coarse, and r(t_f, t_coarse; C̃) ≥ η_s}
else
  C̃_ELMO ← {c | c ∈ C_ELMO, ∃t ∈ T_coarse(c) : r(t; C_ELMO, C_ESA) > 1}
  if C̃_ELMO ≠ ∅ then c̃ ← argmax_{c ∈ C̃_ELMO} Consistency(c, s, m)
  else c̃ ← argmax_{c ∈ C_ELMO} Consistency(c, s, m)
  end
  t_coarse ← SelectCoarse(c̃)
  {t_fine} ← {t_f | t_f ∈ T_fine(C̃_ELMO), compatible with t_coarse, and r(t_f, t_coarse; C_ELMO) ≥ η_c}
end
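A minimal sketch of the coarse-type selection just described (the relative-support criterion r(t; C_ELMO, C_ESA) > 1 followed by consistency-weighted voting) is given below; the candidate concepts, their target types, and the consistency scores are all invented for illustration.

```python
from collections import defaultdict

# Invented toy inputs: each candidate concept comes with its coarse target
# types (obtained via the type definition T) and, for the re-ranked set,
# a consistency score. C_ELMO is assumed to be listed in ranking order.
C_ESA  = [("Seattle", {"/location"}), ("Quarterback", {"/person"}),
          ("Boeing", {"/organization"}), ("Husky_Stadium", {"/location"})]
C_ELMO = [("Quarterback", {"/person"}, 0.81), ("Linebacker", {"/person"}, 0.74),
          ("Husky_Stadium", {"/location"}, 0.40)]

def count(t, concepts):
    """Count(t; C): number of concepts in C carrying type t."""
    return sum(1 for c in concepts if t in c[1])

def ratio(t):
    """Relative support of t in the re-ranked set vs. the initial candidates."""
    return (count(t, C_ELMO) / len(C_ELMO)) / max(count(t, C_ESA) / len(C_ESA), 1e-9)

def select_coarse(concept_types):
    """Consistency-weighted vote among the coarse types of a chosen concept."""
    votes = defaultdict(float)
    for t in concept_types:
        votes[t] = sum(score for _, types, score in C_ELMO if t in types)
    return max(votes, key=votes.get)

# Pick the first re-ranked concept having a coarse type with ratio > 1,
# then let all re-ranked concepts vote among that concept's coarse types.
chosen = next((c for c in C_ELMO if any(ratio(t) > 1 for t in c[1])), C_ELMO[0])
print(select_coarse(chosen[1]))   # -> "/person" on this toy input
```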
Fine type inference. With the selected coarse type, we take only the fine types that are compatible with the selected coarse type (e.g., the fine type /people/athlete and the coarse type /people are compatible). Among the compatible fine types, we further filter the ones that have better support from the context. Therefore, we select the fine types t_f such that Count(t_f; C_ELMO) / Count(t_c; C_ELMO) ≥ η, where t_c is the previously selected coarse type which is compatible with t_f. Intuitively, the fraction filters out the fine-grained candidate types that don't have enough support compared to the selected coarse type. Experiments Empirically, we study the behavior of our system compared to published results. All the results are reproduced except the ones indicated by *, which are directly cited from their corresponding papers. Datasets. In our experiments, we use a wide range of typing datasets: • For coarse entity typing, we use MUC (Grishman and Sundheim, 1996), CoNLL (Tjong Kim Sang and De Meulder, 2003), and OntoNotes (Hovy et al., 2006). • For fine typing, we focus on FIGER (Ling and Weld, 2012), BBN (Weischedel and Brunstein, 2005), and OntoNotes_fine (Gillick et al., 2014). • In addition to the news NER datasets, we use the BB3 dataset (Delėger et al., 2016), which contains mentions of bacteria or other notions, extracted from sentences of scientific papers.
Table 2: Evaluation of fine-grained entity-typing: we compare our system with state-of-the-art systems ( §4.1). For each column, the best zero-shot and overall results are bold-faced and underlined, respectively. Numbers are F1 in percentage. For supervised systems, we report their in-domain performances, since they do not transfer to other datasets with different labels. For OTYPER, cells with gray color indicate in-domain evaluation, which is the setting in which it has the best performance. Our system outperforms all the other zero-shot baselines, and achieves competitive results compared to the best supervised systems.
Table 3: Evaluation of coarse entity-typing ( §4.2): we compare two supervised entity-typers with our system. For the supervised systems, cells with gray color indicate in-domain evaluation. For each column, the best out-of-domain and overall results are bold-faced and underlined, respectively. Numbers are F1 in percentage. In most of the out-of-domain settings our system outperforms the supervised system.
ZOE's parameters. We use different type definitions for each dataset. In order to design type definitions for each dataset, we follow in the footsteps of Abhishek et al. (2017) and randomly sample 10% of the test set. For the experiments, we exclude the sampled set. For completeness, we have included the type definitions of the major experiments in Appendix D. The parameters are set universally across different experiments. For the parameters that determine the number of extracted concepts, we use 300 ESA candidates and 20 ELMO candidates, based on the upper-bound analysis in Appendix A. For the other parameters, we set λ = 0.5, η_s = 0.8 and η_c = 0.3, based on the FIGER dev set. We emphasize that these parameters are universal across our evaluations. Evaluation metrics. Given a collection of mentions M, denote the set of gold types and predicted types of a mention m ∈ M as T_g(m) and T_p(m), respectively. We define the following metrics for our evaluations: Macro precision is the average over mentions of |T_g(m) ∩ T_p(m)| / |T_p(m)| (Macro recall and F1 follow the same pattern), and Micro precision is Σ_{m∈M} |T_g(m) ∩ T_p(m)| / Σ_{m∈M} |T_p(m)|, and the Micro recall and F1 follow the same pattern. In the experiment in §4.3, to evaluate systems on unseen types, we used modified versions of these metrics.
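For reference, here is a small sketch of the loose Macro/Micro F1 computation described above, assuming per-mention gold and predicted type sets; the example mentions are invented.

```python
def prf(p, r):
    return 2 * p * r / (p + r) if p + r else 0.0

def macro_micro_f1(gold, pred):
    """Loose macro / micro F1 over per-mention gold and predicted type sets."""
    inter = [len(g & p) for g, p in zip(gold, pred)]
    # Macro: average per-mention precision / recall, then take F1.
    ma_p = sum(i / len(p) for i, p in zip(inter, pred) if p) / len(gold)
    ma_r = sum(i / len(g) for i, g in zip(inter, gold) if g) / len(gold)
    # Micro: pool the counts over all mentions first.
    mi_p = sum(inter) / sum(len(p) for p in pred)
    mi_r = sum(inter) / sum(len(g) for g in gold)
    return prf(ma_p, ma_r), prf(mi_p, mi_r)

# Invented example: two mentions with gold and predicted type sets.
gold = [{"/person", "/person/athlete"}, {"/location"}]
pred = [{"/person"}, {"/location", "/location/city"}]
print(macro_micro_f1(gold, pred))   # -> (0.75, 0.666...)
```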
Let G(t) be the number of mentions with gold type t, P(t) be the number of mentions predicted to have type t, and C(t) be the number of mentions correctly predicted to have type t: • The precision corresponding to F1_type_ma is defined as Σ_t [C(t)/P(t)] · [G(t) / Σ_{t'} G(t')]; recall follows the same pattern. • The precision corresponding to F1_type_mi is defined as Σ_t C(t) / Σ_t P(t); recall follows the same pattern. Baselines. To add to the best published results on each dataset, we create two simple and effective baselines. The first baseline, ELMONN, selects the nearest-neighbor types to a given mention, where mentions and types are represented by ELMO vectors. To create a representation for each type t, we average the representations of the WIKILINKS sentences that contain mentions of type t (as explained in §3.2). Our other baseline, WIKIFIERTYPER, uses Wikifier (Tsai et al., 2016b) to map the mention to a Wikipedia concept, followed by mapping to FREEBASE types, and finally projecting them to the target types via the type definition function T(.). Additionally, to compare with published zero-shot systems, we compare our system to OTYPER, a recently published open-typing system. Unfortunately, to the best of our knowledge, the systems proposed by Ma et al. are not available online for empirical comparison. Fine-Grained Entity Typing We evaluate our system for fine-grained named-entity typing. Table 2 shows the evaluation results for three datasets, FIGER, BBN, and OntoNotes_fine. We report our system's performance, our zero-shot baselines, and two supervised systems (AFET, plus the state of the art), for each dataset. There is no easy way to transfer the supervised systems across datasets, hence no out-of-domain numbers for such systems. For each dataset, we train OTYPER and evaluate on the test sets of all three datasets. In order to run OTYPER on different datasets, we disabled the original dataset-specific entity and type features. As a result, among the open typing systems, our system has significantly better results. In addition, our system has competitive scores compared to the supervised systems. Coarse Entity Typing In Table 3 we study entity typing for the coarse types on three datasets. We focus on three types that are shared among the datasets: PER, LOC, ORG. In coarse entity typing, the best available systems are heavily supervised. In this evaluation, we use gold mention spans; i.e., we force the decoding algorithm of the supervised systems to select the best of the three classes for each gold mention. As expected, the supervised systems have strong in-domain performance. However, they suffer a significant drop when evaluated in a different domain. Our system, while not trained on any supervised data, achieves performance better than or comparable to the supervised baselines in the out-of-domain evaluations. Typing of Unseen Types within Domain We compare the quality of open typing, in which the target type(s) have not been seen before. We compare our system to OTYPER, which relies on supervised data to create representations for each type; however, it is not restricted to the observed types. We follow a setting similar to Yuan and Downey (2018): we split the FIGER test set into folds (one fold per type) and do cross-validation. For each fold, mentions of only one type are used for evaluation, and the rest are used for training OTYPER.
To be able to evaluate on unseen types (only for this experiment), we use the modified metrics F1_type_ma and F1_type_mi that measure the per-type quality of the system ( §4). In this experiment, we focus on a within-domain setting, and show the results of transfer across genres in the next experiments. The results are summarized in Table 4. We observe a significant margin between ZOE and the other systems, including OTYPER. Biology Entity Typing We go beyond the scope of popular entity-typing tasks, and evaluate the quality of our system on a dataset that contains sentences from scientific papers (Delėger et al., 2016), which makes it different from other entity-typing datasets. The mentions refer either to "bacteria" or to some miscellaneous class (two-class typing). As indicated in Table 5, our system's overall scores are higher than our baselines.
Table 6: Ablation study of different ways in which concepts are generated in our system ( §4.5). The first row shows the performance of our system on each dataset, followed by the change in the performance upon dropping a component. While both signals are crucial, contextual information plays a more important role than the mention-surface signal.
Ablation Study We carry out ablation studies that quantify the contribution of surface information ( §3.3) and context information ( §3.2). As Table 6 shows, both factors are crucial and complementary for the system. However, the contextual information seems to have a bigger role overall. We complement our qualitative analysis with the quantitative share of each component. In 69.3%, 54.6%, and 69.7% of mentions, our system uses the context information (and ignores the surface) in the FIGER, BBN, and OntoNotes_fine datasets, respectively, underscoring the importance of contextual information. Error Analysis We provide insights into specific reasons for the mistakes made by the system. For our analysis, we use the erroneous decisions in the FIGER dev set. Two independent annotators label the cause(s) of the mistakes, resulting in 83% agreement between the annotators. The disagreements are later reconciled by an adjudication step. 1. Incompatible concept, due to context information: Ambiguous contexts, or short ones, often contribute to the inaccurate mapping to concepts. In our manual annotations, 23.3% of errors are caused, at least partly, by this issue. 2. Incompatible concept, due to surface information: Although the prior probability is high, the surface-based concept could be wrong. About 26% of the errors are partly due to such surface-signal errors. 3. Incorrect type, due to type inference: Even when the system is able to find several type-compatible concepts, it can fail due to inference errors. This could happen if the types attached to the type-compatible concepts are not the majority among the types attached to other concepts. This is the major reason behind 56.6% of errors. 4. Incorrect type, due to type definition: Some errors are caused by an inaccurate definition of the type mapping function T. About 23% of the mistakes are partly caused by this issue. Note that each mistake could be caused by multiple factors; in other words, the above categories are not mutually disjoint events. A slightly more detailed analysis is included in Appendix C. Conclusion Moving beyond a fully supervised paradigm and scaling entity-typing systems to support bigger type sets is a crucial challenge for NLP. In this work, we have presented ZOE, a zero-shot open entity typing framework.
The significance of this work is threefold. First, the proposed system does not require task-specific labeled data. Our system relies on type definitions, which are much cheaper to obtain than annotating thousands of examples. Second, our system outperforms existing state-of-the-art zero-shot systems by a significant margin. Third, we show that without reliance on task-specific supervision, one can achieve relatively robust transfer across datasets.
4,715
1907.03228
2891409106
The problem of entity-typing has been studied predominantly in supervised learning fashion, mostly with task-specific annotations (for coarse types) and sometimes with distant supervision (for fine types). While such approaches have strong performance within datasets, they often lack the flexibility to transfer across text genres and to generalize to new type taxonomies. In this work we propose a zero-shot entity typing approach that requires no annotated data and can flexibly identify newly defined types. Given a type taxonomy defined as Boolean functions of FREEBASE "types", we ground a given mention to a set of type-compatible Wikipedia entries and then infer the target mention's types using an inference algorithm that makes use of the types of these entries. We evaluate our system on a broad range of datasets, including standard fine-grained and coarse-grained entity typing datasets, and also a dataset in the biological domain. Our system is shown to be competitive with state-of-the-art supervised NER systems and outperforms them on out-of-domain datasets. We also show that our system significantly outperforms other zero-shot fine typing systems.
There are a handful of works aiming to pave the road towards zero-shot typing by addressing ways to extract cheap signals, often to help the supervised algorithms: e.g., by generating gazetteers @cite_41 , or using the anchor texts in Wikipedia @cite_4 @cite_27 . AFET (Ren et al., 2016) projects labels into a high-dimensional space and uses label correlations to suppress noise and better model their relations. In our work, we choose not to use the supervised-learning paradigm and instead merely rely on a general entity linking corpus and the signals in Wikipedia. Prior work has already shown the importance of Wikipedia information for NER; for example, cross-lingual grounding to Wikipedia has been used to facilitate cross-lingual NER. However, such work does not explicitly address the case where the target entity does not exist in Wikipedia.
{ "abstract": [ "In this paper, we propose a named-entity recognition (NER) system that addresses two major limitations frequently discussed in the field. First, the system requires no human intervention such as manually labeling training data or creating gazetteers. Second, the system can handle more than the three classical named-entity types (person, location, and organization). We describe the system's architecture and compare its performance with a supervised system. We experimentally evaluate the system on a standard corpus, with the three classical named-entity types, and also on a new corpus, with a new named-entity type (car brands).", "Named entity recognition (ner) for English typically involves one of three gold standards: muc, conll, or bbn, all created by costly manual annotation. Recent work has used Wikipedia to automatically create a massive corpus of named entity annotated text. We present the first comprehensive cross-corpus evaluation of ner. We identify the causes of poor cross-corpus performance and demonstrate ways of making them more compatible. Using our process, we develop a Wikipedia corpus which outperforms gold standard corpora on cross-corpus evaluation by up to 11 .", "Statistical named entity recognisers require costly hand-labelled training data and, as a result, most existing corpora are small. We exploit Wikipedia to create a massive corpus of named entity annotated text. We transform Wikipedia’s links into named entity annotations by classifying the target articles into common entity types (e.g. person, organisation and location). Comparing to MUC, CONLL and BBN corpora, Wikipedia generally performs better than other cross-corpus train test pairs." ], "cite_N": [ "@cite_41", "@cite_27", "@cite_4" ], "mid": [ "1533057952", "2073420389", "2252061787" ] }
Zero-Shot Open Entity Typing as Type-Compatible Grounding
Entity type classification is the task of connecting an entity mention to a given set of semantic types. The commonly used type sets range in size and level of granularity, from a small number of coarse-grained types (Tjong Kim Sang and De Meulder, 2003) to over a hundred fine-grained types (Ling and Weld, 2012). It is understood that semantic typing is a key component in many natural language understanding tasks, including Question Answering (Toral et al., 2005;Li and Roth, 2005) and Textual Entailment (Dagan et al., 2010(Dagan et al., , 2013. Consequently, the ability to type mentions semantically across domains and text genres, and to use a flexible type hierarchy, is essential for solving many important challenges. Nevertheless, most commonly used approaches and systems for semantic typing (e.g., CORENLP (Manning et al., 2014), COG-COMPNLP (Khashabi et al., 2018), NLTK (Loper and Bird, 2002), SPACY) are trained in a supervised fashion and rely on high quality, taskspecific annotation. Scaling such systems to other domains and to a larger set of entity types faces fundamental restrictions. Coarse typing systems, which are mostly fully supervised, are known to fit a single dataset very well. However, their performance drops significantly on different text genres and even new data sets. Moreover, adding a new coarse type requires manual annotation and retraining. For finetyping systems, people have adopted a distantsupervision approach. Nevertheless, the number of types used is small: the distantly-supervised FIGER dataset covers only 113 types, a small fraction of most-conservative estimates of the number of types in the English language (the FREEBASE (Bollacker et al., 2008) and WORD-NET (Miller, 1995) hierarchies consist of more than 1k and 1.5k unique types, respectively). More importantly, adapting these systems, once trained, to new type taxonomies cannot be done flexibly. As was argued in Roth (2017), there is a need to develop new training paradigms that support scalable semantic processing; specifically, there is a need to scale semantic typing to flexible type taxonomies and to multiple domains. In this work, we introduce ZOE, a zero-shot entity typing system, with open type definitions. Given a mention in a sentence and a taxonomy of entity types with their definitions, ZOE identifies a set of types that are appropriate for the mention Figure 1: ZOE maps a given mention to its type-compatible entities in Wikipedia and infers a collection of types using this set of entities. While the mention "Oarnniwsf," a football player in the U. of Washington, does not exist in Wikipedia, we ground it to other entities with approximately the same types ( §3). in this context. ZOE does not require any training, and it makes use of existing data resources (e.g., Wikipedia) and tools developed without any taskspecific annotation. The key idea is to ground each mention to a set of type-compatible Wikipedia entities. The benefit of using a set of Wikipedia titles as an intermediate representation for a mention is that there is much human-curated information in Wikipedia -categories associated with each page, FREEBASE types, and DBpedia types. These were put there independently of the task at hand and can be harnessed for many tasks: in particular, for determining the semantic types of a given mention in its context. In this grounding step, the guiding principle is that type-compatible entities often appear in similar contexts. 
We rely on contextual signals and, when available, surface forms, to rank Wikipedia titles and choose those that are more compatible with a given mention. Importantly, our algorithm does not require a given mention to be in Wikipedia; in fact, in many cases (such as nominal mentions) the mentions are not available in Wikipedia. We hypothesize that any entity possible in English corresponds to some type-compatible entities in Wikipedia. We can then rely mostly on the context to reveal a set of compatible titles, those that are likely to share semantic types with the target mention. The fact that our system is not required to ground to the exact concept is a key difference between our grounding and "standard" Wikification approaches (Mihalcea and Csomai, 2007;Ratinov et al., 2011). As a consequence, while entity linking approaches rely heavily on priors associated with the surface forms and do not consider those that do not link to Wikipedia titles, our system mostly relies on context, regardless of whether the grounding actually exists or not. Figure 1 shows a high-level visualization of our system. Given a mention, our system grounds it into type-compatible entries in Wikipedia. The target mention "Oarnniwsf," is not in Wikipedia, yet it is grounded to entities with approximately correct types. In addition, while some of the grounded Wikipedia entries are inaccurate in terms of entity types, the resulting aggregated decision is correct. ZOE is an open type system, since it is not restricted to a closed set of types. In our experiments, we build on FREEBASE types as primitive types and use them to define types across seven different datasets. Note, however, that our approach is not fundamentally restricted to FREE-BASE types; in particular, we allow types to be defined as Boolean formulae over these primitives (considering a type to be a set of entities). Furthermore, we support other primitives, e.g., DBPedia or Wikipedia entries. Consequently, our system can be used across type taxonomies; there is no need to restrict to previously observed types or retrain with annotations of new types. If one wants to use types that are outside our current vocabulary, one only needs to define the target type taxonomy in terms of the primitives used in this work. In summary, our contributions are as follows: • We propose a zero-shot open entity typing framework 1 that does not require training on entity-typing-specific supervised data. • The proposed system outperforms existing zero-shot entity typing systems. • Our system is competitive with fullysupervised systems in their respective domains across a broad range of coarse-and fine-grained typing datasets, and it outperforms these systems in out-of-domain settings. Concept-embedding Clustering No ZOE (ours) Yes Type-Compatible Concepts No Table 1: Comparison of recent work on entity typing. Our system does not require any labeled data for entity typing; therefore it works on new datasets without retraining. be thought of as the continuation of the same research direction. A critical step in the design of zero-shot systems is the characterization of the output space. For supervised systems, the output representations are trivial, as they are just indices. For zero-shot systems, the output space is often represented in a high-dimensional space that encodes the semantics of the labels. In OTYPER (Yuan and Downey, 2018), each type embedding is computed by averaging the word embeddings of the words comprising the type. 
The same idea is also used in PROTOLE (Ma et al., 2016), except that averaging is done only for a few prototypical instances of each type. In our work, we choose to define types using information in Wikipedia. This flexibility allows our system to perform well across several datasets without retraining. On a conceptual level, the work of Lin et al. (2012) and are close to our approach. The governing idea in these works is to cluster mentions, followed by propagating type information from representative mentions. Table 1 compares our proposed system with several recently proposed models. Zero-Shot Open Entity Typing Types are conceptual containers that bind entities together to form a coherent group. Among the entities of the same type, type-compatibility creates a network of loosely connected entities: Definition 1 (Weak Type Compatibility) Two entities are type-compatible if they share at least one type with respect to a type taxonomy and the contexts in which they appear. In our approach, given a mention in a sentence, we aim to discover type-compatible entities in Wikipedia and then infer the mention's types using all the type-compatible entities together. The advantage of using Wikipedia entries is that the rich information associated with them allows us to infer the types more easily. Note that this problem is different from the standard entity linking or Wikification problem in which the goal is to find the corresponding entity in Wikipedia. Wikipedia does not contain all entities in the world, but an entity is likely to have at least one type-compatible entity in Wikipedia. In order to find the type-compatible entities, we use the context of mentions as a proxy. Defining it formally: Definition 2 (Context Consistency) A mention m (in a context sentence s) is context-consistent with another well-defined mention m , if m can be replaced by m in the context s, and the new sentence still makes logical sense. Hypothesis 1 Context consistency is a strong proxy for type compatibility. Based on this hypothesis, given a mention m in a sentence s, we find other context-compatible mentions in a Wikified corpus. Since the mentions in the Wikified corpus are linked to the corresponding Wikipedia entries, we can infer m's types by aggregating information associated with these Wikipedia entries. Figure 2 shows the high-level architecture of our proposed system. The inputs to the system are a mention m in a sentence s, and a type definition T . The output of the system is a set of types {t Target } ⊆ T in the target taxonomy that best represents the given mention. The type definitions characterize the target entity-type space. In our experiments, we choose to use FREEBASE types to define the types across 7 datasets; that is, T is a mapping from the set of FREEBASE types to the set of target types: T : {t FB } → {t Target }. This definition comprises many atomic definitions; for example, we can define the type location as the disjunction of FREEBASE types like FB.location and FB.geography: The type definitions of a dataset reflect the understanding of a domain expert and the assumptions made in dataset design. Such definitions are often much cheaper to define, than to annotate full-fledged supervised datasets. It is important to emphasize that, to use our system on different datasets, one does not need to retrain it; there is one single system used across different datasets, working with different type definitions. For notational simplicity, we define a few conventions for the rest of the paper. 
The notation t ∈ T , simply means t is a member of the image of the map T (i.e., t is a member of the target types). For a fixed concept c, the notation T (c) is the application of T (.) on the FREEBASE types attached to the concept c. For a collection of concepts C, T (C) is defined as c∈C T (c). We use T coarse (.) to refer to the subset of coarse types of T (.), while T fine (.) defines the fine type subset. Components in Figure 2 are described in the following sections. Initial Concept Candidate Generation Given a mention, the goal of this step is to quickly generate a set of Wikipedia entries based on other words in the sentence. Since there are millions of entries in Wikipedia, it is extremely inefficient to go through all entries for each mention. We adopt ideas from explicit semantic analysis (ESA) (Gabrilovich and Markovitch, 2007), an approach to representing words with a vector of Wikipedia concepts, and to providing fast retrieval of the relevant Wikipedia concepts via inverted indexing. In our construction we use the WIK-ILINKS (Singh et al., 2012) corpus, which contains a total of 40 million mentions over 3 million concepts. Each mention in WIKILINKS is associated with a Wikipedia concept. To characterize it formally, in the WIKILINKS corpus, for each concept c, there are example sentences sent(c) = {s i }. Offline computation: The first step is to construct an ESA representation for each word in the WIKILINKS corpus. We create a mapping from each word in the corpus to the relevant concepts associated with it. The result is a map S from tokens to concepts: S : w → {c, score(c|w)} (see Figure 3), where score(c|w) denotes the association of the word w with concept c, calculated as the sum of the TF-IDF values of the word w in the sentences describing c: score(c|w) s∈sent(c) w∈s tf-idf(w, s). That is, we treat each sentence as a document and compute TF-IDF scores for the words in it. Online computation: For a given mention m and its sentence context s, we use our offline wordconcept map S to find the concepts associated with each word, and aggregate them to create a single list of weighted concepts; i.e., w∈s S(w). The resulting concepts are sorted by the corresponding weights, and the top ESA candidates form a set C ESA which is passed to the next step. Context-Consistent Re-Ranking After quick retrieval of the initial concept candidates, we re-rank concepts in C ESA based on context consistency between the input mention and concept mentions in WIKILINKS. For this step, assume we have a representation that encodes the sentential information anchored on the mention. We denote this mention-aware context representation as SentRep(s|m). We define a measure of consistency between a concept c and a mention m in a sentence s: Consistency(c, s, m) = cosine(SentRep(s|m), ConceptRep(c)),(1) where ConceptRep(c) is representation of a concept: ConceptRep(c) avg s SentRep(s|c) s ∈ WIKILINKS, c ∈ s) , which is the average vector of the representation of all the sentences in WIKILINKS that describe the given concept. We use pre-trained ELMO (Peters et al., 2018), a state-of-the-art contextual and mentionaware word representation. In order to generate SentRep(s|m), we run ELMO on sentence s, where the tokens of the mention m are concatenated with " ", and retrieve its ELMO vector as SentRep(s|m). According to the consistency measure, we select the top ELMO concepts for each mention. We call this set of concepts C ELMO . 
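A small sketch of the offline and online ESA-style steps described above may clarify how candidate concepts are retrieved. It treats each corpus sentence as a document, accumulates score(c|w) as the sum of TF-IDF values of word w over the sentences describing concept c, and aggregates per-word concept scores for a query sentence. The toy corpus, the top-k cutoff, and the exact TF-IDF variant are assumptions for illustration, not the WIKILINKS-scale implementation.

```python
import math
from collections import Counter, defaultdict

# Toy "Wikified" corpus: (concept, sentence mentioning it). Made-up data.
corpus = [
    ("Chicago_Bulls", "the bulls won the game in chicago"),
    ("Chicago", "chicago is a city on lake michigan"),
    ("Michael_Jordan", "jordan played basketball for the bulls"),
]

def build_word_to_concept_index(corpus):
    """Offline step: map S from words to weighted concepts via per-sentence TF-IDF."""
    sentences = [s.split() for _, s in corpus]
    n_sents = len(sentences)
    doc_freq = Counter(w for toks in sentences for w in set(toks))
    index = defaultdict(lambda: defaultdict(float))  # word -> concept -> score
    for (concept, _), toks in zip(corpus, sentences):
        tf = Counter(toks)
        for w, count in tf.items():
            idf = math.log(n_sents / doc_freq[w])
            index[w][concept] += count * idf  # score(c|w) accumulated over sent(c)
    return index

def candidate_concepts(sentence, index, top_k=5):
    """Online step: aggregate per-word concept scores over the sentence and
    return the top-k weighted concepts (the C_ESA set)."""
    scores = defaultdict(float)
    for w in sentence.split():
        for concept, score in index.get(w, {}).items():
            scores[concept] += score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

if __name__ == "__main__":
    idx = build_word_to_concept_index(corpus)
    print(candidate_concepts("the bulls played in chicago", idx))
```

The re-ranking step would then score each retrieved concept by the cosine between a contextual sentence representation and the concept's averaged representation, as in Equation (1); that part is omitted here since it depends on the pre-trained ELMO encoder.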
Surface-Based Concept Generation While context often is a key signal for typing, one should not ignore the information included in the surface form of the mentions. If the corresponding concept or entity exists in Wikipedia, many mentions can be accurately grounded with only trivial prior probability Pr(concept|surface). The prior distribution is pre-computed by calculating the fre-quency of the times a certain surface string refers to different concepts within Wikipedia. In the test time, for a given mention, we use the pre-computed probability distribution to obtain the most likely concept, c surf = arg max c Pr(c|m), for the given mention m. Type Inference Our inference algorithm starts with selection of concepts, followed by inference of coarse and fine types. Our approach is outlined in Algorithm 1 and explained below. Concept inference. To integrate surface-based and context-based concepts, we follow a simple rule: if the prior probability of the surface-based concept (c surf ) has confidence below a threshold λ, we ignore it; otherwise we include it among the concepts selected from context (C ELMO ), and only choose coarse and fine types from c surf . To map the selected concepts to the target entity types, we retrieve the FREEBASE-types of each concept and then apply the type definition T (defined just before §3.1). In Algorithm 1, the set of target types of a concept c is denoted as T (c). This is followed by an aggregation step for selection of a coarse type t coarse ∈ T coarse (.), and ends with the selection of a set of fine types {t fine } ⊆ T fine (.). Coarse type inference. Our type inference algorithm works in a relatively simple confidence analysis procedure. To this end, we define Count(t; C) to be the number of occurrences of type t in the collection of concepts C: Count(t; C) := |{c : c ∈ C and t ∈ T (c)}|. In theory, for a sensible type t, the count of context-consistent concepts that have this type should be higher than that of the initial concept candidates. In other words, Count(t;C ELMO )/ ELMO Count(t;C ESA )/ ESA > 1. We select the first concept (in the C ELMO ranking) which has some coarse type that matches this criterion. If there is no such concept, we use the coarse types of the highest scoring concept. To select one of the coarse types of the selected concept, we let each concept of C ELMO vote based on its consistency score. We name this voting-based procedure SelectCoarse(c), which selects one coarse type from a given concept: SelectCoarse(c) argmax t c∈C ELMO t∈Tcoarse(c) Consistency(c, s, m), Algorithm 1: Type inference algorithm Input mention m in sentence s, retrieved concepts C ESA , C ELMO , c surf , and type definition T Output Inferred types tcoarse and {t fine }. Define, r(t, t ; C, C ) := Count(t;C)/|C| Count(t ;C )/|C | , r(t; C, C ) := r(t, t; C, C ), r(t, t ; C, ) := r(t, t ; C, C). τ surf ← {t|t ∈ Tcoarse(c surf ), r(t; C ELMO , C ESA ) > 1} if Pr(c surf |m) ≥ λ and τ surf = ∅ then tcoarse ← SelectCoarse(c surf ) C ← {c surf } ∪ C ELMO {t fine } ←    t f t f ∈ T fine (c surf ), compatible w/ tcoarse and, r(t f , tcoarse;C) ≥ ηs    elseC ELMO ← c c ∈ C ELMO , ∃t ∈ Tcoarse(c) r(t; C ELMO , C ESA ) > 1 ifCELMO = ø theñ c ← argmax c∈C ELMO Consistency(c, s, m) elsec ← argmax c∈C ELMO Consistency(c, s, m) end tcoarse ← SelectCoarse(c) {t fine } ←    t f t f ∈ T fine (C ELMO ), compatible w/ tcoarse and, r(t f , tcoarse; C ELMO ) ≥ ηc    end where consistency is defined in Equation (1). Fine type inference. 
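The coarse-type part of the inference procedure above can be summarized in a few lines: a type is considered supported when it is relatively more frequent among the context-consistent concepts (C_ELMO) than among the initial candidates (C_ESA), and the coarse type is then chosen by a consistency-weighted vote. The sketch below assumes concepts are given as (name, consistency, coarse-type set) tuples, which is an illustrative simplification of Algorithm 1 rather than a faithful re-implementation.

```python
from collections import defaultdict

def count_type(t, concepts_types):
    return sum(1 for types in concepts_types if t in types)

def ratio(t, elmo_types, esa_types):
    """r(t; C_ELMO, C_ESA): relative frequency of type t in the two sets."""
    num = count_type(t, elmo_types) / max(len(elmo_types), 1)
    den = count_type(t, esa_types) / max(len(esa_types), 1)
    if den == 0:
        return float("inf") if num > 0 else 0.0
    return num / den

def select_coarse(candidate_types, c_elmo):
    """Consistency-weighted vote over C_ELMO, restricted to candidate_types."""
    votes = defaultdict(float)
    for _, consistency, coarse_types in c_elmo:
        for t in coarse_types & candidate_types:
            votes[t] += consistency
    return max(votes, key=votes.get) if votes else None

def infer_coarse(c_elmo, c_esa):
    elmo_types = [types for _, _, types in c_elmo]
    esa_types = [types for _, _, types in c_esa]
    # First concept (in the C_ELMO ranking) with a coarse type better supported
    # in C_ELMO than in C_ESA; fall back to the highest-scoring concept.
    for _, _, types in c_elmo:
        supported = {t for t in types if ratio(t, elmo_types, esa_types) > 1}
        if supported:
            return select_coarse(supported, c_elmo)
    best = max(c_elmo, key=lambda x: x[1])
    return select_coarse(best[2], c_elmo)

if __name__ == "__main__":
    c_elmo = [("LeBron_James", 0.9, {"/person"}),
              ("Kobe_Bryant", 0.8, {"/person"}),
              ("NBA", 0.6, {"/organization"})]
    c_esa = [("NBA", 0.0, {"/organization"}),
             ("Basketball", 0.0, {"/organization"}),
             ("LeBron_James", 0.0, {"/person"})]
    print(infer_coarse(c_elmo, c_esa))  # expected: /person
```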
With the selected coarse type, we take only the fine types that are compatible with the selected coarse type (e.g., the fine type /people/athlete and the coarse type /people are compatible). Among the compatible fine types, we further filter the ones that have better support from the context. Therefore, we select the fine types t f such that Count(t f ;C ELMO ) Count(tc;C ELMO ) ≥ η, where t c is the previously selected coarse type which is compatible with t f . Intuitively, the fraction filters out the fine-grained candidate types that don't have enough support compared to the selected coarse type. Experiments Empirically, we study the behavior of our system compared to published results. All the results are reproduced except the ones indicated by * , which are directly cited from their corresponding papers. Datasets. In our experiments, we use a wide range of typing datasets: • For coarse entity typing, we use MUC (Grishman and Sundheim, 1996), CoNLL (Tjong Kim Sang and De Meulder, 2003), and OntoNotes (Hovy et al., 2006). Table 2: Evaluation of fine-grained entity-typing: we compare our system with state-of-the-art systems ( §4.1) For each column, the best zero-shot and overall results are bold-faced and underlined, respectively. Numbers are F 1 in percentage. For supervised systems, we report their in-domain performances, since they do not transfer to other datasets with different labels. For OTYPER, cells with gray color indicate in-domain evaluation, which is the setting in which it has the best performance. Our system outperforms all the other zero-shot baselines, and achieves competitive results compared to the best supervised systems. Table 3: Evaluation of coarse entity-typing ( §4.2): we compare two supervised entity-typers with our system. For the supervised systems, cells with gray color indicate in-domain evaluation. For each column, the best, out-of-domain and overall results are bold-faced and underlined, respectively. Numbers are F 1 in percentage. In most of the out-of-domain settings our system outperforms the supervised system. OntoNotes • For fine typing, we focus on FIGER (Ling and Weld, 2012), BBN (Weischedel and Brunstein, 2005), and OntoNotes fine (Gillick et al., 2014). • In addition to the news NER, we use the BB3 dataset (Delėger et al., 2016), with contain mentions of bacteria or other notions, extracted from sentences of scientific papers. ZOE's parameters. We use different type definitions for each dataset. In order to design type definitions for each dataset, we follow in the footsteps of Abhishek et al. (2017) and randomly sample 10% of the test set. For the experiments, we exclude the sampled set. For completeness, we have included the type definitions of the major experiments in Appendix D. The parameters are set universally across different experiments. For parameters that determine the number of extracted concepts, we use ESA = 300 and ELMO = 20, which are based on the upper-bound analysis in Appendix A. For other parameters, we set λ = 0.5, η s = 0.8 and η c = 0.3, based on the FIGER dev set. We emphasize that these parameters are universal across our evaluations. Evaluation metrics. Given a collection of mentions M , denote the set of gold types and predicted types of a mention m ∈ M as T g (m) and T p (m) respectively. We define the following metrics for our evaluations: , and the Micro recall and F1 follow the same pattern. In the experiment in §4.3, to evaluate systems on unseen types we used modified versions of metrics. 
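The fine-type filtering rule just described is simple enough to sketch directly: keep fine types that are compatible with the chosen coarse type and whose support among the context-consistent concepts is at least a fraction eta of the coarse type's support. In the sketch below, compatibility is approximated as a path prefix check (e.g., /people/athlete under /people); both this check and the threshold value are assumptions for illustration.

```python
def count_type(t, concepts_types):
    return sum(1 for types in concepts_types if t in types)

def select_fine_types(coarse_type, fine_candidates, c_elmo_types, eta=0.3):
    """Keep fine types compatible with coarse_type and with enough support
    relative to it among the C_ELMO concepts."""
    coarse_support = count_type(coarse_type, c_elmo_types)
    if coarse_support == 0:
        return set()
    return {
        t for t in fine_candidates
        if t.startswith(coarse_type + "/")                       # compatibility
        and count_type(t, c_elmo_types) / coarse_support >= eta  # enough support
    }

if __name__ == "__main__":
    c_elmo_types = [{"/people", "/people/athlete"},
                    {"/people", "/people/athlete"},
                    {"/people", "/people/artist"}]
    print(select_fine_types(
        "/people",
        {"/people/athlete", "/people/artist", "/location/city"},
        c_elmo_types))
    # {'/people/athlete', '/people/artist'} with eta = 0.3
```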
Let G(t) be the number of mentions with gold type t, P (t) be the number of mentions predicted to have type t, C(t) be the number of mentions correctly predicted to have type t: • The precision corresponding to F 1 type ma is defined as t C(t) P (t) G(t) t G(t ) ; recall follows the same pattern. • The precision corresponding to F 1 type mi is defined as t C(t) t P (t) ; recall follows the same pattern. Baselines. To add to the best published results on each dataset, we create two simple and effective baselines. The first baseline, ELMONN, selects the nearest neighbor types to a given mention, where mentions and types are represented by ELMO vectors. To create a representation for each type t, we average the representation of the WIK-ILINKS sentences that contain mentions of type t (as explained in §3.2). Our other baseline, WIK-IFIERTYPER, uses Wikifier (Tsai et al., 2016b) to map the mention to a Wikipedia concept, followed by mapping to FREEBASE types, and finally projecting them to the target types, via type definition function T (.). Additionally, to compare with published zero-shot systems, we compare our system to OTYPER, a recently published open-typing system. Unfortunately, to the best of our knowledge, the systems proposed by Ma et al.; are not available online for empirical comparison. Fine-Grained Entity Typing We evaluate our system for fine-grained namedentity typing. Table 2 shows the evaluation result for three datasets, FIGER, BBN, and OntoNotes fine . We report our system's performance, our zero-shot baselines, and two supervised systems (AFET, plus the-state-of-the-art), for each dataset. There is no easy way to transfer the supervised systems across datasets, hence no out-of-domain numbers for such systems. For each dataset, we train OTYPER and evaluate on the test sets of all the three datasets. In order to run OTYPER on different datasets, we disabled original dataset-specific entity and type features. As a result, among the open typing systems, our system has significantly better results. In addition, our system has competitive scores compared to the supervised systems. Coarse Entity Typing In Table 3 we study entity typing for the coarse types on three datasets. We focus on three types that are shared among the datasets: PER, LOC, ORG. In coarse-entity typing, the best available systems are heavily supervised. In this evaluation, we use gold mention spans; i.e., we force the decoding algorithm of the supervised systems to select the best of the three classes for each gold mention. As expected, the supervised systems have strong in-domain performance. However, they suffer a significant drop when evaluated in a different domain. Our system, while not trained on any supervised data, achieves better or comparable performance compared to other supervised baselines in the out-of-domain evaluations. Typing of Unseen Types within Domain We compare the quality of open typing, in which the target type(s) have not been seen before. We compare our system to OTYPER, which relies on supervised data to create representations for each type; however, it is not restricted to the observed types. We follow a similar setting to Yuan and Downey (2018) and split the FIGER test in folds (one fold per type) and do cross-validations. For each fold, mentions of only one type are used for evaluation, and the rest are used for training OTYPER. 
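Under one reading of the per-type metrics above, the macro precision weights each type's precision C(t)/P(t) by the share of gold mentions carrying that type, while the micro precision pools counts over types; recall is obtained symmetrically by swapping the roles of P and G. The sketch below implements that reading; it is an interpretation of the formulas as written, with toy counts, not the authors' evaluation script.

```python
def per_type_precisions(gold_counts, pred_counts, correct_counts):
    """gold_counts: G(t), pred_counts: P(t), correct_counts: C(t), all dicts
    keyed by type. Returns (macro precision, micro precision)."""
    total_gold = max(sum(gold_counts.values()), 1)
    macro_p = sum(
        (correct_counts.get(t, 0) / pred_counts[t])
        * (gold_counts.get(t, 0) / total_gold)
        for t in pred_counts if pred_counts[t] > 0
    )
    micro_p = sum(correct_counts.values()) / max(sum(pred_counts.values()), 1)
    return macro_p, micro_p

if __name__ == "__main__":
    G = {"/person": 10, "/location": 5}   # gold mentions per type (toy numbers)
    P = {"/person": 8, "/location": 10}   # predicted mentions per type
    C = {"/person": 6, "/location": 4}    # correctly predicted per type
    print(per_type_precisions(G, P, C))
```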
To be able to evaluate on unseen types (only for this experiment), we use modified metrics F 1 type ma and F 1 type mi that measure per type quality of the system ( §4). In this experiment, we focus on a within-domain setting, and show the results of transfer across genres in the next experiments. The results are summarized in Table 4. We observe a significant margin between ZOE and other systems, including OTYPER. Biology Entity Typing We go beyond the scope of popular entity-typing tasks, and evaluate the quality of our system on a dataset that contains sentences from scientific papers (Delėger et al., 2016), which makes it different from other entity-typing datasets. The mentions refer to either "bacteria", or some miscellaneous class (two class typing). As indicated in Ta Table 6: Ablation study of different ways in which concepts are generated in our system ( §4.5). The first row shows performance of our system on each dataset, followed by the change in the performance upon dropping a component. While both signals are crucial, contextual information is playing more important role than the mention-surface signal. ble 5, our system's overall scores are higher than our baselines. Ablation Study We carry out ablation studies that quantify the contribution of surface information ( §3.3) and context information ( §3.2). As Table 6 shows, both factors are crucial and complementary for the system. However, the contextual information seems to have a bigger role overall. We complement our qualitative analysis with the quantitative share of each component. In 69.3%, 54.6%, and 69.7% of mentions, our system uses the context information (and ignores the surface), in FIGER, BBN, and OntoNotes fine datasets, respectively, underscoring the importance of contextual information. Error Analysis We provide insights into specific reasons for the mistakes made by the system. For our analysis, we use the erroneous decisions in the FIGER dev set. Two independent annotators label the cause(s) of the mistakes, resulting in 83% agreement between the annotators. The disagreements are later reconciled by an adjudication step. 1. Incompatible concept, due to context information: Ambiguous contexts, or short ones, often contribute to the inaccurate mapping to concepts. In our manual annotations, 23.3% of errors are caused, at least partly, by this issue. 2. Incompatible concept, due to surface information: Although the prior probability is high, the surface-based concept could be wrong. About 26% of the errors are partly due to the surface signal errors. 3. Incorrect type, due to type inference: Even when the system is able to find several typecompatible concepts, it can fail due to inference errors. This could happen if the types attached to the type-compatible concepts are not the majority among other types attached to other con-cepts. This is the major reason behind 56.6% of errors. 4. Incorrect type, due to type definition: Some errors are caused by the inaccurate definition of the type mapping function T . About 23% of the mistakes are partly caused by this issue. Note that each mistake could be caused by multiple factors; in other words, the above categories are not mutually disjoint events. A slightly more detailed analysis is included in Appendix C. Conclusion Moving beyond a fully supervised paradigm and scaling entity-typing systems to support bigger type sets is a crucial challenge for NLP. In this work, we have presented ZOE, a zero-shot open entity typing framework. 
The significance of this work is threefold. First, the proposed system does not require task-specific labeled data. Our system relies on type definitions, which are much cheaper to obtain than annotating thousands of examples. Second, our system outperforms existing state-of-the-art zero-shot systems by a significant margin. Third, we show that without reliance on task-specific supervision, one can achieve relatively robust transfer across datasets.
4,715
1812.10668
2906853528
Abstract Maximizing resource utilization through efficient resource provisioning is a key factor for any cloud provider: commercial actors can maximize their revenues, whereas scientific and non-commercial providers can maximize their infrastructure utilization. Traditionally, batch systems have allowed data centers to fill their resources as much as possible by using backfilling and similar techniques. However, in an IaaS cloud, where virtual machines are supposed to live indefinitely, or at least as long as the user is able to pay for them, these policies are not easily implementable. In this work we present a new scheduling algorithm for IaaS providers that supports preemptible instances, which can be stopped by higher-priority requests, without introducing large modifications in current cloud schedulers. This scheduler enables the implementation of new cloud usage and payment models that allow more efficient usage of the resources and potential new revenue sources for commercial providers. We also study the correctness and the performance overhead of the proposed scheduler against existing solutions.
Resource provisioning from cloud computing infrastructures using Spot Instances or similar mechanisms has been addressed profusely in the scientific literature in recent years @cite_10 . However, the vast majority of this work takes the users' perspective on using and consuming Spot Instances @cite_31 , and few works tackle the problem from the resource provider's standpoint.
{ "abstract": [ "An important feature of most cloud computing solutions is auto-scaling, an operation that enables dynamic changes on resource capacity. Auto-scaling algorithms generally take into account aspects such as system load and response time to determine when and by how much a resource pool capacity should be extended or shrunk. In this article, we propose a scheduling algorithm and auto-scaling triggering strategies that explore user patience, a metric that estimates the perception end-users have from the Quality of Service (QoS) delivered by a service provider based on the ratio between expected and actual response times for each request. The proposed strategies help reduce costs with resource allocation while maintaining perceived QoS at adequate levels. Results show reductions on resource-hour consumption by up to approximately 9 compared to traditional approaches. Mechanisms for resource auto-scaling in clouds considering users' patience.Methods for determining the step size of scaling operations under bound and unbounded maximum capacity.Users patience model inspired in prospect theory.", "Resource management in a cloud environment is a hard problem, due to: the scale of modern data centers; the heterogeneity of resource types and their interdependencies; the variability and unpredictability of the load; as well as the range of objectives of the different actors in a cloud ecosystem. Consequently, both academia and industry began significant research efforts in this area. In this paper, we survey the recent literature, covering 250+ publications, and highlighting key results. We outline a conceptual framework for cloud resource management and use it to structure the state-of-the-art review. Based on our analysis, we identify five challenges for future investigation. These relate to: providing predictable performance for cloud-hosted applications; achieving global manageability for cloud systems; engineering scalable resource management systems; understanding economic behavior and cloud pricing; and developing solutions for the mobile cloud paradigm ." ], "cite_N": [ "@cite_31", "@cite_10" ], "mid": [ "1646360816", "2034603054" ] }
An efficient cloud scheduler design supporting preemptible instances
Infrastructure as a Service (IaaS) Clouds make possible to provide computing capacity as a utility to the users following a pay-per-use model. This fact allows the deployment of complex execution environments without an upfront infrastructure commitment, fostering the adoption of the cloud by users that could not afford to operate an on-premises infrastructure. In this regard, Clouds are not only present in the industrial ICT ecosystem, and they are being more and more adopted by other stakeholders such as public administrations or research institutions. Indeed, clouds are nowadays common in the scientific computing field [1,2,3,4], due to the fact that they $ This is the author's accepted version of the following article:Álvaro López García, Enol Fernndez del Castillo, Isabel Campos Plasencia, "An efficient cloud scheduler design supporting preemptible instances", accepted in Future Generation Computer Systems, 2019, which is published in its final form at https://doi. org/10.1016/j.future.2018.12.057. This preprint article may be used for non-commercial purposes under a CC BY-NC-SA 4.0 license. are able to deliver resources that can be configured with the complete software needed for an application [5]. Moreover, they also allow the execution of non-transient tasks, making possible to execute virtual laboratories, databases, etc. that could be tightly coupled with the execution environments. This flexibility poses a great advantage against traditional computational modelssuch as batch systems or even Grid computing-where a fixed operating system is normally imposed and any complimentary tools (such as databases) need to be selfmanaged outside the infrastructure. This fact is pushing scientific datacenters outside their traditional boundaries, evolving into a mixture of services that deliver more added value to their users, with the Cloud as a prominent actor. Maximizing resource utilization by performing an efficient resource provisioning is a fundamental aspect for any resource provider, specially for scientific providers. Users accessing these computing resources do not usually pay -or at least they are not charged directly-for their consumption, and normally resources are paid via other indirect methods (like access grants), with users tending to assume that resources are for free. Scientific computing facilities tend to work on a fully saturated manner, aiming at the maximum possible resource uti-lization level. In this context it is common that compute servers spawned in a cloud infrastructure are not terminated at the end of their lifetime, resulting in idle resources, a state that is are not desirable as long as there is processing that needs to be done [4]. In a commercial this is not a problem, since users are being charged for their allocated resources, regardless if they are being used or not. Therefore users tend to take care of their virtual machines, terminating them whenever they are not needed anymore. Moreover, in the cases where users leave their resources running forever, the provider is still obtaining revenues for those resources. Cloud operators try to solve this problem by setting resource quotas that limits the amount of resources that a user or group is able to consume by doing a static partitioning of the resources [8]. However, this kind of resource allocation automatically leads to an underutilization of the infrastructure since the partitioning needs to be conservative enough so that other users could utilize the infrastructure. 
Quotas impose hard limits that leading to dedicated resources for a group, even if the group is not using the resources. Besides, cloud providers also need to provide their users with on-demand access to the resources, one of the most compelling cloud characteristics [9]. In order to provide such access, an overprovisioning of resources is expected [10] in order to fulfil user request, leading to an infrastructure where utilization is not maximized, as there should be always enough resources available for a potential request. Taking into account that some processing workloads executed on the cloud do not really require on-demand access (but rather they are executed for long periods of time), a compromise between these two aspects (i.e. maximizing utilization and providing enough ondemand access to the users) can be provided by using idle resources to execute these tasks that do not require truly on-demand access [10]. This approach indeed is common in scientific computing, where batch systems maximize the resource utilization through backfilling techniques, where opportunistic access is provided to these kind of tasks. Unlike in batch processing environments, virtual machines (VMs) spawned in a Cloud do not have fixed duration in time and are supposed to live forever -or until the user decides to stop them. Commercial cloud providers provide specific VM types (like the Amazon EC2 Spot Instances 1 or the Google Compute Engine Preemptible Virtual Machines 2 ) that can be provisioned at a fraction of a normal VM price, with the caveat that they can terminated whenever the provider decides to do so. This kind of VMs can be used to backfill idle resources, thus allowing to maximize the utilization and providing on-demand access, since normal VMs will obtain resources by evacuating Spot or Preemptible instances. In this paper we propose an efficient scheduling algorithm that combines the scheduling of preemptible and non preemptible instances in a modular way. The proposed solution is flexible enough in order to allow different allocation, selection and termination policies, thus allowing resource providers to easily implement and enforce the strategy that is more suitable for their needs. In our work we extend the OpenStack Cloud middleware with a prototype implementation of the proposed scheduler, as a way to demonstrate and evaluate the feasibility of our solution. We moreover perform an evaluation of the performance of this solution, in comparison with the existing OpenStack scheduler. The remainder of the paper is structured as follows. In Section 2 we present the related work in this field. In Section 3 we propose a design for an efficient scheduling mechanism for preemptible instances. In Section 4 we present an implementation of our proposed algorithm, as well as an evaluation of its feasibility and performance with regards with a normal scheduler. Finally, in Section 6 we present this work's conclusions. Scheduling in the existing Cloud Management Frameworks Generally speaking, existing Cloud Management Frameworks (CMFs) do not implement full-fledged queuing mechanism as other computing models do (like the Grid or traditional batch systems). Clouds are normally more focused on the rapid scaling of the resources rather than in batch processing, where systems are governed by queuing systems [34]. The default scheduling strategies in the current CMFs are mostly based on the immediate allocation or resources following a fistcome, first-served basis. 
The cloud schedulers provision them when requested, or they are not provisioned at all (except in some CMFs that implement a FIFO queuing mechanism) [35]. However, some users require for a queuing system -or some more advanced features like advance reservations-for running virtual machines. In those cases, there are some external services such as Haizea [36] for OpenNebula or Blazar 6 for OpenStack. Those systems lay between the CMF and the users, intercepting their requests and interacting with the cloud system on their behalf, implementing the required functionality. Besides simplistic scheduling policies like first-fit or random chance node selection [35], current CMF implement a scheduling algorithm that is based on a rank selection of hosts, as we will explain in what follows: OpenNebula 7 uses by default a match making scheduler, implementing the Rank Scheduling Policy [36]. This policy first performs a filtering of the existing hosts, excluding those that do not meet the request requirements. Afterwards, the scheduler evaluates some operator defined rank expressions against the recorded information from each of the hosts so as to obtain an ordered list of nodes. Finally, the resources with a higher rank are selected to fulfil the request. OpenNebula implements a queue to hold the requests that cannot be satisfied immediately, but this queuing mechanism follows a FIFO logic, without further priority adjustment. OpenStack 8 implements a Filter Scheduler [37], based on two separated phases. The first phase consists on the filtering of hosts that will exclude the hosts that cannot satisfy the request. This filtering follows a modular design, so that it is possible to filter out nodes based on the user request (RAM, number of vCPU), direct user input (such as instance affinity or anti-affinity) or operator configured filtering. The second phase consists on the weighing of hosts, following the same modular approach. Once the nodes are filtered and weighed, the best candidate is selected from that ordered set. CloudStack 9 utilizes the term allocator to determine which host will be selected to place the new VM requested. The nodes that are used by the allocators are the ones that are able to satisfy the request. Eucalyptus 10 implements a greedy or round robin algorithm. The former strategy uses the first node that is identified as suitable for running the VM. This algorithm exhausts a node before moving on to the next node available. On the other hand, the later schedules each request in a cyclic manner, distributing evenly the load in the long term. All the presented scheduling algorithms share the view that the nodes are firstly filtered out -so that only those that can run the request are considered-and then ordered or ranked according to some defined rules. Generally speaking, the scheduling algorithm can be expressed as the pseudo-code in the Algorithm 1. Preemptible Instances Design The initial assumption for a preemptible aware scheduler is that the scheduler should be able to take into account two different instance types -preemptible and normal-according to the following basic rules: if Filter(h i , req) then 5: Ω i ← 0 6: for all r, m in ranks do r is a rank function, m the rank multiplier 7: -If this is true, those instances should be terminated -according to some well defined rules-and the new VM should be scheduled into that freed node. 
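All the schedulers surveyed above share a filter-then-rank structure, which the generic pseudo-code referenced here captures. A minimal Python sketch of that loop is given below; the host and request layouts, the single RAM/CPU filter, and the free-RAM rank function are illustrative placeholders, not any specific CMF's implementation.

```python
# Generic filter-and-rank scheduling loop shared by the surveyed CMFs.
# Host/request structures and the filter/rank functions are placeholders.

def has_enough_resources(host, req):
    return host["free_ram"] >= req["ram"] and host["free_cpus"] >= req["vcpus"]

def free_ram_rank(host, req):
    return host["free_ram"] - req["ram"]

FILTERS = [has_enough_resources]
RANKS = [(1.0, free_ram_rank)]  # (multiplier, rank function)

def schedule(hosts, req, filters=FILTERS, ranks=RANKS):
    best_host, best_score = None, float("-inf")
    for host in hosts:
        if not all(f(host, req) for f in filters):
            continue                                         # filtering phase
        score = sum(m * r(host, req) for m, r in ranks)      # ranking phase
        if score > best_score:
            best_host, best_score = host, score
    return best_host                                         # None: scheduling failure

if __name__ == "__main__":
    hosts = [{"name": "host-A", "free_ram": 4096, "free_cpus": 2},
             {"name": "host-B", "free_ram": 8192, "free_cpus": 4}]
    req = {"ram": 2048, "vcpus": 1}
    print(schedule(hosts, req)["name"])   # host-B (more free RAM remaining)
```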
Ω i ← Ω i + m j * r j (h i , -If this is not possible, then the request should continue with the failure process defined in the scheduling algorithm -it can be an error, or it can be retried after some elapsed time. • If it is a preemptible instance, it should try to schedule it without other considerations. It should be noted that the preemptible instance selection and termination does not only depend on pure theoretical aspects, as this selection will have an influence on the resource provider revenues and the service level agreements signed with their users. Taking this into account, it is obvious that modularity and flexibility for the preemptible instance selection and is a key aspect here. For instance, an instance selection and termination algorithm that is only based on the minimization of instances terminated in order to free enough resources may not work for a provider that wish to terminate the instances that generate less revenues, event if it is needed to terminate a larger amount of instances. Therefore, the aim of our work is not only to design an scheduling algorithm, but also to design it as a modular system so that it would be possible to create any more complex model on top of it once the initial preemptible mechanism is in place. The most evident design approach is a retry mechanism based on two selection cycles within a scheduling loop. The scheduler will take into account a scheduling failure and then perform a second scheduling cycle after preemptible instances have been evacuated -either by the scheduler itself or by an external service. However, this two-cycle scheduling mechanism would introduce a larger scheduling latency and load in the system. This latency is something perceived negatively by the users [38] so the challenge here is how to perform this selection in a efficient way, ensuring that the selected preemptible instances are the less costly for the provider. Preemptible-aware scheduler Our proposed algorithm (depicted in Figure 1) addresses the preemptible instances scheduling within one scheduling loop, without introducing a retry cycle, bur rather performing the scheduling taking into account different host states depending on the instance that is to be scheduled. This design takes into account the fact that all the algorithms described in Section 2.1 are based on two complimentary phases: filtering and raking., but adds a final phase, where the preemptible instances that need to be terminated are selected. The algorithm pseudocode is shown in 2 and will be further described in what follows. As we already explained, the filtering phase eliminates the nodes that are not able to host the new request due to its current state -for instance, because of a lack of resources or a VM anti-affinity-, whereas the raking phase is the one in charge of assigning a rank or weight to the filtered hosts so that the best candidate is selected. I our preemptible-aware scheduler, the filtering phase only takes into account preemptible instances when doing the filtering phase. In order to do so we propose to utilize two different states for the physical hosts: h f This state will take into account all the running VM inside that host, that is, the preemptible and non preemptible instances. h n This state will not take into account all the preemptible instances inside that host. That is, the preemptible instances running into a particular physical host are not accounted in term of consumed resources. 
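The key idea of the preemptible-aware filtering phase, keeping two views of each physical host, can be sketched as follows: h_f accounts for every running VM, while h_n ignores the resources consumed by preemptible instances, so a normal request can be placed on top of them. The data layout (hosts holding a list of instances) is an assumption made for the sake of the example.

```python
# Sketch of the two host states used during filtering: h_f (all VMs counted)
# versus h_n (preemptible VMs ignored). Host/instance fields are assumptions.

def host_state(host, include_preemptible=True):
    used_ram = sum(vm["ram"] for vm in host["instances"]
                   if include_preemptible or not vm["preemptible"])
    used_cpus = sum(vm["vcpus"] for vm in host["instances"]
                    if include_preemptible or not vm["preemptible"])
    return {"name": host["name"],
            "free_ram": host["ram"] - used_ram,
            "free_cpus": host["cpus"] - used_cpus}

def filter_hosts(hosts, req):
    # Preemptible requests are filtered against the full state (h_f);
    # normal requests against the state without preemptible instances (h_n).
    include = req.get("preemptible", False)
    views = [host_state(h, include_preemptible=include) for h in hosts]
    return [v for v in views
            if v["free_ram"] >= req["ram"] and v["free_cpus"] >= req["vcpus"]]

if __name__ == "__main__":
    hosts = [{"name": "host-A", "ram": 16384, "cpus": 8, "instances": [
        {"ram": 16384, "vcpus": 8, "preemptible": True}]}]
    # Normal request: host-A still passes, since its only VM is preemptible.
    print(filter_hosts(hosts, {"ram": 4096, "vcpus": 2}))
    # Preemptible request: host-A is full and is filtered out.
    print(filter_hosts(hosts, {"ram": 4096, "vcpus": 2, "preemptible": True}))
```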
Whenever a new request arrives, the scheduler will use the h f or h n host states for the filtering phase, depending on the type of the request: if Filter(h i , req) then 10: Ω i ← 0 11: for all r, m in ranks do r is a rank function, m the rank multiplier 12: host ← Best Host(hosts) 22: Select and Terminate(req, host) 23: return host 24: end function • When a normal request arrives, the scheduler will use h n . Ω i ← Ω i + m j * r j (h f i , • When a preemptible request arrives, the scheduler will use h f . This way the scheduler ensures that a normal instance can run regardless of any preemptible instance occupying its place, as the h n state does not account for the resources consumed by any preemptible instance running on the host. After this stage, the resulting list of hosts will contain all the hosts susceptible to host the new request, either by evacuating one or several preemptible instances or because there are enough free resources. Once the hosts are filtered out, the ranking phase is started. However, in order to perform the correct ranking, it is needed to use the full state of the hosts, that is, h f . This is needed as the different rank functions will require the information about the preemptible instances so as to select the best node. This list of filtered hosts may contain hosts that are able to accept the request because they have free resources and nodes that would imply the termination of one or several instances. In order to choose the best host for scheduling a new instance new ranking functions need to be implemented, in order to prioritise the costless host. The simplest ranking function based on the number of preemptible instances per host is described in Algorithm 3. This function assigns a negative value if the free resources are not enough to accommodate the request, detecting an overcommit produced by the fact that it is needed to terminate one or several preemptible instances. However, this basic function only establishes a naive ranking based on the termination or not of instances. In the case that it is needed to terminate various instances, this function does not establish any rank between them, so more appropriate rank functions need to be created, depending on the business model implemented by the provider. Our design takes this fact into account, allowing for modularity of these cost functions that can be applied to the raking function. For instance, commercial providers tend to charge by complete periods of 1 h, so partial hours are not accounted. A ranking function based in this business model can be expressed as Algorithm 4, ranking hosts according to the preemptible instances running inside them and the time needed until the next complete period. Algorithm 4 Ranking function based on 1 h consumption periods. 1 Once the ranking phase is finished, the scheduler will have built an ordered list of hosts, containing the best candidates for the new request. Once the best host selected it is still needed to select which individual preemptible instances need to be evacuated from that host, if any. Our design adds a third phase, so as to terminate the preemptible instances if needed. This last phase will perform an additional raking and selection of the candidate preemptible instances inside the selected host, so as to select the less costly for the provider. 
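A host ranking function in the spirit of Algorithms 3 and 4 can be sketched as follows: hosts that can take the request without terminating anything receive a non-negative weight, while hosts that would need to evacuate preemptible instances receive a negative weight whose magnitude grows with the partial billing hour that would be wasted. The 60-minute billing period and the host layout are assumptions; real deployments would plug in their own cost model.

```python
# Sketch of a host ranking function mixing the overcommit penalty (Algorithm 3)
# with the 1 h billing-period cost (Algorithm 4). Fields are assumptions.

BILLING_PERIOD_MIN = 60

def wasted_partial_period(runtime_min):
    """Minutes already consumed in the current (not yet billed) period."""
    return runtime_min % BILLING_PERIOD_MIN

def rank_host(host, req):
    free_ram = host["ram"] - sum(vm["ram"] for vm in host["instances"])
    if free_ram >= req["ram"]:
        return free_ram                      # no termination needed
    # Host only usable by evacuating preemptible instances: penalize it by the
    # partial-hour consumption that would be thrown away.
    preemptible = [vm for vm in host["instances"] if vm["preemptible"]]
    return -sum(wasted_partial_period(vm["runtime_min"]) for vm in preemptible)

if __name__ == "__main__":
    hosts = [
        {"name": "A", "ram": 16384, "instances": [
            {"ram": 16384, "preemptible": True, "runtime_min": 120}]},
        {"name": "B", "ram": 16384, "instances": [
            {"ram": 16384, "preemptible": True, "runtime_min": 119}]},
    ]
    req = {"ram": 4096}
    best = max(hosts, key=lambda h: rank_host(h, req))
    print(best["name"])   # A: its instance sits exactly at a complete billing period
```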
This selection leverages a similar ranking process, performed on the preemptible instances, considering all the preemptible instances combination and the costs for the provider, as shown in Algorithm 5. Evaluation In the first part of this section (4.2) we will describe an implementation -done for the OpenStack Compute CMF-, in order to evaluate our proposed algorithm. We have decided to implement it on top of the Open-Stack Compute software due to its modular design, that allowed us to easily plug our modified modules without requiring significant modifications to the code core. Afterwards we will perform two different evaluations. On the one hand we will assess the algorithm Algorithm 5 Preemptible instance selection and termination. 1 Terminate(selected instances) 11: end procedure correctness, ensuring that the most desirable instances are selected according to the configured weighers (Section 4.4). On the other hand we will examine the performance of the proposed algorithm when compared with the default scheduling mechanism (Section 4.5). OpenStack Compute Filter Scheduler The OpenStack Compute scheduler is called Filter Scheduler and, as already described in Section 2, it is a rank scheduler, implementing two different phases: filtering and weighting. Filtering The first step is the filtering phase. The scheduler applies a concatenation of filter functions to the initial set of available hosts, based on the host properties and state -e.g. free RAM or free CPU number-user input -e.g. affinity or anti-affinity with other instances-and resource provider defined configuration. When the filtering process has concluded, all the hosts in the final set are able to satisfy the user request. Weighing Once the filtering phase returns a list of suitable hosts, the weighting stage starts so that the best host -according to the defined configuration-is selected. The scheduler will apply all hosts the same set of weigher functions w i (h), taking into account each host state h. Those weigher functions will return a value considering the characteristics of the host received as input parameter, therefore, total weight Ω for a node h is calculated as follows: Ω = n m i · N(w i (h)) Where m i is the multiplier for a weigher function, N(w i (h)) is the normalized weight between [0, 1] calculated via a rescaling like: N(w i (h)) = w i (h) − min W max W − min W where w i (h) is the weight function, and min W, max W are the minimum and maximum values that the weigher has assigned for the set of weighted hosts. This way, the final weight before applying the multiplication factor will be always in the range [0, 1]. After these two phases have ended, the scheduler has a set of hosts, ordered according to the weights assigned to them, thus it will assign the request to the host with the maximum weight. If several nodes have the same weight, the final host will be randomly selected from that set. Implementation Evaluation We have extended the Filter Scheduler algorithm with the functionality described in Algorithm 6. We have also implemented the ranking functions described in Algorithm 3 and Algorithm 4 as weighers, using the Open-Stack terminology. Moreover, the Filter Scheduler has been also modified so as to introduce the additional select and termination phase (Algorithm 5). This phase has been implemented following the same same modular approach as the OpenStack weighting modules, allowing to define and implement additional cost modules to determine which instances are to be selected for termination. 
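The weight combination used by the Filter Scheduler, Ω = Σ m_i · N(w_i(h)) with min-max normalization across the candidate hosts, is easy to reproduce in a few lines. The sketch below uses placeholder weigher functions and multipliers; it only illustrates the normalization and aggregation described above, not OpenStack's actual weigher classes.

```python
# Sketch of the OpenStack-style weight combination: evaluate each weigher on
# every filtered host, rescale to [0, 1] across the host set, multiply by the
# configured multiplier, and sum. Weighers and multipliers are placeholders.

def normalize(values):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]          # all hosts equivalent for this weigher
    return [(v - lo) / (hi - lo) for v in values]

def combined_weights(hosts, weighers):
    """weighers: list of (multiplier, weigher_function) pairs."""
    totals = [0.0] * len(hosts)
    for multiplier, weigher in weighers:
        raw = [weigher(h) for h in hosts]
        for i, n in enumerate(normalize(raw)):
            totals[i] += multiplier * n
    return totals

if __name__ == "__main__":
    hosts = [{"name": "A", "free_ram": 2048, "free_cpus": 1},
             {"name": "B", "free_ram": 8192, "free_cpus": 4},
             {"name": "C", "free_ram": 4096, "free_cpus": 8}]
    weighers = [(1.0, lambda h: h["free_ram"]), (2.0, lambda h: h["free_cpus"])]
    weights = combined_weights(hosts, weighers)
    best = hosts[max(range(len(hosts)), key=weights.__getitem__)]
    print(best["name"], weights)
```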
As for the cost functions, we have implemented a module following Algorithm 4. This cost function assumes that customers are charged by periods of 1 h, therefore it prioritizes the termination of Spot Instances with the lower partial-hour consumption (i.e. if we consider instances with 120 min, 119 min and 61 min of duration, the instance with 120 min will be terminated). This development has been done on the OpenStack Newton version 11 , and was deployed on the infrastructure that we describe in Section 4.3. Terminate(selected instances) 30: end procedure Algorithm 6 Preemptible Instances Configurations In order to evaluate our algorithm proposal we have set up a dedicated test infrastructure comprising a set of 26 identical IBM HS21 blade servers, with the characteristics described in Table 1. All the nodes had an identical base installation, based on an Ubuntu Server 16.04 LTS, running the Linux 3.8.0 Kernel, where we have deployed OpenStack Compute as the Cloud Management Framework. The system architecture is as follows: • An Image Catalog running the OpenStack Image Service (Glance) serving images from its local disk. • 24 Compute Nodes running OpenStack Compute, hosting the spawned instances. The network setup of the testbed consists on two 10 Gbit Ethernet switches, interconnected with a 10 Gbit Ethernet link. All the hosts are evenly connected to these switches using a 1 Gbit Ethernet connection. We have considered the VM sizes described in Table 2, based on the default set of sizes existing in a default OpenStack installation. Algorithm Evaluation The purpose of this evaluation is to ensure that the proposed algorithm is working as expected, so that: • The scheduler is able to deliver the resources for a normal request, by terminating one or several preemptible instances when there are not enough free idle resources. • The scheduler selects the best preemptible instance for termination, according to the configured policies by means of the scheduler weighers. Scheduling using same Virtual Machine sizes For the first batch of tests, we have considered same size instances, to evaluate if the proposed algorithm chooses the best physical host and selects the best preemptible instance for termination. We generated requests for both preemptible and normal instances -chosen randomly-, of random duration between 10 min and 300 min, using an exponential distribution [39] until the first scheduling failure for a normal instance was detected. The compute nodes used have 16 GB of RAM and eight CPUs, as already described. The VM size requested was the medium one, according to Table 2, therefore each compute node could host up to four VMs. We executed these requests and monitored the infrastructure until the first scheduling failure for a normal instance took place, thus the preemptible instance termination mechanism was triggered. At that moment we took a snapshot of the nodes statuses, as shown in Table 3 and Table 4. These tables depict the status for each of the physical hosts, as well as the running time for each of the instances that were running at that point. The shaded cells represents the preemptible instance that was terminated to free the resources for the incoming non preemptible request. Considering that the preemptible instance selection was done according to Algorithm 5 using the cost function in Algorithm 4, the chosen instance has to be the one with the lowest partial-hour period. In Table 3 this is the instance marked with ( 1 ): BP1. 
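The behaviour verified in these tests, picking the set of preemptible instances whose termination wastes the least of the current billing hour while freeing enough resources, can be sketched with a brute-force search over instance combinations, in the spirit of Algorithm 5 combined with the one-hour cost module. The instance fields are assumptions, and the exhaustive enumeration is for clarity only; the toy numbers mirror the sums of remainders discussed in the evaluation.

```python
from itertools import combinations

def partial_hour(vm):
    """Minutes consumed in the current (not yet complete) billing hour."""
    return vm["runtime_min"] % 60

def select_for_termination(preemptible_vms, ram_needed):
    """Among all combinations freeing at least ram_needed, pick the one with
    the smallest total partial-hour remainder."""
    best, best_cost = None, float("inf")
    for k in range(1, len(preemptible_vms) + 1):
        for combo in combinations(preemptible_vms, k):
            if sum(vm["ram"] for vm in combo) < ram_needed:
                continue
            cost = sum(partial_hour(vm) for vm in combo)
            if cost < best_cost:
                best, best_cost = combo, cost
    return best

if __name__ == "__main__":
    vms = [{"name": "AP2", "ram": 2048, "runtime_min": 20},
           {"name": "AP3", "ram": 2048, "runtime_min": 15},
           {"name": "AP4", "ram": 2048, "runtime_min": 20},
           {"name": "BP1", "ram": 8192, "runtime_min": 58}]
    chosen = select_for_termination(vms, ram_needed=6144)
    print([vm["name"] for vm in chosen])
    # ['AP2', 'AP3', 'AP4']: total remainder 55, cheaper than terminating BP1 (58)
```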
By chance, it cor- responds with the preemptible instance with the lowest run time. Table 4 shows a different test execution under the same conditions and constraints. Again, the selected instance has to be the one with the lowest partial-hour period. In Table 4 this corresponds to the instance marked again with ( 1 ): CP1, as its remainder is 1 min. In this case this is not the preemptible instance with the lowest run time (being it CP2). Scheduling using different Virtual Machine sizes For the second batch of tests we requested instances using different sizes, always following the sizes in Table 2. Table 5 depicts the testbed status when a request for a large VM caused the termination of the instances marked with ( 1 ): AP2, AP3 and AP4. In this case, the scheduler decided that the termination of these three instances caused a smaller impact on the provider, as the sum of their 1 h remainders (55) was lower than any of the other possibilities (58 for BP1, 57 for CP1, 112 for CP2 and CP3). Table 6 shows a different test execution under the same conditions and constraints. In this case, the preemptible instance termination was triggered by a new VM request of size medium and the selected instance was the one marked with ( 1 ): BP3, as host-B will have enough free space just by terminating one instance. Performance evaluation As we have already said in Section 3, we have focused on designing an algorithm that does not introduce a significant latency in the system. This latency will introduce a larger delay when delivering the requested resources to the end users, something that is not desirable by any resource provider [4]. In order to evaluate the performance of our proposed algorithm we have done a comparison with the default, unmodified OpenStack Filter Scheduler. Moreover, for the sake of comparison, we have implemented a scheduler based on a retry loop as well. This scheduler performs a normal scheduling loop, and if there is a scheduling failure for a normal instance, it will perform a second pass taking into account the existing preemptible instances. The preemptible instance selection and termination mechanisms remain the same. We have scheduled 130 Virtual Machines of the same size on our test infrastructure and we have recorded the timings for the scheduling function, thus calculating the means and standard deviation for each of the following scenarios: • Using the original, unmodified OpenStack Filter scheduler with an empty infrastructure. • Using the preemptible instances Filter Scheduler and the retry scheduler: -Requesting normal instances with an empty infrastructure. -Requesting preemptible instances with an empty infrastructure. -Requesting normal instances with a saturated infrastructure, thus implying the termination of a preemptible instance each time a request is performed. We have then collected the scheduling calls timings and we have calculated the means and deviations for each scenario, as shown in Figure 2. Numbers in these scenarios are quite low, since the infrastructure is a small testbed, but these numbers are expected to become larger as the infrastructure grows in size. As it can be seen in the aforementioned Figure 2, our solution introduces a delay in the scheduling calls, as we need to calculate additional host states (we hold two different states for each node) and we need to select a preemptible instance for termination (in case it is needed). 
In the case of the retry scheduler, this delay does not exists and numbers are similar to the original scheduler. However, when it is needed to trigger the termination of a preemptible instance, having a retry mechanism (thus executing the same scheduling call two times) introduces a significantly larger penalty when compared to our proposed solution. We consider that the latency that we are introducing is within an acceptable range, therefore not impacting significantly the scheduler performance. Exploitation and integration in existing infrastructures The functionality introduced by the preemptible instances model that we have described in this work can be exploited not only within a cloud resource provider, but it can also be leveraged on more complex hybrid infrastructures. High Performance Computing Integration One can find in the literature several exercises of integration of hybrid infrastructures, integrating cloud resources, commercial or private, with High Performance Computing (HPC) resources. Those efforts focus on outbursting resources from the cloud, when the HPC system does not provide enough resources to solve a particular problem [41]. On-demand provisioning using cloud resources when the batch system of the HPC is full is certainly a viable option to expand the capabilities of a HPC center for serial batch processing. We focus however in the complementary approach, this is, using HPC resources to provide cloud resources capability, so as to complement existing distributed infrastructures. Obviously HPC systems are oriented to batch processing of highly coupled (parallel) jobs. The question here is optimizing resource utilization when the HPC batch system has empty slots. If we backfill the empty slots of a HPC system with cloud jobs, and a new regular batch job arrives from the HPC users, the cloud jobs occupying the slots needed by the newly arrived batch job should be terminated immediately, so as to not disturb regular work. Therefore such cloud jobs should be submitted as Spot Instances Enabling HPC systems to process other jobs during periods in which the load of the HPC mainframe is low, appears as an attractive possibility from the point of view of resource optimization. However the practical implementation of such idea would need to be compatible with both, the HPC usage model, and the cloud usage model. In HPC systems users login via ssh to a frontend. At the frontend the user has the tools to submit jobs. The scheduling of HPC jobs is done using a regular batch systems software (such as SLURM, SGE, etc...). HPC systems are typically running MPI parallel jobs as well using specialized hardware interconnects such as Infiniband. Let us imagine a situation in which the load of the HPC system is low. One can instruct the scheduler of the batch system to allow cloud jobs to HPC system occupying those slots not allocated by the regular batch allocation. In order to be as less disrupting as possible the best option is that the cloud jobs arrive as preemptible instances as described through this paper. When a batch job arrives to the HPC system, this job should be immediately scheduled and executed. Therefore the scheduler should be able to perform the following steps: • Allocate resources for the job that just arrived to the batch queue system • Identify the cloud jobs that are occupying those resources, and stop them. • Dispatch the batch job. 
In the case of parallel jobs the scheduling decision may depend on many factors like the topology of the network requested, or the affinity of the processes at the core/CPU level. In any case parallel jobs using heavily the low latency interconnect should not share nodes with any other job. High Throughput Computing Integration Existing High Throughput Computing Infrastructures, like the service offered by EGI 12 , could benefit from a cloud providers offering preemptible instances. It has been shown that cloud resources and IaaS offerings can be used to run HTC tasks [42] in a pull mode, where cloud instances are started in a way that they are able to pull computing tasks from a central location (for example using a distributed batch system like HTCondor). However, sites are reluctant to offer large amounts of resources to be used in this mode due to the lack of a fixed duration for cloud instances. In this context, federated cloud e-Infrastrucutres like the EGI Federated Cloud [43], could benefit from resource providers offering preemptible instances. Users could populate idle resources with preemptible instances pulling their HTC tasks, whereas interactive and normal IaaS users will not be impacted negatively, as they will get the requests satisfied. In this way, large amounts of cloud computing power could be offered to the European research community. Conclusions In this work we have proposed a preemptible instance scheduling design that does not modify substantially the existing scheduling algorithms, but rather enhances them. The modular rank and cost mechanisms allows the definition and implementation of any resource provider defined policy by means of additional pluggable rankers. Our proposal and implementation enables all kind of service providers -whose infrastructure is managed by open source middleware such as OpenStack-to offer a new access model based on preemptible instances, with a functionality similar to the one offered by the major commercial providers. We have checked for the algorithm correctness when selecting the preemptible instances for termination. The results yield that the algorithm behaves as expected. Moreover we have compared the scheduling performance with regards equivalent default scheduler, obtaining similar results, thus ensuring that the scheduler performance is not significantly impacted. This implementation allows to apply more complex policies on top of the preemptible instances, like instance termination based on price fluctuations (that is, implementing a preemptible instance stock market), 12 https://www.egi.eu/services/ high-throughput-compute/ preemptible instance migration so as to consolidate them or proactive instance termination to maximize the provider's revenues by not delivering computing power at no cost to the users.
5,471
1812.10668
2906853528
Abstract Maximizing resource utilization through efficient resource provisioning is a key factor for any cloud provider: commercial actors can maximize their revenues, whereas scientific and non-commercial providers can maximize their infrastructure utilization. Traditionally, batch systems have allowed data centers to fill their resources as much as possible by using backfilling and similar techniques. However, in an IaaS cloud, where virtual machines are supposed to live indefinitely, or at least as long as the user is able to pay for them, these policies are not easily implementable. In this work we present a new scheduling algorithm for IaaS providers that supports preemptible instances, which can be stopped by higher-priority requests, without introducing large modifications in current cloud schedulers. This scheduler enables the implementation of new cloud usage and payment models that allow more efficient usage of the resources and potential new revenue sources for commercial providers. We also study the correctness and the performance overhead of the proposed scheduler against existing solutions.
Due to the unpredictable nature of Spot Instances, several research papers try to improve the task completion time (making the task resilient against termination) and to reduce the costs for the user. @cite_26 proposes a probabilistic model to obtain the bid prices so that costs, performance and reliability can be improved. In @cite_13 @cite_3 @cite_19 @cite_30 task checkpointing is addressed so as to minimize costs and improve the overall completion time.
{ "abstract": [ "The cloud computing is a computing paradigm that users can rent computing resources from service providers as much as they require. A spot instance in cloud computing helps a user to utilize resources with less expensive cost, even if it is unreliable. When a user performs tasks with unreliable spot instances, failures inevitably lead to the delay of task completion time and cause a seriously deterioration in the QoS of users. Therefore, we propose a price history based checkpointing scheme to avoid the delay of task completion time. The proposed checkpointing scheme reduces the number of checkpoint trials and improves the performance of task execution. The simulation results show that our scheme outperforms the existing checkpointing schemes in terms of the reduction of both the number of checkpoint trials and total costs per spot instance for user's bid.", "", "Recently introduced spot instances in the Amazon Elastic Compute Cloud (EC2) offer low resource costs in exchange for reduced reliability; these instances can be revoked abruptly due to price and demand fluctuations. Mechanisms and tools that deal with the cost-reliability tradeoffs under this schema are of great value for users seeking to lessen their costs while maintaining high reliability. We study how mechanisms, namely, checkpointing and migration, can be used to minimize the cost and volatility of resource provisioning. Based on the real price history of EC2 spot instances, we compare several adaptive checkpointing schemes in terms of monetary costs and improvement of job completion times. We evaluate schemes that apply predictive methods for spot prices. Furthermore, we also study how work migration can improve task completion in the midst of failures while maintaining low monetary costs. Trace-based simulations show that our schemes can reduce significantly both monetary costs and task completion times of computation on spot instance.", "In late 2009, Amazon introduced spot instances to offer their unused resources at lower cost with reduced reliability. Amazon's spot instances allow customers to bid on unused Amazon EC2 capacity and run those instances for as long as their bid exceeds the current spot price. The spot price changes periodically based on supply and demand of spot instances, and customers whose bid exceeds it gain access to the available spot instances. Customers may expect their services at lower cost with spot instances compared to on-demand or reserved. However the reliability is compromised since the instances (IaaS) providing the service (SaaS) may become unavailable at any time without any notice to the customer. In this paper, we study various checkpointing schemes to increase the reliability over spot instances. Also we devise a novel checkpointing scheme on top of application-centric resource provisioning framework that increases the reliability while reducing the cost significantly.", "Recently introduced spot instances in the Amazon Elastic Compute Cloud (EC2) offer lower resource costs in exchange for reduced reliability; these instances can be revoked abruptly due to price and demand fluctuations. Mechanisms and tools that deal with the cost-reliability trade-offs under this schema are of great value for users seeking to lessen their costs while maintaining high reliability. We study how one such a mechanism, namely check pointing, can be used to minimize the cost and volatility of resource provisioning. 
Based on the real price history of EC2 spot instances, we compare several adaptive check pointing schemes in terms of monetary costs and improvement of job completion times. Trace-based simulations show that our approach can reduce significantly both price and the task completion times." ], "cite_N": [ "@cite_30", "@cite_26", "@cite_3", "@cite_19", "@cite_13" ], "mid": [ "2206870717", "", "2160016350", "1552783944", "2008793665" ] }
An efficient cloud scheduler design supporting preemptible instances $
$ This is the author's accepted version of the following article: Álvaro López García, Enol Fernández del Castillo, Isabel Campos Plasencia, "An efficient cloud scheduler design supporting preemptible instances", accepted in Future Generation Computer Systems, 2019, which is published in its final form at https://doi.org/10.1016/j.future.2018.12.057. This preprint article may be used for non-commercial purposes under a CC BY-NC-SA 4.0 license. Infrastructure as a Service (IaaS) Clouds make it possible to provide computing capacity as a utility to users following a pay-per-use model. This fact allows the deployment of complex execution environments without an upfront infrastructure commitment, fostering the adoption of the cloud by users that could not afford to operate an on-premises infrastructure. In this regard, Clouds are not only present in the industrial ICT ecosystem; they are being more and more adopted by other stakeholders such as public administrations or research institutions. Indeed, clouds are nowadays common in the scientific computing field [1,2,3,4], due to the fact that they are able to deliver resources that can be configured with the complete software needed for an application [5]. Moreover, they also allow the execution of non-transient tasks, making it possible to execute virtual laboratories, databases, etc. that could be tightly coupled with the execution environments. This flexibility poses a great advantage over traditional computational models -such as batch systems or even Grid computing- where a fixed operating system is normally imposed and any complementary tools (such as databases) need to be self-managed outside the infrastructure. This fact is pushing scientific datacenters outside their traditional boundaries, evolving into a mixture of services that deliver more added value to their users, with the Cloud as a prominent actor. Maximizing resource utilization by performing an efficient resource provisioning is a fundamental aspect for any resource provider, especially for scientific providers. Users accessing these computing resources do not usually pay -or at least they are not charged directly- for their consumption, and normally resources are paid via other indirect methods (like access grants), with users tending to assume that resources are free. Scientific computing facilities tend to work in a fully saturated manner, aiming at the maximum possible resource utilization level. In this context it is common that compute servers spawned in a cloud infrastructure are not terminated at the end of their lifetime, resulting in idle resources, a state that is not desirable as long as there is processing that needs to be done [4]. In a commercial cloud this is not a problem, since users are being charged for their allocated resources, regardless of whether they are being used or not. Therefore users tend to take care of their virtual machines, terminating them whenever they are not needed anymore. Moreover, in the cases where users leave their resources running forever, the provider is still obtaining revenues for those resources. Cloud operators try to solve this problem by setting resource quotas that limit the amount of resources that a user or group is able to consume, by doing a static partitioning of the resources [8]. However, this kind of resource allocation automatically leads to an underutilization of the infrastructure, since the partitioning needs to be conservative enough so that other users could utilize the infrastructure.
Quotas impose hard limits that lead to dedicated resources for a group, even if the group is not using the resources. Besides, cloud providers also need to provide their users with on-demand access to the resources, one of the most compelling cloud characteristics [9]. In order to provide such access, an overprovisioning of resources is expected [10] to fulfil user requests, leading to an infrastructure where utilization is not maximized, as there should always be enough resources available for a potential request. Taking into account that some processing workloads executed on the cloud do not really require on-demand access (but rather are executed for long periods of time), a compromise between these two aspects (i.e. maximizing utilization and providing enough on-demand access to the users) can be provided by using idle resources to execute those tasks that do not require truly on-demand access [10]. This approach is indeed common in scientific computing, where batch systems maximize resource utilization through backfilling techniques, providing opportunistic access to this kind of tasks. Unlike in batch processing environments, virtual machines (VMs) spawned in a Cloud do not have a fixed duration in time and are supposed to live forever -or until the user decides to stop them. Commercial cloud providers offer specific VM types (like the Amazon EC2 Spot Instances 1 or the Google Compute Engine Preemptible Virtual Machines 2 ) that can be provisioned at a fraction of a normal VM price, with the caveat that they can be terminated whenever the provider decides to do so. This kind of VMs can be used to backfill idle resources, thus making it possible to maximize utilization while still providing on-demand access, since normal VMs will obtain resources by evacuating Spot or Preemptible instances. In this paper we propose an efficient scheduling algorithm that combines the scheduling of preemptible and non-preemptible instances in a modular way. The proposed solution is flexible enough to allow different allocation, selection and termination policies, thus allowing resource providers to easily implement and enforce the strategy that is most suitable for their needs. In our work we extend the OpenStack Cloud middleware with a prototype implementation of the proposed scheduler, as a way to demonstrate and evaluate the feasibility of our solution. We moreover perform an evaluation of the performance of this solution, in comparison with the existing OpenStack scheduler. The remainder of the paper is structured as follows. In Section 2 we present the related work in this field. In Section 3 we propose a design for an efficient scheduling mechanism for preemptible instances. In Section 4 we present an implementation of our proposed algorithm, as well as an evaluation of its feasibility and performance with regard to a normal scheduler. Finally, in Section 6 we present this work's conclusions. Scheduling in the existing Cloud Management Frameworks Generally speaking, existing Cloud Management Frameworks (CMFs) do not implement a full-fledged queuing mechanism as other computing models do (like the Grid or traditional batch systems). Clouds are normally more focused on the rapid scaling of resources rather than on batch processing, where systems are governed by queuing systems [34]. The default scheduling strategies in the current CMFs are mostly based on the immediate allocation of resources following a first-come, first-served basis.
The cloud schedulers provision them when requested, or they are not provisioned at all (except in some CMFs that implement a FIFO queuing mechanism) [35]. However, some users require a queuing system -or some more advanced features like advance reservations- for running virtual machines. In those cases, there are some external services such as Haizea [36] for OpenNebula or Blazar 6 for OpenStack. Those systems lie between the CMF and the users, intercepting their requests and interacting with the cloud system on their behalf, implementing the required functionality. Besides simplistic scheduling policies like first-fit or random chance node selection [35], current CMFs implement a scheduling algorithm that is based on a rank selection of hosts, as we will explain in what follows: OpenNebula 7 uses by default a match-making scheduler, implementing the Rank Scheduling Policy [36]. This policy first performs a filtering of the existing hosts, excluding those that do not meet the request requirements. Afterwards, the scheduler evaluates some operator-defined rank expressions against the recorded information from each of the hosts so as to obtain an ordered list of nodes. Finally, the resources with a higher rank are selected to fulfil the request. OpenNebula implements a queue to hold the requests that cannot be satisfied immediately, but this queuing mechanism follows a FIFO logic, without further priority adjustment. OpenStack 8 implements a Filter Scheduler [37], based on two separate phases. The first phase consists of the filtering of hosts, which will exclude the hosts that cannot satisfy the request. This filtering follows a modular design, so that it is possible to filter out nodes based on the user request (RAM, number of vCPUs), direct user input (such as instance affinity or anti-affinity) or operator-configured filtering. The second phase consists of the weighing of hosts, following the same modular approach. Once the nodes are filtered and weighed, the best candidate is selected from that ordered set. CloudStack 9 utilizes the term allocator to determine which host will be selected to place the new VM requested. The nodes that are used by the allocators are the ones that are able to satisfy the request. Eucalyptus 10 implements a greedy or round-robin algorithm. The former strategy uses the first node that is identified as suitable for running the VM; this algorithm exhausts a node before moving on to the next available node. On the other hand, the latter schedules each request in a cyclic manner, distributing the load evenly in the long term. All the presented scheduling algorithms share the view that the nodes are first filtered out -so that only those that can run the request are considered- and then ordered or ranked according to some defined rules. Generally speaking, the scheduling algorithm can be expressed as the pseudo-code in Algorithm 1: filter the hosts against the request, compute a weighted sum of rank functions for each remaining host, and select the best-ranked one. Preemptible Instances Design The initial assumption for a preemptible-aware scheduler is that the scheduler should be able to take into account two different instance types -preemptible and normal- according to the following basic rules: if the request is for a normal instance, the scheduler should check whether it can be satisfied, possibly by terminating preemptible instances. If this is possible, those instances should be terminated -according to some well defined rules- and the new VM should be scheduled into that freed node.
If this is not possible, then the request should continue with the failure process defined in the scheduling algorithm -it can be an error, or it can be retried after some elapsed time. • If it is a preemptible instance, the scheduler should try to schedule it without other considerations. It should be noted that the preemptible instance selection and termination does not only depend on purely theoretical aspects, as this selection will have an influence on the resource provider revenues and the service level agreements signed with their users. Taking this into account, it is obvious that modularity and flexibility for the preemptible instance selection and termination is a key aspect here. For instance, an instance selection and termination algorithm that is only based on minimizing the number of instances terminated in order to free enough resources may not work for a provider that wishes to terminate the instances that generate less revenue, even if it is needed to terminate a larger number of instances. Therefore, the aim of our work is not only to design a scheduling algorithm, but also to design it as a modular system, so that it would be possible to create any more complex model on top of it once the initial preemptible mechanism is in place. The most evident design approach is a retry mechanism based on two selection cycles within a scheduling loop. The scheduler will take into account a scheduling failure and then perform a second scheduling cycle after preemptible instances have been evacuated -either by the scheduler itself or by an external service. However, this two-cycle scheduling mechanism would introduce a larger scheduling latency and load in the system. This latency is something perceived negatively by the users [38], so the challenge here is how to perform this selection in an efficient way, ensuring that the selected preemptible instances are the least costly for the provider. Preemptible-aware scheduler Our proposed algorithm (depicted in Figure 1) addresses the preemptible instance scheduling within one scheduling loop, without introducing a retry cycle, but rather performing the scheduling taking into account different host states depending on the instance that is to be scheduled. This design takes into account the fact that all the algorithms described in Section 2.1 are based on two complementary phases, filtering and ranking, but adds a final phase where the preemptible instances that need to be terminated are selected. The algorithm pseudocode is shown in Algorithm 2 and will be further described in what follows. As we already explained, the filtering phase eliminates the nodes that are not able to host the new request due to their current state -for instance, because of a lack of resources or a VM anti-affinity-, whereas the ranking phase is the one in charge of assigning a rank or weight to the filtered hosts so that the best candidate is selected. In our preemptible-aware scheduler, preemptible instances receive a special treatment during the filtering phase. In order to do so we propose to utilize two different states for the physical hosts: h f This state takes into account all the VMs running inside that host, that is, both the preemptible and the non-preemptible instances. h n This state does not take into account the preemptible instances inside that host. That is, the preemptible instances running on a particular physical host are not accounted for in terms of consumed resources.
Whenever a new request arrives, the scheduler will use the h f or h n host state for the filtering phase, depending on the type of the request (Algorithm 2): • When a normal request arrives, the scheduler will use h n . • When a preemptible request arrives, the scheduler will use h f . This way the scheduler ensures that a normal instance can run regardless of any preemptible instance occupying its place, as the h n state does not account for the resources consumed by any preemptible instance running on the host. After this stage, the resulting list of hosts will contain all the hosts susceptible of hosting the new request, either by evacuating one or several preemptible instances or because there are enough free resources. Once the hosts are filtered out, the ranking phase is started. However, in order to perform the correct ranking, it is necessary to use the full state of the hosts, that is, h f . This is needed because the different rank functions will require the information about the preemptible instances so as to select the best node. This list of filtered hosts may contain hosts that are able to accept the request because they have free resources, as well as nodes that would imply the termination of one or several instances. In order to choose the best host for scheduling a new instance, new ranking functions need to be implemented, so as to prioritise the least costly host. The simplest ranking function, based on the number of preemptible instances per host, is described in Algorithm 3. This function assigns a negative value if the free resources are not enough to accommodate the request, detecting an overcommit produced by the fact that one or several preemptible instances need to be terminated. However, this basic function only establishes a naive ranking based on whether instances have to be terminated or not. In the case that several instances need to be terminated, this function does not establish any rank between them, so more appropriate rank functions need to be created, depending on the business model implemented by the provider. Our design takes this fact into account, allowing for modularity of the cost functions that can be applied to the ranking function. For instance, commercial providers tend to charge by complete periods of 1 h, so partial hours are not accounted for. A ranking function based on this business model can be expressed as Algorithm 4, ranking hosts according to the preemptible instances running inside them and the time needed until the next complete period. Once the ranking phase is finished, the scheduler will have built an ordered list of hosts, containing the best candidates for the new request. Once the best host is selected, it is still necessary to select which individual preemptible instances need to be evacuated from that host, if any. Our design adds a third phase, so as to terminate the preemptible instances if needed. This last phase performs an additional ranking and selection of the candidate preemptible instances inside the selected host, so as to select the least costly ones for the provider.
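As an illustration only, the following Python sketch condenses the single-pass flow described above: the two host states h f and h n, the type-dependent filtering and the weighted ranking. All class and function names (HostState, schedule, the example rank function) are hypothetical and do not correspond to the actual OpenStack or prototype code.

    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class HostState:
        # Hypothetical snapshot of a physical host (RAM only, for brevity).
        name: str
        total_ram: int
        ram_normal: int = 0        # RAM consumed by normal (non-preemptible) VMs
        ram_preemptible: int = 0   # RAM consumed by preemptible VMs

        def free_ram(self, ignore_preemptible: bool) -> int:
            # h_n ignores preemptible consumption; h_f accounts for everything.
            used = self.ram_normal + (0 if ignore_preemptible else self.ram_preemptible)
            return self.total_ram - used

    RankFn = Callable[[HostState, int], float]

    def schedule(req_ram: int, preemptible: bool, hosts: List[HostState],
                 ranks: List[Tuple[RankFn, float]]) -> HostState:
        # Filtering: normal requests are filtered against h_n, preemptible ones against h_f.
        ignore = not preemptible
        candidates = [h for h in hosts if h.free_ram(ignore_preemptible=ignore) >= req_ram]
        if not candidates:
            raise RuntimeError("no valid host found")
        # Ranking: always performed on the full state h_f, as a weighted sum of rank functions.
        best = max(candidates, key=lambda h: sum(m * r(h, req_ram) for r, m in ranks))
        # A third phase (not shown here) would pick the preemptible instances to terminate
        # on the selected host whenever its full-state free RAM is insufficient.
        return best

    # Rank function in the spirit of Algorithm 3: negative when the request only
    # fits after terminating preemptible instances (overcommit on the full state).
    def rank_free_after_request(host: HostState, req_ram: int) -> float:
        return host.free_ram(ignore_preemptible=False) - req_ram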
This selection leverages a similar ranking process, performed on the preemptible instances, considering all the preemptible instance combinations and their costs for the provider, as shown in Algorithm 5 (preemptible instance selection and termination). Evaluation In the first part of this section (4.2) we will describe an implementation -done for the OpenStack Compute CMF- in order to evaluate our proposed algorithm. We have decided to implement it on top of the OpenStack Compute software due to its modular design, which allowed us to easily plug in our modified modules without requiring significant modifications to the core code. Afterwards we will perform two different evaluations. On the one hand, we will assess the algorithm correctness, ensuring that the most desirable instances are selected according to the configured weighers (Section 4.4). On the other hand, we will examine the performance of the proposed algorithm when compared with the default scheduling mechanism (Section 4.5). OpenStack Compute Filter Scheduler The OpenStack Compute scheduler is called Filter Scheduler and, as already described in Section 2, it is a rank scheduler, implementing two different phases: filtering and weighting. Filtering The first step is the filtering phase. The scheduler applies a concatenation of filter functions to the initial set of available hosts, based on the host properties and state -e.g. free RAM or number of free CPUs-, user input -e.g. affinity or anti-affinity with other instances- and resource-provider-defined configuration. When the filtering process has concluded, all the hosts in the final set are able to satisfy the user request. Weighing Once the filtering phase returns a list of suitable hosts, the weighting stage starts so that the best host -according to the defined configuration- is selected. The scheduler applies to all hosts the same set of weigher functions w_i(h), taking into account each host state h. Those weigher functions return a value considering the characteristics of the host received as input parameter; therefore, the total weight Ω for a node h is calculated as follows: Ω(h) = Σ_{i=1..n} m_i · N(w_i(h)), where m_i is the multiplier for a weigher function and N(w_i(h)) is the normalized weight between [0, 1] calculated via a rescaling: N(w_i(h)) = (w_i(h) − min W) / (max W − min W), where w_i(h) is the weight function, and min W, max W are the minimum and maximum values that the weigher has assigned over the set of weighted hosts. This way, the final weight before applying the multiplication factor will always be in the range [0, 1]. After these two phases have ended, the scheduler has a set of hosts ordered according to the weights assigned to them, and it will assign the request to the host with the maximum weight. If several nodes have the same weight, the final host will be randomly selected from that set. Implementation Evaluation We have extended the Filter Scheduler algorithm with the functionality described in Algorithm 6. We have also implemented the ranking functions described in Algorithm 3 and Algorithm 4 as weighers, using the OpenStack terminology. Moreover, the Filter Scheduler has also been modified so as to introduce the additional selection and termination phase (Algorithm 5). This phase has been implemented following the same modular approach as the OpenStack weighting modules, allowing the definition and implementation of additional cost modules to determine which instances are to be selected for termination.
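The weight normalization described in the Weighing paragraph above can be written compactly; the sketch below, with illustrative function names only, rescales each weigher's raw values to [0, 1] before applying the multipliers.

    def normalize(raw):
        # N(w) = (w - min W) / (max W - min W), computed per weigher over all hosts.
        lo, hi = min(raw), max(raw)
        if hi == lo:
            return [0.0] * len(raw)   # every host gets the same normalized weight
        return [(w - lo) / (hi - lo) for w in raw]

    def total_weights(hosts, weighers):
        # weighers: list of (weigher_function, multiplier) pairs.
        # Omega(h) = sum_i m_i * N(w_i(h)), one total per host.
        totals = [0.0] * len(hosts)
        for w_fn, m in weighers:
            for idx, n in enumerate(normalize([w_fn(h) for h in hosts])):
                totals[idx] += m * n
        return totals

    # The request is then assigned to the host with the maximum total weight:
    # best = hosts[max(range(len(hosts)), key=lambda i: total_weights(hosts, weighers)[i])]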
As for the cost functions, we have implemented a module following Algorithm 4. This cost function assumes that customers are charged by periods of 1 h, therefore it prioritizes the termination of Spot Instances with the lowest partial-hour consumption (i.e. if we consider instances with 120 min, 119 min and 61 min of duration, the instance with 120 min will be terminated). This development has been done on the OpenStack Newton version 11, and was deployed on the infrastructure that we describe in Section 4.3 (the extended Filter Scheduler logic, including the final termination of the selected instances, is summarised in Algorithm 6). Configurations In order to evaluate our algorithm proposal we have set up a dedicated test infrastructure comprising a set of 26 identical IBM HS21 blade servers, with the characteristics described in Table 1. All the nodes had an identical base installation, based on Ubuntu Server 16.04 LTS, running the Linux 3.8.0 kernel, where we have deployed OpenStack Compute as the Cloud Management Framework. The system architecture is as follows: • An Image Catalog running the OpenStack Image Service (Glance), serving images from its local disk. • 24 Compute Nodes running OpenStack Compute, hosting the spawned instances. The network setup of the testbed consists of two 10 Gbit Ethernet switches, interconnected with a 10 Gbit Ethernet link. All the hosts are evenly connected to these switches using a 1 Gbit Ethernet connection. We have considered the VM sizes described in Table 2, based on the default set of sizes existing in a default OpenStack installation. Algorithm Evaluation The purpose of this evaluation is to ensure that the proposed algorithm is working as expected, so that: • The scheduler is able to deliver the resources for a normal request, by terminating one or several preemptible instances when there are not enough free idle resources. • The scheduler selects the best preemptible instance for termination, according to the policies configured by means of the scheduler weighers. Scheduling using same Virtual Machine sizes For the first batch of tests, we have considered same-size instances, to evaluate whether the proposed algorithm chooses the best physical host and selects the best preemptible instance for termination. We generated requests for both preemptible and normal instances -chosen randomly- of random duration between 10 min and 300 min, using an exponential distribution [39], until the first scheduling failure for a normal instance was detected. The compute nodes used have 16 GB of RAM and eight CPUs, as already described. The VM size requested was the medium one, according to Table 2, therefore each compute node could host up to four VMs. We executed these requests and monitored the infrastructure until the first scheduling failure for a normal instance took place, thus triggering the preemptible instance termination mechanism. At that moment we took a snapshot of the node statuses, as shown in Table 3 and Table 4. These tables depict the status of each of the physical hosts, as well as the running time of each of the instances that were running at that point. The shaded cells represent the preemptible instance that was terminated to free the resources for the incoming non-preemptible request. Considering that the preemptible instance selection was done according to Algorithm 5 using the cost function in Algorithm 4, the chosen instance has to be the one with the lowest partial-hour period. In Table 3 this is the instance marked with ( 1 ): BP1.
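For illustration, a possible reading of this partial-hour cost, together with the combination search of Algorithm 5, is sketched below; the instance names echo the tables but the RAM and runtime figures are invented for the example and are not taken from the paper.

    from itertools import combinations

    def partial_hour_minutes(runtime_minutes: int) -> int:
        # Minutes already consumed in the current, not yet charged, 1 h period:
        # 120 min -> 0, 119 min -> 59, 61 min -> 1.
        return runtime_minutes % 60

    def cheapest_termination_set(instances, needed_ram):
        # instances: list of (instance_id, ram_gb, runtime_minutes) tuples.
        # Return the combination of preemptible instances that frees at least
        # `needed_ram` with the smallest summed partial-hour cost. An exhaustive
        # search is affordable for the handful of VMs a single host can run.
        best, best_cost = None, None
        for size in range(1, len(instances) + 1):
            for combo in combinations(instances, size):
                freed = sum(ram for _, ram, _ in combo)
                cost = sum(partial_hour_minutes(rt) for _, _, rt in combo)
                if freed >= needed_ram and (best_cost is None or cost < best_cost):
                    best, best_cost = combo, cost
        return best

    # Hypothetical example: three lightly used instances (summed cost 55 min)
    # are preferred over a single instance whose partial hour costs 58 min.
    candidates = [("AP2", 4, 80), ("AP3", 4, 80), ("AP4", 4, 75), ("BP1", 16, 58)]
    print(cheapest_termination_set(candidates, needed_ram=12))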
By chance, the selected instance BP1 also corresponds to the preemptible instance with the lowest run time. Table 4 shows a different test execution under the same conditions and constraints. Again, the selected instance has to be the one with the lowest partial-hour period. In Table 4 this corresponds to the instance marked again with ( 1 ): CP1, as its remainder is 1 min. In this case this is not the preemptible instance with the lowest run time (which is CP2). Scheduling using different Virtual Machine sizes For the second batch of tests we requested instances using different sizes, always following the sizes in Table 2. Table 5 depicts the testbed status when a request for a large VM caused the termination of the instances marked with ( 1 ): AP2, AP3 and AP4. In this case, the scheduler decided that the termination of these three instances caused a smaller impact on the provider, as the sum of their 1 h remainders (55) was lower than any of the other possibilities (58 for BP1, 57 for CP1, 112 for CP2 and CP3). Table 6 shows a different test execution under the same conditions and constraints. In this case, the preemptible instance termination was triggered by a new VM request of size medium, and the selected instance was the one marked with ( 1 ): BP3, as host-B will have enough free space just by terminating one instance. Performance evaluation As we have already said in Section 3, we have focused on designing an algorithm that does not introduce a significant latency in the system. Such a latency would introduce a larger delay when delivering the requested resources to the end users, something that is not desirable for any resource provider [4]. In order to evaluate the performance of our proposed algorithm we have done a comparison with the default, unmodified OpenStack Filter Scheduler. Moreover, for the sake of comparison, we have also implemented a scheduler based on a retry loop. This scheduler performs a normal scheduling loop and, if there is a scheduling failure for a normal instance, it performs a second pass taking into account the existing preemptible instances. The preemptible instance selection and termination mechanisms remain the same. We have scheduled 130 Virtual Machines of the same size on our test infrastructure and we have recorded the timings of the scheduling function, thus calculating the means and standard deviations for each of the following scenarios: • Using the original, unmodified OpenStack Filter Scheduler with an empty infrastructure. • Using the preemptible instances Filter Scheduler and the retry scheduler: -Requesting normal instances with an empty infrastructure. -Requesting preemptible instances with an empty infrastructure. -Requesting normal instances with a saturated infrastructure, thus implying the termination of a preemptible instance each time a request is performed. We have then collected the scheduling call timings and we have calculated the means and deviations for each scenario, as shown in Figure 2. Numbers in these scenarios are quite low, since the infrastructure is a small testbed, but they are expected to become larger as the infrastructure grows in size. As can be seen in the aforementioned Figure 2, our solution introduces a delay in the scheduling calls, as we need to calculate additional host states (we hold two different states for each node) and we need to select a preemptible instance for termination (in case it is needed).
In the case of the retry scheduler, this delay does not exist and the numbers are similar to the original scheduler. However, when it is necessary to trigger the termination of a preemptible instance, having a retry mechanism (thus executing the same scheduling call two times) introduces a significantly larger penalty when compared to our proposed solution. We consider that the latency that we are introducing is within an acceptable range, therefore not impacting the scheduler performance significantly. Exploitation and integration in existing infrastructures The functionality introduced by the preemptible instances model that we have described in this work can be exploited not only within a cloud resource provider, but it can also be leveraged in more complex hybrid infrastructures. High Performance Computing Integration One can find in the literature several exercises of integration of hybrid infrastructures, integrating cloud resources, commercial or private, with High Performance Computing (HPC) resources. Those efforts focus on bursting out to cloud resources when the HPC system does not provide enough resources to solve a particular problem [41]. On-demand provisioning using cloud resources when the batch system of the HPC center is full is certainly a viable option to expand the capabilities of an HPC center for serial batch processing. We focus, however, on the complementary approach, that is, using HPC resources to provide cloud capacity, so as to complement existing distributed infrastructures. Obviously HPC systems are oriented to batch processing of highly coupled (parallel) jobs. The question here is optimizing resource utilization when the HPC batch system has empty slots. If we backfill the empty slots of an HPC system with cloud jobs, and a new regular batch job arrives from the HPC users, the cloud jobs occupying the slots needed by the newly arrived batch job should be terminated immediately, so as not to disturb the regular work. Therefore such cloud jobs should be submitted as Spot Instances. Enabling HPC systems to process other jobs during periods in which the load of the HPC mainframe is low appears as an attractive possibility from the point of view of resource optimization. However, the practical implementation of such an idea would need to be compatible with both the HPC usage model and the cloud usage model. In HPC systems users log in via SSH to a frontend. At the frontend the user has the tools to submit jobs. The scheduling of HPC jobs is done using regular batch system software (such as SLURM, SGE, etc.). HPC systems typically run MPI parallel jobs as well, using specialized hardware interconnects such as InfiniBand. Let us imagine a situation in which the load of the HPC system is low. One can instruct the scheduler of the batch system to allow cloud jobs into the HPC system, occupying those slots not allocated by the regular batch allocation. In order to be as little disruptive as possible, the best option is that the cloud jobs arrive as preemptible instances as described throughout this paper. When a batch job arrives at the HPC system, this job should be immediately scheduled and executed. Therefore the scheduler should be able to perform the following steps, illustrated by the sketch below: • Allocate resources for the job that just arrived at the batch queue system. • Identify the cloud jobs that are occupying those resources, and stop them. • Dispatch the batch job.
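A schematic sketch of these three steps follows; FakeBatchSystem and FakeCloud are placeholder stand-ins, not real SLURM or OpenStack interfaces.

    class FakeBatchSystem:
        # Minimal stand-in for a real batch system (e.g. SLURM); illustrative only.
        def allocate(self, job):
            return ["node1", "node2"]
        def dispatch(self, job, nodes):
            print(f"dispatching {job} on {nodes}")

    class FakeCloud:
        # Minimal stand-in for the cloud middleware managing preemptible instances.
        def preemptible_instances_on(self, nodes):
            return ["preemptible-vm-1"]
        def terminate(self, instance):
            print(f"terminating {instance}")

    def dispatch_hpc_job(job, batch_system, cloud):
        # 1. Allocate resources for the batch job that just arrived.
        nodes = batch_system.allocate(job)
        # 2. Identify the preemptible cloud jobs occupying those nodes and stop them.
        for instance in cloud.preemptible_instances_on(nodes):
            cloud.terminate(instance)
        # 3. Dispatch the batch job on the freed resources.
        batch_system.dispatch(job, nodes)

    dispatch_hpc_job("mpi-job-42", FakeBatchSystem(), FakeCloud())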
5,471
1812.10668
2906853528
Related to the previous works, the usage of Spot Instances to deploy reliable virtual clusters has been studied in @cite_17 @cite_37, managing the allocated instances on behalf of the users. These works focus on the execution of compute-intensive tasks on top of a pool of Spot Instances, in order to find the most effective way to minimize both the execution time of a given workload and the price of the allocated resources. Similarly, in @cite_21 the authors develop a workflow scheduling scheme that reduces the completion time using Spot Instances.
{ "abstract": [ "Cloud computing providers are now offering their unused resources for leasing in the spot market, which has been considered the first step towards a full-fledged market economy for computational resources. Spot instances are virtual machines (VMs) available at lower prices than their standard on-demand counterparts. These VMs will run for as long as the current price is lower than the maximum bid price users are willing to pay per hour. Spot instances have been increasingly used for executing compute-intensive applications. In spite of an apparent economical advantage, due to an intermittent nature of biddable resources, application execution times may be prolonged or they may not finish at all. This paper proposes a resource allocation strategy that addresses the problem of running compute-intensive jobs on a pool of intermittent virtual machines, while also aiming to run applications in a fast and economical way. To mitigate potential unavailability periods, a multifaceted fault-aware resource provisioning policy is proposed. Our solution employs price and runtime estimation mechanisms, as well as three fault-tolerance techniques, namely check pointing, task duplication and migration. We evaluate our strategies using trace-driven simulations, which take as input real price variation traces, as well as an application trace from the Parallel Workload Archive. Our results demonstrate the effectiveness of executing applications on spot instances, respecting QoS constraints, despite occasional failures.", "The cloud computing is a computing paradigm that users can rent computing resources from service providers as much as they require. A spot instance in cloud computing helps a user to utilize resources with less expensive cost, even if it is unreliable. In this paper, we propose the workflow scheduling scheme that reduces the task waiting time when an instance occurs the out-of-bid situation. And, our scheme executes user’s job within selected instances and expands the suggested user budget. The simulation results reveal that, compared to various instance types, our scheme achieves performance improvements in terms of an average execution time of 66.86 over shortest execution time in each task time interval. And, the cost in our scheme is higher than an instance with low performance and is lower than an instance with high performance. Therefore, our scheme is difficult to optimize cost for task execution.", "Infrastructure-as-a-Service providers are offering their unused resources in the form of variable-priced virtual machines (VMs), known as \"spot instances\", at prices significantly lower than their standard fixed-priced resources. To lease spot instances, users specify a maximum price they are willing to pay per hour and VMs will run only when the current price is lower than the user's bid. This paper proposes a resource allocation policy that addresses the problem of running deadlineconstrained compute-intensive jobs on a pool of composed solely of spot instances, while exploiting variations in price and performance to run applications in a fast and economical way. Our policy relies on job runtime estimations to decide what are the best types of VMs to run each job and when jobs should run. Several estimation methods are evaluated and compared, using trace-based simulations, which take real price variation traces obtained from Amazon Web Services as input, as well as an application trace from the Parallel Workload Archive. 
Results demonstrate the effectiveness of running computational jobs on spot instances, at a fraction (up to 60 lower) of the price that would normally cost on fixed priced resources." ], "cite_N": [ "@cite_37", "@cite_21", "@cite_17" ], "mid": [ "2088983345", "2234702220", "1569944010" ] }
Infrastructure as a Service (IaaS) Clouds make possible to provide computing capacity as a utility to the users following a pay-per-use model. This fact allows the deployment of complex execution environments without an upfront infrastructure commitment, fostering the adoption of the cloud by users that could not afford to operate an on-premises infrastructure. In this regard, Clouds are not only present in the industrial ICT ecosystem, and they are being more and more adopted by other stakeholders such as public administrations or research institutions. Indeed, clouds are nowadays common in the scientific computing field [1,2,3,4], due to the fact that they $ This is the author's accepted version of the following article:Álvaro López García, Enol Fernndez del Castillo, Isabel Campos Plasencia, "An efficient cloud scheduler design supporting preemptible instances", accepted in Future Generation Computer Systems, 2019, which is published in its final form at https://doi. org/10.1016/j.future.2018.12.057. This preprint article may be used for non-commercial purposes under a CC BY-NC-SA 4.0 license. are able to deliver resources that can be configured with the complete software needed for an application [5]. Moreover, they also allow the execution of non-transient tasks, making possible to execute virtual laboratories, databases, etc. that could be tightly coupled with the execution environments. This flexibility poses a great advantage against traditional computational modelssuch as batch systems or even Grid computing-where a fixed operating system is normally imposed and any complimentary tools (such as databases) need to be selfmanaged outside the infrastructure. This fact is pushing scientific datacenters outside their traditional boundaries, evolving into a mixture of services that deliver more added value to their users, with the Cloud as a prominent actor. Maximizing resource utilization by performing an efficient resource provisioning is a fundamental aspect for any resource provider, specially for scientific providers. Users accessing these computing resources do not usually pay -or at least they are not charged directly-for their consumption, and normally resources are paid via other indirect methods (like access grants), with users tending to assume that resources are for free. Scientific computing facilities tend to work on a fully saturated manner, aiming at the maximum possible resource uti-lization level. In this context it is common that compute servers spawned in a cloud infrastructure are not terminated at the end of their lifetime, resulting in idle resources, a state that is are not desirable as long as there is processing that needs to be done [4]. In a commercial this is not a problem, since users are being charged for their allocated resources, regardless if they are being used or not. Therefore users tend to take care of their virtual machines, terminating them whenever they are not needed anymore. Moreover, in the cases where users leave their resources running forever, the provider is still obtaining revenues for those resources. Cloud operators try to solve this problem by setting resource quotas that limits the amount of resources that a user or group is able to consume by doing a static partitioning of the resources [8]. However, this kind of resource allocation automatically leads to an underutilization of the infrastructure since the partitioning needs to be conservative enough so that other users could utilize the infrastructure. 
Quotas impose hard limits that leading to dedicated resources for a group, even if the group is not using the resources. Besides, cloud providers also need to provide their users with on-demand access to the resources, one of the most compelling cloud characteristics [9]. In order to provide such access, an overprovisioning of resources is expected [10] in order to fulfil user request, leading to an infrastructure where utilization is not maximized, as there should be always enough resources available for a potential request. Taking into account that some processing workloads executed on the cloud do not really require on-demand access (but rather they are executed for long periods of time), a compromise between these two aspects (i.e. maximizing utilization and providing enough ondemand access to the users) can be provided by using idle resources to execute these tasks that do not require truly on-demand access [10]. This approach indeed is common in scientific computing, where batch systems maximize the resource utilization through backfilling techniques, where opportunistic access is provided to these kind of tasks. Unlike in batch processing environments, virtual machines (VMs) spawned in a Cloud do not have fixed duration in time and are supposed to live forever -or until the user decides to stop them. Commercial cloud providers provide specific VM types (like the Amazon EC2 Spot Instances 1 or the Google Compute Engine Preemptible Virtual Machines 2 ) that can be provisioned at a fraction of a normal VM price, with the caveat that they can terminated whenever the provider decides to do so. This kind of VMs can be used to backfill idle resources, thus allowing to maximize the utilization and providing on-demand access, since normal VMs will obtain resources by evacuating Spot or Preemptible instances. In this paper we propose an efficient scheduling algorithm that combines the scheduling of preemptible and non preemptible instances in a modular way. The proposed solution is flexible enough in order to allow different allocation, selection and termination policies, thus allowing resource providers to easily implement and enforce the strategy that is more suitable for their needs. In our work we extend the OpenStack Cloud middleware with a prototype implementation of the proposed scheduler, as a way to demonstrate and evaluate the feasibility of our solution. We moreover perform an evaluation of the performance of this solution, in comparison with the existing OpenStack scheduler. The remainder of the paper is structured as follows. In Section 2 we present the related work in this field. In Section 3 we propose a design for an efficient scheduling mechanism for preemptible instances. In Section 4 we present an implementation of our proposed algorithm, as well as an evaluation of its feasibility and performance with regards with a normal scheduler. Finally, in Section 6 we present this work's conclusions. Scheduling in the existing Cloud Management Frameworks Generally speaking, existing Cloud Management Frameworks (CMFs) do not implement full-fledged queuing mechanism as other computing models do (like the Grid or traditional batch systems). Clouds are normally more focused on the rapid scaling of the resources rather than in batch processing, where systems are governed by queuing systems [34]. The default scheduling strategies in the current CMFs are mostly based on the immediate allocation or resources following a fistcome, first-served basis. 
The cloud schedulers provision them when requested, or they are not provisioned at all (except in some CMFs that implement a FIFO queuing mechanism) [35]. However, some users require for a queuing system -or some more advanced features like advance reservations-for running virtual machines. In those cases, there are some external services such as Haizea [36] for OpenNebula or Blazar 6 for OpenStack. Those systems lay between the CMF and the users, intercepting their requests and interacting with the cloud system on their behalf, implementing the required functionality. Besides simplistic scheduling policies like first-fit or random chance node selection [35], current CMF implement a scheduling algorithm that is based on a rank selection of hosts, as we will explain in what follows: OpenNebula 7 uses by default a match making scheduler, implementing the Rank Scheduling Policy [36]. This policy first performs a filtering of the existing hosts, excluding those that do not meet the request requirements. Afterwards, the scheduler evaluates some operator defined rank expressions against the recorded information from each of the hosts so as to obtain an ordered list of nodes. Finally, the resources with a higher rank are selected to fulfil the request. OpenNebula implements a queue to hold the requests that cannot be satisfied immediately, but this queuing mechanism follows a FIFO logic, without further priority adjustment. OpenStack 8 implements a Filter Scheduler [37], based on two separated phases. The first phase consists on the filtering of hosts that will exclude the hosts that cannot satisfy the request. This filtering follows a modular design, so that it is possible to filter out nodes based on the user request (RAM, number of vCPU), direct user input (such as instance affinity or anti-affinity) or operator configured filtering. The second phase consists on the weighing of hosts, following the same modular approach. Once the nodes are filtered and weighed, the best candidate is selected from that ordered set. CloudStack 9 utilizes the term allocator to determine which host will be selected to place the new VM requested. The nodes that are used by the allocators are the ones that are able to satisfy the request. Eucalyptus 10 implements a greedy or round robin algorithm. The former strategy uses the first node that is identified as suitable for running the VM. This algorithm exhausts a node before moving on to the next node available. On the other hand, the later schedules each request in a cyclic manner, distributing evenly the load in the long term. All the presented scheduling algorithms share the view that the nodes are firstly filtered out -so that only those that can run the request are considered-and then ordered or ranked according to some defined rules. Generally speaking, the scheduling algorithm can be expressed as the pseudo-code in the Algorithm 1. Preemptible Instances Design The initial assumption for a preemptible aware scheduler is that the scheduler should be able to take into account two different instance types -preemptible and normal-according to the following basic rules: if Filter(h i , req) then 5: Ω i ← 0 6: for all r, m in ranks do r is a rank function, m the rank multiplier 7: -If this is true, those instances should be terminated -according to some well defined rules-and the new VM should be scheduled into that freed node. 
Ω i ← Ω i + m j * r j (h i , -If this is not possible, then the request should continue with the failure process defined in the scheduling algorithm -it can be an error, or it can be retried after some elapsed time. • If it is a preemptible instance, it should try to schedule it without other considerations. It should be noted that the preemptible instance selection and termination does not only depend on pure theoretical aspects, as this selection will have an influence on the resource provider revenues and the service level agreements signed with their users. Taking this into account, it is obvious that modularity and flexibility for the preemptible instance selection and is a key aspect here. For instance, an instance selection and termination algorithm that is only based on the minimization of instances terminated in order to free enough resources may not work for a provider that wish to terminate the instances that generate less revenues, event if it is needed to terminate a larger amount of instances. Therefore, the aim of our work is not only to design an scheduling algorithm, but also to design it as a modular system so that it would be possible to create any more complex model on top of it once the initial preemptible mechanism is in place. The most evident design approach is a retry mechanism based on two selection cycles within a scheduling loop. The scheduler will take into account a scheduling failure and then perform a second scheduling cycle after preemptible instances have been evacuated -either by the scheduler itself or by an external service. However, this two-cycle scheduling mechanism would introduce a larger scheduling latency and load in the system. This latency is something perceived negatively by the users [38] so the challenge here is how to perform this selection in a efficient way, ensuring that the selected preemptible instances are the less costly for the provider. Preemptible-aware scheduler Our proposed algorithm (depicted in Figure 1) addresses the preemptible instances scheduling within one scheduling loop, without introducing a retry cycle, bur rather performing the scheduling taking into account different host states depending on the instance that is to be scheduled. This design takes into account the fact that all the algorithms described in Section 2.1 are based on two complimentary phases: filtering and raking., but adds a final phase, where the preemptible instances that need to be terminated are selected. The algorithm pseudocode is shown in 2 and will be further described in what follows. As we already explained, the filtering phase eliminates the nodes that are not able to host the new request due to its current state -for instance, because of a lack of resources or a VM anti-affinity-, whereas the raking phase is the one in charge of assigning a rank or weight to the filtered hosts so that the best candidate is selected. I our preemptible-aware scheduler, the filtering phase only takes into account preemptible instances when doing the filtering phase. In order to do so we propose to utilize two different states for the physical hosts: h f This state will take into account all the running VM inside that host, that is, the preemptible and non preemptible instances. h n This state will not take into account all the preemptible instances inside that host. That is, the preemptible instances running into a particular physical host are not accounted in term of consumed resources. 
Whenever a new request arrives, the scheduler will use either the h_f or the h_n host state for the filtering phase, depending on the type of the request:

• When a normal request arrives, the scheduler will use h_n.
• When a preemptible request arrives, the scheduler will use h_f.

This way the scheduler ensures that a normal instance can run regardless of any preemptible instance occupying its place, as the h_n state does not account for the resources consumed by any preemptible instance running on the host. After this stage, the resulting list of hosts will contain all the hosts able to accommodate the new request, either because they have enough free resources or because one or several preemptible instances can be evacuated.

Once the hosts have been filtered, the ranking phase starts. However, in order to perform the correct ranking, the full state of the hosts, h_f, must be used. This is needed because the different rank functions require the information about the preemptible instances so as to select the best node. The list of filtered hosts may contain hosts that are able to accept the request because they have free resources, as well as nodes that would imply the termination of one or several instances. In order to choose the best host for scheduling a new instance, new ranking functions need to be implemented that prioritise the least costly host. The simplest ranking function, based on the number of preemptible instances per host, is described in Algorithm 3. This function assigns a negative value if the free resources are not enough to accommodate the request, detecting an overcommit produced by the fact that one or several preemptible instances would need to be terminated.

However, this basic function only establishes a naive ranking based on whether or not instances need to be terminated. When several instances need to be terminated, this function does not establish any rank between them, so more appropriate rank functions need to be created, depending on the business model implemented by the provider. Our design takes this fact into account, allowing for modularity of the cost functions that can be applied to the ranking function. For instance, commercial providers tend to charge by complete periods of 1 h, so partial hours are not accounted for. A ranking function based on this business model can be expressed as Algorithm 4, ranking hosts according to the preemptible instances running inside them and the time needed until the next complete period.

Once the ranking phase is finished, the scheduler will have built an ordered list of hosts containing the best candidates for the new request. Once the best host is selected, it is still necessary to select which individual preemptible instances need to be evacuated from that host, if any. Our design adds a third phase, so as to terminate the preemptible instances if needed. This last phase performs an additional ranking and selection of the candidate preemptible instances inside the selected host, so as to select the least costly for the provider.
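A minimal sketch of such a period-based ranking is given below. It is our own illustration of the idea, not the paper's exact Algorithm 4 listing: it only considers vCPUs for brevity and it follows the behaviour of the examples used later in the evaluation, where the cost of terminating an instance is its unbilled partial-hour time:

```python
BILLING_PERIOD_MIN = 60  # assumption: customers are charged per complete hour

def partial_period(runtime_min: int) -> int:
    """Minutes already consumed in the current, not yet billed, period."""
    return runtime_min % BILLING_PERIOD_MIN

def rank_host(free_vcpus: int, needed_vcpus: int,
              preemptible_runtimes_min: list[int]) -> float:
    """Period-based rank of one host for a normal request.

    A host that can fit the request without terminating anything gets rank 0;
    otherwise the rank is the negative sum of the partial periods that would be
    lost by evacuating its preemptible instances, so hosts where terminations
    waste the least unbilled time are preferred.
    """
    if free_vcpus >= needed_vcpus:
        return 0.0
    return -float(sum(partial_period(t) for t in preemptible_runtimes_min))

# Instances running for 120, 119 and 61 minutes have partial periods of
# 0, 59 and 1 minutes, respectively.
assert [partial_period(t) for t in (120, 119, 61)] == [0, 59, 1]
```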
This selection leverages a similar ranking process, performed on the preemptible instances, considering all the possible combinations of preemptible instances and their cost for the provider, as shown in Algorithm 5.

Evaluation

In the first part of this section (4.2) we describe an implementation -done for the OpenStack Compute CMF- used to evaluate our proposed algorithm. We decided to implement it on top of the OpenStack Compute software due to its modular design, which allowed us to easily plug in our modified modules without requiring significant modifications to the core code. Afterwards we perform two different evaluations. On the one hand, we assess the algorithm correctness, ensuring that the most desirable instances are selected according to the configured weighers (Section 4.4). On the other hand, we examine the performance of the proposed algorithm when compared with the default scheduling mechanism (Section 4.5).

OpenStack Compute Filter Scheduler

The OpenStack Compute scheduler is called Filter Scheduler and, as already described in Section 2, it is a rank scheduler, implementing two different phases: filtering and weighing.

Filtering

The first step is the filtering phase. The scheduler applies a concatenation of filter functions to the initial set of available hosts, based on the host properties and state -e.g. free RAM or number of free CPUs-, user input -e.g. affinity or anti-affinity with other instances- and resource provider defined configuration. When the filtering process has concluded, all the hosts in the final set are able to satisfy the user request.

Weighing

Once the filtering phase returns a list of suitable hosts, the weighing stage starts so that the best host -according to the defined configuration- is selected. The scheduler applies the same set of weigher functions w_i(h) to all hosts, taking into account each host state h. Those weigher functions return a value based on the characteristics of the host received as input parameter; therefore, the total weight Ω for a node h is calculated as follows:

Ω = Σ_i m_i · N(w_i(h))

where m_i is the multiplier for a weigher function and N(w_i(h)) is the weight normalized to [0, 1], calculated via a rescaling of the form:

N(w_i(h)) = (w_i(h) − min W) / (max W − min W)

where w_i(h) is the weight function, and min W, max W are the minimum and maximum values that the weigher has assigned across the set of weighed hosts. This way, the final weight before applying the multiplier will always be in the range [0, 1]. After these two phases have ended, the scheduler has a set of hosts ordered according to the weights assigned to them, and it will assign the request to the host with the maximum weight. If several nodes have the same weight, the final host is randomly selected from that set.

Implementation

We have extended the Filter Scheduler algorithm with the functionality described in Algorithm 6. We have also implemented the ranking functions described in Algorithm 3 and Algorithm 4 as weighers, using the OpenStack terminology. Moreover, the Filter Scheduler has also been modified so as to introduce the additional selection and termination phase (Algorithm 5). This phase has been implemented following the same modular approach as the OpenStack weighing modules, allowing the definition and implementation of additional cost modules to determine which instances are to be selected for termination.
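As an illustration of this selection and termination phase, the sketch below enumerates the combinations of preemptible instances on the chosen host and picks the cheapest one that frees enough resources. It is our own simplified rendering of the idea behind Algorithm 5, not the implemented module: it only accounts for vCPUs, and the pluggable cost function shown is the unbilled partial-hour cost sketched above.

```python
from collections import namedtuple
from itertools import combinations

Spot = namedtuple("Spot", "name vcpus partial_hour_min")

def select_for_termination(preemptible, needed_vcpus, free_vcpus, cost):
    """Pick the cheapest combination of preemptible instances to terminate."""
    if free_vcpus >= needed_vcpus:
        return []  # nothing needs to be terminated
    best, best_cost = None, float("inf")
    for size in range(1, len(preemptible) + 1):
        for combo in combinations(preemptible, size):
            freed = free_vcpus + sum(i.vcpus for i in combo)
            total_cost = sum(cost(i) for i in combo)
            if freed >= needed_vcpus and total_cost < best_cost:
                best, best_cost = list(combo), total_cost
    return best  # None means the request cannot be satisfied on this host

# Example: a full host must free 2 vCPUs; the instance losing the least
# unbilled time is chosen.
spots = [Spot("P1", 2, 59), Spot("P2", 2, 1), Spot("P3", 2, 30)]
chosen = select_for_termination(spots, needed_vcpus=2, free_vcpus=0,
                                cost=lambda s: s.partial_hour_min)
assert chosen == [Spot("P2", 2, 1)]
```

Exhaustively enumerating combinations grows exponentially with the number of preemptible instances on a host; in this sketch that is tolerable only because a single host runs a handful of them, and the modular cost mechanism leaves room for cheaper heuristics.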
As for the cost functions, we have implemented a module following Algorithm 4. This cost function assumes that customers are charged by periods of 1 h, and therefore it prioritizes the termination of Spot Instances with the lower partial-hour consumption (i.e. if we consider instances that have run for 120 min, 119 min and 61 min, the instance with 120 min will be terminated, as its partial hour is 0 min, compared to 59 min and 1 min). This development has been done on the OpenStack Newton version 11 , and was deployed on the infrastructure that we describe in Section 4.3.

Configurations

In order to evaluate our algorithm proposal we have set up a dedicated test infrastructure comprising a set of 26 identical IBM HS21 blade servers, with the characteristics described in Table 1. All the nodes had an identical base installation, based on Ubuntu Server 16.04 LTS running the Linux 3.8.0 kernel, where we have deployed OpenStack Compute as the Cloud Management Framework. The system architecture is as follows:

• An Image Catalog running the OpenStack Image Service (Glance), serving images from its local disk.
• 24 Compute Nodes running OpenStack Compute, hosting the spawned instances.

The network setup of the testbed consists of two 10 Gbit Ethernet switches, interconnected with a 10 Gbit Ethernet link. All the hosts are evenly connected to these switches using a 1 Gbit Ethernet connection. We have considered the VM sizes described in Table 2, based on the default set of sizes existing in a default OpenStack installation.

Algorithm Evaluation

The purpose of this evaluation is to ensure that the proposed algorithm works as expected, that is:

• The scheduler is able to deliver the resources for a normal request by terminating one or several preemptible instances when there are not enough free idle resources.
• The scheduler selects the best preemptible instance for termination, according to the policies configured by means of the scheduler weighers.

Scheduling using same Virtual Machine sizes

For the first batch of tests we have considered same-size instances, to evaluate whether the proposed algorithm chooses the best physical host and selects the best preemptible instance for termination. We generated requests for both preemptible and normal instances -chosen randomly-, of random duration between 10 min and 300 min drawn from an exponential distribution [39], until the first scheduling failure for a normal instance was detected. The compute nodes used have 16 GB of RAM and eight CPUs, as already described. The VM size requested was the medium one, according to Table 2, therefore each compute node could host up to four VMs.

We executed these requests and monitored the infrastructure until the first scheduling failure for a normal instance took place, thus triggering the preemptible instance termination mechanism. At that moment we took a snapshot of the node statuses, as shown in Table 3 and Table 4. These tables depict the status of each of the physical hosts, as well as the running time of each of the instances that were running at that point. The shaded cells represent the preemptible instance that was terminated to free the resources for the incoming non-preemptible request. Considering that the preemptible instance selection was done according to Algorithm 5 using the cost function in Algorithm 4, the chosen instance has to be the one with the lowest partial-hour period. In Table 3 this is the instance marked with ( 1 ): BP1.
By chance, it corresponds to the preemptible instance with the lowest run time. Table 4 shows a different test execution under the same conditions and constraints. Again, the selected instance has to be the one with the lowest partial-hour period. In Table 4 this corresponds to the instance marked again with ( 1 ): CP1, as its remainder is 1 min. In this case it is not the preemptible instance with the lowest run time (which is CP2).

Scheduling using different Virtual Machine sizes

For the second batch of tests we requested instances using different sizes, always following the sizes in Table 2. Table 5 depicts the testbed status when a request for a large VM caused the termination of the instances marked with ( 1 ): AP2, AP3 and AP4. In this case, the scheduler decided that the termination of these three instances caused a smaller impact on the provider, as the sum of their 1 h remainders (55) was lower than any of the other possibilities (58 for BP1, 57 for CP1, 112 for CP2 and CP3). Table 6 shows a different test execution under the same conditions and constraints. In this case, the preemptible instance termination was triggered by a new VM request of size medium and the selected instance was the one marked with ( 1 ): BP3, as host-B has enough free resources just by terminating one instance.

Performance evaluation

As we have already said in Section 3, we have focused on designing an algorithm that does not introduce a significant latency in the system. Such latency would introduce a larger delay when delivering the requested resources to the end users, something that is not desirable for any resource provider [4]. In order to evaluate the performance of our proposed algorithm we have compared it with the default, unmodified OpenStack Filter Scheduler. Moreover, for the sake of comparison, we have also implemented a scheduler based on a retry loop. This scheduler performs a normal scheduling loop and, if there is a scheduling failure for a normal instance, it performs a second pass taking into account the existing preemptible instances. The preemptible instance selection and termination mechanisms remain the same.

We have scheduled 130 Virtual Machines of the same size on our test infrastructure and we have recorded the timings of the scheduling function, calculating the means and standard deviations for each of the following scenarios:

• Using the original, unmodified OpenStack Filter Scheduler with an empty infrastructure.
• Using the preemptible instances Filter Scheduler and the retry scheduler:
  - Requesting normal instances with an empty infrastructure.
  - Requesting preemptible instances with an empty infrastructure.
  - Requesting normal instances with a saturated infrastructure, thus implying the termination of a preemptible instance each time a request is performed.

We have then collected the scheduling call timings and calculated the means and deviations for each scenario, as shown in Figure 2. Numbers in these scenarios are quite low, since the infrastructure is a small testbed, but they are expected to become larger as the infrastructure grows in size. As can be seen in Figure 2, our solution introduces a delay in the scheduling calls, as we need to calculate additional host states (we hold two different states for each node) and we need to select a preemptible instance for termination (in case it is needed).
In the case of the retry scheduler this delay does not exist and the numbers are similar to those of the original scheduler. However, when the termination of a preemptible instance needs to be triggered, having a retry mechanism (thus executing the same scheduling call twice) introduces a significantly larger penalty when compared to our proposed solution. We consider that the latency that we are introducing is within an acceptable range, therefore not significantly impacting the scheduler performance.

Exploitation and integration in existing infrastructures

The functionality introduced by the preemptible instances model that we have described in this work can be exploited not only within a cloud resource provider, but it can also be leveraged in more complex hybrid infrastructures.

High Performance Computing Integration

One can find in the literature several exercises of integrating cloud resources, commercial or private, with High Performance Computing (HPC) resources into hybrid infrastructures. Those efforts focus on bursting out to the cloud when the HPC system does not provide enough resources to solve a particular problem [41]. On-demand provisioning using cloud resources when the batch system of the HPC facility is full is certainly a viable option to expand the capabilities of an HPC center for serial batch processing. We focus, however, on the complementary approach, that is, using HPC resources to provide cloud capacity, so as to complement existing distributed infrastructures.

Obviously, HPC systems are oriented towards batch processing of highly coupled (parallel) jobs. The question here is how to optimize resource utilization when the HPC batch system has empty slots. If we backfill the empty slots of an HPC system with cloud jobs, and a new regular batch job arrives from the HPC users, the cloud jobs occupying the slots needed by the newly arrived batch job should be terminated immediately, so as not to disturb regular work. Therefore such cloud jobs should be submitted as Spot Instances.

Enabling HPC systems to process other jobs during periods in which the load of the HPC mainframe is low appears as an attractive possibility from the point of view of resource optimization. However, the practical implementation of such an idea would need to be compatible with both the HPC usage model and the cloud usage model. In HPC systems users log in via SSH to a frontend, where they have the tools to submit jobs. The scheduling of HPC jobs is done using regular batch system software (such as SLURM, SGE, etc.). HPC systems also typically run MPI parallel jobs using specialized hardware interconnects such as InfiniBand.

Let us imagine a situation in which the load of the HPC system is low. One can instruct the scheduler of the batch system to allow cloud jobs into the HPC system, occupying those slots not allocated by the regular batch allocation. In order to be as little disruptive as possible, the best option is that the cloud jobs arrive as preemptible instances as described throughout this paper. When a batch job arrives at the HPC system, this job should be immediately scheduled and executed. Therefore the scheduler should be able to perform the following steps, sketched after this list:

• Allocate resources for the job that just arrived at the batch queue system.
• Identify the cloud jobs that are occupying those resources, and stop them.
• Dispatch the batch job.
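The glue between the batch system and the CMF could look like the following toy sketch. Everything here is illustrative: the inventory dictionary and the helper functions are stand-ins for site-specific integrations with the batch system and the cloud API, not real SLURM or OpenStack calls.

```python
# Toy illustration of the three steps above; the inventory and the helpers are
# placeholders for site-specific batch-system and CMF integrations (assumed).

cloud_inventory = {            # hypervisor -> preemptible instance ids (assumed)
    "node-01": ["spot-a1", "spot-a2"],
    "node-02": [],
}

def delete_server(server_id: str) -> None:
    # In a real deployment this would call the CMF API to terminate the instance.
    print(f"asking the CMF to terminate {server_id}")

def dispatch_batch_job(job_name: str, allocated_nodes: list[str]) -> None:
    # 1. Resources for the job have already been allocated by the batch system.
    for node in allocated_nodes:
        # 2. Identify and stop the cloud jobs occupying those resources.
        for server_id in cloud_inventory.get(node, []):
            delete_server(server_id)
        cloud_inventory[node] = []
    # 3. Dispatch the batch job.
    print(f"dispatching {job_name} on {allocated_nodes}")

dispatch_batch_job("mpi-simulation", ["node-01", "node-02"])
```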
In the case of parallel jobs the scheduling decision may depend on many factors, like the topology of the network requested or the affinity of the processes at the core/CPU level. In any case, parallel jobs that make heavy use of the low-latency interconnect should not share nodes with any other job.

High Throughput Computing Integration

Existing High Throughput Computing infrastructures, like the service offered by EGI 12 , could benefit from cloud providers offering preemptible instances. It has been shown that cloud resources and IaaS offerings can be used to run HTC tasks [42] in a pull mode, where cloud instances are started in a way that they are able to pull computing tasks from a central location (for example using a distributed batch system like HTCondor). However, sites are reluctant to offer large amounts of resources to be used in this mode due to the lack of a fixed duration for cloud instances. In this context, federated cloud e-Infrastructures like the EGI Federated Cloud [43] could benefit from resource providers offering preemptible instances. Users could populate idle resources with preemptible instances pulling their HTC tasks, whereas interactive and normal IaaS users would not be impacted negatively, as their requests would still be satisfied. In this way, large amounts of cloud computing power could be offered to the European research community.

Conclusions

In this work we have proposed a preemptible instance scheduling design that does not substantially modify the existing scheduling algorithms, but rather enhances them. The modular rank and cost mechanisms allow the definition and implementation of any resource provider defined policy by means of additional pluggable rankers. Our proposal and implementation enables all kinds of service providers -whose infrastructure is managed by open source middleware such as OpenStack- to offer a new access model based on preemptible instances, with a functionality similar to the one offered by the major commercial providers.

We have checked the algorithm's correctness when selecting the preemptible instances for termination. The results show that the algorithm behaves as expected. Moreover, we have compared the scheduling performance against the equivalent default scheduler, obtaining similar results, thus ensuring that the scheduler performance is not significantly impacted. This implementation allows more complex policies to be applied on top of the preemptible instances, like instance termination based on price fluctuations (that is, implementing a preemptible instance stock market), preemptible instance migration so as to consolidate them, or proactive instance termination to maximize the provider's revenue by avoiding the delivery of computing power at no cost to the users.

12 https://www.egi.eu/services/high-throughput-compute/
The cloud schedulers provision them when requested, or they are not provisioned at all (except in some CMFs that implement a FIFO queuing mechanism) [35]. However, some users require for a queuing system -or some more advanced features like advance reservations-for running virtual machines. In those cases, there are some external services such as Haizea [36] for OpenNebula or Blazar 6 for OpenStack. Those systems lay between the CMF and the users, intercepting their requests and interacting with the cloud system on their behalf, implementing the required functionality. Besides simplistic scheduling policies like first-fit or random chance node selection [35], current CMF implement a scheduling algorithm that is based on a rank selection of hosts, as we will explain in what follows: OpenNebula 7 uses by default a match making scheduler, implementing the Rank Scheduling Policy [36]. This policy first performs a filtering of the existing hosts, excluding those that do not meet the request requirements. Afterwards, the scheduler evaluates some operator defined rank expressions against the recorded information from each of the hosts so as to obtain an ordered list of nodes. Finally, the resources with a higher rank are selected to fulfil the request. OpenNebula implements a queue to hold the requests that cannot be satisfied immediately, but this queuing mechanism follows a FIFO logic, without further priority adjustment. OpenStack 8 implements a Filter Scheduler [37], based on two separated phases. The first phase consists on the filtering of hosts that will exclude the hosts that cannot satisfy the request. This filtering follows a modular design, so that it is possible to filter out nodes based on the user request (RAM, number of vCPU), direct user input (such as instance affinity or anti-affinity) or operator configured filtering. The second phase consists on the weighing of hosts, following the same modular approach. Once the nodes are filtered and weighed, the best candidate is selected from that ordered set. CloudStack 9 utilizes the term allocator to determine which host will be selected to place the new VM requested. The nodes that are used by the allocators are the ones that are able to satisfy the request. Eucalyptus 10 implements a greedy or round robin algorithm. The former strategy uses the first node that is identified as suitable for running the VM. This algorithm exhausts a node before moving on to the next node available. On the other hand, the later schedules each request in a cyclic manner, distributing evenly the load in the long term. All the presented scheduling algorithms share the view that the nodes are firstly filtered out -so that only those that can run the request are considered-and then ordered or ranked according to some defined rules. Generally speaking, the scheduling algorithm can be expressed as the pseudo-code in the Algorithm 1. Preemptible Instances Design The initial assumption for a preemptible aware scheduler is that the scheduler should be able to take into account two different instance types -preemptible and normal-according to the following basic rules: if Filter(h i , req) then 5: Ω i ← 0 6: for all r, m in ranks do r is a rank function, m the rank multiplier 7: -If this is true, those instances should be terminated -according to some well defined rules-and the new VM should be scheduled into that freed node. 
Ω i ← Ω i + m j * r j (h i , -If this is not possible, then the request should continue with the failure process defined in the scheduling algorithm -it can be an error, or it can be retried after some elapsed time. • If it is a preemptible instance, it should try to schedule it without other considerations. It should be noted that the preemptible instance selection and termination does not only depend on pure theoretical aspects, as this selection will have an influence on the resource provider revenues and the service level agreements signed with their users. Taking this into account, it is obvious that modularity and flexibility for the preemptible instance selection and is a key aspect here. For instance, an instance selection and termination algorithm that is only based on the minimization of instances terminated in order to free enough resources may not work for a provider that wish to terminate the instances that generate less revenues, event if it is needed to terminate a larger amount of instances. Therefore, the aim of our work is not only to design an scheduling algorithm, but also to design it as a modular system so that it would be possible to create any more complex model on top of it once the initial preemptible mechanism is in place. The most evident design approach is a retry mechanism based on two selection cycles within a scheduling loop. The scheduler will take into account a scheduling failure and then perform a second scheduling cycle after preemptible instances have been evacuated -either by the scheduler itself or by an external service. However, this two-cycle scheduling mechanism would introduce a larger scheduling latency and load in the system. This latency is something perceived negatively by the users [38] so the challenge here is how to perform this selection in a efficient way, ensuring that the selected preemptible instances are the less costly for the provider. Preemptible-aware scheduler Our proposed algorithm (depicted in Figure 1) addresses the preemptible instances scheduling within one scheduling loop, without introducing a retry cycle, bur rather performing the scheduling taking into account different host states depending on the instance that is to be scheduled. This design takes into account the fact that all the algorithms described in Section 2.1 are based on two complimentary phases: filtering and raking., but adds a final phase, where the preemptible instances that need to be terminated are selected. The algorithm pseudocode is shown in 2 and will be further described in what follows. As we already explained, the filtering phase eliminates the nodes that are not able to host the new request due to its current state -for instance, because of a lack of resources or a VM anti-affinity-, whereas the raking phase is the one in charge of assigning a rank or weight to the filtered hosts so that the best candidate is selected. I our preemptible-aware scheduler, the filtering phase only takes into account preemptible instances when doing the filtering phase. In order to do so we propose to utilize two different states for the physical hosts: h f This state will take into account all the running VM inside that host, that is, the preemptible and non preemptible instances. h n This state will not take into account all the preemptible instances inside that host. That is, the preemptible instances running into a particular physical host are not accounted in term of consumed resources. 
Whenever a new request arrives, the scheduler will use the h f or h n host states for the filtering phase, depending on the type of the request: if Filter(h i , req) then 10: Ω i ← 0 11: for all r, m in ranks do r is a rank function, m the rank multiplier 12: host ← Best Host(hosts) 22: Select and Terminate(req, host) 23: return host 24: end function • When a normal request arrives, the scheduler will use h n . Ω i ← Ω i + m j * r j (h f i , • When a preemptible request arrives, the scheduler will use h f . This way the scheduler ensures that a normal instance can run regardless of any preemptible instance occupying its place, as the h n state does not account for the resources consumed by any preemptible instance running on the host. After this stage, the resulting list of hosts will contain all the hosts susceptible to host the new request, either by evacuating one or several preemptible instances or because there are enough free resources. Once the hosts are filtered out, the ranking phase is started. However, in order to perform the correct ranking, it is needed to use the full state of the hosts, that is, h f . This is needed as the different rank functions will require the information about the preemptible instances so as to select the best node. This list of filtered hosts may contain hosts that are able to accept the request because they have free resources and nodes that would imply the termination of one or several instances. In order to choose the best host for scheduling a new instance new ranking functions need to be implemented, in order to prioritise the costless host. The simplest ranking function based on the number of preemptible instances per host is described in Algorithm 3. This function assigns a negative value if the free resources are not enough to accommodate the request, detecting an overcommit produced by the fact that it is needed to terminate one or several preemptible instances. However, this basic function only establishes a naive ranking based on the termination or not of instances. In the case that it is needed to terminate various instances, this function does not establish any rank between them, so more appropriate rank functions need to be created, depending on the business model implemented by the provider. Our design takes this fact into account, allowing for modularity of these cost functions that can be applied to the raking function. For instance, commercial providers tend to charge by complete periods of 1 h, so partial hours are not accounted. A ranking function based in this business model can be expressed as Algorithm 4, ranking hosts according to the preemptible instances running inside them and the time needed until the next complete period. Algorithm 4 Ranking function based on 1 h consumption periods. 1 Once the ranking phase is finished, the scheduler will have built an ordered list of hosts, containing the best candidates for the new request. Once the best host selected it is still needed to select which individual preemptible instances need to be evacuated from that host, if any. Our design adds a third phase, so as to terminate the preemptible instances if needed. This last phase will perform an additional raking and selection of the candidate preemptible instances inside the selected host, so as to select the less costly for the provider. 
This selection leverages a similar ranking process, performed on the preemptible instances, considering all the preemptible instances combination and the costs for the provider, as shown in Algorithm 5. Evaluation In the first part of this section (4.2) we will describe an implementation -done for the OpenStack Compute CMF-, in order to evaluate our proposed algorithm. We have decided to implement it on top of the Open-Stack Compute software due to its modular design, that allowed us to easily plug our modified modules without requiring significant modifications to the code core. Afterwards we will perform two different evaluations. On the one hand we will assess the algorithm Algorithm 5 Preemptible instance selection and termination. 1 Terminate(selected instances) 11: end procedure correctness, ensuring that the most desirable instances are selected according to the configured weighers (Section 4.4). On the other hand we will examine the performance of the proposed algorithm when compared with the default scheduling mechanism (Section 4.5). OpenStack Compute Filter Scheduler The OpenStack Compute scheduler is called Filter Scheduler and, as already described in Section 2, it is a rank scheduler, implementing two different phases: filtering and weighting. Filtering The first step is the filtering phase. The scheduler applies a concatenation of filter functions to the initial set of available hosts, based on the host properties and state -e.g. free RAM or free CPU number-user input -e.g. affinity or anti-affinity with other instances-and resource provider defined configuration. When the filtering process has concluded, all the hosts in the final set are able to satisfy the user request. Weighing Once the filtering phase returns a list of suitable hosts, the weighting stage starts so that the best host -according to the defined configuration-is selected. The scheduler will apply all hosts the same set of weigher functions w i (h), taking into account each host state h. Those weigher functions will return a value considering the characteristics of the host received as input parameter, therefore, total weight Ω for a node h is calculated as follows: Ω = n m i · N(w i (h)) Where m i is the multiplier for a weigher function, N(w i (h)) is the normalized weight between [0, 1] calculated via a rescaling like: N(w i (h)) = w i (h) − min W max W − min W where w i (h) is the weight function, and min W, max W are the minimum and maximum values that the weigher has assigned for the set of weighted hosts. This way, the final weight before applying the multiplication factor will be always in the range [0, 1]. After these two phases have ended, the scheduler has a set of hosts, ordered according to the weights assigned to them, thus it will assign the request to the host with the maximum weight. If several nodes have the same weight, the final host will be randomly selected from that set. Implementation Evaluation We have extended the Filter Scheduler algorithm with the functionality described in Algorithm 6. We have also implemented the ranking functions described in Algorithm 3 and Algorithm 4 as weighers, using the Open-Stack terminology. Moreover, the Filter Scheduler has been also modified so as to introduce the additional select and termination phase (Algorithm 5). This phase has been implemented following the same same modular approach as the OpenStack weighting modules, allowing to define and implement additional cost modules to determine which instances are to be selected for termination. 
As for the cost functions, we have implemented a module following Algorithm 4. This cost function assumes that customers are charged by periods of 1 h, therefore it prioritizes the termination of Spot Instances with the lower partial-hour consumption (i.e. if we consider instances with 120 min, 119 min and 61 min of duration, the instance with 120 min will be terminated). This development has been done on the OpenStack Newton version 11 , and was deployed on the infrastructure that we describe in Section 4.3. Terminate(selected instances) 30: end procedure Algorithm 6 Preemptible Instances Configurations In order to evaluate our algorithm proposal we have set up a dedicated test infrastructure comprising a set of 26 identical IBM HS21 blade servers, with the characteristics described in Table 1. All the nodes had an identical base installation, based on an Ubuntu Server 16.04 LTS, running the Linux 3.8.0 Kernel, where we have deployed OpenStack Compute as the Cloud Management Framework. The system architecture is as follows: • An Image Catalog running the OpenStack Image Service (Glance) serving images from its local disk. • 24 Compute Nodes running OpenStack Compute, hosting the spawned instances. The network setup of the testbed consists on two 10 Gbit Ethernet switches, interconnected with a 10 Gbit Ethernet link. All the hosts are evenly connected to these switches using a 1 Gbit Ethernet connection. We have considered the VM sizes described in Table 2, based on the default set of sizes existing in a default OpenStack installation. Algorithm Evaluation The purpose of this evaluation is to ensure that the proposed algorithm is working as expected, so that: • The scheduler is able to deliver the resources for a normal request, by terminating one or several preemptible instances when there are not enough free idle resources. • The scheduler selects the best preemptible instance for termination, according to the configured policies by means of the scheduler weighers. Scheduling using same Virtual Machine sizes For the first batch of tests, we have considered same size instances, to evaluate if the proposed algorithm chooses the best physical host and selects the best preemptible instance for termination. We generated requests for both preemptible and normal instances -chosen randomly-, of random duration between 10 min and 300 min, using an exponential distribution [39] until the first scheduling failure for a normal instance was detected. The compute nodes used have 16 GB of RAM and eight CPUs, as already described. The VM size requested was the medium one, according to Table 2, therefore each compute node could host up to four VMs. We executed these requests and monitored the infrastructure until the first scheduling failure for a normal instance took place, thus the preemptible instance termination mechanism was triggered. At that moment we took a snapshot of the nodes statuses, as shown in Table 3 and Table 4. These tables depict the status for each of the physical hosts, as well as the running time for each of the instances that were running at that point. The shaded cells represents the preemptible instance that was terminated to free the resources for the incoming non preemptible request. Considering that the preemptible instance selection was done according to Algorithm 5 using the cost function in Algorithm 4, the chosen instance has to be the one with the lowest partial-hour period. In Table 3 this is the instance marked with ( 1 ): BP1. 
By chance, it cor- responds with the preemptible instance with the lowest run time. Table 4 shows a different test execution under the same conditions and constraints. Again, the selected instance has to be the one with the lowest partial-hour period. In Table 4 this corresponds to the instance marked again with ( 1 ): CP1, as its remainder is 1 min. In this case this is not the preemptible instance with the lowest run time (being it CP2). Scheduling using different Virtual Machine sizes For the second batch of tests we requested instances using different sizes, always following the sizes in Table 2. Table 5 depicts the testbed status when a request for a large VM caused the termination of the instances marked with ( 1 ): AP2, AP3 and AP4. In this case, the scheduler decided that the termination of these three instances caused a smaller impact on the provider, as the sum of their 1 h remainders (55) was lower than any of the other possibilities (58 for BP1, 57 for CP1, 112 for CP2 and CP3). Table 6 shows a different test execution under the same conditions and constraints. In this case, the preemptible instance termination was triggered by a new VM request of size medium and the selected instance was the one marked with ( 1 ): BP3, as host-B will have enough free space just by terminating one instance. Performance evaluation As we have already said in Section 3, we have focused on designing an algorithm that does not introduce a significant latency in the system. This latency will introduce a larger delay when delivering the requested resources to the end users, something that is not desirable by any resource provider [4]. In order to evaluate the performance of our proposed algorithm we have done a comparison with the default, unmodified OpenStack Filter Scheduler. Moreover, for the sake of comparison, we have implemented a scheduler based on a retry loop as well. This scheduler performs a normal scheduling loop, and if there is a scheduling failure for a normal instance, it will perform a second pass taking into account the existing preemptible instances. The preemptible instance selection and termination mechanisms remain the same. We have scheduled 130 Virtual Machines of the same size on our test infrastructure and we have recorded the timings for the scheduling function, thus calculating the means and standard deviation for each of the following scenarios: • Using the original, unmodified OpenStack Filter scheduler with an empty infrastructure. • Using the preemptible instances Filter Scheduler and the retry scheduler: -Requesting normal instances with an empty infrastructure. -Requesting preemptible instances with an empty infrastructure. -Requesting normal instances with a saturated infrastructure, thus implying the termination of a preemptible instance each time a request is performed. We have then collected the scheduling calls timings and we have calculated the means and deviations for each scenario, as shown in Figure 2. Numbers in these scenarios are quite low, since the infrastructure is a small testbed, but these numbers are expected to become larger as the infrastructure grows in size. As it can be seen in the aforementioned Figure 2, our solution introduces a delay in the scheduling calls, as we need to calculate additional host states (we hold two different states for each node) and we need to select a preemptible instance for termination (in case it is needed). 
In the case of the retry scheduler, this delay does not exists and numbers are similar to the original scheduler. However, when it is needed to trigger the termination of a preemptible instance, having a retry mechanism (thus executing the same scheduling call two times) introduces a significantly larger penalty when compared to our proposed solution. We consider that the latency that we are introducing is within an acceptable range, therefore not impacting significantly the scheduler performance. Exploitation and integration in existing infrastructures The functionality introduced by the preemptible instances model that we have described in this work can be exploited not only within a cloud resource provider, but it can also be leveraged on more complex hybrid infrastructures. High Performance Computing Integration One can find in the literature several exercises of integration of hybrid infrastructures, integrating cloud resources, commercial or private, with High Performance Computing (HPC) resources. Those efforts focus on outbursting resources from the cloud, when the HPC system does not provide enough resources to solve a particular problem [41]. On-demand provisioning using cloud resources when the batch system of the HPC is full is certainly a viable option to expand the capabilities of a HPC center for serial batch processing. We focus however in the complementary approach, this is, using HPC resources to provide cloud resources capability, so as to complement existing distributed infrastructures. Obviously HPC systems are oriented to batch processing of highly coupled (parallel) jobs. The question here is optimizing resource utilization when the HPC batch system has empty slots. If we backfill the empty slots of a HPC system with cloud jobs, and a new regular batch job arrives from the HPC users, the cloud jobs occupying the slots needed by the newly arrived batch job should be terminated immediately, so as to not disturb regular work. Therefore such cloud jobs should be submitted as Spot Instances Enabling HPC systems to process other jobs during periods in which the load of the HPC mainframe is low, appears as an attractive possibility from the point of view of resource optimization. However the practical implementation of such idea would need to be compatible with both, the HPC usage model, and the cloud usage model. In HPC systems users login via ssh to a frontend. At the frontend the user has the tools to submit jobs. The scheduling of HPC jobs is done using a regular batch systems software (such as SLURM, SGE, etc...). HPC systems are typically running MPI parallel jobs as well using specialized hardware interconnects such as Infiniband. Let us imagine a situation in which the load of the HPC system is low. One can instruct the scheduler of the batch system to allow cloud jobs to HPC system occupying those slots not allocated by the regular batch allocation. In order to be as less disrupting as possible the best option is that the cloud jobs arrive as preemptible instances as described through this paper. When a batch job arrives to the HPC system, this job should be immediately scheduled and executed. Therefore the scheduler should be able to perform the following steps: • Allocate resources for the job that just arrived to the batch queue system • Identify the cloud jobs that are occupying those resources, and stop them. • Dispatch the batch job. 
In the case of parallel jobs the scheduling decision may depend on many factors, such as the topology of the network requested or the affinity of the processes at the core/CPU level. In any case, parallel jobs that make heavy use of the low-latency interconnect should not share nodes with any other job.

High Throughput Computing Integration Existing High Throughput Computing infrastructures, like the service offered by EGI (https://www.egi.eu/services/high-throughput-compute/), could benefit from cloud providers offering preemptible instances. It has been shown that cloud resources and IaaS offerings can be used to run HTC tasks [42] in a pull mode, where cloud instances are started in such a way that they are able to pull computing tasks from a central location (for example using a distributed batch system like HTCondor). However, sites are reluctant to offer large amounts of resources to be used in this mode due to the lack of a fixed duration for cloud instances. In this context, federated cloud e-Infrastructures like the EGI Federated Cloud [43] could benefit from resource providers offering preemptible instances. Users could populate idle resources with preemptible instances pulling their HTC tasks, whereas interactive and normal IaaS users would not be impacted negatively, as their requests would still be satisfied. In this way, large amounts of cloud computing power could be offered to the European research community.

Conclusions In this work we have proposed a preemptible instance scheduling design that does not substantially modify the existing scheduling algorithms, but rather enhances them. The modular rank and cost mechanisms allow the definition and implementation of any resource-provider-defined policy by means of additional pluggable rankers. Our proposal and implementation enable all kinds of service providers -whose infrastructure is managed by open source middleware such as OpenStack- to offer a new access model based on preemptible instances, with a functionality similar to the one offered by the major commercial providers. We have checked the algorithm correctness when selecting the preemptible instances for termination; the results show that the algorithm behaves as expected. Moreover, we have compared the scheduling performance with regard to the equivalent default scheduler, obtaining similar results and thus ensuring that the scheduler performance is not significantly impacted. This implementation allows applying more complex policies on top of the preemptible instances, such as instance termination based on price fluctuations (that is, implementing a preemptible instance stock market), preemptible instance migration so as to consolidate them, or proactive instance termination to maximize the provider's revenues by not delivering computing power to the users at no cost.
5,471
1812.10668
2906853528
Abstract Maximizing resource utilization through efficient resource provisioning is a key factor for any cloud provider: commercial actors can maximize their revenues, whereas scientific and non-commercial providers can maximize their infrastructure utilization. Traditionally, batch systems have allowed data centers to fill their resources as much as possible by using backfilling and similar techniques. However, in an IaaS cloud, where virtual machines are supposed to live indefinitely, or at least as long as the user is able to pay for them, these policies are not easily implementable. In this work we present a new scheduling algorithm for IaaS providers that supports preemptible instances -instances that can be stopped by higher-priority requests- without introducing large modifications in the current cloud schedulers. This scheduler enables the implementation of new cloud usage and payment models that allow more efficient usage of the resources and potential new revenue sources for commercial providers. We also study the correctness and the performance overhead of the proposed scheduler against existing solutions.
@cite_22 delivered an implementation of preemptible instances for the Nimbus toolkit in order to utilize those instances for backfilling of idle resources, focusing on fault-tolerant HTC tasks. However, they did not focus on offering this functionality to the end users, but rather to the operators of the infrastructure, as a way to maximize their resource utilization. In that work, it was the responsibility of the provider to configure the backfill tasks that were to be executed on the idle resources.
{ "abstract": [ "A key advantage of infrastructure-as-a-service (IaaS) clouds is providing users on-demand access to resources. To provide on-demand access, however, cloud providers must either significantly overprovision their infrastructure (and pay a high price for operating resources with low utilization) or reject a large proportion of user requests (in which case the access is no longer on-demand). At the same time, not all users require truly on-demand access to resources. Many applications and workflows are designed for recoverable systems where interruptions in service are expected. For instance, many scientists utilize high-throughput computing (HTC)-enabled resources, such as Condor, where jobs are dispatched to available resources and terminated when the resource is no longer available. We propose a cloud infrastructure that combines on-demand allocation of resources with opportunistic provisioning of cycles from idle cloud nodes to other processes by deploying backfill virtual machines (VMs). For demonstration and experimental evaluation, we extend the Nimbus cloud computing toolkit to deploy backfill VMs on idle cloud nodes for processing an HTC workload. Initial tests show an increase in IaaS cloud utilization from 37.5 to 100 during a portion of the evaluation trace but only 6.39 overhead cost for processing the HTC workload. We demonstrate that a shared infrastructure between IaaS cloud providers and an HTC job management system can be highly beneficial to both the IaaS cloud provider and HTC users by increasing the utilization of the cloud infrastructure (thereby decreasing the overall cost) and contributing cycles that would otherwise be idle to processing HTC jobs." ], "cite_N": [ "@cite_22" ], "mid": [ "2152942310" ] }
An efficient cloud scheduler design supporting preemptible instances $
$ This is the author's accepted version of the following article: Álvaro López García, Enol Fernández del Castillo, Isabel Campos Plasencia, "An efficient cloud scheduler design supporting preemptible instances", accepted in Future Generation Computer Systems, 2019, which is published in its final form at https://doi.org/10.1016/j.future.2018.12.057. This preprint article may be used for non-commercial purposes under a CC BY-NC-SA 4.0 license.

Infrastructure as a Service (IaaS) Clouds make it possible to provide computing capacity as a utility to the users following a pay-per-use model. This allows the deployment of complex execution environments without an upfront infrastructure commitment, fostering the adoption of the cloud by users that could not afford to operate an on-premises infrastructure. In this regard, Clouds are not only present in the industrial ICT ecosystem, but are also increasingly adopted by other stakeholders such as public administrations or research institutions. Indeed, clouds are nowadays common in the scientific computing field [1,2,3,4], due to the fact that they are able to deliver resources that can be configured with the complete software needed for an application [5]. Moreover, they also allow the execution of non-transient tasks, making it possible to run virtual laboratories, databases, etc. that can be tightly coupled with the execution environments. This flexibility poses a great advantage over traditional computational models -such as batch systems or even Grid computing- where a fixed operating system is normally imposed and any complementary tools (such as databases) need to be self-managed outside the infrastructure. This fact is pushing scientific datacenters outside their traditional boundaries, evolving into a mixture of services that deliver more added value to their users, with the Cloud as a prominent actor.

Maximizing resource utilization by performing an efficient resource provisioning is a fundamental aspect for any resource provider, especially for scientific providers. Users accessing these computing resources do not usually pay -or at least they are not charged directly- for their consumption, and normally resources are paid for via other indirect methods (like access grants), with users tending to assume that resources are free. Scientific computing facilities tend to work in a fully saturated manner, aiming at the maximum possible resource utilization level. In this context it is common that compute servers spawned in a cloud infrastructure are not terminated at the end of their lifetime, resulting in idle resources, a state that is not desirable as long as there is processing that needs to be done [4]. In a commercial cloud this is not a problem, since users are being charged for their allocated resources, regardless of whether they are being used or not. Therefore users tend to take care of their virtual machines, terminating them whenever they are not needed anymore. Moreover, in the cases where users leave their resources running forever, the provider is still obtaining revenues for those resources. Cloud operators try to solve this problem by setting resource quotas that limit the amount of resources that a user or group is able to consume, by doing a static partitioning of the resources [8]. However, this kind of resource allocation automatically leads to an underutilization of the infrastructure, since the partitioning needs to be conservative enough so that other users can utilize the infrastructure.
Quotas impose hard limits that lead to dedicated resources for a group, even if the group is not using them. Besides, cloud providers also need to provide their users with on-demand access to the resources, one of the most compelling cloud characteristics [9]. In order to provide such access, an overprovisioning of resources is expected [10] in order to fulfil user requests, leading to an infrastructure where utilization is not maximized, as there should always be enough resources available for a potential request. Taking into account that some processing workloads executed on the cloud do not really require on-demand access (but rather are executed over long periods of time), a compromise between these two aspects (i.e. maximizing utilization and providing enough on-demand access to the users) can be reached by using idle resources to execute those tasks that do not require truly on-demand access [10]. This approach is indeed common in scientific computing, where batch systems maximize the resource utilization through backfilling techniques, providing opportunistic access to this kind of tasks.

Unlike in batch processing environments, virtual machines (VMs) spawned in a Cloud do not have a fixed duration and are supposed to live forever -or until the user decides to stop them. Commercial cloud providers offer specific VM types (like the Amazon EC2 Spot Instances 1 or the Google Compute Engine Preemptible Virtual Machines 2 ) that can be provisioned at a fraction of a normal VM price, with the caveat that they can be terminated whenever the provider decides to do so. This kind of VM can be used to backfill idle resources, thus making it possible to maximize utilization while providing on-demand access, since normal VMs will obtain resources by evacuating Spot or Preemptible instances.

In this paper we propose an efficient scheduling algorithm that combines the scheduling of preemptible and non-preemptible instances in a modular way. The proposed solution is flexible enough to allow different allocation, selection and termination policies, thus allowing resource providers to easily implement and enforce the strategy that is most suitable for their needs. In our work we extend the OpenStack Cloud middleware with a prototype implementation of the proposed scheduler, as a way to demonstrate and evaluate the feasibility of our solution. We moreover perform an evaluation of the performance of this solution, in comparison with the existing OpenStack scheduler. The remainder of the paper is structured as follows. In Section 2 we present the related work in this field. In Section 3 we propose a design for an efficient scheduling mechanism for preemptible instances. In Section 4 we present an implementation of our proposed algorithm, as well as an evaluation of its feasibility and performance with regard to a normal scheduler. Finally, in Section 6 we present this work's conclusions.

Scheduling in the existing Cloud Management Frameworks Generally speaking, existing Cloud Management Frameworks (CMFs) do not implement full-fledged queuing mechanisms as other computing models do (like the Grid or traditional batch systems). Clouds are normally more focused on the rapid scaling of resources rather than on batch processing, where systems are governed by queuing systems [34]. The default scheduling strategies in the current CMFs are mostly based on the immediate allocation of resources on a first-come, first-served basis.
The cloud schedulers provision the requested virtual machines immediately, or they are not provisioned at all (except in some CMFs that implement a FIFO queuing mechanism) [35]. However, some users require a queuing system -or more advanced features like advance reservations- for running virtual machines. In those cases, there are external services such as Haizea [36] for OpenNebula or Blazar 6 for OpenStack. Those systems lie between the CMF and the users, intercepting their requests and interacting with the cloud system on their behalf, implementing the required functionality. Besides simplistic scheduling policies like first-fit or random node selection [35], current CMFs implement a scheduling algorithm that is based on a rank selection of hosts, as we explain in what follows.

OpenNebula 7 uses by default a match-making scheduler, implementing the Rank Scheduling Policy [36]. This policy first performs a filtering of the existing hosts, excluding those that do not meet the request requirements. Afterwards, the scheduler evaluates some operator-defined rank expressions against the recorded information from each of the hosts so as to obtain an ordered list of nodes. Finally, the resources with a higher rank are selected to fulfil the request. OpenNebula implements a queue to hold the requests that cannot be satisfied immediately, but this queuing mechanism follows a FIFO logic, without further priority adjustment.

OpenStack 8 implements a Filter Scheduler [37], based on two separate phases. The first phase consists of the filtering of hosts, which excludes the hosts that cannot satisfy the request. This filtering follows a modular design, so that it is possible to filter out nodes based on the user request (RAM, number of vCPUs), direct user input (such as instance affinity or anti-affinity) or operator-configured filtering. The second phase consists of the weighing of hosts, following the same modular approach. Once the nodes are filtered and weighed, the best candidate is selected from that ordered set.

CloudStack 9 utilizes the term allocator to determine which host will be selected to place the newly requested VM. The nodes that are used by the allocators are the ones that are able to satisfy the request.

Eucalyptus 10 implements a greedy or round-robin algorithm. The former strategy uses the first node that is identified as suitable for running the VM; this algorithm exhausts a node before moving on to the next available node. The latter schedules each request in a cyclic manner, distributing the load evenly in the long term.

All the presented scheduling algorithms share the view that the nodes are first filtered out -so that only those that can run the request are considered- and then ordered or ranked according to some defined rules. Generally speaking, the scheduling algorithm can be expressed as the pseudo-code in Algorithm 1.
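This shared filter-then-rank structure can be summarized with a short sketch. The following is a minimal, self-contained Python illustration of the generic loop, not the actual code of any of the CMFs above; the Host and Request classes and the example filter and rank functions are hypothetical placeholders.

```python
# Minimal sketch of the generic filter-then-rank scheduling loop shared by
# the CMFs discussed above. All names (Host, Request, filters, ranks) are
# illustrative placeholders, not real CMF APIs.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_ram: int   # MB
    free_vcpus: int

@dataclass
class Request:
    ram: int
    vcpus: int

def ram_filter(host, req):
    return host.free_ram >= req.ram

def cpu_filter(host, req):
    return host.free_vcpus >= req.vcpus

def free_ram_rank(host, req):
    return host.free_ram  # prefer hosts with more free RAM

def schedule(req, hosts, filters, ranks):
    """Return the best host for `req`, or None if no host passes the filters."""
    best, best_score = None, float("-inf")
    for host in hosts:
        if not all(f(host, req) for f in filters):
            continue                                      # host cannot satisfy the request
        score = sum(m * r(host, req) for r, m in ranks)   # weighted rank sum
        if score > best_score:
            best, best_score = host, score
    return best

hosts = [Host("node-1", 8000, 4), Host("node-2", 16000, 8)]
print(schedule(Request(ram=4096, vcpus=2), hosts,
               [ram_filter, cpu_filter], [(free_ram_rank, 1.0)]))
```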
Preemptible Instances Design The initial assumption for a preemptible-aware scheduler is that the scheduler should be able to take into account two different instance types -preemptible and normal- according to the following basic rules:
• If it is a normal instance and there are not enough free resources, the scheduler should check whether terminating one or several preemptible instances would free enough resources for it.
-If this is true, those instances should be terminated -according to some well defined rules- and the new VM should be scheduled into that freed node.
-If this is not possible, then the request should continue with the failure process defined in the scheduling algorithm -it can be an error, or it can be retried after some elapsed time.
• If it is a preemptible instance, the scheduler should try to schedule it without any other considerations.

It should be noted that the preemptible instance selection and termination do not only depend on purely theoretical aspects, as this selection will have an influence on the resource provider's revenues and on the service level agreements signed with its users. Taking this into account, it is obvious that modularity and flexibility in the preemptible instance selection and termination are a key aspect here. For instance, a selection and termination algorithm that is only based on minimizing the number of instances terminated in order to free enough resources may not work for a provider that wishes to terminate the instances that generate less revenue, even if that means terminating a larger number of instances. Therefore, the aim of our work is not only to design a scheduling algorithm, but also to design it as a modular system, so that it is possible to create more complex models on top of it once the initial preemptible mechanism is in place.

The most evident design approach is a retry mechanism based on two selection cycles within a scheduling loop. The scheduler would take into account a scheduling failure and then perform a second scheduling cycle after preemptible instances have been evacuated -either by the scheduler itself or by an external service. However, this two-cycle scheduling mechanism would introduce a larger scheduling latency and load in the system. This latency is perceived negatively by the users [38], so the challenge here is how to perform this selection in an efficient way, ensuring that the selected preemptible instances are the least costly for the provider.

Preemptible-aware scheduler Our proposed algorithm (depicted in Figure 1) addresses the preemptible instance scheduling within one scheduling loop, without introducing a retry cycle, but rather performing the scheduling taking into account different host states depending on the instance that is to be scheduled. This design takes into account the fact that all the algorithms described in Section 2.1 are based on two complementary phases -filtering and ranking- but adds a final phase, where the preemptible instances that need to be terminated are selected. The algorithm pseudocode is shown in Algorithm 2 and will be further described in what follows. As we already explained, the filtering phase eliminates the nodes that are not able to host the new request due to their current state -for instance, because of a lack of resources or a VM anti-affinity-, whereas the ranking phase is in charge of assigning a rank or weight to the filtered hosts so that the best candidate is selected. In our preemptible-aware scheduler, preemptible instances receive a special treatment only during the filtering phase. In order to do so we propose to utilize two different states for the physical hosts:
h f : This state takes into account all the running VMs inside that host, that is, both the preemptible and the non-preemptible instances.
h n : This state does not take into account the preemptible instances inside that host. That is, the preemptible instances running on a particular physical host are not accounted for in terms of consumed resources.
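As a rough illustration, the following sketch derives the two host states from the same list of running VMs. The data structures are hypothetical and only RAM is tracked, but the same idea applies to any resource considered by the filters; as described next, h n is the state used to filter normal requests and h f the one used for preemptible requests.

```python
# Sketch of the two host states used by the preemptible-aware filtering phase.
# h_f accounts for every running VM; h_n ignores preemptible VMs, so a normal
# request filtered against h_n can claim space currently used by them.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    ram: int            # MB
    preemptible: bool

@dataclass
class HostState:
    free_ram: int

def full_state(total_ram, vms):
    """h_f: subtract the RAM of all running VMs."""
    return HostState(total_ram - sum(vm.ram for vm in vms))

def normal_state(total_ram, vms):
    """h_n: subtract only the RAM of non-preemptible VMs."""
    return HostState(total_ram - sum(vm.ram for vm in vms if not vm.preemptible))

vms = [VM("n1", 4096, False), VM("p1", 4096, True), VM("p2", 2048, True)]
h_f = full_state(16384, vms)       # used when filtering preemptible requests
h_n = normal_state(16384, vms)     # used when filtering normal requests
print(h_f.free_ram, h_n.free_ram)  # 6144 vs 12288
```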
Whenever a new request arrives, the scheduler will use the h f or h n host state for the filtering phase, depending on the type of the request:
• When a normal request arrives, the scheduler will use h n .
• When a preemptible request arrives, the scheduler will use h f .
This way the scheduler ensures that a normal instance can run regardless of any preemptible instance occupying its place, as the h n state does not account for the resources consumed by any preemptible instance running on the host. After this stage, the resulting list of hosts will contain all the hosts susceptible of hosting the new request, either by evacuating one or several preemptible instances or because there are enough free resources.

Once the hosts are filtered out, the ranking phase starts. However, in order to perform the correct ranking, it is necessary to use the full state of the hosts, that is, h f . This is needed because the different rank functions require the information about the preemptible instances so as to select the best node. This list of filtered hosts may contain hosts that are able to accept the request because they have free resources, as well as nodes that would imply the termination of one or several instances. In order to choose the best host for scheduling a new instance, new ranking functions need to be implemented that prioritise the least costly host. The simplest ranking function, based on the number of preemptible instances per host, is described in Algorithm 3. This function assigns a negative value if the free resources are not enough to accommodate the request, detecting an overcommit produced by the fact that one or several preemptible instances need to be terminated. However, this basic function only establishes a naive ranking based on whether instances are terminated or not. In the case that several instances need to be terminated, this function does not establish any rank between them, so more appropriate rank functions need to be created, depending on the business model implemented by the provider. Our design takes this fact into account, allowing for modularity of the cost functions that can be applied to the ranking function. For instance, commercial providers tend to charge by complete periods of 1 h, so partial hours are not accounted for. A ranking function based on this business model can be expressed as Algorithm 4, ranking hosts according to the preemptible instances running inside them and the time needed until the next complete period.
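The following sketch illustrates this kind of host weigher: one function returns zero when the host can accept the request without preemption and a negative count otherwise, in the spirit of Algorithm 3, and a variant penalizes hosts by the 1 h remainders of their preemptible instances, in the spirit of Algorithm 4. The data structures and numbers are hypothetical, and this is only an approximation of the paper's weighers, not their actual code.

```python
# Sketch of host weighers in the spirit of Algorithms 3 and 4: hosts that can
# take the request without terminating preemptible instances rank highest;
# otherwise the rank is negative and grows with the termination cost.
# Data structures are hypothetical; run times are in minutes.

def naive_weigher(host_free_ram, req_ram, preemptible_vms):
    """Negative when the request overcommits the host (Algorithm 3 spirit)."""
    if host_free_ram >= req_ram:
        return 0.0                       # no preemption needed
    return -float(len(preemptible_vms))  # fewer candidate terminations is better

def hour_period_weigher(host_free_ram, req_ram, preemptible_run_times):
    """Penalize hosts whose preemptible VMs have large unbilled 1 h remainders
    (Algorithm 4 spirit)."""
    if host_free_ram >= req_ram:
        return 0.0
    return -float(sum(t % 60 for t in preemptible_run_times))

# Host with 2 GB free facing a 4 GB request, two preemptible VMs running for
# 119 and 61 minutes -> remainders 59 and 1, weights -2 and -60 respectively.
print(naive_weigher(2048, 4096, ["p1", "p2"]))
print(hour_period_weigher(2048, 4096, [119, 61]))
```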
Once the ranking phase is finished, the scheduler will have built an ordered list of hosts containing the best candidates for the new request. Once the best host is selected, it is still necessary to choose which individual preemptible instances need to be evacuated from that host, if any. Our design adds a third phase, so as to terminate the preemptible instances if needed. This last phase performs an additional ranking and selection of the candidate preemptible instances inside the selected host, so as to select the least costly option for the provider. This selection leverages a similar ranking process, performed on the preemptible instances, considering all the preemptible instance combinations and their cost for the provider, as shown in Algorithm 5.

Evaluation In the first part of this section (4.2) we describe an implementation -done for the OpenStack Compute CMF- in order to evaluate our proposed algorithm. We decided to implement it on top of the OpenStack Compute software due to its modular design, which allowed us to easily plug in our modified modules without requiring significant modifications to the core code. Afterwards we perform two different evaluations. On the one hand we assess the algorithm correctness, ensuring that the most desirable instances are selected according to the configured weighers (Section 4.4). On the other hand we examine the performance of the proposed algorithm when compared with the default scheduling mechanism (Section 4.5).

OpenStack Compute Filter Scheduler The OpenStack Compute scheduler is called Filter Scheduler and, as already described in Section 2, it is a rank scheduler implementing two different phases: filtering and weighing.

Filtering The first step is the filtering phase. The scheduler applies a concatenation of filter functions to the initial set of available hosts, based on the host properties and state -e.g. free RAM or free CPU count-, user input -e.g. affinity or anti-affinity with other instances- and resource-provider-defined configuration. When the filtering process has concluded, all the hosts in the final set are able to satisfy the user request.

Weighing Once the filtering phase returns a list of suitable hosts, the weighing stage starts so that the best host -according to the defined configuration- is selected. The scheduler applies to all hosts the same set of weigher functions w_i(h), taking into account each host state h. Those weigher functions return a value considering the characteristics of the host received as input parameter; therefore, the total weight \Omega for a node h is calculated as follows:

\Omega = \sum_{i=1}^{n} m_i \cdot N(w_i(h))

where m_i is the multiplier for a weigher function and N(w_i(h)) is the weight normalized to [0, 1], calculated via a rescaling:

N(w_i(h)) = \frac{w_i(h) - \min W}{\max W - \min W}

where w_i(h) is the weight function, and \min W, \max W are the minimum and maximum values that the weigher has assigned over the set of weighted hosts. This way, the final weight before applying the multiplication factor will always be in the range [0, 1]. After these two phases have ended, the scheduler has a set of hosts ordered according to the weights assigned to them, and it will assign the request to the host with the maximum weight. If several nodes have the same weight, the final host will be randomly selected from that set.

Implementation Evaluation We have extended the Filter Scheduler algorithm with the functionality described in Algorithm 6. We have also implemented the ranking functions described in Algorithm 3 and Algorithm 4 as weighers, using the OpenStack terminology. Moreover, the Filter Scheduler has also been modified so as to introduce the additional selection and termination phase (Algorithm 5). This phase has been implemented following the same modular approach as the OpenStack weighing modules, making it possible to define and implement additional cost modules to determine which instances are to be selected for termination.
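One possible reading of this selection-and-termination phase is sketched below: on the chosen host, enumerate the combinations of preemptible instances that free enough resources and keep the one with the smallest total cost, here the sum of the 1 h remainders, matching the example cost module used in the evaluation. The helper names, data layout and numbers are assumptions for illustration, not the paper's Algorithm 5 verbatim.

```python
# Sketch of the selection-and-termination phase (in the spirit of Algorithm 5):
# among all combinations of preemptible instances on the selected host that
# free enough RAM, pick the one with the lowest total cost. The cost used here
# is the 1 h remainder (unbilled minutes), as in the example cost module.
from itertools import combinations

def remainder_cost(run_time_min):
    return run_time_min % 60  # minutes past the last fully charged hour

def select_for_termination(preemptible, needed_ram):
    """preemptible: list of (name, ram_mb, run_time_min). Returns the cheapest
    combination that frees at least `needed_ram`, or None if impossible."""
    best, best_cost = None, float("inf")
    for k in range(1, len(preemptible) + 1):
        for combo in combinations(preemptible, k):
            freed = sum(ram for _, ram, _ in combo)
            if freed < needed_ram:
                continue
            cost = sum(remainder_cost(t) for _, _, t in combo)
            if cost < best_cost:
                best, best_cost = combo, cost
    return best

# Hypothetical host: three small and one large preemptible instance.
vms = [("p1", 2048, 130), ("p2", 2048, 65), ("p3", 2048, 150), ("p4", 6144, 119)]
# Terminating p1+p2+p3 (remainders 10+5+30=45) is cheaper than p4 alone (59).
print(select_for_termination(vms, 6000))
```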
As for the cost functions, we have implemented a module following Algorithm 4. This cost function assumes that customers are charged by periods of 1 h, therefore it prioritizes the termination of Spot Instances with the lower partial-hour consumption (i.e. if we consider instances with 120 min, 119 min and 61 min of duration, the instance with 120 min will be terminated). This development has been done on the OpenStack Newton version 11 , and was deployed on the infrastructure that we describe in Section 4.3.

Configurations In order to evaluate our algorithm proposal we have set up a dedicated test infrastructure comprising a set of 26 identical IBM HS21 blade servers, with the characteristics described in Table 1. All the nodes had an identical base installation, based on Ubuntu Server 16.04 LTS running the Linux 3.8.0 kernel, where we have deployed OpenStack Compute as the Cloud Management Framework. The system architecture is as follows:
• An Image Catalog running the OpenStack Image Service (Glance), serving images from its local disk.
• 24 Compute Nodes running OpenStack Compute, hosting the spawned instances.
The network setup of the testbed consists of two 10 Gbit Ethernet switches, interconnected with a 10 Gbit Ethernet link. All the hosts are evenly connected to these switches using a 1 Gbit Ethernet connection. We have considered the VM sizes described in Table 2, based on the default set of sizes existing in a default OpenStack installation.

Algorithm Evaluation The purpose of this evaluation is to ensure that the proposed algorithm works as expected, so that:
• The scheduler is able to deliver the resources for a normal request by terminating one or several preemptible instances when there are not enough free idle resources.
• The scheduler selects the best preemptible instance for termination, according to the policies configured by means of the scheduler weighers.

Scheduling using same Virtual Machine sizes For the first batch of tests we considered same-size instances, to evaluate whether the proposed algorithm chooses the best physical host and selects the best preemptible instance for termination. We generated requests for both preemptible and normal instances -chosen randomly- of random duration between 10 min and 300 min, using an exponential distribution [39], until the first scheduling failure for a normal instance was detected. The compute nodes used have 16 GB of RAM and eight CPUs, as already described. The VM size requested was the medium one, according to Table 2, therefore each compute node could host up to four VMs. We executed these requests and monitored the infrastructure until the first scheduling failure for a normal instance took place, thus triggering the preemptible instance termination mechanism. At that moment we took a snapshot of the node statuses, as shown in Table 3 and Table 4. These tables depict the status of each of the physical hosts, as well as the running time of each of the instances that were running at that point. The shaded cells represent the preemptible instance that was terminated to free the resources for the incoming non-preemptible request. Considering that the preemptible instance selection was done according to Algorithm 5 using the cost function in Algorithm 4, the chosen instance has to be the one with the lowest partial-hour period. In Table 3 this is the instance marked with ( 1 ): BP1.
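The request stream described above (random instance type, exponentially distributed durations bounded between 10 and 300 minutes) can be reproduced with a few lines. The mean of the distribution and the resampling used to enforce the bounds are assumptions, since only the distribution family and the bounds are stated.

```python
# Sketch of the workload generator used in the correctness tests: requests of
# random type (normal or preemptible) with exponentially distributed durations
# kept within the 10-300 minute range. The mean (scale) is an assumption; the
# text only states the distribution family and the bounds.
import random

def request_duration(mean_minutes=60.0, low=10, high=300):
    while True:                      # resample until the value falls in range
        d = random.expovariate(1.0 / mean_minutes)
        if low <= d <= high:
            return round(d)

def generate_requests(n, seed=42):
    random.seed(seed)
    return [{"type": random.choice(["normal", "preemptible"]),
             "duration_min": request_duration()} for _ in range(n)]

for req in generate_requests(5):
    print(req)
```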
5,471
1812.10668
2906853528
Nadjaran et al. have developed a Spot Instances as a Service (SIPaaS) framework, a set of web services that makes it possible to run a Spot market on top of an OpenStack cloud @cite_24 . However, even if this framework aims to deliver preemptible instances on an OpenStack cloud, it is designed to utilize normal resources to provide this functionality. SIPaaS utilizes normal resources to create the Spot market, which is provided to the users by means of a thin layer on top of a given OpenStack deployment, exposing a different API to interact with the resources. From the CMF point of view, all resources are of the same type, with SIPaaS being responsible for handling them in different ways. In contrast, our work leverages two different kinds of instances at the CMF level, applying different scheduling strategies depending on which kind of resource is being requested. SIPaaS also delivers a price market similar to the Amazon EC2 Spot Instances market, and therefore it also provides the Ex-CORE auction algorithm @cite_5 in order to govern the price fluctuations.
{ "abstract": [ "Designing dynamic pricing mechanisms that efficiently price resources in line with a provider's profit maximization goal is a key challenge in cloud computing environments. Despite the large volume of research published on this topic, there is no publicly available software system implementing dynamic pricing for Infrastructure as a Service cloud spot markets. This paper presents the implementation of a framework called Spot instance pricing as a Service SipaaS that supports an auction mechanism to price and allocate virtual machine instances. SipaaS is an open-source project offering a set of web services to price and sell virtual machine instances in a spot market resembling the Amazon EC2 spot instances. Cloud providers, who aim at utilizing SipaaS, should install add-ons in their existing platform to make use of the framework. As an instance, we provide an extension to the Horizon - the OpenStack dashboard project - to employ SipaaS web services and to add a spot market environment to OpenStack. To validate and evaluate the system, we conducted an experimental study with a group of 10 users utilizing the provided spot market in a real environment. Results show that the system performs reliably in a practical test environment. Copyright © 2015 John Wiley & Sons, Ltd.", "Dynamic forms of resource pricing have recently been introduced by cloud providers that offer Infrastructure as a Service (IaaS) capabilities in order to maximize profits and balance resource supply and demand. The design of a mechanism that efficiently prices perishable cloud resources in line with a provider’s profit maximization goal remains an open research challenge, however. In this article, we propose the Online Extended Consensus Revenue Estimate mechanism in the setting of a recurrent, multiunit and single price auction for IaaS cloud resources. The mechanism is envy-free, has a high probability of being truthful, and generates a near optimal profit for the provider. We combine the proposed auction design with a scheme for dynamically calculating reserve prices based on data center Power Usage Effectiveness (PUE) and electricity costs. Our simulation-based evaluation of the mechanism demonstrates its effectiveness under a broad variety of market conditions. In particular, we show how it improves on the classical uniform price auction, and we investigate the value of prior knowledge on the execution time of virtual machines for maximizing profit. We also developed a system prototype and conducted a small-scale experimental study with a group of 10 users that confirms the truthfulness property of the mechanism in a real test environment." ], "cite_N": [ "@cite_24", "@cite_5" ], "mid": [ "2483807727", "2264445763" ] }
As for the cost functions, we have implemented a module following Algorithm 4. This cost function assumes that customers are charged by periods of 1 h, therefore it prioritizes the termination of Spot Instances with the lower partial-hour consumption (i.e. if we consider instances with 120 min, 119 min and 61 min of duration, the instance with 120 min will be terminated). This development has been done on the OpenStack Newton version 11 , and was deployed on the infrastructure that we describe in Section 4.3. Terminate(selected instances) 30: end procedure Algorithm 6 Preemptible Instances Configurations In order to evaluate our algorithm proposal we have set up a dedicated test infrastructure comprising a set of 26 identical IBM HS21 blade servers, with the characteristics described in Table 1. All the nodes had an identical base installation, based on an Ubuntu Server 16.04 LTS, running the Linux 3.8.0 Kernel, where we have deployed OpenStack Compute as the Cloud Management Framework. The system architecture is as follows: • An Image Catalog running the OpenStack Image Service (Glance) serving images from its local disk. • 24 Compute Nodes running OpenStack Compute, hosting the spawned instances. The network setup of the testbed consists on two 10 Gbit Ethernet switches, interconnected with a 10 Gbit Ethernet link. All the hosts are evenly connected to these switches using a 1 Gbit Ethernet connection. We have considered the VM sizes described in Table 2, based on the default set of sizes existing in a default OpenStack installation. Algorithm Evaluation The purpose of this evaluation is to ensure that the proposed algorithm is working as expected, so that: • The scheduler is able to deliver the resources for a normal request, by terminating one or several preemptible instances when there are not enough free idle resources. • The scheduler selects the best preemptible instance for termination, according to the configured policies by means of the scheduler weighers. Scheduling using same Virtual Machine sizes For the first batch of tests, we have considered same size instances, to evaluate if the proposed algorithm chooses the best physical host and selects the best preemptible instance for termination. We generated requests for both preemptible and normal instances -chosen randomly-, of random duration between 10 min and 300 min, using an exponential distribution [39] until the first scheduling failure for a normal instance was detected. The compute nodes used have 16 GB of RAM and eight CPUs, as already described. The VM size requested was the medium one, according to Table 2, therefore each compute node could host up to four VMs. We executed these requests and monitored the infrastructure until the first scheduling failure for a normal instance took place, thus the preemptible instance termination mechanism was triggered. At that moment we took a snapshot of the nodes statuses, as shown in Table 3 and Table 4. These tables depict the status for each of the physical hosts, as well as the running time for each of the instances that were running at that point. The shaded cells represents the preemptible instance that was terminated to free the resources for the incoming non preemptible request. Considering that the preemptible instance selection was done according to Algorithm 5 using the cost function in Algorithm 4, the chosen instance has to be the one with the lowest partial-hour period. In Table 3 this is the instance marked with ( 1 ): BP1. 
By chance, it cor- responds with the preemptible instance with the lowest run time. Table 4 shows a different test execution under the same conditions and constraints. Again, the selected instance has to be the one with the lowest partial-hour period. In Table 4 this corresponds to the instance marked again with ( 1 ): CP1, as its remainder is 1 min. In this case this is not the preemptible instance with the lowest run time (being it CP2). Scheduling using different Virtual Machine sizes For the second batch of tests we requested instances using different sizes, always following the sizes in Table 2. Table 5 depicts the testbed status when a request for a large VM caused the termination of the instances marked with ( 1 ): AP2, AP3 and AP4. In this case, the scheduler decided that the termination of these three instances caused a smaller impact on the provider, as the sum of their 1 h remainders (55) was lower than any of the other possibilities (58 for BP1, 57 for CP1, 112 for CP2 and CP3). Table 6 shows a different test execution under the same conditions and constraints. In this case, the preemptible instance termination was triggered by a new VM request of size medium and the selected instance was the one marked with ( 1 ): BP3, as host-B will have enough free space just by terminating one instance. Performance evaluation As we have already said in Section 3, we have focused on designing an algorithm that does not introduce a significant latency in the system. This latency will introduce a larger delay when delivering the requested resources to the end users, something that is not desirable by any resource provider [4]. In order to evaluate the performance of our proposed algorithm we have done a comparison with the default, unmodified OpenStack Filter Scheduler. Moreover, for the sake of comparison, we have implemented a scheduler based on a retry loop as well. This scheduler performs a normal scheduling loop, and if there is a scheduling failure for a normal instance, it will perform a second pass taking into account the existing preemptible instances. The preemptible instance selection and termination mechanisms remain the same. We have scheduled 130 Virtual Machines of the same size on our test infrastructure and we have recorded the timings for the scheduling function, thus calculating the means and standard deviation for each of the following scenarios: • Using the original, unmodified OpenStack Filter scheduler with an empty infrastructure. • Using the preemptible instances Filter Scheduler and the retry scheduler: -Requesting normal instances with an empty infrastructure. -Requesting preemptible instances with an empty infrastructure. -Requesting normal instances with a saturated infrastructure, thus implying the termination of a preemptible instance each time a request is performed. We have then collected the scheduling calls timings and we have calculated the means and deviations for each scenario, as shown in Figure 2. Numbers in these scenarios are quite low, since the infrastructure is a small testbed, but these numbers are expected to become larger as the infrastructure grows in size. As it can be seen in the aforementioned Figure 2, our solution introduces a delay in the scheduling calls, as we need to calculate additional host states (we hold two different states for each node) and we need to select a preemptible instance for termination (in case it is needed). 
In the case of the retry scheduler, this delay does not exists and numbers are similar to the original scheduler. However, when it is needed to trigger the termination of a preemptible instance, having a retry mechanism (thus executing the same scheduling call two times) introduces a significantly larger penalty when compared to our proposed solution. We consider that the latency that we are introducing is within an acceptable range, therefore not impacting significantly the scheduler performance. Exploitation and integration in existing infrastructures The functionality introduced by the preemptible instances model that we have described in this work can be exploited not only within a cloud resource provider, but it can also be leveraged on more complex hybrid infrastructures. High Performance Computing Integration One can find in the literature several exercises of integration of hybrid infrastructures, integrating cloud resources, commercial or private, with High Performance Computing (HPC) resources. Those efforts focus on outbursting resources from the cloud, when the HPC system does not provide enough resources to solve a particular problem [41]. On-demand provisioning using cloud resources when the batch system of the HPC is full is certainly a viable option to expand the capabilities of a HPC center for serial batch processing. We focus however in the complementary approach, this is, using HPC resources to provide cloud resources capability, so as to complement existing distributed infrastructures. Obviously HPC systems are oriented to batch processing of highly coupled (parallel) jobs. The question here is optimizing resource utilization when the HPC batch system has empty slots. If we backfill the empty slots of a HPC system with cloud jobs, and a new regular batch job arrives from the HPC users, the cloud jobs occupying the slots needed by the newly arrived batch job should be terminated immediately, so as to not disturb regular work. Therefore such cloud jobs should be submitted as Spot Instances Enabling HPC systems to process other jobs during periods in which the load of the HPC mainframe is low, appears as an attractive possibility from the point of view of resource optimization. However the practical implementation of such idea would need to be compatible with both, the HPC usage model, and the cloud usage model. In HPC systems users login via ssh to a frontend. At the frontend the user has the tools to submit jobs. The scheduling of HPC jobs is done using a regular batch systems software (such as SLURM, SGE, etc...). HPC systems are typically running MPI parallel jobs as well using specialized hardware interconnects such as Infiniband. Let us imagine a situation in which the load of the HPC system is low. One can instruct the scheduler of the batch system to allow cloud jobs to HPC system occupying those slots not allocated by the regular batch allocation. In order to be as less disrupting as possible the best option is that the cloud jobs arrive as preemptible instances as described through this paper. When a batch job arrives to the HPC system, this job should be immediately scheduled and executed. Therefore the scheduler should be able to perform the following steps: • Allocate resources for the job that just arrived to the batch queue system • Identify the cloud jobs that are occupying those resources, and stop them. • Dispatch the batch job. 
In the case of parallel jobs the scheduling decision may depend on many factors like the topology of the network requested, or the affinity of the processes at the core/CPU level. In any case parallel jobs using heavily the low latency interconnect should not share nodes with any other job. High Throughput Computing Integration Existing High Throughput Computing Infrastructures, like the service offered by EGI 12 , could benefit from a cloud providers offering preemptible instances. It has been shown that cloud resources and IaaS offerings can be used to run HTC tasks [42] in a pull mode, where cloud instances are started in a way that they are able to pull computing tasks from a central location (for example using a distributed batch system like HTCondor). However, sites are reluctant to offer large amounts of resources to be used in this mode due to the lack of a fixed duration for cloud instances. In this context, federated cloud e-Infrastrucutres like the EGI Federated Cloud [43], could benefit from resource providers offering preemptible instances. Users could populate idle resources with preemptible instances pulling their HTC tasks, whereas interactive and normal IaaS users will not be impacted negatively, as they will get the requests satisfied. In this way, large amounts of cloud computing power could be offered to the European research community. Conclusions In this work we have proposed a preemptible instance scheduling design that does not modify substantially the existing scheduling algorithms, but rather enhances them. The modular rank and cost mechanisms allows the definition and implementation of any resource provider defined policy by means of additional pluggable rankers. Our proposal and implementation enables all kind of service providers -whose infrastructure is managed by open source middleware such as OpenStack-to offer a new access model based on preemptible instances, with a functionality similar to the one offered by the major commercial providers. We have checked for the algorithm correctness when selecting the preemptible instances for termination. The results yield that the algorithm behaves as expected. Moreover we have compared the scheduling performance with regards equivalent default scheduler, obtaining similar results, thus ensuring that the scheduler performance is not significantly impacted. This implementation allows to apply more complex policies on top of the preemptible instances, like instance termination based on price fluctuations (that is, implementing a preemptible instance stock market), 12 https://www.egi.eu/services/ high-throughput-compute/ preemptible instance migration so as to consolidate them or proactive instance termination to maximize the provider's revenues by not delivering computing power at no cost to the users.
5,471
1812.10668
2906853528
Abstract Maximizing resource utilization through efficient resource provisioning is a key goal for any cloud provider: commercial actors can maximize their revenues, whereas scientific and non-commercial providers can maximize their infrastructure utilization. Traditionally, batch systems have allowed data centers to fill their resources as much as possible by using backfilling and similar techniques. However, in an IaaS cloud, where virtual machines are supposed to live indefinitely, or at least as long as the user is able to pay for them, these policies are not easily implementable. In this work we present a new scheduling algorithm for IaaS providers that supports preemptible instances, which can be stopped by higher-priority requests, without introducing large modifications in current cloud schedulers. This scheduler enables the implementation of new cloud usage and payment models that allow a more efficient use of the resources and open potential new revenue sources for commercial providers. We also study the correctness and the performance overhead of the proposed scheduler against existing solutions.
The authors of @cite_25 have proposed a capacity planning method combined with an admission service for IaaS cloud providers offering different service classes. This method allows providers to tackle the challenge of estimating the minimum capacity required to deliver an agreed Service Level Objective (SLO) across all the defined service classes. That work leans on their previous work @cite_0 @cite_29 , where they proposed a way to reclaim unused cloud resources in order to offer a new service class. This class, in contrast with the preemptible instances described here, still offers an SLO to the users, with that work focusing on reducing the chances that the SLO is violated due to an instance reclamation caused by a capacity shortage.
{ "abstract": [ "The elasticity promised by cloud computing does not come for free. Providers need to reserve resources to allow users to scale on demand, and cope with workload variations, which results in low utilization. The current response to this low utilization is to re-sell unused resources with no Service Level Objectives (SLOs) for availability. In this paper, we show how to make some of these reclaimable resources more valuable by providing strong, long-term availability SLOs for them. These SLOs are based on forecasts of how many resources will remain unused during multi-month periods, so users can do capacity planning for their long-running services. By using confidence levels for the predictions, we give service providers control over the risk of violating the availability SLOs, and allow them trade increased risk for more resources to make available. We evaluated our approach using 45 months of workload data from 6 production clusters at Google, and show that 6--17 of the resources can be re-offered with a long-term availability of 98.9 or better. A conservative analysis shows that doing so may increase the profitability of selling reclaimed resources by 22--60 .", "There is a growing adoption of cloud computing services, attracting users with different requirements and budgets to run their applications in cloud infrastructures. In order to match users' needs, cloud providers can offer multiple service classes with different pricing and Service Level Objective (SLO) guarantees. Admission control mechanisms can help providers to meet target SLOs by limiting the demand at peak periods. This paper proposes a prediction-based admission control model for IaaS clouds with multiple service classes, aiming to maximize request admission rates while fulfilling availability SLOs defined for each class. We evaluate our approach with trace-driven simulations fed with data from production systems. Our results show that admission control can reduce SLO violations significantly, specially in underprovisioned scenarios. Moreover, our predictive heuristics are less sensitive to different capacity planning and SLO decisions, as they fulfill availability SLOs for more than 91 of requests even in the worst case scenario, for which only 56 of SLOs are fulfilled by a simpler greedy heuristic and as little as 0.2 when admission control is not used.", "Abstract Infrastructure as a Service (IaaS) cloud providers typically offer multiple service classes to satisfy users with different requirements and budgets. Cloud providers are faced with the challenge of estimating the minimum resource capacity required to meet Service Level Objectives (SLOs) defined for all service classes. This paper proposes a capacity planning method that is combined with an admission control mechanism to address this challenge. The capacity planning method uses analytical models to estimate the output of a quota-based admission control mechanism and find the minimum capacity required to meet availability SLOs and admission rate targets for all classes. An evaluation using trace-driven simulations shows that our method estimates the best cloud capacity with a mean relative error of 2.5 with respect to the simulation, compared to a 36 relative error achieved by a single-class baseline method that does not consider admission control mechanisms. 
Moreover, our method exhibited a high SLO fulfillment for both availability and admission rates, and obtained mean CPU utilization over 91 , while the single-class baseline method had values not greater than 78 ." ], "cite_N": [ "@cite_0", "@cite_29", "@cite_25" ], "mid": [ "2083647394", "2282269972", "2738547443" ] }
An efficient cloud scheduler design supporting preemptible instances
This is the author's accepted version of the following article: Álvaro López García, Enol Fernández del Castillo, Isabel Campos Plasencia, "An efficient cloud scheduler design supporting preemptible instances", accepted in Future Generation Computer Systems, 2019, which is published in its final form at https://doi.org/10.1016/j.future.2018.12.057. This preprint article may be used for non-commercial purposes under a CC BY-NC-SA 4.0 license.

Infrastructure as a Service (IaaS) Clouds make it possible to provide computing capacity as a utility to the users, following a pay-per-use model. This allows the deployment of complex execution environments without an upfront infrastructure commitment, fostering the adoption of the cloud by users that could not afford to operate an on-premises infrastructure. In this regard, Clouds are not only present in the industrial ICT ecosystem, but are being more and more adopted by other stakeholders such as public administrations or research institutions. Indeed, clouds are nowadays common in the scientific computing field [1,2,3,4], due to the fact that they are able to deliver resources that can be configured with the complete software stack needed for an application [5]. Moreover, they also allow the execution of non-transient tasks, making it possible to run virtual laboratories, databases, etc. that can be tightly coupled with the execution environments. This flexibility poses a great advantage over traditional computational models -such as batch systems or even Grid computing- where a fixed operating system is normally imposed and any complementary tools (such as databases) need to be self-managed outside the infrastructure. This fact is pushing scientific datacenters outside their traditional boundaries, evolving into a mixture of services that deliver more added value to their users, with the Cloud as a prominent actor. Maximizing resource utilization by performing an efficient resource provisioning is a fundamental aspect for any resource provider, especially for scientific providers. Users accessing these computing resources do not usually pay -or at least they are not charged directly- for their consumption; resources are normally paid via other indirect methods (like access grants), with users tending to assume that resources are free. Scientific computing facilities tend to work in a fully saturated manner, aiming at the maximum possible resource utilization level. In this context it is common that compute servers spawned in a cloud infrastructure are not terminated at the end of their lifetime, resulting in idle resources, a state that is not desirable as long as there is processing that needs to be done [4].
Quotas impose hard limits that leading to dedicated resources for a group, even if the group is not using the resources. Besides, cloud providers also need to provide their users with on-demand access to the resources, one of the most compelling cloud characteristics [9]. In order to provide such access, an overprovisioning of resources is expected [10] in order to fulfil user request, leading to an infrastructure where utilization is not maximized, as there should be always enough resources available for a potential request. Taking into account that some processing workloads executed on the cloud do not really require on-demand access (but rather they are executed for long periods of time), a compromise between these two aspects (i.e. maximizing utilization and providing enough ondemand access to the users) can be provided by using idle resources to execute these tasks that do not require truly on-demand access [10]. This approach indeed is common in scientific computing, where batch systems maximize the resource utilization through backfilling techniques, where opportunistic access is provided to these kind of tasks. Unlike in batch processing environments, virtual machines (VMs) spawned in a Cloud do not have fixed duration in time and are supposed to live forever -or until the user decides to stop them. Commercial cloud providers provide specific VM types (like the Amazon EC2 Spot Instances 1 or the Google Compute Engine Preemptible Virtual Machines 2 ) that can be provisioned at a fraction of a normal VM price, with the caveat that they can terminated whenever the provider decides to do so. This kind of VMs can be used to backfill idle resources, thus allowing to maximize the utilization and providing on-demand access, since normal VMs will obtain resources by evacuating Spot or Preemptible instances. In this paper we propose an efficient scheduling algorithm that combines the scheduling of preemptible and non preemptible instances in a modular way. The proposed solution is flexible enough in order to allow different allocation, selection and termination policies, thus allowing resource providers to easily implement and enforce the strategy that is more suitable for their needs. In our work we extend the OpenStack Cloud middleware with a prototype implementation of the proposed scheduler, as a way to demonstrate and evaluate the feasibility of our solution. We moreover perform an evaluation of the performance of this solution, in comparison with the existing OpenStack scheduler. The remainder of the paper is structured as follows. In Section 2 we present the related work in this field. In Section 3 we propose a design for an efficient scheduling mechanism for preemptible instances. In Section 4 we present an implementation of our proposed algorithm, as well as an evaluation of its feasibility and performance with regards with a normal scheduler. Finally, in Section 6 we present this work's conclusions. Scheduling in the existing Cloud Management Frameworks Generally speaking, existing Cloud Management Frameworks (CMFs) do not implement full-fledged queuing mechanism as other computing models do (like the Grid or traditional batch systems). Clouds are normally more focused on the rapid scaling of the resources rather than in batch processing, where systems are governed by queuing systems [34]. The default scheduling strategies in the current CMFs are mostly based on the immediate allocation or resources following a fistcome, first-served basis. 
Cloud schedulers either provision the requested virtual machines immediately, or they are not provisioned at all (except in some CMFs that implement a FIFO queuing mechanism) [35]. However, some users require a queuing system -or more advanced features like advance reservations- for running virtual machines. In those cases, there are external services such as Haizea [36] for OpenNebula or Blazar 6 for OpenStack. Those systems lie between the CMF and the users, intercepting their requests and interacting with the cloud system on their behalf, implementing the required functionality. Besides simplistic scheduling policies like first-fit or random chance node selection [35], current CMFs implement a scheduling algorithm that is based on a rank selection of hosts, as we explain in what follows. OpenNebula 7 uses by default a match making scheduler, implementing the Rank Scheduling Policy [36]. This policy first performs a filtering of the existing hosts, excluding those that do not meet the request requirements. Afterwards, the scheduler evaluates some operator-defined rank expressions against the recorded information from each of the hosts so as to obtain an ordered list of nodes. Finally, the resources with a higher rank are selected to fulfil the request. OpenNebula implements a queue to hold the requests that cannot be satisfied immediately, but this queuing mechanism follows a FIFO logic, without further priority adjustment. OpenStack 8 implements a Filter Scheduler [37], based on two separate phases. The first phase consists of the filtering of hosts, which excludes the hosts that cannot satisfy the request. This filtering follows a modular design, so that it is possible to filter out nodes based on the user request (RAM, number of vCPUs), direct user input (such as instance affinity or anti-affinity) or operator-configured filtering. The second phase consists of the weighing of hosts, following the same modular approach. Once the nodes are filtered and weighed, the best candidate is selected from that ordered set. CloudStack 9 utilizes the term allocator to determine which host will be selected to place the new VM requested. The nodes that are used by the allocators are the ones that are able to satisfy the request. Eucalyptus 10 implements a greedy or round robin algorithm. The former strategy uses the first node that is identified as suitable for running the VM, exhausting a node before moving on to the next available node. The latter schedules each request in a cyclic manner, distributing the load evenly in the long term. All the presented scheduling algorithms share the view that the nodes are first filtered -so that only those that can run the request are considered- and then ordered or ranked according to some defined rules. Generally speaking, the scheduling algorithm can be expressed as the pseudo-code in Algorithm 1: each candidate host h_i is checked with a Filter(h_i, req) predicate; the hosts that pass are given a rank Ω_i, accumulated as Ω_i ← Ω_i + m_j * r_j(h_i, req) over every rank function r_j and its multiplier m_j; and the request is finally placed on the best-ranked host.
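To make this generic loop concrete, the following is a minimal Python sketch of a filter-and-rank scheduler. The names are our own illustration of Algorithm 1 -filters are boolean predicates f(host, request) and ranks are (multiplier, rank function) pairs- and do not correspond to the code of any particular CMF.

def schedule(request, hosts, filters, ranks):
    # Filtering phase: keep only the hosts that pass every filter predicate.
    candidates = [h for h in hosts if all(f(h, request) for f in filters)]
    if not candidates:
        raise RuntimeError("no host can satisfy the request")

    # Ranking phase: accumulate the weighted rank functions for each candidate,
    # mirroring Ω_i ← Ω_i + m_j * r_j(h_i, req) in Algorithm 1, and pick the best.
    def omega(host):
        return sum(m * r(host, request) for m, r in ranks)

    return max(candidates, key=omega)

In the real frameworks both the filters and the rank (or weigher) functions are pluggable modules configured by the operator, which is the property our preemptible-aware design builds upon.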
Preemptible Instances Design

The initial assumption for a preemptible-aware scheduler is that the scheduler should be able to take into account two different instance types -preemptible and normal- according to the following basic rules: • If it is a normal instance, the scheduler should check whether there are enough free resources to satisfy it and, if there are not, whether terminating one or several preemptible instances would free enough resources for it. -If this is true, those instances should be terminated -according to some well defined rules- and the new VM should be scheduled into that freed node. -If this is not possible, then the request should continue with the failure process defined in the scheduling algorithm -it can be an error, or it can be retried after some elapsed time. • If it is a preemptible instance, the scheduler should try to schedule it without other considerations. It should be noted that the preemptible instance selection and termination does not only depend on purely theoretical aspects, as this selection will have an influence on the resource provider revenues and the service level agreements signed with the users. Taking this into account, it is obvious that modularity and flexibility for the preemptible instance selection and termination is a key aspect here. For instance, a selection and termination algorithm that is only based on minimizing the number of instances terminated in order to free enough resources may not work for a provider that wishes to terminate the instances that generate less revenue, even if that requires terminating a larger number of instances. Therefore, the aim of our work is not only to design a scheduling algorithm, but also to design it as a modular system, so that more complex models can be created on top of it once the initial preemptible mechanism is in place. The most evident design approach is a retry mechanism based on two selection cycles within a scheduling loop. The scheduler would detect a scheduling failure and then perform a second scheduling cycle after preemptible instances have been evacuated -either by the scheduler itself or by an external service. However, this two-cycle scheduling mechanism would introduce a larger scheduling latency and load in the system. This latency is perceived negatively by the users [38], so the challenge here is how to perform this selection in an efficient way, ensuring that the selected preemptible instances are the least costly for the provider.

Preemptible-aware scheduler

Our proposed algorithm (depicted in Figure 1) addresses the scheduling of preemptible instances within one scheduling loop, without introducing a retry cycle, but rather performing the scheduling taking into account different host states depending on the instance that is to be scheduled. This design takes into account the fact that all the algorithms described in Section 2.1 are based on two complementary phases -filtering and ranking- but adds a final phase, where the preemptible instances that need to be terminated are selected. The algorithm pseudocode is shown in Algorithm 2 and is further described in what follows. As we already explained, the filtering phase eliminates the nodes that are not able to host the new request due to their current state -for instance, because of a lack of resources or a VM anti-affinity- whereas the ranking phase is in charge of assigning a rank or weight to the filtered hosts so that the best candidate is selected. In our preemptible-aware scheduler, preemptible instances are taken into account in a specific way during the filtering phase. In order to do so we propose to maintain two different states for each physical host: h_f, the state that takes into account all the running VMs inside the host, that is, both the preemptible and the non-preemptible instances; and h_n, the state that does not take into account the preemptible instances inside the host, that is, the preemptible instances running on a particular physical host are not accounted for in terms of consumed resources.
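As an illustration only, the two views could be kept side by side in a single host-state structure; the field and method names below are hypothetical and do not correspond to actual OpenStack data structures.

from dataclasses import dataclass

@dataclass
class HostState:
    # Static capacity of the physical host.
    total_ram: int            # MB
    total_vcpus: int
    # Resources consumed by normal (non-preemptible) instances.
    normal_ram: int = 0
    normal_vcpus: int = 0
    # Resources consumed by preemptible instances.
    preempt_ram: int = 0
    preempt_vcpus: int = 0

    def free(self, full_state: bool):
        """Free resources under h_f (full_state=True) or h_n (full_state=False)."""
        used_ram = self.normal_ram + (self.preempt_ram if full_state else 0)
        used_vcpus = self.normal_vcpus + (self.preempt_vcpus if full_state else 0)
        return self.total_ram - used_ram, self.total_vcpus - used_vcpus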
Whenever a new request arrives, the scheduler uses the h_f or h_n host state for the filtering phase, depending on the type of the request: • When a normal request arrives, the scheduler uses h_n. • When a preemptible request arrives, the scheduler uses h_f. This way the scheduler ensures that a normal instance can run regardless of any preemptible instance occupying its place, as the h_n state does not account for the resources consumed by the preemptible instances running on the host. After this stage, the resulting list of hosts will contain all the hosts that could host the new request, either because one or several preemptible instances can be evacuated or because there are enough free resources. Once the hosts are filtered, the ranking phase starts. However, in order to perform a correct ranking, the full state of the hosts, that is, h_f, needs to be used (in Algorithm 2 the rank Ω_i is accumulated by evaluating each rank function r_j on the full state of the host). This is needed because the different rank functions require the information about the preemptible instances so as to select the best node. The list of filtered hosts may contain hosts that are able to accept the request because they have free resources, and nodes that would imply the termination of one or several instances. In order to choose the best host for scheduling a new instance, new ranking functions need to be implemented so as to prioritise the least costly host. The simplest ranking function, based on the number of preemptible instances per host, is described in Algorithm 3. This function assigns a negative value if the free resources are not enough to accommodate the request, detecting an overcommit produced by the fact that one or several preemptible instances need to be terminated. However, this basic function only establishes a naive ranking based on whether instances need to be terminated or not. In the case that several instances need to be terminated, this function does not establish any rank between them, so more appropriate rank functions need to be created, depending on the business model implemented by the provider. Our design takes this fact into account, allowing for modularity of the cost functions that can be applied to the ranking. For instance, commercial providers tend to charge by complete periods of 1 h, so partial hours are not accounted for. A ranking function based on this business model can be expressed as Algorithm 4, ranking hosts according to the preemptible instances running inside them and the time needed until the next complete period.
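Building on the hypothetical HostState sketch above, the preemptible-aware loop could look as follows: normal requests are filtered against h_n, preemptible requests against h_f, and every surviving host is then ranked on its full state, with hosts that would require terminating preemptible instances getting a negative rank, in the spirit of Algorithm 3. This is an illustration under our own naming assumptions, not the actual implementation.

def fits(host, request, full_state):
    # True if the host can hold the request under the chosen state (h_f or h_n).
    free_ram, free_vcpus = host.free(full_state)
    return free_ram >= request.ram and free_vcpus >= request.vcpus

def preemptible_aware_schedule(request, hosts):
    # Filtering phase: normal requests ignore preemptible consumption (h_n),
    # preemptible requests see the full host state (h_f).
    candidates = [h for h in hosts if fits(h, request, full_state=request.preemptible)]
    if not candidates:
        return None  # fall back to the regular failure handling of the scheduler

    def rank(host):
        # Ranking phase always works on the full state h_f: hosts with enough
        # truly free resources are preferred over hosts that would need to
        # evacuate preemptible instances (naive, Algorithm 3 style ranking).
        if fits(host, request, full_state=True):
            return 1.0
        return -1.0

    return max(candidates, key=rank)

A provider-specific cost function (for instance, one based on the 1 h charging periods of Algorithm 4) would simply replace the naive rank above with a finer-grained one.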
Once the ranking phase is finished, the scheduler will have built an ordered list of hosts containing the best candidates for the new request. Once the best host has been selected, it is still necessary to select which individual preemptible instances need to be evacuated from that host, if any. Our design adds a third phase to terminate the preemptible instances when needed. This last phase performs an additional ranking and selection of the candidate preemptible instances inside the selected host, so as to pick the least costly option for the provider. This selection leverages a similar ranking process, performed on the preemptible instances, considering all the combinations of preemptible instances and their cost for the provider, as shown in Algorithm 5, which ends by terminating the selected instances.

Evaluation

In the first part of this section (4.2) we describe an implementation -done for the OpenStack Compute CMF- used to evaluate our proposed algorithm. We decided to implement it on top of the OpenStack Compute software due to its modular design, which allowed us to easily plug in our modified modules without requiring significant modifications to the code core. Afterwards we perform two different evaluations. On the one hand we assess the algorithm correctness, ensuring that the most desirable instances are selected according to the configured weighers (Section 4.4). On the other hand we examine the performance of the proposed algorithm when compared with the default scheduling mechanism (Section 4.5).

OpenStack Compute Filter Scheduler

The OpenStack Compute scheduler is called Filter Scheduler and, as already described in Section 2, it is a rank scheduler implementing two different phases: filtering and weighing. Filtering The first step is the filtering phase. The scheduler applies a concatenation of filter functions to the initial set of available hosts, based on the host properties and state -e.g. free RAM or free CPU number- user input -e.g. affinity or anti-affinity with other instances- and resource provider defined configuration. When the filtering process has concluded, all the hosts in the final set are able to satisfy the user request. Weighing Once the filtering phase returns a list of suitable hosts, the weighing stage starts so that the best host -according to the defined configuration- is selected. The scheduler applies to all hosts the same set of weigher functions w_i(h), taking into account each host state h. Those weigher functions return a value considering the characteristics of the host received as input parameter; therefore, the total weight Ω for a node h is calculated as Ω = Σ_{i=1}^{n} m_i · N(w_i(h)), where m_i is the multiplier for a weigher function and N(w_i(h)) is the weight normalized to [0, 1] via a rescaling N(w_i(h)) = (w_i(h) − min W) / (max W − min W), where w_i(h) is the weight function and min W, max W are the minimum and maximum values that the weigher has assigned over the set of weighed hosts. This way, the final weight before applying the multiplication factor is always in the range [0, 1]. After these two phases have ended, the scheduler has a set of hosts ordered according to the weights assigned to them, and it assigns the request to the host with the maximum weight. If several nodes have the same weight, the final host is randomly selected from that set.

Implementation Evaluation

We have extended the Filter Scheduler algorithm with the functionality described in Algorithm 6. We have also implemented the ranking functions described in Algorithm 3 and Algorithm 4 as weighers, using the OpenStack terminology. Moreover, the Filter Scheduler has also been modified so as to introduce the additional selection and termination phase (Algorithm 5). This phase has been implemented following the same modular approach as the OpenStack weighing modules, allowing to define and implement additional cost modules that determine which instances are to be selected for termination.
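To illustrate the select-and-terminate phase, the sketch below enumerates the combinations of preemptible instances running on the chosen host and returns the cheapest set whose termination frees enough resources for the request, mirroring the behaviour described for Algorithm 5. As cost we use the partial-hour remainder of the 1 h billing model (minutes elapsed past the last complete charged hour): instances that have run 120, 119 and 61 minutes have remainders 0, 59 and 1, so the 120-minute one is the cheapest to terminate. All names are hypothetical and reuse the HostState view sketched earlier; this is not the actual OpenStack module.

from itertools import combinations

def partial_hour_cost(instance):
    # Unbilled minutes accumulated since the last complete 1 h charging period.
    return instance.runtime_minutes % 60

def select_and_terminate(request, host, instances):
    # How much must still be freed on top of the truly free resources (h_f view).
    free_ram, free_vcpus = host.free(full_state=True)
    needed_ram = max(0, request.ram - free_ram)
    needed_vcpus = max(0, request.vcpus - free_vcpus)
    if needed_ram == 0 and needed_vcpus == 0:
        return []  # nothing needs to be terminated

    preemptible = [i for i in instances if i.preemptible]
    best, best_cost = None, None
    # Brute-force enumeration of all combinations, as in the description of
    # Algorithm 5; a production module would likely prune this search.
    for k in range(1, len(preemptible) + 1):
        for combo in combinations(preemptible, k):
            frees_enough = (sum(i.ram for i in combo) >= needed_ram and
                            sum(i.vcpus for i in combo) >= needed_vcpus)
            if frees_enough:
                cost = sum(partial_hour_cost(i) for i in combo)
                if best_cost is None or cost < best_cost:
                    best, best_cost = list(combo), cost
    # 'best' is the cheapest combination; None should not occur if the filtering
    # phase admitted this host. The caller is responsible for terminating them.
    return best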
As for the cost functions, we have implemented a module following Algorithm 4. This cost function assumes that customers are charged by periods of 1 h, therefore it prioritizes the termination of Spot Instances with the lowest partial-hour consumption (i.e. if we consider instances that have run for 120 min, 119 min and 61 min, the 120 min instance will be terminated, as it has just completed a charged hour and its partial-hour consumption is 0 min). This development has been done on the OpenStack Newton version 11 , and was deployed on the infrastructure that we describe in Section 4.3.

Configurations

In order to evaluate our algorithm proposal we have set up a dedicated test infrastructure comprising a set of 26 identical IBM HS21 blade servers, with the characteristics described in Table 1. All the nodes had an identical base installation, based on Ubuntu Server 16.04 LTS running the Linux 3.8.0 kernel, where we have deployed OpenStack Compute as the Cloud Management Framework. The system architecture is as follows: • An Image Catalog running the OpenStack Image Service (Glance), serving images from its local disk. • 24 Compute Nodes running OpenStack Compute, hosting the spawned instances. The network setup of the testbed consists of two 10 Gbit Ethernet switches, interconnected with a 10 Gbit Ethernet link. All the hosts are evenly connected to these switches using a 1 Gbit Ethernet connection. We have considered the VM sizes described in Table 2, based on the default set of sizes existing in a default OpenStack installation.

Algorithm Evaluation

The purpose of this evaluation is to ensure that the proposed algorithm works as expected, so that: • The scheduler is able to deliver the resources for a normal request, by terminating one or several preemptible instances when there are not enough free idle resources. • The scheduler selects the best preemptible instance for termination, according to the policies configured by means of the scheduler weighers. Scheduling using same Virtual Machine sizes For the first batch of tests, we have considered same-size instances, to evaluate whether the proposed algorithm chooses the best physical host and selects the best preemptible instance for termination. We generated requests for both preemptible and normal instances -chosen randomly-, of random duration between 10 min and 300 min following an exponential distribution [39], until the first scheduling failure for a normal instance was detected. The compute nodes used have 16 GB of RAM and eight CPUs, as already described. The VM size requested was the medium one, according to Table 2, therefore each compute node could host up to four VMs. We executed these requests and monitored the infrastructure until the first scheduling failure for a normal instance took place, and thus the preemptible instance termination mechanism was triggered. At that moment we took a snapshot of the node statuses, as shown in Table 3 and Table 4. These tables depict the status of each of the physical hosts, as well as the running time of each of the instances that were running at that point. The shaded cells represent the preemptible instance that was terminated to free the resources for the incoming non-preemptible request. Considering that the preemptible instance selection was done according to Algorithm 5 using the cost function in Algorithm 4, the chosen instance has to be the one with the lowest partial-hour period. In Table 3 this is the instance marked with ( 1 ): BP1.
By chance, it corresponds to the preemptible instance with the lowest run time. Table 4 shows a different test execution under the same conditions and constraints. Again, the selected instance has to be the one with the lowest partial-hour period. In Table 4 this corresponds to the instance marked again with ( 1 ): CP1, as its remainder is 1 min. In this case this is not the preemptible instance with the lowest run time (that would be CP2). Scheduling using different Virtual Machine sizes For the second batch of tests we requested instances of different sizes, always following the sizes in Table 2. Table 5 depicts the testbed status when a request for a large VM caused the termination of the instances marked with ( 1 ): AP2, AP3 and AP4. In this case, the scheduler decided that the termination of these three instances caused the smallest impact on the provider, as the sum of their 1 h remainders (55) was lower than any of the other possibilities (58 for BP1, 57 for CP1, 112 for CP2 and CP3). Table 6 shows a different test execution under the same conditions and constraints. In this case, the preemptible instance termination was triggered by a new VM request of size medium and the selected instance was the one marked with ( 1 ): BP3, as host-B has enough free space just by terminating one instance.

Performance evaluation

As we have already said in Section 3, we have focused on designing an algorithm that does not introduce a significant latency in the system. Such a latency would introduce a larger delay when delivering the requested resources to the end users, something that is not desirable for any resource provider [4]. In order to evaluate the performance of our proposed algorithm we have compared it with the default, unmodified OpenStack Filter Scheduler. Moreover, for the sake of comparison, we have also implemented a scheduler based on a retry loop. This scheduler performs a normal scheduling loop and, if there is a scheduling failure for a normal instance, it performs a second pass taking into account the existing preemptible instances. The preemptible instance selection and termination mechanisms remain the same. We have scheduled 130 Virtual Machines of the same size on our test infrastructure and we have recorded the timings of the scheduling function, calculating the means and standard deviations for each of the following scenarios: • Using the original, unmodified OpenStack Filter Scheduler with an empty infrastructure. • Using the preemptible instances Filter Scheduler and the retry scheduler: -Requesting normal instances with an empty infrastructure. -Requesting preemptible instances with an empty infrastructure. -Requesting normal instances with a saturated infrastructure, thus implying the termination of a preemptible instance each time a request is performed. We have then collected the scheduling call timings and calculated the means and deviations for each scenario, as shown in Figure 2. Numbers in these scenarios are quite low, since the infrastructure is a small testbed, but they are expected to become larger as the infrastructure grows in size. As can be seen in Figure 2, our solution introduces a delay in the scheduling calls, as we need to calculate additional host states (we hold two different states for each node) and we need to select a preemptible instance for termination (in case it is needed).
In the case of the retry scheduler, this delay does not exist and the numbers are similar to the original scheduler. However, when the termination of a preemptible instance needs to be triggered, having a retry mechanism (thus executing the same scheduling call twice) introduces a significantly larger penalty when compared to our proposed solution. We consider that the latency that we are introducing is within an acceptable range, thus not significantly impacting the scheduler performance.

Exploitation and integration in existing infrastructures

The functionality introduced by the preemptible instances model described in this work can be exploited not only within a cloud resource provider, but it can also be leveraged in more complex hybrid infrastructures.

High Performance Computing Integration

One can find in the literature several exercises of integration of hybrid infrastructures, combining cloud resources, commercial or private, with High Performance Computing (HPC) resources. Those efforts focus on bursting out to the cloud when the HPC system does not provide enough resources to solve a particular problem [41]. On-demand provisioning using cloud resources when the batch system of the HPC is full is certainly a viable option to expand the capabilities of an HPC center for serial batch processing. We focus, however, on the complementary approach, that is, using HPC resources to provide cloud capacity, so as to complement existing distributed infrastructures. Obviously HPC systems are oriented to batch processing of highly coupled (parallel) jobs. The question here is optimizing resource utilization when the HPC batch system has empty slots. If we backfill the empty slots of an HPC system with cloud jobs, and a new regular batch job arrives from the HPC users, the cloud jobs occupying the slots needed by the newly arrived batch job should be terminated immediately, so as not to disturb the regular work. Therefore such cloud jobs should be submitted as Spot Instances. Enabling HPC systems to process other jobs during periods in which the load of the HPC machine is low appears as an attractive possibility from the point of view of resource optimization. However, the practical implementation of such an idea would need to be compatible with both the HPC usage model and the cloud usage model. In HPC systems users log in via ssh to a frontend, where they have the tools to submit jobs. The scheduling of HPC jobs is done using regular batch system software (such as SLURM, SGE, etc.). HPC systems also typically run MPI parallel jobs using specialized hardware interconnects such as InfiniBand. Let us imagine a situation in which the load of the HPC system is low. One can instruct the scheduler of the batch system to allow cloud jobs into the HPC system, occupying those slots not allocated by the regular batch allocation. In order to be as little disruptive as possible, the best option is that the cloud jobs arrive as preemptible instances, as described throughout this paper. When a batch job arrives to the HPC system, this job should be immediately scheduled and executed. Therefore the scheduler should be able to perform the following steps (a minimal sketch of this flow is given after the list): • Allocate resources for the job that just arrived to the batch queue system. • Identify the cloud jobs that are occupying those resources, and stop them. • Dispatch the batch job.
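As an illustration of these steps, the following is a minimal, hypothetical sketch of the preempt-then-dispatch flow. It assumes an HPC resource manager object exposing allocate() and dispatch() and a cloud manager exposing list_instances() and terminate(); these names are our own and do not correspond to any real SLURM, SGE or OpenStack API.

def dispatch_batch_job(job, hpc, cloud):
    # 1. Allocate the nodes required by the newly arrived batch job.
    nodes = hpc.allocate(job)
    # 2. Identify the cloud jobs (preemptible instances) occupying those nodes
    #    and stop them, so the regular batch work is not disturbed.
    for node in nodes:
        for instance in cloud.list_instances(node):
            if instance.preemptible:
                cloud.terminate(instance)
    # 3. Dispatch the regular batch job on the freed nodes.
    hpc.dispatch(job, nodes)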
In the case of parallel jobs the scheduling decision may depend on many factors, like the topology of the requested network or the affinity of the processes at the core/CPU level. In any case, parallel jobs that make heavy use of the low-latency interconnect should not share nodes with any other job.

High Throughput Computing Integration

Existing High Throughput Computing infrastructures, like the service offered by EGI (https://www.egi.eu/services/high-throughput-compute/), could benefit from cloud providers offering preemptible instances. It has been shown that cloud resources and IaaS offerings can be used to run HTC tasks [42] in a pull mode, where cloud instances are started in a way that they are able to pull computing tasks from a central location (for example using a distributed batch system like HTCondor). However, sites are reluctant to offer large amounts of resources to be used in this mode due to the lack of a fixed duration for cloud instances. In this context, federated cloud e-Infrastructures like the EGI Federated Cloud [43] could benefit from resource providers offering preemptible instances. Users could populate idle resources with preemptible instances pulling their HTC tasks, whereas interactive and normal IaaS users would not be impacted negatively, as they would still get their requests satisfied. In this way, large amounts of cloud computing power could be offered to the European research community.

Conclusions

In this work we have proposed a preemptible instance scheduling design that does not substantially modify the existing scheduling algorithms, but rather enhances them. The modular rank and cost mechanisms allow the definition and implementation of any resource-provider-defined policy by means of additional pluggable rankers. Our proposal and implementation enable all kinds of service providers -whose infrastructure is managed by open source middleware such as OpenStack- to offer a new access model based on preemptible instances, with a functionality similar to the one offered by the major commercial providers. We have checked the correctness of the algorithm when selecting the preemptible instances for termination. The results show that the algorithm behaves as expected. Moreover, we have compared the scheduling performance with regard to the equivalent default scheduler, obtaining similar results, thus ensuring that the scheduler performance is not significantly impacted. This implementation makes it possible to apply more complex policies on top of the preemptible instances, like instance termination based on price fluctuations (that is, implementing a preemptible instance stock market), preemptible instance migration so as to consolidate them, or proactive instance termination to maximize the provider's revenues by not delivering computing power at no cost to the users.
5,471
1812.10668
2906853528
Abstract

Maximizing resource utilization through efficient resource provisioning is a key factor for any cloud provider: commercial actors can maximize their revenues, whereas scientific and non-commercial providers can maximize their infrastructure utilization. Traditionally, batch systems have allowed data centers to fill their resources as much as possible by using backfilling and similar techniques. However, in an IaaS cloud, where virtual machines are supposed to live indefinitely, or at least as long as the user is able to pay for them, these policies are not easily implementable. In this work we present a new scheduling algorithm for IaaS providers that is able to support preemptible instances -which can be stopped by higher-priority requests- without introducing large modifications in the current cloud schedulers. This scheduler enables the implementation of new cloud usage and payment models that allow a more efficient usage of the resources and potential new revenue sources for commercial providers. We also study the correctness and the performance overhead of the proposed scheduler against existing solutions.
Generally speaking, existing Cloud Management Frameworks (CMFs) do not implement full-fledged queuing mechanisms as other computing models do (like the Grid or traditional batch systems). Clouds are normally more focused on the rapid scaling of resources than on batch processing, where systems are governed by queuing systems @cite_11 . The default scheduling strategies in current CMFs are mostly based on the immediate allocation of resources on a first-come, first-served basis. The cloud schedulers provision the resources when requested, or they are not provisioned at all (except in some CMFs that implement a FIFO queuing mechanism) @cite_33 .
{ "abstract": [ "The primary purpose of this book is to capture the state-of-the-art in Cloud Computing technologies and applications. The book will also aim to identify potential research directions and technologies that will facilitate creation a global market-place of cloud computing services supporting scientific, industrial, business, and consumer applications. We expect the book to serve as a reference for larger audience such as systems architects, practitioners, developers, new researchers and graduate level students. This area of research is relatively recent, and as such has no existing reference book that addresses it.This book will be a timely contribution to a field that is gaining considerable research interest, momentum, and is expected to be of increasing interest to commercial developers. The book is targeted for professional computer science developers and graduate students especially at Masters level. As Cloud Computing is recognized as one of the top five emerging technologies that will have a major impact on the quality of science and society over the next 20 years, its knowledge will help position our readers at the forefront of the field.", "Cloud computing has become another buzzword after Web 2.0. However, there are dozens of different definitions for cloud computing and there seems to be no consensus on what a cloud is. On the other hand, cloud computing is not a completely new concept; it has intricate connection to the relatively new but thirteen-year established grid computing paradigm, and other relevant technologies such as utility computing, cluster computing, and distributed systems in general. This paper strives to compare and contrast cloud computing with grid computing from various angles and give insights into the essential characteristics of both." ], "cite_N": [ "@cite_33", "@cite_11" ], "mid": [ "2096120508", "2154158105" ] }
An efficient cloud scheduler design supporting preemptible instances $
$ This is the author's accepted version of the following article: Álvaro López García, Enol Fernández del Castillo, Isabel Campos Plasencia, "An efficient cloud scheduler design supporting preemptible instances", accepted in Future Generation Computer Systems, 2019, which is published in its final form at https://doi.org/10.1016/j.future.2018.12.057. This preprint article may be used for non-commercial purposes under a CC BY-NC-SA 4.0 license.

Infrastructure as a Service (IaaS) Clouds make it possible to provide computing capacity as a utility to the users, following a pay-per-use model. This allows the deployment of complex execution environments without an upfront infrastructure commitment, fostering the adoption of the cloud by users that could not afford to operate an on-premises infrastructure. In this regard, Clouds are not only present in the industrial ICT ecosystem; they are also being increasingly adopted by other stakeholders such as public administrations or research institutions. Indeed, clouds are nowadays common in the scientific computing field [1,2,3,4], due to the fact that they are able to deliver resources that can be configured with the complete software needed for an application [5]. Moreover, they also allow the execution of non-transient tasks, making it possible to run virtual laboratories, databases, etc. that can be tightly coupled with the execution environments. This flexibility poses a great advantage over traditional computational models -such as batch systems or even Grid computing- where a fixed operating system is normally imposed and any complementary tools (such as databases) need to be self-managed outside the infrastructure. This fact is pushing scientific datacenters outside their traditional boundaries, evolving into a mixture of services that deliver more added value to their users, with the Cloud as a prominent actor.

Maximizing resource utilization by performing an efficient resource provisioning is a fundamental aspect for any resource provider, especially for scientific providers. Users accessing these computing resources do not usually pay -or at least they are not charged directly- for their consumption, and normally resources are paid for via other indirect methods (like access grants), with users tending to assume that resources are free. Scientific computing facilities tend to work in a fully saturated manner, aiming at the maximum possible resource utilization level. In this context it is common that compute servers spawned in a cloud infrastructure are not terminated at the end of their lifetime, resulting in idle resources, a state that is not desirable as long as there is processing that needs to be done [4]. In a commercial cloud this is not a problem, since users are being charged for their allocated resources, regardless of whether they are being used or not. Therefore users tend to take care of their virtual machines, terminating them whenever they are not needed anymore. Moreover, in the cases where users leave their resources running forever, the provider still obtains revenues for those resources. Cloud operators try to solve this problem by setting resource quotas that limit the amount of resources that a user or group is able to consume, by doing a static partitioning of the resources [8]. However, this kind of resource allocation automatically leads to an underutilization of the infrastructure, since the partitioning needs to be conservative enough so that other users could utilize the infrastructure.
Quotas impose hard limits that lead to dedicated resources for a group, even if the group is not using them. Besides, cloud providers also need to provide their users with on-demand access to the resources, one of the most compelling cloud characteristics [9]. In order to provide such access, an overprovisioning of resources is expected [10] so as to fulfil user requests, leading to an infrastructure where utilization is not maximized, as there should always be enough resources available for a potential request. Taking into account that some processing workloads executed on the cloud do not really require on-demand access (but rather are executed for long periods of time), a compromise between these two aspects (i.e. maximizing utilization and providing enough on-demand access to the users) can be achieved by using idle resources to execute those tasks that do not require truly on-demand access [10]. This approach is indeed common in scientific computing, where batch systems maximize resource utilization through backfilling techniques, providing opportunistic access to this kind of task. Unlike in batch processing environments, virtual machines (VMs) spawned in a Cloud do not have a fixed duration in time and are supposed to live forever -or until the user decides to stop them. Commercial cloud providers offer specific VM types (like the Amazon EC2 Spot Instances 1 or the Google Compute Engine Preemptible Virtual Machines 2 ) that can be provisioned at a fraction of the normal VM price, with the caveat that they can be terminated whenever the provider decides to do so. This kind of VM can be used to backfill idle resources, thus making it possible to maximize utilization while still providing on-demand access, since normal VMs will obtain resources by evacuating Spot or Preemptible instances.

In this paper we propose an efficient scheduling algorithm that combines the scheduling of preemptible and non-preemptible instances in a modular way. The proposed solution is flexible enough to allow different allocation, selection and termination policies, thus allowing resource providers to easily implement and enforce the strategy that is most suitable for their needs. In our work we extend the OpenStack Cloud middleware with a prototype implementation of the proposed scheduler, as a way to demonstrate and evaluate the feasibility of our solution. We moreover evaluate the performance of this solution in comparison with the existing OpenStack scheduler. The remainder of the paper is structured as follows. In Section 2 we present the related work in this field. In Section 3 we propose a design for an efficient scheduling mechanism for preemptible instances. In Section 4 we present an implementation of our proposed algorithm, as well as an evaluation of its feasibility and performance with regard to a normal scheduler. Finally, in Section 6 we present this work's conclusions.

Scheduling in the existing Cloud Management Frameworks

Generally speaking, existing Cloud Management Frameworks (CMFs) do not implement full-fledged queuing mechanisms as other computing models do (like the Grid or traditional batch systems). Clouds are normally more focused on the rapid scaling of resources than on batch processing, where systems are governed by queuing systems [34]. The default scheduling strategies in the current CMFs are mostly based on the immediate allocation of resources following a first-come, first-served basis.
The cloud schedulers provision them when requested, or they are not provisioned at all (except in some CMFs that implement a FIFO queuing mechanism) [35]. However, some users require a queuing system -or more advanced features like advance reservations- for running virtual machines. In those cases, there are external services such as Haizea [36] for OpenNebula or Blazar 6 for OpenStack. Those systems sit between the CMF and the users, intercepting their requests and interacting with the cloud system on their behalf, implementing the required functionality. Besides simplistic scheduling policies like first-fit or random chance node selection [35], current CMFs implement a scheduling algorithm that is based on a rank selection of hosts, as we explain in what follows.

OpenNebula 7 uses by default a matchmaking scheduler, implementing the Rank Scheduling Policy [36]. This policy first performs a filtering of the existing hosts, excluding those that do not meet the request requirements. Afterwards, the scheduler evaluates some operator-defined rank expressions against the recorded information from each of the hosts so as to obtain an ordered list of nodes. Finally, the resources with a higher rank are selected to fulfil the request. OpenNebula implements a queue to hold the requests that cannot be satisfied immediately, but this queuing mechanism follows a FIFO logic, without further priority adjustment.

OpenStack 8 implements a Filter Scheduler [37], based on two separate phases. The first phase consists of the filtering of hosts, which excludes the hosts that cannot satisfy the request. This filtering follows a modular design, so that it is possible to filter out nodes based on the user request (RAM, number of vCPUs), direct user input (such as instance affinity or anti-affinity) or operator-configured filtering. The second phase consists of the weighing of hosts, following the same modular approach. Once the nodes are filtered and weighed, the best candidate is selected from that ordered set.

CloudStack 9 uses the term allocator to determine which host will be selected to place the newly requested VM. The nodes that are used by the allocators are the ones that are able to satisfy the request.

Eucalyptus 10 implements a greedy or a round-robin algorithm. The former strategy uses the first node that is identified as suitable for running the VM; this algorithm exhausts a node before moving on to the next available one. The latter schedules each request in a cyclic manner, distributing the load evenly in the long term.

All the presented scheduling algorithms share the view that the nodes are first filtered out -so that only those that can run the request are considered- and then ordered or ranked according to some defined rules. Generally speaking, the scheduling algorithm can be expressed as the pseudo-code in Algorithm 1: for each host h_i that passes Filter(h_i, req), a weight Ω_i is initialized to 0 and incremented with m · r(h_i, req) for every rank function r and multiplier m, and the best ranked host is finally selected.
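Since the Algorithm 1 pseudo-code did not survive the text extraction cleanly, the following is a hedged Python rendering of the generic filter-then-rank loop just described; the names and signatures are illustrative, not taken from any particular CMF.

```python
def generic_schedule(request, hosts, filters, ranks):
    # Generic CMF scheduling loop: drop hosts that fail any filter, rank the
    # survivors with weighted rank functions, and return the best candidate.
    # `ranks` is a list of (rank_function, multiplier) pairs.
    candidates = []
    for host in hosts:
        if all(f(host, request) for f in filters):
            weight = sum(m * r(host, request) for r, m in ranks)
            candidates.append((weight, host))
    if not candidates:
        return None  # no host can satisfy the request: scheduling failure
    return max(candidates, key=lambda pair: pair[0])[1]
```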
Preemptible Instances Design

The initial assumption for a preemptible-aware scheduler is that the scheduler should be able to take into account two different instance types -preemptible and normal- according to the following basic rules:

• If it is a normal instance and there are not enough free resources, the scheduler should check whether terminating one or several preemptible instances would free enough resources for the request:
-If this is true, those instances should be terminated -according to some well defined rules- and the new VM should be scheduled into that freed node.
-If this is not possible, then the request should continue with the failure process defined in the scheduling algorithm -it can be an error, or it can be retried after some elapsed time.
• If it is a preemptible instance, the scheduler should try to schedule it without other considerations.

It should be noted that the preemptible instance selection and termination does not only depend on purely theoretical aspects, as this selection will have an influence on the resource provider revenues and the service level agreements signed with their users. Taking this into account, it is obvious that modularity and flexibility for the preemptible instance selection and termination is a key aspect here. For instance, a selection and termination algorithm that is only based on minimizing the number of instances terminated in order to free enough resources may not work for a provider that wishes to terminate the instances that generate the least revenue, even if that means terminating a larger number of instances. Therefore, the aim of our work is not only to design a scheduling algorithm, but also to design it as a modular system, so that it is possible to create more complex models on top of it once the initial preemptible mechanism is in place.

The most evident design approach is a retry mechanism based on two selection cycles within a scheduling loop. The scheduler would detect a scheduling failure and then perform a second scheduling cycle after preemptible instances have been evacuated -either by the scheduler itself or by an external service. However, this two-cycle scheduling mechanism would introduce a larger scheduling latency and load in the system. This latency is perceived negatively by the users [38], so the challenge here is how to perform this selection in an efficient way, ensuring that the selected preemptible instances are the least costly for the provider.

Preemptible-aware scheduler

Our proposed algorithm (depicted in Figure 1) addresses the scheduling of preemptible instances within one scheduling loop, without introducing a retry cycle, but rather performing the scheduling taking into account different host states depending on the instance that is to be scheduled. This design takes into account the fact that all the algorithms described in Section 2.1 are based on two complementary phases, filtering and ranking, but adds a final phase, where the preemptible instances that need to be terminated are selected. The algorithm pseudo-code is shown in Algorithm 2 and will be further described in what follows. As we already explained, the filtering phase eliminates the nodes that are not able to host the new request due to their current state -for instance, because of a lack of resources or a VM anti-affinity-, whereas the ranking phase is in charge of assigning a rank or weight to the filtered hosts so that the best candidate is selected. In our preemptible-aware scheduler, preemptible instances are treated specially only during the filtering phase. In order to do so we propose to use two different states for each physical host:

h f This state takes into account all the running VMs inside that host, that is, both the preemptible and the non-preemptible instances.
h n This state does not take into account the preemptible instances inside that host. That is, the preemptible instances running on a particular physical host are not accounted for in terms of consumed resources.
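A minimal sketch of the two host states just defined, using a toy in-memory host model; the class and attribute names are our own, not OpenStack's.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Instance:
    name: str
    vcpus: int
    ram_mb: int
    preemptible: bool
    runtime_min: int = 0

@dataclass
class Host:
    vcpus: int
    ram_mb: int
    instances: List[Instance] = field(default_factory=list)

    def free(self, include_preemptible: bool):
        # h_f (include_preemptible=True): every running VM consumes resources.
        # h_n (include_preemptible=False): preemptible VMs are treated as free.
        used = [i for i in self.instances if include_preemptible or not i.preemptible]
        return (self.vcpus - sum(i.vcpus for i in used),
                self.ram_mb - sum(i.ram_mb for i in used))
```

A normal request would then be filtered against host.free(include_preemptible=False) (the h n view), while a preemptible request would be filtered against host.free(include_preemptible=True) (the h f view), as described next.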
Whenever a new request arrives, the scheduler will use the h f or h n host state for the filtering phase, depending on the type of the request:

• When a normal request arrives, the scheduler will use h n .
• When a preemptible request arrives, the scheduler will use h f .

(Algorithm 2 follows the same structure as the generic loop of Algorithm 1, but filters each host with the state corresponding to the request type, ranks using the full state h f , and ends with a Select and Terminate(req, host) step on the chosen host.)

This way the scheduler ensures that a normal instance can run regardless of any preemptible instance occupying its place, as the h n state does not account for the resources consumed by any preemptible instance running on the host. After this stage, the resulting list of hosts will contain all the hosts able to accommodate the new request, either by evacuating one or several preemptible instances or because there are enough free resources. Once the hosts are filtered, the ranking phase starts. However, in order to perform the correct ranking, it is necessary to use the full state of the hosts, that is, h f . This is needed because the different rank functions require the information about the preemptible instances so as to select the best node. The list of filtered hosts may contain hosts that are able to accept the request because they have free resources, as well as nodes that would imply the termination of one or several instances. In order to choose the best host for scheduling a new instance, new ranking functions need to be implemented so as to prioritise the least costly host. The simplest ranking function, based on the number of preemptible instances per host, is described in Algorithm 3. This function assigns a negative value if the free resources are not enough to accommodate the request, detecting an overcommit produced by the fact that one or several preemptible instances would need to be terminated. However, this basic function only establishes a naive ranking based on whether instances need to be terminated or not. In the case that several instances need to be terminated, this function does not establish any further ranking between them, so more appropriate rank functions need to be created, depending on the business model implemented by the provider. Our design takes this fact into account, allowing for modularity of the cost functions that can be applied in the ranking phase. For instance, commercial providers tend to charge by complete periods of 1 h, so partial hours are not accounted for. A ranking function based on this business model can be expressed as Algorithm 4 (ranking function based on 1 h consumption periods), ranking hosts according to the preemptible instances running inside them and the time needed until the next complete period.

Once the ranking phase is finished, the scheduler will have built an ordered list of hosts containing the best candidates for the new request. Once the best host is selected, it is still necessary to select which individual preemptible instances need to be evacuated from that host, if any. Our design adds a third phase, so as to terminate the preemptible instances if needed. This last phase performs an additional ranking and selection of the candidate preemptible instances inside the selected host, so as to select the least costly ones for the provider.
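Continuing the toy host model from the previous sketch, the two ranking functions described above could look roughly as follows; this is one plausible reading of Algorithms 3 and 4, not the published pseudo-code.

```python
def rank_preemptible_count(host, req_vcpus, req_ram_mb):
    # Algorithm 3 style: hosts with enough genuinely free resources rank at 0;
    # hosts that would need terminations get a negative rank proportional to
    # the number of preemptible instances running on them.
    free_cpu, free_ram = host.free(include_preemptible=True)
    if free_cpu >= req_vcpus and free_ram >= req_ram_mb:
        return 0.0
    return -float(sum(1 for i in host.instances if i.preemptible))

def rank_hour_remainders(host, req_vcpus, req_ram_mb):
    # Algorithm 4 style: when terminations are unavoidable, prefer hosts whose
    # preemptible instances are closest to completing their current 1 h period,
    # i.e. whose summed partial-hour remainders are smallest.
    free_cpu, free_ram = host.free(include_preemptible=True)
    if free_cpu >= req_vcpus and free_ram >= req_ram_mb:
        return 0.0
    return -float(sum(i.runtime_min % 60 for i in host.instances if i.preemptible))
```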
This selection leverages a similar ranking process, performed on the preemptible instances, considering all the combinations of preemptible instances and their cost for the provider, as shown in Algorithm 5 (preemptible instance selection and termination).

Evaluation

In the first part of this section (4.2) we describe an implementation -done for the OpenStack Compute CMF- in order to evaluate our proposed algorithm. We have decided to implement it on top of the OpenStack Compute software due to its modular design, which allowed us to easily plug in our modified modules without requiring significant modifications to the core code. Afterwards we perform two different evaluations. On the one hand we assess the algorithm correctness, ensuring that the most desirable instances are selected according to the configured weighers (Section 4.4). On the other hand we examine the performance of the proposed algorithm when compared with the default scheduling mechanism (Section 4.5).

OpenStack Compute Filter Scheduler

The OpenStack Compute scheduler is called Filter Scheduler and, as already described in Section 2, it is a rank scheduler implementing two different phases: filtering and weighting.

Filtering. The first step is the filtering phase. The scheduler applies a concatenation of filter functions to the initial set of available hosts, based on the host properties and state -e.g. free RAM or number of free CPUs-, user input -e.g. affinity or anti-affinity with other instances- and resource provider defined configuration. When the filtering process has concluded, all the hosts in the final set are able to satisfy the user request.

Weighing. Once the filtering phase returns a list of suitable hosts, the weighting stage starts so that the best host -according to the defined configuration- is selected. The scheduler applies to all hosts the same set of weigher functions w_i(h), taking into account each host state h. Those weigher functions return a value considering the characteristics of the host received as input parameter; therefore, the total weight Ω for a node h is calculated as follows:

Ω = Σ_{i=1}^{n} m_i · N(w_i(h))

where m_i is the multiplier for a weigher function and N(w_i(h)) is the weight normalized to [0, 1], calculated via a rescaling:

N(w_i(h)) = (w_i(h) − min W) / (max W − min W)

where w_i(h) is the weight function, and min W, max W are the minimum and maximum values that the weigher has assigned over the set of weighted hosts. This way, the final weight before applying the multiplication factor will always be in the range [0, 1]. After these two phases have ended, the scheduler has a set of hosts ordered according to the weights assigned to them, and it will assign the request to the host with the maximum weight. If several nodes have the same weight, the final host will be randomly selected from that set.

Implementation Evaluation

We have extended the Filter Scheduler algorithm with the functionality described in Algorithm 6. We have also implemented the ranking functions described in Algorithm 3 and Algorithm 4 as weighers, using the OpenStack terminology. Moreover, the Filter Scheduler has also been modified so as to introduce the additional selection and termination phase (Algorithm 5). This phase has been implemented following the same modular approach as the OpenStack weighting modules, allowing the definition and implementation of additional cost modules to determine which instances are to be selected for termination.
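The weighting scheme above translates almost directly into code; the following is a small illustrative sketch of the normalization and weighted sum, not the actual OpenStack weigher classes.

```python
def normalize(raw):
    # N(w) = (w - min W) / (max W - min W); if all values are equal the
    # weigher cannot discriminate between hosts, so everything maps to 0.
    lo, hi = min(raw), max(raw)
    if hi == lo:
        return [0.0] * len(raw)
    return [(w - lo) / (hi - lo) for w in raw]

def total_weights(hosts, weighers):
    # Omega(h) = sum_i m_i * N(w_i(h)), with one (function, multiplier) pair
    # per weigher; returns one total weight per host, in the same order.
    totals = [0.0] * len(hosts)
    for weigher, multiplier in weighers:
        for idx, n in enumerate(normalize([weigher(h) for h in hosts])):
            totals[idx] += multiplier * n
    return totals
```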
As for the cost functions, we have implemented a module following Algorithm 4. This cost function assumes that customers are charged by periods of 1 h, therefore it prioritizes the termination of Spot Instances with the lowest partial-hour consumption (i.e. if we consider instances with 120 min, 119 min and 61 min of duration, the instance with 120 min will be terminated). This development has been done on the OpenStack Newton version 11 , and was deployed on the infrastructure that we describe in Section 4.3.

Configurations

In order to evaluate our algorithm proposal we have set up a dedicated test infrastructure comprising a set of 26 identical IBM HS21 blade servers, with the characteristics described in Table 1. All the nodes had an identical base installation, based on Ubuntu Server 16.04 LTS running the Linux 3.8.0 kernel, where we have deployed OpenStack Compute as the Cloud Management Framework. The system architecture is as follows:

• An Image Catalog running the OpenStack Image Service (Glance), serving images from its local disk.
• 24 Compute Nodes running OpenStack Compute, hosting the spawned instances.

The network setup of the testbed consists of two 10 Gbit Ethernet switches, interconnected with a 10 Gbit Ethernet link. All the hosts are evenly connected to these switches using a 1 Gbit Ethernet connection. We have considered the VM sizes described in Table 2, based on the default set of sizes existing in a default OpenStack installation.

Algorithm Evaluation

The purpose of this evaluation is to ensure that the proposed algorithm works as expected, that is:

• The scheduler is able to deliver the resources for a normal request by terminating one or several preemptible instances when there are not enough free idle resources.
• The scheduler selects the best preemptible instance for termination, according to the policies configured by means of the scheduler weighers.

Scheduling using same Virtual Machine sizes

For the first batch of tests, we have considered same-size instances, to evaluate whether the proposed algorithm chooses the best physical host and selects the best preemptible instance for termination. We generated requests for both preemptible and normal instances -chosen randomly-, of random duration between 10 min and 300 min, using an exponential distribution [39], until the first scheduling failure for a normal instance was detected. The compute nodes used have 16 GB of RAM and eight CPUs, as already described. The VM size requested was the medium one, according to Table 2, therefore each compute node could host up to four VMs. We executed these requests and monitored the infrastructure until the first scheduling failure for a normal instance took place, and thus the preemptible instance termination mechanism was triggered. At that moment we took a snapshot of the node statuses, as shown in Table 3 and Table 4. These tables depict the status of each of the physical hosts, as well as the running time of each of the instances that were running at that point. The shaded cells represent the preemptible instance that was terminated to free the resources for the incoming non-preemptible request. Considering that the preemptible instance selection was done according to Algorithm 5, using the cost function in Algorithm 4, the chosen instance has to be the one with the lowest partial-hour period. In Table 3 this is the instance marked with ( 1 ): BP1.
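To illustrate the selection just described (Algorithm 5 driven by the 1 h cost function of Algorithm 4), the sketch below enumerates the possible termination sets on a host by brute force and keeps the cheapest one that frees enough resources. The instance data is hypothetical, chosen only to reproduce the 120/119/61 min example above; this is not the module deployed on the testbed.

```python
from itertools import combinations

def partial_hour(runtime_min):
    # Minutes already consumed in the current, not yet billed, 1 h period:
    # 120 min -> 0, 119 min -> 59, 61 min -> 1, as in the example above.
    return runtime_min % 60

def select_for_termination(preemptible, need_vcpus, need_ram_mb):
    # Try every combination of preemptible instances on the chosen host and
    # keep the one that frees enough resources at the lowest total cost.
    best, best_cost = None, None
    for size in range(1, len(preemptible) + 1):
        for combo in combinations(preemptible, size):
            frees_cpu = sum(i['vcpus'] for i in combo)
            frees_ram = sum(i['ram_mb'] for i in combo)
            if frees_cpu >= need_vcpus and frees_ram >= need_ram_mb:
                cost = sum(partial_hour(i['runtime_min']) for i in combo)
                if best_cost is None or cost < best_cost:
                    best, best_cost = list(combo), cost
    return best, best_cost

# Hypothetical data: the 120 min instance has just completed its second
# billing hour, so it is the cheapest single candidate to terminate.
candidates = [
    {'name': 'P1', 'runtime_min': 120, 'vcpus': 2, 'ram_mb': 4096},
    {'name': 'P2', 'runtime_min': 119, 'vcpus': 2, 'ram_mb': 4096},
    {'name': 'P3', 'runtime_min': 61,  'vcpus': 2, 'ram_mb': 4096},
]
print(select_for_termination(candidates, need_vcpus=2, need_ram_mb=4096))
```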
However, some users require a queuing system ---or some more advanced features like advance reservations--- for running virtual machines. In those cases, there are some external services such as Haizea @cite_34 for OpenNebula or Blazar (https://launchpad.net/blazar) for OpenStack. Those systems lie between the CMF and the users, intercepting their requests and interacting with the cloud system on their behalf, implementing the required functionality.
{ "abstract": [ "One of the many definitions of \"cloud\" is that of an infrastructure-as-a-service (IaaS) system, in which IT infrastructure is deployed in a provider's data center as virtual machines. With IaaS clouds' growing popularity, tools and technologies are emerging that can transform an organization's existing infrastructure into a private or hybrid cloud. OpenNebula is an open source, virtual infrastructure manager that deploys virtualized services on both a local pool of resources and external IaaS clouds. Haizea, a resource lease manager, can act as a scheduling back end for OpenNebula, providing features not found in other cloud software or virtualization-based data center management software." ], "cite_N": [ "@cite_34" ], "mid": [ "2151329337" ] }
As for the cost functions, we have implemented a module following Algorithm 4. This cost function assumes that customers are charged by periods of 1 h, therefore it prioritizes the termination of Spot Instances with the lower partial-hour consumption (i.e. if we consider instances with 120 min, 119 min and 61 min of duration, the instance with 120 min will be terminated). This development has been done on the OpenStack Newton version 11 , and was deployed on the infrastructure that we describe in Section 4.3. Terminate(selected instances) 30: end procedure Algorithm 6 Preemptible Instances Configurations In order to evaluate our algorithm proposal we have set up a dedicated test infrastructure comprising a set of 26 identical IBM HS21 blade servers, with the characteristics described in Table 1. All the nodes had an identical base installation, based on an Ubuntu Server 16.04 LTS, running the Linux 3.8.0 Kernel, where we have deployed OpenStack Compute as the Cloud Management Framework. The system architecture is as follows: • An Image Catalog running the OpenStack Image Service (Glance) serving images from its local disk. • 24 Compute Nodes running OpenStack Compute, hosting the spawned instances. The network setup of the testbed consists on two 10 Gbit Ethernet switches, interconnected with a 10 Gbit Ethernet link. All the hosts are evenly connected to these switches using a 1 Gbit Ethernet connection. We have considered the VM sizes described in Table 2, based on the default set of sizes existing in a default OpenStack installation. Algorithm Evaluation The purpose of this evaluation is to ensure that the proposed algorithm is working as expected, so that: • The scheduler is able to deliver the resources for a normal request, by terminating one or several preemptible instances when there are not enough free idle resources. • The scheduler selects the best preemptible instance for termination, according to the configured policies by means of the scheduler weighers. Scheduling using same Virtual Machine sizes For the first batch of tests, we have considered same size instances, to evaluate if the proposed algorithm chooses the best physical host and selects the best preemptible instance for termination. We generated requests for both preemptible and normal instances -chosen randomly-, of random duration between 10 min and 300 min, using an exponential distribution [39] until the first scheduling failure for a normal instance was detected. The compute nodes used have 16 GB of RAM and eight CPUs, as already described. The VM size requested was the medium one, according to Table 2, therefore each compute node could host up to four VMs. We executed these requests and monitored the infrastructure until the first scheduling failure for a normal instance took place, thus the preemptible instance termination mechanism was triggered. At that moment we took a snapshot of the nodes statuses, as shown in Table 3 and Table 4. These tables depict the status for each of the physical hosts, as well as the running time for each of the instances that were running at that point. The shaded cells represents the preemptible instance that was terminated to free the resources for the incoming non preemptible request. Considering that the preemptible instance selection was done according to Algorithm 5 using the cost function in Algorithm 4, the chosen instance has to be the one with the lowest partial-hour period. In Table 3 this is the instance marked with ( 1 ): BP1. 
By chance, it cor- responds with the preemptible instance with the lowest run time. Table 4 shows a different test execution under the same conditions and constraints. Again, the selected instance has to be the one with the lowest partial-hour period. In Table 4 this corresponds to the instance marked again with ( 1 ): CP1, as its remainder is 1 min. In this case this is not the preemptible instance with the lowest run time (being it CP2). Scheduling using different Virtual Machine sizes For the second batch of tests we requested instances using different sizes, always following the sizes in Table 2. Table 5 depicts the testbed status when a request for a large VM caused the termination of the instances marked with ( 1 ): AP2, AP3 and AP4. In this case, the scheduler decided that the termination of these three instances caused a smaller impact on the provider, as the sum of their 1 h remainders (55) was lower than any of the other possibilities (58 for BP1, 57 for CP1, 112 for CP2 and CP3). Table 6 shows a different test execution under the same conditions and constraints. In this case, the preemptible instance termination was triggered by a new VM request of size medium and the selected instance was the one marked with ( 1 ): BP3, as host-B will have enough free space just by terminating one instance. Performance evaluation As we have already said in Section 3, we have focused on designing an algorithm that does not introduce a significant latency in the system. This latency will introduce a larger delay when delivering the requested resources to the end users, something that is not desirable by any resource provider [4]. In order to evaluate the performance of our proposed algorithm we have done a comparison with the default, unmodified OpenStack Filter Scheduler. Moreover, for the sake of comparison, we have implemented a scheduler based on a retry loop as well. This scheduler performs a normal scheduling loop, and if there is a scheduling failure for a normal instance, it will perform a second pass taking into account the existing preemptible instances. The preemptible instance selection and termination mechanisms remain the same. We have scheduled 130 Virtual Machines of the same size on our test infrastructure and we have recorded the timings for the scheduling function, thus calculating the means and standard deviation for each of the following scenarios: • Using the original, unmodified OpenStack Filter scheduler with an empty infrastructure. • Using the preemptible instances Filter Scheduler and the retry scheduler: -Requesting normal instances with an empty infrastructure. -Requesting preemptible instances with an empty infrastructure. -Requesting normal instances with a saturated infrastructure, thus implying the termination of a preemptible instance each time a request is performed. We have then collected the scheduling calls timings and we have calculated the means and deviations for each scenario, as shown in Figure 2. Numbers in these scenarios are quite low, since the infrastructure is a small testbed, but these numbers are expected to become larger as the infrastructure grows in size. As it can be seen in the aforementioned Figure 2, our solution introduces a delay in the scheduling calls, as we need to calculate additional host states (we hold two different states for each node) and we need to select a preemptible instance for termination (in case it is needed). 
In the case of the retry scheduler, this delay does not exists and numbers are similar to the original scheduler. However, when it is needed to trigger the termination of a preemptible instance, having a retry mechanism (thus executing the same scheduling call two times) introduces a significantly larger penalty when compared to our proposed solution. We consider that the latency that we are introducing is within an acceptable range, therefore not impacting significantly the scheduler performance. Exploitation and integration in existing infrastructures The functionality introduced by the preemptible instances model that we have described in this work can be exploited not only within a cloud resource provider, but it can also be leveraged on more complex hybrid infrastructures. High Performance Computing Integration One can find in the literature several exercises of integration of hybrid infrastructures, integrating cloud resources, commercial or private, with High Performance Computing (HPC) resources. Those efforts focus on outbursting resources from the cloud, when the HPC system does not provide enough resources to solve a particular problem [41]. On-demand provisioning using cloud resources when the batch system of the HPC is full is certainly a viable option to expand the capabilities of a HPC center for serial batch processing. We focus however in the complementary approach, this is, using HPC resources to provide cloud resources capability, so as to complement existing distributed infrastructures. Obviously HPC systems are oriented to batch processing of highly coupled (parallel) jobs. The question here is optimizing resource utilization when the HPC batch system has empty slots. If we backfill the empty slots of a HPC system with cloud jobs, and a new regular batch job arrives from the HPC users, the cloud jobs occupying the slots needed by the newly arrived batch job should be terminated immediately, so as to not disturb regular work. Therefore such cloud jobs should be submitted as Spot Instances Enabling HPC systems to process other jobs during periods in which the load of the HPC mainframe is low, appears as an attractive possibility from the point of view of resource optimization. However the practical implementation of such idea would need to be compatible with both, the HPC usage model, and the cloud usage model. In HPC systems users login via ssh to a frontend. At the frontend the user has the tools to submit jobs. The scheduling of HPC jobs is done using a regular batch systems software (such as SLURM, SGE, etc...). HPC systems are typically running MPI parallel jobs as well using specialized hardware interconnects such as Infiniband. Let us imagine a situation in which the load of the HPC system is low. One can instruct the scheduler of the batch system to allow cloud jobs to HPC system occupying those slots not allocated by the regular batch allocation. In order to be as less disrupting as possible the best option is that the cloud jobs arrive as preemptible instances as described through this paper. When a batch job arrives to the HPC system, this job should be immediately scheduled and executed. Therefore the scheduler should be able to perform the following steps: • Allocate resources for the job that just arrived to the batch queue system • Identify the cloud jobs that are occupying those resources, and stop them. • Dispatch the batch job. 
In the case of parallel jobs the scheduling decision may depend on many factors like the topology of the network requested, or the affinity of the processes at the core/CPU level. In any case parallel jobs using heavily the low latency interconnect should not share nodes with any other job. High Throughput Computing Integration Existing High Throughput Computing Infrastructures, like the service offered by EGI 12 , could benefit from a cloud providers offering preemptible instances. It has been shown that cloud resources and IaaS offerings can be used to run HTC tasks [42] in a pull mode, where cloud instances are started in a way that they are able to pull computing tasks from a central location (for example using a distributed batch system like HTCondor). However, sites are reluctant to offer large amounts of resources to be used in this mode due to the lack of a fixed duration for cloud instances. In this context, federated cloud e-Infrastrucutres like the EGI Federated Cloud [43], could benefit from resource providers offering preemptible instances. Users could populate idle resources with preemptible instances pulling their HTC tasks, whereas interactive and normal IaaS users will not be impacted negatively, as they will get the requests satisfied. In this way, large amounts of cloud computing power could be offered to the European research community. Conclusions In this work we have proposed a preemptible instance scheduling design that does not modify substantially the existing scheduling algorithms, but rather enhances them. The modular rank and cost mechanisms allows the definition and implementation of any resource provider defined policy by means of additional pluggable rankers. Our proposal and implementation enables all kind of service providers -whose infrastructure is managed by open source middleware such as OpenStack-to offer a new access model based on preemptible instances, with a functionality similar to the one offered by the major commercial providers. We have checked for the algorithm correctness when selecting the preemptible instances for termination. The results yield that the algorithm behaves as expected. Moreover we have compared the scheduling performance with regards equivalent default scheduler, obtaining similar results, thus ensuring that the scheduler performance is not significantly impacted. This implementation allows to apply more complex policies on top of the preemptible instances, like instance termination based on price fluctuations (that is, implementing a preemptible instance stock market), 12 https://www.egi.eu/services/ high-throughput-compute/ preemptible instance migration so as to consolidate them or proactive instance termination to maximize the provider's revenues by not delivering computing power at no cost to the users.
5,471
1812.10668
2906853528
Abstract Maximizing resource utilization by performing an efficient resource provisioning is a key factor for any cloud provider: commercial actors can maximize their revenues, whereas scientific and non-commercial providers can maximize their infrastructure utilization. Traditionally, batch systems have allowed data centers to fill their resources as much as possible by using backfilling and similar techniques. However, in an IaaS cloud, where virtual machines are supposed to live indefinitely, or at least as long as the user is able to pay for them, these policies are not easily implementable. In this work we present a new scheduling algorithm for IaaS providers that is able to support preemptible instances -which can be stopped by higher priority requests- without introducing large modifications in the current cloud schedulers. This scheduler enables the implementation of new cloud usage and payment models that allow a more efficient usage of the resources and potential new revenue sources for commercial providers. We also study the correctness and the performance overhead of the proposed scheduler against existing solutions.
Besides simplistic scheduling policies like first-fit or random chance node selection @cite_33, current CMFs implement a scheduling algorithm based on a rank selection of hosts, as we explain in what follows:
{ "abstract": [ "The primary purpose of this book is to capture the state-of-the-art in Cloud Computing technologies and applications. The book will also aim to identify potential research directions and technologies that will facilitate creation a global market-place of cloud computing services supporting scientific, industrial, business, and consumer applications. We expect the book to serve as a reference for larger audience such as systems architects, practitioners, developers, new researchers and graduate level students. This area of research is relatively recent, and as such has no existing reference book that addresses it.This book will be a timely contribution to a field that is gaining considerable research interest, momentum, and is expected to be of increasing interest to commercial developers. The book is targeted for professional computer science developers and graduate students especially at Masters level. As Cloud Computing is recognized as one of the top five emerging technologies that will have a major impact on the quality of science and society over the next 20 years, its knowledge will help position our readers at the forefront of the field." ], "cite_N": [ "@cite_33" ], "mid": [ "2096120508" ] }
An efficient cloud scheduler design supporting preemptible instances $
($) This is the author's accepted version of the following article: Álvaro López García, Enol Fernández del Castillo, Isabel Campos Plasencia, "An efficient cloud scheduler design supporting preemptible instances", accepted in Future Generation Computer Systems, 2019, which is published in its final form at https://doi.org/10.1016/j.future.2018.12.057. This preprint article may be used for non-commercial purposes under a CC BY-NC-SA 4.0 license. Infrastructure as a Service (IaaS) Clouds make it possible to provide computing capacity as a utility to the users following a pay-per-use model. This allows the deployment of complex execution environments without an upfront infrastructure commitment, fostering the adoption of the cloud by users that could not afford to operate an on-premises infrastructure. In this regard, Clouds are not only present in the industrial ICT ecosystem, but are also increasingly adopted by other stakeholders such as public administrations or research institutions. Indeed, clouds are nowadays common in the scientific computing field [1,2,3,4], due to the fact that they are able to deliver resources that can be configured with the complete software needed for an application [5]. Moreover, they also allow the execution of non-transient tasks, making it possible to run virtual laboratories, databases, etc. that can be tightly coupled with the execution environments. This flexibility is a great advantage over traditional computational models -such as batch systems or even Grid computing- where a fixed operating system is normally imposed and any complementary tools (such as databases) need to be self-managed outside the infrastructure. This fact is pushing scientific datacenters outside their traditional boundaries, evolving into a mixture of services that deliver more added value to their users, with the Cloud as a prominent actor. Maximizing resource utilization by performing an efficient resource provisioning is a fundamental aspect for any resource provider, especially for scientific providers. Users accessing these computing resources do not usually pay -or at least they are not charged directly- for their consumption, and normally resources are paid via other indirect methods (like access grants), with users tending to assume that resources are free. Scientific computing facilities tend to work in a fully saturated manner, aiming at the maximum possible resource utilization level. In this context it is common that compute servers spawned in a cloud infrastructure are not terminated at the end of their lifetime, resulting in idle resources, a state that is not desirable as long as there is processing that needs to be done [4]. In a commercial cloud this is not a problem, since users are charged for their allocated resources, regardless of whether they are being used or not. Therefore users tend to take care of their virtual machines, terminating them whenever they are not needed anymore. Moreover, in the cases where users leave their resources running forever, the provider still obtains revenue for those resources. Cloud operators try to solve this problem by setting resource quotas that limit the amount of resources that a user or group is able to consume, by doing a static partitioning of the resources [8]. However, this kind of resource allocation automatically leads to an underutilization of the infrastructure, since the partitioning needs to be conservative enough so that other users can still utilize the infrastructure.
Quotas impose hard limits that lead to dedicated resources for a group, even if the group is not using them. Besides, cloud providers also need to provide their users with on-demand access to the resources, one of the most compelling cloud characteristics [9]. In order to provide such access, an overprovisioning of resources is expected [10] in order to fulfil user requests, leading to an infrastructure where utilization is not maximized, as there should always be enough resources available for a potential request. Taking into account that some processing workloads executed on the cloud do not really require on-demand access (but rather are executed for long periods of time), a compromise between these two aspects (i.e. maximizing utilization and providing enough on-demand access to the users) can be reached by using idle resources to execute those tasks that do not require truly on-demand access [10]. This approach is indeed common in scientific computing, where batch systems maximize resource utilization through backfilling techniques, providing opportunistic access to this kind of tasks. Unlike in batch processing environments, virtual machines (VMs) spawned in a Cloud do not have a fixed duration in time and are supposed to live forever -or until the user decides to stop them. Commercial cloud providers offer specific VM types (like the Amazon EC2 Spot Instances 1 or the Google Compute Engine Preemptible Virtual Machines 2 ) that can be provisioned at a fraction of a normal VM price, with the caveat that they can be terminated whenever the provider decides to do so. This kind of VMs can be used to backfill idle resources, making it possible to maximize utilization while still providing on-demand access, since normal VMs will obtain resources by evacuating Spot or Preemptible instances. In this paper we propose an efficient scheduling algorithm that combines the scheduling of preemptible and non-preemptible instances in a modular way. The proposed solution is flexible enough to allow different allocation, selection and termination policies, thus allowing resource providers to easily implement and enforce the strategy that is most suitable for their needs. In our work we extend the OpenStack Cloud middleware with a prototype implementation of the proposed scheduler, as a way to demonstrate and evaluate the feasibility of our solution. We moreover evaluate the performance of this solution in comparison with the existing OpenStack scheduler. The remainder of the paper is structured as follows. In Section 2 we present the related work in this field. In Section 3 we propose a design for an efficient scheduling mechanism for preemptible instances. In Section 4 we present an implementation of our proposed algorithm, as well as an evaluation of its feasibility and performance compared with a normal scheduler. Finally, in Section 6 we present this work's conclusions. Scheduling in the existing Cloud Management Frameworks Generally speaking, existing Cloud Management Frameworks (CMFs) do not implement full-fledged queuing mechanisms as other computing models do (like the Grid or traditional batch systems). Clouds are normally more focused on the rapid scaling of resources rather than on batch processing, where systems are governed by queuing systems [34]. The default scheduling strategies in the current CMFs are mostly based on the immediate allocation of resources following a first-come, first-served basis.
The cloud schedulers provision them when requested, or they are not provisioned at all (except in some CMFs that implement a FIFO queuing mechanism) [35]. However, some users require a queuing system -or more advanced features like advance reservations- for running virtual machines. In those cases, there are external services such as Haizea [36] for OpenNebula or Blazar 6 for OpenStack. Those systems lie between the CMF and the users, intercepting their requests and interacting with the cloud system on their behalf, implementing the required functionality. Besides simplistic scheduling policies like first-fit or random chance node selection [35], current CMFs implement a scheduling algorithm based on a rank selection of hosts, as we explain in what follows: OpenNebula 7 uses by default a match making scheduler, implementing the Rank Scheduling Policy [36]. This policy first performs a filtering of the existing hosts, excluding those that do not meet the request requirements. Afterwards, the scheduler evaluates some operator-defined rank expressions against the recorded information from each of the hosts so as to obtain an ordered list of nodes. Finally, the resources with a higher rank are selected to fulfil the request. OpenNebula implements a queue to hold the requests that cannot be satisfied immediately, but this queuing mechanism follows a FIFO logic, without further priority adjustment. OpenStack 8 implements a Filter Scheduler [37], based on two separate phases. The first phase consists of the filtering of hosts, which excludes the hosts that cannot satisfy the request. This filtering follows a modular design, so that it is possible to filter out nodes based on the user request (RAM, number of vCPUs), direct user input (such as instance affinity or anti-affinity) or operator-configured filtering. The second phase consists of the weighing of hosts, following the same modular approach. Once the nodes are filtered and weighed, the best candidate is selected from that ordered set. CloudStack 9 utilizes the term allocator to determine which host will be selected to place the newly requested VM. The nodes that are used by the allocators are the ones that are able to satisfy the request. Eucalyptus 10 implements a greedy or round robin algorithm. The former strategy uses the first node that is identified as suitable for running the VM, exhausting a node before moving on to the next available node. The latter schedules each request in a cyclic manner, distributing the load evenly in the long term. All the presented scheduling algorithms share the view that the nodes are first filtered -so that only those that can run the request are considered- and then ordered or ranked according to some defined rules. Generally speaking, the scheduling algorithm can be expressed as the pseudo-code in Algorithm 1: for each host h_i, if Filter(h_i, req) holds then Ω_i ← 0 and, for every rank function r_j with multiplier m_j in the configured ranks, Ω_i ← Ω_i + m_j * r_j(h_i, req); the host with the highest Ω is finally selected to fulfil the request.
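To make this shared structure concrete, the following is a minimal Python sketch of the filter-and-rank loop that Algorithm 1 expresses. The host and request objects, the filter predicates and the rank functions are illustrative placeholders, not the actual interfaces of any of the CMFs above.

def schedule(hosts, request, filters, ranks):
    """Filter-and-rank loop in the spirit of Algorithm 1.

    `filters` is a list of predicates f(host, request) -> bool.
    `ranks` is a list of (multiplier, rank_fn) pairs, where
    rank_fn(host, request) -> float.
    Returns the best host, or None on scheduling failure.
    """
    # Phase 1: keep only the hosts that can satisfy the request.
    candidates = [h for h in hosts
                  if all(f(h, request) for f in filters)]
    if not candidates:
        return None  # scheduling failure: error out or retry later

    # Phase 2: rank the surviving hosts and pick the highest one.
    def weight(host):
        return sum(m * r(host, request) for m, r in ranks)

    return max(candidates, key=weight)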
Preemptible Instances Design The initial assumption for a preemptible-aware scheduler is that the scheduler should be able to take into account two different instance types -preemptible and normal- according to the following basic rules: • If it is a normal instance, the scheduler should check whether there are nodes where it could be scheduled by terminating one or several preemptible instances: -If this is true, those instances should be terminated -according to some well defined rules- and the new VM should be scheduled into that freed node. -If this is not possible, then the request should continue with the failure process defined in the scheduling algorithm -it can be an error, or it can be retried after some elapsed time. • If it is a preemptible instance, the scheduler should try to schedule it without further considerations. It should be noted that the preemptible instance selection and termination do not only depend on purely theoretical aspects, as this selection will have an influence on the resource provider revenues and on the service level agreements signed with the users. Taking this into account, modularity and flexibility in the preemptible instance selection and termination are a key aspect here. For instance, a selection and termination algorithm that only minimizes the number of instances terminated in order to free enough resources may not work for a provider that wishes to terminate the instances generating less revenue, even if a larger number of instances has to be terminated. Therefore, the aim of our work is not only to design a scheduling algorithm, but also to design it as a modular system, so that more complex models can be created on top of it once the initial preemptible mechanism is in place. The most evident design approach is a retry mechanism based on two selection cycles within a scheduling loop. The scheduler would detect a scheduling failure and then perform a second scheduling cycle after preemptible instances have been evacuated -either by the scheduler itself or by an external service. However, this two-cycle scheduling mechanism would introduce a larger scheduling latency and load in the system. This latency is perceived negatively by the users [38], so the challenge is how to perform this selection in an efficient way, ensuring that the selected preemptible instances are the least costly for the provider.
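For contrast, the retry-based alternative just discussed can be sketched as follows, reusing the generic schedule() loop from the earlier sketch. The request is assumed to carry a preemptible flag, and evacuate_preemptible is a hypothetical helper standing for whatever component frees the resources; the point is only to show the second full scheduling pass such a design forces.

def schedule_with_retry(hosts, request, filters, ranks, evacuate_preemptible):
    """Two-cycle variant: one normal pass and, only when a non-preemptible
    request fails, an evacuation followed by a second full pass."""
    host = schedule(hosts, request, filters, ranks)      # first cycle
    if host is not None or request.preemptible:
        return host
    evacuate_preemptible(hosts, request)                 # free resources
    return schedule(hosts, request, filters, ranks)      # second cycle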
Preemptible-aware scheduler Our proposed algorithm (depicted in Figure 1) addresses the scheduling of preemptible instances within one scheduling loop, without introducing a retry cycle, but rather performing the scheduling taking into account different host states depending on the instance that is to be scheduled. This design takes into account the fact that all the algorithms described in Section 2.1 are based on two complementary phases -filtering and ranking- but adds a final phase, where the preemptible instances that need to be terminated are selected. The algorithm pseudocode is shown in Algorithm 2 and will be further described in what follows. As we already explained, the filtering phase eliminates the nodes that are not able to host the new request due to their current state -for instance, because of a lack of resources or a VM anti-affinity-, whereas the ranking phase is in charge of assigning a rank or weight to the filtered hosts so that the best candidate is selected. In our preemptible-aware scheduler, only the filtering phase treats preemptible instances specially. In order to do so, we propose to utilize two different states for each physical host: h_f This state takes into account all the VMs running inside the host, that is, both the preemptible and the non-preemptible instances. h_n This state does not take into account the preemptible instances inside the host. That is, the preemptible instances running on a particular physical host are not accounted in terms of consumed resources. Whenever a new request arrives, the scheduler will use the h_f or h_n host states for the filtering phase, depending on the type of the request: • When a normal request arrives, the scheduler will use h_n. • When a preemptible request arrives, the scheduler will use h_f. Algorithm 2 thus reduces to: for each host h_i that passes Filter(h_i, req) on the appropriate state, Ω_i ← 0 and Ω_i ← Ω_i + m_j * r_j(h_f_i, req) for every rank function r_j with multiplier m_j; then host ← Best_Host(hosts), Select_and_Terminate(req, host), and host is returned. This way the scheduler ensures that a normal instance can run regardless of any preemptible instance occupying its place, as the h_n state does not account for the resources consumed by any preemptible instance running on the host. After this stage, the resulting list of hosts will contain all the hosts susceptible of hosting the new request, either by evacuating one or several preemptible instances or because there are enough free resources. Once the hosts are filtered, the ranking phase starts. However, in order to perform the correct ranking, it is necessary to use the full state of the hosts, that is, h_f. This is needed because the different rank functions require the information about the preemptible instances so as to select the best node. The list of filtered hosts may contain hosts that are able to accept the request because they have free resources, as well as nodes that would imply the termination of one or several instances. In order to choose the best host for scheduling a new instance, new ranking functions need to be implemented, so as to prioritise the least costly host. The simplest ranking function, based on the number of preemptible instances per host, is described in Algorithm 3. This function assigns a negative value if the free resources are not enough to accommodate the request, detecting an overcommit produced by the fact that one or several preemptible instances would need to be terminated. However, this basic function only establishes a naive ranking based on whether instances are terminated or not. In the case that several instances need to be terminated, this function does not establish any rank between them, so more appropriate rank functions need to be created, depending on the business model implemented by the provider. Our design takes this fact into account, allowing for modularity of the cost functions that can be applied to the ranking function. For instance, commercial providers tend to charge by complete periods of 1 h, so partial hours are not accounted. A ranking function based on this business model can be expressed as Algorithm 4 (ranking function based on 1 h consumption periods), ranking hosts according to the preemptible instances running inside them and the time needed until their next complete period. Once the ranking phase is finished, the scheduler will have built an ordered list of hosts containing the best candidates for the new request. Once the best host is selected, it is still necessary to select which individual preemptible instances need to be evacuated from that host, if any. Our design adds a third phase so as to terminate the preemptible instances if needed. This last phase performs an additional ranking and selection of the candidate preemptible instances inside the selected host, so as to select the least costly ones for the provider. This selection leverages a similar ranking process, performed on the preemptible instances, considering all the combinations of preemptible instances and their cost for the provider, as shown in Algorithm 5 (preemptible instance selection and termination).
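The pieces just described can be sketched compactly in Python as follows. This is an illustrative, self-contained sketch, not OpenStack code: resources are reduced to RAM (in MB) and vCPUs, the Instance and Host classes and the function names are assumptions, and the request is any object exposing .ram and .vcpus.

from dataclasses import dataclass, field
from itertools import combinations

@dataclass
class Instance:
    ram: int
    vcpus: int
    preemptible: bool
    runtime_min: int          # minutes since the instance was started

@dataclass
class Host:
    ram: int
    vcpus: int
    instances: list = field(default_factory=list)

    def free(self, count_preemptible=True):
        """Free (RAM, vCPUs). count_preemptible=True is the full state h_f;
        count_preemptible=False ignores preemptible VMs and yields h_n."""
        used = [i for i in self.instances
                if count_preemptible or not i.preemptible]
        return (self.ram - sum(i.ram for i in used),
                self.vcpus - sum(i.vcpus for i in used))

    def fits(self, req, count_preemptible=True):
        free_ram, free_vcpus = self.free(count_preemptible)
        return free_ram >= req.ram and free_vcpus >= req.vcpus

def rank_by_preemptible_count(host, req):
    """Algorithm-3 style rank: negative when the request only fits by
    terminating preemptible instances (overcommit of the full state h_f)."""
    if host.fits(req, count_preemptible=True):
        return 0
    return -sum(1 for i in host.instances if i.preemptible)

def partial_hour_cost(instance, period_min=60):
    """Minutes consumed past the last complete billing period, i.e. work the
    provider gives away for free if the instance is killed right now."""
    return instance.runtime_min % period_min

def rank_by_billing_remainder(host, req, period_min=60):
    """Algorithm-4 style rank: among hosts that need evictions, prefer those
    whose preemptible instances have the smallest total partial-hour cost.
    (A finer version would only count the instances actually evicted.)"""
    if host.fits(req, count_preemptible=True):
        return 0
    return -sum(partial_hour_cost(i, period_min)
                for i in host.instances if i.preemptible)

def select_for_termination(host, req, cost_fn=partial_hour_cost):
    """Algorithm-5 style selection on the chosen host: the cheapest set of
    preemptible instances whose termination frees enough room for `req`.
    Returns () if nothing must be killed, None if the request cannot fit."""
    free_ram, free_vcpus = host.free(count_preemptible=True)
    need_ram = max(0, req.ram - free_ram)
    need_vcpus = max(0, req.vcpus - free_vcpus)
    if need_ram == 0 and need_vcpus == 0:
        return ()
    victims = [i for i in host.instances if i.preemptible]
    best, best_cost = None, None
    for k in range(1, len(victims) + 1):          # brute force is fine for
        for combo in combinations(victims, k):    # the few VMs on one host
            if (sum(i.ram for i in combo) >= need_ram and
                    sum(i.vcpus for i in combo) >= need_vcpus):
                cost = sum(cost_fn(i) for i in combo)
                if best_cost is None or cost < best_cost:
                    best, best_cost = combo, cost
    return best

As a quick check of the cost logic, partial_hour_cost gives 0, 59 and 1 minutes for instances that have run 120, 119 and 61 minutes, so under whole-hour billing the 120-minute instance is the cheapest one to terminate; this is the behaviour exercised in the evaluation below.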
Evaluation In the first part of this section (4.2) we describe an implementation -done for the OpenStack Compute CMF- in order to evaluate our proposed algorithm. We have decided to implement it on top of the OpenStack Compute software due to its modular design, which allowed us to easily plug our modified modules without requiring significant modifications to the code core. Afterwards we perform two different evaluations. On the one hand, we assess the algorithm correctness, ensuring that the most desirable instances are selected according to the configured weighers (Section 4.4). On the other hand, we examine the performance of the proposed algorithm when compared with the default scheduling mechanism (Section 4.5). OpenStack Compute Filter Scheduler The OpenStack Compute scheduler is called Filter Scheduler and, as already described in Section 2, it is a rank scheduler implementing two different phases: filtering and weighing. Filtering The first step is the filtering phase. The scheduler applies a concatenation of filter functions to the initial set of available hosts, based on the host properties and state -e.g. free RAM or free CPU number-, user input -e.g. affinity or anti-affinity with other instances- and resource provider defined configuration. When the filtering process has concluded, all the hosts in the final set are able to satisfy the user request. Weighing Once the filtering phase returns a list of suitable hosts, the weighing stage starts so that the best host -according to the defined configuration- is selected. The scheduler applies to all hosts the same set of weigher functions w_i(h), taking into account each host state h. Those weigher functions return a value considering the characteristics of the host received as input parameter; therefore, the total weight Ω for a node h is calculated as Ω = Σ_{i=1}^{n} m_i · N(w_i(h)), where m_i is the multiplier for a weigher function and N(w_i(h)) is the weight normalized to [0, 1] via the rescaling N(w_i(h)) = (w_i(h) − min W) / (max W − min W), where w_i(h) is the weight function and min W, max W are the minimum and maximum values that the weigher has assigned over the set of weighed hosts. This way, the final weight before applying the multiplication factor is always in the range [0, 1]. After these two phases have ended, the scheduler has a set of hosts ordered according to the weights assigned to them, and it will assign the request to the host with the maximum weight. If several nodes have the same weight, the final host is randomly selected from that set.
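A small Python sketch of this weighing step follows, with min-max normalisation across the filtered host set and random tie-breaking. Weighers are assumed to be (multiplier, fn) pairs with fn(host) -> float (the request is taken as already bound into fn), and the host list is assumed non-empty since it has already passed filtering; this mirrors the formula above rather than the actual OpenStack classes.

import random

def total_weights(hosts, weighers):
    """Return the aggregated weight Omega for each host, in host order."""
    totals = [0.0] * len(hosts)
    for multiplier, fn in weighers:
        raw = [fn(h) for h in hosts]
        lo, hi = min(raw), max(raw)
        span = hi - lo
        for i, value in enumerate(raw):
            # N(w_i(h)) in [0, 1]; if all hosts tie, the weigher contributes 0
            norm = (value - lo) / span if span else 0.0
            totals[i] += multiplier * norm
    return totals

def pick_best(hosts, weighers):
    totals = total_weights(hosts, weighers)
    best = max(totals)
    candidates = [h for h, t in zip(hosts, totals) if t == best]
    return random.choice(candidates)    # ties are broken randomly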
Implementation Evaluation We have extended the Filter Scheduler algorithm with the functionality described in Algorithm 6 (the complete preemptible instances scheduling procedure, which ends by calling Terminate on the selected instances). We have also implemented the ranking functions described in Algorithm 3 and Algorithm 4 as weighers, using the OpenStack terminology. Moreover, the Filter Scheduler has also been modified so as to introduce the additional selection and termination phase (Algorithm 5). This phase has been implemented following the same modular approach as the OpenStack weighing modules, making it possible to define and implement additional cost modules to determine which instances are to be selected for termination. As for the cost functions, we have implemented a module following Algorithm 4. This cost function assumes that customers are charged by periods of 1 h, therefore it prioritizes the termination of Spot Instances with the lowest partial-hour consumption (i.e. if we consider instances with 120 min, 119 min and 61 min of duration, the instance with 120 min will be terminated, since its remainder past the last complete hour is zero). This development has been done on the OpenStack Newton version 11 and was deployed on the infrastructure that we describe in Section 4.3. Configurations In order to evaluate our algorithm proposal we have set up a dedicated test infrastructure comprising a set of 26 identical IBM HS21 blade servers, with the characteristics described in Table 1. All the nodes had an identical base installation, based on Ubuntu Server 16.04 LTS running the Linux 3.8.0 kernel, where we have deployed OpenStack Compute as the Cloud Management Framework. The system architecture is as follows: • An Image Catalog running the OpenStack Image Service (Glance), serving images from its local disk. • 24 Compute Nodes running OpenStack Compute, hosting the spawned instances. The network setup of the testbed consists of two 10 Gbit Ethernet switches, interconnected with a 10 Gbit Ethernet link. All the hosts are evenly connected to these switches using a 1 Gbit Ethernet connection. We have considered the VM sizes described in Table 2, based on the default set of sizes existing in a default OpenStack installation. Algorithm Evaluation The purpose of this evaluation is to ensure that the proposed algorithm works as expected, so that: • The scheduler is able to deliver the resources for a normal request by terminating one or several preemptible instances when there are not enough free idle resources. • The scheduler selects the best preemptible instance for termination, according to the policies configured by means of the scheduler weighers. Scheduling using same Virtual Machine sizes For the first batch of tests, we have considered same-size instances, to evaluate whether the proposed algorithm chooses the best physical host and selects the best preemptible instance for termination. We generated requests for both preemptible and normal instances -chosen randomly-, of random duration between 10 min and 300 min, using an exponential distribution [39], until the first scheduling failure for a normal instance was detected. The compute nodes used have 16 GB of RAM and eight CPUs, as already described. The VM size requested was the medium one, according to Table 2, therefore each compute node could host up to four VMs. We executed these requests and monitored the infrastructure until the first scheduling failure for a normal instance took place, thus triggering the preemptible instance termination mechanism. At that moment we took a snapshot of the node statuses, as shown in Table 3 and Table 4. These tables depict the status of each of the physical hosts, as well as the running time of each of the instances that were running at that point. The shaded cells represent the preemptible instance that was terminated to free the resources for the incoming non-preemptible request. Considering that the preemptible instance selection was done according to Algorithm 5 using the cost function in Algorithm 4, the chosen instance has to be the one with the lowest partial-hour period. In Table 3 this is the instance marked with ( 1 ): BP1.
By chance, it corresponds to the preemptible instance with the lowest run time. Table 4 shows a different test execution under the same conditions and constraints. Again, the selected instance has to be the one with the lowest partial-hour period. In Table 4 this corresponds to the instance marked again with ( 1 ): CP1, as its remainder is 1 min. In this case it is not the preemptible instance with the lowest run time (which is CP2). Scheduling using different Virtual Machine sizes For the second batch of tests we requested instances using different sizes, always following the sizes in Table 2. Table 5 depicts the testbed status when a request for a large VM caused the termination of the instances marked with ( 1 ): AP2, AP3 and AP4. In this case, the scheduler decided that the termination of these three instances caused a smaller impact on the provider, as the sum of their 1 h remainders (55) was lower than any of the other possibilities (58 for BP1, 57 for CP1, 112 for CP2 and CP3). Table 6 shows a different test execution under the same conditions and constraints. In this case, the preemptible instance termination was triggered by a new VM request of size medium and the selected instance was the one marked with ( 1 ): BP3, as host-B has enough free space just by terminating one instance. Performance evaluation As we have already said in Section 3, we have focused on designing an algorithm that does not introduce a significant latency in the system. Such a latency would introduce a larger delay when delivering the requested resources to the end users, something that is not desirable for any resource provider [4]. In order to evaluate the performance of our proposed algorithm we have compared it with the default, unmodified OpenStack Filter Scheduler. Moreover, for the sake of comparison, we have also implemented a scheduler based on a retry loop. This scheduler performs a normal scheduling loop and, if there is a scheduling failure for a normal instance, it performs a second pass taking into account the existing preemptible instances. The preemptible instance selection and termination mechanisms remain the same. We have scheduled 130 Virtual Machines of the same size on our test infrastructure and we have recorded the timings for the scheduling function, calculating the means and standard deviations for each of the following scenarios: • Using the original, unmodified OpenStack Filter Scheduler with an empty infrastructure. • Using the preemptible instances Filter Scheduler and the retry scheduler: -Requesting normal instances with an empty infrastructure. -Requesting preemptible instances with an empty infrastructure. -Requesting normal instances with a saturated infrastructure, thus implying the termination of a preemptible instance each time a request is performed. We have then collected the scheduling call timings and calculated the means and deviations for each scenario, as shown in Figure 2. The numbers in these scenarios are quite low, since the infrastructure is a small testbed, but they are expected to become larger as the infrastructure grows in size. As can be seen in Figure 2, our solution introduces a delay in the scheduling calls, as we need to calculate additional host states (we hold two different states for each node) and we need to select a preemptible instance for termination (in case it is needed).
In the case of the retry scheduler, this delay does not exist and the numbers are similar to those of the original scheduler. However, when the termination of a preemptible instance needs to be triggered, having a retry mechanism (thus executing the same scheduling call twice) introduces a significantly larger penalty when compared to our proposed solution. We consider that the latency that we are introducing is within an acceptable range, therefore not significantly impacting the scheduler performance. Exploitation and integration in existing infrastructures The functionality introduced by the preemptible instances model that we have described in this work can be exploited not only within a cloud resource provider, but it can also be leveraged in more complex hybrid infrastructures. High Performance Computing Integration One can find in the literature several exercises of integration of hybrid infrastructures, combining cloud resources, commercial or private, with High Performance Computing (HPC) resources. Those efforts focus on bursting out to cloud resources when the HPC system does not provide enough resources to solve a particular problem [41]. On-demand provisioning using cloud resources when the batch system of the HPC center is full is certainly a viable option to expand the capabilities of an HPC center for serial batch processing. We focus, however, on the complementary approach, that is, using HPC resources to provide cloud capacity, so as to complement existing distributed infrastructures. Obviously, HPC systems are oriented to batch processing of highly coupled (parallel) jobs. The question here is optimizing resource utilization when the HPC batch system has empty slots. If we backfill the empty slots of an HPC system with cloud jobs, and a new regular batch job arrives from the HPC users, the cloud jobs occupying the slots needed by the newly arrived batch job should be terminated immediately, so as not to disturb regular work. Therefore such cloud jobs should be submitted as Spot Instances. Enabling HPC systems to process other jobs during periods in which the load of the HPC mainframe is low appears to be an attractive possibility from the point of view of resource optimization. However, the practical implementation of such an idea would need to be compatible with both the HPC usage model and the cloud usage model. In HPC systems users log in via ssh to a frontend. At the frontend the user has the tools to submit jobs. The scheduling of HPC jobs is done using regular batch system software (such as SLURM, SGE, etc.). HPC systems also typically run MPI parallel jobs using specialized hardware interconnects such as InfiniBand. Let us imagine a situation in which the load of the HPC system is low. One can instruct the scheduler of the batch system to allow cloud jobs into the HPC system, occupying those slots not allocated by the regular batch allocation. In order to be as little disruptive as possible, the best option is that the cloud jobs arrive as preemptible instances, as described throughout this paper. When a batch job arrives at the HPC system, this job should be immediately scheduled and executed. Therefore the scheduler should be able to perform the following steps, as sketched after this list: • Allocate resources for the job that just arrived at the batch queue system. • Identify the cloud jobs that are occupying those resources, and stop them. • Dispatch the batch job.
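The following is a hypothetical Python sketch of such a batch-system hook performing the three steps above. The batch and cloud objects stand for thin wrappers around the site's batch system and CMF; none of the calls below belong to a real SLURM or OpenStack API.

def dispatch_hpc_job(job, batch, cloud):
    """Run a newly arrived HPC job, evicting preemptible cloud VMs first."""
    nodes = batch.allocate(job)                          # 1. reserve the nodes
    for node in nodes:                                   # 2. evict cloud jobs
        for vm in cloud.preemptible_instances_on(node):
            cloud.terminate(vm)
    batch.dispatch(job, nodes)                           # 3. run the batch job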
In the case of parallel jobs the scheduling decision may depend on many factors, like the topology of the requested network or the affinity of the processes at the core/CPU level. In any case, parallel jobs that make heavy use of the low-latency interconnect should not share nodes with any other job. High Throughput Computing Integration Existing High Throughput Computing infrastructures, like the service offered by EGI 12 (https://www.egi.eu/services/high-throughput-compute/), could benefit from cloud providers offering preemptible instances. It has been shown that cloud resources and IaaS offerings can be used to run HTC tasks [42] in a pull mode, where cloud instances are started in a way that allows them to pull computing tasks from a central location (for example using a distributed batch system like HTCondor). However, sites are reluctant to offer large amounts of resources to be used in this mode due to the lack of a fixed duration for cloud instances. In this context, federated cloud e-Infrastructures like the EGI Federated Cloud [43] could benefit from resource providers offering preemptible instances. Users could populate idle resources with preemptible instances pulling their HTC tasks, whereas interactive and normal IaaS users would not be impacted negatively, as their requests would still be satisfied. In this way, large amounts of cloud computing power could be offered to the European research community. Conclusions In this work we have proposed a preemptible instance scheduling design that does not substantially modify the existing scheduling algorithms, but rather enhances them. The modular rank and cost mechanisms allow the definition and implementation of any resource-provider-defined policy by means of additional pluggable rankers. Our proposal and implementation enable all kinds of service providers -whose infrastructure is managed by open source middleware such as OpenStack- to offer a new access model based on preemptible instances, with a functionality similar to the one offered by the major commercial providers. We have checked the algorithm correctness when selecting the preemptible instances for termination. The results show that the algorithm behaves as expected. Moreover, we have compared the scheduling performance with that of the equivalent default scheduler, obtaining similar results, thus ensuring that the scheduler performance is not significantly impacted. This implementation makes it possible to apply more complex policies on top of the preemptible instances, like instance termination based on price fluctuations (that is, implementing a preemptible instance stock market), preemptible instance migration so as to consolidate them, or proactive instance termination to maximize the provider's revenues by not delivering computing power at no cost to the users.
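To illustrate the pluggable-policy point made above, any callable with the same shape as partial_hour_cost from the earlier sketch could be swapped into select_for_termination, for example a revenue-oriented cost that prefers killing the instances currently earning the provider the least. This is a hypothetical sketch; the hourly_rate attribute is an assumption and not part of the model described in this work.

def revenue_cost(instance, period_min=60):
    """Cost of killing the instance = revenue the provider stops earning."""
    return getattr(instance, "hourly_rate", 0.0)

# Usage: select_for_termination(host, request, cost_fn=revenue_cost) would
# then minimize lost revenue instead of lost partial-hour compute time.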
5,471
1812.10358
2950902462
Capturing the interesting components of an image is a key aspect of image understanding. When a speaker annotates an image, selecting labels that are informative greatly depends on the prior knowledge of a prospective listener. Motivated by cognitive theories of categorization and communication, we present a new unsupervised approach to model this prior knowledge and quantify the informativeness of a description. Specifically, we compute how knowledge of a label reduces uncertainty over the space of labels and utilize this to rank candidate labels for describing an image. While the full estimation problem is intractable, we describe an efficient algorithm to approximate entropy reduction using a tree-structured graphical model. We evaluate our approach on the open-images dataset using a new evaluation set of 10K ground-truth ratings and find that it achieves 65% agreement with human raters, largely outperforming other unsupervised baseline approaches.
* Image importance and object saliency. The problem of deciding which components in an image are important has been studied intensively. The main approaches involved identifying characteristics of objects and images that could contribute to importance, and use labeled data for predicting object importance. Elazary and Itti @cite_8 considered the order of object naming in the LabelMe dataset @cite_7 as a measure of the interest of an object and compare that to salient locations predicted by computational models of bottom-up attention. The elegant work of Spain and Perona @cite_18 examined which factors can predict the order in which objects will be mentioned given an image. @cite_19 characterized factors related to semantics, to composition and to the likelihood of attribute-object, and investigated how these affected the measures of importance. @cite_6 focused on predicting entry-level classes using a supervised approach. These studies also make it clear that the object saliency is strongly correlated with its perceived importance @cite_16 @cite_4 .
{ "abstract": [ "How important is a particular object in a photograph of a complex scene? We propose a definition of importance and present two methods for measuring object importance from human observers. Using this ground truth, we fit a function for predicting the importance of each object directly from a segmented image; our function combines a large number of object-related and image-related features. We validate our importance predictions on 2,841 objects and find that the most important objects may be identified automatically. We find that object position and size are particularly informative, while a popular measure of saliency is not.", "We extensively compare, qualitatively and quantitatively, 41 state-of-the-art models (29 salient object detection, 10 fixation prediction, 1 objectness, and 1 baseline) over seven challenging data sets for the purpose of benchmarking salient object detection and segmentation methods. From the results obtained so far, our evaluation shows a consistent rapid progress over the last few years in terms of both accuracy and running time. The top contenders in this benchmark significantly outperform the models identified as the best in the previous benchmark conducted three years ago. We find that the models designed specifically for salient object detection generally work better than models in closely related areas, which in turn provides a precise definition and suggests an appropriate treatment of this problem that distinguishes it from other problems. In particular, we analyze the influences of center bias and scene complexity in model performance, which, along with the hard cases for the state-of-the-art models, provide useful hints toward constructing more challenging large-scale data sets and better saliency models. Finally, we propose probable solutions for tackling several open problems, such as evaluation scores and data set bias, which also suggest future research directions in the rapidly growing field of salient object detection.", "", "How do we decide which objects in a visual scene are more interesting? While intuition may point toward high-level object recognition and cognitive processes, here we investigate the contributions of a much simpler process, low-level visual saliency. We used the LabelMe database (24,863 photographs with 74,454 manually outlined objects) to evaluate how often interesting objects were among the few most salient locations predicted by a computational model of bottom-up attention. In 43 of all images the model’s predicted most salient location falls within a labeled region (chance 21 ). Furthermore, in 76 of the images (chance 43 ), one or more of the top three salient locations fell on an outlined object, with performance leveling off after six predicted locations. The bottom-up attention model has neither notion of object nor notion of semantic relevance. Hence, our results indicate that selecting interesting objects in a scene is largely constrained by low-level visual properties rather than solely determined by higher cognitive processes.", "Entry level categories - the labels people will use to name an object - were originally defined and studied by psychologists in the 1980s. In this paper we study entry-level categories at a large scale and learn the first models for predicting entry-level categories for images. Our models combine visual recognition predictions with proxies for word \"naturalness\" mined from the enormous amounts of text on the web. 
We demonstrate the usefulness of our models for predicting nouns (entry-level words) associated with images by people. We also learn mappings between concepts predicted by existing visual recognition systems and entry-level concepts that could be useful for improving human-focused applications such as natural language image description or retrieval.", "What do people care about in an image? To drive computational visual recognition toward more human-centric outputs, we need a better understanding of how people perceive and judge the importance of content in images. In this paper, we explore how a number of factors relate to human perception of importance. Proposed factors fall into 3 broad types: 1) factors related to composition, e.g. size, location, 2) factors related to semantics, e.g. category of object or scene, and 3) contextual factors related to the likelihood of attribute-object, or object-scene pairs. We explore these factors using what people describe as a proxy for importance. Finally, we build models to predict what will be described about an image given either known image content, or image content estimated automatically by recognition systems.", "" ], "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_8", "@cite_6", "@cite_19", "@cite_16" ], "mid": [ "2059952380", "1772076007", "2264742718", "2165947725", "2135166986", "2067816745", "" ] }
INFORMATIVE OBJECT ANNOTATIONS Tell Me Something I Don't Know
How would you label the photo in Figure 1? If you answered "a dog", your response agrees with what most people would answer. Indeed, people are surprisingly consistent when asked to describe what an image is "about" [1]. They intuitively manage to focus on what is "informative" or "relevant" and select terms that reflect this information. In contrast, automated classifiers can produce a large number of labels that might be technically correct, but are often non-interesting. A natural approach to ascertain importance lies in the context of the specific task. For instance, classifiers can be efficiently trained to identify dog breeds or animal species. More generally, each task defines importance through a supervision signal provided to the classifier [2,3,4]. Here we are interested in a more generic setup, where no down- Figure 1. The problem of informative labeling. An image is automatically annotated with multiple labels. A "speaker" is then given these labels and their confidence scores and has to select k labels to transmit to a listener, such that the listener finds them informative given her prior knowledge. The prior knowledge is assumed to be common to both the speaker and the listener. stream task dictates the scene interpretation. This represents the challenge that people face when describing a scene to another person, without any specific task at hand. The principles that govern informative communication have long been a subject of research in various fields from philosophy of language and linguistics to computer science. In the discipline of pragmatics, Grice's maxims state that "one tries to be as informative as one possibly can." [5]. But the question remains, "Informative about what?" How can we build a practical theory of informative communication that can be applied to concrete problems with real-world data? In this paper, we address the following concrete learning Figure 2. Uncertainty over labels can be estimated through measuring the entropy of its joint distribution, and computed efficiently using a tree-structured probabilistic graphical model (PGM). (a) An image corpus is used for collecting pairwise label co-occurrence. Then, a tree-structured graphical model is learned using the Chow-Liu algorithm. Computing the entropy of the approximated distributionp has a run-time that is linear in the number of labels. (b) To compute the entropy conditioned on a label l dog = true, the marginal of that node is set to [0,1]. Then, the graph edges are redirected and rest of the distribution is updated using the conditional probability tables represented on the edges. Finally, we compute the entropy of the resulting distribution. setup ( Figure 1). A speaker receives a set of labels predicted automatically from an image by a multiclass classifier. It also receives the confidence that the classifier assigns to each prediction. Then, it aims to select a few labels to be transmitted to a listener, such that the listener will find those labels informative. The speaker and listener also share the same prior knowledge in the form of the distribution of labels in the image dataset. We put forward a quantitative theory of how speakers select terms to describe an image. The key idea is that communicated terms are aimed to reduce the uncertainty that a listener has about the semantic space. We show how this "theory-of-mind" can be quantitatively computed using information-theoretic measures. 
In contrast with previous approaches that focused on visual aspects and their importance [6,7,8,9,10], our measures focus on information about the semantics of labels. To compute information content of a label, we build a probabilistic model of the full label space and use it to quantify how transmitting a label reduces uncertainty. Specifically, we compute the entropy of the label distribution as a measure of uncertainty, and also quantify how much this entropy is reduced when a label is set to be true. Importantly, computing these measures over the full distribution of labels is not feasible because it requires to aggregate an exponentially-large set of label combinations. We show how the entropy and other information theoretic measures can be computed efficiently by approximating the full joint distribution with a tree-structured graphical model (a Chow-Liu tree). We then treat entropy-reduction as a scoring function that allows us to rank all labels of an image, and select those that reduce the entropy most. We name this approach IOTA, for Informative Object Annotations. We test this approach on a new evaluation dataset: 10K images from the open-images dataset [11] were annotated with informative labels by three raters each. We find that human annotations are in strong agreement (∼ 70%) with the uncertainty-reduction measures, just shy of inter-rater agreement and superior to 4 other unsupervised baselines. Our main contributions are as follows: (1) We describe a novel learning setup of selecting important labels without direct supervision about importance. (2) We develop an information-theoretic framework to address this task, and propose scoring functions that can be used to solve it. (3) We further describe an efficient algorithm for computing these scoring functions, by approximating the label distribution using a tree-structured graphical model. (4) We provide a new evaluation set of ground-truth importance ratings based on 10K images from the open-images dataset. (5) We show that IOTA achieves high agreement with human judgment on this dataset. Our approach The key idea of our approach is to quantify the relevantinformation content of a message, by modelling what the listener does not know, and find labels that reduce this uncertainty. To illustrate the idea, consider a label that appears in most of the images in a dataset (e.g., nature). If the speaker selects to transmit that label, it provides very little information to the listener, because they can already assume that a given image is annotated with that label. In contrast, if the speaker transmits a label that is less common, appearing in only half of the images, more of the listener's uncertainty would be removed. A more important property of multi-label uncertainty is that labels are interdependent: transmitting one label can reduce the uncertainty of others. This property is evident when considering label hierarchy, for example, golden-retriever = true implies that dog = true. As a result, transmitting a fine-grained label removes more entropy than a more general label. This effect however, is not limited to hierarchical relations. For instance, because the label street tends to co-occur with car and other vehicles, transmitting street would reduce the overall uncertainty by reducing uncertainty in correlated co-occurring terms. Going beyond these examples, we aim to calculate how a revealed label affects the listener's uncertainty. 
For this purpose, the Shannon entropy is a natural choice to quantify uncertainty, provided that we can estimate the prior joint distribution of labels. Clearly, modelling the entire prior knowledge about the visual world of a listener is beyond our current reach. Instead, we show how we can approximate the entire joint distribution by building a compact graphical model with a tree structure. This allows us to efficiently compute properties of the joint distribution over labels and, more specifically, to estimate listener uncertainty and label-conditioned uncertainty. We start by describing an information-theoretic approach for selecting informative labels by estimating uncertainty and label-conditioned uncertainty. We then describe an algorithm to compute these quantities effectively in practice.
The problem setup. Assume that we are given a corpus of images, each annotated with multiple labels from a vocabulary of d terms L = (l_1, ..., l_d). Since we operate in a noisy labeling setup, we treat the labels as binary random variables l_i ∈ {true, false}. We also assume that for each image I, labels are accompanied by a score reflecting the classifier's confidence in that label, p_c(l_i|I). The goal of the speaker is to select k labels to be transmitted to the listener, such that they are most "useful" or informative.
Information-theoretic measure of importance. Let us first assume that we can estimate the distribution over labels that a listener has in mind. Clearly, this is a major assumption, and we discuss below how we relax this assumption and approximate this distribution. Given this distribution, we wish to measure the uncertainty it reflects, as well as how much this uncertainty is reduced when the speaker reveals a specific label. A principled measure of the uncertainty about random variables is the Shannon entropy of their joint distribution H(L_1, ..., L_d) [19]. We use a notation that makes it explicit that the entropy depends on the distribution, where the entropy is defined as

H[p(l_1, ..., l_d)] = − Σ_{l_1,...,l_d} p(l_1, ..., l_d) log p(l_1, ..., l_d).   (1)

Here, summation is over all possible assignments of the d labels, an exponential number of terms that cannot be computed in practice. We show below how to approximate it. The amount of entropy that is reduced when the speaker transmits a subset of the labels L' = {l_i, l_j, l_k, ...} is

∆H(L') = H[p(l_1, ..., l_d)] − H[p(l_1, ..., l_d | L' = true)],

where L' = true means that all labels in L' are assigned a true value. For simplicity, we focus here on the case of transmitting a single label l_i (see also [20]), and define the per-label entropy reduction

∆H(i) = H[p(l_1, ..., l_d)] − H[p(l_1, ..., l_d | l_i = true)].   (2)

This measure has several interesting properties. It has a similar form to the Shannon mutual information, MI(X; Y) = H(X) − H(X|Y), which is always positive. However, the condition in the second term is only over a single value of the label (l_i = true). As a result, Eq. (2) can take both negative and positive values. When the random variables are independent, ∆H(i) is always positive, because the entropy can be factored using the chain rule and obeys

H(L_1, ..., L_d) − H(L_1, ..., L_d | l_i = true) = Σ_j H(L_j) − Σ_{j≠i} H(L_j) = H(L_i) > 0   (Sec. 2.5 in [19]).

However, when the variables are not independent, collapsing one variable to a true value can actually increase the entropy of other co-dependent variables. As an intuitive example, the base probability of observing a lion in a city is very low, and has low entropy. However, once you see a sign reading "zoo", the entropy of facing a lion rises.
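To make Eqs. (1) and (2) concrete, here is a small brute-force Python sketch (illustrative only, not the released IOTA code): it enumerates the empirical joint distribution of a toy label set and computes the per-label entropy reduction ∆H(i). The toy corpus and label names are made up; this exhaustive computation is exactly what becomes infeasible for large vocabularies and motivates the tree approximation described later.

import itertools
import math
from collections import Counter

# Toy "corpus": each image is the set of labels that are present (true).
corpus = [
    {"dog", "pet"}, {"dog", "pet", "street"}, {"dog"}, {"cat", "pet"},
    {"street", "car"}, {"street"}, {"car"}, {"dog", "pet", "car"},
]
labels = sorted({l for img in corpus for l in img})

def joint(corpus, labels):
    # Empirical joint distribution over full binary label assignments.
    counts = Counter(tuple(l in img for l in labels) for img in corpus)
    n = len(corpus)
    return {assign: c / n for assign, c in counts.items()}

def entropy(p):
    # Eq. (1), in bits.
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def conditioned(p, idx):
    # p(l_1..l_d | l_idx = true), by restriction and renormalisation.
    kept = {a: q for a, q in p.items() if a[idx]}
    z = sum(kept.values())
    return {a: q / z for a, q in kept.items()}

p = joint(corpus, labels)
H = entropy(p)
for i, name in enumerate(labels):
    dH = H - entropy(conditioned(p, i))          # Eq. (2)
    print(f"dH({name:7s}) = {dH:+.3f} bits")

On such a toy corpus one can already see that rare, strongly correlated labels (e.g. dog, which co-occurs with pet) remove more uncertainty than very common or isolated ones.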
The second important property of ∆H(i) is that it is completely agnostic to the image and only depends on the label distribution. To capture image-specific label relevance, we note that the accuracy of annotating an image with a label may strongly depend on the image. For example, some images may have key aspects of the object occluded. We therefore wish to compute the expected reduction in entropy based on the likelihood that a label is correct, p_c(l_i|I). When an incorrect label is transmitted, we assume here that no information is passed to the listener (there is an interesting research question about negative information value in this case, which is outside the scope of this paper). The expected entropy reduction is therefore E(∆H) = p_c(l_i|I) · ∆H + (1 − p_c(l_i|I)) · 0. This expectation is equivalent to a confidence-weighted entropy reduction measure:

cw∆H(i) = p_c(l_i|I) [H(L) − H(L | l_i = true)],   (3)

where p_c(l_i|I) is the probability that l_i is correct and L is a random variable that holds the distribution of all labels. We propose that this is a good measure of label information in the context of a corpus.
Other measures of informative labels. Confidence-weighted (cw) entropy reduction, Eq. (3), is an intuitive quantification of label informativeness, but other properties of the label distribution may capture aspects of label importance. We now discuss two such measures: information about images, and probabilistic surprise.
Information about images. Informative labels were studied in the context of an image reference game. In this setup, a speaker provides labels about an image, and a listener needs to identify the target image among a set of distractor images. Recent versions used natural language captioning for the same purpose [2,3]. It is natural to define entropy reduction for that setup. Similar to Eq. (2), we compute the difference between the full entropy over images and the entropy after transmitting a label. When the distribution over images is uniform, the entropy reduction is simply log(num. images) − log(num. matching images), where the second term is the number of images annotated with the label. Considering the confidence of a label we obtain

cw-Image∆H(i) = p_c(l_i|I) [− log q(l_i)],   (4)

where p_c(l_i|I) is again the probability that l_i is correct and q(l_i) is the fraction of images with the label l_i. This measure is fundamentally different from Eq. (3) in that it focuses on the distribution of labels over images, not on their joint distribution.
Probabilistic surprise. Transmitting a label changes the label distribution, the "belief" of the listener. This change can be quantified by the Kullback-Leibler divergence between the label distribution with and without transmission:

cw-D_KL(i) = D_KL( p(l_1, ..., l_d | l_i = true) || p(l_1, ..., l_d) ).   (5)

We can use this measure as a scoring function to rank labels by how strongly they affect the distribution.
Entropy reduction in a singleton model. Equation (1) computes the entropy over the full joint distribution. An interesting approximation of the joint distribution is provided by the singleton model, which models the joint distribution as the product of the marginals, p(l_1, ..., l_d) = Π_i p(l_i). Given this probabilistic model, the joint entropy of all labels is simply the sum of per-label entropies.
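Continuing the toy sketch above (and reusing its entropy and conditioned helpers together with p, labels, and corpus), the scoring functions of Eqs. (3)-(5) could be written roughly as follows; the classifier confidence is a made-up constant here, and the minus sign in the image-based score follows the uniform-distribution argument above.

import math

def cw_delta_H(p, idx, conf):
    # Eq. (3): confidence-weighted entropy reduction.
    return conf * (entropy(p) - entropy(conditioned(p, idx)))

def cw_image_delta_H(q_i, conf):
    # Eq. (4): entropy reduction over a uniform image distribution,
    # with q_i the fraction of images annotated with the label.
    return conf * (-math.log2(q_i))

def dkl_score(p, idx):
    # Eq. (5): KL divergence between the conditioned and prior label distributions.
    p_cond = conditioned(p, idx)
    return sum(q * math.log2(q / p[a]) for a, q in p_cond.items() if q > 0)

conf = 0.9                                    # assumed classifier confidence
for i, name in enumerate(labels):
    q_i = sum(name in img for img in corpus) / len(corpus)
    print(name, cw_delta_H(p, i, conf), cw_image_delta_H(q_i, conf), dkl_score(p, i))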
The reduction of entropy by a transmitted label is simply the entropy of that label. For labels that are rare (p < 0.5), the entropy grows monotonically with p. This means that if all labels are rare, then ranking labels by their frequency in the data yields the same order as ranking labels by their singleton entropy reduction.
Entropy reduction in large label spaces. Given a corpus of images, we wish to compute the joint distribution of label co-occurrence in an image, p(l_1, ..., l_d). The scoring functions described above assume that we can estimate and represent the joint distribution over labels. Unfortunately, even for a modest vocabulary size d, the distribution cannot be estimated in practice since it has 2^d parameters. Instead, we approximate the label distribution using a probabilistic graphical model called a Chow-Liu tree [21]. We first describe the graphical model, and then how it is learned from data. As with any probabilistic graphical model, a Chow-Liu tree has two components. First, a tree G(V, E) with d nodes and d − 1 edges, where the nodes V correspond to the d labels, and the edges E connect the nodes to form a fully-connected tree. The tree is directed, and each node l_i, except a single root node, has a single parent node l_j. As a second component, every edge in the graph connecting nodes i and j is accompanied by a conditional distribution p(l_i | parent(l_i)). Note that this conditional distribution involves only two binary variables, namely a total of 4 parameters. The full model therefore has only O(d) parameters and can be estimated efficiently from data. With these two components, the Chow-Liu model can be used to represent a joint distribution over all labels, which factorizes over the graph:

log p̂(l_1, ..., l_d) = Σ_{i=1}^{d} log p(l_i | l_parent(i)).   (6)

While any tree structure can be used to represent a factored distribution as in Eq. (6), the Chow-Liu algorithm finds one specific tree structure: the distribution that is closest to the original full distribution in terms of the Kullback-Leibler divergence D_KL(p(L) || p̂(L)). That tree is found in two steps. First, for every pair of labels i, j, compute their 2 × 2 joint distribution in the image corpus, then compute the mutual information of that distribution:

MI_ij = Σ_{l_i=T,F} Σ_{l_j=T,F} p_ij(l_i, l_j) log [ p_ij(l_i, l_j) / (p_i(l_i) p_j(l_j)) ],   (7)

where the summation is over all combinations of true and false values for the two variables, p_ij is the joint distribution over label co-occurrence, and p_i and p_j are the marginals of that distribution. As a second step, assign MI_ij as the weight of the edge connecting the nodes of labels i and j, and find the maximum spanning tree on the weighted graph. Note that the particular directions of the edges of the model are not important. Any set of directions that forms a consistent tree (having at most one parent per node) defines the same distribution over the graph [21]. In practice, since committing to a single tree may be sensitive to small perturbations in the data, we model the distribution as a mixture of k trees, which are created by a bootstrap procedure. Details are discussed below.
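As an illustration of the two-step construction just described (pairwise mutual information per Eq. (7), then a maximum spanning tree), the following hedged Python sketch builds a Chow-Liu tree for the toy corpus from the earlier sketches. It assumes the networkx package is available and is not the authors' implementation.

import math
import networkx as nx

def pairwise_mi(corpus, a, b):
    # Empirical 2x2 joint of two binary labels and its mutual information, Eq. (7).
    n = len(corpus)
    mi = 0.0
    for va in (True, False):
        for vb in (True, False):
            p_ab = sum(((a in img) == va) and ((b in img) == vb) for img in corpus) / n
            p_a = sum((a in img) == va for img in corpus) / n
            p_b = sum((b in img) == vb for img in corpus) / n
            if p_ab > 0:
                mi += p_ab * math.log2(p_ab / (p_a * p_b))
    return mi

def chow_liu_tree(corpus, labels):
    # Step 1: weight every label pair by its mutual information.
    g = nx.Graph()
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            g.add_edge(a, b, weight=pairwise_mi(corpus, a, b))
    # Step 2: keep the maximum spanning tree of the weighted graph.
    return nx.maximum_spanning_tree(g)

tree = chow_liu_tree(corpus, labels)   # `corpus`, `labels` from the earlier sketch
print(sorted(tree.edges()))

The resulting tree has d − 1 edges, so only O(d) conditional tables need to be stored, as noted above.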
Representing the joint distribution of labels using a tree provides great computational benefits, since many properties of the distribution can be computed very efficiently. Importantly, when the joint distribution factorizes over a tree, the entropy can be computed exactly using the entropy chain rule:

H[p̂(l_1, ..., l_d)] = H[ Π_{i=1}^{d} p(l_i | parent(l_i)) ] = Σ_{i=1}^{d} H[ p(l_i | parent(l_i)) ].   (8)

Here we abuse the notation slightly: the root node does not have a parent, hence its entropy is not conditioned on a parent but should read H[p(l_root)]. Furthermore, in a tree-structured probabilistic model, one can redirect the edges by selecting any node to be the root and conditioning all other nodes accordingly [22]. This allows us to compute the label-conditioned entropy using the following steps. First, given a new root label l_i, iteratively redirect all edges in the tree to make all nodes its descendants, and update the conditional density tables on the edges. Second, assign a marginal distribution of [0, 1] to the node l_i, reflecting the fact that the label is assigned to be true. Third, propagate the distribution throughout the graph using the conditional probability functions on the edges. Finally, compute the entropy of the new distribution using the chain rule.
Selecting labels for transmission. Given the above model, we can compute the expected entropy reduction for each label for a given image. We then take an information-retrieval perspective, rank the labels by their scores, and emit the highest-ranked label. This process can be repeated for transmitting multiple labels. For example, given that label l_i was transmitted first, we compute how much each of the remaining labels reduces the entropy further. Formally, to decide about a second label to transmit, we compute for every label l_j ≠ l_i:

∆H_i(j) = H[p(l_1, ..., l_d | l_i = true)] − H[p(l_1, ..., l_d | l_i = true, l_j = true)].   (9)

Intuitively, selecting a second label that maximizes this score tends to select labels that are semantically remote from the first emitted labels. If a second label (say, l_j = pet) is semantically similar to the first label (say, l_i = dog), the residual entropy reduction it provides is small.
[Table 1 reports, for one example image, the per-label scores under each scoring function; its columns include confidence, cw∆H, cw-D_KL, cw-Image∆H, and cw-p(x).]
Experiments
Data. We tested IOTA on the open-images dataset (OID) [11]. In OID, each image is annotated with a list of labels, together with a confidence score. We approximate the joint label distribution over the validation set (41,620 images annotated with 512,093 labels) and also over the test set (125,436 images annotated with 1,545,835 labels).
Ground-truth data (OID-IOTA-10K). We collected a new dataset of ground-truth "informative" labels for 10K images: 2500 from OID-validation and 7500 from OID-test, with 3 raters per image. Specifically, raters were instructed to focus on the object or scene that is dominant in the image and to avoid overly generic terms that are not particularly descriptive ("a picture"). Labels were entered as free text and, if possible, matched in real time to a predefined knowledge graph (64% of samples) so raters could verify label meaning. For the 36% of annotations that were not matched during rating, we mapped them as a post-process to appropriate labels. This process included stemming, resolving ambiguities (e.g., deciding whether a bat meant the animal or the sports equipment), and resolving synonyms (e.g., pants and trousers). Overall, in many cases raters used exactly the same term to describe an image. In 68% of the images at least two raters described the image with the same label, and in 27% all raters agreed. We made the data publicly available at https://chechiklab.biu.ac.il/~brachalior/IOTA/.
Label co-occurrence. OID lists labels whose confidence is above 0.5. All labels with a count of 100 appearances or more were considered when collecting the label distribution, ignoring their confidence.
This yielded a vocabulary of 765 labels. Evaluation Protocol For each of the scoring functions derived above (Sec 3.2) we ranked all labels predicted to each image. Given this label ranking, we compared top labels with the ground-truth labels collected from raters, and computed the precision and recall for the top-k ranked labels. Precision and recall are usually used with more than one ground-truth item. In our case however, for each image, there was only one ground-truth label: the majority vote across the three raters. As a result, the precision@1 is identical to recall@1. We excluded im-ages that had no majority vote (3 unique ratings, 27.6% of images). OID provides confidence values in coarse resolution (1 significant digit), hence multiple labels in an image often share the same confidence values. When ranking by confidence only, we broke ties at random. We also tested an evaluation setup where instead of a majority label, every label provided by the three raters was considered as ground truth. Precision and recall was computed in the same way. The code is available at https: //github.com/liorbracha/iota Clean and noisy evaluation We evaluated our approach in two setups. In the first, clean evaluation, we only considered image labels that were verified to be correct by OID raters. Incorrect labels were excluded from the analysis and not ranked by the scoring functions. We also excluded images whose ground truth label was not in the model's vocabulary. In the second setup, noisy evaluation we did not force any of these requirements. The analysis included incorrect labels as well as images whose ground truth labels were not in the vocabulary; and thus could not be predicted by our model. As expected, the precision and recall in this setting were significantly lower. Compared scoring functions and baselines We compared the following information-theoretic scoring functions, all weighted by classifier confidence. We also evaluated three simpler baselines: (5) random A random ranking of labels within each image. (6) confidence, ranking based on classifier confidence only, where labels with highest confidence were ranked first. When two labels had the same confidence values, we broke ties randomly. (7) term frequency p c (l i |I), ranked in a descending order. Note that in our data, the term frequency produces the same ranking as singletons, because all labels have a marginal probability below 0.5, hence monotonically increase with the entropy. Results We first illustrate label ranking by showing the detailed scores of all scoring functions for one image. Table 1 annotations (left column) are ordered by cw∆H , and the best label per column (scoring function) is highlighted. Note that the classifier gave a confidence of 1.0 to both airplane and vehicle. Singleton and p(x) yield the same ranking (but with different values) because the entropy grows monotonically with p. D KL prefers fine-grained classes. We next present the precision and recall of IOTA and compared methods over the full OID-test in the clean setup (Sec. 4.2.1). Figure 3 reveals that IOTA achieves high precision, including a p@1 of 64%. This precision is only slightly lower than the agreement rate of human raters (66%). See details in Table 2 for comparison. Next, we show similar curves for the noisy setup (Sec. 4.2.1). Here we also considered images where the ground-truth label is not included in the vocabulary, treating model predictions for these images as false. 
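Before turning to the results, the following Python sketch illustrates the evaluation protocol described above: a majority-vote ground-truth label per image, exclusion of images without a majority, and precision and recall at top-k computed against a single ground-truth label (so precision@1 equals recall@1). The rankings and rater labels below are made up for illustration only.

from collections import Counter

def majority_label(rater_labels):
    # Ground truth = label chosen by at least two of the three raters, else None.
    label, count = Counter(rater_labels).most_common(1)[0]
    return label if count >= 2 else None

def precision_recall_at_k(ranked_labels, gt_label, k):
    hit = gt_label in ranked_labels[:k]
    precision = (1.0 / k) if hit else 0.0      # one relevant item among k returned
    recall = 1.0 if hit else 0.0               # a single ground-truth label per image
    return precision, recall

images = [
    (["dog", "pet", "mammal"], ["dog", "dog", "puppy"]),
    (["vehicle", "car", "wheel"], ["car", "car", "vehicle"]),
    (["flower", "plant", "petal"], ["plant", "flower", "tree"]),   # no majority: skipped
]
scores = []
for ranking, ratings in images:
    gt = majority_label(ratings)
    if gt is None:
        continue                                # excluded, as in the protocol above
    scores.append(precision_recall_at_k(ranking, gt, k=1))
print(scores)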
Figure 4 shows that in this case too, cw∆H achieves the highest precision and recall compare with the other approaches. As expected, the precision and recall in this setting are lower, reaching precision@1=45%. We further tested all scoring functions using a multilabel Table 3. Qualitative example of top-ranked labels by the various scoring functions. While all annotation are correct shoe (top) or leaf (middle) are consistent with human annotations. car (bottom) cw − p(x) and singletons select an overly abstract label, while cw − DKL and cw − Image∆H select more fine grained labels. This effect was pervasive in our dataset. evaluation protocol (Sec. 4.2.1). Here, instead of taking the majority label over three rater annotations, we used all three labels (non-weighted) and computed the precision and recall of the scoring functions against that ground truth set. Results are given in Table 2, showing a similar behavior where cw∆H outperforms the other scoring functions. Ablation and comparisons. Several comparisons are worth mentioning. First, confidence-weighted approaches (imagedependent) are consistently superior to non-weighted approaches. This suggests that it is not enough to select "interesting" labels if they are not highly confident for the image. Second, The singleton model (cw-Singleton) performs quite poorly compared to the Chow-Liu tree model (cw∆H). This agrees with our observation that a key factor of label importance is how much it affects uncertainty on other labels. Finally, Image−∆H, is substantially worse, which is again consistent with the observation that structure in label space is key. Qualitative Results Table 3 lists top-ranked labels by various scoring functions for three images. cw∆H consistently agrees with human annotations (marked in bold), capturing an intermediate, more informative category compared with other scoring functions. In the top row, if based only on high confidence the image content could be described as either shoe, footwear or purple. While all three are technically correct, shoe is the most natural, informative title for that image. The middle row (leaf) had 20 predicted annotations (only 6 shown); all approaches other than cw∆H failed to return "leaf". Finally, the car example (bottom) demonstrates a common phenomena where cw − P (x) and cw − Singleton prefer to more abstract categories whereas cw − D KL and cw − Image∆H prefer fine-grained labels. These results are all built on a Chow-Liu graphical model. Figure 5 illustrates parts of the tree that was formed around Figure 5. Part of the Chow-Liu tree around the label "dog". The model clearly captures semantic relations, even-though they are not explicitly enforced. For instance the label "pet" is connected directly to "dog", and "truck" and "bike" connected to "vehicle" the label dog (38 of 765 labels; validation set). The labeldependency structure reflects sensible label semantics where concepts are grouped in a way that agrees with their meaning (mostly). Note that this tree structure is not a hierarchical model, but only captures the pairwise dependencies among label co-occurrence in the open-images dataset. Robustness to hyper parameters. We tested the robustness of IOTA to the two hyper parameters of the model. (1) The number of trees in the mixture model; and (2) The size of the vocabulary analyzed. For the first, we computed all scoring functions for tree mixtures with 1,3,5 and 10 trees, and found only a 3% difference in the p@1 of cw∆H. 
Second, we tested robustness to the number of words in our vocabulary. The vocabulary size is important because our analysis was performed over the most frequent labels Figure 6. Robustness to vocabulary size. Different thresholds for the minimum number of label occurrence were tested. The precision of cw∆H remains very high for a large range of vocabulary sizes. The relationship between the different scoring functions is consistent as well. in the corpus. As a result, the size of the vocabulary could have affected precision, because entry-level terms (dog, car) tend to be more frequent than more fine-grained terms (e.g. Labrador, Toyota). We repeated our analysis with different thresholds on the minimum label frequency included in the vocabulary (threshold for values of 50, 100:1000) Figure 6 plots the precision@1 of the various scoring functions, showing that the analysis is robust to the size of the vocabulary. Conclusions We present an unsupervised approach to select informative annotation for a visual scene. We model the prior knowledge of the visual experience using the joint distribution of labels, and use it to rank labels per-image according to how much entropy they can remove over the label distribution. The top ranked labels are the most "intuitive", showing high agreement with human raters. These results are non-trivial as the model does not use any external source of semantic information besides label concurrence. Several questions remain open. First, while our current experiments capture common context, the approach can be extended to any context. It would be interesting to apply this method to expert annotators with the aim of retrieving listener-specific context. Second, easy-to-learn quantifiers of label importance can be used to improve loss functions in multi-class training, assigning more weight to more important labels. A. Implementation Details Algorithm 1 describes in detail the steps to compute the cw∆H scores for a set of labels. Algorithm 2 describes the inference phase, where the computed scores provide an information-based ranking of the image annotations. Here, we do not specify whether we take a single label as ground truth (by majority) or multiple labels (see Sec 4.2) but give a general framework. Figure 7 illustrates the annotations ranking for some images from OID test-set. In these examples we give the full, raw output of our experiments, showing results from all scoring-functions, with or without the confidence weights. "verification" column specifies whether the label was verified by OID raters as correct. "R*" columns present our raters response (see Sec. 4.1) and "y-true" column is the ground truth determined by majority. R* columns in which no entry is marked "1", means that the rater's label was not in the vocabulary. for label pair l i , l j ∈ L do 5: B. Qualitative Examples Compute pair (2x2) joint distribution p(l i , l j ) in A 6: Compute the mutual information I i,j of (Eq. 7) Find a maximum weight spanning tree (MST, Chow-Liu tree) 10: Sort the graph such that each node has a single parent 11: H ← Compute tree entropy over G (Eq. 8) 12: for each label ∈ L do 13: Set l i as root, direct all other edges such that all node are descendents of l i . 14: Set the root marginal p(l i ) = [0, 1] 15: Propagate p(l i ) throughout the tree, compute new marginals. 16: Rank image annotations by cw∆H. H i ← label- 6: Evaluate against ground-truth label. 7: Compute precision and recall. 8: end for 9: Average precision and recall across images.
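As a concrete illustration of the conditioning step used by Algorithm 1 above (treat the transmitted label as the root, clamp it to true, propagate marginals along the edges, and apply the entropy chain rule of Eq. (8)), here is a hedged Python sketch. It is not the released implementation: it reuses the tree, corpus, and labels objects from the earlier sketches and estimates the edge conditionals directly from the toy corpus.

import math
from collections import deque

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def prob_true(corpus, label):
    return sum(label in img for img in corpus) / len(corpus)

def cond_prob_true(corpus, child, parent, parent_val):
    # P(child = true | parent = parent_val), estimated from the corpus.
    denom = sum((parent in img) == parent_val for img in corpus)
    if denom == 0:
        return 0.0
    num = sum((child in img) and ((parent in img) == parent_val) for img in corpus)
    return num / denom

def tree_entropy(tree, corpus, root, root_true_prob):
    # Chain-rule entropy (Eq. 8) of the tree-factored distribution, with the
    # root marginal set to root_true_prob and marginals propagated outward.
    h = binary_entropy(root_true_prob)
    p_true = {root: root_true_prob}
    queue, visited = deque([root]), {root}
    while queue:
        parent = queue.popleft()
        for child in tree.neighbors(parent):
            if child in visited:
                continue
            visited.add(child)
            pt = p_true[parent]
            c_t = cond_prob_true(corpus, child, parent, True)
            c_f = cond_prob_true(corpus, child, parent, False)
            p_true[child] = pt * c_t + (1 - pt) * c_f
            # Contribution of H(child | parent) under the current parent marginal.
            h += pt * binary_entropy(c_t) + (1 - pt) * binary_entropy(c_f)
            queue.append(child)
    return h

def tree_delta_H(tree, corpus, label):
    # Entropy reduction of Eq. (2) under the tree model: clamp the label to true.
    return (tree_entropy(tree, corpus, label, prob_true(corpus, label))
            - tree_entropy(tree, corpus, label, 1.0))

for name in labels:
    print(f"tree dH({name}) = {tree_delta_H(tree, corpus, name):+.3f} bits")

Weighting these per-label scores by the classifier confidence and ranking them, as in Algorithm 2, then gives the cw∆H ordering used throughout the experiments.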
5,260
1812.09922
2906539509
High bandwidth requirements are an obstacle for accelerating the training and inference of deep neural networks. Most previous research focuses on reducing the size of kernel maps for inference. We analyze parameter sparsity of six popular convolutional neural networks - AlexNet, MobileNet, ResNet-50, SqueezeNet, TinyNet, and VGG16. Of the networks considered, those using ReLU (AlexNet, SqueezeNet, VGG16) contain a high percentage of 0-valued parameters and can be statically pruned. Networks with Non-ReLU activation functions in some cases may not contain any 0-valued parameters (ResNet-50, TinyNet). We also investigate runtime feature map usage and find that input feature maps comprise the majority of bandwidth requirements when depth-wise convolution and point-wise convolutions used. We introduce dynamic runtime pruning of feature maps and show that 10 of dynamic feature map execution can be removed without loss of accuracy. We then extend dynamic pruning to allow for values within an epsilon of zero and show a further 5 reduction of feature map loading with a 1 loss of accuracy in top-1.
Rather, we look at all the feature maps and remove the maps that are dynamically determined not to be participating in the classification. Rhu @cite_10 recently described a compressing DMA engine (cDMA) that improved virtualized DNN (vDNN) performance by 32%. Our technique prunes by channel rather than by individual elements. This benefits instruction set processors, particularly signal processors, because data can be easily loaded into the processor using sliding windows.
{ "abstract": [ "Popular deep learning frameworks require users to fine-tune their memory usage so that the training data of a deep neural network (DNN) fits within the GPU physical memory. Prior work tries to address this restriction by virtualizing the memory usage of DNNs, enabling both CPU and GPU memory to be utilized for memory allocations. Despite its merits, virtualizing memory can incur significant performance overheads when the time needed to copy data back and forth from CPU memory is higher than the latency to perform DNN computations. We introduce a high-performance virtualization strategy based on a \"compressing DMA engine\" (cDMA) that drastically reduces the size of the data structures that are targeted for CPU-side allocations. The cDMA engine offers an average 2.6x (maximum 13.8x) compression ratio by exploiting the sparsity inherent in offloaded data, improving the performance of virtualized DNNs by an average 53 (maximum 79 ) when evaluated on an NVIDIA Titan Xp." ], "cite_N": [ "@cite_10" ], "mid": [ "2962821792" ] }
DYNAMIC RUNTIME FEATURE MAP PRUNING A PREPRINT
Deep Neural Networks (DNN) [1] have been developed to identify relationships in high-dimensional data. Recent neural network designs have shown superior performance over traditional methods in many domains including handwriting recognition, voice synthesis, object classification, and object detection. Using neural networks consists of two steps -training and inference. Training involves taking input data, comparing it against a ground truth of labels, and then updating the weights of the neurons to reduce the error between the the network's output and the ground truth. Training is very compute intensive and typically performed in data centers, on dedicated Graphics Processing Units (GPUs), or on specialized accelerators such as Tensor Processing Units (TPUs). Figure 1 shows an image of a dog and a Convolutional Neural Network (CNN) trained to recognize it. A CNN is a class of DNN where a convolution operation is applied to input data. A convolution calculation along with a well trained 3D tensor filter (also known as kernel) can be used identify objects in images. The filters work by extracting multiple smaller bit maps known as feature maps since they "map" portions of the image to different filters. Typically the input pixel image is encoded with separate Red/Green/Blue (RGB) pixels. These are operated on independently and the resulting matrix-matrix multiply for each slice of the tensor is often called a channel [1]. Inference involves processing data on a neural network that has previously been trained. No error computation or back-propagation is traditionally performed in inference. Therefore, the compute requirements are significantly reduced compared to training. However, modern deep neural networks have become quite large with hundreds of hidden layers and upwards of a billion parameters (coefficients) [2]. With increasing size, it is no longer possible to maintain data and parameters in processors caches. Therefore data must be stored in external memory causing significant loading Figure 1: Convolutional Neural Network: Convolutional filters are arranged in layers to detect features in an object. These features "map" the input image to specific characteristics to be predicted from the input data [1]. requirements (bandwidth usage). Reducing DNN bandwidth usage has been studied by many researchers and methods of compressing networks have been investigated. Results have shown the number of parameters can be significantly reduced without loss of accuracy. Previous work includes parameter quantization [3], low-rank decomposition [4], and network pruning which we describe more fully below. shows an simple example of network pruning. Network pruning involves taking a designed neural network and removing neurons with the benefit of reducing computational complexity, power dissipation, and memory loading. Surprisingly, neurons can often be removed without significant loss of accuracy. Network pruning generally falls into the categories of static pruning and dynamic pruning. Static pruning chooses which neurons to remove before the network is deployed. It considers parameter values at or near 0 and removes neurons that wouldn't contribute to classifications. Statically pruned networks may optionally be retrained [6]. While retraining is time consuming it may lead to better performance than leaving the weights as calculated from the unpruned network [7]. With static pruning the pruned models are fixed to an often irregular network structure. 
A fixed network is also unable to take advantage of 0-valued input data. Dynamic pruning determines at runtime which neurons will not participate in the classification activity. Dynamic runtime pruning can overcome limitations of static pruning as well as take advantage of changing input data while still reducing parameter loading (bandwidth) and power dissipation. One possible implementation of dynamic runtime pruning considers any parameters that are trained as 0-values are implemented within a processing element (PE) in such a way that the PE is inhibited from participating in the computation [8]. Sparse matrices fall into this category [9]. A kernel map comes from pre-trained coefficient matrices stored in external memory. These are usually saved as a weights file. The kernel is a filter that has the ability to identify input data features. Most dynamic runtime pruning approaches remove kernels of computation [6,10,11]. In this approach, loading bandwidth is reduced by suppressing the loading of weights. Another approach for convolutional neural networks is to dynamically remove feature maps (sometimes called filter "channels"). In this approach channels that are not participating in the classification determination are removed at runtime. This type of dynamic runtime pruning is the focus of this paper. In this paper we introduce a method of dynamic runtime network pruning for CNNs that removes feature maps that are not participating in the classification of an object. For networks not amenable to static pruning this can reduce the number of feature maps loaded without loss of accuracy. Retraining is not required and the network preserves Top-1 accuracy. We provide implementation results showing on average a 10+% feature map loading reduction. We further extend the technique to allow for pruning of feature maps within an epsilon of 0 thereby including networks that use non-zero activation functions. This paper is organized as follows. In Section 2 we discuss our research methodology. In Section 3 we analyze experimental results. In Section 4 we compare our method with related techniques. In Section 5 we discuss the effectiveness of our technique for certain classes of networks and describe our future research. Finally, in Section 6 we conclude and summarize our results. Even with memory performance improvements, bandwidth is still a limiting factor in many neural network designs [12]. When direct connection to memory is not possible, common bus protocols such as PCIe further limit the peak available bandwidth within a system. Once a system is fixed, further performance improvements may only be achieved by reducing the bandwidth requirements of the networks being implemented. One way of achieving this is by reducing the number of feature maps being loaded. Most current techniques only prune kernel maps. Our technique proposes not to remove entire kernels but only specific feature maps that do not contribute to the effectiveness of the network. This is done dynamically at runtime and has the advantage of reducing the number of feature maps loaded (i.e. bandwidth) without limiting the type of network architecture that can be pruned. property and allow small negative values of x so as to smooth gradients during training [13]. A result of this is that they have many less 0-valued parameters. ResNet and TinyNet both use leaky ReLU and as shown in Figure 4 they both have low 0-sparsity. However, in some cases, while not exactly zero, the values may be close to 0. 
Equation 1 shows the activation function we compute for feature maps with epsilon pruning:

active(x) =
    x,          if ε < x
    0,          if −0.01 ≤ x ≤ ε
    0.01 × x,   if x < −0.01.   (1)

For any value of x greater than ε, the function returns x. For positive values of x that are less than ε, the function returns 0, and the value is effectively pruned. To accommodate leaky ReLU, we multiply negative values of x by a small coefficient so that they, too, become small values that are likely to be pruned.

Algorithm 1: Dynamic Feature Map Pruning
Input: channel dimensions, i.e., height, width and number of channels (H, W, C); capability of the processor, i.e., the maximum width and height it can process at once (h, w).
Output: markers for channels filled with small values. We define the column "part + 1" as the zero mark of each channel.
for each channel i in C
    // get the number of pieces per channel
    channel_part = ceil((W × H) / (w × h))
    // a part is marked as zero only if all of its values are within ε of zero
    for all parts j in channel_part
        channel_zero_mark[i][j] = 1
        for all elements k in w × h
            if abs(value[k]) ≥ ε
                channel_zero_mark[i][j] = 0
            end if
        end for
    end for
    // check whether the whole channel is zero by summing its part marks
    channel_zero_mark[i][channel_part + 1] = sum(channel_zero_mark[i][0 : channel_part])
end for

Algorithm 1 describes a brute-force, naive technique for dynamic feature map pruning. It is applied after the activation function of Equation 1. For all feature map channels in a convolutional neural network we look at the element values. If we determine that a feature map has 0-valued coefficients such that the entire channel is unused, we mark it and subsequently do not compute any values for that feature map. Specifically, we count the number of zeros (or values whose absolute value is less than ε). If the entire channel is filled with values less than ε, we regard this channel as a zero channel and mark it for later identification. When loading a feature map for processing, if the channel was marked as being within an ε of 0, we prune it. Some implementations may not provide sufficient neurons (multiply-accumulate units) to process an entire feature map simultaneously. In this case we break the feature map into smaller pieces. We then sum the flags of the parts to determine whether the entire feature map is filled with zero elements. If so, the entire feature map is marked as zero-filled and will be skipped, thus saving feature map loading and reducing bandwidth requirements. The source code for our technique is available on GitHub 1 .
Experimental Results
Classification Accuracy. We implemented our dynamic pruning algorithm using Darknet, a C language deep neural network framework [14]. We compute statistics by counting the number of feature maps loaded, noting that if a computation unit has a small cache, a feature map will be loaded more than once. In this work we do not consider this additional effect. To validate our technique we used the ILSVRC2012-50K image dataset containing 1000 classes [15]. Table 3 lists our results based on the 50,000 images using GPU acceleration. Our results show that pruning with ε = 0.1 causes no significant loss in top-1 accuracy while reducing feature map loading by up to 10%. At ε = 0.2, top-1 accuracy dropped, but top-5 accuracy is still state-of-the-art. We note that not all networks improved using this approach; we comment on that in Section 5. The results show that layers using ReLU activation have the most feature maps removed and therefore the highest feature map loading reduction. Leaky ReLU with ε = 0.0 has little advantage as its feature maps do not have many 0-values.
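As an illustration of Eq. (1) and the channel-marking step of Algorithm 1 above, the following NumPy sketch applies the ε-thresholded activation and marks channels whose values all lie within ε of zero so that they need not be loaded. It is a simplified stand-in for the Darknet/C implementation; the tensor shapes and constants are made up.

import numpy as np

def epsilon_activation(x, eps=0.1, neg_coeff=0.01):
    # Sketch of Eq. (1): keep activations above eps, zero out values in
    # [-0.01, eps], and shrink strongly negative (leaky-ReLU) outputs so that
    # they too fall near zero and are likely to be pruned.
    out = np.where(x > eps, x, 0.0)
    out = np.where(x < -0.01, neg_coeff * x, out)
    return out

def zero_channel_marks(fmap, eps=0.1):
    # fmap: feature maps with shape (C, H, W). A channel is marked prunable
    # when every element is within eps of zero, so it need not be loaded.
    return np.all(np.abs(fmap) < eps, axis=(1, 2))

# Toy usage: mark and drop prunable channels of a small random activation tensor.
fmap = epsilon_activation(0.05 * np.random.randn(8, 4, 4).astype(np.float32))
marks = zero_channel_marks(fmap)
kept = fmap[~marks]
print(f"pruned {marks.sum()} of {len(marks)} channels; kept shape {kept.shape}")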
Table 4 shows single image accuracy without pruning and pruning with = 0.1. We note that the labels are not changed (i.e. there is no prediction involved). We compare the ground truth labels with the effects exclusively related to pruning the network. In some cases the pruned network outperformed the unpruned. This is inline with other researcher's results [7,16]. The last column shows the number of feature maps pruned at = 0.1. For example, MobileNet reduced the number of feature maps loaded by 940 thousand out of a total number of 9255 thousand feature maps to be computed. This is approximated a 10% savings in the number of feature maps loaded. It should be noted that due to long simulation times Table 4 was determined using 5 random images. Therefore the results should be considered preliminary. Additional results for MobileNet not shown in Table 4 reveals that MobileNet particularly benefited from feature map pruning. With = 0 pruning, MobileNet reduced 36/54 ReLU activated convolutional layers resulting in a feature map loading reduction of 7.8%. AlexNet reduced 3/5 ReLu activated convolutional layers reduced feature map loading of 5.1%, while SqueezeNet reduced 5/26 layers resulting in a 0.7% reduction. Other networks using leaky ReLU, as anticipated, do not have reduced feature map loading with = 0. Figure 6 shows the convolutional layer-by-layer feature map loading with and without dynamic pruning. Figure 6 shows the feature loading requirements for MobileNet by convolution layer. The y-axis displays the Mega-bits of data required to be read. The x-axis displays the network layers. The stacked bars show the data requirements with and without dynamic pruning. = 0 is used and shows that dynamic pruning can reduce the image "dog", which shown in 1, data loading requirements by about 9.2% as averaged across all the layers. A few layers of MobileNet use linear activation functions and therefore don't benefit from = 0 pruning. We've also run AlexNet experiments on Caffe 2 with 3 and without 4 static pruning. The results shows a similar runtime feature map sparsity of about 60%. Resulting 0.45% feature map loading reduction with channel-wise dynamic feature map pruning after static pruning is applied, the result for none static pruned is also minor as 0.79%. Balance Between Feature Map Loading Reduction and Accuracy We characterized a range of values between 0 and 0.5 with a step size of 0.1 to determine if a feature map should be pruned. We evaluated the effect of on MobileNet, SqueezeNet, and TinyNet using 2000 images from ImageNet. The results show that as increases the accuracy decreases. Table 5 shows the accuracy loss with varying . Discussion and Future Work In our experiment, we count feature map loading once as we assume the system has sufficient memory to hold all the feature map weights and data. For some networks (e.g. VGG-16) that may require 60MB just for the feature map weights. For processors with less capacity this will require the maps to be partially loaded. We plan to investigate this in future works. From our experiments we conclude that for ReLU activated neural networks (e.g. AlexNet) static pruning can be effective since ReLU converts negative values to zeros leading to sparse networks. We did find minor improvements of <0.5% when applying dynamic pruning after static pruning with Caffe. However not all networks can be statically pruned (e.g. ResNet and TinyNet). 
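To illustrate the kind of ε sweep behind Table 5 (only the loading-reduction side; accuracy has to be measured on the real network), a minimal NumPy sketch could look as follows, with a random tensor standing in for real activations.

import numpy as np

def prunable_fraction(fmaps, eps):
    # fmaps: batch of activations with shape (N, C, H, W).
    marks = np.all(np.abs(fmaps) < eps, axis=(2, 3))   # (N, C) prunable flags
    return marks.mean()

batch = 0.05 * np.random.randn(32, 64, 8, 8)           # stand-in for real activations
for eps in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"eps={eps:.1f}: {100 * prunable_fraction(batch, eps):.1f}% of channels prunable")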
Figure 7 shows two CNN network architectures -one using traditional convolutional kernels (figure 7(a)) and one using depth-wise convolutional kernels ( figure 7(b)). Modern CNNs tend to use depth-wise convolutional kernels due to reduced computational complexity [37]. A traditional convolutional layer for an I input, O output, K kernel, A area has I × K × O × A computations required. A depth-wise convolutional layer requires only I × K × A computations, Depth-wise convolutional layers, in addition to requiring less computations, have the advantage that the feature map used only once to generate new ones thereby reducing feature map loading. Significantly, with the reduced kernel size, they also increase the ratio of feature maps to kernel maps (weight) by a factor of O. We expect this to be of benefit in dynamic feature map pruning. Point-wise convolutions have a similar benefit due to the reduction in weights. Table 5 shows that for some networks an = 0.2 feature map pruning had no loss of accuracy for top-1. As shown in figure 8, we believe that is because in the latter part of the network, part of the scattered weights have been transferred to the filter with the highest probability of predicting the correct classification. This is an area of future research. As we introduced in Section 4, compression networks typically operate on kernel maps (weights). The more network weights that are statically pruned, the more bandwidth will be consumed by feature maps. Further, unless statically pruning removes an entire neuron, it can not reduce bandwidth usage since the zero feature maps will still be loaded. We have shown that even without weight optimizations, feature map bandwidth is comparable to weight bandwidth when depth-wise (or point-wise) convolutions are employed. Additionally, weight pruning tunes the ratio of weights to feature maps for each layer to balance accuracy and compression. This requires retraining each time a layer is pruned. As networks become deep with many layers, retraining after pruning each layer is computationally expensive. As static pruning provides 50% or less contribution to convolutional layer weight compression, they don't hurt the high runtime sparsity of up to 70%, with our pruning activation the feature map sparsity going up with minor or without loss of accuracy. The 10% all-zero feature map also provides additional opportunity of kernel map pruning. In this work we have determined which feature maps to prune using a fixed epsilon. Our future work focuses on determining and possibly dynamically modifying these values by inspecting layer channels. This technique might also be useful during training. As we see in figure 4 some CNNs have 50+% sparsity. In such cases we found only 10% channel-wise reduction. We suspect that in a well trained classification model, among the large number of layers and channels, there should Figure 9: Sparsity distribution of weights: Zeros may be distributed through feature maps (left), tensors (middle), or tensor arrays (right) be specific maps that predict the final class. Our future work will look specifically at designing networks that take advantage of dynamic feature map pruning where few parameters are 0-valued but feature maps may dynamically be 0-valued. We suspect this may also be of benefit to capsule networks [39]. Conclusion In this paper, we analyzed feature map parameter sparsity of six different convolutional neural networks -AlexNet, MobileNet, ResNet-50, SqueezeNet, TinyNet, and VGG16. 
We found a range of sparsity from no sparsity to greater than 50% sparsity. When considering parameter values an epsilon away from 0, all networks exhibited some level of sparsity. Of the networks considered, those using ReLU (AlexNet, SqueezeNet, VGG16) contain a high percentage of 0-valued parameters (50%+) and can be statically pruned. However, static pruning can lead to irregular networks. Networks with non-ReLU activation functions in some cases may not contain any 0-valued parameters (ResNet-50, TinyNet). Further, static pruning of large networks that require retraining may not be computationally feasible when epsilon values near 0 are considered. We also investigated runtime feature map usage and found that input feature maps comprise the majority of bandwidth requirements when depth-wise and point-wise convolutions are used. Our approach uses dynamic runtime pruning of feature maps rather than parameters. This technique is complementary to static pruning and does not require retraining of the CNN. Using this technique we show that 10% of dynamic feature map execution can be removed without loss of accuracy. We then extend dynamic pruning to allow for values within an epsilon of zero and show a further 5% reduction of feature map loading with a 1% loss of accuracy in top-1. We achieved a slight further reduction on networks that were able to be statically pruned. As depth-wise and point-wise convolutional kernels become more common, the amount of computation performed on feature maps will increase, possibly benefiting further from dynamic pruning.
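As a back-of-the-envelope illustration of the computation counts discussed earlier (I × K × O × A for a traditional layer versus I × K × A for a depth-wise layer), the snippet below evaluates both for one hypothetical layer; the dimensions are made up for illustration and are not taken from the measured networks.

```python
# Rough per-layer cost comparison (hypothetical dimensions).
I, O = 64, 128          # input / output channels
K = 3 * 3               # kernel elements
A = 56 * 56             # output spatial area

traditional = I * K * O * A   # standard convolution
depthwise = I * K * A         # depth-wise convolution (one filter per input channel)

print(f"traditional: {traditional:,} multiply-accumulates")   # 231,211,008
print(f"depth-wise : {depthwise:,} multiply-accumulates")     # 1,806,336
print(f"reduction  : {traditional // depthwise}x (a factor of O)")
```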
3,038
1907.03048
2955034357
Download fraud is a prevalent threat in mobile App markets, where fraudsters manipulate the number of downloads of Apps via various cheating approaches. Purchased fake downloads can mislead recommendation and search algorithms and further lead to bad user experience in App markets. In this paper, we investigate download fraud problem based on a company's App Market, which is one of the most popular Android App markets. We release a honeypot App on the App Market and purchase fake downloads from fraudster agents to track fraud activities in the wild. Based on our interaction with the fraudsters, we categorize download fraud activities into three types according to their intentions: boosting front end downloads, optimizing App search ranking, and enhancing user acquisition&retention rate. For the download fraud aimed at optimizing App search ranking, we select, evaluate, and validate several features in identifying fake downloads based on billions of download data. To get a comprehensive understanding of download fraud, we further gather stances of App marketers, fraudster agencies, and market operators on download fraud. The followed analysis and suggestions shed light on the ways to mitigate download fraud in App markets and other social platforms. To the best of our knowledge, this is the first work that investigates the download fraud problem in mobile App markets.
Previous works have investigated various kinds of security issues in App markets. Chen @cite_14 and Rahman @cite_32 analyzed malware dissemination in Google Play. Zhu @cite_25 and Chen @cite_22 studied the suspicious Apps involved in search ranking in the iOS App Store. @cite_15 delved into crowdsourced spam reviews in both Google Play and the iOS App Store. @cite_3 gave a longitudinal analysis of Apps in Google Play and provided suggestions on detecting search ranking fraud. According to @cite_31, Google Play does not eliminate all fake downloads. Moreover, few previous works have investigated this problem, mainly due to the lack of ground truth on fraud activities. The data crawled from the front end, which has limited information, also hinders previous work from conducting a comprehensive study of download fraud. In this work, with server-side data and device vendor information as the ground truth, we can take a holistic approach to probe download fraud in the App market.
{ "abstract": [ "", "An app market's vetting process is expected to be scalable and effective. However, today's vetting mechanisms are slow and less capable of catching new threats. In our research, we found that a more powerful solution can be found by exploiting the way Android malware is constructed and disseminated, which is typically through repackaging legitimate apps with similar malicious components. As a result, such attack payloads often stand out from those of the same repackaging origin and also show up in the apps not supposed to relate to each other. Based upon this observation, we developed a new technique, called MassVet, for vetting apps at a massive scale, without knowing what malware looks like and how it behaves. Unlike existing detection mechanisms, which often utilize heavyweight program analysis techniques, our approach simply compares a submitted app with all those already on a market, focusing on the difference between those sharing a similar UI structure (indicating a possible repackaging relation), and the commonality among those seemingly unrelated. Once public libraries and other legitimate code reuse are removed, such diff common program components become highly suspicious. In our research, we built this \"Diff-Com\" analysis on top of an efficient similarity comparison algorithm, which maps the salient features of an app's UI structure or a method's control-flow graph to a value for a fast comparison. We implemented MassVet over a stream processing engine and evaluated it nearly 1.2 million apps from 33 app markets around the world, the scale of Google Play. Our study shows that the technique can vet an app within 10 seconds at a low false detection rate. Also, it outperformed all 54 scanners in VirusTotal (NOD32, Symantec, McAfee, etc.) in terms of detection coverage, capturing over a hundred thousand malicious apps, including over 20 likely zero-day malware and those installed millions of times. A close look at these apps brings to light intriguing new observations e.g., Google's detection strategy and malware authors' countermoves that cause the mysterious disappearance and reappearance of some Google Play apps.", "Incentivized by monetary gain, some app developers launch fraudulent campaigns to boost their apps' rankings in the mobile app stores. They pay some service providers for boost services, which then organize large groups of collusive attackers to take fraudulent actions such as posting high app ratings or inflating apps' downloads. If not addressed timely, such attacks will increasingly damage the healthiness of app ecosystems. In this work, we propose a novel approach to identify attackers of collusive promotion groups in an app store. Our approach exploits the unusual ranking change patterns of apps to identify promoted apps, measures their pairwise similarity, forms targeted app clusters (TACs), and finally identifies the collusive group members. Our evaluation based on a dataset of Apple's China App store has demonstrated that our approach is able and scalable to report highly suspicious apps and reviewers. App stores may use our techniques to narrow down the suspicious lists for further investigation.", "", "Recently emerged app markets provide a centralized paradigm for software distribution in smartphones. The difficulty of massively collecting app data has led to a lack a good understanding of app market dynamics. In this paper we seek to address this problem, through a detailed temporal analysis of Google Play, Google's app market. 
We perform the analysis on data that we collected daily from 160,000 apps, over a period of six months in 2012. We report often surprising results. For instance, at most 50 of the apps are updated in all categories, which significantly impacts the median price. The average price does not exhibit seasonal monthly trends and a changing price does not show any observable correlation with the download count. In addition, productive developers are not creating many popular apps, but a few developers control apps which dominate the total number of downloads. We discuss the research implications of such analytics on improving developer and user experiences, and detecting emerging threat vectors.", "With the rapid adoption of smartphones worldwide and the reliance on app marketplaces to discover new apps, these marketplaces are critical for connecting users with apps. And yet, the user reviews and ratings on these marketplaces may be strategically targeted by app developers. We investigate the use of crowdsourcing platforms to manipulate app reviews. We find that (i) apps targeted by crowdsourcing platforms are rated significantly higher on average than other apps; (ii) the reviews themselves arrive in bursts; (iii) app reviewers tend to repeat themselves by relying on some standard repeated text; and (iv) apps by the same developer tend to share a more similar language model: if one app has been targeted, it is likely that many of the other apps from the same developer have also been targeted.", "Ranking fraud in the mobile App market refers to fraudulent or deceptive activities which have a purpose of bumping up the Apps in the popularity list. Indeed, it becomes more and more frequent for App developers to use shady means, such as inflating their Apps' sales or posting phony App ratings, to commit ranking fraud. While the importance of preventing ranking fraud has been widely recognized, there is limited understanding and research in this area. To this end, in this paper, we provide a holistic view of ranking fraud and propose a ranking fraud detection system for mobile Apps. Specifically, we first propose to accurately locate the ranking fraud by mining the active periods, namely leading sessions, of mobile Apps. Such leading sessions can be leveraged for detecting the local anomaly instead of globalanomaly of App rankings. Furthermore, we investigate three types of evidences, i.e., ranking based evidences, rating based evidences and review based evidences, by modeling Apps' ranking, rating and review behaviors through statistical hypotheses tests. In addition, we propose an optimization based aggregation method to integrate all the evidences for fraud detection. Finally, we evaluate the proposed system with real-world App data collected from the iOS App Store for a long time period. In the experiments, we validate the effectiveness of the proposed system, and show the scalability of the detection algorithm as well as some regularity of ranking fraud activities." ], "cite_N": [ "@cite_31", "@cite_14", "@cite_22", "@cite_32", "@cite_3", "@cite_15", "@cite_25" ], "mid": [ "", "1445387515", "2599937196", "2962975150", "2292487094", "2741784227", "2009109712" ] }
0
1907.03048
2955034357
Download fraud is a prevalent threat in mobile App markets, where fraudsters manipulate the number of downloads of Apps via various cheating approaches. Purchased fake downloads can mislead recommendation and search algorithms and further lead to bad user experience in App markets. In this paper, we investigate download fraud problem based on a company's App Market, which is one of the most popular Android App markets. We release a honeypot App on the App Market and purchase fake downloads from fraudster agents to track fraud activities in the wild. Based on our interaction with the fraudsters, we categorize download fraud activities into three types according to their intentions: boosting front end downloads, optimizing App search ranking, and enhancing user acquisition&retention rate. For the download fraud aimed at optimizing App search ranking, we select, evaluate, and validate several features in identifying fake downloads based on billions of download data. To get a comprehensive understanding of download fraud, we further gather stances of App marketers, fraudster agencies, and market operators on download fraud. The followed analysis and suggestions shed light on the ways to mitigate download fraud in App markets and other social platforms. To the best of our knowledge, this is the first work that investigates the download fraud problem in mobile App markets.
The outcome of download fraud is similar to click fraud, which is a type of fraud that occurs in pay-per-click online advertising @cite_38. Click fraudsters usually inject fake clicks into target URLs using click bots and steal money from advertisers. To detect click fraud, @cite_38 employed peer-to-peer measurements, command-and-control telemetry, and contemporaneous click data to analyze click fraud on botnets. @cite_28 devised various temporal and statistical patterns to detect click fraud in online advertising. @cite_35 leveraged behavior features and click patterns to detect spam URL sharing. The download fraud we investigate in this paper is more complicated than click fraud (i.e., it mixes human and bot activities). Inspired by the click fraud detection works mentioned above, we propose to model download fraud activities from a multiview, feature-based perspective.
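As a purely illustrative sketch of what a multiview, feature-based treatment of download records could look like, the code below derives a few simple temporal and statistical features per device; the field names, views, and features are hypothetical and are not the features selected or validated in this paper.

```python
from collections import defaultdict
from statistics import pstdev

def per_device_features(download_events):
    """Toy temporal/statistical features per device (hypothetical fields)."""
    by_device = defaultdict(list)
    for ev in download_events:          # ev: {"device_id", "timestamp", "app_id"}
        by_device[ev["device_id"]].append(ev)

    features = {}
    for dev, evs in by_device.items():
        times = sorted(e["timestamp"] for e in evs)
        gaps = [b - a for a, b in zip(times, times[1:])] or [0.0]
        features[dev] = {
            "n_downloads": len(evs),                           # volume view
            "n_distinct_apps": len({e["app_id"] for e in evs}),
            "mean_gap_s": sum(gaps) / len(gaps),               # temporal view
            "gap_std_s": pstdev(gaps) if len(gaps) > 1 else 0.0,
        }
    return features
```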
{ "abstract": [ "Click fraud-the deliberate clicking on advertisements with no real interest on the product or service offered-is one of the most daunting problems in online advertising. Building an effective fraud detection method is thus pivotal for online advertising businesses. We organized a Fraud Detection in Mobile Advertising (FDMA) 2012 Competition, opening the opportunity for participants to work on real-world fraud data from BuzzCity Pte. Ltd., a global mobile advertising company based in Singapore. In particular, the task is to identify fraudulent publishers who generate illegitimate clicks, and distinguish them from normal publishers. The competition was held from September 1 to September 30, 2012, attracting 127 teams from more than 15 countries. The mobile advertising data are unique and complex, involving heterogeneous information, noisy patterns with missing values, and highly imbalanced class distribution. The competition results provide a comprehensive study on the usability of data mining-based fraud detection approaches in practical setting. Our principal findings are that features derived from fine-grained time-series analysis are crucial for accurate fraud detection, and that ensemble methods offer promising solutions to highly-imbalanced nonlinear classification tasks with mixed variable types and noisy missing patterns. The competition data remain available for further studies at http: palanteer.sis.smu.edu.sg fdma2012 .", "Click fraud is a scam that hits a criminal sweet spot by both tapping into the vast wealth of online advertising and exploiting that ecosystem's complex structure to obfuscate the flow of money to its perpetrators. In this work, we illuminate the intricate nature of this activity through the lens of ZeroAccess--one of the largest click fraud botnets in operation. Using a broad range of data sources, including peer-to-peer measurements, command-and-control telemetry, and contemporaneous click data from one of the top ad networks, we construct a view into the scale and complexity of modern click fraud operations. By leveraging the dynamics associated with Microsoft's attempted takedown of ZeroAccess in December 2013, we employ this coordinated view to identify \"ad units\" whose traffic (and hence revenue) primarily derived from ZeroAccess. While it proves highly challenging to extrapolate from our direct observations to a truly global view, by anchoring our analysis in the data for these ad units we estimate that the botnet's fraudulent activities plausibly induced advertising losses on the order of $100,000 per day.", "Social media systems like Twitter and Facebook provide a global infrastructure for sharing information, and in one popular direction, of sharing web hyperlinks. Understanding the behavioral signals of both how URLs are inserted into these systems (via posting by users) and how URLs are received by social media users (via clicking) can provide new insights into social media search, recommendation, and user profiling, among many others. Such studies, however, have traditionally been difficult due to the proprietary (and sometimes private) nature of much URL-related data. Hence, in this paper, we begin a behavioral examination of URL sharing through two distinct perspectives: (i) the first is via a study of how these links are posted through publicly-accessible Twitter data; (ii) the second is via a study of how these links are received by measuring their click patterns through the publicly-accessible Bitly click API. 
We examine the differences between posting and click patterns in a sample application domain: the classification of spam URLs. We find that these behavioral signals - posting versus clicking - provide overlapping but fundamentally different perspectives on URLs, and that these perspectives can inform the design of future applications of spam link detection and link sharing." ], "cite_N": [ "@cite_28", "@cite_38", "@cite_35" ], "mid": [ "2148086182", "2097470225", "2028679974" ] }
0
1907.03048
2955034357
Download fraud is a prevalent threat in mobile App markets, where fraudsters manipulate the number of downloads of Apps via various cheating approaches. Purchased fake downloads can mislead recommendation and search algorithms and further lead to bad user experience in App markets. In this paper, we investigate download fraud problem based on a company's App Market, which is one of the most popular Android App markets. We release a honeypot App on the App Market and purchase fake downloads from fraudster agents to track fraud activities in the wild. Based on our interaction with the fraudsters, we categorize download fraud activities into three types according to their intentions: boosting front end downloads, optimizing App search ranking, and enhancing user acquisition&retention rate. For the download fraud aimed at optimizing App search ranking, we select, evaluate, and validate several features in identifying fake downloads based on billions of download data. To get a comprehensive understanding of download fraud, we further gather stances of App marketers, fraudster agencies, and market operators on download fraud. The followed analysis and suggestions shed light on the ways to mitigate download fraud in App markets and other social platforms. To the best of our knowledge, this is the first work that investigates the download fraud problem in mobile App markets.
Regarding the investigation of black markets, several works @cite_7 @cite_23 @cite_37 @cite_17 have probed crowdsourcing websites and devised machine learning approaches to detect crowdturfing campaigns and crowd workers. Other works, like @cite_4, inspected transactions involving traded App reviews, and @cite_2 investigated crowd fraud in Internet advertising. However, little previous work has studied the black markets targeting download fraud. Like @cite_8 @cite_5 @cite_24, we launch a honeypot App in the App market to acquire reliable ground truth of download fraud activities. Moreover, we infiltrate the black market and gather useful information from fraudsters to help our analysis.
{ "abstract": [ "", "Driven by huge monetary reward, some mobile application (app) developers turn to the underground market to buy positive reviews instead of doing legal advertisements. These promotion reviews are either directly posted in app stores like iTunes and Google Play, or published on some popular websites that have many app users. Until now, a clear understanding of this app promotion underground market is still lacking. In this work, we focus on unveiling this underground market and statistically analyzing the promotion incentives, characteristics of promoted apps and suspicious reviewers. To collect promoted apps, we built an automatic data collection system, AppWatcher, which monitored 52 paid review service providers for four months and crawled all the app metadata from their corresponding app stores. Finally, AppWatcher exposes 645 apps promoted in app stores and 29, 680 apps promoted in some popular websites. The current underground market is then reported from various perspectives (e.g., service price, app volume). We identified some interesting features of both promoted apps and suspicious reviewers, which are significantly different from those of randomly chosen apps. Finally, we built a simple tracer to narrow down the suspect list of promoted apps in the underground market.", "Popular Internet services in recent years have shown that remarkable things can be achieved by harnessing the power of the masses using crowd-sourcing systems. However, crowd-sourcing systems can also pose a real challenge to existing security mechanisms deployed to protect Internet services. Many of these security techniques rely on the assumption that malicious activity is generated automatically by automated programs. Thus they would perform poorly or be easily bypassed when attacks are generated by real users working in a crowd-sourcing system. Through measurements, we have found surprising evidence showing that not only do malicious crowd-sourcing systems exist, but they are rapidly growing in both user base and total revenue. We describe in this paper a significant effort to study and understand these \"crowdturfing\" systems in today's Internet. We use detailed crawls to extract data about the size and operational structure of these crowdturfing systems. We analyze details of campaigns offered and performed in these sites, and evaluate their end-to-end effectiveness by running active, benign campaigns of our own. Finally, we study and compare the source of workers on crowdturfing sites in different countries. Our results suggest that campaigns on these systems are highly effective at reaching users, and their continuing growth poses a concrete threat to online communities both in the US and elsewhere.", "Web-based social systems enable new community-based opportunities for participants to engage, share, and interact. This community value and related services like search and advertising are threatened by spammers, content polluters, and malware disseminators. In an effort to preserve community value and ensure longterm success, we propose and evaluate a honeypot-based approach for uncovering social spammers in online social systems. Two of the key components of the proposed approach are: (1) The deployment of social honeypots for harvesting deceptive spam profiles from social networking communities; and (2) Statistical analysis of the properties of these spam profiles for creating spam classifiers to actively filter out existing and new spammers. 
We describe the conceptual framework and design considerations of the proposed approach, and we present concrete observations from the deployment of social honeypots in MySpace and Twitter. We find that the deployed social honeypots identify social spammers with low false positive rates and that the harvested spam data contains signals that are strongly correlated with observable profile features (e.g., content, friend information, posting patterns, etc.). Based on these profile features, we develop machine learning based classifiers for identifying previously unknown spammers with high precision and a low rate of false positives.", "", "", "The rise of crowdsourcing brings new types of malpractices in Internet advertising. One can easily hire web workers through malicious crowdsourcing platforms to attack other advertisers. Such human generated crowd frauds are hard to detect by conventional fraud detection methods. In this paper, we carefully examine the characteristics of the group behaviors of crowd fraud and identify three persistent patterns, which are moderateness, synchronicity and dispersivity. Then we propose an effective crowd fraud detection method for search engine advertising based on these patterns, which consists of a constructing stage, a clustering stage and a filtering stage. At the constructing stage, we remove irrelevant data and reorganize the click logs into a surfer-advertiser inverted list; At the clustering stage, we define the sync-similarity between surfers' click histories and transform the coalition detection to a clustering problem, solved by a nonparametric algorithm; and finally we build a dispersity filter to remove false alarm clusters. The nonparametric nature of our method ensures that we can find an unbounded number of coalitions with nearly no human interaction. We also provide a parallel solution to make the method scalable to Web data and conduct extensive experiments. The empirical results demonstrate that our method is accurate and scalable.", "Facebook pages offer an easy way to reach out to a very large audience as they can easily be promoted using Facebook's advertising platform. Recently, the number of likes of a Facebook page has become a measure of its popularity and profitability, and an underground market of services boosting page likes, aka like farms, has emerged. Some reports have suggested that like farms use a network of profiles that also like other pages to elude fraud protection algorithms, however, to the best of our knowledge, there has been no systematic analysis of Facebook pages' promotion methods. This paper presents a comparative measurement study of page likes garnered via Facebook ads and by a few like farms. We deploy a set of honeypot pages, promote them using both methods, and analyze garnered likes based on likers' demographic, temporal, and social characteristics. We highlight a few interesting findings, including that some farms seem to be operated by bots and do not really try to hide the nature of their operations, while others follow a stealthier approach, mimicking regular users' behavior.", "" ], "cite_N": [ "@cite_37", "@cite_4", "@cite_7", "@cite_8", "@cite_24", "@cite_23", "@cite_2", "@cite_5", "@cite_17" ], "mid": [ "", "2019333963", "2127935984", "1996802155", "", "", "2235660311", "1975219037", "" ] }
0
1812.08442
2950827124
This paper presents a "learning to learn" approach to figure-ground image segmentation. By exploring webly-abundant images of specific visual effects, our method can effectively learn the visual-effect internal representations in an unsupervised manner and uses this knowledge to differentiate the figure from the ground in an image. Specifically, we formulate the meta-learning process as a compositional image editing task that learns to imitate a certain visual effect and derive the corresponding internal representation. Such a generative process can help instantiate the underlying figure-ground notion and enables the system to accomplish the intended image segmentation. Whereas existing generative methods are mostly tailored to image synthesis or style transfer, our approach offers a flexible learning mechanism to model a general concept of figure-ground segmentation from unorganized images that have no explicit pixel-level annotations. We validate our approach via extensive experiments on six datasets to demonstrate that the proposed model can be end-to-end trained without ground-truth pixel labeling yet outperforms the existing methods of unsupervised segmentation tasks.
The idea of GAN @cite_11 is to generate realistic samples through an adversarial game between a generator @math and a discriminator @math. GANs have become popular owing to their ability to achieve unsupervised learning. However, GANs also suffer from problems such as training instability and mode collapse. Hence, later methods @cite_25 @cite_7 @cite_2 try to improve GANs in both implementation and theory. DCGAN @cite_25 provides a new framework that is more stable and easier to train. WGAN @cite_7 proposes using the Wasserstein distance as the loss. WGAN-GP @cite_2 further improves how the Lipschitz constraint is enforced by replacing weight clipping with a gradient penalty.
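To make the gradient-penalty idea concrete, here is a minimal PyTorch-style sketch of the WGAN-GP penalty term on interpolated samples; it is a generic rendering of the published formulation, not code from the cited works.

```python
import torch

def gradient_penalty(discriminator, real, fake, lambda_gp=10.0):
    """WGAN-GP penalty: pushes the critic's gradient norm toward 1 on interpolates."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    grads = torch.autograd.grad(outputs=discriminator(x_hat).sum(),
                                inputs=x_hat, create_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```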
{ "abstract": [ "Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.", "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ], "cite_N": [ "@cite_2", "@cite_25", "@cite_7", "@cite_11" ], "mid": [ "2605135824", "2173520492", "2739748921", "2099471712" ] }
Unsupervised Meta-learning of Figure-Ground Segmentation via Imitating Visual Effects
In figure-ground segmentation, the regions of interest are conventionally defined by the provided ground truth, which is usually in the form of pixel-level annotations. Without such supervised information from intensive labeling efforts, it is challenging to teach a system to learn what the figure and the ground should be in each image. To address this issue, we propose an unsupervised meta-learning approach that can simultaneously learn both the figure-ground concept and the corresponding image segmentation. The proposed formulation explores the inherent but often unnoticeable relatedness between performing image segmentation and creating visual effects. In particular, to visually enrich a given image with a special effect, one often first needs to specify the regions to be emphasized. The procedure corresponds to constructing an internal representation that guides the image editing to operate on the target image regions. For this reason, we refer to such internal guidance as the Visual-Effect Representation (VER) of the image. We observe that for a majority of visual effects, the resulting VER is closely related to image segmentation. Another advantage of focusing on visual-effect images is that such data are abundant on the Internet, while pixel-wise annotating large datasets for image segmentation is time-consuming.
Figure 1: Given the same image (1st column), imitating different visual effects (2nd column) can yield distinct interpretations of figure-ground segmentation (3rd column), which are derived by our method via referencing the following visual effects (from top to bottom): black background, color selectivo, and defocus/Bokeh. The learned VERs are shown in the last column, respectively.
However, in practice, we only have access to the visual-effect images, but not the VERs or the original images. Taking all these factors into account, we reduce the meta-problem of figure-ground segmentation to predicting the proper VER of a given image for the underlying visual effect. Owing to its data richness from the Internet, the latter task is more suitable for our intention to cast the problem within an unsupervised generative framework. Many compositional image editing tasks have the aforementioned properties. For example, to create the color selectivo effect on an image, as shown in Fig. 2, we can i) identify the target and partition the image into foreground and background layers, ii) convert the color of the background layer into grayscale, and iii) combine the converted background layer with the original foreground layer to get the final result. The operation of color conversion is local: it simply "equalizes" the RGB values of pixels in certain areas. The quality of the result depends on how properly the layers are decomposed. If a part of the target region is partitioned into the background, the result might look less plausible. Unlike the local operations, localizing the proper regions for editing requires some understanding and analysis of the global or contextual information in the whole image. In this paper, we design a GAN-based model, called Visual-Effect GAN (VEGAN), that can learn to predict the internal representation (i.e., the VER) and incorporate such information into facilitating the resulting figure-ground segmentation.
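A minimal sketch of the three-step color selectivo editing described above is given below; the soft mask here stands in for the internal guidance (VER) that the system must learn, so it illustrates only the compositional operation, not how the mask is obtained.

```python
import numpy as np

def color_selectivo(image, mask):
    """Compose a color-selectivo image from an RGB image and a soft figure mask.

    image: float array (H, W, 3) in [0, 1]; mask: float array (H, W) in [0, 1],
    1 on the figure kept in color, 0 on the background converted to grayscale.
    """
    gray = image @ np.array([0.299, 0.587, 0.114])        # luma approximation
    background = np.repeat(gray[..., None], 3, axis=2)    # grayscale background layer
    m = mask[..., None]
    return m * image + (1.0 - m) * background             # alpha-style composition
```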
We are thus motivated to formulate the following problem: Given an unaltered RGB image as the input and an image editing task with known compositional process and local operation, we aim to predict the proper VER that guides the editing process to generate the expected visual effect and accomplishes the underlying figure-ground segmentation. We adopt a data-driven setting in which the image editing task is exemplified by a collection of image samples with the expected visual effect. The task, therefore, is to transform the original RGB input image into an output image that exhibits the same effect of the exemplified samples. To make our approach general, we assume that no corresponding pairs of input and output images are available in training, and therefore supervised learning is not applicable. That is, the training data does not include pairs of the original color images and the corresponding edited images with visual effects. The flexibility is in line with the fact that although we could fetch a lot of images with certain visual effects over the Internet, we indeed do not know what their original counterpart should look like. Under this problem formulation, several issues are of our interest and need to be addressed. First, how do we solve the problem without paired input and output images? We build on the idea of generative adversarial network and develop a new unsupervised learning mechanism (shown in Figs. 2 & 3) to learn the internal representation for creating the visual effect. The generator aims to predict the internal VER and the editor is to convert the input image into the one that has the expected visual effect. The compositional procedure and local operation are generic and can be implemented as parts of the architecture of a ConvNet. The discriminator has to judge the quality of the edited images with respect to a set of sample images that exhibit the same visual effect. The experimental results show that our model works surprisingly well to learn meaningful representation and segmentation without supervision. Second, where do we acquire the collection of sample images for illustrating the expected visual effect? Indeed, it would not make sense if we have to manually generate the labor-intensive sample images for demonstrating the expected visual effects. We show that the required sample images can be conveniently collected from the Internet. We provide a couple of scripts to explore the effectiveness of using Internet images for training our model. Notice again that, although the required sample images with visual effects are available on the Internet, their original versions are unknown. Thus supervised learning of pairwise image-toimage translation cannot be applied here. Third, what can the VER be useful for, in addition to creating visual effects? We show that, if we are able to choose a suitable visual effect, the learned VER can be used to not only establish the intended figure-ground notion but also derive the image segmentation. More precisely, as in our formulation the visual-effect representation is characterized by a real-valued response map, the result of figure-ground separation can be obtained via binarizing the VER. Therefore, it is legitimate to take the proposed problem of VER prediction as a surrogate for unsupervised image segmentation. 
We have tested the following visual effects: i) black background, which is often caused by using a flashlight; ii) color selectivo, which imposes a color highlight on the subject and keeps the background in grayscale; iii) defocus/Bokeh, which is due to the depth of field of the camera lens. The second column in Fig. 1 shows the three types of visual effects. For these tasks, our model can be end-to-end trained from scratch in an unsupervised manner using training data that have neither ground-truth pixel labeling nor paired images with/without visual effects. While labor-intensive pixel-level segmentations for images are hard to acquire directly via Internet search, images with those three effects are easy to collect from photo-sharing websites, such as Flickr, using related tags. Generative Adversarial Networks The idea of GAN (Goodfellow et al. 2014) is to generate realistic samples through an adversarial game between a generator G and a discriminator D. GANs have become popular owing to their ability to achieve unsupervised learning. However, GANs also suffer from problems such as training instability and mode collapse. Hence, later methods (Radford, Metz, and Chintala 2016; Arjovsky, Chintala, and Bottou 2017; Gulrajani et al. 2017) try to improve GANs in both implementation and theory. DCGAN (Radford, Metz, and Chintala 2016) provides a new framework that is more stable and easier to train. WGAN (Arjovsky, Chintala, and Bottou 2017) proposes using the Wasserstein distance as the loss. WGAN-GP (Gulrajani et al. 2017) further improves how the Lipschitz constraint is enforced by replacing weight clipping with a gradient penalty. To reduce the burden of G, Denton et al. (Denton et al. 2015) use a pyramid structure and Karras et al. (Karras et al. 2018) consider a progressive training methodology. Both of them divide the task into smaller sequential steps. In our case, we alleviate the burden of G by incorporating some well-defined image processing operations into the network model, e.g., converting the background color into grayscale to simulate the visual effect of color selectivo, or blurring the background to create the Bokeh effect. Computer vision problems may benefit from GANs by including an adversarial loss in, say, a typical CNN model. Many intricate tasks have been shown to gain further improvements after adding an adversarial loss, such as shadow detection (Nguyen et al. 2017), saliency detection (Pan et al. 2017), and semantic segmentation (Luc et al. 2016). However, those training methodologies require paired images (with ground truth) and hence lack the advantage of unsupervised learning.
Figure 2: Learning and applying our model for the case of the "color selectivo" visual effect. The image collection for learning is downloaded using the Flickr API. Without explicit ground-truth pixel-level annotations being provided, our method can learn to estimate the visual-effect representations (VERs) from unpaired sets of natural RGB images and sample images with the expected visual effect. Our generative model is called Visual-Effect GAN (VEGAN), which has an additional component, the editor, between the generator and the discriminator. After the unsupervised learning, the generator is able to predict the VER of an input color image for creating the expected visual effect. The VER can be further transformed into figure-ground segmentation.
For the applications of modifying photo styles, some methods (Liu, Breuel, and Kautz 2017; Yi et al. 2017; Zhu et al.
2017) can successfully achieve image-to-image style transfer using unpaired data, but their results are limited to subjective evaluation. Moreover, those style-transfer methods cannot be directly applied to the task of unsupervised segmentation. Since our model has to identify the category-independent subjects for applying the visual effect without using image-pair relations and ground-truth pixel-level annotations, the problem we aim to address is more general and challenging than those of the aforementioned methods. Image Segmentation Most of the existing segmentation methods based on deep neural networks (DNNs) treat the segmentation problem as a pixel-level classification problem (Simonyan and Zisserman 2015; Long, Shelhamer, and Darrell 2015; He et al. 2016). The impressive performance relies on a large number of high-quality annotations. Unfortunately, collecting high-quality annotations at a large scale is another challenging task since it is exceedingly labor-intensive. As a result, existing datasets just provide limited-class and limited-annotation data for training DNNs. DNN-based segmentation methods thus can only be applied to a limited subset of category-dependent segmentation tasks. To reduce the dependency on detailed annotations and to simplify the acquisition of a sufficient amount of training data, a possible solution is to train DNNs in a semi-supervised manner (Hong, Noh, and Han 2015; Souly, Spampinato, and Shah 2017) or a weakly-supervised manner (Dai, He, and Sun 2015; Kwak, Hong, and Han 2017; Pinheiro and Collobert 2015) with a small number of pixel-level annotations. In contrast, our model is trained without explicit ground-truth annotations. Existing GAN-based segmentation methods (Nguyen et al. 2017; Luc et al. 2016) improve their segmentation performance using mainly the adversarial mechanism of GANs. The ground-truth annotations are needed in their training process for constructing the adversarial loss, and therefore they are GAN-based but not "unsupervised" from the perspective of application and problem definition. We instead adopt a meta-learning viewpoint to address figure-ground segmentation. Depending on the visual effect to be imitated, the proposed approach interprets the task of image segmentation according to the learned VER. As a result, our model indeed establishes a general setting of figure-ground segmentation, with the additional advantage of generating visual effects or photo-style manipulations. Our Method Given a natural RGB image I and an expected visual effect with known compositional process and local operation, the proposed VEGAN model learns to predict the visual-effect representation (VER) of I and to generate an edited image I edit with the expected effect. Fig. 2 illustrates the core idea. The training data are from two unpaired sets: the set {I} of original RGB images and the set {I sample } of images with the expected visual effect. The learning process is carried out as follows: i) Generator predicts the VER ν of the image I. ii) Editor uses the known local operation to create an edited image I edit possessing the expected visual effect. iii) Discriminator judges the quality of the edited images I edit with respect to a set {I sample } of sample images that exhibit the same visual effect. iv) Loss is computed for updating the whole model. Fig. 3 illustrates the components of VEGAN. Finally, we perform Binarization on the VER to quantitatively assess the outcome of figure-ground segmentation.
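To summarize steps i)–iv) above in code form, the following is a high-level sketch of one training iteration; the module and optimizer names are placeholders, the editor is assumed to apply one of the known local operations, and the gradient-penalty term is only indicated by a comment, so this outlines the procedure rather than reproducing the exact implementation.

```python
import torch

def train_step(G, D, editor, opt_G, opt_D, real_rgb, effect_samples):
    """One VEGAN-style iteration (sketch): predict VER, edit, score, update D then G."""
    ver = G(real_rgb)                      # i) generator predicts the VER in (-1, 1)
    edited = editor(real_rgb, ver)         # ii) editor composes the visual-effect image

    # iii)-iv) critic update (WGAN-style; a gradient penalty term would be added here)
    opt_D.zero_grad()
    loss_D = D(edited.detach()).mean() - D(effect_samples).mean()
    loss_D.backward()
    opt_D.step()

    # generator update: fool the critic with a freshly edited image
    opt_G.zero_grad()
    loss_G = -D(editor(real_rgb, G(real_rgb))).mean()
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```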
Figure 3: The visual-effect representation (VER) produced by the generator indicates the strength of the visual effect at each location. The editor uses a well-defined trainable procedure (converting RGB to grayscale in this case) to create the expected visual effect. The discriminator receives the edited image I edit and evaluates how good it is. To train VEGAN, we need unpaired images from two domains. Domain A comprises real RGB images and Domain B comprises images with the expected visual effect.
Generator: The task of the generator is to predict the VER ν that can be used to partition the input image I into foreground and background layers. Our network architecture is adapted from the state-of-the-art methods (Johnson, Alahi, and Fei-Fei 2016; Zhu et al. 2017), which show impressive results on image style transfer. The architecture follows the rules suggested by DCGAN (Radford, Metz, and Chintala 2016), such as replacing pooling layers with strided convolutions. Our base architecture also uses the 9-residual-blocks version of (Johnson, Alahi, and Fei-Fei 2016). We have also tried a few slightly modified versions of the generator. The differences and details are described in the experiments. Discriminator: The discriminator is trained to judge the quality of the edited images I edit with respect to a set {I sample } of sample images that exhibit the same effect. We adopt a 70 × 70 PatchGAN (Ledig et al. 2017; Li and Wand 2016; Zhu et al. 2017) as our base discriminator network. PatchGAN brings some benefits with multiple overlapping image patches; namely, the scores change more smoothly and the training process is more stable. Compared with a full-image discriminator, the receptive field of the 70 × 70 PatchGAN might not capture the global context. In our work, the foreground objects are sensitive to their position in the whole image and are center-biased. If there are several objects in the image, our method favors picking out the object closest to the center. In our experiment, the 70 × 70 PatchGAN does produce better segments along the edges, but sometimes the segments tend to be tattered. A full-image discriminator (Goodfellow et al. 2014; Radford, Metz, and Chintala 2016; Arjovsky, Chintala, and Bottou 2017; Gulrajani et al. 2017), on the other hand, could give coarser but more compact and structural segments. Editor: The editor is the core of the proposed model. Given an input image I and its VER ν predicted by the generator, the editor is responsible for creating a composed image I edit containing the expected visual effect. The first step is based on the well-defined procedure to perform local operations on the image and generate the expected visual effect I effect . More specifically, in our experiments we define three basic local operations: black-background, color-selectivo, and defocus/Bokeh, which involve clamping-to-zero, grayscale conversion, and 11 × 11 average pooling, respectively. The next step is to combine the edited background layer with the foreground layer to get the final editing result I edit . An intuitive way is to use the VER ν as an alpha map α for image matting, i.e., I edit = α ⊗ I + (1 − α) ⊗ I effect , where α = {α ij }, α ij ∈ (0, 1), and ⊗ denotes element-wise multiplication. However, in our experiments, we find that it is better to have ν = {ν ij }, ν ij ∈ (−1, 1), with hyperbolic tangent as the output.
Hence we combine the two layers as follows:
I edit = τ(ν ⊗ (I − I effect ) + I effect ), ν ij ∈ (−1, 1), (1)
where τ(·) truncates the values to be within (0, 255), which guarantees that I edit can be properly rendered. Under this formulation, our model turns to learning the residual. Loss: We refer to SOTA algorithms (Arjovsky, Chintala, and Bottou 2017; Gulrajani et al. 2017) to design the loss functions L G and L D for the generator (G) and the discriminator (D):
L G = −E x∼Pg [D(x)], (2)
L D = E x∼Pg [D(x)] − E y∼Pr [D(y)] + λ gp E x̂∼Px̂ [(‖∇ x̂ D(x̂)‖ 2 − 1) 2 ]. (3)
We alternately update the generator by Eq. 2 and the discriminator by Eq. 3. In our formulation, x is the edited image I edit , y is an image I sample which exhibits the expected visual effect, P g is the edited-image distribution, P r is the sample-image distribution, and P x̂ denotes sampling uniformly along straight lines between image pairs drawn from P g and P r . We set the learning rate, λ gp , and other hyper-parameters the same as the configuration of WGAN-GP (Gulrajani et al. 2017). We keep a history of previously generated images and update the discriminator according to this history. We use the same strategy as prior work to store 50 previously generated images {I edit } in a buffer. The training images are of size 224 × 224, and the batch size is 1. Binarization: The VEGAN model can be treated as aiming to predict the strength of the visual effect throughout the whole image. Although the VER provides an effective intermediate representation for generating plausible edited images toward some expected visual effects, we observe that sometimes the VER might not be consistent with an object region, particularly with the Bokeh effect. Directly thresholding the VER to make a binary mask for segmentation evaluation will cause some degree of false positives and degrade the segmentation quality. In general, we expect the segmentation derived from the visual-effect representation to be smooth within an object and distinct across object boundaries. To respect this observation, we describe, in what follows, an optional procedure to obtain a smoothed VER and enable simple thresholding to yield a good binary mask for quantitative evaluation. Notice that all the VER maps visualized in this paper are obtained without binarization. To begin with, we over-segment (Achanta et al. 2012) an input image I into a superpixel set S and construct the corresponding superpixel-level graph G = (S, E, ω) with edge set E and weights ω. Each edge e ij ∈ E denotes the spatial adjacency between superpixels s i and s j . The weighting function ω : E → [0, 1] is defined as ω ij = e −θ1 ‖c i − c j ‖ , where c i and c j respectively denote the CIE Lab mean colors of two adjacent superpixels. The weight matrix of the graph is then W = [ω ij ] |S|×|S| . We then smooth the VER by propagating the averaged value of each superpixel to all other superpixels. To this end, we use r i to denote the mean VER value of superpixel s i , where r i = (1/|s i |) Σ (i,j)∈s i ν ij and |s i | is the number of pixels within s i . The propagation is carried out according to the feature similarity between every superpixel pair. Given the weight matrix W, the pairwise similarity matrix A can be constructed as A = (D − θ 2 W) −1 I, where D is a diagonal matrix with each diagonal entry equal to the row sum of W, θ 2 is a parameter in (0, 1], and I is the |S|-by-|S| identity matrix (Zhou et al. 2003). Finally, the smoothed VER value of each superpixel can be obtained by
[r̄ 1 , r̄ 2 , . . . , r̄ |S| ] T = D A −1 A · [r 1 , r 2 , . . . , r |S| ] T , (4)
where D A is a diagonal matrix with each diagonal entry equal to the corresponding row sum of A, and D A −1 A yields the row-normalized version of A. From Eq. 4, we see that the smoothed VER value r̄ i is determined not only by the neighboring superpixels of s i but also by all other superpixels. To obtain the binary mask, we set the average value of {r̄ 1 , r̄ 2 , . . . , r̄ |S| } as the threshold for obtaining the corresponding figure-ground segmentation for the input I. We set the parameters θ 1 = 10 and θ 2 = 0.99 in all the experiments. Experiments We first describe the evaluation metric, the testing datasets, the training data, and the algorithms in comparison. Then, we show the comparison results of the relevant algorithms and our approach. Finally, we present the image segmentation and editing results of our approach. More experimental results can be found in the supplementary material. Evaluation Metric. We adopt the intersection-over-union (IoU) to evaluate the binary mask derived from the VER. The IoU score is defined as |P ∩ Q| / |P ∪ Q|, where P denotes the machine segmentation and Q denotes the ground-truth segmentation. All algorithms are tested on an Intel i7-4770 3.40 GHz CPU, 8GB RAM, and an NVIDIA Titan X GPU. Datasets. The six datasets are GC50 (Rother, Kolmogorov, and Blake 2004), MSRA500, ECSSD (Shi et al. 2016), Flower17 (Nilsback and Zisserman 2006), Flower102 (Nilsback and Zisserman 2008), and CUB200 (Wah et al. 2011). MSRA500 is a subset of the MSRA10K dataset (Cheng et al. 2015), which contains 10,000 natural images. We randomly partition MSRA10K into two non-overlapping subsets of 500 and 9,500 images to create MSRA500 and MSRA9500 for testing and training, respectively. Their statistics are summarized in Table 1. Since these datasets provide pixel-level ground truths, we can compare the consistency between the ground-truth labeling and the derived segmentation of each image for VER-quality assessment. Training Data. In training the VEGAN model, we consider using images from two different sources for comparison. The first image source is MSRA9500, derived from the MSRA10K dataset (Cheng et al. 2015). The second image source is Flickr, and we acquire unorganized images for each task as the training data. We examine our model on three kinds of visual effects, namely, black background, color selectivo, and defocus/Bokeh.
• For MSRA9500 images, we randomly select 4,750 images and then apply the three visual effects to yield three groups of images with visual effects, i.e., {I sample }. The other 4,750 images are hence the input images {I} for the generator to produce the edited images {I edit } later.
• For Flickr images, we use "black background," "color selectivo," and "defocus/Bokeh" as the three query tags, and then collect 4,000 images for each query tag as the real images with visual effects. We randomly download an additional 4,000 images from Flickr as the images to be edited.
Algorithms in Comparison. We quantitatively evaluate the learned VER using the standard segmentation assessment metric (IoU). Our approach is compared with several well-known algorithms, including two semantic segmentation algorithms, three saliency-based algorithms, and two bounding-box based algorithms, listed as follows: ResNet, VGG16 (Simonyan and Zisserman 2015), CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS and MilCutG (Wu et al. 2014), GrabCut (Rother, Kolmogorov, and Blake 2004).
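Referring back to the Binarization procedure (Eq. 4) and the IoU metric defined above, the following sketch shows the propagation, mean-value thresholding, and IoU computation on superpixel-level values; it follows the stated formulas but leaves out the superpixel and graph construction, so it is an illustration under those simplifications.

```python
import numpy as np

def smooth_and_threshold(r, W, theta2=0.99):
    """Smooth mean VER values r over a superpixel graph with weight matrix W (Eq. 4 sketch)."""
    D = np.diag(W.sum(axis=1))
    A = np.linalg.inv(D - theta2 * W)                  # pairwise similarity matrix
    A = A / A.sum(axis=1, keepdims=True)               # row normalization (D_A^{-1} A)
    r_smooth = A @ r
    return (r_smooth > r_smooth.mean()).astype(int)    # threshold at the average value

def iou(pred, gt):
    """Intersection-over-union |P ∩ Q| / |P ∪ Q| for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0
```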
The two supervised semantic segmentation algorithms, ResNet and VGG16, are pre-trained on ILSVRC-2012-CLS (Russakovsky et al. 2015) and then fine-tuned on MSRA9500 with ground-truth annotations. The bounding boxes of the two bounding-box based algorithms are initialized around the image borders. Quantitative Evaluation The first part of experiment aims to evaluate the segmentation quality of different methods. We first compare several variants of the VEGAN model to choose the best model configuration. Then, we analyze the results of the VEGAN model versus the other state-of-the-art algorithms. VEGAN Variants. In the legend blocks of Fig. 4, we use a compound notation "TrainingData -Version" to account for the variant versions of our model. Specifically, Train-ingData indicates the image source of the training data. The notation for Version contains two characters. The first character denotes the type of visual effect: "B" for black background, "C" for color selectivo, and "D" for defocus/Bokeh. The second character is the model configuration: "1" refers to the combination of base-generator and base-discriminator described in Our Method; "2" refers to using ResNet as the generator; "3" is the model "1" with additional skip-layers and replacing transpose convolution with bilinear interpolation; "4" is the model "3" yet replacing patch-based discriminator with full-image discriminator. We report the results of VEGAN variants in Table 6, and depict the sorted IoU scores for the test images in Flower17 and Flower102 datasets in Fig. 4. It can be seen that all models have similar segmentation qualities no matter what image source is used for training. In Table 6 and Fig. 4, the training configuration "B4" shows relatively better performance under black background. Hence, our VEGAN model adopts the version of MSRA-B4 as a representative variant for comparing with other state-of-the-art algorithms. Unseen Images. We further analyze the differences of the learned models on dealing with unseen and seen images. We test the variants B4, C4, and D4 on MSRA500 (unseen) and the subset {I} of MSRA9500 (seen). We find that the performance of VEGAN is quite stable. The IoU score for MSRA500 is only 0.01 lower than the score for MSRA9500 {I}. Note that, even for the seen images, the ground-truth pixel annotations are unknown to the VEGAN model during training. This result indicates that VEGAN has a good generalization ability to predict segmentation for either seen or unseen images. For comparison, we do the same experiment with the two supervised algorithms, ResNet and VGG16. They are fine-tuned with MSRA9500. The mean IoU scores of ResNet are 0.86 and 0.94 for MSRA500 and MSRA9500, respectively. The mean IoU scores of VGG16 are 0.72 and 0.88 for MSRA500 and MSRA9500, respectively. The performance of both supervised techniques significantly degrades while dealing with unseen images. From the results just described, the final VEGAN model is implemented with the following setting: i) Generator uses the 9-residual-blocks version of (Johnson, Alahi, and Fei-Fei 2016). ii) Discriminator uses the full-image discriminator as WGAN-GP (Gulrajani et al. 2017). Results. The top portion of Table 3 summarizes the mean IoU score of each algorithm evaluated with the six testing datasets. We first compare our method with five well-known segmentation/saliency-detection techniques, including CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS/MilCutG (Wu et al. 
2014), and GrabCut (Rother, Kolmogorov, and Blake 2004). The proposed VE-GAN model outperforms all others on MSRA500, ECSSD, Flower17, and Flower102 datasets, and is only slightly behind the best on GC50 and CUB200 datasets. The bottom portion of Table 3 shows the results of two SOTA supervised learning algorithms on the six testing datasets. Owing to training with the paired images and ground-truths in a "supervised" manner, the two models of ResNet and VGG16 undoubtedly achieve good performance so that we treat them as the oracle models. Surprisingly, our unsupervised learning model is comparable with or even slightly better than the supervised learning algorithms on the MSRA500, Flower17, and Flower102 datasets. Fig. 9 depicts the sorted IoU scores, where a larger area under curve means better segmentation quality. VEGAN achieves better segmentation accuracy on the two datasets. Fig. 10 shows the results generated by our VEGAN model under different configurations. Each triplet of images contains the input image, the visual effect representation (VER), and the edited image. The results in Fig. 10 demonstrate that VEGAN can generate reasonable figure-ground segmentations and plausible edited images with expected visual effects. Visual-Effect Imitation as Style Transfer. Although existing GAN models cannot be directly applied to learning figure-ground segmentation, some of them are applicable to learning visual-effect transfer, e.g., CycleGAN . We use the two sets {I} and {I sample } of MSRA9500 to train CycleGAN, and show some comparison results in Fig. 7. We find that the task of imitating black background turns out to be challenging for CycleGAN since the information in {I sample } is too limited to derive the inverse mapping back to {I}. Moreover, CycleGAN focuses more on learning the mapping between local properties such as color or texture rather than learning how to create a glob- ) and VGG16 (Simonyan and Zisserman 2015) are pre-trained with ILSVRC-2012-CLS and then fine-tuned with MSRA9500. ally consistent visual effect. VEGAN instead follows a systematic learning procedure to imitate the visual effect. The generator must produce a meaningful VER so that the editor can compose a plausible visual-effect image that does not contain noticeable artifacts for the discriminator to identify. Figure 6: The edited images generated by VEGAN with respect to specific visual effects. Each image triplet from left to right: the input image, the VER, and the edited image. Qualitative Evaluation User Study. Fig. 8 shows VERs that testing on Flickr "bird" images using VEGAN models trained merely with Flick "flower" images. The results suggest that the meta-learning mechanism enables VE-GAN to identify unseen foreground figures based on the learned knowledge embodied in the generated VERs. Conclusion We characterize the two main contributions of our method as follows. First, we establish a meta-learning framework to learn a general concept of figure-ground application and an effective approach to the segmentation task. Second, we propose to cast the meta-learning as imitating relevant visual effects and develop a novel VEGAN model with following advantages: i) Our model offers a new way to predict meaningful figure-ground segmentation from unorganized images that have no explicit pixel-level annotations. ii) The training images are easy to collect from photo-sharing websites using related tags. 
iii) The editor between the generator and the discriminator enables VEGAN to decouple the compositional process of imitating visual effects and hence allows VEGAN to effectively learn the underlying representation (VER) for deriving figure-ground segmentation. We have tested three visual effects, including "black background," "color selectivo," and "defocus/Bokeh" with extensive experiments on six datasets. For these visual effects, VEGAN can be end-to-end trained from scratch using unpaired training images that have no ground-truth labeling. Because state-of-the-art GAN models, e.g., CycleGAN , are not explicitly designed for unsupervised learning of figure-ground segmentation, we simply conduct qualitative comparisons with CycleGAN on the task of visual-effect transfer rather than the task of figure-ground segmentation. The task of visual-effect transfer is to convert an RGB image into an edited image with the intended visual effect. To train CycleGAN for visual-effect transfer, we use the set {I} of original RGB images and the set {I sample } of images with the expected visual effect as the two unpaired training sets. Fig. 12 shows the results of 'training on MSRA9500 and testing on MSRA500'. Fig. 13 shows the results of 'training on Flickr and testing on Flickr'. For CycleGAN and VEGAN, all the test images are unseen during training. The training process is done in an unsupervised manner without using any ground-truth annotations and paired images. Some comparison results are shown in Fig. 12 and Fig. 13. We observe that the task of imitating black background is actually more challenging for Cycle-GAN since the information of black regions in {I sample } is limited and hence does not provide good inverse mapping back to {I} under the setting of CycleGAN. The results of CycleGAN on imitating color selectivo and defocus/Bokeh are more comparable to those of VE-GAN. However, the images generated by CycleGAN may have some distortions in color. On the other hand, VEGAN follows a well-organized procedure to learn how to imitate visual effects. The generator must produce a meaningful VER so that the editor can compose a plausible visual-effect image that does not contain noticeable artifacts for the discriminator to differentiate. GC50 MSRA500 ECSSD Flower17 Flower102 CUB200 Figure 9: Comparisons with well-known algorithms, including CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS/MilCutG (Wu et al. 2014), and GrabCut (Rother, Kolmogorov, and Blake 2004). Each sub-figure depicts the sorted IoU scores as the segmentation accuracy. Testing on MSRA500 using VEGAN models MSRA-B4, MSRA-C4, and MSRA-D4. Testing on Flickr images using VEGAN models Flickr-B4, Flickr-C4, and Flickr-D4. Figure 10: The edited images generated by our VEGAN models with respect to some expected visual effects. Each image triplet from left to right: the input image, the VER, and the edited image. (Johnson, Alahi, and Fei-Fei 2016); ' ‡' refers to ; ' ' refers to (Gulrajani et al. 2017 the 9-residual-blocks version † WGAN-GP yes bilinear 'Color selectivo' visual effect generated by VEGAN (MSRA-C4) and CycleGAN. 'Defocus/Bokeh' visual effect generated by VEGAN (MSRA-D4) and CycleGAN.
1812.08442
2950827124
This paper presents a "learning to learn" approach to figure-ground image segmentation. By exploring webly-abundant images of specific visual effects, our method can effectively learn the visual-effect internal representations in an unsupervised manner and use this knowledge to differentiate the figure from the ground in an image. Specifically, we formulate the meta-learning process as a compositional image editing task that learns to imitate a certain visual effect and derives the corresponding internal representation. Such a generative process can help instantiate the underlying figure-ground notion and enable the system to accomplish the intended image segmentation. Whereas existing generative methods are mostly tailored to image synthesis or style transfer, our approach offers a flexible learning mechanism to model a general concept of figure-ground segmentation from unorganized images that have no explicit pixel-level annotations. We validate our approach via extensive experiments on six datasets to demonstrate that the proposed model can be end-to-end trained without ground-truth pixel labeling yet outperforms existing methods on unsupervised segmentation tasks.
To reduce the burden of @math, Denton et al. @cite_34 use a pyramid structure and Karras et al. @cite_28 consider a progressive training methodology. Both of them divide the task into smaller sequential steps. In our case, we alleviate the burden of @math by incorporating some well-defined image processing operations into the network model, e.g., converting the background color into grayscale to simulate the visual effect of color selectivo, or blurring the background to create the Bokeh effect.
{ "abstract": [ "We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.", "In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach (). Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40 of the time, compared to 10 for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset." ], "cite_N": [ "@cite_28", "@cite_34" ], "mid": [ "2766527293", "2951523806" ] }
Unsupervised Meta-learning of Figure-Ground Segmentation via Imitating Visual Effects
In figure-ground segmentation, the regions of interest are conventionally defined by the provided ground truth, which is usually in the form of pixel-level annotations. Without such supervised information from intensive labeling efforts, it is challenging to teach a system to learn what the figure and the ground should be in each image. To address this issue, we propose an unsupervised meta-learning approach that can simultaneously learn both the figure-ground concept and the corresponding image segmentation. The proposed formulation explores the inherent but often unnoticeable relatedness between performing image segmentation and creating visual effects. In particular, visually enriching a given image with a special effect often first requires specifying the regions to be emphasized. The procedure corresponds to constructing an internal representation that guides the image editing to operate on the target image regions. For this reason, we name such internal guidance the Visual-Effect Representation (VER) of the image. We observe that for a majority of visual effects, their resulting VER is closely related to image segmentation. Another advantage of focusing on visual-effect images is that such data are abundant on the Internet, while pixel-wise annotation of large datasets for image segmentation is time-consuming.

Figure 1: Given the same image (1st column), imitating different visual effects (2nd column) can yield distinct interpretations of figure-ground segmentation (3rd column), which are derived by our method via referencing the following visual effects (from top to bottom): black background, color selectivo, and defocus/Bokeh. The learned VERs are shown in the last column, respectively.

However, in practice, we only have access to the visual-effect images, but not the VERs or the original images. Taking all these factors into account, we reduce the meta-problem of figure-ground segmentation to predicting the proper VER of a given image for the underlying visual effect. Owing to its data richness from the Internet, the latter task is more suitable for our intention to cast the problem within the unsupervised generative framework. Many compositional image editing tasks have the aforementioned properties. For example, to create the color selectivo effect on an image, as shown in Fig. 2, we can i) identify the target and partition the image into foreground and background layers, ii) convert the color of the background layer into grayscale, and iii) combine the converted background layer with the original foreground layer to get the final result (a minimal sketch of these steps is given below). The operation of color conversion is local: it simply "equalizes" the RGB values of pixels in certain areas. The quality of the result depends on how properly the layers are decomposed. If a part of the target region is partitioned into the background, the result might look less plausible. Unlike the local operations, localizing the proper regions for editing requires certain understanding and analysis of the global or contextual information in the whole image. In this paper, we design a GAN-based model, called Visual-Effect GAN (VEGAN), that can learn to predict the internal representation (i.e., the VER) and incorporate such information into facilitating the resulting figure-ground segmentation.
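To make the compositional procedure above concrete, here is a minimal NumPy/Pillow sketch of steps ii) and iii) for the color selectivo effect, assuming a binary foreground mask is already given (in VEGAN, providing this guidance is exactly the role of the learned VER). The function and variable names are illustrative, not taken from the authors' implementation.

```python
import numpy as np
from PIL import Image

def color_selectivo(image_path, mask):
    """Keep the foreground in color and turn the background into grayscale.

    mask -- H x W array, 1 for foreground pixels, 0 for background pixels.
    """
    rgb = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32)

    # ii) local operation: "equalize" the RGB values of background pixels
    #     by replacing them with their per-pixel mean (grayscale).
    gray = rgb.mean(axis=2, keepdims=True).repeat(3, axis=2)

    # iii) recombine the original foreground layer with the converted
    #      background layer.
    alpha = mask.astype(np.float32)[..., None]
    edited = alpha * rgb + (1.0 - alpha) * gray

    return Image.fromarray(np.clip(edited, 0, 255).astype(np.uint8))
```

Step i), deciding which pixels belong to the foreground, is the only non-trivial part of this pipeline; it is the part VEGAN learns, while the remaining steps stay fixed, well-defined operations.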
We are thus motivated to formulate the following problem: Given an unaltered RGB image as the input and an image editing task with known compositional process and local operation, we aim to predict the proper VER that guides the editing process to generate the expected visual effect and accomplishes the underlying figure-ground segmentation. We adopt a data-driven setting in which the image editing task is exemplified by a collection of image samples with the expected visual effect. The task, therefore, is to transform the original RGB input image into an output image that exhibits the same effect as the exemplified samples. To make our approach general, we assume that no corresponding pairs of input and output images are available in training, and therefore supervised learning is not applicable. That is, the training data do not include pairs of the original color images and the corresponding edited images with visual effects. The flexibility is in line with the fact that although we could fetch a lot of images with certain visual effects over the Internet, we indeed do not know what their original counterparts should look like. Under this problem formulation, several issues are of interest and need to be addressed. First, how do we solve the problem without paired input and output images? We build on the idea of the generative adversarial network and develop a new unsupervised learning mechanism (shown in Figs. 2 & 3) to learn the internal representation for creating the visual effect. The generator aims to predict the internal VER, and the editor converts the input image into one that has the expected visual effect. The compositional procedure and local operation are generic and can be implemented as parts of the architecture of a ConvNet. The discriminator has to judge the quality of the edited images with respect to a set of sample images that exhibit the same visual effect. The experimental results show that our model works surprisingly well to learn meaningful representation and segmentation without supervision. Second, where do we acquire the collection of sample images for illustrating the expected visual effect? Indeed, it would not make sense if we had to manually generate the labor-intensive sample images for demonstrating the expected visual effects. We show that the required sample images can be conveniently collected from the Internet. We provide a couple of scripts to explore the effectiveness of using Internet images for training our model. Notice again that, although the required sample images with visual effects are available on the Internet, their original versions are unknown. Thus supervised learning of pairwise image-to-image translation cannot be applied here. Third, what can the VER be useful for, in addition to creating visual effects? We show that, if we are able to choose a suitable visual effect, the learned VER can be used to not only establish the intended figure-ground notion but also derive the image segmentation. More precisely, as in our formulation the visual-effect representation is characterized by a real-valued response map, the result of figure-ground separation can be obtained via binarizing the VER. Therefore, it is legitimate to take the proposed problem of VER prediction as a surrogate for unsupervised image segmentation.
We have tested the following visual effects: i) black background, which is often caused by using a flashlight; ii) color selectivo, which imposes a color highlight on the subject and keeps the background in grayscale; iii) defocus/Bokeh, which is due to the depth of field of the camera lens. The second column in Fig. 1 shows the three types of visual effects. For these tasks our model can be end-to-end trained from scratch in an unsupervised manner using training data that have neither the ground-truth pixel labeling nor the paired images with/without visual effects. While labor-intensive pixel-level segmentations for images are hard to acquire directly via Internet search, images with those three effects are easy to collect from photo-sharing websites, such as Flickr, using related tags.

Generative Adversarial Networks
The idea of GAN (Goodfellow et al. 2014) is to generate realistic samples through the adversarial game between the generator G and the discriminator D. GAN has become popular owing to its ability to achieve unsupervised learning. However, GAN also encounters many problems such as training instability and mode collapse. Hence, later methods (Radford, Metz, and Chintala 2016; Arjovsky, Chintala, and Bottou 2017; Gulrajani et al. 2017) try to improve GAN in both implementation and theory. DCGAN (Radford, Metz, and Chintala 2016) provides a new framework that is more stable and easier to train. WGAN (Arjovsky, Chintala, and Bottou 2017) suggests using the Wasserstein distance to measure the loss. WGAN-GP (Gulrajani et al. 2017) further improves the way the Lipschitz constraint is enforced by replacing weight clipping with a gradient penalty. To reduce the burden of G, Denton et al. (Denton et al. 2015) use a pyramid structure and Karras et al. (Karras et al. 2018) consider a progressive training methodology. Both of them divide the task into smaller sequential steps. In our case, we alleviate the burden of G by incorporating some well-defined image processing operations into the network model, e.g., converting the background color into grayscale to simulate the visual effect of color selectivo, or blurring the background to create the Bokeh effect. Computer vision problems may benefit from GAN by including an adversarial loss into, say, a typical CNN model. Many intricate tasks have been shown to gain further improvements after adding an adversarial loss, such as shadow detection (Nguyen et al. 2017), saliency detection (Pan et al. 2017), and semantic segmentation (Luc et al. 2016). However, those training methodologies require paired images (with ground truth) and hence lack the advantage of unsupervised learning.

Figure 2: Learning and applying our model for the case of the "color selectivo" visual effect. The image collection for learning is downloaded using the Flickr API. Without explicit ground-truth pixel-level annotations being provided, our method can learn to estimate the visual-effect representations (VERs) from unpaired sets of natural RGB images and sample images with the expected visual effect. Our generative model is called Visual-Effect GAN (VEGAN), which has an additional component, the editor, between the generator and the discriminator. After the unsupervised learning, the generator is able to predict the VER of an input color image for creating the expected visual effect. The VER can be further transformed into figure-ground segmentation.

For the applications of modifying photo styles, some methods (Liu, Breuel, and Kautz 2017; Yi et al. 2017; Zhu et al.
2017) can successfully achieve image-to-image style transfer using unpaired data, but their results are limited to subjective evaluation. Moreover, those style-transfer methods cannot be directly applied to the task of unsupervised segmentation. Since our model has to identify the category-independent subjects for applying the visual effect without using image-pair relations and ground-truth pixel-level annotations, the problem we aim to address is more general and challenging than those of the aforementioned methods.

Image Segmentation
Most of the existing segmentation methods based on deep neural networks (DNNs) treat the segmentation problem as a pixel-level classification problem (Simonyan and Zisserman 2015; Long, Shelhamer, and Darrell 2015; He et al. 2016). Their impressive performance relies on a large number of high-quality annotations. Unfortunately, collecting high-quality annotations at a large scale is another challenging task since it is exceedingly labor-intensive. As a result, existing datasets just provide limited-class and limited-annotation data for training DNNs. DNN-based segmentation methods thus can only be applied to a limited subset of category-dependent segmentation tasks. To reduce the dependency on detailed annotations and to simplify the acquisition of a sufficient number of training data, a possible solution is to train DNNs in a semi-supervised manner (Hong, Noh, and Han 2015; Souly, Spampinato, and Shah 2017) or a weakly-supervised manner (Dai, He, and Sun 2015; Kwak, Hong, and Han 2017; Pinheiro and Collobert 2015) with a small number of pixel-level annotations. In contrast, our model is trained without explicit ground-truth annotations. Existing GAN-based segmentation methods (Nguyen et al. 2017; Luc et al. 2016) improve their segmentation performance using mainly the adversarial mechanism of GANs. The ground-truth annotations are needed in their training process for constructing the adversarial loss, and therefore they are GAN-based but not "unsupervised" from the perspective of application and problem definition. We instead adopt a meta-learning viewpoint to address figure-ground segmentation. Depending on the visual effect to be imitated, the proposed approach interprets the task of image segmentation according to the learned VER. As a result, our model indeed establishes a general setting of figure-ground segmentation, with the additional advantage of generating visual effects or photo-style manipulations.

Our Method
Given a natural RGB image I and an expected visual effect with known compositional process and local operation, the proposed VEGAN model learns to predict the visual-effect representation (VER) of I and to generate an edited image I_edit with the expected effect. Fig. 2 illustrates the core idea. The training data are from two unpaired sets: the set {I} of original RGB images and the set {I_sample} of images with the expected visual effect. The learning process is carried out as follows: i) Generator predicts the VER ν of the image I. ii) Editor uses the known local operation to create an edited image I_edit possessing the expected visual effect. iii) Discriminator judges the quality of the edited images I_edit with respect to a set {I_sample} of sample images that exhibit the same visual effect. iv) Loss is computed for updating the whole model (a schematic sketch of this loop is given below). Fig. 3 illustrates the components of VEGAN. Finally, we perform binarization on the VER to quantitatively assess the outcome of figure-ground segmentation.
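To make the data flow of steps i)-iv) explicit, the following is a schematic PyTorch-style sketch of one training iteration. It is only a skeleton under our own naming: the generator G, the discriminator D, and the differentiable editor are placeholders standing in for the components detailed in the next paragraphs, and the gradient-penalty term of the actual loss (Eq. 3 below) is omitted here for brevity.

```python
import torch

def vegan_train_step(G, D, edit, rgb_batch, effect_batch, opt_g, opt_d):
    """One schematic VEGAN iteration (all names are illustrative).

    G            -- generator: RGB image -> VER with values in (-1, 1)
    D            -- discriminator: image -> realism score
    edit         -- differentiable editor: (image, VER) -> edited image
    rgb_batch    -- ordinary RGB images {I} to be edited
    effect_batch -- real images {I_sample} that already show the effect
    """
    # i) + ii) predict the VER and apply the editor's local operation.
    ver = G(rgb_batch)
    edited = edit(rgb_batch, ver)

    # iii) update the discriminator to tell edited from real effect images.
    d_loss = D(edited.detach()).mean() - D(effect_batch).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # iv) update the generator so that the edited images fool the discriminator.
    g_loss = -D(edit(rgb_batch, G(rgb_batch))).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    return d_loss.item(), g_loss.item()
```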
Generator: The task of the generator is to predict the VER ν that can be used to partition the input image I into foreground and background layers. Our network architecture is adapted from the state-of-the-art methods (Johnson, Alahi, and Fei-Fei 2016; Zhu et al. 2017), which show impressive results on image style transfer. The architecture follows the rules suggested by DCGAN (Radford, Metz, and Chintala 2016), such as replacing pooling layers with strided convolutions. Our base architecture also uses the 9-residual-blocks version of (Johnson, Alahi, and Fei-Fei 2016). We have also tried a few slightly modified versions of the generator. The differences and details are described in the experiments.

Discriminator: The discriminator is trained to judge the quality of the edited images I_edit with respect to a set {I_sample} of sample images that exhibit the same effect. We adopt a 70 × 70 PatchGAN (Ledig et al. 2017; Li and Wand 2016; Zhu et al. 2017) as our base discriminator network. PatchGAN brings some benefits with multiple overlapping image patches; namely, the scores change more smoothly and the training process is more stable. Compared with a full-image discriminator, the receptive field of the 70 × 70 PatchGAN might not capture the global context. In our work, the foreground objects are sensitive to their position in the whole image and are center-biased. If there are several objects in the image, our method would favor picking out the object closest to the center. In our experiments, the 70 × 70 PatchGAN does produce better segments along the edges, but sometimes the segments tend to be tattered. A full-image discriminator (Goodfellow et al. 2014; Radford, Metz, and Chintala 2016; Arjovsky, Chintala, and Bottou 2017; Gulrajani et al. 2017), on the other hand, could give coarser but more compact and structured segments.

Figure 3: The visual-effect representation (VER) produced by the generator indicates the strength of the visual effect at each location. The editor uses a well-defined trainable procedure (converting RGB to grayscale in this case) to create the expected visual effect. The discriminator receives the edited image I_edit and evaluates how good it is. To train VEGAN, we need unpaired images from two domains: Domain A comprises real RGB images and Domain B comprises images with the expected visual effect.

Editor: The editor is the core of the proposed model. Given an input image I and its VER ν predicted by the generator, the editor is responsible for creating a composed image I_edit containing the expected visual effect. The first step is based on the well-defined procedure that performs local operations on the image and generates the expected visual-effect image I_effect. More specifically, in our experiments we define three basic local operations: black background, color selectivo, and defocus/Bokeh, which involve clamping to zero, grayscale conversion, and 11 × 11 average pooling, respectively. The next step is to combine the edited background layer with the foreground layer to get the final editing result I_edit. An intuitive way is to use the VER ν as an alpha map α for image matting, i.e., I_edit = α ⊗ I + (1 − α) ⊗ I_effect, where α = {α_ij}, α_ij ∈ (0, 1), and ⊗ denotes element-wise multiplication. However, in our experiments, we find that it is better to have ν = {ν_ij}, ν_ij ∈ (−1, 1), with the hyperbolic tangent as the output. Hence we combine the two layers as follows:

I_edit = τ(ν ⊗ (I − I_effect) + I_effect), ν_ij ∈ (−1, 1),   (1)

where τ(·) truncates the values to be within (0, 255), which guarantees that I_edit can be properly rendered. Under this formulation, our model turns to learning the residual (a small sketch of the local operations and this combination step is given below).
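As a concrete illustration of the editor, here is a minimal PyTorch sketch of the three local operations and of the combination step in Eq. 1. It assumes image tensors of shape (N, 3, H, W) with values in [0, 255] and a VER tensor of shape (N, 1, H, W) with values in (−1, 1); the function names are ours, not the authors'.

```python
import torch
import torch.nn.functional as F

def local_operation(image, effect):
    """Fixed, well-defined local operation that produces I_effect."""
    if effect == "black_background":
        return torch.zeros_like(image)              # clamping to zero
    if effect == "color_selectivo":
        gray = image.mean(dim=1, keepdim=True)      # grayscale conversion
        return gray.expand_as(image)
    if effect == "bokeh":
        return F.avg_pool2d(image, kernel_size=11,  # 11 x 11 average pooling
                            stride=1, padding=5)
    raise ValueError("unknown effect: " + effect)

def compose(image, ver, effect):
    """Eq. 1: I_edit = tau(ver * (I - I_effect) + I_effect)."""
    i_effect = local_operation(image, effect)
    edited = ver * (image - i_effect) + i_effect
    return edited.clamp(0, 255)                     # tau(.) truncation
```

Both functions are differentiable with respect to the VER, so the adversarial loss on the edited image can be backpropagated through the editor into the generator; this is what lets the VER be learned without any pixel-level supervision.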
Loss: We refer to state-of-the-art algorithms (Arjovsky, Chintala, and Bottou 2017; Gulrajani et al. 2017) to design the loss functions L_G and L_D for the generator (G) and the discriminator (D):

L_G = −E_{x∼P_g}[D(x)],   (2)

L_D = E_{x∼P_g}[D(x)] − E_{y∼P_r}[D(y)] + λ_gp E_{x̂∼P_x̂}[(‖∇_x̂ D(x̂)‖_2 − 1)^2].   (3)

We alternately update the generator by Eq. 2 and the discriminator by Eq. 3. In our formulation, x is the edited image I_edit, y is an image I_sample that exhibits the expected visual effect, P_g is the edited-image distribution, P_r is the sample-image distribution, and P_x̂ denotes sampling uniformly along straight lines between image pairs drawn from P_g and P_r. We set the learning rate, λ_gp, and the other hyper-parameters the same as in the configuration of WGAN-GP (Gulrajani et al. 2017). We keep a history of previously generated images and update the discriminator according to this history; we use the same approach as ( ) to store 50 previously generated images {I_edit} in a buffer. The training images are of size 224 × 224, and the batch size is 1.
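For readers who want to map Eqs. 2-3 onto code, below is a small PyTorch sketch of these WGAN-GP objectives, with the gradient penalty computed on random interpolates x̂ between edited and real visual-effect images. The default weight lambda_gp = 10 follows WGAN-GP; the function names are illustrative and not from the authors' code.

```python
import torch

def generator_loss(D, edited):
    """Eq. 2: L_G = -E_{x ~ P_g}[D(x)], with x the edited images."""
    return -D(edited).mean()

def discriminator_loss(D, edited, real, lambda_gp=10.0):
    """Eq. 3: Wasserstein critic loss plus the gradient penalty."""
    wasserstein = D(edited.detach()).mean() - D(real).mean()

    # Sample x_hat uniformly on straight lines between pairs from P_g and P_r.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * edited.detach()).requires_grad_(True)

    # Gradient penalty: (||grad_{x_hat} D(x_hat)||_2 - 1)^2.
    grad = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    grad_norm = grad.flatten(1).norm(2, dim=1)
    return wasserstein + lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```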
Binarization: The VEGAN model can be treated as aiming to predict the strength of the visual effect throughout the whole image. Although the VER provides an effective intermediate representation for generating plausible edited images toward the expected visual effects, we observe that the VER might sometimes not be consistent with an object region, particularly for the Bokeh effect. Directly thresholding the VER to make a binary mask for segmentation evaluation will cause some degree of false positives and degrade the segmentation quality. In general, we expect the segmentation derived from the visual-effect representation to be smooth within an object and distinct across object boundaries. To respect this observation, we describe, in what follows, an optional procedure to obtain a smoothed VER and enable simple thresholding to yield a good binary mask for quantitative evaluation. Notice that all the VER maps visualized in this paper are obtained without binarization. To begin with, we over-segment (Achanta et al. 2012) an input image I into a superpixel set S and construct the corresponding superpixel-level graph G = (S, E, ω) with the edge set E and weights ω. Each edge e_ij ∈ E denotes the spatial adjacency between superpixels s_i and s_j. The weighting function ω : E → [0, 1] is defined as ω_ij = exp(−θ_1 ‖c_i − c_j‖), where c_i and c_j respectively denote the CIE Lab mean colors of two adjacent superpixels. The weight matrix of the graph is then W = [ω_ij]_{|S|×|S|}. We then smooth the VER via propagating the averaged value of each superpixel to all other superpixels. To this end, we use r_i to denote the mean VER value of superpixel s_i, i.e., r_i = (1/|s_i|) Σ_{(u,v)∈s_i} ν_uv, where |s_i| is the number of pixels within s_i. The propagation is carried out according to the feature similarity between every superpixel pair. Given the weight matrix W, the pairwise similarity matrix A can be constructed as A = (D − θ_2 W)^{−1} I, where D is a diagonal matrix with each diagonal entry equal to the row sum of W, θ_2 is a parameter in (0, 1], and I is the |S|-by-|S| identity matrix (Zhou et al. 2003). Finally, the smoothed VER values of the superpixels can be obtained by

[r̄_1, r̄_2, . . . , r̄_|S|]^T = D_A^{−1} A · [r_1, r_2, . . . , r_|S|]^T,   (4)

where D_A is a diagonal matrix with each diagonal entry equal to the corresponding row sum of A, and D_A^{−1} A yields the row-normalized version of A. From Eq. 4, we see that the smoothed VER value r̄_i is determined not only by the neighboring superpixels of s_i but also by all the other superpixels. To obtain the binary mask, we set the average value of {r̄_1, r̄_2, . . . , r̄_|S|} as the threshold for obtaining the corresponding figure-ground segmentation for the input I. We set the parameters θ_1 = 10 and θ_2 = 0.99 in all the experiments.
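A minimal NumPy sketch of this smoothing-and-thresholding step (Eq. 4), together with the IoU score used for evaluation in the next section. It assumes the superpixel decomposition (e.g., SLIC), the per-superpixel mean Lab colors, the adjacency matrix, and the per-superpixel mean VER values have already been computed; all names are illustrative.

```python
import numpy as np

def smooth_and_threshold(r, colors, adjacency, theta1=10.0, theta2=0.99):
    """Propagate per-superpixel VER means (Eq. 4) and threshold at the mean.

    r         -- (n,) mean VER value of each superpixel
    colors    -- (n, 3) CIE Lab mean color of each superpixel
    adjacency -- (n, n) boolean matrix of spatial adjacency between superpixels
    """
    # Edge weights w_ij = exp(-theta1 * ||c_i - c_j||) on adjacent superpixels.
    dist = np.linalg.norm(colors[:, None, :] - colors[None, :, :], axis=2)
    W = np.where(adjacency, np.exp(-theta1 * dist), 0.0)

    # Pairwise similarity A = (D - theta2 * W)^(-1), with D = diag(row sums of W).
    D = np.diag(W.sum(axis=1))
    A = np.linalg.inv(D - theta2 * W)

    # Eq. 4: r_bar = D_A^(-1) A r, i.e., propagation with a row-normalized A.
    r_bar = (A @ r) / A.sum(axis=1)

    # Binary figure-ground labels per superpixel: threshold at the average value.
    return r_bar > r_bar.mean()

def iou(pred, gt):
    """Intersection-over-union |P ∩ Q| / |P ∪ Q| between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return np.logical_and(pred, gt).sum() / np.logical_or(pred, gt).sum()
```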
Experiments
We first describe the evaluation metric, the testing datasets, the training data, and the algorithms in comparison. Then, we show the comparison results of the relevant algorithms and our approach. Finally, we present the image segmentation and editing results of our approach. More experimental results can be found in the supplementary material.

Evaluation Metric. We adopt the intersection-over-union (IoU) to evaluate the binary mask derived from the VER. The IoU score is defined as |P ∩ Q| / |P ∪ Q|, where P denotes the machine segmentation and Q denotes the ground-truth segmentation. All algorithms are tested on an Intel i7-4770 3.40 GHz CPU, 8 GB RAM, and an NVIDIA Titan X GPU.

Datasets. The six datasets are GC50 (Rother, Kolmogorov, and Blake 2004), MSRA500, ECSSD (Shi et al. 2016), Flower17 (Nilsback and Zisserman 2006), Flower102 (Nilsback and Zisserman 2008), and CUB200 (Wah et al. 2011). MSRA500 is a subset of the MSRA10K dataset (Cheng et al. 2015), which contains 10,000 natural images. We randomly partition MSRA10K into two non-overlapping subsets of 500 and 9,500 images to create MSRA500 and MSRA9500 for testing and training, respectively. Their statistics are summarized in Table 1. Since these datasets provide pixel-level ground truths, we can compare the consistency between the ground-truth labeling and the derived segmentation of each image for VER-quality assessment.

Training Data. In training the VEGAN model, we consider using images from two different sources for comparison. The first image source is MSRA9500, derived from the MSRA10K dataset (Cheng et al. 2015). The second image source is Flickr, from which we acquire unorganized images for each task as the training data. We examine our model on three kinds of visual effects, namely, black background, color selectivo, and defocus/Bokeh.
• For MSRA9500 images, we randomly select 4,750 images and then apply the three visual effects to yield three groups of images with visual effects, i.e., {I_sample}. The other 4,750 images are hence the input images {I} for the generator to produce the edited images {I_edit} later.
• For Flickr images, we use "black background," "color selectivo," and "defocus/Bokeh" as the three query tags, and then collect 4,000 images for each query tag as the real images with visual effects. We randomly download an additional 4,000 images from Flickr as the images to be edited.

Algorithms in Comparison. We quantitatively evaluate the learned VER using the standard segmentation assessment metric (IoU). Our approach is compared with several well-known algorithms, including two semantic segmentation algorithms, three saliency-based algorithms, and two bounding-box-based algorithms, listed as follows: ResNet, VGG16 (Simonyan and Zisserman 2015), CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS and MilCutG (Wu et al. 2014), and GrabCut (Rother, Kolmogorov, and Blake 2004). The two supervised semantic segmentation algorithms, ResNet and VGG16, are pre-trained on ILSVRC-2012-CLS (Russakovsky et al. 2015) and then fine-tuned on MSRA9500 with ground-truth annotations. The bounding boxes of the two bounding-box-based algorithms are initialized around the image borders.

Quantitative Evaluation
The first part of the experiments aims to evaluate the segmentation quality of different methods. We first compare several variants of the VEGAN model to choose the best model configuration. Then, we analyze the results of the VEGAN model versus the other state-of-the-art algorithms.

VEGAN Variants. In the legend blocks of Fig. 4, we use a compound notation "TrainingData-Version" to account for the variant versions of our model. Specifically, TrainingData indicates the image source of the training data. The notation for Version contains two characters. The first character denotes the type of visual effect: "B" for black background, "C" for color selectivo, and "D" for defocus/Bokeh. The second character is the model configuration: "1" refers to the combination of base generator and base discriminator described in Our Method; "2" refers to using ResNet as the generator; "3" is the model "1" with additional skip-layers and with transpose convolution replaced by bilinear interpolation; "4" is the model "3" but with the patch-based discriminator replaced by a full-image discriminator. We report the results of the VEGAN variants in Table 6, and depict the sorted IoU scores for the test images in the Flower17 and Flower102 datasets in Fig. 4. It can be seen that all models have similar segmentation qualities no matter what image source is used for training. In Table 6 and Fig. 4, the training configuration "B4" shows relatively better performance under black background. Hence, our VEGAN model adopts the version MSRA-B4 as a representative variant for comparing with other state-of-the-art algorithms.

Unseen Images. We further analyze the differences of the learned models in dealing with unseen and seen images. We test the variants B4, C4, and D4 on MSRA500 (unseen) and the subset {I} of MSRA9500 (seen). We find that the performance of VEGAN is quite stable. The IoU score for MSRA500 is only 0.01 lower than the score for MSRA9500 {I}. Note that, even for the seen images, the ground-truth pixel annotations are unknown to the VEGAN model during training. This result indicates that VEGAN has a good generalization ability to predict segmentation for either seen or unseen images. For comparison, we do the same experiment with the two supervised algorithms, ResNet and VGG16, which are fine-tuned with MSRA9500. The mean IoU scores of ResNet are 0.86 and 0.94 for MSRA500 and MSRA9500, respectively. The mean IoU scores of VGG16 are 0.72 and 0.88 for MSRA500 and MSRA9500, respectively. The performance of both supervised techniques significantly degrades when dealing with unseen images. From the results just described, the final VEGAN model is implemented with the following setting: i) the generator uses the 9-residual-blocks version of (Johnson, Alahi, and Fei-Fei 2016); ii) the discriminator uses the full-image discriminator as in WGAN-GP (Gulrajani et al. 2017).

Results. The top portion of Table 3 summarizes the mean IoU score of each algorithm evaluated with the six testing datasets. We first compare our method with five well-known segmentation/saliency-detection techniques, including CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS/MilCutG (Wu et al.
2014), and GrabCut (Rother, Kolmogorov, and Blake 2004). The proposed VEGAN model outperforms all others on the MSRA500, ECSSD, Flower17, and Flower102 datasets, and is only slightly behind the best on the GC50 and CUB200 datasets. The bottom portion of Table 3 shows the results of two state-of-the-art supervised learning algorithms on the six testing datasets. Owing to training with the paired images and ground truths in a "supervised" manner, the two models of ResNet and VGG16 undoubtedly achieve good performance, so we treat them as the oracle models. Surprisingly, our unsupervised learning model is comparable with or even slightly better than the supervised learning algorithms on the MSRA500, Flower17, and Flower102 datasets. Fig. 9 depicts the sorted IoU scores, where a larger area under the curve means better segmentation quality. VEGAN achieves better segmentation accuracy on the two datasets. Fig. 10 shows the results generated by our VEGAN model under different configurations. Each triplet of images contains the input image, the visual-effect representation (VER), and the edited image. The results in Fig. 10 demonstrate that VEGAN can generate reasonable figure-ground segmentations and plausible edited images with the expected visual effects.

Visual-Effect Imitation as Style Transfer. Although existing GAN models cannot be directly applied to learning figure-ground segmentation, some of them are applicable to learning visual-effect transfer, e.g., CycleGAN. We use the two sets {I} and {I_sample} of MSRA9500 to train CycleGAN, and show some comparison results in Fig. 7. We find that the task of imitating black background turns out to be challenging for CycleGAN since the information in {I_sample} is too limited to derive the inverse mapping back to {I}. Moreover, CycleGAN focuses more on learning the mapping between local properties such as color or texture rather than learning how to create a globally consistent visual effect. VEGAN instead follows a systematic learning procedure to imitate the visual effect. The generator must produce a meaningful VER so that the editor can compose a plausible visual-effect image that does not contain noticeable artifacts for the discriminator to identify.

Figure 6: The edited images generated by VEGAN with respect to specific visual effects. Each image triplet from left to right: the input image, the VER, and the edited image.

Qualitative Evaluation
User Study. Fig. 8 shows the VERs obtained by testing on Flickr "bird" images using VEGAN models trained merely with Flickr "flower" images. The results suggest that the meta-learning mechanism enables VEGAN to identify unseen foreground figures based on the learned knowledge embodied in the generated VERs.

Conclusion
We characterize the two main contributions of our method as follows. First, we establish a meta-learning framework to learn a general concept of figure-ground application and an effective approach to the segmentation task. Second, we propose to cast the meta-learning as imitating relevant visual effects and develop a novel VEGAN model with the following advantages: i) Our model offers a new way to predict meaningful figure-ground segmentation from unorganized images that have no explicit pixel-level annotations. ii) The training images are easy to collect from photo-sharing websites using related tags.
iii) The editor between the generator and the discriminator enables VEGAN to decouple the compositional process of imitating visual effects and hence allows VEGAN to effectively learn the underlying representation (VER) for deriving figure-ground segmentation. We have tested three visual effects, namely "black background," "color selectivo," and "defocus/Bokeh," with extensive experiments on six datasets. For these visual effects, VEGAN can be end-to-end trained from scratch using unpaired training images that have no ground-truth labeling.

Because state-of-the-art GAN models, e.g., CycleGAN, are not explicitly designed for unsupervised learning of figure-ground segmentation, we simply conduct qualitative comparisons with CycleGAN on the task of visual-effect transfer rather than the task of figure-ground segmentation. The task of visual-effect transfer is to convert an RGB image into an edited image with the intended visual effect. To train CycleGAN for visual-effect transfer, we use the set {I} of original RGB images and the set {I_sample} of images with the expected visual effect as the two unpaired training sets. Fig. 12 shows the results of training on MSRA9500 and testing on MSRA500. Fig. 13 shows the results of training on Flickr and testing on Flickr. For CycleGAN and VEGAN, all the test images are unseen during training. The training process is done in an unsupervised manner without using any ground-truth annotations or paired images. Some comparison results are shown in Fig. 12 and Fig. 13. We observe that the task of imitating black background is actually more challenging for CycleGAN since the information of black regions in {I_sample} is limited and hence does not provide a good inverse mapping back to {I} under the setting of CycleGAN. The results of CycleGAN on imitating color selectivo and defocus/Bokeh are more comparable to those of VEGAN. However, the images generated by CycleGAN may have some distortions in color. On the other hand, VEGAN follows a well-organized procedure to learn how to imitate visual effects. The generator must produce a meaningful VER so that the editor can compose a plausible visual-effect image that does not contain noticeable artifacts for the discriminator to differentiate.

Figure 9: Comparisons with well-known algorithms, including CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS/MilCutG (Wu et al. 2014), and GrabCut (Rother, Kolmogorov, and Blake 2004), on GC50, MSRA500, ECSSD, Flower17, Flower102, and CUB200. Each sub-figure depicts the sorted IoU scores as the segmentation accuracy.

Figure 10: The edited images generated by our VEGAN models with respect to some expected visual effects (testing on MSRA500 using models MSRA-B4, MSRA-C4, and MSRA-D4, and testing on Flickr images using models Flickr-B4, Flickr-C4, and Flickr-D4). Each image triplet from left to right: the input image, the VER, and the edited image.

Figure captions (visual-effect comparison): "color selectivo" visual effect generated by VEGAN (MSRA-C4) and CycleGAN; "defocus/Bokeh" visual effect generated by VEGAN (MSRA-D4) and CycleGAN.
5,461
Computer vision problems may benefit from GAN by including an adversarial loss into, say, a typical CNN model. Many intricate tasks have been shown to gain further improvements after adding adversarial loss, such as shadow detection @cite_16 , saliency detection @cite_20 , and semantic segmentation @cite_12 . However, those training methodologies require paired images (with ground-truth) and hence lack the advantage of unsupervised learning. For the applications of modifying photo styles, some methods @cite_3 @cite_15 @cite_14 can successfully achieve image-to-image style transfer using unpaired data, but their results are limited to subjective evaluation. Moreover, those style-transfer methods cannot be directly applied to the task of unsupervised segmentation.
{ "abstract": [ "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can arrive the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions. To address the problem, we make a shared-latent space assumption and propose an unsupervised image-to-image translation framework based on Coupled GANs. We compare the proposed framework with competing approaches and present high quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on benchmark datasets. Code and additional results are available in this https URL .", "Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation, we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images from two domains. In our architecture, the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translators. Experiments on multiple image translation tasks with unlabeled data show considerable performance gain of DualGAN over a single GAN. For some tasks, DualGAN can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data.", "We introduce scGAN, a novel extension of conditional Generative Adversarial Networks (GAN) tailored for the challenging problem of shadow detection in images. 
Previous methods for shadow detection focus on learning the local appearance of shadow regions, while using limited local context reasoning in the form of pairwise potentials in a Conditional Random Field. In contrast, the proposed adversarial approach is able to model higher level relationships and global scene characteristics. We train a shadow detector that corresponds to the generator of a conditional GAN, and augment its shadow accuracy by combining the typical GAN loss with a data loss term. Due to the unbalanced distribution of the shadow labels, we use weighted cross entropy. With the standard GAN architecture, properly setting the weight for the cross entropy would require training multiple GANs, a computationally expensive grid procedure. In scGAN, we introduce an additional sensitivity parameter w to the generator. The proposed approach effectively parameterizes the loss of the trained detector. The resulting shadow detector is a single network that can generate shadow maps corresponding to different sensitivity levels, obviating the need for multiple models and a costly training procedure. We evaluate our method on the large-scale SBU and UCF shadow datasets, and observe up to 17 error reduction with respect to the previous state-of-the-art method.", "We introduce SalGAN, a deep convolutional neural network for visual saliency prediction trained with adversarial examples. The first stage of the network consists of a generator model whose weights are learned by back-propagation computed from a binary cross entropy (BCE) loss over downsampled versions of the saliency maps. The resulting prediction is processed by a discriminator network trained to solve a binary classification task between the saliency maps generated by the generative stage and the ground truth ones. Our experiments show how adversarial training allows reaching state-of-the-art performance across different metrics when combined with a widely-used loss function like BCE. Our results can be reproduced with the source code and trained models available at https: imatge-upc.github. io saliency-salgan-2017 .", "Adversarial training has been shown to produce state of the art results for generative image modeling. In this paper we propose an adversarial training approach to train semantic segmentation models. We train a convolutional semantic segmentation network along with an adversarial network that discriminates segmentation maps coming either from the ground truth or from the segmentation network. The motivation for our approach is that it can detect and correct higher-order inconsistencies between ground truth segmentation maps and the ones produced by the segmentation net. Our experiments show that our adversarial training approach leads to improved accuracy on the Stanford Background and PASCAL VOC 2012 datasets." ], "cite_N": [ "@cite_14", "@cite_3", "@cite_15", "@cite_16", "@cite_20", "@cite_12" ], "mid": [ "2962793481", "2592480533", "2608015370", "2777654136", "2583180462", "2554423077" ] }
Unsupervised Meta-learning of Figure-Ground Segmentation via Imitating Visual Effects
In figure-ground segmentation, the regions of interest are conventionally defined by the provided ground truth, which is usually in the form of pixel-level annotations. Without such supervised information from intensive labeling efforts, it is challenging to teach a system to learn what the figure and the ground should be in each image. To address this issue, we propose an unsupervised meta-learning approach that can simultaneously learn both the figure-ground concept and the corresponding image segmentation. The proposed formulation explores the inherent but often unnoticeable relatedness between performing image segmentation and creating visual effects. In particular, to visually enrich a given image with a special effect often first needs to specify the regions to be emphasized. The procedure corresponds to constructing an internal representation that guides the image editing to operate on the target image regions. For this reason, we name such an internal guidance as the Visual-Effect Representation (VER) of the image. We observe that for a majority of visual effects, their resulting VER is closely related to image segmentation. Another advantage of focusing on visual-effect images is that such data are abundant from the Internet, while pixel-wise Figure 1: Given the same image (1st column), imitating different visual effects (2nd column) can yield distinct interpretations of figure-ground segmentation (3rd column), which are derived by our method via referencing the following visual effects (from top to bottom): black background, color selectivo, and defocus/Bokeh. The learned VERs are shown in the last column, respectively. annotating large datasets for image segmentation is timeconsuming. However, in practice, we only have access to the visual-effect images, but not the VERs as well as the original images. Taking all these factors into account, we reduce the meta-problem of figure-ground segmentation to predicting the proper VER of a given image for the underlying visual effect. Owing to its data richness from the Internet, the latter task is more suitable for our intention to cast the problem within the unsupervised generative framework. Many compositional image editing tasks have the aforementioned properties. For example, to create the color selectivo effect on an image, as shown in Fig. 2, we can i) identify the target and partition the image into foreground and background layers, ii) convert the color of background layer into grayscale, and iii) combine the converted background layer with the original foreground layer to get the final result. The operation of color conversion is local-it simply "equalizes" the RGB values of pixels in certain areas. The quality of the result depends on how properly the layers are decomposed. If a part of the target region is partitioned into the background, the result might look less plausible. Unlike the local operations, to localize the proper regions for editing would require certain understanding and analysis of the global or contextual information in the whole image. In this paper, we design a GAN-based model, called Visual-Effect GAN (VEGAN), that can learn to predict the internal representation (i.e., VER) and incorporate such information into facilitating the resulting figure-ground segmentation. 
We are thus motivated to formulate the following problem: Given an unaltered RGB image as the input and an image editing task with known compositional process and local operation, we aim to predict the proper VER that guides the editing process to generate the expected visual effect and accomplishes the underlying figure-ground segmentation. We adopt a data-driven setting in which the image editing task is exemplified by a collection of image samples with the expected visual effect. The task, therefore, is to transform the original RGB input image into an output image that exhibits the same effect of the exemplified samples. To make our approach general, we assume that no corresponding pairs of input and output images are available in training, and therefore supervised learning is not applicable. That is, the training data does not include pairs of the original color images and the corresponding edited images with visual effects. The flexibility is in line with the fact that although we could fetch a lot of images with certain visual effects over the Internet, we indeed do not know what their original counterpart should look like. Under this problem formulation, several issues are of our interest and need to be addressed. First, how do we solve the problem without paired input and output images? We build on the idea of generative adversarial network and develop a new unsupervised learning mechanism (shown in Figs. 2 & 3) to learn the internal representation for creating the visual effect. The generator aims to predict the internal VER and the editor is to convert the input image into the one that has the expected visual effect. The compositional procedure and local operation are generic and can be implemented as parts of the architecture of a ConvNet. The discriminator has to judge the quality of the edited images with respect to a set of sample images that exhibit the same visual effect. The experimental results show that our model works surprisingly well to learn meaningful representation and segmentation without supervision. Second, where do we acquire the collection of sample images for illustrating the expected visual effect? Indeed, it would not make sense if we have to manually generate the labor-intensive sample images for demonstrating the expected visual effects. We show that the required sample images can be conveniently collected from the Internet. We provide a couple of scripts to explore the effectiveness of using Internet images for training our model. Notice again that, although the required sample images with visual effects are available on the Internet, their original versions are unknown. Thus supervised learning of pairwise image-toimage translation cannot be applied here. Third, what can the VER be useful for, in addition to creating visual effects? We show that, if we are able to choose a suitable visual effect, the learned VER can be used to not only establish the intended figure-ground notion but also derive the image segmentation. More precisely, as in our formulation the visual-effect representation is characterized by a real-valued response map, the result of figure-ground separation can be obtained via binarizing the VER. Therefore, it is legitimate to take the proposed problem of VER prediction as a surrogate for unsupervised image segmentation. 
We have tested the following visual effects: i) black background, which is often caused by using a flashlight; ii) color selectivo, which imposes a color highlight on the subject and keeps the background in grayscale; iii) defocus/Bokeh, which results from the depth of field of the camera lens. The second column in Fig. 1 shows the three types of visual effects. For these tasks our model can be end-to-end trained from scratch in an unsupervised manner using training data that have neither ground-truth pixel labeling nor paired images with/without visual effects. While labor-intensive pixel-level segmentations for images are hard to acquire directly via Internet search, images with those three effects are easy to collect from photo-sharing websites, such as Flickr, using related tags.

Generative Adversarial Networks

The idea of GAN (Goodfellow et al. 2014) is to generate realistic samples through the adversarial game between a generator G and a discriminator D. GANs have become popular owing to their ability to achieve unsupervised learning. However, GANs also encounter many problems such as training instability and mode collapse. Hence, later methods (Radford, Metz, and Chintala 2016; Arjovsky, Chintala, and Bottou 2017; Gulrajani et al. 2017) try to improve GAN in both implementation and theory. DCGAN (Radford, Metz, and Chintala 2016) provides a new framework that is more stable and easier to train. WGAN (Arjovsky, Chintala, and Bottou 2017) suggests using the Wasserstein distance to measure the loss. WGAN-GP (Gulrajani et al. 2017) further improves the way the Lipschitz constraint is enforced, by replacing weight clipping with a gradient penalty. To reduce the burden of G, Denton et al. (Denton et al. 2015) use a pyramid structure and Karras et al. (Karras et al. 2018) consider a progressive training methodology. Both of them divide the task into smaller sequential steps. In our case, we alleviate the burden of G by incorporating some well-defined image processing operations into the network model, e.g., converting the background color into grayscale to simulate the visual effect of color selectivo, or blurring the background to create the Bokeh effect.

Computer vision problems may benefit from GAN by including an adversarial loss into, say, a typical CNN model. Many intricate tasks have been shown to gain further improvements after adding an adversarial loss, such as shadow detection (Nguyen et al. 2017), saliency detection (Pan et al. 2017), and semantic segmentation (Luc et al. 2016). However, those training methodologies require paired images (with ground truth) and hence lack the advantage of unsupervised learning.

Figure 2: Learning and applying our model for the case of the "color selectivo" visual effect. The image collection for learning is downloaded using the Flickr API. Without explicit ground-truth pixel-level annotations being provided, our method can learn to estimate the visual-effect representations (VERs) from unpaired sets of natural RGB images and sample images with the expected visual effect. Our generative model is called Visual-Effect GAN (VEGAN), which has an additional component, the editor, between the generator and the discriminator. After the unsupervised learning, the generator is able to predict the VER of an input color image for creating the expected visual effect. The VER can be further transformed into figure-ground segmentation.

For the applications of modifying photo styles, some methods (Liu, Breuel, and Kautz 2017; Yi et al. 2017; Zhu et al. 2017) can successfully achieve image-to-image style transfer using unpaired data, but their results are limited to subjective evaluation.
Moreover, those style-transfer methods cannot be directly applied to the task of unsupervised segmentation. Since our model has to identify the category-independent subjects for applying the visual effect without using image-pair relations and ground-truth pixel-level annotations, the problem we aim to address is more general and challenging than those of the aforementioned methods.

Image Segmentation

Most of the existing segmentation methods based on deep neural networks (DNNs) treat the segmentation problem as a pixel-level classification problem (Simonyan and Zisserman 2015; Long, Shelhamer, and Darrell 2015; He et al. 2016). The impressive performance relies on a large number of high-quality annotations. Unfortunately, collecting high-quality annotations at a large scale is another challenging task since it is exceedingly labor-intensive. As a result, existing datasets just provide limited-class and limited-annotation data for training DNNs. DNN-based segmentation methods thus can only be applied to a limited subset of category-dependent segmentation tasks. To reduce the dependency on detailed annotations and to simplify the way of acquiring a sufficient number of training data, a possible solution is to train DNNs in a semi-supervised manner (Hong, Noh, and Han 2015; Souly, Spampinato, and Shah 2017) or a weakly-supervised manner (Dai, He, and Sun 2015; Kwak, Hong, and Han 2017; Pinheiro and Collobert 2015) with a small number of pixel-level annotations. In contrast, our model is trained without explicit ground-truth annotations.

Existing GAN-based segmentation methods (Nguyen et al. 2017; Luc et al. 2016) improve their segmentation performance using mainly the adversarial mechanism of GANs. The ground-truth annotations are needed in their training process for constructing the adversarial loss, and therefore they are GAN-based but not "unsupervised" from the perspective of application and problem definition. We instead adopt a meta-learning viewpoint to address figure-ground segmentation. Depending on the visual effect to be imitated, the proposed approach interprets the task of image segmentation according to the learned VER. As a result, our model indeed establishes a general setting of figure-ground segmentation, with the additional advantage of generating visual effects or photo-style manipulations.

Our Method

Given a natural RGB image I and an expected visual effect with a known compositional process and local operation, the proposed VEGAN model learns to predict the visual-effect representation (VER) of I and to generate an edited image I_edit with the expected effect. Fig. 2 illustrates the core idea. The training data come from two unpaired sets: the set {I} of original RGB images and the set {I_sample} of images with the expected visual effect. The learning process is carried out as follows: i) the Generator predicts the VER ν of the image I; ii) the Editor uses the known local operation to create an edited image I_edit possessing the expected visual effect; iii) the Discriminator judges the quality of the edited images I_edit with respect to a set {I_sample} of sample images that exhibit the same visual effect; iv) the Loss is computed for updating the whole model. Fig. 3 illustrates the components of VEGAN. Finally, we perform Binarization on the VER to quantitatively assess the outcome of figure-ground segmentation.
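The four steps above can be summarized, purely as an illustration, by the following PyTorch-style training iteration; `generator`, `editor`, `discriminator`, the optimizers, and the loss helpers are placeholders (the actual losses are given in Eqs. 2 and 3 below), and details such as performing several discriminator updates per generator update are omitted.

```python
def training_step(generator, editor, discriminator, opt_G, opt_D,
                  real_rgb, effect_samples, g_loss_fn, d_loss_fn):
    """One VEGAN-style training iteration following steps i)-iv) above."""
    # i) The generator predicts the visual-effect representation (VER).
    ver = generator(real_rgb)                  # (N, 1, H, W), values in (-1, 1)

    # ii) The editor applies the known local operation and composes the edited image.
    edited = editor(real_rgb, ver)

    # iii) The discriminator judges the edited images against genuine effect samples;
    #      the edited batch is detached so this step only updates the discriminator.
    d_loss = d_loss_fn(discriminator, edited.detach(), effect_samples)
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # iv) The generator is updated through the discriminator's score of the edited image.
    g_loss = g_loss_fn(discriminator, edited)
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return g_loss.item(), d_loss.item()
```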
Generator: The task of the generator is to predict the VER ν that can be used to partition the input image I into foreground and background layers. Our network architecture is adapted from the state-of-the-art methods (Johnson, Alahi, and Fei-Fei 2016; Zhu et al. 2017), which show impressive results on image style transfer. The architecture follows the rules suggested by DCGAN (Radford, Metz, and Chintala 2016), such as replacing pooling layers with strided convolutions. Our base architecture also uses the 9-residual-blocks version of (Johnson, Alahi, and Fei-Fei 2016). We have also tried a few slightly modified versions of the generator. The differences and details are described in the experiments.

Discriminator: The discriminator is trained to judge the quality of the edited images I_edit with respect to a set {I_sample} of sample images that exhibit the same effect. We adopt a 70 × 70 PatchGAN (Ledig et al. 2017; Li and Wand 2016; Zhu et al. 2017) as our base discriminator network. PatchGAN brings some benefits from judging multiple overlapping image patches: the scores change more smoothly and the training process is more stable. Compared with a full-image discriminator, the receptive field of the 70 × 70 PatchGAN might not capture the global context. In our work, the foreground objects are sensitive to their position in the whole image and are center-biased. If there are several objects in the image, our method tends to pick out the object closest to the center. In our experiments, the 70 × 70 PatchGAN does produce better segments along the edges, but sometimes the segments tend to be tattered. A full-image discriminator (Goodfellow et al. 2014; Radford, Metz, and Chintala 2016; Arjovsky, Chintala, and Bottou 2017; Gulrajani et al. 2017), on the other hand, could give coarser but more compact and structurally coherent segments.

Figure 3: The visual-effect representation (VER) produced by the generator indicates the strength of the visual effect at each location. The editor uses a well-defined trainable procedure (converting RGB to grayscale in this case) to create the expected visual effect. The discriminator receives the edited image I_edit and evaluates how good it is. To train VEGAN, we need unpaired images from two domains: Domain A comprises real RGB images and Domain B comprises images with the expected visual effect.

Editor: The editor is the core of the proposed model. Given an input image I and its VER ν predicted by the generator, the editor is responsible for creating a composed image I_edit containing the expected visual effect. The first step uses the well-defined procedure to perform local operations on the image and generate the expected visual effect I_effect. More specifically, in our experiments we define three basic local operations for black-background, color-selectivo, and defocus/Bokeh, which involve clamping-to-zero, grayscale conversion, and 11 × 11 average pooling, respectively.
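The three local operations are simple, differentiable image transforms. The following PyTorch sketch shows one plausible implementation; apart from the stated 11 × 11 window, choices such as the luma weights, stride, and padding are our assumptions rather than details taken from the paper.

```python
import torch
import torch.nn.functional as F

def local_effect(image, effect):
    """Apply one of the three local operations to a batch of RGB images.

    image:  tensor of shape (N, 3, H, W) with values in [0, 1].
    effect: 'black', 'selectivo', or 'bokeh'.
    """
    if effect == 'black':
        # Black background: clamp everything to zero.
        return torch.zeros_like(image)
    if effect == 'selectivo':
        # Color selectivo: replicate the luma over the three channels.
        w = image.new_tensor([0.299, 0.587, 0.114]).view(1, 3, 1, 1)
        gray = (image * w).sum(dim=1, keepdim=True)
        return gray.expand_as(image)
    if effect == 'bokeh':
        # Defocus/Bokeh: 11 x 11 average pooling with stride 1, padded to keep size.
        return F.avg_pool2d(image, kernel_size=11, stride=1, padding=5)
    raise ValueError(f'unknown effect: {effect}')
```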
The next step is to combine the edited background layer with the foreground layer to get the final editing result I_edit. An intuitive way is to use the VER ν as an alpha map α for image matting, i.e., I_edit = α ⊗ I + (1 − α) ⊗ I_effect, where α = {α_ij}, α_ij ∈ (0, 1), and ⊗ denotes element-wise multiplication. However, in our experiments, we find that it is better to have ν = {ν_ij}, ν_ij ∈ (−1, 1), with the hyperbolic tangent as the output. Hence we combine the two layers as follows:

I_edit = τ( ν ⊗ (I − I_effect) + I_effect ),  ν_ij ∈ (−1, 1),   (1)

where τ(·) truncates the values to be within (0, 255), which guarantees that I_edit can be properly rendered. Under this formulation, our model turns to learning the residual.

Loss: We follow state-of-the-art algorithms (Arjovsky, Chintala, and Bottou 2017; Gulrajani et al. 2017) to design the loss functions L_G and L_D for the generator (G) and the discriminator (D):

L_G = −E_{x∼P_g}[D(x)],   (2)

L_D = E_{x∼P_g}[D(x)] − E_{y∼P_r}[D(y)] + λ_gp E_{x̂∼P_x̂}[(‖∇_x̂ D(x̂)‖_2 − 1)^2].   (3)

We alternately update the generator by Eq. 2 and the discriminator by Eq. 3. In our formulation, x is the edited image I_edit, y is an image I_sample that exhibits the expected visual effect, P_g is the edited-image distribution, P_r is the sample-image distribution, and P_x̂ samples uniformly along straight lines between image pairs drawn from P_g and P_r. We set the learning rate, λ_gp, and the other hyper-parameters the same as in the configuration of WGAN-GP (Gulrajani et al. 2017). We keep a history of previously generated images and update the discriminator according to this history, following prior practice of storing 50 previously generated images {I_edit} in a buffer. The training images are of size 224 × 224, and the batch size is 1.
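For illustration, Eqs. 2 and 3 can be implemented with a generic WGAN-GP routine such as the sketch below; this is our own minimal version, with λ_gp following the usual WGAN-GP default rather than anything specific to VEGAN.

```python
import torch

def d_loss_wgan_gp(D, edited, samples, lambda_gp=10.0):
    """Discriminator loss of Eq. 3: Wasserstein term plus gradient penalty.

    `edited` should be detached by the caller when only D is being updated.
    """
    wass = D(edited).mean() - D(samples).mean()

    # Sample uniformly along straight lines between edited and sample images.
    eps = torch.rand(edited.size(0), 1, 1, 1, device=edited.device)
    x_hat = (eps * samples + (1 - eps) * edited).requires_grad_(True)
    grads = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    penalty = ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
    return wass + lambda_gp * penalty

def g_loss_wgan(D, edited):
    """Generator loss of Eq. 2: push the edited images toward higher scores under D."""
    return -D(edited).mean()
```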
Binarization: The VEGAN model can be treated as predicting the strength of the visual effect throughout the whole image. Although the VER provides an effective intermediate representation for generating plausible edited images with the expected visual effects, we observe that sometimes the VER might not be consistent with an object region, particularly for the Bokeh effect. Directly thresholding the VER to make a binary mask for segmentation evaluation will cause some false positives and degrade the segmentation quality. In general, we expect the segmentation derived from the visual-effect representation to be smooth within an object and distinct across object boundaries. To respect this observation, we describe, in what follows, an optional procedure that smooths the VER and enables simple thresholding to yield a good binary mask for quantitative evaluation. Notice that all the VER maps visualized in this paper are obtained without binarization.

To begin with, we over-segment (Achanta et al. 2012) an input image I into a superpixel set S and construct the corresponding superpixel-level graph G = (S, E, ω) with edge set E and weights ω. Each edge e_ij ∈ E denotes the spatial adjacency between superpixels s_i and s_j. The weighting function ω : E → [0, 1] is defined as ω_ij = e^{−θ_1 ‖c_i − c_j‖}, where c_i and c_j respectively denote the CIE Lab mean colors of two adjacent superpixels. The weight matrix of the graph is then W = [ω_ij]_{|S|×|S|}. We smooth the VER by propagating the average value of each superpixel to all other superpixels. To this end, we use r_i to denote the mean VER value of superpixel s_i, where r_i = (1/|s_i|) Σ_{(p,q)∈s_i} ν_pq and |s_i| is the number of pixels within s_i. The propagation is carried out according to the feature similarity between every superpixel pair. Given the weight matrix W, the pairwise similarity matrix A can be constructed as A = (D − θ_2 W)^{−1} I, where D is a diagonal matrix with each diagonal entry equal to the corresponding row sum of W, θ_2 is a parameter in (0, 1], and I is the |S|-by-|S| identity matrix (Zhou et al. 2003). Finally, the smoothed VER value of each superpixel can be obtained by

[r̄_1, r̄_2, ..., r̄_|S|]^T = D_A^{−1} A · [r_1, r_2, ..., r_|S|]^T,   (4)

where D_A is a diagonal matrix with each diagonal entry equal to the corresponding row sum of A, and D_A^{−1} A yields the row-normalized version of A. From Eq. 4, we see that the smoothed VER value r̄_i is determined not only by the neighboring superpixels of s_i but also by all other superpixels. To obtain the binary mask, we set the average value of {r̄_1, r̄_2, ..., r̄_|S|} as the threshold for obtaining the corresponding figure-ground segmentation of the input I. We set the parameters θ_1 = 10 and θ_2 = 0.99 in all the experiments.
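The smoothing and thresholding steps reduce to a few matrix operations once the superpixels are computed. Below is a minimal NumPy sketch, assuming a superpixel label map (e.g., from SLIC) and the per-pixel VER are given; the variable names are ours, and the Lab colors are assumed to be scaled so that the edge weights in W do not all vanish.

```python
import numpy as np

def binarize_ver(ver, labels, lab_image, theta1=10.0, theta2=0.99):
    """Smooth the VER over a superpixel graph (Eq. 4) and threshold it.

    ver:       (H, W) array of VER values in (-1, 1).
    labels:    (H, W) integer superpixel labels in [0, n).
    lab_image: (H, W, 3) image in CIE Lab space, used for the edge weights.
    """
    n = labels.max() + 1
    # Mean VER value and mean Lab color of each superpixel.
    r = np.array([ver[labels == i].mean() for i in range(n)])
    c = np.array([lab_image[labels == i].mean(axis=0) for i in range(n)])

    # Adjacency: two superpixels are neighbors if their pixels touch.
    W = np.zeros((n, n))
    adj = set()
    adj.update(zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()))
    adj.update(zip(labels[:-1, :].ravel(), labels[1:, :].ravel()))
    for i, j in adj:
        if i != j:
            W[i, j] = W[j, i] = np.exp(-theta1 * np.linalg.norm(c[i] - c[j]))

    # Pairwise similarity A = (D - theta2 * W)^(-1), then row-normalize it.
    D = np.diag(W.sum(axis=1) + 1e-12)
    A = np.linalg.inv(D - theta2 * W)
    A = A / A.sum(axis=1, keepdims=True)

    # Propagate (Eq. 4) and threshold at the mean smoothed value.
    r_smooth = A @ r
    fg_superpixels = r_smooth > r_smooth.mean()
    return fg_superpixels[labels]          # (H, W) boolean figure-ground mask
```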
Experiments

We first describe the evaluation metric, the testing datasets, the training data, and the algorithms in comparison. Then, we show the comparison results of the relevant algorithms and our approach. Finally, we present the image segmentation and editing results of our approach. More experimental results can be found in the supplementary material.

Evaluation Metric. We adopt the intersection-over-union (IoU) to evaluate the binary mask derived from the VER. The IoU score is defined as |P ∩ Q| / |P ∪ Q|, where P denotes the machine segmentation and Q denotes the ground-truth segmentation. All algorithms are tested on an Intel i7-4770 3.40 GHz CPU with 8 GB RAM and an NVIDIA Titan X GPU.

Datasets. The six datasets are GC50 (Rother, Kolmogorov, and Blake 2004), MSRA500, ECSSD (Shi et al. 2016), Flower17 (Nilsback and Zisserman 2006), Flower102 (Nilsback and Zisserman 2008), and CUB200 (Wah et al. 2011). MSRA500 is a subset of the MSRA10K dataset (Cheng et al. 2015), which contains 10,000 natural images. We randomly partition MSRA10K into two non-overlapping subsets of 500 and 9,500 images to create MSRA500 and MSRA9500 for testing and training, respectively. Their statistics are summarized in Table 1. Since these datasets provide pixel-level ground truths, we can compare the consistency between the ground-truth labeling and the derived segmentation of each image for VER-quality assessment.

Training Data. In training the VEGAN model, we consider using images from two different sources for comparison. The first image source is MSRA9500, derived from the MSRA10K dataset (Cheng et al. 2015). The second image source is Flickr, from which we acquire unorganized images for each task as the training data. We examine our model on three kinds of visual effects, namely, black background, color selectivo, and defocus/Bokeh.

• For MSRA9500 images, we randomly select 4,750 images and then apply the three visual effects to yield three groups of images with visual effects, i.e., {I_sample}. The other 4,750 images are hence the input images {I} for the generator to produce the edited images {I_edit} later.

• For Flickr images, we use "black background," "color selectivo," and "defocus/Bokeh" as the three query tags, and then collect 4,000 images for each query tag as the real images with visual effects. We randomly download an additional 4,000 images from Flickr as the images to be edited.

Algorithms in Comparison. We quantitatively evaluate the learned VER using the standard segmentation assessment metric (IoU). Our approach is compared with several well-known algorithms, including two semantic segmentation algorithms, three saliency-based algorithms, and two bounding-box based algorithms, listed as follows: ResNet, VGG16 (Simonyan and Zisserman 2015), CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS and MilCutG (Wu et al. 2014), and GrabCut (Rother, Kolmogorov, and Blake 2004).

The two supervised semantic segmentation algorithms, ResNet and VGG16, are pre-trained on ILSVRC-2012-CLS (Russakovsky et al. 2015) and then fine-tuned on MSRA9500 with ground-truth annotations. The bounding boxes of the two bounding-box based algorithms are initialized around the image borders.

Quantitative Evaluation

The first part of the experiments aims to evaluate the segmentation quality of different methods. We first compare several variants of the VEGAN model to choose the best model configuration. Then, we analyze the results of the VEGAN model versus the other state-of-the-art algorithms.

VEGAN Variants. In the legend blocks of Fig. 4, we use a compound notation "TrainingData-Version" to denote the variant versions of our model. Specifically, TrainingData indicates the image source of the training data. The notation for Version contains two characters. The first character denotes the type of visual effect: "B" for black background, "C" for color selectivo, and "D" for defocus/Bokeh. The second character is the model configuration: "1" refers to the combination of the base generator and the base discriminator described in Our Method; "2" refers to using ResNet as the generator; "3" is model "1" with additional skip layers and with transposed convolution replaced by bilinear interpolation; "4" is model "3" with the patch-based discriminator replaced by a full-image discriminator. We report the results of the VEGAN variants in Table 6, and depict the sorted IoU scores for the test images in the Flower17 and Flower102 datasets in Fig. 4. It can be seen that all models have similar segmentation quality no matter which image source is used for training. In Table 6 and Fig. 4, the training configuration "B4" shows relatively better performance under black background. Hence, our VEGAN model adopts the version MSRA-B4 as a representative variant for comparing with other state-of-the-art algorithms.

Unseen Images. We further analyze how the learned models deal with unseen and seen images. We test the variants B4, C4, and D4 on MSRA500 (unseen) and the subset {I} of MSRA9500 (seen). We find that the performance of VEGAN is quite stable: the IoU score for MSRA500 is only 0.01 lower than the score for MSRA9500 {I}. Note that, even for the seen images, the ground-truth pixel annotations are unknown to the VEGAN model during training. This result indicates that VEGAN has a good generalization ability to predict segmentation for either seen or unseen images. For comparison, we do the same experiment with the two supervised algorithms, ResNet and VGG16, which are fine-tuned with MSRA9500. The mean IoU scores of ResNet are 0.86 and 0.94 for MSRA500 and MSRA9500, respectively. The mean IoU scores of VGG16 are 0.72 and 0.88 for MSRA500 and MSRA9500, respectively. The performance of both supervised techniques significantly degrades when dealing with unseen images.

From the results just described, the final VEGAN model is implemented with the following setting: i) the generator uses the 9-residual-blocks version of (Johnson, Alahi, and Fei-Fei 2016); ii) the discriminator uses the full-image discriminator as in WGAN-GP (Gulrajani et al. 2017).

Results. The top portion of Table 3 summarizes the mean IoU score of each algorithm evaluated on the six testing datasets. We first compare our method with five well-known segmentation/saliency-detection techniques, including CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS/MilCutG (Wu et al. 2014), and GrabCut (Rother, Kolmogorov, and Blake 2004).
The proposed VEGAN model outperforms all the others on the MSRA500, ECSSD, Flower17, and Flower102 datasets, and is only slightly behind the best on the GC50 and CUB200 datasets. The bottom portion of Table 3 shows the results of two state-of-the-art supervised learning algorithms on the six testing datasets. Owing to training with paired images and ground truths in a "supervised" manner, the two models ResNet and VGG16 undoubtedly achieve good performance, so we treat them as oracle models. Surprisingly, our unsupervised learning model is comparable with or even slightly better than the supervised learning algorithms on the MSRA500, Flower17, and Flower102 datasets. Fig. 9 depicts the sorted IoU scores, where a larger area under the curve means better segmentation quality. VEGAN achieves better segmentation accuracy on the two datasets. Fig. 10 shows the results generated by our VEGAN model under different configurations. Each triplet of images contains the input image, the visual-effect representation (VER), and the edited image. The results in Fig. 10 demonstrate that VEGAN can generate reasonable figure-ground segmentations and plausible edited images with the expected visual effects.

Visual-Effect Imitation as Style Transfer. Although existing GAN models cannot be directly applied to learning figure-ground segmentation, some of them are applicable to learning visual-effect transfer, e.g., CycleGAN. We use the two sets {I} and {I_sample} of MSRA9500 to train CycleGAN, and show some comparison results in Fig. 7. We find that the task of imitating black background turns out to be challenging for CycleGAN since the information in {I_sample} is too limited to derive the inverse mapping back to {I}. Moreover, CycleGAN focuses more on learning the mapping between local properties such as color or texture rather than learning how to create a globally consistent visual effect. VEGAN instead follows a systematic learning procedure to imitate the visual effect. The generator must produce a meaningful VER so that the editor can compose a plausible visual-effect image that does not contain noticeable artifacts for the discriminator to identify.

Figure 6: The edited images generated by VEGAN with respect to specific visual effects. Each image triplet from left to right: the input image, the VER, and the edited image.

Qualitative Evaluation

User Study. Fig. 8 shows VERs obtained by testing on Flickr "bird" images using VEGAN models trained merely with Flickr "flower" images. The results suggest that the meta-learning mechanism enables VEGAN to identify unseen foreground figures based on the learned knowledge embodied in the generated VERs.

Conclusion

We characterize the two main contributions of our method as follows. First, we establish a meta-learning framework to learn a general concept of figure-ground segmentation and an effective approach to the segmentation task. Second, we propose to cast the meta-learning as imitating relevant visual effects and develop a novel VEGAN model with the following advantages: i) Our model offers a new way to predict meaningful figure-ground segmentation from unorganized images that have no explicit pixel-level annotations. ii) The training images are easy to collect from photo-sharing websites using related tags.
iii) The editor between the generator and the discriminator enables VEGAN to decouple the compositional process of imitating visual effects, and hence allows VEGAN to effectively learn the underlying representation (VER) for deriving figure-ground segmentation. We have tested three visual effects, namely "black background," "color selectivo," and "defocus/Bokeh," with extensive experiments on six datasets. For these visual effects, VEGAN can be end-to-end trained from scratch using unpaired training images that have no ground-truth labeling.

Because state-of-the-art GAN models, e.g., CycleGAN, are not explicitly designed for unsupervised learning of figure-ground segmentation, we simply conduct qualitative comparisons with CycleGAN on the task of visual-effect transfer rather than the task of figure-ground segmentation. The task of visual-effect transfer is to convert an RGB image into an edited image with the intended visual effect. To train CycleGAN for visual-effect transfer, we use the set {I} of original RGB images and the set {I_sample} of images with the expected visual effect as the two unpaired training sets. Fig. 12 shows the results of training on MSRA9500 and testing on MSRA500. Fig. 13 shows the results of training on Flickr and testing on Flickr. For CycleGAN and VEGAN, all the test images are unseen during training. The training process is done in an unsupervised manner without using any ground-truth annotations or paired images.

Some comparison results are shown in Fig. 12 and Fig. 13. We observe that the task of imitating black background is actually more challenging for CycleGAN since the information of the black regions in {I_sample} is limited and hence does not provide a good inverse mapping back to {I} under the setting of CycleGAN. The results of CycleGAN on imitating color selectivo and defocus/Bokeh are more comparable to those of VEGAN. However, the images generated by CycleGAN may have some distortions in color. On the other hand, VEGAN follows a well-organized procedure to learn how to imitate visual effects. The generator must produce a meaningful VER so that the editor can compose a plausible visual-effect image that does not contain noticeable artifacts for the discriminator to differentiate.

Figure 9: Comparisons with well-known algorithms, including CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS/MilCutG (Wu et al. 2014), and GrabCut (Rother, Kolmogorov, and Blake 2004), on GC50, MSRA500, ECSSD, Flower17, Flower102, and CUB200. Each sub-figure depicts the sorted IoU scores as the segmentation accuracy.

Figure 10: The edited images generated by our VEGAN models with respect to some expected visual effects. Each image triplet from left to right: the input image, the VER, and the edited image. Testing on MSRA500 uses the VEGAN models MSRA-B4, MSRA-C4, and MSRA-D4; testing on Flickr images uses the VEGAN models Flickr-B4, Flickr-C4, and Flickr-D4.

'Color selectivo' visual effect generated by VEGAN (MSRA-C4) and CycleGAN. 'Defocus/Bokeh' visual effect generated by VEGAN (MSRA-D4) and CycleGAN.
In figure-ground segmentation, the regions of interest are conventionally defined by the provided ground truth, which is usually in the form of pixel-level annotations. Without such supervised information from intensive labeling efforts, it is challenging to teach a system to learn what the figure and the ground should be in each image. To address this issue, we propose an unsupervised meta-learning approach that can simultaneously learn both the figure-ground concept and the corresponding image segmentation. The proposed formulation explores the inherent but often unnoticeable relatedness between performing image segmentation and creating visual effects. In particular, to visually enrich a given image with a special effect often first needs to specify the regions to be emphasized. The procedure corresponds to constructing an internal representation that guides the image editing to operate on the target image regions. For this reason, we name such an internal guidance as the Visual-Effect Representation (VER) of the image. We observe that for a majority of visual effects, their resulting VER is closely related to image segmentation. Another advantage of focusing on visual-effect images is that such data are abundant from the Internet, while pixel-wise Figure 1: Given the same image (1st column), imitating different visual effects (2nd column) can yield distinct interpretations of figure-ground segmentation (3rd column), which are derived by our method via referencing the following visual effects (from top to bottom): black background, color selectivo, and defocus/Bokeh. The learned VERs are shown in the last column, respectively. annotating large datasets for image segmentation is timeconsuming. However, in practice, we only have access to the visual-effect images, but not the VERs as well as the original images. Taking all these factors into account, we reduce the meta-problem of figure-ground segmentation to predicting the proper VER of a given image for the underlying visual effect. Owing to its data richness from the Internet, the latter task is more suitable for our intention to cast the problem within the unsupervised generative framework. Many compositional image editing tasks have the aforementioned properties. For example, to create the color selectivo effect on an image, as shown in Fig. 2, we can i) identify the target and partition the image into foreground and background layers, ii) convert the color of background layer into grayscale, and iii) combine the converted background layer with the original foreground layer to get the final result. The operation of color conversion is local-it simply "equalizes" the RGB values of pixels in certain areas. The quality of the result depends on how properly the layers are decomposed. If a part of the target region is partitioned into the background, the result might look less plausible. Unlike the local operations, to localize the proper regions for editing would require certain understanding and analysis of the global or contextual information in the whole image. In this paper, we design a GAN-based model, called Visual-Effect GAN (VEGAN), that can learn to predict the internal representation (i.e., VER) and incorporate such information into facilitating the resulting figure-ground segmentation. 
We are thus motivated to formulate the following problem: Given an unaltered RGB image as the input and an image editing task with known compositional process and local operation, we aim to predict the proper VER that guides the editing process to generate the expected visual effect and accomplishes the underlying figure-ground segmentation. We adopt a data-driven setting in which the image editing task is exemplified by a collection of image samples with the expected visual effect. The task, therefore, is to transform the original RGB input image into an output image that exhibits the same effect of the exemplified samples. To make our approach general, we assume that no corresponding pairs of input and output images are available in training, and therefore supervised learning is not applicable. That is, the training data does not include pairs of the original color images and the corresponding edited images with visual effects. The flexibility is in line with the fact that although we could fetch a lot of images with certain visual effects over the Internet, we indeed do not know what their original counterpart should look like. Under this problem formulation, several issues are of our interest and need to be addressed. First, how do we solve the problem without paired input and output images? We build on the idea of generative adversarial network and develop a new unsupervised learning mechanism (shown in Figs. 2 & 3) to learn the internal representation for creating the visual effect. The generator aims to predict the internal VER and the editor is to convert the input image into the one that has the expected visual effect. The compositional procedure and local operation are generic and can be implemented as parts of the architecture of a ConvNet. The discriminator has to judge the quality of the edited images with respect to a set of sample images that exhibit the same visual effect. The experimental results show that our model works surprisingly well to learn meaningful representation and segmentation without supervision. Second, where do we acquire the collection of sample images for illustrating the expected visual effect? Indeed, it would not make sense if we have to manually generate the labor-intensive sample images for demonstrating the expected visual effects. We show that the required sample images can be conveniently collected from the Internet. We provide a couple of scripts to explore the effectiveness of using Internet images for training our model. Notice again that, although the required sample images with visual effects are available on the Internet, their original versions are unknown. Thus supervised learning of pairwise image-toimage translation cannot be applied here. Third, what can the VER be useful for, in addition to creating visual effects? We show that, if we are able to choose a suitable visual effect, the learned VER can be used to not only establish the intended figure-ground notion but also derive the image segmentation. More precisely, as in our formulation the visual-effect representation is characterized by a real-valued response map, the result of figure-ground separation can be obtained via binarizing the VER. Therefore, it is legitimate to take the proposed problem of VER prediction as a surrogate for unsupervised image segmentation. 
We have tested the following visual effects: i) black background, which is often caused by using flashlight; ii) color selectivo, which imposes color highlight on the subject and keeps the background in grayscale; iii) defocus/Bokeh, which is due to depth of field of camera lens. The second column in Fig. 1 shows the three types of visual effects. For these tasks our model can be end-toend trained from scratch in an unsupervised manner using training data that do not have either the ground-truth pixel labeling or the paired images with/without visual effects. While labor-intensive pixel-level segmentations for images are hard to acquire directly via Internet search, images with those three effects are easy to collect from photo-sharing websites, such as Flickr, using related tags. Generative Adversarial Networks The idea of GAN (Goodfellow et al. 2014) is to generate realistic samples through the adversarial game between generator G and discriminator D. GAN becomes popular owing to its ability to achieve unsupervised learning. However, GAN also encounters many problems such as instability and model collapsing. Hence later methods (Radford, Metz, and Chintala 2016;Arjovsky, Chintala, and Bottou 2017;Gulrajani et al. 2017) try to improve GAN in both the aspects of implementation and theory. DCGAN (Radford, Metz, and Chintala 2016) provides a new framework that is more stable and easier to train. WGAN (Arjovsky, Chintala, and Bottou 2017) suggests to use Wasserstein distance to measure the loss. WGAN-GP (Gulrajani et al. 2017) further improves the way of the Lipschitz constraint being enforced, by replacing weight clipping with gradient penalty. To reduce the burden of G, Denton et al. (Denton et al. 2015) use a pyramid structure and Karras et al. (Karras et al. 2018) consider a progressive training methodology. Both of them divide the task into smaller sequential steps. In our case, we alleviate the burden of G by incorporating some well-defined image processing operations into the network model, e.g., converting background color into grayscale to simulate the visual effect of color selectivo, or blurring the background to create the Bokeh effect. Computer vision problems may benefit from GAN by including an adversarial loss into, say, a typical CNN model. Many intricate tasks have been shown to gain further improvements after adding adversarial loss, such as shadow detection (Nguyen et al. 2017), saliency detection (Pan et al. 2017), and semantic segmentation (Luc et al. 2016). However, those training methodologies require paired images (with ground-truth) and hence lack the advantage of unsupervised learning. For the applications of modifying photo styles, some methods (Liu, Breuel, and Kautz 2017;Figure 2: Learning and applying our model for the case of "color selectivo" visual effect. The image collection for learning is downloaded using Flickr API. Without explicit ground-truth pixel-level annotations being provided, our method can learn to estimate the visual-effect representations (VERs) from unpaired sets of natural RGB images and sample images with the expected visual effect. Our generative model is called Visual-Effect GAN (VEGAN), which has an additional component editor between the generator and the discriminator. After the unsupervised learning, the generator is able to predict the VER of an input color image for creating the expected visual effect. The VER can be further transformed into figure-ground segmentation. Yi et al. 2017;Zhu et al. 
2017) can successfully achieve image-to-image style transfer using unpaired data, but their results are limited to subjective evaluation. Moreover, those style-transfer methods cannot be directly applied to the task of unsupervised segmentation. Since our model has to identify the category-independent subjects for applying the visual effect without using imagepair relations and ground-truth pixel-level annotations, the problem we aim to address is more general and challenging than those of the aforementioned methods. Image Segmentation Most of the existing segmentation methods that are based on deep neural networks (DNNs) to treat the segmentation problem as a pixel-level classification problem (Simonyan and Zisserman 2015;Long, Shelhamer, and Darrell 2015;He et al. 2016). The impressive performance relies on a large number of high-quality annotations. Unfortunately, collecting high-quality annotations at a large scale is another challenging task since it is exceedingly labor-intensive. As a result, existing datasets just provide limited-class and limitedannotation data for training DNNs. DNN-based segmentation methods thus can only be applied to a limited subset of category-dependent segmentation tasks. To reduce the dependency of detailed annotations and to simplify the way of acquiring a sufficient number of training data, a possible solution is to train DNNs in a semi-supervised manner (Hong, Noh, and Han 2015;Souly, Spampinato, and Shah 2017) or a weakly-supervised manner (Dai, He, and Sun 2015;Kwak, Hong, and Han 2017;Pinheiro and Collobert 2015) with a small number of pixellevel annotations. In contrast, our model is trained without explicit ground-truth annotations. Existing GAN-based segmentation methods (Nguyen et al. 2017;Luc et al. 2016) improve their segmentation performance using mainly the adversarial mechanism of GANs. The ground-truth annotations are needed in their training process for constructing the adversarial loss, and therefore they are GAN-based but not "unsupervised" from the perspective of application and problem definition. We instead adopt a meta-learning viewpoint to address figure-ground segmentation. Depending on the visual effect to be imitated, the proposed approach interprets the task of image segmentation according to the learned VER. As a re-sult, our model indeed establishes a general setting of figureground segmentation, with the additional advantage of generating visual effects or photo-style manipulations. Our Method Given a natural RGB image I and an expected visual effect with known compositional process and local operation, the proposed VEGAN model learns to predict the visual-effect representation (VER) of I and to generate an edited image I edit with the expected effect. Fig. 2 illustrates the core idea. The training data are from two unpaired sets: the set {I} of original RGB images and the set {I sample } of images with the expected visual effect. The learning process is carried out as follows: i) Generator predicts the VER ν of the image I. ii) Editor uses the known local operation to create an edited image I edit possessing the expected visual effect. iii) Discriminator judges the quality of the edited images I edit with respect to a set {I sample } of sample images that exhibit the same visual effect. iv) Loss is computed for updating the whole model. Fig. 3 illustrates the components of VEGAN. Finally, we perform Binarization on VER for quantitatively assess the outcome of figure-ground segmentation. 
Generator: The task of the generator is to predict the VER ν that can be used to partition the input image I into foreground and background layers. Our network architecture is adapted from the state-of-the-art methods (Johnson, Alahi, and Fei-Fei 2016;Zhu et al. 2017) which show impressive results on image style transfer. The architecture follows the rules suggested by DCGAN (Radford, Metz, and Chintala 2016) such as replacing pooling layer with strided convolution. Our base architecture also uses the 9-residual-blocks version of (Johnson, Alahi, and Fei-Fei 2016). We have also tried a few slightly modified versions of the generator. The differences and details are described in the experiments. Discriminator: The discriminator is trained to judge the quality of the edited images I edit with respect to a set {I sample } of sample images that exhibit the same effect. We adopt a 70 × 70 patchGAN Ledig et al. 2017;Li and Wand 2016;Zhu et al. 2017) as our base discriminator network. PatcahGAN brings some benefits with multiple overlapping image patches. Namely, the scores The visual-effect representation (VER) produced by the generator indicates the strength of the visual effect at each location. The editor uses a well-defined trainable procedure (converting RGB to grayscale in this case) to create the expected visual effect. The discriminator receives the edited image I edit and evaluates how good it is. To train VEGAN, we need unpaired images from two domains. Domain A comprises real RGB images and Domain B comprises images with the expected visual effect. change more smoothly and the training process is more stable. Compared with a full-image discriminator, the receptive field of the 70 × 70 patchGAN might not capture the global context. In our work, the foreground objects are sensitive to their position in the whole image and are center-biased. If there are several objects in the image, our method would favor to pick out the object closest to the center. In our experiment, 70 × 70 patchGAN does produce better segments along the edges, but sometimes the segments tend to be tattered. A full-image discriminator (Goodfellow et al. 2014;Radford, Metz, and Chintala 2016;Arjovsky, Chintala, and Bottou 2017;Gulrajani et al. 2017), on the other hand, could give coarser but more compact and structural segments. Editor: The editor is the core of the proposed model. Given an input image I and its VER ν predicted by the generator, the editor is responsible for creating a composed image I edit containing the expected visual effect. The first step is based on the well-defined procedure to perform local operations on the image and generate the expected visual effect I effect . More specifically, in our experiments we define three basic local operations: black-background, colorselectivo, and defocus/Bokeh, which involve clamping-tozero, grayscale conversion, and 11 × 11 average pooling, respectively. The next step is to combine the edited background layer with the foreground layer to get the final editing result I edit . An intuitive way is to use the VER ν as an alpha map α for image matting, i.e., I edit = α ⊗ I + (1 − α) ⊗ I effect , where α = {α ij }, α ij ∈ (0, 1) and ⊗ denotes the element-wise multiplication. However, in our experiments, we find that it is better to have ν = {ν ij }, ν ij ∈ (−1, 1) with hyperbolic-tangent as the output. 
Hence we combine the two layers as follows: I edit = τ (ν ⊗ (I − I effect ) + I effect ), ν ij ∈ (−1, 1) ,(1) where τ (·) truncates the values to be within (0, 255), which guarantees the I edit can be properly rendered. Under this formulation, our model turns to learning the residual. Loss: We refer to SOTA algorithms (Arjovsky, Chintala, and Bottou 2017;Gulrajani et al. 2017) to design loss functions L G and L D for generator (G) and discriminator (D): L G = −E x∼Pg [D(x))] ,(2)L D = E x∼Pg [D(x))] − E y∼Pr [D(y)] + λ gp Ex ∼Px [( ∇xD(x) 2 − 1) 2 ] .(3) We alternately update the generator by Eq. 2 and the discriminator by Eq. 3. In our formulation, x is the edited image I edit , y is an image I sample which exhibits the expected visual effect, P g is the edited image distribution, P r is the sample image distribution, and Px is for sampling uniformly along straight lines between image pairs from P g and P r . We set the learning rate, λ gp , and other hyper-parameters the same as the configuration of WGAN-GP (Gulrajani et al. 2017). We keep the history of previously generated images and update the discriminator according to the history. We use the same way as ) to store 50 previously generated images {I edit } in a buffer. The training images are of size 224 × 224, and the batch size is 1. Binarization: The VEGAN model can be treated as aiming to predict the strength of the visual effect throughout the whole image. Although the VER provides effective intermediate representation for generating plausible edited images toward some expected visual effects, we observe that sometimes the VER might not be consistent with an object region, particularly with the Bokeh effect. Directly thresholding VER to make a binary mask for segmentation evaluation will cause some degree of false positives and degrade the segmentation quality. In general, we expect that the segmentation derived from the visual-effect representation to be smooth within an object and distinct across object boundaries. To respect this observation, we describe, in what follows, an optional procedure to obtain a smoothed VER and enable simple thresholding to yield a good binary mask for quantitative evaluation. Notice that all the VER maps visualized in this paper are obtained without binarization. To begin with, we over-segment (Achanta et al. 2012) an input image I into a superpixel set S and construct the corresponding superpixel-level graph G = (S, E, ω) with the edge set E and weights ω. Each edge e ij ∈ E denotes the spatial adjacency between superpixels s i and s j . The weighting function ω : E → [0, 1] is defined as ω ij = e −θ1 ci−cj , where c i and c j respectively denote the CIE Lab mean colors of two adjacent superpixels. Then the weight matrix of the graph is W = [ω ij ] |S|×|S| . We then smooth the VER via propagating the averaged value of each superpixel to all other superpixels. To this end, we use r i to denote the mean VER value of superpixel s i where r i = 1 |si| (i,j)∈si ν ij and |s i | is the number of pixels within s i . The propagation is carried out according to the feature similarity between every superpixel pair. Given the weight matrix W, the pairwise similarity matrix A can be constructed as A = (D − θ 2 W) −1 I, where D is a diagonal matrix with each diagonal entry equal to the row sum of W, θ 2 is a parameter in (0, 1], and I is the |S|-by-|S| identity matrix (Zhou et al. 2003). Finally, the smoothed VER value of each superpixel can be obtained by [r 1 ,r 2 , . . . 
,r |S| ] T = D −1 A A · [r 1 , r 2 , . . . , r |S| ] T ,(4) where D A is a diagonal matrix with each diagonal entry equal to the corresponding row sum of A, and D −1 A A yields the row normalized version of A. From Eq. 4, we see that the smoothed VER valuer i is determined by not only neighboring superpixels of s i but also all other superpixels. To obtain the binary mask, we set the average value of {r 1 ,r 2 , . . . ,r |S| } as the threshold for obtaining the corresponding figure-ground segmentation for the input I. We set parameters θ 1 = 10 and θ 2 = 0.99 in all the experiments. Experiments We first describe the evaluation metric, the testing datasets, the training data, and the algorithms in comparison. Then, we show the comparison results of the relevant algorithms and our approach. Finally, we present the image segmentation and editing results of our approach. More experimental results can be found in the supplementary material. Evaluation Metric. We adopt the intersection-over-union (IoU) to evaluate the binary mask derived from the VER. The IoU score, which is defined as |P Q| |P Q| , where P denotes the machine segmentation and Q denotes the ground-truth segmentation. All algorithms are tested on Intel i7-4770 3.40 GHz CPU, 8GB RAM, and NVIDIA Titan X GPU. Datasets. The six datasets are GC50 (Rother, Kolmogorov, and Blake 2004), MSRA500, ECSSD (Shi et al. 2016), Flower17 (Nilsback and Zisserman 2006), Flower102 (Nilsback and Zisserman 2008), and CUB200 (Wah et al. 2011). MSRA500 is a subset of the MSRA10K dataset (Cheng et al. 2015), which contains 10,000 natural images. We randomly partition MSRA10K into two non-overlapping subsets of 500 and 9,500 images to create MSRA500 and MSRA9500 for testing and training, respectively. Their statistics are summarized in Table 1. Since these datasets provide pixellevel ground truths, we can compare the consistency be- tween the ground-truth labeling and the derived segmentation of each image for VER-quality assessment. Training Data. In training the VEGAN model, we consider using the images from two different sources for comparison. The first image source is MSRA9500 derived from the MSRA10K dataset (Cheng et al. 2015). The second image source is Flickr, and we acquire unorganized images for each task as the training data. We examine our model on three kinds of visual effects, namely, black background, color selectivo, and defocus/Bokeh. • For MSRA9500 images, we randomly select 4,750 images and then apply the three visual effects to yield three groups of images with visual effects, i.e., {I sample }. The other 4,750 images are hence the input images {I} for the generator to produce the edited images {I edit } later. • For Flickr images, we use "black background," "color selectivo," and "defocus/Bokeh" as the three query tags, and then collect 4,000 images for each query-tag as the real images with visual effects. We randomly download additional 4,000 images from Flickr as the images to be edited. Algorithms in Comparison. We quantitatively evaluate the learned VER using the standard segmentation assessment metric (IoU). Our approach is compared with several well-known algorithms, including two semantic segmentation algorithms, three saliency based algorithms, and two bounding-box based algorithms, listed as follows: ResNet , VGG16 (Simonyan and Zisserman 2015), CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS and MilCutG (Wu et al. 2014), GrabCut (Rother, Kolmogorov, and Blake 2004). 
The two supervised semantic segmentation algorithms, ResNet and VGG16, are pre-trained on ILSVRC-2012-CLS (Russakovsky et al. 2015) and then fine-tuned on MSRA9500 with ground-truth annotations. The bounding boxes of the two bounding-box based algorithms are initialized around the image borders. Quantitative Evaluation The first part of experiment aims to evaluate the segmentation quality of different methods. We first compare several variants of the VEGAN model to choose the best model configuration. Then, we analyze the results of the VEGAN model versus the other state-of-the-art algorithms. VEGAN Variants. In the legend blocks of Fig. 4, we use a compound notation "TrainingData -Version" to account for the variant versions of our model. Specifically, Train-ingData indicates the image source of the training data. The notation for Version contains two characters. The first character denotes the type of visual effect: "B" for black background, "C" for color selectivo, and "D" for defocus/Bokeh. The second character is the model configuration: "1" refers to the combination of base-generator and base-discriminator described in Our Method; "2" refers to using ResNet as the generator; "3" is the model "1" with additional skip-layers and replacing transpose convolution with bilinear interpolation; "4" is the model "3" yet replacing patch-based discriminator with full-image discriminator. We report the results of VEGAN variants in Table 6, and depict the sorted IoU scores for the test images in Flower17 and Flower102 datasets in Fig. 4. It can be seen that all models have similar segmentation qualities no matter what image source is used for training. In Table 6 and Fig. 4, the training configuration "B4" shows relatively better performance under black background. Hence, our VEGAN model adopts the version of MSRA-B4 as a representative variant for comparing with other state-of-the-art algorithms. Unseen Images. We further analyze the differences of the learned models on dealing with unseen and seen images. We test the variants B4, C4, and D4 on MSRA500 (unseen) and the subset {I} of MSRA9500 (seen). We find that the performance of VEGAN is quite stable. The IoU score for MSRA500 is only 0.01 lower than the score for MSRA9500 {I}. Note that, even for the seen images, the ground-truth pixel annotations are unknown to the VEGAN model during training. This result indicates that VEGAN has a good generalization ability to predict segmentation for either seen or unseen images. For comparison, we do the same experiment with the two supervised algorithms, ResNet and VGG16. They are fine-tuned with MSRA9500. The mean IoU scores of ResNet are 0.86 and 0.94 for MSRA500 and MSRA9500, respectively. The mean IoU scores of VGG16 are 0.72 and 0.88 for MSRA500 and MSRA9500, respectively. The performance of both supervised techniques significantly degrades while dealing with unseen images. From the results just described, the final VEGAN model is implemented with the following setting: i) Generator uses the 9-residual-blocks version of (Johnson, Alahi, and Fei-Fei 2016). ii) Discriminator uses the full-image discriminator as WGAN-GP (Gulrajani et al. 2017). Results. The top portion of Table 3 summarizes the mean IoU score of each algorithm evaluated with the six testing datasets. We first compare our method with five well-known segmentation/saliency-detection techniques, including CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS/MilCutG (Wu et al. 
2014), and GrabCut (Rother, Kolmogorov, and Blake 2004). The proposed VEGAN model outperforms all the others on the MSRA500, ECSSD, Flower17, and Flower102 datasets, and is only slightly behind the best on the GC50 and CUB200 datasets. The bottom portion of Table 3 shows the results of two SOTA supervised learning algorithms on the six testing datasets. Owing to training with paired images and ground truths in a "supervised" manner, the two models of ResNet and VGG16 undoubtedly achieve good performance, so we treat them as the oracle models. Surprisingly, our unsupervised learning model is comparable with or even slightly better than the supervised learning algorithms on the MSRA500, Flower17, and Flower102 datasets. Fig. 9 depicts the sorted IoU scores, where a larger area under the curve means better segmentation quality. VEGAN achieves better segmentation accuracy on the two datasets. Fig. 10 shows the results generated by our VEGAN model under different configurations. Each triplet of images contains the input image, the visual-effect representation (VER), and the edited image. The results in Fig. 10 demonstrate that VEGAN can generate reasonable figure-ground segmentations and plausible edited images with the expected visual effects.

Visual-Effect Imitation as Style Transfer. Although existing GAN models cannot be directly applied to learning figure-ground segmentation, some of them are applicable to learning visual-effect transfer, e.g., CycleGAN (Zhu et al. 2017). We use the two sets {I} and {I_sample} of MSRA9500 to train CycleGAN, and show some comparison results in Fig. 7. We find that the task of imitating black background turns out to be challenging for CycleGAN since the information in {I_sample} is too limited to derive the inverse mapping back to {I}. Moreover, CycleGAN focuses more on learning the mapping between local properties such as color or texture than on learning how to create a globally consistent visual effect. VEGAN instead follows a systematic learning procedure to imitate the visual effect. The generator must produce a meaningful VER so that the editor can compose a plausible visual-effect image that does not contain noticeable artifacts for the discriminator to identify.

Table 3: ResNet (He et al. 2016) and VGG16 (Simonyan and Zisserman 2015) are pre-trained with ILSVRC-2012-CLS and then fine-tuned with MSRA9500.

Figure 6: The edited images generated by VEGAN with respect to specific visual effects. Each image triplet from left to right: the input image, the VER, and the edited image.

Qualitative Evaluation

User Study. Fig. 8 shows VERs obtained by testing on Flickr "bird" images using VEGAN models trained merely with Flickr "flower" images. The results suggest that the meta-learning mechanism enables VEGAN to identify unseen foreground figures based on the learned knowledge embodied in the generated VERs.

Conclusion

We characterize the two main contributions of our method as follows. First, we establish a meta-learning framework to learn a general concept of figure-ground segmentation and an effective approach to the segmentation task. Second, we propose to cast the meta-learning as imitating relevant visual effects and develop a novel VEGAN model with the following advantages: i) Our model offers a new way to predict meaningful figure-ground segmentation from unorganized images that have no explicit pixel-level annotations. ii) The training images are easy to collect from photo-sharing websites using related tags.
iii) The editor between the generator and the discriminator enables VEGAN to decouple the compositional process of imitating visual effects and hence allows VEGAN to effectively learn the underlying representation (VER) for deriving figure-ground segmentation. We have tested three visual effects, namely "black background," "color selectivo," and "defocus/Bokeh," with extensive experiments on six datasets. For these visual effects, VEGAN can be end-to-end trained from scratch using unpaired training images that have no ground-truth labeling.

Because state-of-the-art GAN models, e.g., CycleGAN (Zhu et al. 2017), are not explicitly designed for unsupervised learning of figure-ground segmentation, we simply conduct qualitative comparisons with CycleGAN on the task of visual-effect transfer rather than the task of figure-ground segmentation. The task of visual-effect transfer is to convert an RGB image into an edited image with the intended visual effect. To train CycleGAN for visual-effect transfer, we use the set {I} of original RGB images and the set {I_sample} of images with the expected visual effect as the two unpaired training sets. Fig. 12 shows the results of 'training on MSRA9500 and testing on MSRA500'. Fig. 13 shows the results of 'training on Flickr and testing on Flickr'. For both CycleGAN and VEGAN, all the test images are unseen during training. The training process is done in an unsupervised manner without using any ground-truth annotations or paired images. Some comparison results are shown in Fig. 12 and Fig. 13. We observe that the task of imitating black background is actually more challenging for CycleGAN since the information of black regions in {I_sample} is limited and hence does not provide a good inverse mapping back to {I} under the setting of CycleGAN. The results of CycleGAN on imitating color selectivo and defocus/Bokeh are more comparable to those of VEGAN. However, the images generated by CycleGAN may have some distortions in color. On the other hand, VEGAN follows a well-organized procedure to learn how to imitate visual effects. The generator must produce a meaningful VER so that the editor can compose a plausible visual-effect image that does not contain noticeable artifacts for the discriminator to differentiate.

Figure 9 (panels: GC50, MSRA500, ECSSD, Flower17, Flower102, CUB200): Comparisons with well-known algorithms, including CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS/MilCutG (Wu et al. 2014), and GrabCut (Rother, Kolmogorov, and Blake 2004). Each sub-figure depicts the sorted IoU scores as the segmentation accuracy.

Testing on MSRA500 using VEGAN models MSRA-B4, MSRA-C4, and MSRA-D4. Testing on Flickr images using VEGAN models Flickr-B4, Flickr-C4, and Flickr-D4.

Figure 10: The edited images generated by our VEGAN models with respect to some expected visual effects. Each image triplet from left to right: the input image, the VER, and the edited image.

'Color selectivo' visual effect generated by VEGAN (MSRA-C4) and CycleGAN. 'Defocus/Bokeh' visual effect generated by VEGAN (MSRA-D4) and CycleGAN.
5,461
1812.08442
2950827124
This paper presents a "learning to learn" approach to figure-ground image segmentation. By exploring webly-abundant images of specific visual effects, our method can effectively learn the visual-effect internal representations in an unsupervised manner and use this knowledge to differentiate the figure from the ground in an image. Specifically, we formulate the meta-learning process as a compositional image editing task that learns to imitate a certain visual effect and to derive the corresponding internal representation. Such a generative process can help instantiate the underlying figure-ground notion and enable the system to accomplish the intended image segmentation. Whereas existing generative methods are mostly tailored to image synthesis or style transfer, our approach offers a flexible learning mechanism to model a general concept of figure-ground segmentation from unorganized images that have no explicit pixel-level annotations. We validate our approach via extensive experiments on six datasets to demonstrate that the proposed model can be end-to-end trained without ground-truth pixel labeling yet outperforms existing methods on unsupervised segmentation tasks.
To reduce the dependency on detailed annotations and to simplify the acquisition of a sufficient number of training data, a possible solution is to train DNNs in a semi-supervised manner @cite_0 @cite_4 or a weakly-supervised manner @cite_17 @cite_32 @cite_5 with a small number of pixel-level annotations. In contrast, our model is trained without explicit ground-truth annotations.
{ "abstract": [ "Semantic segmentation has been a long standing challenging task in computer vision. It aims at assigning a label to each image pixel and needs a significant number of pixel-level annotated data, which is often unavailable. To address this lack of annotations, in this paper, we leverage, on one hand, a massive amount of available unlabeled or weakly labeled data, and on the other hand, non-real images created through Generative Adversarial Networks. In particular, we propose a semi-supervised framework – based on Generative Adversarial Networks (GANs) – which consists of a generator network to provide extra training examples to a multi-class classifier, acting as discriminator in the GAN framework, that assigns sample a label y from the K possible classes or marks it as a fake sample (extra class). The underlying idea is that adding large fake visual data forces real samples to be close in the feature space, which, in turn, improves multiclass pixel classification. To ensure a higher quality of generated images by GANs with consequently improved pixel classification, we extend the above framework by adding weakly annotated data, i.e., we provide class level information to the generator. We test our approaches on several challenging benchmarking visual datasets, i.e. PASCAL, SiftFLow, Stanford and CamVid, achieving competitive performance compared to state-of-the-art semantic segmentation methods.", "", "We propose a novel deep neural network architecture for semi-supervised semantic segmentation using heterogeneous annotations. Contrary to existing approaches posing semantic segmentation as a single task of region-based classification, our algorithm decouples classification and segmentation, and learns a separate network for each task. In this architecture, labels associated with an image are identified by classification network, and binary segmentation is subsequently performed for each identified label in segmentation network. The decoupled architecture enables us to learn classification and segmentation networks separately based on the training data with image-level and pixel-wise class labels, respectively. It facilitates to reduce search space for segmentation effectively by exploiting class-specific activation maps obtained from bridging layers. Our algorithm shows outstanding performance compared to other semi-supervised approaches even with much less training images with strong annotations in PASCAL VOC dataset.", "We are interested in inferring object segmentation by leveraging only object class information, and by considering only minimal priors on the object segmentation task. This problem could be viewed as a kind of weakly supervised segmentation task, and naturally fits the Multiple Instance Learning (MIL) framework: every training image is known to have (or not) at least one pixel corresponding to the image class label, and the segmentation task can be rewritten as inferring the pixels belonging to the class of the object (given one image, and its object class). We propose a Convolutional Neural Network-based model, which is constrained during training to put more weight on pixels which are important for classifying the image. We show that at test time, the model has learned to discriminate the right pixels well enough, such that it performs very well on an existing segmentation benchmark, by adding only few smoothing priors. 
Our system is trained using a subset of the Imagenet dataset and the segmentation experiments are performed on the challenging Pascal VOC dataset (with no fine-tuning of the model on Pascal VOC). Our model beats the state of the art results in weakly supervised object segmentation task by a large margin. We also compare the performance of our model with state of the art fully-supervised segmentation approaches.", "Recent leading approaches to semantic segmentation rely on deep convolutional networks trained with human-annotated, pixel-level segmentation masks. Such pixel-accurate supervision demands expensive labeling effort and limits the performance of deep networks that usually benefit from more training data. In this paper, we propose a method that achieves competitive accuracy but only requires easily obtained bounding box annotations. The basic idea is to iterate between automatically generating region proposals and training convolutional networks. These two steps gradually recover segmentation masks for improving the networks, and vise versa. Our method, called BoxSup, produces competitive results supervised by boxes only, on par with strong baselines fully supervised by masks under the same setting. By leveraging a large amount of bounding boxes, BoxSup further unleashes the power of deep convolutional networks and yields state-of-the-art results on PASCAL VOC 2012 and PASCAL-CONTEXT." ], "cite_N": [ "@cite_4", "@cite_32", "@cite_0", "@cite_5", "@cite_17" ], "mid": [ "2778764040", "2605214291", "2949847866", "1945608308", "2949086864" ] }
Unsupervised Meta-learning of Figure-Ground Segmentation via Imitating Visual Effects
In figure-ground segmentation, the regions of interest are conventionally defined by the provided ground truth, which is usually in the form of pixel-level annotations. Without such supervised information from intensive labeling efforts, it is challenging to teach a system to learn what the figure and the ground should be in each image. To address this issue, we propose an unsupervised meta-learning approach that can simultaneously learn both the figure-ground concept and the corresponding image segmentation. The proposed formulation explores the inherent but often unnoticeable relatedness between performing image segmentation and creating visual effects. In particular, visually enriching a given image with a special effect often first requires specifying the regions to be emphasized. The procedure corresponds to constructing an internal representation that guides the image editing to operate on the target image regions. For this reason, we name such internal guidance the Visual-Effect Representation (VER) of the image. We observe that for a majority of visual effects, the resulting VER is closely related to image segmentation. Another advantage of focusing on visual-effect images is that such data are abundant on the Internet, while pixel-wise annotating large datasets for image segmentation is time-consuming.

Figure 1: Given the same image (1st column), imitating different visual effects (2nd column) can yield distinct interpretations of figure-ground segmentation (3rd column), which are derived by our method via referencing the following visual effects (from top to bottom): black background, color selectivo, and defocus/Bokeh. The learned VERs are shown in the last column, respectively.

However, in practice, we only have access to the visual-effect images, but not to the VERs or the original images. Taking all these factors into account, we reduce the meta-problem of figure-ground segmentation to predicting the proper VER of a given image for the underlying visual effect. Owing to its data richness from the Internet, the latter task is more suitable for our intention to cast the problem within the unsupervised generative framework. Many compositional image editing tasks have the aforementioned properties. For example, to create the color selectivo effect on an image, as shown in Fig. 2, we can i) identify the target and partition the image into foreground and background layers, ii) convert the color of the background layer into grayscale, and iii) combine the converted background layer with the original foreground layer to get the final result. The operation of color conversion is local: it simply "equalizes" the RGB values of pixels in certain areas. The quality of the result depends on how properly the layers are decomposed. If a part of the target region is partitioned into the background, the result might look less plausible. Unlike the local operations, localizing the proper regions for editing requires a certain understanding and analysis of the global or contextual information in the whole image. In this paper, we design a GAN-based model, called Visual-Effect GAN (VEGAN), that can learn to predict the internal representation (i.e., VER) and incorporate such information into facilitating the resulting figure-ground segmentation.
We are thus motivated to formulate the following problem: Given an unaltered RGB image as the input and an image editing task with known compositional process and local operation, we aim to predict the proper VER that guides the editing process to generate the expected visual effect and accomplishes the underlying figure-ground segmentation. We adopt a data-driven setting in which the image editing task is exemplified by a collection of image samples with the expected visual effect. The task, therefore, is to transform the original RGB input image into an output image that exhibits the same effect of the exemplified samples. To make our approach general, we assume that no corresponding pairs of input and output images are available in training, and therefore supervised learning is not applicable. That is, the training data does not include pairs of the original color images and the corresponding edited images with visual effects. The flexibility is in line with the fact that although we could fetch a lot of images with certain visual effects over the Internet, we indeed do not know what their original counterpart should look like. Under this problem formulation, several issues are of our interest and need to be addressed. First, how do we solve the problem without paired input and output images? We build on the idea of generative adversarial network and develop a new unsupervised learning mechanism (shown in Figs. 2 & 3) to learn the internal representation for creating the visual effect. The generator aims to predict the internal VER and the editor is to convert the input image into the one that has the expected visual effect. The compositional procedure and local operation are generic and can be implemented as parts of the architecture of a ConvNet. The discriminator has to judge the quality of the edited images with respect to a set of sample images that exhibit the same visual effect. The experimental results show that our model works surprisingly well to learn meaningful representation and segmentation without supervision. Second, where do we acquire the collection of sample images for illustrating the expected visual effect? Indeed, it would not make sense if we have to manually generate the labor-intensive sample images for demonstrating the expected visual effects. We show that the required sample images can be conveniently collected from the Internet. We provide a couple of scripts to explore the effectiveness of using Internet images for training our model. Notice again that, although the required sample images with visual effects are available on the Internet, their original versions are unknown. Thus supervised learning of pairwise image-toimage translation cannot be applied here. Third, what can the VER be useful for, in addition to creating visual effects? We show that, if we are able to choose a suitable visual effect, the learned VER can be used to not only establish the intended figure-ground notion but also derive the image segmentation. More precisely, as in our formulation the visual-effect representation is characterized by a real-valued response map, the result of figure-ground separation can be obtained via binarizing the VER. Therefore, it is legitimate to take the proposed problem of VER prediction as a surrogate for unsupervised image segmentation. 
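As a toy illustration of this surrogate view (our own simplification; the quantitative evaluation later uses a smoothed, superpixel-level threshold), a real-valued VER can be turned into a figure-ground mask by simple thresholding.

```python
# Toy NumPy sketch: binarize a real-valued VER map at its mean response.
# This is a simplification for exposition; see the Binarization paragraph for
# the smoothed, superpixel-level procedure used in the quantitative evaluation.
import numpy as np

def binarize_ver(ver):
    # ver: 2-D array of per-pixel visual-effect responses; True marks the figure.
    return ver > ver.mean()
```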
We have tested the following visual effects: i) black background, which is often caused by using a flashlight; ii) color selectivo, which imposes a color highlight on the subject and keeps the background in grayscale; iii) defocus/Bokeh, which is due to the depth of field of the camera lens. The second column in Fig. 1 shows the three types of visual effects. For these tasks our model can be end-to-end trained from scratch in an unsupervised manner using training data that have neither the ground-truth pixel labeling nor paired images with/without visual effects. While labor-intensive pixel-level segmentations for images are hard to acquire directly via Internet search, images with those three effects are easy to collect from photo-sharing websites, such as Flickr, using related tags.

Generative Adversarial Networks

The idea of GAN (Goodfellow et al. 2014) is to generate realistic samples through the adversarial game between a generator G and a discriminator D. GAN has become popular owing to its ability to achieve unsupervised learning. However, GAN also encounters many problems such as training instability and mode collapse. Hence later methods (Radford, Metz, and Chintala 2016; Arjovsky, Chintala, and Bottou 2017; Gulrajani et al. 2017) try to improve GAN in both implementation and theory. DCGAN (Radford, Metz, and Chintala 2016) provides a new framework that is more stable and easier to train. WGAN (Arjovsky, Chintala, and Bottou 2017) suggests using the Wasserstein distance to measure the loss. WGAN-GP (Gulrajani et al. 2017) further improves the way the Lipschitz constraint is enforced by replacing weight clipping with a gradient penalty. To reduce the burden of G, Denton et al. (Denton et al. 2015) use a pyramid structure and Karras et al. (Karras et al. 2018) consider a progressive training methodology. Both of them divide the task into smaller sequential steps. In our case, we alleviate the burden of G by incorporating some well-defined image processing operations into the network model, e.g., converting the background color into grayscale to simulate the visual effect of color selectivo, or blurring the background to create the Bokeh effect. Computer vision problems may benefit from GAN by including an adversarial loss into, say, a typical CNN model. Many intricate tasks have been shown to gain further improvements after adding an adversarial loss, such as shadow detection (Nguyen et al. 2017), saliency detection (Pan et al. 2017), and semantic segmentation (Luc et al. 2016). However, those training methodologies require paired images (with ground truth) and hence lack the advantage of unsupervised learning.

Figure 2: Learning and applying our model for the case of the "color selectivo" visual effect. The image collection for learning is downloaded using the Flickr API. Without explicit ground-truth pixel-level annotations being provided, our method can learn to estimate the visual-effect representations (VERs) from unpaired sets of natural RGB images and sample images with the expected visual effect. Our generative model is called Visual-Effect GAN (VEGAN), which has an additional component, the editor, between the generator and the discriminator. After the unsupervised learning, the generator is able to predict the VER of an input color image for creating the expected visual effect. The VER can be further transformed into figure-ground segmentation.

For the applications of modifying photo styles, some methods (Liu, Breuel, and Kautz 2017; Yi et al. 2017; Zhu et al.
2017) can successfully achieve image-to-image style transfer using unpaired data, but their results are limited to subjective evaluation. Moreover, those style-transfer methods cannot be directly applied to the task of unsupervised segmentation. Since our model has to identify the category-independent subjects for applying the visual effect without using image-pair relations and ground-truth pixel-level annotations, the problem we aim to address is more general and challenging than those of the aforementioned methods.

Image Segmentation

Most existing segmentation methods based on deep neural networks (DNNs) treat the segmentation problem as a pixel-level classification problem (Simonyan and Zisserman 2015; Long, Shelhamer, and Darrell 2015; He et al. 2016). The impressive performance relies on a large number of high-quality annotations. Unfortunately, collecting high-quality annotations at a large scale is another challenging task since it is exceedingly labor-intensive. As a result, existing datasets provide only limited-class and limited-annotation data for training DNNs. DNN-based segmentation methods thus can only be applied to a limited subset of category-dependent segmentation tasks. To reduce the dependency on detailed annotations and to simplify the acquisition of a sufficient number of training data, a possible solution is to train DNNs in a semi-supervised manner (Hong, Noh, and Han 2015; Souly, Spampinato, and Shah 2017) or a weakly-supervised manner (Dai, He, and Sun 2015; Kwak, Hong, and Han 2017; Pinheiro and Collobert 2015) with a small number of pixel-level annotations. In contrast, our model is trained without explicit ground-truth annotations. Existing GAN-based segmentation methods (Nguyen et al. 2017; Luc et al. 2016) improve their segmentation performance using mainly the adversarial mechanism of GANs. The ground-truth annotations are needed in their training process for constructing the adversarial loss, and therefore they are GAN-based but not "unsupervised" from the perspective of application and problem definition. We instead adopt a meta-learning viewpoint to address figure-ground segmentation. Depending on the visual effect to be imitated, the proposed approach interprets the task of image segmentation according to the learned VER. As a result, our model indeed establishes a general setting of figure-ground segmentation, with the additional advantage of generating visual effects or photo-style manipulations.

Our Method

Given a natural RGB image I and an expected visual effect with a known compositional process and local operation, the proposed VEGAN model learns to predict the visual-effect representation (VER) of I and to generate an edited image I_edit with the expected effect. Fig. 2 illustrates the core idea. The training data come from two unpaired sets: the set {I} of original RGB images and the set {I_sample} of images with the expected visual effect. The learning process is carried out as follows: i) the Generator predicts the VER ν of the image I; ii) the Editor uses the known local operation to create an edited image I_edit possessing the expected visual effect; iii) the Discriminator judges the quality of the edited images I_edit with respect to a set {I_sample} of sample images that exhibit the same visual effect; iv) the Loss is computed for updating the whole model. Fig. 3 illustrates the components of VEGAN. Finally, we perform Binarization on the VER to quantitatively assess the outcome of figure-ground segmentation.
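A schematic sketch of one training iteration following steps i)-iv) above might look as follows; the module names, optimizer handling, and the omission of the gradient-penalty term are our simplifications for exposition, not the authors' code.

```python
# Schematic PyTorch sketch of one VEGAN training iteration (steps i-iv above).
# generator, editor, and critic are placeholder torch modules; the WGAN-GP
# gradient penalty is omitted here for brevity.
def train_step(img, sample, generator, editor, critic, opt_g, opt_c):
    edited = editor(img, generator(img))      # i) predict the VER, ii) compose I_edit
    opt_c.zero_grad()                         # iii) critic judges edited vs. sample images
    d_loss = critic(edited.detach()).mean() - critic(sample).mean()
    d_loss.backward()
    opt_c.step()
    opt_g.zero_grad()                         # iv) update the generator through the editor
    g_loss = -critic(editor(img, generator(img))).mean()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```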
Figure 3: The visual-effect representation (VER) produced by the generator indicates the strength of the visual effect at each location. The editor uses a well-defined trainable procedure (converting RGB to grayscale in this case) to create the expected visual effect. The discriminator receives the edited image I_edit and evaluates how good it is. To train VEGAN, we need unpaired images from two domains: Domain A comprises real RGB images and Domain B comprises images with the expected visual effect.

Generator: The task of the generator is to predict the VER ν that can be used to partition the input image I into foreground and background layers. Our network architecture is adapted from the state-of-the-art methods (Johnson, Alahi, and Fei-Fei 2016; Zhu et al. 2017) which show impressive results on image style transfer. The architecture follows the rules suggested by DCGAN (Radford, Metz, and Chintala 2016), such as replacing pooling layers with strided convolutions. Our base architecture also uses the 9-residual-blocks version of (Johnson, Alahi, and Fei-Fei 2016). We have also tried a few slightly modified versions of the generator. The differences and details are described in the experiments.

Discriminator: The discriminator is trained to judge the quality of the edited images I_edit with respect to a set {I_sample} of sample images that exhibit the same effect. We adopt a 70 × 70 patchGAN (Ledig et al. 2017; Li and Wand 2016; Zhu et al. 2017) as our base discriminator network. The patchGAN brings some benefits with multiple overlapping image patches: namely, the scores change more smoothly and the training process is more stable. Compared with a full-image discriminator, the receptive field of the 70 × 70 patchGAN might not capture the global context. In our work, the foreground objects are sensitive to their position in the whole image and are center-biased. If there are several objects in the image, our method would favor picking out the object closest to the center. In our experiments, the 70 × 70 patchGAN does produce better segments along the edges, but sometimes the segments tend to be tattered. A full-image discriminator (Goodfellow et al. 2014; Radford, Metz, and Chintala 2016; Arjovsky, Chintala, and Bottou 2017; Gulrajani et al. 2017), on the other hand, could give coarser but more compact and structural segments.

Editor: The editor is the core of the proposed model. Given an input image I and its VER ν predicted by the generator, the editor is responsible for creating a composed image I_edit containing the expected visual effect. The first step is based on the well-defined procedure to perform local operations on the image and generate the expected visual effect I_effect. More specifically, in our experiments we define three basic local operations: black background, color selectivo, and defocus/Bokeh, which involve clamping-to-zero, grayscale conversion, and 11 × 11 average pooling, respectively. The next step is to combine the edited background layer with the foreground layer to get the final editing result I_edit. An intuitive way is to use the VER ν as an alpha map α for image matting, i.e., $I_{\text{edit}} = \alpha \otimes I + (1 - \alpha) \otimes I_{\text{effect}}$, where $\alpha = \{\alpha_{ij}\}$, $\alpha_{ij} \in (0, 1)$, and $\otimes$ denotes element-wise multiplication. However, in our experiments, we find that it is better to have $\nu = \{\nu_{ij}\}$, $\nu_{ij} \in (-1, 1)$, with a hyperbolic tangent as the output.
Hence we combine the two layers as follows:

$I_{\text{edit}} = \tau\big(\nu \otimes (I - I_{\text{effect}}) + I_{\text{effect}}\big), \quad \nu_{ij} \in (-1, 1),$  (1)

where $\tau(\cdot)$ truncates the values to be within $(0, 255)$, which guarantees that $I_{\text{edit}}$ can be properly rendered. Under this formulation, our model turns to learning the residual.

Loss: We refer to state-of-the-art algorithms (Arjovsky, Chintala, and Bottou 2017; Gulrajani et al. 2017) to design the loss functions $L_G$ and $L_D$ for the generator ($G$) and the discriminator ($D$):

$L_G = -\mathbb{E}_{x \sim P_g}[D(x)],$  (2)

$L_D = \mathbb{E}_{x \sim P_g}[D(x)] - \mathbb{E}_{y \sim P_r}[D(y)] + \lambda_{gp}\,\mathbb{E}_{\hat{x} \sim P_{\hat{x}}}\big[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2\big].$  (3)

We alternately update the generator by Eq. 2 and the discriminator by Eq. 3. In our formulation, $x$ is the edited image $I_{\text{edit}}$, $y$ is an image $I_{\text{sample}}$ that exhibits the expected visual effect, $P_g$ is the edited-image distribution, $P_r$ is the sample-image distribution, and $P_{\hat{x}}$ samples uniformly along straight lines between image pairs drawn from $P_g$ and $P_r$. We set the learning rate, $\lambda_{gp}$, and the other hyper-parameters to the same configuration as WGAN-GP (Gulrajani et al. 2017). We keep a history of previously generated images and update the discriminator according to this history; following prior work, we store 50 previously generated images $\{I_{\text{edit}}\}$ in a buffer. The training images are of size 224 × 224, and the batch size is 1.
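For concreteness, the three local operations that produce I_effect (clamping-to-zero, grayscale conversion, and 11 × 11 average pooling, as defined in the Editor paragraph above) could be sketched as follows; the exact grayscale weighting and border handling are our assumptions, not the authors' implementation.

```python
# Illustrative NumPy versions of the editor's local operations on an (H, W, 3)
# image; grayscale weighting and edge padding are assumptions for exposition.
import numpy as np

def black_background(img):
    return np.zeros_like(img)                      # clamping-to-zero

def color_selectivo(img):
    gray = img.mean(axis=2, keepdims=True)         # naive grayscale conversion
    return np.repeat(gray, 3, axis=2)

def defocus(img, k=11):
    pad = k // 2                                   # 11 x 11 average pooling (box blur)
    padded = np.pad(img.astype(np.float64), ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)
```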
Binarization: The VEGAN model can be treated as aiming to predict the strength of the visual effect throughout the whole image. Although the VER provides an effective intermediate representation for generating plausible edited images toward some expected visual effects, we observe that sometimes the VER might not be consistent with an object region, particularly with the Bokeh effect. Directly thresholding the VER to make a binary mask for segmentation evaluation will cause some degree of false positives and degrade the segmentation quality. In general, we expect the segmentation derived from the visual-effect representation to be smooth within an object and distinct across object boundaries. To respect this observation, we describe, in what follows, an optional procedure to obtain a smoothed VER and enable simple thresholding to yield a good binary mask for quantitative evaluation. Notice that all the VER maps visualized in this paper are obtained without binarization. To begin with, we over-segment (Achanta et al. 2012) an input image I into a superpixel set S and construct the corresponding superpixel-level graph $G = (S, E, \omega)$ with the edge set E and weights ω. Each edge $e_{ij} \in E$ denotes the spatial adjacency between superpixels $s_i$ and $s_j$. The weighting function $\omega: E \to [0, 1]$ is defined as $\omega_{ij} = e^{-\theta_1 \|c_i - c_j\|}$, where $c_i$ and $c_j$ respectively denote the CIE Lab mean colors of two adjacent superpixels. The weight matrix of the graph is then $W = [\omega_{ij}]_{|S| \times |S|}$. We then smooth the VER by propagating the averaged value of each superpixel to all other superpixels. To this end, we use $r_i$ to denote the mean VER value of superpixel $s_i$, where $r_i = \frac{1}{|s_i|}\sum_{(i,j) \in s_i} \nu_{ij}$ and $|s_i|$ is the number of pixels within $s_i$. The propagation is carried out according to the feature similarity between every superpixel pair. Given the weight matrix W, the pairwise similarity matrix A can be constructed as $A = (D - \theta_2 W)^{-1} I$, where $D$ is a diagonal matrix with each diagonal entry equal to the row sum of $W$, $\theta_2$ is a parameter in $(0, 1]$, and $I$ is the $|S|$-by-$|S|$ identity matrix (Zhou et al. 2003). Finally, the smoothed VER value of each superpixel can be obtained by

$[\bar{r}_1, \bar{r}_2, \ldots, \bar{r}_{|S|}]^T = D_A^{-1} A \cdot [r_1, r_2, \ldots, r_{|S|}]^T,$  (4)

where $D_A$ is a diagonal matrix with each diagonal entry equal to the corresponding row sum of $A$, and $D_A^{-1} A$ yields the row-normalized version of $A$. From Eq. 4, we see that the smoothed VER value $\bar{r}_i$ is determined not only by the neighboring superpixels of $s_i$ but also by all other superpixels. To obtain the binary mask, we set the average value of $\{\bar{r}_1, \bar{r}_2, \ldots, \bar{r}_{|S|}\}$ as the threshold for obtaining the corresponding figure-ground segmentation of the input I. We set the parameters $\theta_1 = 10$ and $\theta_2 = 0.99$ in all the experiments.

Experiments

We first describe the evaluation metric, the testing datasets, the training data, and the algorithms in comparison. Then, we show the comparison results of the relevant algorithms and our approach. Finally, we present the image segmentation and editing results of our approach. More experimental results can be found in the supplementary material. Evaluation Metric. We adopt the intersection-over-union (IoU) to evaluate the binary mask derived from the VER. The IoU score is defined as $|P \cap Q| / |P \cup Q|$, where P denotes the machine segmentation and Q denotes the ground-truth segmentation. All algorithms are tested on an Intel i7-4770 3.40 GHz CPU, 8 GB RAM, and an NVIDIA Titan X GPU. Datasets. The six datasets are GC50 (Rother, Kolmogorov, and Blake 2004), MSRA500, ECSSD (Shi et al. 2016), Flower17 (Nilsback and Zisserman 2006), Flower102 (Nilsback and Zisserman 2008), and CUB200 (Wah et al. 2011). MSRA500 is a subset of the MSRA10K dataset (Cheng et al. 2015), which contains 10,000 natural images. We randomly partition MSRA10K into two non-overlapping subsets of 500 and 9,500 images to create MSRA500 and MSRA9500 for testing and training, respectively. Their statistics are summarized in Table 1. Since these datasets provide pixel-level ground truths, we can compare the consistency between the ground-truth labeling and the derived segmentation of each image for VER-quality assessment. Training Data. In training the VEGAN model, we consider using images from two different sources for comparison. The first image source is MSRA9500, derived from the MSRA10K dataset (Cheng et al. 2015). The second image source is Flickr, from which we acquire unorganized images for each task as the training data. We examine our model on three kinds of visual effects, namely black background, color selectivo, and defocus/Bokeh.
• For MSRA9500 images, we randomly select 4,750 images and then apply the three visual effects to yield three groups of images with visual effects, i.e., {I_sample}. The other 4,750 images are hence the input images {I} for the generator to produce the edited images {I_edit} later.
• For Flickr images, we use "black background," "color selectivo," and "defocus/Bokeh" as the three query tags, and then collect 4,000 images for each query tag as the real images with visual effects. We randomly download an additional 4,000 images from Flickr as the images to be edited.
Algorithms in Comparison. We quantitatively evaluate the learned VER using the standard segmentation assessment metric (IoU). Our approach is compared with several well-known algorithms, including two semantic segmentation algorithms, three saliency-based algorithms, and two bounding-box based algorithms, listed as follows: ResNet (He et al. 2016), VGG16 (Simonyan and Zisserman 2015), CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS and MilCutG (Wu et al. 2014), and GrabCut (Rother, Kolmogorov, and Blake 2004).
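To make the smoothing of Eq. 4 and the final thresholding concrete, here is a hedged NumPy sketch; it assumes a precomputed superpixel labeling and, for brevity, builds a dense color-affinity matrix rather than restricting the weights to spatially adjacent superpixels as described above.

```python
# Illustrative NumPy sketch of the superpixel-level VER smoothing (Eq. 4) and
# mean-threshold binarization. The dense affinity matrix is a simplifying
# assumption, not the authors' implementation.
import numpy as np

def smooth_and_binarize(ver, labels, mean_colors, theta1=10.0, theta2=0.99):
    # ver: (H, W) VER map; labels: (H, W) superpixel indices in [0, S);
    # mean_colors: (S, 3) CIE Lab mean color of each superpixel.
    S = mean_colors.shape[0]
    r = np.array([ver[labels == i].mean() for i in range(S)])   # mean VER per superpixel
    dist = np.linalg.norm(mean_colors[:, None, :] - mean_colors[None, :, :], axis=2)
    W = np.exp(-theta1 * dist)                                   # w_ij = exp(-theta1 * ||c_i - c_j||)
    D = np.diag(W.sum(axis=1))
    A = np.linalg.inv(D - theta2 * W)                            # A = (D - theta2 * W)^{-1}
    A = A / A.sum(axis=1, keepdims=True)                         # row-normalize: D_A^{-1} A
    r_smooth = A @ r                                             # Eq. 4
    mask = r_smooth[labels] > r_smooth.mean()                    # threshold at the mean value
    return r_smooth, mask
```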
The two supervised semantic segmentation algorithms, ResNet and VGG16, are pre-trained on ILSVRC-2012-CLS (Russakovsky et al. 2015) and then fine-tuned on MSRA9500 with ground-truth annotations. The bounding boxes of the two bounding-box based algorithms are initialized around the image borders. Quantitative Evaluation The first part of experiment aims to evaluate the segmentation quality of different methods. We first compare several variants of the VEGAN model to choose the best model configuration. Then, we analyze the results of the VEGAN model versus the other state-of-the-art algorithms. VEGAN Variants. In the legend blocks of Fig. 4, we use a compound notation "TrainingData -Version" to account for the variant versions of our model. Specifically, Train-ingData indicates the image source of the training data. The notation for Version contains two characters. The first character denotes the type of visual effect: "B" for black background, "C" for color selectivo, and "D" for defocus/Bokeh. The second character is the model configuration: "1" refers to the combination of base-generator and base-discriminator described in Our Method; "2" refers to using ResNet as the generator; "3" is the model "1" with additional skip-layers and replacing transpose convolution with bilinear interpolation; "4" is the model "3" yet replacing patch-based discriminator with full-image discriminator. We report the results of VEGAN variants in Table 6, and depict the sorted IoU scores for the test images in Flower17 and Flower102 datasets in Fig. 4. It can be seen that all models have similar segmentation qualities no matter what image source is used for training. In Table 6 and Fig. 4, the training configuration "B4" shows relatively better performance under black background. Hence, our VEGAN model adopts the version of MSRA-B4 as a representative variant for comparing with other state-of-the-art algorithms. Unseen Images. We further analyze the differences of the learned models on dealing with unseen and seen images. We test the variants B4, C4, and D4 on MSRA500 (unseen) and the subset {I} of MSRA9500 (seen). We find that the performance of VEGAN is quite stable. The IoU score for MSRA500 is only 0.01 lower than the score for MSRA9500 {I}. Note that, even for the seen images, the ground-truth pixel annotations are unknown to the VEGAN model during training. This result indicates that VEGAN has a good generalization ability to predict segmentation for either seen or unseen images. For comparison, we do the same experiment with the two supervised algorithms, ResNet and VGG16. They are fine-tuned with MSRA9500. The mean IoU scores of ResNet are 0.86 and 0.94 for MSRA500 and MSRA9500, respectively. The mean IoU scores of VGG16 are 0.72 and 0.88 for MSRA500 and MSRA9500, respectively. The performance of both supervised techniques significantly degrades while dealing with unseen images. From the results just described, the final VEGAN model is implemented with the following setting: i) Generator uses the 9-residual-blocks version of (Johnson, Alahi, and Fei-Fei 2016). ii) Discriminator uses the full-image discriminator as WGAN-GP (Gulrajani et al. 2017). Results. The top portion of Table 3 summarizes the mean IoU score of each algorithm evaluated with the six testing datasets. We first compare our method with five well-known segmentation/saliency-detection techniques, including CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS/MilCutG (Wu et al. 
2014), and GrabCut (Rother, Kolmogorov, and Blake 2004). The proposed VEGAN model outperforms all the others on the MSRA500, ECSSD, Flower17, and Flower102 datasets, and is only slightly behind the best on the GC50 and CUB200 datasets. The bottom portion of Table 3 shows the results of two SOTA supervised learning algorithms on the six testing datasets. Owing to training with paired images and ground truths in a "supervised" manner, the two models of ResNet and VGG16 undoubtedly achieve good performance, so we treat them as the oracle models. Surprisingly, our unsupervised learning model is comparable with or even slightly better than the supervised learning algorithms on the MSRA500, Flower17, and Flower102 datasets. Fig. 9 depicts the sorted IoU scores, where a larger area under the curve means better segmentation quality. VEGAN achieves better segmentation accuracy on the two datasets. Fig. 10 shows the results generated by our VEGAN model under different configurations. Each triplet of images contains the input image, the visual-effect representation (VER), and the edited image. The results in Fig. 10 demonstrate that VEGAN can generate reasonable figure-ground segmentations and plausible edited images with the expected visual effects.

Visual-Effect Imitation as Style Transfer. Although existing GAN models cannot be directly applied to learning figure-ground segmentation, some of them are applicable to learning visual-effect transfer, e.g., CycleGAN (Zhu et al. 2017). We use the two sets {I} and {I_sample} of MSRA9500 to train CycleGAN, and show some comparison results in Fig. 7. We find that the task of imitating black background turns out to be challenging for CycleGAN since the information in {I_sample} is too limited to derive the inverse mapping back to {I}. Moreover, CycleGAN focuses more on learning the mapping between local properties such as color or texture than on learning how to create a globally consistent visual effect. VEGAN instead follows a systematic learning procedure to imitate the visual effect. The generator must produce a meaningful VER so that the editor can compose a plausible visual-effect image that does not contain noticeable artifacts for the discriminator to identify.

Table 3: ResNet (He et al. 2016) and VGG16 (Simonyan and Zisserman 2015) are pre-trained with ILSVRC-2012-CLS and then fine-tuned with MSRA9500.

Figure 6: The edited images generated by VEGAN with respect to specific visual effects. Each image triplet from left to right: the input image, the VER, and the edited image.

Qualitative Evaluation

User Study. Fig. 8 shows VERs obtained by testing on Flickr "bird" images using VEGAN models trained merely with Flickr "flower" images. The results suggest that the meta-learning mechanism enables VEGAN to identify unseen foreground figures based on the learned knowledge embodied in the generated VERs.

Conclusion

We characterize the two main contributions of our method as follows. First, we establish a meta-learning framework to learn a general concept of figure-ground segmentation and an effective approach to the segmentation task. Second, we propose to cast the meta-learning as imitating relevant visual effects and develop a novel VEGAN model with the following advantages: i) Our model offers a new way to predict meaningful figure-ground segmentation from unorganized images that have no explicit pixel-level annotations. ii) The training images are easy to collect from photo-sharing websites using related tags.
iii) The editor between the generator and the discriminator enables VEGAN to decouple the compositional process of imitating visual effects and hence allows VEGAN to effectively learn the underlying representation (VER) for deriving figure-ground segmentation. We have tested three visual effects, namely "black background," "color selectivo," and "defocus/Bokeh," with extensive experiments on six datasets. For these visual effects, VEGAN can be end-to-end trained from scratch using unpaired training images that have no ground-truth labeling.

Because state-of-the-art GAN models, e.g., CycleGAN (Zhu et al. 2017), are not explicitly designed for unsupervised learning of figure-ground segmentation, we simply conduct qualitative comparisons with CycleGAN on the task of visual-effect transfer rather than the task of figure-ground segmentation. The task of visual-effect transfer is to convert an RGB image into an edited image with the intended visual effect. To train CycleGAN for visual-effect transfer, we use the set {I} of original RGB images and the set {I_sample} of images with the expected visual effect as the two unpaired training sets. Fig. 12 shows the results of 'training on MSRA9500 and testing on MSRA500'. Fig. 13 shows the results of 'training on Flickr and testing on Flickr'. For both CycleGAN and VEGAN, all the test images are unseen during training. The training process is done in an unsupervised manner without using any ground-truth annotations or paired images. Some comparison results are shown in Fig. 12 and Fig. 13. We observe that the task of imitating black background is actually more challenging for CycleGAN since the information of black regions in {I_sample} is limited and hence does not provide a good inverse mapping back to {I} under the setting of CycleGAN. The results of CycleGAN on imitating color selectivo and defocus/Bokeh are more comparable to those of VEGAN. However, the images generated by CycleGAN may have some distortions in color. On the other hand, VEGAN follows a well-organized procedure to learn how to imitate visual effects. The generator must produce a meaningful VER so that the editor can compose a plausible visual-effect image that does not contain noticeable artifacts for the discriminator to differentiate.

Figure 9 (panels: GC50, MSRA500, ECSSD, Flower17, Flower102, CUB200): Comparisons with well-known algorithms, including CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS/MilCutG (Wu et al. 2014), and GrabCut (Rother, Kolmogorov, and Blake 2004). Each sub-figure depicts the sorted IoU scores as the segmentation accuracy.

Testing on MSRA500 using VEGAN models MSRA-B4, MSRA-C4, and MSRA-D4. Testing on Flickr images using VEGAN models Flickr-B4, Flickr-C4, and Flickr-D4.

Figure 10: The edited images generated by our VEGAN models with respect to some expected visual effects. Each image triplet from left to right: the input image, the VER, and the edited image.

'Color selectivo' visual effect generated by VEGAN (MSRA-C4) and CycleGAN. 'Defocus/Bokeh' visual effect generated by VEGAN (MSRA-D4) and CycleGAN.
5,461
1812.08442
2950827124
This paper presents a "learning to learn" approach to figure-ground image segmentation. By exploring webly-abundant images of specific visual effects, our method can effectively learn the visual-effect internal representations in an unsupervised manner and use this knowledge to differentiate the figure from the ground in an image. Specifically, we formulate the meta-learning process as a compositional image editing task that learns to imitate a certain visual effect and to derive the corresponding internal representation. Such a generative process can help instantiate the underlying figure-ground notion and enable the system to accomplish the intended image segmentation. Whereas existing generative methods are mostly tailored to image synthesis or style transfer, our approach offers a flexible learning mechanism to model a general concept of figure-ground segmentation from unorganized images that have no explicit pixel-level annotations. We validate our approach via extensive experiments on six datasets to demonstrate that the proposed model can be end-to-end trained without ground-truth pixel labeling yet outperforms existing methods on unsupervised segmentation tasks.
Existing GAN-based segmentation methods @cite_16 @cite_12 improve their segmentation performance using mainly the adversarial mechanism of GANs. The ground-truth annotations are needed in their training process for constructing the adversarial loss, and therefore they are GAN-based but not "unsupervised" from the perspective of application and problem definition.
{ "abstract": [ "We introduce scGAN, a novel extension of conditional Generative Adversarial Networks (GAN) tailored for the challenging problem of shadow detection in images. Previous methods for shadow detection focus on learning the local appearance of shadow regions, while using limited local context reasoning in the form of pairwise potentials in a Conditional Random Field. In contrast, the proposed adversarial approach is able to model higher level relationships and global scene characteristics. We train a shadow detector that corresponds to the generator of a conditional GAN, and augment its shadow accuracy by combining the typical GAN loss with a data loss term. Due to the unbalanced distribution of the shadow labels, we use weighted cross entropy. With the standard GAN architecture, properly setting the weight for the cross entropy would require training multiple GANs, a computationally expensive grid procedure. In scGAN, we introduce an additional sensitivity parameter w to the generator. The proposed approach effectively parameterizes the loss of the trained detector. The resulting shadow detector is a single network that can generate shadow maps corresponding to different sensitivity levels, obviating the need for multiple models and a costly training procedure. We evaluate our method on the large-scale SBU and UCF shadow datasets, and observe up to 17 error reduction with respect to the previous state-of-the-art method.", "Adversarial training has been shown to produce state of the art results for generative image modeling. In this paper we propose an adversarial training approach to train semantic segmentation models. We train a convolutional semantic segmentation network along with an adversarial network that discriminates segmentation maps coming either from the ground truth or from the segmentation network. The motivation for our approach is that it can detect and correct higher-order inconsistencies between ground truth segmentation maps and the ones produced by the segmentation net. Our experiments show that our adversarial training approach leads to improved accuracy on the Stanford Background and PASCAL VOC 2012 datasets." ], "cite_N": [ "@cite_16", "@cite_12" ], "mid": [ "2777654136", "2554423077" ] }
Unsupervised Meta-learning of Figure-Ground Segmentation via Imitating Visual Effects
In figure-ground segmentation, the regions of interest are conventionally defined by the provided ground truth, which is usually in the form of pixel-level annotations. Without such supervised information from intensive labeling efforts, it is challenging to teach a system to learn what the figure and the ground should be in each image. To address this issue, we propose an unsupervised meta-learning approach that can simultaneously learn both the figure-ground concept and the corresponding image segmentation. The proposed formulation explores the inherent but often unnoticeable relatedness between performing image segmentation and creating visual effects. In particular, visually enriching a given image with a special effect often first requires specifying the regions to be emphasized. The procedure corresponds to constructing an internal representation that guides the image editing to operate on the target image regions. For this reason, we name such internal guidance the Visual-Effect Representation (VER) of the image. We observe that for a majority of visual effects, the resulting VER is closely related to image segmentation. Another advantage of focusing on visual-effect images is that such data are abundant on the Internet, while pixel-wise annotating large datasets for image segmentation is time-consuming.

Figure 1: Given the same image (1st column), imitating different visual effects (2nd column) can yield distinct interpretations of figure-ground segmentation (3rd column), which are derived by our method via referencing the following visual effects (from top to bottom): black background, color selectivo, and defocus/Bokeh. The learned VERs are shown in the last column, respectively.

However, in practice, we only have access to the visual-effect images, but not to the VERs or the original images. Taking all these factors into account, we reduce the meta-problem of figure-ground segmentation to predicting the proper VER of a given image for the underlying visual effect. Owing to its data richness from the Internet, the latter task is more suitable for our intention to cast the problem within the unsupervised generative framework. Many compositional image editing tasks have the aforementioned properties. For example, to create the color selectivo effect on an image, as shown in Fig. 2, we can i) identify the target and partition the image into foreground and background layers, ii) convert the color of the background layer into grayscale, and iii) combine the converted background layer with the original foreground layer to get the final result. The operation of color conversion is local: it simply "equalizes" the RGB values of pixels in certain areas. The quality of the result depends on how properly the layers are decomposed. If a part of the target region is partitioned into the background, the result might look less plausible. Unlike the local operations, localizing the proper regions for editing requires a certain understanding and analysis of the global or contextual information in the whole image. In this paper, we design a GAN-based model, called Visual-Effect GAN (VEGAN), that can learn to predict the internal representation (i.e., VER) and incorporate such information into facilitating the resulting figure-ground segmentation.
We are thus motivated to formulate the following problem: Given an unaltered RGB image as the input and an image editing task with known compositional process and local operation, we aim to predict the proper VER that guides the editing process to generate the expected visual effect and accomplishes the underlying figure-ground segmentation. We adopt a data-driven setting in which the image editing task is exemplified by a collection of image samples with the expected visual effect. The task, therefore, is to transform the original RGB input image into an output image that exhibits the same effect of the exemplified samples. To make our approach general, we assume that no corresponding pairs of input and output images are available in training, and therefore supervised learning is not applicable. That is, the training data does not include pairs of the original color images and the corresponding edited images with visual effects. The flexibility is in line with the fact that although we could fetch a lot of images with certain visual effects over the Internet, we indeed do not know what their original counterpart should look like. Under this problem formulation, several issues are of our interest and need to be addressed. First, how do we solve the problem without paired input and output images? We build on the idea of generative adversarial network and develop a new unsupervised learning mechanism (shown in Figs. 2 & 3) to learn the internal representation for creating the visual effect. The generator aims to predict the internal VER and the editor is to convert the input image into the one that has the expected visual effect. The compositional procedure and local operation are generic and can be implemented as parts of the architecture of a ConvNet. The discriminator has to judge the quality of the edited images with respect to a set of sample images that exhibit the same visual effect. The experimental results show that our model works surprisingly well to learn meaningful representation and segmentation without supervision. Second, where do we acquire the collection of sample images for illustrating the expected visual effect? Indeed, it would not make sense if we have to manually generate the labor-intensive sample images for demonstrating the expected visual effects. We show that the required sample images can be conveniently collected from the Internet. We provide a couple of scripts to explore the effectiveness of using Internet images for training our model. Notice again that, although the required sample images with visual effects are available on the Internet, their original versions are unknown. Thus supervised learning of pairwise image-toimage translation cannot be applied here. Third, what can the VER be useful for, in addition to creating visual effects? We show that, if we are able to choose a suitable visual effect, the learned VER can be used to not only establish the intended figure-ground notion but also derive the image segmentation. More precisely, as in our formulation the visual-effect representation is characterized by a real-valued response map, the result of figure-ground separation can be obtained via binarizing the VER. Therefore, it is legitimate to take the proposed problem of VER prediction as a surrogate for unsupervised image segmentation. 
We have tested the following visual effects: i) black background, which is often caused by using flashlight; ii) color selectivo, which imposes color highlight on the subject and keeps the background in grayscale; iii) defocus/Bokeh, which is due to depth of field of camera lens. The second column in Fig. 1 shows the three types of visual effects. For these tasks our model can be end-toend trained from scratch in an unsupervised manner using training data that do not have either the ground-truth pixel labeling or the paired images with/without visual effects. While labor-intensive pixel-level segmentations for images are hard to acquire directly via Internet search, images with those three effects are easy to collect from photo-sharing websites, such as Flickr, using related tags. Generative Adversarial Networks The idea of GAN (Goodfellow et al. 2014) is to generate realistic samples through the adversarial game between generator G and discriminator D. GAN becomes popular owing to its ability to achieve unsupervised learning. However, GAN also encounters many problems such as instability and model collapsing. Hence later methods (Radford, Metz, and Chintala 2016;Arjovsky, Chintala, and Bottou 2017;Gulrajani et al. 2017) try to improve GAN in both the aspects of implementation and theory. DCGAN (Radford, Metz, and Chintala 2016) provides a new framework that is more stable and easier to train. WGAN (Arjovsky, Chintala, and Bottou 2017) suggests to use Wasserstein distance to measure the loss. WGAN-GP (Gulrajani et al. 2017) further improves the way of the Lipschitz constraint being enforced, by replacing weight clipping with gradient penalty. To reduce the burden of G, Denton et al. (Denton et al. 2015) use a pyramid structure and Karras et al. (Karras et al. 2018) consider a progressive training methodology. Both of them divide the task into smaller sequential steps. In our case, we alleviate the burden of G by incorporating some well-defined image processing operations into the network model, e.g., converting background color into grayscale to simulate the visual effect of color selectivo, or blurring the background to create the Bokeh effect. Computer vision problems may benefit from GAN by including an adversarial loss into, say, a typical CNN model. Many intricate tasks have been shown to gain further improvements after adding adversarial loss, such as shadow detection (Nguyen et al. 2017), saliency detection (Pan et al. 2017), and semantic segmentation (Luc et al. 2016). However, those training methodologies require paired images (with ground-truth) and hence lack the advantage of unsupervised learning. For the applications of modifying photo styles, some methods (Liu, Breuel, and Kautz 2017;Figure 2: Learning and applying our model for the case of "color selectivo" visual effect. The image collection for learning is downloaded using Flickr API. Without explicit ground-truth pixel-level annotations being provided, our method can learn to estimate the visual-effect representations (VERs) from unpaired sets of natural RGB images and sample images with the expected visual effect. Our generative model is called Visual-Effect GAN (VEGAN), which has an additional component editor between the generator and the discriminator. After the unsupervised learning, the generator is able to predict the VER of an input color image for creating the expected visual effect. The VER can be further transformed into figure-ground segmentation. Yi et al. 2017;Zhu et al. 
2017) can successfully achieve image-to-image style transfer using unpaired data, but their results are limited to subjective evaluation. Moreover, those style-transfer methods cannot be directly applied to the task of unsupervised segmentation. Since our model has to identify the category-independent subjects for applying the visual effect without using imagepair relations and ground-truth pixel-level annotations, the problem we aim to address is more general and challenging than those of the aforementioned methods. Image Segmentation Most of the existing segmentation methods that are based on deep neural networks (DNNs) to treat the segmentation problem as a pixel-level classification problem (Simonyan and Zisserman 2015;Long, Shelhamer, and Darrell 2015;He et al. 2016). The impressive performance relies on a large number of high-quality annotations. Unfortunately, collecting high-quality annotations at a large scale is another challenging task since it is exceedingly labor-intensive. As a result, existing datasets just provide limited-class and limitedannotation data for training DNNs. DNN-based segmentation methods thus can only be applied to a limited subset of category-dependent segmentation tasks. To reduce the dependency of detailed annotations and to simplify the way of acquiring a sufficient number of training data, a possible solution is to train DNNs in a semi-supervised manner (Hong, Noh, and Han 2015;Souly, Spampinato, and Shah 2017) or a weakly-supervised manner (Dai, He, and Sun 2015;Kwak, Hong, and Han 2017;Pinheiro and Collobert 2015) with a small number of pixellevel annotations. In contrast, our model is trained without explicit ground-truth annotations. Existing GAN-based segmentation methods (Nguyen et al. 2017;Luc et al. 2016) improve their segmentation performance using mainly the adversarial mechanism of GANs. The ground-truth annotations are needed in their training process for constructing the adversarial loss, and therefore they are GAN-based but not "unsupervised" from the perspective of application and problem definition. We instead adopt a meta-learning viewpoint to address figure-ground segmentation. Depending on the visual effect to be imitated, the proposed approach interprets the task of image segmentation according to the learned VER. As a re-sult, our model indeed establishes a general setting of figureground segmentation, with the additional advantage of generating visual effects or photo-style manipulations. Our Method Given a natural RGB image I and an expected visual effect with known compositional process and local operation, the proposed VEGAN model learns to predict the visual-effect representation (VER) of I and to generate an edited image I edit with the expected effect. Fig. 2 illustrates the core idea. The training data are from two unpaired sets: the set {I} of original RGB images and the set {I sample } of images with the expected visual effect. The learning process is carried out as follows: i) Generator predicts the VER ν of the image I. ii) Editor uses the known local operation to create an edited image I edit possessing the expected visual effect. iii) Discriminator judges the quality of the edited images I edit with respect to a set {I sample } of sample images that exhibit the same visual effect. iv) Loss is computed for updating the whole model. Fig. 3 illustrates the components of VEGAN. Finally, we perform Binarization on VER for quantitatively assess the outcome of figure-ground segmentation. 
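Before the individual components are described, the following is a rough PyTorch-style sketch of one training iteration implementing steps i)-iv) above for the color selectivo effect. The module names (`generator`, `discriminator`), the [0, 1] image range (the paper truncates to (0, 255) instead), and the single critic update per step are assumptions made for illustration; the objective follows the WGAN-GP losses given in the Loss paragraph below.

```python
# Sketch only: `generator` maps an RGB batch to a 1-channel VER in (-1, 1) (tanh),
# `discriminator` is a critic returning one score per image (or patch average).
import torch
from torch import autograd

def to_grayscale(img):
    # Local operation for "color selectivo": 3-channel luminance image.
    gray = 0.299 * img[:, 0:1] + 0.587 * img[:, 1:2] + 0.114 * img[:, 2:3]
    return gray.repeat(1, 3, 1, 1)

def train_step(generator, discriminator, g_opt, d_opt, img, sample, lambda_gp=10.0):
    # Steps i-ii: predict the VER and let the editor compose the edited image.
    ver = generator(img)
    effect = to_grayscale(img)
    edited = torch.clamp(ver * (img - effect) + effect, 0.0, 1.0)

    # Steps iii-iv: critic update with the WGAN-GP loss (cf. Eq. 3 below).
    d_opt.zero_grad()
    eps = torch.rand(img.size(0), 1, 1, 1, device=img.device)
    x_hat = (eps * sample + (1.0 - eps) * edited.detach()).requires_grad_(True)
    grad = autograd.grad(discriminator(x_hat).sum(), x_hat, create_graph=True)[0]
    gp = ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
    d_loss = discriminator(edited.detach()).mean() - discriminator(sample).mean() + lambda_gp * gp
    d_loss.backward()
    d_opt.step()

    # Generator update (cf. Eq. 2 below): make the edited image look like a real sample.
    g_opt.zero_grad()
    g_loss = -discriminator(edited).mean()
    g_loss.backward()
    g_opt.step()
    return float(d_loss), float(g_loss)
```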
Generator: The task of the generator is to predict the VER ν that can be used to partition the input image I into foreground and background layers. Our network architecture is adapted from the state-of-the-art methods (Johnson, Alahi, and Fei-Fei 2016; Zhu et al. 2017), which show impressive results on image style transfer. The architecture follows the rules suggested by DCGAN (Radford, Metz, and Chintala 2016), such as replacing pooling layers with strided convolutions. Our base architecture also uses the 9-residual-blocks version of (Johnson, Alahi, and Fei-Fei 2016). We have also tried a few slightly modified versions of the generator; the differences and details are described in the experiments.

Discriminator: The discriminator is trained to judge the quality of the edited images I_edit with respect to a set {I_sample} of sample images that exhibit the same effect. We adopt a 70 × 70 PatchGAN (Ledig et al. 2017; Li and Wand 2016; Zhu et al. 2017) as our base discriminator network. PatchGAN brings some benefits from evaluating multiple overlapping image patches: the scores change more smoothly and the training process is more stable. Compared with a full-image discriminator, the receptive field of the 70 × 70 PatchGAN might not capture the global context. In our work, the foreground objects are sensitive to their position in the whole image and are center-biased; if there are several objects in the image, our method tends to pick out the object closest to the center. In our experiments, the 70 × 70 PatchGAN does produce better segments along the edges, but sometimes the segments tend to be tattered. A full-image discriminator (Goodfellow et al. 2014; Radford, Metz, and Chintala 2016; Arjovsky, Chintala, and Bottou 2017; Gulrajani et al. 2017), on the other hand, could give coarser but more compact and structural segments.

Figure 3: The visual-effect representation (VER) produced by the generator indicates the strength of the visual effect at each location. The editor uses a well-defined trainable procedure (converting RGB to grayscale in this case) to create the expected visual effect. The discriminator receives the edited image I_edit and evaluates how good it is. To train VEGAN, we need unpaired images from two domains: Domain A comprises real RGB images and Domain B comprises images with the expected visual effect.

Editor: The editor is the core of the proposed model. Given an input image I and its VER ν predicted by the generator, the editor is responsible for creating a composed image I_edit containing the expected visual effect. The first step is based on the well-defined procedure to perform local operations on the image and generate the expected visual effect I_effect. More specifically, in our experiments we define three basic local operations for black background, color selectivo, and defocus/Bokeh, which involve clamping-to-zero, grayscale conversion, and 11 × 11 average pooling, respectively. The next step is to combine the edited background layer with the foreground layer to get the final editing result I_edit. An intuitive way is to use the VER ν as an alpha map α for image matting, i.e., I_edit = α ⊗ I + (1 − α) ⊗ I_effect, where α = {α_ij}, α_ij ∈ (0, 1), and ⊗ denotes element-wise multiplication. However, in our experiments, we find that it is better to have ν = {ν_ij}, ν_ij ∈ (−1, 1), with hyperbolic tangent as the output.
Hence we combine the two layers as follows:

I_edit = τ(ν ⊗ (I − I_effect) + I_effect), ν_ij ∈ (−1, 1), (1)

where τ(·) truncates values to the range (0, 255), which guarantees that I_edit can be properly rendered. Under this formulation, our model learns the residual.

Loss: We follow state-of-the-art algorithms (Arjovsky, Chintala, and Bottou 2017; Gulrajani et al. 2017) to design the loss functions L_G and L_D for the generator (G) and the discriminator (D):

L_G = −E_{x∼P_g}[D(x)], (2)

L_D = E_{x∼P_g}[D(x)] − E_{y∼P_r}[D(y)] + λ_gp E_{x̂∼P_x̂}[(‖∇_x̂ D(x̂)‖_2 − 1)^2]. (3)

We alternately update the generator by Eq. 2 and the discriminator by Eq. 3. In our formulation, x is the edited image I_edit, y is an image I_sample that exhibits the expected visual effect, P_g is the edited-image distribution, P_r is the sample-image distribution, and P_x̂ samples uniformly along straight lines between image pairs drawn from P_g and P_r. We set the learning rate, λ_gp, and the other hyper-parameters to the same configuration as WGAN-GP (Gulrajani et al. 2017). We keep a history of previously generated images and update the discriminator according to this history, storing 50 previously generated images {I_edit} in a buffer. The training images are of size 224 × 224, and the batch size is 1.

Binarization: The VEGAN model can be viewed as predicting the strength of the visual effect throughout the whole image. Although the VER provides an effective intermediate representation for generating plausible edited images with the expected visual effects, we observe that the VER is sometimes not consistent with an object region, particularly for the Bokeh effect. Directly thresholding the VER to make a binary mask for segmentation evaluation would thus cause some false positives and degrade the segmentation quality. In general, we expect the segmentation derived from the visual-effect representation to be smooth within an object and distinct across object boundaries. To respect this observation, we describe an optional procedure that smooths the VER so that simple thresholding yields a good binary mask for quantitative evaluation. Notice that all the VER maps visualized in this paper are obtained without binarization.

To begin with, we over-segment (Achanta et al. 2012) an input image I into a superpixel set S and construct the corresponding superpixel-level graph G = (S, E, ω) with edge set E and weights ω. Each edge e_ij ∈ E denotes the spatial adjacency between superpixels s_i and s_j. The weighting function ω : E → [0, 1] is defined as ω_ij = exp(−θ_1 ‖c_i − c_j‖), where c_i and c_j denote the CIE Lab mean colors of the two adjacent superpixels. The weight matrix of the graph is then W = [ω_ij]_{|S|×|S|}. We smooth the VER by propagating the averaged value of each superpixel to all other superpixels. To this end, let r_i denote the mean VER value of superpixel s_i, i.e., r_i = (1/|s_i|) Σ_{(i,j)∈s_i} ν_ij, where |s_i| is the number of pixels within s_i. The propagation is carried out according to the feature similarity between every superpixel pair. Given the weight matrix W, the pairwise similarity matrix A is constructed as A = (D − θ_2 W)^{−1} I, where D is a diagonal matrix whose diagonal entries equal the row sums of W, θ_2 is a parameter in (0, 1], and I is the |S|-by-|S| identity matrix (Zhou et al. 2003). Finally, the smoothed VER values of all superpixels are obtained by

[r̄_1, r̄_2, . . . , r̄_{|S|}]^T = D_A^{−1} A · [r_1, r_2, . . . , r_{|S|}]^T, (4)

where D_A is a diagonal matrix whose diagonal entries equal the corresponding row sums of A, and D_A^{−1} A yields the row-normalized version of A. From Eq. 4, we see that the smoothed VER value r̄_i is determined not only by the neighboring superpixels of s_i but also by all other superpixels. To obtain the binary mask, we use the average value of {r̄_1, r̄_2, . . . , r̄_{|S|}} as the threshold for deriving the figure-ground segmentation of the input I. We set the parameters θ_1 = 10 and θ_2 = 0.99 in all the experiments.

Experiments

We first describe the evaluation metric, the testing datasets, the training data, and the algorithms in comparison. Then, we show the comparison results of the relevant algorithms and our approach. Finally, we present the image segmentation and editing results of our approach. More experimental results can be found in the supplementary material.

Evaluation Metric. We adopt the intersection-over-union (IoU) to evaluate the binary mask derived from the VER. The IoU score is defined as |P ∩ Q| / |P ∪ Q|, where P denotes the machine segmentation and Q denotes the ground-truth segmentation. All algorithms are tested on an Intel i7-4770 3.40 GHz CPU, 8 GB RAM, and an NVIDIA Titan X GPU.

Datasets. The six datasets are GC50 (Rother, Kolmogorov, and Blake 2004), MSRA500, ECSSD (Shi et al. 2016), Flower17 (Nilsback and Zisserman 2006), Flower102 (Nilsback and Zisserman 2008), and CUB200 (Wah et al. 2011). MSRA500 is a subset of the MSRA10K dataset (Cheng et al. 2015), which contains 10,000 natural images. We randomly partition MSRA10K into two non-overlapping subsets of 500 and 9,500 images to create MSRA500 and MSRA9500 for testing and training, respectively. Their statistics are summarized in Table 1. Since these datasets provide pixel-level ground truths, we can compare the consistency between the ground-truth labeling and the derived segmentation of each image for VER-quality assessment.

Training Data. In training the VEGAN model, we consider using images from two different sources for comparison. The first image source is MSRA9500, derived from the MSRA10K dataset (Cheng et al. 2015). The second image source is Flickr, from which we acquire unorganized images for each task as the training data. We examine our model on three kinds of visual effects, namely, black background, color selectivo, and defocus/Bokeh.
• For MSRA9500 images, we randomly select 4,750 images and then apply the three visual effects to yield three groups of images with visual effects, i.e., {I_sample}. The other 4,750 images are the input images {I} for the generator to produce the edited images {I_edit} later.
• For Flickr images, we use "black background," "color selectivo," and "defocus/Bokeh" as the three query tags, and then collect 4,000 images for each query tag as the real images with visual effects. We randomly download an additional 4,000 images from Flickr as the images to be edited.

Algorithms in Comparison. We quantitatively evaluate the learned VER using the standard segmentation assessment metric (IoU). Our approach is compared with several well-known algorithms, including two semantic segmentation algorithms, three saliency-based algorithms, and two bounding-box based algorithms, listed as follows: ResNet, VGG16 (Simonyan and Zisserman 2015), CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS and MilCutG (Wu et al. 2014), GrabCut (Rother, Kolmogorov, and Blake 2004).
The two supervised semantic segmentation algorithms, ResNet and VGG16, are pre-trained on ILSVRC-2012-CLS (Russakovsky et al. 2015) and then fine-tuned on MSRA9500 with ground-truth annotations. The bounding boxes of the two bounding-box based algorithms are initialized around the image borders. Quantitative Evaluation The first part of experiment aims to evaluate the segmentation quality of different methods. We first compare several variants of the VEGAN model to choose the best model configuration. Then, we analyze the results of the VEGAN model versus the other state-of-the-art algorithms. VEGAN Variants. In the legend blocks of Fig. 4, we use a compound notation "TrainingData -Version" to account for the variant versions of our model. Specifically, Train-ingData indicates the image source of the training data. The notation for Version contains two characters. The first character denotes the type of visual effect: "B" for black background, "C" for color selectivo, and "D" for defocus/Bokeh. The second character is the model configuration: "1" refers to the combination of base-generator and base-discriminator described in Our Method; "2" refers to using ResNet as the generator; "3" is the model "1" with additional skip-layers and replacing transpose convolution with bilinear interpolation; "4" is the model "3" yet replacing patch-based discriminator with full-image discriminator. We report the results of VEGAN variants in Table 6, and depict the sorted IoU scores for the test images in Flower17 and Flower102 datasets in Fig. 4. It can be seen that all models have similar segmentation qualities no matter what image source is used for training. In Table 6 and Fig. 4, the training configuration "B4" shows relatively better performance under black background. Hence, our VEGAN model adopts the version of MSRA-B4 as a representative variant for comparing with other state-of-the-art algorithms. Unseen Images. We further analyze the differences of the learned models on dealing with unseen and seen images. We test the variants B4, C4, and D4 on MSRA500 (unseen) and the subset {I} of MSRA9500 (seen). We find that the performance of VEGAN is quite stable. The IoU score for MSRA500 is only 0.01 lower than the score for MSRA9500 {I}. Note that, even for the seen images, the ground-truth pixel annotations are unknown to the VEGAN model during training. This result indicates that VEGAN has a good generalization ability to predict segmentation for either seen or unseen images. For comparison, we do the same experiment with the two supervised algorithms, ResNet and VGG16. They are fine-tuned with MSRA9500. The mean IoU scores of ResNet are 0.86 and 0.94 for MSRA500 and MSRA9500, respectively. The mean IoU scores of VGG16 are 0.72 and 0.88 for MSRA500 and MSRA9500, respectively. The performance of both supervised techniques significantly degrades while dealing with unseen images. From the results just described, the final VEGAN model is implemented with the following setting: i) Generator uses the 9-residual-blocks version of (Johnson, Alahi, and Fei-Fei 2016). ii) Discriminator uses the full-image discriminator as WGAN-GP (Gulrajani et al. 2017). Results. The top portion of Table 3 summarizes the mean IoU score of each algorithm evaluated with the six testing datasets. We first compare our method with five well-known segmentation/saliency-detection techniques, including CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS/MilCutG (Wu et al. 
2014), and GrabCut (Rother, Kolmogorov, and Blake 2004). The proposed VE-GAN model outperforms all others on MSRA500, ECSSD, Flower17, and Flower102 datasets, and is only slightly behind the best on GC50 and CUB200 datasets. The bottom portion of Table 3 shows the results of two SOTA supervised learning algorithms on the six testing datasets. Owing to training with the paired images and ground-truths in a "supervised" manner, the two models of ResNet and VGG16 undoubtedly achieve good performance so that we treat them as the oracle models. Surprisingly, our unsupervised learning model is comparable with or even slightly better than the supervised learning algorithms on the MSRA500, Flower17, and Flower102 datasets. Fig. 9 depicts the sorted IoU scores, where a larger area under curve means better segmentation quality. VEGAN achieves better segmentation accuracy on the two datasets. Fig. 10 shows the results generated by our VEGAN model under different configurations. Each triplet of images contains the input image, the visual effect representation (VER), and the edited image. The results in Fig. 10 demonstrate that VEGAN can generate reasonable figure-ground segmentations and plausible edited images with expected visual effects. Visual-Effect Imitation as Style Transfer. Although existing GAN models cannot be directly applied to learning figure-ground segmentation, some of them are applicable to learning visual-effect transfer, e.g., CycleGAN . We use the two sets {I} and {I sample } of MSRA9500 to train CycleGAN, and show some comparison results in Fig. 7. We find that the task of imitating black background turns out to be challenging for CycleGAN since the information in {I sample } is too limited to derive the inverse mapping back to {I}. Moreover, CycleGAN focuses more on learning the mapping between local properties such as color or texture rather than learning how to create a glob- ) and VGG16 (Simonyan and Zisserman 2015) are pre-trained with ILSVRC-2012-CLS and then fine-tuned with MSRA9500. ally consistent visual effect. VEGAN instead follows a systematic learning procedure to imitate the visual effect. The generator must produce a meaningful VER so that the editor can compose a plausible visual-effect image that does not contain noticeable artifacts for the discriminator to identify. Figure 6: The edited images generated by VEGAN with respect to specific visual effects. Each image triplet from left to right: the input image, the VER, and the edited image. Qualitative Evaluation User Study. Fig. 8 shows VERs that testing on Flickr "bird" images using VEGAN models trained merely with Flick "flower" images. The results suggest that the meta-learning mechanism enables VE-GAN to identify unseen foreground figures based on the learned knowledge embodied in the generated VERs. Conclusion We characterize the two main contributions of our method as follows. First, we establish a meta-learning framework to learn a general concept of figure-ground application and an effective approach to the segmentation task. Second, we propose to cast the meta-learning as imitating relevant visual effects and develop a novel VEGAN model with following advantages: i) Our model offers a new way to predict meaningful figure-ground segmentation from unorganized images that have no explicit pixel-level annotations. ii) The training images are easy to collect from photo-sharing websites using related tags. 
iii) The editor between the generator and the discriminator enables VEGAN to decouple the compositional process of imitating visual effects and hence allows VEGAN to effectively learn the underlying representation (VER) for deriving figure-ground segmentation. We have tested three visual effects, including "black background," "color selectivo," and "defocus/Bokeh" with extensive experiments on six datasets. For these visual effects, VEGAN can be end-to-end trained from scratch using unpaired training images that have no ground-truth labeling. Because state-of-the-art GAN models, e.g., CycleGAN , are not explicitly designed for unsupervised learning of figure-ground segmentation, we simply conduct qualitative comparisons with CycleGAN on the task of visual-effect transfer rather than the task of figure-ground segmentation. The task of visual-effect transfer is to convert an RGB image into an edited image with the intended visual effect. To train CycleGAN for visual-effect transfer, we use the set {I} of original RGB images and the set {I sample } of images with the expected visual effect as the two unpaired training sets. Fig. 12 shows the results of 'training on MSRA9500 and testing on MSRA500'. Fig. 13 shows the results of 'training on Flickr and testing on Flickr'. For CycleGAN and VEGAN, all the test images are unseen during training. The training process is done in an unsupervised manner without using any ground-truth annotations and paired images. Some comparison results are shown in Fig. 12 and Fig. 13. We observe that the task of imitating black background is actually more challenging for Cycle-GAN since the information of black regions in {I sample } is limited and hence does not provide good inverse mapping back to {I} under the setting of CycleGAN. The results of CycleGAN on imitating color selectivo and defocus/Bokeh are more comparable to those of VE-GAN. However, the images generated by CycleGAN may have some distortions in color. On the other hand, VEGAN follows a well-organized procedure to learn how to imitate visual effects. The generator must produce a meaningful VER so that the editor can compose a plausible visual-effect image that does not contain noticeable artifacts for the discriminator to differentiate. GC50 MSRA500 ECSSD Flower17 Flower102 CUB200 Figure 9: Comparisons with well-known algorithms, including CA (Qin et al. 2015), MST (Tu et al. 2016), GBMR (Yang et al. 2013), MilCutS/MilCutG (Wu et al. 2014), and GrabCut (Rother, Kolmogorov, and Blake 2004). Each sub-figure depicts the sorted IoU scores as the segmentation accuracy. Testing on MSRA500 using VEGAN models MSRA-B4, MSRA-C4, and MSRA-D4. Testing on Flickr images using VEGAN models Flickr-B4, Flickr-C4, and Flickr-D4. Figure 10: The edited images generated by our VEGAN models with respect to some expected visual effects. Each image triplet from left to right: the input image, the VER, and the edited image. (Johnson, Alahi, and Fei-Fei 2016); ' ‡' refers to ; ' ' refers to (Gulrajani et al. 2017 the 9-residual-blocks version † WGAN-GP yes bilinear 'Color selectivo' visual effect generated by VEGAN (MSRA-C4) and CycleGAN. 'Defocus/Bokeh' visual effect generated by VEGAN (MSRA-D4) and CycleGAN.
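As a recap of the Binarization procedure described in the Method section above, the following NumPy sketch implements the superpixel-level VER smoothing of Eq. 4 together with the mean-value thresholding and the IoU metric used for evaluation. The superpixel labelling, Lab mean colors, and adjacency pairs are assumed to be precomputed (e.g., with SLIC), and none of the names come from the authors' code.

```python
import numpy as np

def smooth_and_binarize(ver, labels, lab_means, adjacent_pairs, theta1=10.0, theta2=0.99):
    """ver: (H, W) VER map in (-1, 1); labels: (H, W) superpixel ids 0..K-1;
    lab_means: (K, 3) mean CIE Lab color per superpixel;
    adjacent_pairs: iterable of (i, j) ids of spatially adjacent superpixels."""
    K = lab_means.shape[0]
    r = np.array([ver[labels == i].mean() for i in range(K)])   # mean VER value r_i
    W = np.zeros((K, K))
    for i, j in adjacent_pairs:                                  # w_ij = exp(-theta1 * ||c_i - c_j||)
        W[i, j] = W[j, i] = np.exp(-theta1 * np.linalg.norm(lab_means[i] - lab_means[j]))
    D = np.diag(W.sum(axis=1))
    A = np.linalg.inv(D - theta2 * W)                            # similarity matrix (Zhou et al. 2003)
    r_smooth = (A / A.sum(axis=1, keepdims=True)) @ r            # Eq. 4: row-normalized A times r
    mask = r_smooth > r_smooth.mean()                            # threshold at the average value
    return mask[labels]                                          # per-pixel figure-ground mask

def iou(pred, gt):
    # Intersection-over-union between two boolean masks, as used for evaluation.
    return np.logical_and(pred, gt).sum() / np.logical_or(pred, gt).sum()
```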
5,461
1907.02757
2953654207
In the recent years, convolutional neural networks have transformed the field of medical image analysis due to their capacity to learn discriminative image features for a variety of classification and regression tasks. However, successfully learning these features requires a large amount of manually annotated data, which is expensive to acquire and limited by the available resources of expert image analysts. Therefore, unsupervised, weakly-supervised and self-supervised feature learning techniques receive a lot of attention, which aim to utilise the vast amount of available data, while at the same time avoid or substantially reduce the effort of manual annotation. In this paper, we propose a novel way for training a cardiac MR image segmentation network, in which features are learnt in a self-supervised manner by predicting anatomical positions. The anatomical positions serve as a supervisory signal and do not require extra manual annotation. We demonstrate that this seemingly simple task provides a strong signal for feature learning and with self-supervised learning, we achieve a high segmentation accuracy that is better than or comparable to a U-net trained from scratch, especially at a small data setting. When only five annotated subjects are available, the proposed method improves the mean Dice metric from 0.811 to 0.852 for short-axis image segmentation, compared to the baseline U-net.
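To make the idea in this abstract concrete, here is a small, purely illustrative PyTorch-style sketch of the two-stage procedure: an encoder is first pretrained on the anatomical-position pretext task, and its weights are then reused to initialise a segmentation network. The architecture, the 3-dimensional position target, and all names are assumptions; the paper's actual network and target definition may differ.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(                       # stand-in feature extractor, not the paper's network
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1))
position_head = nn.Linear(64, 3)               # pretext head: regress an assumed 3-D anatomical position

opt = torch.optim.Adam(list(encoder.parameters()) + list(position_head.parameters()), lr=1e-3)

def pretext_step(image, target_position):
    # Self-supervised stage: the position target comes from the scan geometry /
    # DICOM headers, so no manual annotation is needed.
    feat = encoder(image).flatten(1)
    loss = nn.functional.mse_loss(position_head(feat), target_position)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# The pretrained encoder weights would then initialise the encoder of a
# segmentation network (e.g., a U-net) fine-tuned on the few annotated subjects.
```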
For natural image and video analysis problems, a number of pretext tasks have been explored, including prediction of image rotation @cite_8 , relative position @cite_6 , colorisation @cite_10 and image impainting @cite_3 etc. In medical imaging domain, self-supervised learning has also been explored but to a less extent. proposed a pretext task for subject identification @cite_4 . A Siamese network was trained to classify whether two spinal MR images came from the same subject or not. The pretrained features were used to initialise a disease grade classification network. defined re-colourisation of surgical videos as a pretext task and used the pretrained features to initialise a surgical instrument segmentation network @cite_11 . used rotation prediction as a pretext task and the self-learnt features were transferred to lung lobe segmentation and nodule detection tasks @cite_2 . Different from previous works in the medical imaging domain, we propose a novel pretext task, which is to predict anatomical positions. In particular, we leverage the rich information encoded in the cardiac MR scan view planes and DICOM headers to define the anatomical positions for the task.
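For reference, the rotation-prediction pretext task mentioned above can be summarised in a few lines: the supervisory label is simply the rotation applied to the image, so no manual annotation is required. The snippet below is a generic sketch, not code from any of the cited works.

```python
import numpy as np

def make_rotation_batch(image):
    """image: (H, W) or (H, W, C) array -> four rotated copies and labels 0..3."""
    rotated = [np.rot90(image, k) for k in range(4)]   # 0, 90, 180, 270 degrees
    labels = np.arange(4)                              # the rotation index is the "free" label
    return rotated, labels
```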
{ "abstract": [ "A significant proportion of patients scanned in a clinical setting have follow-up scans. We show in this work that such longitudinal scans alone can be used as a form of “free” self-supervision for training a deep network. We demonstrate this self-supervised learning for the case of T2-weighted sagittal lumbar Magnetic Resonance Images (MRIs). A Siamese convolutional neural network (CNN) is trained using two losses: (i) a contrastive loss on whether the scan is of the same person (i.e. longitudinal) or not, together with (ii) a classification loss on predicting the level of vertebral bodies. The performance of this pre-trained network is then assessed on a grading classification task. We experiment on a dataset of 1016 subjects, 423 possessing follow-up scans, with the end goal of learning the disc degeneration radiological gradings attached to the intervertebral discs. We show that the performance of the pre-trained CNN on the supervised classification task is (i) superior to that of a network trained from scratch; and (ii) requires far fewer annotated training samples to reach an equivalent performance to that of the network trained from scratch.", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similar striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification.", "We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders – a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. 
The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.", "This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework [19] and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.", "We investigate the effectiveness of a simple solution to the common problem of deep learning in medical image analysis with limited quantities of labeled training data. The underlying idea is to assign artificial labels to abundantly available unlabeled medical images and, through a process known as surrogate supervision, pre-train a deep neural network model for the target medical image analysis task lacking sufficient labeled training data. In particular, we employ 3 surrogate supervision schemes, namely rotation, reconstruction, and colorization, in 4 different medical imaging applications representing classification and segmentation for both 2D and 3D medical images. 3 key findings emerge from our research: 1) pre-training with surrogate supervision is effective for small training sets; 2) deep models trained from initial weights pre-trained through surrogate supervision outperform the same models when trained from scratch, suggesting that pre-training with surrogate supervision should be considered prior to training any deep 3D models; 3) pre-training models in the medical domain with surrogate supervision is more effective than transfer learning from an unrelated domain (e.g., natural images), indicating the practical value of abundant unlabeled medical image data.", "Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. 
We evaluate our algorithm using a “colorization Turing test,” asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32 of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks.", "Purpose Surgical data science is a new research field that aims to observe all aspects of the patient treatment process in order to provide the right assistance at the right time. Due to the breakthrough successes of deep learning-based solutions for automatic image annotation, the availability of reference annotations for algorithm training is becoming a major bottleneck in the field. The purpose of this paper was to investigate the concept of self-supervised learning to address this issue." ], "cite_N": [ "@cite_4", "@cite_8", "@cite_3", "@cite_6", "@cite_2", "@cite_10", "@cite_11" ], "mid": [ "2742126485", "2962742544", "2963420272", "343636949", "2913127216", "2326925005", "2962936819" ] }
Self-Supervised Learning for Cardiac MR Image Segmentation by Anatomical Position Prediction
0
1812.07894
2883454930
Smartphone apps usually have access to sensitive user data such as contacts, geo-location, and account credentials and they might share such data to external entities through the Internet or with other apps. Confidentiality of user data could be breached if there are anomalies in the way sensitive data is handled by an app which is vulnerable or malicious. Existing approaches that detect anomalous sensitive data flows have limitations in terms of accuracy because the definition of anomalous flows may differ for different apps with different functionalities; it is normal for "Health" apps to share heart rate information through the Internet but is anomalous for "Travel" apps. In this paper, we propose a novel approach to detect anomalous sensitive data flows in Android apps, with improved accuracy. To achieve this objective, we first group trusted apps according to the topics inferred from their functional descriptions. We then learn sensitive information flows with respect to each group of trusted apps. For a given app under analysis, anomalies are identified by comparing sensitive information flows in the app against those flows learned from trusted apps grouped under the same topic. In the evaluation, information flow is learned from 11,796 trusted apps. We then checked for anomalies in 596 new (benign) apps and identified 2 previously-unknown vulnerable apps related to anomalous flows. We also analyzed 18 malware apps and found anomalies in 6 of them.
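As a rough illustration of the topic-based grouping mentioned in this abstract, the sketch below infers per-description topic distributions with scikit-learn's LDA and takes the most probable topic as the dominant one. The toy descriptions, the number of topics, and the use of scikit-learn (rather than the tool chain actually used in the paper) are assumptions for illustration only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

descriptions = [
    "find restaurants, maps and transportation while traveling",   # toy app descriptions
    "chat and share photos with your contacts",
]
X = CountVectorizer(stop_words="english").fit_transform(descriptions)
lda = LatentDirichletAllocation(n_components=5, random_state=0)    # assumed number of topics
doc_topics = lda.fit_transform(X)             # per-app topic probability distribution
dominant_topic = doc_topics.argmax(axis=1)    # topic with the highest probability per app
```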
The approaches proposed in Mudflow @cite_13 and Chabada @cite_19 are closely related to ours. Mudflow @cite_13 is a tool for malware detection based on sensitive information flow. Similar to our approach, they rely on static taint analysis to detect flows of sensitive data towards potential leaks. Then, these flows are used to train a @math SVM one-class classifier and later classify new apps. While we also use static analysis, the main difference is that we consider the dominant topic inferred from app description as an important feature for the classification. Our empirical evaluation shows that dominant topics are fundamental to achieve a higher in anomaly detection. Moreover, our approach not only focuses on detecting malware, but also focuses on vulnerable and defective apps. Lastly, while Mudflow applies intra-component static analysis, we use inter-component analysis for covering flows across components.
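For context, the kind of one-class classification attributed to Mudflow above can be sketched with scikit-learn as follows: the classifier is trained only on flows mined from benign apps and then flags outliers. The feature encoding (a fixed-length flow-count vector per app) and all data here are placeholders, not Mudflow's actual pipeline.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Placeholder data: one fixed-length flow-feature vector per benign app.
benign_flows = np.random.rand(100, 20)
clf = OneClassSVM(nu=0.1, kernel="rbf", gamma="scale").fit(benign_flows)

new_app = np.random.rand(1, 20)
is_anomalous = clf.predict(new_app)[0] == -1   # -1 = outlier w.r.t. the benign training flows
```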
{ "abstract": [ "How do we know a program does what it claims to do? After clustering Android apps by their description topics, we identify outliers in each cluster with respect to their API usage. A \"weather\" app that sends messages thus becomes an anomaly; likewise, a \"messaging\" app would typically not be expected to access the current location. Applied on a set of 22,500+ Android applications, our CHABADA prototype identified several anomalies; additionally, it flagged 56 of novel malware as such, without requiring any known malware patterns.", "What is it that makes an app malicious? One important factor is that malicious apps treat sensitive data differently from benign apps. To capture such differences, we mined 2,866 benign Android applications for their data flow from sensitive sources, and compare these flows against those found in malicious apps. We find that (a) for every sensitive source, the data ends up in a small number of typical sinks; (b) these sinks differ considerably between benign and malicious apps; (c) these differences can be used to flag malicious apps due to their abnormal data flow; and (d) malicious apps can be identified by their abnormal data flow alone, without requiring known malware samples. In our evaluation, our mudflow prototype correctly identified 86.4 of all novel malware, and 90.1 of novel malware leaking sensitive data." ], "cite_N": [ "@cite_19", "@cite_13" ], "mid": [ "2168649891", "2071536101" ] }
AnFlo: Detecting Anomalous Sensitive Information Flows in Android Apps
Android applications (apps) are often granted access to users' privacy-and security-sensitive information such as GPS position, phone contacts, camera, microphone, training log, and heart rate. Apps need such sensitive data to implement their functionalities and provide rich user experiences. For instance, accurate GPS position is needed to navigate users to their destinations, phone contact is needed to implement messaging and chat functionalities, and heart rate frequency is important to accurately monitor training im-Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. Often, to provide services, apps may also need to exchange data with other apps in the same smartphone or externally with a remote server. For instance, a camera app may share a picture with a multimedia messaging app for sending it to a friend. The messaging app, in turn, may send the full contacts list from the phone directory to a remote server in order to identify which contacts are registered to the messaging service so that they can be shown as possible destinations. As such, sensitive information may legitimately be propagated via message exchanges among apps or to remote servers. On the other hand, sensitive information might be exposed unintentionally by defective/vulnerable apps or intentionally by malicious apps (malware), which threatens the security and privacy of end users. Existing literature on information leak in smartphone apps tend to overlook the difference between legitimate data flows and illegitimate ones. Whenever information flow from a sensitive source to a sensitive sink is detected, either statically [23], [20], [19,15,3], [22], [17], [12] or dynamically [8], it is reported as potentially problematic. In this paper, we address the problem of detecting anomalous information flows with improved accuracy by classifying cases of information flows as either normal or anomalous according to a reference information flow model. More specifically, we build a model of sensitive information flows based on the following features: • Data source: the provenance of the sensitive data that is being propagated; • Data sink: the destination where the data is flowing to; and • App topic: the declared functionalities of the app according to its description. Data source and data sink features are used to reflect information flows from sensitive sources to sinks and summarize how sensitive data is handled by an app. However, these features are not expressive enough to build an accurate model. In fact, distinct apps might have very different functionalities. What is considered legitimate of a particular set of apps (e.g., sharing contacts for a messaging app) can be considered a malicious behavior for other apps (e.g., a piece of malware that steals contacts, to be later used by spammers). An accurate model should also take into consideration the main functionalities that is declared by an app (in our case the App topic). 
One should classify an app as anomalous only when it exhibits sensitive information flows that are not consistent with its declared functionalities. This characteristic, which makes an app anomalous, is captured by the App topic feature. In summary, our approach focuses on detecting apps that are anomalous in terms of information flows compared to other apps with similar functionalities. Such an approach would be useful for various stakeholders. For example, market owners (e.g., Google) can focus on performing more complex and expensive security analysis only on those cases that are reported as anomalous, before publishing them. If such information is available to end users, they could also make informed decision of whether or not to install the anomalous app. For example, when the user installs an app, a warning stating that this particular app sends contact information through the Internet differently from other apps with similar functionalities (as demonstrated in the tool website). In the context of BYOD (bring your own device) where employees use their own device to connect to the secure corporate network, a security analyst might benefit from this approach to emphasis manual analysis on those anomalous flows that might compromise the confidentiality of corporate data stored in the devices. The specific contributions of this paper are: • An automated, fast approach for detecting anomalous flows of sensitive information in Android apps through a seamless combination of static analysis, natural language processing, model inference, and classification techniques; • The implementation of the proposed approach in a tool called AnFlo which is publicly available 1 ; and • An extensive empirical evaluation of our approach based on 596 subject apps, which assesses the accuracy and runtime performance of anomalous information flow detection. We detected 2 previous-unknown vulnerable apps related to anomalous flows. We also analyzed 18 malware apps and found anomalies in 6 of them. The rest of the paper is organized as follows. Section 2 motivates this work. Section 3 compares our work with literature. Section 4 first gives an overview of our approach and then explains the steps in details. Section 5 evaluates our approach. Section 6 concludes the paper. MOTIVATION To implement their services, apps may access sensitive data. It is important that application code handling such data follows secure coding guidelines to protect user privacy and security. However, fast time-to-market pressure often pushes developers to implement data handling code quickly without considering security implications and release apps without proper testing. As a result, apps might contain defects that leak sensitive data unintentionally. They may also contain security vulnerabilities such as permission redelegation vulnerabilities [9], which could be exploited by malicious apps installed on the same device to steal sensitive data. Sensitive data could also be intentionally misused by malicious apps. Malicious apps such as malware and spyware often implement hidden functionalities not declared in 1 Tool and dataset available at http://selab.fbk.eu/anflo/ their functional descriptions. For example, a malicious app may declare only entertainment features (e.g., games) in its description, but it steals user data or subscribes to paid services without the knowledge and consent of the user. 
Defective, vulnerable, and malicious apps all share the same pattern, i.e., they (either intentionally or unintentionally) deal with sensitive data in an anomalous way, i.e., they behave differently in terms of dealing with sensitive data compared to other apps that state similar functionalities. Therefore, novel approaches should focus on detecting anomalies in sensitive data flows, caused by mismatches between expected flows (observed in benign and correct apps) and actual data flows observed in the app under analysis. However, the comparison should be only against similar apps that offer similar functionalities. For instance, messaging apps are expected to read information from phone contact list but they are not expected to use GPS position. These observations motivate our proposed approach. ANOMALOUS INFORMATION FLOW DE-TECTION Overview The overview of our approach is shown in Figure 1. It has two main phases -Learning and Classification. The input to the learning phase is a set of apps that are trusted to be benign and correct in the way sensitive data is handled (we shall denote them as trusted apps). It has two sub-steps -feature extraction and model inference. In the feature extraction step, (i) topics that best characterize the trusted apps are inferred using natural language processing (NLP) techniques and (ii) information flows from sensitive sources to sinks in the trusted apps are identified using static taint analysis. In the model inference step, we build sensitive information model that characterizes information flows regarding each topic. These models and a given app under analysis (we shall denote it as AUA) are the inputs to the classification phase. In this phase, basically, the dominant topic of the AUA is first identified to determine the relevant sensitive information flow model. Then, if the AUA contains any information flow that violates that model, i.e., is not consistent with the common flows characterized by the model, it is flagged as anomalous. Otherwise, it is flagged as normal. We implemented this approach in our tool AnFlo to automate the detection of anomalous information flows. However, a security analyst is required to further inspect those anomalous flows and determine whether or not the flows could actually lead to serious vulnerabilities such as information leakage issues. Topics discovery. Topics representative of a given preprocessed app description are identified using the Latent Dirichlet Allocation (LDA) technique [6], implemented in a tool called Mallet [18]. LDA is a generative statistical model that represents a collection of text as a mixture of topics with certain probabilities, where each word appearing in the text is attributable to one of the topics. The output of LDA is a list of topics, each of them with its corresponding probability. The topic with the highest probability is labeled as the dominant topic for its associated app. To illustrate, Figure 2 shows the functional description of an app called BestTravel, and the resulting output after performing pre-processing and topics discovery on the description. "Travel" is the dominant topic, the one with the highest probability of 70%. Then, the topics "Communication", "Finance", and "Photography" have the 15%, 10%, and 5% probabilities, respectively, of being the functionalities that the app declares to provide. The ultimate and most convenient way of traveling. 
Use BestTravel while on the move, to find restaurants (including pictures and prices), local transportation schedule, ATM machines and much more. App name Travel Communication Finance Photography BestTravel 70% 15% 10% 5% Figure 2: Example of app description and topic analysis result. Note that we did not consider Google Play categories as topics even though apps are grouped under those categories in Google Play. This is because recent studies [1,10] have reported that NLP-based topic analysis on app descriptions produces more cohesive clusters of apps than those apps grouped under Google Play categories. Static Analysis Sensitive information flows in the trusted apps are extracted using static taint analysis. Taint analysis is an instance of flow-analysis technique, which tags program data with labels that describe its provenance and propagates these tags through control and data dependencies. A different label is used for each distinct source of data. Tags are propagated from the operand(s) in the right-hand side of an assignment (uses) to the variable assigned in the left-hand side of the assignment (definition). The output of taint analysis is information flows, i.e., what data of which provenances (sources) are accessed at what program operations, e.g., on channels that may leak sensitive information (sinks). Our analysis focuses on the flows of sensitive information into sensitive program operations, i.e., our taint analysis generates tags at API calls that read sensitive information (e.g. GPS and phone contacts) and traces the propagation of tags into API calls that perform sensitive operations such as sending messages and Bluetooth packets. These sensitive APIs usually belong to dangerous permission group and hence, the APIs that we analyze are those privileged APIs that require to be specifically granted by the end user. Sources and sinks are the privileged APIs available from PScout [4]. The APIs that we analyze also include those APIs that enable Inter Process Communication (IPC) mechanism of Android because they can be used to exchange data among apps installed on the same device. As a result, our taint analysis generates a list of (source → sink) pairs, where each pair represents the flow of sensitive data originating from a source into a sink. APIs (both for sources and for sinks) are grouped according to the special permission required to run them. For example, all the network related sink functions, such as open-Connection(), connect() and getContent() are all modeled as Internet sinks, because they all require the INTER-NET permission to be executed. Figure 3 shows the static taint analysis result on the "BestTravel" running example app from Figure 2. It generates two (source → sink) pairs that correspond to two sensitive information flows. In the first flow, data read from the GPS is propagated through the program until it reaches a statement where it is sent over the network. In the second flow, data from the phone contacts is used to compose a text message. Our tool, AnFlo, runs on compiled byte-code of apps to perform the above static taint analysis. It relies on two existing tools -IC3 [19] and IccTA [15]. Android apps are usually composed of several components. Therefore, to precisely extract inter-component information flows, we need to analyze the links among components. AnFlo uses IC3 to resolve the target components when a flow is inter-component. 
IC3 uses a solver to infer all possible values of complex objects in an inter-procedural, flow-and context-sensitive manner. Once inter-component links are inferred, AnFlo uses an inter-component data-flow analysis tool called IccTA to perform static taint analysis. We customized IccTA to produce flows in a format as presented in Figure 3 and paths in a more verbose format to facilitate manual checks. App: BestTravel GPS → Internet Contacts → SMS Model Inference When results of topic analysis and of static analysis are available for all the trusted apps, they are used to build the Sensitive Information Flow Model. Such a model is a matrix with sensitive information sources in its rows and sinks in its columns, as shown in Figure 4. Firstly, apps with the same dominant topic are grouped together 5 , to build a sensitive information flow model corresponding to that specific topic. Each group is labeled with the dominant topic. Next, each cell of the matrix is filled with a number, representing the number of apps in this group having the corresponding (source → sink) pair. Figure 4 shows a sample sensitive information model regarding the topic "Travel". There are 36 distinct flows in the apps grouped under this dominant topic. The matrix shows that there are ten apps containing GPS position flowing through the Internet (one of them being the BestTravel app, see Figure 3); eight apps through text messages and three apps through Bluetooth. Similarly, the matrix shows that contacts information flows through SMS in seven apps and through Bluetooth in eight apps. From this model, we can observe that for Travel apps it is quite common to share the user's position via Internet and SMS. However, it is quite uncommon to share the position data via Bluetooth since it happened only in three cases. Likewise, the phone contacts are commonly shared through text messages and Bluetooth but not through Internet. To provide a formal and operative definition of common and uncommon flows, we compute a threshold denoted as τ . Flows that occur more than or equal to τ are considered as common; flows that never occur or that occur fewer than τ are considered as uncommon regarding this topic. Although our model assumes or trusts that the trusted apps are benign and correct, it is possible that some of them may contain defects, vulnerabilities or malware. This problem is addressed by classifying those flows occurring less than the threshold τ as uncommon, i.e., our approach tolerates the presence of some anomalous flows in the reference model since these flows would still be regarded as uncommon. Hence, our approach works as long as the majority of the trusted apps are truly trustworthy. To compute this threshold, we adopt the box-plot approach proposed by Laurikkala et al. [13], considering only flows occurring in the model, i.e., we consider only values greater than zero. τ is computed in the same way as drawing outlier dots in boxplots. It is the lower quartile (25th percentile) minus the step, where the step is 1.5 times the difference between the upper quartile (75th percentile) and the lower quartile (25th percentile). It should be noted that τ is not trivially the lower quartile; otherwise 25% of the apps would be outliers by construction. The threshold is lower, i.e., it is the lower quartile minus the step. Therefore, there is no fixed amount of outliers. Outliers could be few or many depending on the distribution of data. 
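To make the threshold definition concrete, here is a small, self-contained Java sketch of the box-plot rule described above, assuming linear interpolation for quartiles. The exact quartile convention used by AnFlo is not specified, so the value computed on the Figure 4 counts may differ from the τTravel = 7 quoted later in the text; class and method names are ours.

import java.util.Arrays;

// Sketch of the box-plot threshold: tau = Q1 - 1.5 * (Q3 - Q1), computed on
// the non-zero cells of the source-sink matrix of one topic. The quartile
// convention (linear interpolation) is an assumption on our part.
public class FlowThreshold {

    // p in [0,1]; returns the p-th percentile of the given counts.
    static double percentile(int[] values, double p) {
        int[] sorted = values.clone();
        Arrays.sort(sorted);
        double pos = p * (sorted.length - 1);
        int lo = (int) Math.floor(pos);
        int hi = (int) Math.ceil(pos);
        return sorted[lo] + (pos - lo) * (sorted[hi] - sorted[lo]);
    }

    static double tau(int[] nonZeroCounts) {
        double q1 = percentile(nonZeroCounts, 0.25);
        double q3 = percentile(nonZeroCounts, 0.75);
        return q1 - 1.5 * (q3 - q1);
    }

    public static void main(String[] args) {
        // Non-zero counts of the "Travel" matrix in Figure 4.
        int[] travel = {10, 8, 3, 7, 8};
        System.out.println("tau = " + tau(travel));
    }
}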
Outliers would only be those cases that are really different from the majority of the training data points. In the example regarding the topic "Travel" in Figure 4, the threshold is computed considering only the five values that are > 0. The value of the threshold is τTravel = 7. It means that GPS data sent through the Internet (GPS → Internet) or text messages (GPS → SMS) are common for traveling apps. Conversely, even though there are three trusted apps which send GPS data through Bluetooth (GPS → Bluetooth), these are too few cases to be considered common, and this sensitive information flow will be considered uncommon in the model. Likewise, phone contacts are commonly sent through text messages and Bluetooth, but it is uncommon for them to be sent through the Internet, since this never occurs in the trusted apps.

Classification

After the Sensitive Information Flow Models are built on trusted apps, they can be used to classify a new AUA. First of all, features must be extracted from the AUA. The features are the topics associated with the app description and the sensitive information flows in the app. As in Section 4.2.1, first data pre-processing is performed on the app description of the AUA. Then, topics and their probabilities are inferred from the pre-processed description using the Mallet tool. Among all the topics, we consider only the dominant topic, the one with the highest probability, because it is the topic that most characterizes this app. We then obtain the Sensitive Information Flow Model associated with this dominant topic. To ensure the availability of the Sensitive Information Flow Model, the Mallet tool is configured with the list of topics for which the Models have already been built on the trusted apps. Given an app description, the Mallet tool then only generates topics from this list. The more diverse the trusted apps we analyze, the more complete the list of models we expect to build. For example, Figure 5(a) shows the topics inferred from the description of a sample AUA "TripOrganizer". The topic "Travel" is highlighted in bold to denote that it is the dominant topic. Next, sensitive information flows in the AUA are extracted as described in Section 4.2.2. The extracted flows are then compared against the flows in the model associated with the dominant topic. If the AUA contains only flows that are common according to the model, the app is considered consistent with the model. If the app contains a flow that is not present in the model, or a flow that is present but is uncommon according to the model, the flow, and thus the app, is classified as anomalous. Anomalous flows require further manual inspection by a security analyst, because they could be due to defects, vulnerabilities, or malicious intentions. For example, Figure 5(b) shows three sensitive information flows extracted from the "TripOrganizer" app. Since the dominant topic for this app is "Travel", these flows can be checked against the model associated with this topic shown in Figure 4. Earlier, we computed for this model that the threshold is τTravel = 7 and that the flow (Contacts → SMS) is common (see Section 4.3). Therefore, flow 1 observed in "TripOrganizer" (Figure 5(b)) is consistent with the model. However, flow 2 (Contacts → Internet) and flow 3 (GPS → Bluetooth), highlighted in bold in Figure 5(b), are uncommon according to the model. As a result, the AUA "TripOrganizer" is classified as anomalous.

EMPIRICAL ASSESSMENT

In this section, we evaluate the usefulness of our approach and report the results.
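Before turning to the evaluation, the classification step just described can be summarized by the following illustrative Java sketch: the dominant topic is the most probable one, and an app is flagged as anomalous as soon as one of its flows is unseen or occurs fewer than τ times in the model of that topic. The TripOrganizer topic probabilities are invented for illustration (the text only states that "Travel" is dominant); the model counts and the threshold follow the Figure 4 example.

import java.util.List;
import java.util.Map;

// Minimal sketch of the classification phase; names and data structures are
// illustrative, not AnFlo's actual code. A topic model is a map from a
// "source->sink" key to the number of trusted apps exhibiting that flow.
public class Classification {

    // The dominant topic is simply the one with the highest probability.
    static String dominantTopic(Map<String, Double> topicProbabilities) {
        return topicProbabilities.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow();
    }

    // Anomalous if any flow is unseen in the model or occurs fewer than tau times.
    static boolean isAnomalous(Map<String, Integer> topicModel, double tau, List<String> appFlows) {
        return appFlows.stream().anyMatch(f -> topicModel.getOrDefault(f, 0) < tau);
    }

    public static void main(String[] args) {
        // Assumed topic distribution for the TripOrganizer example (Figure 5(a)).
        Map<String, Double> topics = Map.of("Travel", 0.60, "Finance", 0.25, "Photography", 0.15);
        // "Travel" model from Figure 4 and the threshold reported in the text.
        Map<String, Integer> travelModel = Map.of(
                "GPS->Internet", 10, "GPS->SMS", 8, "GPS->Bluetooth", 3,
                "Contacts->SMS", 7, "Contacts->Bluetooth", 8);
        double tau = 7;
        // Flows extracted from TripOrganizer (Figure 5(b)).
        List<String> flows = List.of("Contacts->SMS", "Contacts->Internet", "GPS->Bluetooth");

        System.out.println(dominantTopic(topics));                 // Travel
        System.out.println(isAnomalous(travelModel, tau, flows));  // true
    }
}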
We assess our approach by answering the following research questions:
• RQVul: Is AnFlo useful for identifying vulnerable apps containing anomalous information flows?
• RQTime: How long does AnFlo take to classify apps?
• RQTopics: Is the topic feature really needed to detect anomalous flows?
• RQCat: Can app-store categories be used instead of topics to learn an accurate Sensitive Information Flow Model?
• RQMal: Is AnFlo useful for identifying malicious apps?

The first research question, RQVul, investigates the results of AnFlo, i.e., whether it is useful for detecting anomalies in vulnerable apps that, for example, may leak sensitive information. RQTime investigates the cost of using our approach in terms of the time taken to analyze a given AUA. A short analysis time is essential for tool adoption in a real production environment. Then, in the next two research questions, we investigate the role of topics as a feature for building the Sensitive Information Flow Models. RQTopics investigates the absolute contribution of topics, by learning the Sensitive Information Flow Model without considering the topics and by comparing its performance with that of our original model. To answer RQCat, we replace topics with the categories defined in the official market, and we compare the performance of this new model with that of our original model. Finally, the last research question, RQMal, investigates the usefulness of AnFlo in detecting malware based on anomalies in sensitive information flows.

Benchmarks and Experimental Settings

Trusted Apps

AnFlo needs a set of trusted apps to learn what the normal behavior of "correct and benign" apps is. We defined the following guidelines to collect trusted apps: (i) apps that come from the official Google Play Store (so they are scrutinized and checked by the store maintainer) and (ii) apps that are very popular (so they are widely used and reviewed by a large community of end users, and programming mistakes are quickly notified and patched). At the time of crawling the Google Play Store, it had 30 different app categories. From each category, we downloaded, on average, the top 500 apps together with their descriptions. We then discarded apps with non-English descriptions and those with very short descriptions (less than 10 words). Eventually, we were left with 11,796 apps for building reference models. Additionally, we measured whether these apps were actively maintained by looking at the date of the last update. 70% of the apps were last updated in the past 6 months before the Play Store was crawled, while 32% of the apps were last updated within the same month of the crawling. This supports the claim that the trusted apps are well maintained. The fact that the trusted apps we use are suggested and endorsed by the official store, and that they have collected good end-user feedback, allows us to assume that the apps are of high quality and do not contain many security problems. Nevertheless, as explained in Section 4.3, our approach is robust against the inclusion of a small number of anomalous apps in the training set since we adopt a threshold to classify anomalous information flows.

Subject Benign Apps

AnFlo works on compiled apps and, therefore, the availability of source code is not a requirement for the analysis. However, for the sake of this experiment, we opted for open source projects, which enable us to inspect the source code and establish the ground truth.
The F-Droid repository represents an ideal setting for our experimentation because (i) it includes real-world apps that are also popular in the Google Play Store, and (ii) apps can be downloaded with their source code for manual verification of the vulnerability reports delivered by AnFlo. The F-Droid repository was crawled in July 2017 for apps that meet our criteria. Among all the apps available in this repository, we used only those apps that are also available in the Google Play Store and whose descriptions meet our selection criteria (i.e., the description is in English and it is longer than 10 words). Eventually, our experimental set of benign apps consists of 596 AUAs.

Subject Malicious Apps

To investigate whether AnFlo can identify malware, we need a set of malicious apps with their declared functional descriptions. Malicious apps are usually repackaged versions of popular (benign) apps, injected with malicious code (Trojanized); hence the descriptions of those popular apps they disguise as can be considered as their app descriptions. Thus, by identifying the original versions of these malicious apps in the Google Play Store, we obtain their declared functional descriptions. We consider the malicious apps from the Drebin malware dataset [2], which consists of 5,560 samples that have been collected between August 2010 and October 2012. We randomly sampled 560 apps from this dataset. For each malicious app, we performed static analysis to extract the package name, an identifier used by Android and by the official store to distinguish Android apps (even if it is easy to obfuscate this piece of information, in our experiment some apps did not rename their package name). We queried the official Google Play market for the original apps, by searching for those having the same package name. Among our sampled repackaged malicious apps, we found 20 of the apps in the official market with the same package name. We analyzed their descriptions and found that only 18 of them have English descriptions. We therefore performed static taint analysis on these 18 malware samples, for which we found their "host" apps in the official market. Our static analysis crashed on 6 cases. Therefore, our experimental set of malicious apps consists of 12 AUAs.

Results

Detecting Vulnerable Apps

Firstly, AnFlo was used to perform static taint analysis on the 11,796 trusted apps and topic analysis on their descriptions from the official Play Store. It then learned the Sensitive Information Flow Models based on the dominant topics and the extracted flows, as described in Section 4.3. Then, the AUAs from the F-Droid repository (Section 5.1.2) were classified based on the Sensitive Information Flow Models. Out of 596 AUAs, static taint analysis reported 76 apps to contain flows of sensitive information that reach sinks, for a total of 1,428 flows. These flows map to 147 distinct source-sink pairs. Out of these 76 apps, 14 AUAs were classified as anomalous. Table 1 shows the analysis results reported by AnFlo. The first column presents the name of the app. The second column presents the app's dominant topic. The third and fourth columns present the source of sensitive data and the sink identified by static taint analysis, respectively. As shown in Table 1, in total AnFlo reported 25 anomalous flows in these apps. We manually inspected the source code available from the repository to determine if these anomalous flows were due to programming defects or vulnerabilities. Two apps were found to be vulnerable (highlighted in boldface in Table 1): com.matoski.adbm and com.mschlauch.comfortreader.
com.matoski.adbm is a utility app for managing the ADB debugging interface. The anomalous flow involves data from the WiFi configuration that leaks to other apps through Inter Process Communication. Among other information that may leak, the SSID data, which identifies the network to which the device is connected, can be used to infer the user's position and threaten the end user's privacy. Hence, this programming defect leads to an information leakage vulnerability that requires corrective maintenance. We reported this vulnerability to the app owners on their issue tracker. com.mschlauch.comfortreader is a book reader app, with an anomalous flow of data from IPC to the Internet. Manual inspection revealed that this anomalous flow results from a permission re-delegation vulnerability, because data coming from another app is used, without sanitization, for opening a data stream. If a malicious app that does not have the permission to use the Internet passes a URL that contains sensitive privacy data (e.g., GPS coordinates), then the app could be used to leak information (a schematic reconstruction of this pattern is sketched below). We reported this vulnerability to the app developers. Regarding the other 12 AUAs, even though they contain anomalous flows compared to trusted apps, manual inspection revealed that they are neither defective nor vulnerable. For example, some apps contain anomalous flows that involve IPC. Since data may come from other apps via IPC (source) or may flow to other apps via IPC (sink), such flows are considered dangerous in general. However, in these 12 apps, when IPC is a source (e.g., in com.alfray.timeriffic), data is either validated/sanitized before being used in the sink or used in a way that does not threaten security. On the other hand, when IPC is a sink (e.g., in com.dozingcatsoftware.asciicam), the destination is always a component in the same app, so the flows are not actually dangerous. Since AnFlo helped us detect 2 vulnerable apps containing anomalous information flows, we can answer RQVul by stating that AnFlo is useful for identifying vulnerabilities related to anomalous information flows.

Classification Time

To investigate RQTime, we analyze the time required to classify the AUAs. We instrumented the analysis script with the Linux date utility to log the time (in seconds) before starting the analysis and at its conclusion. Their difference is the amount of time spent in the computation. The experiment was run on a multi-core cluster, specifically designed to let a process run without sharing memory or computing resources with other processes. Thus, we assume that the time measurement is reliable. Classification time includes the static analysis step to extract data flows, the natural language step to extract topics from the description, and the comparison with the Sensitive Information Flow Model to check for consistency. Figure 6 reports the boxplot of the time (in minutes) needed to classify the F-Droid apps and the descriptive statistics. On average, an app took 1.9 minutes to complete the classification, and most of the analyses concluded in less than 3 minutes (median = 1.5). Only a few (outlier) cases required a longer analysis time.

Topics from App Description

We now run another experiment to verify our claim that topics are important features to build an accurate model (RQTopics).
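For readers unfamiliar with the pattern, the following schematic Android snippet reconstructs the kind of permission re-delegation flaw described above for com.mschlauch.comfortreader. It is our own illustration rather than the app's actual code; the activity name and the intent extra key are hypothetical, and threading and error handling are deliberately ignored.

import android.app.Activity;
import android.os.Bundle;
import java.io.InputStream;
import java.net.URL;

// Schematic reconstruction of a permission re-delegation flaw (illustrative only):
// a URL received via IPC is opened without validation, so a caller that lacks the
// INTERNET permission can route data through this app and leak it.
public class BookOpenActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // IPC source: data supplied by any other app installed on the device.
        String url = getIntent().getStringExtra("book_url");   // extra key is hypothetical
        try {
            // Internet sink: the unvalidated URL is fetched on behalf of the caller.
            InputStream in = new URL(url).openStream();
            // ... read the book content from 'in' ...
            in.close();
        } catch (Exception e) {
            // error handling (and off-main-thread networking) omitted in this sketch
        }
    }
}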
We repeated the same experiment as before, but using only flows as features and without considering topics, to check how much detection accuracy we lose in this way. We still consider all the trusted apps for learning the reference model, but we only use static analysis data. That is, we do not create a separate matrix for each topic; instead we create one big single matrix with sources and sinks for all the apps. This Sensitive Information Flow Model is then used to classify F-Droid apps and the results are shown in Table 2. As we can see, only four apps are detected as anomalous by this second approach, and all of them were already detected by our original, proposed approach. Manual inspection revealed that none of them are vulnerable. This suggests that topic is a very important feature to learn reference models in order to detect a larger number of anomalous apps. In fact, when topics are not considered and all the apps are grouped together regardless of their topics, we observe a smoothing effect. Differences among apps become less relevant to detect anomalies. While in the previous model an app was compared only against those apps grouped under the same topic, here an app is compared to all the trusted apps. Without topic as a feature, our model loses the ability to capture the characteristics of distinct groups and, thus, the ability to detect deviations from them.

Play Store Categories

To investigate RQCat, instead of grouping trusted apps based on topics, we group them according to their app categories as determined by the official Google Play Store. First of all, we split the trusted apps into groups based on the market category they belong to. We then use static analysis information about flows to build a separate source-sink matrix per category. Eventually, we compute thresholds to complete the model. We then classify each AUA from F-Droid by comparing it with the model of the corresponding market category. The classification results are reported in Table 3. Ten apps are reported as containing anomalous flows and most of them were also detected by our original, proposed approach (Table 1). Two apps reported by this approach were not reported by our proposed approach: com.angrydoughnuts.android.alarmclock and com.futurice.android.reservator. However, neither of them is a case of vulnerability or malicious behavior. Only one flow detected by this approach is a case of vulnerability, namely the one in com.matoski.adbm, highlighted in boldface, which was also detected by our proposed approach. Hence, this result supports our design decision of using topics.

Comparison of the Models

Table 4 summarizes the results of the model comparison. The first model (first row) considers both data flows and description topics as features. Even though this approach reported the largest number of false positives (12 apps, 'FP' column), we were able to detect 2 vulnerabilities ('Vuln.' column) by tracing the anomalies reported by this approach. It also detected 5 additional anomalous apps that other approaches did not detect ('Unique' column). The second model (second row) considers only data flows as a feature. Even though the number of false positives drops to 4, we were not able to detect any vulnerability by tracing the anomalies reported by this approach. This result suggests that modeling only flows is not enough for detecting vulnerabilities.
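The three reference models compared in this section differ only in how the trusted apps are grouped before the source-sink counts are accumulated. The following Java sketch makes this explicit; TrustedApp and the builder are illustrative names, not AnFlo's API.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Sketch: the same builder yields the topic-based model, the category-based
// model, or the single global matrix, depending only on the grouping key.
public class ModelBuilder {
    record TrustedApp(String dominantTopic, String storeCategory, List<String> flows) {}

    // key(app) selects the grouping criterion; each group gets its own flow-count matrix.
    static Map<String, Map<String, Integer>> build(List<TrustedApp> apps,
                                                   Function<TrustedApp, String> key) {
        Map<String, Map<String, Integer>> model = new HashMap<>();
        for (TrustedApp app : apps) {
            Map<String, Integer> matrix = model.computeIfAbsent(key.apply(app), k -> new HashMap<>());
            for (String flow : app.flows()) {
                matrix.merge(flow, 1, Integer::sum);
            }
        }
        return model;
    }

    public static void main(String[] args) {
        List<TrustedApp> trusted = List.of(
                new TrustedApp("Travel", "TRAVEL_AND_LOCAL", List.of("GPS->Internet")),
                new TrustedApp("Travel", "MAPS_AND_NAVIGATION", List.of("GPS->Internet", "Contacts->SMS")));
        System.out.println(build(trusted, TrustedApp::dominantTopic));  // grouped by topic
        System.out.println(build(trusted, TrustedApp::storeCategory));  // grouped by Play Store category
        System.out.println(build(trusted, a -> "ALL"));                 // one global matrix (flows only)
    }
}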
When market categories are used instead of description topics (last row), the number of false positives drops to 9 (25% less compared to our proposed model). It detected 2 additional anomalous apps that other approaches did not detect ('Unique' column). Tracing the anomalies reported by this approach, we detected only one out of the two vulnerabilities that we detected using our proposed approach. This result suggests that topics are more useful than categories for detecting vulnerable apps containing anomalous information flows.

Detecting Malicious Apps

Anomalies in the flow of sensitive data could be due to malicious behaviors as well. The goal of this last experiment is to investigate whether AnFlo can be used to identify malware (RQMal). To this aim, we use the Sensitive Information Flow Model (learned on the trusted apps) to classify the 18 AUAs from the Drebin malware dataset. Data flow features are extracted from these malicious apps using static analysis. However, static taint analysis crashed on 6 apps because of their heavy obfuscation. Since improving the static taint analysis implementation to work on heavily obfuscated code is out of the scope of this paper, we ran the experiment on the remaining 12 apps. Topics are extracted from the descriptions of the original versions of those malware samples, which are available on the official market store. The malicious apps have been subjected to anomaly detection based on three distinct feature sets: (i) flows and topics; (ii) only flows; and (iii) flows and market categories. The classification results are shown in Table 5. The first column reports the malware name (according to the ESET-NOD32 antivirus) and the second column contains the name of the original app that was repackaged to spread the malware. The remaining three columns report the results of malware detection by the three models based on different sets of features: a tick mark means that the model correctly detected the app as anomalous, while a cross means that no anomaly was detected. While the model based on topics and the model based on market categories classified the same 6 AUAs as malicious, the model based on only flows classified only 4 AUAs as malicious. All the malware samples except TrojanSMS.Agent are cases of privacy-sensitive information leaks, such as the device ID, phone number, e-mail or GPS coordinates being sent over the network or via SMS. One typical malicious behavior is observed in Spy.GoldDream. In this case, after querying the list of installed packages (sensitive data source), the malware attempts to kill selected background processes (sensitive sink). This is a typical malicious behavior observed in malware that tries to avoid detection by stopping security products such as antiviruses. Botnet behavior is observed in DroidKungFu. A command from the command and control (C&C) server is consulted (sensitive source) before privileged actions are performed on the device (sensitive sink). As shown in Table 5, when only static analysis features are used in the model, two malicious apps are missed. This is because this limited model compares the given AUA against all the trusted apps, instead of only the apps from a specific subset (grouped by the common topic or the same category). A flow that would have been anomalous for the specific topic (or the specific category) might be normal for another topic/category. For example, acquiring the GPS coordinates and sending them over the network is common for navigation or transportation apps.
However, it is not a common behavior for tools apps, which is the case of the Anserver malware. The remaining 6 apps in the dataset were consistently classified as not-anomalous by all the models. These false negatives are mainly due to the malicious behaviors not related to sensitive information flows, such as dialing calls in the background or blocking messages. Another reason is due to the obfuscation by malware to hide the sensitive information flows. Static analysis inherently cannot handle obfuscation. Limitation and Discussion In the following, we discuss some of the limitations of our approach and of its experimental validation. The most prominent limitation to adopt our approach is the availability of trusted apps to build the model of sensitive information flows. In our experimental validation, we trusted top ranked popular apps from the official app store, but we have no guarantee that they are all immune from vulnerabilities and from malware content. However, as explained in Section 4.3, our approach is quite robust with respect to the inclusion of a small number of defective, vulnerable, or malicious apps in the training set, as long as the majority of the training apps are benign and correct. This is because we use a threshold-based approach that models flows common to a large set of apps. Thus, vulnerable flows occurring on few training apps are not learnt as normal in the model and they would be classified as anomalous when observed in a given AUA. A flow classified as anomalous by our model needs further manual analysis to check if the anomaly is a vulnerability, a malicious behavior or is safe. Manual inspection could be an expensive task that might delay the delivery of the software product. However, in our experimental validation, manual filtering on the experimental result took quite short time, on average 30 minutes per app. Considering that the code of the app to review was new to us, we expect a shorter manual filtering phase for a developer who is quite familiar with the code of her/his app. All in all, manual effort required to manual filter results of the automated tool seems to be compatible with the fast time-to-market pressure of smart phone apps. When building sensitive information flow models, we also considered grouping of apps by using clustering technique based on the topics distribution, instead of grouping based on the dominant topic alone. But we conducted preliminary experiments using this method and observed that grouping of apps based on dominant topics produce more cohesive groups, i.e., apps that are more similar. Inherently, it is difficult for static analysis-based approaches including ours to handle obfuscated code. Therefore, if training apps are obfuscated (e.g., to limit reverse engineering attacks), our approach may collect incomplete static information and only build a partial model. And if the AUA is obfuscated, our approach may not detect the anomalies. As future work, we plan to incorporate our approach with dynamic analysis to deal with obfuscation. CONCLUSION In this paper, we proposed a novel approach to analyze the flows of sensitive information in Android apps. In our approach, trusted apps are first analyzed to extract topics from their descriptions and data flows from their code. Topics and flows are then used to learn Sensitive Information Flow models. We can use these models for analyzing new Android apps to determine whether they contain anomalous information flows. 
Our experiments show that this approach could detect anomalous flows in vulnerable and malicious apps quite fast.
6,667
1812.07894
2883454930
Smartphone apps usually have access to sensitive user data such as contacts, geo-location, and account credentials and they might share such data to external entities through the Internet or with other apps. Confidentiality of user data could be breached if there are anomalies in the way sensitive data is handled by an app which is vulnerable or malicious. Existing approaches that detect anomalous sensitive data flows have limitations in terms of accuracy because the definition of anomalous flows may differ for different apps with different functionalities; it is normal for "Health" apps to share heart rate information through the Internet but is anomalous for "Travel" apps. In this paper, we propose a novel approach to detect anomalous sensitive data flows in Android apps, with improved accuracy. To achieve this objective, we first group trusted apps according to the topics inferred from their functional descriptions. We then learn sensitive information flows with respect to each group of trusted apps. For a given app under analysis, anomalies are identified by comparing sensitive information flows in the app against those flows learned from trusted apps grouped under the same topic. In the evaluation, information flow is learned from 11,796 trusted apps. We then checked for anomalies in 596 new (benign) apps and identified 2 previously-unknown vulnerable apps related to anomalous flows. We also analyzed 18 malware apps and found anomalies in 6 of them.
Chabada @cite_19 is a tool to find apps whose descriptions differ from their implementations. While we apply similar techniques in terms of natural language processing of apps descriptions, the goals differ. The goal of their approach is to find anomalous apps among the apps in the wild based on the inconsistencies between the advertised apps descriptions and their actual behaviors. By contrast, our approach specifically targets at identifying anomalies in the flow of sensitive information in the app code. More specifically, Chabada only identifies calls to sensitive APIs to characterize benign and anomalous apps. By contrast, we consider data flows from sensitive data sources to sensitive sinks.
{ "abstract": [ "How do we know a program does what it claims to do? After clustering Android apps by their description topics, we identify outliers in each cluster with respect to their API usage. A \"weather\" app that sends messages thus becomes an anomaly; likewise, a \"messaging\" app would typically not be expected to access the current location. Applied on a set of 22,500+ Android applications, our CHABADA prototype identified several anomalies; additionally, it flagged 56 of novel malware as such, without requiring any known malware patterns." ], "cite_N": [ "@cite_19" ], "mid": [ "2168649891" ] }
AnFlo: Detecting Anomalous Sensitive Information Flows in Android Apps
Android applications (apps) are often granted access to users' privacy- and security-sensitive information such as GPS position, phone contacts, camera, microphone, training log, and heart rate. Apps need such sensitive data to implement their functionalities and provide rich user experiences. For instance, accurate GPS position is needed to navigate users to their destinations, phone contacts are needed to implement messaging and chat functionalities, and heart rate frequency is important to accurately monitor training. Often, to provide services, apps may also need to exchange data with other apps on the same smartphone or externally with a remote server. For instance, a camera app may share a picture with a multimedia messaging app for sending it to a friend. The messaging app, in turn, may send the full contacts list from the phone directory to a remote server in order to identify which contacts are registered to the messaging service so that they can be shown as possible destinations. As such, sensitive information may legitimately be propagated via message exchanges among apps or to remote servers. On the other hand, sensitive information might be exposed unintentionally by defective/vulnerable apps or intentionally by malicious apps (malware), which threatens the security and privacy of end users. Existing literature on information leaks in smartphone apps tends to overlook the difference between legitimate data flows and illegitimate ones. Whenever an information flow from a sensitive source to a sensitive sink is detected, either statically [23], [20], [19,15,3], [22], [17], [12] or dynamically [8], it is reported as potentially problematic. In this paper, we address the problem of detecting anomalous information flows with improved accuracy by classifying cases of information flows as either normal or anomalous according to a reference information flow model. More specifically, we build a model of sensitive information flows based on the following features:
• Data source: the provenance of the sensitive data that is being propagated;
• Data sink: the destination where the data is flowing to; and
• App topic: the declared functionalities of the app according to its description.
Data source and data sink features are used to reflect information flows from sensitive sources to sinks and summarize how sensitive data is handled by an app. However, these features are not expressive enough to build an accurate model. In fact, distinct apps might have very different functionalities. What is considered legitimate for a particular set of apps (e.g., sharing contacts for a messaging app) can be considered a malicious behavior for other apps (e.g., a piece of malware that steals contacts, to be later used by spammers). An accurate model should also take into consideration the main functionalities that are declared by an app (in our case the App topic).
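As a concrete illustration of the legitimate inter-app sharing scenario mentioned above (a camera app handing a picture to a messaging app), the following standard Android snippet shows such a flow. It uses only stock Android sharing APIs and is included merely to stress that sensitive data crossing app boundaries is not malicious per se; the helper class name is ours.

import android.app.Activity;
import android.content.Intent;
import android.net.Uri;

// Sketch of benign inter-app sharing via IPC: the user explicitly picks the
// receiving app (e.g., a messaging app) for the sensitive payload.
public class ShareHelper {
    static void sharePicture(Activity activity, Uri pictureUri) {
        Intent send = new Intent(Intent.ACTION_SEND);
        send.setType("image/jpeg");
        send.putExtra(Intent.EXTRA_STREAM, pictureUri);   // the sensitive data being shared
        activity.startActivity(Intent.createChooser(send, "Share picture"));
    }
}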
One should classify an app as anomalous only when it exhibits sensitive information flows that are not consistent with its declared functionalities. This characteristic, which makes an app anomalous, is captured by the App topic feature. In summary, our approach focuses on detecting apps that are anomalous in terms of information flows compared to other apps with similar functionalities. Such an approach would be useful for various stakeholders. For example, market owners (e.g., Google) can focus on performing more complex and expensive security analysis only on those cases that are reported as anomalous, before publishing them. If such information is available to end users, they could also make an informed decision about whether or not to install the anomalous app. For example, when the user installs an app, a warning could state that this particular app sends contact information through the Internet differently from other apps with similar functionalities (as demonstrated on the tool website). In the context of BYOD (bring your own device), where employees use their own devices to connect to the secure corporate network, a security analyst might benefit from this approach to focus manual analysis on those anomalous flows that might compromise the confidentiality of corporate data stored in the devices. The specific contributions of this paper are:
• An automated, fast approach for detecting anomalous flows of sensitive information in Android apps through a seamless combination of static analysis, natural language processing, model inference, and classification techniques;
• The implementation of the proposed approach in a tool called AnFlo, which is publicly available (tool and dataset at http://selab.fbk.eu/anflo/); and
• An extensive empirical evaluation of our approach based on 596 subject apps, which assesses the accuracy and runtime performance of anomalous information flow detection. We detected 2 previously unknown vulnerable apps related to anomalous flows. We also analyzed 18 malware apps and found anomalies in 6 of them.
The rest of the paper is organized as follows. Section 2 motivates this work. Section 3 compares our work with the literature. Section 4 first gives an overview of our approach and then explains the steps in detail. Section 5 evaluates our approach. Section 6 concludes the paper.

MOTIVATION

To implement their services, apps may access sensitive data. It is important that application code handling such data follows secure coding guidelines to protect user privacy and security. However, fast time-to-market pressure often pushes developers to implement data handling code quickly without considering security implications and release apps without proper testing. As a result, apps might contain defects that leak sensitive data unintentionally. They may also contain security vulnerabilities, such as permission re-delegation vulnerabilities [9], which could be exploited by malicious apps installed on the same device to steal sensitive data. Sensitive data could also be intentionally misused by malicious apps. Malicious apps such as malware and spyware often implement hidden functionalities not declared in their functional descriptions. For example, a malicious app may declare only entertainment features (e.g., games) in its description, but it steals user data or subscribes to paid services without the knowledge and consent of the user.
When market categories are used instead of description topics (last row), the false positives drops to 9 (25% less compared to our proposed model). It detected 2 additional anomalous apps that other approaches did not detect ('Unique' column). Tracing the anomalies reported by this approach, we detected only one out of the two vulnerabilities that we detected using our proposed approach. This result suggests that topics are more useful than categories for detecting vulnerable apps containing anomalous information flows. Detecting Malicious Apps Anomalies in the flow of sensitive data could be due to malicious behaviors as well. The goal of this last experiment is to investigate whether AnFlo can be used to identify malware (RQ M al ). To this aim, we use the Sensitive Infor-mation Flow Model (learned on the trusted apps) to classify the 18 AUAs from the Drebin malware dataset. Data flow features are extracted using static analysis from these malicious apps. However, static taint analysis crashed on 6 apps because of their heavy obfuscation. Since improving the static taint analysis implementation to work on heavy obfuscated code is out of the scope of this paper, we run the experiment on the remaining 12 apps. Topics are extracted from the descriptions of the original versions of those malware, which are available at the official market store. The malicious apps have been subject to anomaly detection, based on the three distinct feature sets: (i) flows and topics; (ii) only flows; and (iii) flows and market categories. The classification results are shown in Table 5. The first column reports the malware name (according to ESET-NOD32 9 antivirus) and the second column contains the name of the original app that was repackaged to spread the malware. The remaining three columns report the results of malware detection by the three models based on different sets of features: a tick mark ("") means that the model correctly detected the app as anomalous, while a cross ("") means no anomaly detected. While the model based on topics and the model based on market categories classified the same 6 AUAs as malicious, the model based on only flows classified only 4 AUAs as malicious. All the malware except TrojanSMS.Agent are the cases of privacy sensitive information leaks such as device ID, phone number, e-mail or GPS coordinate, being sent over the network or via SMS. One typical malicious behavior is observed in Spy.GoldDream. In this case, after querying the list of installed packages (sensitive data source), the malware attempts to kill selected background processes (sensitive sink). This is a typical malicious behavior observed in malware that tries to avoid detection by stopping security products such as antiviruses. Botnet behavior is observed in Droid-KunFu. A command and control (C&C) server command is consulted (sensitive source) before performing privileged actions on the device (sensitive sink). As shown in Table 5, when only static analysis features are used in the model, two malicious apps are missed. This is because this limited model compares the given AUA against all the trusted apps, instead of only the apps from a specific subset (grouped by the common topic or the same category). A flow that would have been anomalous for the specific topic (or the specific category) might be normal for another topic/category. For example, acquiring GPS coordinate and sending it over the network is common for navigation or transportation apps. 
However, it is not a common behavior for tools apps, which is the case of the Anserver malware. The remaining 6 apps in the dataset were consistently classified as not-anomalous by all the models. These false negatives are mainly due to the malicious behaviors not related to sensitive information flows, such as dialing calls in the background or blocking messages. Another reason is due to the obfuscation by malware to hide the sensitive information flows. Static analysis inherently cannot handle obfuscation. Limitation and Discussion In the following, we discuss some of the limitations of our approach and of its experimental validation. The most prominent limitation to adopt our approach is the availability of trusted apps to build the model of sensitive information flows. In our experimental validation, we trusted top ranked popular apps from the official app store, but we have no guarantee that they are all immune from vulnerabilities and from malware content. However, as explained in Section 4.3, our approach is quite robust with respect to the inclusion of a small number of defective, vulnerable, or malicious apps in the training set, as long as the majority of the training apps are benign and correct. This is because we use a threshold-based approach that models flows common to a large set of apps. Thus, vulnerable flows occurring on few training apps are not learnt as normal in the model and they would be classified as anomalous when observed in a given AUA. A flow classified as anomalous by our model needs further manual analysis to check if the anomaly is a vulnerability, a malicious behavior or is safe. Manual inspection could be an expensive task that might delay the delivery of the software product. However, in our experimental validation, manual filtering on the experimental result took quite short time, on average 30 minutes per app. Considering that the code of the app to review was new to us, we expect a shorter manual filtering phase for a developer who is quite familiar with the code of her/his app. All in all, manual effort required to manual filter results of the automated tool seems to be compatible with the fast time-to-market pressure of smart phone apps. When building sensitive information flow models, we also considered grouping of apps by using clustering technique based on the topics distribution, instead of grouping based on the dominant topic alone. But we conducted preliminary experiments using this method and observed that grouping of apps based on dominant topics produce more cohesive groups, i.e., apps that are more similar. Inherently, it is difficult for static analysis-based approaches including ours to handle obfuscated code. Therefore, if training apps are obfuscated (e.g., to limit reverse engineering attacks), our approach may collect incomplete static information and only build a partial model. And if the AUA is obfuscated, our approach may not detect the anomalies. As future work, we plan to incorporate our approach with dynamic analysis to deal with obfuscation. CONCLUSION In this paper, we proposed a novel approach to analyze the flows of sensitive information in Android apps. In our approach, trusted apps are first analyzed to extract topics from their descriptions and data flows from their code. Topics and flows are then used to learn Sensitive Information Flow models. We can use these models for analyzing new Android apps to determine whether they contain anomalous information flows. 
Our experiments show that this approach could detect anomalous flows in vulnerable and malicious apps quite fast.
6,667
1812.07894
2883454930
Smartphone apps usually have access to sensitive user data such as contacts, geo-location, and account credentials and they might share such data to external entities through the Internet or with other apps. Confidentiality of user data could be breached if there are anomalies in the way sensitive data is handled by an app which is vulnerable or malicious. Existing approaches that detect anomalous sensitive data flows have limitations in terms of accuracy because the definition of anomalous flows may differ for different apps with different functionalities; it is normal for "Health" apps to share heart rate information through the Internet but is anomalous for "Travel" apps. In this paper, we propose a novel approach to detect anomalous sensitive data flows in Android apps, with improved accuracy. To achieve this objective, we first group trusted apps according to the topics inferred from their functional descriptions. We then learn sensitive information flows with respect to each group of trusted apps. For a given app under analysis, anomalies are identified by comparing sensitive information flows in the app against those flows learned from trusted apps grouped under the same topic. In the evaluation, information flow is learned from 11,796 trusted apps. We then checked for anomalies in 596 new (benign) apps and identified 2 previously-unknown vulnerable apps related to anomalous flows. We also analyzed 18 malware apps and found anomalies in 6 of them.
Information leakage in mobile apps is a widespread security problem, and many approaches that address it are related to ours. Information flow in mobile apps is analyzed either statically @cite_5 , @cite_24 , @cite_20 , @cite_3 , @cite_22 , @cite_15 , @cite_7 or dynamically @cite_8 , to detect the disclosure of sensitive information. Taint sources are system calls that access private data (e.g., GPS position, contact entries), while sinks are all the channels through which data can leave the device (e.g., network transmissions). An issue is reported when privileged information could potentially leave the app through one of the sinks. In the following, we discuss some of these approaches and then explain the major differences with respect to our work.
{ "abstract": [ "We report on applying techniques for static information flow analysis to identify privacy leaks in Android applications. We have crafted a framework which checks with the help of a security type system whether the Dalvik bytecode implementation of an Android app conforms to a given privacy policy. We have carefully analyzed the Android API for possible sources and sinks of private data and identified exemplary privacy policies based on this. We demonstrate the applicability of our framework on two case studies showing detection of privacy leaks.", "One approach to defending against malicious Android applications has been to analyze them to detect potential information leaks. This paper describes a new static taint analysis for Android that combines and augments the FlowDroid and Epicc analyses to precisely track both inter-component and intra-component data flow in a set of Android applications. The analysis takes place in two phases: given a set of applications, we first determine the data flows enabled individually by each application, and the conditions under which these are possible; we then build on these results to enumerate the potentially dangerous data flows enabled by the set of applications as a whole. This paper describes our analysis method, implementation, and experimental results.", "Today's smartphone operating systems frequently fail to provide users with adequate control over and visibility into how third-party applications use their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid provides realtime analysis by leveraging Android's virtualized execution environment. TaintDroid incurs only 14 performance overhead on a CPU-bound micro-benchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, we found 68 instances of potential misuse of users' private information across 20 applications. Monitoring sensitive data with TaintDroid provides informed use of third-party applications for phone users and valuable input for smartphone security service firms seeking to identify misbehaving applications.", "When multiple apps on an Android platform interact, faults and security vulnerabilities can occur. Software engineers need to be able to analyze interacting apps to detect such problems. Current approaches for performing such analyses, however, do not scale to the numbers of apps that may need to be considered, and thus, are impractical for application to real-world scenarios. In this paper, we introduce J itana , a program analysis framework designed to analyze multiple Android apps simultaneously. By using a classloader-based approach instead of a compiler-based approach such as S oot , J itana is able to simultaneously analyze large numbers of interacting apps, perform on-demand analysis of large libraries, and effectively analyze dynamically generated code. Empirical studies of J itana show that it is substantially more efficient than a state-of-the-art approach, and that it can effectively and efficiently analyze complex apps including Facebook, Pokemon Go, and Pandora that the state-of-the-art approach cannot handle.", "Many threats present in smartphones are the result of interactions between application components, not just artifacts of single components. 
However, current techniques for identifying inter-application communication are ad hoc and do not scale to large numbers of applications. In this paper, we reduce the discovery of inter-component communication (ICC) in smartphones to an instance of the Interprocedural Distributive Environment (IDE) problem, and develop a sound static analysis technique targeted to the Android platform. We apply this analysis to 1,200 applications selected from the Play store and characterize the locations and substance of their ICC. Experiments show that full specifications for ICC can be identified for over 93 of ICC locations for the applications studied. Further the analysis scales well; analysis of each application took on average 113 seconds to complete. Epicc, the resulting tool, finds ICC vulnerabilities with far fewer false positives than the next best tool. In this way, we develop a scalable vehicle to extend current security analysis to entire collections of applications as well as the interfaces they export.", "We propose a new approach to conduct static analysis for security vetting of Android apps, and built a general framework, called Amandroid for determining points-to information for all objects in an Android app in a flow- and context-sensitive way across Android apps components. We show that: (a) this type of comprehensive analysis is completely feasible in terms of computing resources needed with modern hardware, (b) one can easily leverage the results from this general analysis to build various types of specialized security analyses -- in many cases the amount of additional coding needed is around 100 lines of code, and (c) the result of those specialized analyses leveraging Amandroid is at least on par and often exceeds prior works designed for the specific problems, which we demonstrate by comparing Amandroid's results with those of prior works whenever we can obtain the executable of those tools. Since Amandroid's analysis directly handles inter-component control and data flows, it can be used to address security problems that result from interactions among multiple components from either the same or different apps. Amandroid's analysis is sound in that it can provide assurance of the absence of the specified security problems in an app with well-specified and reasonable assumptions on Android runtime system and its library.", "Shake Them All is a popular \"Wallpaper\" application exceeding millions of downloads on Google Play. At installation, this application is given permission to (1) access the Internet (for updating wallpapers) and (2) use the device microphone (to change background following noise changes). With these permissions, the application could silently record user conversations and upload them remotely. To give more confidence about how Shake Them All actually processes what it records, it is necessary to build a precise analysis tool that tracks the flow of any sensitive data from its source point to any sink, especially if those are in different components. Since Android applications may leak private data carelessly or maliciously, we propose IccTA, a static taint analyzer to detect privacy leaks among components in Android applications. IccTA goes beyond state-of-the-art approaches by supporting inter-component detection. By propagating context information among components, IccTA improves the precision of the analysis. IccTA outperforms existing tools on two benchmarks for ICC-leak detectors: DroidBench and ICC-Bench. 
Moreover, our approach detects 534 ICC leaks in 108 apps from MalGenome and 2,395 ICC leaks in 337 apps in a set of 15,000 Google Play apps.", "Many program analyses require statically inferring the possible values of composite types. However, current approaches either do not account for correlations between object fields or do so in an ad hoc manner. In this paper, we introduce the problem of composite constant propagation. We develop the first generic solver that infers all possible values of complex objects in an interprocedural, flow and context-sensitive manner, taking field correlations into account. Composite constant propagation problems are specified using COAL, a declarative language. We apply our COAL solver to the problem of inferring Android Inter-Component Communication (ICC) values, which is required to understand how the components of Android applications interact. Using COAL, we model ICC objects in Android more thoroughly than the state-of-the-art. We compute ICC values for 460 applications from the Play store. The ICC values we infer are substantially more precise than previous work. The analysis is efficient, taking slightly over two minutes per application on average. While this work can be used as the basis for many whole-program analyses of Android applications, the COAL solver can also be used to infer the values of composite objects in many other contexts." ], "cite_N": [ "@cite_22", "@cite_7", "@cite_8", "@cite_3", "@cite_24", "@cite_5", "@cite_15", "@cite_20" ], "mid": [ "1972796262", "2113115074", "2101799521", "2618025997", "1630356589", "2027538101", "2077202047", "1986480799" ] }
AnFlo: Detecting Anomalous Sensitive Information Flows in Android Apps
Android applications (apps) are often granted access to users' privacy- and security-sensitive information such as GPS position, phone contacts, camera, microphone, training logs, and heart rate. Apps need such sensitive data to implement their functionalities and provide rich user experiences. For instance, accurate GPS position is needed to navigate users to their destinations, phone contacts are needed to implement messaging and chat functionalities, and heart rate frequency is important to accurately monitor training improvements. Often, to provide their services, apps may also need to exchange data with other apps on the same smartphone or externally with a remote server. For instance, a camera app may share a picture with a multimedia messaging app for sending it to a friend. The messaging app, in turn, may send the full contact list from the phone directory to a remote server in order to identify which contacts are registered to the messaging service, so that they can be shown as possible destinations. As such, sensitive information may legitimately be propagated via message exchanges among apps or to remote servers. On the other hand, sensitive information might be exposed unintentionally by defective/vulnerable apps or intentionally by malicious apps (malware), which threatens the security and privacy of end users. Existing literature on information leakage in smartphone apps tends to overlook the difference between legitimate data flows and illegitimate ones. Whenever an information flow from a sensitive source to a sensitive sink is detected, either statically [23], [20], [19,15,3], [22], [17], [12] or dynamically [8], it is reported as potentially problematic. In this paper, we address the problem of detecting anomalous information flows with improved accuracy, by classifying information flows as either normal or anomalous according to a reference information flow model. More specifically, we build a model of sensitive information flows based on the following features: • Data source: the provenance of the sensitive data that is being propagated; • Data sink: the destination to which the data flows; and • App topic: the functionalities of the app as declared in its description. The data source and data sink features reflect information flows from sensitive sources to sinks and summarize how sensitive data is handled by an app. However, these features alone are not expressive enough to build an accurate model, because distinct apps might have very different functionalities. What is legitimate for a particular set of apps (e.g., sharing contacts for a messaging app) can be a malicious behavior for other apps (e.g., a piece of malware that steals contacts, to be later used by spammers). An accurate model should therefore also take into consideration the main functionalities declared by an app (in our case, the App topic).
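To make these three features concrete, the following minimal Python sketch shows one way an app's sensitive-flow profile could be represented. This is purely an illustration, not the authors' implementation; the class, field names, and example values are invented for the purpose.

from dataclasses import dataclass, field
from typing import List, Tuple

# A sensitive flow is a (data source group, data sink group) pair, e.g. ("GPS", "Internet").
Flow = Tuple[str, str]

@dataclass
class AppProfile:
    """Illustrative container for the three features used to build the model."""
    name: str                                         # app identifier (e.g., package name)
    dominant_topic: str                               # App topic inferred from the description
    flows: List[Flow] = field(default_factory=list)   # (source -> sink) pairs from taint analysis

# Hypothetical example: a messaging app that legitimately sends contacts to its servers.
messenger = AppProfile(name="com.example.messenger",
                       dominant_topic="Communication",
                       flows=[("Contacts", "Internet")])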
One should classify an app as anomalous only when it exhibits sensitive information flows that are not consistent with its declared functionalities. This characteristic, which makes an app anomalous, is captured by the App topic feature. In summary, our approach focuses on detecting apps whose information flows are anomalous compared to other apps with similar functionalities. Such an approach would be useful for various stakeholders. For example, market owners (e.g., Google) can focus more complex and expensive security analyses only on those apps that are reported as anomalous, before publishing them. If such information were available to end users, they could also make an informed decision on whether or not to install an anomalous app: for example, at installation time a warning could state that this particular app sends contact information through the Internet differently from other apps with similar functionalities (as demonstrated on the tool website). In the context of BYOD (bring your own device), where employees use their own devices to connect to the secure corporate network, a security analyst could use this approach to focus manual analysis on those anomalous flows that might compromise the confidentiality of corporate data stored on the devices. The specific contributions of this paper are: • An automated, fast approach for detecting anomalous flows of sensitive information in Android apps through a seamless combination of static analysis, natural language processing, model inference, and classification techniques; • The implementation of the proposed approach in a tool called AnFlo, which is publicly available (tool and dataset at http://selab.fbk.eu/anflo/); and • An extensive empirical evaluation of our approach based on 596 subject apps, which assesses the accuracy and runtime performance of anomalous information flow detection. We detected 2 previously-unknown vulnerable apps related to anomalous flows. We also analyzed 18 malware apps and found anomalies in 6 of them. The rest of the paper is organized as follows. Section 2 motivates this work. Section 3 compares our work with the literature. Section 4 first gives an overview of our approach and then explains its steps in detail. Section 5 evaluates our approach. Section 6 concludes the paper. MOTIVATION To implement their services, apps may access sensitive data. It is important that application code handling such data follows secure coding guidelines to protect user privacy and security. However, fast time-to-market pressure often pushes developers to implement data-handling code quickly, without considering security implications, and to release apps without proper testing. As a result, apps might contain defects that leak sensitive data unintentionally. They may also contain security vulnerabilities, such as permission re-delegation vulnerabilities [9], which could be exploited by malicious apps installed on the same device to steal sensitive data. Sensitive data could also be intentionally misused by malicious apps. Malicious apps such as malware and spyware often implement hidden functionalities not declared in their functional descriptions. For example, a malicious app may declare only entertainment features (e.g., games) in its description, while it steals user data or subscribes to paid services without the knowledge and consent of the user.
Defective, vulnerable, and malicious apps all share the same pattern: they (either intentionally or unintentionally) deal with sensitive data in an anomalous way, i.e., they handle sensitive data differently from other apps that declare similar functionalities. Therefore, novel approaches should focus on detecting anomalies in sensitive data flows, caused by mismatches between expected flows (observed in benign and correct apps) and the actual data flows observed in the app under analysis. However, the comparison should only be against similar apps that offer similar functionalities. For instance, messaging apps are expected to read information from the phone contact list, but they are not expected to use the GPS position. These observations motivate our proposed approach. ANOMALOUS INFORMATION FLOW DETECTION Overview The overview of our approach is shown in Figure 1. It has two main phases - Learning and Classification. The input to the learning phase is a set of apps that are trusted to be benign and correct in the way sensitive data is handled (we shall denote them as trusted apps). It has two sub-steps - feature extraction and model inference. In the feature extraction step, (i) topics that best characterize the trusted apps are inferred using natural language processing (NLP) techniques and (ii) information flows from sensitive sources to sinks in the trusted apps are identified using static taint analysis. In the model inference step, we build a Sensitive Information Flow Model that characterizes the information flows associated with each topic. These models and a given app under analysis (we shall denote it as AUA) are the inputs to the classification phase. In this phase, the dominant topic of the AUA is first identified to determine the relevant sensitive information flow model. Then, if the AUA contains any information flow that violates that model, i.e., is not consistent with the common flows characterized by the model, it is flagged as anomalous. Otherwise, it is flagged as normal. We implemented this approach in our tool AnFlo to automate the detection of anomalous information flows. However, a security analyst is required to further inspect the anomalous flows and determine whether or not they could actually lead to serious vulnerabilities such as information leakage issues. Topics discovery. Topics representative of a given preprocessed app description are identified using the Latent Dirichlet Allocation (LDA) technique [6], as implemented in a tool called Mallet [18]. LDA is a generative statistical model that represents a collection of text as a mixture of topics with certain probabilities, where each word appearing in the text is attributable to one of the topics. The output of LDA is a list of topics, each with its corresponding probability. The topic with the highest probability is labeled as the dominant topic for its associated app. To illustrate, Figure 2 shows the functional description of an app called BestTravel, and the resulting output after performing pre-processing and topic discovery on the description. "Travel" is the dominant topic, the one with the highest probability of 70%. The topics "Communication", "Finance", and "Photography" have probabilities of 15%, 10%, and 5%, respectively.
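The topic-discovery step can be sketched in a few lines. The paper uses the LDA implementation in Mallet; the sketch below uses scikit-learn's LatentDirichletAllocation instead, purely as an illustration, and the tiny corpus and number of topics are invented values.

# Illustrative sketch of topic discovery on app descriptions (not the authors' pipeline:
# they use Mallet; here scikit-learn is used, and the corpus and n_components are made up).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

descriptions = [
    "find restaurants hotels and flights for your next trip",    # travel-like description
    "chat and share photos with your friends and contacts",      # communication-like description
    "track expenses budgets and bank accounts in one place",     # finance-like description
]

vectorizer = CountVectorizer(stop_words="english")               # basic pre-processing
X = vectorizer.fit_transform(descriptions)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(X)        # one row of topic probabilities per description

# The dominant topic of an app is simply the most probable topic for its description.
dominant = doc_topics.argmax(axis=1)

# For a new app under analysis, the same fitted model is applied to its description:
# lda.transform(vectorizer.transform(["book cheap flights and hotels"]))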
Figure 2: Example of app description and topic analysis result. The description of the BestTravel example reads: "The ultimate and most convenient way of traveling. Use BestTravel while on the move, to find restaurants (including pictures and prices), local transportation schedules, ATM machines and much more." Note that we did not consider Google Play categories as topics, even though apps are grouped under those categories in Google Play. This is because recent studies [1,10] have reported that NLP-based topic analysis on app descriptions produces more cohesive clusters of apps than grouping apps by Google Play category. Static Analysis Sensitive information flows in the trusted apps are extracted using static taint analysis. Taint analysis is a flow-analysis technique that tags program data with labels describing its provenance and propagates these tags through control and data dependencies. A different label is used for each distinct source of data. Tags are propagated from the operand(s) on the right-hand side of an assignment (uses) to the variable assigned on the left-hand side (definition). The output of taint analysis is a set of information flows, i.e., which data of which provenance (sources) is accessed at which program operations, e.g., on channels that may leak sensitive information (sinks). Our analysis focuses on the flows of sensitive information into sensitive program operations: the taint analysis generates tags at API calls that read sensitive information (e.g., GPS and phone contacts) and traces the propagation of these tags into API calls that perform sensitive operations, such as sending messages or Bluetooth packets. These sensitive APIs usually belong to the dangerous permission group; hence, the APIs that we analyze are the privileged APIs that must be explicitly granted by the end user. Sources and sinks are the privileged APIs available from PScout [4]. The APIs that we analyze also include those that implement the Inter Process Communication (IPC) mechanism of Android, because they can be used to exchange data among apps installed on the same device. As a result, our taint analysis generates a list of (source → sink) pairs, where each pair represents the flow of sensitive data originating from a source into a sink. APIs (both sources and sinks) are grouped according to the special permission required to run them. For example, all network-related sink functions, such as openConnection(), connect() and getContent(), are modeled as Internet sinks, because they all require the INTERNET permission to be executed. Figure 3 shows the static taint analysis result on the BestTravel running example from Figure 2: it contains two (source → sink) pairs, GPS → Internet and Contacts → SMS. In the first flow, data read from the GPS is propagated through the program until it reaches a statement where it is sent over the network. In the second flow, data from the phone contacts is used to compose a text message. Our tool, AnFlo, runs on the compiled byte-code of apps to perform the above static taint analysis. It relies on two existing tools, IC3 [19] and IccTA [15]. Android apps are usually composed of several components; therefore, to precisely extract inter-component information flows, we need to analyze the links among components. AnFlo uses IC3 to resolve the target components when a flow is inter-component.
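To illustrate the abstraction of raw taint-analysis results into permission-level (source → sink) pairs described above, here is a small Python sketch. The API-to-group mappings are a tiny hand-picked subset shown only for illustration; in the actual approach the source and sink lists come from PScout, and this is not the authors' code.

# Illustrative post-processing of taint-analysis output: raw (source API, sink API) pairs
# are abstracted into permission-level groups, e.g. all network sinks become "Internet".
SOURCE_GROUPS = {
    "android.location.LocationManager.getLastKnownLocation": "GPS",
    "android.provider.ContactsContract": "Contacts",
    "android.net.wifi.WifiManager.getConnectionInfo": "WiFi configuration",
}
SINK_GROUPS = {
    "java.net.URL.openConnection": "Internet",
    "java.net.URLConnection.connect": "Internet",
    "android.telephony.SmsManager.sendTextMessage": "SMS",
    "android.bluetooth.BluetoothSocket.getOutputStream": "Bluetooth",
}

def abstract_flows(raw_flows):
    """Turn raw (source API, sink API) pairs into permission-level (source -> sink) pairs."""
    abstracted = set()
    for src_api, sink_api in raw_flows:
        src = SOURCE_GROUPS.get(src_api)
        sink = SINK_GROUPS.get(sink_api)
        if src and sink:                     # keep only privileged source/sink pairs
            abstracted.add((src, sink))
    return sorted(abstracted)

# The BestTravel example of Figure 3:
raw = [
    ("android.location.LocationManager.getLastKnownLocation", "java.net.URL.openConnection"),
    ("android.provider.ContactsContract", "android.telephony.SmsManager.sendTextMessage"),
]
print(abstract_flows(raw))   # [('Contacts', 'SMS'), ('GPS', 'Internet')]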
IC3 uses a solver to infer all possible values of complex objects in an inter-procedural, flow- and context-sensitive manner. Once inter-component links are inferred, AnFlo uses an inter-component data-flow analysis tool called IccTA to perform the static taint analysis. We customized IccTA to produce flows in the format presented in Figure 3 (App: BestTravel; GPS → Internet; Contacts → SMS) and paths in a more verbose format to facilitate manual checks. Model Inference When the results of topic analysis and of static analysis are available for all the trusted apps, they are used to build the Sensitive Information Flow Model. Such a model is a matrix with sensitive information sources in its rows and sinks in its columns, as shown in Figure 4. First, apps with the same dominant topic are grouped together, to build a sensitive information flow model corresponding to that specific topic; each group is labeled with the dominant topic. Next, each cell of the matrix is filled with a number representing how many apps in the group contain the corresponding (source → sink) pair. Figure 4 shows a sample sensitive information flow model for the topic "Travel". There are 36 distinct flows in the apps grouped under this dominant topic. The matrix shows that ten apps contain GPS position flowing to the Internet (one of them being the BestTravel app, see Figure 3), eight apps to text messages, and three apps to Bluetooth. Similarly, the matrix shows that contact information flows to SMS in seven apps and to Bluetooth in eight apps. From this model, we can observe that for Travel apps it is quite common to share the user's position via the Internet and SMS. However, it is quite uncommon to share the position via Bluetooth, since this happens in only three cases. Likewise, phone contacts are commonly shared through text messages and Bluetooth, but not through the Internet. To provide a formal and operative definition of common and uncommon flows, we compute a threshold denoted as τ. Flows that occur in at least τ apps are considered common; flows that never occur, or that occur in fewer than τ apps, are considered uncommon for this topic. Although our model assumes that the trusted apps are benign and correct, it is possible that some of them contain defects, vulnerabilities, or malware. This problem is addressed by classifying flows occurring in fewer than τ apps as uncommon: our approach tolerates the presence of some anomalous flows in the reference model, since these flows are still regarded as uncommon. Hence, our approach works as long as the majority of the trusted apps are truly trustworthy. To compute this threshold, we adopt the box-plot approach proposed by Laurikkala et al. [13], considering only flows occurring in the model, i.e., only cell values greater than zero. τ is computed in the same way as the lower fence used to draw outlier dots in boxplots: it is the lower quartile (25th percentile) minus a step, where the step is 1.5 times the difference between the upper quartile (75th percentile) and the lower quartile, i.e., τ = Q1 - 1.5 × (Q3 - Q1). It should be noted that τ is not simply the lower quartile, otherwise 25% of the apps would be outliers by construction; the threshold is lower, namely the lower quartile minus the step. Therefore, there is no fixed number of outliers: they can be few or many, depending on the distribution of the data.
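The model-inference step lends itself to a compact sketch. The following Python fragment (illustrative, not the authors' implementation) builds the per-topic source-sink counts and computes the box-plot threshold. The exact quartile convention is not specified in the paper, so NumPy's default interpolation is assumed here; for the Travel example of Figure 4 it yields a fence of 5.5 rather than the reported τ = 7, but it produces the same common/uncommon split.

import numpy as np
from collections import Counter, defaultdict

def build_models(profiles):
    """profiles: iterable of (dominant_topic, list_of_flows); returns per-topic flow counts."""
    models = defaultdict(Counter)
    for topic, flows in profiles:
        for flow in set(flows):          # count each (source -> sink) pair at most once per app
            models[topic][flow] += 1
    return models

def threshold(model):
    """Box-plot lower fence over the non-zero cells: Q1 - 1.5 * (Q3 - Q1)."""
    values = np.array([v for v in model.values() if v > 0])
    q1, q3 = np.percentile(values, [25, 75])   # quartile convention is an assumption here
    return q1 - 1.5 * (q3 - q1)

# Non-zero cells of the "Travel" matrix in Figure 4.
travel = Counter({("GPS", "Internet"): 10, ("GPS", "SMS"): 8, ("GPS", "Bluetooth"): 3,
                  ("Contacts", "SMS"): 7, ("Contacts", "Bluetooth"): 8})

tau = threshold(travel)   # 5.5 with NumPy's default quartiles (the paper reports tau = 7)
# Either fence gives the same split: GPS -> Bluetooth (3 apps) is uncommon,
# while the other four flows are common for the "Travel" topic.
common = {flow for flow, count in travel.items() if count >= tau}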
Outliers would only be those cases that are really different from the majority of the training data points. In the example regarding topic "Travel" in Figure 4, the threshold is computed considering only the five values that are > 0. The value for the threshold is τ T ravel = 7. It means that GPS data sent through Internet (GPS → Internet) or text messages (GPS → SMS) are common for traveling apps. Conversely, even though there are three trusted apps which send GPS data through Bluetooth (GPS → Bluetooth), there are too few cases to be considered common, and this sensitive information flow will be considered uncommon in the model. Likewise, phone contacts are commonly sent through text messages and Bluetooth, but it is uncommon for them to be sent through the Internet, since this never occurs in the trusted apps. Classification After the Sensitive Information Flow Models are built on trusted apps, they can be used to classify a new AUA. First of all, features must be extracted from the AUA. The features are the topics associated with the app description and the sensitive information flows in the app. As in Section 4.2.1, first data pre-processing is performed on the app description of the AUA. Then, topics and their probabilities are inferred from the pre-processed description using the Mallet tool. Among all the topics, we consider only the dominant topic, the one with the highest probability, because it is the topic that most characterizes this app. We then obtain the Sensitive Information Flow Model associated with this dominant topic. To ensure the availability of the Sensitive Information Flow Model, the Mallet tool is configured with the list of topics for which the Models are already built on the trusted apps. And given an app description, the Mallet tool only generates topics from this list. The more diverse trusted apps we analyze, the more complete list of models we expect to build. For example, Figure 5(a) shows the topics inferred from the description of a sample AUA "TripOrganizer". The topic "Travel" is highlighted in bold to denote that it is the dominant topic. Next, sensitive information flows in the AUA are extracted as described in Section 4.2.2. The extracted flows are then compared against the flows in the model associated with the dominant topic. If the AUA contains only flows that are common according to the model, the app is considered consistent with the model. If the app contains a flow that is not present in the model or a flow that is present but is uncommon according to the model, the flow and thus, the app is classified as anomalous. Anomalous flows require further manual inspection by a security analyst, because they could be due to defects, vulnerabilities, or malicious intentions. For example, Figure 5(b) shows three sensitive information flows extracted from "TripOrganizer" app. Since the dominant topic for this app is "Travel", these flows can be checked against the model associated with this topic shown in Figure 4. Regarding this model, earlier, we computed that the threshold is τ T ravel = 7 and the flow (Contacts → SMS) is common (see Section 4.3). Therefore, flow 1 ob-served in "TripOrganizer" (Figure 5(b)) is consistent with the model. However, flow 2 (Contacts → Internet) and flow 3 (GPS → Bluetooth), highlighted in bold in Figure 5(b), are uncommon according to the model. As a result, the AUA "TripOrganizer" is classified as anomalous. EMPIRICAL ASSESSMENT In this section, we evaluate the usefulness of our approach and report the results. 
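As a recap of the classification step described above, the following short sketch (illustrative Python, not the authors' code) checks the flows of an AUA against the model of its dominant topic. The topic probabilities for TripOrganizer are invented values with "Travel" dominant, as in Figure 5(a); the three flows are the ones listed in Figure 5(b), and the Travel model and fence come from the previous sketch.

from collections import Counter

def classify(aua_flows, topic_probabilities, models, thresholds):
    """Return the dominant topic and the flows of the AUA that are uncommon for that topic."""
    dominant = max(topic_probabilities, key=topic_probabilities.get)
    model, tau = models[dominant], thresholds[dominant]
    anomalous = [flow for flow in aua_flows if model.get(flow, 0) < tau]
    return dominant, anomalous        # a non-empty list means the app is flagged as anomalous

# Travel model of Figure 4 and the fence computed in the previous sketch.
travel = Counter({("GPS", "Internet"): 10, ("GPS", "SMS"): 8, ("GPS", "Bluetooth"): 3,
                  ("Contacts", "SMS"): 7, ("Contacts", "Bluetooth"): 8})

# TripOrganizer example: probabilities are illustrative, flows are those of Figure 5(b).
topics = {"Travel": 0.60, "Communication": 0.25, "Photography": 0.15}
flows = [("Contacts", "SMS"), ("Contacts", "Internet"), ("GPS", "Bluetooth")]

print(classify(flows, topics, {"Travel": travel}, {"Travel": 5.5}))
# -> ('Travel', [('Contacts', 'Internet'), ('GPS', 'Bluetooth')]):
#    Contacts -> SMS is common, the other two flows are anomalous, so the app is anomalous.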
We assess our approach by answering the following research questions: • RQVul: Is AnFlo useful for identifying vulnerable apps containing anomalous information flows? • RQTime: How long does AnFlo take to classify apps? • RQTopics: Is the topic feature really needed to detect anomalous flows? • RQCat: Can app-store categories be used instead of topics to learn an accurate Sensitive Information Flow Model? • RQMal: Is AnFlo useful for identifying malicious apps? The first research question, RQVul, investigates whether the results of AnFlo are useful for detecting anomalies in vulnerable apps that, for example, may leak sensitive information. RQTime investigates the cost of using our approach in terms of the time taken to analyze a given AUA; a short analysis time is essential for tool adoption in a real production environment. The next two research questions investigate the role of topics as a feature for building the Sensitive Information Flow Models. RQTopics investigates the absolute contribution of topics, by learning the Sensitive Information Flow Model without considering topics and comparing its performance with that of our original model. To answer RQCat, we replace topics with the categories defined in the official market and compare the performance of this new model with that of our original model. Finally, the last research question, RQMal, investigates the usefulness of AnFlo in detecting malware based on anomalies in sensitive information flows. Benchmarks and Experimental Settings Trusted Apps AnFlo needs a set of trusted apps to learn the normal behavior of "correct and benign" apps. We defined the following guidelines to collect trusted apps: (i) apps must come from the official Google Play Store (so they are scrutinized and checked by the store maintainer) and (ii) apps must be very popular (so they are widely used and reviewed by a large community of end users, and programming mistakes are quickly reported and patched). At the time of crawling, the Google Play Store had 30 different app categories. From each category, we downloaded, on average, the top 500 apps together with their descriptions. We then discarded apps with non-English descriptions and those with very short descriptions (fewer than 10 words). Eventually, we were left with 11,796 apps for building the reference models. Additionally, we checked whether these apps were actively maintained by looking at the date of their last update: 70% of the apps had been updated within the 6 months before the Play Store was crawled, and 32% within the same month as the crawling. This supports the claim that the trusted apps are well maintained. The fact that the trusted apps we use are suggested and endorsed by the official store, and that they collected good end-user feedback, allows us to assume that the apps are of high quality and do not contain many security problems. Nevertheless, as explained in Section 4.3, our approach is robust against the inclusion of a small number of anomalous apps in the training set, since we adopt a threshold to classify anomalous information flows. Subject Benign Apps AnFlo works on compiled apps; therefore, the availability of source code is not a requirement for the analysis. However, for the sake of this experiment, we opted for open-source projects, which enable us to inspect the source code and establish the ground truth.
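A sketch of the description filter applied when collecting apps (and reused for the AUAs below) is shown here. The langdetect package and the example package names are assumptions introduced only for illustration; the paper does not name the language-detection tool it used.

# Keep only apps whose description is in English and at least 10 words long.
# langdetect is an assumed choice; the paper does not say how language was detected.
from langdetect.lang_detect_exception import LangDetectException
from langdetect import detect

MIN_WORDS = 10

def keep_description(description: str) -> bool:
    if len(description.split()) < MIN_WORDS:
        return False
    try:
        return detect(description) == "en"
    except LangDetectException:        # empty or otherwise undetectable text
        return False

# Hypothetical crawled descriptions (package names are invented).
descriptions = {
    "com.example.navigator": "Turn by turn navigation with offline maps, live traffic and speed warnings.",
    "com.example.kurznotiz": "Schnelle Notizen fuer unterwegs.",
}
kept = {pkg: text for pkg, text in descriptions.items() if keep_description(text)}
# Only com.example.navigator passes: the second description is too short and not in English.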
The F-Droid repository 6 represents an ideal setting for our experimentation because (i) it includes real world apps that are also popular in the Google Play Store, and (ii) apps can be downloaded with their source code for manual verification of the vulnerability reports delivered by AnFlo. The F-Droid repository was crawled in July 2017 for apps that meet our criteria. Among all the apps available in this repository, we used only those apps that are also available in the Google Play Store, whose descriptions meet our selection criteria (i.e., description is in English and it is longer than 10 words). Eventually, our experimental set of benign apps consists of 596 AUAs. Subject Malicious Apps To investigate if AnFlo can identify malware, we need a set of malicious apps with their declared functional descriptions. Malicious apps are usually repackaged versions of popular (benign) apps, injected with malicious code (Trojanized); hence the descriptions of those popular apps they disguise as can be considered as their app descriptions. Hence, by identifying the original versions of these malicious apps in the Google Play Store, we obtain their declared functional descriptions. We consider the malicious apps from the Drebin malware dataset [2], which consists of 5,560 samples that have been collected in the period of August 2010 to October 2012. We randomly sampled 560 apps from this dataset. For each malicious app, we performed static analysis to extract the package name, an identifier used by Android and by the official store to distinguish Android apps 7 . We queried the official Google Play market for the original apps, by searching for those having the same package name. Among our sampled repackaged malicious apps, we found 20 of the apps in the official market with the same package name. We analyzed their descriptions and found that only 18 of them have English descriptions. We therefore performed static taint analysis on these 18 malware samples, for which we found their "host" apps in the official market. Our static analysis crashed on 6 cases. Therefore, our experimental set of malicious apps consists of 12 AUAs. Results Detecting Vulnerable Apps Firstly, AnFlo was used to perform static taint analysis on the 11,796 trusted apps and topic analysis on their descriptions from the official Play Store. It then learns the Sensitive Information Flow Models based on the dominant topics and extracted flows as described in Section 4.3. Then, the AUAs from the F-Droid repository (Section 5.1.2) have been classified based on the Sensitive Information Flow Models. Out of 596 AUAs, static taint analysis reported 76 apps to contain flows of sensitive information that reach sinks, for a total of 1428 flows. These flows map to 147 distinct source-sink pairs. Out of these 76 apps, 14 AUAs are classified as anomalous. Table 1 shows the analysis results reported by AnFlo. The first column presents the name of the app. The second column presents the app's dominant topic. The third and fourth columns present the source of sensitive data and the sink identified by static taint analysis, respectively. As shown in Table 1, in total AnFlo reported 25 anomalous flows in these apps. We manually inspected the source code available from the repository to determine if these anomalous flows were due to programming defects or vulnerabilities. Two apps are found to be vulnerable (highlighted in boldface in Table 1), they are com.matoski. adbm and com.mschlauch.comfortreader. 
com.matoski.adbm is a utility app for managing the ADB debugging interface. The anomalous flow involves data from the WiFi configuration that leaks to other apps through Inter Process Communication. Among the information that may leak, the SSID, which identifies the network to which the device is connected, can be used to infer the user's position and thus threatens end-user privacy. Hence, this programming defect leads to an information leakage vulnerability that requires corrective maintenance. We reported this vulnerability to the app owners on their issue tracker. com.mschlauch.comfortreader is a book reader app, with an anomalous flow of data from IPC to the Internet. Manual inspection revealed that this anomalous flow results from a permission re-delegation vulnerability, because data coming from another app is used, without sanitization, to open a data stream. If a malicious app that does not have the permission to use the Internet passes a URL that contains sensitive privacy data (e.g., GPS coordinates), then this app could be used to leak that information. We reported this vulnerability to the app developers. (Footnote 7: even if it is easy to obfuscate the package name, in our experiment some apps did not rename it.) Regarding the other 12 AUAs, even though they contain anomalous flows compared to the trusted apps, manual inspection revealed that they are neither defective nor vulnerable. For example, some apps contain anomalous flows that involve IPC. Since data may come from other apps via IPC (source) or may flow to other apps via IPC (sink), such flows are considered dangerous in general. However, in these 12 apps, when IPC is a source (e.g., in com.alfray.timeriffic), the data is either validated/sanitized before being used in the sink or used in a way that does not threaten security. On the other hand, when IPC is a sink (e.g., in com.dozingcatsoftware.asciicam), the destination is always a component in the same app, so the flows are not actually dangerous. Since AnFlo helped us detect 2 vulnerable apps containing anomalous information flows, we can answer RQVul by stating that AnFlo is useful for identifying vulnerabilities related to anomalous information flows. Classification Time To investigate RQTime, we analyze the time required to classify the AUAs. We instrumented the analysis script with the Linux date utility to log the time (in seconds) before starting the analysis and at its conclusion; the difference is the amount of time spent in the computation. The experiment was run on a multi-core cluster, specifically designed to let a process run without sharing memory or computing resources with other processes, so we assume that the time measurement is reliable. Classification time includes the static analysis step to extract data flows, the natural language processing step to extract topics from the description, and the comparison with the Sensitive Information Flow Model to check for consistency. Figure 6 reports the boxplot of the time (in minutes) needed to classify the F-Droid apps, together with descriptive statistics. On average, an app takes 1.9 minutes to classify, and most of the analyses concluded in less than 3 minutes (median = 1.5). Only a few (outlier) cases require a longer analysis time. Topics from App Description We now run another experiment to verify our claim that topics are an important feature for building an accurate model (RQTopics).
We repeated the same experiment as before, but using only flows as features and without considering topics, to check how much detection accuracy we lose in this way. We still consider all the trusted apps for learning the reference model, but we only use static analysis data. That is, we do not create a separate matrix for each topic; instead we create one big single matrix with sources and sinks for all the apps. This Sensitive Information Flow Model is then used to classify F-Droid apps and the results are shown in Table 2. As we can see, only four apps are detected as anomalous by this second approach, and all of them were already detected by our original, proposed approach. Manual inspection revealed that all of them are not vulnerable. This suggests that topic is a very important feature to learn reference models in order to detect a larger amount of anomalous apps. In fact, when topics are not considered and all the apps are grouped together regardless of their topics, we observe a smoothing effect. Differences among apps become less relevant to detect anomalies. While in the previous model, an app was compared only against those apps grouped under the same topic. Here, an app is compared to all the trusted apps. Without topic as a feature, our model loses the ability to capture the characteristics of distinct groups and, thus, the ability to detect deviations from them. Play Store Categories To investigate RQCat, instead of grouping trusted apps based on topics, we group them according to their app categories as determined by the official Google Play Store. First of all we split trusted apps into groups based on the market category they belong to 8 . We then use static analysis information about flows to build a separate source-sink matrix per each category. Eventually we compute thresholds to complete the model. We then classify each AUA from F-Droid by comparing it with the model of the corresponding market category. The classification results are reported in Table 3. Ten apps are reported as containing anomalous flows and most of them were also detected by our original, proposed approach (Table 1). Two apps reported by this approach were not reported by our proposed approach, which are com.angrydoughnuts.android.alarmclock and com.futurice. android.reservator. However, they are neither the cases of vulnerabilities nor malicious behaviors. Only one flow detected by this approach is a case of vulnerability, namely com.matoski.adbm, highlighted in boldface, which was also detected by our proposed approach. Hence, this result supports our design decision of using topics. Table 4 summarizes the result of the models comparison. The first model (first row) considers both data flows and description topics as features. Even though this approach reported the largest number of false positives (12 apps, 'FP' column), we were able to detect 2 vulnerabilities ('Vuln.' column) by tracing the anomalies reported by this approach. It also detected 5 additional anomalous apps that other approaches did not detect ('Unique' column). Comparison of the Models The second model (second row) considers only data flows as a feature. Even though the number of false positives drops to 4, we were not able to detect any vulnerability by tracing the anomalies reported by this approach. This result suggests that modeling only flows is not enough for detecting vulnerabilities. 
When market categories are used instead of description topics (last row), the false positives drop to 9 (25% less compared to our proposed model). It detected 2 additional anomalous apps that other approaches did not detect ('Unique' column). Tracing the anomalies reported by this approach, we detected only one out of the two vulnerabilities that we detected using our proposed approach. This result suggests that topics are more useful than categories for detecting vulnerable apps containing anomalous information flows. Detecting Malicious Apps Anomalies in the flow of sensitive data could be due to malicious behaviors as well. The goal of this last experiment is to investigate whether AnFlo can be used to identify malware (RQ-Mal). To this aim, we use the Sensitive Information Flow Models (learned on the trusted apps) to classify the 18 AUAs from the Drebin malware dataset. Data flow features are extracted from these malicious apps using static analysis. However, static taint analysis crashed on 6 apps because of their heavy obfuscation. Since improving the static taint analysis implementation to work on heavily obfuscated code is out of the scope of this paper, we ran the experiment on the remaining 12 apps. Topics are extracted from the descriptions of the original versions of those malware samples, which are available on the official market store. The malicious apps have been subjected to anomaly detection based on three distinct feature sets: (i) flows and topics; (ii) only flows; and (iii) flows and market categories. The classification results are shown in Table 5. The first column reports the malware name (according to the ESET-NOD32 antivirus) and the second column contains the name of the original app that was repackaged to spread the malware. The remaining three columns report the results of malware detection by the three models based on the different sets of features: a tick mark ("✓") means that the model correctly detected the app as anomalous, while a cross ("✗") means no anomaly was detected. While the model based on topics and the model based on market categories classified the same 6 AUAs as malicious, the model based on only flows classified only 4 AUAs as malicious. All the malware except TrojanSMS.Agent are cases of privacy-sensitive information leaks, such as the device ID, phone number, e-mail or GPS coordinates being sent over the network or via SMS. One typical malicious behavior is observed in Spy.GoldDream. In this case, after querying the list of installed packages (sensitive data source), the malware attempts to kill selected background processes (sensitive sink). This is a typical malicious behavior observed in malware that tries to avoid detection by stopping security products such as antiviruses. Botnet behavior is observed in DroidKunFu. A command and control (C&C) server command is consulted (sensitive source) before privileged actions are performed on the device (sensitive sink). As shown in Table 5, when only static analysis features are used in the model, two malicious apps are missed. This is because this limited model compares the given AUA against all the trusted apps, instead of only the apps from a specific subset (grouped by the common topic or the same category). A flow that would have been anomalous for the specific topic (or the specific category) might be normal for another topic/category. For example, acquiring the GPS coordinates and sending them over the network is common for navigation or transportation apps.
However, it is not a common behavior for tools apps, which is the case of the Anserver malware. The remaining 6 apps in the dataset were consistently classified as not-anomalous by all the models. These false negatives are mainly due to the malicious behaviors not related to sensitive information flows, such as dialing calls in the background or blocking messages. Another reason is due to the obfuscation by malware to hide the sensitive information flows. Static analysis inherently cannot handle obfuscation. Limitation and Discussion In the following, we discuss some of the limitations of our approach and of its experimental validation. The most prominent limitation to adopt our approach is the availability of trusted apps to build the model of sensitive information flows. In our experimental validation, we trusted top ranked popular apps from the official app store, but we have no guarantee that they are all immune from vulnerabilities and from malware content. However, as explained in Section 4.3, our approach is quite robust with respect to the inclusion of a small number of defective, vulnerable, or malicious apps in the training set, as long as the majority of the training apps are benign and correct. This is because we use a threshold-based approach that models flows common to a large set of apps. Thus, vulnerable flows occurring on few training apps are not learnt as normal in the model and they would be classified as anomalous when observed in a given AUA. A flow classified as anomalous by our model needs further manual analysis to check if the anomaly is a vulnerability, a malicious behavior or is safe. Manual inspection could be an expensive task that might delay the delivery of the software product. However, in our experimental validation, manual filtering on the experimental result took quite short time, on average 30 minutes per app. Considering that the code of the app to review was new to us, we expect a shorter manual filtering phase for a developer who is quite familiar with the code of her/his app. All in all, manual effort required to manual filter results of the automated tool seems to be compatible with the fast time-to-market pressure of smart phone apps. When building sensitive information flow models, we also considered grouping of apps by using clustering technique based on the topics distribution, instead of grouping based on the dominant topic alone. But we conducted preliminary experiments using this method and observed that grouping of apps based on dominant topics produce more cohesive groups, i.e., apps that are more similar. Inherently, it is difficult for static analysis-based approaches including ours to handle obfuscated code. Therefore, if training apps are obfuscated (e.g., to limit reverse engineering attacks), our approach may collect incomplete static information and only build a partial model. And if the AUA is obfuscated, our approach may not detect the anomalies. As future work, we plan to incorporate our approach with dynamic analysis to deal with obfuscation. CONCLUSION In this paper, we proposed a novel approach to analyze the flows of sensitive information in Android apps. In our approach, trusted apps are first analyzed to extract topics from their descriptions and data flows from their code. Topics and flows are then used to learn Sensitive Information Flow models. We can use these models for analyzing new Android apps to determine whether they contain anomalous information flows. 
Our experiments show that this approach could detect anomalous flows in vulnerable and malicious apps quite fast.
6,667
1812.07894
2883454930
Smartphone apps usually have access to sensitive user data such as contacts, geo-location, and account credentials and they might share such data to external entities through the Internet or with other apps. Confidentiality of user data could be breached if there are anomalies in the way sensitive data is handled by an app which is vulnerable or malicious. Existing approaches that detect anomalous sensitive data flows have limitations in terms of accuracy because the definition of anomalous flows may differ for different apps with different functionalities; it is normal for "Health" apps to share heart rate information through the Internet but is anomalous for "Travel" apps. In this paper, we propose a novel approach to detect anomalous sensitive data flows in Android apps, with improved accuracy. To achieve this objective, we first group trusted apps according to the topics inferred from their functional descriptions. We then learn sensitive information flows with respect to each group of trusted apps. For a given app under analysis, anomalies are identified by comparing sensitive information flows in the app against those flows learned from trusted apps grouped under the same topic. In the evaluation, information flow is learned from 11,796 trusted apps. We then checked for anomalies in 596 new (benign) apps and identified 2 previously-unknown vulnerable apps related to anomalous flows. We also analyzed 18 malware apps and found anomalies in 6 of them.
Other closely related work is about detecting permission re-delegation vulnerabilities in apps. @cite_23 presented the permission re-delegation problem, and their approach detects it whenever there exists a path from a public entry point to a privileged API call. @cite_14 and @cite_12 also detect permission re-delegation vulnerabilities. However, as acknowledged by the respective authors, their approaches cannot differentiate between legitimate and illegitimate permission re-delegation behaviors.
{ "abstract": [ "Modern smartphone operating systems support the development of third-party applications with open system APIs. In addition to an open API, the Android operating system also provides a rich inter-application message passing system. This encourages inter-application collaboration and reduces developer burden by facilitating component reuse. Unfortunately, message passing is also an application attack surface. The content of messages can be sniffed, modified, stolen, or replaced, which can compromise user privacy. Also, a malicious application can inject forged or otherwise malicious messages, which can lead to breaches of user data and violate application security policies. We examine Android application interaction and identify security risks in application components. We provide a tool, ComDroid, that detects application communication vulnerabilities. ComDroid can be used by developers to analyze their own applications before release, by application reviewers to analyze applications in the Android Market, and by end users. We analyzed 20 applications with the help of ComDroid and found 34 exploitable vulnerabilities; 12 of the 20 applications have at least one vulnerability.", "An enormous number of apps have been developed for Android in recent years, making it one of the most popular mobile operating systems. However, the quality of the booming apps can be a concern [4]. Poorly engineered apps may contain security vulnerabilities that can severally undermine users' security and privacy. In this paper, we study a general category of vulnerabilities found in Android apps, namely the component hijacking vulnerabilities. Several types of previously reported app vulnerabilities, such as permission leakage, unauthorized data access, intent spoofing, and etc., belong to this category. We propose CHEX, a static analysis method to automatically vet Android apps for component hijacking vulnerabilities. Modeling these vulnerabilities from a data-flow analysis perspective, CHEX analyzes Android apps and detects possible hijack-enabling flows by conducting low-overhead reachability tests on customized system dependence graphs. To tackle analysis challenges imposed by Android's special programming paradigm, we employ a novel technique to discover component entry points in their completeness and introduce app splitting to model the asynchronous executions of multiple entry points in an app. We prototyped CHEX based on Dalysis, a generic static analysis framework that we built to support many types of analysis on Android app bytecode. We evaluated CHEX with 5,486 real Android apps and found 254 potential component hijacking vulnerabilities. The median execution time of CHEX on an app is 37.02 seconds, which is fast enough to be used in very high volume app vetting and testing scenarios.", "Modern browsers and smartphone operating systems treat applications as mutually untrusting, potentially malicious principals. Applications are (1) isolated except for explicit IPC or inter-application communication channels and (2) unprivileged by default, requiring user permission for additional privileges. Although inter-application communication supports useful collaboration, it also introduces the risk of permission redelegation. Permission re-delegation occurs when an application with permissions performs a privileged task for an application without permissions. This undermines the requirement that the user approve each application's access to privileged devices and data. 
We discuss permission re-delegation and demonstrate its risk by launching real-world attacks on Android system applications; several of the vulnerabilities have been confirmed as bugs. We discuss possible ways to address permission redelegation and present IPC Inspection, a new OS mechanism for defending against permission re-delegation. IPC Inspection prevents opportunities for permission redelegation by reducing an application's permissions after it receives communication from a less privileged application. We have implemented IPC Inspection for a browser and Android, and we show that it prevents the attacks we found in the Android system applications." ], "cite_N": [ "@cite_14", "@cite_12", "@cite_23" ], "mid": [ "1994588724", "1988036170", "1912565424" ] }
AnFlo: Detecting Anomalous Sensitive Information Flows in Android Apps
Android applications (apps) are often granted access to users' privacy- and security-sensitive information such as GPS position, phone contacts, camera, microphone, training log, and heart rate. Apps need such sensitive data to implement their functionalities and provide rich user experiences. For instance, an accurate GPS position is needed to navigate users to their destinations, phone contacts are needed to implement messaging and chat functionalities, and heart rate frequency is important to accurately monitor training improvements. Often, to provide services, apps may also need to exchange data with other apps in the same smartphone or externally with a remote server. For instance, a camera app may share a picture with a multimedia messaging app for sending it to a friend. The messaging app, in turn, may send the full contacts list from the phone directory to a remote server in order to identify which contacts are registered to the messaging service so that they can be shown as possible destinations. As such, sensitive information may legitimately be propagated via message exchanges among apps or to remote servers. On the other hand, sensitive information might be exposed unintentionally by defective/vulnerable apps or intentionally by malicious apps (malware), which threatens the security and privacy of end users. Existing literature on information leaks in smartphone apps tends to overlook the difference between legitimate data flows and illegitimate ones. Whenever an information flow from a sensitive source to a sensitive sink is detected, either statically [23], [20], [19,15,3], [22], [17], [12] or dynamically [8], it is reported as potentially problematic. In this paper, we address the problem of detecting anomalous information flows with improved accuracy by classifying cases of information flows as either normal or anomalous according to a reference information flow model. More specifically, we build a model of sensitive information flows based on the following features: • Data source: the provenance of the sensitive data that is being propagated; • Data sink: the destination where the data is flowing to; and • App topic: the declared functionalities of the app according to its description. Data source and data sink features are used to reflect information flows from sensitive sources to sinks and summarize how sensitive data is handled by an app. However, these features are not expressive enough to build an accurate model. In fact, distinct apps might have very different functionalities. What is considered legitimate for a particular set of apps (e.g., sharing contacts for a messaging app) can be considered a malicious behavior for other apps (e.g., a piece of malware that steals contacts, to be later used by spammers). An accurate model should also take into consideration the main functionalities that are declared by an app (in our case, the App topic).
One should classify an app as anomalous only when it exhibits sensitive information flows that are not consistent with its declared functionalities. This characteristic, which makes an app anomalous, is captured by the App topic feature. In summary, our approach focuses on detecting apps that are anomalous in terms of information flows compared to other apps with similar functionalities. Such an approach would be useful for various stakeholders. For example, market owners (e.g., Google) can focus on performing more complex and expensive security analysis only on those cases that are reported as anomalous, before publishing them. If such information is available to end users, they could also make informed decision of whether or not to install the anomalous app. For example, when the user installs an app, a warning stating that this particular app sends contact information through the Internet differently from other apps with similar functionalities (as demonstrated in the tool website). In the context of BYOD (bring your own device) where employees use their own device to connect to the secure corporate network, a security analyst might benefit from this approach to emphasis manual analysis on those anomalous flows that might compromise the confidentiality of corporate data stored in the devices. The specific contributions of this paper are: • An automated, fast approach for detecting anomalous flows of sensitive information in Android apps through a seamless combination of static analysis, natural language processing, model inference, and classification techniques; • The implementation of the proposed approach in a tool called AnFlo which is publicly available 1 ; and • An extensive empirical evaluation of our approach based on 596 subject apps, which assesses the accuracy and runtime performance of anomalous information flow detection. We detected 2 previous-unknown vulnerable apps related to anomalous flows. We also analyzed 18 malware apps and found anomalies in 6 of them. The rest of the paper is organized as follows. Section 2 motivates this work. Section 3 compares our work with literature. Section 4 first gives an overview of our approach and then explains the steps in details. Section 5 evaluates our approach. Section 6 concludes the paper. MOTIVATION To implement their services, apps may access sensitive data. It is important that application code handling such data follows secure coding guidelines to protect user privacy and security. However, fast time-to-market pressure often pushes developers to implement data handling code quickly without considering security implications and release apps without proper testing. As a result, apps might contain defects that leak sensitive data unintentionally. They may also contain security vulnerabilities such as permission redelegation vulnerabilities [9], which could be exploited by malicious apps installed on the same device to steal sensitive data. Sensitive data could also be intentionally misused by malicious apps. Malicious apps such as malware and spyware often implement hidden functionalities not declared in 1 Tool and dataset available at http://selab.fbk.eu/anflo/ their functional descriptions. For example, a malicious app may declare only entertainment features (e.g., games) in its description, but it steals user data or subscribes to paid services without the knowledge and consent of the user. 
Defective, vulnerable, and malicious apps all share the same pattern, i.e., they (either intentionally or unintentionally) deal with sensitive data in an anomalous way, i.e., they behave differently in terms of dealing with sensitive data compared to other apps that state similar functionalities. Therefore, novel approaches should focus on detecting anomalies in sensitive data flows, caused by mismatches between expected flows (observed in benign and correct apps) and actual data flows observed in the app under analysis. However, the comparison should be only against similar apps that offer similar functionalities. For instance, messaging apps are expected to read information from phone contact list but they are not expected to use GPS position. These observations motivate our proposed approach. ANOMALOUS INFORMATION FLOW DE-TECTION Overview The overview of our approach is shown in Figure 1. It has two main phases -Learning and Classification. The input to the learning phase is a set of apps that are trusted to be benign and correct in the way sensitive data is handled (we shall denote them as trusted apps). It has two sub-steps -feature extraction and model inference. In the feature extraction step, (i) topics that best characterize the trusted apps are inferred using natural language processing (NLP) techniques and (ii) information flows from sensitive sources to sinks in the trusted apps are identified using static taint analysis. In the model inference step, we build sensitive information model that characterizes information flows regarding each topic. These models and a given app under analysis (we shall denote it as AUA) are the inputs to the classification phase. In this phase, basically, the dominant topic of the AUA is first identified to determine the relevant sensitive information flow model. Then, if the AUA contains any information flow that violates that model, i.e., is not consistent with the common flows characterized by the model, it is flagged as anomalous. Otherwise, it is flagged as normal. We implemented this approach in our tool AnFlo to automate the detection of anomalous information flows. However, a security analyst is required to further inspect those anomalous flows and determine whether or not the flows could actually lead to serious vulnerabilities such as information leakage issues. Topics discovery. Topics representative of a given preprocessed app description are identified using the Latent Dirichlet Allocation (LDA) technique [6], implemented in a tool called Mallet [18]. LDA is a generative statistical model that represents a collection of text as a mixture of topics with certain probabilities, where each word appearing in the text is attributable to one of the topics. The output of LDA is a list of topics, each of them with its corresponding probability. The topic with the highest probability is labeled as the dominant topic for its associated app. To illustrate, Figure 2 shows the functional description of an app called BestTravel, and the resulting output after performing pre-processing and topics discovery on the description. "Travel" is the dominant topic, the one with the highest probability of 70%. Then, the topics "Communication", "Finance", and "Photography" have the 15%, 10%, and 5% probabilities, respectively, of being the functionalities that the app declares to provide. The ultimate and most convenient way of traveling. 
Use BestTravel while on the move, to find restaurants (including pictures and prices), local transportation schedule, ATM machines and much more. App name Travel Communication Finance Photography BestTravel 70% 15% 10% 5% Figure 2: Example of app description and topic analysis result. Note that we did not consider Google Play categories as topics even though apps are grouped under those categories in Google Play. This is because recent studies [1,10] have reported that NLP-based topic analysis on app descriptions produces more cohesive clusters of apps than those apps grouped under Google Play categories. Static Analysis Sensitive information flows in the trusted apps are extracted using static taint analysis. Taint analysis is an instance of flow-analysis technique, which tags program data with labels that describe its provenance and propagates these tags through control and data dependencies. A different label is used for each distinct source of data. Tags are propagated from the operand(s) in the right-hand side of an assignment (uses) to the variable assigned in the left-hand side of the assignment (definition). The output of taint analysis is information flows, i.e., what data of which provenances (sources) are accessed at what program operations, e.g., on channels that may leak sensitive information (sinks). Our analysis focuses on the flows of sensitive information into sensitive program operations, i.e., our taint analysis generates tags at API calls that read sensitive information (e.g. GPS and phone contacts) and traces the propagation of tags into API calls that perform sensitive operations such as sending messages and Bluetooth packets. These sensitive APIs usually belong to dangerous permission group and hence, the APIs that we analyze are those privileged APIs that require to be specifically granted by the end user. Sources and sinks are the privileged APIs available from PScout [4]. The APIs that we analyze also include those APIs that enable Inter Process Communication (IPC) mechanism of Android because they can be used to exchange data among apps installed on the same device. As a result, our taint analysis generates a list of (source → sink) pairs, where each pair represents the flow of sensitive data originating from a source into a sink. APIs (both for sources and for sinks) are grouped according to the special permission required to run them. For example, all the network related sink functions, such as open-Connection(), connect() and getContent() are all modeled as Internet sinks, because they all require the INTER-NET permission to be executed. Figure 3 shows the static taint analysis result on the "BestTravel" running example app from Figure 2. It generates two (source → sink) pairs that correspond to two sensitive information flows. In the first flow, data read from the GPS is propagated through the program until it reaches a statement where it is sent over the network. In the second flow, data from the phone contacts is used to compose a text message. Our tool, AnFlo, runs on compiled byte-code of apps to perform the above static taint analysis. It relies on two existing tools -IC3 [19] and IccTA [15]. Android apps are usually composed of several components. Therefore, to precisely extract inter-component information flows, we need to analyze the links among components. AnFlo uses IC3 to resolve the target components when a flow is inter-component. 
IC3 uses a solver to infer all possible values of complex objects in an inter-procedural, flow-and context-sensitive manner. Once inter-component links are inferred, AnFlo uses an inter-component data-flow analysis tool called IccTA to perform static taint analysis. We customized IccTA to produce flows in a format as presented in Figure 3 and paths in a more verbose format to facilitate manual checks. App: BestTravel GPS → Internet Contacts → SMS Model Inference When results of topic analysis and of static analysis are available for all the trusted apps, they are used to build the Sensitive Information Flow Model. Such a model is a matrix with sensitive information sources in its rows and sinks in its columns, as shown in Figure 4. Firstly, apps with the same dominant topic are grouped together 5 , to build a sensitive information flow model corresponding to that specific topic. Each group is labeled with the dominant topic. Next, each cell of the matrix is filled with a number, representing the number of apps in this group having the corresponding (source → sink) pair. Figure 4 shows a sample sensitive information model regarding the topic "Travel". There are 36 distinct flows in the apps grouped under this dominant topic. The matrix shows that there are ten apps containing GPS position flowing through the Internet (one of them being the BestTravel app, see Figure 3); eight apps through text messages and three apps through Bluetooth. Similarly, the matrix shows that contacts information flows through SMS in seven apps and through Bluetooth in eight apps. From this model, we can observe that for Travel apps it is quite common to share the user's position via Internet and SMS. However, it is quite uncommon to share the position data via Bluetooth since it happened only in three cases. Likewise, the phone contacts are commonly shared through text messages and Bluetooth but not through Internet. To provide a formal and operative definition of common and uncommon flows, we compute a threshold denoted as τ . Flows that occur more than or equal to τ are considered as common; flows that never occur or that occur fewer than τ are considered as uncommon regarding this topic. Although our model assumes or trusts that the trusted apps are benign and correct, it is possible that some of them may contain defects, vulnerabilities or malware. This problem is addressed by classifying those flows occurring less than the threshold τ as uncommon, i.e., our approach tolerates the presence of some anomalous flows in the reference model since these flows would still be regarded as uncommon. Hence, our approach works as long as the majority of the trusted apps are truly trustworthy. To compute this threshold, we adopt the box-plot approach proposed by Laurikkala et al. [13], considering only flows occurring in the model, i.e., we consider only values greater than zero. τ is computed in the same way as drawing outlier dots in boxplots. It is the lower quartile (25th percentile) minus the step, where the step is 1.5 times the difference between the upper quartile (75th percentile) and the lower quartile (25th percentile). It should be noted that τ is not trivially the lower quartile; otherwise 25% of the apps would be outliers by construction. The threshold is lower, i.e., it is the lower quartile minus the step. Therefore, there is no fixed amount of outliers. Outliers could be few or many depending on the distribution of data. 
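To make the model inference step concrete, the following is a minimal Python sketch (not the authors' implementation) of how a per-topic source-sink matrix and its threshold τ could be computed. Flows are assumed to be represented as (source group, sink group) pairs such as ("GPS", "Internet"); all function and variable names are illustrative, and the exact value of τ depends on the quartile convention of the percentile routine used.

```python
from collections import Counter, defaultdict
import numpy as np

def build_models(trusted_apps):
    """trusted_apps: iterable of (dominant_topic, flows) pairs, where flows is a
    set of (source_group, sink_group) tuples extracted by static taint analysis.
    Returns one Sensitive Information Flow Model per topic: (flow counts, threshold tau)."""
    by_topic = defaultdict(list)
    for topic, flows in trusted_apps:
        by_topic[topic].append(set(flows))

    models = {}
    for topic, apps in by_topic.items():
        counts = Counter()
        for flows in apps:
            counts.update(flows)                   # each (source, sink) counted once per app
        nonzero = np.array([c for c in counts.values() if c > 0])
        q1, q3 = np.percentile(nonzero, [25, 75])  # lower and upper quartiles of non-zero cells
        tau = q1 - 1.5 * (q3 - q1)                 # boxplot lower fence, as in Laurikkala et al.
        models[topic] = (counts, tau)
    return models
```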
Outliers would only be those cases that are really different from the majority of the training data points. In the example regarding topic "Travel" in Figure 4, the threshold is computed considering only the five values that are > 0. The value for the threshold is τ T ravel = 7. It means that GPS data sent through Internet (GPS → Internet) or text messages (GPS → SMS) are common for traveling apps. Conversely, even though there are three trusted apps which send GPS data through Bluetooth (GPS → Bluetooth), there are too few cases to be considered common, and this sensitive information flow will be considered uncommon in the model. Likewise, phone contacts are commonly sent through text messages and Bluetooth, but it is uncommon for them to be sent through the Internet, since this never occurs in the trusted apps. Classification After the Sensitive Information Flow Models are built on trusted apps, they can be used to classify a new AUA. First of all, features must be extracted from the AUA. The features are the topics associated with the app description and the sensitive information flows in the app. As in Section 4.2.1, first data pre-processing is performed on the app description of the AUA. Then, topics and their probabilities are inferred from the pre-processed description using the Mallet tool. Among all the topics, we consider only the dominant topic, the one with the highest probability, because it is the topic that most characterizes this app. We then obtain the Sensitive Information Flow Model associated with this dominant topic. To ensure the availability of the Sensitive Information Flow Model, the Mallet tool is configured with the list of topics for which the Models are already built on the trusted apps. And given an app description, the Mallet tool only generates topics from this list. The more diverse trusted apps we analyze, the more complete list of models we expect to build. For example, Figure 5(a) shows the topics inferred from the description of a sample AUA "TripOrganizer". The topic "Travel" is highlighted in bold to denote that it is the dominant topic. Next, sensitive information flows in the AUA are extracted as described in Section 4.2.2. The extracted flows are then compared against the flows in the model associated with the dominant topic. If the AUA contains only flows that are common according to the model, the app is considered consistent with the model. If the app contains a flow that is not present in the model or a flow that is present but is uncommon according to the model, the flow and thus, the app is classified as anomalous. Anomalous flows require further manual inspection by a security analyst, because they could be due to defects, vulnerabilities, or malicious intentions. For example, Figure 5(b) shows three sensitive information flows extracted from "TripOrganizer" app. Since the dominant topic for this app is "Travel", these flows can be checked against the model associated with this topic shown in Figure 4. Regarding this model, earlier, we computed that the threshold is τ T ravel = 7 and the flow (Contacts → SMS) is common (see Section 4.3). Therefore, flow 1 ob-served in "TripOrganizer" (Figure 5(b)) is consistent with the model. However, flow 2 (Contacts → Internet) and flow 3 (GPS → Bluetooth), highlighted in bold in Figure 5(b), are uncommon according to the model. As a result, the AUA "TripOrganizer" is classified as anomalous. EMPIRICAL ASSESSMENT In this section, we evaluate the usefulness of our approach and report the results. 
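Continuing the sketch above (same illustrative names and assumptions), the classification step could then be reduced to a membership-and-threshold check against the model of the app's dominant topic: flows that are absent from the model, or that occur in fewer than τ trusted apps, are reported as anomalous.

```python
def classify(dominant_topic, aua_flows, models):
    """aua_flows: set of (source_group, sink_group) tuples found in the app under analysis.
    Returns the list of anomalous flows; an empty list means the app is consistent
    with the Sensitive Information Flow Model of its dominant topic."""
    counts, tau = models[dominant_topic]
    return [flow for flow in aua_flows if counts.get(flow, 0) < tau]

# Hypothetical usage: a "Travel" app whose flows include ("Contacts", "Internet")
# would be flagged if that flow occurs fewer than tau times among trusted Travel apps.
# anomalies = classify("Travel", {("GPS", "Internet"), ("Contacts", "Internet")}, models)
```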
We assess our approach by answering the following research questions: • RQ-Vul: Is AnFlo useful for identifying vulnerable apps containing anomalous information flows? • RQ-Time: How long does AnFlo take to classify apps? • RQ-Topics: Is the topic feature really needed to detect anomalous flows? • RQ-Cat: Can app-store categories be used instead of topics to learn an accurate Sensitive Information Flow Model? • RQ-Mal: Is AnFlo useful for identifying malicious apps? The first research question, RQ-Vul, investigates the result of AnFlo, i.e., whether it is useful for detecting anomalies in vulnerable apps that, for example, may leak sensitive information. RQ-Time investigates the cost of using our approach in terms of the time taken to analyze a given AUA. A short analysis time is essential for tool adoption in a real production environment. Then, in the next two research questions, we investigate the role of topics as a feature for building the Sensitive Information Flow Models. RQ-Topics investigates the absolute contribution of topics, by learning the Sensitive Information Flow Model without considering the topics and by comparing its performance with that of our original model. To answer RQ-Cat, we replace topics with the categories defined in the official market, and we compare the performance of this new model with that of our original model. Finally, the last research question, RQ-Mal, investigates the usefulness of AnFlo in detecting malware based on anomalies in sensitive information flows. Benchmarks and Experimental Settings Trusted Apps AnFlo needs a set of trusted apps to learn what the normal behavior is for "correct and benign" apps. We defined the following guidelines to collect trusted apps: (i) apps that come from the official Google Play Store (so they are scrutinized and checked by the store maintainer) and (ii) apps that are very popular (so they are widely used and reviewed by a large community of end users, and programming mistakes are quickly reported and patched). At the time of crawling the Google Play Store, it had 30 different app categories. From each category, we downloaded, on average, the top 500 apps together with their descriptions. We then discarded apps with non-English descriptions and those with very short descriptions (less than 10 words). Eventually, we were left with 11,796 apps for building the reference models. Additionally, we measured whether these apps were actively maintained by looking at the date of the last update. 70% of the apps were last updated in the past 6 months before the Play Store was crawled, while 32% of the apps were last updated within the same month as the crawling. This supports the claim that the trusted apps are well maintained. The fact that the trusted apps we use are suggested and endorsed by the official store, and that they collected good end-user feedback, allows us to assume that the apps are of high quality and do not contain many security problems. Nevertheless, as explained in Section 4.3, our approach is robust against the inclusion of a small number of anomalous apps in the training set since we adopt a threshold to classify anomalous information flows. Subject Benign Apps AnFlo works on compiled apps and, therefore, the availability of source code is not a requirement for the analysis. However, for this experiment's sake, we opted for open source projects, which enable us to inspect the source code and establish the ground truth.
The F-Droid repository represents an ideal setting for our experimentation because (i) it includes real-world apps that are also popular in the Google Play Store, and (ii) apps can be downloaded with their source code for manual verification of the vulnerability reports delivered by AnFlo. The F-Droid repository was crawled in July 2017 for apps that meet our criteria. Among all the apps available in this repository, we used only those apps that are also available in the Google Play Store and whose descriptions meet our selection criteria (i.e., the description is in English and longer than 10 words). Eventually, our experimental set of benign apps consists of 596 AUAs. Subject Malicious Apps To investigate if AnFlo can identify malware, we need a set of malicious apps with their declared functional descriptions. Malicious apps are usually repackaged versions of popular (benign) apps, injected with malicious code (Trojanized); hence, the descriptions of those popular apps they disguise as can be considered their app descriptions. Thus, by identifying the original versions of these malicious apps in the Google Play Store, we obtain their declared functional descriptions. We consider the malicious apps from the Drebin malware dataset [2], which consists of 5,560 samples that have been collected in the period of August 2010 to October 2012. We randomly sampled 560 apps from this dataset. For each malicious app, we performed static analysis to extract the package name, an identifier used by Android and by the official store to distinguish Android apps (even though it is easy to obfuscate this piece of information, in our experiment some apps did not rename their package name). We queried the official Google Play market for the original apps, by searching for those having the same package name. Among our sampled repackaged malicious apps, we found 20 of the apps in the official market with the same package name. We analyzed their descriptions and found that only 18 of them have English descriptions. We therefore performed static taint analysis on these 18 malware samples, for which we found their "host" apps in the official market. Our static analysis crashed on 6 cases. Therefore, our experimental set of malicious apps consists of 12 AUAs. Results Detecting Vulnerable Apps Firstly, AnFlo was used to perform static taint analysis on the 11,796 trusted apps and topic analysis on their descriptions from the official Play Store. It then learned the Sensitive Information Flow Models based on the dominant topics and extracted flows as described in Section 4.3. Then, the AUAs from the F-Droid repository (Section 5.1.2) were classified based on the Sensitive Information Flow Models. Out of 596 AUAs, static taint analysis reported 76 apps to contain flows of sensitive information that reach sinks, for a total of 1428 flows. These flows map to 147 distinct source-sink pairs. Out of these 76 apps, 14 AUAs are classified as anomalous. Table 1 shows the analysis results reported by AnFlo. The first column presents the name of the app. The second column presents the app's dominant topic. The third and fourth columns present the source of sensitive data and the sink identified by static taint analysis, respectively. As shown in Table 1, in total AnFlo reported 25 anomalous flows in these apps. We manually inspected the source code available from the repository to determine if these anomalous flows were due to programming defects or vulnerabilities. Two apps are found to be vulnerable (highlighted in boldface in Table 1): com.matoski.adbm and com.mschlauch.comfortreader.
com.matoski.adbm is a utility app for managing the ADB debugging interface. The anomalous flow involves data from the WiFi configuration that leaks to other apps through Inter Process Communication. Among other information that may leak, the SSID data, which identifies the network to which the device is connected, can be used to infer the user's position and threaten end-user privacy. Hence, this programming defect leads to an information leakage vulnerability that requires corrective maintenance. We reported this vulnerability to the app owners on their issue tracker. com.mschlauch.comfortreader is a book reader app, with an anomalous flow of data from IPC to the Internet. Manual inspection revealed that this anomalous flow results from a permission re-delegation vulnerability because data coming from another app is used, without sanitization, for opening a data stream. If a malicious app that does not have the permission to use the Internet passes a URL that contains sensitive privacy data (e.g., GPS coordinates), then the app could be used to leak information. We reported this vulnerability to the app developers. Regarding the other 12 AUAs, even though they contain anomalous flows compared to trusted apps, manual inspection revealed that they are neither defective nor vulnerable. For example, some apps contain anomalous flows that involve IPC. Since data may come from other apps via IPC (source) or may flow to other apps via IPC (sink), such flows are considered dangerous in general. However, in these 12 apps, when IPC is a source (e.g., in com.alfray.timeriffic), data is either validated/sanitized before being used in the sink or used in a way that does not threaten security. On the other hand, when IPC is a sink (e.g., in com.dozingcatsoftware.asciicam), the destination is always a component in the same app, so the flows are not actually dangerous. Since AnFlo helped us detect 2 vulnerable apps containing anomalous information flows, we can answer RQ-Vul by stating that AnFlo is useful for identifying vulnerabilities related to anomalous information flows. Classification Time To investigate RQ-Time, we analyze the time required to classify the AUAs. We instrumented the analysis script with the Linux date utility to log the time (in seconds) before starting the analysis and at its conclusion. Their difference is the amount of time spent in the computation. The experiment was run on a multi-core cluster, specifically designed to let a process run without sharing memory or computing resources with other processes. Thus, we assume that the time measurement is reliable. Classification time includes the static analysis step to extract data flows, the natural language processing step to extract topics from the description, and the comparison with the Sensitive Information Flow Model to check for consistency. Figure 6 reports the boxplot of the time (in minutes) needed to classify the F-Droid apps and the descriptive statistics. On average, an app takes 1.9 minutes to complete the classification, and most of the analyses concluded in less than 3 minutes (median = 1.5). Only a few (outlier) cases require a longer analysis time. Topics from App Description We now run another experiment to verify our claim that topics are important features to build an accurate model (RQ-Topics).
We repeated the same experiment as before, but using only flows as features and without considering topics, to check how much detection accuracy we lose in this way. We still consider all the trusted apps for learning the reference model, but we only use static analysis data. That is, we do not create a separate matrix for each topic; instead, we create a single big matrix with sources and sinks for all the apps. This Sensitive Information Flow Model is then used to classify the F-Droid apps, and the results are shown in Table 2. As we can see, only four apps are detected as anomalous by this second approach, and all of them were already detected by our original, proposed approach. Manual inspection revealed that none of them are vulnerable. This suggests that topic is a very important feature for learning reference models in order to detect a larger number of anomalous apps. In fact, when topics are not considered and all the apps are grouped together regardless of their topics, we observe a smoothing effect: differences among apps become less relevant for detecting anomalies. While in the previous model an app was compared only against the apps grouped under the same topic, here an app is compared to all the trusted apps. Without topic as a feature, our model loses the ability to capture the characteristics of distinct groups and, thus, the ability to detect deviations from them. Play Store Categories To investigate RQ-Cat, instead of grouping trusted apps based on topics, we group them according to their app categories as determined by the official Google Play Store. First of all, we split the trusted apps into groups based on the market category they belong to. We then use static analysis information about flows to build a separate source-sink matrix for each category. Eventually, we compute thresholds to complete the model. We then classify each AUA from F-Droid by comparing it with the model of the corresponding market category. The classification results are reported in Table 3. Ten apps are reported as containing anomalous flows, and most of them were also detected by our original, proposed approach (Table 1). Two apps reported by this approach were not reported by our proposed approach, namely com.angrydoughnuts.android.alarmclock and com.futurice.android.reservator. However, they are neither vulnerable nor malicious. Only one of the flows detected by this approach is a case of vulnerability, namely the one in com.matoski.adbm (highlighted in boldface), which was also detected by our proposed approach. Hence, this result supports our design decision of using topics. Comparison of the Models Table 4 summarizes the result of the models comparison. The first model (first row) considers both data flows and description topics as features. Even though this approach reported the largest number of false positives (12 apps, 'FP' column), we were able to detect 2 vulnerabilities ('Vuln.' column) by tracing the anomalies reported by this approach. It also detected 5 additional anomalous apps that other approaches did not detect ('Unique' column). The second model (second row) considers only data flows as a feature. Even though the number of false positives drops to 4, we were not able to detect any vulnerability by tracing the anomalies reported by this approach. This result suggests that modeling only flows is not enough for detecting vulnerabilities.
When market categories are used instead of description topics (last row), the false positives drop to 9 (25% less compared to our proposed model). It detected 2 additional anomalous apps that other approaches did not detect ('Unique' column). Tracing the anomalies reported by this approach, we detected only one out of the two vulnerabilities that we detected using our proposed approach. This result suggests that topics are more useful than categories for detecting vulnerable apps containing anomalous information flows. Detecting Malicious Apps Anomalies in the flow of sensitive data could be due to malicious behaviors as well. The goal of this last experiment is to investigate whether AnFlo can be used to identify malware (RQ-Mal). To this aim, we use the Sensitive Information Flow Models (learned on the trusted apps) to classify the 18 AUAs from the Drebin malware dataset. Data flow features are extracted from these malicious apps using static analysis. However, static taint analysis crashed on 6 apps because of their heavy obfuscation. Since improving the static taint analysis implementation to work on heavily obfuscated code is out of the scope of this paper, we ran the experiment on the remaining 12 apps. Topics are extracted from the descriptions of the original versions of those malware samples, which are available on the official market store. The malicious apps have been subjected to anomaly detection based on three distinct feature sets: (i) flows and topics; (ii) only flows; and (iii) flows and market categories. The classification results are shown in Table 5. The first column reports the malware name (according to the ESET-NOD32 antivirus) and the second column contains the name of the original app that was repackaged to spread the malware. The remaining three columns report the results of malware detection by the three models based on the different sets of features: a tick mark ("✓") means that the model correctly detected the app as anomalous, while a cross ("✗") means no anomaly was detected. While the model based on topics and the model based on market categories classified the same 6 AUAs as malicious, the model based on only flows classified only 4 AUAs as malicious. All the malware except TrojanSMS.Agent are cases of privacy-sensitive information leaks, such as the device ID, phone number, e-mail or GPS coordinates being sent over the network or via SMS. One typical malicious behavior is observed in Spy.GoldDream. In this case, after querying the list of installed packages (sensitive data source), the malware attempts to kill selected background processes (sensitive sink). This is a typical malicious behavior observed in malware that tries to avoid detection by stopping security products such as antiviruses. Botnet behavior is observed in DroidKunFu. A command and control (C&C) server command is consulted (sensitive source) before privileged actions are performed on the device (sensitive sink). As shown in Table 5, when only static analysis features are used in the model, two malicious apps are missed. This is because this limited model compares the given AUA against all the trusted apps, instead of only the apps from a specific subset (grouped by the common topic or the same category). A flow that would have been anomalous for the specific topic (or the specific category) might be normal for another topic/category. For example, acquiring the GPS coordinates and sending them over the network is common for navigation or transportation apps.
However, it is not a common behavior for tools apps, which is the case of the Anserver malware. The remaining 6 apps in the dataset were consistently classified as not-anomalous by all the models. These false negatives are mainly due to the malicious behaviors not related to sensitive information flows, such as dialing calls in the background or blocking messages. Another reason is due to the obfuscation by malware to hide the sensitive information flows. Static analysis inherently cannot handle obfuscation. Limitation and Discussion In the following, we discuss some of the limitations of our approach and of its experimental validation. The most prominent limitation to adopt our approach is the availability of trusted apps to build the model of sensitive information flows. In our experimental validation, we trusted top ranked popular apps from the official app store, but we have no guarantee that they are all immune from vulnerabilities and from malware content. However, as explained in Section 4.3, our approach is quite robust with respect to the inclusion of a small number of defective, vulnerable, or malicious apps in the training set, as long as the majority of the training apps are benign and correct. This is because we use a threshold-based approach that models flows common to a large set of apps. Thus, vulnerable flows occurring on few training apps are not learnt as normal in the model and they would be classified as anomalous when observed in a given AUA. A flow classified as anomalous by our model needs further manual analysis to check if the anomaly is a vulnerability, a malicious behavior or is safe. Manual inspection could be an expensive task that might delay the delivery of the software product. However, in our experimental validation, manual filtering on the experimental result took quite short time, on average 30 minutes per app. Considering that the code of the app to review was new to us, we expect a shorter manual filtering phase for a developer who is quite familiar with the code of her/his app. All in all, manual effort required to manual filter results of the automated tool seems to be compatible with the fast time-to-market pressure of smart phone apps. When building sensitive information flow models, we also considered grouping of apps by using clustering technique based on the topics distribution, instead of grouping based on the dominant topic alone. But we conducted preliminary experiments using this method and observed that grouping of apps based on dominant topics produce more cohesive groups, i.e., apps that are more similar. Inherently, it is difficult for static analysis-based approaches including ours to handle obfuscated code. Therefore, if training apps are obfuscated (e.g., to limit reverse engineering attacks), our approach may collect incomplete static information and only build a partial model. And if the AUA is obfuscated, our approach may not detect the anomalies. As future work, we plan to incorporate our approach with dynamic analysis to deal with obfuscation. CONCLUSION In this paper, we proposed a novel approach to analyze the flows of sensitive information in Android apps. In our approach, trusted apps are first analyzed to extract topics from their descriptions and data flows from their code. Topics and flows are then used to learn Sensitive Information Flow models. We can use these models for analyzing new Android apps to determine whether they contain anomalous information flows. 
Our experiments show that this approach could detect anomalous flows in vulnerable and malicious apps quite fast.
6,667
1812.07894
2883454930
Smartphone apps usually have access to sensitive user data such as contacts, geo-location, and account credentials and they might share such data to external entities through the Internet or with other apps. Confidentiality of user data could be breached if there are anomalies in the way sensitive data is handled by an app which is vulnerable or malicious. Existing approaches that detect anomalous sensitive data flows have limitations in terms of accuracy because the definition of anomalous flows may differ for different apps with different functionalities; it is normal for "Health" apps to share heart rate information through the Internet but is anomalous for "Travel" apps. In this paper, we propose a novel approach to detect anomalous sensitive data flows in Android apps, with improved accuracy. To achieve this objective, we first group trusted apps according to the topics inferred from their functional descriptions. We then learn sensitive information flows with respect to each group of trusted apps. For a given app under analysis, anomalies are identified by comparing sensitive information flows in the app against those flows learned from trusted apps grouped under the same topic. In the evaluation, information flow is learned from 11,796 trusted apps. We then checked for anomalies in 596 new (benign) apps and identified 2 previously-unknown vulnerable apps related to anomalous flows. We also analyzed 18 malware apps and found anomalies in 6 of them.
@cite_10 proposed Appsealer, a runtime patch to mitigate the permission re-delegation problem. They perform static data flow analysis to determine sensitive data flows from sources to sinks and apply a patch before the invocations of privileged APIs, such that the app alerts the user of potential permission re-delegation attacks and requests the user's authorization to continue. This is an alternative way of distinguishing normal behaviors from abnormal ones by relying on the user. @cite_6 also proposed a similar approach, but they extended the Android framework to track ICC vulnerabilities instead of patching the app. Instead of relying on the user, who might not be aware of security implications, we resort to a model that reflects normal information flow behaviors to detect anomalies in the flow of sensitive information.
{ "abstract": [ "Component hijacking is a class of vulnerabilities commonly appearing in Android applications. When these vul- nerabilities are triggered by attackers, the vulnerable apps can exfiltrate sensitive information and compromise the data integrity on Android devices, on behalf of the attackers. It is often unrealis- tic to purely rely on developers to fix these vulnerabilities for two reasons: 1) it is a time-consuming process for the developers to confirm each vulnerability and release a patch for it; and 2) the developers may not be experienced enough to properly fix the problem. In this paper, we propose a technique for automatic patch generation. Given a vulnerable Android app (without source code) and a discovered component hijacking vulnerability, we automatically generate a patch to disable this vulnerability. We have implemented a prototype called AppSealer and evaluated its efficacy on apps with component hijacking vulnerabilities. Our evaluation on 16 real-world vulnerable Android apps demon- strates that the generated patches can effectively track and mitigate component hijacking vulnerabilities. Moreover, after going through a series of optimizations, the patch code only represents a small portion (15.9 on average) of the entire program. The runtime overhead introduced by AppSealer is also minimal, merely 2 on average.", "Android's communication model has a major security weakness: malicious apps can manipulate other apps into performing unintended operations and can steal end-user data, while appearing ordinary and harmless. This paper presents SEALANT, a technique that combines static analysis of app code, which infers vulnerable communication channels, with runtime monitoring of inter-app communication through those channels, which helps to prevent attacks. SEALANT's extensive evaluation demonstrates that (1) it detects and blocks inter-app attacks with high accuracy in a corpus of over 1,100 real-world apps, (2) it suffers from fewer false alarms than existing techniques in several representative scenarios, (3) its performance overhead is negligible, and (4) end-users do not find it challenging to adopt." ], "cite_N": [ "@cite_10", "@cite_6" ], "mid": [ "2059610428", "2619760961" ] }
AnFlo: Detecting Anomalous Sensitive Information Flows in Android Apps
Android applications (apps) are often granted access to users' privacy- and security-sensitive information such as GPS position, phone contacts, camera, microphone, training log, and heart rate. Apps need such sensitive data to implement their functionalities and provide rich user experiences. For instance, accurate GPS position is needed to navigate users to their destinations, phone contacts are needed to implement messaging and chat functionalities, and heart rate frequency is important to accurately monitor training improvements. Often, to provide services, apps may also need to exchange data with other apps in the same smartphone or externally with a remote server. For instance, a camera app may share a picture with a multimedia messaging app for sending it to a friend. The messaging app, in turn, may send the full contacts list from the phone directory to a remote server in order to identify which contacts are registered to the messaging service so that they can be shown as possible destinations. As such, sensitive information may legitimately be propagated via message exchanges among apps or to remote servers. On the other hand, sensitive information might be exposed unintentionally by defective/vulnerable apps or intentionally by malicious apps (malware), which threatens the security and privacy of end users. Existing literature on information leaks in smartphone apps tends to overlook the difference between legitimate data flows and illegitimate ones. Whenever an information flow from a sensitive source to a sensitive sink is detected, either statically [23], [20], [19,15,3], [22], [17], [12] or dynamically [8], it is reported as potentially problematic. In this paper, we address the problem of detecting anomalous information flows with improved accuracy by classifying cases of information flows as either normal or anomalous according to a reference information flow model. More specifically, we build a model of sensitive information flows based on the following features: • Data source: the provenance of the sensitive data that is being propagated; • Data sink: the destination where the data is flowing to; and • App topic: the declared functionalities of the app according to its description. Data source and data sink features are used to reflect information flows from sensitive sources to sinks and summarize how sensitive data is handled by an app. However, these features are not expressive enough to build an accurate model. In fact, distinct apps might have very different functionalities. What is considered legitimate for a particular set of apps (e.g., sharing contacts for a messaging app) can be considered a malicious behavior for other apps (e.g., a piece of malware that steals contacts, to be later used by spammers). An accurate model should also take into consideration the main functionalities that are declared by an app (in our case the App topic). 
One should classify an app as anomalous only when it exhibits sensitive information flows that are not consistent with its declared functionalities. This characteristic, which makes an app anomalous, is captured by the App topic feature. In summary, our approach focuses on detecting apps that are anomalous in terms of information flows compared to other apps with similar functionalities. Such an approach would be useful for various stakeholders. For example, market owners (e.g., Google) can focus on performing more complex and expensive security analysis only on those cases that are reported as anomalous, before publishing them. If such information is available to end users, they could also make an informed decision about whether or not to install the anomalous app. For example, when the user installs an app, a warning could be shown stating that this particular app sends contact information through the Internet differently from other apps with similar functionalities (as demonstrated on the tool website). In the context of BYOD (bring your own device), where employees use their own device to connect to the secure corporate network, a security analyst might benefit from this approach to focus manual analysis on those anomalous flows that might compromise the confidentiality of corporate data stored in the devices. The specific contributions of this paper are: • An automated, fast approach for detecting anomalous flows of sensitive information in Android apps through a seamless combination of static analysis, natural language processing, model inference, and classification techniques; • The implementation of the proposed approach in a tool called AnFlo, which is publicly available (tool and dataset available at http://selab.fbk.eu/anflo/); and • An extensive empirical evaluation of our approach based on 596 subject apps, which assesses the accuracy and runtime performance of anomalous information flow detection. We detected 2 previously-unknown vulnerable apps related to anomalous flows. We also analyzed 18 malware apps and found anomalies in 6 of them. The rest of the paper is organized as follows. Section 2 motivates this work. Section 3 compares our work with the literature. Section 4 first gives an overview of our approach and then explains the steps in detail. Section 5 evaluates our approach. Section 6 concludes the paper. MOTIVATION To implement their services, apps may access sensitive data. It is important that application code handling such data follows secure coding guidelines to protect user privacy and security. However, fast time-to-market pressure often pushes developers to implement data handling code quickly without considering security implications and to release apps without proper testing. As a result, apps might contain defects that leak sensitive data unintentionally. They may also contain security vulnerabilities such as permission re-delegation vulnerabilities [9], which could be exploited by malicious apps installed on the same device to steal sensitive data. Sensitive data could also be intentionally misused by malicious apps. Malicious apps such as malware and spyware often implement hidden functionalities not declared in their functional descriptions. For example, a malicious app may declare only entertainment features (e.g., games) in its description, but it steals user data or subscribes to paid services without the knowledge and consent of the user. 
Defective, vulnerable, and malicious apps all share the same pattern: they (either intentionally or unintentionally) deal with sensitive data in an anomalous way, that is, they behave differently in terms of dealing with sensitive data compared to other apps that state similar functionalities. Therefore, novel approaches should focus on detecting anomalies in sensitive data flows, caused by mismatches between expected flows (observed in benign and correct apps) and actual data flows observed in the app under analysis. However, the comparison should be made only against similar apps that offer similar functionalities. For instance, messaging apps are expected to read information from the phone contact list but they are not expected to use the GPS position. These observations motivate our proposed approach. ANOMALOUS INFORMATION FLOW DETECTION Overview The overview of our approach is shown in Figure 1. It has two main phases -Learning and Classification. The input to the learning phase is a set of apps that are trusted to be benign and correct in the way sensitive data is handled (we shall denote them as trusted apps). It has two sub-steps -feature extraction and model inference. In the feature extraction step, (i) topics that best characterize the trusted apps are inferred using natural language processing (NLP) techniques and (ii) information flows from sensitive sources to sinks in the trusted apps are identified using static taint analysis. In the model inference step, we build Sensitive Information Flow Models that characterize information flows regarding each topic. These models and a given app under analysis (we shall denote it as AUA) are the inputs to the classification phase. In this phase, the dominant topic of the AUA is first identified to determine the relevant sensitive information flow model. Then, if the AUA contains any information flow that violates that model, i.e., is not consistent with the common flows characterized by the model, it is flagged as anomalous. Otherwise, it is flagged as normal. We implemented this approach in our tool AnFlo to automate the detection of anomalous information flows. However, a security analyst is required to further inspect those anomalous flows and determine whether or not the flows could actually lead to serious vulnerabilities such as information leakage issues. Topics discovery. Topics representative of a given preprocessed app description are identified using the Latent Dirichlet Allocation (LDA) technique [6], implemented in a tool called Mallet [18]. LDA is a generative statistical model that represents a collection of text as a mixture of topics with certain probabilities, where each word appearing in the text is attributable to one of the topics. The output of LDA is a list of topics, each of them with its corresponding probability. The topic with the highest probability is labeled as the dominant topic for its associated app. To illustrate, Figure 2 shows the functional description of an app called BestTravel, and the resulting output after performing pre-processing and topics discovery on the description. "Travel" is the dominant topic, the one with the highest probability of 70%. Then, the topics "Communication", "Finance", and "Photography" have the 15%, 10%, and 5% probabilities, respectively, of being the functionalities that the app declares to provide. Figure 2: Example of app description and topic analysis result. The example description reads: "The ultimate and most convenient way of traveling. Use BestTravel while on the move, to find restaurants (including pictures and prices), local transportation schedule, ATM machines and much more." The resulting topic distribution is: BestTravel - Travel 70%, Communication 15%, Finance 10%, Photography 5%.
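The paper's topic-discovery step relies on Mallet's LDA implementation. Purely as an illustration of what this step looks like, the following sketch uses scikit-learn's LDA instead of Mallet; the corpus, the number of topics, and the printed output are hypothetical placeholders and not the actual AnFlo pipeline.

# Illustrative sketch of topic discovery with LDA using scikit-learn
# (the paper itself uses Mallet; descriptions and topic count are placeholders).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

descriptions = [
    "The ultimate and most convenient way of traveling ...",   # BestTravel (Figure 2 example)
    "Chat with your friends and share photos instantly ...",   # hypothetical messaging app
    # ... one pre-processed description per trusted app
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(descriptions)

lda = LatentDirichletAllocation(n_components=4, random_state=0)   # e.g., 4 topics
doc_topics = lda.fit_transform(doc_term)   # per-app topic distribution (rows sum to 1)

# The dominant topic of each app is simply its most probable topic.
dominant_topic = doc_topics.argmax(axis=1)
print(dominant_topic[0], doc_topics[0])    # dominant topic index and distribution for the first app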
Note that we did not consider Google Play categories as topics even though apps are grouped under those categories in Google Play. This is because recent studies [1,10] have reported that NLP-based topic analysis on app descriptions produces more cohesive clusters of apps than those apps grouped under Google Play categories. Static Analysis Sensitive information flows in the trusted apps are extracted using static taint analysis. Taint analysis is a flow-analysis technique which tags program data with labels that describe its provenance and propagates these tags through control and data dependencies. A different label is used for each distinct source of data. Tags are propagated from the operand(s) in the right-hand side of an assignment (uses) to the variable assigned in the left-hand side of the assignment (definition). The output of taint analysis is information flows, i.e., what data of which provenances (sources) are accessed at what program operations, e.g., on channels that may leak sensitive information (sinks). Our analysis focuses on the flows of sensitive information into sensitive program operations, i.e., our taint analysis generates tags at API calls that read sensitive information (e.g., GPS and phone contacts) and traces the propagation of tags into API calls that perform sensitive operations such as sending messages and Bluetooth packets. These sensitive APIs usually belong to the dangerous permission group; hence, the APIs that we analyze are privileged APIs whose permissions must be specifically granted by the end user. Sources and sinks are the privileged APIs available from PScout [4]. The APIs that we analyze also include those APIs that enable the Inter Process Communication (IPC) mechanism of Android because they can be used to exchange data among apps installed on the same device. As a result, our taint analysis generates a list of (source → sink) pairs, where each pair represents the flow of sensitive data originating from a source into a sink. APIs (both for sources and for sinks) are grouped according to the special permission required to run them. For example, all the network related sink functions, such as openConnection(), connect() and getContent(), are modeled as Internet sinks, because they all require the INTERNET permission to be executed. Figure 3 shows the static taint analysis result on the "BestTravel" running example app from Figure 2. It generates two (source → sink) pairs that correspond to two sensitive information flows. In the first flow, data read from the GPS is propagated through the program until it reaches a statement where it is sent over the network. In the second flow, data from the phone contacts is used to compose a text message. Our tool, AnFlo, runs on the compiled byte-code of apps to perform the above static taint analysis. It relies on two existing tools -IC3 [19] and IccTA [15]. Android apps are usually composed of several components. Therefore, to precisely extract inter-component information flows, we need to analyze the links among components. AnFlo uses IC3 to resolve the target components when a flow is inter-component. 
IC3 uses a solver to infer all possible values of complex objects in an inter-procedural, flow- and context-sensitive manner. Once inter-component links are inferred, AnFlo uses an inter-component data-flow analysis tool called IccTA to perform static taint analysis. We customized IccTA to produce flows in a format as presented in Figure 3 and paths in a more verbose format to facilitate manual checks. Figure 3: Static taint analysis result for the BestTravel example (App: BestTravel; GPS → Internet; Contacts → SMS). Model Inference When results of topic analysis and of static analysis are available for all the trusted apps, they are used to build the Sensitive Information Flow Model. Such a model is a matrix with sensitive information sources in its rows and sinks in its columns, as shown in Figure 4. Firstly, apps with the same dominant topic are grouped together, to build a sensitive information flow model corresponding to that specific topic. Each group is labeled with the dominant topic. Next, each cell of the matrix is filled with a number, representing the number of apps in this group having the corresponding (source → sink) pair. Figure 4 shows a sample sensitive information model regarding the topic "Travel". There are 36 distinct flows in the apps grouped under this dominant topic. The matrix shows that there are ten apps containing GPS position flowing through the Internet (one of them being the BestTravel app, see Figure 3); eight apps through text messages and three apps through Bluetooth. Similarly, the matrix shows that contacts information flows through SMS in seven apps and through Bluetooth in eight apps. From this model, we can observe that for Travel apps it is quite common to share the user's position via Internet and SMS. However, it is quite uncommon to share the position data via Bluetooth since it happened only in three cases. Likewise, the phone contacts are commonly shared through text messages and Bluetooth but not through the Internet. To provide a formal and operative definition of common and uncommon flows, we compute a threshold denoted as τ. Flows that occur at least τ times are considered common; flows that never occur or that occur fewer than τ times are considered uncommon regarding this topic. Although our model assumes or trusts that the trusted apps are benign and correct, it is possible that some of them may contain defects, vulnerabilities or malware. This problem is addressed by classifying those flows occurring less than the threshold τ as uncommon, i.e., our approach tolerates the presence of some anomalous flows in the reference model since these flows would still be regarded as uncommon. Hence, our approach works as long as the majority of the trusted apps are truly trustworthy. To compute this threshold, we adopt the box-plot approach proposed by Laurikkala et al. [13], considering only flows occurring in the model, i.e., we consider only values greater than zero. τ is computed in the same way as drawing outlier dots in boxplots. It is the lower quartile (25th percentile) minus the step, where the step is 1.5 times the difference between the upper quartile (75th percentile) and the lower quartile (25th percentile). It should be noted that τ is not trivially the lower quartile; otherwise 25% of the apps would be outliers by construction. The threshold is lower, i.e., it is the lower quartile minus the step. Therefore, there is no fixed number of outliers. Outliers could be few or many depending on the distribution of data. Outliers would only be those cases that are really different from the majority of the training data points.
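To make the threshold computation concrete, the following is a small illustrative sketch (not the actual AnFlo implementation) of how a per-topic model and the box-plot-based threshold τ could be derived from the extracted (source → sink) pairs, and how an AUA's flows could then be checked against it. The counts reproduce the "Travel" example of Figure 4; note that NumPy's default quartile interpolation can yield a slightly different numeric value of τ than the convention of Laurikkala et al. used in the paper, although the resulting common/uncommon split for this example is the same.

# Illustrative sketch (not the AnFlo implementation) of a per-topic Sensitive
# Information Flow Model, the box-plot-based threshold tau, and the anomaly check.
import numpy as np
from collections import Counter

# (source, sink) pairs observed in the trusted apps of one topic ("Travel"),
# with counts taken from the Figure 4 example.
travel_model = Counter({
    ("GPS", "Internet"): 10,
    ("GPS", "SMS"): 8,
    ("GPS", "Bluetooth"): 3,
    ("Contacts", "SMS"): 7,
    ("Contacts", "Bluetooth"): 8,
})

def threshold(model):
    # Consider only flows that actually occur (counts > 0) and use the
    # box-plot lower fence: Q1 - 1.5 * (Q3 - Q1).
    counts = np.array([c for c in model.values() if c > 0])
    q1, q3 = np.percentile(counts, [25, 75])
    return q1 - 1.5 * (q3 - q1)

def anomalous_flows(model, flows, tau):
    # A flow is anomalous if it is absent from the model or occurs fewer than tau times.
    return [f for f in flows if model.get(f, 0) < tau]

tau = threshold(travel_model)
aua_flows = [("Contacts", "SMS"), ("Contacts", "Internet"), ("GPS", "Bluetooth")]
print(anomalous_flows(travel_model, aua_flows, tau))
# -> the last two flows are reported; a non-empty list flags the AUA as anomalous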
In the example regarding topic "Travel" in Figure 4, the threshold is computed considering only the five values that are > 0. The value of the threshold is τ_Travel = 7. It means that GPS data sent through the Internet (GPS → Internet) or text messages (GPS → SMS) are common for traveling apps. Conversely, even though there are three trusted apps which send GPS data through Bluetooth (GPS → Bluetooth), these are too few cases to be considered common, and this sensitive information flow will be considered uncommon in the model. Likewise, phone contacts are commonly sent through text messages and Bluetooth, but it is uncommon for them to be sent through the Internet, since this never occurs in the trusted apps. Classification After the Sensitive Information Flow Models are built on trusted apps, they can be used to classify a new AUA. First of all, features must be extracted from the AUA. The features are the topics associated with the app description and the sensitive information flows in the app. As in Section 4.2.1, data pre-processing is first performed on the app description of the AUA. Then, topics and their probabilities are inferred from the pre-processed description using the Mallet tool. Among all the topics, we consider only the dominant topic, the one with the highest probability, because it is the topic that most characterizes this app. We then obtain the Sensitive Information Flow Model associated with this dominant topic. To ensure the availability of the Sensitive Information Flow Model, the Mallet tool is configured with the list of topics for which the Models are already built on the trusted apps. Given an app description, the Mallet tool then only generates topics from this list. The more diverse the trusted apps we analyze, the more complete the list of models we expect to build. For example, Figure 5(a) shows the topics inferred from the description of a sample AUA "TripOrganizer". The topic "Travel" is highlighted in bold to denote that it is the dominant topic. Next, sensitive information flows in the AUA are extracted as described in Section 4.2.2. The extracted flows are then compared against the flows in the model associated with the dominant topic. If the AUA contains only flows that are common according to the model, the app is considered consistent with the model. If the app contains a flow that is not present in the model or a flow that is present but is uncommon according to the model, the flow, and thus the app, is classified as anomalous. Anomalous flows require further manual inspection by a security analyst, because they could be due to defects, vulnerabilities, or malicious intentions. For example, Figure 5(b) shows three sensitive information flows extracted from the "TripOrganizer" app. Since the dominant topic for this app is "Travel", these flows can be checked against the model associated with this topic shown in Figure 4. For this model, we earlier computed that the threshold is τ_Travel = 7 and that the flow (Contacts → SMS) is common (see Section 4.3). Therefore, flow 1 observed in "TripOrganizer" (Figure 5(b)) is consistent with the model. However, flow 2 (Contacts → Internet) and flow 3 (GPS → Bluetooth), highlighted in bold in Figure 5(b), are uncommon according to the model. As a result, the AUA "TripOrganizer" is classified as anomalous. EMPIRICAL ASSESSMENT In this section, we evaluate the usefulness of our approach and report the results. 
We assess our approach by answering the following research questions: • RQ_Vul: Is AnFlo useful for identifying vulnerable apps containing anomalous information flows? • RQ_Time: How long does AnFlo take to classify apps? • RQ_Topics: Is the topic feature really needed to detect anomalous flows? • RQ_Cat: Can app-store categories be used instead of topics to learn an accurate Sensitive Information Flow Model? • RQ_Mal: Is AnFlo useful for identifying malicious apps? The first research question, RQ_Vul, investigates whether the results of AnFlo are useful for detecting anomalies in vulnerable apps that, for example, may leak sensitive information. RQ_Time investigates the cost of using our approach in terms of the time taken to analyze a given AUA. A short analysis time is essential for tool adoption in a real production environment. Then, in the next two research questions, we investigate the role of topics as a feature for building the Sensitive Information Flow Models. RQ_Topics investigates the absolute contribution of topics, by learning the Sensitive Information Flow Model without considering the topics and by comparing its performance with that of our original model. To answer RQ_Cat, we replace topics with the categories defined in the official market, and we compare the performance of this new model with that of our original model. Finally, the last research question, RQ_Mal, investigates the usefulness of AnFlo in detecting malware based on anomalies in sensitive information flows. Benchmarks and Experimental Settings Trusted Apps AnFlo needs a set of trusted apps to learn what is the normal behavior for "correct and benign" apps. We defined the following guidelines to collect trusted apps: (i) apps that come from the official Google Play Store (so they are scrutinized and checked by the store maintainer) and (ii) apps that are very popular (so they are widely used and reviewed by a large community of end users, and programming mistakes are quickly notified and patched). At the time of crawling the Google Play Store, it had 30 different app categories. From each category, we downloaded, on average, the top 500 apps together with their descriptions. We then discarded apps with non-English descriptions and those with very short descriptions (less than 10 words). Eventually, we were left with 11,796 apps for building the reference models. Additionally, we measured whether these apps were actively maintained by looking at the date of the last update. 70% of the apps were last updated in the 6 months before the Play Store was crawled, while 32% of the apps were last updated within the same month as the crawling. This supports the claim that the trusted apps are well maintained. The fact that the trusted apps we use are suggested and endorsed by the official store, and that they have collected good end-user feedback, allows us to assume that the apps are of high quality and do not contain many security problems. Nevertheless, as explained in Section 4.3, our approach is robust against the inclusion of a small number of anomalous apps in the training set since we adopt a threshold to classify anomalous information flows. Subject Benign Apps AnFlo works on compiled apps and, therefore, the availability of source code is not a requirement for the analysis. However, for this experiment's sake, we opted for open source projects, which enable us to inspect the source code and establish the ground truth. 
The F-Droid repository represents an ideal setting for our experimentation because (i) it includes real world apps that are also popular in the Google Play Store, and (ii) apps can be downloaded with their source code for manual verification of the vulnerability reports delivered by AnFlo. The F-Droid repository was crawled in July 2017 for apps that meet our criteria. Among all the apps available in this repository, we used only those apps that are also available in the Google Play Store and whose descriptions meet our selection criteria (i.e., the description is in English and longer than 10 words). Eventually, our experimental set of benign apps consists of 596 AUAs. Subject Malicious Apps To investigate if AnFlo can identify malware, we need a set of malicious apps with their declared functional descriptions. Malicious apps are usually repackaged versions of popular (benign) apps, injected with malicious code (Trojanized); hence the descriptions of those popular apps they disguise as can be considered as their app descriptions. Thus, by identifying the original versions of these malicious apps in the Google Play Store, we obtain their declared functional descriptions. We consider the malicious apps from the Drebin malware dataset [2], which consists of 5,560 samples that have been collected in the period of August 2010 to October 2012. We randomly sampled 560 apps from this dataset. For each malicious app, we performed static analysis to extract the package name, an identifier used by Android and by the official store to distinguish Android apps (even though this identifier is easy to obfuscate, some apps in our experiment did not rename their package name). We queried the official Google Play market for the original apps, by searching for those having the same package name. Among our sampled repackaged malicious apps, we found 20 of the apps in the official market with the same package name. We analyzed their descriptions and found that only 18 of them have English descriptions. We therefore performed static taint analysis on these 18 malware samples, for which we found their "host" apps in the official market. Our static analysis crashed on 6 cases. Therefore, our experimental set of malicious apps consists of 12 AUAs. Results Detecting Vulnerable Apps Firstly, AnFlo was used to perform static taint analysis on the 11,796 trusted apps and topic analysis on their descriptions from the official Play Store. It then learned the Sensitive Information Flow Models based on the dominant topics and extracted flows as described in Section 4.3. Then, the AUAs from the F-Droid repository (Section 5.1.2) were classified based on the Sensitive Information Flow Models. Out of 596 AUAs, static taint analysis reported 76 apps to contain flows of sensitive information that reach sinks, for a total of 1428 flows. These flows map to 147 distinct source-sink pairs. Out of these 76 apps, 14 AUAs were classified as anomalous. Table 1 shows the analysis results reported by AnFlo. The first column presents the name of the app. The second column presents the app's dominant topic. The third and fourth columns present the source of sensitive data and the sink identified by static taint analysis, respectively. As shown in Table 1, in total AnFlo reported 25 anomalous flows in these apps. We manually inspected the source code available from the repository to determine if these anomalous flows were due to programming defects or vulnerabilities. Two apps were found to be vulnerable (highlighted in boldface in Table 1): com.matoski.adbm and com.mschlauch.comfortreader. 
com.matoski.adbm is a utility app for managing the ADB debugging interface. The anomalous flow involves data from the WiFi configuration that leaks to other apps through Inter Process Communication. Among other information that may leak, the SSID data, which identifies the network to which the device is connected, can be used to infer the user's position and threaten the end user's privacy. Hence, this programming defect leads to an information leakage vulnerability that requires corrective maintenance. We reported this vulnerability to the app owners on their issue tracker. com.mschlauch.comfortreader is a book reader app, with an anomalous flow of data from IPC to the Internet. Manual inspection revealed that this anomalous flow results from a permission re-delegation vulnerability because data coming from another app is used, without sanitization, for opening a data stream. If a malicious app that does not have the permission to use the Internet passes a URL that contains sensitive privacy data (e.g., GPS coordinates), then the app could be used to leak information. We reported this vulnerability to the app developers. Regarding the other 12 AUAs, even though they contain anomalous flows compared to trusted apps, manual inspection revealed that they are neither defective nor vulnerable. For example, some apps contain anomalous flows that involve IPC. Since data may come from other apps via IPC (source) or may flow to other apps via IPC (sink), such flows are considered dangerous in general. However, in these 12 apps, when IPC is a source (e.g., in com.alfray.timeriffic), data is either validated/sanitized before being used in the sink or used in a way that does not threaten security. On the other hand, when IPC is a sink (e.g., in com.dozingcatsoftware.asciicam), the destination is always a component in the same app, so the flows are not actually dangerous. Since AnFlo helped us detect 2 vulnerable apps containing anomalous information flows, we can answer RQ_Vul by stating that AnFlo is useful for identifying vulnerabilities related to anomalous information flows. Classification Time To investigate RQ_Time, we analyze the time required to classify the AUAs. We instrumented the analysis script with the Linux date utility to log the time (in seconds) before starting the analysis and at its conclusion. Their difference is the amount of time spent in the computation. The experiment was run on a multi-core cluster, specifically designed to let a process run without sharing memory or computing resources with other processes. Thus, we assume that the time measurement is reliable. Classification time includes the static analysis step to extract data flows, the natural language step to extract topics from the description, and the comparison with the Sensitive Information Flow Model to check for consistency. Figure 6 reports the boxplot of the time (in minutes) needed to classify the F-Droid apps and the descriptive statistics. On average, an app takes 1.9 minutes to complete the classification and most of the analyses concluded in less than 3 minutes (median = 1.5). Only a few (outlier) cases require longer analysis time. Topics from App Description We now run another experiment to verify our claim that topics are important features to build an accurate model (RQ_Topics). 
We repeated the same experiment as before, but using only flows as features and without considering topics, to check how much detection accuracy we lose in this way. We still consider all the trusted apps for learning the reference model, but we only use static analysis data. That is, we do not create a separate matrix for each topic; instead we create one big single matrix with sources and sinks for all the apps. This Sensitive Information Flow Model is then used to classify the F-Droid apps and the results are shown in Table 2. As we can see, only four apps are detected as anomalous by this second approach, and all of them were already detected by our original, proposed approach. Manual inspection revealed that none of them is vulnerable. This suggests that topic is a very important feature to learn reference models in order to detect a larger number of anomalous apps. In fact, when topics are not considered and all the apps are grouped together regardless of their topics, we observe a smoothing effect. Differences among apps become less relevant for detecting anomalies. While in the previous model an app was compared only against the apps grouped under the same topic, here an app is compared to all the trusted apps. Without topic as a feature, our model loses the ability to capture the characteristics of distinct groups and, thus, the ability to detect deviations from them. Play Store Categories To investigate RQ_Cat, instead of grouping trusted apps based on topics, we group them according to their app categories as determined by the official Google Play Store. First of all we split the trusted apps into groups based on the market category they belong to. We then use static analysis information about flows to build a separate source-sink matrix for each category. Eventually we compute thresholds to complete the model. We then classify each AUA from F-Droid by comparing it with the model of the corresponding market category. The classification results are reported in Table 3. Ten apps are reported as containing anomalous flows and most of them were also detected by our original, proposed approach (Table 1). Two apps reported by this approach were not reported by our proposed approach: com.angrydoughnuts.android.alarmclock and com.futurice.android.reservator. However, they are neither vulnerable nor malicious. Only one of the apps detected by this approach is a case of vulnerability, namely com.matoski.adbm, highlighted in boldface, which was also detected by our proposed approach. Hence, this result supports our design decision of using topics. Comparison of the Models Table 4 summarizes the results of the model comparison. The first model (first row) considers both data flows and description topics as features. Even though this approach reported the largest number of false positives (12 apps, 'FP' column), we were able to detect 2 vulnerabilities ('Vuln.' column) by tracing the anomalies reported by this approach. It also detected 5 additional anomalous apps that other approaches did not detect ('Unique' column). The second model (second row) considers only data flows as a feature. Even though the number of false positives drops to 4, we were not able to detect any vulnerability by tracing the anomalies reported by this approach. This result suggests that modeling only flows is not enough for detecting vulnerabilities. 
When market categories are used instead of description topics (last row), the number of false positives drops to 9 (25% less compared to our proposed model). It detected 2 additional anomalous apps that other approaches did not detect ('Unique' column). Tracing the anomalies reported by this approach, we detected only one out of the two vulnerabilities that we detected using our proposed approach. This result suggests that topics are more useful than categories for detecting vulnerable apps containing anomalous information flows. Detecting Malicious Apps Anomalies in the flow of sensitive data could be due to malicious behaviors as well. The goal of this last experiment is to investigate whether AnFlo can be used to identify malware (RQ_Mal). To this aim, we use the Sensitive Information Flow Model (learned on the trusted apps) to classify the 18 AUAs from the Drebin malware dataset. Data flow features are extracted from these malicious apps using static analysis. However, static taint analysis crashed on 6 apps because of their heavy obfuscation. Since improving the static taint analysis implementation to work on heavily obfuscated code is out of the scope of this paper, we ran the experiment on the remaining 12 apps. Topics are extracted from the descriptions of the original versions of those malware, which are available in the official market store. The malicious apps have been subjected to anomaly detection based on three distinct feature sets: (i) flows and topics; (ii) only flows; and (iii) flows and market categories. The classification results are shown in Table 5. The first column reports the malware name (according to the ESET-NOD32 antivirus) and the second column contains the name of the original app that was repackaged to spread the malware. The remaining three columns report the results of malware detection by the three models based on different sets of features: a tick mark ("✓") means that the model correctly detected the app as anomalous, while a cross ("✗") means no anomaly was detected. While the model based on topics and the model based on market categories classified the same 6 AUAs as malicious, the model based only on flows classified only 4 AUAs as malicious. All the malware except TrojanSMS.Agent are cases of privacy-sensitive information leaks, such as the device ID, phone number, e-mail or GPS coordinates being sent over the network or via SMS. One typical malicious behavior is observed in Spy.GoldDream. In this case, after querying the list of installed packages (sensitive data source), the malware attempts to kill selected background processes (sensitive sink). This is a typical malicious behavior observed in malware that tries to avoid detection by stopping security products such as antiviruses. Botnet behavior is observed in DroidKunFu. A command and control (C&C) server command is consulted (sensitive source) before performing privileged actions on the device (sensitive sink). As shown in Table 5, when only static analysis features are used in the model, two malicious apps are missed. This is because this limited model compares the given AUA against all the trusted apps, instead of only the apps from a specific subset (grouped by the common topic or the same category). A flow that would have been anomalous for the specific topic (or the specific category) might be normal for another topic/category. For example, acquiring the GPS coordinates and sending them over the network is common for navigation or transportation apps. 
However, it is not a common behavior for tools apps, which is the case of the Anserver malware. The remaining 6 apps in the dataset were consistently classified as not-anomalous by all the models. These false negatives are mainly due to malicious behaviors that are not related to sensitive information flows, such as dialing calls in the background or blocking messages. Another reason is obfuscation applied by the malware to hide its sensitive information flows; static analysis inherently cannot handle obfuscation. Limitation and Discussion In the following, we discuss some of the limitations of our approach and of its experimental validation. The most prominent limitation to adopting our approach is the availability of trusted apps to build the model of sensitive information flows. In our experimental validation, we trusted top ranked popular apps from the official app store, but we have no guarantee that they are all immune from vulnerabilities and from malware content. However, as explained in Section 4.3, our approach is quite robust with respect to the inclusion of a small number of defective, vulnerable, or malicious apps in the training set, as long as the majority of the training apps are benign and correct. This is because we use a threshold-based approach that models flows common to a large set of apps. Thus, vulnerable flows occurring in few training apps are not learnt as normal in the model, and they would still be classified as anomalous when observed in a given AUA. A flow classified as anomalous by our model needs further manual analysis to check whether the anomaly is a vulnerability, a malicious behavior, or is in fact safe. Manual inspection could be an expensive task that might delay the delivery of the software product. However, in our experimental validation, manual filtering of the results took quite a short time, on average 30 minutes per app. Considering that the code of the apps under review was new to us, we expect a shorter manual filtering phase for a developer who is familiar with the code of her/his own app. All in all, the manual effort required to filter the results of the automated tool seems to be compatible with the fast time-to-market pressure of smartphone apps. When building sensitive information flow models, we also considered grouping apps with a clustering technique based on the full topic distribution, instead of grouping based on the dominant topic alone. However, in preliminary experiments with this method we observed that grouping apps by dominant topic produces more cohesive groups, i.e., groups of apps that are more similar to each other. Inherently, it is difficult for static analysis-based approaches, including ours, to handle obfuscated code. Therefore, if training apps are obfuscated (e.g., to limit reverse engineering attacks), our approach may collect incomplete static information and only build a partial model. Likewise, if the AUA is obfuscated, our approach may not detect the anomalies. As future work, we plan to combine our approach with dynamic analysis to deal with obfuscation. CONCLUSION In this paper, we proposed a novel approach to analyze the flows of sensitive information in Android apps. In our approach, trusted apps are first analyzed to extract topics from their descriptions and data flows from their code. Topics and flows are then used to learn Sensitive Information Flow models. We can use these models for analyzing new Android apps to determine whether they contain anomalous information flows. 
Our experiments show that this approach could detect anomalous flows in vulnerable and malicious apps quite fast.
6,667
1812.07567
2951007713
Highly Autonomous Driving (HAD) systems rely on deep neural networks for the visual perception of the driving environment. Such networks are trained on large manually annotated databases. In this work, a semi-parametric approach to one-shot learning is proposed, with the aim of bypassing the manual annotation step required for training perception systems used in autonomous driving. The proposed generative framework, coined Generative One-Shot Learning (GOL), takes as input single one-shot objects, or generic patterns, and a small set of so-called regularization samples used to drive the generative process. New synthetic data is generated as Pareto optimal solutions from one-shot objects using a set of generalization functions built into a generalization generator. GOL has been evaluated on environment perception challenges encountered in autonomous vision.
OpenAI published an interesting robotics application of one-shot learning, described in @cite_1 . Their algorithm, named one-shot imitation learning, uses a meta-learning framework for training robotic systems to perform certain tasks based only on a couple of demonstrations. Also, the large amount of data required by Deep Reinforcement Learning systems has been addressed in @cite_24 through the introduction of a deep meta-reinforcement learning algorithm.
{ "abstract": [ "In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.", "Imitation learning has been commonly applied to solve different tasks in isolation. This usually requires either careful feature engineering, or a significant number of samples. This is far from what we desire: ideally, robots should be able to learn from very few demonstrations of any given task, and instantly generalize to new situations of the same task, without requiring task-specific engineering. In this paper, we propose a meta-learning framework for achieving such capability, which we call one-shot imitation learning. Specifically, we consider the setting where there is a very large set of tasks, and each task has many instantiations. For example, a task could be to stack all blocks on a table into a single tower, another task could be to place all blocks on a table into two-block towers, etc. In each case, different instances of the task would consist of different sets of blocks with different initial states. At training time, our algorithm is presented with pairs of demonstrations for a subset of all tasks. A neural net is trained that takes as input one demonstration and the current state (which initially is the initial state of the other demonstration of the pair), and outputs an action with the goal that the resulting sequence of states and actions matches as closely as possible with the second demonstration. At test time, a demonstration of a single instance of a new task is presented, and the neural net is expected to perform well on new instances of this new task. The use of soft attention allows the model to generalize to conditions and tasks unseen in the training data. We anticipate that by training this model on a much greater variety of tasks and settings, we will obtain a general system that can turn any demonstrations into robust policies that can accomplish an overwhelming variety of tasks. Videos available at this https URL ." ], "cite_N": [ "@cite_24", "@cite_1" ], "mid": [ "2550182557", "2601322194" ] }
Generative One-Shot Learning (GOL): A Semi-Parametric Approach to One-Shot Learning in Autonomous Vision
0
1812.07567
2951007713
Highly Autonomous Driving (HAD) systems rely on deep neural networks for the visual perception of the driving environment. Such networks are trained on large manually annotated databases. In this work, a semi-parametric approach to one-shot learning is proposed, with the aim of bypassing the manual annotation step required for training perception systems used in autonomous driving. The proposed generative framework, coined Generative One-Shot Learning (GOL), takes as input single one-shot objects, or generic patterns, and a small set of so-called regularization samples used to drive the generative process. New synthetic data is generated as Pareto optimal solutions from one-shot objects using a set of generalization functions built into a generalization generator. GOL has been evaluated on environment perception challenges encountered in autonomous vision.
One of the most influential works related to GOL is the research on Generative Adversarial Networks (GAN) @cite_27 . The major difference between GOL and the adversarial nets is that, within GOL, the generation of synthetic information is performed based on the generalization functions which generate new data from a one-shot object, thus making GOL a pure one-shot learning framework.
{ "abstract": [ "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ], "cite_N": [ "@cite_27" ], "mid": [ "2099471712" ] }
Generative One-Shot Learning (GOL): A Semi-Parametric Approach to One-Shot Learning in Autonomous Vision
0
1812.07667
2950720513
This paper presents a novel deep learning framework for human trajectory prediction and detecting social group membership in crowds. We introduce a generative adversarial pipeline which preserves the spatio-temporal structure of the pedestrian's neighbourhood, enabling us to extract relevant attributes describing their social identity. We formulate the group detection task as an unsupervised learning problem, obviating the need for supervised learning of group memberships via hand labeled databases, allowing us to directly employ the proposed framework in different surveillance settings. We evaluate the proposed trajectory prediction and group detection frameworks on multiple public benchmarks, and for both tasks the proposed method demonstrates its capability to better anticipate human sociological behaviour compared to the existing state-of-the-art methods.
One of the most popular deep learning methods is the social LSTM @cite_24 model, which represents the pedestrians in the local neighbourhood using LSTMs and then generates their future trajectory by systematically pooling the relevant information. This removes the need for handcrafted features and learns the required feature vectors automatically through the encoded trajectory representation. This architecture is further augmented in @cite_6 where the authors propose a more efficient method to embed the local neighbourhood information via a soft and hardwired attention framework. They demonstrate the importance of fully capturing the context information, which includes the short-term history of the pedestrian of interest as well as that of their neighbours.
{ "abstract": [ "Pedestrians follow different trajectories to avoid obstacles and accommodate fellow pedestrians. Any autonomous vehicle navigating such a scene should be able to foresee the future positions of pedestrians and accordingly adjust its path to avoid collisions. This problem of trajectory prediction can be viewed as a sequence generation task, where we are interested in predicting the future trajectory of people based on their past positions. Following the recent success of Recurrent Neural Network (RNN) models for sequence prediction tasks, we propose an LSTM model which can learn general human movement and predict their future trajectories. This is in contrast to traditional approaches which use hand-crafted functions such as Social forces. We demonstrate the performance of our method on several public datasets. Our model outperforms state-of-the-art methods on some of these datasets. We also analyze the trajectories predicted by our model to demonstrate the motion behaviour learned by our model.", "As humans we possess an intuitive ability for navigation which we master through years of practice; however existing approaches to model this trait for diverse tasks including monitoring pedestrian flow and detecting abnormal events have been limited by using a variety of hand-crafted features. Recent research in the area of deep-learning has demonstrated the power of learning features directly from the data; and related research in recurrent neural networks has shown exemplary results in sequence-to-sequence problems such as neural machine translation and neural image caption generation. Motivated by these approaches, we propose a novel method to predict the future motion of a pedestrian given a short history of their, and their neighbours, past behaviour. The novelty of the proposed method is the combined attention model which utilises both \"soft attention\" as well as \"hard-wired\" attention in order to map the trajectory information from the local neighbourhood to the future positions of the pedestrian of interest. We illustrate how a simple approximation of attention weights (i.e hard-wired) can be merged together with soft attention weights in order to make our model applicable for challenging real world scenarios with hundreds of neighbours. The navigational capability of the proposed method is tested on two challenging publicly available surveillance databases where our model outperforms the current-state-of-the-art methods. Additionally, we illustrate how the proposed architecture can be directly applied for the task of abnormal event detection without handcrafting the features." ], "cite_N": [ "@cite_24", "@cite_6" ], "mid": [ "2424778531", "2949153416" ] }
GD-GAN: Generative Adversarial Networks for Trajectory Prediction and Group Detection in Crowds
Understanding and predicting crowd behaviour plays a pivotal role in video-based surveillance and, as such, is becoming essential for discovering public safety risks and predicting crimes or patterns of interest. Recently, focus has been given to understanding human behaviour at a group level, leveraging observed social interactions. Researchers have shown this to be important as interactions occur at a group level, rather than at an individual or whole-of-crowd level. As such we believe group detection has become a mandatory part of an intelligent surveillance system; however this group detection task presents several new challenges [31,32]. Other than identifying and tracking pedestrians from video, modelling the semantics of human social interaction and cultural gestures over a short sequence of clips is extremely challenging. Several attempts [27,31,32,34] have been made to incorporate handcrafted physics-based features such as the relative distance between pedestrians, trajectory shape and motion-based features to model their social affinity. Hall et al. [16] proposed a proxemic theory for such physical interactions based on different distance boundaries; however recent works [31,32] have shown that these quantisations fail in cluttered environments. Furthermore, proximity does not always determine group membership. For instance, two pedestrians sharing a common goal may start their trajectories at two distinct source positions and only meet in the middle. Hence we believe that relying on a handful of handcrafted features is sub-optimal [1,10,19]. Fig. 1. Proposed group detection framework: After observing short segments of trajectories for each pedestrian in the scene, we apply the proposed trajectory prediction algorithm to forecast their future trajectories. The context representation generated at this step is extracted and compressed using t-SNE dimensionality reduction. Finally, the DBSCAN clustering algorithm is applied to detect the pedestrian groups. To this end we propose a deep learning algorithm which automatically learns these group attributes. We take inspiration from the trajectory modelling approaches of [8] and [11], which capture contextual information from the local neighbourhood. We further augment this approach with a Generative Adversarial Network (GAN) [10,15,28] learning pipeline where we learn a custom, task-specific loss function which is specifically tailored for future trajectory prediction, learning to imitate complex human behaviours. Fig. 1 illustrates the proposed approach. First, we observe short segments of trajectories from frame 1 to T_obs for each pedestrian p_k in the scene. Then, we apply the proposed trajectory prediction algorithm to forecast their future trajectories from T_obs+1 to T_pred. This step generates hidden context representations for each pedestrian describing the current environmental context in the local neighbourhood of the pedestrian. We then apply t-SNE dimensionality reduction to extract the most discriminative features, and we detect the pedestrian groups by clustering these reduced features. The simple nature of the proposed framework offers direct transferability among different environments when compared to the supervised learning approaches of [27,31,32,34], which require re-training of the group detection process whenever the surveillance scene changes. 
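As a rough sketch of the unsupervised group-detection stage of this pipeline, the dimensionality reduction and clustering could be performed with scikit-learn as below. The context representations would come from the trained trajectory prediction model; here they are random placeholders, and the t-SNE and DBSCAN parameter values are illustrative assumptions rather than the settings used in this work.

# Minimal sketch of the group-detection stage: compress per-pedestrian context
# representations with t-SNE and cluster them with DBSCAN.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

num_pedestrians, context_dim = 30, 64
contexts = np.random.randn(num_pedestrians, context_dim)   # placeholder embeddings

# Compress to a low-dimensional space in which distances are easier to cluster.
embedded = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(contexts)

# Density-based clustering: pedestrians falling in the same dense region form a
# group; the label -1 marks pedestrians not assigned to any group.
groups = DBSCAN(eps=3.0, min_samples=2).fit_predict(embedded)
print(groups)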
This ability is a result of the proposed deep feature learning framework which learns the required group attributes automatically and attains commendable results among the state-of-the-art. Novel contributions of this paper can be summarised as follows: -We propose a novel GAN pipeline which jointly learns informative latent features for pedestrian trajectory forecasting and group detection. -We remove the supervised learning requirement for group detection, allowing direct transferability among different surveillance scenes. -We demonstrate how the original GAN objective could be augmented with sparsity regularisation to learn powerful features which are informative to both trajectory forecasting and group detection tasks. -We provide extensive evaluations of the proposed method on multiple public benchmarks where the proposed method is able to generate notable performance, especially among unsupervised learning based methods. -We present visual evidence on how the proposed trajectory modelling scheme has been able to embed social interaction attributes into its encoding scheme. Human Behaviour Prediction Social Force models [17,34], which rely on the attractive and repulsive forces between pedestrians to model their future behaviour, have been extensively applied for modelling human navigational behaviour. However with the dawn of deep learning, these methods have been replaced as they have been shown to ill represent the structure of human decision making [7,8,15]. One of the most popular deep learning methods is the social LSTM [1] model which represents the pedestrians in the local neighbourhood using LSTMs and then generates their future trajectory by systematically pooling the relavant information. This removes the need for handcrafted features and learns the required feature vectors automatically through the encoded trajectory representation. This architecture is further augmented in [8] where the authors propose a more efficient method to embed the local neighbourhood information via a soft and hardwired attention framework. They demonstrate the importance of fully capturing the context information, which includes the short-term history of the pedestrian of interest as well as their neighbours. Generative Adversarial Networks (GANs) [10,15,28] propose a task specific loss function learning process where the training objective is a minmax game between the generative and discriminative models. These methods have shown promising results, overcoming the intractable computation of a loss function, in tasks such as autonomous driving [9,23], saliency prediction [10,25], image to image translation [19] and human navigation modelling [15,28]. Even though the proposed GAN based trajectory modelling approach exhibits several similarities to recent works in [15,28], the proposed work differs in multiple aspects. Firstly, instead of using CNN features to extract the local structure of the neighbourhood as in [28], pooling out only the current state of the neighbourhood as in [15], or discarding the available historical behaviour which is shown to be ineffective [7,8,28]; we propose an efficient method to embed the local neighbourhood context based on the soft and hardwired attention framework proposed in [8]. Secondly, as we have an additional objective of localising the groups in the given crowd, we propose an augmentation to the original GAN objective which regularises the sparsity of the generator embeddings, generating more discriminative features and aiding the clustering processes. 
Group Detection. Some earlier works in group detection [5,29] employ the concept of F-formations [20], which can be seen as specific orientation patterns that individuals engage in when in a group. However, such methods are only suited to stationary groups. In a separate line of work, researchers have analysed pedestrian trajectories to detect groups. Pellegrini et al. [27] applied Conditional Random Fields to jointly predict the future trajectory of the pedestrian of interest as well as their group membership. [34] utilises distance, speed and overlap time to train a linear SVM to classify whether two pedestrians are in the same group or not. In contrast to these supervised methods, Ge et al. [13] proposed using agglomerative clustering of speed and proximity features to extract pedestrian groups. Most recently, Solera et al. [31] proposed proximity and occupancy based social features to detect groups using a trained structural SVM. In [32] the authors extend this preliminary work with the introduction of sociologically inspired features such as path convergence and trajectory shape. However, these supervised learning mechanisms rely on hand labeled datasets to learn group segmentation, limiting the methods' applicability. Furthermore, the above methods all utilise a predefined set of handcrafted features to describe the sociological identity of each pedestrian, which may be suboptimal. Motivated by the impressive results obtained in [8] with the augmented context embedding, we make the first effort to learn group attributes automatically and jointly through trajectory prediction.

Architecture

Neighbourhood Modelling. We use the trajectory modelling framework of [8] (shown in Fig. 2), where a soft attention context vector $C^{s,k}_t$ is used to embed trajectory information from the pedestrian of interest, and a hardwired attention context vector $C^{h,k}_t$ is used to embed the effect of the neighbouring trajectories. Let the trajectory of pedestrian $k$, from frame 1 to $T_{obs}$, be given by
$p^k = [p_1, \ldots, p_{T_{obs}}]$, (1)
where the trajectory is composed of points in a Cartesian grid. Then we pass each trajectory through an LSTM [18] encoder to generate its hidden embeddings,
$h^k_t = LSTM(p^k_t, h^k_{t-1})$, (2)
generating a sequence of embeddings,
$h^k = [h^k_1, \ldots, h^k_{T_{obs}}]$. (3)
Following [8], the trajectory of the pedestrian of interest is embedded with soft attention such that
$C^{s,k}_t = \sum_{j=1}^{T_{obs}} \alpha_{tj} h^k_j$, (4)
which is the weighted sum of hidden states. The weight $\alpha_{tj}$ is computed by
$\alpha_{tj} = \frac{\exp(e_{tj})}{\sum_{l=1}^{T} \exp(e_{tl})}$, (5)
$e_{tj} = a(h^k_{t-1}, h^k_j)$. (6)
The function $a$ is a feed forward neural network jointly trained with the other components. To embed the effect of the neighbouring trajectories we use the hardwired attention context vector $C^{h,k}_t$ from [8]. The hardwired weight $w$ is computed by
$w^n_j = \frac{1}{dist(n, j)}$, (7)
where $dist(n, j)$ is the distance between the $n$-th neighbour and the pedestrian of interest at the $j$-th time instant. Then we compute $C^{h,k}_t$ as the aggregation over all the neighbours such that
$C^{h,k}_t = \sum_{n=1}^{N} \sum_{j=1}^{T_{obs}} w^n_j h^n_j$, (8)
where there are $N$ neighbouring trajectories in the local neighbourhood, and $h^n_j$ is the encoded hidden state of the $n$-th neighbour at the $j$-th time instant. Finally, we merge the soft attention and hardwired attention context vectors to represent the current neighbourhood context such that
$C^{*,k}_t = \tanh([C^{s,k}_t, C^{h,k}_t])$. (9)

Trajectory Prediction. Unlike [8], we use a GAN to predict the future trajectory.
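Before detailing the GAN-based predictor, the following is a minimal NumPy sketch of the neighbourhood encoding above (Eq. (2)-(9)): soft attention over the pedestrian's own hidden states and the inverse-distance hardwired aggregation over neighbour hidden states, merged as in Eq. (9). The hidden states are random stand-ins for the LSTM outputs and the scoring network a(·) is reduced to a single hypothetical bilinear form, so this is an illustration of the formulas rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T_obs, N_nbr, H = 15, 4, 300                   # observed frames, neighbours, hidden size

# Stand-ins for LSTM hidden states (Eq. 2-3): pedestrian of interest and neighbours.
h_self = rng.normal(size=(T_obs, H))           # h^k_1 ... h^k_{T_obs}
h_nbrs = rng.normal(size=(N_nbr, T_obs, H))    # h^n_j for each neighbour n
dist = rng.uniform(1.0, 10.0, size=(N_nbr, T_obs))  # dist(n, j) > 0

def soft_attention_context(h_prev, h_seq, W_a):
    """Eq. (4)-(6): score each past state (here with a bilinear form standing in for a), softmax, weighted sum."""
    scores = np.array([h_prev @ W_a @ h_j for h_j in h_seq])   # e_tj
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                       # alpha_tj
    return alpha @ h_seq                                       # C^{s,k}_t

def hardwired_context(h_nbrs, dist):
    """Eq. (7)-(8): inverse-distance weighted aggregation over all neighbours and frames."""
    w = 1.0 / dist                                             # w^n_j
    return np.einsum('nt,nth->h', w, h_nbrs)                   # C^{h,k}_t

W_a = rng.normal(size=(H, H)) / np.sqrt(H)      # hypothetical scoring parameters for a(., .)
C_s = soft_attention_context(h_self[-1], h_self, W_a)
C_h = hardwired_context(h_nbrs, dist)
C_star = np.tanh(np.concatenate([C_s, C_h]))    # Eq. (9): merged neighbourhood context
print(C_star.shape)                              # (600,)
```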
There exists a minmax game between the generator (G) and the discriminator (D) guiding the model G to be closer to the ground truth distribution. The process is guided by learning a custom loss function which generates an additional advantage when modelling complex behaviours such as human navigation, where multiple factors such as human preferences and sociological factors influence behaviour. Trajectory prediction can be formulated as observing the trajectory from time 1 to T obs , denoted as [p 1 , . . . , p T obs ], and forecasting the future trajectory for time T obs+1 to T pred , denoted as [y T obs+1 , . . . , y T pred ]. The GAN learns a mapping from a noise vector z to an output vector y, G : z → y [10]. Adding the notion of time, the output of the model y t can be written as G : z t → y t . We augment the generic GAN mapping to be conditional on the current neighbourhood context C * t , G : (C * t , z t ) → y t , such that the synthesised trajectories follow the social navigational rules that are dictated by the environment. This objective can be written as, V = E yt,C * t ∼p data ([logD(C * t , y t )])+E C * t ∼p data ,zt∼noise ([1−logD(C * t , G(C * t , z t ))]).(10) Our final aim is to utilise the hidden state embeddings from the trajectory generator to discover the pedestrian groups via clustering those embeddings. Hence having a sparse feature vector for clustering is beneficial as they are more discriminative compared to their dense counterparts [12]. Hence we augment the objective in Eq. 10 with a sparsity regulariser such that, L 1 = ||f (G(C * t , z t ))|| 1 ,(11) and V * = V + λL 1 ,(12) where f is a feature extraction function which extracts the hidden embeddings from the trajectory generator G, and λ is a weight vector which controls the tradeoff between the GAN objective and the sparsity constraint. Trajectory Generator (G) The architecture of the proposed trajectory prediction framework is presented in Fig. 3. We utilise LSTMs as the Generator (G) and the Discriminator (D) models. G samples from the noise distribution, z, and synthesises a trajectory for the pedestrian motion which is conditioned upon the local neighbourhood context, C * t , of that particular pedestrian. Utilising these predicted trajectories, y t , and the context embeddings, C * t , D tries to discriminate between the synthesised and ground truth human trajectories. Fig. 1 illustrates the proposed group detection framework. We pass each trajectory in the given scene through Eq. 2 to Eq. 9 and generate the neighbourhood embeddings, C * ,k t . Then using the feature extraction function f we extract the hidden layer activations for each pedestrian k such that, Group Detection θ k t = f (G(C * ,k t , z t )).(13) Then we pass the extracted feature vectors through a t-SNE [24] dimensionality reduction step. The authors in [12] have shown that it is inefficient to cluster dense deep features. However they have shown the t-SNE algorithm to generate discriminative features capturing the salient aspects in each feature dimension. Hence we apply t-SNE for the k th pedestrian in the scene such that, η k = t-SNE([θ k 1 , . . . , θ k T obs ]).(14) As the final step we apply DBSCAN [6] to discover similar activation patterns, hence segmenting the pedestrian groups. DBSCAN enables us to cluster the data on the fly without specifying the number of clusters. The process can be written as, [β 1 , . . . , β N ] = DBSCAN([η 1 , . . . , η N ]),(15) where there are N pedestrians in the given scene and β n ∈ [β 1 , . . 
. , β N ] are the generated cluster identities. Evaluation and Discussion Implementation Details When encoding the neighbourhood information, similar to [8], we consider the closest 10 neighbours from each of the left, right, and front directions of the pedestrian of interest. If there are more than 10 neighbours in any direction, we take the closest 9 trajectories and the mean trajectory of the remaining neighbours. If a trajectory has less than 10 neighbours, we created dummy trajectories with hardwired weights (i.e Eq. 7) of 0, such that we always have 10 neighbours. For all LSTMs, including LSTMs for neighbourhood modelling (i.e Sec. 3.1), the trajectory generator and the discriminator (i.e Sec 3.2), we use a hidden state embedding size of 300 units. We trained the trajectory prediction framework iteratively, alternating between a generator epoch and a discriminator epoch with the Adam [21] optimiser, using a mini-batch size of 32 and a learning rate of 0.001 for 500 epochs. The hyper parameter λ = 0.2, and the hyper parameters of DBSCAN, epsilon= 0.50, minPts= 1, are chosen experimentally. Evaluation of the Trajectory Prediction Datasets We evaluate the proposed trajectory predictor framework on the publicly available walking pedestrian dataset (BIWI) [26], Crowds By Examples (CBE) [22] dataset and Vittorio Emanuele II Gallery (VEIIG) dataset [3]. The BIWI dataset records two scenes, one outside a university (ETH) and one at a bus stop (Hotel). CBE records a single video stream with a medium density crowd outside a university (Student 003). The VEIIG dataset provides one video sequence from an overhead camera in the Vittorio Emanuele II Gallery (gall). The training, testing and validation splits for BIWI, CBE and VEIIG are taken from [26], [31] and [32] respectively. These datasets include a variety of pedestrian social navigation scenarios including collisions, collision avoidance and group movements, hence presenting challenging settings for evaluation. Compared to BIWI which has low crowd densities, CBE and VEIIG contain higher crowd densities and as a result more challenging crowd behaviour arrangements, continuously varying from medium to high densities. Evaluation Metrics Similar to [15,28] we evaluated the trajectory prediction performance with the following 2 error metrics: Average Displacement Error (ADE) and Final Displacement Error (FDE). Please refer to [15,28] for details. Baselines and Evaluation We compared our trajectory prediction model to 5 state-of-the-art baselines. As the first baseline we use the Social Force (SF) model introduced in [34], where the destination direction is taken as an input to the model and we train a linear SVM model similar to [8] to generate this input. We use the Social-LSTM (So-LSTM) model of [1] as the next baseline and the neighbourhood size hyper-parameter is set to 32 px. We also compare to the Soft + Hardwired Attention (SHA) model of [8] and similar to the proposed model we set the embedding dimension to be 300 units and consider a 30 total neighbouring trajectories. We also considered the Social GAN (So-GAN) [15] and attentive GAN (SoPhie) [28] models. To provide fair comparisons we set the hidden state dimensions for the encoder and decoder models of So-GAN and SoPhie to be 300 units. For all models we observe the first 15 frames (i.e 1-T obs ) and predicted the future trajectory for the next 15 frames (i.e T obs+1 -T pred ). When observing the results tabulated in Tab. 
1 we observe poor performance for the SF model due to it's lack of capacity to model history. Models So-LSTM and SHA utilise short term history from the pedestrian of interest and the local neighbourhood and generate improved predictions. However we observe a significant increase in performance from methods that optimise generic loss functions such as So-LSTM and SHA to GAN based methods such as So-GAN and So-Phie. This emphasises the need for task specific loss function learning in order to imitate complex human social navigation strategies. In the proposed method we further augment this performance by conditioning the trajectory generator on the proposed neighbourhood encoding mechanism. We present a qualitative evaluation of the proposed trajectory generation framework with the SHA and So-GAN baselines in Fig. 4 (selected based on the availability of their implementations). The observed portion of the trajectory is denoted in green, the ground truth observations in blue and predicted trajectories are shown in red (proposed), yellow (SHA) and brown (So-GAN). Observing the qualitative results it can be clearly seen that the proposed model generates better predictions compared to the state-of-the-art considering the varying nature of the neighbourhood clutter. For instance in Fig. 4 (c) and (d) we observe significant deviations between the predictions for SHA and So-GAN and the ground truth. However the proposed model better anticipates the pedestrian motion with the improved context modelling and learning process. It should be noted that the proposed method has a better ability to anticipate stationary groups compared to the baselines, which is visible in Fig. 4 (c). Evaluation of the Group Detection Datasets Similar to Sec. 4.2 we use the BIWI, CBE and VEIIG datasets in our evaluation. Dataset characteristics are reported in Tab. 2. Evaluation Metrics One popular measure of clustering accuracy is the pairwise loss ∆ pw [35], which is defined as the ratio between the number of pairs on which β andβ disagree on their cluster membership and the number of all possible pairs of elements in the set. However as described in [31,32] ∆ pw accounts only for positive intra-group relations and neglects singletons. Hence we also measure the Group-MITRE loss, ∆ GM , introduced in [31], which has overcome this deficiency. ∆ GM adds a fake counterpart for singletons and each singleton is connected with it's counterpart. Therefore δ GM also takes singletons into consideration. Baselines and Evaluation We compare the proposed Group Detection GAN (GD-GAN) framework against 5 recent state-of-the-art baselines, namely [13,30,32,34,35], selected based on their reported performance in public benchmarks. In Tab. 3 we report the Precision (P ) and Recall (R) values for ∆ pw and ∆ GM for the proposed method along with the state-of-the-art baselines. The proposed GD-GAN method has been able to achieve superior results, especially among unsupervised grouping methods. It should be noted that methods [30,32,34,35] utilise handcrafted features and use supervised learning to separate the groups. As noted in Sec. 1 these methods cannot adapt to scene variations and require hand labeled datasets for training. Furthermore we would like to point out that the supervised grouping mechanism in [32] directly optimises ∆ GM . 
However, without such tedious annotation requirements and learning strategies, the proposed method has been able to generate commendable and consistent results in all considered datasets, especially in cluttered environments 2 . In Fig. 5 we show groups detected by the proposed GD-GAN method for sequences from the CBE and VEIIG datasets. Regardless of the scene context, occlusions and the varying crowd densities, the proposed GD-GAN method generates acceptable results. We believe this is due to the augmented features that we derive through the automated deep feature learning process. These features account for both historical and future behaviour of the individual pedestrians, hence possessing an ability to detect groups even in the presence of occlusions such as in Fig. 5 (c). We selected the first 30 pedestrian trajectories from the VEIIG test set and in Fig. 6 we visualise the embedding space positions before (in blue) and after (in red) training of the proposed trajectory generator (G). Similar to [2] we extracted the activations using the feature extractor function f and applied PCA Table 3. Comparative results on the BIWI [26], CBE [22] and VEIIG [3] datasets using the ∆GM [31] and ∆P W [35] metrics. '-' refers to unavailability of that specific evaluation. The best results are shown in bold and the second best results are underlined. Shao et. al [30] zanotto et. al [35] Yamaguchi et. al [34] Ge et. al [13] Solera et al. [ [33] to plot them in 2D. The respective ground truth group IDs are indicated in brackets. This helps us to gain an insight into the encoding process that G utilises, which allows us to discover groups of pedestrians. Considering the examples given, it can be seen that trajectories from the same cluster become more tightly grouped. This is due to the model incorporating source positions, heading direction, trajectory similarity, when embedding trajectories, allowing us to extract pedestrian groups in an unsupervised manner. Ablation Experiment To further demonstrate the proposed group detection approach, we conducted a series of ablation experiments identifying the crucial components of the proposed methodology 3 . In the same setting as the previous experiment we compare the proposed GD-GAN model against a series of counter parts as follows: -GD-GAN / GAN: removes D and the model G is learnt through supervised learning as in [8]. -GD-GAN / cGAN: optimises the generic GAN objective defined in [14]. -GD-GAN / L 1 : removes sparsity regularisation and optimises Eq. 10. -GD-GAN + hf: utilises features from G as well as the handcrafted features defined in [32] for clustering. The results of our ablation experiment are presented in Tab. 4. Model GD-GAN / GAN performs poorly due to the deficiencies in the supervised learning process. It optimises a generic mean square error loss, which is not ideal to guide 3 see the supplementary material for an ablation study for the trajectory prediction the model through the learning process when modelling a complex behaviour such as human navigation. Therefore the resultant feature vectors do not capture the full context which contributes to the poor group detection accuracies. We observe an improvement in performance with GD-GAN / cGAN due to the GAN learning process which is further augmented and improved through GD-GAN / L 1 where the model learns a conditional behaviour depending on the neighbourhood context. 
L 1 regularisation further assists the group detection process via making the learnt feature distribution more discriminative. In order to demonstrate the credibility of the learnt group attributes from the proposed GD-GAN model, we augment the feature vector extracted in Eq. 13 together with the features proposed in [32] and apply subsequent process (i.e Eq. 14 and 15) to discover the groups. We utilise the public implementation 4 released by the authors for the feature extraction. We do not observe a substantial improvement with the group detection performance being very similar, indicating that the proposed GD-GAN model is sufficient for modelling the social navigation structure of the crowd. Fig. 6. Projections of the trajectory generator (G) hidden states before (in blue) and after (in red) training. Ground truth group IDs are in brackets. Each insert indicates the trajectory associated with the embedding. The given portion of the trajectory is in green, and the ground truth and prediction are in blue and red respectively Time efficiency We use the Keras [4] deep learning library for our implementation. The GD-GAN module does not require any special hardware such as GPUs to run and has 41.8K trainable parameters. We ran the test set in Sec. 4.3 on a single core of an Intel Xeon E5-2680 2.50GHz CPU and the GD-GAN algorithm was able to generate 100 predicted trajectories with 30, 2 dimensional data points in each trajectory (i.e. using 15 observations to predict the next 15 data points) and complete the group detection process in 0.712 seconds. Conclusions In this paper we have proposed an unsupervised learning approach for pedestrian group segmentation. We avoid the the need to handcraft sociological features by automatically learning group attributes through the proposed trajectory prediction framework. This allows us to discover a latent representation accounting for both historical and future behaviour of each pedestrian, yielding a more efficient platform for detecting their social identities. Furthermore, the unsupervised learning setting grants the approach the ability to employ the proposed framework in different surveillance settings without tedious learning of group memberships from a hand labeled dataset. Our quantitative and qualitative evaluations on multiple public benchmarks clearly emphasise the capacity of the proposed GD-GAN method to learn complex real world human navigation behaviour.
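As a minimal, self-contained illustration of the group detection stage (Eq. (13)-(15)), the sketch below reduces per-pedestrian embeddings with t-SNE and clusters them with DBSCAN using scikit-learn. The embeddings are random placeholders standing in for the generator hidden states θ^k extracted by f(·); each pedestrian's activation sequence is flattened and all pedestrians are embedded jointly, which simplifies Eq. (14), and the DBSCAN radius is tuned to the synthetic embedding scale rather than the paper's reported epsilon = 0.50, minPts = 1.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
n_ped, T_obs, H = 30, 15, 300

# Placeholder generator hidden states theta^k_t (Eq. 13); in the paper these come from f(G(C*, z)).
centres = rng.normal(scale=5.0, size=(3, H))                 # three synthetic "groups"
theta = np.stack([centres[k % 3] + rng.normal(size=(T_obs, H)) for k in range(n_ped)])

# Eq. (14): reduce each pedestrian's activation sequence to a low-dimensional code.
flat = theta.reshape(n_ped, -1)
eta = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(flat)

# Eq. (15): density-based clustering; the number of groups is not fixed in advance.
# eps must match the scale of the t-SNE embedding (the paper reports eps=0.5, minPts=1 for its features).
beta = DBSCAN(eps=10.0, min_samples=1).fit_predict(eta)
print(beta)                                                   # cluster identity per pedestrian
```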
4,269
1812.06571
2904303242
Mode collapse is one of the key challenges in the training of Generative Adversarial Networks (GANs). Previous approaches have tried to address this challenge either by changing the loss of GANs or by modifying optimization strategies. We argue that it is more desirable to find the underlying structure of real data and build a structured generative model to further avoid mode collapse. To this end, we propose Latent Dirichlet Allocation based Generative Adversarial Networks (LDAGAN), which have a high capacity for modeling complex image data. Moreover, we optimize our model by combining the variational expectation-maximization (EM) algorithm with adversarial learning. A stochastic optimization strategy ensures that training LDAGAN is not time consuming. Experimental results demonstrate that our method outperforms existing standard CNN-based GANs on the task of image generation.
Although mixture GANs seem to further mitigate problems like mode collapse, they exhibit two drawbacks. Firstly, the mixing weights (i.e., the mode distribution) @math in mixture GANs, for example in MGAN and MixGAN @cite_21 @cite_12 , are fixed. Such a fixed mixing scheme limits the flexibility of the model distribution @math and can lead to an undesirably large divergence between the real and model distributions. Moreover, @math is sometimes predefined, such as in @cite_21 , which constrains the model capacity and thus causes mode dropping. Secondly, some mixture GANs @cite_21 explicitly encourage mode diversity among generated samples, resulting in intra-class mode dropping.
{ "abstract": [ "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "We show that training of generative adversarial network (GAN) may not have good generalization properties; e.g., training may appear successful but the trained distribution may be far from target distribution in standard metrics. However, generalization does occur for a weaker metric called neural net distance. It is also shown that an approximate pure equilibrium exists in the discriminator generator game for a special class of generators with natural training objectives when generator capacity and training set sizes are moderate. This existence of equilibrium inspires MIX+GAN protocol, which can be combined with any existing GAN training, and empirically shown to improve some of them." ], "cite_N": [ "@cite_21", "@cite_12" ], "mid": [ "2787223504", "2952745707" ] }
Latent Dirichlet Allocation in Generative Adversarial Networks
0
1812.06486
2904130053
Understanding the loss surface of neural networks is essential for the design of models with predictable performance and their success in applications. Experimental results suggest that sufficiently deep and wide neural networks are not negatively impacted by suboptimal local minima. Despite recent progress, the reason for this outcome is not fully understood. Could deep networks have very few, if at all, suboptimal local optima? or could all of them be equally good? We provide a construction to show that suboptimal local minima (i.e. non-global ones), even though degenerate, exist for fully connected neural networks with sigmoid activation functions. The local minima obtained by our proposed construction belong to a connected set of local solutions that can be escaped from via a non-increasing path on the loss curve. For extremely wide neural networks with two hidden layers, we prove that every suboptimal local minimum belongs to such a connected set. This provides a partial explanation for the successful application of deep neural networks. In addition, we also characterize under what conditions the same construction leads to saddle points instead of local minima for deep neural networks.
We discuss related work on suboptimal minima of the loss surface. In addition, we refer the reader to the overview article @cite_9 for a discussion on the non-convexity in neural network training.
{ "abstract": [ "Recently there has been a dramatic increase in the performance of recognition systems due to the introduction of deep architectures for representation learning and classification. However, the mathematical reasons for this success remain elusive. This tutorial will review recent work that aims to provide a mathematical justification for several properties of deep networks, such as global optimality, geometric stability, and invariance of the learned representations." ], "cite_N": [ "@cite_9" ], "mid": [ "2772552125" ] }
Non-attracting Regions of Local Minima in Deep and Wide Neural Networks
At the heart of most optimization problems lies the search for the global minimum of a loss function. The common approach to finding a solution is to initialize at random in parameter space and subsequently follow directions of decreasing loss based on local methods. This approach lacks a global progress criteria, which leads to descent into one of the nearest local minima. Since the loss function of deep neural networks is non-convex, the common approach of using gradient descent variants is vulnerable precisely to that problem. Authors pursuing the early approaches to local descent by back-propagating gradients [1] experimentally noticed that suboptimal local minima appeared surprisingly harmless. More recently, for deep neural networks, the earlier observations were further supported by the experiments of e.g., [2]. Several authors aimed to provide theoretical insight for this behavior. Broadly, two views may be distinguished. Some, aiming at explanation, rely on simplifying modeling assumptions. Others investigate neural networks under realistic assumptions, but often focus on failure cases only. Recently, Nguyen and Hein [3] provide partial explanations for deep and extremely wide neural networks for a class of activation functions including the commonly used sigmoid. Extreme width is characterized by a "wide" layer that has more neurons than input patterns to learn. For almost every instantiation of parameter values w (i.e. for all but a null set of parameter values) it is shown that, if the loss function has a local minimum at w, then this local minimum must be a global one. This suggests that for deep and wide neural networks, possibly every local minimum is global. The question on what happens at the null set of parameter values, for which the result does not hold, remains unanswered. Similar observations for neural networks with one hidden layer were made earlier by Gori and Tesi [4] and Poston et al. [5]. Poston et al. [5] show for a neural network with one hidden layer and sigmoid activation function that, if the hidden layer has more nodes than training patterns, then the error function (squared sum of prediction losses over the samples) has no suboptimal "local minimum" and "each point is arbitrarily close to a point from which a strictly decreasing path starts, so such a point cannot be separated from a so called good point by a barrier of any positive height" [5]. It was criticized by Sprinkhuizen-Kuyper and Boers [6] that the definition of a local minimum used in the proof of [5] was rather strict and unconventional. In particular, the results do not imply that no suboptimal local minima, defined in the usual way, exist. As a consequence, the notion of attracting and non-attracting regions of local minima were introduced and the authors prove that non-attracting regions exist by providing an example for the extended XOR problem. The existence of these regions imply that a gradient-based approach descending the loss surface using local information may still not converge to the global minimum. The main objective of this work is to revisit the problem of such non-attracting regions and show that they also exist in deep and wide networks. In particular, a gradient based approach may get stuck in a suboptimal local minimum. Most importantly, the performance of deep and wide neural networks cannot be explained by the analysis of the loss curve alone, without taking proper initialization or the stochasticity of SGD into account. Our observations are not fundamentally negative. 
At first, the local minima we find are rather degenerate. With proper initialization, a local descent technique is unlikely to get stuck in one of the degenerate, suboptimal local minima 1 . Secondly, the minima reside on a non-attracting region of local minima (see Definition 1). Due to its exploration properties, stochastic gradient descent will eventually be able to escape from such a region (see [8]). We conjecture that in sufficiently wide and deep networks, except for a null set of parameter values as starting points, there is always a monotonically decreasing path down to the global minimum. This was shown in [5] for neural networks with one hidden layer, sigmoid activation function and square loss, and we generalize this result to neural networks with two hidden layers. (More precisely, our result holds for all neural networks with square loss and a class of activation functions including the sigmoid, where the wide layer is the last or second last hidden layer). This implies that in such networks every local minimum belongs to a non-attracting region of local minima. Our proof of the existence of suboptimal local minima even in extremely wide and deep networks is based on a construction of local minima in neural networks given by Fukumizu and Amari [9]. By relying on careful computation we are able to characterize when this construction is applicable to deep neural networks. Interestingly, in deeper layers, the construction rarely seems to lead to local minima, but more often to saddle points. The argument that saddle points rather than suboptimal local minima are the main problem in deep networks has been raised before (see [10]) but a theoretical justification [11] uses strong assumptions that do not exactly hold in neural networks. Here, we provide the first analytical argument, under realistic assumptions on the neural network structure, describing when certain critical points of the training loss lead to saddle points in deeper networks. III. MAIN RESULTS A. Problem definition We consider regression networks with fully connected layers of size n l , 0 ≤ l ≤ L given by f (x) = w L (σ(w L−1 (σ(. . . (w 2 (σ(w 1 (x) + w 1 0 )) + w 2 0 ) . . .)) + w L−1 0 )) + w L 0 , where w l ∈ R nl×nl−1 denotes the weight matrix of the l-th layer, 1 ≤ l ≤ L, w l 0 the bias terms, and σ a nonlinear activation function. The neural network function is denoted by f and we notationally suppress dependence on parameters. We assume the activation function σ to belong to the class of strictly monotonically increasing, analytic, bounded functions on R with image in interval (c, d) such that 0 ∈ [c, d], a class we denote by A. As prominent examples, the sigmoid activation function σ(t) = 1 1+exp(−t) and σ(t) = tanh(x) lie in A. We assume no activation function at the output layer. The neural network is assumed to be a regression network mapping into the real domain R, i.e. n L = 1 and w L ∈ R 1×nL−1 . We train on a finite dataset (x α , y α ) 1≤α≤N of size N with input patterns x α ∈ R n0 and desired target value y α ∈ R. We aim to minimize the squared loss L = N α=1 (f (x α ) − y α ) 2 . Further, w denotes the collection of all w l . The dependence of the neural network function f on w translates into a dependence of L = L(w) of the loss function on the parameters w. Due to assumptions on σ, L(w) is twice continuously differentiable. The goal of training a neural network consists of minimizing L(w) over w. 
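As a concrete instance of the model class just defined, the sketch below evaluates a fully connected sigmoid regression network and its squared loss L(w) = Σ_α (f(x_α) − y_α)² in NumPy. The architecture (2-3-3-1) and the random data are arbitrary placeholders; only the functional form follows the definition above.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def forward(x, weights, biases):
    """f(x) for a fully connected network: sigmoid in the hidden layers, linear output (Sec. III-A)."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = sigmoid(W @ a + b)             # act(l; x) = sigma(n(l; x))
    return weights[-1] @ a + biases[-1]    # no activation at the output layer

def squared_loss(X, y, weights, biases):
    """L(w) = sum_alpha (f(x_alpha) - y_alpha)^2 over the finite dataset."""
    preds = np.array([forward(x, weights, biases) for x in X]).reshape(-1)
    return float(np.sum((preds - y.reshape(-1)) ** 2))

rng = np.random.default_rng(0)
sizes = [2, 3, 3, 1]                                        # n_0, ..., n_L (illustrative choice)
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(size=m) for m in sizes[1:]]
X = rng.normal(size=(10, 2))                                # N = 10 input patterns
y = rng.normal(size=(10, 1))
print(squared_loss(X, y, weights, biases))
```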
There is a unique value L 0 denoting the infimum of the neural network's loss (most often L 0 = 0 in our examples). Any set of weights w • that satisfies L(w • ) = L 0 is called a global minimum. Due to its non-convexity, the loss function L(w) of a neural network is in general known to potentially suffer from local minima (precise definition of a local minimum below). We will study the existence of suboptimal local minima in the sense that a local minimum w * is suboptimal if its loss L(w * ) is strictly larger than L 0 . We refer to deep neural networks as models with more than one hidden layer. Further, we refer to wide neural networks as the type of model considered in [3]- [5] with one hidden layer containing at least as many neurons as input patterns (i.e. n l ≥ N for some 1 ≤ l < L in our notation). Disclaimer: Naturally, training for zero global loss is not desirable in practice, neither is the use of fully connected wide and deep neural networks necessarily. The results of this paper are of theoretical importance. To be able to understand the complex learning behavior of deep neural networks in practice, it is a necessity to understand the networks with the most fundamental structure. In this regard, while our result are not directly applicable to neural networks used in practice, they do offer explanations for their learning behavior. B. A special kind of local minimum The standard definition of a local minimum, which is also used here, is a point w * such that w * has a neighborhood U with L(w) ≥ L(w * ) for all w ∈ U . Since local minima do not need to be isolated (i.e. L(w) > L(w * ) for all w ∈ U \ {w * }) two types of connected regions of local minima may be distinguished. Note that our definition slightly differs from the one by [6]. Definition 1. [6] Let : R n → R be a differentiable function. Suppose R is a maximal connected subset of parameter values w ∈ R m , such that every w ∈ R is a local minimum of with value (w) = c. • R is called an attracting region of local minima, if there is a neighborhood U of R such that every continuous path Γ(t), which is non-increasing in and starts from some Γ(0) ∈ U , satisfies (Γ(t)) ≥ c for all t. • R is called a non-attracting region of local minima, if every neighborhood U of R contains a point from where a continuous path Γ(t) exists that is non-increasing in and ends in a point Γ(1) with (Γ(1)) < c. Despite its non-attractive nature, a non-attracting region R of local minima may be harmful for a gradient descent approach. A path of greatest descent can end in a local minimum on R. However, no point z on R needs to have a neighborhood of attraction in the sense that following the path of greatest descent from a point in a neighborhood of z will lead back to z. (The path can lead to a different local minimum on R close by or reach points with strictly smaller values than c.) In the example of such a region for the 2-3-1 XOR network provided in [6], a local minimum (of higher loss than the global loss) resides at points in parameter space with some coordinates at infinity. In particular, a gradient descent approach may lead to diverging parameters in that case. However, a different non-increasing path down to the global minimum always exists. It can be shown that local minima at infinity also exist for wide and deep neural networks. (The proof can be found in Appendix A.) Theorem 1 (cf. [6] Section III). 
Let L denote the squared loss of a fully connected regression neural network with sigmoid activation functions, having at least one hidden layer and each hidden layer containing at least two neurons. Then, for almost every finite dataset, the loss function L possesses a local minimum at infinity. The local minimum is suboptimal whenever dataset and neural network are such that a constant function is not an optimal solution. A different type of non-attracting regions of local minima (without infinite parameter values) is considered for neural networks with one hidden layer by Fukumizu and Amari [9] and Wei et al. [8] under the name of singularities. This type of region is characterized by singularities in the weight space (a subset of the null set not covered by the results of Nguyen and Hein [3]) leading to a loss value strictly larger than the global loss. The dynamics around such region are investigated by Wei et al. [8]. Again, a full batch gradient descent approach can get stuck in a local minimum in this type of region. A rough illustration of the nature of these non-attracting regions of local minima is depicted in Fig. 1. Non-attracting regions of local minima do not only exist in small two-layer neural networks. Theorem 2. There exist deep and wide fully-connected neural networks with sigmoid activation function such that the squared loss function of a finite dataset has a non-attracting region of local minima (at finite parameter values). The construction of such local minima is discussed in Section V with a complete proof in Appendix B. Corollary 1. Any attempt to show for fully connected deep and wide neural networks that a gradient descent technique will always lead to a global minimum only based on a description of the loss curve will fail if it doesn't take into consideration properties of the learning procedure (such as the stochasticity of stochastic gradient descent), properties of a suitable initialization technique, or assumptions on the dataset. On the positive side, we point out that a stochastic method such as stochastic gradient descent has a good chance to escape a non-attracting region of local minima due to noise. With infinite time at hand and sufficient exploration, the region can be escaped from with high probability (see [8] for a more detailed discussion). In Section V-A we will further characterize when the method used to construct examples of regions of non-attracting local minima is applicable. This characterization limits us to the construction of extremely degenerate examples. We give an intuitive argument why assuring the necessary assumptions for the construction becomes more difficult for wider and deeper networks and why it is natural to expect a lower suboptimal loss (where the suboptimal minima are less "bad") the less degenerate the constructed minima are and the more parameters a neural network possesses. C. Non-increasing path to a global minimum By definition, every neighborhood of a non-attracting region of local minima contains points from where a non-increasing path to a value less than the value of the region exists. (By definition all points belonging to a nonattracting region have the same value, in fact they are all local minima.) The question therefore arises whether from almost everywhere in parameter space there is such a non-increasing path all the way down to a global minimum. 
If the last hidden layer is the wide layer having more neurons than input patterns (for example consider a wide two-layer neural network), then this holds true by the results of [3] (and [4], [5]). We show the same conclusion to hold for wide neural networks having the second last hidden layer the wide one. In particular, this implies that for wide neural networks with two hidden layers, starting from almost everywhere in parameter space, there is non-increasing path down to a global minimum. Theorem 3. Consider a fully connected regression neural network with activation function in the class A equipped with the squared loss function for a finite dataset. Assume that the second last hidden layer contains more neurons than the number of input patterns. Then, for each set of parameters w and all > 0, there is w such that ||w − w || < and such that a path non-increasing in loss from w to a global minimum where f (x α ) = y α for each α exists. Corollary 2. Consider a wide, fully connected regression neural network with two hidden layers and activation function in the class A and trained to minimize the squared loss over a finite dataset. Then all suboptimal local minima are contained in a non-attracting region of local minima. The rest of the paper contains the arguments leading to the given results. IV. NOTATIONAL CHOICES We fix additional notation aside the problem definition from Section III-A. For input x α , we denote the pattern vector of values at all neurons at layer l before activation by n(l; x α ) and after activation by act(l; x α ). x α,1 x α,2 x 0 1, −1 1, 1 1, 2 1, 3 1, 3 1, 0 f (x α ) [u 1,i ] i [u 1,i ] i [u 2,i ] i [u 3,i ] i λ · v •,1 (1 − λ) · v •,1 v •,2 v •,3 v •,0 In general, we will denote column vectors of size n with coefficients z i by [z i ] 1≤i≤n or simply [z i ] i and matrices with entries a i,j at position (i, j) by [a i,j ] i,j . The neuron value pattern n(l; x) is then a vector of size n l denoted by n(l; x) = [n(l, k; x)] 1≤k≤nl , and the activation pattern act(l; x) = [act(l, k; x)] 1≤k≤nl . Using that f can be considered a composition of functions from consecutive layers, we denote the function from act(k; x) to the output by h •,k (x). For convenience of the reader, a tabular summary of all notation is provided in Appendix A. V. CONSTRUCTION OF LOCAL MINIMA We recall the construction of so-called hierarchical suboptimal local minima given in [9] and extend it to deep networks. For the hierarchical construction of critical points, we add one additional neuron n(l, −1; x) to a hidden layer l. (Negative indices are unused for neurons, which allows us to add a neuron with this index.) Once we have fixed the layer l, we denote the parameters of the incoming linear transformation by [u p,i ] p,i , so that u p,i denotes the contribution of neuron i in layer l − 1 to neuron p in layer l, and the parameters of the outgoing linear transformation by [v s,q ], where v s,q denotes the contribution of neuron q in layer l to neuron s in layer l + 1. For weights of the output layer (into a single neuron), we write w •,j instead of w 1,j . We recall the function γ used in [9] to construct local minima in a hierarchical way. This function γ describes the mapping from the parameters of the original network to the parameters after adding a neuron n(l, −1; x) and is determined by incoming weights u −1,i into n(l, −1; x), outgoing weights v s,−1 of n(l, −1; x), and a change of the outgoing weights v s,r of n(l, r; x) for one chosen r in the smaller network. 
Sorting the network parameters in a convenient way, the embedding of the smaller network into the larger one is defined for any λ ∈ R by a function γ r λ mapping parameters {([u r,i ] i , [v s,r ] s ,w} of the smaller network to parameters {([u −1,i ] i , [v s,−1 ] s , [u r,i ] i , [v s,r ] s ,w)} of the larger network and is defined by γ r λ ([u r,i ] i , [v s,r ] s ,w) := ([u r,i ] i , [λ · v s,r ] s , [u r,i ] i , [(1 − λ) · v s,r ] s ,w) . Herew denotes the collection of all remaining network parameters, i.e., all [u p,i ] i , [v s,q ] s for p, q / ∈ {−1, r} and all parameters from linear transformation of layers with index smaller than l or larger than l + 1, if existent. A visualization of γ 1 λ is shown in Fig. 2. Important fact: For the functions ϕ, f of smaller and larger network at parameters ([u * 1,i ] i , [v * s,1 ] s ,w * ) and γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) respectively, we have ϕ(x) = f (x) for all x. More generally, we even have n ϕ (l, k; x) = n(l, k; x) and act ϕ (l, k; x) = act(l, k; x) for all l, x and k ≥ 0. A. Characterization of hierarchical local minima Using γ r to embed a smaller deep neural network into a second one with one additional neuron, it has been shown that critical points get mapped to critical points. Theorem 4 (Nitta [15]). Consider two neural networks as in Section III-A, which differ by one neuron in layer l with index n(l, −1; x) in the larger network. If parameter choices ([u * r,i ] i , [v * s,r ] s ,w * ) determine a critical point for the squared loss over a finite dataset in the smaller network then, for each λ ∈ R, γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) determines a critical point in the larger network. As a consequence, whenever an embedding of a local minimum with γ r λ into a larger network does not lead to a local minimum, then it leads to a saddle point instead. (There are no local maxima in the networks we consider, since the loss function is convex with respect to the parameters of the last layer.) For neural networks with one hidden layer, it was characterized when a critical point leads to a local minimum. Theorem 5 (Fukumizu, Amari [9]). Consider two neural networks as in Section III-A with only one hidden layer and which differ by one neuron in the hidden layer with index n(1, −1; x) in the larger network. Assume that parameters ([u * r,i ] i , v * •,r ,w * ) determine a local minimum for the squared loss over a finite dataset in the smaller neural network and that λ / ∈ {0, 1}. Then γ r λ ([u * r,i ] i , v * •,r ,w * ) determines a local minimum in the larger network if the matrix [B r i,j ] i,j given by B r i,j = α (f (x α ) − y α ) · v * •,r · σ (n(1, r; x α )) · x α,i · x α,j is positive definite and 0 < λ < 1, or if [B r i,j ] i,j is negative definite and λ < 0 or λ > 1. (Here, we denote the k-th input dimension of input x α by x α,k .) We extend the previous theorem to a characterization in the case of deep networks. We note that a similar computation was performed in [19] for neural networks with two hidden layers. Theorem 6. Consider two (possibly deep) neural networks as in Section III-A, which differ by one neuron in layer l with index n(l, −1; x) in the larger network. Assume that the parameter choices ([u * r,i ] i , [v * s,r ] s ,w * ) determine a local minimum for the squared loss over a finite dataset in the smaller network. 
If the matrix $[B^r_{i,j}]_{i,j}$ defined by
$B^r_{i,j} := \sum_{\alpha} (f(x_\alpha) - y_\alpha) \cdot \sum_{k} \frac{\partial h_{\bullet,l+1}(n(l+1; x_\alpha))}{\partial n(l+1,k; x_\alpha)} \cdot v^*_{k,r} \cdot \sigma''(n(l,r; x_\alpha)) \cdot act(l-1,i; x_\alpha) \cdot act(l-1,j; x_\alpha)$ (1)
is either
• positive definite and $\lambda \in I := (0, 1)$, or
• negative definite and $\lambda \in I := (-\infty, 0) \cup (1, \infty)$,
then $\{\gamma^r_\lambda([u^*_{r,i}]_i, [v^*_{s,r}]_s, \tilde{w}^*) \mid \lambda \in I\}$ determines a non-attracting region of local minima in the larger network if and only if
$D^{r,s}_i := \sum_{\alpha} (f(x_\alpha) - y_\alpha) \cdot \frac{\partial h_{\bullet,l+1}(n(l+1; x_\alpha))}{\partial n(l+1,s; x_\alpha)} \cdot \sigma'(n(l,r; x_\alpha)) \cdot act(l-1,i; x_\alpha)$ (2)
is zero, $D^{r,s}_i = 0$, for all $i, s$.

Remark 1. In the case of a neural network with only one hidden layer as considered in Theorem 5, the function $h_{\bullet,l+1}(x)$ is the identity function on $\mathbb{R}$ and the matrix $[B^r_{i,j}]_{i,j}$ in (1) reduces to the matrix $[B^r_{i,j}]_{i,j}$ in Theorem 5. The condition that $D^{r,s}_i = 0$ for all $i, s$ does hold for shallow neural networks with one hidden layer, as we show below. This proves Theorem 6 to be consistent with Theorem 5.

The theorem follows from a careful computation of the Hessian of the cost function $L(w)$, characterizing when it is positive (or negative) semidefinite and checking that the loss function does not change along directions that correspond to an eigenvector of the Hessian with eigenvalue 0. We state the outcome of the computation in Lemma 1 and refer the reader interested in a full proof of Theorem 6 to Appendix B.

Lemma 1. Consider two (possibly deep) neural networks as in Section III-A, which differ by one neuron in layer $l$ with index $n(l,-1;x)$ in the larger network. Fix $1 \le r \le n_l$. Assume that the parameter choices $([u^*_{r,i}]_i, [v^*_{s,r}]_s, \tilde{w}^*)$ determine a critical point in the smaller network. Let $L$ denote the loss function of the larger network and $\ell$ the loss function of the smaller network. Let $\alpha, \beta \in \mathbb{R}$ with $\alpha \neq -\beta$ be such that $\lambda = \frac{\beta}{\alpha+\beta}$. With respect to the basis of the parameter space of the larger network given by $([u_{-1,i} + u_{r,i}]_i, [v_{s,-1} + v_{s,r}]_s, \tilde{w}, [\alpha \cdot u_{-1,i} - \beta \cdot u_{r,i}]_i, [v_{s,-1} - v_{s,r}]_s)$, the Hessian of $L$ (i.e., the second derivative with respect to the new network parameters) at $\gamma^r_\lambda([u^*_{r,i}]_i, [v^*_{s,r}]_s, \tilde{w}^*)$ is given by
$$\begin{pmatrix}
\left[\frac{\partial^2 \ell}{\partial u_{r,i}\partial u_{r,j}}\right]_{i,j} & 2\left[\frac{\partial^2 \ell}{\partial u_{r,i}\partial v_{s,r}}\right]_{i,s} & \left[\frac{\partial^2 \ell}{\partial \tilde{w}\partial u_{r,i}}\right]_{i,\tilde{w}} & 0 & 0 \\
2\left[\frac{\partial^2 \ell}{\partial u_{r,i}\partial v_{s,r}}\right]_{s,i} & 4\left[\frac{\partial^2 \ell}{\partial v_{s,r}\partial v_{t,r}}\right]_{s,t} & 2\left[\frac{\partial^2 \ell}{\partial \tilde{w}\partial v_{s,r}}\right]_{s,\tilde{w}} & (\alpha-\beta)\left[D^{r,s}_i\right]_{s,i} & 0 \\
\left[\frac{\partial^2 \ell}{\partial \tilde{w}\partial u_{r,i}}\right]_{\tilde{w},i} & 2\left[\frac{\partial^2 \ell}{\partial \tilde{w}\partial v_{s,r}}\right]_{\tilde{w},s} & \left[\frac{\partial^2 \ell}{\partial \tilde{w}\partial \tilde{w}'}\right]_{\tilde{w},\tilde{w}'} & 0 & 0 \\
0 & (\alpha-\beta)\left[D^{r,s}_i\right]_{i,s} & 0 & \alpha\beta\left[B^r_{i,j}\right]_{i,j} & (\alpha+\beta)\left[D^{r,s}_i\right]_{i,s} \\
0 & 0 & 0 & (\alpha+\beta)\left[D^{r,s}_i\right]_{s,i} & 0
\end{pmatrix}$$

B. Shallow networks with a single hidden layer

For the construction of suboptimal local minima in wide two-layer networks, we begin by following the experiments of [9] that prove the existence of suboptimal local minima in (non-wide) two-layer neural networks. Consider a neural network of size 1-2-1. We use the corresponding network function $f$ to construct a dataset $(x_\alpha, y_\alpha)_{\alpha=1}^{N}$ by randomly choosing $x_\alpha$ and letting $y_\alpha = f(x_\alpha)$. By construction, we know that a neural network of size 1-2-1 can perfectly fit the dataset with zero error. Consider now a smaller network of size 1-1-1 having too little expressibility for a global fit of all data points. We find parameters $[u^*_{1,1}, v^*_\bullet]$ where the loss function of the neural network is in a local minimum with non-zero loss. For this small example, the required positive definiteness of $[B^1_{i,j}]_{i,j}$ from (1) for a use of $\gamma_\lambda$ with $\lambda \in (0, 1)$ reduces to checking a real number for positivity, which we assume to hold true.
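The neuron-splitting map γ^r_λ and the definiteness test can be made concrete in a few lines. The sketch below (NumPy, biases omitted, sizes and data are arbitrary placeholders rather than the 1-1-1/1-2-1 setup trained in the paper) duplicates a hidden neuron of a one-hidden-layer network, splits its outgoing weight into λ·v_r and (1−λ)·v_r, verifies that the network function is unchanged, and checks the eigenvalues of the matrix B^r of Theorem 5, written here with the second derivative of the activation.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def f(x, U, v):
    """One-hidden-layer regression net (biases omitted for brevity): f(x) = v^T sigma(U x)."""
    return v @ sigmoid(U @ x)

def gamma(U, v, r, lam):
    """gamma^r_lambda: duplicate hidden neuron r and split its outgoing weight into lam*v_r and (1-lam)*v_r."""
    U_big = np.vstack([U[r:r + 1], U])            # new neuron copies the incoming weights of neuron r
    v_big = np.concatenate([[lam * v[r]], v])
    v_big[r + 1] = (1.0 - lam) * v[r]
    return U_big, v_big

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))                       # 20 input patterns in R^2
U = rng.normal(size=(1, 2))                        # smaller network: one hidden neuron
v = rng.normal(size=1)
y = rng.normal(size=20)

U_big, v_big = gamma(U, v, r=0, lam=0.3)
assert np.allclose([f(x, U, v) for x in X], [f(x, U_big, v_big) for x in X])  # same network function

def B_matrix(X, y, U, v, r):
    """B^r from Theorem 5: sum_a (f(x_a)-y_a) * v_r * sigma''(n_r(x_a)) * x_a x_a^T."""
    n_r = X @ U[r]
    s = sigmoid(n_r)
    sig2 = s * (1 - s) * (1 - 2 * s)               # second derivative of the sigmoid
    resid = np.array([f(x, U, v) for x in X]) - y
    return sum(c * np.outer(x, x) for c, x in zip(resid * v[r] * sig2, X))

eigs = np.linalg.eigvalsh(B_matrix(X, y, U, v, r=0))
print("B^r positive definite:", bool(np.all(eigs > 0)))   # lambda in (0,1) yields a local minimum only if True
```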
We can now apply γ λ and Theorem 5 to find parameters for a neural network of size 1-2-1 that determine a suboptimal local minimum. This example may serve as the base case for a proof by induction to show the following result. Theorem 7. There is a wide neural network with one hidden layer and arbitrarily many neurons in the hidden layer that has a non-attracting region of suboptimal local minima. Having already established the existence of parameters for a (small) neural network leading to a suboptimal local minimum, it suffices to note that iteratively adding neurons using Theorem 5 is possible. Iteratively at step t, we add a neuron n(1, −t; x) to the network by an application of γ 1 λ with the same λ ∈ (0, 1). The corresponding matrix from (1), B 1,(t) i,j = α (f (x α ) − y α ) · (1 − λ) t · v * •,1 · σ (n(l, 1; x α )) · x α,i · x α,j , is positive semidefinite. (We use here that neither f (x α ) nor n(l, 1; x α ) ever change during this construction.) By Theorem 5 we always find a suboptimal minimum with nonzero loss for the network for λ ∈ (0, 1). Note however, that a continuous change of λ to a value outside of [0, 1] does not change the network function, but leads to a saddle point. Hence, we found a non-attracting region of suboptimal minima. Remark 2. Since we started the construction from a network of size 1-1-1, our constructed example is extremely degenerate: The suboptimal local minima of the wide network have identical incoming weight vectors for each hidden neuron. Obviously, the suboptimality of this parameter setting is easily discovered. Also with proper initialization, the chance of landing in this local minimum is vanishing. However, one may also start the construction from a more complex network with a larger network with several hidden neurons. In this case, when adding a few more neurons using γ 1 λ , it is much harder to detect the suboptimality of the parameters from visual inspection. C. Deep neural networks According to Theorem 6, next to positive definiteness of the matrix B r i,j for some r, in deep networks there is a second condition for the construction of hierarchical local minima using the map γ r λ , i.e. D r,s i = 0. We consider conditions that make D r,s i = 0. Proposition 1. Suppose we have a hierarchically constructed critical point of the squared loss of a neural network constructed by adding a neuron into layer l with index n(l, −1; x) by application of the map γ r λ to a neuron n(l, r; x). Suppose further that for the outgoing weights v * s,r of n(l, r; x) we have s v * s,r = 0 , and suppose that D r,s i is defined as in (2). Then D r,s i = 0 if one of the following holds. (i) The layer l is the last hidden layer. (This condition includes the case l = 1 indexing the hidden layer in a two-layer network.) (ii) ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) = ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, t; x α ) for all s, t, α (iii) For each α and each t, with L α : = (f (x α ) − y α ) 2 , ∂L α ∂n(l + 1, t; x α ) = (f (x α ) − y α ) · ∂h •,l+1 (n(l + 1; x α ) ∂n(l + 1, t; x α ) = 0. (This condition holds in the case of the weight infinity attractors in the proof to Theorem 1 for l + 1 the second last layer. It also holds in a global minimum.) The proof is contained in Appendix C. D. Experiment for deep networks To construct a local minimum in a deep and wide neural network, we start by considering a three-layer network of size 2-2-4-1, i.e. we have two input dimensions, one output dimension and hidden layers of two and four neurons. 
We use its network function f to create a dataset of 50 samples (x α , f (x α )), hence we know that a network of size 2-2-4-1 can attain zero loss. We initialize a new neural network of size 2-2-2-1 and train it until convergence, before using the construction to add neurons to the network. When adding neurons to the last hidden layer using γ 1 λ , Proposition 1 assures that D 1,• i = 0 for all i. We check for positive definiteness of the matrix B 1 i,j , and only continue when this property holds. Having thus assured the necessary condition of Theorem 6, we can add a few neurons to the last hidden layer (by induction as in the two-layer case), which results in local minimum of a network of size 2-2-M-1. The local minimum of non-zero loss that we attain is suboptimal whenever M ≥ 4 by construction. For M ≥ 50 the network is wide. Experimentally, we show not only that indeed we end up with a suboptimal minimum, but also that it belongs to a non-attracting region of local minima. In Fig. 3 we show results after adding eleven neurons to the last hidden layer. On the left side, we plot the loss in the neighborhood of the constructed local minimum in parameter space. The top image shows the loss curve into randomly generated directions, the bottom displays the minimal loss over all these directions. On the top right we show the change of loss along one of the degenerate directions that allows reaching a saddle point. In such a saddle point we know from Lemma 1 the direction of descent. The image on the bottom right shows that indeed the direction allows a reduction in loss. Being able to reach a saddle point from a local minimum by a path of non-increasing loss shows that indeed we found a non-attracting region of local minima. E. A discussion of limitations and of the loss of non-attracting regions of suboptimal minima We fix a neuron in layer l and aim to use γ r λ to find a local minimum in the larger network. We then need to check whether a matrix B r i,j is positive definite, which depends on the dataset. Under strong independence assumptions (the signs of different eigenvalues of B r i,j are independent), one may argue similar to arguments in [10] that the probability of finding B r i,j to be positive definite (all eigenvalues positive) is exponentially decreasing in the number of possible neurons of the previous layer l − 1. At the same time, the number of neurons n(l, r; x) in layer l to use for the construction only increases linearly in the number of neurons in layer l. Experimentally, we use a four-layer neural network of size 2-8-12-8-1 to construct a (random) dataset containing 500 labeled samples. We train a network of size 2-4-6-4-1 on the dataset until convergence using SciPy's 2 BFGS implementation. For each layer l, we check each neuron r whether it can be used for enlargment of the network using the map γ r λ for some λ ∈ (0, 1), i.e., we check whether the corresponding matrix B r i,j is positive definite. We repeat this experiment 1000 times. For the first layer, we find that in 547 of 4000 test cases the matrix is positive definite. For the second layer we only find B r i,j positive definite in 33 of 6000 cases, and for the last hidden layer there are only 6 instances out of 4000 where the matrix B r i,j is positive definite. Since the matrix B r i,j is of size 2 × 2/4 × 4/6 × 6 for the first/second/last hidden layer respectively, the number of positive matrices is less than what would be expected under the strong independence assumptions discussed above. 
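The following sketch mirrors the experimental protocol just described (a 2-8-12-8-1 teacher generating 500 labels and a 2-4-6-4-1 student trained with SciPy's BFGS). It is a simplified reconstruction, not the authors' code: bias terms are omitted, the seeds are arbitrary, and the computation of the layer-wise matrices $B^r$ from (1) is left out, with only the eigenvalue test used to classify them as positive definite shown.

import numpy as np
from scipy.optimize import minimize

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def init(sizes, rng):
    return [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

def forward(Ws, X):
    a = X
    for W in Ws[:-1]:
        a = sigmoid(a @ W.T)
    return (a @ Ws[-1].T).ravel()            # linear output layer

def pack(Ws):
    return np.concatenate([W.ravel() for W in Ws])

def unpack(w, sizes):
    Ws, i = [], 0
    for n, m in zip(sizes[:-1], sizes[1:]):
        Ws.append(w[i:i + m * n].reshape(m, n))
        i += m * n
    return Ws

rng = np.random.default_rng(0)
teacher, student = [2, 8, 12, 8, 1], [2, 4, 6, 4, 1]
X = rng.normal(size=(500, 2))
y = forward(init(teacher, rng), X)           # labels produced by the random teacher network

loss = lambda w: np.sum((forward(unpack(w, student), X) - y) ** 2)
res = minimize(loss, pack(init(student, rng)), method='BFGS', options={'maxiter': 500})
print("loss of the trained 2-4-6-4-1 student:", res.fun)

def is_positive_definite(B):
    # the test applied to each matrix B^r of Eq. (1); computing B^r itself is omitted here
    return bool(np.all(np.linalg.eigvalsh(B) > 0))

B_example = rng.normal(size=(4, 4))
print(is_positive_definite(B_example @ B_example.T))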
In addition, in deeper layers, further away from the output layer, it seems dataset dependent and unlikely to us that D r,s i = 0. Simulations seem to support this belief. However, it is difficult to check the condition numerically. Firstly, it is hard to find the exact position of minima and we only compute numerical approximations of D r,s i . Secondly, the terms are small for sufficiently large networks and numerical errors play a role. Due to these two facts, it becomes barely possible to check the condition of exact equality to zero. In Fig. 4 we show the distribution of maximal entries of the matrix D r,s i = 0 for neurons in the first, second and third layer of the network of size 2-4-6-4-1 trained as above. Note that for the third layer we know from theory that in a critical point we have D r,s i = 0, but due to numerical errors much larger values arise. Further, a region of local minima as above requires linearly dependent activation pattern vectors. This is how linear dimensions for subsequent layers get lost, reducing the ability to approximate the target function. Intuitively, in a deep and wide neural network there are many possible directions of descent. Loosing some of them still leaves the network with enough freedom to closely approximate the target function. As a result, these suboptimal minima have a loss close to the global loss. Conclusively, finding suboptimal local minima with high loss by the construction using γ r λ becomes hard when the networks become deep and wide. VI. PROVING THE EXISTENCE OF A NON-INCREASING PATH TO THE GLOBAL MINIMUM In the previous section we showed the existence of non-attracting regions of local minima. These type of local minima do not rule out the possibility of non-increasing paths to the global minimum from almost everywhere in parameter space. In this section, we sketch the proof to Theorem 3 illustrated in form of several lemmas, where up to the basic assumptions on the neural network structure as in Section III-A (with activation function in A), the assumption of one lemma is given by the conclusion of the previous one. A full proof can be found in Appendix D. We consider vectors that we call activation vectors, different from the activation pattern vectors act(l; x) from above. The activation vector at neuron k in layer l is denoted by a l k and defined by all values at the given neuron for different samples x α : a l k := [act(l, k; x α )] α . In other words while we fix l and x for the activation pattern vectors act(l; x) and let k run over its possible values, we fix l and k for the activation vectors a l k and let x run over its samples x α in the dataset. The first step of the proof is to use the freedom given by to have the activation vectors a L−2 of the wide layer L − 2 span the whole space R N . ν(t) in R N such that ρ(t) = σ(ν(t)) for all t. The activation vectors a L−1 k of the last hidden layer span a linear subspace H of R N . The optimal parameters w L of the output layer compute the best approximation of (y α ) α onto H. Lemma 3 and Lemma 4 together imply that we can achieve any desired continuous change of the spanning vectors of H, and hence the linear subspace H, by a suitable change of the parameters w L−1 . As it turns out, there is a natural possible path of parameters that strictly monotonically decreases the loss to the global minimum whenever we may assume that not all non-zero coefficients of w L have the same sign. 
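A small numerical illustration of the two ingredients just described (all sizes, weights and data are placeholders): the activation vectors of the wide layer span $\mathbb{R}^N$ exactly when the $N \times n_{L-2}$ activation matrix has full rank $N$, and once the activation vectors of the last hidden layer can be moved freely, shifting each of them by $t \cdot (y-z)/(n_{L-1} \cdot w^L_{\bullet,j})$ moves the output along the segment $z + t(y-z)$, which drives the squared loss to zero. The sketch deliberately ignores the constraint that shifted activations must stay inside the image of $\sigma$; handling that constraint is exactly what the sign condition on $w^L$ and the sets $J_+$, $J_-$ introduced below are for.

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(1)
N, n0, n_wide, n_last = 6, 3, 10, 4          # N samples; wide layer with n_wide >= N neurons
X = rng.normal(size=(N, n0))
W1 = rng.normal(size=(n_wide, n0))

A_wide = sigmoid(X @ W1.T)                   # A_wide[alpha, k] = act(L-2, k; x_alpha)
print("wide layer spans R^N:", np.linalg.matrix_rank(A_wide) == N)

# Last hidden layer: activation vectors a^{L-1}_j (columns of A) and output weights w^L.
A = rng.uniform(0.2, 0.8, size=(N, n_last))
w = rng.normal(size=n_last)
y = rng.normal(size=N)
z = A @ w                                    # current predictions

for t in [0.0, 0.5, 1.0]:
    A_t = A + t * np.outer(y - z, 1.0 / (n_last * w))   # shift every activation vector
    out = A_t @ w
    print(t, np.allclose(out, z + t * (y - z)), np.sum((out - y) ** 2))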
If this is not the case, however, we first follow a different path through the parameter space to eventually assure different signs of coefficients of w L . Interestingly, this path leaves the loss constant. In other words, from certain points in parameter space it is necessary to follow a path of constant loss until we reach a point from where we can further decrease the loss; just like in the case of the non-attracting regions of local minima. Lemma 5. For n ≥ 2, let {r 1 , r 2 , . . . , r n } be a set of vectors in Im(σ) N and E = span j (r j ) their linear span. If z ∈ E has a representation z = j λ j r j where all λ j are positive (or all negative), then there are continuous paths r j : [0, 1] → r j (t) of vectors in Im(σ) N such that the following properties hold. (i) r j (0) = r j . (ii) z ∈ span j (r j (t)) for all t, so that there are continuous paths t → λ j (t) such that z = λ j (t)r j (t). (iii) There are 1 ≤ j + , j − ≤ n such that λ j+ (1) > 0 and λ j− (1) < 0. We apply Lemma 5 to activation vectors r i = a i giving continuous paths t → a L−1 i (t) and t → λ i (t) = w L 1,i (t). Then the output f (x α ) of the neural network along this path remains constant, hence so does the loss. The desired change of activation vectors a L−1 i (t) can be performed by a suitable change of parameters w L−1 according to Lemma 3 and Lemma 4. The simultaneous change of w L−1 and w L defines the first part Γ 1 (t) of our desired path in the parameter space which keeps f (x α ) constant. The final part of the desired path is given by the following lemma. Lemma 6. Assume a neural network structure as above with activation vectors a L−2 i of the wide hidden layer spanning R N . If the weights w L of the output layer satisfy that there is both a positive and a negative weight, then there is a continuous path t ∈ [0, 1] → Γ 0 (t) from the current weights Γ 0 (0) = w of decreasing loss down to the global minimum at Γ 0 (1) . Proof. Fix z α = f (x α ), the prediction for the current weights. The main idea is to change the activation vectors of the last hidden layer according to ρ j : t ∈ [0, 1] → a L−1 j + t · 1 w L •,j · (y − z) N . With w L fixed, at the output this results in a change of t ∈ [0, 1] → z + t · (y − z), which reduces the loss to zero. The required change of activation vectors can be implemented by an application of Lemma 3 and Lemma 4, but only if the image of each ρ j lies in the image [c, d] of the activation function. Hence, the latter must be arranged. In the case that 0 ∈ (c, d), it suffices to first decrease the norm of a L−1 j while simultaneously increasing the norm of the outgoing weight w L •,j so that the output remains constant. If, however, 0 is in the boundary of the interval [c, d] (for example the case of a sigmoid activation function), then the assumption of non-zero weights with different signs becomes necessary. We let J + = {j ∈ {1, 2, . . . , n L−1 } | w L •,j ≥ 0}, J − = {j ∈ {1, 2, . . . , n L−1 } | w L •,j < 0}, I + = {α ∈ {1, 2, . . . , N } | (y − z) α ≥ 0}, I − = {α ∈ {1, 2, . . . , N } | (y − z) α < 0}. We further define (y − z) I+ to be the vector v with coordinate v α for α ∈ I + equal to (y − z) α and 0 otherwise, and we let analogously (y − z) I− denote the vector containing only the negative coordinates of y − z. 
Then the paths ρ j : [0, 1] → (c, d) defined by ρ j 3 (t) = a L−1 j + t · 1 w L •,j · (y − z) I+ |J + | and for each j ∈ J − by ρ j 3 (t) = a L−1 j + t · 1 w L •,j · (y − z) I− |J − | can be arranged to all lie in the image of the activation function and they again lead to an output change of t ∈ [0, 1] → z + t · (y − z). (Appendix D contains a more detailed proof.) This concludes the proof of Theorem 3 having found a sufficient condition in Lemma 6 to confirm the existence of a path down to zero loss and having shown how to realize this condition in Lemmas 3, 4 and 5. VII. CONCLUSION In this paper we have studied the local minima of deep and wide regression neural networks with sigmoid activation functions. We established that the nature of local minima is such that they live in a special region of the cost function called a non-attractive region, and showed that a non-increasing path to a configuration with lower loss than that of the region can always be found. For sufficiently wide two-or three-layer neural networks, all local minima belong to such a region. We generalized the procedure to find such regions, introduced by Fukumizu and Amari [9], to deep networks and described sufficient conditions for the construction to work. The necessary conditions become very hard to satisfy in wider and deeper networks and, if they fail, the construction leads to saddle points instead. Finally, an intuitive argument shows a clear relation between the degree of degeneracy of a local minimum and the level of suboptimality of the constructed local minimum. APPENDIX NOTATION [x α ] α R n column vector with entries x α ∈ R [x i,j ] i,j ∈ R n1×n2 matrix with entry x i,j at position (i, j) Im(f) ⊆ R image of a function f C n (X, Y ) n-times continuously differentiable function from X to Y N ∈ N number of data samples in training set x α ∈ R n0 training sample input y α ∈ R target output for sample x α A ∈ C(R) class of real-analytic, strictly monotonically increasing, bounded (activation) functions such that the closure of the image contains zero σ ∈ C 2 (R, R) a nonlinear activation function in class A f ∈ C(R n0 , R) neural network function l 1 ≤ l ≤ L index of a layer L ∈ N number of layers excluding the input layer l=0 input layer l = L output layer n l ∈ N number of neurons in layer l k 1 ≤ k ≤ n l index of a neuron in layer l w l ∈ R nl×nl−1 weight matrix of the l-th layer w ∈ R L l=1 (nl·nl−1) collection of all w l w l i,j ∈ R the weight from neuron j of layer l − 1 to neuron j of layer l w L •,j ∈ R the weight from neuron j of layer L − 1 to the output L ∈ R + squared loss over training samples n(l, k; x) ∈ R value at neuron k in layer l before activation for input pattern x n(l; x) ∈ R nl neuron pattern at layer l before activation for input pattern x act(l, k; x) ∈ Im(σ) activation pattern at neuron k in layer l for input x act(l; x) ∈ Im(σ) nl neuron pattern at layer l for input x In Section V, where we fix a layer l, we additionally use the following notation. h •,k (x) ∈ C(R nl , R) the function from act(l; x) to the output [u p,i ] p,i ∈ R nl×nl−1 weights of the given layer l. [v s,q ] s,q ∈ R nl×nl+1 weights the layer l + 1. r ∈ {1, 2, . . . 
, n l } the index of the neuron of layer l that we use for the addition of one additional neuron M ∈ N = L t=1 (n t · n t−1 ), the number of weights in the smaller neural network w ∈ R M −nl−1−nl+1 all weights except u 1,i and v s,1 γ r λ ∈ C(R M , R M +nl−1+nl+1 ) the map defined in Section V to add a neuron in layer l using the neuron with index r in layer l In Section VI, we additionally use the following notation. A. Local minima at infinity in neural networks In this section we prove the existence of local minima at infinity in neural networks. Theorem 1 (cf. [6] Section III). Let L denote the squared loss of a fully connected regression neural network with sigmoid activation functions, having at least one hidden layer and each hidden layer containing at least two neurons. Then, for almost every finite dataset, the loss function L possesses a local minimum at infinity. The local minimum is suboptimal whenever dataset and neural network are such that a constant function is not an optimal solution. Proof. We will show that, if all bias terms u i,0 of the last hidden layer are sufficiently large, then there are parameters u i,0k for k = 0 and parameters v i of the output layer such that the minimal loss is achieved at u i,0 = ∞ for all i. We note that, if u i,0 = ∞ for all i, all neurons of the last hidden layer are fully active for all samples, i.e. act(L − 1, i; x α ) = 1 for all i. Therefore, in this case f ( x α ) = i v •,i for all α. A constant function f (x α ) = i v •,i = c minimizes the loss α (c − y α ) 2 uniquely for c := 1 N N α=1 y α . We will assume that the v •,i are chosen such that i v •,i = c does hold. That is, for fully active hidden neurons at the last hidden layer, the v •,i are chosen to minimize the loss. We write f (x α ) = c + α . Then L = 1 2 α (f (x α ) − y α ) 2 = 1 2 α (c + α − y α ) 2 = 1 2 α ( α + (c − y α )) 2 = 1 2 α (c − y α ) 2 Loss at ui,0 = ∞ for all i + 1 2 α 2 α ≥0 + α α (c − y α ) ( * ) . The idea is now to ensure that ( * ) ≥ 0 for sufficiently large u i,0 and in a neighborhood of the v •,i chosen as above. Then the loss L is larger than at infinity, and any point in parameter space with u i,0 = ∞ and v •,i with i v •,i = c is a local minimum. To study the behavior at u i,0 = ∞, we consider p i = exp(−u i,0 ). Note that lim ui,0→∞ p i = 0. We have f (x α ) = i v •,i σ(u i,0 + k u i,k act(L − 2, k; x α )) = i v •,i · 1 1 + p i · exp(− k u i,k act(L − 2, k; x α )) Now for p i close to 0 we can use Taylor expansion of g j i (p i ) : = 1 1+piexp(a j i ) to get g j i (p i ) = 1 − exp(a j i )p i + O(|p i | 2 ). Therefore f (x α ) = c − i v •,i p i exp(− k u i,k act(L − 2, k; x α )) + O(p 2 i ) and we find that α = − i v •,i p i exp(− k u i,k act(L − 2, k; x α )) + O(p 2 i ). Recalling that we aim to ensure ( * ) = α α (c − y α ) ≥ 0 we consider α α (c − y α ) = − α (c − y α )( i v •,i p i exp(− k u i,k act(L − 2, k; x α ))) + O(p 2 i ) = − i v •,i p i α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) + O(p 2 i ) We are still able to choose the parameters u i,k for i = 0, the parameters from previous layers, and the v •,i subject to i v •,i = c. If now v •,i > 0 whenever α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) < 0 and v •,i < 0 whenever α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) > 0, then the term ( * ) is strictly positive, hence the overall loss is larger than the loss at p i = 0 for sufficiently small p i and in a neighborhood of v •,i . 
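The saturation effect underlying this argument can be illustrated numerically. In the sketch below (toy data; for simplicity all hidden units of the last hidden layer share a single growing bias, and the outgoing weights are chosen with $\sum_i v_{\bullet,i} = c$), the network output approaches the constant $c = \frac{1}{N}\sum_\alpha y_\alpha$ as the biases grow, and the loss approaches the loss of the best constant fit, i.e. the value attained at $u_{i,0} = \infty$.

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 2))
y = rng.normal(size=30)
c = y.mean()                              # best constant prediction

U = rng.normal(size=(3, 2))               # incoming weights of the last hidden layer
v = np.full(3, c / 3)                     # outgoing weights with sum_i v_i = c

for b in [0.0, 5.0, 20.0, 80.0]:          # growing biases u_{i,0}
    f = sigmoid(X @ U.T + b) @ v
    print("bias", b, "loss", 0.5 * np.sum((f - y) ** 2))

print("loss at infinity (best constant fit):", 0.5 * np.sum((c - y) ** 2))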
The only obstruction we have to get around is the case where we need all v •,i of the opposite sign of c (in other words, α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) has the same sign as c), conflicting with i v •,i = c. To avoid this case, we impose the mild condition that α (c−y α )act(L−2, r; x α ) = 0 for some r, which can be arranged to hold for almost every dataset by fixing all parameters of layers with index smaller than L − 2. By Lemma 7 below (with d α = (c−y α ) and a r α = act(L−2, r; x α )), we can find u > k such that α (c−y α ) exp(− k u > k act(L−2, k; x α )) > 0 and u < k such that α (c − y α ) exp(− k u < k act(L − 2, k; x α )) < 0. We fix u i,k for k ≥ 0 such that there is some i 1 with [u i1,k ] k = [u > k ] k and some i 2 with [u i2,k ] k = [u < k ] k . This assures that we can choose the v •,i of opposite sign to α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) and such that i v •,i = c, leading to a local minimum at infinity. The local minimum is suboptimal whenever a constant function is not the optimal network function for the given dataset. By assumption, there is r such that the last term is nonzero. Hence, using coordinate r, we can choose w = (0, 0, . . . , 0, w r , 0, . . . , 0) such that φ(w) is positive and we can choose w such that φ(w) is negative. B. Proofs for the construction of local minima Here we prove B r i,j := α (f (x α ) − y α ) · k ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, k; x α ) · v * k,r · σ (n(l, r; x α )) · act(l − 1, i; x α ) · act(l − 1, j; x α )(1) is either is zero, D r,s i = 0, for all i, s. The previous theorem follows from two lemmas, with the first lemma containing the computation of the Hessian of the cost function L of the larger network at parameters γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) with respect to a suitable basis. In addition, to find local minima one needs to explain away all additional directions, i.e., we need to show that the loss function actually does not change into the direction of eigenvectors of the Hessian with eigenvalue 0. Otherwise a higher derivative into this direction could be nonzero and potentially lead to a saddle point (see [19]). Let L denote the the loss function of the larger network and the loss function of the smaller network. Let α = −β ∈ R such that λ = β α+β . With respect to the basis of the parameter space of the larger network given by ([u −1,i +u r,i ] i , [v s,−1 +v s,r ] s ,w, [α· u −1,i − β · u r,i ] i , [v s,−1 − v s,r ] s ),0 0 0 (α − β)[D r,s i ] i,s 0 αβ[B r i,j ] i,j (α + β)[D r,s i ] i,s 0 0 0 (α + β)[D r,s i ] s,i 0        Proof. The proof only requires a tedious, but not complicated calculation (using the relation αλ − β(1 − λ) = 0 multiple times. To keep the argumentation streamlined, we moved all the necessary calculations into Appendix E. (z 1 , z 2 , z 3 , z 4 )     a 2b c 0 2b T 4d 2e 0 c T 2e T f 0 0 0 0 x         z 1 z 2 z 3 z 4     = (z 1 , 2z 2 , z 3 , z 4 )     a b c 0 b T d e 0 c T e T f 0 0 0 0 x         z 1 2z 2 z 3 z 4     (b) It is clear that the matrix x is positive semidefinite for g positive semidefinite and h = 0. To show the converse, first note that if g is not positive semidefinite and z is such that z T gz < 0 then (z T , 0) g h h T 0 z 0 = z T gz < 0. It therefore remains to show that also h = 0 is a necessary condition. Assume h = 0 and find z such that hz = 0. Then for any λ ∈ R we have ((hz) T , −λz T ) g h h T 0 hz −λz = (hz) T g(hz) − 2(hz) T hλz = (hz) T g(hz) − 2λ||hz|| 2 2 . 
For sufficiently large λ, the last term is negative. Proof of Theorem 6. In Lemma 1, we calculated the Hessian of L with respect to a suitable basis at a the critical point γ λ ([u * r,i ] i , [v * s,r ] s ,w * ). If the matrix [D r,s i ] i,] i,j is positive definite or if (λ < 0 or λ > 1) ⇔ αβ < 0 and [B r i,j ] i,j is negative definite. In each case we can alter the λ to values leading to saddle points without changing the network function or loss. Therefore, the critical points can only be saddle points or local minima on a non-attracting region of local minima. To determine whether the critical points in questions lead to local minima when [D r,s i ] i,s = 0, it is insufficient to only prove the Hessian to be positive semidefinite (in contrast to (strict) positive definiteness), but we need to consider directions for which the second order information is insufficient. We know that the loss is at a minimum with respect to all coordinates except for the degenerate directions [v s,−1 − v s,r ] s . However, the network function f (x) is constant along [v s,−1 − v s,r ] s (keeping [v s,−1 + v s, r ] s constant) at the critical point where u −1,i = u r,i for all i. Hence, no higher order information leads to saddle points and it follows that the critical point lies on a region of local minima. C. Construction of local minima in deep networks Proposition 1. Suppose we have a hierarchically constructed critical point of the squared loss of a neural network constructed by adding a neuron into layer l with index n(l, −1; x) by application of the map γ r λ to a neuron n(l, r; x). Suppose further that for the outgoing weights v * s,r of n(l, r; x) we have s v * s,r = 0 , and suppose that D r,s i is defined as in (2). Then D r,s i = 0 if one of the following holds. (i) The layer l is the last hidden layer. (This condition includes the case l = 1 indexing the hidden layer in a two-layer network.) (ii) ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) = ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, t; x α ) for all s, t, α (iii) For each α and each t, with L α : = (f (x α ) − y α ) 2 , ∂L α ∂n(l + 1, t; x α ) = (f (x α ) − y α ) · ∂h •,l+1 (n(l + 1; x α ) ∂n(l + 1, t; x α ) = 0. (This condition holds in the case of the weight infinity attractors in the proof to Theorem 1 for l + 1 the second last layer. It also holds in a global minimum.) Proof. The fact that property (i) suffices uses that h •,l+1 (x) reduces to the identity function on the networks output and hence its derivative is one. Then, considering a regression network as before, our assumption says that v * •,r = 0, hence its reciprocal can be factored out of the sum in Equation (2). Denoting incoming weights into n(l, r; x) by u r,i as before, this leads to D r,1• i = 1 v * •,r · α (f (x α ) − y α ) · v * •,r · σ (n(l, r; x α )) · act(l − 1, i; x α ) = 1 v * •,r · ∂L ∂u r,i = 0 In the case of (ii), ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) = ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, t; x α ) for all s, t and we can factor out the reciprocal of t v * r,s = 0 in Equation (2) to again see that for each i, ∂L ∂ur,i = 0 implies that D r,s i = 0 for all s. (iii) is evident since in this case clearly every summand in Equation (2) is zero. D. Proofs for the non-increasing path to a global minimum In this section we discuss how in wide neural networks with two hidden layers a non-increasing path to the global minimum may be found from almost everywhere in the parameter space. 
By [3] (and [4], [5]), we can find such a path if the last hidden layer is wide (containing more neurons than input patterns). We therefore only consider the case where the first hidden layer in a three-layer neural network is wide. More generally, our results apply to all deep neural networks with the second last hidden layer wide. Theorem 3. Consider a fully connected regression neural network with activation function in the class A equipped with the squared loss function for a finite dataset. Assume that the second last hidden layer contains more neurons than the number of input patterns. Then, for each set of parameters w and all > 0, there is w such that ||w − w || < and such that a path non-increasing in loss from w to a global minimum where f (x α ) = y α for each α exists. The first step of the proof is to use the freedom given by to have the activation vectors a L−2 of the wide layer L − 2 span the whole space R N . ν(t) = Γ(t) · [act(L − 2, k; x α )] k,α Proof. We write ν(t) = [n(L − 1, s; x α )] s,α +ν(t) withν(0) = 0. We will findΓ(t) such thatν(t) =Γ(t) · [act(L − 2, k; x α )] k,α withΓ(0) = 0. Then Γ(t) := w L−1 +Γ(t) does the job. Since by assumption [act(L − 2, k; x α )] k,α has full rank, we can find an invertible submatrixà ∈ R N ×N of [act(L−2, k; x α )] k,α . Then we can define a continuous pathρ in R nL−1×N given byρ(t) :=ν(t)·Ã −1 , which satisfies ρ(t) ·Ã = ν(t) andρ(0) = 0. Extendingρ(t) to a path in R nL−1×nL−2 by zero columns at positions corresponding to rows of [act(L − 2, k; x α )] k,α missing inÃ, gives a pathΓ(t) such thatΓ(t) · [act(L − 2, k; x α )] k,α =ν(t) and withΓ(0) = 0. Lemma 4. For all continuous paths ρ(t) in Im(σ) N , i.e. the N-fold copy of the image of σ, there is a continuous path ν(t) in R N such that ρ(t) = σ(ν(t)) for all t. Proof. Since σ : R N → Im(σ) N is invertible with a continuous inverse, take ν(t) = σ −1 (ρ(t)). The activation vectors a L−1 k of the last hidden layer span a linear subspace H of R N . The optimal parameters w L of the output layer compute the best approximation of (y α ) α onto H. Lemma 3 and Lemma 4 together imply that we can achieve any desired continuous change of the spanning vectors of H, and hence the linear subspace H, by a suitable change of the parameters w L−1 . There is a natural possible path of parameters that strictly monotonically decreases the loss to the global minimum. For activation functions in A with 0 in the boundary of the image interval [c, d], this path requires that not all non-zero coefficients of w L have the same sign. If this is not the case, however, we first follow a different path through the parameter space to eventually assure different signs of coefficients of w L . Interestingly, this path leaves the loss constant. In other words, from certain points in parameter space it seems necessary to follow a path of constant loss until we reach a point from where we can further decrease the loss; just like in the case of the non-attracting regions of local minima. Lemma 5. For n ≥ 2, let {r 1 , r 2 , . . . , r n } be a set of vectors in Im(σ) N and E = span j (r j ) their linear span. If z ∈ E has a representation z = j λ j r j where all λ j are positive (or all negative), then there are continuous paths r j : [0, 1] → r j (t) of vectors in Im(σ) N such that the following properties hold. (i) r j (0) = r j . (ii) z ∈ span j (r j (t)) for all t, so that there are continuous paths t → λ j (t) such that z = λ j (t)r j (t). 
(iii) There are 1 ≤ j + , j − ≤ n such that λ j+ (1) > 0 and λ j− (1) < 0. Proof. We only consider the case with all λ j ≥ 0. The other case can be treated analogously. If only one λ j0 is nonzero, then consider a vector r k corresponding to a zero coefficient λ k = 0 and change r k continuously until it equals the vector r j0 corresponding to the only nonzero coefficient. Then continuously increase the positive coefficient λ j0 , while introducing a corresponding negative contribution via λ k . It is then easy to see that this leads to a path satisfying conditions (i)-(iii). We may therefore assume that at least two coefficients λ j are nonzero, say λ 1 and λ 2 . Leaving all r j and λ j for j ≥ 3 unchanged, we only consider r 1 , r 2 , λ 1 , λ 2 for the desired path, i.e. r j (t) = r j and λ j (t) = λ j for all j ≥ 3. We have that λ 1 r 1 + λ 2 r 2 ∈ (λ 1 + λ 2 ) · Im(σ) N , hence can be written as λR for some λ > 0 and R ∈ Im(σ) N with λR = z − j≥3 λ j r j = λ 1 r 1 + λ 2 r 2 . For t ∈ [0, 1 2 ] we define r 1 (t) := r 1 + 2t(R − r 1 ) and r 2 (t) := r 2 , λ 1 (t) = λλ 1 (1 − 2t)λ + 2tλ 1 and λ 2 (t) = (1 − 2t) λλ 2 (1 − 2t)λ + 2tλ 1 . For t ∈ [ 1 2 , 1] we set r 1 (t) := (2 − 2t)R + (2t − 1)( λ 1 λ 1 + 2λ 2 r 1 + 2λ 2 λ 1 + 2λ 2 r 2 ) and r 2 (t) = r 2 , λ 1 (t) = λ(λ 1 + 2λ 2 ) (2 − 2t)(λ 1 + 2λ 2 ) + (2t − 1)λ and λ 2 (t) = −λ 2 λ(2t − 1) (2 − 2t)(λ 1 + 2λ 2 ) + (2t − 1)λ . Then (i) r 1 (0) = r 1 and r 2 (0) = r 2 as desired. Further (ii) z ∈ span j (r j (t)) for all t ∈ [0, 1] via z = j λ j (t)r j (t) . It is also easy to check that r 1 (t), r 2 (t) ∈ Im(σ) N for all t ∈ [0, 1]. Finally, (iii) λ 1 (1) = λ 1 +2λ 2 > 0 and λ 2 (1) = −λ 2 < 0. Hence, if all non-zero coefficients of w L have the same sign, then we apply Lemma 5 to activation vectors r i = a L−1 i giving continuous paths t → a L−1 i (t) and t → λ i (t) = w L •,i (t). Then the output f (x α ) of the neural network along this path remains constant, hence so does the loss. The desired change of activation vectors a L−1 i (t) can be performed by a suitable change of parameters w L−1 according to Lemma 3 and Lemma 4. The simultaneous change of w L−1 and w L defines the first part Γ 1 (t) of our desired path in the parameter space which keeps f (x α ) constant. We may now assume that not all non-zero entries of w L have the same sign. The final part of the desired path is given by the following lemma. Lemma 6. Assume a neural network structure as above with activation vectors a L−2 i of the wide hidden layer spanning R N . If the weights w L of the output layer satisfy that there is both a positive and a negative weight, then there is a continuous path t ∈ [0, 1] → Γ 0 (t) from the current weights Γ 0 (0) = w of decreasing loss down to the global minimum at Γ 0 (1) . Proof. We first prove the result for the (more complicated) case when Im(σ) = (0, d) for some d > 0, e.g. for σ the sigmoid function: Let z ∈ R N be the vector given by z α = f (x α ) for the parameter w at the current weights. Let I + = {α ∈ {1, 2, . . . , N } | (y − z) α ≥ 0}, J + = {j ∈ {1, 2, . . . , n L−1 } | w L •,j ≥ 0}, J − = {j ∈ {1, 2, . . . , n L−1 } | w L •,j < 0}. For each j ∈ {1, 2, . . . , n L−1 } \ J 0 = J + ∪ J − we consider the path ρ j 2 : [0, 1) → (0, d) N of activation values given by ρ j 2 (t) = (1 − t)[act(L − 1, j; x α )] α . Applying Lemma 3 and Lemma 4 we find the inducing path Γ j 2,L−1 for parameters w L−1 , and we simultaneously change the parameters w L via w L •,j (t) = Γ j 2,L (t) := 1 1−t w L •,j . 
Following along Γ j 2 (t) = (Γ j 2,L−1 (t), Γ j 2,L (t)) does not change the outcome f (x α ) = z α for any α. For j ∈ J + we find t j ∈ [0, 1) such that ρ j 2 (t j ) + 1 w L •,j (t j ) · (y − z) I+ |J + | ∈ (0, d) N . This is possible, since all involved terms are positive, ρ j 2 (t j ) < 1 and decreasing to zero for increasing t, while w L •,j (t) increases for growing t. Similarly, for j ∈ J − we find t j ∈ [0, 1) such that ρ j 2 (t j ) + 1 w L •,j (t j ) · (y − z) I− |J − | ∈ (0, d) N . This time the negative sign of w L •,j (t) for j ∈ J . and the negative signs of (y − z) I− cancel, again allowing to find suitable t j . We will consider the endpoints Γ j 2 (t j ) as the new parameter values for w and the induced endpoints ρ j 2 (t j ) as our new act(L − 1, j; x α ). The next part of the path incrementally adds positive or negative coordinates of (y − z) to each activation vector of the last hidden layer. For each j ∈ J + , we let ρ j 3 : [0, 1] → (0, d) N be the path defined by ρ j 3 (t) = [act(L − 1, j; x α )] α + t · 1 w L •,j · (y − z) I+ |J + | and for each j ∈ J − by ρ j 3 (t) = [act(L − 1, j; x α )] α + t · 1 w L •,j · (y − z) I− |J − | Since ρ j 3 (t) is a path in Im(σ) for all j, this path can again be realized by an inducing change Γ 3 (t) of parameters w L−1 . The parameters w L are kept unchanged in this last part of the path. Simultaneously changing all ρ j 3 (t) results in a change of the output of the neural network given by [f t (x α )] α = w L •,0 + nL−1 j=1 w L •,j ρ j 3 (t) = w L •,0 +   j∈J+ w L •,j act(L − 1, j; x α ) + t · 1 w L •,j · (y − z) I+,α |J + |   α +   j∈J− w L •,j act(L − 1, j; x α ) + t · 1 w L •,j · (y − z) I−,α |J − |   α = w L •,0 +   nL−1 j=1 w L •,j act(L − 1, j; x α )   α + j∈J+ t · (y − z) I+ |J + | + j∈J− t · (y − z) I− |J − | = z + t · (y − z) I+ + t · (y − z) I− = z + t · (y − z). It is easy to see that for the path t ∈ [0, 1] → z + t · (y − z) the loss L = ||z + t · (y − z) − y|| 2 2 = (1 − t)||y − z|| 2 2 is strictly decreasing to zero. The concatenation of Γ 2 and Γ 3 gives us the desired path Γ 0 . The case that Im(σ) = (c, 0) for some c < 0 works analogously. In the case that Im(σ) = (c, d) with 0 ∈ (c, d), there is no need to split up into sets I + , I − and J + , J − . We haveρ j 2 (t j ) + 1 w L •,j (tj) · (y−z) N ∈ (c, d) N for t j close enough to 1. Hence we can follow Γ j 2 (t) as above until ρ j 2 (t) + 1 w L •,j (t) · (y − z) N ∈ (c, d) N for all j. From here, the paths ρ j 3 (t) = [act(L − 1, j; x α )] α + t · 1 w L •,j · (y−z) N define paths in Im(σ) for each j, which can be implemented by an application of Lemma 3 and Lemma 4 and lead to the global minimum. E. Calculations for Lemma 1 For the calculations we may assume without loss of generality that r = 1. If we want to consider a different n(l, r; x) and its corresponding γ r λ , then this can be achieved by a reordering of the indices of neurons.) We let ϕ denote the network function of the smaller neural network and f the neural network function of the larger network after adding one neuron according to the map γ 1 λ . To distinguish the parameters of f and ϕ, we write w ϕ for the parameters of the network before the embedding. This gives for all i, s and all m ≥ 2: For the function f we have the following partial derivatives. 
u −1,i = u ϕ 1,i u 1,i = u ϕ 1,i v s,−1 = λv ϕ s,1 v s,1 = (1 − λ)v ϕ s,1 u m,i = u ϕ m,i v s,m = v ϕ s, ∂f (x) ∂u p,i = k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) and ∂f (x) ∂v s,q = ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · act(l, q; x) The analogous equations hold for ϕ. 2) Relating first order derivatives of network functions f and ϕ Therefore, at 3) Second order derivatives of network functions f and ϕ. For the second derivatives we get (with δ(a, a) = 1 and δ(a, b) = 0 for a = b) ∂ 2 f (x) ∂u p,i ∂u q,j = ∂ ∂u q,j k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) = m k ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, m; x)∂n(l + 1, k; x) · v m,q · σ (n(l, q; x)) · act(l − 1, j; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) + δ(p, q) k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) ·act(l − 1, i; x) · act(l − 1, j; x) and ∂ 2 f (x) ∂v s,p ∂v t,q = ∂ ∂v t,q ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · act(l, p; x) = ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x)∂n(l + 1, t; x) · act(l, p; x) · act(l, q; x) and ∂ 2 f (x) ∂u p,i ∂v s,q = ∂ ∂v s,q k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) = k ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x)∂n(l + 1, k; x) · act(l, q; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) + δ(q, p) · ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · σ (n(l, p; x)) · act(l − 1, i; x) For a parameter w closer to the input than [u p,i ] p,i , [v s,q ] s,q , we have ∂ 2 f (x) ∂u p,i ∂w = ∂ ∂w k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) = m k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x)∂n(l + 1, m; x) · ∂n(l + 1, m; x) ∂w · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) + k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · ∂n(l, p; x) ∂w · act(l − 1, i; x) + k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · ∂act(l − 1, i; x) ∂w and ∂ 2 f (x) ∂v s,q ∂w = ∂ ∂w ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · act(l, q; x) = n ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x)∂n(l + 1, n; x) · ∂n(l + 1, n; x) ∂w · act(l, q; x) · act(l, q; x) + ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · ∂act(l, q; x) ∂w For a parameter w closer to the output than [u p,i ] p,i , [v s,q ] s,q , we have ∂ 2 f (x) ∂u p,i ∂w = ∂ ∂w k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) = k ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x)∂w · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) ∂ 2 h ϕ •,l+1 (n ϕ (l + 1; x)) ∂n ϕ (l + 1, m; x)∂n ϕ (l + 1, k; x) · v ϕ m,q · σ (n ϕ (l, q; x)) · act ϕ (l − 1, j; x) · v ϕ k,p · σ (n ϕ (l, p; x)) · act ϕ (l − 1, i; x) B p i,j (x) := k ∂h ϕ •,l+1 (n ϕ (l + 1; x)) ∂n ϕ (l + 1, k; x) · v ϕ k,p · σ (n ϕ (l, p; x)) · act ϕ (l − 1, i; x) · act ϕ (l − 1, j; x) C p,s i,q (x) := k ∂ 2 h ϕ •,l+1 (n(l + 1; x)) ∂n ϕ (l + 1, s; x)∂n ϕ (l + 1, k; x) · act ϕ (l, q; x) · v ϕ k,p · σ (n ϕ (l, p; x)) · act ϕ (l − 1, i; x) D p,s i (x) := ∂h ϕ •,l+1 (n ϕ (l + 1; x)) ∂n ϕ (l + 1, s; x) · σ (n ϕ (l, p; x)) · act ϕ (l − 1, i; x) E s,t p,q (x) := ∂ 2 h ϕ •,l+1 (n ϕ (l + 1; x)) ∂n ϕ (l + 1, s; x)∂n ϕ (l + 1, t; x) · act ϕ (l, p; x) · act ϕ (l, q; x) Then for all i, j, p, q, s, t, we have ∂ 2 ϕ(x) ∂u ϕ p,i ∂u ϕ q,j = A p,q i,j (x) + δ(q, p)B p i,j (x) ∂ 2 ϕ(x) ∂u ϕ p,i ∂v ϕ s,q = C p,s i,q (x) + δ(q, p)D p,s i (x) ∂ 2 ϕ(x) ∂v s,p ∂v t,q = E s,t p,q (x) For f we get for p, q ∈ {−1, 1} and all i, j, s, t ∂ 2 f (x) ∂u −1,i ∂u −1,j = λ 2 A 1,1 i,j (x) + λB 1 i,j (x) ∂ 2 f (x) ∂u 1,i ∂u 1,j = (1 − λ) 2 A 1,1 i,j (x) + (1 − λ)B 1 
i,j (x) ∂ 2 f (x) ∂u −1,i ∂u 1,j = ∂ 2 f (x) ∂u 1,i ∂u −1,j = λ(1 − λ) · A 1,1 i,j (x) ∂ 2 f (x) ∂u −1,i ∂v s,−1 = λC 1,s i,1 (x) + D 1,s i (x) ∂ 2 f (x) ∂u 1,i ∂v s,1 = (1 − λ)C 1,s i,1 (x) + D 1,s i (x) ∂ 2 f (x) ∂u −1,i ∂v s,1 = λ · C 1,s i,1 (x) = λ · ∂ 2 ϕ(x) ∂u ϕ 1,i ∂v ϕ s,1 ∂ 2 f (x) ∂u 1,i ∂v s,−1 = (1 − λ) · C 1,s i,1 (x) = (1 − λ) · ∂ 2 ϕ(x) ∂u ϕ 1,i ∂v ϕ s,1 ∂ 2 f (x) ∂v s,p ∂v t,q = E s,t 1,1 (x) = ∂ 2 ϕ(x) ∂v ϕ s,1 ∂v ϕ t,1 and ∂ ∂w ϕ = α (ϕ(x α ) − y α ) · ∂ϕ(x α ) ∂w ϕ . From this it follows immediately that if ∂ ∂w ϕ (w ϕ ) = 0, then ∂L ∂w (γ 1 λ (w ϕ )) = 0 for all λ (cf. [9], [15]). For the second derivative we get and for q ≥ 2 and p ∈ {−1, 1} and all i, j, s, t ∂ 2 L ∂u −1,i ∂u q,j = λA 1,q i,j + λA 1,1 i,j ∂ 2 L ∂u 1,i ∂u q,j = (1 − λ)A 1,q i,j + (1 − λ)A 1,q i,j ∂ 2 L ∂u −1,i ∂v s,q = λC 1,s i,q + λC 1,s i,q ∂ 2 L ∂u 1,i ∂v s,q = (1 − λ)C 1,s i,q + (1 − λ)C 1,s i,q ∂ 2 L ∂u q,i ∂v s,p = C q,s i,p + C q,s i,p ∂ 2 L ∂v s,p ∂v t,q = E s,t 1,q + E s,t 1,q and for p, q ≥ 2 and all i, j, s, t ∂ 2 L ∂u p,i ∂u q,j = A p,q i,j + δ(q, p)B p i,j (x) + A p,q i,j = ∂ 2 ∂u ϕ p,i ∂u ϕ q,j ∂ 2 L ∂u p,i ∂v s,q = C p,s i,q + δ(q, p)D p,s i + C p,s i,q = ∂ 2 ∂u ϕ p,i ∂v ϕ s,q ∂ 2 L ∂v s,p ∂v t,q = E s,t p,q + E s,t p,q = ∂ 2 ∂v ϕ s,p ∂v ϕ t,q 6) Change of basis Choose any real numbers α = −β such that λ = β α+β (equivalently αλ − β(1 − λ) = 0) and set µ −1,i = u −1,i + u 1,i µ 1,i = α · u −1,i − β · u 1,i ν s,−1 = v s,−1 + v s,1 ν s,1 = v s,−1 − v s,1 . ∂ 2 L ∂w∂r = α (f (x α ) − y α ) · ∂ 2 f (x α ) ∂w∂r + α ∂f (x α ) ∂w · ∂f (x α )∂ 2 L ∂u −1,i ∂u −1,j = λ 2 A 1,1 i,j + λB 1 i,j + λ 2 A 1,1 i,j ∂ 2 L ∂u 1,i ∂u 1,j = (1 − λ) 2 A 1,1 i,j + (1 − λ)B 1 i,j + (1 − λ) 2 A 1,1 i,j ∂ 2 L ∂u −1,i ∂u 1,j = λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j Then at γ 1 λ ([u 1,i ] i , [v s,1 ] s ,w), ∂ 2 L ∂µ −1,i ∂µ −1,j = ∂ ∂u −1,i + ∂ ∂u 1,i ∂L(x) ∂u −1,j + ∂L(x) ∂u 1,j = ∂ 2 L(x) ∂u −1,i ∂u −1,j + ∂ 2 L(x) ∂u −1,i ∂u 1,j + ∂ 2 L(x) ∂u 1,i ∂u −1,j + ∂ 2 L(x) ∂u 1,i ∂u 1,j = λ 2 A 1,1 i,j + λB 1 i.j + λ 2 A 1,1 i,j + λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + (1 − λ) 2 A 1,1 i,j + (1 − λ)B 1 i.j + (1 − λ) 2 A 1,1 i,j = A 1,1 i,j + B 1 i.j + A 1,1 i,j ∂ 2 L ∂µ 1,i ∂µ 1,j = α ∂ ∂u −1,i − β ∂ ∂u 1,i α ∂L(x) ∂u −1,j − β ∂L(x) ∂u 1,j = α 2 ∂ 2 L(x) ∂u −1,i ∂u −1,j − αβ ∂ 2 L(x) ∂u −1,i ∂u 1,j − αβ ∂ 2 L(x) ∂u 1,i ∂u −1,j + β 2 ∂ 2 L(x) ∂u 1,i ∂u 1,j = α 2 λ 2 A 1,1 i,j + λB 1 i.j + λ 2 A 1,1 i,j − αβ λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j − αβ λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + β 2 (1 − λ) 2 A 1,1 i,j + (1 − λ)B 1 i.j + (1 − λ) 2 A 1,1 i,j = αβB 1 i.j ∂ 2 L ∂µ −1,i ∂µ 1,j = ∂ ∂u −1,i + ∂ ∂u 1,i α ∂L(x) ∂u −1,j − β ∂L(x) ∂u 1,j = α ∂ 2 L(x) ∂u −1,i ∂u −1,j − β ∂ 2 L(x) ∂u −1,i ∂u 1,j + α ∂ 2 L(x) ∂u 1,i ∂u −1,j − β ∂ 2 L(x) ∂u 1,i ∂u 1,j = α λ 2 A 1,1 i,j + λB 2 i.j + λ 2 A 1,1 i,j − β λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + α λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j − β (1 − λ) 2 A 1,1 i,j + (1 − λ)B 2 i.j + (1 − λ) 2 A 1,1 i,j = 0 ∂ 2 L ∂ν s,∂L(x) ∂v t,−1 − ∂L(x) ∂v t,1 = ∂ 2 L(x) ∂v s,−1 ∂v t,−1 − ∂ 2 L(x) ∂v s,−1 ∂v t,1 + ∂ 2 L(x) ∂v s,1 ∂v t,−1 − ∂ 2 L(x) ∂v s,1 ∂v t,1 = E s,t 1,1 + E s,t 1,1 − E s,t 1,1 + E s,t 1,1 + E s,t 1,1 + E s,t 1,1 − E s,t 1,1 + E s,t We also need to consider the second derivative with respect to the other variables ofw. If w is closer to the output than [u p,i ] p,i , [v s,q ] s,q belonging to layer γ where γ > l + 1, then we get
15,463
1812.06486
2904130053
Understanding the loss surface of neural networks is essential for the design of models with predictable performance and for their success in applications. Experimental results suggest that sufficiently deep and wide neural networks are not negatively impacted by suboptimal local minima. Despite recent progress, the reason for this outcome is not fully understood. Could deep networks have very few suboptimal local optima, if any at all? Or could all of their local optima be equally good? We provide a construction to show that suboptimal local minima (i.e., non-global ones), even though degenerate, exist for fully connected neural networks with sigmoid activation functions. The local minima obtained by our proposed construction belong to a connected set of local solutions that can be escaped from via a non-increasing path on the loss curve. For extremely wide neural networks with two hidden layers, we prove that every suboptimal local minimum belongs to such a connected set. This provides a partial explanation for the successful application of deep neural networks. In addition, we characterize under what conditions the same construction leads to saddle points instead of local minima for deep neural networks.
It is known that learning the parameters of neural networks is, in general, a hard problem. Blum and Rivest @cite_0 prove NP-completeness for a specific neural network. It has also been shown that local minima and other critical points exist in the loss function of neural network training (see e.g. @cite_36 @cite_23 @cite_16 @cite_7 @cite_28 @cite_26 ). The understanding of these critical points has led to significant improvements in neural network training. This includes weight initialization techniques (e.g. @cite_6 ), improved backpropagation algorithms to avoid saturation effects in neurons @cite_29 , entirely new activation functions, or the use of second order information @cite_34 @cite_18 .
{ "abstract": [ "", "", "Becoming trapped in suboptimal local minima is a perennial problem when optimizing visual models, particularly in applications like monocular human body tracking where complicated parametric models are repeatedly fitted to ambiguous image measurements. We show that trapping can be significantly reduced by building ‘roadmaps’ of nearby minima linked by transition pathways—paths leading over low ‘mountain passes’ in the cost surface—found by locating the transition state (codimension-1 saddle point) at the top of the pass and then sliding downhill to the next minimum. We present two families of transition-state-finding algorithms based on local optimization. In eigenvector tracking, unconstrained Newton minimization is modified to climb uphill towards a transition state, while in hypersurface sweeping, a moving hypersurface is swept through the space and moving local minima within it are tracked using a constrained Newton method. These widely applicable numerical methods, which appear not to be known in vision and optimization, generalize methods from computational chemistry where finding transition states is critical for predicting reaction parameters. Experiments on the challenging problem of estimating 3D human pose from monocular images show that our algorithms find nearby transition states and minima very efficiently, but also underline the disturbingly large numbers of minima that can exist in this and similar model based vision problems.", "We show that for a single neuron with the logistic function as the transfer function the number of local minima of the error function based on the square loss can grow exponentially in the dimension.", "It was assumed proven that two-layer feedforward neural networks with t-1 hidden nodes, when presented with t input patterns, can not have any suboptimal local minima on the error surface. In this paper, however, we shall give a counterexample to this assumption. This counterexample consists of a region of local minima with nonzero error on the error surface of a neural network with three hidden nodes when presented with four patterns (the XOR problem). We will also show that the original proof is valid only when an unusual definition of local minimum is used.", "Abstract We propose an improved backpropagation algorithm intended to avoid the local minima problem caused by neuron saturation in the hidden layer. Each training pattern has its own activation functions of neurons in the hidden layer. When the network outputs have not got their desired signals, the activation functions are adapted so as to prevent neurons in the hidden layer from saturating. Simulations on some benchmark problems have been performed to demonstrate the validity of the proposed method.", "The training of neural net classifiers is often hampered by the occurrence of local minima, which results in the attainment of inferior classification performance. It has been shown that the occurrence of local minima in the criterion function is often related to specific patterns of defects in the classifier. In particular, three main causes for local minima were identified. Such an understanding of the physical correlates of local minima suggests sensible ways of choosing the weights from which the training process is initiated. A method of initialization is introduced and shown to decrease the possibility of local minima occurring on various test problems. >", "In the neural-network parameter space, an attractive field is likely to be induced by singularities. 
In such a singularity region, first-order gradient learning typically causes a long plateau with very little change in the objective function value E (hence, a flat region). Therefore, it may be confused with \"attractive\" local minima. Our analysis shows that the Hessian matrix of E tends to be indefinite in the vicinity of (perturbed) singular points, suggesting a promising strategy that exploits negative curvature so as to escape from the singularity plateaus. For numerical evidence, we limit the scope to small examples (some of which are found in journal papers) that allow us to confirm singularities and the eigenvalues of the Hessian matrix, and for which computation using a descent direction of negative curvature encounters no plateau. Even for those small problems, no efficient methods have been previously developed that avoided plateaus.", "We consider a 2-layer, 3-node, n-input neural network whose nodes compute linear threshold functions of their inputs. We show that it is NP-complete to decide whether there exist weights and thresholds for this network so that it produces output consistent with a given set of training examples. We extend the result to other simple networks. We also present a network for which training is hard but where switching to a more powerful representation makes training easier. These results suggest that those looking for perfect training algorithms cannot escape inherent computational difficulties just by considering only simple or very regular networks. They also suggest the importance, given a training problem, of finding an appropriate network and input encoding for that problem. It is left as an open problem to extend our result to nodes with nonlinear functions such as sigmoids.", "Local minima and plateaus pose a serious problem in learning of neural networks. We investigate the hierarchical geometric structure of the parameter space of three-layer perceptrons in order to show the existence of local minima and plateaus. It is proved that a critical point of the model with H−1 hidden units always gives many critical points of the model with H hidden units. These critical points consist of many lines in the parameter space, which can cause plateaus in learning of neural networks. Based on this result, we prove that a point in the critical lines corresponding to the global minimum of the smaller model can be a local minimum or a saddle point of the larger model. We give a necessary and sufficient condition for this, and show that this kind of local minima exist as a line segment if any. The results are universal in the sense that they do not require special properties of the target, loss functions and activation functions, but only use the hierarchical structure of the model.", "We present a theoretical analysis of singular points of artificial deep neural networks, resulting in providing deep neural network models having no critical points introduced by a hierarchical structure. It is considered that such deep neural network models have good nature for gradient-based optimization. First, we show that there exist a large number of critical points introduced by a hierarchical structure in deep neural networks as straight lines, depending on the number of hidden layers and the number of hidden neurons. Second, we derive a sufficient condition for deep neural networks having no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. 
It is also shown that the existence of critical points introduced by a hierarchical structure is determined by the rank and the regularity of weight matrices for a specific class of deep neural networks. Finally, two kinds of implementation methods of the sufficient conditions to have no critical points are provided. One is a learning algorithm that can avoid critical points introduced by the hierarchical structure during learning (called avoidant learning algorithm). The other is a neural network that does not have some critical points introduced by the hierarchical structure as an inherent property (called avoidant neural network)." ], "cite_N": [ "@cite_18", "@cite_26", "@cite_7", "@cite_36", "@cite_28", "@cite_29", "@cite_6", "@cite_34", "@cite_0", "@cite_23", "@cite_16" ], "mid": [ "", "", "2101458411", "2101762657", "2100943555", "2093953062", "2052387497", "2155147726", "196871588", "1988485873", "2464095735" ] }
Non-attracting Regions of Local Minima in Deep and Wide Neural Networks
At the heart of most optimization problems lies the search for the global minimum of a loss function. The common approach to finding a solution is to initialize at random in parameter space and subsequently follow directions of decreasing loss based on local methods. This approach lacks a global progress criteria, which leads to descent into one of the nearest local minima. Since the loss function of deep neural networks is non-convex, the common approach of using gradient descent variants is vulnerable precisely to that problem. Authors pursuing the early approaches to local descent by back-propagating gradients [1] experimentally noticed that suboptimal local minima appeared surprisingly harmless. More recently, for deep neural networks, the earlier observations were further supported by the experiments of e.g., [2]. Several authors aimed to provide theoretical insight for this behavior. Broadly, two views may be distinguished. Some, aiming at explanation, rely on simplifying modeling assumptions. Others investigate neural networks under realistic assumptions, but often focus on failure cases only. Recently, Nguyen and Hein [3] provide partial explanations for deep and extremely wide neural networks for a class of activation functions including the commonly used sigmoid. Extreme width is characterized by a "wide" layer that has more neurons than input patterns to learn. For almost every instantiation of parameter values w (i.e. for all but a null set of parameter values) it is shown that, if the loss function has a local minimum at w, then this local minimum must be a global one. This suggests that for deep and wide neural networks, possibly every local minimum is global. The question on what happens at the null set of parameter values, for which the result does not hold, remains unanswered. Similar observations for neural networks with one hidden layer were made earlier by Gori and Tesi [4] and Poston et al. [5]. Poston et al. [5] show for a neural network with one hidden layer and sigmoid activation function that, if the hidden layer has more nodes than training patterns, then the error function (squared sum of prediction losses over the samples) has no suboptimal "local minimum" and "each point is arbitrarily close to a point from which a strictly decreasing path starts, so such a point cannot be separated from a so called good point by a barrier of any positive height" [5]. It was criticized by Sprinkhuizen-Kuyper and Boers [6] that the definition of a local minimum used in the proof of [5] was rather strict and unconventional. In particular, the results do not imply that no suboptimal local minima, defined in the usual way, exist. As a consequence, the notion of attracting and non-attracting regions of local minima were introduced and the authors prove that non-attracting regions exist by providing an example for the extended XOR problem. The existence of these regions imply that a gradient-based approach descending the loss surface using local information may still not converge to the global minimum. The main objective of this work is to revisit the problem of such non-attracting regions and show that they also exist in deep and wide networks. In particular, a gradient based approach may get stuck in a suboptimal local minimum. Most importantly, the performance of deep and wide neural networks cannot be explained by the analysis of the loss curve alone, without taking proper initialization or the stochasticity of SGD into account. Our observations are not fundamentally negative. 
First, the local minima we find are rather degenerate. With proper initialization, a local descent technique is unlikely to get stuck in one of the degenerate, suboptimal local minima 1 . Second, the minima reside on a non-attracting region of local minima (see Definition 1). Due to its exploration properties, stochastic gradient descent will eventually be able to escape from such a region (see [8]). We conjecture that in sufficiently wide and deep networks, except for a null set of parameter values as starting points, there is always a monotonically decreasing path down to the global minimum. This was shown in [5] for neural networks with one hidden layer, sigmoid activation function and square loss, and we generalize this result to neural networks with two hidden layers. (More precisely, our result holds for all neural networks with square loss and a class of activation functions including the sigmoid, where the wide layer is the last or second last hidden layer.) This implies that in such networks every local minimum belongs to a non-attracting region of local minima. Our proof of the existence of suboptimal local minima even in extremely wide and deep networks is based on a construction of local minima in neural networks given by Fukumizu and Amari [9]. By relying on careful computation, we are able to characterize when this construction is applicable to deep neural networks. Interestingly, in deeper layers, the construction rarely seems to lead to local minima, but more often to saddle points. The argument that saddle points rather than suboptimal local minima are the main problem in deep networks has been raised before (see [10]), but a theoretical justification [11] uses strong assumptions that do not exactly hold in neural networks. Here, we provide the first analytical argument, under realistic assumptions on the neural network structure, describing when certain critical points of the training loss lead to saddle points in deeper networks.

III. MAIN RESULTS

A. Problem definition

We consider regression networks with fully connected layers of size $n_l$, $0 \le l \le L$, given by
$$f(x) = w^L\big(\sigma\big(w^{L-1}\big(\sigma(\dots(w^2(\sigma(w^1(x) + w^1_0)) + w^2_0)\dots)\big) + w^{L-1}_0\big)\big) + w^L_0,$$
where $w^l \in \mathbb{R}^{n_l \times n_{l-1}}$ denotes the weight matrix of the $l$-th layer, $1 \le l \le L$, $w^l_0$ the bias terms, and $\sigma$ a nonlinear activation function. The neural network function is denoted by $f$, and we notationally suppress its dependence on the parameters. We assume the activation function $\sigma$ to belong to the class of strictly monotonically increasing, analytic, bounded functions on $\mathbb{R}$ with image in an interval $(c, d)$ such that $0 \in [c, d]$, a class we denote by $\mathcal{A}$. As prominent examples, the sigmoid activation function $\sigma(t) = \frac{1}{1+\exp(-t)}$ and $\sigma(t) = \tanh(t)$ lie in $\mathcal{A}$. We assume no activation function at the output layer. The neural network is assumed to be a regression network mapping into the real domain $\mathbb{R}$, i.e. $n_L = 1$ and $w^L \in \mathbb{R}^{1 \times n_{L-1}}$. We train on a finite dataset $(x_\alpha, y_\alpha)_{1 \le \alpha \le N}$ of size $N$ with input patterns $x_\alpha \in \mathbb{R}^{n_0}$ and desired target values $y_\alpha \in \mathbb{R}$. We aim to minimize the squared loss
$$L = \sum_{\alpha=1}^{N} (f(x_\alpha) - y_\alpha)^2.$$
Further, $w$ denotes the collection of all $w^l$. The dependence of the neural network function $f$ on $w$ translates into a dependence $L = L(w)$ of the loss function on the parameters $w$. Due to the assumptions on $\sigma$, $L(w)$ is twice continuously differentiable. The goal of training a neural network consists of minimizing $L(w)$ over $w$.
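For concreteness, a minimal sketch of this setup in Python/NumPy (sizes, weights and data are placeholders, and the sigmoid stands in for an activation function from $\mathcal{A}$): the network applies $\sigma$ after every affine layer except the last, and training minimizes the squared loss over the finite dataset.

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def network(weights, biases, x):
    # f(x) = w^L( sigma( ... sigma(w^1 x + w^1_0) ... ) + w^{L-1}_0 ) + w^L_0
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = sigmoid(W @ a + b)
    return (weights[-1] @ a + biases[-1]).item()

def squared_loss(weights, biases, xs, ys):
    # L(w) = sum_alpha (f(x_alpha) - y_alpha)^2
    return sum((network(weights, biases, x) - y) ** 2 for x, y in zip(xs, ys))

rng = np.random.default_rng(0)
sizes = [3, 5, 4, 1]                      # n_0 = 3 inputs, two hidden layers, n_L = 1 output
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(size=m) for m in sizes[1:]]

xs = [rng.normal(size=3) for _ in range(10)]
ys = [rng.normal() for _ in range(10)]
print(squared_loss(weights, biases, xs, ys))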
There is a unique value L 0 denoting the infimum of the neural network's loss (most often L 0 = 0 in our examples). Any set of weights w • that satisfies L(w • ) = L 0 is called a global minimum. Due to its non-convexity, the loss function L(w) of a neural network is in general known to potentially suffer from local minima (precise definition of a local minimum below). We will study the existence of suboptimal local minima in the sense that a local minimum w * is suboptimal if its loss L(w * ) is strictly larger than L 0 . We refer to deep neural networks as models with more than one hidden layer. Further, we refer to wide neural networks as the type of model considered in [3]- [5] with one hidden layer containing at least as many neurons as input patterns (i.e. n l ≥ N for some 1 ≤ l < L in our notation). Disclaimer: Naturally, training for zero global loss is not desirable in practice, neither is the use of fully connected wide and deep neural networks necessarily. The results of this paper are of theoretical importance. To be able to understand the complex learning behavior of deep neural networks in practice, it is a necessity to understand the networks with the most fundamental structure. In this regard, while our result are not directly applicable to neural networks used in practice, they do offer explanations for their learning behavior. B. A special kind of local minimum The standard definition of a local minimum, which is also used here, is a point w * such that w * has a neighborhood U with L(w) ≥ L(w * ) for all w ∈ U . Since local minima do not need to be isolated (i.e. L(w) > L(w * ) for all w ∈ U \ {w * }) two types of connected regions of local minima may be distinguished. Note that our definition slightly differs from the one by [6]. Definition 1. [6] Let : R n → R be a differentiable function. Suppose R is a maximal connected subset of parameter values w ∈ R m , such that every w ∈ R is a local minimum of with value (w) = c. • R is called an attracting region of local minima, if there is a neighborhood U of R such that every continuous path Γ(t), which is non-increasing in and starts from some Γ(0) ∈ U , satisfies (Γ(t)) ≥ c for all t. • R is called a non-attracting region of local minima, if every neighborhood U of R contains a point from where a continuous path Γ(t) exists that is non-increasing in and ends in a point Γ(1) with (Γ(1)) < c. Despite its non-attractive nature, a non-attracting region R of local minima may be harmful for a gradient descent approach. A path of greatest descent can end in a local minimum on R. However, no point z on R needs to have a neighborhood of attraction in the sense that following the path of greatest descent from a point in a neighborhood of z will lead back to z. (The path can lead to a different local minimum on R close by or reach points with strictly smaller values than c.) In the example of such a region for the 2-3-1 XOR network provided in [6], a local minimum (of higher loss than the global loss) resides at points in parameter space with some coordinates at infinity. In particular, a gradient descent approach may lead to diverging parameters in that case. However, a different non-increasing path down to the global minimum always exists. It can be shown that local minima at infinity also exist for wide and deep neural networks. (The proof can be found in Appendix A.) Theorem 1 (cf. [6] Section III). 
Let L denote the squared loss of a fully connected regression neural network with sigmoid activation functions, having at least one hidden layer and each hidden layer containing at least two neurons. Then, for almost every finite dataset, the loss function L possesses a local minimum at infinity. The local minimum is suboptimal whenever dataset and neural network are such that a constant function is not an optimal solution. A different type of non-attracting regions of local minima (without infinite parameter values) is considered for neural networks with one hidden layer by Fukumizu and Amari [9] and Wei et al. [8] under the name of singularities. This type of region is characterized by singularities in the weight space (a subset of the null set not covered by the results of Nguyen and Hein [3]) leading to a loss value strictly larger than the global loss. The dynamics around such region are investigated by Wei et al. [8]. Again, a full batch gradient descent approach can get stuck in a local minimum in this type of region. A rough illustration of the nature of these non-attracting regions of local minima is depicted in Fig. 1. Non-attracting regions of local minima do not only exist in small two-layer neural networks. Theorem 2. There exist deep and wide fully-connected neural networks with sigmoid activation function such that the squared loss function of a finite dataset has a non-attracting region of local minima (at finite parameter values). The construction of such local minima is discussed in Section V with a complete proof in Appendix B. Corollary 1. Any attempt to show for fully connected deep and wide neural networks that a gradient descent technique will always lead to a global minimum only based on a description of the loss curve will fail if it doesn't take into consideration properties of the learning procedure (such as the stochasticity of stochastic gradient descent), properties of a suitable initialization technique, or assumptions on the dataset. On the positive side, we point out that a stochastic method such as stochastic gradient descent has a good chance to escape a non-attracting region of local minima due to noise. With infinite time at hand and sufficient exploration, the region can be escaped from with high probability (see [8] for a more detailed discussion). In Section V-A we will further characterize when the method used to construct examples of regions of non-attracting local minima is applicable. This characterization limits us to the construction of extremely degenerate examples. We give an intuitive argument why assuring the necessary assumptions for the construction becomes more difficult for wider and deeper networks and why it is natural to expect a lower suboptimal loss (where the suboptimal minima are less "bad") the less degenerate the constructed minima are and the more parameters a neural network possesses. C. Non-increasing path to a global minimum By definition, every neighborhood of a non-attracting region of local minima contains points from where a non-increasing path to a value less than the value of the region exists. (By definition all points belonging to a nonattracting region have the same value, in fact they are all local minima.) The question therefore arises whether from almost everywhere in parameter space there is such a non-increasing path all the way down to a global minimum. 
If the last hidden layer is the wide layer having more neurons than input patterns (for example, consider a wide two-layer neural network), then this holds true by the results of [3] (and [4], [5]). We show that the same conclusion holds for wide neural networks in which the second last hidden layer is the wide one. In particular, this implies that for wide neural networks with two hidden layers, starting from almost everywhere in parameter space, there is a non-increasing path down to a global minimum.

Theorem 3. Consider a fully connected regression neural network with activation function in the class $\mathcal{A}$ equipped with the squared loss function for a finite dataset. Assume that the second last hidden layer contains more neurons than the number of input patterns. Then, for each set of parameters $w$ and all $\varepsilon > 0$, there is $w'$ such that $\|w - w'\| < \varepsilon$ and such that a path non-increasing in loss from $w'$ to a global minimum where $f(x_\alpha) = y_\alpha$ for each $\alpha$ exists.

Corollary 2. Consider a wide, fully connected regression neural network with two hidden layers and activation function in the class $\mathcal{A}$, trained to minimize the squared loss over a finite dataset. Then all suboptimal local minima are contained in a non-attracting region of local minima.

The rest of the paper contains the arguments leading to the given results.

IV. NOTATIONAL CHOICES

We fix additional notation beside the problem definition from Section III-A. For input $x_\alpha$, we denote the pattern vector of values at all neurons of layer $l$ before activation by $n(l; x_\alpha)$ and after activation by $\mathrm{act}(l; x_\alpha)$.

[Fig. 2: visualization of the map $\gamma^1_\lambda$ on a small example network with inputs $x_{\alpha,1}, x_{\alpha,2}$ and output $f(x_\alpha)$; the added neuron copies the incoming weights $[u_{1,i}]_i$ of neuron 1, the outgoing weight $v_{\bullet,1}$ is split into $\lambda \cdot v_{\bullet,1}$ and $(1-\lambda) \cdot v_{\bullet,1}$, while $v_{\bullet,2}$, $v_{\bullet,3}$ and the bias weight $v_{\bullet,0}$ remain unchanged.]

In general, we will denote column vectors of size $n$ with coefficients $z_i$ by $[z_i]_{1 \le i \le n}$ or simply $[z_i]_i$, and matrices with entries $a_{i,j}$ at position $(i,j)$ by $[a_{i,j}]_{i,j}$. The neuron value pattern $n(l; x)$ is then a vector of size $n_l$ denoted by $n(l; x) = [n(l, k; x)]_{1 \le k \le n_l}$, and the activation pattern $\mathrm{act}(l; x) = [\mathrm{act}(l, k; x)]_{1 \le k \le n_l}$. Using that $f$ can be considered a composition of functions from consecutive layers, we denote the function from $\mathrm{act}(k; x)$ to the output by $h_{\bullet,k}(x)$. For convenience of the reader, a tabular summary of all notation is provided in Appendix A.

V. CONSTRUCTION OF LOCAL MINIMA

We recall the construction of so-called hierarchical suboptimal local minima given in [9] and extend it to deep networks. For the hierarchical construction of critical points, we add one additional neuron $n(l, -1; x)$ to a hidden layer $l$. (Negative indices are unused for neurons, which allows us to add a neuron with this index.) Once we have fixed the layer $l$, we denote the parameters of the incoming linear transformation by $[u_{p,i}]_{p,i}$, so that $u_{p,i}$ denotes the contribution of neuron $i$ in layer $l-1$ to neuron $p$ in layer $l$, and the parameters of the outgoing linear transformation by $[v_{s,q}]_{s,q}$, where $v_{s,q}$ denotes the contribution of neuron $q$ in layer $l$ to neuron $s$ in layer $l+1$. For weights of the output layer (into a single neuron), we write $w_{\bullet,j}$ instead of $w_{1,j}$. We recall the function $\gamma$ used in [9] to construct local minima in a hierarchical way. This function $\gamma$ describes the mapping from the parameters of the original network to the parameters after adding a neuron $n(l, -1; x)$ and is determined by incoming weights $u_{-1,i}$ into $n(l, -1; x)$, outgoing weights $v_{s,-1}$ of $n(l, -1; x)$, and a change of the outgoing weights $v_{s,r}$ of $n(l, r; x)$ for one chosen $r$ in the smaller network.
Sorting the network parameters in a convenient way, the embedding of the smaller network into the larger one is defined for any $\lambda \in \mathbb{R}$ by a function $\gamma^r_\lambda$ mapping parameters $([u_{r,i}]_i, [v_{s,r}]_s, \tilde{w})$ of the smaller network to parameters $([u_{-1,i}]_i, [v_{s,-1}]_s, [u_{r,i}]_i, [v_{s,r}]_s, \tilde{w})$ of the larger network, and is given by
$$\gamma^r_\lambda([u_{r,i}]_i, [v_{s,r}]_s, \tilde{w}) := \big([u_{r,i}]_i,\ [\lambda \cdot v_{s,r}]_s,\ [u_{r,i}]_i,\ [(1-\lambda) \cdot v_{s,r}]_s,\ \tilde{w}\big).$$
Here $\tilde{w}$ denotes the collection of all remaining network parameters, i.e., all $[u_{p,i}]_i$, $[v_{s,q}]_s$ for $p, q \notin \{-1, r\}$ and all parameters from linear transformations of layers with index smaller than $l$ or larger than $l+1$, if existent. A visualization of $\gamma^1_\lambda$ is shown in Fig. 2.

Important fact: For the functions $\varphi$, $f$ of the smaller and larger network at parameters $([u^*_{r,i}]_i, [v^*_{s,r}]_s, \tilde{w}^*)$ and $\gamma^r_\lambda([u^*_{r,i}]_i, [v^*_{s,r}]_s, \tilde{w}^*)$ respectively, we have $\varphi(x) = f(x)$ for all $x$. More generally, we even have $n_\varphi(l, k; x) = n(l, k; x)$ and $\mathrm{act}_\varphi(l, k; x) = \mathrm{act}(l, k; x)$ for all $l$, $x$ and $k \ge 0$.

A. Characterization of hierarchical local minima

Using $\gamma^r_\lambda$ to embed a smaller deep neural network into a second one with one additional neuron, it has been shown that critical points get mapped to critical points.

Theorem 4 (Nitta [15]). Consider two neural networks as in Section III-A, which differ by one neuron in layer $l$ with index $n(l, -1; x)$ in the larger network. If parameter choices $([u^*_{r,i}]_i, [v^*_{s,r}]_s, \tilde{w}^*)$ determine a critical point for the squared loss over a finite dataset in the smaller network then, for each $\lambda \in \mathbb{R}$, $\gamma^r_\lambda([u^*_{r,i}]_i, [v^*_{s,r}]_s, \tilde{w}^*)$ determines a critical point in the larger network.

As a consequence, whenever an embedding of a local minimum with $\gamma^r_\lambda$ into a larger network does not lead to a local minimum, then it leads to a saddle point instead. (There are no local maxima in the networks we consider, since the loss function is convex with respect to the parameters of the last layer.) For neural networks with one hidden layer, it has been characterized when such a critical point leads to a local minimum.

Theorem 5 (Fukumizu, Amari [9]). Consider two neural networks as in Section III-A with only one hidden layer, which differ by one neuron in the hidden layer with index $n(1, -1; x)$ in the larger network. Assume that parameters $([u^*_{r,i}]_i, v^*_{\bullet,r}, \tilde{w}^*)$ determine a local minimum for the squared loss over a finite dataset in the smaller neural network and that $\lambda \notin \{0, 1\}$. Then $\gamma^r_\lambda([u^*_{r,i}]_i, v^*_{\bullet,r}, \tilde{w}^*)$ determines a local minimum in the larger network if the matrix $[B^r_{i,j}]_{i,j}$ given by
$$B^r_{i,j} = \sum_\alpha (f(x_\alpha) - y_\alpha) \cdot v^*_{\bullet,r} \cdot \sigma''(n(1, r; x_\alpha)) \cdot x_{\alpha,i} \cdot x_{\alpha,j}$$
is positive definite and $0 < \lambda < 1$, or if $[B^r_{i,j}]_{i,j}$ is negative definite and $\lambda < 0$ or $\lambda > 1$. (Here, we denote the $k$-th input dimension of input $x_\alpha$ by $x_{\alpha,k}$.)

We extend the previous theorem to a characterization in the case of deep networks. We note that a similar computation was performed in [19] for neural networks with two hidden layers.

Theorem 6. Consider two (possibly deep) neural networks as in Section III-A, which differ by one neuron in layer $l$ with index $n(l, -1; x)$ in the larger network. Assume that the parameter choices $([u^*_{r,i}]_i, [v^*_{s,r}]_s, \tilde{w}^*)$ determine a local minimum for the squared loss over a finite dataset in the smaller network.
If the matrix $[B^r_{i,j}]_{i,j}$ defined by
$$B^r_{i,j} := \sum_\alpha (f(x_\alpha) - y_\alpha) \cdot \sum_k \frac{\partial h_{\bullet,l+1}(n(l+1; x_\alpha))}{\partial n(l+1, k; x_\alpha)} \cdot v^*_{k,r} \cdot \sigma''(n(l, r; x_\alpha)) \cdot \mathrm{act}(l-1, i; x_\alpha) \cdot \mathrm{act}(l-1, j; x_\alpha) \qquad (1)$$
is either
• positive definite and $\lambda \in I := (0, 1)$, or
• negative definite and $\lambda \in I := (-\infty, 0) \cup (1, \infty)$,
then $\{\gamma^r_\lambda([u^*_{r,i}]_i, [v^*_{s,r}]_s, \tilde{w}^*) \mid \lambda \in I\}$ determines a non-attracting region of local minima in the larger network if and only if
$$D^{r,s}_i := \sum_\alpha (f(x_\alpha) - y_\alpha) \cdot \frac{\partial h_{\bullet,l+1}(n(l+1; x_\alpha))}{\partial n(l+1, s; x_\alpha)} \cdot \sigma'(n(l, r; x_\alpha)) \cdot \mathrm{act}(l-1, i; x_\alpha) \qquad (2)$$
is zero, $D^{r,s}_i = 0$, for all $i, s$.

Remark 1. In the case of a neural network with only one hidden layer as considered in Theorem 5, the function $h_{\bullet,l+1}(x)$ is the identity function on $\mathbb{R}$ and the matrix $[B^r_{i,j}]_{i,j}$ in (1) reduces to the matrix $[B^r_{i,j}]_{i,j}$ in Theorem 5. The condition that $D^{r,s}_i = 0$ for all $i, s$ does hold for shallow neural networks with one hidden layer, as we show below. This proves Theorem 6 to be consistent with Theorem 5.

The theorem follows from a careful computation of the Hessian of the cost function $L(w)$, characterizing when it is positive (or negative) semidefinite, and checking that the loss function does not change along directions that correspond to an eigenvector of the Hessian with eigenvalue 0. We state the outcome of the computation in Lemma 1 and refer the reader interested in a full proof of Theorem 6 to Appendix B.

Lemma 1. Consider two (possibly deep) neural networks as in Section III-A, which differ by one neuron in layer $l$ with index $n(l, -1; x)$ in the larger network. Fix $1 \le r \le n_l$. Assume that the parameter choices $([u^*_{r,i}]_i, [v^*_{s,r}]_s, \tilde{w}^*)$ determine a critical point in the smaller network. Let $L$ denote the loss function of the larger network and $\ell$ the loss function of the smaller network. Let $\alpha \ne -\beta \in \mathbb{R}$ such that $\lambda = \frac{\beta}{\alpha+\beta}$. With respect to the basis of the parameter space of the larger network given by $([u_{-1,i} + u_{r,i}]_i, [v_{s,-1} + v_{s,r}]_s, \tilde{w}, [\alpha \cdot u_{-1,i} - \beta \cdot u_{r,i}]_i, [v_{s,-1} - v_{s,r}]_s)$, the Hessian of $L$ (i.e., the second derivative with respect to the new network parameters) at $\gamma^r_\lambda([u^*_{r,i}]_i, [v^*_{s,r}]_s, \tilde{w}^*)$ is given by
$$\begin{pmatrix}
\big[\frac{\partial^2 \ell}{\partial u_{r,i} \partial u_{r,j}}\big]_{i,j} & 2\big[\frac{\partial^2 \ell}{\partial u_{r,i} \partial v_{s,r}}\big]_{i,s} & \big[\frac{\partial^2 \ell}{\partial \tilde{w} \partial u_{r,i}}\big]_{i,\tilde{w}} & 0 & 0 \\
2\big[\frac{\partial^2 \ell}{\partial u_{r,i} \partial v_{s,r}}\big]_{s,i} & 4\big[\frac{\partial^2 \ell}{\partial v_{s,r} \partial v_{t,r}}\big]_{s,t} & 2\big[\frac{\partial^2 \ell}{\partial \tilde{w} \partial v_{s,r}}\big]_{s,\tilde{w}} & (\alpha - \beta)\big[D^{r,s}_i\big]_{s,i} & 0 \\
\big[\frac{\partial^2 \ell}{\partial \tilde{w} \partial u_{r,i}}\big]_{\tilde{w},i} & 2\big[\frac{\partial^2 \ell}{\partial \tilde{w} \partial v_{s,r}}\big]_{\tilde{w},s} & \big[\frac{\partial^2 \ell}{\partial \tilde{w} \partial \tilde{w}}\big]_{\tilde{w},\tilde{w}} & 0 & 0 \\
0 & (\alpha - \beta)\big[D^{r,s}_i\big]_{i,s} & 0 & \alpha\beta\big[B^r_{i,j}\big]_{i,j} & (\alpha + \beta)\big[D^{r,s}_i\big]_{i,s} \\
0 & 0 & 0 & (\alpha + \beta)\big[D^{r,s}_i\big]_{s,i} & 0
\end{pmatrix}$$

B. Shallow networks with a single hidden layer

For the construction of suboptimal local minima in wide two-layer networks, we begin by following the experiments of [9] that prove the existence of suboptimal local minima in (non-wide) two-layer neural networks. Consider a neural network of size 1-2-1. We use the corresponding network function $f$ to construct a dataset $(x_\alpha, y_\alpha)_{\alpha=1}^N$ by randomly choosing $x_\alpha$ and letting $y_\alpha = f(x_\alpha)$. By construction, we know that a neural network of size 1-2-1 can perfectly fit the dataset with zero error. Consider now a smaller network of size 1-1-1 having too little expressive power for a global fit of all data points. We find parameters $[u^*_{1,1}, v^*_\bullet]$ where the loss function of the neural network is in a local minimum with non-zero loss. For this small example, the required positive definiteness of $[B^1_{i,j}]_{i,j}$ from (1) for a use of $\gamma_\lambda$ with $\lambda \in (0, 1)$ reduces to checking a real number for positivity, which we assume to hold true.
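This construction is easy to simulate. Below is a small NumPy sketch (not code from the paper) that implements the embedding $\gamma^r_\lambda$ for a one-hidden-layer network, evaluates the sign of the scalar $B$ from Theorem 5 for a 1-1-1 starting point, and checks numerically that repeated embeddings leave the network function unchanged. The data, the teacher parameters and the "trained" 1-1-1 parameters are placeholders; in the actual experiment they come from training to convergence.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def d2_sigmoid(t):
    # second derivative of the sigmoid: s'' = s(1-s)(1-2s)
    s = sigmoid(t)
    return s * (1.0 - s) * (1.0 - 2.0 * s)

def forward(u, u0, v, v0, x):
    # one-hidden-layer regression net: f(x) = sum_q v_q * sigma(u_q x + u0_q) + v0
    return v @ sigmoid(np.outer(u, x) + u0[:, None]) + v0          # shape (N,)

def gamma_split(u, u0, v, v0, r, lam):
    # gamma^r_lambda: duplicate neuron r's incoming weights and split its
    # outgoing weight into lam * v_r (new neuron) and (1 - lam) * v_r (old neuron)
    u_new  = np.append(u,  u[r])
    u0_new = np.append(u0, u0[r])
    v_new  = np.append(v,  lam * v[r])
    v_new[r] = (1.0 - lam) * v[r]
    return u_new, u0_new, v_new, v0

# toy data generated by a 1-2-1 teacher network (placeholder values)
rng = np.random.default_rng(0)
x = rng.normal(size=20)
y = 1.5 * sigmoid(2.0 * x - 0.5) - 0.8 * sigmoid(-1.0 * x + 0.3) + 0.1

# hypothetical 1-1-1 local minimum (in practice obtained by training to convergence)
u, u0, v, v0 = np.array([1.2]), np.array([0.1]), np.array([0.7]), 0.05

# scalar B from Theorem 5: B = sum_a (f(x_a) - y_a) * v * sigma''(n_a) * x_a^2
n = u[0] * x + u0[0]
res = forward(u, u0, v, v0, x) - y
B = np.sum(res * v[0] * d2_sigmoid(n) * x**2)
print("B =", B, "-> the lambda in (0,1) construction applies only if B > 0")

# embed repeatedly: the network function (and hence the loss) never changes
params = (u, u0, v, v0)
f_before = forward(*params, x)
for _ in range(5):                       # widen 1-1-1 -> 1-6-1
    params = gamma_split(*params, r=0, lam=0.3)
print("max |f_wide - f_small| =", np.max(np.abs(forward(*params, x) - f_before)))
```

If $B$ turns out to be negative instead, the relevant case is the same embedding with $\lambda$ outside $[0,1]$; either way, changing $\lambda$ continuously leaves the network function untouched, which is how the non-attracting nature of the resulting region shows up numerically.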
We can now apply γ λ and Theorem 5 to find parameters for a neural network of size 1-2-1 that determine a suboptimal local minimum. This example may serve as the base case for a proof by induction to show the following result. Theorem 7. There is a wide neural network with one hidden layer and arbitrarily many neurons in the hidden layer that has a non-attracting region of suboptimal local minima. Having already established the existence of parameters for a (small) neural network leading to a suboptimal local minimum, it suffices to note that iteratively adding neurons using Theorem 5 is possible. Iteratively at step t, we add a neuron n(1, −t; x) to the network by an application of γ 1 λ with the same λ ∈ (0, 1). The corresponding matrix from (1), B 1,(t) i,j = α (f (x α ) − y α ) · (1 − λ) t · v * •,1 · σ (n(l, 1; x α )) · x α,i · x α,j , is positive semidefinite. (We use here that neither f (x α ) nor n(l, 1; x α ) ever change during this construction.) By Theorem 5 we always find a suboptimal minimum with nonzero loss for the network for λ ∈ (0, 1). Note however, that a continuous change of λ to a value outside of [0, 1] does not change the network function, but leads to a saddle point. Hence, we found a non-attracting region of suboptimal minima. Remark 2. Since we started the construction from a network of size 1-1-1, our constructed example is extremely degenerate: The suboptimal local minima of the wide network have identical incoming weight vectors for each hidden neuron. Obviously, the suboptimality of this parameter setting is easily discovered. Also with proper initialization, the chance of landing in this local minimum is vanishing. However, one may also start the construction from a more complex network with a larger network with several hidden neurons. In this case, when adding a few more neurons using γ 1 λ , it is much harder to detect the suboptimality of the parameters from visual inspection. C. Deep neural networks According to Theorem 6, next to positive definiteness of the matrix B r i,j for some r, in deep networks there is a second condition for the construction of hierarchical local minima using the map γ r λ , i.e. D r,s i = 0. We consider conditions that make D r,s i = 0. Proposition 1. Suppose we have a hierarchically constructed critical point of the squared loss of a neural network constructed by adding a neuron into layer l with index n(l, −1; x) by application of the map γ r λ to a neuron n(l, r; x). Suppose further that for the outgoing weights v * s,r of n(l, r; x) we have s v * s,r = 0 , and suppose that D r,s i is defined as in (2). Then D r,s i = 0 if one of the following holds. (i) The layer l is the last hidden layer. (This condition includes the case l = 1 indexing the hidden layer in a two-layer network.) (ii) ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) = ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, t; x α ) for all s, t, α (iii) For each α and each t, with L α : = (f (x α ) − y α ) 2 , ∂L α ∂n(l + 1, t; x α ) = (f (x α ) − y α ) · ∂h •,l+1 (n(l + 1; x α ) ∂n(l + 1, t; x α ) = 0. (This condition holds in the case of the weight infinity attractors in the proof to Theorem 1 for l + 1 the second last layer. It also holds in a global minimum.) The proof is contained in Appendix C. D. Experiment for deep networks To construct a local minimum in a deep and wide neural network, we start by considering a three-layer network of size 2-2-4-1, i.e. we have two input dimensions, one output dimension and hidden layers of two and four neurons. 
We use its network function f to create a dataset of 50 samples (x α , f (x α )), hence we know that a network of size 2-2-4-1 can attain zero loss. We initialize a new neural network of size 2-2-2-1 and train it until convergence, before using the construction to add neurons to the network. When adding neurons to the last hidden layer using γ 1 λ , Proposition 1 assures that D 1,• i = 0 for all i. We check for positive definiteness of the matrix B 1 i,j , and only continue when this property holds. Having thus assured the necessary condition of Theorem 6, we can add a few neurons to the last hidden layer (by induction as in the two-layer case), which results in local minimum of a network of size 2-2-M-1. The local minimum of non-zero loss that we attain is suboptimal whenever M ≥ 4 by construction. For M ≥ 50 the network is wide. Experimentally, we show not only that indeed we end up with a suboptimal minimum, but also that it belongs to a non-attracting region of local minima. In Fig. 3 we show results after adding eleven neurons to the last hidden layer. On the left side, we plot the loss in the neighborhood of the constructed local minimum in parameter space. The top image shows the loss curve into randomly generated directions, the bottom displays the minimal loss over all these directions. On the top right we show the change of loss along one of the degenerate directions that allows reaching a saddle point. In such a saddle point we know from Lemma 1 the direction of descent. The image on the bottom right shows that indeed the direction allows a reduction in loss. Being able to reach a saddle point from a local minimum by a path of non-increasing loss shows that indeed we found a non-attracting region of local minima. E. A discussion of limitations and of the loss of non-attracting regions of suboptimal minima We fix a neuron in layer l and aim to use γ r λ to find a local minimum in the larger network. We then need to check whether a matrix B r i,j is positive definite, which depends on the dataset. Under strong independence assumptions (the signs of different eigenvalues of B r i,j are independent), one may argue similar to arguments in [10] that the probability of finding B r i,j to be positive definite (all eigenvalues positive) is exponentially decreasing in the number of possible neurons of the previous layer l − 1. At the same time, the number of neurons n(l, r; x) in layer l to use for the construction only increases linearly in the number of neurons in layer l. Experimentally, we use a four-layer neural network of size 2-8-12-8-1 to construct a (random) dataset containing 500 labeled samples. We train a network of size 2-4-6-4-1 on the dataset until convergence using SciPy's 2 BFGS implementation. For each layer l, we check each neuron r whether it can be used for enlargment of the network using the map γ r λ for some λ ∈ (0, 1), i.e., we check whether the corresponding matrix B r i,j is positive definite. We repeat this experiment 1000 times. For the first layer, we find that in 547 of 4000 test cases the matrix is positive definite. For the second layer we only find B r i,j positive definite in 33 of 6000 cases, and for the last hidden layer there are only 6 instances out of 4000 where the matrix B r i,j is positive definite. Since the matrix B r i,j is of size 2 × 2/4 × 4/6 × 6 for the first/second/last hidden layer respectively, the number of positive matrices is less than what would be expected under the strong independence assumptions discussed above. 
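The per-neuron test in this counting experiment is simply a definiteness check of the matrix $B^r$ from Eq. (1). A minimal sketch of such a check (assuming $B$ has already been assembled for the trained network) could look as follows; the $2^{-n}$ baseline in the comments is the heuristic expectation if all eigenvalue signs were independent fair coin flips, i.e. the strong independence assumption referred to above.

```python
import numpy as np

def is_positive_definite(B, tol=1e-10):
    # B is the symmetric matrix B^r_{i,j} of Eq. (1) for one candidate neuron r;
    # the construction of Theorem 6 with lambda in (0,1) needs all eigenvalues > 0
    B_sym = 0.5 * (B + B.T)              # symmetrize against numerical noise
    return np.all(np.linalg.eigvalsh(B_sym) > tol)

# Heuristic baseline: if the n eigenvalue signs were independent coin flips,
# a fraction 2**(-n) of the tested neurons would yield a positive definite B.
# For the 2-4-6-4-1 network above, B is 2x2 / 4x4 / 6x6 for the three hidden
# layers, giving baselines 1/4, 1/16 and 1/64 -- compared to the observed
# fractions 547/4000, 33/6000 and 6/4000, which are consistently smaller.
for n, tested, found in [(2, 4000, 547), (4, 6000, 33), (6, 4000, 6)]:
    print(f"n={n}: observed {found/tested:.4f} vs. independence baseline {2**-n:.4f}")
```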
In addition, in deeper layers, further away from the output layer, it seems dataset dependent and unlikely to us that $D^{r,s}_i = 0$. Simulations seem to support this belief. However, it is difficult to check the condition numerically. Firstly, it is hard to find the exact position of minima and we only compute numerical approximations of $D^{r,s}_i$. Secondly, the terms are small for sufficiently large networks and numerical errors play a role. Due to these two facts, it becomes barely possible to check the condition of exact equality to zero. In Fig. 4 we show the distribution of the maximal entries of the matrix $[D^{r,s}_i]$ for neurons in the first, second and third hidden layer of the network of size 2-4-6-4-1 trained as above. Note that for the third hidden layer we know from theory that at a critical point we have $D^{r,s}_i = 0$, but due to numerical errors much larger values arise. Further, a region of local minima as above requires linearly dependent activation pattern vectors. This is how linear dimensions for subsequent layers get lost, reducing the ability to approximate the target function. Intuitively, in a deep and wide neural network there are many possible directions of descent. Losing some of them still leaves the network with enough freedom to closely approximate the target function. As a result, these suboptimal minima have a loss close to the global loss. In summary, finding suboptimal local minima with high loss by the construction using $\gamma^r_\lambda$ becomes hard when the networks become deep and wide.

VI. PROVING THE EXISTENCE OF A NON-INCREASING PATH TO THE GLOBAL MINIMUM

In the previous section we showed the existence of non-attracting regions of local minima. This type of local minima does not rule out the possibility of non-increasing paths to the global minimum from almost everywhere in parameter space. In this section, we sketch the proof of Theorem 3 in the form of several lemmas, where, up to the basic assumptions on the neural network structure as in Section III-A (with activation function in $\mathcal{A}$), the assumption of one lemma is given by the conclusion of the previous one. A full proof can be found in Appendix D. We consider vectors that we call activation vectors, different from the activation pattern vectors $\mathrm{act}(l; x)$ from above. The activation vector at neuron $k$ in layer $l$ is denoted by $a^l_k$ and is defined by all values at the given neuron for the different samples $x_\alpha$: $a^l_k := [\mathrm{act}(l, k; x_\alpha)]_\alpha$. In other words, while we fix $l$ and $x$ for the activation pattern vectors $\mathrm{act}(l; x)$ and let $k$ run over its possible values, we fix $l$ and $k$ for the activation vectors $a^l_k$ and let $x$ run over the samples $x_\alpha$ in the dataset. The first step of the proof is to use the freedom given by $\varepsilon$ to perturb the parameters such that the activation vectors $a^{L-2}_k$ of the wide layer $L-2$ span the whole space $\mathbb{R}^N$. Lemma 3 and Lemma 4 (stated and proved in Appendix D) then show that any continuous path $\rho(t)$ of activation values of the last hidden layer that stays in $\mathrm{Im}(\sigma)^N$ can be realized by a continuous change of the parameters $w^{L-1}$, via a continuous path $\nu(t)$ in $\mathbb{R}^N$ of pre-activation values such that $\rho(t) = \sigma(\nu(t))$ for all $t$. The activation vectors $a^{L-1}_k$ of the last hidden layer span a linear subspace $H$ of $\mathbb{R}^N$. The optimal parameters $w^L$ of the output layer compute the best approximation of $(y_\alpha)_\alpha$ onto $H$. Lemma 3 and Lemma 4 together imply that we can achieve any desired continuous change of the spanning vectors of $H$, and hence of the linear subspace $H$, by a suitable change of the parameters $w^{L-1}$. As it turns out, there is a natural possible path of parameters that strictly monotonically decreases the loss to the global minimum whenever we may assume that not all non-zero coefficients of $w^L$ have the same sign.
If this is not the case, however, we first follow a different path through the parameter space to eventually assure different signs of coefficients of w L . Interestingly, this path leaves the loss constant. In other words, from certain points in parameter space it is necessary to follow a path of constant loss until we reach a point from where we can further decrease the loss; just like in the case of the non-attracting regions of local minima. Lemma 5. For n ≥ 2, let {r 1 , r 2 , . . . , r n } be a set of vectors in Im(σ) N and E = span j (r j ) their linear span. If z ∈ E has a representation z = j λ j r j where all λ j are positive (or all negative), then there are continuous paths r j : [0, 1] → r j (t) of vectors in Im(σ) N such that the following properties hold. (i) r j (0) = r j . (ii) z ∈ span j (r j (t)) for all t, so that there are continuous paths t → λ j (t) such that z = λ j (t)r j (t). (iii) There are 1 ≤ j + , j − ≤ n such that λ j+ (1) > 0 and λ j− (1) < 0. We apply Lemma 5 to activation vectors r i = a i giving continuous paths t → a L−1 i (t) and t → λ i (t) = w L 1,i (t). Then the output f (x α ) of the neural network along this path remains constant, hence so does the loss. The desired change of activation vectors a L−1 i (t) can be performed by a suitable change of parameters w L−1 according to Lemma 3 and Lemma 4. The simultaneous change of w L−1 and w L defines the first part Γ 1 (t) of our desired path in the parameter space which keeps f (x α ) constant. The final part of the desired path is given by the following lemma. Lemma 6. Assume a neural network structure as above with activation vectors a L−2 i of the wide hidden layer spanning R N . If the weights w L of the output layer satisfy that there is both a positive and a negative weight, then there is a continuous path t ∈ [0, 1] → Γ 0 (t) from the current weights Γ 0 (0) = w of decreasing loss down to the global minimum at Γ 0 (1) . Proof. Fix z α = f (x α ), the prediction for the current weights. The main idea is to change the activation vectors of the last hidden layer according to ρ j : t ∈ [0, 1] → a L−1 j + t · 1 w L •,j · (y − z) N . With w L fixed, at the output this results in a change of t ∈ [0, 1] → z + t · (y − z), which reduces the loss to zero. The required change of activation vectors can be implemented by an application of Lemma 3 and Lemma 4, but only if the image of each ρ j lies in the image [c, d] of the activation function. Hence, the latter must be arranged. In the case that 0 ∈ (c, d), it suffices to first decrease the norm of a L−1 j while simultaneously increasing the norm of the outgoing weight w L •,j so that the output remains constant. If, however, 0 is in the boundary of the interval [c, d] (for example the case of a sigmoid activation function), then the assumption of non-zero weights with different signs becomes necessary. We let J + = {j ∈ {1, 2, . . . , n L−1 } | w L •,j ≥ 0}, J − = {j ∈ {1, 2, . . . , n L−1 } | w L •,j < 0}, I + = {α ∈ {1, 2, . . . , N } | (y − z) α ≥ 0}, I − = {α ∈ {1, 2, . . . , N } | (y − z) α < 0}. We further define (y − z) I+ to be the vector v with coordinate v α for α ∈ I + equal to (y − z) α and 0 otherwise, and we let analogously (y − z) I− denote the vector containing only the negative coordinates of y − z. 
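Before the sign-split refinement is completed below, the key cancellation behind the "main idea" above is easy to verify numerically. The following sketch (toy dimensions and random placeholder values, not code from the paper) shifts each activation vector of the last hidden layer by $t \cdot (y - z) / (w^L_{\bullet,j} \cdot n_{L-1})$; dividing by the number of last-hidden-layer units (rather than the sample count) makes the per-neuron contributions sum exactly to $t(y-z)$, which plays the same role as the $|J_+|$ and $|J_-|$ normalization in the refined paths that follow.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_hidden = 6, 4                              # toy sizes: samples / last hidden units

A = rng.uniform(0.2, 0.8, size=(n_hidden, N))   # rows: activation vectors a^{L-1}_j
w = np.array([1.0, -0.7, 0.5, -1.2])            # output weights w^L_{.,j}, mixed signs
w0 = 0.1                                        # output bias
y = rng.normal(size=N)                          # targets

z = w0 + w @ A                                  # current predictions z_alpha = f(x_alpha)

def outputs_along_path(t):
    # shift each activation vector by t * (y - z) / (w_j * n_hidden)
    A_t = A + t * (y - z)[None, :] / (w[:, None] * n_hidden)
    return w0 + w @ A_t

for t in [0.0, 0.25, 0.5, 1.0]:
    lhs = outputs_along_path(t)
    rhs = z + t * (y - z)
    print(f"t={t:4.2f}  max deviation from z + t(y-z): {np.max(np.abs(lhs - rhs)):.2e}  "
          f"loss: {np.sum((lhs - y) ** 2):.4f}")
```

The check only verifies the output identity (and hence that the loss decreases to zero along the path); the actual proof additionally has to keep every shifted activation vector inside $\mathrm{Im}(\sigma)^N$, which is exactly what the rescaling step and the sign split into $J_+$ and $J_-$ accomplish, as made precise next.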
Then the paths ρ j : [0, 1] → (c, d) defined by ρ j 3 (t) = a L−1 j + t · 1 w L •,j · (y − z) I+ |J + | and for each j ∈ J − by ρ j 3 (t) = a L−1 j + t · 1 w L •,j · (y − z) I− |J − | can be arranged to all lie in the image of the activation function and they again lead to an output change of t ∈ [0, 1] → z + t · (y − z). (Appendix D contains a more detailed proof.) This concludes the proof of Theorem 3 having found a sufficient condition in Lemma 6 to confirm the existence of a path down to zero loss and having shown how to realize this condition in Lemmas 3, 4 and 5. VII. CONCLUSION In this paper we have studied the local minima of deep and wide regression neural networks with sigmoid activation functions. We established that the nature of local minima is such that they live in a special region of the cost function called a non-attractive region, and showed that a non-increasing path to a configuration with lower loss than that of the region can always be found. For sufficiently wide two-or three-layer neural networks, all local minima belong to such a region. We generalized the procedure to find such regions, introduced by Fukumizu and Amari [9], to deep networks and described sufficient conditions for the construction to work. The necessary conditions become very hard to satisfy in wider and deeper networks and, if they fail, the construction leads to saddle points instead. Finally, an intuitive argument shows a clear relation between the degree of degeneracy of a local minimum and the level of suboptimality of the constructed local minimum. APPENDIX NOTATION [x α ] α R n column vector with entries x α ∈ R [x i,j ] i,j ∈ R n1×n2 matrix with entry x i,j at position (i, j) Im(f) ⊆ R image of a function f C n (X, Y ) n-times continuously differentiable function from X to Y N ∈ N number of data samples in training set x α ∈ R n0 training sample input y α ∈ R target output for sample x α A ∈ C(R) class of real-analytic, strictly monotonically increasing, bounded (activation) functions such that the closure of the image contains zero σ ∈ C 2 (R, R) a nonlinear activation function in class A f ∈ C(R n0 , R) neural network function l 1 ≤ l ≤ L index of a layer L ∈ N number of layers excluding the input layer l=0 input layer l = L output layer n l ∈ N number of neurons in layer l k 1 ≤ k ≤ n l index of a neuron in layer l w l ∈ R nl×nl−1 weight matrix of the l-th layer w ∈ R L l=1 (nl·nl−1) collection of all w l w l i,j ∈ R the weight from neuron j of layer l − 1 to neuron j of layer l w L •,j ∈ R the weight from neuron j of layer L − 1 to the output L ∈ R + squared loss over training samples n(l, k; x) ∈ R value at neuron k in layer l before activation for input pattern x n(l; x) ∈ R nl neuron pattern at layer l before activation for input pattern x act(l, k; x) ∈ Im(σ) activation pattern at neuron k in layer l for input x act(l; x) ∈ Im(σ) nl neuron pattern at layer l for input x In Section V, where we fix a layer l, we additionally use the following notation. h •,k (x) ∈ C(R nl , R) the function from act(l; x) to the output [u p,i ] p,i ∈ R nl×nl−1 weights of the given layer l. [v s,q ] s,q ∈ R nl×nl+1 weights the layer l + 1. r ∈ {1, 2, . . . 
, n l } the index of the neuron of layer l that we use for the addition of one additional neuron M ∈ N = L t=1 (n t · n t−1 ), the number of weights in the smaller neural network w ∈ R M −nl−1−nl+1 all weights except u 1,i and v s,1 γ r λ ∈ C(R M , R M +nl−1+nl+1 ) the map defined in Section V to add a neuron in layer l using the neuron with index r in layer l In Section VI, we additionally use the following notation. A. Local minima at infinity in neural networks In this section we prove the existence of local minima at infinity in neural networks. Theorem 1 (cf. [6] Section III). Let L denote the squared loss of a fully connected regression neural network with sigmoid activation functions, having at least one hidden layer and each hidden layer containing at least two neurons. Then, for almost every finite dataset, the loss function L possesses a local minimum at infinity. The local minimum is suboptimal whenever dataset and neural network are such that a constant function is not an optimal solution. Proof. We will show that, if all bias terms u i,0 of the last hidden layer are sufficiently large, then there are parameters u i,0k for k = 0 and parameters v i of the output layer such that the minimal loss is achieved at u i,0 = ∞ for all i. We note that, if u i,0 = ∞ for all i, all neurons of the last hidden layer are fully active for all samples, i.e. act(L − 1, i; x α ) = 1 for all i. Therefore, in this case f ( x α ) = i v •,i for all α. A constant function f (x α ) = i v •,i = c minimizes the loss α (c − y α ) 2 uniquely for c := 1 N N α=1 y α . We will assume that the v •,i are chosen such that i v •,i = c does hold. That is, for fully active hidden neurons at the last hidden layer, the v •,i are chosen to minimize the loss. We write f (x α ) = c + α . Then L = 1 2 α (f (x α ) − y α ) 2 = 1 2 α (c + α − y α ) 2 = 1 2 α ( α + (c − y α )) 2 = 1 2 α (c − y α ) 2 Loss at ui,0 = ∞ for all i + 1 2 α 2 α ≥0 + α α (c − y α ) ( * ) . The idea is now to ensure that ( * ) ≥ 0 for sufficiently large u i,0 and in a neighborhood of the v •,i chosen as above. Then the loss L is larger than at infinity, and any point in parameter space with u i,0 = ∞ and v •,i with i v •,i = c is a local minimum. To study the behavior at u i,0 = ∞, we consider p i = exp(−u i,0 ). Note that lim ui,0→∞ p i = 0. We have f (x α ) = i v •,i σ(u i,0 + k u i,k act(L − 2, k; x α )) = i v •,i · 1 1 + p i · exp(− k u i,k act(L − 2, k; x α )) Now for p i close to 0 we can use Taylor expansion of g j i (p i ) : = 1 1+piexp(a j i ) to get g j i (p i ) = 1 − exp(a j i )p i + O(|p i | 2 ). Therefore f (x α ) = c − i v •,i p i exp(− k u i,k act(L − 2, k; x α )) + O(p 2 i ) and we find that α = − i v •,i p i exp(− k u i,k act(L − 2, k; x α )) + O(p 2 i ). Recalling that we aim to ensure ( * ) = α α (c − y α ) ≥ 0 we consider α α (c − y α ) = − α (c − y α )( i v •,i p i exp(− k u i,k act(L − 2, k; x α ))) + O(p 2 i ) = − i v •,i p i α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) + O(p 2 i ) We are still able to choose the parameters u i,k for i = 0, the parameters from previous layers, and the v •,i subject to i v •,i = c. If now v •,i > 0 whenever α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) < 0 and v •,i < 0 whenever α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) > 0, then the term ( * ) is strictly positive, hence the overall loss is larger than the loss at p i = 0 for sufficiently small p i and in a neighborhood of v •,i . 
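Since the inline formulas above are hard to read in this form, the first-order step of the argument can be restated in display form (this is only a restatement of the expansion just used, with $\epsilon_\alpha = f(x_\alpha) - c$ and $c = \sum_i v_{\bullet,i}$):
$$\begin{aligned}
f(x_\alpha) &= \sum_i v_{\bullet,i}\,\frac{1}{1 + p_i \exp\!\big(-\sum_k u_{i,k}\,\mathrm{act}(L-2,k;x_\alpha)\big)} \\
&= c - \sum_i v_{\bullet,i}\, p_i \exp\!\Big(-\sum_k u_{i,k}\,\mathrm{act}(L-2,k;x_\alpha)\Big) + O(\|p\|^2), \\[1mm]
(*) = \sum_\alpha \epsilon_\alpha (c - y_\alpha) &= -\sum_i v_{\bullet,i}\, p_i \sum_\alpha (c - y_\alpha) \exp\!\Big(-\sum_k u_{i,k}\,\mathrm{act}(L-2,k;x_\alpha)\Big) + O(\|p\|^2),
\end{aligned}$$
so each summand over $i$ is made non-negative by choosing the sign of $v_{\bullet,i}$ opposite to that of $\sum_\alpha (c - y_\alpha)\exp(-\sum_k u_{i,k}\,\mathrm{act}(L-2,k;x_\alpha))$, which is exactly the sign choice made above.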
The only obstruction we have to get around is the case where we need all v •,i of the opposite sign of c (in other words, α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) has the same sign as c), conflicting with i v •,i = c. To avoid this case, we impose the mild condition that α (c−y α )act(L−2, r; x α ) = 0 for some r, which can be arranged to hold for almost every dataset by fixing all parameters of layers with index smaller than L − 2. By Lemma 7 below (with d α = (c−y α ) and a r α = act(L−2, r; x α )), we can find u > k such that α (c−y α ) exp(− k u > k act(L−2, k; x α )) > 0 and u < k such that α (c − y α ) exp(− k u < k act(L − 2, k; x α )) < 0. We fix u i,k for k ≥ 0 such that there is some i 1 with [u i1,k ] k = [u > k ] k and some i 2 with [u i2,k ] k = [u < k ] k . This assures that we can choose the v •,i of opposite sign to α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) and such that i v •,i = c, leading to a local minimum at infinity. The local minimum is suboptimal whenever a constant function is not the optimal network function for the given dataset. By assumption, there is r such that the last term is nonzero. Hence, using coordinate r, we can choose w = (0, 0, . . . , 0, w r , 0, . . . , 0) such that φ(w) is positive and we can choose w such that φ(w) is negative. B. Proofs for the construction of local minima Here we prove B r i,j := α (f (x α ) − y α ) · k ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, k; x α ) · v * k,r · σ (n(l, r; x α )) · act(l − 1, i; x α ) · act(l − 1, j; x α )(1) is either is zero, D r,s i = 0, for all i, s. The previous theorem follows from two lemmas, with the first lemma containing the computation of the Hessian of the cost function L of the larger network at parameters γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) with respect to a suitable basis. In addition, to find local minima one needs to explain away all additional directions, i.e., we need to show that the loss function actually does not change into the direction of eigenvectors of the Hessian with eigenvalue 0. Otherwise a higher derivative into this direction could be nonzero and potentially lead to a saddle point (see [19]). Let L denote the the loss function of the larger network and the loss function of the smaller network. Let α = −β ∈ R such that λ = β α+β . With respect to the basis of the parameter space of the larger network given by ([u −1,i +u r,i ] i , [v s,−1 +v s,r ] s ,w, [α· u −1,i − β · u r,i ] i , [v s,−1 − v s,r ] s ),0 0 0 (α − β)[D r,s i ] i,s 0 αβ[B r i,j ] i,j (α + β)[D r,s i ] i,s 0 0 0 (α + β)[D r,s i ] s,i 0        Proof. The proof only requires a tedious, but not complicated calculation (using the relation αλ − β(1 − λ) = 0 multiple times. To keep the argumentation streamlined, we moved all the necessary calculations into Appendix E. (z 1 , z 2 , z 3 , z 4 )     a 2b c 0 2b T 4d 2e 0 c T 2e T f 0 0 0 0 x         z 1 z 2 z 3 z 4     = (z 1 , 2z 2 , z 3 , z 4 )     a b c 0 b T d e 0 c T e T f 0 0 0 0 x         z 1 2z 2 z 3 z 4     (b) It is clear that the matrix x is positive semidefinite for g positive semidefinite and h = 0. To show the converse, first note that if g is not positive semidefinite and z is such that z T gz < 0 then (z T , 0) g h h T 0 z 0 = z T gz < 0. It therefore remains to show that also h = 0 is a necessary condition. Assume h = 0 and find z such that hz = 0. Then for any λ ∈ R we have ((hz) T , −λz T ) g h h T 0 hz −λz = (hz) T g(hz) − 2(hz) T hλz = (hz) T g(hz) − 2λ||hz|| 2 2 . 
For sufficiently large λ, the last term is negative. Proof of Theorem 6. In Lemma 1, we calculated the Hessian of L with respect to a suitable basis at a the critical point γ λ ([u * r,i ] i , [v * s,r ] s ,w * ). If the matrix [D r,s i ] i,] i,j is positive definite or if (λ < 0 or λ > 1) ⇔ αβ < 0 and [B r i,j ] i,j is negative definite. In each case we can alter the λ to values leading to saddle points without changing the network function or loss. Therefore, the critical points can only be saddle points or local minima on a non-attracting region of local minima. To determine whether the critical points in questions lead to local minima when [D r,s i ] i,s = 0, it is insufficient to only prove the Hessian to be positive semidefinite (in contrast to (strict) positive definiteness), but we need to consider directions for which the second order information is insufficient. We know that the loss is at a minimum with respect to all coordinates except for the degenerate directions [v s,−1 − v s,r ] s . However, the network function f (x) is constant along [v s,−1 − v s,r ] s (keeping [v s,−1 + v s, r ] s constant) at the critical point where u −1,i = u r,i for all i. Hence, no higher order information leads to saddle points and it follows that the critical point lies on a region of local minima. C. Construction of local minima in deep networks Proposition 1. Suppose we have a hierarchically constructed critical point of the squared loss of a neural network constructed by adding a neuron into layer l with index n(l, −1; x) by application of the map γ r λ to a neuron n(l, r; x). Suppose further that for the outgoing weights v * s,r of n(l, r; x) we have s v * s,r = 0 , and suppose that D r,s i is defined as in (2). Then D r,s i = 0 if one of the following holds. (i) The layer l is the last hidden layer. (This condition includes the case l = 1 indexing the hidden layer in a two-layer network.) (ii) ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) = ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, t; x α ) for all s, t, α (iii) For each α and each t, with L α : = (f (x α ) − y α ) 2 , ∂L α ∂n(l + 1, t; x α ) = (f (x α ) − y α ) · ∂h •,l+1 (n(l + 1; x α ) ∂n(l + 1, t; x α ) = 0. (This condition holds in the case of the weight infinity attractors in the proof to Theorem 1 for l + 1 the second last layer. It also holds in a global minimum.) Proof. The fact that property (i) suffices uses that h •,l+1 (x) reduces to the identity function on the networks output and hence its derivative is one. Then, considering a regression network as before, our assumption says that v * •,r = 0, hence its reciprocal can be factored out of the sum in Equation (2). Denoting incoming weights into n(l, r; x) by u r,i as before, this leads to D r,1• i = 1 v * •,r · α (f (x α ) − y α ) · v * •,r · σ (n(l, r; x α )) · act(l − 1, i; x α ) = 1 v * •,r · ∂L ∂u r,i = 0 In the case of (ii), ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) = ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, t; x α ) for all s, t and we can factor out the reciprocal of t v * r,s = 0 in Equation (2) to again see that for each i, ∂L ∂ur,i = 0 implies that D r,s i = 0 for all s. (iii) is evident since in this case clearly every summand in Equation (2) is zero. D. Proofs for the non-increasing path to a global minimum In this section we discuss how in wide neural networks with two hidden layers a non-increasing path to the global minimum may be found from almost everywhere in the parameter space. 
By [3] (and [4], [5]), we can find such a path if the last hidden layer is wide (containing more neurons than input patterns). We therefore only consider the case where the first hidden layer in a three-layer neural network is wide. More generally, our results apply to all deep neural networks with the second last hidden layer wide. Theorem 3. Consider a fully connected regression neural network with activation function in the class A equipped with the squared loss function for a finite dataset. Assume that the second last hidden layer contains more neurons than the number of input patterns. Then, for each set of parameters w and all > 0, there is w such that ||w − w || < and such that a path non-increasing in loss from w to a global minimum where f (x α ) = y α for each α exists. The first step of the proof is to use the freedom given by to have the activation vectors a L−2 of the wide layer L − 2 span the whole space R N . ν(t) = Γ(t) · [act(L − 2, k; x α )] k,α Proof. We write ν(t) = [n(L − 1, s; x α )] s,α +ν(t) withν(0) = 0. We will findΓ(t) such thatν(t) =Γ(t) · [act(L − 2, k; x α )] k,α withΓ(0) = 0. Then Γ(t) := w L−1 +Γ(t) does the job. Since by assumption [act(L − 2, k; x α )] k,α has full rank, we can find an invertible submatrixà ∈ R N ×N of [act(L−2, k; x α )] k,α . Then we can define a continuous pathρ in R nL−1×N given byρ(t) :=ν(t)·Ã −1 , which satisfies ρ(t) ·Ã = ν(t) andρ(0) = 0. Extendingρ(t) to a path in R nL−1×nL−2 by zero columns at positions corresponding to rows of [act(L − 2, k; x α )] k,α missing inÃ, gives a pathΓ(t) such thatΓ(t) · [act(L − 2, k; x α )] k,α =ν(t) and withΓ(0) = 0. Lemma 4. For all continuous paths ρ(t) in Im(σ) N , i.e. the N-fold copy of the image of σ, there is a continuous path ν(t) in R N such that ρ(t) = σ(ν(t)) for all t. Proof. Since σ : R N → Im(σ) N is invertible with a continuous inverse, take ν(t) = σ −1 (ρ(t)). The activation vectors a L−1 k of the last hidden layer span a linear subspace H of R N . The optimal parameters w L of the output layer compute the best approximation of (y α ) α onto H. Lemma 3 and Lemma 4 together imply that we can achieve any desired continuous change of the spanning vectors of H, and hence the linear subspace H, by a suitable change of the parameters w L−1 . There is a natural possible path of parameters that strictly monotonically decreases the loss to the global minimum. For activation functions in A with 0 in the boundary of the image interval [c, d], this path requires that not all non-zero coefficients of w L have the same sign. If this is not the case, however, we first follow a different path through the parameter space to eventually assure different signs of coefficients of w L . Interestingly, this path leaves the loss constant. In other words, from certain points in parameter space it seems necessary to follow a path of constant loss until we reach a point from where we can further decrease the loss; just like in the case of the non-attracting regions of local minima. Lemma 5. For n ≥ 2, let {r 1 , r 2 , . . . , r n } be a set of vectors in Im(σ) N and E = span j (r j ) their linear span. If z ∈ E has a representation z = j λ j r j where all λ j are positive (or all negative), then there are continuous paths r j : [0, 1] → r j (t) of vectors in Im(σ) N such that the following properties hold. (i) r j (0) = r j . (ii) z ∈ span j (r j (t)) for all t, so that there are continuous paths t → λ j (t) such that z = λ j (t)r j (t). 
(iii) There are 1 ≤ j + , j − ≤ n such that λ j+ (1) > 0 and λ j− (1) < 0. Proof. We only consider the case with all λ j ≥ 0. The other case can be treated analogously. If only one λ j0 is nonzero, then consider a vector r k corresponding to a zero coefficient λ k = 0 and change r k continuously until it equals the vector r j0 corresponding to the only nonzero coefficient. Then continuously increase the positive coefficient λ j0 , while introducing a corresponding negative contribution via λ k . It is then easy to see that this leads to a path satisfying conditions (i)-(iii). We may therefore assume that at least two coefficients λ j are nonzero, say λ 1 and λ 2 . Leaving all r j and λ j for j ≥ 3 unchanged, we only consider r 1 , r 2 , λ 1 , λ 2 for the desired path, i.e. r j (t) = r j and λ j (t) = λ j for all j ≥ 3. We have that λ 1 r 1 + λ 2 r 2 ∈ (λ 1 + λ 2 ) · Im(σ) N , hence can be written as λR for some λ > 0 and R ∈ Im(σ) N with λR = z − j≥3 λ j r j = λ 1 r 1 + λ 2 r 2 . For t ∈ [0, 1 2 ] we define r 1 (t) := r 1 + 2t(R − r 1 ) and r 2 (t) := r 2 , λ 1 (t) = λλ 1 (1 − 2t)λ + 2tλ 1 and λ 2 (t) = (1 − 2t) λλ 2 (1 − 2t)λ + 2tλ 1 . For t ∈ [ 1 2 , 1] we set r 1 (t) := (2 − 2t)R + (2t − 1)( λ 1 λ 1 + 2λ 2 r 1 + 2λ 2 λ 1 + 2λ 2 r 2 ) and r 2 (t) = r 2 , λ 1 (t) = λ(λ 1 + 2λ 2 ) (2 − 2t)(λ 1 + 2λ 2 ) + (2t − 1)λ and λ 2 (t) = −λ 2 λ(2t − 1) (2 − 2t)(λ 1 + 2λ 2 ) + (2t − 1)λ . Then (i) r 1 (0) = r 1 and r 2 (0) = r 2 as desired. Further (ii) z ∈ span j (r j (t)) for all t ∈ [0, 1] via z = j λ j (t)r j (t) . It is also easy to check that r 1 (t), r 2 (t) ∈ Im(σ) N for all t ∈ [0, 1]. Finally, (iii) λ 1 (1) = λ 1 +2λ 2 > 0 and λ 2 (1) = −λ 2 < 0. Hence, if all non-zero coefficients of w L have the same sign, then we apply Lemma 5 to activation vectors r i = a L−1 i giving continuous paths t → a L−1 i (t) and t → λ i (t) = w L •,i (t). Then the output f (x α ) of the neural network along this path remains constant, hence so does the loss. The desired change of activation vectors a L−1 i (t) can be performed by a suitable change of parameters w L−1 according to Lemma 3 and Lemma 4. The simultaneous change of w L−1 and w L defines the first part Γ 1 (t) of our desired path in the parameter space which keeps f (x α ) constant. We may now assume that not all non-zero entries of w L have the same sign. The final part of the desired path is given by the following lemma. Lemma 6. Assume a neural network structure as above with activation vectors a L−2 i of the wide hidden layer spanning R N . If the weights w L of the output layer satisfy that there is both a positive and a negative weight, then there is a continuous path t ∈ [0, 1] → Γ 0 (t) from the current weights Γ 0 (0) = w of decreasing loss down to the global minimum at Γ 0 (1) . Proof. We first prove the result for the (more complicated) case when Im(σ) = (0, d) for some d > 0, e.g. for σ the sigmoid function: Let z ∈ R N be the vector given by z α = f (x α ) for the parameter w at the current weights. Let I + = {α ∈ {1, 2, . . . , N } | (y − z) α ≥ 0}, J + = {j ∈ {1, 2, . . . , n L−1 } | w L •,j ≥ 0}, J − = {j ∈ {1, 2, . . . , n L−1 } | w L •,j < 0}. For each j ∈ {1, 2, . . . , n L−1 } \ J 0 = J + ∪ J − we consider the path ρ j 2 : [0, 1) → (0, d) N of activation values given by ρ j 2 (t) = (1 − t)[act(L − 1, j; x α )] α . Applying Lemma 3 and Lemma 4 we find the inducing path Γ j 2,L−1 for parameters w L−1 , and we simultaneously change the parameters w L via w L •,j (t) = Γ j 2,L (t) := 1 1−t w L •,j . 
Following along Γ j 2 (t) = (Γ j 2,L−1 (t), Γ j 2,L (t)) does not change the outcome f (x α ) = z α for any α. For j ∈ J + we find t j ∈ [0, 1) such that ρ j 2 (t j ) + 1 w L •,j (t j ) · (y − z) I+ |J + | ∈ (0, d) N . This is possible, since all involved terms are positive, ρ j 2 (t j ) < 1 and decreasing to zero for increasing t, while w L •,j (t) increases for growing t. Similarly, for j ∈ J − we find t j ∈ [0, 1) such that ρ j 2 (t j ) + 1 w L •,j (t j ) · (y − z) I− |J − | ∈ (0, d) N . This time the negative sign of w L •,j (t) for j ∈ J . and the negative signs of (y − z) I− cancel, again allowing to find suitable t j . We will consider the endpoints Γ j 2 (t j ) as the new parameter values for w and the induced endpoints ρ j 2 (t j ) as our new act(L − 1, j; x α ). The next part of the path incrementally adds positive or negative coordinates of (y − z) to each activation vector of the last hidden layer. For each j ∈ J + , we let ρ j 3 : [0, 1] → (0, d) N be the path defined by ρ j 3 (t) = [act(L − 1, j; x α )] α + t · 1 w L •,j · (y − z) I+ |J + | and for each j ∈ J − by ρ j 3 (t) = [act(L − 1, j; x α )] α + t · 1 w L •,j · (y − z) I− |J − | Since ρ j 3 (t) is a path in Im(σ) for all j, this path can again be realized by an inducing change Γ 3 (t) of parameters w L−1 . The parameters w L are kept unchanged in this last part of the path. Simultaneously changing all ρ j 3 (t) results in a change of the output of the neural network given by [f t (x α )] α = w L •,0 + nL−1 j=1 w L •,j ρ j 3 (t) = w L •,0 +   j∈J+ w L •,j act(L − 1, j; x α ) + t · 1 w L •,j · (y − z) I+,α |J + |   α +   j∈J− w L •,j act(L − 1, j; x α ) + t · 1 w L •,j · (y − z) I−,α |J − |   α = w L •,0 +   nL−1 j=1 w L •,j act(L − 1, j; x α )   α + j∈J+ t · (y − z) I+ |J + | + j∈J− t · (y − z) I− |J − | = z + t · (y − z) I+ + t · (y − z) I− = z + t · (y − z). It is easy to see that for the path t ∈ [0, 1] → z + t · (y − z) the loss L = ||z + t · (y − z) − y|| 2 2 = (1 − t)||y − z|| 2 2 is strictly decreasing to zero. The concatenation of Γ 2 and Γ 3 gives us the desired path Γ 0 . The case that Im(σ) = (c, 0) for some c < 0 works analogously. In the case that Im(σ) = (c, d) with 0 ∈ (c, d), there is no need to split up into sets I + , I − and J + , J − . We haveρ j 2 (t j ) + 1 w L •,j (tj) · (y−z) N ∈ (c, d) N for t j close enough to 1. Hence we can follow Γ j 2 (t) as above until ρ j 2 (t) + 1 w L •,j (t) · (y − z) N ∈ (c, d) N for all j. From here, the paths ρ j 3 (t) = [act(L − 1, j; x α )] α + t · 1 w L •,j · (y−z) N define paths in Im(σ) for each j, which can be implemented by an application of Lemma 3 and Lemma 4 and lead to the global minimum. E. Calculations for Lemma 1 For the calculations we may assume without loss of generality that r = 1. If we want to consider a different n(l, r; x) and its corresponding γ r λ , then this can be achieved by a reordering of the indices of neurons.) We let ϕ denote the network function of the smaller neural network and f the neural network function of the larger network after adding one neuron according to the map γ 1 λ . To distinguish the parameters of f and ϕ, we write w ϕ for the parameters of the network before the embedding. This gives for all i, s and all m ≥ 2: For the function f we have the following partial derivatives. 
At the point $\gamma^1_\lambda([u^\varphi_{1,i}]_i,[v^\varphi_{s,1}]_s,\bar w^\varphi)$ the parameters of the larger network are
$$u_{-1,i} = u^\varphi_{1,i}, \qquad u_{1,i} = u^\varphi_{1,i}, \qquad v_{s,-1} = \lambda v^\varphi_{s,1}, \qquad v_{s,1} = (1-\lambda) v^\varphi_{s,1}, \qquad u_{m,i} = u^\varphi_{m,i}, \qquad v_{s,m} = v^\varphi_{s,m} \quad (m \ge 2).$$
For the first order derivatives we have
$$\frac{\partial f(x)}{\partial u_{p,i}} = \sum_k \frac{\partial h_{\bullet,l+1}(n(l+1;x))}{\partial n(l+1,k;x)}\, v_{k,p}\,\sigma'(n(l,p;x))\,\mathrm{act}(l-1,i;x)
\qquad\text{and}\qquad
\frac{\partial f(x)}{\partial v_{s,q}} = \frac{\partial h_{\bullet,l+1}(n(l+1;x))}{\partial n(l+1,s;x)}\,\mathrm{act}(l,q;x).$$
The analogous equations hold for $\varphi$.

2) Relating first order derivatives of network functions $f$ and $\varphi$. Since at the embedded point all neuron values and activations of $f$ and $\varphi$ coincide, the parameter relations above therefore give, at $\gamma^1_\lambda([u^\varphi_{1,i}]_i,[v^\varphi_{s,1}]_s,\bar w^\varphi)$,
$$\frac{\partial f(x)}{\partial u_{-1,i}} = \lambda\,\frac{\partial \varphi(x)}{\partial u^\varphi_{1,i}}, \qquad \frac{\partial f(x)}{\partial u_{1,i}} = (1-\lambda)\,\frac{\partial \varphi(x)}{\partial u^\varphi_{1,i}}, \qquad \frac{\partial f(x)}{\partial v_{s,-1}} = \frac{\partial f(x)}{\partial v_{s,1}} = \frac{\partial \varphi(x)}{\partial v^\varphi_{s,1}},$$
while the derivatives with respect to all remaining parameters agree with those of $\varphi$.

3) Second order derivatives of network functions $f$ and $\varphi$. For the second derivatives we get (with $\delta(a,a)=1$ and $\delta(a,b)=0$ for $a \ne b$)
$$\frac{\partial^2 f(x)}{\partial u_{p,i}\,\partial u_{q,j}} = \sum_m\sum_k \frac{\partial^2 h_{\bullet,l+1}(n(l+1;x))}{\partial n(l+1,m;x)\,\partial n(l+1,k;x)}\, v_{m,q}\,\sigma'(n(l,q;x))\,\mathrm{act}(l-1,j;x)\, v_{k,p}\,\sigma'(n(l,p;x))\,\mathrm{act}(l-1,i;x)$$
$$\qquad\qquad + \,\delta(p,q)\sum_k \frac{\partial h_{\bullet,l+1}(n(l+1;x))}{\partial n(l+1,k;x)}\, v_{k,p}\,\sigma''(n(l,p;x))\,\mathrm{act}(l-1,i;x)\,\mathrm{act}(l-1,j;x),$$
$$\frac{\partial^2 f(x)}{\partial v_{s,p}\,\partial v_{t,q}} = \frac{\partial^2 h_{\bullet,l+1}(n(l+1;x))}{\partial n(l+1,s;x)\,\partial n(l+1,t;x)}\,\mathrm{act}(l,p;x)\,\mathrm{act}(l,q;x),$$
$$\frac{\partial^2 f(x)}{\partial u_{p,i}\,\partial v_{s,q}} = \sum_k \frac{\partial^2 h_{\bullet,l+1}(n(l+1;x))}{\partial n(l+1,s;x)\,\partial n(l+1,k;x)}\,\mathrm{act}(l,q;x)\, v_{k,p}\,\sigma'(n(l,p;x))\,\mathrm{act}(l-1,i;x) + \delta(q,p)\,\frac{\partial h_{\bullet,l+1}(n(l+1;x))}{\partial n(l+1,s;x)}\,\sigma'(n(l,p;x))\,\mathrm{act}(l-1,i;x).$$
For a parameter $w$ closer to the input than $[u_{p,i}]_{p,i}$, $[v_{s,q}]_{s,q}$, we have
$$\frac{\partial^2 f(x)}{\partial u_{p,i}\,\partial w} = \sum_m\sum_k \frac{\partial^2 h_{\bullet,l+1}(n(l+1;x))}{\partial n(l+1,k;x)\,\partial n(l+1,m;x)}\,\frac{\partial n(l+1,m;x)}{\partial w}\, v_{k,p}\,\sigma'(n(l,p;x))\,\mathrm{act}(l-1,i;x)$$
$$\qquad + \sum_k \frac{\partial h_{\bullet,l+1}(n(l+1;x))}{\partial n(l+1,k;x)}\, v_{k,p}\,\sigma''(n(l,p;x))\,\frac{\partial n(l,p;x)}{\partial w}\,\mathrm{act}(l-1,i;x) + \sum_k \frac{\partial h_{\bullet,l+1}(n(l+1;x))}{\partial n(l+1,k;x)}\, v_{k,p}\,\sigma'(n(l,p;x))\,\frac{\partial \mathrm{act}(l-1,i;x)}{\partial w}$$
and
$$\frac{\partial^2 f(x)}{\partial v_{s,q}\,\partial w} = \sum_m \frac{\partial^2 h_{\bullet,l+1}(n(l+1;x))}{\partial n(l+1,s;x)\,\partial n(l+1,m;x)}\,\frac{\partial n(l+1,m;x)}{\partial w}\,\mathrm{act}(l,q;x) + \frac{\partial h_{\bullet,l+1}(n(l+1;x))}{\partial n(l+1,s;x)}\,\frac{\partial \mathrm{act}(l,q;x)}{\partial w}.$$
For a parameter $w$ closer to the output than $[u_{p,i}]_{p,i}$, $[v_{s,q}]_{s,q}$, we have
$$\frac{\partial^2 f(x)}{\partial u_{p,i}\,\partial w} = \sum_k \frac{\partial^2 h_{\bullet,l+1}(n(l+1;x))}{\partial n(l+1,k;x)\,\partial w}\, v_{k,p}\,\sigma'(n(l,p;x))\,\mathrm{act}(l-1,i;x).$$
Define
$$A^{p,q}_{i,j}(x) := \sum_m\sum_k \frac{\partial^2 h^\varphi_{\bullet,l+1}(n^\varphi(l+1;x))}{\partial n^\varphi(l+1,m;x)\,\partial n^\varphi(l+1,k;x)}\, v^\varphi_{m,q}\,\sigma'(n^\varphi(l,q;x))\,\mathrm{act}^\varphi(l-1,j;x)\, v^\varphi_{k,p}\,\sigma'(n^\varphi(l,p;x))\,\mathrm{act}^\varphi(l-1,i;x),$$
$$B^{p}_{i,j}(x) := \sum_k \frac{\partial h^\varphi_{\bullet,l+1}(n^\varphi(l+1;x))}{\partial n^\varphi(l+1,k;x)}\, v^\varphi_{k,p}\,\sigma''(n^\varphi(l,p;x))\,\mathrm{act}^\varphi(l-1,i;x)\,\mathrm{act}^\varphi(l-1,j;x),$$
$$C^{p,s}_{i,q}(x) := \sum_k \frac{\partial^2 h^\varphi_{\bullet,l+1}(n^\varphi(l+1;x))}{\partial n^\varphi(l+1,s;x)\,\partial n^\varphi(l+1,k;x)}\,\mathrm{act}^\varphi(l,q;x)\, v^\varphi_{k,p}\,\sigma'(n^\varphi(l,p;x))\,\mathrm{act}^\varphi(l-1,i;x),$$
$$D^{p,s}_{i}(x) := \frac{\partial h^\varphi_{\bullet,l+1}(n^\varphi(l+1;x))}{\partial n^\varphi(l+1,s;x)}\,\sigma'(n^\varphi(l,p;x))\,\mathrm{act}^\varphi(l-1,i;x),$$
$$E^{s,t}_{p,q}(x) := \frac{\partial^2 h^\varphi_{\bullet,l+1}(n^\varphi(l+1;x))}{\partial n^\varphi(l+1,s;x)\,\partial n^\varphi(l+1,t;x)}\,\mathrm{act}^\varphi(l,p;x)\,\mathrm{act}^\varphi(l,q;x).$$
Then for all $i, j, p, q, s, t$ we have
$$\frac{\partial^2 \varphi(x)}{\partial u^\varphi_{p,i}\,\partial u^\varphi_{q,j}} = A^{p,q}_{i,j}(x) + \delta(q,p)\,B^{p}_{i,j}(x), \qquad \frac{\partial^2 \varphi(x)}{\partial u^\varphi_{p,i}\,\partial v^\varphi_{s,q}} = C^{p,s}_{i,q}(x) + \delta(q,p)\,D^{p,s}_{i}(x), \qquad \frac{\partial^2 \varphi(x)}{\partial v^\varphi_{s,p}\,\partial v^\varphi_{t,q}} = E^{s,t}_{p,q}(x).$$
For $f$ we get for $p, q \in \{-1, 1\}$ and all $i, j, s, t$
$$\frac{\partial^2 f(x)}{\partial u_{-1,i}\,\partial u_{-1,j}} = \lambda^2 A^{1,1}_{i,j}(x) + \lambda B^{1}_{i,j}(x), \qquad \frac{\partial^2 f(x)}{\partial u_{1,i}\,\partial u_{1,j}} = (1-\lambda)^2 A^{1,1}_{i,j}(x) + (1-\lambda) B^{1}_{i,j}(x),$$
$$\frac{\partial^2 f(x)}{\partial u_{-1,i}\,\partial u_{1,j}} = \frac{\partial^2 f(x)}{\partial u_{1,i}\,\partial u_{-1,j}} = \lambda(1-\lambda)\, A^{1,1}_{i,j}(x),$$
$$\frac{\partial^2 f(x)}{\partial u_{-1,i}\,\partial v_{s,-1}} = \lambda C^{1,s}_{i,1}(x) + D^{1,s}_{i}(x), \qquad \frac{\partial^2 f(x)}{\partial u_{1,i}\,\partial v_{s,1}} = (1-\lambda) C^{1,s}_{i,1}(x) + D^{1,s}_{i}(x),$$
$$\frac{\partial^2 f(x)}{\partial u_{-1,i}\,\partial v_{s,1}} = \lambda C^{1,s}_{i,1}(x) = \lambda\,\frac{\partial^2 \varphi(x)}{\partial u^\varphi_{1,i}\,\partial v^\varphi_{s,1}}, \qquad \frac{\partial^2 f(x)}{\partial u_{1,i}\,\partial v_{s,-1}} = (1-\lambda) C^{1,s}_{i,1}(x) = (1-\lambda)\,\frac{\partial^2 \varphi(x)}{\partial u^\varphi_{1,i}\,\partial v^\varphi_{s,1}},$$
$$\frac{\partial^2 f(x)}{\partial v_{s,p}\,\partial v_{t,q}} = E^{s,t}_{1,1}(x) = \frac{\partial^2 \varphi(x)}{\partial v^\varphi_{s,1}\,\partial v^\varphi_{t,1}}.$$
For the loss $\ell$ of the smaller network,
$$\frac{\partial \ell}{\partial w^\varphi} = \sum_\alpha (\varphi(x_\alpha) - y_\alpha)\,\frac{\partial \varphi(x_\alpha)}{\partial w^\varphi}.$$
From this it follows immediately that if $\frac{\partial \ell}{\partial w^\varphi}(w^\varphi) = 0$, then $\frac{\partial L}{\partial w}(\gamma^1_\lambda(w^\varphi)) = 0$ for all $\lambda$ (cf. [9], [15]). For the second derivative we get
$$\frac{\partial^2 L}{\partial w\,\partial r} = \sum_\alpha (f(x_\alpha) - y_\alpha)\,\frac{\partial^2 f(x_\alpha)}{\partial w\,\partial r} + \sum_\alpha \frac{\partial f(x_\alpha)}{\partial w}\,\frac{\partial f(x_\alpha)}{\partial r}.$$
Abbreviating $\bar A^{p,q}_{i,j} := \sum_\alpha (f(x_\alpha)-y_\alpha)\,A^{p,q}_{i,j}(x_\alpha)$ (and analogously $\bar B$, $\bar C$, $\bar D$, $\bar E$), as well as $\tilde A^{p,q}_{i,j} := \sum_\alpha \frac{\partial\varphi(x_\alpha)}{\partial u^\varphi_{p,i}}\frac{\partial\varphi(x_\alpha)}{\partial u^\varphi_{q,j}}$, $\tilde C^{p,s}_{i,q} := \sum_\alpha \frac{\partial\varphi(x_\alpha)}{\partial u^\varphi_{p,i}}\frac{\partial\varphi(x_\alpha)}{\partial v^\varphi_{s,q}}$ and $\tilde E^{s,t}_{p,q} := \sum_\alpha \frac{\partial\varphi(x_\alpha)}{\partial v^\varphi_{s,p}}\frac{\partial\varphi(x_\alpha)}{\partial v^\varphi_{t,q}}$ for the product terms, we get at $\gamma^1_\lambda([u_{1,i}]_i,[v_{s,1}]_s,\bar w)$ for $p, q \in \{-1,1\}$ and all $i, j$
$$\frac{\partial^2 L}{\partial u_{-1,i}\,\partial u_{-1,j}} = \lambda^2 \bar A^{1,1}_{i,j} + \lambda \bar B^{1}_{i,j} + \lambda^2 \tilde A^{1,1}_{i,j}, \qquad \frac{\partial^2 L}{\partial u_{1,i}\,\partial u_{1,j}} = (1-\lambda)^2 \bar A^{1,1}_{i,j} + (1-\lambda)\bar B^{1}_{i,j} + (1-\lambda)^2 \tilde A^{1,1}_{i,j},$$
$$\frac{\partial^2 L}{\partial u_{-1,i}\,\partial u_{1,j}} = \lambda(1-\lambda)\bar A^{1,1}_{i,j} + \lambda(1-\lambda)\tilde A^{1,1}_{i,j},$$
for $q \ge 2$, $p \in \{-1,1\}$ and all $i, j, s, t$
$$\frac{\partial^2 L}{\partial u_{-1,i}\,\partial u_{q,j}} = \lambda \bar A^{1,q}_{i,j} + \lambda \tilde A^{1,q}_{i,j}, \qquad \frac{\partial^2 L}{\partial u_{1,i}\,\partial u_{q,j}} = (1-\lambda)\bar A^{1,q}_{i,j} + (1-\lambda)\tilde A^{1,q}_{i,j},$$
$$\frac{\partial^2 L}{\partial u_{-1,i}\,\partial v_{s,q}} = \lambda \bar C^{1,s}_{i,q} + \lambda \tilde C^{1,s}_{i,q}, \qquad \frac{\partial^2 L}{\partial u_{1,i}\,\partial v_{s,q}} = (1-\lambda)\bar C^{1,s}_{i,q} + (1-\lambda)\tilde C^{1,s}_{i,q},$$
$$\frac{\partial^2 L}{\partial u_{q,i}\,\partial v_{s,p}} = \bar C^{q,s}_{i,1} + \tilde C^{q,s}_{i,1}, \qquad \frac{\partial^2 L}{\partial v_{s,p}\,\partial v_{t,q}} = \bar E^{s,t}_{1,q} + \tilde E^{s,t}_{1,q},$$
and for $p, q \ge 2$ and all $i, j, s, t$
$$\frac{\partial^2 L}{\partial u_{p,i}\,\partial u_{q,j}} = \bar A^{p,q}_{i,j} + \delta(q,p)\bar B^{p}_{i,j} + \tilde A^{p,q}_{i,j} = \frac{\partial^2 \ell}{\partial u^\varphi_{p,i}\,\partial u^\varphi_{q,j}},$$
$$\frac{\partial^2 L}{\partial u_{p,i}\,\partial v_{s,q}} = \bar C^{p,s}_{i,q} + \delta(q,p)\bar D^{p,s}_{i} + \tilde C^{p,s}_{i,q} = \frac{\partial^2 \ell}{\partial u^\varphi_{p,i}\,\partial v^\varphi_{s,q}}, \qquad \frac{\partial^2 L}{\partial v_{s,p}\,\partial v_{t,q}} = \bar E^{s,t}_{p,q} + \tilde E^{s,t}_{p,q} = \frac{\partial^2 \ell}{\partial v^\varphi_{s,p}\,\partial v^\varphi_{t,q}}.$$

6) Change of basis. Choose any real numbers $\alpha \ne -\beta$ such that $\lambda = \frac{\beta}{\alpha+\beta}$ (equivalently $\alpha\lambda - \beta(1-\lambda) = 0$) and set
$$\mu_{-1,i} = u_{-1,i} + u_{1,i}, \qquad \mu_{1,i} = \alpha\, u_{-1,i} - \beta\, u_{1,i}, \qquad \nu_{s,-1} = v_{s,-1} + v_{s,1}, \qquad \nu_{s,1} = v_{s,-1} - v_{s,1}.$$
Then at $\gamma^1_\lambda([u_{1,i}]_i, [v_{s,1}]_s, \bar w)$,
$$\frac{\partial^2 L}{\partial \mu_{-1,i}\,\partial \mu_{-1,j}} = \Big(\frac{\partial}{\partial u_{-1,i}} + \frac{\partial}{\partial u_{1,i}}\Big)\Big(\frac{\partial L}{\partial u_{-1,j}} + \frac{\partial L}{\partial u_{1,j}}\Big) = \frac{\partial^2 L}{\partial u_{-1,i}\partial u_{-1,j}} + \frac{\partial^2 L}{\partial u_{-1,i}\partial u_{1,j}} + \frac{\partial^2 L}{\partial u_{1,i}\partial u_{-1,j}} + \frac{\partial^2 L}{\partial u_{1,i}\partial u_{1,j}} = \bar A^{1,1}_{i,j} + \bar B^{1}_{i,j} + \tilde A^{1,1}_{i,j},$$
$$\frac{\partial^2 L}{\partial \mu_{1,i}\,\partial \mu_{1,j}} = \Big(\alpha\frac{\partial}{\partial u_{-1,i}} - \beta\frac{\partial}{\partial u_{1,i}}\Big)\Big(\alpha\frac{\partial L}{\partial u_{-1,j}} - \beta\frac{\partial L}{\partial u_{1,j}}\Big) = \big(\alpha\lambda - \beta(1-\lambda)\big)^2\big(\bar A^{1,1}_{i,j} + \tilde A^{1,1}_{i,j}\big) + \big(\alpha^2\lambda + \beta^2(1-\lambda)\big)\bar B^{1}_{i,j} = \alpha\beta\,\bar B^{1}_{i,j},$$
$$\frac{\partial^2 L}{\partial \mu_{-1,i}\,\partial \mu_{1,j}} = \Big(\frac{\partial}{\partial u_{-1,i}} + \frac{\partial}{\partial u_{1,i}}\Big)\Big(\alpha\frac{\partial L}{\partial u_{-1,j}} - \beta\frac{\partial L}{\partial u_{1,j}}\Big) = \big(\alpha\lambda - \beta(1-\lambda)\big)\big(\bar A^{1,1}_{i,j} + \tilde A^{1,1}_{i,j} + \bar B^{1}_{i,j}\big) = 0,$$
$$\frac{\partial^2 L}{\partial \nu_{s,-1}\,\partial \nu_{t,1}} = \Big(\frac{\partial}{\partial v_{s,-1}} + \frac{\partial}{\partial v_{s,1}}\Big)\Big(\frac{\partial L}{\partial v_{t,-1}} - \frac{\partial L}{\partial v_{t,1}}\Big) = \big(\bar E^{s,t}_{1,1} + \tilde E^{s,t}_{1,1}\big) - \big(\bar E^{s,t}_{1,1} + \tilde E^{s,t}_{1,1}\big) + \big(\bar E^{s,t}_{1,1} + \tilde E^{s,t}_{1,1}\big) - \big(\bar E^{s,t}_{1,1} + \tilde E^{s,t}_{1,1}\big) = 0.$$
We also need to consider the second derivatives with respect to the other variables of $\bar w$. If $w$ is closer to the output than $[u_{p,i}]_{p,i}$, $[v_{s,q}]_{s,q}$, belonging to a layer $\gamma$ with $\gamma > l + 1$, then we get
15,463
1812.06486
2904130053
Understanding the loss surface of neural networks is essential for the design of models with predictable performance and for their success in applications. Experimental results suggest that sufficiently deep and wide neural networks are not negatively impacted by suboptimal local minima. Despite recent progress, the reason for this outcome is not fully understood. Could deep networks have very few suboptimal local minima, if any at all? Or could all of them be equally good? We provide a construction to show that suboptimal local minima (i.e. non-global ones), even though degenerate, exist for fully connected neural networks with sigmoid activation functions. The local minima obtained by our proposed construction belong to a connected set of local solutions that can be escaped from via a non-increasing path on the loss curve. For extremely wide neural networks with two hidden layers, we prove that every suboptimal local minimum belongs to such a connected set. This provides a partial explanation for the successful application of deep neural networks. In addition, we characterize under what conditions the same construction leads to saddle points instead of local minima for deep neural networks.
That suboptimal local minima must become rather degenerate if the neural network becomes sufficiently large was observed for networks with one hidden layer in @cite_5 and @cite_13 . Recently, Nguyen and Hein @cite_15 generalized this result to deeper networks containing an extremely wide hidden layer. Our contribution can be considered as a continuation of this work.
{ "abstract": [ "The authors propose a theoretical framework for backpropagation (BP) in order to identify some of its limitations as a general learning procedure and the reasons for its success in several experiments on pattern recognition. The first important conclusion is that examples can be found in which BP gets stuck in local minima. A simple example in which BP can get stuck during gradient descent without having learned the entire training set is presented. This example guarantees the existence of a solution with null cost. Some conditions on the network architecture and the learning environment that ensure the convergence of the BP algorithm are proposed. It is proven in particular that the convergence holds if the classes are linearly separable. In this case, the experience gained in several experiments shows that multilayered neural networks (MLNs) exceed perceptrons in generalization to new examples. >", "", "It is shown that, in a feedforward net of logistic units, if there are as many hidden nodes as patterns to learn then almost certainly a solution exists, and the error function has no local minima. A large enough feedforward net can reproduce almost any finite set of targets for almost any set of input patterns, and will almost certainly not be trapped in a local minimum while learning to do so. >" ], "cite_N": [ "@cite_5", "@cite_15", "@cite_13" ], "mid": [ "2022740958", "2963427613", "2133656711" ] }
Non-attracting Regions of Local Minima in Deep and Wide Neural Networks
At the heart of most optimization problems lies the search for the global minimum of a loss function. The common approach to finding a solution is to initialize at random in parameter space and subsequently follow directions of decreasing loss based on local methods. This approach lacks a global progress criteria, which leads to descent into one of the nearest local minima. Since the loss function of deep neural networks is non-convex, the common approach of using gradient descent variants is vulnerable precisely to that problem. Authors pursuing the early approaches to local descent by back-propagating gradients [1] experimentally noticed that suboptimal local minima appeared surprisingly harmless. More recently, for deep neural networks, the earlier observations were further supported by the experiments of e.g., [2]. Several authors aimed to provide theoretical insight for this behavior. Broadly, two views may be distinguished. Some, aiming at explanation, rely on simplifying modeling assumptions. Others investigate neural networks under realistic assumptions, but often focus on failure cases only. Recently, Nguyen and Hein [3] provide partial explanations for deep and extremely wide neural networks for a class of activation functions including the commonly used sigmoid. Extreme width is characterized by a "wide" layer that has more neurons than input patterns to learn. For almost every instantiation of parameter values w (i.e. for all but a null set of parameter values) it is shown that, if the loss function has a local minimum at w, then this local minimum must be a global one. This suggests that for deep and wide neural networks, possibly every local minimum is global. The question on what happens at the null set of parameter values, for which the result does not hold, remains unanswered. Similar observations for neural networks with one hidden layer were made earlier by Gori and Tesi [4] and Poston et al. [5]. Poston et al. [5] show for a neural network with one hidden layer and sigmoid activation function that, if the hidden layer has more nodes than training patterns, then the error function (squared sum of prediction losses over the samples) has no suboptimal "local minimum" and "each point is arbitrarily close to a point from which a strictly decreasing path starts, so such a point cannot be separated from a so called good point by a barrier of any positive height" [5]. It was criticized by Sprinkhuizen-Kuyper and Boers [6] that the definition of a local minimum used in the proof of [5] was rather strict and unconventional. In particular, the results do not imply that no suboptimal local minima, defined in the usual way, exist. As a consequence, the notion of attracting and non-attracting regions of local minima were introduced and the authors prove that non-attracting regions exist by providing an example for the extended XOR problem. The existence of these regions imply that a gradient-based approach descending the loss surface using local information may still not converge to the global minimum. The main objective of this work is to revisit the problem of such non-attracting regions and show that they also exist in deep and wide networks. In particular, a gradient based approach may get stuck in a suboptimal local minimum. Most importantly, the performance of deep and wide neural networks cannot be explained by the analysis of the loss curve alone, without taking proper initialization or the stochasticity of SGD into account. Our observations are not fundamentally negative. 
First, the local minima we find are rather degenerate. With proper initialization, a local descent technique is unlikely to get stuck in one of the degenerate, suboptimal local minima. Second, the minima reside on a non-attracting region of local minima (see Definition 1). Due to its exploration properties, stochastic gradient descent will eventually be able to escape from such a region (see [8]). We conjecture that in sufficiently wide and deep networks, except for a null set of parameter values as starting points, there is always a monotonically decreasing path down to the global minimum. This was shown in [5] for neural networks with one hidden layer, sigmoid activation function and square loss, and we generalize this result to neural networks with two hidden layers. (More precisely, our result holds for all neural networks with square loss and a class of activation functions including the sigmoid, where the wide layer is the last or second last hidden layer.) This implies that in such networks every local minimum belongs to a non-attracting region of local minima. Our proof of the existence of suboptimal local minima even in extremely wide and deep networks is based on a construction of local minima in neural networks given by Fukumizu and Amari [9]. By relying on careful computation, we are able to characterize when this construction is applicable to deep neural networks. Interestingly, in deeper layers the construction rarely seems to lead to local minima, but more often to saddle points. The argument that saddle points rather than suboptimal local minima are the main problem in deep networks has been raised before (see [10]), but a theoretical justification [11] uses strong assumptions that do not exactly hold in neural networks. Here, we provide the first analytical argument, under realistic assumptions on the neural network structure, describing when certain critical points of the training loss lead to saddle points in deeper networks.

III. MAIN RESULTS

A. Problem definition

We consider regression networks with fully connected layers of size $n_l$, $0 \le l \le L$, given by
$$f(x) = w^L\,\sigma\big(w^{L-1}\,\sigma(\cdots\, w^2\,\sigma(w^1 x + w^1_0) + w^2_0 \,\cdots) + w^{L-1}_0\big) + w^L_0,$$
where $w^l \in \mathbb{R}^{n_l \times n_{l-1}}$ denotes the weight matrix of the $l$-th layer, $1 \le l \le L$, $w^l_0$ the bias terms, and $\sigma$ a nonlinear activation function. The neural network function is denoted by $f$ and we notationally suppress its dependence on the parameters. We assume the activation function $\sigma$ to belong to the class of strictly monotonically increasing, analytic, bounded functions on $\mathbb{R}$ with image in an interval $(c, d)$ such that $0 \in [c, d]$, a class we denote by $\mathcal{A}$. As prominent examples, the sigmoid activation function $\sigma(t) = \frac{1}{1+\exp(-t)}$ and $\sigma(t) = \tanh(t)$ lie in $\mathcal{A}$. We assume no activation function at the output layer. The neural network is assumed to be a regression network mapping into the real domain $\mathbb{R}$, i.e. $n_L = 1$ and $w^L \in \mathbb{R}^{1 \times n_{L-1}}$. We train on a finite dataset $(x_\alpha, y_\alpha)_{1 \le \alpha \le N}$ of size $N$ with input patterns $x_\alpha \in \mathbb{R}^{n_0}$ and desired target value $y_\alpha \in \mathbb{R}$. We aim to minimize the squared loss
$$L = \sum_{\alpha=1}^{N} (f(x_\alpha) - y_\alpha)^2.$$
Further, $w$ denotes the collection of all $w^l$. The dependence of the neural network function $f$ on $w$ translates into a dependence $L = L(w)$ of the loss function on the parameters $w$. Due to the assumptions on $\sigma$, $L(w)$ is twice continuously differentiable. The goal of training a neural network consists of minimizing $L(w)$ over $w$.
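To make the setting concrete, the following minimal NumPy sketch implements a fully connected regression network of this form (sigmoid hidden layers, linear scalar output) and evaluates the squared loss $L(w)$. The layer sizes and the synthetic data are illustrative choices only, not taken from the paper.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def init_params(sizes, rng):
    # sizes = [n_0, n_1, ..., n_L]; one weight matrix and one bias vector per layer.
    return [(rng.normal(size=(m, n)), rng.normal(size=(m, 1)))
            for n, m in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    # x has shape (n_0, 1); sigmoid on all hidden layers, linear output layer.
    a = x
    for W, b in params[:-1]:
        a = sigmoid(W @ a + b)
    W_L, b_L = params[-1]
    return (W_L @ a + b_L).item()

def squared_loss(params, X, y):
    # L(w) = sum_alpha (f(x_alpha) - y_alpha)^2
    return sum((forward(params, x[:, None]) - t) ** 2 for x, t in zip(X, y))

rng = np.random.default_rng(0)
params = init_params([2, 8, 12, 8, 1], rng)            # example sizes, chosen arbitrarily
X = rng.normal(size=(50, 2)); y = rng.normal(size=50)  # synthetic data for illustration
print(squared_loss(params, X, y))
```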
There is a unique value L 0 denoting the infimum of the neural network's loss (most often L 0 = 0 in our examples). Any set of weights w • that satisfies L(w • ) = L 0 is called a global minimum. Due to its non-convexity, the loss function L(w) of a neural network is in general known to potentially suffer from local minima (precise definition of a local minimum below). We will study the existence of suboptimal local minima in the sense that a local minimum w * is suboptimal if its loss L(w * ) is strictly larger than L 0 . We refer to deep neural networks as models with more than one hidden layer. Further, we refer to wide neural networks as the type of model considered in [3]- [5] with one hidden layer containing at least as many neurons as input patterns (i.e. n l ≥ N for some 1 ≤ l < L in our notation). Disclaimer: Naturally, training for zero global loss is not desirable in practice, neither is the use of fully connected wide and deep neural networks necessarily. The results of this paper are of theoretical importance. To be able to understand the complex learning behavior of deep neural networks in practice, it is a necessity to understand the networks with the most fundamental structure. In this regard, while our result are not directly applicable to neural networks used in practice, they do offer explanations for their learning behavior. B. A special kind of local minimum The standard definition of a local minimum, which is also used here, is a point w * such that w * has a neighborhood U with L(w) ≥ L(w * ) for all w ∈ U . Since local minima do not need to be isolated (i.e. L(w) > L(w * ) for all w ∈ U \ {w * }) two types of connected regions of local minima may be distinguished. Note that our definition slightly differs from the one by [6]. Definition 1. [6] Let : R n → R be a differentiable function. Suppose R is a maximal connected subset of parameter values w ∈ R m , such that every w ∈ R is a local minimum of with value (w) = c. • R is called an attracting region of local minima, if there is a neighborhood U of R such that every continuous path Γ(t), which is non-increasing in and starts from some Γ(0) ∈ U , satisfies (Γ(t)) ≥ c for all t. • R is called a non-attracting region of local minima, if every neighborhood U of R contains a point from where a continuous path Γ(t) exists that is non-increasing in and ends in a point Γ(1) with (Γ(1)) < c. Despite its non-attractive nature, a non-attracting region R of local minima may be harmful for a gradient descent approach. A path of greatest descent can end in a local minimum on R. However, no point z on R needs to have a neighborhood of attraction in the sense that following the path of greatest descent from a point in a neighborhood of z will lead back to z. (The path can lead to a different local minimum on R close by or reach points with strictly smaller values than c.) In the example of such a region for the 2-3-1 XOR network provided in [6], a local minimum (of higher loss than the global loss) resides at points in parameter space with some coordinates at infinity. In particular, a gradient descent approach may lead to diverging parameters in that case. However, a different non-increasing path down to the global minimum always exists. It can be shown that local minima at infinity also exist for wide and deep neural networks. (The proof can be found in Appendix A.) Theorem 1 (cf. [6] Section III). 
Let L denote the squared loss of a fully connected regression neural network with sigmoid activation functions, having at least one hidden layer and each hidden layer containing at least two neurons. Then, for almost every finite dataset, the loss function L possesses a local minimum at infinity. The local minimum is suboptimal whenever dataset and neural network are such that a constant function is not an optimal solution. A different type of non-attracting regions of local minima (without infinite parameter values) is considered for neural networks with one hidden layer by Fukumizu and Amari [9] and Wei et al. [8] under the name of singularities. This type of region is characterized by singularities in the weight space (a subset of the null set not covered by the results of Nguyen and Hein [3]) leading to a loss value strictly larger than the global loss. The dynamics around such region are investigated by Wei et al. [8]. Again, a full batch gradient descent approach can get stuck in a local minimum in this type of region. A rough illustration of the nature of these non-attracting regions of local minima is depicted in Fig. 1. Non-attracting regions of local minima do not only exist in small two-layer neural networks. Theorem 2. There exist deep and wide fully-connected neural networks with sigmoid activation function such that the squared loss function of a finite dataset has a non-attracting region of local minima (at finite parameter values). The construction of such local minima is discussed in Section V with a complete proof in Appendix B. Corollary 1. Any attempt to show for fully connected deep and wide neural networks that a gradient descent technique will always lead to a global minimum only based on a description of the loss curve will fail if it doesn't take into consideration properties of the learning procedure (such as the stochasticity of stochastic gradient descent), properties of a suitable initialization technique, or assumptions on the dataset. On the positive side, we point out that a stochastic method such as stochastic gradient descent has a good chance to escape a non-attracting region of local minima due to noise. With infinite time at hand and sufficient exploration, the region can be escaped from with high probability (see [8] for a more detailed discussion). In Section V-A we will further characterize when the method used to construct examples of regions of non-attracting local minima is applicable. This characterization limits us to the construction of extremely degenerate examples. We give an intuitive argument why assuring the necessary assumptions for the construction becomes more difficult for wider and deeper networks and why it is natural to expect a lower suboptimal loss (where the suboptimal minima are less "bad") the less degenerate the constructed minima are and the more parameters a neural network possesses. C. Non-increasing path to a global minimum By definition, every neighborhood of a non-attracting region of local minima contains points from where a non-increasing path to a value less than the value of the region exists. (By definition all points belonging to a nonattracting region have the same value, in fact they are all local minima.) The question therefore arises whether from almost everywhere in parameter space there is such a non-increasing path all the way down to a global minimum. 
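As a rough numerical illustration of the saturation mechanism behind Theorem 1 above (it does not verify that the limit point is a local minimum, which is what the proof in Appendix A establishes): when the biases of the last hidden layer grow, every sigmoid unit saturates, the network output tends to the constant $\sum_i v_{\bullet,i}$, and with that constant chosen as the mean target the loss approaches the best-constant-fit value attained "at infinity". All sizes and data below are hypothetical.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(1)
X = rng.normal(size=20); y = rng.normal(size=20)   # toy one-dimensional dataset
H = 4
u = rng.normal(size=H)                             # input weights of the hidden layer
c = y.mean()
v = np.full(H, c / H)                              # output weights chosen so that sum(v) = c

def loss(bias):
    # f(x) = sum_i v_i * sigmoid(u_i * x + bias); all hidden units share one large bias here
    preds = np.array([np.dot(v, sigmoid(u * x + bias)) for x in X])
    return np.sum((preds - y) ** 2)

best_constant_loss = np.sum((c - y) ** 2)
for b in [0.0, 2.0, 5.0, 10.0, 20.0]:
    print(b, loss(b), best_constant_loss)
# As the bias grows, every sigmoid saturates at 1, f(x) -> sum(v) = mean(y),
# and the loss approaches the best-constant-fit loss, i.e. the value "at infinity".
```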
If the last hidden layer is the wide layer having more neurons than input patterns (for example, consider a wide two-layer neural network), then this holds true by the results of [3] (and [4], [5]). We show that the same conclusion holds for wide neural networks in which the second last hidden layer is the wide one. In particular, this implies that for wide neural networks with two hidden layers, starting from almost everywhere in parameter space, there is a non-increasing path down to a global minimum.

Theorem 3. Consider a fully connected regression neural network with activation function in the class $\mathcal{A}$ equipped with the squared loss function for a finite dataset. Assume that the second last hidden layer contains more neurons than the number of input patterns. Then, for each set of parameters $w$ and all $\varepsilon > 0$, there is $w'$ such that $\|w - w'\| < \varepsilon$ and such that a path non-increasing in loss from $w'$ to a global minimum where $f(x_\alpha) = y_\alpha$ for each $\alpha$ exists.

Corollary 2. Consider a wide, fully connected regression neural network with two hidden layers and activation function in the class $\mathcal{A}$, trained to minimize the squared loss over a finite dataset. Then all suboptimal local minima are contained in a non-attracting region of local minima.

The rest of the paper contains the arguments leading to the given results.

IV. NOTATIONAL CHOICES

We fix additional notation beyond the problem definition from Section III-A. For input $x_\alpha$, we denote the pattern vector of values at all neurons of layer $l$ before activation by $n(l; x_\alpha)$ and after activation by $\mathrm{act}(l; x_\alpha)$.

[Fig. 2: visualization of the embedding $\gamma^1_\lambda$ — inputs $x_{\alpha,1}$, $x_{\alpha,2}$ (and bias $x_0$), hidden neurons $(1,-1)$, $(1,1)$, $(1,2)$, $(1,3)$ (and bias $(1,0)$) with incoming weights $[u_{1,i}]_i$ (shared by $(1,-1)$ and $(1,1)$), $[u_{2,i}]_i$, $[u_{3,i}]_i$, and outgoing weights $\lambda\cdot v_{\bullet,1}$, $(1-\lambda)\cdot v_{\bullet,1}$, $v_{\bullet,2}$, $v_{\bullet,3}$, $v_{\bullet,0}$ into the output $f(x_\alpha)$.]

In general, we will denote column vectors of size $n$ with coefficients $z_i$ by $[z_i]_{1 \le i \le n}$, or simply $[z_i]_i$, and matrices with entries $a_{i,j}$ at position $(i,j)$ by $[a_{i,j}]_{i,j}$. The neuron value pattern $n(l; x)$ is then a vector of size $n_l$ denoted by $n(l;x) = [n(l,k;x)]_{1 \le k \le n_l}$, and the activation pattern $\mathrm{act}(l;x) = [\mathrm{act}(l,k;x)]_{1 \le k \le n_l}$. Using that $f$ can be considered a composition of functions from consecutive layers, we denote the function from $\mathrm{act}(k;x)$ to the output by $h_{\bullet,k}(x)$. For convenience of the reader, a tabular summary of all notation is provided in Appendix A.

V. CONSTRUCTION OF LOCAL MINIMA

We recall the construction of so-called hierarchical suboptimal local minima given in [9] and extend it to deep networks. For the hierarchical construction of critical points, we add one additional neuron $n(l,-1;x)$ to a hidden layer $l$. (Negative indices are unused for neurons, which allows us to add a neuron with this index.) Once we have fixed the layer $l$, we denote the parameters of the incoming linear transformation by $[u_{p,i}]_{p,i}$, so that $u_{p,i}$ denotes the contribution of neuron $i$ in layer $l-1$ to neuron $p$ in layer $l$, and the parameters of the outgoing linear transformation by $[v_{s,q}]_{s,q}$, where $v_{s,q}$ denotes the contribution of neuron $q$ in layer $l$ to neuron $s$ in layer $l+1$. For weights of the output layer (into a single neuron), we write $w_{\bullet,j}$ instead of $w_{1,j}$. We recall the function $\gamma$ used in [9] to construct local minima in a hierarchical way. This function $\gamma$ describes the mapping from the parameters of the original network to the parameters after adding a neuron $n(l,-1;x)$ and is determined by the incoming weights $u_{-1,i}$ into $n(l,-1;x)$, the outgoing weights $v_{s,-1}$ of $n(l,-1;x)$, and a change of the outgoing weights $v_{s,r}$ of $n(l,r;x)$ for one chosen $r$ in the smaller network.
Sorting the network parameters in a convenient way, the embedding of the smaller network into the larger one is defined for any λ ∈ R by a function γ r λ mapping parameters {([u r,i ] i , [v s,r ] s ,w} of the smaller network to parameters {([u −1,i ] i , [v s,−1 ] s , [u r,i ] i , [v s,r ] s ,w)} of the larger network and is defined by γ r λ ([u r,i ] i , [v s,r ] s ,w) := ([u r,i ] i , [λ · v s,r ] s , [u r,i ] i , [(1 − λ) · v s,r ] s ,w) . Herew denotes the collection of all remaining network parameters, i.e., all [u p,i ] i , [v s,q ] s for p, q / ∈ {−1, r} and all parameters from linear transformation of layers with index smaller than l or larger than l + 1, if existent. A visualization of γ 1 λ is shown in Fig. 2. Important fact: For the functions ϕ, f of smaller and larger network at parameters ([u * 1,i ] i , [v * s,1 ] s ,w * ) and γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) respectively, we have ϕ(x) = f (x) for all x. More generally, we even have n ϕ (l, k; x) = n(l, k; x) and act ϕ (l, k; x) = act(l, k; x) for all l, x and k ≥ 0. A. Characterization of hierarchical local minima Using γ r to embed a smaller deep neural network into a second one with one additional neuron, it has been shown that critical points get mapped to critical points. Theorem 4 (Nitta [15]). Consider two neural networks as in Section III-A, which differ by one neuron in layer l with index n(l, −1; x) in the larger network. If parameter choices ([u * r,i ] i , [v * s,r ] s ,w * ) determine a critical point for the squared loss over a finite dataset in the smaller network then, for each λ ∈ R, γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) determines a critical point in the larger network. As a consequence, whenever an embedding of a local minimum with γ r λ into a larger network does not lead to a local minimum, then it leads to a saddle point instead. (There are no local maxima in the networks we consider, since the loss function is convex with respect to the parameters of the last layer.) For neural networks with one hidden layer, it was characterized when a critical point leads to a local minimum. Theorem 5 (Fukumizu, Amari [9]). Consider two neural networks as in Section III-A with only one hidden layer and which differ by one neuron in the hidden layer with index n(1, −1; x) in the larger network. Assume that parameters ([u * r,i ] i , v * •,r ,w * ) determine a local minimum for the squared loss over a finite dataset in the smaller neural network and that λ / ∈ {0, 1}. Then γ r λ ([u * r,i ] i , v * •,r ,w * ) determines a local minimum in the larger network if the matrix [B r i,j ] i,j given by B r i,j = α (f (x α ) − y α ) · v * •,r · σ (n(1, r; x α )) · x α,i · x α,j is positive definite and 0 < λ < 1, or if [B r i,j ] i,j is negative definite and λ < 0 or λ > 1. (Here, we denote the k-th input dimension of input x α by x α,k .) We extend the previous theorem to a characterization in the case of deep networks. We note that a similar computation was performed in [19] for neural networks with two hidden layers. Theorem 6. Consider two (possibly deep) neural networks as in Section III-A, which differ by one neuron in layer l with index n(l, −1; x) in the larger network. Assume that the parameter choices ([u * r,i ] i , [v * s,r ] s ,w * ) determine a local minimum for the squared loss over a finite dataset in the smaller network. 
If the matrix $[B^r_{i,j}]_{i,j}$ defined by
$$B^r_{i,j} := \sum_\alpha (f(x_\alpha) - y_\alpha)\cdot \sum_k \frac{\partial h_{\bullet,l+1}(n(l+1;x_\alpha))}{\partial n(l+1,k;x_\alpha)} \cdot v^*_{k,r} \cdot \sigma''(n(l,r;x_\alpha)) \cdot \mathrm{act}(l-1,i;x_\alpha) \cdot \mathrm{act}(l-1,j;x_\alpha) \tag{1}$$
is either
• positive definite and $\lambda \in I := (0,1)$, or
• negative definite and $\lambda \in I := (-\infty,0) \cup (1,\infty)$,
then $\{\gamma^r_\lambda([u^*_{r,i}]_i, [v^*_{s,r}]_s, \bar w^*) \mid \lambda \in I\}$ determines a non-attracting region of local minima in the larger network if and only if
$$D^{r,s}_i := \sum_\alpha (f(x_\alpha) - y_\alpha)\cdot \frac{\partial h_{\bullet,l+1}(n(l+1;x_\alpha))}{\partial n(l+1,s;x_\alpha)} \cdot \sigma'(n(l,r;x_\alpha)) \cdot \mathrm{act}(l-1,i;x_\alpha) \tag{2}$$
is zero, $D^{r,s}_i = 0$, for all $i, s$.

Remark 1. In the case of a neural network with only one hidden layer as considered in Theorem 5, the function $h_{\bullet,l+1}(x)$ is the identity function on $\mathbb{R}$ and the matrix $[B^r_{i,j}]_{i,j}$ in (1) reduces to the matrix $[B^r_{i,j}]_{i,j}$ in Theorem 5. The condition that $D^{r,s}_i = 0$ for all $i, s$ does hold for shallow neural networks with one hidden layer, as we show below. This proves Theorem 6 to be consistent with Theorem 5.

The theorem follows from a careful computation of the Hessian of the cost function $L(w)$, characterizing when it is positive (or negative) semidefinite, and checking that the loss function does not change along directions that correspond to an eigenvector of the Hessian with eigenvalue 0. We state the outcome of the computation in Lemma 1 and refer the reader interested in a full proof of Theorem 6 to Appendix B.

Lemma 1. Consider two (possibly deep) neural networks as in Section III-A, which differ by one neuron in layer $l$ with index $n(l,-1;x)$ in the larger network. Fix $1 \le r \le n_l$. Assume that the parameter choices $([u^*_{r,i}]_i, [v^*_{s,r}]_s, \bar w^*)$ determine a critical point in the smaller network. Let $L$ denote the loss function of the larger network and $\ell$ the loss function of the smaller network. Let $\alpha \ne -\beta \in \mathbb{R}$ such that $\lambda = \frac{\beta}{\alpha+\beta}$. With respect to the basis of the parameter space of the larger network given by $([u_{-1,i} + u_{r,i}]_i, [v_{s,-1} + v_{s,r}]_s, \bar w, [\alpha\, u_{-1,i} - \beta\, u_{r,i}]_i, [v_{s,-1} - v_{s,r}]_s)$, the Hessian of $L$ (i.e., the second derivative with respect to the new network parameters) at $\gamma^r_\lambda([u^*_{r,i}]_i, [v^*_{s,r}]_s, \bar w^*)$ is given by
$$\begin{pmatrix}
\big[\tfrac{\partial^2 \ell}{\partial u_{r,i}\partial u_{r,j}}\big]_{i,j} & 2\big[\tfrac{\partial^2 \ell}{\partial u_{r,i}\partial v_{s,r}}\big]_{i,s} & \big[\tfrac{\partial^2 \ell}{\partial \bar w\,\partial u_{r,i}}\big]_{i,\bar w} & 0 & 0 \\
2\big[\tfrac{\partial^2 \ell}{\partial u_{r,i}\partial v_{s,r}}\big]_{s,i} & 4\big[\tfrac{\partial^2 \ell}{\partial v_{s,r}\partial v_{t,r}}\big]_{s,t} & 2\big[\tfrac{\partial^2 \ell}{\partial \bar w\,\partial v_{s,r}}\big]_{s,\bar w} & (\alpha-\beta)\big[D^{r,s}_i\big]_{s,i} & 0 \\
\big[\tfrac{\partial^2 \ell}{\partial \bar w\,\partial u_{r,i}}\big]_{\bar w,i} & 2\big[\tfrac{\partial^2 \ell}{\partial \bar w\,\partial v_{s,r}}\big]_{\bar w,s} & \big[\tfrac{\partial^2 \ell}{\partial \bar w\,\partial \bar w}\big]_{\bar w,\bar w} & 0 & 0 \\
0 & (\alpha-\beta)\big[D^{r,s}_i\big]_{i,s} & 0 & \alpha\beta\big[B^r_{i,j}\big]_{i,j} & (\alpha+\beta)\big[D^{r,s}_i\big]_{i,s} \\
0 & 0 & 0 & (\alpha+\beta)\big[D^{r,s}_i\big]_{s,i} & 0
\end{pmatrix}$$

B. Shallow networks with a single hidden layer

For the construction of suboptimal local minima in wide two-layer networks, we begin by following the experiments of [9], which prove the existence of suboptimal local minima in (non-wide) two-layer neural networks. Consider a neural network of size 1-2-1. We use the corresponding network function $f$ to construct a dataset $(x_\alpha, y_\alpha)_{\alpha=1}^N$ by randomly choosing $x_\alpha$ and letting $y_\alpha = f(x_\alpha)$. By construction, we know that a neural network of size 1-2-1 can perfectly fit the dataset with zero error. Consider now a smaller network of size 1-1-1 having too little expressibility for a global fit of all data points. We find parameters $(u^*_{1,1}, v^*_{\bullet,1})$ where the loss function of the neural network is in a local minimum with non-zero loss. For this small example, the required positive definiteness of $[B^1_{i,j}]_{i,j}$ from (1) for a use of $\gamma_\lambda$ with $\lambda \in (0,1)$ reduces to checking a real number for positivity, which we assume to hold true.
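The embedding $\gamma^r_\lambda$ itself is easy to check numerically in the one-hidden-layer case: duplicating hidden unit $r$ with identical incoming weights and splitting its outgoing weight into $\lambda v_{\bullet,r}$ and $(1-\lambda)v_{\bullet,r}$ leaves the network function, and hence the loss, unchanged for every $\lambda$. The sketch below uses arbitrary toy parameters rather than the trained 1-1-1 minimum of the experiment.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def f(U, b, v, c, X):
    # one hidden layer: f(x) = v^T sigmoid(U x + b) + c
    return sigmoid(X @ U.T + b) @ v + c

def gamma(U, b, v, r, lam):
    # duplicate hidden unit r; split its outgoing weight into lam*v_r and (1-lam)*v_r
    U2 = np.vstack([U, U[r]]); b2 = np.append(b, b[r])
    v2 = v.copy(); v2[r] = lam * v[r]
    v2 = np.append(v2, (1.0 - lam) * v[r])
    return U2, b2, v2

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 3))
U = rng.normal(size=(4, 3)); b = rng.normal(size=4); v = rng.normal(size=4); c = 0.1

for lam in [-0.5, 0.3, 0.7, 1.5]:
    U2, b2, v2 = gamma(U, b, v, r=2, lam=lam)
    print(lam, np.max(np.abs(f(U, b, v, c, X) - f(U2, b2, v2, c, X))))  # ~0 for every lambda
```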
We can now apply γ λ and Theorem 5 to find parameters for a neural network of size 1-2-1 that determine a suboptimal local minimum. This example may serve as the base case for a proof by induction to show the following result. Theorem 7. There is a wide neural network with one hidden layer and arbitrarily many neurons in the hidden layer that has a non-attracting region of suboptimal local minima. Having already established the existence of parameters for a (small) neural network leading to a suboptimal local minimum, it suffices to note that iteratively adding neurons using Theorem 5 is possible. Iteratively at step t, we add a neuron n(1, −t; x) to the network by an application of γ 1 λ with the same λ ∈ (0, 1). The corresponding matrix from (1), B 1,(t) i,j = α (f (x α ) − y α ) · (1 − λ) t · v * •,1 · σ (n(l, 1; x α )) · x α,i · x α,j , is positive semidefinite. (We use here that neither f (x α ) nor n(l, 1; x α ) ever change during this construction.) By Theorem 5 we always find a suboptimal minimum with nonzero loss for the network for λ ∈ (0, 1). Note however, that a continuous change of λ to a value outside of [0, 1] does not change the network function, but leads to a saddle point. Hence, we found a non-attracting region of suboptimal minima. Remark 2. Since we started the construction from a network of size 1-1-1, our constructed example is extremely degenerate: The suboptimal local minima of the wide network have identical incoming weight vectors for each hidden neuron. Obviously, the suboptimality of this parameter setting is easily discovered. Also with proper initialization, the chance of landing in this local minimum is vanishing. However, one may also start the construction from a more complex network with a larger network with several hidden neurons. In this case, when adding a few more neurons using γ 1 λ , it is much harder to detect the suboptimality of the parameters from visual inspection. C. Deep neural networks According to Theorem 6, next to positive definiteness of the matrix B r i,j for some r, in deep networks there is a second condition for the construction of hierarchical local minima using the map γ r λ , i.e. D r,s i = 0. We consider conditions that make D r,s i = 0. Proposition 1. Suppose we have a hierarchically constructed critical point of the squared loss of a neural network constructed by adding a neuron into layer l with index n(l, −1; x) by application of the map γ r λ to a neuron n(l, r; x). Suppose further that for the outgoing weights v * s,r of n(l, r; x) we have s v * s,r = 0 , and suppose that D r,s i is defined as in (2). Then D r,s i = 0 if one of the following holds. (i) The layer l is the last hidden layer. (This condition includes the case l = 1 indexing the hidden layer in a two-layer network.) (ii) ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) = ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, t; x α ) for all s, t, α (iii) For each α and each t, with L α : = (f (x α ) − y α ) 2 , ∂L α ∂n(l + 1, t; x α ) = (f (x α ) − y α ) · ∂h •,l+1 (n(l + 1; x α ) ∂n(l + 1, t; x α ) = 0. (This condition holds in the case of the weight infinity attractors in the proof to Theorem 1 for l + 1 the second last layer. It also holds in a global minimum.) The proof is contained in Appendix C. D. Experiment for deep networks To construct a local minimum in a deep and wide neural network, we start by considering a three-layer network of size 2-2-4-1, i.e. we have two input dimensions, one output dimension and hidden layers of two and four neurons. 
We use its network function f to create a dataset of 50 samples (x α , f (x α )), hence we know that a network of size 2-2-4-1 can attain zero loss. We initialize a new neural network of size 2-2-2-1 and train it until convergence, before using the construction to add neurons to the network. When adding neurons to the last hidden layer using γ 1 λ , Proposition 1 assures that D 1,• i = 0 for all i. We check for positive definiteness of the matrix B 1 i,j , and only continue when this property holds. Having thus assured the necessary condition of Theorem 6, we can add a few neurons to the last hidden layer (by induction as in the two-layer case), which results in local minimum of a network of size 2-2-M-1. The local minimum of non-zero loss that we attain is suboptimal whenever M ≥ 4 by construction. For M ≥ 50 the network is wide. Experimentally, we show not only that indeed we end up with a suboptimal minimum, but also that it belongs to a non-attracting region of local minima. In Fig. 3 we show results after adding eleven neurons to the last hidden layer. On the left side, we plot the loss in the neighborhood of the constructed local minimum in parameter space. The top image shows the loss curve into randomly generated directions, the bottom displays the minimal loss over all these directions. On the top right we show the change of loss along one of the degenerate directions that allows reaching a saddle point. In such a saddle point we know from Lemma 1 the direction of descent. The image on the bottom right shows that indeed the direction allows a reduction in loss. Being able to reach a saddle point from a local minimum by a path of non-increasing loss shows that indeed we found a non-attracting region of local minima. E. A discussion of limitations and of the loss of non-attracting regions of suboptimal minima We fix a neuron in layer l and aim to use γ r λ to find a local minimum in the larger network. We then need to check whether a matrix B r i,j is positive definite, which depends on the dataset. Under strong independence assumptions (the signs of different eigenvalues of B r i,j are independent), one may argue similar to arguments in [10] that the probability of finding B r i,j to be positive definite (all eigenvalues positive) is exponentially decreasing in the number of possible neurons of the previous layer l − 1. At the same time, the number of neurons n(l, r; x) in layer l to use for the construction only increases linearly in the number of neurons in layer l. Experimentally, we use a four-layer neural network of size 2-8-12-8-1 to construct a (random) dataset containing 500 labeled samples. We train a network of size 2-4-6-4-1 on the dataset until convergence using SciPy's 2 BFGS implementation. For each layer l, we check each neuron r whether it can be used for enlargment of the network using the map γ r λ for some λ ∈ (0, 1), i.e., we check whether the corresponding matrix B r i,j is positive definite. We repeat this experiment 1000 times. For the first layer, we find that in 547 of 4000 test cases the matrix is positive definite. For the second layer we only find B r i,j positive definite in 33 of 6000 cases, and for the last hidden layer there are only 6 instances out of 4000 where the matrix B r i,j is positive definite. Since the matrix B r i,j is of size 2 × 2/4 × 4/6 × 6 for the first/second/last hidden layer respectively, the number of positive matrices is less than what would be expected under the strong independence assumptions discussed above. 
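The following is a sketch of the screening step just described, written out for the one-hidden-layer form of the criterion (Theorem 5), where $B^r_{i,j} = \sum_\alpha (f(x_\alpha)-y_\alpha)\, v_{\bullet,r}\, \sigma''(n(1,r;x_\alpha))\, x_{\alpha,i}\, x_{\alpha,j}$; the deep-network version (1) would additionally require the derivatives of $h_{\bullet,l+1}$. Parameters and data here are random placeholders, whereas in the experiment the test is run at a converged local minimum.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def sigmoid_dd(t):
    # second derivative of the sigmoid
    s = sigmoid(t)
    return s * (1 - s) * (1 - 2 * s)

def B_matrix(U, b, v, c, X, y, r):
    # Shallow-network criterion (Theorem 5):
    # B^r_{ij} = sum_a (f(x_a) - y_a) * v_r * sigma''(n(1,r;x_a)) * x_{a,i} * x_{a,j}
    n = X @ U.T + b                      # pre-activations of the hidden layer, shape (N, H)
    f = sigmoid(n) @ v + c               # network outputs, shape (N,)
    w = (f - y) * v[r] * sigmoid_dd(n[:, r])
    return (X * w[:, None]).T @ X        # sum_a w_a * x_a x_a^T

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 2)); y = rng.normal(size=50)
U = rng.normal(size=(3, 2)); b = rng.normal(size=3); v = rng.normal(size=3); c = 0.0

for r in range(3):
    eig = np.linalg.eigvalsh(B_matrix(U, b, v, c, X, y, r))
    print(r, eig, "positive definite" if eig.min() > 0 else "not positive definite")
```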
In addition, in deeper layers, further away from the output layer, it seems dataset dependent and unlikely to us that D r,s i = 0. Simulations seem to support this belief. However, it is difficult to check the condition numerically. Firstly, it is hard to find the exact position of minima and we only compute numerical approximations of D r,s i . Secondly, the terms are small for sufficiently large networks and numerical errors play a role. Due to these two facts, it becomes barely possible to check the condition of exact equality to zero. In Fig. 4 we show the distribution of maximal entries of the matrix D r,s i = 0 for neurons in the first, second and third layer of the network of size 2-4-6-4-1 trained as above. Note that for the third layer we know from theory that in a critical point we have D r,s i = 0, but due to numerical errors much larger values arise. Further, a region of local minima as above requires linearly dependent activation pattern vectors. This is how linear dimensions for subsequent layers get lost, reducing the ability to approximate the target function. Intuitively, in a deep and wide neural network there are many possible directions of descent. Loosing some of them still leaves the network with enough freedom to closely approximate the target function. As a result, these suboptimal minima have a loss close to the global loss. Conclusively, finding suboptimal local minima with high loss by the construction using γ r λ becomes hard when the networks become deep and wide. VI. PROVING THE EXISTENCE OF A NON-INCREASING PATH TO THE GLOBAL MINIMUM In the previous section we showed the existence of non-attracting regions of local minima. These type of local minima do not rule out the possibility of non-increasing paths to the global minimum from almost everywhere in parameter space. In this section, we sketch the proof to Theorem 3 illustrated in form of several lemmas, where up to the basic assumptions on the neural network structure as in Section III-A (with activation function in A), the assumption of one lemma is given by the conclusion of the previous one. A full proof can be found in Appendix D. We consider vectors that we call activation vectors, different from the activation pattern vectors act(l; x) from above. The activation vector at neuron k in layer l is denoted by a l k and defined by all values at the given neuron for different samples x α : a l k := [act(l, k; x α )] α . In other words while we fix l and x for the activation pattern vectors act(l; x) and let k run over its possible values, we fix l and k for the activation vectors a l k and let x run over its samples x α in the dataset. The first step of the proof is to use the freedom given by to have the activation vectors a L−2 of the wide layer L − 2 span the whole space R N . ν(t) in R N such that ρ(t) = σ(ν(t)) for all t. The activation vectors a L−1 k of the last hidden layer span a linear subspace H of R N . The optimal parameters w L of the output layer compute the best approximation of (y α ) α onto H. Lemma 3 and Lemma 4 together imply that we can achieve any desired continuous change of the spanning vectors of H, and hence the linear subspace H, by a suitable change of the parameters w L−1 . As it turns out, there is a natural possible path of parameters that strictly monotonically decreases the loss to the global minimum whenever we may assume that not all non-zero coefficients of w L have the same sign. 
If this is not the case, however, we first follow a different path through the parameter space to eventually assure different signs of coefficients of w L . Interestingly, this path leaves the loss constant. In other words, from certain points in parameter space it is necessary to follow a path of constant loss until we reach a point from where we can further decrease the loss; just like in the case of the non-attracting regions of local minima. Lemma 5. For n ≥ 2, let {r 1 , r 2 , . . . , r n } be a set of vectors in Im(σ) N and E = span j (r j ) their linear span. If z ∈ E has a representation z = j λ j r j where all λ j are positive (or all negative), then there are continuous paths r j : [0, 1] → r j (t) of vectors in Im(σ) N such that the following properties hold. (i) r j (0) = r j . (ii) z ∈ span j (r j (t)) for all t, so that there are continuous paths t → λ j (t) such that z = λ j (t)r j (t). (iii) There are 1 ≤ j + , j − ≤ n such that λ j+ (1) > 0 and λ j− (1) < 0. We apply Lemma 5 to activation vectors r i = a i giving continuous paths t → a L−1 i (t) and t → λ i (t) = w L 1,i (t). Then the output f (x α ) of the neural network along this path remains constant, hence so does the loss. The desired change of activation vectors a L−1 i (t) can be performed by a suitable change of parameters w L−1 according to Lemma 3 and Lemma 4. The simultaneous change of w L−1 and w L defines the first part Γ 1 (t) of our desired path in the parameter space which keeps f (x α ) constant. The final part of the desired path is given by the following lemma. Lemma 6. Assume a neural network structure as above with activation vectors a L−2 i of the wide hidden layer spanning R N . If the weights w L of the output layer satisfy that there is both a positive and a negative weight, then there is a continuous path t ∈ [0, 1] → Γ 0 (t) from the current weights Γ 0 (0) = w of decreasing loss down to the global minimum at Γ 0 (1) . Proof. Fix z α = f (x α ), the prediction for the current weights. The main idea is to change the activation vectors of the last hidden layer according to ρ j : t ∈ [0, 1] → a L−1 j + t · 1 w L •,j · (y − z) N . With w L fixed, at the output this results in a change of t ∈ [0, 1] → z + t · (y − z), which reduces the loss to zero. The required change of activation vectors can be implemented by an application of Lemma 3 and Lemma 4, but only if the image of each ρ j lies in the image [c, d] of the activation function. Hence, the latter must be arranged. In the case that 0 ∈ (c, d), it suffices to first decrease the norm of a L−1 j while simultaneously increasing the norm of the outgoing weight w L •,j so that the output remains constant. If, however, 0 is in the boundary of the interval [c, d] (for example the case of a sigmoid activation function), then the assumption of non-zero weights with different signs becomes necessary. We let J + = {j ∈ {1, 2, . . . , n L−1 } | w L •,j ≥ 0}, J − = {j ∈ {1, 2, . . . , n L−1 } | w L •,j < 0}, I + = {α ∈ {1, 2, . . . , N } | (y − z) α ≥ 0}, I − = {α ∈ {1, 2, . . . , N } | (y − z) α < 0}. We further define (y − z) I+ to be the vector v with coordinate v α for α ∈ I + equal to (y − z) α and 0 otherwise, and we let analogously (y − z) I− denote the vector containing only the negative coordinates of y − z. 
Then the paths $\rho^j_3 : [0,1] \to (c,d)^N$ defined for each $j \in J_+$ by
$$\rho^j_3(t) = a^{L-1}_j + t \cdot \frac{1}{w^L_{\bullet,j}} \cdot \frac{(y-z)_{I_+}}{|J_+|}$$
and for each $j \in J_-$ by
$$\rho^j_3(t) = a^{L-1}_j + t \cdot \frac{1}{w^L_{\bullet,j}} \cdot \frac{(y-z)_{I_-}}{|J_-|}$$
can be arranged to all lie in the image of the activation function, and they again lead to an output change of $t \in [0,1] \mapsto z + t\cdot(y-z)$. (Appendix D contains a more detailed proof.)

This concludes the proof of Theorem 3, having found a sufficient condition in Lemma 6 to confirm the existence of a path down to zero loss, and having shown how to realize this condition in Lemmas 3, 4 and 5.

VII. CONCLUSION

In this paper we have studied the local minima of deep and wide regression neural networks with sigmoid activation functions. We established that the nature of local minima is such that they live in a special region of the cost function called a non-attracting region, and showed that a non-increasing path to a configuration with lower loss than that of the region can always be found. For sufficiently wide two- or three-layer neural networks, all local minima belong to such a region. We generalized the procedure to find such regions, introduced by Fukumizu and Amari [9], to deep networks and described sufficient conditions for the construction to work. The necessary conditions become very hard to satisfy in wider and deeper networks and, if they fail, the construction leads to saddle points instead. Finally, an intuitive argument shows a clear relation between the degree of degeneracy of a local minimum and the level of suboptimality of the constructed local minimum.

APPENDIX

NOTATION

$[x_\alpha]_\alpha \in \mathbb{R}^n$ : column vector with entries $x_\alpha \in \mathbb{R}$
$[x_{i,j}]_{i,j} \in \mathbb{R}^{n_1 \times n_2}$ : matrix with entry $x_{i,j}$ at position $(i,j)$
$\mathrm{Im}(f) \subseteq \mathbb{R}$ : image of a function $f$
$C^n(X, Y)$ : $n$-times continuously differentiable functions from $X$ to $Y$
$N \in \mathbb{N}$ : number of data samples in the training set
$x_\alpha \in \mathbb{R}^{n_0}$ : training sample input
$y_\alpha \in \mathbb{R}$ : target output for sample $x_\alpha$
$\mathcal{A} \subseteq C(\mathbb{R})$ : class of real-analytic, strictly monotonically increasing, bounded (activation) functions such that the closure of the image contains zero
$\sigma \in C^2(\mathbb{R}, \mathbb{R})$ : a nonlinear activation function in class $\mathcal{A}$
$f \in C(\mathbb{R}^{n_0}, \mathbb{R})$ : neural network function
$l$, $1 \le l \le L$ : index of a layer
$L \in \mathbb{N}$ : number of layers excluding the input layer
$l = 0$ : input layer
$l = L$ : output layer
$n_l \in \mathbb{N}$ : number of neurons in layer $l$
$k$, $1 \le k \le n_l$ : index of a neuron in layer $l$
$w^l \in \mathbb{R}^{n_l \times n_{l-1}}$ : weight matrix of the $l$-th layer
$w \in \mathbb{R}^{\sum_{l=1}^L n_l \cdot n_{l-1}}$ : collection of all $w^l$
$w^l_{i,j} \in \mathbb{R}$ : the weight from neuron $j$ of layer $l-1$ to neuron $i$ of layer $l$
$w^L_{\bullet,j} \in \mathbb{R}$ : the weight from neuron $j$ of layer $L-1$ to the output
$L \in \mathbb{R}_+$ : squared loss over the training samples
$n(l,k;x) \in \mathbb{R}$ : value at neuron $k$ in layer $l$ before activation for input pattern $x$
$n(l;x) \in \mathbb{R}^{n_l}$ : neuron pattern at layer $l$ before activation for input pattern $x$
$\mathrm{act}(l,k;x) \in \mathrm{Im}(\sigma)$ : activation at neuron $k$ in layer $l$ for input $x$
$\mathrm{act}(l;x) \in \mathrm{Im}(\sigma)^{n_l}$ : activation pattern at layer $l$ for input $x$

In Section V, where we fix a layer $l$, we additionally use the following notation.

$h_{\bullet,k}(x) \in C(\mathbb{R}^{n_k}, \mathbb{R})$ : the function from $\mathrm{act}(k;x)$ to the output
$[u_{p,i}]_{p,i} \in \mathbb{R}^{n_l \times n_{l-1}}$ : weights of the given layer $l$
$[v_{s,q}]_{s,q} \in \mathbb{R}^{n_{l+1} \times n_l}$ : weights of layer $l+1$
$r \in \{1, 2, \dots, n_l\}$ : the index of the neuron of layer $l$ that we use for the addition of one additional neuron
$M := \sum_{t=1}^L n_t \cdot n_{t-1}$ : the number of weights in the smaller neural network
$\bar w \in \mathbb{R}^{M - n_{l-1} - n_{l+1}}$ : all weights except $[u_{1,i}]_i$ and $[v_{s,1}]_s$
$\gamma^r_\lambda \in C(\mathbb{R}^M, \mathbb{R}^{M + n_{l-1} + n_{l+1}})$ : the map defined in Section V to add a neuron in layer $l$ using the neuron with index $r$ in layer $l$

In Section VI, we additionally use the following notation.

A. Local minima at infinity in neural networks

In this section we prove the existence of local minima at infinity in neural networks.

Theorem 1 (cf. [6] Section III). Let $L$ denote the squared loss of a fully connected regression neural network with sigmoid activation functions, having at least one hidden layer and each hidden layer containing at least two neurons. Then, for almost every finite dataset, the loss function $L$ possesses a local minimum at infinity. The local minimum is suboptimal whenever dataset and neural network are such that a constant function is not an optimal solution.

Proof. We will show that, if all bias terms $u_{i,0}$ of the last hidden layer are sufficiently large, then there are parameters $u_{i,k}$ for $k \ne 0$ and parameters $v_{\bullet,i}$ of the output layer such that the minimal loss is achieved at $u_{i,0} = \infty$ for all $i$. We note that, if $u_{i,0} = \infty$ for all $i$, all neurons of the last hidden layer are fully active for all samples, i.e. $\mathrm{act}(L-1,i;x_\alpha) = 1$ for all $i$. Therefore, in this case $f(x_\alpha) = \sum_i v_{\bullet,i}$ for all $\alpha$. A constant function $f(x_\alpha) = \sum_i v_{\bullet,i} = c$ minimizes the loss $\sum_\alpha (c - y_\alpha)^2$ uniquely for $c := \frac{1}{N}\sum_{\alpha=1}^N y_\alpha$. We will assume that the $v_{\bullet,i}$ are chosen such that $\sum_i v_{\bullet,i} = c$ does hold. That is, for fully active hidden neurons at the last hidden layer, the $v_{\bullet,i}$ are chosen to minimize the loss. We write $f(x_\alpha) = c + \varepsilon_\alpha$. Then
$$L = \frac{1}{2}\sum_\alpha (f(x_\alpha) - y_\alpha)^2 = \frac{1}{2}\sum_\alpha (c + \varepsilon_\alpha - y_\alpha)^2 = \frac{1}{2}\sum_\alpha (\varepsilon_\alpha + (c - y_\alpha))^2 = \underbrace{\frac{1}{2}\sum_\alpha (c - y_\alpha)^2}_{\text{loss at } u_{i,0} = \infty \text{ for all } i} + \underbrace{\frac{1}{2}\sum_\alpha \varepsilon_\alpha^2}_{\ge 0} + \underbrace{\sum_\alpha \varepsilon_\alpha (c - y_\alpha)}_{(*)}.$$
The idea is now to ensure that $(*) \ge 0$ for sufficiently large $u_{i,0}$ and in a neighborhood of the $v_{\bullet,i}$ chosen as above. Then the loss $L$ is larger than at infinity, and any point in parameter space with $u_{i,0} = \infty$ and $v_{\bullet,i}$ with $\sum_i v_{\bullet,i} = c$ is a local minimum. To study the behavior at $u_{i,0} = \infty$, we consider $p_i = \exp(-u_{i,0})$. Note that $\lim_{u_{i,0} \to \infty} p_i = 0$. We have
$$f(x_\alpha) = \sum_i v_{\bullet,i}\, \sigma\Big(u_{i,0} + \sum_k u_{i,k}\,\mathrm{act}(L-2,k;x_\alpha)\Big) = \sum_i v_{\bullet,i}\cdot \frac{1}{1 + p_i \cdot \exp\big(-\sum_k u_{i,k}\,\mathrm{act}(L-2,k;x_\alpha)\big)}.$$
Now for $p_i$ close to $0$ we can use the Taylor expansion of $g^j_i(p_i) := \frac{1}{1 + p_i \exp(a^j_i)}$ to get $g^j_i(p_i) = 1 - \exp(a^j_i)\, p_i + O(|p_i|^2)$. Therefore
$$f(x_\alpha) = c - \sum_i v_{\bullet,i}\, p_i \exp\Big(-\sum_k u_{i,k}\,\mathrm{act}(L-2,k;x_\alpha)\Big) + O(p_i^2)$$
and we find that
$$\varepsilon_\alpha = -\sum_i v_{\bullet,i}\, p_i \exp\Big(-\sum_k u_{i,k}\,\mathrm{act}(L-2,k;x_\alpha)\Big) + O(p_i^2).$$
Recalling that we aim to ensure $(*) = \sum_\alpha \varepsilon_\alpha (c - y_\alpha) \ge 0$, we consider
$$\sum_\alpha \varepsilon_\alpha (c - y_\alpha) = -\sum_\alpha (c - y_\alpha)\Big(\sum_i v_{\bullet,i}\, p_i \exp\Big(-\sum_k u_{i,k}\,\mathrm{act}(L-2,k;x_\alpha)\Big)\Big) + O(p_i^2) = -\sum_i v_{\bullet,i}\, p_i \sum_\alpha (c - y_\alpha) \exp\Big(-\sum_k u_{i,k}\,\mathrm{act}(L-2,k;x_\alpha)\Big) + O(p_i^2).$$
We are still able to choose the parameters $u_{i,k}$ for $k \ne 0$, the parameters from previous layers, and the $v_{\bullet,i}$ subject to $\sum_i v_{\bullet,i} = c$. If now $v_{\bullet,i} > 0$ whenever $\sum_\alpha (c - y_\alpha)\exp(-\sum_k u_{i,k}\,\mathrm{act}(L-2,k;x_\alpha)) < 0$ and $v_{\bullet,i} < 0$ whenever $\sum_\alpha (c - y_\alpha)\exp(-\sum_k u_{i,k}\,\mathrm{act}(L-2,k;x_\alpha)) > 0$, then the term $(*)$ is strictly positive, hence the overall loss is larger than the loss at $p_i = 0$ for sufficiently small $p_i$ and in a neighborhood of the $v_{\bullet,i}$.
The only obstruction we have to get around is the case where we need all v •,i of the opposite sign of c (in other words, α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) has the same sign as c), conflicting with i v •,i = c. To avoid this case, we impose the mild condition that α (c−y α )act(L−2, r; x α ) = 0 for some r, which can be arranged to hold for almost every dataset by fixing all parameters of layers with index smaller than L − 2. By Lemma 7 below (with d α = (c−y α ) and a r α = act(L−2, r; x α )), we can find u > k such that α (c−y α ) exp(− k u > k act(L−2, k; x α )) > 0 and u < k such that α (c − y α ) exp(− k u < k act(L − 2, k; x α )) < 0. We fix u i,k for k ≥ 0 such that there is some i 1 with [u i1,k ] k = [u > k ] k and some i 2 with [u i2,k ] k = [u < k ] k . This assures that we can choose the v •,i of opposite sign to α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) and such that i v •,i = c, leading to a local minimum at infinity. The local minimum is suboptimal whenever a constant function is not the optimal network function for the given dataset. By assumption, there is r such that the last term is nonzero. Hence, using coordinate r, we can choose w = (0, 0, . . . , 0, w r , 0, . . . , 0) such that φ(w) is positive and we can choose w such that φ(w) is negative. B. Proofs for the construction of local minima Here we prove B r i,j := α (f (x α ) − y α ) · k ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, k; x α ) · v * k,r · σ (n(l, r; x α )) · act(l − 1, i; x α ) · act(l − 1, j; x α )(1) is either is zero, D r,s i = 0, for all i, s. The previous theorem follows from two lemmas, with the first lemma containing the computation of the Hessian of the cost function L of the larger network at parameters γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) with respect to a suitable basis. In addition, to find local minima one needs to explain away all additional directions, i.e., we need to show that the loss function actually does not change into the direction of eigenvectors of the Hessian with eigenvalue 0. Otherwise a higher derivative into this direction could be nonzero and potentially lead to a saddle point (see [19]). Let L denote the the loss function of the larger network and the loss function of the smaller network. Let α = −β ∈ R such that λ = β α+β . With respect to the basis of the parameter space of the larger network given by ([u −1,i +u r,i ] i , [v s,−1 +v s,r ] s ,w, [α· u −1,i − β · u r,i ] i , [v s,−1 − v s,r ] s ),0 0 0 (α − β)[D r,s i ] i,s 0 αβ[B r i,j ] i,j (α + β)[D r,s i ] i,s 0 0 0 (α + β)[D r,s i ] s,i 0        Proof. The proof only requires a tedious, but not complicated calculation (using the relation αλ − β(1 − λ) = 0 multiple times. To keep the argumentation streamlined, we moved all the necessary calculations into Appendix E. (z 1 , z 2 , z 3 , z 4 )     a 2b c 0 2b T 4d 2e 0 c T 2e T f 0 0 0 0 x         z 1 z 2 z 3 z 4     = (z 1 , 2z 2 , z 3 , z 4 )     a b c 0 b T d e 0 c T e T f 0 0 0 0 x         z 1 2z 2 z 3 z 4     (b) It is clear that the matrix x is positive semidefinite for g positive semidefinite and h = 0. To show the converse, first note that if g is not positive semidefinite and z is such that z T gz < 0 then (z T , 0) g h h T 0 z 0 = z T gz < 0. It therefore remains to show that also h = 0 is a necessary condition. Assume h = 0 and find z such that hz = 0. Then for any λ ∈ R we have ((hz) T , −λz T ) g h h T 0 hz −λz = (hz) T g(hz) − 2(hz) T hλz = (hz) T g(hz) − 2λ||hz|| 2 2 . 
For sufficiently large λ, the last term is negative. Proof of Theorem 6. In Lemma 1, we calculated the Hessian of L with respect to a suitable basis at a the critical point γ λ ([u * r,i ] i , [v * s,r ] s ,w * ). If the matrix [D r,s i ] i,] i,j is positive definite or if (λ < 0 or λ > 1) ⇔ αβ < 0 and [B r i,j ] i,j is negative definite. In each case we can alter the λ to values leading to saddle points without changing the network function or loss. Therefore, the critical points can only be saddle points or local minima on a non-attracting region of local minima. To determine whether the critical points in questions lead to local minima when [D r,s i ] i,s = 0, it is insufficient to only prove the Hessian to be positive semidefinite (in contrast to (strict) positive definiteness), but we need to consider directions for which the second order information is insufficient. We know that the loss is at a minimum with respect to all coordinates except for the degenerate directions [v s,−1 − v s,r ] s . However, the network function f (x) is constant along [v s,−1 − v s,r ] s (keeping [v s,−1 + v s, r ] s constant) at the critical point where u −1,i = u r,i for all i. Hence, no higher order information leads to saddle points and it follows that the critical point lies on a region of local minima. C. Construction of local minima in deep networks Proposition 1. Suppose we have a hierarchically constructed critical point of the squared loss of a neural network constructed by adding a neuron into layer l with index n(l, −1; x) by application of the map γ r λ to a neuron n(l, r; x). Suppose further that for the outgoing weights v * s,r of n(l, r; x) we have s v * s,r = 0 , and suppose that D r,s i is defined as in (2). Then D r,s i = 0 if one of the following holds. (i) The layer l is the last hidden layer. (This condition includes the case l = 1 indexing the hidden layer in a two-layer network.) (ii) ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) = ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, t; x α ) for all s, t, α (iii) For each α and each t, with L α : = (f (x α ) − y α ) 2 , ∂L α ∂n(l + 1, t; x α ) = (f (x α ) − y α ) · ∂h •,l+1 (n(l + 1; x α ) ∂n(l + 1, t; x α ) = 0. (This condition holds in the case of the weight infinity attractors in the proof to Theorem 1 for l + 1 the second last layer. It also holds in a global minimum.) Proof. The fact that property (i) suffices uses that h •,l+1 (x) reduces to the identity function on the networks output and hence its derivative is one. Then, considering a regression network as before, our assumption says that v * •,r = 0, hence its reciprocal can be factored out of the sum in Equation (2). Denoting incoming weights into n(l, r; x) by u r,i as before, this leads to D r,1• i = 1 v * •,r · α (f (x α ) − y α ) · v * •,r · σ (n(l, r; x α )) · act(l − 1, i; x α ) = 1 v * •,r · ∂L ∂u r,i = 0 In the case of (ii), ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) = ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, t; x α ) for all s, t and we can factor out the reciprocal of t v * r,s = 0 in Equation (2) to again see that for each i, ∂L ∂ur,i = 0 implies that D r,s i = 0 for all s. (iii) is evident since in this case clearly every summand in Equation (2) is zero. D. Proofs for the non-increasing path to a global minimum In this section we discuss how in wide neural networks with two hidden layers a non-increasing path to the global minimum may be found from almost everywhere in the parameter space. 
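The key linear-algebra step behind Lemmas 3 and 4 below can be sketched numerically: because the activation matrix of the wide layer has full rank $N$, any desired pre-activation values of the next layer on the $N$ training inputs can be produced exactly by an appropriate choice of the weights $w^{L-1}$ (and any continuous path of such values by a continuous path of weights). The sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
N, n_wide, n_out = 5, 8, 3                      # wide layer: more neurons (8) than samples (5)
A = rng.uniform(0.05, 0.95, size=(n_wide, N))   # activation matrix [act(L-2, k; x_a)]_{k,a}, rank N
target = rng.normal(size=(n_out, N))            # desired pre-activations n(L-1, s; x_a) for all samples

# Because A has full column rank N, pinv(A) @ A = I_N, so W @ A hits the target exactly.
W = target @ np.linalg.pinv(A)                  # weights w^{L-1} realizing the desired values
print(np.max(np.abs(W @ A - target)))           # ~0
```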
By [3] (and [4], [5]), we can find such a path if the last hidden layer is wide (containing more neurons than input patterns). We therefore only consider the case where the first hidden layer in a three-layer neural network is wide. More generally, our results apply to all deep neural networks with the second last hidden layer wide. Theorem 3. Consider a fully connected regression neural network with activation function in the class A equipped with the squared loss function for a finite dataset. Assume that the second last hidden layer contains more neurons than the number of input patterns. Then, for each set of parameters w and all > 0, there is w such that ||w − w || < and such that a path non-increasing in loss from w to a global minimum where f (x α ) = y α for each α exists. The first step of the proof is to use the freedom given by to have the activation vectors a L−2 of the wide layer L − 2 span the whole space R N . ν(t) = Γ(t) · [act(L − 2, k; x α )] k,α Proof. We write ν(t) = [n(L − 1, s; x α )] s,α +ν(t) withν(0) = 0. We will findΓ(t) such thatν(t) =Γ(t) · [act(L − 2, k; x α )] k,α withΓ(0) = 0. Then Γ(t) := w L−1 +Γ(t) does the job. Since by assumption [act(L − 2, k; x α )] k,α has full rank, we can find an invertible submatrixà ∈ R N ×N of [act(L−2, k; x α )] k,α . Then we can define a continuous pathρ in R nL−1×N given byρ(t) :=ν(t)·Ã −1 , which satisfies ρ(t) ·Ã = ν(t) andρ(0) = 0. Extendingρ(t) to a path in R nL−1×nL−2 by zero columns at positions corresponding to rows of [act(L − 2, k; x α )] k,α missing inÃ, gives a pathΓ(t) such thatΓ(t) · [act(L − 2, k; x α )] k,α =ν(t) and withΓ(0) = 0. Lemma 4. For all continuous paths ρ(t) in Im(σ) N , i.e. the N-fold copy of the image of σ, there is a continuous path ν(t) in R N such that ρ(t) = σ(ν(t)) for all t. Proof. Since σ : R N → Im(σ) N is invertible with a continuous inverse, take ν(t) = σ −1 (ρ(t)). The activation vectors a L−1 k of the last hidden layer span a linear subspace H of R N . The optimal parameters w L of the output layer compute the best approximation of (y α ) α onto H. Lemma 3 and Lemma 4 together imply that we can achieve any desired continuous change of the spanning vectors of H, and hence the linear subspace H, by a suitable change of the parameters w L−1 . There is a natural possible path of parameters that strictly monotonically decreases the loss to the global minimum. For activation functions in A with 0 in the boundary of the image interval [c, d], this path requires that not all non-zero coefficients of w L have the same sign. If this is not the case, however, we first follow a different path through the parameter space to eventually assure different signs of coefficients of w L . Interestingly, this path leaves the loss constant. In other words, from certain points in parameter space it seems necessary to follow a path of constant loss until we reach a point from where we can further decrease the loss; just like in the case of the non-attracting regions of local minima. Lemma 5. For n ≥ 2, let {r 1 , r 2 , . . . , r n } be a set of vectors in Im(σ) N and E = span j (r j ) their linear span. If z ∈ E has a representation z = j λ j r j where all λ j are positive (or all negative), then there are continuous paths r j : [0, 1] → r j (t) of vectors in Im(σ) N such that the following properties hold. (i) r j (0) = r j . (ii) z ∈ span j (r j (t)) for all t, so that there are continuous paths t → λ j (t) such that z = λ j (t)r j (t). 
(iii) There are 1 ≤ j + , j − ≤ n such that λ j+ (1) > 0 and λ j− (1) < 0. Proof. We only consider the case with all λ j ≥ 0. The other case can be treated analogously. If only one λ j0 is nonzero, then consider a vector r k corresponding to a zero coefficient λ k = 0 and change r k continuously until it equals the vector r j0 corresponding to the only nonzero coefficient. Then continuously increase the positive coefficient λ j0 , while introducing a corresponding negative contribution via λ k . It is then easy to see that this leads to a path satisfying conditions (i)-(iii). We may therefore assume that at least two coefficients λ j are nonzero, say λ 1 and λ 2 . Leaving all r j and λ j for j ≥ 3 unchanged, we only consider r 1 , r 2 , λ 1 , λ 2 for the desired path, i.e. r j (t) = r j and λ j (t) = λ j for all j ≥ 3. We have that λ 1 r 1 + λ 2 r 2 ∈ (λ 1 + λ 2 ) · Im(σ) N , hence can be written as λR for some λ > 0 and R ∈ Im(σ) N with λR = z − j≥3 λ j r j = λ 1 r 1 + λ 2 r 2 . For t ∈ [0, 1 2 ] we define r 1 (t) := r 1 + 2t(R − r 1 ) and r 2 (t) := r 2 , λ 1 (t) = λλ 1 (1 − 2t)λ + 2tλ 1 and λ 2 (t) = (1 − 2t) λλ 2 (1 − 2t)λ + 2tλ 1 . For t ∈ [ 1 2 , 1] we set r 1 (t) := (2 − 2t)R + (2t − 1)( λ 1 λ 1 + 2λ 2 r 1 + 2λ 2 λ 1 + 2λ 2 r 2 ) and r 2 (t) = r 2 , λ 1 (t) = λ(λ 1 + 2λ 2 ) (2 − 2t)(λ 1 + 2λ 2 ) + (2t − 1)λ and λ 2 (t) = −λ 2 λ(2t − 1) (2 − 2t)(λ 1 + 2λ 2 ) + (2t − 1)λ . Then (i) r 1 (0) = r 1 and r 2 (0) = r 2 as desired. Further (ii) z ∈ span j (r j (t)) for all t ∈ [0, 1] via z = j λ j (t)r j (t) . It is also easy to check that r 1 (t), r 2 (t) ∈ Im(σ) N for all t ∈ [0, 1]. Finally, (iii) λ 1 (1) = λ 1 +2λ 2 > 0 and λ 2 (1) = −λ 2 < 0. Hence, if all non-zero coefficients of w L have the same sign, then we apply Lemma 5 to activation vectors r i = a L−1 i giving continuous paths t → a L−1 i (t) and t → λ i (t) = w L •,i (t). Then the output f (x α ) of the neural network along this path remains constant, hence so does the loss. The desired change of activation vectors a L−1 i (t) can be performed by a suitable change of parameters w L−1 according to Lemma 3 and Lemma 4. The simultaneous change of w L−1 and w L defines the first part Γ 1 (t) of our desired path in the parameter space which keeps f (x α ) constant. We may now assume that not all non-zero entries of w L have the same sign. The final part of the desired path is given by the following lemma. Lemma 6. Assume a neural network structure as above with activation vectors a L−2 i of the wide hidden layer spanning R N . If the weights w L of the output layer satisfy that there is both a positive and a negative weight, then there is a continuous path t ∈ [0, 1] → Γ 0 (t) from the current weights Γ 0 (0) = w of decreasing loss down to the global minimum at Γ 0 (1) . Proof. We first prove the result for the (more complicated) case when Im(σ) = (0, d) for some d > 0, e.g. for σ the sigmoid function: Let z ∈ R N be the vector given by z α = f (x α ) for the parameter w at the current weights. Let I + = {α ∈ {1, 2, . . . , N } | (y − z) α ≥ 0}, J + = {j ∈ {1, 2, . . . , n L−1 } | w L •,j ≥ 0}, J − = {j ∈ {1, 2, . . . , n L−1 } | w L •,j < 0}. For each j ∈ {1, 2, . . . , n L−1 } \ J 0 = J + ∪ J − we consider the path ρ j 2 : [0, 1) → (0, d) N of activation values given by ρ j 2 (t) = (1 − t)[act(L − 1, j; x α )] α . Applying Lemma 3 and Lemma 4 we find the inducing path Γ j 2,L−1 for parameters w L−1 , and we simultaneously change the parameters w L via w L •,j (t) = Γ j 2,L (t) := 1 1−t w L •,j . 
Following along Γ j 2 (t) = (Γ j 2,L−1 (t), Γ j 2,L (t)) does not change the outcome f (x α ) = z α for any α. For j ∈ J + we find t j ∈ [0, 1) such that ρ j 2 (t j ) + 1 w L •,j (t j ) · (y − z) I+ |J + | ∈ (0, d) N . This is possible, since all involved terms are positive, ρ j 2 (t j ) < 1 and decreasing to zero for increasing t, while w L •,j (t) increases for growing t. Similarly, for j ∈ J − we find t j ∈ [0, 1) such that ρ j 2 (t j ) + 1 w L •,j (t j ) · (y − z) I− |J − | ∈ (0, d) N . This time the negative sign of w L •,j (t) for j ∈ J . and the negative signs of (y − z) I− cancel, again allowing to find suitable t j . We will consider the endpoints Γ j 2 (t j ) as the new parameter values for w and the induced endpoints ρ j 2 (t j ) as our new act(L − 1, j; x α ). The next part of the path incrementally adds positive or negative coordinates of (y − z) to each activation vector of the last hidden layer. For each j ∈ J + , we let ρ j 3 : [0, 1] → (0, d) N be the path defined by ρ j 3 (t) = [act(L − 1, j; x α )] α + t · 1 w L •,j · (y − z) I+ |J + | and for each j ∈ J − by ρ j 3 (t) = [act(L − 1, j; x α )] α + t · 1 w L •,j · (y − z) I− |J − | Since ρ j 3 (t) is a path in Im(σ) for all j, this path can again be realized by an inducing change Γ 3 (t) of parameters w L−1 . The parameters w L are kept unchanged in this last part of the path. Simultaneously changing all ρ j 3 (t) results in a change of the output of the neural network given by [f t (x α )] α = w L •,0 + nL−1 j=1 w L •,j ρ j 3 (t) = w L •,0 +   j∈J+ w L •,j act(L − 1, j; x α ) + t · 1 w L •,j · (y − z) I+,α |J + |   α +   j∈J− w L •,j act(L − 1, j; x α ) + t · 1 w L •,j · (y − z) I−,α |J − |   α = w L •,0 +   nL−1 j=1 w L •,j act(L − 1, j; x α )   α + j∈J+ t · (y − z) I+ |J + | + j∈J− t · (y − z) I− |J − | = z + t · (y − z) I+ + t · (y − z) I− = z + t · (y − z). It is easy to see that for the path t ∈ [0, 1] → z + t · (y − z) the loss L = ||z + t · (y − z) − y|| 2 2 = (1 − t)||y − z|| 2 2 is strictly decreasing to zero. The concatenation of Γ 2 and Γ 3 gives us the desired path Γ 0 . The case that Im(σ) = (c, 0) for some c < 0 works analogously. In the case that Im(σ) = (c, d) with 0 ∈ (c, d), there is no need to split up into sets I + , I − and J + , J − . We haveρ j 2 (t j ) + 1 w L •,j (tj) · (y−z) N ∈ (c, d) N for t j close enough to 1. Hence we can follow Γ j 2 (t) as above until ρ j 2 (t) + 1 w L •,j (t) · (y − z) N ∈ (c, d) N for all j. From here, the paths ρ j 3 (t) = [act(L − 1, j; x α )] α + t · 1 w L •,j · (y−z) N define paths in Im(σ) for each j, which can be implemented by an application of Lemma 3 and Lemma 4 and lead to the global minimum. E. Calculations for Lemma 1 For the calculations we may assume without loss of generality that r = 1. If we want to consider a different n(l, r; x) and its corresponding γ r λ , then this can be achieved by a reordering of the indices of neurons.) We let ϕ denote the network function of the smaller neural network and f the neural network function of the larger network after adding one neuron according to the map γ 1 λ . To distinguish the parameters of f and ϕ, we write w ϕ for the parameters of the network before the embedding. This gives for all i, s and all m ≥ 2: For the function f we have the following partial derivatives. 
u −1,i = u ϕ 1,i u 1,i = u ϕ 1,i v s,−1 = λv ϕ s,1 v s,1 = (1 − λ)v ϕ s,1 u m,i = u ϕ m,i v s,m = v ϕ s, ∂f (x) ∂u p,i = k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) and ∂f (x) ∂v s,q = ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · act(l, q; x) The analogous equations hold for ϕ. 2) Relating first order derivatives of network functions f and ϕ Therefore, at 3) Second order derivatives of network functions f and ϕ. For the second derivatives we get (with δ(a, a) = 1 and δ(a, b) = 0 for a = b) ∂ 2 f (x) ∂u p,i ∂u q,j = ∂ ∂u q,j k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) = m k ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, m; x)∂n(l + 1, k; x) · v m,q · σ (n(l, q; x)) · act(l − 1, j; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) + δ(p, q) k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) ·act(l − 1, i; x) · act(l − 1, j; x) and ∂ 2 f (x) ∂v s,p ∂v t,q = ∂ ∂v t,q ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · act(l, p; x) = ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x)∂n(l + 1, t; x) · act(l, p; x) · act(l, q; x) and ∂ 2 f (x) ∂u p,i ∂v s,q = ∂ ∂v s,q k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) = k ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x)∂n(l + 1, k; x) · act(l, q; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) + δ(q, p) · ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · σ (n(l, p; x)) · act(l − 1, i; x) For a parameter w closer to the input than [u p,i ] p,i , [v s,q ] s,q , we have ∂ 2 f (x) ∂u p,i ∂w = ∂ ∂w k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) = m k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x)∂n(l + 1, m; x) · ∂n(l + 1, m; x) ∂w · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) + k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · ∂n(l, p; x) ∂w · act(l − 1, i; x) + k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · ∂act(l − 1, i; x) ∂w and ∂ 2 f (x) ∂v s,q ∂w = ∂ ∂w ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · act(l, q; x) = n ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x)∂n(l + 1, n; x) · ∂n(l + 1, n; x) ∂w · act(l, q; x) · act(l, q; x) + ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · ∂act(l, q; x) ∂w For a parameter w closer to the output than [u p,i ] p,i , [v s,q ] s,q , we have ∂ 2 f (x) ∂u p,i ∂w = ∂ ∂w k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) = k ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x)∂w · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) ∂ 2 h ϕ •,l+1 (n ϕ (l + 1; x)) ∂n ϕ (l + 1, m; x)∂n ϕ (l + 1, k; x) · v ϕ m,q · σ (n ϕ (l, q; x)) · act ϕ (l − 1, j; x) · v ϕ k,p · σ (n ϕ (l, p; x)) · act ϕ (l − 1, i; x) B p i,j (x) := k ∂h ϕ •,l+1 (n ϕ (l + 1; x)) ∂n ϕ (l + 1, k; x) · v ϕ k,p · σ (n ϕ (l, p; x)) · act ϕ (l − 1, i; x) · act ϕ (l − 1, j; x) C p,s i,q (x) := k ∂ 2 h ϕ •,l+1 (n(l + 1; x)) ∂n ϕ (l + 1, s; x)∂n ϕ (l + 1, k; x) · act ϕ (l, q; x) · v ϕ k,p · σ (n ϕ (l, p; x)) · act ϕ (l − 1, i; x) D p,s i (x) := ∂h ϕ •,l+1 (n ϕ (l + 1; x)) ∂n ϕ (l + 1, s; x) · σ (n ϕ (l, p; x)) · act ϕ (l − 1, i; x) E s,t p,q (x) := ∂ 2 h ϕ •,l+1 (n ϕ (l + 1; x)) ∂n ϕ (l + 1, s; x)∂n ϕ (l + 1, t; x) · act ϕ (l, p; x) · act ϕ (l, q; x) Then for all i, j, p, q, s, t, we have ∂ 2 ϕ(x) ∂u ϕ p,i ∂u ϕ q,j = A p,q i,j (x) + δ(q, p)B p i,j (x) ∂ 2 ϕ(x) ∂u ϕ p,i ∂v ϕ s,q = C p,s i,q (x) + δ(q, p)D p,s i (x) ∂ 2 ϕ(x) ∂v s,p ∂v t,q = E s,t p,q (x) For f we get for p, q ∈ {−1, 1} and all i, j, s, t ∂ 2 f (x) ∂u −1,i ∂u −1,j = λ 2 A 1,1 i,j (x) + λB 1 i,j (x) ∂ 2 f (x) ∂u 1,i ∂u 1,j = (1 − λ) 2 A 1,1 i,j (x) + (1 − λ)B 1 
i,j (x) ∂ 2 f (x) ∂u −1,i ∂u 1,j = ∂ 2 f (x) ∂u 1,i ∂u −1,j = λ(1 − λ) · A 1,1 i,j (x) ∂ 2 f (x) ∂u −1,i ∂v s,−1 = λC 1,s i,1 (x) + D 1,s i (x) ∂ 2 f (x) ∂u 1,i ∂v s,1 = (1 − λ)C 1,s i,1 (x) + D 1,s i (x) ∂ 2 f (x) ∂u −1,i ∂v s,1 = λ · C 1,s i,1 (x) = λ · ∂ 2 ϕ(x) ∂u ϕ 1,i ∂v ϕ s,1 ∂ 2 f (x) ∂u 1,i ∂v s,−1 = (1 − λ) · C 1,s i,1 (x) = (1 − λ) · ∂ 2 ϕ(x) ∂u ϕ 1,i ∂v ϕ s,1 ∂ 2 f (x) ∂v s,p ∂v t,q = E s,t 1,1 (x) = ∂ 2 ϕ(x) ∂v ϕ s,1 ∂v ϕ t,1 and ∂ ∂w ϕ = α (ϕ(x α ) − y α ) · ∂ϕ(x α ) ∂w ϕ . From this it follows immediately that if ∂ ∂w ϕ (w ϕ ) = 0, then ∂L ∂w (γ 1 λ (w ϕ )) = 0 for all λ (cf. [9], [15]). For the second derivative we get and for q ≥ 2 and p ∈ {−1, 1} and all i, j, s, t ∂ 2 L ∂u −1,i ∂u q,j = λA 1,q i,j + λA 1,1 i,j ∂ 2 L ∂u 1,i ∂u q,j = (1 − λ)A 1,q i,j + (1 − λ)A 1,q i,j ∂ 2 L ∂u −1,i ∂v s,q = λC 1,s i,q + λC 1,s i,q ∂ 2 L ∂u 1,i ∂v s,q = (1 − λ)C 1,s i,q + (1 − λ)C 1,s i,q ∂ 2 L ∂u q,i ∂v s,p = C q,s i,p + C q,s i,p ∂ 2 L ∂v s,p ∂v t,q = E s,t 1,q + E s,t 1,q and for p, q ≥ 2 and all i, j, s, t ∂ 2 L ∂u p,i ∂u q,j = A p,q i,j + δ(q, p)B p i,j (x) + A p,q i,j = ∂ 2 ∂u ϕ p,i ∂u ϕ q,j ∂ 2 L ∂u p,i ∂v s,q = C p,s i,q + δ(q, p)D p,s i + C p,s i,q = ∂ 2 ∂u ϕ p,i ∂v ϕ s,q ∂ 2 L ∂v s,p ∂v t,q = E s,t p,q + E s,t p,q = ∂ 2 ∂v ϕ s,p ∂v ϕ t,q 6) Change of basis Choose any real numbers α = −β such that λ = β α+β (equivalently αλ − β(1 − λ) = 0) and set µ −1,i = u −1,i + u 1,i µ 1,i = α · u −1,i − β · u 1,i ν s,−1 = v s,−1 + v s,1 ν s,1 = v s,−1 − v s,1 . ∂ 2 L ∂w∂r = α (f (x α ) − y α ) · ∂ 2 f (x α ) ∂w∂r + α ∂f (x α ) ∂w · ∂f (x α )∂ 2 L ∂u −1,i ∂u −1,j = λ 2 A 1,1 i,j + λB 1 i,j + λ 2 A 1,1 i,j ∂ 2 L ∂u 1,i ∂u 1,j = (1 − λ) 2 A 1,1 i,j + (1 − λ)B 1 i,j + (1 − λ) 2 A 1,1 i,j ∂ 2 L ∂u −1,i ∂u 1,j = λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j Then at γ 1 λ ([u 1,i ] i , [v s,1 ] s ,w), ∂ 2 L ∂µ −1,i ∂µ −1,j = ∂ ∂u −1,i + ∂ ∂u 1,i ∂L(x) ∂u −1,j + ∂L(x) ∂u 1,j = ∂ 2 L(x) ∂u −1,i ∂u −1,j + ∂ 2 L(x) ∂u −1,i ∂u 1,j + ∂ 2 L(x) ∂u 1,i ∂u −1,j + ∂ 2 L(x) ∂u 1,i ∂u 1,j = λ 2 A 1,1 i,j + λB 1 i.j + λ 2 A 1,1 i,j + λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + (1 − λ) 2 A 1,1 i,j + (1 − λ)B 1 i.j + (1 − λ) 2 A 1,1 i,j = A 1,1 i,j + B 1 i.j + A 1,1 i,j ∂ 2 L ∂µ 1,i ∂µ 1,j = α ∂ ∂u −1,i − β ∂ ∂u 1,i α ∂L(x) ∂u −1,j − β ∂L(x) ∂u 1,j = α 2 ∂ 2 L(x) ∂u −1,i ∂u −1,j − αβ ∂ 2 L(x) ∂u −1,i ∂u 1,j − αβ ∂ 2 L(x) ∂u 1,i ∂u −1,j + β 2 ∂ 2 L(x) ∂u 1,i ∂u 1,j = α 2 λ 2 A 1,1 i,j + λB 1 i.j + λ 2 A 1,1 i,j − αβ λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j − αβ λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + β 2 (1 − λ) 2 A 1,1 i,j + (1 − λ)B 1 i.j + (1 − λ) 2 A 1,1 i,j = αβB 1 i.j ∂ 2 L ∂µ −1,i ∂µ 1,j = ∂ ∂u −1,i + ∂ ∂u 1,i α ∂L(x) ∂u −1,j − β ∂L(x) ∂u 1,j = α ∂ 2 L(x) ∂u −1,i ∂u −1,j − β ∂ 2 L(x) ∂u −1,i ∂u 1,j + α ∂ 2 L(x) ∂u 1,i ∂u −1,j − β ∂ 2 L(x) ∂u 1,i ∂u 1,j = α λ 2 A 1,1 i,j + λB 2 i.j + λ 2 A 1,1 i,j − β λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + α λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j − β (1 − λ) 2 A 1,1 i,j + (1 − λ)B 2 i.j + (1 − λ) 2 A 1,1 i,j = 0 ∂ 2 L ∂ν s,∂L(x) ∂v t,−1 − ∂L(x) ∂v t,1 = ∂ 2 L(x) ∂v s,−1 ∂v t,−1 − ∂ 2 L(x) ∂v s,−1 ∂v t,1 + ∂ 2 L(x) ∂v s,1 ∂v t,−1 − ∂ 2 L(x) ∂v s,1 ∂v t,1 = E s,t 1,1 + E s,t 1,1 − E s,t 1,1 + E s,t 1,1 + E s,t 1,1 + E s,t 1,1 − E s,t 1,1 + E s,t We also need to consider the second derivative with respect to the other variables ofw. If w is closer to the output than [u p,i ] p,i , [v s,q ] s,q belonging to layer γ where γ > l + 1, then we get
15,463
1812.06486
2904130053
Understanding the loss surface of neural networks is essential for the design of models with predictable performance and their success in applications. Experimental results suggest that sufficiently deep and wide neural networks are not negatively impacted by suboptimal local minima. Despite recent progress, the reason for this outcome is not fully understood. Could deep networks have very few, if any, suboptimal local minima? Or could all of them be equally good? We provide a construction to show that suboptimal local minima (i.e. non-global ones), even though degenerate, exist for fully connected neural networks with sigmoid activation functions. The local minima obtained by our proposed construction belong to a connected set of local solutions that can be escaped from via a non-increasing path on the loss curve. For extremely wide neural networks with two hidden layers, we prove that every suboptimal local minimum belongs to such a connected set. This provides a partial explanation for the successful application of deep neural networks. In addition, we characterize under what conditions the same construction leads to saddle points instead of local minima for deep neural networks.
To gain better insight into theoretical aspects, some papers consider linear networks, where the activation function is the identity. The classic result by Baldi and Hornik @cite_32 shows that linear two-layer neural networks have a unique global minimum and all other critical points are saddle points. Kawaguchi @cite_37, Lu and Kawaguchi @cite_3, and @cite_4 discuss generalizations of @cite_32 to deep linear networks.
{ "abstract": [ "In this paper, we prove a conjecture published in 1989 and also partially address an open problem announced at the Conference on Learning Theory (COLT) 2015. For an expected loss function of a deep nonlinear neural network, we prove the following statements under the independence assumption adopted from recent work: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) the property of saddle points differs for shallow networks (with three layers) and deeper networks (with more than three layers). Moreover, we prove that the same four statements hold for deep linear neural networks with any depth, any widths and no unrealistic assumptions. As a result, we present an instance, for which we can answer to the following question: how difficult to directly train a deep model in theory? It is more difficult than the classical machine learning models (because of the non-convexity), but not too difficult (because of the nonexistence of poor local minima and the property of the saddle points). We note that even though we have advanced the theoretical foundations of deep learning, there is still a gap between theory and practice.", "We study the error landscape of deep linear and nonlinear neural networks with the squared error loss. Minimizing the loss of a deep linear neural network is a nonconvex problem, and despite recent progress, our understanding of this loss surface is still incomplete. For deep linear networks, we present necessary and sufficient conditions for a critical point of the risk function to be a global minimum. Surprisingly, our conditions provide an efficiently checkable test for global optimality, while such tests are typically intractable in nonconvex optimization. We further extend these results to deep nonlinear neural networks and prove similar sufficient conditions for global optimality, albeit in a more limited function space setting.", "Abstract We consider the problem of learning from examples in layered linear feed-forward neural networks using optimization methods, such as back propagation, with respect to the usual quadratic error function E of the connection weights. Our main result is a complete description of the landscape attached to E in terms of principal component analysis. We show that E has a unique minimum corresponding to the projection onto the subspace generated by the first principal vectors of a covariance matrix associated with the training patterns. All the additional critical points of E are saddle points (corresponding to projections onto subspaces generated by higher order vectors). The auto-associative case is examined in detail. Extensions and implications for the learning algorithms are discussed.", "In deep learning, , as well as , create non-convex loss surfaces. Then, does depth alone create bad local minima? In this paper, we prove that without nonlinearity, depth alone does not create bad local minima, although it induces non-convex loss surface. Using this insight, we greatly simplify a recently proposed proof to show that all of the local minima of feedforward deep linear neural networks are global minima. Our theoretical results generalize previous results with fewer assumptions, and this analysis provides a method to show similar results beyond square loss in deep linear models." 
], "cite_N": [ "@cite_37", "@cite_4", "@cite_32", "@cite_3" ], "mid": [ "2963446085", "2736030546", "2078626246", "2593380010" ] }
Non-attracting Regions of Local Minima in Deep and Wide Neural Networks
At the heart of most optimization problems lies the search for the global minimum of a loss function. The common approach to finding a solution is to initialize at random in parameter space and subsequently follow directions of decreasing loss based on local methods. This approach lacks a global progress criteria, which leads to descent into one of the nearest local minima. Since the loss function of deep neural networks is non-convex, the common approach of using gradient descent variants is vulnerable precisely to that problem. Authors pursuing the early approaches to local descent by back-propagating gradients [1] experimentally noticed that suboptimal local minima appeared surprisingly harmless. More recently, for deep neural networks, the earlier observations were further supported by the experiments of e.g., [2]. Several authors aimed to provide theoretical insight for this behavior. Broadly, two views may be distinguished. Some, aiming at explanation, rely on simplifying modeling assumptions. Others investigate neural networks under realistic assumptions, but often focus on failure cases only. Recently, Nguyen and Hein [3] provide partial explanations for deep and extremely wide neural networks for a class of activation functions including the commonly used sigmoid. Extreme width is characterized by a "wide" layer that has more neurons than input patterns to learn. For almost every instantiation of parameter values w (i.e. for all but a null set of parameter values) it is shown that, if the loss function has a local minimum at w, then this local minimum must be a global one. This suggests that for deep and wide neural networks, possibly every local minimum is global. The question on what happens at the null set of parameter values, for which the result does not hold, remains unanswered. Similar observations for neural networks with one hidden layer were made earlier by Gori and Tesi [4] and Poston et al. [5]. Poston et al. [5] show for a neural network with one hidden layer and sigmoid activation function that, if the hidden layer has more nodes than training patterns, then the error function (squared sum of prediction losses over the samples) has no suboptimal "local minimum" and "each point is arbitrarily close to a point from which a strictly decreasing path starts, so such a point cannot be separated from a so called good point by a barrier of any positive height" [5]. It was criticized by Sprinkhuizen-Kuyper and Boers [6] that the definition of a local minimum used in the proof of [5] was rather strict and unconventional. In particular, the results do not imply that no suboptimal local minima, defined in the usual way, exist. As a consequence, the notion of attracting and non-attracting regions of local minima were introduced and the authors prove that non-attracting regions exist by providing an example for the extended XOR problem. The existence of these regions imply that a gradient-based approach descending the loss surface using local information may still not converge to the global minimum. The main objective of this work is to revisit the problem of such non-attracting regions and show that they also exist in deep and wide networks. In particular, a gradient based approach may get stuck in a suboptimal local minimum. Most importantly, the performance of deep and wide neural networks cannot be explained by the analysis of the loss curve alone, without taking proper initialization or the stochasticity of SGD into account. Our observations are not fundamentally negative. 
At first, the local minima we find are rather degenerate. With proper initialization, a local descent technique is unlikely to get stuck in one of the degenerate, suboptimal local minima 1 . Secondly, the minima reside on a non-attracting region of local minima (see Definition 1). Due to its exploration properties, stochastic gradient descent will eventually be able to escape from such a region (see [8]). We conjecture that in sufficiently wide and deep networks, except for a null set of parameter values as starting points, there is always a monotonically decreasing path down to the global minimum. This was shown in [5] for neural networks with one hidden layer, sigmoid activation function and square loss, and we generalize this result to neural networks with two hidden layers. (More precisely, our result holds for all neural networks with square loss and a class of activation functions including the sigmoid, where the wide layer is the last or second last hidden layer). This implies that in such networks every local minimum belongs to a non-attracting region of local minima. Our proof of the existence of suboptimal local minima even in extremely wide and deep networks is based on a construction of local minima in neural networks given by Fukumizu and Amari [9]. By relying on careful computation we are able to characterize when this construction is applicable to deep neural networks. Interestingly, in deeper layers, the construction rarely seems to lead to local minima, but more often to saddle points. The argument that saddle points rather than suboptimal local minima are the main problem in deep networks has been raised before (see [10]) but a theoretical justification [11] uses strong assumptions that do not exactly hold in neural networks. Here, we provide the first analytical argument, under realistic assumptions on the neural network structure, describing when certain critical points of the training loss lead to saddle points in deeper networks. III. MAIN RESULTS A. Problem definition We consider regression networks with fully connected layers of size n l , 0 ≤ l ≤ L given by f (x) = w L (σ(w L−1 (σ(. . . (w 2 (σ(w 1 (x) + w 1 0 )) + w 2 0 ) . . .)) + w L−1 0 )) + w L 0 , where w l ∈ R nl×nl−1 denotes the weight matrix of the l-th layer, 1 ≤ l ≤ L, w l 0 the bias terms, and σ a nonlinear activation function. The neural network function is denoted by f and we notationally suppress dependence on parameters. We assume the activation function σ to belong to the class of strictly monotonically increasing, analytic, bounded functions on R with image in interval (c, d) such that 0 ∈ [c, d], a class we denote by A. As prominent examples, the sigmoid activation function σ(t) = 1 1+exp(−t) and σ(t) = tanh(x) lie in A. We assume no activation function at the output layer. The neural network is assumed to be a regression network mapping into the real domain R, i.e. n L = 1 and w L ∈ R 1×nL−1 . We train on a finite dataset (x α , y α ) 1≤α≤N of size N with input patterns x α ∈ R n0 and desired target value y α ∈ R. We aim to minimize the squared loss L = N α=1 (f (x α ) − y α ) 2 . Further, w denotes the collection of all w l . The dependence of the neural network function f on w translates into a dependence of L = L(w) of the loss function on the parameters w. Due to assumptions on σ, L(w) is twice continuously differentiable. The goal of training a neural network consists of minimizing L(w) over w. 
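To make the setting concrete, the following is a minimal NumPy sketch of a fully connected sigmoid regression network of the form above together with its squared loss over a finite dataset. It is purely illustrative: the helper names (forward, squared_loss), the toy 2-2-4-1 architecture and the random data are not taken from the paper.

```python
import numpy as np

def sigmoid(t):
    # activation function in the class A: analytic, strictly increasing, bounded
    return 1.0 / (1.0 + np.exp(-t))

def forward(x, weights, biases):
    # hidden layers apply the sigmoid, the output layer is linear (n_L = 1)
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = sigmoid(W @ a + b)
    return (weights[-1] @ a + biases[-1]).item()

def squared_loss(data, weights, biases):
    # L(w) = sum_alpha (f(x_alpha) - y_alpha)^2 over the finite dataset
    return sum((forward(x, weights, biases) - y) ** 2 for x, y in data)

# toy usage: a 2-2-4-1 network evaluated on five random samples
rng = np.random.default_rng(0)
sizes = [2, 2, 4, 1]
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(size=m) for m in sizes[1:]]
data = [(rng.normal(size=2), rng.normal()) for _ in range(5)]
print(squared_loss(data, weights, biases))
```

Minimizing squared_loss over (weights, biases) is the non-convex problem whose local minima are studied in the remainder of the paper.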
There is a unique value L 0 denoting the infimum of the neural network's loss (most often L 0 = 0 in our examples). Any set of weights w • that satisfies L(w • ) = L 0 is called a global minimum. Due to its non-convexity, the loss function L(w) of a neural network is in general known to potentially suffer from local minima (precise definition of a local minimum below). We will study the existence of suboptimal local minima in the sense that a local minimum w * is suboptimal if its loss L(w * ) is strictly larger than L 0 . We refer to deep neural networks as models with more than one hidden layer. Further, we refer to wide neural networks as the type of model considered in [3]- [5] with one hidden layer containing at least as many neurons as input patterns (i.e. n l ≥ N for some 1 ≤ l < L in our notation). Disclaimer: Naturally, training for zero global loss is not desirable in practice, neither is the use of fully connected wide and deep neural networks necessarily. The results of this paper are of theoretical importance. To be able to understand the complex learning behavior of deep neural networks in practice, it is a necessity to understand the networks with the most fundamental structure. In this regard, while our result are not directly applicable to neural networks used in practice, they do offer explanations for their learning behavior. B. A special kind of local minimum The standard definition of a local minimum, which is also used here, is a point w * such that w * has a neighborhood U with L(w) ≥ L(w * ) for all w ∈ U . Since local minima do not need to be isolated (i.e. L(w) > L(w * ) for all w ∈ U \ {w * }) two types of connected regions of local minima may be distinguished. Note that our definition slightly differs from the one by [6]. Definition 1. [6] Let : R n → R be a differentiable function. Suppose R is a maximal connected subset of parameter values w ∈ R m , such that every w ∈ R is a local minimum of with value (w) = c. • R is called an attracting region of local minima, if there is a neighborhood U of R such that every continuous path Γ(t), which is non-increasing in and starts from some Γ(0) ∈ U , satisfies (Γ(t)) ≥ c for all t. • R is called a non-attracting region of local minima, if every neighborhood U of R contains a point from where a continuous path Γ(t) exists that is non-increasing in and ends in a point Γ(1) with (Γ(1)) < c. Despite its non-attractive nature, a non-attracting region R of local minima may be harmful for a gradient descent approach. A path of greatest descent can end in a local minimum on R. However, no point z on R needs to have a neighborhood of attraction in the sense that following the path of greatest descent from a point in a neighborhood of z will lead back to z. (The path can lead to a different local minimum on R close by or reach points with strictly smaller values than c.) In the example of such a region for the 2-3-1 XOR network provided in [6], a local minimum (of higher loss than the global loss) resides at points in parameter space with some coordinates at infinity. In particular, a gradient descent approach may lead to diverging parameters in that case. However, a different non-increasing path down to the global minimum always exists. It can be shown that local minima at infinity also exist for wide and deep neural networks. (The proof can be found in Appendix A.) Theorem 1 (cf. [6] Section III). 
Let L denote the squared loss of a fully connected regression neural network with sigmoid activation functions, having at least one hidden layer and each hidden layer containing at least two neurons. Then, for almost every finite dataset, the loss function L possesses a local minimum at infinity. The local minimum is suboptimal whenever dataset and neural network are such that a constant function is not an optimal solution. A different type of non-attracting regions of local minima (without infinite parameter values) is considered for neural networks with one hidden layer by Fukumizu and Amari [9] and Wei et al. [8] under the name of singularities. This type of region is characterized by singularities in the weight space (a subset of the null set not covered by the results of Nguyen and Hein [3]) leading to a loss value strictly larger than the global loss. The dynamics around such region are investigated by Wei et al. [8]. Again, a full batch gradient descent approach can get stuck in a local minimum in this type of region. A rough illustration of the nature of these non-attracting regions of local minima is depicted in Fig. 1. Non-attracting regions of local minima do not only exist in small two-layer neural networks. Theorem 2. There exist deep and wide fully-connected neural networks with sigmoid activation function such that the squared loss function of a finite dataset has a non-attracting region of local minima (at finite parameter values). The construction of such local minima is discussed in Section V with a complete proof in Appendix B. Corollary 1. Any attempt to show for fully connected deep and wide neural networks that a gradient descent technique will always lead to a global minimum only based on a description of the loss curve will fail if it doesn't take into consideration properties of the learning procedure (such as the stochasticity of stochastic gradient descent), properties of a suitable initialization technique, or assumptions on the dataset. On the positive side, we point out that a stochastic method such as stochastic gradient descent has a good chance to escape a non-attracting region of local minima due to noise. With infinite time at hand and sufficient exploration, the region can be escaped from with high probability (see [8] for a more detailed discussion). In Section V-A we will further characterize when the method used to construct examples of regions of non-attracting local minima is applicable. This characterization limits us to the construction of extremely degenerate examples. We give an intuitive argument why assuring the necessary assumptions for the construction becomes more difficult for wider and deeper networks and why it is natural to expect a lower suboptimal loss (where the suboptimal minima are less "bad") the less degenerate the constructed minima are and the more parameters a neural network possesses. C. Non-increasing path to a global minimum By definition, every neighborhood of a non-attracting region of local minima contains points from where a non-increasing path to a value less than the value of the region exists. (By definition all points belonging to a nonattracting region have the same value, in fact they are all local minima.) The question therefore arises whether from almost everywhere in parameter space there is such a non-increasing path all the way down to a global minimum. 
If the last hidden layer is the wide layer having more neurons than input patterns (for example consider a wide two-layer neural network), then this holds true by the results of [3] (and [4], [5]). We show the same conclusion to hold for wide neural networks having the second last hidden layer the wide one. In particular, this implies that for wide neural networks with two hidden layers, starting from almost everywhere in parameter space, there is non-increasing path down to a global minimum. Theorem 3. Consider a fully connected regression neural network with activation function in the class A equipped with the squared loss function for a finite dataset. Assume that the second last hidden layer contains more neurons than the number of input patterns. Then, for each set of parameters w and all > 0, there is w such that ||w − w || < and such that a path non-increasing in loss from w to a global minimum where f (x α ) = y α for each α exists. Corollary 2. Consider a wide, fully connected regression neural network with two hidden layers and activation function in the class A and trained to minimize the squared loss over a finite dataset. Then all suboptimal local minima are contained in a non-attracting region of local minima. The rest of the paper contains the arguments leading to the given results. IV. NOTATIONAL CHOICES We fix additional notation aside the problem definition from Section III-A. For input x α , we denote the pattern vector of values at all neurons at layer l before activation by n(l; x α ) and after activation by act(l; x α ). x α,1 x α,2 x 0 1, −1 1, 1 1, 2 1, 3 1, 3 1, 0 f (x α ) [u 1,i ] i [u 1,i ] i [u 2,i ] i [u 3,i ] i λ · v •,1 (1 − λ) · v •,1 v •,2 v •,3 v •,0 In general, we will denote column vectors of size n with coefficients z i by [z i ] 1≤i≤n or simply [z i ] i and matrices with entries a i,j at position (i, j) by [a i,j ] i,j . The neuron value pattern n(l; x) is then a vector of size n l denoted by n(l; x) = [n(l, k; x)] 1≤k≤nl , and the activation pattern act(l; x) = [act(l, k; x)] 1≤k≤nl . Using that f can be considered a composition of functions from consecutive layers, we denote the function from act(k; x) to the output by h •,k (x). For convenience of the reader, a tabular summary of all notation is provided in Appendix A. V. CONSTRUCTION OF LOCAL MINIMA We recall the construction of so-called hierarchical suboptimal local minima given in [9] and extend it to deep networks. For the hierarchical construction of critical points, we add one additional neuron n(l, −1; x) to a hidden layer l. (Negative indices are unused for neurons, which allows us to add a neuron with this index.) Once we have fixed the layer l, we denote the parameters of the incoming linear transformation by [u p,i ] p,i , so that u p,i denotes the contribution of neuron i in layer l − 1 to neuron p in layer l, and the parameters of the outgoing linear transformation by [v s,q ], where v s,q denotes the contribution of neuron q in layer l to neuron s in layer l + 1. For weights of the output layer (into a single neuron), we write w •,j instead of w 1,j . We recall the function γ used in [9] to construct local minima in a hierarchical way. This function γ describes the mapping from the parameters of the original network to the parameters after adding a neuron n(l, −1; x) and is determined by incoming weights u −1,i into n(l, −1; x), outgoing weights v s,−1 of n(l, −1; x), and a change of the outgoing weights v s,r of n(l, r; x) for one chosen r in the smaller network. 
Sorting the network parameters in a convenient way, the embedding of the smaller network into the larger one is defined for any λ ∈ R by a function γ r λ mapping parameters {([u r,i ] i , [v s,r ] s ,w} of the smaller network to parameters {([u −1,i ] i , [v s,−1 ] s , [u r,i ] i , [v s,r ] s ,w)} of the larger network and is defined by γ r λ ([u r,i ] i , [v s,r ] s ,w) := ([u r,i ] i , [λ · v s,r ] s , [u r,i ] i , [(1 − λ) · v s,r ] s ,w) . Herew denotes the collection of all remaining network parameters, i.e., all [u p,i ] i , [v s,q ] s for p, q / ∈ {−1, r} and all parameters from linear transformation of layers with index smaller than l or larger than l + 1, if existent. A visualization of γ 1 λ is shown in Fig. 2. Important fact: For the functions ϕ, f of smaller and larger network at parameters ([u * 1,i ] i , [v * s,1 ] s ,w * ) and γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) respectively, we have ϕ(x) = f (x) for all x. More generally, we even have n ϕ (l, k; x) = n(l, k; x) and act ϕ (l, k; x) = act(l, k; x) for all l, x and k ≥ 0. A. Characterization of hierarchical local minima Using γ r to embed a smaller deep neural network into a second one with one additional neuron, it has been shown that critical points get mapped to critical points. Theorem 4 (Nitta [15]). Consider two neural networks as in Section III-A, which differ by one neuron in layer l with index n(l, −1; x) in the larger network. If parameter choices ([u * r,i ] i , [v * s,r ] s ,w * ) determine a critical point for the squared loss over a finite dataset in the smaller network then, for each λ ∈ R, γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) determines a critical point in the larger network. As a consequence, whenever an embedding of a local minimum with γ r λ into a larger network does not lead to a local minimum, then it leads to a saddle point instead. (There are no local maxima in the networks we consider, since the loss function is convex with respect to the parameters of the last layer.) For neural networks with one hidden layer, it was characterized when a critical point leads to a local minimum. Theorem 5 (Fukumizu, Amari [9]). Consider two neural networks as in Section III-A with only one hidden layer and which differ by one neuron in the hidden layer with index n(1, −1; x) in the larger network. Assume that parameters ([u * r,i ] i , v * •,r ,w * ) determine a local minimum for the squared loss over a finite dataset in the smaller neural network and that λ / ∈ {0, 1}. Then γ r λ ([u * r,i ] i , v * •,r ,w * ) determines a local minimum in the larger network if the matrix [B r i,j ] i,j given by B r i,j = α (f (x α ) − y α ) · v * •,r · σ (n(1, r; x α )) · x α,i · x α,j is positive definite and 0 < λ < 1, or if [B r i,j ] i,j is negative definite and λ < 0 or λ > 1. (Here, we denote the k-th input dimension of input x α by x α,k .) We extend the previous theorem to a characterization in the case of deep networks. We note that a similar computation was performed in [19] for neural networks with two hidden layers. Theorem 6. Consider two (possibly deep) neural networks as in Section III-A, which differ by one neuron in layer l with index n(l, −1; x) in the larger network. Assume that the parameter choices ([u * r,i ] i , [v * s,r ] s ,w * ) determine a local minimum for the squared loss over a finite dataset in the smaller network. 
If the matrix [B r i,j ] i,j defined by B r i,j := α (f (x α ) − y α ) · k ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, k; x α ) · v * k,r · σ (n(l, r; x α )) · act(l − 1, i; x α ) · act(l − 1, j; x α )(1) is either • positive definite and λ ∈ I := (0, 1), or • negative definite and λ ∈ I : = (−∞, 0) ∪ (1, ∞), then γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) | λ ∈ I determines a non-attracting region of local minima in the larger network if and only if D r,s i := α (f (x α ) − y α ) · ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) · σ (n(l, r; x α )) · act(l − 1, i; x α )(2) is zero, D r,s i = 0, for all i, s. Remark 1. In the case of a neural network with only one hidden layer as considered in Theorem 5, the function h •,l+1 (x) is the identity function on R and the matrix [B r i,j ] i,j in (1) reduces to the matrix [B r i,j ] i,j in Theorem 5. The condition that D r,s i = 0 for all i, s does hold for shallow neural networks with one hidden layer as we show below. This proves Theorem 6 to be consistent with Theorem 5. The theorem follows from a careful computation of the Hessian of the cost function L(w), characterizing when it is positive (or negative) semidefinite and checking that the loss function does not change along directions that correspond to an eigenvector of the Hessian with eigenvalue 0. We state the outcome of the computation in Lemma 1 and refer the reader interested in a full proof of Theorem 6 to Appendix B. Lemma 1. Consider two (possibly deep) neural networks as in Section III-A, which differ by one neuron in layer l with index n(l, −1; x) in the larger network. Fix 1 ≤ r ≤ n l . Assume that the parameter choices ([u * r,i ] i , [v * s,r ] s ,w * ) determine a critical point in the smaller network. Let L denote the the loss function of the larger network and the loss function of the smaller network. Let α = −β ∈ R such that λ = β α+β . With respect to the basis of the parameter space of the larger network given by ([u −1,i +u r,i ] i , [v s,−1 +v s,r ] s ,w, [α· u −1,i − β · u r,i ] i , [v s,−1 − v s,r ] s ) , the Hessian of L (i.e., the second derivative with respect to the new network parameters) at γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) is given by        [ ∂ 2 ∂ur,i∂ur,j ] i,j 2[ ∂ 2 ∂ur,i∂vs,r ] i,s [ ∂ 2 ∂w ∂ur,i ] i,w 0 0 2[ ∂ 2 ∂ur,i∂vs,r ] s,i 4[ ∂ 2 ∂vs,r∂vt,r ] s,t 2[ ∂ 2 ∂w ∂vs,r ] s,w (α − β)[D r,s i ] s,i 0 [ ∂ 2 ∂w ∂ur,i ]w ,i 2[ ∂ 2 ∂w ∂vs,r ]w ,s [ ∂ 2 ∂w ∂w ]w ,w 0 0 0 (α − β)[D r,s i ] i,s 0 αβ[B r i,j ] i,j (α + β)[D r,s i ] i,s 0 0 0 (α + β)[D r,s i ] s,i 0        B. Shallow networks with a single hidden layer For the construction of suboptimal local minima in wide two-layer networks, we begin by following the experiments of [9] that prove the existence of suboptimal local minima in (non-wide) two-layer neural networks. Consider a neural network of size 1-2-1. We use the corresponding network function f to construct a dataset (x α , y α ) N α=1 by randomly choosing x α and letting y α = f (x α ). By construction, we know that a neural network of size 1-2-1 can perfectly fit the dataset with zero error. Consider now a smaller network of size 1-1-1 having too little expressibility for a global fit of all data points. We find parameters [u * 1,1 , v * • ] where the loss function of the neural network is in a local minimum with non-zero loss. For this small example, the required positive definiteness of [B 1 i,j ] i,j from (1) for a use of γ λ with λ ∈ (0, 1) reduces to checking a real number for positivity, which we assume to hold true. 
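As a purely illustrative sketch of this base case (the parameter values, the dataset and the helper names below are made up and are not those used in [9] or in this paper), the following code evaluates the scalar B of Theorem 5 for a 1-1-1 network and verifies that the embedding γ_λ, which splits the single hidden neuron into two copies with outgoing weights λv and (1−λ)v, leaves the network function unchanged.

```python
import numpy as np

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
d_sigmoid = lambda t: sigmoid(t) * (1.0 - sigmoid(t))            # sigma'
dd_sigmoid = lambda t: d_sigmoid(t) * (1.0 - 2.0 * sigmoid(t))   # sigma''

# parameters (u, u0, v, v0) of the 1-1-1 network; in the construction these
# would be a local minimum of the small network, here they are arbitrary
u, u0, v, v0 = 1.3, -0.4, 2.0, 0.1
f_small = lambda x: v * sigmoid(u * x + u0) + v0

# toy dataset (x_alpha, y_alpha) that the 1-1-1 network cannot fit exactly
xs = np.linspace(-2.0, 2.0, 20)
ys = np.tanh(3.0 * xs)

# scalar B of Theorem 5 (single hidden neuron, single input coordinate)
B = sum((f_small(x) - y) * v * dd_sigmoid(u * x + u0) * x * x
        for x, y in zip(xs, ys))
print("B =", B, "(gamma_lambda with 0 < lambda < 1 requires B > 0)")

# gamma_lambda: duplicate the hidden neuron and split its outgoing weight
lam = 0.3
f_large = lambda x: (lam * v) * sigmoid(u * x + u0) \
    + ((1.0 - lam) * v) * sigmoid(u * x + u0) + v0
assert np.allclose([f_small(x) for x in xs], [f_large(x) for x in xs])
```

Whenever the computed B is positive, Theorem 5 applies for every λ ∈ (0, 1); the final assertion reflects the fact noted above that γ_λ preserves the network function.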
We can now apply γ λ and Theorem 5 to find parameters for a neural network of size 1-2-1 that determine a suboptimal local minimum. This example may serve as the base case for a proof by induction to show the following result. Theorem 7. There is a wide neural network with one hidden layer and arbitrarily many neurons in the hidden layer that has a non-attracting region of suboptimal local minima. Having already established the existence of parameters for a (small) neural network leading to a suboptimal local minimum, it suffices to note that iteratively adding neurons using Theorem 5 is possible. Iteratively at step t, we add a neuron n(1, −t; x) to the network by an application of γ 1 λ with the same λ ∈ (0, 1). The corresponding matrix from (1), B 1,(t) i,j = α (f (x α ) − y α ) · (1 − λ) t · v * •,1 · σ (n(l, 1; x α )) · x α,i · x α,j , is positive semidefinite. (We use here that neither f (x α ) nor n(l, 1; x α ) ever change during this construction.) By Theorem 5 we always find a suboptimal minimum with nonzero loss for the network for λ ∈ (0, 1). Note however, that a continuous change of λ to a value outside of [0, 1] does not change the network function, but leads to a saddle point. Hence, we found a non-attracting region of suboptimal minima. Remark 2. Since we started the construction from a network of size 1-1-1, our constructed example is extremely degenerate: The suboptimal local minima of the wide network have identical incoming weight vectors for each hidden neuron. Obviously, the suboptimality of this parameter setting is easily discovered. Also with proper initialization, the chance of landing in this local minimum is vanishing. However, one may also start the construction from a more complex network with a larger network with several hidden neurons. In this case, when adding a few more neurons using γ 1 λ , it is much harder to detect the suboptimality of the parameters from visual inspection. C. Deep neural networks According to Theorem 6, next to positive definiteness of the matrix B r i,j for some r, in deep networks there is a second condition for the construction of hierarchical local minima using the map γ r λ , i.e. D r,s i = 0. We consider conditions that make D r,s i = 0. Proposition 1. Suppose we have a hierarchically constructed critical point of the squared loss of a neural network constructed by adding a neuron into layer l with index n(l, −1; x) by application of the map γ r λ to a neuron n(l, r; x). Suppose further that for the outgoing weights v * s,r of n(l, r; x) we have s v * s,r = 0 , and suppose that D r,s i is defined as in (2). Then D r,s i = 0 if one of the following holds. (i) The layer l is the last hidden layer. (This condition includes the case l = 1 indexing the hidden layer in a two-layer network.) (ii) ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) = ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, t; x α ) for all s, t, α (iii) For each α and each t, with L α : = (f (x α ) − y α ) 2 , ∂L α ∂n(l + 1, t; x α ) = (f (x α ) − y α ) · ∂h •,l+1 (n(l + 1; x α ) ∂n(l + 1, t; x α ) = 0. (This condition holds in the case of the weight infinity attractors in the proof to Theorem 1 for l + 1 the second last layer. It also holds in a global minimum.) The proof is contained in Appendix C. D. Experiment for deep networks To construct a local minimum in a deep and wide neural network, we start by considering a three-layer network of size 2-2-4-1, i.e. we have two input dimensions, one output dimension and hidden layers of two and four neurons. 
We use its network function f to create a dataset of 50 samples (x α , f (x α )), hence we know that a network of size 2-2-4-1 can attain zero loss. We initialize a new neural network of size 2-2-2-1 and train it until convergence, before using the construction to add neurons to the network. When adding neurons to the last hidden layer using γ 1 λ , Proposition 1 assures that D 1,• i = 0 for all i. We check for positive definiteness of the matrix B 1 i,j , and only continue when this property holds. Having thus assured the necessary condition of Theorem 6, we can add a few neurons to the last hidden layer (by induction as in the two-layer case), which results in local minimum of a network of size 2-2-M-1. The local minimum of non-zero loss that we attain is suboptimal whenever M ≥ 4 by construction. For M ≥ 50 the network is wide. Experimentally, we show not only that indeed we end up with a suboptimal minimum, but also that it belongs to a non-attracting region of local minima. In Fig. 3 we show results after adding eleven neurons to the last hidden layer. On the left side, we plot the loss in the neighborhood of the constructed local minimum in parameter space. The top image shows the loss curve into randomly generated directions, the bottom displays the minimal loss over all these directions. On the top right we show the change of loss along one of the degenerate directions that allows reaching a saddle point. In such a saddle point we know from Lemma 1 the direction of descent. The image on the bottom right shows that indeed the direction allows a reduction in loss. Being able to reach a saddle point from a local minimum by a path of non-increasing loss shows that indeed we found a non-attracting region of local minima. E. A discussion of limitations and of the loss of non-attracting regions of suboptimal minima We fix a neuron in layer l and aim to use γ r λ to find a local minimum in the larger network. We then need to check whether a matrix B r i,j is positive definite, which depends on the dataset. Under strong independence assumptions (the signs of different eigenvalues of B r i,j are independent), one may argue similar to arguments in [10] that the probability of finding B r i,j to be positive definite (all eigenvalues positive) is exponentially decreasing in the number of possible neurons of the previous layer l − 1. At the same time, the number of neurons n(l, r; x) in layer l to use for the construction only increases linearly in the number of neurons in layer l. Experimentally, we use a four-layer neural network of size 2-8-12-8-1 to construct a (random) dataset containing 500 labeled samples. We train a network of size 2-4-6-4-1 on the dataset until convergence using SciPy's 2 BFGS implementation. For each layer l, we check each neuron r whether it can be used for enlargment of the network using the map γ r λ for some λ ∈ (0, 1), i.e., we check whether the corresponding matrix B r i,j is positive definite. We repeat this experiment 1000 times. For the first layer, we find that in 547 of 4000 test cases the matrix is positive definite. For the second layer we only find B r i,j positive definite in 33 of 6000 cases, and for the last hidden layer there are only 6 instances out of 4000 where the matrix B r i,j is positive definite. Since the matrix B r i,j is of size 2 × 2/4 × 4/6 × 6 for the first/second/last hidden layer respectively, the number of positive matrices is less than what would be expected under the strong independence assumptions discussed above. 
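The positive-definiteness test used in this experiment can be sketched as follows. This is not the authors' code; the candidate matrices below are random symmetric stand-ins rather than the matrices B^r of Eq. (1), which would have to be assembled from the trained network and the dataset for each candidate neuron r.

```python
import numpy as np

def is_positive_definite(B, tol=1e-10):
    # symmetrize against round-off, then test the smallest eigenvalue
    B = 0.5 * (B + B.T)
    return np.linalg.eigvalsh(B).min() > tol

# random symmetric stand-ins for the matrices B^r, one per candidate neuron r
# of a layer with fan-in 6; in the experiment they are built from Eq. (1)
rng = np.random.default_rng(1)
fan_in = 6
candidate_Bs = [(lambda A: A + A.T)(rng.normal(size=(fan_in, fan_in)))
                for _ in range(1000)]

usable = sum(is_positive_definite(B) for B in candidate_Bs)
print(usable, "of", len(candidate_Bs),
      "candidates admit gamma_lambda with lambda in (0, 1)")
```

Only neurons whose matrix B^r passes this test admit the hierarchical construction with λ ∈ (0, 1); the others lead to saddle points instead.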
In addition, in deeper layers, further away from the output layer, it seems dataset dependent and unlikely to us that D r,s i = 0. Simulations seem to support this belief. However, it is difficult to check the condition numerically. Firstly, it is hard to find the exact position of minima and we only compute numerical approximations of D r,s i . Secondly, the terms are small for sufficiently large networks and numerical errors play a role. Due to these two facts, it becomes barely possible to check the condition of exact equality to zero. In Fig. 4 we show the distribution of maximal entries of the matrix D r,s i = 0 for neurons in the first, second and third layer of the network of size 2-4-6-4-1 trained as above. Note that for the third layer we know from theory that in a critical point we have D r,s i = 0, but due to numerical errors much larger values arise. Further, a region of local minima as above requires linearly dependent activation pattern vectors. This is how linear dimensions for subsequent layers get lost, reducing the ability to approximate the target function. Intuitively, in a deep and wide neural network there are many possible directions of descent. Loosing some of them still leaves the network with enough freedom to closely approximate the target function. As a result, these suboptimal minima have a loss close to the global loss. Conclusively, finding suboptimal local minima with high loss by the construction using γ r λ becomes hard when the networks become deep and wide. VI. PROVING THE EXISTENCE OF A NON-INCREASING PATH TO THE GLOBAL MINIMUM In the previous section we showed the existence of non-attracting regions of local minima. These type of local minima do not rule out the possibility of non-increasing paths to the global minimum from almost everywhere in parameter space. In this section, we sketch the proof to Theorem 3 illustrated in form of several lemmas, where up to the basic assumptions on the neural network structure as in Section III-A (with activation function in A), the assumption of one lemma is given by the conclusion of the previous one. A full proof can be found in Appendix D. We consider vectors that we call activation vectors, different from the activation pattern vectors act(l; x) from above. The activation vector at neuron k in layer l is denoted by a l k and defined by all values at the given neuron for different samples x α : a l k := [act(l, k; x α )] α . In other words while we fix l and x for the activation pattern vectors act(l; x) and let k run over its possible values, we fix l and k for the activation vectors a l k and let x run over its samples x α in the dataset. The first step of the proof is to use the freedom given by to have the activation vectors a L−2 of the wide layer L − 2 span the whole space R N . ν(t) in R N such that ρ(t) = σ(ν(t)) for all t. The activation vectors a L−1 k of the last hidden layer span a linear subspace H of R N . The optimal parameters w L of the output layer compute the best approximation of (y α ) α onto H. Lemma 3 and Lemma 4 together imply that we can achieve any desired continuous change of the spanning vectors of H, and hence the linear subspace H, by a suitable change of the parameters w L−1 . As it turns out, there is a natural possible path of parameters that strictly monotonically decreases the loss to the global minimum whenever we may assume that not all non-zero coefficients of w L have the same sign. 
If this is not the case, however, we first follow a different path through the parameter space to eventually assure different signs of coefficients of w L . Interestingly, this path leaves the loss constant. In other words, from certain points in parameter space it is necessary to follow a path of constant loss until we reach a point from where we can further decrease the loss; just like in the case of the non-attracting regions of local minima. Lemma 5. For n ≥ 2, let {r 1 , r 2 , . . . , r n } be a set of vectors in Im(σ) N and E = span j (r j ) their linear span. If z ∈ E has a representation z = j λ j r j where all λ j are positive (or all negative), then there are continuous paths r j : [0, 1] → r j (t) of vectors in Im(σ) N such that the following properties hold. (i) r j (0) = r j . (ii) z ∈ span j (r j (t)) for all t, so that there are continuous paths t → λ j (t) such that z = λ j (t)r j (t). (iii) There are 1 ≤ j + , j − ≤ n such that λ j+ (1) > 0 and λ j− (1) < 0. We apply Lemma 5 to activation vectors r i = a i giving continuous paths t → a L−1 i (t) and t → λ i (t) = w L 1,i (t). Then the output f (x α ) of the neural network along this path remains constant, hence so does the loss. The desired change of activation vectors a L−1 i (t) can be performed by a suitable change of parameters w L−1 according to Lemma 3 and Lemma 4. The simultaneous change of w L−1 and w L defines the first part Γ 1 (t) of our desired path in the parameter space which keeps f (x α ) constant. The final part of the desired path is given by the following lemma. Lemma 6. Assume a neural network structure as above with activation vectors a L−2 i of the wide hidden layer spanning R N . If the weights w L of the output layer satisfy that there is both a positive and a negative weight, then there is a continuous path t ∈ [0, 1] → Γ 0 (t) from the current weights Γ 0 (0) = w of decreasing loss down to the global minimum at Γ 0 (1) . Proof. Fix z α = f (x α ), the prediction for the current weights. The main idea is to change the activation vectors of the last hidden layer according to ρ j : t ∈ [0, 1] → a L−1 j + t · 1 w L •,j · (y − z) N . With w L fixed, at the output this results in a change of t ∈ [0, 1] → z + t · (y − z), which reduces the loss to zero. The required change of activation vectors can be implemented by an application of Lemma 3 and Lemma 4, but only if the image of each ρ j lies in the image [c, d] of the activation function. Hence, the latter must be arranged. In the case that 0 ∈ (c, d), it suffices to first decrease the norm of a L−1 j while simultaneously increasing the norm of the outgoing weight w L •,j so that the output remains constant. If, however, 0 is in the boundary of the interval [c, d] (for example the case of a sigmoid activation function), then the assumption of non-zero weights with different signs becomes necessary. We let J + = {j ∈ {1, 2, . . . , n L−1 } | w L •,j ≥ 0}, J − = {j ∈ {1, 2, . . . , n L−1 } | w L •,j < 0}, I + = {α ∈ {1, 2, . . . , N } | (y − z) α ≥ 0}, I − = {α ∈ {1, 2, . . . , N } | (y − z) α < 0}. We further define (y − z) I+ to be the vector v with coordinate v α for α ∈ I + equal to (y − z) α and 0 otherwise, and we let analogously (y − z) I− denote the vector containing only the negative coordinates of y − z. 
Then the paths ρ j : [0, 1] → (c, d) defined by ρ j 3 (t) = a L−1 j + t · 1 w L •,j · (y − z) I+ |J + | and for each j ∈ J − by ρ j 3 (t) = a L−1 j + t · 1 w L •,j · (y − z) I− |J − | can be arranged to all lie in the image of the activation function and they again lead to an output change of t ∈ [0, 1] → z + t · (y − z). (Appendix D contains a more detailed proof.) This concludes the proof of Theorem 3 having found a sufficient condition in Lemma 6 to confirm the existence of a path down to zero loss and having shown how to realize this condition in Lemmas 3, 4 and 5. VII. CONCLUSION In this paper we have studied the local minima of deep and wide regression neural networks with sigmoid activation functions. We established that the nature of local minima is such that they live in a special region of the cost function called a non-attractive region, and showed that a non-increasing path to a configuration with lower loss than that of the region can always be found. For sufficiently wide two-or three-layer neural networks, all local minima belong to such a region. We generalized the procedure to find such regions, introduced by Fukumizu and Amari [9], to deep networks and described sufficient conditions for the construction to work. The necessary conditions become very hard to satisfy in wider and deeper networks and, if they fail, the construction leads to saddle points instead. Finally, an intuitive argument shows a clear relation between the degree of degeneracy of a local minimum and the level of suboptimality of the constructed local minimum. APPENDIX NOTATION [x α ] α R n column vector with entries x α ∈ R [x i,j ] i,j ∈ R n1×n2 matrix with entry x i,j at position (i, j) Im(f) ⊆ R image of a function f C n (X, Y ) n-times continuously differentiable function from X to Y N ∈ N number of data samples in training set x α ∈ R n0 training sample input y α ∈ R target output for sample x α A ∈ C(R) class of real-analytic, strictly monotonically increasing, bounded (activation) functions such that the closure of the image contains zero σ ∈ C 2 (R, R) a nonlinear activation function in class A f ∈ C(R n0 , R) neural network function l 1 ≤ l ≤ L index of a layer L ∈ N number of layers excluding the input layer l=0 input layer l = L output layer n l ∈ N number of neurons in layer l k 1 ≤ k ≤ n l index of a neuron in layer l w l ∈ R nl×nl−1 weight matrix of the l-th layer w ∈ R L l=1 (nl·nl−1) collection of all w l w l i,j ∈ R the weight from neuron j of layer l − 1 to neuron j of layer l w L •,j ∈ R the weight from neuron j of layer L − 1 to the output L ∈ R + squared loss over training samples n(l, k; x) ∈ R value at neuron k in layer l before activation for input pattern x n(l; x) ∈ R nl neuron pattern at layer l before activation for input pattern x act(l, k; x) ∈ Im(σ) activation pattern at neuron k in layer l for input x act(l; x) ∈ Im(σ) nl neuron pattern at layer l for input x In Section V, where we fix a layer l, we additionally use the following notation. h •,k (x) ∈ C(R nl , R) the function from act(l; x) to the output [u p,i ] p,i ∈ R nl×nl−1 weights of the given layer l. [v s,q ] s,q ∈ R nl×nl+1 weights the layer l + 1. r ∈ {1, 2, . . . 
, n l } the index of the neuron of layer l that we use for the addition of one additional neuron M ∈ N = L t=1 (n t · n t−1 ), the number of weights in the smaller neural network w ∈ R M −nl−1−nl+1 all weights except u 1,i and v s,1 γ r λ ∈ C(R M , R M +nl−1+nl+1 ) the map defined in Section V to add a neuron in layer l using the neuron with index r in layer l In Section VI, we additionally use the following notation. A. Local minima at infinity in neural networks In this section we prove the existence of local minima at infinity in neural networks. Theorem 1 (cf. [6] Section III). Let L denote the squared loss of a fully connected regression neural network with sigmoid activation functions, having at least one hidden layer and each hidden layer containing at least two neurons. Then, for almost every finite dataset, the loss function L possesses a local minimum at infinity. The local minimum is suboptimal whenever dataset and neural network are such that a constant function is not an optimal solution. Proof. We will show that, if all bias terms u i,0 of the last hidden layer are sufficiently large, then there are parameters u i,0k for k = 0 and parameters v i of the output layer such that the minimal loss is achieved at u i,0 = ∞ for all i. We note that, if u i,0 = ∞ for all i, all neurons of the last hidden layer are fully active for all samples, i.e. act(L − 1, i; x α ) = 1 for all i. Therefore, in this case f ( x α ) = i v •,i for all α. A constant function f (x α ) = i v •,i = c minimizes the loss α (c − y α ) 2 uniquely for c := 1 N N α=1 y α . We will assume that the v •,i are chosen such that i v •,i = c does hold. That is, for fully active hidden neurons at the last hidden layer, the v •,i are chosen to minimize the loss. We write f (x α ) = c + α . Then L = 1 2 α (f (x α ) − y α ) 2 = 1 2 α (c + α − y α ) 2 = 1 2 α ( α + (c − y α )) 2 = 1 2 α (c − y α ) 2 Loss at ui,0 = ∞ for all i + 1 2 α 2 α ≥0 + α α (c − y α ) ( * ) . The idea is now to ensure that ( * ) ≥ 0 for sufficiently large u i,0 and in a neighborhood of the v •,i chosen as above. Then the loss L is larger than at infinity, and any point in parameter space with u i,0 = ∞ and v •,i with i v •,i = c is a local minimum. To study the behavior at u i,0 = ∞, we consider p i = exp(−u i,0 ). Note that lim ui,0→∞ p i = 0. We have f (x α ) = i v •,i σ(u i,0 + k u i,k act(L − 2, k; x α )) = i v •,i · 1 1 + p i · exp(− k u i,k act(L − 2, k; x α )) Now for p i close to 0 we can use Taylor expansion of g j i (p i ) : = 1 1+piexp(a j i ) to get g j i (p i ) = 1 − exp(a j i )p i + O(|p i | 2 ). Therefore f (x α ) = c − i v •,i p i exp(− k u i,k act(L − 2, k; x α )) + O(p 2 i ) and we find that α = − i v •,i p i exp(− k u i,k act(L − 2, k; x α )) + O(p 2 i ). Recalling that we aim to ensure ( * ) = α α (c − y α ) ≥ 0 we consider α α (c − y α ) = − α (c − y α )( i v •,i p i exp(− k u i,k act(L − 2, k; x α ))) + O(p 2 i ) = − i v •,i p i α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) + O(p 2 i ) We are still able to choose the parameters u i,k for i = 0, the parameters from previous layers, and the v •,i subject to i v •,i = c. If now v •,i > 0 whenever α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) < 0 and v •,i < 0 whenever α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) > 0, then the term ( * ) is strictly positive, hence the overall loss is larger than the loss at p i = 0 for sufficiently small p i and in a neighborhood of v •,i . 
The only obstruction we have to get around is the case where we need all v •,i of the opposite sign of c (in other words, α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) has the same sign as c), conflicting with i v •,i = c. To avoid this case, we impose the mild condition that α (c−y α )act(L−2, r; x α ) = 0 for some r, which can be arranged to hold for almost every dataset by fixing all parameters of layers with index smaller than L − 2. By Lemma 7 below (with d α = (c−y α ) and a r α = act(L−2, r; x α )), we can find u > k such that α (c−y α ) exp(− k u > k act(L−2, k; x α )) > 0 and u < k such that α (c − y α ) exp(− k u < k act(L − 2, k; x α )) < 0. We fix u i,k for k ≥ 0 such that there is some i 1 with [u i1,k ] k = [u > k ] k and some i 2 with [u i2,k ] k = [u < k ] k . This assures that we can choose the v •,i of opposite sign to α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) and such that i v •,i = c, leading to a local minimum at infinity. The local minimum is suboptimal whenever a constant function is not the optimal network function for the given dataset. By assumption, there is r such that the last term is nonzero. Hence, using coordinate r, we can choose w = (0, 0, . . . , 0, w r , 0, . . . , 0) such that φ(w) is positive and we can choose w such that φ(w) is negative. B. Proofs for the construction of local minima Here we prove B r i,j := α (f (x α ) − y α ) · k ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, k; x α ) · v * k,r · σ (n(l, r; x α )) · act(l − 1, i; x α ) · act(l − 1, j; x α )(1) is either is zero, D r,s i = 0, for all i, s. The previous theorem follows from two lemmas, with the first lemma containing the computation of the Hessian of the cost function L of the larger network at parameters γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) with respect to a suitable basis. In addition, to find local minima one needs to explain away all additional directions, i.e., we need to show that the loss function actually does not change into the direction of eigenvectors of the Hessian with eigenvalue 0. Otherwise a higher derivative into this direction could be nonzero and potentially lead to a saddle point (see [19]). Let L denote the the loss function of the larger network and the loss function of the smaller network. Let α = −β ∈ R such that λ = β α+β . With respect to the basis of the parameter space of the larger network given by ([u −1,i +u r,i ] i , [v s,−1 +v s,r ] s ,w, [α· u −1,i − β · u r,i ] i , [v s,−1 − v s,r ] s ),0 0 0 (α − β)[D r,s i ] i,s 0 αβ[B r i,j ] i,j (α + β)[D r,s i ] i,s 0 0 0 (α + β)[D r,s i ] s,i 0        Proof. The proof only requires a tedious, but not complicated calculation (using the relation αλ − β(1 − λ) = 0 multiple times. To keep the argumentation streamlined, we moved all the necessary calculations into Appendix E. (z 1 , z 2 , z 3 , z 4 )     a 2b c 0 2b T 4d 2e 0 c T 2e T f 0 0 0 0 x         z 1 z 2 z 3 z 4     = (z 1 , 2z 2 , z 3 , z 4 )     a b c 0 b T d e 0 c T e T f 0 0 0 0 x         z 1 2z 2 z 3 z 4     (b) It is clear that the matrix x is positive semidefinite for g positive semidefinite and h = 0. To show the converse, first note that if g is not positive semidefinite and z is such that z T gz < 0 then (z T , 0) g h h T 0 z 0 = z T gz < 0. It therefore remains to show that also h = 0 is a necessary condition. Assume h = 0 and find z such that hz = 0. Then for any λ ∈ R we have ((hz) T , −λz T ) g h h T 0 hz −λz = (hz) T g(hz) − 2(hz) T hλz = (hz) T g(hz) − 2λ||hz|| 2 2 . 
For sufficiently large λ, the last term is negative. Proof of Theorem 6. In Lemma 1, we calculated the Hessian of L with respect to a suitable basis at a the critical point γ λ ([u * r,i ] i , [v * s,r ] s ,w * ). If the matrix [D r,s i ] i,] i,j is positive definite or if (λ < 0 or λ > 1) ⇔ αβ < 0 and [B r i,j ] i,j is negative definite. In each case we can alter the λ to values leading to saddle points without changing the network function or loss. Therefore, the critical points can only be saddle points or local minima on a non-attracting region of local minima. To determine whether the critical points in questions lead to local minima when [D r,s i ] i,s = 0, it is insufficient to only prove the Hessian to be positive semidefinite (in contrast to (strict) positive definiteness), but we need to consider directions for which the second order information is insufficient. We know that the loss is at a minimum with respect to all coordinates except for the degenerate directions [v s,−1 − v s,r ] s . However, the network function f (x) is constant along [v s,−1 − v s,r ] s (keeping [v s,−1 + v s, r ] s constant) at the critical point where u −1,i = u r,i for all i. Hence, no higher order information leads to saddle points and it follows that the critical point lies on a region of local minima. C. Construction of local minima in deep networks Proposition 1. Suppose we have a hierarchically constructed critical point of the squared loss of a neural network constructed by adding a neuron into layer l with index n(l, −1; x) by application of the map γ r λ to a neuron n(l, r; x). Suppose further that for the outgoing weights v * s,r of n(l, r; x) we have s v * s,r = 0 , and suppose that D r,s i is defined as in (2). Then D r,s i = 0 if one of the following holds. (i) The layer l is the last hidden layer. (This condition includes the case l = 1 indexing the hidden layer in a two-layer network.) (ii) ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) = ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, t; x α ) for all s, t, α (iii) For each α and each t, with L α : = (f (x α ) − y α ) 2 , ∂L α ∂n(l + 1, t; x α ) = (f (x α ) − y α ) · ∂h •,l+1 (n(l + 1; x α ) ∂n(l + 1, t; x α ) = 0. (This condition holds in the case of the weight infinity attractors in the proof to Theorem 1 for l + 1 the second last layer. It also holds in a global minimum.) Proof. The fact that property (i) suffices uses that h •,l+1 (x) reduces to the identity function on the networks output and hence its derivative is one. Then, considering a regression network as before, our assumption says that v * •,r = 0, hence its reciprocal can be factored out of the sum in Equation (2). Denoting incoming weights into n(l, r; x) by u r,i as before, this leads to D r,1• i = 1 v * •,r · α (f (x α ) − y α ) · v * •,r · σ (n(l, r; x α )) · act(l − 1, i; x α ) = 1 v * •,r · ∂L ∂u r,i = 0 In the case of (ii), ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) = ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, t; x α ) for all s, t and we can factor out the reciprocal of t v * r,s = 0 in Equation (2) to again see that for each i, ∂L ∂ur,i = 0 implies that D r,s i = 0 for all s. (iii) is evident since in this case clearly every summand in Equation (2) is zero. D. Proofs for the non-increasing path to a global minimum In this section we discuss how in wide neural networks with two hidden layers a non-increasing path to the global minimum may be found from almost everywhere in the parameter space. 
By [3] (and [4], [5]), we can find such a path if the last hidden layer is wide (containing more neurons than input patterns). We therefore only consider the case where the first hidden layer in a three-layer neural network is wide. More generally, our results apply to all deep neural networks with the second last hidden layer wide. Theorem 3. Consider a fully connected regression neural network with activation function in the class A equipped with the squared loss function for a finite dataset. Assume that the second last hidden layer contains more neurons than the number of input patterns. Then, for each set of parameters w and all > 0, there is w such that ||w − w || < and such that a path non-increasing in loss from w to a global minimum where f (x α ) = y α for each α exists. The first step of the proof is to use the freedom given by to have the activation vectors a L−2 of the wide layer L − 2 span the whole space R N . ν(t) = Γ(t) · [act(L − 2, k; x α )] k,α Proof. We write ν(t) = [n(L − 1, s; x α )] s,α +ν(t) withν(0) = 0. We will findΓ(t) such thatν(t) =Γ(t) · [act(L − 2, k; x α )] k,α withΓ(0) = 0. Then Γ(t) := w L−1 +Γ(t) does the job. Since by assumption [act(L − 2, k; x α )] k,α has full rank, we can find an invertible submatrixà ∈ R N ×N of [act(L−2, k; x α )] k,α . Then we can define a continuous pathρ in R nL−1×N given byρ(t) :=ν(t)·Ã −1 , which satisfies ρ(t) ·Ã = ν(t) andρ(0) = 0. Extendingρ(t) to a path in R nL−1×nL−2 by zero columns at positions corresponding to rows of [act(L − 2, k; x α )] k,α missing inÃ, gives a pathΓ(t) such thatΓ(t) · [act(L − 2, k; x α )] k,α =ν(t) and withΓ(0) = 0. Lemma 4. For all continuous paths ρ(t) in Im(σ) N , i.e. the N-fold copy of the image of σ, there is a continuous path ν(t) in R N such that ρ(t) = σ(ν(t)) for all t. Proof. Since σ : R N → Im(σ) N is invertible with a continuous inverse, take ν(t) = σ −1 (ρ(t)). The activation vectors a L−1 k of the last hidden layer span a linear subspace H of R N . The optimal parameters w L of the output layer compute the best approximation of (y α ) α onto H. Lemma 3 and Lemma 4 together imply that we can achieve any desired continuous change of the spanning vectors of H, and hence the linear subspace H, by a suitable change of the parameters w L−1 . There is a natural possible path of parameters that strictly monotonically decreases the loss to the global minimum. For activation functions in A with 0 in the boundary of the image interval [c, d], this path requires that not all non-zero coefficients of w L have the same sign. If this is not the case, however, we first follow a different path through the parameter space to eventually assure different signs of coefficients of w L . Interestingly, this path leaves the loss constant. In other words, from certain points in parameter space it seems necessary to follow a path of constant loss until we reach a point from where we can further decrease the loss; just like in the case of the non-attracting regions of local minima. Lemma 5. For n ≥ 2, let {r 1 , r 2 , . . . , r n } be a set of vectors in Im(σ) N and E = span j (r j ) their linear span. If z ∈ E has a representation z = j λ j r j where all λ j are positive (or all negative), then there are continuous paths r j : [0, 1] → r j (t) of vectors in Im(σ) N such that the following properties hold. (i) r j (0) = r j . (ii) z ∈ span j (r j (t)) for all t, so that there are continuous paths t → λ j (t) such that z = λ j (t)r j (t). 
(iii) There are 1 ≤ j + , j − ≤ n such that λ j+ (1) > 0 and λ j− (1) < 0. Proof. We only consider the case with all λ j ≥ 0. The other case can be treated analogously. If only one λ j0 is nonzero, then consider a vector r k corresponding to a zero coefficient λ k = 0 and change r k continuously until it equals the vector r j0 corresponding to the only nonzero coefficient. Then continuously increase the positive coefficient λ j0 , while introducing a corresponding negative contribution via λ k . It is then easy to see that this leads to a path satisfying conditions (i)-(iii). We may therefore assume that at least two coefficients λ j are nonzero, say λ 1 and λ 2 . Leaving all r j and λ j for j ≥ 3 unchanged, we only consider r 1 , r 2 , λ 1 , λ 2 for the desired path, i.e. r j (t) = r j and λ j (t) = λ j for all j ≥ 3. We have that λ 1 r 1 + λ 2 r 2 ∈ (λ 1 + λ 2 ) · Im(σ) N , hence can be written as λR for some λ > 0 and R ∈ Im(σ) N with λR = z − j≥3 λ j r j = λ 1 r 1 + λ 2 r 2 . For t ∈ [0, 1 2 ] we define r 1 (t) := r 1 + 2t(R − r 1 ) and r 2 (t) := r 2 , λ 1 (t) = λλ 1 (1 − 2t)λ + 2tλ 1 and λ 2 (t) = (1 − 2t) λλ 2 (1 − 2t)λ + 2tλ 1 . For t ∈ [ 1 2 , 1] we set r 1 (t) := (2 − 2t)R + (2t − 1)( λ 1 λ 1 + 2λ 2 r 1 + 2λ 2 λ 1 + 2λ 2 r 2 ) and r 2 (t) = r 2 , λ 1 (t) = λ(λ 1 + 2λ 2 ) (2 − 2t)(λ 1 + 2λ 2 ) + (2t − 1)λ and λ 2 (t) = −λ 2 λ(2t − 1) (2 − 2t)(λ 1 + 2λ 2 ) + (2t − 1)λ . Then (i) r 1 (0) = r 1 and r 2 (0) = r 2 as desired. Further (ii) z ∈ span j (r j (t)) for all t ∈ [0, 1] via z = j λ j (t)r j (t) . It is also easy to check that r 1 (t), r 2 (t) ∈ Im(σ) N for all t ∈ [0, 1]. Finally, (iii) λ 1 (1) = λ 1 +2λ 2 > 0 and λ 2 (1) = −λ 2 < 0. Hence, if all non-zero coefficients of w L have the same sign, then we apply Lemma 5 to activation vectors r i = a L−1 i giving continuous paths t → a L−1 i (t) and t → λ i (t) = w L •,i (t). Then the output f (x α ) of the neural network along this path remains constant, hence so does the loss. The desired change of activation vectors a L−1 i (t) can be performed by a suitable change of parameters w L−1 according to Lemma 3 and Lemma 4. The simultaneous change of w L−1 and w L defines the first part Γ 1 (t) of our desired path in the parameter space which keeps f (x α ) constant. We may now assume that not all non-zero entries of w L have the same sign. The final part of the desired path is given by the following lemma. Lemma 6. Assume a neural network structure as above with activation vectors a L−2 i of the wide hidden layer spanning R N . If the weights w L of the output layer satisfy that there is both a positive and a negative weight, then there is a continuous path t ∈ [0, 1] → Γ 0 (t) from the current weights Γ 0 (0) = w of decreasing loss down to the global minimum at Γ 0 (1) . Proof. We first prove the result for the (more complicated) case when Im(σ) = (0, d) for some d > 0, e.g. for σ the sigmoid function: Let z ∈ R N be the vector given by z α = f (x α ) for the parameter w at the current weights. Let I + = {α ∈ {1, 2, . . . , N } | (y − z) α ≥ 0}, J + = {j ∈ {1, 2, . . . , n L−1 } | w L •,j ≥ 0}, J − = {j ∈ {1, 2, . . . , n L−1 } | w L •,j < 0}. For each j ∈ {1, 2, . . . , n L−1 } \ J 0 = J + ∪ J − we consider the path ρ j 2 : [0, 1) → (0, d) N of activation values given by ρ j 2 (t) = (1 − t)[act(L − 1, j; x α )] α . Applying Lemma 3 and Lemma 4 we find the inducing path Γ j 2,L−1 for parameters w L−1 , and we simultaneously change the parameters w L via w L •,j (t) = Γ j 2,L (t) := 1 1−t w L •,j . 
Following along Γ j 2 (t) = (Γ j 2,L−1 (t), Γ j 2,L (t)) does not change the outcome f (x α ) = z α for any α. For j ∈ J + we find t j ∈ [0, 1) such that ρ j 2 (t j ) + 1 w L •,j (t j ) · (y − z) I+ |J + | ∈ (0, d) N . This is possible, since all involved terms are positive, ρ j 2 (t j ) < 1 and decreasing to zero for increasing t, while w L •,j (t) increases for growing t. Similarly, for j ∈ J − we find t j ∈ [0, 1) such that ρ j 2 (t j ) + 1 w L •,j (t j ) · (y − z) I− |J − | ∈ (0, d) N . This time the negative sign of w L •,j (t) for j ∈ J . and the negative signs of (y − z) I− cancel, again allowing to find suitable t j . We will consider the endpoints Γ j 2 (t j ) as the new parameter values for w and the induced endpoints ρ j 2 (t j ) as our new act(L − 1, j; x α ). The next part of the path incrementally adds positive or negative coordinates of (y − z) to each activation vector of the last hidden layer. For each j ∈ J + , we let ρ j 3 : [0, 1] → (0, d) N be the path defined by ρ j 3 (t) = [act(L − 1, j; x α )] α + t · 1 w L •,j · (y − z) I+ |J + | and for each j ∈ J − by ρ j 3 (t) = [act(L − 1, j; x α )] α + t · 1 w L •,j · (y − z) I− |J − | Since ρ j 3 (t) is a path in Im(σ) for all j, this path can again be realized by an inducing change Γ 3 (t) of parameters w L−1 . The parameters w L are kept unchanged in this last part of the path. Simultaneously changing all ρ j 3 (t) results in a change of the output of the neural network given by [f t (x α )] α = w L •,0 + nL−1 j=1 w L •,j ρ j 3 (t) = w L •,0 +   j∈J+ w L •,j act(L − 1, j; x α ) + t · 1 w L •,j · (y − z) I+,α |J + |   α +   j∈J− w L •,j act(L − 1, j; x α ) + t · 1 w L •,j · (y − z) I−,α |J − |   α = w L •,0 +   nL−1 j=1 w L •,j act(L − 1, j; x α )   α + j∈J+ t · (y − z) I+ |J + | + j∈J− t · (y − z) I− |J − | = z + t · (y − z) I+ + t · (y − z) I− = z + t · (y − z). It is easy to see that for the path t ∈ [0, 1] → z + t · (y − z) the loss L = ||z + t · (y − z) − y|| 2 2 = (1 − t)||y − z|| 2 2 is strictly decreasing to zero. The concatenation of Γ 2 and Γ 3 gives us the desired path Γ 0 . The case that Im(σ) = (c, 0) for some c < 0 works analogously. In the case that Im(σ) = (c, d) with 0 ∈ (c, d), there is no need to split up into sets I + , I − and J + , J − . We haveρ j 2 (t j ) + 1 w L •,j (tj) · (y−z) N ∈ (c, d) N for t j close enough to 1. Hence we can follow Γ j 2 (t) as above until ρ j 2 (t) + 1 w L •,j (t) · (y − z) N ∈ (c, d) N for all j. From here, the paths ρ j 3 (t) = [act(L − 1, j; x α )] α + t · 1 w L •,j · (y−z) N define paths in Im(σ) for each j, which can be implemented by an application of Lemma 3 and Lemma 4 and lead to the global minimum. E. Calculations for Lemma 1 For the calculations we may assume without loss of generality that r = 1. If we want to consider a different n(l, r; x) and its corresponding γ r λ , then this can be achieved by a reordering of the indices of neurons.) We let ϕ denote the network function of the smaller neural network and f the neural network function of the larger network after adding one neuron according to the map γ 1 λ . To distinguish the parameters of f and ϕ, we write w ϕ for the parameters of the network before the embedding. This gives for all i, s and all m ≥ 2: For the function f we have the following partial derivatives. 
u −1,i = u ϕ 1,i u 1,i = u ϕ 1,i v s,−1 = λv ϕ s,1 v s,1 = (1 − λ)v ϕ s,1 u m,i = u ϕ m,i v s,m = v ϕ s, ∂f (x) ∂u p,i = k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) and ∂f (x) ∂v s,q = ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · act(l, q; x) The analogous equations hold for ϕ. 2) Relating first order derivatives of network functions f and ϕ Therefore, at 3) Second order derivatives of network functions f and ϕ. For the second derivatives we get (with δ(a, a) = 1 and δ(a, b) = 0 for a = b) ∂ 2 f (x) ∂u p,i ∂u q,j = ∂ ∂u q,j k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) = m k ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, m; x)∂n(l + 1, k; x) · v m,q · σ (n(l, q; x)) · act(l − 1, j; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) + δ(p, q) k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) ·act(l − 1, i; x) · act(l − 1, j; x) and ∂ 2 f (x) ∂v s,p ∂v t,q = ∂ ∂v t,q ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · act(l, p; x) = ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x)∂n(l + 1, t; x) · act(l, p; x) · act(l, q; x) and ∂ 2 f (x) ∂u p,i ∂v s,q = ∂ ∂v s,q k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) = k ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x)∂n(l + 1, k; x) · act(l, q; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) + δ(q, p) · ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · σ (n(l, p; x)) · act(l − 1, i; x) For a parameter w closer to the input than [u p,i ] p,i , [v s,q ] s,q , we have ∂ 2 f (x) ∂u p,i ∂w = ∂ ∂w k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) = m k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x)∂n(l + 1, m; x) · ∂n(l + 1, m; x) ∂w · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) + k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · ∂n(l, p; x) ∂w · act(l − 1, i; x) + k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · ∂act(l − 1, i; x) ∂w and ∂ 2 f (x) ∂v s,q ∂w = ∂ ∂w ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · act(l, q; x) = n ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x)∂n(l + 1, n; x) · ∂n(l + 1, n; x) ∂w · act(l, q; x) · act(l, q; x) + ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · ∂act(l, q; x) ∂w For a parameter w closer to the output than [u p,i ] p,i , [v s,q ] s,q , we have ∂ 2 f (x) ∂u p,i ∂w = ∂ ∂w k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) = k ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x)∂w · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) ∂ 2 h ϕ •,l+1 (n ϕ (l + 1; x)) ∂n ϕ (l + 1, m; x)∂n ϕ (l + 1, k; x) · v ϕ m,q · σ (n ϕ (l, q; x)) · act ϕ (l − 1, j; x) · v ϕ k,p · σ (n ϕ (l, p; x)) · act ϕ (l − 1, i; x) B p i,j (x) := k ∂h ϕ •,l+1 (n ϕ (l + 1; x)) ∂n ϕ (l + 1, k; x) · v ϕ k,p · σ (n ϕ (l, p; x)) · act ϕ (l − 1, i; x) · act ϕ (l − 1, j; x) C p,s i,q (x) := k ∂ 2 h ϕ •,l+1 (n(l + 1; x)) ∂n ϕ (l + 1, s; x)∂n ϕ (l + 1, k; x) · act ϕ (l, q; x) · v ϕ k,p · σ (n ϕ (l, p; x)) · act ϕ (l − 1, i; x) D p,s i (x) := ∂h ϕ •,l+1 (n ϕ (l + 1; x)) ∂n ϕ (l + 1, s; x) · σ (n ϕ (l, p; x)) · act ϕ (l − 1, i; x) E s,t p,q (x) := ∂ 2 h ϕ •,l+1 (n ϕ (l + 1; x)) ∂n ϕ (l + 1, s; x)∂n ϕ (l + 1, t; x) · act ϕ (l, p; x) · act ϕ (l, q; x) Then for all i, j, p, q, s, t, we have ∂ 2 ϕ(x) ∂u ϕ p,i ∂u ϕ q,j = A p,q i,j (x) + δ(q, p)B p i,j (x) ∂ 2 ϕ(x) ∂u ϕ p,i ∂v ϕ s,q = C p,s i,q (x) + δ(q, p)D p,s i (x) ∂ 2 ϕ(x) ∂v s,p ∂v t,q = E s,t p,q (x) For f we get for p, q ∈ {−1, 1} and all i, j, s, t ∂ 2 f (x) ∂u −1,i ∂u −1,j = λ 2 A 1,1 i,j (x) + λB 1 i,j (x) ∂ 2 f (x) ∂u 1,i ∂u 1,j = (1 − λ) 2 A 1,1 i,j (x) + (1 − λ)B 1 
i,j (x) ∂ 2 f (x) ∂u −1,i ∂u 1,j = ∂ 2 f (x) ∂u 1,i ∂u −1,j = λ(1 − λ) · A 1,1 i,j (x) ∂ 2 f (x) ∂u −1,i ∂v s,−1 = λC 1,s i,1 (x) + D 1,s i (x) ∂ 2 f (x) ∂u 1,i ∂v s,1 = (1 − λ)C 1,s i,1 (x) + D 1,s i (x) ∂ 2 f (x) ∂u −1,i ∂v s,1 = λ · C 1,s i,1 (x) = λ · ∂ 2 ϕ(x) ∂u ϕ 1,i ∂v ϕ s,1 ∂ 2 f (x) ∂u 1,i ∂v s,−1 = (1 − λ) · C 1,s i,1 (x) = (1 − λ) · ∂ 2 ϕ(x) ∂u ϕ 1,i ∂v ϕ s,1 ∂ 2 f (x) ∂v s,p ∂v t,q = E s,t 1,1 (x) = ∂ 2 ϕ(x) ∂v ϕ s,1 ∂v ϕ t,1 and ∂ ∂w ϕ = α (ϕ(x α ) − y α ) · ∂ϕ(x α ) ∂w ϕ . From this it follows immediately that if ∂ ∂w ϕ (w ϕ ) = 0, then ∂L ∂w (γ 1 λ (w ϕ )) = 0 for all λ (cf. [9], [15]). For the second derivative we get and for q ≥ 2 and p ∈ {−1, 1} and all i, j, s, t ∂ 2 L ∂u −1,i ∂u q,j = λA 1,q i,j + λA 1,1 i,j ∂ 2 L ∂u 1,i ∂u q,j = (1 − λ)A 1,q i,j + (1 − λ)A 1,q i,j ∂ 2 L ∂u −1,i ∂v s,q = λC 1,s i,q + λC 1,s i,q ∂ 2 L ∂u 1,i ∂v s,q = (1 − λ)C 1,s i,q + (1 − λ)C 1,s i,q ∂ 2 L ∂u q,i ∂v s,p = C q,s i,p + C q,s i,p ∂ 2 L ∂v s,p ∂v t,q = E s,t 1,q + E s,t 1,q and for p, q ≥ 2 and all i, j, s, t ∂ 2 L ∂u p,i ∂u q,j = A p,q i,j + δ(q, p)B p i,j (x) + A p,q i,j = ∂ 2 ∂u ϕ p,i ∂u ϕ q,j ∂ 2 L ∂u p,i ∂v s,q = C p,s i,q + δ(q, p)D p,s i + C p,s i,q = ∂ 2 ∂u ϕ p,i ∂v ϕ s,q ∂ 2 L ∂v s,p ∂v t,q = E s,t p,q + E s,t p,q = ∂ 2 ∂v ϕ s,p ∂v ϕ t,q 6) Change of basis Choose any real numbers α = −β such that λ = β α+β (equivalently αλ − β(1 − λ) = 0) and set µ −1,i = u −1,i + u 1,i µ 1,i = α · u −1,i − β · u 1,i ν s,−1 = v s,−1 + v s,1 ν s,1 = v s,−1 − v s,1 . ∂ 2 L ∂w∂r = α (f (x α ) − y α ) · ∂ 2 f (x α ) ∂w∂r + α ∂f (x α ) ∂w · ∂f (x α )∂ 2 L ∂u −1,i ∂u −1,j = λ 2 A 1,1 i,j + λB 1 i,j + λ 2 A 1,1 i,j ∂ 2 L ∂u 1,i ∂u 1,j = (1 − λ) 2 A 1,1 i,j + (1 − λ)B 1 i,j + (1 − λ) 2 A 1,1 i,j ∂ 2 L ∂u −1,i ∂u 1,j = λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j Then at γ 1 λ ([u 1,i ] i , [v s,1 ] s ,w), ∂ 2 L ∂µ −1,i ∂µ −1,j = ∂ ∂u −1,i + ∂ ∂u 1,i ∂L(x) ∂u −1,j + ∂L(x) ∂u 1,j = ∂ 2 L(x) ∂u −1,i ∂u −1,j + ∂ 2 L(x) ∂u −1,i ∂u 1,j + ∂ 2 L(x) ∂u 1,i ∂u −1,j + ∂ 2 L(x) ∂u 1,i ∂u 1,j = λ 2 A 1,1 i,j + λB 1 i.j + λ 2 A 1,1 i,j + λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + (1 − λ) 2 A 1,1 i,j + (1 − λ)B 1 i.j + (1 − λ) 2 A 1,1 i,j = A 1,1 i,j + B 1 i.j + A 1,1 i,j ∂ 2 L ∂µ 1,i ∂µ 1,j = α ∂ ∂u −1,i − β ∂ ∂u 1,i α ∂L(x) ∂u −1,j − β ∂L(x) ∂u 1,j = α 2 ∂ 2 L(x) ∂u −1,i ∂u −1,j − αβ ∂ 2 L(x) ∂u −1,i ∂u 1,j − αβ ∂ 2 L(x) ∂u 1,i ∂u −1,j + β 2 ∂ 2 L(x) ∂u 1,i ∂u 1,j = α 2 λ 2 A 1,1 i,j + λB 1 i.j + λ 2 A 1,1 i,j − αβ λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j − αβ λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + β 2 (1 − λ) 2 A 1,1 i,j + (1 − λ)B 1 i.j + (1 − λ) 2 A 1,1 i,j = αβB 1 i.j ∂ 2 L ∂µ −1,i ∂µ 1,j = ∂ ∂u −1,i + ∂ ∂u 1,i α ∂L(x) ∂u −1,j − β ∂L(x) ∂u 1,j = α ∂ 2 L(x) ∂u −1,i ∂u −1,j − β ∂ 2 L(x) ∂u −1,i ∂u 1,j + α ∂ 2 L(x) ∂u 1,i ∂u −1,j − β ∂ 2 L(x) ∂u 1,i ∂u 1,j = α λ 2 A 1,1 i,j + λB 2 i.j + λ 2 A 1,1 i,j − β λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + α λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j − β (1 − λ) 2 A 1,1 i,j + (1 − λ)B 2 i.j + (1 − λ) 2 A 1,1 i,j = 0 ∂ 2 L ∂ν s,∂L(x) ∂v t,−1 − ∂L(x) ∂v t,1 = ∂ 2 L(x) ∂v s,−1 ∂v t,−1 − ∂ 2 L(x) ∂v s,−1 ∂v t,1 + ∂ 2 L(x) ∂v s,1 ∂v t,−1 − ∂ 2 L(x) ∂v s,1 ∂v t,1 = E s,t 1,1 + E s,t 1,1 − E s,t 1,1 + E s,t 1,1 + E s,t 1,1 + E s,t 1,1 − E s,t 1,1 + E s,t We also need to consider the second derivative with respect to the other variables ofw. If w is closer to the output than [u p,i ] p,i , [v s,q ] s,q belonging to layer γ where γ > l + 1, then we get
Abstract: Understanding the loss surface of neural networks is essential for the design of models with predictable performance and for their success in applications. Experimental results suggest that sufficiently deep and wide neural networks are not negatively impacted by suboptimal local minima. Despite recent progress, the reason for this outcome is not fully understood. Could deep networks have very few, if any, suboptimal local minima? Or could all of them be equally good? We provide a construction to show that suboptimal local minima (i.e., non-global ones), even though degenerate, exist for fully connected neural networks with sigmoid activation functions. The local minima obtained by our proposed construction belong to a connected set of local solutions that can be escaped from via a non-increasing path on the loss curve. For extremely wide neural networks with two hidden layers, we prove that every suboptimal local minimum belongs to such a connected set. This provides a partial explanation for the successful application of deep neural networks. In addition, we also characterize under what conditions the same construction leads to saddle points instead of local minima for deep neural networks.
Finally, worth mentioning is the study of Liao and Poggio @cite_27, who use polynomial approximations and Bezout's theorem to argue that the loss function should have many local minima with zero empirical loss. Also relevant is the observation of @cite_38 that, if the global minimum does not attain zero loss, then a perfect predictor may have a larger training loss than one producing worse classification results.
{ "abstract": [ "It is widely believed that the back-propagation algorithm in neural networks, for tasks such as pattern classification, overcomes the limitations of the perceptron. The authors construct several counterexamples to this belief. They also construct linearly separable examples which have a unique minimum which fails to separate two families of vectors, and a simple example with four two-dimensional vectors in a single-layer network showing local minima with a large basin of attraction. Thus, back-propagation is guaranteed to fail in the first example, and likely to fail in the second example. It is shown that even multilayered (hidden-layer) networks can also fail in this way to classify linearly separable problems. Since the authors' examples are all linearly separable, the perceptron would correctly classify them. The results disprove the presumption, made in recent years, that, barring local minima, back-propagation will find the best set of weights for a given problem. >", "Previous theoretical work on deep learning and neural network optimization tend to focus on avoiding saddle points and local minima. However, the practical observation is that, at least in the case of the most successful Deep Convolutional Neural Networks (DCNNs), practitioners can always increase the network size to fit the training data (an extreme example would be [1]). The most successful DCNNs such as VGG and ResNets are best used with a degree of \"overparametrization\". In this work, we characterize with a mix of theory and experiments, the landscape of the empirical risk of overparametrized DCNNs. We first prove in the regression framework the existence of a large number of degenerate global minimizers with zero empirical error (modulo inconsistent equations). The argument that relies on the use of Bezout theorem is rigorous when the RELUs are replaced by a polynomial nonlinearity (which empirically works as well). As described in our Theory III [2] paper, the same minimizers are degenerate and thus very likely to be found by SGD that will furthermore select with higher probability the most robust zero-minimizer. We further experimentally explored and visualized the landscape of empirical risk of a DCNN on CIFAR-10 during the entire training process and especially the global minima. Finally, based on our theoretical and experimental results, we propose an intuitive model of the landscape of DCNN's empirical loss surface, which might not be as complicated as people commonly believe." ], "cite_N": [ "@cite_38", "@cite_27" ], "mid": [ "2046949678", "2603221039" ] }
Non-attracting Regions of Local Minima in Deep and Wide Neural Networks
At the heart of most optimization problems lies the search for the global minimum of a loss function. The common approach to finding a solution is to initialize at random in parameter space and subsequently follow directions of decreasing loss based on local methods. This approach lacks a global progress criteria, which leads to descent into one of the nearest local minima. Since the loss function of deep neural networks is non-convex, the common approach of using gradient descent variants is vulnerable precisely to that problem. Authors pursuing the early approaches to local descent by back-propagating gradients [1] experimentally noticed that suboptimal local minima appeared surprisingly harmless. More recently, for deep neural networks, the earlier observations were further supported by the experiments of e.g., [2]. Several authors aimed to provide theoretical insight for this behavior. Broadly, two views may be distinguished. Some, aiming at explanation, rely on simplifying modeling assumptions. Others investigate neural networks under realistic assumptions, but often focus on failure cases only. Recently, Nguyen and Hein [3] provide partial explanations for deep and extremely wide neural networks for a class of activation functions including the commonly used sigmoid. Extreme width is characterized by a "wide" layer that has more neurons than input patterns to learn. For almost every instantiation of parameter values w (i.e. for all but a null set of parameter values) it is shown that, if the loss function has a local minimum at w, then this local minimum must be a global one. This suggests that for deep and wide neural networks, possibly every local minimum is global. The question on what happens at the null set of parameter values, for which the result does not hold, remains unanswered. Similar observations for neural networks with one hidden layer were made earlier by Gori and Tesi [4] and Poston et al. [5]. Poston et al. [5] show for a neural network with one hidden layer and sigmoid activation function that, if the hidden layer has more nodes than training patterns, then the error function (squared sum of prediction losses over the samples) has no suboptimal "local minimum" and "each point is arbitrarily close to a point from which a strictly decreasing path starts, so such a point cannot be separated from a so called good point by a barrier of any positive height" [5]. It was criticized by Sprinkhuizen-Kuyper and Boers [6] that the definition of a local minimum used in the proof of [5] was rather strict and unconventional. In particular, the results do not imply that no suboptimal local minima, defined in the usual way, exist. As a consequence, the notion of attracting and non-attracting regions of local minima were introduced and the authors prove that non-attracting regions exist by providing an example for the extended XOR problem. The existence of these regions imply that a gradient-based approach descending the loss surface using local information may still not converge to the global minimum. The main objective of this work is to revisit the problem of such non-attracting regions and show that they also exist in deep and wide networks. In particular, a gradient based approach may get stuck in a suboptimal local minimum. Most importantly, the performance of deep and wide neural networks cannot be explained by the analysis of the loss curve alone, without taking proper initialization or the stochasticity of SGD into account. Our observations are not fundamentally negative. 
At first, the local minima we find are rather degenerate. With proper initialization, a local descent technique is unlikely to get stuck in one of the degenerate, suboptimal local minima 1 . Secondly, the minima reside on a non-attracting region of local minima (see Definition 1). Due to its exploration properties, stochastic gradient descent will eventually be able to escape from such a region (see [8]). We conjecture that in sufficiently wide and deep networks, except for a null set of parameter values as starting points, there is always a monotonically decreasing path down to the global minimum. This was shown in [5] for neural networks with one hidden layer, sigmoid activation function and square loss, and we generalize this result to neural networks with two hidden layers. (More precisely, our result holds for all neural networks with square loss and a class of activation functions including the sigmoid, where the wide layer is the last or second last hidden layer). This implies that in such networks every local minimum belongs to a non-attracting region of local minima. Our proof of the existence of suboptimal local minima even in extremely wide and deep networks is based on a construction of local minima in neural networks given by Fukumizu and Amari [9]. By relying on careful computation we are able to characterize when this construction is applicable to deep neural networks. Interestingly, in deeper layers, the construction rarely seems to lead to local minima, but more often to saddle points. The argument that saddle points rather than suboptimal local minima are the main problem in deep networks has been raised before (see [10]) but a theoretical justification [11] uses strong assumptions that do not exactly hold in neural networks. Here, we provide the first analytical argument, under realistic assumptions on the neural network structure, describing when certain critical points of the training loss lead to saddle points in deeper networks. III. MAIN RESULTS A. Problem definition We consider regression networks with fully connected layers of size n l , 0 ≤ l ≤ L given by f (x) = w L (σ(w L−1 (σ(. . . (w 2 (σ(w 1 (x) + w 1 0 )) + w 2 0 ) . . .)) + w L−1 0 )) + w L 0 , where w l ∈ R nl×nl−1 denotes the weight matrix of the l-th layer, 1 ≤ l ≤ L, w l 0 the bias terms, and σ a nonlinear activation function. The neural network function is denoted by f and we notationally suppress dependence on parameters. We assume the activation function σ to belong to the class of strictly monotonically increasing, analytic, bounded functions on R with image in interval (c, d) such that 0 ∈ [c, d], a class we denote by A. As prominent examples, the sigmoid activation function σ(t) = 1 1+exp(−t) and σ(t) = tanh(x) lie in A. We assume no activation function at the output layer. The neural network is assumed to be a regression network mapping into the real domain R, i.e. n L = 1 and w L ∈ R 1×nL−1 . We train on a finite dataset (x α , y α ) 1≤α≤N of size N with input patterns x α ∈ R n0 and desired target value y α ∈ R. We aim to minimize the squared loss L = N α=1 (f (x α ) − y α ) 2 . Further, w denotes the collection of all w l . The dependence of the neural network function f on w translates into a dependence of L = L(w) of the loss function on the parameters w. Due to assumptions on σ, L(w) is twice continuously differentiable. The goal of training a neural network consists of minimizing L(w) over w. 
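For concreteness, this network class and its loss can be written down in a few lines. The following sketch is illustrative only, not an implementation used for our results: the layer sizes, weights and dataset are random placeholders, the hidden activation is the sigmoid (an element of A), and the output layer is linear as required for regression.

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def forward(x, weights, biases):
    # x: (n_0,) input; weights[l]: (n_{l+1}, n_l); biases[l]: (n_{l+1},).
    # Hidden layers apply the sigmoid; the output layer is linear.
    a = x
    for l in range(len(weights) - 1):
        a = sigmoid(weights[l] @ a + biases[l])
    return weights[-1] @ a + biases[-1]

def squared_loss(X, y, weights, biases):
    # L = sum_alpha (f(x_alpha) - y_alpha)^2 over the finite dataset.
    preds = np.array([forward(x, weights, biases)[0] for x in X])
    return float(np.sum((preds - y) ** 2))

rng = np.random.default_rng(1)
sizes = [2, 4, 6, 4, 1]                        # n_0, ..., n_L (illustrative)
weights = [rng.standard_normal((sizes[l + 1], sizes[l])) for l in range(len(sizes) - 1)]
biases = [rng.standard_normal(sizes[l + 1]) for l in range(len(sizes) - 1)]
X = rng.standard_normal((10, 2))               # N = 10 input patterns
y = rng.standard_normal(10)                    # target values
print(squared_loss(X, y, weights, biases))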
There is a unique value L 0 denoting the infimum of the neural network's loss (most often L 0 = 0 in our examples). Any set of weights w • that satisfies L(w • ) = L 0 is called a global minimum. Due to its non-convexity, the loss function L(w) of a neural network is in general known to potentially suffer from local minima (precise definition of a local minimum below). We will study the existence of suboptimal local minima in the sense that a local minimum w * is suboptimal if its loss L(w * ) is strictly larger than L 0 . We refer to deep neural networks as models with more than one hidden layer. Further, we refer to wide neural networks as the type of model considered in [3]- [5] with one hidden layer containing at least as many neurons as input patterns (i.e. n l ≥ N for some 1 ≤ l < L in our notation). Disclaimer: Naturally, training for zero global loss is not desirable in practice, neither is the use of fully connected wide and deep neural networks necessarily. The results of this paper are of theoretical importance. To be able to understand the complex learning behavior of deep neural networks in practice, it is a necessity to understand the networks with the most fundamental structure. In this regard, while our result are not directly applicable to neural networks used in practice, they do offer explanations for their learning behavior. B. A special kind of local minimum The standard definition of a local minimum, which is also used here, is a point w * such that w * has a neighborhood U with L(w) ≥ L(w * ) for all w ∈ U . Since local minima do not need to be isolated (i.e. L(w) > L(w * ) for all w ∈ U \ {w * }) two types of connected regions of local minima may be distinguished. Note that our definition slightly differs from the one by [6]. Definition 1. [6] Let : R n → R be a differentiable function. Suppose R is a maximal connected subset of parameter values w ∈ R m , such that every w ∈ R is a local minimum of with value (w) = c. • R is called an attracting region of local minima, if there is a neighborhood U of R such that every continuous path Γ(t), which is non-increasing in and starts from some Γ(0) ∈ U , satisfies (Γ(t)) ≥ c for all t. • R is called a non-attracting region of local minima, if every neighborhood U of R contains a point from where a continuous path Γ(t) exists that is non-increasing in and ends in a point Γ(1) with (Γ(1)) < c. Despite its non-attractive nature, a non-attracting region R of local minima may be harmful for a gradient descent approach. A path of greatest descent can end in a local minimum on R. However, no point z on R needs to have a neighborhood of attraction in the sense that following the path of greatest descent from a point in a neighborhood of z will lead back to z. (The path can lead to a different local minimum on R close by or reach points with strictly smaller values than c.) In the example of such a region for the 2-3-1 XOR network provided in [6], a local minimum (of higher loss than the global loss) resides at points in parameter space with some coordinates at infinity. In particular, a gradient descent approach may lead to diverging parameters in that case. However, a different non-increasing path down to the global minimum always exists. It can be shown that local minima at infinity also exist for wide and deep neural networks. (The proof can be found in Appendix A.) Theorem 1 (cf. [6] Section III). 
Let L denote the squared loss of a fully connected regression neural network with sigmoid activation functions, having at least one hidden layer and each hidden layer containing at least two neurons. Then, for almost every finite dataset, the loss function L possesses a local minimum at infinity. The local minimum is suboptimal whenever dataset and neural network are such that a constant function is not an optimal solution. A different type of non-attracting regions of local minima (without infinite parameter values) is considered for neural networks with one hidden layer by Fukumizu and Amari [9] and Wei et al. [8] under the name of singularities. This type of region is characterized by singularities in the weight space (a subset of the null set not covered by the results of Nguyen and Hein [3]) leading to a loss value strictly larger than the global loss. The dynamics around such region are investigated by Wei et al. [8]. Again, a full batch gradient descent approach can get stuck in a local minimum in this type of region. A rough illustration of the nature of these non-attracting regions of local minima is depicted in Fig. 1. Non-attracting regions of local minima do not only exist in small two-layer neural networks. Theorem 2. There exist deep and wide fully-connected neural networks with sigmoid activation function such that the squared loss function of a finite dataset has a non-attracting region of local minima (at finite parameter values). The construction of such local minima is discussed in Section V with a complete proof in Appendix B. Corollary 1. Any attempt to show for fully connected deep and wide neural networks that a gradient descent technique will always lead to a global minimum only based on a description of the loss curve will fail if it doesn't take into consideration properties of the learning procedure (such as the stochasticity of stochastic gradient descent), properties of a suitable initialization technique, or assumptions on the dataset. On the positive side, we point out that a stochastic method such as stochastic gradient descent has a good chance to escape a non-attracting region of local minima due to noise. With infinite time at hand and sufficient exploration, the region can be escaped from with high probability (see [8] for a more detailed discussion). In Section V-A we will further characterize when the method used to construct examples of regions of non-attracting local minima is applicable. This characterization limits us to the construction of extremely degenerate examples. We give an intuitive argument why assuring the necessary assumptions for the construction becomes more difficult for wider and deeper networks and why it is natural to expect a lower suboptimal loss (where the suboptimal minima are less "bad") the less degenerate the constructed minima are and the more parameters a neural network possesses. C. Non-increasing path to a global minimum By definition, every neighborhood of a non-attracting region of local minima contains points from where a non-increasing path to a value less than the value of the region exists. (By definition all points belonging to a nonattracting region have the same value, in fact they are all local minima.) The question therefore arises whether from almost everywhere in parameter space there is such a non-increasing path all the way down to a global minimum. 
If the last hidden layer is the wide layer having more neurons than input patterns (for example, consider a wide two-layer neural network), then this holds true by the results of [3] (and [4], [5]). We show the same conclusion to hold for wide neural networks in which the second last hidden layer is the wide one. In particular, this implies that for wide neural networks with two hidden layers, starting from almost everywhere in parameter space, there is a non-increasing path down to a global minimum. Theorem 3. Consider a fully connected regression neural network with activation function in the class A equipped with the squared loss function for a finite dataset. Assume that the second last hidden layer contains more neurons than the number of input patterns. Then, for each set of parameters w and all ε > 0, there is w' such that ||w − w'|| < ε and such that a path non-increasing in loss from w' to a global minimum where f (x α ) = y α for each α exists. Corollary 2. Consider a wide, fully connected regression neural network with two hidden layers and activation function in the class A, trained to minimize the squared loss over a finite dataset. Then all suboptimal local minima are contained in a non-attracting region of local minima. The rest of the paper contains the arguments leading to the given results.

IV. NOTATIONAL CHOICES

We fix additional notation besides the problem definition from Section III-A. For input x α , we denote the pattern vector of values at all neurons at layer l before activation by n(l; x α ) and after activation by act(l; x α ). (Fig. 2, referenced in Section V, visualizes the embedding γ 1 λ for a network with two inputs x α,1 , x α,2 and bias x 0 : the hidden neuron with incoming weights [u 1,i ] i is duplicated, both copies keep the incoming weights [u 1,i ] i , and the outgoing weight v •,1 is split into λ · v •,1 and (1 − λ) · v •,1 , while v •,2 , v •,3 and the bias weight v •,0 remain unchanged.)
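The wideness assumption of Theorem 3 amounts to a rank condition on the activation vectors of the second last hidden layer, and the ε-perturbation removes degenerate configurations. The following sketch illustrates both points with random placeholders for the activations (values in (0, 1), as for the sigmoid); it is a loose numerical analogy, not part of the proof.

import numpy as np

rng = np.random.default_rng(3)
N, n_wide = 6, 10                              # N input patterns, n_{L-2} >= N neurons in the wide layer
A = rng.uniform(0.0, 1.0, size=(n_wide, N))    # rows stand in for the activation vectors a^{L-2}_k
print(np.linalg.matrix_rank(A) == N)           # True almost surely: the vectors span R^N

A_degenerate = np.ones((n_wide, N))            # rank 1: the activation vectors do not span R^N
eps = 1e-6                                     # an arbitrarily small perturbation restores full rank
print(np.linalg.matrix_rank(A_degenerate + eps * rng.standard_normal((n_wide, N))) == N)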
Sorting the network parameters in a convenient way, the embedding of the smaller network into the larger one is defined for any λ ∈ R by a function γ r λ mapping parameters {([u r,i ] i , [v s,r ] s ,w} of the smaller network to parameters {([u −1,i ] i , [v s,−1 ] s , [u r,i ] i , [v s,r ] s ,w)} of the larger network and is defined by γ r λ ([u r,i ] i , [v s,r ] s ,w) := ([u r,i ] i , [λ · v s,r ] s , [u r,i ] i , [(1 − λ) · v s,r ] s ,w) . Herew denotes the collection of all remaining network parameters, i.e., all [u p,i ] i , [v s,q ] s for p, q / ∈ {−1, r} and all parameters from linear transformation of layers with index smaller than l or larger than l + 1, if existent. A visualization of γ 1 λ is shown in Fig. 2. Important fact: For the functions ϕ, f of smaller and larger network at parameters ([u * 1,i ] i , [v * s,1 ] s ,w * ) and γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) respectively, we have ϕ(x) = f (x) for all x. More generally, we even have n ϕ (l, k; x) = n(l, k; x) and act ϕ (l, k; x) = act(l, k; x) for all l, x and k ≥ 0. A. Characterization of hierarchical local minima Using γ r to embed a smaller deep neural network into a second one with one additional neuron, it has been shown that critical points get mapped to critical points. Theorem 4 (Nitta [15]). Consider two neural networks as in Section III-A, which differ by one neuron in layer l with index n(l, −1; x) in the larger network. If parameter choices ([u * r,i ] i , [v * s,r ] s ,w * ) determine a critical point for the squared loss over a finite dataset in the smaller network then, for each λ ∈ R, γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) determines a critical point in the larger network. As a consequence, whenever an embedding of a local minimum with γ r λ into a larger network does not lead to a local minimum, then it leads to a saddle point instead. (There are no local maxima in the networks we consider, since the loss function is convex with respect to the parameters of the last layer.) For neural networks with one hidden layer, it was characterized when a critical point leads to a local minimum. Theorem 5 (Fukumizu, Amari [9]). Consider two neural networks as in Section III-A with only one hidden layer and which differ by one neuron in the hidden layer with index n(1, −1; x) in the larger network. Assume that parameters ([u * r,i ] i , v * •,r ,w * ) determine a local minimum for the squared loss over a finite dataset in the smaller neural network and that λ / ∈ {0, 1}. Then γ r λ ([u * r,i ] i , v * •,r ,w * ) determines a local minimum in the larger network if the matrix [B r i,j ] i,j given by B r i,j = α (f (x α ) − y α ) · v * •,r · σ (n(1, r; x α )) · x α,i · x α,j is positive definite and 0 < λ < 1, or if [B r i,j ] i,j is negative definite and λ < 0 or λ > 1. (Here, we denote the k-th input dimension of input x α by x α,k .) We extend the previous theorem to a characterization in the case of deep networks. We note that a similar computation was performed in [19] for neural networks with two hidden layers. Theorem 6. Consider two (possibly deep) neural networks as in Section III-A, which differ by one neuron in layer l with index n(l, −1; x) in the larger network. Assume that the parameter choices ([u * r,i ] i , [v * s,r ] s ,w * ) determine a local minimum for the squared loss over a finite dataset in the smaller network. 
If the matrix [B r i,j ] i,j defined by B r i,j := α (f (x α ) − y α ) · k ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, k; x α ) · v * k,r · σ (n(l, r; x α )) · act(l − 1, i; x α ) · act(l − 1, j; x α )(1) is either • positive definite and λ ∈ I := (0, 1), or • negative definite and λ ∈ I : = (−∞, 0) ∪ (1, ∞), then γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) | λ ∈ I determines a non-attracting region of local minima in the larger network if and only if D r,s i := α (f (x α ) − y α ) · ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) · σ (n(l, r; x α )) · act(l − 1, i; x α )(2) is zero, D r,s i = 0, for all i, s. Remark 1. In the case of a neural network with only one hidden layer as considered in Theorem 5, the function h •,l+1 (x) is the identity function on R and the matrix [B r i,j ] i,j in (1) reduces to the matrix [B r i,j ] i,j in Theorem 5. The condition that D r,s i = 0 for all i, s does hold for shallow neural networks with one hidden layer as we show below. This proves Theorem 6 to be consistent with Theorem 5. The theorem follows from a careful computation of the Hessian of the cost function L(w), characterizing when it is positive (or negative) semidefinite and checking that the loss function does not change along directions that correspond to an eigenvector of the Hessian with eigenvalue 0. We state the outcome of the computation in Lemma 1 and refer the reader interested in a full proof of Theorem 6 to Appendix B. Lemma 1. Consider two (possibly deep) neural networks as in Section III-A, which differ by one neuron in layer l with index n(l, −1; x) in the larger network. Fix 1 ≤ r ≤ n l . Assume that the parameter choices ([u * r,i ] i , [v * s,r ] s ,w * ) determine a critical point in the smaller network. Let L denote the the loss function of the larger network and the loss function of the smaller network. Let α = −β ∈ R such that λ = β α+β . With respect to the basis of the parameter space of the larger network given by ([u −1,i +u r,i ] i , [v s,−1 +v s,r ] s ,w, [α· u −1,i − β · u r,i ] i , [v s,−1 − v s,r ] s ) , the Hessian of L (i.e., the second derivative with respect to the new network parameters) at γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) is given by        [ ∂ 2 ∂ur,i∂ur,j ] i,j 2[ ∂ 2 ∂ur,i∂vs,r ] i,s [ ∂ 2 ∂w ∂ur,i ] i,w 0 0 2[ ∂ 2 ∂ur,i∂vs,r ] s,i 4[ ∂ 2 ∂vs,r∂vt,r ] s,t 2[ ∂ 2 ∂w ∂vs,r ] s,w (α − β)[D r,s i ] s,i 0 [ ∂ 2 ∂w ∂ur,i ]w ,i 2[ ∂ 2 ∂w ∂vs,r ]w ,s [ ∂ 2 ∂w ∂w ]w ,w 0 0 0 (α − β)[D r,s i ] i,s 0 αβ[B r i,j ] i,j (α + β)[D r,s i ] i,s 0 0 0 (α + β)[D r,s i ] s,i 0        B. Shallow networks with a single hidden layer For the construction of suboptimal local minima in wide two-layer networks, we begin by following the experiments of [9] that prove the existence of suboptimal local minima in (non-wide) two-layer neural networks. Consider a neural network of size 1-2-1. We use the corresponding network function f to construct a dataset (x α , y α ) N α=1 by randomly choosing x α and letting y α = f (x α ). By construction, we know that a neural network of size 1-2-1 can perfectly fit the dataset with zero error. Consider now a smaller network of size 1-1-1 having too little expressibility for a global fit of all data points. We find parameters [u * 1,1 , v * • ] where the loss function of the neural network is in a local minimum with non-zero loss. For this small example, the required positive definiteness of [B 1 i,j ] i,j from (1) for a use of γ λ with λ ∈ (0, 1) reduces to checking a real number for positivity, which we assume to hold true. 
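The setup just described can be sketched numerically. The following is a minimal NumPy sketch (not the authors' code; the absence of bias terms, the learning rate, and the use of the sigmoid's second derivative σ'' inside the scalar B^1, as in the Fukumizu-Amari construction, are assumptions). It builds a dataset from a random 1-2-1 teacher network, fits a 1-1-1 network by plain gradient descent, and evaluates the scalar B^1 at the resulting parameters; whether the fit actually ends in a suboptimal local minimum with B^1 > 0 depends on the data and the initialization, which is exactly what is assumed above.

import numpy as np

rng = np.random.default_rng(0)
sig = lambda t: 1.0 / (1.0 + np.exp(-t))
dsig = lambda t: sig(t) * (1.0 - sig(t))           # sigma'
ddsig = lambda t: dsig(t) * (1.0 - 2.0 * sig(t))   # sigma''

# Teacher network of size 1-2-1 (no biases, for brevity) defines the dataset.
U_t = rng.normal(size=(2, 1))
V_t = rng.normal(size=(1, 2))
x = rng.normal(size=50)                             # 50 scalar inputs x_alpha
y = sig(np.outer(x, U_t[:, 0])) @ V_t[0]            # targets y_alpha = teacher(x_alpha)

# Student network of size 1-1-1: one hidden neuron with weights u (incoming) and v (outgoing).
u, v = rng.normal(), rng.normal()
lr = 0.5
for _ in range(50_000):                             # gradient descent on the mean squared loss
    n = u * x                                       # pre-activations n(1, 1; x_alpha)
    r = v * sig(n) - y                              # residuals f(x_alpha) - y_alpha
    u -= lr * np.mean(r * v * dsig(n) * x)
    v -= lr * np.mean(r * sig(n))

# Scalar B^1 of Theorem 5 (input dimension 1, so the matrix collapses to a single number).
n = u * x
B1 = np.sum((v * sig(n) - y) * v * ddsig(n) * x * x)
print("final loss:", 0.5 * np.sum((v * sig(n) - y) ** 2), " B^1:", B1)
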
We can now apply γ λ and Theorem 5 to find parameters for a neural network of size 1-2-1 that determine a suboptimal local minimum. This example may serve as the base case for a proof by induction to show the following result. Theorem 7. There is a wide neural network with one hidden layer and arbitrarily many neurons in the hidden layer that has a non-attracting region of suboptimal local minima. Having already established the existence of parameters for a (small) neural network leading to a suboptimal local minimum, it suffices to note that iteratively adding neurons using Theorem 5 is possible. Iteratively at step t, we add a neuron n(1, −t; x) to the network by an application of γ 1 λ with the same λ ∈ (0, 1). The corresponding matrix from (1), B 1,(t) i,j = α (f (x α ) − y α ) · (1 − λ) t · v * •,1 · σ (n(l, 1; x α )) · x α,i · x α,j , is positive semidefinite. (We use here that neither f (x α ) nor n(l, 1; x α ) ever change during this construction.) By Theorem 5 we always find a suboptimal minimum with nonzero loss for the network for λ ∈ (0, 1). Note however, that a continuous change of λ to a value outside of [0, 1] does not change the network function, but leads to a saddle point. Hence, we found a non-attracting region of suboptimal minima. Remark 2. Since we started the construction from a network of size 1-1-1, our constructed example is extremely degenerate: The suboptimal local minima of the wide network have identical incoming weight vectors for each hidden neuron. Obviously, the suboptimality of this parameter setting is easily discovered. Also with proper initialization, the chance of landing in this local minimum is vanishing. However, one may also start the construction from a more complex network with a larger network with several hidden neurons. In this case, when adding a few more neurons using γ 1 λ , it is much harder to detect the suboptimality of the parameters from visual inspection. C. Deep neural networks According to Theorem 6, next to positive definiteness of the matrix B r i,j for some r, in deep networks there is a second condition for the construction of hierarchical local minima using the map γ r λ , i.e. D r,s i = 0. We consider conditions that make D r,s i = 0. Proposition 1. Suppose we have a hierarchically constructed critical point of the squared loss of a neural network constructed by adding a neuron into layer l with index n(l, −1; x) by application of the map γ r λ to a neuron n(l, r; x). Suppose further that for the outgoing weights v * s,r of n(l, r; x) we have s v * s,r = 0 , and suppose that D r,s i is defined as in (2). Then D r,s i = 0 if one of the following holds. (i) The layer l is the last hidden layer. (This condition includes the case l = 1 indexing the hidden layer in a two-layer network.) (ii) ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) = ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, t; x α ) for all s, t, α (iii) For each α and each t, with L α : = (f (x α ) − y α ) 2 , ∂L α ∂n(l + 1, t; x α ) = (f (x α ) − y α ) · ∂h •,l+1 (n(l + 1; x α ) ∂n(l + 1, t; x α ) = 0. (This condition holds in the case of the weight infinity attractors in the proof to Theorem 1 for l + 1 the second last layer. It also holds in a global minimum.) The proof is contained in Appendix C. D. Experiment for deep networks To construct a local minimum in a deep and wide neural network, we start by considering a three-layer network of size 2-2-4-1, i.e. we have two input dimensions, one output dimension and hidden layers of two and four neurons. 
We use its network function f to create a dataset of 50 samples (x α , f (x α )), hence we know that a network of size 2-2-4-1 can attain zero loss. We initialize a new neural network of size 2-2-2-1 and train it until convergence, before using the construction to add neurons to the network. When adding neurons to the last hidden layer using γ 1 λ , Proposition 1 assures that D 1,• i = 0 for all i. We check for positive definiteness of the matrix B 1 i,j , and only continue when this property holds. Having thus assured the necessary condition of Theorem 6, we can add a few neurons to the last hidden layer (by induction as in the two-layer case), which results in local minimum of a network of size 2-2-M-1. The local minimum of non-zero loss that we attain is suboptimal whenever M ≥ 4 by construction. For M ≥ 50 the network is wide. Experimentally, we show not only that indeed we end up with a suboptimal minimum, but also that it belongs to a non-attracting region of local minima. In Fig. 3 we show results after adding eleven neurons to the last hidden layer. On the left side, we plot the loss in the neighborhood of the constructed local minimum in parameter space. The top image shows the loss curve into randomly generated directions, the bottom displays the minimal loss over all these directions. On the top right we show the change of loss along one of the degenerate directions that allows reaching a saddle point. In such a saddle point we know from Lemma 1 the direction of descent. The image on the bottom right shows that indeed the direction allows a reduction in loss. Being able to reach a saddle point from a local minimum by a path of non-increasing loss shows that indeed we found a non-attracting region of local minima. E. A discussion of limitations and of the loss of non-attracting regions of suboptimal minima We fix a neuron in layer l and aim to use γ r λ to find a local minimum in the larger network. We then need to check whether a matrix B r i,j is positive definite, which depends on the dataset. Under strong independence assumptions (the signs of different eigenvalues of B r i,j are independent), one may argue similar to arguments in [10] that the probability of finding B r i,j to be positive definite (all eigenvalues positive) is exponentially decreasing in the number of possible neurons of the previous layer l − 1. At the same time, the number of neurons n(l, r; x) in layer l to use for the construction only increases linearly in the number of neurons in layer l. Experimentally, we use a four-layer neural network of size 2-8-12-8-1 to construct a (random) dataset containing 500 labeled samples. We train a network of size 2-4-6-4-1 on the dataset until convergence using SciPy's 2 BFGS implementation. For each layer l, we check each neuron r whether it can be used for enlargment of the network using the map γ r λ for some λ ∈ (0, 1), i.e., we check whether the corresponding matrix B r i,j is positive definite. We repeat this experiment 1000 times. For the first layer, we find that in 547 of 4000 test cases the matrix is positive definite. For the second layer we only find B r i,j positive definite in 33 of 6000 cases, and for the last hidden layer there are only 6 instances out of 4000 where the matrix B r i,j is positive definite. Since the matrix B r i,j is of size 2 × 2/4 × 4/6 × 6 for the first/second/last hidden layer respectively, the number of positive matrices is less than what would be expected under the strong independence assumptions discussed above. 
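To give a feeling for the scaling discussed above, here is a small Monte Carlo sketch (purely illustrative and not the paper's experiment; modelling the candidate matrices as random symmetric Gaussian matrices is an assumption standing in for the strong independence heuristic). It estimates how often a random symmetric matrix of size 2, 4 or 6 (the sizes of B r i,j for the three hidden layers above) is positive definite.

import numpy as np

rng = np.random.default_rng(1)

def frac_positive_definite(n: int, trials: int = 20_000) -> float:
    """Fraction of random symmetric n x n Gaussian matrices whose eigenvalues are all positive."""
    hits = 0
    for _ in range(trials):
        a = rng.normal(size=(n, n))
        s = (a + a.T) / 2.0                         # symmetrize
        if np.all(np.linalg.eigvalsh(s) > 0.0):     # positive definite <=> all eigenvalues > 0
            hits += 1
    return hits / trials

for n in (2, 4, 6):
    print(f"size {n}x{n}: fraction positive definite ~ {frac_positive_definite(n):.4f}")

Under the naive assumption of independent eigenvalue signs the fraction would be 2^(-n), i.e. 0.25, 0.0625 and 0.0156 for the three sizes; eigenvalue repulsion pushes it lower still, which is in line with the observed counts of 547/4000, 33/6000 and 6/4000 reported above.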
In addition, in deeper layers, further away from the output layer, it seems dataset dependent and unlikely to us that D r,s i = 0. Simulations seem to support this belief. However, it is difficult to check the condition numerically. Firstly, it is hard to find the exact position of minima and we only compute numerical approximations of D r,s i . Secondly, the terms are small for sufficiently large networks and numerical errors play a role. Due to these two facts, it becomes barely possible to check the condition of exact equality to zero. In Fig. 4 we show the distribution of maximal entries of the matrix D r,s i = 0 for neurons in the first, second and third layer of the network of size 2-4-6-4-1 trained as above. Note that for the third layer we know from theory that in a critical point we have D r,s i = 0, but due to numerical errors much larger values arise. Further, a region of local minima as above requires linearly dependent activation pattern vectors. This is how linear dimensions for subsequent layers get lost, reducing the ability to approximate the target function. Intuitively, in a deep and wide neural network there are many possible directions of descent. Loosing some of them still leaves the network with enough freedom to closely approximate the target function. As a result, these suboptimal minima have a loss close to the global loss. Conclusively, finding suboptimal local minima with high loss by the construction using γ r λ becomes hard when the networks become deep and wide. VI. PROVING THE EXISTENCE OF A NON-INCREASING PATH TO THE GLOBAL MINIMUM In the previous section we showed the existence of non-attracting regions of local minima. These type of local minima do not rule out the possibility of non-increasing paths to the global minimum from almost everywhere in parameter space. In this section, we sketch the proof to Theorem 3 illustrated in form of several lemmas, where up to the basic assumptions on the neural network structure as in Section III-A (with activation function in A), the assumption of one lemma is given by the conclusion of the previous one. A full proof can be found in Appendix D. We consider vectors that we call activation vectors, different from the activation pattern vectors act(l; x) from above. The activation vector at neuron k in layer l is denoted by a l k and defined by all values at the given neuron for different samples x α : a l k := [act(l, k; x α )] α . In other words while we fix l and x for the activation pattern vectors act(l; x) and let k run over its possible values, we fix l and k for the activation vectors a l k and let x run over its samples x α in the dataset. The first step of the proof is to use the freedom given by to have the activation vectors a L−2 of the wide layer L − 2 span the whole space R N . ν(t) in R N such that ρ(t) = σ(ν(t)) for all t. The activation vectors a L−1 k of the last hidden layer span a linear subspace H of R N . The optimal parameters w L of the output layer compute the best approximation of (y α ) α onto H. Lemma 3 and Lemma 4 together imply that we can achieve any desired continuous change of the spanning vectors of H, and hence the linear subspace H, by a suitable change of the parameters w L−1 . As it turns out, there is a natural possible path of parameters that strictly monotonically decreases the loss to the global minimum whenever we may assume that not all non-zero coefficients of w L have the same sign. 
If this is not the case, however, we first follow a different path through the parameter space to eventually assure different signs of coefficients of w L . Interestingly, this path leaves the loss constant. In other words, from certain points in parameter space it is necessary to follow a path of constant loss until we reach a point from where we can further decrease the loss; just like in the case of the non-attracting regions of local minima. Lemma 5. For n ≥ 2, let {r 1 , r 2 , . . . , r n } be a set of vectors in Im(σ) N and E = span j (r j ) their linear span. If z ∈ E has a representation z = j λ j r j where all λ j are positive (or all negative), then there are continuous paths r j : [0, 1] → r j (t) of vectors in Im(σ) N such that the following properties hold. (i) r j (0) = r j . (ii) z ∈ span j (r j (t)) for all t, so that there are continuous paths t → λ j (t) such that z = λ j (t)r j (t). (iii) There are 1 ≤ j + , j − ≤ n such that λ j+ (1) > 0 and λ j− (1) < 0. We apply Lemma 5 to activation vectors r i = a i giving continuous paths t → a L−1 i (t) and t → λ i (t) = w L 1,i (t). Then the output f (x α ) of the neural network along this path remains constant, hence so does the loss. The desired change of activation vectors a L−1 i (t) can be performed by a suitable change of parameters w L−1 according to Lemma 3 and Lemma 4. The simultaneous change of w L−1 and w L defines the first part Γ 1 (t) of our desired path in the parameter space which keeps f (x α ) constant. The final part of the desired path is given by the following lemma. Lemma 6. Assume a neural network structure as above with activation vectors a L−2 i of the wide hidden layer spanning R N . If the weights w L of the output layer satisfy that there is both a positive and a negative weight, then there is a continuous path t ∈ [0, 1] → Γ 0 (t) from the current weights Γ 0 (0) = w of decreasing loss down to the global minimum at Γ 0 (1) . Proof. Fix z α = f (x α ), the prediction for the current weights. The main idea is to change the activation vectors of the last hidden layer according to ρ j : t ∈ [0, 1] → a L−1 j + t · 1 w L •,j · (y − z) N . With w L fixed, at the output this results in a change of t ∈ [0, 1] → z + t · (y − z), which reduces the loss to zero. The required change of activation vectors can be implemented by an application of Lemma 3 and Lemma 4, but only if the image of each ρ j lies in the image [c, d] of the activation function. Hence, the latter must be arranged. In the case that 0 ∈ (c, d), it suffices to first decrease the norm of a L−1 j while simultaneously increasing the norm of the outgoing weight w L •,j so that the output remains constant. If, however, 0 is in the boundary of the interval [c, d] (for example the case of a sigmoid activation function), then the assumption of non-zero weights with different signs becomes necessary. We let J + = {j ∈ {1, 2, . . . , n L−1 } | w L •,j ≥ 0}, J − = {j ∈ {1, 2, . . . , n L−1 } | w L •,j < 0}, I + = {α ∈ {1, 2, . . . , N } | (y − z) α ≥ 0}, I − = {α ∈ {1, 2, . . . , N } | (y − z) α < 0}. We further define (y − z) I+ to be the vector v with coordinate v α for α ∈ I + equal to (y − z) α and 0 otherwise, and we let analogously (y − z) I− denote the vector containing only the negative coordinates of y − z. 
Then the paths ρ j : [0, 1] → (c, d) defined by ρ j 3 (t) = a L−1 j + t · 1 w L •,j · (y − z) I+ |J + | and for each j ∈ J − by ρ j 3 (t) = a L−1 j + t · 1 w L •,j · (y − z) I− |J − | can be arranged to all lie in the image of the activation function and they again lead to an output change of t ∈ [0, 1] → z + t · (y − z). (Appendix D contains a more detailed proof.) This concludes the proof of Theorem 3 having found a sufficient condition in Lemma 6 to confirm the existence of a path down to zero loss and having shown how to realize this condition in Lemmas 3, 4 and 5. VII. CONCLUSION In this paper we have studied the local minima of deep and wide regression neural networks with sigmoid activation functions. We established that the nature of local minima is such that they live in a special region of the cost function called a non-attractive region, and showed that a non-increasing path to a configuration with lower loss than that of the region can always be found. For sufficiently wide two-or three-layer neural networks, all local minima belong to such a region. We generalized the procedure to find such regions, introduced by Fukumizu and Amari [9], to deep networks and described sufficient conditions for the construction to work. The necessary conditions become very hard to satisfy in wider and deeper networks and, if they fail, the construction leads to saddle points instead. Finally, an intuitive argument shows a clear relation between the degree of degeneracy of a local minimum and the level of suboptimality of the constructed local minimum. APPENDIX NOTATION [x α ] α R n column vector with entries x α ∈ R [x i,j ] i,j ∈ R n1×n2 matrix with entry x i,j at position (i, j) Im(f) ⊆ R image of a function f C n (X, Y ) n-times continuously differentiable function from X to Y N ∈ N number of data samples in training set x α ∈ R n0 training sample input y α ∈ R target output for sample x α A ∈ C(R) class of real-analytic, strictly monotonically increasing, bounded (activation) functions such that the closure of the image contains zero σ ∈ C 2 (R, R) a nonlinear activation function in class A f ∈ C(R n0 , R) neural network function l 1 ≤ l ≤ L index of a layer L ∈ N number of layers excluding the input layer l=0 input layer l = L output layer n l ∈ N number of neurons in layer l k 1 ≤ k ≤ n l index of a neuron in layer l w l ∈ R nl×nl−1 weight matrix of the l-th layer w ∈ R L l=1 (nl·nl−1) collection of all w l w l i,j ∈ R the weight from neuron j of layer l − 1 to neuron j of layer l w L •,j ∈ R the weight from neuron j of layer L − 1 to the output L ∈ R + squared loss over training samples n(l, k; x) ∈ R value at neuron k in layer l before activation for input pattern x n(l; x) ∈ R nl neuron pattern at layer l before activation for input pattern x act(l, k; x) ∈ Im(σ) activation pattern at neuron k in layer l for input x act(l; x) ∈ Im(σ) nl neuron pattern at layer l for input x In Section V, where we fix a layer l, we additionally use the following notation. h •,k (x) ∈ C(R nl , R) the function from act(l; x) to the output [u p,i ] p,i ∈ R nl×nl−1 weights of the given layer l. [v s,q ] s,q ∈ R nl×nl+1 weights the layer l + 1. r ∈ {1, 2, . . . 
, n l } the index of the neuron of layer l that we use for the addition of one additional neuron M ∈ N = L t=1 (n t · n t−1 ), the number of weights in the smaller neural network w ∈ R M −nl−1−nl+1 all weights except u 1,i and v s,1 γ r λ ∈ C(R M , R M +nl−1+nl+1 ) the map defined in Section V to add a neuron in layer l using the neuron with index r in layer l In Section VI, we additionally use the following notation. A. Local minima at infinity in neural networks In this section we prove the existence of local minima at infinity in neural networks. Theorem 1 (cf. [6] Section III). Let L denote the squared loss of a fully connected regression neural network with sigmoid activation functions, having at least one hidden layer and each hidden layer containing at least two neurons. Then, for almost every finite dataset, the loss function L possesses a local minimum at infinity. The local minimum is suboptimal whenever dataset and neural network are such that a constant function is not an optimal solution. Proof. We will show that, if all bias terms u i,0 of the last hidden layer are sufficiently large, then there are parameters u i,0k for k = 0 and parameters v i of the output layer such that the minimal loss is achieved at u i,0 = ∞ for all i. We note that, if u i,0 = ∞ for all i, all neurons of the last hidden layer are fully active for all samples, i.e. act(L − 1, i; x α ) = 1 for all i. Therefore, in this case f ( x α ) = i v •,i for all α. A constant function f (x α ) = i v •,i = c minimizes the loss α (c − y α ) 2 uniquely for c := 1 N N α=1 y α . We will assume that the v •,i are chosen such that i v •,i = c does hold. That is, for fully active hidden neurons at the last hidden layer, the v •,i are chosen to minimize the loss. We write f (x α ) = c + α . Then L = 1 2 α (f (x α ) − y α ) 2 = 1 2 α (c + α − y α ) 2 = 1 2 α ( α + (c − y α )) 2 = 1 2 α (c − y α ) 2 Loss at ui,0 = ∞ for all i + 1 2 α 2 α ≥0 + α α (c − y α ) ( * ) . The idea is now to ensure that ( * ) ≥ 0 for sufficiently large u i,0 and in a neighborhood of the v •,i chosen as above. Then the loss L is larger than at infinity, and any point in parameter space with u i,0 = ∞ and v •,i with i v •,i = c is a local minimum. To study the behavior at u i,0 = ∞, we consider p i = exp(−u i,0 ). Note that lim ui,0→∞ p i = 0. We have f (x α ) = i v •,i σ(u i,0 + k u i,k act(L − 2, k; x α )) = i v •,i · 1 1 + p i · exp(− k u i,k act(L − 2, k; x α )) Now for p i close to 0 we can use Taylor expansion of g j i (p i ) : = 1 1+piexp(a j i ) to get g j i (p i ) = 1 − exp(a j i )p i + O(|p i | 2 ). Therefore f (x α ) = c − i v •,i p i exp(− k u i,k act(L − 2, k; x α )) + O(p 2 i ) and we find that α = − i v •,i p i exp(− k u i,k act(L − 2, k; x α )) + O(p 2 i ). Recalling that we aim to ensure ( * ) = α α (c − y α ) ≥ 0 we consider α α (c − y α ) = − α (c − y α )( i v •,i p i exp(− k u i,k act(L − 2, k; x α ))) + O(p 2 i ) = − i v •,i p i α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) + O(p 2 i ) We are still able to choose the parameters u i,k for i = 0, the parameters from previous layers, and the v •,i subject to i v •,i = c. If now v •,i > 0 whenever α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) < 0 and v •,i < 0 whenever α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) > 0, then the term ( * ) is strictly positive, hence the overall loss is larger than the loss at p i = 0 for sufficiently small p i and in a neighborhood of v •,i . 
The only obstruction we have to get around is the case where we need all v •,i of the opposite sign of c (in other words, α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) has the same sign as c), conflicting with i v •,i = c. To avoid this case, we impose the mild condition that α (c−y α )act(L−2, r; x α ) = 0 for some r, which can be arranged to hold for almost every dataset by fixing all parameters of layers with index smaller than L − 2. By Lemma 7 below (with d α = (c−y α ) and a r α = act(L−2, r; x α )), we can find u > k such that α (c−y α ) exp(− k u > k act(L−2, k; x α )) > 0 and u < k such that α (c − y α ) exp(− k u < k act(L − 2, k; x α )) < 0. We fix u i,k for k ≥ 0 such that there is some i 1 with [u i1,k ] k = [u > k ] k and some i 2 with [u i2,k ] k = [u < k ] k . This assures that we can choose the v •,i of opposite sign to α (c − y α ) exp(− k u i,k act(L − 2, k; x α )) and such that i v •,i = c, leading to a local minimum at infinity. The local minimum is suboptimal whenever a constant function is not the optimal network function for the given dataset. By assumption, there is r such that the last term is nonzero. Hence, using coordinate r, we can choose w = (0, 0, . . . , 0, w r , 0, . . . , 0) such that φ(w) is positive and we can choose w such that φ(w) is negative. B. Proofs for the construction of local minima Here we prove B r i,j := α (f (x α ) − y α ) · k ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, k; x α ) · v * k,r · σ (n(l, r; x α )) · act(l − 1, i; x α ) · act(l − 1, j; x α )(1) is either is zero, D r,s i = 0, for all i, s. The previous theorem follows from two lemmas, with the first lemma containing the computation of the Hessian of the cost function L of the larger network at parameters γ r λ ([u * r,i ] i , [v * s,r ] s ,w * ) with respect to a suitable basis. In addition, to find local minima one needs to explain away all additional directions, i.e., we need to show that the loss function actually does not change into the direction of eigenvectors of the Hessian with eigenvalue 0. Otherwise a higher derivative into this direction could be nonzero and potentially lead to a saddle point (see [19]). Let L denote the the loss function of the larger network and the loss function of the smaller network. Let α = −β ∈ R such that λ = β α+β . With respect to the basis of the parameter space of the larger network given by ([u −1,i +u r,i ] i , [v s,−1 +v s,r ] s ,w, [α· u −1,i − β · u r,i ] i , [v s,−1 − v s,r ] s ),0 0 0 (α − β)[D r,s i ] i,s 0 αβ[B r i,j ] i,j (α + β)[D r,s i ] i,s 0 0 0 (α + β)[D r,s i ] s,i 0        Proof. The proof only requires a tedious, but not complicated calculation (using the relation αλ − β(1 − λ) = 0 multiple times. To keep the argumentation streamlined, we moved all the necessary calculations into Appendix E. (z 1 , z 2 , z 3 , z 4 )     a 2b c 0 2b T 4d 2e 0 c T 2e T f 0 0 0 0 x         z 1 z 2 z 3 z 4     = (z 1 , 2z 2 , z 3 , z 4 )     a b c 0 b T d e 0 c T e T f 0 0 0 0 x         z 1 2z 2 z 3 z 4     (b) It is clear that the matrix x is positive semidefinite for g positive semidefinite and h = 0. To show the converse, first note that if g is not positive semidefinite and z is such that z T gz < 0 then (z T , 0) g h h T 0 z 0 = z T gz < 0. It therefore remains to show that also h = 0 is a necessary condition. Assume h = 0 and find z such that hz = 0. Then for any λ ∈ R we have ((hz) T , −λz T ) g h h T 0 hz −λz = (hz) T g(hz) − 2(hz) T hλz = (hz) T g(hz) − 2λ||hz|| 2 2 . 
For sufficiently large λ, the last term is negative. Proof of Theorem 6. In Lemma 1, we calculated the Hessian of L with respect to a suitable basis at a the critical point γ λ ([u * r,i ] i , [v * s,r ] s ,w * ). If the matrix [D r,s i ] i,] i,j is positive definite or if (λ < 0 or λ > 1) ⇔ αβ < 0 and [B r i,j ] i,j is negative definite. In each case we can alter the λ to values leading to saddle points without changing the network function or loss. Therefore, the critical points can only be saddle points or local minima on a non-attracting region of local minima. To determine whether the critical points in questions lead to local minima when [D r,s i ] i,s = 0, it is insufficient to only prove the Hessian to be positive semidefinite (in contrast to (strict) positive definiteness), but we need to consider directions for which the second order information is insufficient. We know that the loss is at a minimum with respect to all coordinates except for the degenerate directions [v s,−1 − v s,r ] s . However, the network function f (x) is constant along [v s,−1 − v s,r ] s (keeping [v s,−1 + v s, r ] s constant) at the critical point where u −1,i = u r,i for all i. Hence, no higher order information leads to saddle points and it follows that the critical point lies on a region of local minima. C. Construction of local minima in deep networks Proposition 1. Suppose we have a hierarchically constructed critical point of the squared loss of a neural network constructed by adding a neuron into layer l with index n(l, −1; x) by application of the map γ r λ to a neuron n(l, r; x). Suppose further that for the outgoing weights v * s,r of n(l, r; x) we have s v * s,r = 0 , and suppose that D r,s i is defined as in (2). Then D r,s i = 0 if one of the following holds. (i) The layer l is the last hidden layer. (This condition includes the case l = 1 indexing the hidden layer in a two-layer network.) (ii) ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) = ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, t; x α ) for all s, t, α (iii) For each α and each t, with L α : = (f (x α ) − y α ) 2 , ∂L α ∂n(l + 1, t; x α ) = (f (x α ) − y α ) · ∂h •,l+1 (n(l + 1; x α ) ∂n(l + 1, t; x α ) = 0. (This condition holds in the case of the weight infinity attractors in the proof to Theorem 1 for l + 1 the second last layer. It also holds in a global minimum.) Proof. The fact that property (i) suffices uses that h •,l+1 (x) reduces to the identity function on the networks output and hence its derivative is one. Then, considering a regression network as before, our assumption says that v * •,r = 0, hence its reciprocal can be factored out of the sum in Equation (2). Denoting incoming weights into n(l, r; x) by u r,i as before, this leads to D r,1• i = 1 v * •,r · α (f (x α ) − y α ) · v * •,r · σ (n(l, r; x α )) · act(l − 1, i; x α ) = 1 v * •,r · ∂L ∂u r,i = 0 In the case of (ii), ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, s; x α ) = ∂h •,l+1 (n(l + 1; x α )) ∂n(l + 1, t; x α ) for all s, t and we can factor out the reciprocal of t v * r,s = 0 in Equation (2) to again see that for each i, ∂L ∂ur,i = 0 implies that D r,s i = 0 for all s. (iii) is evident since in this case clearly every summand in Equation (2) is zero. D. Proofs for the non-increasing path to a global minimum In this section we discuss how in wide neural networks with two hidden layers a non-increasing path to the global minimum may be found from almost everywhere in the parameter space. 
By [3] (and [4], [5]), we can find such a path if the last hidden layer is wide (containing more neurons than input patterns). We therefore only consider the case where the first hidden layer in a three-layer neural network is wide. More generally, our results apply to all deep neural networks with the second last hidden layer wide. Theorem 3. Consider a fully connected regression neural network with activation function in the class A equipped with the squared loss function for a finite dataset. Assume that the second last hidden layer contains more neurons than the number of input patterns. Then, for each set of parameters w and all > 0, there is w such that ||w − w || < and such that a path non-increasing in loss from w to a global minimum where f (x α ) = y α for each α exists. The first step of the proof is to use the freedom given by to have the activation vectors a L−2 of the wide layer L − 2 span the whole space R N . ν(t) = Γ(t) · [act(L − 2, k; x α )] k,α Proof. We write ν(t) = [n(L − 1, s; x α )] s,α +ν(t) withν(0) = 0. We will findΓ(t) such thatν(t) =Γ(t) · [act(L − 2, k; x α )] k,α withΓ(0) = 0. Then Γ(t) := w L−1 +Γ(t) does the job. Since by assumption [act(L − 2, k; x α )] k,α has full rank, we can find an invertible submatrixà ∈ R N ×N of [act(L−2, k; x α )] k,α . Then we can define a continuous pathρ in R nL−1×N given byρ(t) :=ν(t)·Ã −1 , which satisfies ρ(t) ·Ã = ν(t) andρ(0) = 0. Extendingρ(t) to a path in R nL−1×nL−2 by zero columns at positions corresponding to rows of [act(L − 2, k; x α )] k,α missing inÃ, gives a pathΓ(t) such thatΓ(t) · [act(L − 2, k; x α )] k,α =ν(t) and withΓ(0) = 0. Lemma 4. For all continuous paths ρ(t) in Im(σ) N , i.e. the N-fold copy of the image of σ, there is a continuous path ν(t) in R N such that ρ(t) = σ(ν(t)) for all t. Proof. Since σ : R N → Im(σ) N is invertible with a continuous inverse, take ν(t) = σ −1 (ρ(t)). The activation vectors a L−1 k of the last hidden layer span a linear subspace H of R N . The optimal parameters w L of the output layer compute the best approximation of (y α ) α onto H. Lemma 3 and Lemma 4 together imply that we can achieve any desired continuous change of the spanning vectors of H, and hence the linear subspace H, by a suitable change of the parameters w L−1 . There is a natural possible path of parameters that strictly monotonically decreases the loss to the global minimum. For activation functions in A with 0 in the boundary of the image interval [c, d], this path requires that not all non-zero coefficients of w L have the same sign. If this is not the case, however, we first follow a different path through the parameter space to eventually assure different signs of coefficients of w L . Interestingly, this path leaves the loss constant. In other words, from certain points in parameter space it seems necessary to follow a path of constant loss until we reach a point from where we can further decrease the loss; just like in the case of the non-attracting regions of local minima. Lemma 5. For n ≥ 2, let {r 1 , r 2 , . . . , r n } be a set of vectors in Im(σ) N and E = span j (r j ) their linear span. If z ∈ E has a representation z = j λ j r j where all λ j are positive (or all negative), then there are continuous paths r j : [0, 1] → r j (t) of vectors in Im(σ) N such that the following properties hold. (i) r j (0) = r j . (ii) z ∈ span j (r j (t)) for all t, so that there are continuous paths t → λ j (t) such that z = λ j (t)r j (t). 
(iii) There are 1 ≤ j + , j − ≤ n such that λ j+ (1) > 0 and λ j− (1) < 0. Proof. We only consider the case with all λ j ≥ 0. The other case can be treated analogously. If only one λ j0 is nonzero, then consider a vector r k corresponding to a zero coefficient λ k = 0 and change r k continuously until it equals the vector r j0 corresponding to the only nonzero coefficient. Then continuously increase the positive coefficient λ j0 , while introducing a corresponding negative contribution via λ k . It is then easy to see that this leads to a path satisfying conditions (i)-(iii). We may therefore assume that at least two coefficients λ j are nonzero, say λ 1 and λ 2 . Leaving all r j and λ j for j ≥ 3 unchanged, we only consider r 1 , r 2 , λ 1 , λ 2 for the desired path, i.e. r j (t) = r j and λ j (t) = λ j for all j ≥ 3. We have that λ 1 r 1 + λ 2 r 2 ∈ (λ 1 + λ 2 ) · Im(σ) N , hence can be written as λR for some λ > 0 and R ∈ Im(σ) N with λR = z − j≥3 λ j r j = λ 1 r 1 + λ 2 r 2 . For t ∈ [0, 1 2 ] we define r 1 (t) := r 1 + 2t(R − r 1 ) and r 2 (t) := r 2 , λ 1 (t) = λλ 1 (1 − 2t)λ + 2tλ 1 and λ 2 (t) = (1 − 2t) λλ 2 (1 − 2t)λ + 2tλ 1 . For t ∈ [ 1 2 , 1] we set r 1 (t) := (2 − 2t)R + (2t − 1)( λ 1 λ 1 + 2λ 2 r 1 + 2λ 2 λ 1 + 2λ 2 r 2 ) and r 2 (t) = r 2 , λ 1 (t) = λ(λ 1 + 2λ 2 ) (2 − 2t)(λ 1 + 2λ 2 ) + (2t − 1)λ and λ 2 (t) = −λ 2 λ(2t − 1) (2 − 2t)(λ 1 + 2λ 2 ) + (2t − 1)λ . Then (i) r 1 (0) = r 1 and r 2 (0) = r 2 as desired. Further (ii) z ∈ span j (r j (t)) for all t ∈ [0, 1] via z = j λ j (t)r j (t) . It is also easy to check that r 1 (t), r 2 (t) ∈ Im(σ) N for all t ∈ [0, 1]. Finally, (iii) λ 1 (1) = λ 1 +2λ 2 > 0 and λ 2 (1) = −λ 2 < 0. Hence, if all non-zero coefficients of w L have the same sign, then we apply Lemma 5 to activation vectors r i = a L−1 i giving continuous paths t → a L−1 i (t) and t → λ i (t) = w L •,i (t). Then the output f (x α ) of the neural network along this path remains constant, hence so does the loss. The desired change of activation vectors a L−1 i (t) can be performed by a suitable change of parameters w L−1 according to Lemma 3 and Lemma 4. The simultaneous change of w L−1 and w L defines the first part Γ 1 (t) of our desired path in the parameter space which keeps f (x α ) constant. We may now assume that not all non-zero entries of w L have the same sign. The final part of the desired path is given by the following lemma. Lemma 6. Assume a neural network structure as above with activation vectors a L−2 i of the wide hidden layer spanning R N . If the weights w L of the output layer satisfy that there is both a positive and a negative weight, then there is a continuous path t ∈ [0, 1] → Γ 0 (t) from the current weights Γ 0 (0) = w of decreasing loss down to the global minimum at Γ 0 (1) . Proof. We first prove the result for the (more complicated) case when Im(σ) = (0, d) for some d > 0, e.g. for σ the sigmoid function: Let z ∈ R N be the vector given by z α = f (x α ) for the parameter w at the current weights. Let I + = {α ∈ {1, 2, . . . , N } | (y − z) α ≥ 0}, J + = {j ∈ {1, 2, . . . , n L−1 } | w L •,j ≥ 0}, J − = {j ∈ {1, 2, . . . , n L−1 } | w L •,j < 0}. For each j ∈ {1, 2, . . . , n L−1 } \ J 0 = J + ∪ J − we consider the path ρ j 2 : [0, 1) → (0, d) N of activation values given by ρ j 2 (t) = (1 − t)[act(L − 1, j; x α )] α . Applying Lemma 3 and Lemma 4 we find the inducing path Γ j 2,L−1 for parameters w L−1 , and we simultaneously change the parameters w L via w L •,j (t) = Γ j 2,L (t) := 1 1−t w L •,j . 
Following along Γ j 2 (t) = (Γ j 2,L−1 (t), Γ j 2,L (t)) does not change the outcome f (x α ) = z α for any α. For j ∈ J + we find t j ∈ [0, 1) such that ρ j 2 (t j ) + 1 w L •,j (t j ) · (y − z) I+ |J + | ∈ (0, d) N . This is possible, since all involved terms are positive, ρ j 2 (t j ) < 1 and decreasing to zero for increasing t, while w L •,j (t) increases for growing t. Similarly, for j ∈ J − we find t j ∈ [0, 1) such that ρ j 2 (t j ) + 1 w L •,j (t j ) · (y − z) I− |J − | ∈ (0, d) N . This time the negative sign of w L •,j (t) for j ∈ J . and the negative signs of (y − z) I− cancel, again allowing to find suitable t j . We will consider the endpoints Γ j 2 (t j ) as the new parameter values for w and the induced endpoints ρ j 2 (t j ) as our new act(L − 1, j; x α ). The next part of the path incrementally adds positive or negative coordinates of (y − z) to each activation vector of the last hidden layer. For each j ∈ J + , we let ρ j 3 : [0, 1] → (0, d) N be the path defined by ρ j 3 (t) = [act(L − 1, j; x α )] α + t · 1 w L •,j · (y − z) I+ |J + | and for each j ∈ J − by ρ j 3 (t) = [act(L − 1, j; x α )] α + t · 1 w L •,j · (y − z) I− |J − | Since ρ j 3 (t) is a path in Im(σ) for all j, this path can again be realized by an inducing change Γ 3 (t) of parameters w L−1 . The parameters w L are kept unchanged in this last part of the path. Simultaneously changing all ρ j 3 (t) results in a change of the output of the neural network given by [f t (x α )] α = w L •,0 + nL−1 j=1 w L •,j ρ j 3 (t) = w L •,0 +   j∈J+ w L •,j act(L − 1, j; x α ) + t · 1 w L •,j · (y − z) I+,α |J + |   α +   j∈J− w L •,j act(L − 1, j; x α ) + t · 1 w L •,j · (y − z) I−,α |J − |   α = w L •,0 +   nL−1 j=1 w L •,j act(L − 1, j; x α )   α + j∈J+ t · (y − z) I+ |J + | + j∈J− t · (y − z) I− |J − | = z + t · (y − z) I+ + t · (y − z) I− = z + t · (y − z). It is easy to see that for the path t ∈ [0, 1] → z + t · (y − z) the loss L = ||z + t · (y − z) − y|| 2 2 = (1 − t)||y − z|| 2 2 is strictly decreasing to zero. The concatenation of Γ 2 and Γ 3 gives us the desired path Γ 0 . The case that Im(σ) = (c, 0) for some c < 0 works analogously. In the case that Im(σ) = (c, d) with 0 ∈ (c, d), there is no need to split up into sets I + , I − and J + , J − . We haveρ j 2 (t j ) + 1 w L •,j (tj) · (y−z) N ∈ (c, d) N for t j close enough to 1. Hence we can follow Γ j 2 (t) as above until ρ j 2 (t) + 1 w L •,j (t) · (y − z) N ∈ (c, d) N for all j. From here, the paths ρ j 3 (t) = [act(L − 1, j; x α )] α + t · 1 w L •,j · (y−z) N define paths in Im(σ) for each j, which can be implemented by an application of Lemma 3 and Lemma 4 and lead to the global minimum. E. Calculations for Lemma 1 For the calculations we may assume without loss of generality that r = 1. If we want to consider a different n(l, r; x) and its corresponding γ r λ , then this can be achieved by a reordering of the indices of neurons.) We let ϕ denote the network function of the smaller neural network and f the neural network function of the larger network after adding one neuron according to the map γ 1 λ . To distinguish the parameters of f and ϕ, we write w ϕ for the parameters of the network before the embedding. This gives for all i, s and all m ≥ 2: For the function f we have the following partial derivatives. 
u −1,i = u ϕ 1,i u 1,i = u ϕ 1,i v s,−1 = λv ϕ s,1 v s,1 = (1 − λ)v ϕ s,1 u m,i = u ϕ m,i v s,m = v ϕ s, ∂f (x) ∂u p,i = k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) and ∂f (x) ∂v s,q = ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · act(l, q; x) The analogous equations hold for ϕ. 2) Relating first order derivatives of network functions f and ϕ Therefore, at 3) Second order derivatives of network functions f and ϕ. For the second derivatives we get (with δ(a, a) = 1 and δ(a, b) = 0 for a = b) ∂ 2 f (x) ∂u p,i ∂u q,j = ∂ ∂u q,j k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) = m k ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, m; x)∂n(l + 1, k; x) · v m,q · σ (n(l, q; x)) · act(l − 1, j; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) + δ(p, q) k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) ·act(l − 1, i; x) · act(l − 1, j; x) and ∂ 2 f (x) ∂v s,p ∂v t,q = ∂ ∂v t,q ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · act(l, p; x) = ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x)∂n(l + 1, t; x) · act(l, p; x) · act(l, q; x) and ∂ 2 f (x) ∂u p,i ∂v s,q = ∂ ∂v s,q k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) = k ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x)∂n(l + 1, k; x) · act(l, q; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) + δ(q, p) · ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · σ (n(l, p; x)) · act(l − 1, i; x) For a parameter w closer to the input than [u p,i ] p,i , [v s,q ] s,q , we have ∂ 2 f (x) ∂u p,i ∂w = ∂ ∂w k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) = m k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x)∂n(l + 1, m; x) · ∂n(l + 1, m; x) ∂w · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) + k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · ∂n(l, p; x) ∂w · act(l − 1, i; x) + k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · ∂act(l − 1, i; x) ∂w and ∂ 2 f (x) ∂v s,q ∂w = ∂ ∂w ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · act(l, q; x) = n ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x)∂n(l + 1, n; x) · ∂n(l + 1, n; x) ∂w · act(l, q; x) · act(l, q; x) + ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, s; x) · ∂act(l, q; x) ∂w For a parameter w closer to the output than [u p,i ] p,i , [v s,q ] s,q , we have ∂ 2 f (x) ∂u p,i ∂w = ∂ ∂w k ∂h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x) · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) = k ∂ 2 h •,l+1 (n(l + 1; x)) ∂n(l + 1, k; x)∂w · v k,p · σ (n(l, p; x)) · act(l − 1, i; x) ∂ 2 h ϕ •,l+1 (n ϕ (l + 1; x)) ∂n ϕ (l + 1, m; x)∂n ϕ (l + 1, k; x) · v ϕ m,q · σ (n ϕ (l, q; x)) · act ϕ (l − 1, j; x) · v ϕ k,p · σ (n ϕ (l, p; x)) · act ϕ (l − 1, i; x) B p i,j (x) := k ∂h ϕ •,l+1 (n ϕ (l + 1; x)) ∂n ϕ (l + 1, k; x) · v ϕ k,p · σ (n ϕ (l, p; x)) · act ϕ (l − 1, i; x) · act ϕ (l − 1, j; x) C p,s i,q (x) := k ∂ 2 h ϕ •,l+1 (n(l + 1; x)) ∂n ϕ (l + 1, s; x)∂n ϕ (l + 1, k; x) · act ϕ (l, q; x) · v ϕ k,p · σ (n ϕ (l, p; x)) · act ϕ (l − 1, i; x) D p,s i (x) := ∂h ϕ •,l+1 (n ϕ (l + 1; x)) ∂n ϕ (l + 1, s; x) · σ (n ϕ (l, p; x)) · act ϕ (l − 1, i; x) E s,t p,q (x) := ∂ 2 h ϕ •,l+1 (n ϕ (l + 1; x)) ∂n ϕ (l + 1, s; x)∂n ϕ (l + 1, t; x) · act ϕ (l, p; x) · act ϕ (l, q; x) Then for all i, j, p, q, s, t, we have ∂ 2 ϕ(x) ∂u ϕ p,i ∂u ϕ q,j = A p,q i,j (x) + δ(q, p)B p i,j (x) ∂ 2 ϕ(x) ∂u ϕ p,i ∂v ϕ s,q = C p,s i,q (x) + δ(q, p)D p,s i (x) ∂ 2 ϕ(x) ∂v s,p ∂v t,q = E s,t p,q (x) For f we get for p, q ∈ {−1, 1} and all i, j, s, t ∂ 2 f (x) ∂u −1,i ∂u −1,j = λ 2 A 1,1 i,j (x) + λB 1 i,j (x) ∂ 2 f (x) ∂u 1,i ∂u 1,j = (1 − λ) 2 A 1,1 i,j (x) + (1 − λ)B 1 
i,j (x) ∂ 2 f (x) ∂u −1,i ∂u 1,j = ∂ 2 f (x) ∂u 1,i ∂u −1,j = λ(1 − λ) · A 1,1 i,j (x) ∂ 2 f (x) ∂u −1,i ∂v s,−1 = λC 1,s i,1 (x) + D 1,s i (x) ∂ 2 f (x) ∂u 1,i ∂v s,1 = (1 − λ)C 1,s i,1 (x) + D 1,s i (x) ∂ 2 f (x) ∂u −1,i ∂v s,1 = λ · C 1,s i,1 (x) = λ · ∂ 2 ϕ(x) ∂u ϕ 1,i ∂v ϕ s,1 ∂ 2 f (x) ∂u 1,i ∂v s,−1 = (1 − λ) · C 1,s i,1 (x) = (1 − λ) · ∂ 2 ϕ(x) ∂u ϕ 1,i ∂v ϕ s,1 ∂ 2 f (x) ∂v s,p ∂v t,q = E s,t 1,1 (x) = ∂ 2 ϕ(x) ∂v ϕ s,1 ∂v ϕ t,1 and ∂ ∂w ϕ = α (ϕ(x α ) − y α ) · ∂ϕ(x α ) ∂w ϕ . From this it follows immediately that if ∂ ∂w ϕ (w ϕ ) = 0, then ∂L ∂w (γ 1 λ (w ϕ )) = 0 for all λ (cf. [9], [15]). For the second derivative we get and for q ≥ 2 and p ∈ {−1, 1} and all i, j, s, t ∂ 2 L ∂u −1,i ∂u q,j = λA 1,q i,j + λA 1,1 i,j ∂ 2 L ∂u 1,i ∂u q,j = (1 − λ)A 1,q i,j + (1 − λ)A 1,q i,j ∂ 2 L ∂u −1,i ∂v s,q = λC 1,s i,q + λC 1,s i,q ∂ 2 L ∂u 1,i ∂v s,q = (1 − λ)C 1,s i,q + (1 − λ)C 1,s i,q ∂ 2 L ∂u q,i ∂v s,p = C q,s i,p + C q,s i,p ∂ 2 L ∂v s,p ∂v t,q = E s,t 1,q + E s,t 1,q and for p, q ≥ 2 and all i, j, s, t ∂ 2 L ∂u p,i ∂u q,j = A p,q i,j + δ(q, p)B p i,j (x) + A p,q i,j = ∂ 2 ∂u ϕ p,i ∂u ϕ q,j ∂ 2 L ∂u p,i ∂v s,q = C p,s i,q + δ(q, p)D p,s i + C p,s i,q = ∂ 2 ∂u ϕ p,i ∂v ϕ s,q ∂ 2 L ∂v s,p ∂v t,q = E s,t p,q + E s,t p,q = ∂ 2 ∂v ϕ s,p ∂v ϕ t,q 6) Change of basis Choose any real numbers α = −β such that λ = β α+β (equivalently αλ − β(1 − λ) = 0) and set µ −1,i = u −1,i + u 1,i µ 1,i = α · u −1,i − β · u 1,i ν s,−1 = v s,−1 + v s,1 ν s,1 = v s,−1 − v s,1 . ∂ 2 L ∂w∂r = α (f (x α ) − y α ) · ∂ 2 f (x α ) ∂w∂r + α ∂f (x α ) ∂w · ∂f (x α )∂ 2 L ∂u −1,i ∂u −1,j = λ 2 A 1,1 i,j + λB 1 i,j + λ 2 A 1,1 i,j ∂ 2 L ∂u 1,i ∂u 1,j = (1 − λ) 2 A 1,1 i,j + (1 − λ)B 1 i,j + (1 − λ) 2 A 1,1 i,j ∂ 2 L ∂u −1,i ∂u 1,j = λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j Then at γ 1 λ ([u 1,i ] i , [v s,1 ] s ,w), ∂ 2 L ∂µ −1,i ∂µ −1,j = ∂ ∂u −1,i + ∂ ∂u 1,i ∂L(x) ∂u −1,j + ∂L(x) ∂u 1,j = ∂ 2 L(x) ∂u −1,i ∂u −1,j + ∂ 2 L(x) ∂u −1,i ∂u 1,j + ∂ 2 L(x) ∂u 1,i ∂u −1,j + ∂ 2 L(x) ∂u 1,i ∂u 1,j = λ 2 A 1,1 i,j + λB 1 i.j + λ 2 A 1,1 i,j + λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + (1 − λ) 2 A 1,1 i,j + (1 − λ)B 1 i.j + (1 − λ) 2 A 1,1 i,j = A 1,1 i,j + B 1 i.j + A 1,1 i,j ∂ 2 L ∂µ 1,i ∂µ 1,j = α ∂ ∂u −1,i − β ∂ ∂u 1,i α ∂L(x) ∂u −1,j − β ∂L(x) ∂u 1,j = α 2 ∂ 2 L(x) ∂u −1,i ∂u −1,j − αβ ∂ 2 L(x) ∂u −1,i ∂u 1,j − αβ ∂ 2 L(x) ∂u 1,i ∂u −1,j + β 2 ∂ 2 L(x) ∂u 1,i ∂u 1,j = α 2 λ 2 A 1,1 i,j + λB 1 i.j + λ 2 A 1,1 i,j − αβ λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j − αβ λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + β 2 (1 − λ) 2 A 1,1 i,j + (1 − λ)B 1 i.j + (1 − λ) 2 A 1,1 i,j = αβB 1 i.j ∂ 2 L ∂µ −1,i ∂µ 1,j = ∂ ∂u −1,i + ∂ ∂u 1,i α ∂L(x) ∂u −1,j − β ∂L(x) ∂u 1,j = α ∂ 2 L(x) ∂u −1,i ∂u −1,j − β ∂ 2 L(x) ∂u −1,i ∂u 1,j + α ∂ 2 L(x) ∂u 1,i ∂u −1,j − β ∂ 2 L(x) ∂u 1,i ∂u 1,j = α λ 2 A 1,1 i,j + λB 2 i.j + λ 2 A 1,1 i,j − β λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j + α λ(1 − λ)A 1,1 i,j + λ(1 − λ)A 1,1 i,j − β (1 − λ) 2 A 1,1 i,j + (1 − λ)B 2 i.j + (1 − λ) 2 A 1,1 i,j = 0 ∂ 2 L ∂ν s,∂L(x) ∂v t,−1 − ∂L(x) ∂v t,1 = ∂ 2 L(x) ∂v s,−1 ∂v t,−1 − ∂ 2 L(x) ∂v s,−1 ∂v t,1 + ∂ 2 L(x) ∂v s,1 ∂v t,−1 − ∂ 2 L(x) ∂v s,1 ∂v t,1 = E s,t 1,1 + E s,t 1,1 − E s,t 1,1 + E s,t 1,1 + E s,t 1,1 + E s,t 1,1 − E s,t 1,1 + E s,t We also need to consider the second derivative with respect to the other variables ofw. If w is closer to the output than [u p,i ] p,i , [v s,q ] s,q belonging to layer γ where γ > l + 1, then we get
[Context.] The success of deep learning makes its usage more and more tempting in safety-critical applications. However, such applications have historical standards (e.g., DO178, ISO26262) which typically do not envision the usage of machine learning. We focus in particular on the traceability of software artifacts, i.e., code modules, functions, or statements (depending on the desired granularity). [Problem.] Both code and requirements are a problem when dealing with deep neural networks: code constituting the network is not comparable to classical code; furthermore, requirements for applications where neural networks are required are typically very hard to specify: even though high-level requirements can be defined, it is very hard to make such requirements concrete enough that one could qualify them as low-level requirements. An additional problem is that deep learning is in practice very much based on trial and error, which makes the final result hard to explain without the previous iterations. [Proposed solution.] We investigate which artifacts could play a role similar to code or low-level requirements in neural network development and propose various traces which one could possibly consider as a replacement for classical notions. We also propose a form of traceability (and new artifacts) in order to deal with the particular trial-and-error development process for deep learning.
In general, the safety of DNNs is commonly recognized as a huge challenge @cite_1 . There are more and more attempts towards the certification, verification, or explainability of DNNs, of which we now provide a short overview. None of them, however (nor, as far as we know, any other work), addresses the traceability of DNNs.
{ "abstract": [ "We propose a methodology for designing dependable Artificial Neural Networks (ANN) by extending the concepts of understandability, correctness, and validity that are crucial ingredients in existing certification standards. We apply the concept in a concrete case study in designing a high-way ANN-based motion predictor to guarantee safety properties such as impossibility for the ego vehicle to suggest moving to the right lane if there exists another vehicle on its right." ], "cite_N": [ "@cite_1" ], "mid": [ "2753457114" ] }
Traceability of Deep Neural Networks
The success of deep learning (DL), in particular in computer vision, makes its usage more and more tempting in many applications, including safety-critical ones. However the development of such applications must follow standards (e.g., DO178 [1], ISO26262 [2]) which typically do not envision the usage of machine learning. At the moment, practitioners therefore cannot use machine learning for safety-critical functions (e.g., ASIL-D for ISO26262, or DAL-A for DO178). There exist various attempts to address this issue whether in standardization committees (e.g., ISO/IEC JTC 1/SC 42 or DKE/DIN [3]) or in the academic community (various initiatives towards explainable AI, e.g., [4]), but they are all far from mature and definitely not usable as of today or do not really address the problem: most standardization approaches just try to map one-to-one classical software engineering processes like the V-model to deep learning. Furthermore, no academic solution, at the moment, provides a solution to the lack of understandability of deep neural networks (DNN). In this paper, we try to find a pragmatic approach, which focuses on artifacts rather than on processes: we are not prescriptive regarding the activities which produced these artifacts. More precisely, we focus only on artifacts which are worth being identified during the development of DNNs for the sake of traceability. Consequently, this paper does not provide a ready-made solution, which a practitioner could follow one-to-one. However, it provides concrete descriptions which should at least be sufficient to provide a first guidance. We restrict the scope of this paper to the following: • Deep neural networks for supervised learning (no reinforcement learning, no unsupervised learning). • We focus only on software, not on system: traces from software requirements to system requirements are out of scope, as well as FMEAs or failure rates. • We do not focus on binary code or deployment thereof on hardware platform. • We assume a fixed, non-evolving, dataset: this does not comply with most real cases in, say, autonomous driving, where data is continuously collected. Even if not continuously collected, the dataset has so much influence on the training that one can hardly ignore these evolutions for proper traceability. Still, there are already sufficiently many questions to address without considering this evolution, which is why we leave this out of focus in this paper. • We focus essentially on functional requirements. Lifting these restrictions is left to future work. The rest of the paper is organized as follows: Section II presents related work. Section III recalls the concept of traceability. Section IV provides a traceability-amenable presentation of deep learning. Section V contains the main contribution of this paper: it analyzes which DNN artifacts replace classical software artifacts and suggests new artifacts and traces to enable the traceability of DNNs. Section VI identifies various gaps of the present work for future research. Finally Section VII summarizes the paper. III. PRELIMINARY: TRACEABILITY It is very difficult (at least nowadays) to find engineers or researchers who know both safety-aware software engineering and deep learning. 1 This paper really attempts to answer a problem which lies at the intersection of two communities and tries therefore to be self-contained for both. Consequently, we first recall the concepts and terminology related to traceability, as used in this paper. 
This should be a trivial reading for the safety-critical systems software engineer, but we do recommend reading it to ensure that the terminology is clear in the rest of the paper. Even though not a proper formal source, we still recommend Wikipedia [19] on this topic. A. Artifacts When developing classical software, the only product to deliver is executable code. One might provide also source code if the software is open source; the software itself might be part of a bigger system if it is embedded; but, all in all, from the perspective of the software engineer, one just needs to deliver some sort of executable. For safety critical systems, this is not enough: one needs to deliver not only the executable code itself, but also a justification that the executable code indeed does what it is supposed to do or that it is resilient to faults. Such a justification is provided in the form of documents, source code, executable, etc., which are not intended for the final consumer, but for the authority (whether it is an independent authority or a company-internal one) in charge of validating the safety of the product. We call these (development) artifacts. One such essential document is the one describing requirements: requirements describe what the software is supposed to do, without providing implementation details. In many nonsafety critical applications, requirements are expressed in a very unstructured manner, e.g., in some statement of work, in an issue tracker, or in slides communicated from the client. In safety critical applications however, it is essential to have these requirements in a way that they can be structured, referenced, or even categorized. For instance: functional requirements describe the expected function of a component, timing requirements describe the temporal constraints for a given function, interface requirements describe the input/output types of a component. Requirement documents found in safety-critical industry typically use dedicated software like IBM Rational DOORS. Example 1 (Functional requirement [20]): The [system] shall be capable of predicting the paths of the subject vehicle as well as principal other vehicles in order to identify the vehicle(s) whose path(s) may intersect with the subject vehicles path. Requirements are only one sort of document among many to be provided: source code, test results, development plans or any other sort of document which turns out to be necessary to justify that the final software can be used in a safety-critical system. This list is non-exhaustive and typically defined by a standard, like ISO26262 [2] or DO178C [1]. B. Traces The delivered artifacts generally have dependencies between each other: typically, the source code should fulfill the requirements, software requirements should refine system requirements, executable code derives from source code. Keeping these dependencies implicit increases a lot the risk that a dependency be wrong or forgotten. This is the purpose of traces to make these dependencies explicit. Every pair of artifacts is in principle subject to being traced from/to each other. In this paper we consider especially traces from code to requirements. Example 2: As an example, consider a requirement (defined in some document, e.g., a Word document or a DOORS database) being identified with an ID, say REQ 123 ; take then a piece of code defining many functions, one of them -say f 456 -implementing REQ 123. 
Then a trace is typically nothing more than a comment just before the function, simply stating [REQ 123]:
1  // f456 takes as arguments:
2  //  - x: ...
3  //  - y: ...
4  // It returns ...
5  //
6  // [REQ 123]
7  int f456(int x, float y) {
8    ... }
The trace is the comment on line 6. Another typical example is a trace between a test case and a requirement: it is important to ensure that the test cases indeed support the verification of requirements and that no requirement is forgotten. Even further, it is essential to also trace the results of the tests to the test cases themselves, to ensure that the tests are indeed done and maintained. Writing down a trace is in general a manual activity: engineers look up the code and the requirements and manually add the comment above. C. High- vs Low-level requirements In many cases, requirements are not concrete or precise enough to be traced directly with the above level of granularity (see Example 1). Therefore, it is often recommended to first refine the requirements into more concrete requirements, which can be traced from the code. These artifacts can have different denominations. For instance, the standard for the development of software for civil avionics (DO178C [1]) names them high-level and low-level requirements (HLR/LLR) respectively (but the concept is transferable to other standards and domains), with the following definition for LLR: "Low-level requirements are software requirements from which Source Code can be directly implemented without further information." [1]. LLR should themselves be traced to HLR in order to have complete traceability. Note that the definition of HLR and LLR is not absolutely clear: we encountered examples where some requirements were considered as high-level by one company and low-level by another. In general, refining HLR into LLR goes hand in hand with architectural decisions: the requirements can be decomposed only once the function is decomposed into smaller functions, to which one can assign more concrete requirements. This is why the DO178C, for instance, refines the HLR into two artifacts: the LLRs on one hand, and the Software Architecture on the other hand. More concretely, the software architecture defines a set of components and connections between these components, or, more precisely, a set of interfaces (i.e., data types for inputs and outputs), since the software architecture does not cover the implementation of the components. Interfaces typically contain even more information, like the physical units of the types (e.g., meters, centimeters, meters per second) or, if relevant, refresh rates. The LLRs can then be mapped to each interface. Finally, the LLR and the software architecture are the only information necessary to write down the source code. Whether defined in a requirement or separately, there is always a definition of interfaces. In the following, we will generically refer to such a definition as an interface requirement. Fig. 1 represents the artifacts mentioned above. Of course, every artifact refining a previous one shall be traced to the latter; this tracing should typically be bi-directional: every piece of information found in a refined artifact shall be found in the corresponding refining artifact, and conversely, every piece of information (except design decisions) found in a refining artifact shall be found in the refined one. In the DO178, the software architecture is not traced back to the HLR because it is a design decision. 
The figure also presents the test artifacts: test cases shall be traced as well to requirements (high-or low-level depending on the context), and test results shall be traced to test cases. D. Rationale Understanding the rationale behind traces 1. enables to understand why it is challenging to trace DNNs, and 2. gives hints to investigate relevant alternatives to classical traces. A trace serves the purpose of ensuring that a piece of code is justified by a requirement. This is not a structured or formal justification, which are in practice seldom applicable, however it at least enforces that people think about this justification. In fact, traceability does enable to identify sources of error: when an engineer attempts but does not manage to trace a piece of code then they might indeed get aware that this code is not necessary or, even worse, that it introduces unwanted functionality. Conversely, if a requirement is not traced back by any code, then it is often a hint that the requirement has been forgotten. For the same reason, traceability is a relevant tool for assessors in order to detect potential pitfalls during development. This is what is illustrated in Fig. 1 by the bidirectional arrows for traceability: having traces syntactically on each side is easy; it is however harder to ensure coverage of traceability on both sides, e.g., all HLR are traced to some LLR and all LLR are traced back to some HLR (the latter typically does not happen since some LLRs depend on design decisions). E. Process vs Artifacts Many standards like, e.g., DO178C, do not impose an order on how artifacts shall be developed. For instance, even though code shall be traced to requirements, it does not mean that one is forced to follow a waterfall model: one might just as well work iteratively, define requirements, then code, then go back to requirements, etc. The main point of traceability is that, no matter how one reached the final state of development (e.g., iteratively or waterfall), it should be possible to justify that this final state is coherent. Consequently, one might very well develop all the artifacts without traceability, and only at the end develop the traces. 4 This is why we emphasized in introduction that this paper is not process-but artifact-oriented: we do not impose how engineers should work but only what they should deliver. IV. DEEP LEARNING ARTIFACTS This section presents the concepts and terminology related to deep learning, in a way which makes it amenable to comparison with the artifacts of the previous section. To implement a required function using a DNN, one collects a lot of data matching as an input with their corresponding expected outputs (the outputs are not collected but typically manually annotated). This data is then used by a given DL framework to train the network. Examples of such frameworks are TensorFlow [24] or PyTorch [25]. A typical example of a function where one would want to use a DNN is the following: "given an image represented by a matrix of pixels, return a list of 4-tuples (x, y, w, h) representing rectangles which contain pedestrians" (such rectangles are typically called bounding boxes). One might require to identify various classes of objects (e.g., pedestrian, car, bikes) and to associate every bounding box with a label indicating to which class the object belongs, see Fig. 2. Fig. 2. 
Bounding boxes (image taken from [26]). To teach a DNN, one needs the following:
• A dataset containing both the input, e.g., images, and the output, e.g., annotations denoting bounding boxes for pedestrians' positions. In the following, we will consider these two aspects separately: the raw dataset, i.e., only the input, and the labels, i.e., only the corresponding expected output.
• A deep neural network architecture. Prior to learning time, one cannot really consider that there is an actual neural network, but rather a skeleton thereof: the learning process will fill in this skeleton (more specifically, the weights), and doing so will generate the neural network used for the required function (in practice, the skeleton is actually randomly pre-filled and the random weights are progressively changed during learning). Such a skeleton is however more than just a box: the deep learning engineer decides on the shape of this skeleton, which does influence the learning process. A DNN architecture is typically designed as a layer-based architecture where the input, represented as a (potentially huge) vector or matrix (e.g., 640 × 480 × 3 for an image of width 640, height 480 and with three color components R, G and B), flows through various massively parallel operations transforming it until the output has the expected form (e.g., a vector containing 3 real numbers between 0 and 1, each indicating the confidence of the image containing a pedestrian, a car or nothing). The engineering then amounts to designing this architecture, meaning defining these operations: their types and the dimensions they transform their input into. See Fig. 3 for an example.
• A loss function. To train a DNN, one must have a way to tell the machine learning framework when the DNN is wrong, in order to correct it. In theory, this is easy: if the network provides a wrong answer to an input for which we know the correct answer, then we just tell the framework what the right answer was. However, in practice, the functions addressed with DNNs typically output a confidence rather than a perfect answer. One should therefore be more subtle than just telling "right" or "wrong". Instead, one can provide a positive real number, basically a grade, telling how wrong the system is. If the number is zero, then there is no error; otherwise, the higher the number, the more important the error (in many cases, the objective is only to minimize the loss, not necessarily to nullify it). Consequently, a loss function takes the expected and actually obtained results of the DNN as inputs, and returns a real number stating how bad the difference between both is. Mathematically, distances make good candidates for losses.
Example 3: Example of a loss function: L(θ) = ‖y_pos − ŷ_pos‖₂ − Σ_{c∈C} log(y_c) · ŷ_c, where:
- θ denotes the set of all parameters of the DNN (i.e., its weights); contrary to what one would mathematically expect, θ appears only implicitly on the right-hand side: rigorously, one should write y as the result of applying the function represented by the DNN, as parametrized by θ, but we follow the conventions used in the classical DL literature,
- y_pos (resp. ŷ_pos) denotes the position of an inferred bounding box (resp. the actual, labelled, position of the bounding box, i.e., the ground truth),
- ‖·‖₂ denotes the L2 norm,
- C is the set of classes considered in the problem at hand, e.g., {pedestrian, car, cyclist},
- y_c (resp. ŷ_c) denotes the class assigned to the inferred bounding box (resp. the actual, labelled, class of the bounding box), via a so-called one-hot encoding, i.e., a vector whose size is the number of classes, where each element contains a real number between 0 and 1 assessing the confidence of belonging to the corresponding class.
We leave it to the reader to observe the variation of the function depending on the error of the network (or lack thereof). 
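As an illustration only, the loss of Example 3 could be written roughly as follows with PyTorch; the tensor shapes, variable names and the small numerical-stability constant are assumptions made for this sketch, not part of the example above.
import torch

def example_loss(y_pos, y_pos_gt, y_cls, y_cls_gt):
    """Sketch of the loss of Example 3 for a batch of N boxes (assumed shapes).

    y_pos:    inferred box positions, shape (N, 4)
    y_pos_gt: labelled box positions (ground truth), shape (N, 4)
    y_cls:    inferred class confidences in [0, 1], shape (N, |C|)
    y_cls_gt: labelled classes, one-hot encoded, shape (N, |C|)
    """
    # L2 distance between inferred and labelled positions: ||y_pos - y_pos_gt||_2
    position_term = torch.norm(y_pos - y_pos_gt, p=2, dim=1)
    # Cross-entropy-like term: -sum_c log(y_c) * y_hat_c (epsilon added for stability)
    class_term = -(y_cls_gt * torch.log(y_cls + 1e-9)).sum(dim=1)
    # Average over the batch; in practice the two terms are often weighted.
    return (position_term + class_term).mean()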
In practice, the loss function is expressed using code: this code does not go in the final product but controls the learning of the DNN within the DL framework. As we will see, it will be essential for the rest of the paper not just to understand the artifacts themselves, but how they are developed. Typically, the sequence of decisions is as follows:
1) Collect data and, possibly, preprocess it: re-shape the information, fix missing values, extract features, or achieve much more advanced tasks like matching label ground-truth boxes to so-called prior boxes [28] (we do not focus on this activity in this paper). Delivered artifacts: raw dataset, preprocessing functions.
2) Annotate the raw data. Delivered artifact: labelled dataset.
3) Split the dataset into training, validation and testing sets. Delivered artifacts: labelled training-, validation- and testing-datasets. The difference between the validation and testing datasets is that, after evaluating the DNN on the validation dataset, the engineer will take the result into account as feedback to improve their design. When done and no more correction is planned, the engineer will assess the quality of their DNN with the testing dataset. This should not entail further iterations of the design (see step 10). Note that the terms testing dataset and validation dataset are sometimes exchanged in the literature.
4) Design the DNN architecture. Delivered artifact: DNN architecture (typically as python code making use of the selected framework).
5) Define the "learning configuration": this includes picking a loss, picking learning parameters (e.g., dropout [29], learning rate, maximum learning steps), or search strategies for these hyper-parameters (e.g., grid or random search), or even strategies involving the exploration of the dataset itself (e.g., curriculum learning). This learning configuration is a placeholder artifact for all aspects which potentially influence the learning process, e.g., the used version of the various software dependencies or the used random seeds. We do not make this list exhaustive since this is not the focus of this paper. Overall, the configuration shall be understood as the minimal piece of information such that the tuple [training set, architecture, learning configuration] characterizes the learned DNN uniquely. This requirement aims at ensuring the reproducibility of the learning. We are on purpose quite vague on this matter because reproducibility is much harder to reach than one might think: not only do random seeds influence the learning process, but potentially also the operating system, the library versions and even the hardware, which might, e.g., swap instructions differently and non-deterministically. Delivered artifact: typically not "one" artifact but rather various pieces scattered across different artifacts, e.g., fine-tuning parameters stored in code, the loss having its own source file, etc. Ideally, this could be gathered in some configuration files, as provided by some DL management platforms [30].
6) Train the DNN architecture with the loss function on the training dataset using a selected deep learning framework. 
Delivered artifact: (trained) weight values. Note that the artifact is not the code, which, per se, is not different before and after training: the learning process alters the values of the variables used by the code, not the code itself. Consequently, the artifact is actually the resulting information stored in those variables.
7) Post-process the trained DNN (if necessary): many learning strategies require a change between the learning and inference phases (e.g., dropout is applied only during learning). Delivered artifact: inference architecture. In that case, it is the opposite of the previous step: the code changes but the data does not. Note however that in most cases, the switch from the learning architecture to the inference one is so standard and systematic that there is no need for any separate artifact: typically, a DL framework will simply provide an optional argument which one shall set to true in learning mode or to false in inference mode.
8) Test the resulting DNN on the validation dataset. Delivered artifact: test results (e.g., a metric like accuracy in the form of a number between 0 and 1), typically stored in a variable of the python runtime, or in a log file, or in a CI/CD system, if any.
9) Change the architecture or the learning configuration (steps 4-5) based on the results and repeat steps 6-9 until the targeted objectives are reached.
10) Assess the quality of the inference DNN with the test set. Delivered artifact: final validation results.
11) Depending on the used framework, serialize/export the network in order to use it in production, e.g., to be linked from a C++ source file, and compile it. Delivered artifact: executable code usable in production.
Quite similarly to code development, the process yielding the finally delivered DNN is a typical trial-and-error process. There is a major difference though: code resulting from a trial-and-error process can still be understood. This is typically not the case for DNNs: often, the only way to understand why a given architecture is finally obtained is by looking back at the changes which led to it. This has of course a big impact on the justifiability of a DNN and therefore on its traceability. We will get back to that point in Section V-B. Note that steps 1 and 7 are not duals: the former is a preprocessing of the data, which must therefore also happen at runtime; while the latter is a post-processing of the DNN itself, which therefore happens once and for all at design time and is not repeated at runtime. Fig. 4 summarizes the DL artifacts in a similar way to Fig. 1. Note again that this does not denote a process, but really a set of delivered artifacts: no sequence is imposed on the order in which the artifacts are developed. In particular, it is strongly to be expected that, once a developer decides to implement a function using a DNN, additional requirements (called "derived" in the DO178) might have to be added a posteriori: the choice of using DL as a technology might indeed entail new considerations at the requirement level. How can we map the DL artifacts presented in Section IV to the classical ones presented in Section III? First notice that both sections are not exactly targeting the same level of granularity: Section IV did not mention requirements, but there are of course requirements when developing DNNs for safety-critical systems. Contrary to classical software, however, we believe that requirements implemented with DNNs generally cannot be refined into a software architecture and an LLR. 
This is not particularly a property of DNNs per se, but rather of the functions for which it makes sense to use DNNs: most applications for which DNNs are used successfully compared to classical methods, are applications where humans have difficulty decomposing the problem in hierarchical simpler sub-problems. One can even interpret the success of DNNs precisely under this angle: the learning activity does not just learn a solution to the problem, but also learns its own decomposition of the problem. With respect to requirements, this supports the claim that applications where DNNs are useful are precisely those where it is very hard to come up with a decomposition of HLR into LLR: refining HLR into LLR is intrinsically difficult -otherwise one could most probably use a classical (i.e., non-DNN) method. Consequently the only artifacts between the HLR and the source code are all the inputs to the DL framework: architecture, learning configuration and, of course, training dataset. High-level tests are now replaced by the testing/validation set: the name differs but the role is the same. Let us analyze how artifacts from Fig. 1 map to the ones of Fig. 4 in order to highlight similarities and differences: • System requirements, HLR, tests cases, test results and executable code are found in both cases. • As hinted by Fig. 4, the source code is still present but it is split between the architecture part and the weights part. • As mentioned above, the LLR and software architecture cannot really be mapped to the DL artifacts, unless one maps them to the complete design block, which does not bring anything. When it comes to traceability, traces between preserved artifacts are maintained. Traces between source code and object code can also be considered as preserved since these traces basically amount to trace the code generated by the compiler back to the source code: this is not different for DNNs and for classical software. However traces between HLR and Design, and Design and Source code shall be adapted. The next sections are dedicated precisely to these traces. More precisely, we need to consider traces between: 1) HLR and training dataset, 2) HLR and learning configuration, 3) HLR and architecture, 4) training dataset and source code, 5) learning configuration and source code, 6) architecture and source code. For the source code, one can differentiate inference architecture and learnt weights. Inference architecture simply can be traced trivially to the design architecture (when it is not the same artifact anyway as mentioned earlier) and to no other design artifact. The next section deals with traces between HLR and training dataset, the following section deals with all other traces. A. Traceability between HLR and training dataset Traces between HLR and dataset may seem simple: one just needs to trace every element of the dataset to the HLR. Some aspects are easy to think of tracing: the type of the raw data can be traced to the input definition in the interface requirement, or the type of the labels can be traced to the output definition. This sort of traceability can be targeted but we believe that it is too trivial to support the identification of any relevant problems: type mismatches between dataset and interface are not real sources of problem in practice. 
In addition, any such problem typically breaks anyways during integration of the DNN components with the rest of the system, so that there is no real possibility of encountering such an error when delivering a safety-critical system. We still go into more details about it in Appendix A in case the reader finds the problem relevant to their particular use case. Let us focus rather on the traceability of every piece of data to HLR, e.g., "The function shall recognize obstacles in urban context", "The function shall recognize obstacles by nice weather". In principle, it is simple to trace the dataset to such requirements: e.g., pictures in the dataset taken by nice weather shall be traced to the corresponding requirement, pictures in urban context as well, etc. However, the sort of information usually found in an HLR often applies uniformly to all elements of a dataset: e.g., if the function shall work only in urban context then all images of the dataset will be urban. This would entail tracing the entire dataset to the HLR, which would be so general that it would not really support the rationale of tracing: tracing the entire dataset to the HLR does not really provide a justification of this particular dataset. Instead, one expects every datum to be justified individually and therefore to be traced potentially differently from another datum. At that stage, we recommend developing the interface requirement much more than it usually is, in addition to the types and units of the inputs/outputs, it should describe in a detailed manner the output and -especially -input domain, with the purpose of defining what is an acceptable coverage of the domain. This can be done either as a requirement among the HLR or as a separate artifact, which we call "domain coverage model". Getting back to the example above, "urban" is not enough: one should actually detail which different forms of environment are encountered in an urban environment, e.g., "one-way street", "roundabout", etc. (of course, in that case, the input domain coverage model connects strongly to the Operational Design Domain -ODD -but it needs not be the case if the function to be performed by the DNN does not directly work with data coming from the sensors). These should be themselves traced towards higherlevel requirements, e.g., system-level requirements: this might even be a useful tool to identify misunderstandings regarding the environment, e.g., imagine a portion of highway which is within the limits of a city: is it urban or not? If working in a very structured context, e.g., where modelbased requirements engineering is used (see, e.g., [31]), the domain coverage model could really be formalized to some extent, via coverage criteria on the domain coverage model. In such cases, this activity comes in close connection to modelbased testing [32], the main difference with these classical approaches being merely the size of the model, which is typically huge in DL, much bigger than for classical approaches. Similar approaches have been carried out in machine learning in the literature, see e.g., [33], to a much smaller and less systematic extent. Note finally that, from a control engineering perspective, this is a bit similar to modelling the plant of a controller. Contrarily to a controller, the resulting NN is not analyzable. The domain coverage model plays thus an even more important role, which therefore justifies that it becomes a first-class citizen w.r.t. traceability. 
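As an illustration only, a domain coverage model can be as small as an explicit enumeration of the situations that refine the HLR-level environment description, against which every datum is traced; the dimensions, situation names and thresholds below are purely hypothetical, and the report logic is a sketch, not a prescription.
# Hypothetical domain coverage model: each dimension enumerates the
# situations into which the HLR-level environment description is refined.
DOMAIN_COVERAGE_MODEL = {
    "road_type": {"one_way_street", "roundabout", "intersection", "highway_within_city"},
    "weather": {"sunny", "rain", "fog"},
    "lighting": {"day", "dusk", "night"},
}

def coverage_report(dataset_metadata, min_count=50, max_share=0.5):
    """Trace every datum to the situations it covers and flag suspicious cells.

    dataset_metadata: one dict per datum, e.g.
      {"id": "img_0001", "road_type": "roundabout", "weather": "rain", "lighting": "day"}
    min_count and max_share stand for project-specific thresholds for
    "very few" and "too many" pieces of data tracing to one situation.
    """
    total = len(dataset_metadata) or 1
    report = {}
    for dimension, situations in DOMAIN_COVERAGE_MODEL.items():
        for situation in situations:
            ids = [d["id"] for d in dataset_metadata if d.get(dimension) == situation]
            if len(ids) < min_count:
                status = "under-covered: complete the dataset or coarsen the model"
            elif len(ids) / total > max_share:
                status = "too generic: refine this situation further"
            else:
                status = "ok"
            report[(dimension, situation)] = (len(ids), status)
    return report
Which thresholds make sense, and whether such a model should live among the HLR or as a separate artifact, remains a project-specific decision, as discussed next.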
Note that it is typically very hard for a requirement engineer to know beforehand which level of granularity to put in such an input domain coverage model. Actually, the level of granularity probably depends on the dataset itself, and can thus be identified only once the dataset is already (at least partially) present: this is counter-intuitive regarding the usual notion of requirement (even though it matches the practice thereof: requirements are never perfect from the beginning; they always need iterations). However, remember that we do not focus on the order in which artifacts are delivered but only on ensuring their mutual consistency. In this respect, it is acceptable to generate or modify such a requirement a posteriori. To find out the proper level of granularity, one shall keep in mind that such a domain coverage model shall serve as a tool to analyze the dataset by justifying why a particular datum is in there, and by identifying cases where some situation might not be covered. Consequently, if too many pieces of data trace to the same environment requirement, then this environment requirement probably does not serve its purpose. Conversely, if very few pieces of data trace to one environment requirement only, then either this requirement is too specific or the dataset needs to be completed. Defining "too many" or "very few" is beyond the scope of this paper, but should of course be defined in a rigorous manner depending on the context. If the domain coverage model is defined with a very low level of granularity, then we have the above situation that traceability becomes useless because it applies equally to the entire dataset. On the other hand, if the domain coverage model is defined with a very high level of granularity, then its coverage is probably not reachable, as displayed in Fig. 4: the traceability arrow between HLR and dataset is not bidirectional. Note finally that, even though the discussion above targets especially the raw dataset, the same applies to the labels if their domain is complex enough: for instance, if the DNN shall provide the position of a pedestrian, then it is important to ensure that the domain of positions is adequately covered. Fig. 5 updates Fig. 4 to reflect the new artifact and the corresponding traceability. The following traces remain: 1) HLR and learning configuration, 2) HLR and design architecture, 3) training dataset and learnt weights, 4) design architecture and learnt weights, 5) learning configuration and learnt weights. Even if simple to implement, a first essential trace is the one between the training dataset version and the learnt weights: indeed, it is easy in practice to lose track of which version of a dataset was used to train a given network. This trace requires no more than a unique identifier for a given version of the training dataset and a reference to this identifier in the trained DNN. For more meaningful traces, one can trace these artifacts just the same way as one does for classical software engineering: trace code to requirements. Since code has a very specific structure for DNNs, we can be a bit more precise: one can try tracing neurons to requirements. For instance, we could impose on the DL framework to keep track of which input datum impacted which neuron the most. This is precisely the approach briefly mentioned in [5]. Even though doable in theory, this approach brings nothing in practice: it is acknowledged as impossible, at least as of today, to interpret, understand or explain the role of one particular neuron. 
In addition, the sizes of the DNN and of the dataset are so huge that one cannot expect to extract a posteriori any useful piece of information out of them (though this might change in the future if explainable AI becomes successful). Consequently, this sort of trace will not fulfil the traceability rationale: if a reviewer inspects the involved artifacts in their state at the end of the project, they will understand neither them nor their connection to previous artifacts. Remark. Note that the problem is new, but also has well-known aspects to it: DNNs are, in essence, generated; therefore, like all generated code, they are much harder to understand and to trace than manually written code and thus cannot be trusted without further argumentation, which is why standards like DO330 exist [34]. Classically generated code can however usually be understood, which is not the case for DNNs, adding tremendously to the "classical" difficulty. Instead of waiting for explainable AI to provide solutions, we suggest in this paper to trace the engineers' decisions instead of the artifacts themselves: if artifacts are not understandable, engineers' decisions shall be. How do engineers come up with architectures or learning configurations? They essentially try them, test them, and try again until they cannot improve the results anymore. In other words, these decisions are intrinsically based on trial-and-error: see Fig. 6 for an illustration. Trial and error is usually not considered at the level of traceability: as mentioned earlier, it is rather the opposite; one expects from traceability that we can ensure the coherence of the artifacts in their final state, i.e., independently of how they were obtained, by trial-and-error or not. However, DNN development relies so much and so intrinsically on trial-and-error that we feel it necessary to embrace this kind of activity even for traceability. Future developments might provide more predictable and reproducible approaches to the development of DNNs, in which case the approach of the present section will become obsolete. At the moment, instead of simply avoiding this reality and hoping for techniques which might never come, we make an attempt at a pragmatic approach usable today. In the case of trial-and-error, the only justification that one can provide is that a given artifact is better than its previous version. Consequently, we propose to trace every new artifact obtained by trial-and-error to its previous version. The objective of the trace is to demonstrate that the new version improves upon the previous version. This requires storing not only the final artifact but also all the previous versions of it, or at least all those which are necessary to understand the artifact obtained at the end. It might sound like overkill, but note that it is actually standard to store previous versions of artifacts in the development of safety-critical systems (where it is often encountered under the term "configuration management") or, of course, for normal software with version control (even though it is usually restricted to source code, not binary artifacts). Pairing these classical techniques with traceability however forces the engineer to do more than just tagging a new version in their version control system: they must also think about the justification of the new increment. 
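To make this concrete, the following is a minimal sketch of what a traced trial-and-error increment could look like; the record fields, names and the quality metric (defined just below) are illustrative assumptions, not a prescribed format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DnnVersionRecord:
    """One entry in the trial-and-error chain of a DNN (illustrative fields)."""
    version_id: str                    # e.g., a git tag or experiment-tracker run id
    parent_version_id: Optional[str]   # None only for the initial, "primitive" DNN
    dataset_version: str               # unique identifier of the training dataset version
    learning_config_id: str            # identifier of the learning configuration artifact
    metric_value: float                # value of the project-wide quality metric (KPI)
    justification: str                 # why this increment was made and why it is better

def check_increment(child: DnnVersionRecord, parent: DnnVersionRecord) -> bool:
    """Reviewer-style check: a new version shall trace to and improve on its parent."""
    return (child.parent_version_id == parent.version_id
            and child.metric_value > parent.metric_value)
A reviewer can then walk such a chain backwards from the delivered DNN to its first version and check every increment, which is exactly the kind of justification that a classical trace would otherwise provide.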
Therefore, we suggest requiring developers to define a metric (or KPI) to measure the quality of the inference DNN, which they normally do anyway, though perhaps not always formally. Such a metric should not be the loss, but should be defined according to the actual goals that one plans to achieve with the function (e.g., a car can be mistaken for a pedestrian, but not the other way around). The metric can range from simple cases like accuracy and/or recall to complex combinations of functions [4]. As a new artifact, one must then explicitly store the values of this metric for a given DNN. Of course, this value shall be traced to the weight values and inference architecture with which it was obtained. The essential addition is then to require that every version of the network which is obtained by increment of a previous one shall be traced to the metric value obtained with this previous version: one can then easily check whether the new value is indeed an improvement. The same metric should be used to measure the quality of all the evolutions of the DNN. If it changes during the course of the project or is defined only a posteriori, then one needs to re-check the entire trial-and-error chain leading to the final version of the DNN. We summarize the change of artifacts in Fig. 7, which updates Fig. 5 to integrate trial-and-error. This whole process might sound like a big hindrance for the practitioner, but note that: 1. the problem of not providing a real argumentation for a so-called improvement is actually recognized as a problem, even by the machine learning community itself (see e.g. "Explanation vs Speculation" in [36]), and 2. it is still much easier to apply than any approach currently taken in the field of explainable AI. Our recommendation in its current state can easily be "tricked": nothing forces a developer to deliver the previous versions of their DNN; they can just claim that the version they delivered was the first version that they developed, which, by chance, was extremely good. A way to circumvent this is to impose some restrictions on the first delivered version, e.g., requesting that the first version shall belong to a catalogue of authorized "primitive" DNNs. A developer cannot then just deliver a complex DNN immediately, without tracing it to a previous primitive one. Primitive DNNs can be defined in various ways and the definition impacts various artifacts differently: this goes beyond the present paper but shall be investigated. Imposing a primitive catalogue is still not enough: imagine that an engineer developed a specific DNN classically (i.e., without following our recommendation of tracing the trial-and-error activities). Then, instead of going through the tedious work of analyzing the chain of increments which led to their final DNN until they reach their original "simple" DNN, they can just hide all the versions between the first and the last. In such a case, the last version appears as their first improvement, which allows them to claim that, by chance, their "first" improvement was the good one. Of course, this goes completely against the intent of our approach. To circumvent this, one should also restrict possible increments, or at least the justification for one increment. A naïve solution could be to have increments like adding only one layer at a time, having a default size for layers, etc. 
This might however be too restrictive in practice: some DNNs only show their benefits after having added a certain number of layers, but all the smaller versions with less layers are all equally bad. Investigating such restrictions in detail goes beyond the present paper. VI. FUTURE WORK This paper is, to our knowledge, the first to provide a precise list of traces which could potentially be written down for DNN. However, it does not address various development practices, which are encountered in real developments. Gap between trained and inference DNN. The process highlighted in Section IV assumes more or less implicitly that the interface of the trained DNN is the same as the one of the inference DNN. This assumption is often met (e.g., when using dropout the output type of the trained and inference DNN is the same) but not always: e.g., one might, even in a supervised context, train a sub-part of the final network in an unsupervised manner, for instance to learn valuable features (e.g., latent space of an auto-encoder [37]). One might also train a DNN on a separate dataset or take a DNN already trained on another dataset (e.g., ImageNET for object detection [38]) then remove the latest layers (the most task-specific ones) to only adapt the DNN to the targeted functionality. In such cases, lots of intermediate steps are actually not immediately connected to the final task and therefore not traceable in the sense considered so far. We do not consider this sort of cases in this paper but insist on how essential they are: they reflect a reality of DL engineers which cannot be ignored. Dataset. Another important aspect that has been ignored in this paper, is the evolution of the dataset: we assumed that the dataset (or more precisely, the datasets: training, validation, testing) is fixed. As mentioned, this is common practice when considering traceability: we are normally only interested in the final artifacts (except in our case, exceptionally, for trial-anderror activities). However, in reality, many systems actually gather new data along their lifetime. Therefore, one may not ignore the fact that data evolves permanently all along the life cycle of the autonomous system. In such cases, one should consider a form of incremental traceability, i.e., how to trace new data as it comes along. One should especially probably trace differently training data from testing data. In particular, one might need to argue why adding a new datum indeed provides additional valuable information. To do so, a possibility is to develop dataset coverage models. Depending on the context, one might need to trace the dataset itself to the sources used to generate it since they influence the dataset a lot and therefore the training: sensors calibration setup, sensor driver versions, etc. Explainable AI. As mentioned from the beginning, we try in this paper to be independent of current approaches in the domain of explainable AI. We try in particular to be more pragmatic than academic. However, it is probably valuable to look more precisely into various approaches of explainable AI (see, e.g., [4] for a review) to discover new opportunities for relevant fine-granular traces. Classical AI. Various approaches attempt to mix deep learning with expert knowledge, e.g., by transferring existing expert knowledge to a neural network (e.g., transfer learning [39]) where the expert knowledge can be expressed through rules or other forms; or by intertwining machine learning with probabilistic modelling [40]. 
All these approaches are valuable from the point of view of AI research, but they are also very promising for safety-critical systems because they allow to control the machine learning process to some extent and therefore to argue better that the final behavior is indeed satisfying. In some sense, one can interpret this as a form of explainability-by-design. It would therefore be very valuable to consider how to trace these methods, in particular the newly induced artifacts (e.g., generative model in the case of probabilistic programming). Intellectual property. In domains like automotive or avionics, the development of the system is extremely distributed among various stakeholders: OEMs, tier 1, tier 2, or even tier 3 suppliers. In such cases, it is essential to deliver sufficient artifacts to guarantee safety, but it is also essential that every stakeholder keep their own intellectual property. This can be problematic for our approach to trial-and-error activities which forces practitioners to provide artifact evolutions which might reveal their production secrets. Similar problems exist for virtual validation and can been solved with approaches like the FMI standard [41]. It should in any case be investigated for the approach presented in this paper. VII. CONCLUSION In this paper, we addressed the traceability of neural network in a pragmatic manner: we first explicitly identified the challenge of tracing DNNs, then analyzed the parallels and differences between DNNs and classical software development, and proposed accordingly adaptations of the notion of trace for DNNs. Instead of blindly mapping classical software activities to DL activities, which would lead to mismatches with the actual practice of DL, we tried to embrace some of the specificities of "real-life" DL, in particular trial-anderror. We provided a solution (or the beginning thereof), which we believe supports both the rationale of traceability, while still being applicable for practitioners. The applicability might be controlled depending on the targeted safety level, as is classically done in safety-related software standards: for instance, one could require different coverage percentages for the domain coverage model whether the function is ASIL A, B, C, or D. Acknowledgments. The author thanks Frederik Diehl for his careful review and his wisdom in DL. This work is the realization of thoughts that were initiated during a World Café at the Auto.AI conference Europe, moderated by Håkan Sivencrona. Further remarks were added after presenting early results at the Vehicle Intelligence conference. Thanks go both to the organizers and participants of the conferences as well as to Håkan. APPENDIX A. Traceability of the dataset types As mentioned in Section V-A, we go in this section more in detail about the traceability of dataset to interface requirements: the dataset being basically a set of examples, it should match the types of the inputs/outputs and therefore be traced to the interface requirement. Concretely, this means tracing 1. the raw dataset, and 2. the labels. Both should be traced to the interface requirements: the raw dataset to the input part of it, the labels to its output part. For instance, if the interface requirement states that the input shall be images of dimension 640 × 480, then the raw dataset shall contain only such images, and shall therefore be traced to this input requirement. 
In case pre-processing is required, then there might not be a direct match, in which case the pre-processing function shall be mapped to the interface requirement. Various approaches might then be employed: the dataset itself might be traced to the post-processing function directly, or one might introduce new requirements (called derived in the DO178-C) defining the interface of the postprocessing function and then trace the dataset to this requirement. Or one might simply consider that the interface is a design decision, not to be traced (in DO178-C terminology: the interface definition would be part of the software architecture). In a dual manner, suppose the interface requirement specifies that the output type is "list of 4-tuples" -representing bounding boxes. Then every label is a list of bounding boxes. Like previously, the dataset can therefore be traced to this type definition. However, if the structure of the output type is more complex (typically, if it contains sum types, i.e., enumerations), then traces can be defined per datum instead. Suppose for instance, that the interface requirement (say "REQ 123") specifies the following output instead: 1) output shall be a list of pairs, 2) where the first element is a 4-tuple like previously, 3) but the second element is a record containing the following fields: a) "pedestrian", b) "bike", c) "vehicle", 4) and where each of the fields contain a real between 0 and 1, 5) such that the sum of all field numbers is 1. In such cases, the dataset can be traced as a whole to REQ 123-1 and REQ 123-2 since those parts of the type apply to every datum uniformly (more or less like before). On the other hand, for a given image, each label can be traced to REQ 123-3a, REQ 123-3b or REQ 123-3c: for instance, if an image is labeled as containing one pedestrian, and the label "pedestrian" shall be traced to REQ 123-3a. In such cases, we can trace every datum independently. 11 Conversely, if one element of the dataset also identifies "trucks", then this label is not traceable to the requirement, which denotes a potential addition of unintended functionality. Note that there might be reasons why wanting to have data with labels not supporting the requirements: e.g., reuse of some data used in another context, use of the same data for another function, or desire to label more "just in case". Depending on the developed system, such cases shall probably not be forbidden, but their presence might give a hint about potential unintended functionality, which should then probably be addressed. For instance, depending on the case, the dataset should be preprocessed: the unwanted label should be erased or merged into another label, or maybe even gives hint that the requirement itself is not complete. Our main point is that the lack of traceability provides a hint about potential design decisions.
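As a closing illustration of this per-datum tracing, the following sketch checks every label of a dataset against the class part of a requirement shaped like REQ 123-3 and reports labels that cannot be traced (e.g., a "truck" label); the requirement identifiers and the label format are assumptions made for this sketch.
# Hypothetical mapping between label names and the class fields of REQ 123-3.
REQ_123_3_CLASSES = {
    "pedestrian": "REQ_123-3a",
    "bike": "REQ_123-3b",
    "vehicle": "REQ_123-3c",
}

def trace_labels(dataset):
    """Trace each label to its requirement and collect labels with no trace.

    dataset: list of (image_id, labels) pairs, where labels is a list of
             (bounding_box, class_name) pairs -- an assumed format.
    """
    traces, untraceable = [], []
    for image_id, labels in dataset:
        for bounding_box, class_name in labels:
            requirement = REQ_123_3_CLASSES.get(class_name)
            if requirement is None:
                # Hint of potential unintended functionality (e.g., a "truck" label):
                # erase or merge the label, or revisit the requirement itself.
                untraceable.append((image_id, class_name))
            else:
                traces.append((image_id, class_name, requirement))
    return traces, untraceable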
The success of deep learning (DL), in particular in computer vision, makes its usage more and more tempting in many applications, including safety-critical ones. However the development of such applications must follow standards (e.g., DO178 [1], ISO26262 [2]) which typically do not envision the usage of machine learning. At the moment, practitioners therefore cannot use machine learning for safety-critical functions (e.g., ASIL-D for ISO26262, or DAL-A for DO178). There exist various attempts to address this issue whether in standardization committees (e.g., ISO/IEC JTC 1/SC 42 or DKE/DIN [3]) or in the academic community (various initiatives towards explainable AI, e.g., [4]), but they are all far from mature and definitely not usable as of today or do not really address the problem: most standardization approaches just try to map one-to-one classical software engineering processes like the V-model to deep learning. Furthermore, no academic solution, at the moment, provides a solution to the lack of understandability of deep neural networks (DNN). In this paper, we try to find a pragmatic approach, which focuses on artifacts rather than on processes: we are not prescriptive regarding the activities which produced these artifacts. More precisely, we focus only on artifacts which are worth being identified during the development of DNNs for the sake of traceability. Consequently, this paper does not provide a ready-made solution, which a practitioner could follow one-to-one. However, it provides concrete descriptions which should at least be sufficient to provide a first guidance. We restrict the scope of this paper to the following: • Deep neural networks for supervised learning (no reinforcement learning, no unsupervised learning). • We focus only on software, not on system: traces from software requirements to system requirements are out of scope, as well as FMEAs or failure rates. • We do not focus on binary code or deployment thereof on hardware platform. • We assume a fixed, non-evolving, dataset: this does not comply with most real cases in, say, autonomous driving, where data is continuously collected. Even if not continuously collected, the dataset has so much influence on the training that one can hardly ignore these evolutions for proper traceability. Still, there are already sufficiently many questions to address without considering this evolution, which is why we leave this out of focus in this paper. • We focus essentially on functional requirements. Lifting these restrictions is left to future work. The rest of the paper is organized as follows: Section II presents related work. Section III recalls the concept of traceability. Section IV provides a traceability-amenable presentation of deep learning. Section V contains the main contribution of this paper: it analyzes which DNN artifacts replace classical software artifacts and suggests new artifacts and traces to enable the traceability of DNNs. Section VI identifies various gaps of the present work for future research. Finally Section VII summarizes the paper. III. PRELIMINARY: TRACEABILITY It is very difficult (at least nowadays) to find engineers or researchers who know both safety-aware software engineering and deep learning. 1 This paper really attempts to answer a problem which lies at the intersection of two communities and tries therefore to be self-contained for both. Consequently, we first recall the concepts and terminology related to traceability, as used in this paper. 
This should be a trivial reading for the safety-critical systems software engineer, but we do recommend reading it to ensure that the terminology is clear in the rest of the paper. Even though not a proper formal source, we still recommend Wikipedia [19] on this topic. A. Artifacts When developing classical software, the only product to deliver is executable code. One might provide also source code if the software is open source; the software itself might be part of a bigger system if it is embedded; but, all in all, from the perspective of the software engineer, one just needs to deliver some sort of executable. For safety critical systems, this is not enough: one needs to deliver not only the executable code itself, but also a justification that the executable code indeed does what it is supposed to do or that it is resilient to faults. Such a justification is provided in the form of documents, source code, executable, etc., which are not intended for the final consumer, but for the authority (whether it is an independent authority or a company-internal one) in charge of validating the safety of the product. We call these (development) artifacts. One such essential document is the one describing requirements: requirements describe what the software is supposed to do, without providing implementation details. In many nonsafety critical applications, requirements are expressed in a very unstructured manner, e.g., in some statement of work, in an issue tracker, or in slides communicated from the client. In safety critical applications however, it is essential to have these requirements in a way that they can be structured, referenced, or even categorized. For instance: functional requirements describe the expected function of a component, timing requirements describe the temporal constraints for a given function, interface requirements describe the input/output types of a component. Requirement documents found in safety-critical industry typically use dedicated software like IBM Rational DOORS. Example 1 (Functional requirement [20]): The [system] shall be capable of predicting the paths of the subject vehicle as well as principal other vehicles in order to identify the vehicle(s) whose path(s) may intersect with the subject vehicles path. Requirements are only one sort of document among many to be provided: source code, test results, development plans or any other sort of document which turns out to be necessary to justify that the final software can be used in a safety-critical system. This list is non-exhaustive and typically defined by a standard, like ISO26262 [2] or DO178C [1]. B. Traces The delivered artifacts generally have dependencies between each other: typically, the source code should fulfill the requirements, software requirements should refine system requirements, executable code derives from source code. Keeping these dependencies implicit increases a lot the risk that a dependency be wrong or forgotten. This is the purpose of traces to make these dependencies explicit. Every pair of artifacts is in principle subject to being traced from/to each other. In this paper we consider especially traces from code to requirements. Example 2: As an example, consider a requirement (defined in some document, e.g., a Word document or a DOORS database) being identified with an ID, say REQ 123 ; take then a piece of code defining many functions, one of them -say f 456 -implementing REQ 123. 
Then a trace is typically nothing more than a comment just before the function simply stating [REQ 123]: 1 / / f 4 5 6 t a k e s a s a r g u m e n t s : 2 / / − x : . . . 3 / / − y : . . . 4 / / I t r e t u r n s . . . 5 / / 6 / / [ REQ 123 ] 7 i n t f 4 5 6 ( i n t x , f l o a t y ) { 8 . . . } The trace is the comment on line 6. Another typical example is a trace between a test case and a requirement: it is important to ensure that the test cases indeed support the verification of requirements and that no requirement is forgotten. Even further, it is essential to also trace the results of the tests to the test cases themselves to ensure that the tests are indeed done and maintained. Writing down a trace is in general a manual activity: engineers look up the code and the requirements and add manually the comment above. 2 C. High-vs Low-level requirements In many cases, requirements are not concrete or precise enough to be traced directly with the above level of granularity (see Example 1). Therefore, it is often recommended to first refine the requirements into more concrete requirements, which can be traced from the code. These artifacts can have different denominations. For instance, the standard for the development of software for civil avionics (DO178C [1]) names them highlevel and low-level requirements (HLR/LLR) respectively (but the concepts is transferable to other standards and domains), with the following definition for LLR: "Low-level requirements are software requirements from which Source Code can be directly implemented without further information." [1]. LLR should themselves be traced to HLR in order to have complete traceability. 3 Note that the definition of HLR and LLR is not absolutely clear: we encountered examples where some requirements were considered as high-level by a company and low-level by another. In general, refining HLR into LLR goes hand in hand with architectural decisions: the requirements can be decomposed only once the function is decomposed into smaller functions, to which one can assign more concrete requirements. This is why the DO178C, for instance, refines the HLR into two artifacts: the LLRs on one hand, and the Software Architecture on the other hand. More concretely, the software architecture defines a set of components and connections between these components -or, more precisely, a set of interfaces (i.e., data types for inputs and outputs), since the software architecture does not cover the implementation of the components. Interfaces typically contain even more information like the physical units of the types (e.g., meters, centimeters, meter per second), or, if relevant, refreshing rates. The LLRs can then be mapped to each interface. Finally, the LLR and the software architecture are the only information necessary to write down the source code. Whether defined in a requirement or separately, there is always a definition of interfaces. In the following, we will generically refer to such a definition as an interface requirement. Fig. 1 represents the artifacts mentioned above. Of course, every artifact refining a previous one shall be traced to the latter, this should be typically bi-directional: every piece of information found in a refined artifact shall be found in the corresponding refining artifact, and conversely, every piece of information -except design decisions -found in a refining artifact shall be found in the refined one. In the DO178, the software architecture is not traced back to the HLR because it is a design decision. 
The figure also presents the test artifacts: test cases shall be traced as well to requirements (high-or low-level depending on the context), and test results shall be traced to test cases. D. Rationale Understanding the rationale behind traces 1. enables to understand why it is challenging to trace DNNs, and 2. gives hints to investigate relevant alternatives to classical traces. A trace serves the purpose of ensuring that a piece of code is justified by a requirement. This is not a structured or formal justification, which are in practice seldom applicable, however it at least enforces that people think about this justification. In fact, traceability does enable to identify sources of error: when an engineer attempts but does not manage to trace a piece of code then they might indeed get aware that this code is not necessary or, even worse, that it introduces unwanted functionality. Conversely, if a requirement is not traced back by any code, then it is often a hint that the requirement has been forgotten. For the same reason, traceability is a relevant tool for assessors in order to detect potential pitfalls during development. This is what is illustrated in Fig. 1 by the bidirectional arrows for traceability: having traces syntactically on each side is easy; it is however harder to ensure coverage of traceability on both sides, e.g., all HLR are traced to some LLR and all LLR are traced back to some HLR (the latter typically does not happen since some LLRs depend on design decisions). E. Process vs Artifacts Many standards like, e.g., DO178C, do not impose an order on how artifacts shall be developed. For instance, even though code shall be traced to requirements, it does not mean that one is forced to follow a waterfall model: one might just as well work iteratively, define requirements, then code, then go back to requirements, etc. The main point of traceability is that, no matter how one reached the final state of development (e.g., iteratively or waterfall), it should be possible to justify that this final state is coherent. Consequently, one might very well develop all the artifacts without traceability, and only at the end develop the traces. 4 This is why we emphasized in introduction that this paper is not process-but artifact-oriented: we do not impose how engineers should work but only what they should deliver. IV. DEEP LEARNING ARTIFACTS This section presents the concepts and terminology related to deep learning, in a way which makes it amenable to comparison with the artifacts of the previous section. To implement a required function using a DNN, one collects a lot of data matching as an input with their corresponding expected outputs (the outputs are not collected but typically manually annotated). This data is then used by a given DL framework to train the network. Examples of such frameworks are TensorFlow [24] or PyTorch [25]. A typical example of a function where one would want to use a DNN is the following: "given an image represented by a matrix of pixels, return a list of 4-tuples (x, y, w, h) representing rectangles which contain pedestrians" (such rectangles are typically called bounding boxes). One might require to identify various classes of objects (e.g., pedestrian, car, bikes) and to associate every bounding box with a label indicating to which class the object belongs, see Fig. 2. Fig. 2. 
Bounding boxes (image taken from [26])

To teach a DNN, one needs the following:
• A dataset containing both the input, e.g., images, and the output, e.g., annotations denoting bounding boxes for pedestrians' positions. In the following, we will consider these two aspects separately: the raw dataset, i.e., only the input, and the labels, i.e., only the corresponding expected output.
• A deep neural network architecture. Prior to learning time, one cannot really consider that there is an actual neural network, but rather a skeleton thereof: the learning process will fill in this skeleton (more specifically, the weights), and doing so will generate the neural network used for the required function (in practice, the skeleton is actually randomly pre-filled and the random weights are progressively changed during learning). Such a skeleton is however more than just a box: the deep learning engineer decides on the shape of this skeleton, which does influence the learning process. A DNN architecture is typically designed as a layer-based architecture where the input, represented as a (potentially huge) vector or matrix (e.g., 640 × 480 × 3 for an image of width 640, height 480 and with three color components R, G and B), flows through various massively parallel operations transforming it until the output has the expected form (e.g., a vector containing 3 real numbers between 0 and 1, each indicating the confidence of the image containing a pedestrian, a car or nothing). The engineering then amounts to designing this architecture, meaning defining these operations: their types and the dimensions they transform their input into. See Fig. 3 for an example.
• A loss function. To train a DNN, one must have a way to tell the machine learning framework when the DNN is wrong, in order to correct it. In theory, this is easy: if the network provides a wrong answer to an input for which we know the correct answer, then we just tell the framework what the right answer was. However, in practice, the functions addressed with DNNs typically output a confidence rather than a perfect answer. One should therefore be more subtle than just telling "right" or "wrong". Instead one can provide a positive real number, basically a grade, telling how wrong the system is. If the number is null, then there is no error; otherwise, the higher the number, the more important the error. 5 Consequently, a loss function takes the expected and actually obtained results of the DNN as inputs, and returns a real number stating how bad the difference between both is. Mathematically, distances make good candidates for losses.

Example 3: Example of a loss function:

L(θ) = ‖y_pos − ŷ_pos‖₂ − Σ_{c ∈ C} ŷ_c · log(y_c)

where:
- θ denotes the set of all parameters of the DNN (i.e., its weights), 6
- y_pos (resp. ŷ_pos) denotes the position of an inferred bounding box (resp. the actual, labelled, position of the bounding box, i.e., the ground truth),
- ‖·‖₂ denotes the L2 norm,
- C is the set of classes considered in the problem at hand, e.g., {pedestrian, car, cyclist},
- y_c (resp. ŷ_c) denotes the class assigned to the inferred bounding box (resp. the actual, labelled, class of the bounding box), via a so-called one-hot encoding, i.e., a vector whose size is the number of classes, where each element contains a real number between 0 and 1 assessing the confidence of belonging to the corresponding class.

We leave it to the reader to observe the variation of the function depending on the error of the network (or lack thereof).
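To make Example 3 concrete, the following is a minimal sketch of how such a loss could be expressed in code (PyTorch is used purely as an illustration; the tensor names, shapes and batching are assumptions, not part of the original example):

import torch

def example_loss(y_pos, y_pos_hat, y_cls, y_cls_hat):
    # Sketch of the loss of Example 3 (illustrative only).
    # y_pos:     inferred box positions, shape (N, 4)
    # y_pos_hat: ground-truth (labelled) box positions, shape (N, 4)
    # y_cls:     inferred class confidences in (0, 1], shape (N, |C|)
    # y_cls_hat: ground-truth classes as one-hot vectors, shape (N, |C|)
    pos_term = torch.norm(y_pos - y_pos_hat, p=2, dim=1)    # ||y_pos - y_pos_hat||_2
    cls_term = -(y_cls_hat * torch.log(y_cls)).sum(dim=1)   # -sum_c y_hat_c * log(y_c)
    # The DL framework adjusts the parameters theta so as to minimize this value.
    return (pos_term + cls_term).mean()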
5 In many cases, the objective is only to minimize the loss, not necessarily to nullify it.
6 Contrarily to what one would mathematically expect, θ is present on the right-hand side, but only implicitly: rigorously, one should write y as the result of applying the function represented by the DNN, as parametrized by θ. We follow however the conventions used in the classical DL literature.

In practice the loss function is expressed using code: this code does not go into the final product but controls the learning of the DNN within the DL framework. As we will see, it will be essential for the rest of the paper not just to understand the artifacts themselves, but how they are developed. Typically, the sequence of decisions is as follows:
1) Collect data and, possibly, preprocess it: re-shape the information, fix missing values, extract features, or achieve much more advanced tasks like matching label ground truth boxes to so-called prior boxes [28] (we do not focus on this activity in this paper). Delivered artifacts: raw dataset, preprocessing functions.
2) Annotate the raw data. Delivered artifact: labelled dataset.
3) Split the dataset into training, validation and testing sets. Delivered artifacts: labelled training-, validation- and testing-datasets. The difference between the validation and testing datasets is that, after evaluating the DNN on the validation dataset, the engineer will take the result into account as feedback to improve their design. When done and no more correction is planned, the engineer will assess the quality of their DNN with the testing dataset. This should not entail further iterations of the design (see step 12). 7
7 Note that the terms testing dataset and validation dataset are sometimes exchanged in the literature.
4) Design the DNN architecture. Delivered artifact: DNN architecture (typically as Python code making use of the selected framework).
5) Define the "learning configuration": this includes picking a loss, picking learning parameters (e.g., dropout [29], learning rate, maximum learning steps), or search strategies for these hyper-parameters (e.g., grid or random search), or even strategies involving the exploration of the dataset itself (e.g., curriculum learning). This learning configuration is a placeholder artifact for all aspects which potentially influence the learning process, e.g., the used version of the various software dependencies or the used random seeds. We do not make this list exhaustive since this is not the focus of this paper. Overall, the configuration shall be understood as the minimal piece of information such that the tuple [training set, architecture, learning configuration] characterizes uniquely the learned DNN. This requirement aims at ensuring the reproducibility of the learning. 8 Delivered artifact: typically not "one" artifact but rather various pieces scattered across different artifacts: e.g., fine-tuning parameters stored in code, the loss having its own source file, etc. Ideally, this could be gathered in some configuration files, as provided by some DL management platforms [30].
8 We are on purpose quite vague on this matter because reproducibility is way harder to reach than one might think: not only do random seeds influence the learning process, but also potentially the operating system, the library versions and even the hardware, which might, e.g., swap instructions differently nondeterministically.
6) Train the DNN architecture with the loss function on the training dataset using a selected deep learning framework.
Delivered artifact: (trained) weight values. Note that the artifact is not the code, which, per se, is not different before and after training: the learning process alters the values of the variables used by the code, not the code itself. Consequently, the artifact is actually the resulting information stored in those variables. 7) Post-process the trained DNN (if necessary): many learning strategies require a change between learning and inference phase (e.g., drop out is applied only during learning). Delivered artifact: inference architecture. In that case, it is the opposite to the previous step: the code changes but the data does not. Note however that in most cases, the switch from the learning architecture to the inference one is so standard and systematic that there is no need for any separate artifact: typically, a DL framework will simply provide an optional argument which one shall set to true if in learning mode or to false in inference mode. 8) Test the resulting DNN on the validation dataset. Delivered artifact: test results (e.g., a metric like accuracy in the form of a number between 0 and 1), typically stored in a variable of the python runtime, or in a log file, or in a CI/CD system, if any. 9) Change the architecture or the learn configuration (4-5) based on the results and repeat steps 6-9 until the targeted objectives are reached. 10) Assess the quality of the inference DNN with the test set Delivered artifact: final validation results. 11) Depending on the used framework, serialize/export the network in order to use it in production, e.g., to be linked from a C++ source file, and compile it. Delivered artifact: executable code usable in production, Quite similarly to code development, the process yielding the finally delivered DNN is a typical trial-and-error process. There is a major difference though: code resulting from a trialand-error process can still be understood. This is typically not the case of DNNs: often, the only way to understand why a given architecture is finally obtained, is by looking back at the changes which led to it. This has of course a big impact on justifiability of a DNN and therefore on the traceability. We will get back to that point in Section V-B. Note that steps 1 and 9 are not duals: the former is a preprocessing of the data, which must therefore also happen at runtime; while the latter is a post-processing of the DNN itself, which therefore happens once and for all at design time and is not repeated at runtime. Fig. 4 summarizes the DL artifacts in a similar way to Fig. 1. Note again that this does not denote a process, but really a set of delivered artifacts: no sequence is imposed on the order in which the artifacts are developed. In particular, it is strongly to be expected that, once a developer decides to implement a function using a DNN, additional requirements (called "derived" in the DO178) might have to be added a posteriori: the choice of using DL as a technology might indeed entail new considerations at the requirement level. How can we map the DL artifacts presented in Section IV to the classical ones presented in Section III? First notice that both sections are not exactly targeting the same level of granularity: Section IV did not mention requirements, but there are of course requirements when developing DNNs for safety critical systems. Contrarily to software however, we believe that requirements implemented with DNNs generally cannot be refined into a software architecture and an LLR. 
This is not particularly a property of DNNs per se, but rather of the functions for which it makes sense to use DNNs: most applications for which DNNs are used successfully compared to classical methods, are applications where humans have difficulty decomposing the problem in hierarchical simpler sub-problems. One can even interpret the success of DNNs precisely under this angle: the learning activity does not just learn a solution to the problem, but also learns its own decomposition of the problem. With respect to requirements, this supports the claim that applications where DNNs are useful are precisely those where it is very hard to come up with a decomposition of HLR into LLR: refining HLR into LLR is intrinsically difficult -otherwise one could most probably use a classical (i.e., non-DNN) method. Consequently the only artifacts between the HLR and the source code are all the inputs to the DL framework: architecture, learning configuration and, of course, training dataset. High-level tests are now replaced by the testing/validation set: the name differs but the role is the same. Let us analyze how artifacts from Fig. 1 map to the ones of Fig. 4 in order to highlight similarities and differences: • System requirements, HLR, tests cases, test results and executable code are found in both cases. • As hinted by Fig. 4, the source code is still present but it is split between the architecture part and the weights part. • As mentioned above, the LLR and software architecture cannot really be mapped to the DL artifacts, unless one maps them to the complete design block, which does not bring anything. When it comes to traceability, traces between preserved artifacts are maintained. Traces between source code and object code can also be considered as preserved since these traces basically amount to trace the code generated by the compiler back to the source code: this is not different for DNNs and for classical software. However traces between HLR and Design, and Design and Source code shall be adapted. The next sections are dedicated precisely to these traces. More precisely, we need to consider traces between: 1) HLR and training dataset, 2) HLR and learning configuration, 3) HLR and architecture, 4) training dataset and source code, 5) learning configuration and source code, 6) architecture and source code. For the source code, one can differentiate inference architecture and learnt weights. Inference architecture simply can be traced trivially to the design architecture (when it is not the same artifact anyway as mentioned earlier) and to no other design artifact. The next section deals with traces between HLR and training dataset, the following section deals with all other traces. A. Traceability between HLR and training dataset Traces between HLR and dataset may seem simple: one just needs to trace every element of the dataset to the HLR. Some aspects are easy to think of tracing: the type of the raw data can be traced to the input definition in the interface requirement, or the type of the labels can be traced to the output definition. This sort of traceability can be targeted but we believe that it is too trivial to support the identification of any relevant problems: type mismatches between dataset and interface are not real sources of problem in practice. 
In addition, any such problem typically breaks anyways during integration of the DNN components with the rest of the system, so that there is no real possibility of encountering such an error when delivering a safety-critical system. We still go into more details about it in Appendix A in case the reader finds the problem relevant to their particular use case. Let us focus rather on the traceability of every piece of data to HLR, e.g., "The function shall recognize obstacles in urban context", "The function shall recognize obstacles by nice weather". In principle, it is simple to trace the dataset to such requirements: e.g., pictures in the dataset taken by nice weather shall be traced to the corresponding requirement, pictures in urban context as well, etc. However, the sort of information usually found in an HLR often applies uniformly to all elements of a dataset: e.g., if the function shall work only in urban context then all images of the dataset will be urban. This would entail tracing the entire dataset to the HLR, which would be so general that it would not really support the rationale of tracing: tracing the entire dataset to the HLR does not really provide a justification of this particular dataset. Instead, one expects every datum to be justified individually and therefore to be traced potentially differently from another datum. At that stage, we recommend developing the interface requirement much more than it usually is, in addition to the types and units of the inputs/outputs, it should describe in a detailed manner the output and -especially -input domain, with the purpose of defining what is an acceptable coverage of the domain. This can be done either as a requirement among the HLR or as a separate artifact, which we call "domain coverage model". Getting back to the example above, "urban" is not enough: one should actually detail which different forms of environment are encountered in an urban environment, e.g., "one-way street", "roundabout", etc. (of course, in that case, the input domain coverage model connects strongly to the Operational Design Domain -ODD -but it needs not be the case if the function to be performed by the DNN does not directly work with data coming from the sensors). These should be themselves traced towards higherlevel requirements, e.g., system-level requirements: this might even be a useful tool to identify misunderstandings regarding the environment, e.g., imagine a portion of highway which is within the limits of a city: is it urban or not? If working in a very structured context, e.g., where modelbased requirements engineering is used (see, e.g., [31]), the domain coverage model could really be formalized to some extent, via coverage criteria on the domain coverage model. In such cases, this activity comes in close connection to modelbased testing [32], the main difference with these classical approaches being merely the size of the model, which is typically huge in DL, much bigger than for classical approaches. Similar approaches have been carried out in machine learning in the literature, see e.g., [33], to a much smaller and less systematic extent. Note finally that, from a control engineering perspective, this is a bit similar to modelling the plant of a controller. Contrarily to a controller, the resulting NN is not analyzable. The domain coverage model plays thus an even more important role, which therefore justifies that it becomes a first-class citizen w.r.t. traceability. 
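As a purely illustrative sketch (the scenario tags, requirement IDs and threshold below are hypothetical, not taken from the paper or from any standard), such a domain coverage model could be materialized as a set of scenario tags traced to environment requirements, against which the annotated dataset is checked:

# Hypothetical domain coverage model: scenario tags traced to (made-up) requirement IDs.
DOMAIN_COVERAGE_MODEL = {
    "urban/one_way_street": "ENV_REQ_010",
    "urban/roundabout":     "ENV_REQ_011",
    "weather/nice":         "ENV_REQ_020",
    "weather/rain":         "ENV_REQ_021",
}

def coverage_report(dataset_tags, min_count=50):
    # dataset_tags: one set of scenario tags per datum (manually annotated).
    # Returns, per scenario tag, how many data trace to it and whether the
    # (project-specific, here arbitrary) minimum count is reached.
    counts = {tag: 0 for tag in DOMAIN_COVERAGE_MODEL}
    for tags in dataset_tags:
        for tag in tags:
            if tag in counts:
                counts[tag] += 1
    return {tag: {"requirement": req, "count": counts[tag], "covered": counts[tag] >= min_count}
            for tag, req in DOMAIN_COVERAGE_MODEL.items()}

A datum tagged with a scenario then traces, through the model, to the corresponding environment requirement, and under-covered scenarios become visible.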
Note that it is typically very hard for a requirement engineer to know beforehand which level of granularity to put in such an input domain coverage model. Actually the level of granularity probably depends on the dataset itself, and can thus be identified only once the dataset is already (at least partially) present: this is counter-intuitive regarding the usual notion of requirement (even though it matches the practice thereof: requirements are never perfect from the beginning, they always need iterations). However remember that we do not focus on the order in which artifacts are delivered but only on ensuring their mutual consistency. In this respect, it is acceptable to generate or modify a posteriori such a requirement. 9 To find out the proper level of granularity, one shall keep in mind that such a domain coverage model shall serve as a tool to analyze the dataset by justifying why a particular datum is in there, and identifying cases where some situation might not be covered. Consequently, if too many pieces of data are tracing to the same environment requirement, then this environment requirement probably does not serve its purpose. Conversely, if very few pieces of data trace to one environment requirement only, then either this requirement is too specific or the dataset needs to be completed. Defining "too many" or "very few" is beyond the scope of this paper, but should be of course defined in a rigorous manner depending on the context. If the domain coverage model is defined with a very lowlevel of granularity, then we have the above situation that traceability becomes useless because applying equally to the entire dataset. On the other hand, if the domain coverage model is defined with a very high-level of granularity, then its coverage is probably not reachable as displayed in Fig. 4: the traceability arrow between HLR and dataset is not bidirectional. Note finally that, even though the discussion above targets especially the raw dataset, the same applies to the labels if their domain is complex enough: for instance, if the DNN shall provide the position on a pedestrian, then it is important to ensure that the domain of positions is adequately covered Fig. 5 updates Fig. 4 to reflect the new artifact and the corresponding traceability. The following traces remain: 1) HLR and learning configuration, 2) HLR and design architecture, 3) training dataset and learnt weights, 4) design architecture and learnt weights, 5) learning configuration and learnt weights. Even if simple to implement, a first essential trace is the one between the training dataset version and the learnt weights: indeed, it is easy in practice to lose track of which version of a dataset was used to train a given network. This trace requires no more than a unique identifier for a given version of the training dataset and a reference to this identifier in the trained DNN. For more meaningful traces, one can trace these artifacts just the same way as one does for classical software engineering: trace code to requirements. Since code has a very specific structure for DNNs, we can be a bit more precise: one can try tracing neurons to requirements. For instance, we could impose on the DL framework to keep a trace of which input datum impacted more which neuron. This is precisely the approach shortly mentioned in [5]. Even though doable in theory, this approach brings nothing in practice, it is acknowledged as impossible -at least as of today -to interpret, understand or explain the role of one particular neuron. 
In addition, the size of the DNN and of the dataset are so huge that one cannot expect to understand a posteriori any useful piece of information out of it (though this might change in the future if explainable AI becomes successful). Consequently, this sort of trace will not fulfil the traceability rationale: if a reviewer inspects the involved artifacts in their state at the end of the project, they will not understand them nor their connection to previous artifacts.

Remark. Note that the problem is new, but also has well-known aspects to it: DNNs are, in essence, generated; therefore, like all generated code, they are much harder to understand and to trace than manually written code and thus cannot be trusted without further argumentation - which is why standards like DO330 exist [34]. Classically generated code can however usually be understood, which is not the case of DNNs, adding tremendously to the "classical" difficulty.

Instead of waiting for explainable AI to provide solutions 10, we suggest in this paper to trace the engineers' decisions instead of the artifacts themselves: if artifacts are not understandable, engineers' decisions shall be. How do engineers come up with architectures or learning configurations? They essentially try them, test them, and try again until they cannot improve the results anymore. In other words, these decisions are intrinsically based on trial-and-error: see Fig. 6 for an illustration.

Fig. 6. Trial-and-error

Trial and error is usually not considered at the level of traceability: as mentioned earlier, it is rather the opposite, one expects from traceability that one can ensure the coherence of the artifacts in their final state, i.e., independently of how they were obtained, by trial-and-error or not. However, DNN development relies so much and so intrinsically on trial-and-error that we feel it necessary to embrace this kind of activity even for traceability. Future developments might provide more predictable and reproducible approaches to the development of DNNs, in which case the approach of the present section will become obsolete. At the moment, instead of simply avoiding this reality and hoping for techniques which might never come, we make an attempt at a pragmatic approach usable today. In case of trial-and-error, the only justification that one can provide is that a given artifact is better than its previous version. Consequently, we propose to trace every new artifact obtained by trial-and-error to its previous version, the objective of the trace being to demonstrate that the new version improves upon the previous version. This requires storing not only the final artifact but also all the previous versions of it - or at least all those which are necessary to understand the artifact obtained at the end. It might sound like overkill, but note that it is actually standard to store previous versions of artifacts in the development of safety critical systems (where it is often encountered under the term "configuration management") or, of course, for normal software with version control (even though it is usually restricted to source code: not for binary artifacts). Pairing these classical techniques with traceability forces however the engineer to do more than just tagging a new version in their version control system: they must also think about the justification of the new increment.
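As a minimal sketch of what such a trial-and-error trace could look like in practice (all names and fields are assumptions, and the metric field anticipates the KPI-based justification discussed next), every delivered DNN version could carry an explicit record pointing to its predecessor:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DnnVersionTrace:
    # One entry of the trial-and-error trace chain (illustrative only).
    version_id: str                  # e.g. a tag or content hash of architecture + weights
    previous_version: Optional[str]  # None only for an authorized starting point
    training_dataset_id: str         # identifier of the exact dataset version used
    learning_config_id: str          # identifier of the learning configuration used
    metric_value: float              # project-defined KPI measured on the validation set
    justification: str               # why this increment improves on its predecessor

def check_improvements(chain: List[DnnVersionTrace]) -> bool:
    # Checks that every version improves on its predecessor w.r.t. the shared metric.
    by_id = {v.version_id: v for v in chain}
    return all(v.metric_value > by_id[v.previous_version].metric_value
               for v in chain if v.previous_version is not None)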
Therefore, we suggest requiring developers to define a metric (or KPI) to measure the quality of the inference DNN - which they normally do anyway, though maybe not always formally. Such a metric should not be the loss but be defined according to the actual goals that one plans to achieve with the function (e.g., a car can be mistaken for a pedestrian, but not the other way around). The metric can range from simple cases like accuracy and/or recall to complex combinations of functions [4]. As a new artifact, one must then explicitly store the values of this metric for a given DNN. Of course this value shall be traced to the weight values and inference architecture with which it was obtained. The essential addition is then to require that every version of the network which is obtained by increment of a previous one shall be traced to the metric value obtained with this previous version: one can then easily check if the new value indeed is an improvement. This metric should be the same in order to measure the quality of all the evolutions of the DNN. If it changes during the course of the project or is defined only a posteriori, then one needs to re-check the entire trial-and-error chain leading to the final version of the DNN. We summarize the change of artifacts in Fig. 7.

Fig. 7. Update of Fig. 5 to integrate trial-and-error

This whole process might sound like a big hindrance for the practitioner, but note that: 1. the problem of not providing a real argumentation for a so-called improvement is actually recognized as a problem, even by the machine learning community itself (see e.g. "Explanation vs Speculation" in [36]), and 2. it is still much easier to apply than any approach currently taken in the field of explainable AI. Our recommendation in its current state can easily be "tricked": nothing forces a developer to deliver the previous versions of their DNN; they can just claim that the version they delivered was the first version that they developed, which, by chance, was extremely good. A way to circumvent this is to impose some restrictions on the first delivered version, e.g., requesting that the first version shall belong to a catalogue of authorized "primitive" DNNs. A developer cannot then just deliver immediately a complex DNN without tracing it to a previous primitive one. Primitive DNNs can be defined in various ways and the definition impacts various artifacts differently: this goes beyond the present paper but shall be investigated. Imposing a primitive catalogue is still not enough: imagine that an engineer developed a specific DNN classically (i.e., without following our recommendation of tracing the trial-and-error activities). Then, instead of going through the tedious work of analyzing the chain of increments which led to their final DNN until they reach their original "simple" DNN, they can just hide all the versions between the first and the last. In such a case the last version displays as their first improvement, which allows them to claim that, by chance, their "first" improvement was the good one. Of course, this goes completely against the intent of our approach. To circumvent this, one should also restrict possible increments, or at least the justification for one increment. A naïve solution could be to have increments like adding only one layer at a time, having a default size for layers, etc.
This might however be too restrictive in practice: some DNNs only show their benefits after having added a certain number of layers, but all the smaller versions with less layers are all equally bad. Investigating such restrictions in detail goes beyond the present paper. VI. FUTURE WORK This paper is, to our knowledge, the first to provide a precise list of traces which could potentially be written down for DNN. However, it does not address various development practices, which are encountered in real developments. Gap between trained and inference DNN. The process highlighted in Section IV assumes more or less implicitly that the interface of the trained DNN is the same as the one of the inference DNN. This assumption is often met (e.g., when using dropout the output type of the trained and inference DNN is the same) but not always: e.g., one might, even in a supervised context, train a sub-part of the final network in an unsupervised manner, for instance to learn valuable features (e.g., latent space of an auto-encoder [37]). One might also train a DNN on a separate dataset or take a DNN already trained on another dataset (e.g., ImageNET for object detection [38]) then remove the latest layers (the most task-specific ones) to only adapt the DNN to the targeted functionality. In such cases, lots of intermediate steps are actually not immediately connected to the final task and therefore not traceable in the sense considered so far. We do not consider this sort of cases in this paper but insist on how essential they are: they reflect a reality of DL engineers which cannot be ignored. Dataset. Another important aspect that has been ignored in this paper, is the evolution of the dataset: we assumed that the dataset (or more precisely, the datasets: training, validation, testing) is fixed. As mentioned, this is common practice when considering traceability: we are normally only interested in the final artifacts (except in our case, exceptionally, for trial-anderror activities). However, in reality, many systems actually gather new data along their lifetime. Therefore, one may not ignore the fact that data evolves permanently all along the life cycle of the autonomous system. In such cases, one should consider a form of incremental traceability, i.e., how to trace new data as it comes along. One should especially probably trace differently training data from testing data. In particular, one might need to argue why adding a new datum indeed provides additional valuable information. To do so, a possibility is to develop dataset coverage models. Depending on the context, one might need to trace the dataset itself to the sources used to generate it since they influence the dataset a lot and therefore the training: sensors calibration setup, sensor driver versions, etc. Explainable AI. As mentioned from the beginning, we try in this paper to be independent of current approaches in the domain of explainable AI. We try in particular to be more pragmatic than academic. However, it is probably valuable to look more precisely into various approaches of explainable AI (see, e.g., [4] for a review) to discover new opportunities for relevant fine-granular traces. Classical AI. Various approaches attempt to mix deep learning with expert knowledge, e.g., by transferring existing expert knowledge to a neural network (e.g., transfer learning [39]) where the expert knowledge can be expressed through rules or other forms; or by intertwining machine learning with probabilistic modelling [40]. 
All these approaches are valuable from the point of view of AI research, but they are also very promising for safety-critical systems because they allow to control the machine learning process to some extent and therefore to argue better that the final behavior is indeed satisfying. In some sense, one can interpret this as a form of explainability-by-design. It would therefore be very valuable to consider how to trace these methods, in particular the newly induced artifacts (e.g., generative model in the case of probabilistic programming). Intellectual property. In domains like automotive or avionics, the development of the system is extremely distributed among various stakeholders: OEMs, tier 1, tier 2, or even tier 3 suppliers. In such cases, it is essential to deliver sufficient artifacts to guarantee safety, but it is also essential that every stakeholder keep their own intellectual property. This can be problematic for our approach to trial-and-error activities which forces practitioners to provide artifact evolutions which might reveal their production secrets. Similar problems exist for virtual validation and can been solved with approaches like the FMI standard [41]. It should in any case be investigated for the approach presented in this paper. VII. CONCLUSION In this paper, we addressed the traceability of neural network in a pragmatic manner: we first explicitly identified the challenge of tracing DNNs, then analyzed the parallels and differences between DNNs and classical software development, and proposed accordingly adaptations of the notion of trace for DNNs. Instead of blindly mapping classical software activities to DL activities, which would lead to mismatches with the actual practice of DL, we tried to embrace some of the specificities of "real-life" DL, in particular trial-anderror. We provided a solution (or the beginning thereof), which we believe supports both the rationale of traceability, while still being applicable for practitioners. The applicability might be controlled depending on the targeted safety level, as is classically done in safety-related software standards: for instance, one could require different coverage percentages for the domain coverage model whether the function is ASIL A, B, C, or D. Acknowledgments. The author thanks Frederik Diehl for his careful review and his wisdom in DL. This work is the realization of thoughts that were initiated during a World Café at the Auto.AI conference Europe, moderated by Håkan Sivencrona. Further remarks were added after presenting early results at the Vehicle Intelligence conference. Thanks go both to the organizers and participants of the conferences as well as to Håkan. APPENDIX A. Traceability of the dataset types As mentioned in Section V-A, we go in this section more in detail about the traceability of dataset to interface requirements: the dataset being basically a set of examples, it should match the types of the inputs/outputs and therefore be traced to the interface requirement. Concretely, this means tracing 1. the raw dataset, and 2. the labels. Both should be traced to the interface requirements: the raw dataset to the input part of it, the labels to its output part. For instance, if the interface requirement states that the input shall be images of dimension 640 × 480, then the raw dataset shall contain only such images, and shall therefore be traced to this input requirement. 
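As an illustrative sketch of this direct match (the 640 × 480 dimension comes from the example above; the dataset representation and the helper itself are assumptions), such a trace can even be checked mechanically:

def untraceable_inputs(raw_dataset, expected_shape=(640, 480, 3)):
    # raw_dataset: iterable of (image_id, image) pairs, image as an array-like object.
    # The expected shape follows the paper's width x height x channels writing; the
    # actual array layout convention is an assumption to be fixed per project.
    # Returns the IDs of images that cannot be traced to the input interface requirement.
    return [image_id for image_id, image in raw_dataset
            if tuple(image.shape) != expected_shape]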
In case pre-processing is required, then there might not be a direct match, in which case the pre-processing function shall be mapped to the interface requirement. Various approaches might then be employed: the dataset itself might be traced to the post-processing function directly, or one might introduce new requirements (called derived in the DO178-C) defining the interface of the postprocessing function and then trace the dataset to this requirement. Or one might simply consider that the interface is a design decision, not to be traced (in DO178-C terminology: the interface definition would be part of the software architecture). In a dual manner, suppose the interface requirement specifies that the output type is "list of 4-tuples" -representing bounding boxes. Then every label is a list of bounding boxes. Like previously, the dataset can therefore be traced to this type definition. However, if the structure of the output type is more complex (typically, if it contains sum types, i.e., enumerations), then traces can be defined per datum instead. Suppose for instance, that the interface requirement (say "REQ 123") specifies the following output instead: 1) output shall be a list of pairs, 2) where the first element is a 4-tuple like previously, 3) but the second element is a record containing the following fields: a) "pedestrian", b) "bike", c) "vehicle", 4) and where each of the fields contain a real between 0 and 1, 5) such that the sum of all field numbers is 1. In such cases, the dataset can be traced as a whole to REQ 123-1 and REQ 123-2 since those parts of the type apply to every datum uniformly (more or less like before). On the other hand, for a given image, each label can be traced to REQ 123-3a, REQ 123-3b or REQ 123-3c: for instance, if an image is labeled as containing one pedestrian, and the label "pedestrian" shall be traced to REQ 123-3a. In such cases, we can trace every datum independently. 11 Conversely, if one element of the dataset also identifies "trucks", then this label is not traceable to the requirement, which denotes a potential addition of unintended functionality. Note that there might be reasons why wanting to have data with labels not supporting the requirements: e.g., reuse of some data used in another context, use of the same data for another function, or desire to label more "just in case". Depending on the developed system, such cases shall probably not be forbidden, but their presence might give a hint about potential unintended functionality, which should then probably be addressed. For instance, depending on the case, the dataset should be preprocessed: the unwanted label should be erased or merged into another label, or maybe even gives hint that the requirement itself is not complete. Our main point is that the lack of traceability provides a hint about potential design decisions.
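To illustrate the per-datum tracing described in this appendix (REQ 123 and its sub-items are the hypothetical requirement above; the mapping and helper below are assumptions), such traces can be made explicit and labels that trace to no sub-requirement can be flagged mechanically:

# Hypothetical mapping from label classes to the sub-requirements of REQ 123.
CLASS_TO_REQUIREMENT = {
    "pedestrian": "REQ 123-3a",
    "bike":       "REQ 123-3b",
    "vehicle":    "REQ 123-3c",
}

def trace_labels(labels):
    # labels: list of (image_id, class_name) pairs taken from the labelled dataset.
    # Returns explicit traces for traceable labels, plus the labels that do not trace
    # to any sub-requirement (e.g. "truck"), hinting at potential unintended functionality.
    traces, untraceable = [], []
    for image_id, class_name in labels:
        req = CLASS_TO_REQUIREMENT.get(class_name)
        if req is None:
            untraceable.append((image_id, class_name))
        else:
            traces.append((image_id, class_name, req))
    return traces, untraceable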
8,374
1812.06744
2904146064
[Context.] The success of deep learning makes its usage more and more tempting in safety-critical applications. However, such applications have historical standards (e.g., DO178, ISO26262) which typically do not envision the usage of machine learning. We focus in particular on the traceability of software artifacts, i.e., code modules, functions, or statements (depending on the desired granularity). [Problem.] Both code and requirements are a problem when dealing with deep neural networks: the code constituting the network is not comparable to classical code; furthermore, requirements for applications where neural networks are required are typically very hard to specify: even though high-level requirements can be defined, it is very hard to make such requirements concrete enough that one can qualify them as low-level requirements. An additional problem is that deep learning is in practice very much based on trial-and-error, which makes the final result hard to explain without the previous iterations. [Proposed solution.] We investigate which artifacts could play a similar role to code or low-level requirements in neural network development and propose various traces which one could possibly consider as a replacement for classical notions. We also propose a form of traceability (and new artifacts) in order to deal with the particular trial-and-error development process for deep learning.
The discrepancy between the recommendations of the ISO 26262 and the methods actually used in practice was analyzed in @cite_2 . This cannot be directly used for traceability but is indirectly a very useful source of information.
{ "abstract": [ "Machine learning (ML) plays an ever-increasing role in advanced automotive functionality for driver assistance and autonomous operation; however, its adequacy from the perspective of safety certification remains controversial. In this paper, we analyze the impacts that the use of ML as an implementation approach has on ISO 26262 safety lifecycle and ask what could be done to address them. We then provide a set of recommendations on how to adapt the standard to accommodate ML." ], "cite_N": [ "@cite_2" ], "mid": [ "2753092418" ] }
Traceability of Deep Neural Networks
The success of deep learning (DL), in particular in computer vision, makes its usage more and more tempting in many applications, including safety-critical ones. However the development of such applications must follow standards (e.g., DO178 [1], ISO26262 [2]) which typically do not envision the usage of machine learning. At the moment, practitioners therefore cannot use machine learning for safety-critical functions (e.g., ASIL-D for ISO26262, or DAL-A for DO178). There exist various attempts to address this issue whether in standardization committees (e.g., ISO/IEC JTC 1/SC 42 or DKE/DIN [3]) or in the academic community (various initiatives towards explainable AI, e.g., [4]), but they are all far from mature and definitely not usable as of today or do not really address the problem: most standardization approaches just try to map one-to-one classical software engineering processes like the V-model to deep learning. Furthermore, no academic solution, at the moment, provides a solution to the lack of understandability of deep neural networks (DNN). In this paper, we try to find a pragmatic approach, which focuses on artifacts rather than on processes: we are not prescriptive regarding the activities which produced these artifacts. More precisely, we focus only on artifacts which are worth being identified during the development of DNNs for the sake of traceability. Consequently, this paper does not provide a ready-made solution, which a practitioner could follow one-to-one. However, it provides concrete descriptions which should at least be sufficient to provide a first guidance. We restrict the scope of this paper to the following: • Deep neural networks for supervised learning (no reinforcement learning, no unsupervised learning). • We focus only on software, not on system: traces from software requirements to system requirements are out of scope, as well as FMEAs or failure rates. • We do not focus on binary code or deployment thereof on hardware platform. • We assume a fixed, non-evolving, dataset: this does not comply with most real cases in, say, autonomous driving, where data is continuously collected. Even if not continuously collected, the dataset has so much influence on the training that one can hardly ignore these evolutions for proper traceability. Still, there are already sufficiently many questions to address without considering this evolution, which is why we leave this out of focus in this paper. • We focus essentially on functional requirements. Lifting these restrictions is left to future work. The rest of the paper is organized as follows: Section II presents related work. Section III recalls the concept of traceability. Section IV provides a traceability-amenable presentation of deep learning. Section V contains the main contribution of this paper: it analyzes which DNN artifacts replace classical software artifacts and suggests new artifacts and traces to enable the traceability of DNNs. Section VI identifies various gaps of the present work for future research. Finally Section VII summarizes the paper. III. PRELIMINARY: TRACEABILITY It is very difficult (at least nowadays) to find engineers or researchers who know both safety-aware software engineering and deep learning. 1 This paper really attempts to answer a problem which lies at the intersection of two communities and tries therefore to be self-contained for both. Consequently, we first recall the concepts and terminology related to traceability, as used in this paper. 
This should be a trivial reading for the safety-critical systems software engineer, but we do recommend reading it to ensure that the terminology is clear in the rest of the paper. Even though not a proper formal source, we still recommend Wikipedia [19] on this topic. A. Artifacts When developing classical software, the only product to deliver is executable code. One might provide also source code if the software is open source; the software itself might be part of a bigger system if it is embedded; but, all in all, from the perspective of the software engineer, one just needs to deliver some sort of executable. For safety critical systems, this is not enough: one needs to deliver not only the executable code itself, but also a justification that the executable code indeed does what it is supposed to do or that it is resilient to faults. Such a justification is provided in the form of documents, source code, executable, etc., which are not intended for the final consumer, but for the authority (whether it is an independent authority or a company-internal one) in charge of validating the safety of the product. We call these (development) artifacts. One such essential document is the one describing requirements: requirements describe what the software is supposed to do, without providing implementation details. In many nonsafety critical applications, requirements are expressed in a very unstructured manner, e.g., in some statement of work, in an issue tracker, or in slides communicated from the client. In safety critical applications however, it is essential to have these requirements in a way that they can be structured, referenced, or even categorized. For instance: functional requirements describe the expected function of a component, timing requirements describe the temporal constraints for a given function, interface requirements describe the input/output types of a component. Requirement documents found in safety-critical industry typically use dedicated software like IBM Rational DOORS. Example 1 (Functional requirement [20]): The [system] shall be capable of predicting the paths of the subject vehicle as well as principal other vehicles in order to identify the vehicle(s) whose path(s) may intersect with the subject vehicles path. Requirements are only one sort of document among many to be provided: source code, test results, development plans or any other sort of document which turns out to be necessary to justify that the final software can be used in a safety-critical system. This list is non-exhaustive and typically defined by a standard, like ISO26262 [2] or DO178C [1]. B. Traces The delivered artifacts generally have dependencies between each other: typically, the source code should fulfill the requirements, software requirements should refine system requirements, executable code derives from source code. Keeping these dependencies implicit increases a lot the risk that a dependency be wrong or forgotten. This is the purpose of traces to make these dependencies explicit. Every pair of artifacts is in principle subject to being traced from/to each other. In this paper we consider especially traces from code to requirements. Example 2: As an example, consider a requirement (defined in some document, e.g., a Word document or a DOORS database) being identified with an ID, say REQ 123 ; take then a piece of code defining many functions, one of them -say f 456 -implementing REQ 123. 
Then a trace is typically nothing more than a comment just before the function simply stating [REQ 123]: 1 / / f 4 5 6 t a k e s a s a r g u m e n t s : 2 / / − x : . . . 3 / / − y : . . . 4 / / I t r e t u r n s . . . 5 / / 6 / / [ REQ 123 ] 7 i n t f 4 5 6 ( i n t x , f l o a t y ) { 8 . . . } The trace is the comment on line 6. Another typical example is a trace between a test case and a requirement: it is important to ensure that the test cases indeed support the verification of requirements and that no requirement is forgotten. Even further, it is essential to also trace the results of the tests to the test cases themselves to ensure that the tests are indeed done and maintained. Writing down a trace is in general a manual activity: engineers look up the code and the requirements and add manually the comment above. 2 C. High-vs Low-level requirements In many cases, requirements are not concrete or precise enough to be traced directly with the above level of granularity (see Example 1). Therefore, it is often recommended to first refine the requirements into more concrete requirements, which can be traced from the code. These artifacts can have different denominations. For instance, the standard for the development of software for civil avionics (DO178C [1]) names them highlevel and low-level requirements (HLR/LLR) respectively (but the concepts is transferable to other standards and domains), with the following definition for LLR: "Low-level requirements are software requirements from which Source Code can be directly implemented without further information." [1]. LLR should themselves be traced to HLR in order to have complete traceability. 3 Note that the definition of HLR and LLR is not absolutely clear: we encountered examples where some requirements were considered as high-level by a company and low-level by another. In general, refining HLR into LLR goes hand in hand with architectural decisions: the requirements can be decomposed only once the function is decomposed into smaller functions, to which one can assign more concrete requirements. This is why the DO178C, for instance, refines the HLR into two artifacts: the LLRs on one hand, and the Software Architecture on the other hand. More concretely, the software architecture defines a set of components and connections between these components -or, more precisely, a set of interfaces (i.e., data types for inputs and outputs), since the software architecture does not cover the implementation of the components. Interfaces typically contain even more information like the physical units of the types (e.g., meters, centimeters, meter per second), or, if relevant, refreshing rates. The LLRs can then be mapped to each interface. Finally, the LLR and the software architecture are the only information necessary to write down the source code. Whether defined in a requirement or separately, there is always a definition of interfaces. In the following, we will generically refer to such a definition as an interface requirement. Fig. 1 represents the artifacts mentioned above. Of course, every artifact refining a previous one shall be traced to the latter, this should be typically bi-directional: every piece of information found in a refined artifact shall be found in the corresponding refining artifact, and conversely, every piece of information -except design decisions -found in a refining artifact shall be found in the refined one. In the DO178, the software architecture is not traced back to the HLR because it is a design decision. 
The figure also presents the test artifacts: test cases shall be traced as well to requirements (high-or low-level depending on the context), and test results shall be traced to test cases. D. Rationale Understanding the rationale behind traces 1. enables to understand why it is challenging to trace DNNs, and 2. gives hints to investigate relevant alternatives to classical traces. A trace serves the purpose of ensuring that a piece of code is justified by a requirement. This is not a structured or formal justification, which are in practice seldom applicable, however it at least enforces that people think about this justification. In fact, traceability does enable to identify sources of error: when an engineer attempts but does not manage to trace a piece of code then they might indeed get aware that this code is not necessary or, even worse, that it introduces unwanted functionality. Conversely, if a requirement is not traced back by any code, then it is often a hint that the requirement has been forgotten. For the same reason, traceability is a relevant tool for assessors in order to detect potential pitfalls during development. This is what is illustrated in Fig. 1 by the bidirectional arrows for traceability: having traces syntactically on each side is easy; it is however harder to ensure coverage of traceability on both sides, e.g., all HLR are traced to some LLR and all LLR are traced back to some HLR (the latter typically does not happen since some LLRs depend on design decisions). E. Process vs Artifacts Many standards like, e.g., DO178C, do not impose an order on how artifacts shall be developed. For instance, even though code shall be traced to requirements, it does not mean that one is forced to follow a waterfall model: one might just as well work iteratively, define requirements, then code, then go back to requirements, etc. The main point of traceability is that, no matter how one reached the final state of development (e.g., iteratively or waterfall), it should be possible to justify that this final state is coherent. Consequently, one might very well develop all the artifacts without traceability, and only at the end develop the traces. 4 This is why we emphasized in introduction that this paper is not process-but artifact-oriented: we do not impose how engineers should work but only what they should deliver. IV. DEEP LEARNING ARTIFACTS This section presents the concepts and terminology related to deep learning, in a way which makes it amenable to comparison with the artifacts of the previous section. To implement a required function using a DNN, one collects a lot of data matching as an input with their corresponding expected outputs (the outputs are not collected but typically manually annotated). This data is then used by a given DL framework to train the network. Examples of such frameworks are TensorFlow [24] or PyTorch [25]. A typical example of a function where one would want to use a DNN is the following: "given an image represented by a matrix of pixels, return a list of 4-tuples (x, y, w, h) representing rectangles which contain pedestrians" (such rectangles are typically called bounding boxes). One might require to identify various classes of objects (e.g., pedestrian, car, bikes) and to associate every bounding box with a label indicating to which class the object belongs, see Fig. 2. Fig. 2. 
To teach a DNN, one needs the following:

• A dataset containing both the input, e.g., images, and the output, e.g., annotations denoting bounding boxes for pedestrians' positions. In the following, we will consider these two aspects separately: the raw dataset, i.e., only the input, and the labels, i.e., only the corresponding expected output.

• A deep neural network architecture. Prior to learning time, one cannot really consider that there is an actual neural network, but rather a skeleton thereof: the learning process will fill in this skeleton (more specifically, the weights), and doing so will generate the neural network used for the required function (in practice, the skeleton is actually randomly pre-filled and the random weights are progressively changed during learning). Such a skeleton is however more than just a box: the deep learning engineer decides on the shape of this skeleton, which does influence the learning process. A DNN architecture is typically designed as a layer-based architecture where the input, represented as a (potentially huge) vector or matrix (e.g., 640 × 480 × 3 for an image of width 640, height 480 and with three color components R, G and B), flows through various massively parallel operations transforming it until the output has the expected form (e.g., a vector containing 3 real numbers between 0 and 1, each indicating the confidence of the image containing a pedestrian, a car or nothing). The engineering then amounts to designing this architecture, meaning defining these operations: their types and the dimensions they transform their input into. See Fig. 3 for an example.

• A loss function. To train a DNN, one must have a way to tell the machine learning framework when the DNN is wrong, in order to correct it. In theory, this is easy: if the network provides a wrong answer to an input for which we know the correct answer, then we just tell the framework what the right answer was. However, in practice, the functions addressed with DNNs typically output a confidence rather than a perfect answer. One should therefore be more subtle than just telling "right" or "wrong". Instead, one can provide a positive real number, basically a grade, telling how wrong the system is. If the number is zero, then there is no error; otherwise, the higher the number, the more important the error (in many cases, the objective is only to minimize the loss, not necessarily to bring it to zero). Consequently, a loss function takes the expected and actually obtained results of the DNN as inputs, and returns a real number stating how bad the difference between both is. Mathematically, distances make good candidates for losses.

Example 3: Example of a loss function:

L(θ) = ‖y_pos − ŷ_pos‖_2 − Σ_{c ∈ C} log(y_c) · ŷ_c

where:
- θ denotes the set of all parameters of the DNN (i.e., its weights); contrary to what one would mathematically expect, θ appears on the right-hand side only implicitly: rigorously, one should write y as the result of applying the function represented by the DNN, as parametrized by θ; we follow however the conventions used in the classical DL literature,
- y_pos (resp. ŷ_pos) denotes the position of an inferred bounding box (resp. the actual, labelled position of the bounding box, i.e., the ground truth),
- ‖·‖_2 denotes the L2 norm,
- C is the set of classes considered in the problem at hand, e.g., {pedestrian, car, cyclist},
- y_c (resp. ŷ_c) denotes the class assigned to the inferred bounding box (resp. the actual, labelled class of the bounding box), via a so-called one-hot encoding, i.e., a vector whose size is the number of classes, where each element contains a real number between 0 and 1 assessing the confidence of belonging to the corresponding class.

We leave it to the reader to observe the variation of the function depending on the error of the network (or lack thereof).
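For concreteness, a loss of this shape could be written roughly as follows (a minimal sketch in plain NumPy; the function and argument names are ours, and the shapes are simplified to a single bounding box per image):

import numpy as np

def example_loss(y_pos, y_hat_pos, y_cls, y_hat_cls):
    # y_pos:     inferred box position, e.g. np.array([x, y, w, h])
    # y_hat_pos: ground-truth box position (same shape)
    # y_cls:     inferred class confidences, one value in (0, 1] per class
    # y_hat_cls: ground-truth class as a one-hot vector
    localization = np.linalg.norm(y_pos - y_hat_pos)      # L2 distance term
    classification = -np.sum(y_hat_cls * np.log(y_cls))   # cross-entropy term
    return localization + classification

# Ground truth: a pedestrian at (10, 20, 50, 100); classes are {pedestrian, car, cyclist}.
loss = example_loss(
    y_pos=np.array([12.0, 18.0, 48.0, 105.0]),
    y_hat_pos=np.array([10.0, 20.0, 50.0, 100.0]),
    y_cls=np.array([0.7, 0.2, 0.1]),
    y_hat_cls=np.array([1.0, 0.0, 0.0]),
)

The loss is zero when the box position matches exactly and the confidence of the correct class is 1, and it grows with the position error and as the confidence assigned to the correct class decreases.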
In practice, the loss function is expressed as code: this code does not go into the final product, but controls the learning of the DNN within the DL framework. As we will see, it will be essential for the rest of the paper to understand not just the artifacts themselves, but also how they are developed. Typically, the sequence of decisions is as follows:

1) Collect data and, possibly, preprocess it: re-shape the information, fix missing values, extract features, or achieve much more advanced tasks like matching label ground truth boxes to so-called prior boxes [28] (we do not focus on this activity in this paper). Delivered artifacts: raw dataset, preprocessing functions.

2) Annotate the raw data. Delivered artifact: labelled dataset.

3) Split the dataset into training, validation and testing sets. Delivered artifacts: labelled training, validation and testing datasets. The difference between the validation and testing datasets is that, after evaluating the DNN on the validation dataset, the engineer will take the result into account as feedback to improve their design. Once this is done and no further correction is planned, the engineer will assess the quality of their DNN with the testing dataset; this should not entail further iterations of the design (see step 10). Note that the terms testing dataset and validation dataset are sometimes exchanged in the literature.

4) Design the DNN architecture. Delivered artifact: DNN architecture (typically as Python code making use of the selected framework).

5) Define the "learning configuration": this includes picking a loss, picking learning parameters (e.g., dropout [29], learning rate, maximum learning steps), or search strategies for these hyper-parameters (e.g., grid or random search), or even strategies involving the exploration of the dataset itself (e.g., curriculum learning). This learning configuration is a placeholder artifact for all aspects which potentially influence the learning process, e.g., the versions of the various software dependencies or the random seeds used. We do not make this list exhaustive since this is not the focus of this paper. Overall, the configuration shall be understood as the minimal piece of information such that the tuple [training set, architecture, learning configuration] characterizes the learned DNN uniquely. This requirement aims at ensuring the reproducibility of the learning (we are on purpose quite vague on this matter, because reproducibility is much harder to reach than one might think: not only do random seeds influence the learning process, but potentially also the operating system, the library versions and even the hardware, which might, e.g., swap instructions nondeterministically). Delivered artifact: typically not "one" artifact but rather various pieces scattered across different artifacts, e.g., fine-tuning parameters stored in code, the loss having its own source file, etc. Ideally, this could be gathered in some configuration files, as provided by some DL management platforms [30].

6) Train the DNN architecture with the loss function on the training dataset using a selected deep learning framework.
Delivered artifact: (trained) weight values. Note that the artifact is not the code, which, per se, is not different before and after training: the learning process alters the values of the variables used by the code, not the code itself. Consequently, the artifact is actually the resulting information stored in those variables.

7) Post-process the trained DNN (if necessary): many learning strategies require a change between the learning and the inference phase (e.g., dropout is applied only during learning). Delivered artifact: inference architecture. In that case, it is the opposite of the previous step: the code changes but the data does not. Note however that in most cases, the switch from the learning architecture to the inference one is so standard and systematic that there is no need for any separate artifact: typically, a DL framework will simply provide an optional argument which one shall set to true in learning mode and to false in inference mode.

8) Test the resulting DNN on the validation dataset. Delivered artifact: test results (e.g., a metric like accuracy in the form of a number between 0 and 1), typically stored in a variable of the Python runtime, in a log file, or in a CI/CD system, if any.

9) Change the architecture or the learning configuration (steps 4-5) based on the results, and repeat steps 6-9 until the targeted objectives are reached.

10) Assess the quality of the inference DNN with the test set. Delivered artifact: final validation results.

11) Depending on the used framework, serialize/export the network in order to use it in production, e.g., to be linked from a C++ source file, and compile it. Delivered artifact: executable code usable in production.

Quite similarly to code development, the process yielding the finally delivered DNN is a typical trial-and-error process. There is a major difference though: code resulting from a trial-and-error process can still be understood. This is typically not the case for DNNs: often, the only way to understand why a given architecture was finally obtained is by looking back at the changes which led to it. This has of course a big impact on the justifiability of a DNN and therefore on its traceability. We will get back to that point in Section V-B. Note that steps 1 and 7 are not duals: the former is a preprocessing of the data, which must therefore also happen at runtime, while the latter is a post-processing of the DNN itself, which therefore happens once and for all at design time and is not repeated at runtime.

Fig. 4 summarizes the DL artifacts in a similar way to Fig. 1. Note again that this does not denote a process, but really a set of delivered artifacts: no sequence is imposed on the order in which the artifacts are developed. In particular, it is strongly to be expected that, once a developer decides to implement a function using a DNN, additional requirements (called "derived" in the DO178) might have to be added a posteriori: the choice of using DL as a technology might indeed entail new considerations at the requirement level.

How can we map the DL artifacts presented in Section IV to the classical ones presented in Section III? First notice that both sections are not targeting exactly the same level of granularity: Section IV did not mention requirements, but there are of course requirements when developing DNNs for safety-critical systems. Contrary to classical software, however, we believe that requirements implemented with DNNs generally cannot be refined into a software architecture and an LLR.
This is not particularly a property of DNNs per se, but rather of the functions for which it makes sense to use DNNs: most applications for which DNNs are used successfully, compared to classical methods, are applications where humans have difficulty decomposing the problem into a hierarchy of simpler sub-problems. One can even interpret the success of DNNs precisely from this angle: the learning activity does not just learn a solution to the problem, but also learns its own decomposition of the problem. With respect to requirements, this supports the claim that applications where DNNs are useful are precisely those where it is very hard to come up with a decomposition of HLR into LLR: refining HLR into LLR is intrinsically difficult, otherwise one could most probably use a classical (i.e., non-DNN) method. Consequently, the only artifacts between the HLR and the source code are the inputs to the DL framework: architecture, learning configuration and, of course, training dataset. High-level tests are now replaced by the testing/validation set: the name differs but the role is the same.

Let us analyze how the artifacts from Fig. 1 map to the ones of Fig. 4, in order to highlight similarities and differences:
• System requirements, HLR, test cases, test results and executable code are found in both cases.
• As hinted by Fig. 4, the source code is still present, but it is split between the architecture part and the weights part.
• As mentioned above, the LLR and software architecture cannot really be mapped to the DL artifacts, unless one maps them to the complete design block, which does not bring anything useful.

When it comes to traceability, traces between preserved artifacts are maintained. Traces between source code and object code can also be considered as preserved, since these traces basically amount to tracing the code generated by the compiler back to the source code: this is not different for DNNs and for classical software. However, traces between HLR and design, and between design and source code, shall be adapted. The next sections are dedicated precisely to these traces. More precisely, we need to consider traces between: 1) HLR and training dataset, 2) HLR and learning configuration, 3) HLR and architecture, 4) training dataset and source code, 5) learning configuration and source code, 6) architecture and source code. For the source code, one can differentiate between the inference architecture and the learnt weights. The inference architecture can be traced trivially to the design architecture (when it is not the same artifact anyway, as mentioned earlier) and to no other design artifact. The next section deals with traces between HLR and training dataset; the following section deals with all other traces.

A. Traceability between HLR and training dataset

Traces between HLR and dataset may seem simple: one just needs to trace every element of the dataset to the HLR. Some aspects are easy to trace: the type of the raw data can be traced to the input definition in the interface requirement, and the type of the labels can be traced to the output definition. This sort of traceability can be targeted, but we believe that it is too trivial to support the identification of any relevant problems: type mismatches between dataset and interface are not real sources of problems in practice.
In addition, any such mismatch typically surfaces anyway during integration of the DNN components with the rest of the system, so that there is no real possibility of encountering such an error when delivering a safety-critical system. We still go into more detail about it in Appendix A, in case the reader finds the problem relevant to their particular use case.

Let us rather focus on the traceability of every piece of data to HLR, e.g., "The function shall recognize obstacles in an urban context", "The function shall recognize obstacles in nice weather". In principle, it is simple to trace the dataset to such requirements: e.g., pictures in the dataset taken in nice weather shall be traced to the corresponding requirement, pictures in an urban context as well, etc. However, the sort of information usually found in an HLR often applies uniformly to all elements of a dataset: e.g., if the function shall work only in an urban context, then all images of the dataset will be urban. This would entail tracing the entire dataset to the HLR, which would be so general that it would not really support the rationale of tracing: tracing the entire dataset to the HLR does not really provide a justification of this particular dataset. Instead, one expects every datum to be justified individually and therefore to be traced potentially differently from another datum.

At that stage, we recommend developing the interface requirement much further than is usually done: in addition to the types and units of the inputs/outputs, it should describe in a detailed manner the output and, especially, the input domain, with the purpose of defining what is an acceptable coverage of the domain. This can be done either as a requirement among the HLR or as a separate artifact, which we call a "domain coverage model". Getting back to the example above, "urban" is not enough: one should actually detail which different forms of environment are encountered in an urban environment, e.g., "one-way street", "roundabout", etc. (of course, in that case, the input domain coverage model connects strongly to the Operational Design Domain (ODD), but this need not be the case if the function to be performed by the DNN does not directly work with data coming from the sensors). These should themselves be traced towards higher-level requirements, e.g., system-level requirements: this might even be a useful tool to identify misunderstandings regarding the environment, e.g., imagine a portion of highway which is within the limits of a city: is it urban or not?

If working in a very structured context, e.g., where model-based requirements engineering is used (see, e.g., [31]), the domain coverage model could really be formalized to some extent, via coverage criteria on the domain coverage model. In such cases, this activity comes in close connection to model-based testing [32], the main difference with these classical approaches being merely the size of the model, which is typically huge in DL, much bigger than for classical approaches. Similar approaches have been carried out in the machine learning literature, see e.g. [33], to a much smaller and less systematic extent. Note finally that, from a control engineering perspective, this is a bit similar to modelling the plant of a controller. Contrary to a controller, however, the resulting NN is not analyzable. The domain coverage model thus plays an even more important role, which justifies making it a first-class citizen with respect to traceability.
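As an illustration, a very small domain coverage model, together with per-datum traces to it, might look as follows (a sketch; the dimensions, values, and field names are purely illustrative assumptions):

from collections import Counter

# A (deliberately tiny) input domain coverage model: each dimension lists the
# situations that the dataset is expected to cover.
DOMAIN_COVERAGE_MODEL = {
    "road_type": {"one-way street", "roundabout", "intersection", "highway"},
    "weather":   {"sunny", "rain", "fog"},
    "lighting":  {"day", "dusk", "night"},
}

# Per-datum traces: every image is traced to one value per dimension.
dataset_traces = {
    "img_0001.png": {"road_type": "roundabout", "weather": "sunny", "lighting": "day"},
    "img_0002.png": {"road_type": "one-way street", "weather": "rain", "lighting": "night"},
}

def coverage_report(model, traces):
    # Count how many data points trace to each situation and list the uncovered ones.
    counts = {dim: Counter() for dim in model}
    for trace in traces.values():
        for dim, value in trace.items():
            counts[dim][value] += 1
    uncovered = {dim: model[dim] - set(counts[dim]) for dim in model}
    return counts, uncovered

counts, uncovered = coverage_report(DOMAIN_COVERAGE_MODEL, dataset_traces)
# Uncovered situations hint that the dataset needs to be completed; situations with
# very many traces hint that the coverage model is too coarse for its purpose.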
Note that it is typically very hard for a requirement engineer to know beforehand which level of granularity to put in such an input domain coverage model. Actually, the level of granularity probably depends on the dataset itself, and can thus be identified only once the dataset is already (at least partially) present: this is counter-intuitive with regard to the usual notion of requirement (even though it matches the practice thereof: requirements are never perfect from the beginning, they always need iterations). However, remember that we do not focus on the order in which artifacts are delivered, but only on ensuring their mutual consistency. In this respect, it is acceptable to generate or modify such a requirement a posteriori. To find out the proper level of granularity, one shall keep in mind that the domain coverage model shall serve as a tool to analyze the dataset, by justifying why a particular datum is in there and by identifying cases where some situation might not be covered. Consequently, if too many pieces of data trace to the same environment requirement, then this environment requirement probably does not serve its purpose. Conversely, if very few pieces of data trace to one environment requirement only, then either this requirement is too specific or the dataset needs to be completed. Defining "too many" or "very few" is beyond the scope of this paper, but should of course be defined in a rigorous manner depending on the context. If the domain coverage model is defined with a very low level of granularity, then we have the above situation that traceability becomes useless because it applies equally to the entire dataset. On the other hand, if the domain coverage model is defined with a very high level of granularity, then its coverage is probably not reachable, as displayed in Fig. 4: the traceability arrow between HLR and dataset is not bidirectional. Note finally that, even though the discussion above targets especially the raw dataset, the same applies to the labels if their domain is complex enough: for instance, if the DNN shall provide the position of a pedestrian, then it is important to ensure that the domain of positions is adequately covered.

Fig. 5 updates Fig. 4 to reflect the new artifact and the corresponding traceability. The following traces remain: 1) HLR and learning configuration, 2) HLR and design architecture, 3) training dataset and learnt weights, 4) design architecture and learnt weights, 5) learning configuration and learnt weights. Even if simple to implement, a first essential trace is the one between the training dataset version and the learnt weights: indeed, it is easy in practice to lose track of which version of a dataset was used to train a given network. This trace requires no more than a unique identifier for a given version of the training dataset and a reference to this identifier in the trained DNN (see the sketch below). For more meaningful traces, one can trace these artifacts just the same way as one does in classical software engineering: trace code to requirements. Since code has a very specific structure for DNNs, we can be a bit more precise: one can try tracing neurons to requirements. For instance, we could require the DL framework to keep track of which input datum most impacted which neuron. This is precisely the approach briefly mentioned in [5]. Even though doable in theory, this approach brings nothing in practice: it is acknowledged as impossible, at least as of today, to interpret, understand or explain the role of one particular neuron.
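Coming back to the simple but essential trace between the training dataset version and the learnt weights mentioned above, even this trace can be made mechanical; a minimal sketch (the file layout and function names are our own assumptions):

import hashlib
import json
from pathlib import Path

def dataset_version_id(dataset_dir):
    # Compute a stable identifier for a dataset version by hashing its files.
    digest = hashlib.sha256()
    for path in sorted(Path(dataset_dir).rglob("*")):
        if path.is_file():
            digest.update(path.name.encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

def save_weights_with_trace(weights_file, dataset_dir):
    # Store the dataset identifier next to the trained weights as an explicit trace.
    trace = {
        "trained_on_dataset_version": dataset_version_id(dataset_dir),
        "weights_file": weights_file,
    }
    Path(weights_file + ".trace.json").write_text(json.dumps(trace, indent=2))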
Moreover, the DNN and the dataset are so large that one cannot expect to extract a posteriori any useful piece of information from such neuron-level traces (though this might change in the future if explainable AI becomes successful). Consequently, this sort of trace will not fulfil the traceability rationale: if a reviewer inspects the involved artifacts in their state at the end of the project, they will understand neither them nor their connection to previous artifacts.

Remark. The problem is new, but it also has well-known aspects: DNNs are, in essence, generated; therefore, like all generated code, they are much harder to understand and to trace than manually written code, and thus cannot be trusted without further argumentation, which is why standards like DO330 exist [34]. Classically generated code can however usually be understood, which is not the case for DNNs; this adds tremendously to the "classical" difficulty.

Instead of waiting for explainable AI to provide solutions, we suggest in this paper to trace the engineers' decisions instead of the artifacts themselves: if artifacts are not understandable, engineers' decisions shall be. How do engineers come up with architectures or learning configurations? They essentially try them, test them, and try again until they cannot improve the results anymore. In other words, these decisions are intrinsically based on trial-and-error: see Fig. 6 ("Trial-and-error") for an illustration. Trial and error is usually not considered at the level of traceability: as mentioned earlier, it is rather the opposite, one expects from traceability that we can ensure the coherence of the artifacts in their final state, i.e., independently of how they were obtained, by trial-and-error or not. However, DNN development relies so much and so intrinsically on trial-and-error that we feel it necessary to embrace this kind of activity even for traceability. Future developments might provide more predictable and reproducible approaches to the development of DNNs, in which case the approach of the present section will become obsolete. At the moment, instead of simply avoiding this reality and hoping for techniques which might never come, we make an attempt at a pragmatic approach usable today.

In the case of trial-and-error, the only justification that one can provide is that a given artifact is better than its previous version. Consequently, we propose to trace every new artifact obtained by trial-and-error to its previous version. The objective of the trace is to demonstrate that the new version improves upon the previous version. This requires storing not only the final artifact but also all the previous versions of it, or at least all those which are necessary to understand the artifact obtained at the end. It might sound like overkill, but note that it is actually standard to store previous versions of artifacts in the development of safety-critical systems (where this is often encountered under the term "configuration management") or, of course, for normal software with version control (even though version control is usually restricted to source code, not binary artifacts). Pairing these classical techniques with traceability, however, forces the engineer to do more than just tag a new version in their version control system: they must also think about the justification of the new increment.
Therefore, we suggest requiring developers to define a metric (or KPI) to measure the quality of the inference DNN, which they normally do anyway, though maybe not always formally. Such a metric should not be the loss, but should be defined according to the actual goals that one plans to achieve with the function (e.g., a car can be mistaken for a pedestrian, but not the other way around). The metric can range from simple cases like accuracy and/or recall to complex combinations of functions [4]. As a new artifact, one must then explicitly store the values of this metric for a given DNN. Of course, this value shall be traced to the weight values and the inference architecture with which it was obtained. The essential addition is then to require that every version of the network which is obtained by increment of a previous one shall be traced to the metric value obtained with this previous version: one can then easily check whether the new value is indeed an improvement (see the sketch below). The same metric should be used to measure the quality of all the evolutions of the DNN: if it changes during the course of the project or is defined only a posteriori, then one needs to re-check the entire trial-and-error chain leading to the final version of the DNN. We summarize the change of artifacts in Fig. 7, which updates Fig. 5 to integrate trial-and-error. This whole process might sound like a big hindrance for the practitioner, but note that: 1. the problem of not providing a real argumentation for a so-called improvement is actually recognized as a problem, even by the machine learning community itself (see e.g. "Explanation vs Speculation" in [36]), and 2. it is still much easier to apply than any approach currently taken in the field of explainable AI.

Our recommendation in its current state can easily be "tricked": nothing forces a developer to deliver the previous versions of their DNN; they can just claim that the version they delivered was the first version that they developed, which, by chance, was extremely good. A way to circumvent this is to impose some restrictions on the first delivered version, e.g., requiring that the first version belong to a catalogue of authorized "primitive" DNNs. A developer then cannot immediately deliver a complex DNN without tracing it to a previous primitive one. Primitive DNNs can be defined in various ways, and the definition impacts various artifacts differently: this goes beyond the present paper but shall be investigated. Imposing a primitive catalogue is still not enough: imagine that an engineer developed a specific DNN classically (i.e., without following our recommendation of tracing the trial-and-error activities). Then, instead of going through the tedious work of analyzing the chain of increments which led to their final DNN until they reach their original "simple" DNN, they can just hide all the versions between the first and the last. In such a case, the last version appears as their first improvement, which allows them to claim that, by chance, their "first" improvement was the good one. Of course, this goes completely against the intent of our approach. To circumvent this, one should also restrict the possible increments, or at least the justification for an increment. A naïve solution could be to have increments like adding only one layer at a time, having a default size for layers, etc.
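To make this suggestion concrete, a trial-and-error increment trace of the kind described above might be recorded as follows (a sketch; the field names and values are illustrative assumptions, not a prescription):

# One record per trained version of the DNN; each record traces the version to its
# parent version and to the metric values that justify the increment.
increment_trace = [
    {
        "version": "v1",
        "parent": None,  # first version, taken from a catalogue of authorized primitive DNNs
        "architecture": "primitive_cnn_3_layers",
        "dataset_version": "a3f9...",  # illustrative identifier
        "metric": {"name": "recall_pedestrian", "value": 0.71},
        "justification": "baseline from the primitive catalogue",
    },
    {
        "version": "v2",
        "parent": "v1",
        "architecture": "primitive_cnn_3_layers + 1 conv layer",
        "dataset_version": "a3f9...",
        "metric": {"name": "recall_pedestrian", "value": 0.78},
        "justification": "adding one convolutional layer improves recall over v1 (0.71 -> 0.78)",
    },
]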
Such naïve restrictions might however be too strong in practice: some DNNs only show their benefits after a certain number of layers has been added, while all the smaller versions with fewer layers are equally bad. Investigating such restrictions in detail goes beyond the present paper.

VI. FUTURE WORK

This paper is, to our knowledge, the first to provide a precise list of traces which could potentially be written down for DNNs. However, it does not address various development practices which are encountered in real developments.

Gap between trained and inference DNN. The process highlighted in Section IV assumes more or less implicitly that the interface of the trained DNN is the same as the one of the inference DNN. This assumption is often met (e.g., when using dropout, the output type of the trained and inference DNNs is the same), but not always: e.g., one might, even in a supervised context, train a sub-part of the final network in an unsupervised manner, for instance to learn valuable features (e.g., the latent space of an auto-encoder [37]). One might also train a DNN on a separate dataset, or take a DNN already trained on another dataset (e.g., ImageNet for object detection [38]) and then remove the last layers (the most task-specific ones) in order to adapt the DNN to the targeted functionality. In such cases, lots of intermediate steps are actually not immediately connected to the final task and therefore not traceable in the sense considered so far. We do not consider this sort of case in this paper, but insist on how essential they are: they reflect a reality of DL engineers which cannot be ignored.

Dataset. Another important aspect that has been ignored in this paper is the evolution of the dataset: we assumed that the dataset (or more precisely, the datasets: training, validation, testing) is fixed. As mentioned, this is common practice when considering traceability: we are normally only interested in the final artifacts (except, in our case, for trial-and-error activities). However, in reality, many systems actually gather new data along their lifetime. Therefore, one cannot ignore the fact that data evolves permanently all along the life cycle of the autonomous system. In such cases, one should consider a form of incremental traceability, i.e., how to trace new data as it comes along. In particular, training data should probably be traced differently from testing data, and one might need to argue why adding a new datum indeed provides additional valuable information. To do so, a possibility is to develop dataset coverage models. Depending on the context, one might also need to trace the dataset itself to the sources used to generate it, since they influence the dataset a lot and therefore the training: sensor calibration setup, sensor driver versions, etc.

Explainable AI. As mentioned from the beginning, we try in this paper to be independent of current approaches in the domain of explainable AI. We try in particular to be more pragmatic than academic. However, it is probably valuable to look more precisely into the various approaches to explainable AI (see, e.g., [4] for a review) to discover new opportunities for relevant fine-grained traces.

Classical AI. Various approaches attempt to mix deep learning with expert knowledge, e.g., by transferring existing expert knowledge to a neural network (e.g., transfer learning [39]), where the expert knowledge can be expressed through rules or other forms; or by intertwining machine learning with probabilistic modelling [40].
All these approaches are valuable from the point of view of AI research, but they are also very promising for safety-critical systems, because they make it possible to control the machine learning process to some extent and therefore to better argue that the final behavior is indeed satisfying. In some sense, one can interpret this as a form of explainability-by-design. It would therefore be very valuable to consider how to trace these methods, in particular the newly induced artifacts (e.g., the generative model in the case of probabilistic programming).

Intellectual property. In domains like automotive or avionics, the development of the system is extremely distributed among various stakeholders: OEMs, tier 1, tier 2, or even tier 3 suppliers. In such cases, it is essential to deliver sufficient artifacts to guarantee safety, but it is also essential that every stakeholder keeps their own intellectual property. This can be problematic for our approach to trial-and-error activities, which forces practitioners to provide artifact evolutions that might reveal their production secrets. Similar problems exist for virtual validation and can be solved with approaches like the FMI standard [41]. It should in any case be investigated for the approach presented in this paper.

VII. CONCLUSION

In this paper, we addressed the traceability of neural networks in a pragmatic manner: we first explicitly identified the challenge of tracing DNNs, then analyzed the parallels and differences between DNNs and classical software development, and accordingly proposed adaptations of the notion of trace for DNNs. Instead of blindly mapping classical software activities to DL activities, which would lead to mismatches with the actual practice of DL, we tried to embrace some of the specificities of "real-life" DL, in particular trial-and-error. We provided a solution (or the beginning thereof) which, we believe, supports the rationale of traceability while still being applicable for practitioners. The applicability might be controlled depending on the targeted safety level, as is classically done in safety-related software standards: for instance, one could require different coverage percentages for the domain coverage model depending on whether the function is ASIL A, B, C, or D.

Acknowledgments. The author thanks Frederik Diehl for his careful review and his wisdom in DL. This work is the realization of thoughts that were initiated during a World Café at the Auto.AI conference Europe, moderated by Håkan Sivencrona. Further remarks were added after presenting early results at the Vehicle Intelligence conference. Thanks go both to the organizers and participants of the conferences as well as to Håkan.

APPENDIX

A. Traceability of the dataset types

As mentioned in Section V-A, we go into more detail in this section about the traceability of the dataset to interface requirements: the dataset being basically a set of examples, it should match the types of the inputs/outputs and therefore be traced to the interface requirement. Concretely, this means tracing 1. the raw dataset, and 2. the labels. Both should be traced to the interface requirement: the raw dataset to its input part, the labels to its output part. For instance, if the interface requirement states that the input shall be images of dimension 640 × 480, then the raw dataset shall contain only such images, and shall therefore be traced to this input requirement.
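A check of this kind is easy to automate; a minimal sketch (assuming the raw data is stored as PNG files and using the Pillow library; the required dimensions are taken from the example above):

from pathlib import Path
from PIL import Image

REQUIRED_INPUT_SIZE = (640, 480)  # (width, height) from the interface requirement

def untraceable_images(raw_dataset_dir):
    # Return the images whose dimensions do not match the interface requirement.
    mismatches = []
    for path in sorted(Path(raw_dataset_dir).glob("*.png")):
        with Image.open(path) as img:
            if img.size != REQUIRED_INPUT_SIZE:
                mismatches.append((path.name, img.size))
    return mismatches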
If pre-processing is required, then there might not be a direct match, in which case the pre-processing function shall be mapped to the interface requirement. Various approaches might then be employed: the dataset itself might be traced to the pre-processing function directly, or one might introduce new requirements (called derived in the DO178-C) defining the interface of the pre-processing function and then trace the dataset to this requirement. Or one might simply consider that the interface is a design decision, not to be traced (in DO178-C terminology: the interface definition would be part of the software architecture).

In a dual manner, suppose the interface requirement specifies that the output type is a "list of 4-tuples" representing bounding boxes. Then every label is a list of bounding boxes. As previously, the dataset can therefore be traced to this type definition. However, if the structure of the output type is more complex (typically, if it contains sum types, i.e., enumerations), then traces can be defined per datum instead. Suppose, for instance, that the interface requirement (say "REQ 123") specifies the following output instead:
1) the output shall be a list of pairs,
2) where the first element is a 4-tuple like previously,
3) but the second element is a record containing the following fields: a) "pedestrian", b) "bike", c) "vehicle",
4) and where each of the fields contains a real number between 0 and 1,
5) such that the sum of all field values is 1.
In such cases, the dataset can be traced as a whole to REQ 123-1 and REQ 123-2, since those parts of the type apply to every datum uniformly (more or less like before). On the other hand, for a given image, each label can be traced to REQ 123-3a, REQ 123-3b or REQ 123-3c: for instance, if an image is labeled as containing one pedestrian, then the label "pedestrian" shall be traced to REQ 123-3a. In such cases, we can trace every datum independently (a small sketch of such a per-datum check is given at the end of this appendix). Conversely, if one element of the dataset also identifies "trucks", then this label is not traceable to the requirement, which denotes a potential addition of unintended functionality. Note that there might be reasons to have data with labels not supporting the requirements: e.g., reuse of some data used in another context, use of the same data for another function, or the desire to label more "just in case". Depending on the developed system, such cases shall probably not be forbidden, but their presence might give a hint about potential unintended functionality, which should then probably be addressed. For instance, depending on the case, the dataset should be preprocessed: the unwanted label should be erased or merged into another label, or it might even give a hint that the requirement itself is not complete. Our main point is that the lack of traceability provides a hint about potential design decisions.
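Finally, the per-datum label trace discussed in this appendix can be sketched as follows (the requirement identifiers follow the REQ 123 example above; the dictionary layout and function names are illustrative assumptions):

# Allowed label classes, each traced to the sub-requirement that defines it.
CLASS_TO_REQUIREMENT = {
    "pedestrian": "REQ 123-3a",
    "bike":       "REQ 123-3b",
    "vehicle":    "REQ 123-3c",
}

def trace_labels(labelled_dataset):
    # labelled_dataset maps an image name to its list of labels,
    # e.g. {"img_0001.png": ["pedestrian", "truck"]}.
    traces, untraceable = {}, {}
    for image, labels in labelled_dataset.items():
        traces[image] = [CLASS_TO_REQUIREMENT[l] for l in labels if l in CLASS_TO_REQUIREMENT]
        unknown = [l for l in labels if l not in CLASS_TO_REQUIREMENT]
        if unknown:
            # e.g. a "truck" label: a hint of unintended functionality or of an
            # incomplete requirement, to be addressed explicitly.
            untraceable[image] = unknown
    return traces, untraceable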
8,374
1812.06744
2904146064
[Context.] The success of deep learning makes its usage more and more tempting in safety-critical applications. However, such applications must follow historical standards (e.g., DO178, ISO26262) which typically do not envision the usage of machine learning. We focus in particular on the traceability of software artifacts, i.e., code modules, functions, or statements (depending on the desired granularity). [Problem.] Both code and requirements are a problem when dealing with deep neural networks: the code constituting the network is not comparable to classical code; furthermore, requirements for applications where neural networks are required are typically very hard to specify: even though high-level requirements can be defined, it is very hard to make such requirements concrete enough that one can qualify them as low-level requirements. An additional problem is that deep learning is in practice very much based on trial-and-error, which makes the final result hard to explain without the previous iterations. [Proposed solution.] We investigate which artifacts could play a similar role to code or low-level requirements in neural network development, and propose various traces which one could possibly consider as a replacement for classical notions. We also propose a form of traceability (and new artifacts) in order to deal with the particular trial-and-error development process for deep learning.
There have been attempts to set the grounds for a "rigorous science" of machine learning @cite_13, again very useful, but not related to traceability. Finally, many safety-related problems have been identified for AI, especially for reinforcement learning @cite_31. The identified challenges are relevant and the paper proposes a few attempted solutions. Most of them are a source of inspiration to identify sources of problems and to analyze whether those sources can be tackled with traceability (even though this turns out not really to be the case for the solutions identified in the present paper).
{ "abstract": [ "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (\"avoiding side effects\" and \"avoiding reward hacking\"), an objective function that is too expensive to evaluate frequently (\"scalable supervision\"), or undesirable behavior during the learning process (\"safe exploration\" and \"distributional shift\"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.", "As machine learning systems become ubiquitous, there has been a surge of interest in interpretable machine learning: systems that provide explanation for their outputs. These explanations are often used to qualitatively assess other criteria such as safety or non-discrimination. However, despite the interest in interpretability, there is very little consensus on what interpretable machine learning is and how it should be measured. In this position paper, we first define interpretability and describe when interpretability is needed (and when it is not). Next, we suggest a taxonomy for rigorous evaluation and expose open questions towards a more rigorous science of interpretable machine learning." ], "cite_N": [ "@cite_31", "@cite_13" ], "mid": [ "2462906003", "2594475271" ] }
Traceability of Deep Neural Networks
The success of deep learning (DL), in particular in computer vision, makes its usage more and more tempting in many applications, including safety-critical ones. However, the development of such applications must follow standards (e.g., DO178 [1], ISO26262 [2]) which typically do not envision the usage of machine learning. At the moment, practitioners therefore cannot use machine learning for safety-critical functions (e.g., ASIL-D for ISO26262, or DAL-A for DO178). There exist various attempts to address this issue, whether in standardization committees (e.g., ISO/IEC JTC 1/SC 42 or DKE/DIN [3]) or in the academic community (various initiatives towards explainable AI, e.g., [4]), but they are all far from mature, definitely not usable as of today, or do not really address the problem: most standardization approaches just try to map classical software engineering processes like the V-model one-to-one to deep learning. Furthermore, no academic approach currently provides a solution to the lack of understandability of deep neural networks (DNNs). In this paper, we try to find a pragmatic approach which focuses on artifacts rather than on processes: we are not prescriptive regarding the activities which produced these artifacts. More precisely, we focus only on artifacts which are worth being identified during the development of DNNs for the sake of traceability. Consequently, this paper does not provide a ready-made solution which a practitioner could follow one-to-one. However, it provides concrete descriptions which should at least be sufficient to provide first guidance. We restrict the scope of this paper to the following:
• Deep neural networks for supervised learning (no reinforcement learning, no unsupervised learning).
• We focus only on software, not on system: traces from software requirements to system requirements are out of scope, as well as FMEAs or failure rates.
• We do not focus on binary code or its deployment on a hardware platform.
• We assume a fixed, non-evolving dataset: this does not match most real cases in, say, autonomous driving, where data is continuously collected. Even if not continuously collected, the dataset has so much influence on the training that one can hardly ignore these evolutions for proper traceability. Still, there are already sufficiently many questions to address without considering this evolution, which is why we leave it out of focus in this paper.
• We focus essentially on functional requirements.
Lifting these restrictions is left to future work. The rest of the paper is organized as follows: Section II presents related work. Section III recalls the concept of traceability. Section IV provides a traceability-amenable presentation of deep learning. Section V contains the main contribution of this paper: it analyzes which DNN artifacts replace classical software artifacts and suggests new artifacts and traces to enable the traceability of DNNs. Section VI identifies various gaps of the present work for future research. Finally, Section VII summarizes the paper.

III. PRELIMINARY: TRACEABILITY

It is very difficult (at least nowadays) to find engineers or researchers who know both safety-aware software engineering and deep learning. This paper attempts to answer a problem which lies at the intersection of two communities and therefore tries to be self-contained for both. Consequently, we first recall the concepts and terminology related to traceability, as used in this paper.
This should be a trivial read for the safety-critical systems software engineer, but we do recommend reading it to ensure that the terminology is clear in the rest of the paper. Even though not a proper formal source, we still recommend Wikipedia [19] on this topic.

A. Artifacts

When developing classical software, the only product to deliver is executable code. One might also provide source code if the software is open source; the software itself might be part of a bigger system if it is embedded; but, all in all, from the perspective of the software engineer, one just needs to deliver some sort of executable. For safety-critical systems, this is not enough: one needs to deliver not only the executable code itself, but also a justification that the executable code indeed does what it is supposed to do or that it is resilient to faults. Such a justification is provided in the form of documents, source code, executable, etc., which are not intended for the final consumer, but for the authority (whether it is an independent authority or a company-internal one) in charge of validating the safety of the product. We call these (development) artifacts. One such essential document is the one describing requirements: requirements describe what the software is supposed to do, without providing implementation details. In many non-safety-critical applications, requirements are expressed in a very unstructured manner, e.g., in some statement of work, in an issue tracker, or in slides communicated by the client. In safety-critical applications, however, it is essential to have these requirements in a form in which they can be structured, referenced, or even categorized. For instance: functional requirements describe the expected function of a component, timing requirements describe the temporal constraints for a given function, interface requirements describe the input/output types of a component. Requirement documents found in the safety-critical industry typically use dedicated software like IBM Rational DOORS.

Example 1 (Functional requirement [20]): The [system] shall be capable of predicting the paths of the subject vehicle as well as principal other vehicles in order to identify the vehicle(s) whose path(s) may intersect with the subject vehicle's path.

Requirements are only one sort of document among many to be provided: source code, test results, development plans or any other sort of document which turns out to be necessary to justify that the final software can be used in a safety-critical system. This list is non-exhaustive and typically defined by a standard, like ISO26262 [2] or DO178C [1].

B. Traces

The delivered artifacts generally have dependencies between each other: typically, the source code should fulfill the requirements, software requirements should refine system requirements, executable code derives from source code. Keeping these dependencies implicit greatly increases the risk that a dependency be wrong or forgotten. This is the purpose of traces: to make these dependencies explicit. Every pair of artifacts is in principle subject to being traced from/to each other. In this paper we consider especially traces from code to requirements.

Example 2: As an example, consider a requirement (defined in some document, e.g., a Word document or a DOORS database) being identified with an ID, say REQ 123; take then a piece of code defining many functions, one of them, say f 456, implementing REQ 123.
Then a trace is typically nothing more than a comment just before the function simply stating [REQ 123]: 1 / / f 4 5 6 t a k e s a s a r g u m e n t s : 2 / / − x : . . . 3 / / − y : . . . 4 / / I t r e t u r n s . . . 5 / / 6 / / [ REQ 123 ] 7 i n t f 4 5 6 ( i n t x , f l o a t y ) { 8 . . . } The trace is the comment on line 6. Another typical example is a trace between a test case and a requirement: it is important to ensure that the test cases indeed support the verification of requirements and that no requirement is forgotten. Even further, it is essential to also trace the results of the tests to the test cases themselves to ensure that the tests are indeed done and maintained. Writing down a trace is in general a manual activity: engineers look up the code and the requirements and add manually the comment above. 2 C. High-vs Low-level requirements In many cases, requirements are not concrete or precise enough to be traced directly with the above level of granularity (see Example 1). Therefore, it is often recommended to first refine the requirements into more concrete requirements, which can be traced from the code. These artifacts can have different denominations. For instance, the standard for the development of software for civil avionics (DO178C [1]) names them highlevel and low-level requirements (HLR/LLR) respectively (but the concepts is transferable to other standards and domains), with the following definition for LLR: "Low-level requirements are software requirements from which Source Code can be directly implemented without further information." [1]. LLR should themselves be traced to HLR in order to have complete traceability. 3 Note that the definition of HLR and LLR is not absolutely clear: we encountered examples where some requirements were considered as high-level by a company and low-level by another. In general, refining HLR into LLR goes hand in hand with architectural decisions: the requirements can be decomposed only once the function is decomposed into smaller functions, to which one can assign more concrete requirements. This is why the DO178C, for instance, refines the HLR into two artifacts: the LLRs on one hand, and the Software Architecture on the other hand. More concretely, the software architecture defines a set of components and connections between these components -or, more precisely, a set of interfaces (i.e., data types for inputs and outputs), since the software architecture does not cover the implementation of the components. Interfaces typically contain even more information like the physical units of the types (e.g., meters, centimeters, meter per second), or, if relevant, refreshing rates. The LLRs can then be mapped to each interface. Finally, the LLR and the software architecture are the only information necessary to write down the source code. Whether defined in a requirement or separately, there is always a definition of interfaces. In the following, we will generically refer to such a definition as an interface requirement. Fig. 1 represents the artifacts mentioned above. Of course, every artifact refining a previous one shall be traced to the latter, this should be typically bi-directional: every piece of information found in a refined artifact shall be found in the corresponding refining artifact, and conversely, every piece of information -except design decisions -found in a refining artifact shall be found in the refined one. In the DO178, the software architecture is not traced back to the HLR because it is a design decision. 
The figure also presents the test artifacts: test cases shall be traced as well to requirements (high-or low-level depending on the context), and test results shall be traced to test cases. D. Rationale Understanding the rationale behind traces 1. enables to understand why it is challenging to trace DNNs, and 2. gives hints to investigate relevant alternatives to classical traces. A trace serves the purpose of ensuring that a piece of code is justified by a requirement. This is not a structured or formal justification, which are in practice seldom applicable, however it at least enforces that people think about this justification. In fact, traceability does enable to identify sources of error: when an engineer attempts but does not manage to trace a piece of code then they might indeed get aware that this code is not necessary or, even worse, that it introduces unwanted functionality. Conversely, if a requirement is not traced back by any code, then it is often a hint that the requirement has been forgotten. For the same reason, traceability is a relevant tool for assessors in order to detect potential pitfalls during development. This is what is illustrated in Fig. 1 by the bidirectional arrows for traceability: having traces syntactically on each side is easy; it is however harder to ensure coverage of traceability on both sides, e.g., all HLR are traced to some LLR and all LLR are traced back to some HLR (the latter typically does not happen since some LLRs depend on design decisions). E. Process vs Artifacts Many standards like, e.g., DO178C, do not impose an order on how artifacts shall be developed. For instance, even though code shall be traced to requirements, it does not mean that one is forced to follow a waterfall model: one might just as well work iteratively, define requirements, then code, then go back to requirements, etc. The main point of traceability is that, no matter how one reached the final state of development (e.g., iteratively or waterfall), it should be possible to justify that this final state is coherent. Consequently, one might very well develop all the artifacts without traceability, and only at the end develop the traces. 4 This is why we emphasized in introduction that this paper is not process-but artifact-oriented: we do not impose how engineers should work but only what they should deliver. IV. DEEP LEARNING ARTIFACTS This section presents the concepts and terminology related to deep learning, in a way which makes it amenable to comparison with the artifacts of the previous section. To implement a required function using a DNN, one collects a lot of data matching as an input with their corresponding expected outputs (the outputs are not collected but typically manually annotated). This data is then used by a given DL framework to train the network. Examples of such frameworks are TensorFlow [24] or PyTorch [25]. A typical example of a function where one would want to use a DNN is the following: "given an image represented by a matrix of pixels, return a list of 4-tuples (x, y, w, h) representing rectangles which contain pedestrians" (such rectangles are typically called bounding boxes). One might require to identify various classes of objects (e.g., pedestrian, car, bikes) and to associate every bounding box with a label indicating to which class the object belongs, see Fig. 2. Fig. 2. 
Bounding boxes -image taken from [26] To teach a DNN, one needs the following: • A dataset containing both the input, e.g., images, and the output, e.g., annotations denoting bounding boxes for pedestrians' positions. In the following, we will consider these two aspects separately: the raw dataset, i.e., only the input, and the labels, i.e., only the corresponding expected output. • A deep neural network architecture. Prior to learning time, one cannot really consider that there is an actual neural network, but rather a skeleton thereof: the learning process will fill in this skeleton (more specifically, the weights), and doing so will generate the neural network used for the required function (in practice, the skeleton is actually randomly pre-filled and the random weights are progressively changed during learning). Such a skeleton is however more than just a box: the deep learning engineer decides on the shape of this skeleton, which does influence the learning process. A DNN architecture is typically designed as a layer-based architecture where the input, represented as a (potentially huge) vector or matrix (e.g. 640 × 480 × 3 for an image of width 640, height 480 and with three color components R, G and B), flows through various massively parallel operations transforming it until the output has the expected form (e.g., a vector containing 3 real numbers between 0 and 1 indicating each the confidence of the image containing a pedestrian, a car or nothing). The engineering then amounts to designing this architecture, meaning defining these operations: their types and the dimensions they transform their input into. See Fig. 3 for an example. • A loss function. To train a DNN, one must have a way to tell the machine learning framework when the DNN is wrong, in order to correct it. In theory, this is easy: if the network provides a wrong answer to an input for which we know the correct answer, then we just tell the framework what the right answer was. However, in practice, the functions addressed with DNN typically output a confidence rather than a perfect answer. One should therefore be more subtle than just telling "right" or "wrong". Instead one can provide a positive real number, basically a grade, telling how wrong the system is. If the number is null, then there is no error; otherwise, the higher the number, the more important the error. 5 Consequently, a loss function takes the expected and actually obtained results of the DNN as inputs, and returns a real number stating how bad the difference between both is. Mathematically, distances make good candidates for losses. Example 3: Example of a loss function: L(θ) = y pos −ŷ pos 2 − c∈C log(y c ).ŷ c where: θ denotes the set of all parameters of the DNN (i.e., its weights), 6 y pos (resp.ŷ pos ) denote the position of an inferred bounding box (resp. the actual, labelled, position of the bounding box -ground truth), -. 2 denotes the L2 norm, -C is the set of classes considered in the problem at hand, e.g., {pedestrian, car, cyclist}, y c (resp.ŷ c ) denotes the class assigned to the inferred bounding box (resp. the actual, labelled, class of the bounding box), via a so-called one-shot encoding, i.e., a vector of the size the number of classes, where each element contains a real number between 0 and 1 assessing the confidence of belonging to the corresponding class. We leave it to the reader to observe the variation of the function depending on the error of the network (or lack thereof). 
5 In many cases, the objective is only to minimize the loss, not necessarily to nullify it.
6 Contrary to what one would mathematically expect, θ is present on the right-hand side, but only implicitly: rigorously, one should write y as the result of applying the function represented by the DNN, as parametrized by θ. We follow however the conventions used in the classical DL literature.

In practice the loss function is expressed as code: this code does not go into the final product but controls the learning of the DNN within the DL framework. As we will see, it will be essential for the rest of the paper not just to understand the artifacts themselves, but how they are developed. Typically, the sequence of decisions is as follows:
1) Collect data and, possibly, preprocess it: re-shape the information, fix missing values, extract features, or achieve much more advanced tasks like matching label ground truth boxes to so-called prior boxes [28] (we do not focus on this activity in this paper). Delivered artifacts: raw dataset, preprocessing functions.
2) Annotate the raw data. Delivered artifact: labelled dataset.
3) Split the dataset into training, validation and testing sets. Delivered artifacts: labelled training-, validation- and testing-datasets. The difference between the validation and testing datasets is that, after evaluating the DNN on the validation dataset, the engineer will take the result into account as feedback to improve their design. When this is done and no more correction is planned, the engineer will assess the quality of their DNN with the testing dataset. This should not entail further iterations of the design (see step 10). 7
4) Design the DNN architecture. Delivered artifact: DNN architecture (typically as python code making use of the selected framework).
5) Define the "learning configuration": this includes picking a loss, picking learning parameters (e.g., dropout [29], learning rate, maximum learning steps), or search strategies for these hyper-parameters (e.g., grid or random search), or even strategies involving the exploration of the dataset itself (e.g., curriculum learning). This learning configuration is a placeholder artifact for all aspects which potentially influence the learning process, e.g., the used version of the various software dependencies or the used random seeds. We do not make this list exhaustive since this is not the focus of this paper. Overall, the configuration shall be understood as the minimal piece of information such that the tuple [training set, architecture, learning configuration] characterizes the learned DNN uniquely. This requirement aims at ensuring the reproducibility of the learning. 8 Delivered artifact: typically not "one" artifact but rather various pieces scattered across different artifacts: e.g., fine-tuning parameters stored in code, the loss having its own source file, etc. Ideally, this could be gathered in some configuration files, as provided by some DL management platforms [30].
6) Train the DNN architecture with the loss function on the training dataset using a selected deep learning framework.
7 Note that the terms testing dataset and validation dataset are sometimes exchanged in the literature.
8 We are on purpose quite vague on this matter because reproducibility is way harder to reach than one might think: not only do random seeds influence the learning process, but also potentially the operating system, the library versions and even the hardware, which might, e.g., swap instructions differently, nondeterministically.
Delivered artifact: (trained) weight values. Note that the artifact is not the code, which, per se, is not different before and after training: the learning process alters the values of the variables used by the code, not the code itself. Consequently, the artifact is actually the resulting information stored in those variables.
7) Post-process the trained DNN (if necessary): many learning strategies require a change between the learning and inference phases (e.g., dropout is applied only during learning). Delivered artifact: inference architecture. In that case, it is the opposite of the previous step: the code changes but the data does not. Note however that in most cases, the switch from the learning architecture to the inference one is so standard and systematic that there is no need for any separate artifact: typically, a DL framework will simply provide an optional argument which one shall set to true in learning mode or to false in inference mode.
8) Test the resulting DNN on the validation dataset. Delivered artifact: test results (e.g., a metric like accuracy in the form of a number between 0 and 1), typically stored in a variable of the python runtime, or in a log file, or in a CI/CD system, if any.
9) Change the architecture or the learning configuration (steps 4-5) based on the results and repeat steps 6-9 until the targeted objectives are reached.
10) Assess the quality of the inference DNN with the test set. Delivered artifact: final validation results.
11) Depending on the used framework, serialize/export the network in order to use it in production, e.g., to be linked from a C++ source file, and compile it. Delivered artifact: executable code usable in production.

Quite similarly to code development, the process yielding the finally delivered DNN is a typical trial-and-error process. There is a major difference though: code resulting from a trial-and-error process can still be understood. This is typically not the case for DNNs: often, the only way to understand why a given architecture was finally obtained is by looking back at the changes which led to it. This has of course a big impact on the justifiability of a DNN and therefore on its traceability. We will get back to that point in Section V-B. Note that steps 1 and 7 are not duals: the former is a preprocessing of the data, which must therefore also happen at runtime; while the latter is a post-processing of the DNN itself, which therefore happens once and for all at design time and is not repeated at runtime. Fig. 4 summarizes the DL artifacts in a similar way to Fig. 1. Note again that this does not denote a process, but really a set of delivered artifacts: no sequence is imposed on the order in which the artifacts are developed. In particular, it is strongly to be expected that, once a developer decides to implement a function using a DNN, additional requirements (called "derived" in the DO178) might have to be added a posteriori: the choice of using DL as a technology might indeed entail new considerations at the requirement level.

How can we map the DL artifacts presented in Section IV to the classical ones presented in Section III? First notice that both sections are not exactly targeting the same level of granularity: Section IV did not mention requirements, but there are of course requirements when developing DNNs for safety-critical systems. Contrarily to software however, we believe that requirements implemented with DNNs generally cannot be refined into a software architecture and an LLR.
This is not particularly a property of DNNs per se, but rather of the functions for which it makes sense to use DNNs: most applications for which DNNs are used successfully compared to classical methods, are applications where humans have difficulty decomposing the problem in hierarchical simpler sub-problems. One can even interpret the success of DNNs precisely under this angle: the learning activity does not just learn a solution to the problem, but also learns its own decomposition of the problem. With respect to requirements, this supports the claim that applications where DNNs are useful are precisely those where it is very hard to come up with a decomposition of HLR into LLR: refining HLR into LLR is intrinsically difficult -otherwise one could most probably use a classical (i.e., non-DNN) method. Consequently the only artifacts between the HLR and the source code are all the inputs to the DL framework: architecture, learning configuration and, of course, training dataset. High-level tests are now replaced by the testing/validation set: the name differs but the role is the same. Let us analyze how artifacts from Fig. 1 map to the ones of Fig. 4 in order to highlight similarities and differences: • System requirements, HLR, tests cases, test results and executable code are found in both cases. • As hinted by Fig. 4, the source code is still present but it is split between the architecture part and the weights part. • As mentioned above, the LLR and software architecture cannot really be mapped to the DL artifacts, unless one maps them to the complete design block, which does not bring anything. When it comes to traceability, traces between preserved artifacts are maintained. Traces between source code and object code can also be considered as preserved since these traces basically amount to trace the code generated by the compiler back to the source code: this is not different for DNNs and for classical software. However traces between HLR and Design, and Design and Source code shall be adapted. The next sections are dedicated precisely to these traces. More precisely, we need to consider traces between: 1) HLR and training dataset, 2) HLR and learning configuration, 3) HLR and architecture, 4) training dataset and source code, 5) learning configuration and source code, 6) architecture and source code. For the source code, one can differentiate inference architecture and learnt weights. Inference architecture simply can be traced trivially to the design architecture (when it is not the same artifact anyway as mentioned earlier) and to no other design artifact. The next section deals with traces between HLR and training dataset, the following section deals with all other traces. A. Traceability between HLR and training dataset Traces between HLR and dataset may seem simple: one just needs to trace every element of the dataset to the HLR. Some aspects are easy to think of tracing: the type of the raw data can be traced to the input definition in the interface requirement, or the type of the labels can be traced to the output definition. This sort of traceability can be targeted but we believe that it is too trivial to support the identification of any relevant problems: type mismatches between dataset and interface are not real sources of problem in practice. 
In addition, any such problem typically breaks anyway during integration of the DNN components with the rest of the system, so that there is no real possibility of encountering such an error when delivering a safety-critical system. We still go into more detail about it in Appendix A in case the reader finds the problem relevant to their particular use case. Let us focus rather on the traceability of every piece of data to HLR, e.g., "The function shall recognize obstacles in urban context", "The function shall recognize obstacles in nice weather". In principle, it is simple to trace the dataset to such requirements: e.g., pictures in the dataset taken in nice weather shall be traced to the corresponding requirement, pictures in urban context as well, etc. However, the sort of information usually found in an HLR often applies uniformly to all elements of a dataset: e.g., if the function shall work only in urban context then all images of the dataset will be urban. This would entail tracing the entire dataset to the HLR, which would be so general that it would not really support the rationale of tracing: tracing the entire dataset to the HLR does not really provide a justification of this particular dataset. Instead, one expects every datum to be justified individually and therefore to be traced potentially differently from another datum. At that stage, we recommend developing the interface requirement much further than is usually done: in addition to the types and units of the inputs/outputs, it should describe in a detailed manner the output and - especially - input domain, with the purpose of defining what is an acceptable coverage of the domain. This can be done either as a requirement among the HLR or as a separate artifact, which we call the "domain coverage model". Getting back to the example above, "urban" is not enough: one should actually detail which different forms of environment are encountered in an urban environment, e.g., "one-way street", "roundabout", etc. (of course, in that case, the input domain coverage model connects strongly to the Operational Design Domain - ODD - but it need not be the case if the function to be performed by the DNN does not directly work with data coming from the sensors). These should themselves be traced towards higher-level requirements, e.g., system-level requirements: this might even be a useful tool to identify misunderstandings regarding the environment, e.g., imagine a portion of highway which is within the limits of a city: is it urban or not? If working in a very structured context, e.g., where model-based requirements engineering is used (see, e.g., [31]), the domain coverage model could really be formalized to some extent, via coverage criteria on the domain coverage model. In such cases, this activity comes in close connection with model-based testing [32], the main difference with these classical approaches being merely the size of the model, which is typically huge in DL, much bigger than for classical approaches. Similar approaches have been carried out in machine learning in the literature, see e.g., [33], to a much smaller and less systematic extent. Note finally that, from a control engineering perspective, this is a bit similar to modelling the plant of a controller. Contrary to a controller, however, the resulting NN is not analyzable. The domain coverage model thus plays an even more important role, which therefore justifies that it becomes a first-class citizen w.r.t. traceability.
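To illustrate what a machine-readable domain coverage model and the associated coverage check could look like, here is a hypothetical sketch; the dimensions, values and field names are invented for the example and are not a prescribed format.

```python
# Hypothetical domain coverage model: each dimension lists the situations
# the dataset is expected to cover.
DOMAIN_COVERAGE_MODEL = {
    "environment": {"one-way street", "roundabout", "intersection", "highway within city limits"},
    "weather": {"sunny", "overcast", "rain"},
    "illumination": {"day", "dusk", "night"},
}

def coverage_report(dataset_metadata):
    """dataset_metadata: one dict per datum, e.g.
    {"id": "img_0001", "environment": "roundabout", "weather": "rain", "illumination": "day"}.
    Returns, per dimension, how many data trace to each situation and which
    situations are not covered by any datum."""
    report = {}
    for dimension, expected_values in DOMAIN_COVERAGE_MODEL.items():
        counts = {value: 0 for value in expected_values}
        for datum in dataset_metadata:
            value = datum.get(dimension)
            if value in counts:
                counts[value] += 1
        report[dimension] = {
            "counts": counts,
            "uncovered": sorted(v for v, c in counts.items() if c == 0),
        }
    return report
```

Such a report makes the bidirectional reading of the trace concrete: every datum is traced to one value per dimension, and every value of the model must be traced back to "enough" data, where "enough" is defined per project.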
Note that it is typically very hard for a requirement engineer to know beforehand which level of granularity to put in such an input domain coverage model. Actually the level of granularity probably depends on the dataset itself, and can thus be identified only once the dataset is already (at least partially) present: this is counter-intuitive regarding the usual notion of requirement (even though it matches the practice thereof: requirements are never perfect from the beginning, they always need iterations). However remember that we do not focus on the order in which artifacts are delivered but only on ensuring their mutual consistency. In this respect, it is acceptable to generate or modify a posteriori such a requirement. 9 To find out the proper level of granularity, one shall keep in mind that such a domain coverage model shall serve as a tool to analyze the dataset by justifying why a particular datum is in there, and identifying cases where some situation might not be covered. Consequently, if too many pieces of data are tracing to the same environment requirement, then this environment requirement probably does not serve its purpose. Conversely, if very few pieces of data trace to one environment requirement only, then either this requirement is too specific or the dataset needs to be completed. Defining "too many" or "very few" is beyond the scope of this paper, but should be of course defined in a rigorous manner depending on the context. If the domain coverage model is defined with a very lowlevel of granularity, then we have the above situation that traceability becomes useless because applying equally to the entire dataset. On the other hand, if the domain coverage model is defined with a very high-level of granularity, then its coverage is probably not reachable as displayed in Fig. 4: the traceability arrow between HLR and dataset is not bidirectional. Note finally that, even though the discussion above targets especially the raw dataset, the same applies to the labels if their domain is complex enough: for instance, if the DNN shall provide the position on a pedestrian, then it is important to ensure that the domain of positions is adequately covered Fig. 5 updates Fig. 4 to reflect the new artifact and the corresponding traceability. The following traces remain: 1) HLR and learning configuration, 2) HLR and design architecture, 3) training dataset and learnt weights, 4) design architecture and learnt weights, 5) learning configuration and learnt weights. Even if simple to implement, a first essential trace is the one between the training dataset version and the learnt weights: indeed, it is easy in practice to lose track of which version of a dataset was used to train a given network. This trace requires no more than a unique identifier for a given version of the training dataset and a reference to this identifier in the trained DNN. For more meaningful traces, one can trace these artifacts just the same way as one does for classical software engineering: trace code to requirements. Since code has a very specific structure for DNNs, we can be a bit more precise: one can try tracing neurons to requirements. For instance, we could impose on the DL framework to keep a trace of which input datum impacted more which neuron. This is precisely the approach shortly mentioned in [5]. Even though doable in theory, this approach brings nothing in practice, it is acknowledged as impossible -at least as of today -to interpret, understand or explain the role of one particular neuron. 
In addition, the sizes of the DNN and of the dataset are so huge that one cannot expect to extract any useful piece of information out of them a posteriori (though this might change in the future if explainable AI becomes successful). Consequently, this sort of trace will not fulfil the traceability rationale: if a reviewer inspects the involved artifacts in their state at the end of the project, they will understand neither them nor their connection to previous artifacts.

Remark. Note that the problem is new, but also has well-known aspects to it: DNNs are, in essence, generated; therefore, like all generated code, they are much harder to understand and to trace than manually written code and thus cannot be trusted without further argumentation - which is why standards like DO330 exist [34]. Classically generated code can however usually be understood, which is not the case for DNNs, adding tremendously to the "classical" difficulty.

Instead of waiting for explainable AI to provide solutions 10, we suggest in this paper to trace the engineers' decisions instead of the artifacts themselves: if artifacts are not understandable, engineers' decisions shall be. How do engineers come up with architectures or learning configurations? They essentially try them, test them, and try again until they cannot improve the results anymore. In other words, these decisions are intrinsically based on trial-and-error: see Fig. 6 for an illustration.

Fig. 6. Trial-and-error

Trial and error is usually not considered at the level of traceability: as mentioned earlier, it is rather the opposite; one expects from traceability that we can ensure the coherence of the artifacts in their final state, i.e., independently of how they were obtained, by trial-and-error or not. However, DNN development relies so much and so intrinsically on trial-and-error that we feel it necessary to embrace this kind of activity even for traceability. Future developments might provide more predictable and reproducible approaches to the development of DNNs, in which case the approach of the present section will become obsolete. At the moment, instead of simply avoiding this reality and hoping for techniques which might never come, we make an attempt at a pragmatic approach usable today. In the case of trial-and-error, the only justification that one can provide is that a given artifact is better than its previous version. Consequently, we propose to trace every new artifact obtained by trial-and-error to its previous version. The objective of the trace is to demonstrate that the new version improves upon the previous version. This requires storing not only the final artifact but also all the previous versions of it - or at least all those which are necessary to understand the artifact obtained at the end. It might sound like overkill, but note that it is actually standard to store previous versions of artifacts in the development of safety-critical systems (where it is often encountered under the term "configuration management") or, of course, for normal software with version control (even though it is usually restricted to source code: not for binary artifacts). Pairing these classical techniques with traceability however forces the engineer to do more than just tagging a new version in their version control system: they must also think about the justification of the new increment.
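As a purely illustrative sketch of what such an increment trace could look like - anticipating the quality metric discussed next, and with field names that are our own - one could keep one record per trained version under configuration management:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DnnVersionTrace:
    version_id: str                    # e.g., hash of [training set, architecture, learning configuration]
    parent_version_id: Optional[str]   # None only for the initial ("primitive") version
    change_summary: str                # what was changed with respect to the parent
    metric_value: float                # project-wide quality metric obtained on the validation set
    justification: str                 # why the change was expected to be an improvement

def non_improving_increments(traces: List[DnnVersionTrace]) -> List[Tuple[str, str]]:
    """Return the (version, parent) pairs where the new version does not improve on its parent."""
    by_id = {t.version_id: t for t in traces}
    return [(t.version_id, t.parent_version_id)
            for t in traces
            if t.parent_version_id is not None
            and t.metric_value <= by_id[t.parent_version_id].metric_value]
```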
Therefore, we suggest requiring developers to define a metric (or KPI) to measure the quality of the inference DNN - which they normally do anyway, though maybe not always formally. Such a metric should not be the loss but should be defined according to the actual goals that one plans to achieve with the function (e.g., a car can be mistaken for a pedestrian, but not the other way around). The metric can range from simple cases like accuracy and/or recall to complex combinations of functions [4]. As a new artifact, one must then explicitly store the values of this metric for a given DNN. Of course this value shall be traced to the weight values and inference architecture with which it was obtained. The essential addition is then to require that every version of the network which is obtained by increment of a previous one shall be traced to the metric value obtained with this previous version: one can then easily check whether the new value is indeed an improvement. This metric should be the same for measuring the quality of all the evolutions of the DNN. If it changes during the course of the project or is defined only a posteriori, then one needs to re-check the entire trial-and-error chain leading to the final version of the DNN. We summarize the change of artifacts in Fig. 7.

Fig. 7. Fig. 5 updated to integrate trial-and-error

This whole process might sound like a big hindrance for the practitioner, but note that: 1. the problem of not providing a real argumentation for a so-called improvement is actually recognized as a problem, even by the machine learning community itself (see e.g. "Explanation vs Speculation" in [36]), and 2. it is still much easier to apply than any approach currently taken in the field of explainable AI. Our recommendation in its current state can easily be "tricked": nothing forces a developer to deliver the previous versions of their DNN; they can just claim that the version they delivered was the first version that they developed, which, by chance, was extremely good. A way to circumvent this is to impose some restrictions on the first delivered version, e.g., requiring that the first version belong to a catalogue of authorized "primitive" DNNs. A developer cannot then just immediately deliver a complex DNN without tracing it to a previous primitive one. Primitive DNNs can be defined in various ways and the definition impacts various artifacts differently: this goes beyond the present paper but shall be investigated. Imposing a primitive catalogue is still not enough: imagine that an engineer developed a specific DNN classically (i.e., without following our recommendation of tracing the trial-and-error activities). Then, instead of going through the tedious work of analyzing the chain of increments which led to their final DNN until they reach their original "simple" DNN, they can just hide all the versions between the first and the last. In such a case the last version displays as their first improvement, which allows them to claim that, by chance, their "first" improvement was the good one. Of course, this goes completely against the intent of our approach. To circumvent this, one should also restrict possible increments, or at least the justification for one increment. A naïve solution could be to have increments like adding only one layer at a time, having a default size for layers, etc.
This might however be too restrictive in practice: some DNNs only show their benefits after having added a certain number of layers, but all the smaller versions with less layers are all equally bad. Investigating such restrictions in detail goes beyond the present paper. VI. FUTURE WORK This paper is, to our knowledge, the first to provide a precise list of traces which could potentially be written down for DNN. However, it does not address various development practices, which are encountered in real developments. Gap between trained and inference DNN. The process highlighted in Section IV assumes more or less implicitly that the interface of the trained DNN is the same as the one of the inference DNN. This assumption is often met (e.g., when using dropout the output type of the trained and inference DNN is the same) but not always: e.g., one might, even in a supervised context, train a sub-part of the final network in an unsupervised manner, for instance to learn valuable features (e.g., latent space of an auto-encoder [37]). One might also train a DNN on a separate dataset or take a DNN already trained on another dataset (e.g., ImageNET for object detection [38]) then remove the latest layers (the most task-specific ones) to only adapt the DNN to the targeted functionality. In such cases, lots of intermediate steps are actually not immediately connected to the final task and therefore not traceable in the sense considered so far. We do not consider this sort of cases in this paper but insist on how essential they are: they reflect a reality of DL engineers which cannot be ignored. Dataset. Another important aspect that has been ignored in this paper, is the evolution of the dataset: we assumed that the dataset (or more precisely, the datasets: training, validation, testing) is fixed. As mentioned, this is common practice when considering traceability: we are normally only interested in the final artifacts (except in our case, exceptionally, for trial-anderror activities). However, in reality, many systems actually gather new data along their lifetime. Therefore, one may not ignore the fact that data evolves permanently all along the life cycle of the autonomous system. In such cases, one should consider a form of incremental traceability, i.e., how to trace new data as it comes along. One should especially probably trace differently training data from testing data. In particular, one might need to argue why adding a new datum indeed provides additional valuable information. To do so, a possibility is to develop dataset coverage models. Depending on the context, one might need to trace the dataset itself to the sources used to generate it since they influence the dataset a lot and therefore the training: sensors calibration setup, sensor driver versions, etc. Explainable AI. As mentioned from the beginning, we try in this paper to be independent of current approaches in the domain of explainable AI. We try in particular to be more pragmatic than academic. However, it is probably valuable to look more precisely into various approaches of explainable AI (see, e.g., [4] for a review) to discover new opportunities for relevant fine-granular traces. Classical AI. Various approaches attempt to mix deep learning with expert knowledge, e.g., by transferring existing expert knowledge to a neural network (e.g., transfer learning [39]) where the expert knowledge can be expressed through rules or other forms; or by intertwining machine learning with probabilistic modelling [40]. 
All these approaches are valuable from the point of view of AI research, but they are also very promising for safety-critical systems because they allow to control the machine learning process to some extent and therefore to argue better that the final behavior is indeed satisfying. In some sense, one can interpret this as a form of explainability-by-design. It would therefore be very valuable to consider how to trace these methods, in particular the newly induced artifacts (e.g., generative model in the case of probabilistic programming). Intellectual property. In domains like automotive or avionics, the development of the system is extremely distributed among various stakeholders: OEMs, tier 1, tier 2, or even tier 3 suppliers. In such cases, it is essential to deliver sufficient artifacts to guarantee safety, but it is also essential that every stakeholder keep their own intellectual property. This can be problematic for our approach to trial-and-error activities which forces practitioners to provide artifact evolutions which might reveal their production secrets. Similar problems exist for virtual validation and can been solved with approaches like the FMI standard [41]. It should in any case be investigated for the approach presented in this paper. VII. CONCLUSION In this paper, we addressed the traceability of neural network in a pragmatic manner: we first explicitly identified the challenge of tracing DNNs, then analyzed the parallels and differences between DNNs and classical software development, and proposed accordingly adaptations of the notion of trace for DNNs. Instead of blindly mapping classical software activities to DL activities, which would lead to mismatches with the actual practice of DL, we tried to embrace some of the specificities of "real-life" DL, in particular trial-anderror. We provided a solution (or the beginning thereof), which we believe supports both the rationale of traceability, while still being applicable for practitioners. The applicability might be controlled depending on the targeted safety level, as is classically done in safety-related software standards: for instance, one could require different coverage percentages for the domain coverage model whether the function is ASIL A, B, C, or D. Acknowledgments. The author thanks Frederik Diehl for his careful review and his wisdom in DL. This work is the realization of thoughts that were initiated during a World Café at the Auto.AI conference Europe, moderated by Håkan Sivencrona. Further remarks were added after presenting early results at the Vehicle Intelligence conference. Thanks go both to the organizers and participants of the conferences as well as to Håkan. APPENDIX A. Traceability of the dataset types As mentioned in Section V-A, we go in this section more in detail about the traceability of dataset to interface requirements: the dataset being basically a set of examples, it should match the types of the inputs/outputs and therefore be traced to the interface requirement. Concretely, this means tracing 1. the raw dataset, and 2. the labels. Both should be traced to the interface requirements: the raw dataset to the input part of it, the labels to its output part. For instance, if the interface requirement states that the input shall be images of dimension 640 × 480, then the raw dataset shall contain only such images, and shall therefore be traced to this input requirement. 
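As a small, hypothetical illustration of this direct-match check (the required shape and the function name are ours, not the standard's):

```python
# Interface requirement (illustrative): inputs shall be RGB images of dimension 640 x 480.
REQUIRED_INPUT_SHAPE = (480, 640, 3)   # height, width, colour channels

def untraceable_inputs(raw_dataset):
    """raw_dataset: iterable of (datum_id, image) pairs, image being an array-like with a .shape.
    Returns the ids of data whose type does not match the input part of the interface
    requirement, i.e., data that cannot be traced to it directly."""
    return [datum_id for datum_id, image in raw_dataset
            if tuple(image.shape) != REQUIRED_INPUT_SHAPE]
```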
In case pre-processing is required, then there might not be a direct match, in which case the pre-processing function shall be mapped to the interface requirement. Various approaches might then be employed: the dataset itself might be traced to the post-processing function directly, or one might introduce new requirements (called derived in the DO178-C) defining the interface of the postprocessing function and then trace the dataset to this requirement. Or one might simply consider that the interface is a design decision, not to be traced (in DO178-C terminology: the interface definition would be part of the software architecture). In a dual manner, suppose the interface requirement specifies that the output type is "list of 4-tuples" -representing bounding boxes. Then every label is a list of bounding boxes. Like previously, the dataset can therefore be traced to this type definition. However, if the structure of the output type is more complex (typically, if it contains sum types, i.e., enumerations), then traces can be defined per datum instead. Suppose for instance, that the interface requirement (say "REQ 123") specifies the following output instead: 1) output shall be a list of pairs, 2) where the first element is a 4-tuple like previously, 3) but the second element is a record containing the following fields: a) "pedestrian", b) "bike", c) "vehicle", 4) and where each of the fields contain a real between 0 and 1, 5) such that the sum of all field numbers is 1. In such cases, the dataset can be traced as a whole to REQ 123-1 and REQ 123-2 since those parts of the type apply to every datum uniformly (more or less like before). On the other hand, for a given image, each label can be traced to REQ 123-3a, REQ 123-3b or REQ 123-3c: for instance, if an image is labeled as containing one pedestrian, and the label "pedestrian" shall be traced to REQ 123-3a. In such cases, we can trace every datum independently. 11 Conversely, if one element of the dataset also identifies "trucks", then this label is not traceable to the requirement, which denotes a potential addition of unintended functionality. Note that there might be reasons why wanting to have data with labels not supporting the requirements: e.g., reuse of some data used in another context, use of the same data for another function, or desire to label more "just in case". Depending on the developed system, such cases shall probably not be forbidden, but their presence might give a hint about potential unintended functionality, which should then probably be addressed. For instance, depending on the case, the dataset should be preprocessed: the unwanted label should be erased or merged into another label, or maybe even gives hint that the requirement itself is not complete. Our main point is that the lack of traceability provides a hint about potential design decisions.
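To make the per-datum tracing of labels concrete, here is a hypothetical sketch; the requirement identifiers follow the REQ 123 example above, and the function is only an illustration of the bookkeeping involved:

```python
# Mapping from the sub-clauses of REQ 123-3 to the labels they allow (illustrative).
REQ_123_3_LABELS = {
    "REQ 123-3a": "pedestrian",
    "REQ 123-3b": "bike",
    "REQ 123-3c": "vehicle",
}
ALLOWED_LABELS = set(REQ_123_3_LABELS.values())

def trace_labels(labelled_dataset):
    """labelled_dataset: iterable of (datum_id, labels) pairs, labels being a list of class names.
    Returns per-datum traces to requirement sub-clauses, plus the labels that cannot be
    traced to any requirement (a hint of potential unintended functionality)."""
    traces, untraceable = {}, {}
    for datum_id, labels in labelled_dataset:
        traces[datum_id] = [req for req, cls in REQ_123_3_LABELS.items() if cls in labels]
        extra = [cls for cls in labels if cls not in ALLOWED_LABELS]
        if extra:
            untraceable[datum_id] = extra   # e.g., a "truck" label
    return traces, untraceable

# usage: img_2 carries a "truck" label that no requirement justifies
traces, untraceable = trace_labels([("img_1", ["pedestrian"]), ("img_2", ["truck", "vehicle"])])
```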
8,374
1907.02178
2954739441
Firms implementing digital advertising campaigns face a complex problem in determining the right match between their advertising creatives and target audiences. Typical solutions to the problem have leveraged non-experimental methods, or used "split-testing" strategies that have not explicitly addressed the complexities induced by targeted audiences that can potentially overlap with one another. This paper presents an adaptive algorithm that addresses the problem via online experimentation. The algorithm is set up as a contextual bandit and addresses the overlap issue by partitioning the target audiences into disjoint, non-overlapping sub-populations. It learns an optimal creative display policy in the disjoint space, while assessing in parallel which creative has the best match in the space of possibly overlapping target audiences. Experiments show that the proposed method is more efficient compared to naive "split-testing" or non-adaptive "A/B/n" testing based methods. We also describe a testing product we built that uses the algorithm. The product is currently deployed on the advertising platform of JD.com, an eCommerce company and a publisher of digital ads in China.
There is a mature literature on successful applications of bandits in web content optimization (e.g., @cite_5 , @cite_0 , @cite_1 , @cite_3 , @cite_9 ). This paper belongs to a sub-stream of this work that has focused on using bandits for controlled experiments on the web. The closest papers to our work are the complementary papers by @cite_4 , @cite_8 and @cite_6 who propose using bandit experiments to evaluate creatives for targeted advertising, without focusing explicitly on the problem addressed here of comparing target audiences.
{ "abstract": [ "The modern service economy is substantively different from the agricultural and manufacturing economies that preceded it. In particular, the cost of experimenting is dominated by opportunity cost rather than the cost of obtaining experimental units. The different economics require a new class of experiments, in which stochastic models play an important role. This article briefly summarizes multi-armed bandit experiments, where the experimental design is modified as the experiment progresses to reduce the cost of experimenting. Special attention is paid to Thompson sampling, which is a simple and effective way to run a multi-armed bandit experiment. Copyright © 2015 John Wiley & Sons, Ltd.", "Firms using online advertising regularly run experiments with multiple versions of their ads since they are uncertain about which ones are most effective. Within a campaign, firms try to adapt to intermediate results of their tests, optimizing what they earn while learning about their ads. But how should they decide what percentage of impressions to allocate to each ad? This paper answers that question, resolving the well-known \"learn-and-earn'' trade-off using multi-armed bandit (MAB) methods. The online advertiser's MAB problem, however, contains particular challenges, such as a hierarchical structure (ads within a website), attributes of actions (creative elements of an ad), and batched decisions (millions of impressions at a time), that are not fully accommodated by existing MAB methods. Our approach captures how the impact of observable ad attributes on ad effectiveness differs by website in unobserved ways, and our policy generates allocations of impressions that can be used in practice. We implemented this policy in a live field experiment delivering over 700 million ad impressions in an online display campaign with a large retail bank. Over the course of two months, our policy achieved an 8 improvement in the customer acquisition rate, relative to a control policy, without any additional costs to the bank. Beyond the actual experiment, we performed counterfactual simulations to evaluate a range of alternative model specifications and allocation rules in MAB policies. Finally, we show that customer acquisition would decrease about 10 if the firm were to optimize click through rates instead of conversion directly, a finding that has implications for understanding the marketing funnel.", "Applications and systems are constantly faced with decisions that require picking from a set of actions based on contextual information. Reinforcement-based learning algorithms such as contextual bandits can be very effective in these settings, but applying them in practice is fraught with technical debt, and no general system exists that supports them completely. We address this and create the first general system for contextual learning, called the Decision Service. Existing systems often suffer from technical debt that arises from issues like incorrect data collection and weak debuggability, issues we systematically address through our ML methodology and system abstractions. The Decision Service enables all aspects of contextual bandit learning using four system abstractions which connect together in a loop: explore (the decision space), log, learn, and deploy. Notably, our new explore and log abstractions ensure the system produces correct, unbiased data, which our learner uses for online learning and to enable real-time safeguards, all in a fully reproducible manner. 
The Decision Service has a simple user interface and works with a variety of applications: we present two live production deployments for content recommendation that achieved click-through improvements of 25-30 , another with 18 revenue lift in the landing page, and ongoing applications in tech support and machine failure handling. The service makes real-time decisions and learns continuously and scalably, while significantly lowering technical debt.", "Thompson sampling is one of oldest heuristic to address the exploration exploitation trade-off, but it is surprisingly unpopular in the literature. We present here some empirical results using Thompson sampling on simulated and real data, and show that it is highly competitive. And since this heuristic is very easy to implement, we argue that it should be part of the standard baselines to compare against.", "", "Online A B tests play an instrumental role for Internet companies to improve products and technologies in a data-driven manner. An online A B test, in its most straightforward form, can be treated as a static hypothesis test where traditional statistical tools such as p-values and power analysis might be applied to help decision makers determine which variant performs better. However, a static A B test presents both time cost and the opportunity cost for rapid product iterations. For time cost, a fast-paced product evolution pushes its shareholders to consistently monitor results from online A B experiments, which usually invites peeking and altering experimental designs as data collected. It is recognized that this flexibility might harm statistical guarantees if not introduced in the right way, especially when online tests are considered as static hypothesis tests. For opportunity cost, a static test usually entails a static allocation of users into different variants, which prevents an immediate roll-out of the better version to larger audience or risks of alienating users who may suffer from a bad experience. While some works try to tackle these challenges, no prior method focuses on a holistic solution to both issues. In this paper, we propose a unified framework utilizing sequential analysis and multi-armed bandit to address time cost and the opportunity cost of static online tests simultaneously. In particular, we present an imputed sequential Girshick test that accommodates online data and dynamic allocation of data. The unobserved potential outcomes are treated as missing data and are imputed using empirical averages. Focusing on the binomial model, we demonstrate that the proposed imputed Girshick test achieves Type-I error and power control with both a fixed allocation ratio and an adaptive allocation such as Thompson Sampling through extensive experiments. In addition, we also run experiments on historical Etsy.com A B tests to show the reduction in opportunity cost when using the proposed method.", "Personalized web services strive to adapt their services (advertisements, news articles, etc.) to individual users by making use of both content and user information. Despite a few recent advances, this problem remains challenging for at least two reasons. First, web service is featured with dynamically changing pools of content, rendering traditional collaborative filtering methods inapplicable. Second, the scale of most web services of practical interest calls for solutions that are both fast in learning and computation. 
In this work, we model personalized recommendation of news articles as a contextual bandit problem, a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the users and articles, while simultaneously adapting its article-selection strategy based on user-click feedback to maximize total user clicks. The contributions of this work are three-fold. First, we propose a new, general contextual bandit algorithm that is computationally efficient and well motivated from learning theory. Second, we argue that any bandit algorithm can be reliably evaluated offline using previously recorded random traffic. Finally, using this offline evaluation method, we successfully applied our new algorithm to a Yahoo! Front Page Today Module dataset containing over 33 million events. Results showed a 12.5 click lift compared to a standard context-free bandit algorithm, and the advantage becomes even greater when data gets more scarce.", "We propose novel multi-armed bandit (explore exploit) schemes to maximize total clicks on a content module published regularly on Yahoo! Intuitively, one can explore'' each candidate item by displaying it to a small fraction of user visits to estimate the item's click-through rate (CTR), and then exploit'' high CTR items in order to maximize clicks. While bandit methods that seek to find the optimal trade-off between explore and exploit have been studied for decades, existing solutions are not satisfactory for web content publishing applications where dynamic set of items with short lifetimes, delayed feedback and non-stationary reward (CTR) distributions are typical. In this paper, we develop a Bayesian solution and extend several existing schemes to our setting. Through extensive evaluation with nine bandit schemes, we show that our Bayesian solution is uniformly better in several scenarios. We also study the empirical characteristics of our schemes and provide useful insights on the strengths and weaknesses of each. Finally, we validate our results with a side-by-side'' comparison of schemes through live experiments conducted on a random sample of real user visits to Yahoo!" ], "cite_N": [ "@cite_4", "@cite_8", "@cite_9", "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_5" ], "mid": [ "2130527179", "1570738427", "2614208603", "2108738385", "", "2907108374", "2112420033", "2100922150" ] }
Online Evaluation of Audiences for Targeted Advertising via Bandit Experiments
A critical determinant of the success of advertising campaigns is picking the right audience to target. As digital ad-markets have matured and the ability to target advertising has improved, the range of targeting options has expanded, and the profiles of possible audiences have become complex. Both advertisers and publishers now rely on data-driven methods to evaluate audiences and to find effective options with which to advertise to them. This paper presents a new bandit algorithm along with a product built to facilitate such evaluations via online experimentation. The problem addressed is as follows. An advertiser designing a campaign wants to pick, from a set of possible target audiences and creatives, a creative-target audience combination that provides her the highest expected payoff in the campaign. The target audiences can be complex, potentially overlapping with each other, and the creatives can be any type of media (picture, video, text, etc.). We would like to design an experiment to find the best creative-target audience combination for the advertiser while minimizing her costs of experimentation. When only creatives have to be compared to each other, the typical practice is to leverage an "A/B/n" experimental design in which creatives represent arms, so that the best creative is found by ranking the expected payoffs for users randomized into the arms. When target audiences have to be evaluated in addition, extending this design − treating creative-target audience combinations as arms − is problematic. The main difficulty is possible overlap in the target audiences that are compared (e.g., "San Francisco users" and "Male users"). This complicates user assignment because it is not obvious to which constituent arm a user belonging to an overlapping region should be assigned (e.g., should a Male user from San Francisco be assigned to the "San Francisco-creative" arm or the "Male-creative" arm?). Assigning the overlapping user to one of the constituent arms violates the representativeness of the arms (e.g., if we use a rule that Male users from San Francisco will always be assigned to the "San Francisco-creative" arm, the "Male-creative" arm will have no San Franciscans, and will not represent the distribution of Male users in the platform population). 1 Such assignment also under-utilizes data: though the feedback from the user is informative of all constituent arms, it is being used to learn the best creative for only one picked arm (e.g., if we assign a Male user from San Francisco to the "San Francisco-creative" arm, we do not learn from him the value of the "Male-creative" arm, though his behavior is informative of that arm). Another difficulty is that typical "A/B/n" test designs keep the sample/traffic splits constant as the test progresses. Therefore, both good and bad creatives will be allocated the same amount of traffic during the test. Instead, as we learn during the test that an arm is not performing well, reducing its traffic allocation can reduce the cost of experimentation. The goal of this paper is to develop an algorithm that addresses these issues. It has two broad steps. In step one, we split the compared target audiences (henceforth "TA"s) into disjoint audience sub-populations (henceforth "DA"s), so the set of DAs fully spans the set of TAs.
In step two, we train a bandit with the creatives as arms, the payoffs to the advertiser as rewards, and the DAs, rather than the TAs, as the contexts. As the test progresses, we aggregate over all DAs that correspond to each TA to adaptively learn the best creative-TA match. In essence, we learn an optimal creative allocation policy at the disjoint sub-population level, while making progress towards the test goal at the TA level. Because the DAs have no overlap, each user can be mapped to a distinct DA, addressing the assignment problem. Because all DAs that map to a TA help inform the value of that TA, learning is also efficient. Further, tailoring the bandit's policy to a more finely specified context − i.e., the DA − allows it to match the creative to the user's tastes more finely, thereby improving payoffs and reducing expected regret, while delivering on the goal of assessing the best combination at the level of a more aggregated audience. The adaptive nature of the test ensures the traffic is allocated in a way that reduces the cost to the advertiser from running the test, because creatives that are learned to have low value early are allocated less traffic within each DA as the test progresses. The overall algorithm is implemented as a contextual Thompson Sampler (henceforth "TS"; see (Russo et al. 2018) for an overview). Increasing the overlap in the tested TAs increases the payoff similarity between the TAs, making it harder to detect separation. An attractive feature of the proposed algorithm is that feedback on the performance of DAs helps inform the performance of all TAs to which they belong. This cross-audience learning serves as a counterbalancing force that keeps performance stable as overlap increases, preventing the sample sizes required to stop the test from growing unacceptably large and making the algorithm impractical. In several simulations, we show the proposed TS performs well in realistic situations, including with high levels of overlap, and is competitive against benchmark methods including non-adaptive designs and "split-testing" designs currently used in industry. To illustrate real-world performance, we also discuss a case study from a testing product on the advertising platform of JD.com, where the algorithm is deployed.

Method

The test takes as input $K = \{1, .., K\}$ possible TAs and $R = \{1, .., R\}$ creatives the advertiser wants to evaluate for her campaign. In step 1, we partition the users in the $K$ TAs into a set $J = \{1, .., J\}$ of $J$ DAs. For example, if the TAs are "San Francisco users" and "Male users," we create three DAs: "San Francisco users, Male," "San Francisco users, Not Male," and "Non San Francisco users, Male." In step 2, we treat each DA as a context, and each creative as an arm that is pulled adaptively based on the context. When a user $i$ arrives at the platform, we categorize the user into a context based on his features, i.e., $i \in DA(j)$ if $i$'s features match the definition of $j$, where $DA(j)$ denotes the set of users in DA $j$. A creative $r \in R$ is then displayed to the user based on the context. The cost of displaying creative $r$ to user $i$ in context $j$ is denoted as $b_{irj}$. After the creative is displayed, the user's action, $y_{irj}$, is observed. The empirical implementation of the product uses clicks as the user feedback for updating the bandit, so $y$ is treated as binary, i.e., $y_{irj} \in \{0, 1\}$.
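For illustration, a minimal sketch of the step-1 partitioning into disjoint sub-populations could look as follows; the TA predicates and user-feature fields are hypothetical and stand in for whatever targeting attributes the platform exposes.

```python
from itertools import product

# Hypothetical target audiences, each defined by a membership test on user features.
TARGET_AUDIENCES = {
    "TA_san_francisco": lambda user: user["city"] == "San Francisco",
    "TA_male":          lambda user: user["gender"] == "male",
}

def disjoint_audience(user):
    """Map a user to a disjoint sub-population (DA), identified by the tuple of
    TA-membership indicators, e.g., (True, False) = 'San Francisco, not male'."""
    return tuple(predicate(user) for predicate in TARGET_AUDIENCES.values())

def enumerate_das():
    """All membership patterns with at least one TA; the all-False cell falls
    outside every compared TA and is not part of the test."""
    return [cell for cell in product([True, False], repeat=len(TARGET_AUDIENCES)) if any(cell)]
```

With the two overlapping TAs of the running example this yields exactly three DAs, matching the partition described above.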
The payoff to the advertiser from the ad-impression, $\pi_{irj}$, is defined as $\pi_{irj} = \gamma \cdot y_{irj} - b_{irj}$, where $\gamma$ is a factor that converts the user's action to monetary units. The goal of the bandit is to find an optimal policy $g(j): J \rightarrow R$ which allocates the creative with the maximum expected payoff to a user with context $j$.

Thompson Sampling

To develop the TS, we model the outcome $y_{irj}$ in a Bayesian framework, and let

$y_{irj} \sim p(y_{irj} \mid \theta_{rj})$; and, $\theta_{rj} \sim p(\theta_{rj} \mid \Omega_{rj})$.   (1)

where $\theta_{rj}$ are the parameters that describe the distribution of action $y_{irj}$, and $\Omega_{rj}$ are the hyper-parameters governing the distribution of $\theta_{rj}$. Since $y$ is Bernoulli distributed, we make the typical assumption that the prior on $\theta$ is Beta, which is conjugate to the Bernoulli distribution. With $\Omega_{rj} \equiv (\alpha_{rj}, \beta_{rj})$, we model,

$y_{irj} \sim Ber(\theta_{rj})$; and, $\theta_{rj} \sim Beta(\alpha_{rj}, \beta_{rj})$.   (2)

Given $y_{irj} \sim Ber(\theta_{rj})$, the expected payoff of each creative-disjoint sub-population combination (henceforth "C-DA") is

$\mu^{\pi}_{rj}(\theta_{rj}) = E[\pi_{irj}] = \gamma E[y_{irj}] - E[b_{irj}] = \gamma\theta_{rj} - \bar{b}_{rj}$, $\forall r \in R, j \in J$,

where $\bar{b}_{rj}$ is the average cost of showing creative $r$ to the users in DA($j$). 2

To make clear how the bandit updates parameters, we add the index $t$ for the batch. Before the test starts, $t = 1$, we set diffuse priors and let $\alpha_{rj,t=1} = 1$, $\beta_{rj,t=1} = 1$, $\forall r \in R, j \in J$. This prior implies that the probability of taking action $y$, $\theta_{rj,t=1}$, $\forall r \in R, j \in J$, is uniformly distributed between 0% and 100%. In batch $t$, $N_t$ users arrive. The TS displays creatives to these users dynamically, by allocating each creative according to the posterior probability that it offers the highest expected payoff given the user's context. Given the posterior at the beginning of batch $t$, the probability that a creative $r$ provides the highest expected payoff is,

$w_{rjt} = Pr[\mu^{\pi}_{rj}(\theta_{rjt}) = \max_{r \in R}(\mu^{\pi}_{rj}(\theta_{rjt})) \mid \alpha_{jt}, \beta_{jt}]$,   (3)

where $\alpha_{jt} = [\alpha_{1jt}, \dots, \alpha_{Rjt}]$ and $\beta_{jt} = [\beta_{1jt}, \dots, \beta_{Rjt}]$ are the parameters of the posterior distribution of $\theta_{jt} = [\theta_{1jt}, \dots, \theta_{Rjt}]$. To implement this allocation, for each user $i = 1, .., N_t$ who arrives in batch $t$, we determine his context $j$ and make a draw of the $R \times 1$ vector of parameters, $\theta^{(i)}_{jt}$. Element $\theta^{(i)}_{rjt}$ of the vector is drawn from $Beta(\alpha_{rjt}, \beta_{rjt})$ for $r \in R$. Then, we compute the payoff for each creative $r$ as $\mu^{\pi}_{rj}(\theta^{(i)}_{rjt}) = \gamma\theta^{(i)}_{rjt} - \bar{b}_{rj}$, and display to $i$ the creative with the highest $\mu^{\pi}_{rj}(\theta^{(i)}_{rjt})$. We update all parameters at the end of processing the batch, after the outcomes for all users in the batch are observed. We compute the sum of binary outcomes for each C-DA combination as $s_{rjt} = \sum_{i=1}^{n_{rjt}} y_{irjt}$, $\forall r \in R, j \in J$, where $n_{rjt}$ is the number of users with context $j$ allocated to creative $r$ in batch $t$. Then, we update the parameters as $\alpha_{j(t+1)} = \alpha_{jt} + s_{jt}$ and $\beta_{j(t+1)} = \beta_{jt} + n_{jt} - s_{jt}$, $\forall j \in J$, where $s_{jt} = [s_{1jt}, \dots, s_{Rjt}]$ and $n_{jt} = [n_{1jt}, \dots, n_{Rjt}]$. Then, we enter batch $t + 1$, and use $\alpha_{j(t+1)}$ and $\beta_{j(t+1)}$ as the posterior parameters to allocate creatives at $t + 1$. We repeat this process until a pre-specified stopping condition (outlined below) is met.

2 $\gamma$ may be determined from prior estimation or from advertisers' judgment of the value attached to users' actions. $\gamma$ is pre-computed and held fixed during the test. $\bar{b}_{rj}$ and $\hat{p}(j|k)$ (defined later) can be pre-computed outside of the test from historical data and held fixed during the test, or inferred during the test using a simple bin estimator that computes these as averages over the observed cost and user context data.
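The within-batch allocation and the end-of-batch update just described can be sketched in a few lines of NumPy; the function signatures and batching interface are illustrative assumptions, not the production implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def choose_creative(alpha_j, beta_j, avg_cost_j, gamma=1.0):
    """alpha_j, beta_j, avg_cost_j: arrays of shape (R,) for the user's context j.
    Thompson step: draw theta for every creative, then display the creative with
    the highest implied payoff gamma * theta - avg_cost."""
    theta_draw = rng.beta(alpha_j, beta_j)
    payoff = gamma * theta_draw - avg_cost_j
    return int(np.argmax(payoff))

def update_posterior(alpha_j, beta_j, clicks_j, impressions_j):
    """End-of-batch conjugate update for context j: alpha += clicks, beta += non-clicks."""
    return alpha_j + clicks_j, beta_j + (impressions_j - clicks_j)
```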
Probabilistic Aggregation and Stopping Rule

While the contextual bandit is set up to learn the best C-DA combination, the goal of the test is to learn the best creative-target audience combination (henceforth "C-TA"). As such, we compute the expected payoff of each C-TA combination by aggregating the payoffs of the corresponding C-DA combinations, and stop on the basis of the regret associated with learning the best C-TA combination. Using the law of total probability, we can aggregate across all C-DAs associated with C-TA combination $(r, k)$ to obtain $\lambda_{rkt}$,

$\lambda_{rkt} = \sum_{j \in O(k)} \theta_{rjt} \cdot \hat{p}(j|k)$.   (4)

In equation (4), $\lambda_{rkt}$ is the probability that a user picked at random from within TA($k$) in batch $t$ takes the action $y = 1$ upon being displayed creative $r$; $\hat{p}(j|k)$ is the probability (in the platform population) that a user belonging to TA($k$) is also of the context $j$; and $O(k)$ is the set of disjoint sub-populations ($j$s) whose associated DA($j$)s are subsets of TA($k$). Given equation (4), the posterior distribution of the $\theta_{rjt}$s from the TS induces a distribution of the $\lambda_{rkt}$s. We can obtain draws from this distribution using Monte Carlo sampling. For each draw $\lambda^{(h)}_{rkt}$, we can similarly compute the implied expected payoff to the advertiser from displaying creative $r$ to a user picked at random from within TA($k$) in batch $t$,

$\omega^{\pi}_{rkt}(\lambda^{(h)}_{rkt}) = \gamma\lambda^{(h)}_{rkt} - \bar{b}_{rk}$, $\forall r \in R, k \in K$,   (5)

where $\bar{b}_{rk}$ is the average cost of showing creative $r$ to target audience $k$, which can be obtained by aggregating the $\bar{b}_{rj}$ through analogously applying equation (4). Taking the $H$ values of $\omega^{\pi}_{rkt}(\lambda^{(h)}_{rkt})$ for each $(r, k)$, we let $r^{*}_{kt}$ denote the creative that has the highest expected payoff within each TA $k$ across all $H$ draws, i.e.,

$r^{*}_{kt} = \arg\max_{r \in R} \max_{h=1,..,H} \omega^{\pi}_{rkt}(\lambda^{(h)}_{rkt})$.   (6)

Hence, $\omega^{\pi}_{r^{*}_{kt},kt}(\lambda^{(h)}_{rkt})$ denotes the expected payoff for creative $r^{*}_{kt}$ evaluated at draw $h$. Also, define $\omega^{\pi*}_{kt}(\lambda^{(h)}_{rkt})$ as the expected payoff for the creative assessed as the best for TA $k$ in draw $h$ itself, i.e.,

$\omega^{\pi*}_{kt}(\lambda^{(h)}_{rkt}) = \max_{r \in R} \omega^{\pi}_{rkt}(\lambda^{(h)}_{rkt})$.   (7)

Following (Scott 2015), the value $\omega^{\pi*}_{kt}(\lambda^{(h)}_{rkt}) - \omega^{\pi}_{r^{*}_{kt},kt}(\lambda^{(h)}_{rkt})$ represents an estimate of the regret in batch $t$ for TA $k$ at draw $h$. Normalizing it by the expected payoff of the best creative across draws gives a unit-free metric of regret for each draw $h$ and each TA $k$,

$\rho^{(h)}_{kt} = \left[\omega^{\pi*}_{kt}(\lambda^{(h)}_{rkt}) - \omega^{\pi}_{r^{*}_{kt},kt}(\lambda^{(h)}_{rkt})\right] / \,\omega^{\pi}_{r^{*}_{kt},kt}(\lambda^{(h)}_{rkt})$.   (8)

Let pPVR($k, t$) be the 95th percentile of $\rho^{(h)}_{kt}$ across the $H$ draws. We stop the test when,

$\max_{k \in K} \mathrm{pPVR}(k, t) < 0.01$.   (9)

In other words, we stop the test when the normalized regret for all TAs we are interested in falls below 0.01. 3 Therefore, while we learn an optimal creative display policy for each DA, we stop the algorithm when we find the best creative for each TA in terms of minimal regret. Algorithm 1 shows the full procedure.
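As a concrete illustration of how equations (4)-(9) can be evaluated by Monte Carlo, the following is a minimal NumPy sketch; the array shapes and names are our own and only approximate the procedure of Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def ppvr(alpha, beta, p_j_given_k, avg_cost_rk, gamma=1.0, draws=1000, percentile=95):
    """alpha, beta: (R, J) Beta posterior parameters for every C-DA combination.
    p_j_given_k: (K, J) matrix of p^(j|k), with zeros for every j outside O(k).
    avg_cost_rk: (R, K) average display cost per C-TA combination.
    Returns one pPVR(k, t) value per target audience (equations 4-9)."""
    theta = rng.beta(alpha, beta, size=(draws,) + alpha.shape)   # (H, R, J) posterior draws
    lam = np.einsum("hrj,kj->hrk", theta, p_j_given_k)           # eq. (4): (H, R, K)
    omega = gamma * lam - avg_cost_rk                            # eq. (5)
    r_star = omega.max(axis=0).argmax(axis=0)                    # eq. (6): best creative per TA
    omega_r_star = omega[:, r_star, np.arange(omega.shape[2])]   # payoff of r* at every draw, (H, K)
    omega_best = omega.max(axis=1)                               # eq. (7): per-draw best payoff, (H, K)
    rho = (omega_best - omega_r_star) / omega_r_star             # eq. (8): normalized regret
    return np.percentile(rho, percentile, axis=0)                # 95th percentile per TA

# Stopping rule, eq. (9): end the test once ppvr(...).max() < 0.01.
```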
4 We set the display cost b irj to zero and γ = 1 so we can work with the CTR directly as the payoffs (therefore, we interpret the cost of experimentation as the opportunity cost to the advertiser of not showing the best combination.) We simulate 1,000 values for the expected CTRs of the 6 C-DA combinations from uniform distributions (with supports shown in Figure (1)). Under these values, C 1 -DA 1 has the highest expected CTR amongst the C-DA combinations, and C 1 -T A 1 the highest amongst the C-TA combinations. We run the TS for each simulated value to obtain 1,000 bandit replications. For each replication, we update probabilities over batches of 100 observations, and stop the sampling 3 Other stopping rules may also be used, for example, based on posterior probabilities, or based on practical criteria that the test runs till the budget is exhausted (which protects the advertiser's interests since the budget is allocated to the best creative). The formal question of how to stop a TS when doing Bayesian inference is still an open issue. While data-based stopping rules are known to affect frequentist inference, Bayesian inference has traditionally been viewed as unaffected by optional stopping (e.g., (Edwards, Lindman, and Savage 1963)), though the debate is still unresolved in the statistics and machine learning community (e.g., (Rouder 2014) vs. (de Heide and Grünwald 2018)). This paper adopts a stopping rule reflecting practical product-related considerations, and does not address this debate. Algorithm 1 TS to identify best C-TA combination 1: K TAs are re-partitioned into J DAs 2: t ← 1 3: α rjt ← 1, β rjt ← 1, ∀r ∈ R, j ∈ J 4: Obtain from historical datap(j|k), γ,b rj , ∀r ∈ R, j ∈ J, k ∈ K 5: pP V R(k, t) ← 1, ∀k ∈ K 6: while max Collect data {y irjt } Nt i=1 , {n rjt } r∈R,j∈J 13: Compute s rjt = nrjt i=1 y irjt , ∀r ∈ R, j ∈ J 14: Update α rj(t+1) = α rjt + s rjt , ∀r ∈ R, j ∈ J 15: Update β rj(t+1) = β rjt +n rjt −s rjt , ∀r ∈ R, j ∈ J 16: Make h = 1, .., H draws of θ rj(t+1) s, i.e. ⎡ ⎢ ⎢ ⎢ ⎣ θ 11(t+1) ... θ rj(t+1) ... θ RJ(t+1) ⎤ ⎥ ⎥ ⎥ ⎦ (h) ∼ ⎡ ⎢ ⎢ ⎢ ⎣ Beta(α 11(t+1) , β 11(t+1) ) ... Beta(α rj(t+1) , β rj(t+1) ) ... Beta(α RJ(t+1) , β RJ(t+1) ) ⎤ ⎥ ⎥ ⎥ ⎦ (h) , ∀h = 1, ..., H 17: Compute λ (h) t+1 = ⎡ ⎢ ⎢ ⎢ ⎣ λ 11(t+1) ... λ rk(t+1) ... λ RK(t+1) ⎤ ⎥ ⎥ ⎥ ⎦ (h) = ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣ j∈O(k=1)p (j|k = 1) · θ rj(t+1) ... j∈O(k)p (j|k) · θ rj(t+1) ... j∈O(k=K)p (j|k = K) · θ rj(t+1) ⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦ (h) , ∀h = 1, ..., H 18: Compute ω π (h) t+1 ( λ (h) t+1 ) = ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ ω π 11(t+1) ... ω π rk(t+1) ... ω π RK(t+1) ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ (h) = ⎡ ⎢ ⎢ ⎢ ⎣ γ · λ 11(t+1) −b 11(t+1) ... γ · λ rkt −b rk(t+1) ... γ · λ RKt −b RK(t+1) ⎤ ⎥ ⎥ ⎥ ⎦ (h) , ∀h = 1, ..., H 19: Set ρ (h) k(t+1) = [ω π * k(t+1) (λ (h) rk(t+1) ) − ω π r * k(t+1) ,k(t+1) (λ (h) rk(t+1) )]/ω π r * k(t+1) ,k(t+1) (λ (h) rk(t+1) ), ∀h = 1, ..., H, k ∈ K 20: ∀k ∈ K, calculate pP V R(k, t + 1) as the 95 th percentile across the H draws of ρ (2), box-plots across replications of the performance of the TS as batches of data are collected, plotting these at every 10 th batch. Figures (2a and 2b) plot the evolution over batches in the unit-free regret (pPVR) and the expected regret per impression, where the latter is defined as the expected clicks lost per impression in a batch when displaying a creative other than the true-best for each DA, evaluated at the true parameters. 
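A compact sketch of one such simulated replication, reusing the helper functions from the sketches above; the true-CTR supports, the context arrival probabilities, and the p(j|k) values are placeholders rather than the Figure 1 settings:

```python
import numpy as np

# One replication: cost b = 0, gamma = 1, batches of 100 users, stop per equation (9).
rng = np.random.default_rng(0)
R, J, K, N_t = 2, 3, 2, 100
true_ctr = rng.uniform(0.01, 0.05, size=(R, J))   # placeholder true theta_rj
p_j_given_k = np.array([[0.5, 0.0],               # TA1 = DA1 u DA2, TA2 = DA2 u DA3,
                        [0.5, 0.5],               # equal-size TAs with 50% overlap
                        [0.0, 0.5]])
alpha, beta = np.ones((R, J)), np.ones((R, J))    # diffuse Beta(1, 1) priors
for t in range(1, 1001):
    contexts = rng.integers(0, J, size=N_t)       # DA of each arriving user
    shown = run_batch(alpha, beta, gamma=1.0, b_bar=np.zeros((R, J)),
                      contexts=contexts, rng=rng)
    clicks = (rng.random(N_t) < true_ctr[shown, contexts]).astype(int)
    alpha, beta = update_posterior(alpha, beta, shown, contexts, clicks)
    # Realized clicks lost per impression versus the true-best creative in each DA
    # (can be logged per batch to trace the regret curves).
    regret = (true_ctr.max(axis=0)[contexts] - true_ctr[shown, contexts]).mean()
    stop, ppvr = ppvr_stop(alpha, beta, p_j_given_k, gamma=1.0, b_bar_k=np.zeros((R, K)))
    if stop:
        break
```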
5 If the TS progressively allocates more traffic to creatives with higher probability of being the best arm in each context (DA), the regret should fall as more data is accumulated. Consistent with this, both metrics are seen to fall as the number of batches increases in our simulation. The cutoff of 0.01 pPVR is met in 1,000 batches in all replications. Figure (2c) shows the posterior probability implied by TS in each batch that the true-best C-TA is currently the best. 6 The posterior is seen to converge to the true-best combination as more batches are sampled. We now compare the proposed TS algorithm to an Equal Allocation algorithm (henceforth "EA") and a Split-Testing algorithm (henceforth "ST"). EA is analogous to "A/B/n" testing in that it is non-adaptive: the allocation of traffic to creatives for each DA is held fixed, and not changed across batches. Instead, in each batch, we allocate traffic equally to each of the r ∈ R creatives for each DA. ST follows the design described in §2, and traffic is allocated at the level of C-TA (rather than C-DA) combinations. Each user is assigned randomly with fixed, equal probability to one of R×K C-TA arms (4 in this simulation), and a creative is displayed only if a user's features match the arm's TA definition. To do the comparison, we repeat the same 1,000 replications as above with the same configurations, but this time stop each replication when the criterion in equation (9) is reached. In other words, for each of TS, EA and ST algorithms, we maintain a posterior belief about the best C-TA combination, which we update after every batch. 7 In TS, the traffic allocation reflects this posterior adaptively, while in EA and ST, the traffic splits are held fixed; and the same stopping criteria is imposed in both. All parameters are held 5 Specifically, the expected regret per impression in each batch t is k∈K j∈O(k)p (j|k) r∈R wrjt(θ true rj − max r∈R θ true rj ). 6 Note, these probabilities are not the same as the distribution of traffic allocated by the TS, since traffic is allocated based on DA and not TA. 7 Note that, we do not need to partition the TAs under ST, and instead directly set up the model at the C-TA level under ST. (Figure (2e)). This is because the expected regret per impression under EA and ST remains constant over batches, while as Figure (2b) demonstrated, the expected regret per impression under TS steadily decreases as more batches arrive. ST generates the most regret and requires the largest sample sizes, since it is not only non-adaptive, but also discards a portion of the traffic and the information that could have been gained from this portion. Figure 2f shows that the TS puts more mass at stopping on the true-best C-TA combination compared to EA and ST. Across replications, this allows TS to correctly identify the true-best combination 85.8% of the time at stopping, compared to 77.8% for EA and 70.8% for ST. Overall, the superior performance of the TS relative to EA are consistent with the experiments reported in (Scott 2010). Next, we assess how the extent to which audiences overlap affects performance. This demonstrates the crossaudience learning effect in the algorithm. To do this, we fix the CTRs of the six C-DA combinations C 1 -DA 1 , C 2 -DA 1 , C 1 -DA 2 , C 2 -DA 2 , C 1 -DA 3 , C 2 -DA 3 to be [.01,.03,.03,.05,.025,.035]. We vary the size of the overlapped audience, i.e. 
Pr (DA 2 |T A 1 ) = Pr (DA 2 |T A 2 ), on For each grid value, we run the TS for 1,000 replications, taking the 6 C-DA CTRs as the truth, stopping each replication per equation (9). We then present in Figure 3 box-plots across these replications as a function of the degree of overlap. Along the x-axis, the two target audiences become more similar, increasing cross-audience learning, but decreasing their payoff differences. Figures (3a and 3b) show that sample sizes required for stopping and total expected regret per impression remain roughly the same as overlap increases, suggesting the two effects largely cancel each other. Figure (3c) shows the proportion of 1,000 replications that correctly identify the true-best C-TA combination as the best at stopping. The annotations label the payoff difference in the top-2 combinations, showing the payoffs become tighter as the overlapping increases. We see that the TS works well for reasonably high values of overlap, but as the payoff differences become very small, it becomes difficult to correctly identify the true-best C-TA combination. Figure (3d) explains this pattern by showing the posterior probability of the best combination identified at stopping also decreases as the payoff differences grow very small. Finally, the arXiv version of the paper presents additional experiments that show that the observed degradation in performance of the TS at very high values of overlap disappears in a pure cross-audience learning setting. Overall, these simulations suggest the proposed TS is viable in identifying best C-TA combinations for reasonably high levels of TA overlap. If the sampler is to be used in situations with extreme overlap, it may be necessary to impose additional conditions on the stopping rule based on posterior probabilities, in addition to the ones based on pP V R across contexts in equation (9). This is left for future research. Deployment We designed an experimentation product based on algorithm. To use the product, an advertiser starts by setting up a test ad-campaign on the product system. The test cam-paign is similar to a typical ad-campaign, involving rules for bidding, budget, duration etc. The difference is that the advertiser defines K TAs and binds R creatives to the testcampaign, rather than one as typical; and the allocation of creatives to a user impression is managed by the TS algorithm. Both K and R are limited to a max of 5. Because the algorithm disjoints TAs, the number of contexts grows combinatorially as K increases, and this restriction keeps the total combinations manageable. When a user arrives at JD.com, the ad-serving system retrieves the user's features. If the features activate the tag(s) of any of the K TAs, and satisfies the campaign's other requirements, the TS chooses a test creative according to the adaptively determined probability, and places a bid for it into the platform's auction system. The bids are chosen by the advertiser, but are required to be the same for all creatives in order to keep the comparison fair. The auction includes other advertisers who compete to display their creatives to this user. The system collects data on the outcome of the winning auctions and whether the user clicks on the creative when served; updates parameters every 10 minutes; and repeats this until the stopping criterion is met and the test is stopped. The data are aggregated and relevant statistical results regarding all the C-TA combinations are delivered to the advertiser. 
See https://jzt.jd.com /gw/dissert/jzt-split/1897.html for a product overview. We discuss a case-study based on a test on the product. Though several other tests exhibit similar patterns, there is no claim this case-study is representative: we picked it so it illustrates well for the reader some features of the test environment and the performance of the TS.The test involves a large cellphone manufacturer. The advertiser set up 2 TAs and 3 creatives. The 2 TAs overlap, resulting in 3 DAs. Figure (4) shows the probability that each C-TA combination is estimated to be the best as the test progresses. The 6 possible combinations are shown in different colors and markers. During the initial 12 batches, the algorithm identifies the "*" and "+" combinations to be inferior and focuses on exploring the other 4 combinations. Then, the yellow "." combination starts to dominate the others and is finally chosen as the best. The advantage of the adaptive design is that most of the traffic during the test is allocated to C-DA combinations corresponding to the yellow "." combination, so the advertiser does not unnecessarily waste resources on assessing those that were learned to be inferior early on. The test lasted about 6 hours with a total of 18,499 users and 631 clicks. The estimated CTRs of the six C-TA combinations C 1 -T A 1 , C 2 -T A 1 , C 3 -T A 1 (yellow "." combination), C 1 -T A 2 , C 2 -T A 2 , C 3 -T A 2 at stopping are [.028,.034,.048,.028,.017,.036]. Despite the short time span, the posterior probability induced by the sampling on the yellow "." combination being the best is quite high (98.4%). We use a back-of-the-envelope calculation to assess the economic efficiency of TS relative to EA in this test. Using the data, we simulate a scenario where we equally allocate across the creatives the same amount of traffic as this test used. We find TS generates 52 more clicks (8.2% of total clicks) than EA. In other tests, we found the product performs well even in situations where the creatives are quite similar and K, R are close to 5, without requiring unreasonable amounts of data or test time so as to make it unviable. Scaling the product to allow for larger sets of test combinations is a task for future research and development. Conclusion An adaptive algorithm to identify the best combination among a set of advertising creatives and TAs is presented. Experiments show that the proposed method is more efficient compared to naive "split-testing" or non-adaptive "A/B/n" testing based methods. The approach assumes that creatives do not induce long-term dependencies, for instance, that they do not affect future user arrival rates, and that auctions are unrelated to each other, for instance due to the existence of a binding budget constraint. These assumptions justify framing the problem as a multi-armed bandit, and could be relaxed by using a more general reinforcement learning framework.
4,985
1907.02271
2954302879
We address the problem of unsupervised domain adaptation (UDA) by learning a cross-domain agnostic embedding space, where the distance between the probability distributions of the source and target visual domains is minimized. We use the output space of a shared cross-domain deep encoder to model the embedding space and use the Sliced-Wasserstein Distance (SWD) to measure and minimize the distance between the embedded distributions of the two source and target domains to enforce the embedding to be domain-agnostic. Additionally, we use the source domain labeled data to train a deep classifier from the embedding space to the label space to enforce the embedding space to be discriminative. As a result of this training scheme, we provide an effective solution to train the deep classification network on the source domain such that it will generalize well on the target domain, where only unlabeled training data is accessible. To mitigate the challenge of class matching, we also align corresponding classes in the embedding space by using high-confidence pseudo-labels for the target domain, i.e., assigning the class for which the source classifier has a high prediction probability. We provide theoretical justification as well as experimental results on UDA benchmark tasks to demonstrate that our method is effective and leads to state-of-the-art performance.
There are two major approaches in the literature to address domain adaptation. The first group of methods is based on preprocessing the target domain data points: the target data is mapped from the target domain to the source domain such that the target data structure is preserved in the source @cite_26 . Another common approach is to map data from both domains to a latent domain-invariant space @cite_29 . Early methods within the second approach learn a linear subspace as the invariant space @cite_21 @cite_37 in which the target domain data points are distributed similarly to the source domain data points. A linear subspace, however, is not suitable for capturing complex distributions. For this reason, deep neural networks have recently been used to model the intermediate space as the output of the network. The network is trained such that the source and target domain distributions in its output have minimal discrepancy. Training can be carried out either by adversarial learning @cite_20 or by directly minimizing the distance between the two distributions @cite_27 .
{ "abstract": [ "In real-world applications of visual recognition, many factors — such as pose, illumination, or image quality — can cause a significant mismatch between the source domain on which classifiers are trained and the target domain to which those classifiers are applied. As such, the classifiers often perform poorly on the target domain. Domain adaptation techniques aim to correct the mismatch. Existing approaches have concentrated on learning feature representations that are invariant across domains, and they often do not directly exploit low-dimensional structures that are intrinsic to many vision datasets. In this paper, we propose a new kernel-based method that takes advantage of such structures. Our geodesic flow kernel models domain shift by integrating an infinite number of subspaces that characterize changes in geometric and statistical properties from the source to the target domain. Our approach is computationally advantageous, automatically inferring important algorithmic parameters without requiring extensive cross-validation or labeled data from either domain. We also introduce a metric that reliably measures the adaptability between a pair of source and target domains. For a given target domain and several source domains, the metric can be used to automatically select the optimal source domain to adapt and avoid less desirable ones. Empirical studies on standard datasets demonstrate the advantages of our approach over competing methods.", "Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.", "We describe an approach to domain adaptation that is appropriate exactly in the case when one has enough “target” data to do slightly better than just using only “source” data. Our approach is incredibly simple, easy to implement as a preprocessing step (10 lines of Perl!) and outperforms stateof-the-art approaches on a range of datasets. Moreover, it is trivially extended to a multidomain adaptation problem, where one has data from a variety of different domains.", "Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). 
Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.", "Domain adaptation is one of the most challenging tasks of modern data analytics. If the adaptation is done correctly, models built on a specific data representation become more robust when confronted to data depicting the same classes, but described by another observation system. Among the many strategies proposed, finding domain-invariant representations has shown excellent properties, in particular since it allows to train a unique classifier effective in all domains. In this paper, we propose a regularized unsupervised optimal transportation model to perform the alignment of the representations in the source and target domains. We learn a transportation plan matching both PDFs, which constrains labeled samples of the same class in the source domain to remain close during transport. This way, we exploit at the same time the labeled samples in the source and the distributions observed in both domains. Experiments on toy and challenging real visual adaptation examples show the interest of the method, that consistently outperforms state of the art approaches. In addition, numerical experiments show that our approach leads to better performances on domain invariant deep learning features and can be easily adapted to the semi-supervised case where few labeled samples are available in the target domain.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ], "cite_N": [ "@cite_37", "@cite_26", "@cite_29", "@cite_21", "@cite_27", "@cite_20" ], "mid": [ "2149466042", "1722318740", "2120354757", "2128053425", "1594039573", "2099471712" ] }
Learning a Domain-Invariant Embedding for Unsupervised Domain Adaptation Using Class-Conditioned Distribution Alignment
Deep learning classification algorithms have surpassed performance of humans for a wide range of computer vision applications. However, this achievement is conditioned on availability of high-quality labeled datasets to supervise training deep neural networks. Unfortunately, preparing huge labeled datasets is not feasible for many situations as data labeling and annotation can be expensive [30]. Domain adaptation [12] is a paradigm to address the problem of labeled data scarcity in computer vision, where the goal is to improve learning speed and model generalization as well as to avoid expensive redundant model retraining. The major idea is to overcome labeled data scarcity in a target domain by transferring knowledge from a related auxiliary source domain, where labeled data is easy and cheap to obtain. A common technique in domain adaptation literature is to embed data from the two source and target visual domains in an intermediate embedding space such that common crossdomain discriminative relations are captured in the embedding space. For example, if the data from source and target domains have similar class-conditioned probability distributions in the embedding space, then a classifier trained solely using labeled data from the source domain will generalize well on data points that are drawn from the target domain distribution [29], [31]. In this paper, we propose a novel unsupervised adaptation (UDA) algorithm following the above explained procedure. Our approach is a simpler, yet effective, alternative for adversarial learning techniques that have been more dominant to address probability matching indirectly for UDA [42], [44], [24]. Our contribution is two folds. First, we train the shared encoder by minimizing the Sliced-Wasserstein Distance (SWD) [27] between the source and the target distributions in the embedding space. We also train a classifier network simultaneously using the source domain labeled data. A major benefit of SWD over alternative probability metrics is that it can be computed efficiently. Additionally, SWD is known to be suitable for gradient-based optimization which is essential for deep learning [29]. Our second contribution is to circumvent the class matching challenge [35] by minimizing SWD between conditional distributions in sequential iterations for better performance compared to prior UDA methods that match probabilities explicitly. At each iteration, we assign pseudo-labels only to the target domain data that the classifier predicts the assigned class label with high probability and use this portion of target data to minimize the SWD between conditional distributions. As more learning iterations are performed, the number of target data points with correct pseudo-labels grows and progressively enforces distributions to align class-conditionally. We provide theoretical analysis and experimental results on benchmark problems, including ablation and sensitivity studies, to demonstrate that our method is effective. IV. PROPOSED METHOD We consider the case where the feature extractor, φ v (·), is a deep convolutional encoder with weights v and the classifier h w (·) is a shallow fully connected neural network with weights w. The last layer of the classifier network is a softmax layer that assigns a membership probability distribution to any given data point. It is often the case that the labels of data points are assigned according to the class with maximum predicted probability. 
In short, the encoder network is learned to mix both domains such that the extracted features in the embedding are: 1) domain agnostic in terms of data distributions, and 2) discriminative for the source domain to make learning h w feasible. Figure 1 demonstrates system level presentation of our framework. Following this framework, the UDA reduces to solving the following optimization problem to solve for v and w: min v,w N i=1 L h w (φ v (x s i )), y s i + λD p S (φ v (X S )), p T (φ v (X T )) ,(1) where D(·, ·) is a discrepancy measure between the probabilities and λ is a trade-off parameter. The first term in Eq. (1) is empirical risk for classifying the source labeled data points from the embedding space and the second term is the cross-domain probability matching loss. The encoder's learnable parameters are learned using data points from both domains and the classifier parameters are simultaneously learned using the source domain labeled data. A major remaining question is to select a proper metric. First, note that the actual distributions p S (φ(X S )) and p T (φ(X T )) are unknown and we can rely only on observed samples from these distributions. Therefore, a sensible discrepancy measure, D(·, ·), should be able to measure the dissimilarity between these distributions only based on the drawn samples. In this work, we use the SWD [28] as it is computationally efficient to compute SWD from drawn samples from the corresponding distributions. More importantly, the SWD is a good approximation for the optimal transport [2] which has gained interest in deep learning community as it is an effective distribution metric and its gradient is non-vanishing. The idea behind the SWD is to project two d-dimensional probability distributions into their marginal one-dimensional distributions, i.e., slicing the high-dimensional distributions, and to approximate the Wasserstein distance by integrating the Wasserstein distances between the resulting marginal probability distributions over all possible one-dimensional subspaces. For the distribution p S , a one-dimensional slice of the distribution is defined as: Rp S (t; γ) = S p S (x)δ(t − γ, x )dx,(2) where δ(·) denotes the Kronecker delta function, ·, · denotes the vector dot product, S d−1 is the d-dimensional unit sphere and γ is the projection direction. In other words, Rp S (·; γ) is a marginal distribution of p S obtained from integrating p S over the hyperplanes orthogonal to γ. The SWD then can be computed by integrating the Wasserstein distance between sliced distributions over all γ: SW (p S , p T ) = S d−1 W (Rp S (·; γ), Rp T (·; γ))dγ(3) where W (·) denotes the Wasserstein distance. The main advantage of using the SWD is that, unlike the Wasserstein distance, calculation of the SWD does not require a numerically expensive optimization. This is due to the fact that the Wasserstein distance between two one-dimensional probability distributions has a closed form solution and is equal to the p -distance between the inverse of their cumulative distribution functions Since only samples from distributions are available, the one-dimensional Wasserstein distance can be approximated as the p -distance between the sorted samples [32]. The integral in Eq. (3) is approximated using a Monte Carlo style numerical integration. 
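To make this computation concrete, the following is a minimal PyTorch sketch of the sample-based approximation just described (stated formally in Eq. (4) below); the projection count and the averaging over samples are illustrative choices rather than the paper's implementation:

```python
import torch

def sliced_wasserstein(zs, zt, n_projections=50):
    """Monte Carlo approximation of the squared SWD between two equal-size sets of
    embeddings zs, zt of shape (M, f): project onto random directions on the unit
    sphere, then use the closed-form 1-D Wasserstein matching given by sorting."""
    f = zs.shape[1]
    gamma = torch.randn(f, n_projections, device=zs.device)   # random slicing directions
    gamma = gamma / gamma.norm(dim=0, keepdim=True)           # normalize onto S^{f-1}
    proj_s, _ = torch.sort(zs @ gamma, dim=0)                  # 1-D slices of the source, sorted
    proj_t, _ = torch.sort(zt @ gamma, dim=0)                  # 1-D slices of the target, sorted
    return ((proj_s - proj_t) ** 2).mean()                     # average over samples and slices
```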
Doing so, the SWD between f -dimensional samples {φ(x S i ) ∈ R f ∼ p S } M i=1 and {φ(x T i ) ∈ R f ∼ p T } M j=1 can be approximated as the following sum: SW 2 (pS , pT ) ≈ 1 L L l=1 M i=1 | γ l , φ(x S s l [i] ) − γ l , φ(x T t l [i] ) | 2 (4) where γ l ∈ S f −1 is uniformly drawn random sample from the unit f -dimensional ball S f −1 , and s l [i] and t l [i] are the sorted indices of {γ l · φ(x i )} M i=1 for source and target domains, respectively. Note that for a fixed dimension d, Monte Carlo approximation error is proportional to O( 1 √ L ). We utilize the SWD as the discrepancy measure between the probability distributions to match them in the embedding space. Next, we discuss a major deficiency in Eq. (1) and our remedy to tackle it. We utilize the SWD as the discrepancy measure between the probability densities, p S (φ(x S )|C j ) and p T (φ(x T )|C j ) . A. Class-conditional Alignment of Distributions A main shortcoming of Eq. (1) is that minimizing the discrepancy between p S (φ(X S )) and p T (φ(X T )) does not guarantee semantic consistency between the two domains. To clarify this point, consider the source and target domains to be images corresponding to printed digits and handwritten digits. While the feature distributions in the embedding space could have low discrepancy, the classes might not be correctly aligned in this space, e.g. digits from a class in the target domain could be matched to a wrong class of the source domain or, even digits from multiple classes in the target domain could be matched to the cluster of a single digit of the source domain. In such cases, the source classifier will not generalize well on the target domain. In other words, the shared embedding space, Z, might not be a semantically meaningful space for the target domain if we solely minimize SWD between p S (φ(X S )) and p T (φ(X T )). To solve this challenge, the encoder function should be learned such that the class-conditioned probabilities of both domains in the embedding space are similar, i.e. p S (φ(x S )|C j ) ≈ p T (φ(x T )|C j ), where C j denotes a particular class. Given this, we can mitigate the class matching problem by using an adapted version of Eq. (1) as: min v,w N i=1 L h w (φ v (x s i )), y s i + λ k j=1 D p S (φ v (x S )|C j ), p T (φ v (x T )|C j ) ,(5) where discrepancy between distributions is minimized conditioned on classes, to enforce semantic alignment in the embedding space. Solving Eq. (5), however, is not tractable as the labels for the target domain are not available and the conditional distribution, p T (φ(x T )|C j ) , is not known. To tackle the above issue, we compute a surrogate of the objective in Eq. (5). Our idea is to approximate p T (φ(x T )|C j ) by generating pseudo-labels for the target DPL = {(x t i ,ŷ t i )|ŷ t i = fθ(x t i ), p(ŷ t i |x t i ) > τ } 6: for alt = 1, . . . , ALT do 7: Update encoder parameters using pseudo-labels: 8:v = j D pS (φv(xS )|Cj), pSL(φv(xT )|Cj) 9: Update entire model: 10:v,ŵ = arg minw,v N i=1 L hw(φˆv(x s i )), y s i 11: end for 12: end for data points. The pseudo-labels are obtained from the source classifier prediction, but only for the portion of target data points that the the source classifier provides confident prediction. More specifically, we solve Eq. (5) in incremental gradient descent iterations. In particular, we first initialize the classifier network by training it on the source data. We then alternate between optimizing the classification loss for the source data and SWD loss term at each iteration. 
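A sketch of the confident pseudo-labeling and the resulting class-conditional SWD term of Eq. (5) follows; it reuses `sliced_wasserstein` from the sketch above, and the `encoder`/`classifier` modules, the threshold handling, and the truncation to equal per-class sample counts are illustrative assumptions rather than the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

def confident_pseudo_labels(encoder, classifier, x_t, tau=0.99):
    """Keep only target points whose predicted class probability exceeds tau."""
    with torch.no_grad():
        probs = F.softmax(classifier(encoder(x_t)), dim=1)
        conf, labels = probs.max(dim=1)
    keep = conf > tau
    return x_t[keep], labels[keep]

def class_conditional_swd(encoder, x_s, y_s, x_t_pl, y_t_pl, num_classes):
    """Sum of per-class SWD terms between source features and pseudo-labeled target features."""
    zs, zt = encoder(x_s), encoder(x_t_pl)
    loss = zs.new_zeros(())
    for c in range(num_classes):
        src_c, tgt_c = zs[y_s == c], zt[y_t_pl == c]
        if len(src_c) == 0 or len(tgt_c) == 0:   # skip classes with no confident matches yet
            continue
        m = min(len(src_c), len(tgt_c))          # equal counts for the sorted 1-D matching
        loss = loss + sliced_wasserstein(src_c[:m], tgt_c[:m])
    return loss
```

Each training iteration then alternates a gradient step on this loss (updating the encoder) with a cross-entropy step on a labeled source batch (updating encoder and classifier), as in Algorithm 1.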
At each iteration, we pass the target domain data points into the classifier learned on the source data and analyze the label probability distribution on the softmax layer of the classifier. We choose a threshold τ and assign pseudo-labels only to those target data points that the classifier predicts the pseudo-labels with high confidence, i.e. p(y i |x t i ) > τ . Since the source and the target domains are related, it is sensible that the source classifier can classify a subset of target data points correctly and with high confidence. We use these data points to approximate p T (φ(x T )|C j ) in Eq. (5) and update the encoder parameters, v, accordingly. In our empirical experiments, we have observed that because the domains are related, as more optimization iterations are performed, the number of data points with confident pseudolabels increases and our approximation for Eq. (5) improves and becomes more stable, enforcing the source and the target distributions to align class conditionally in the embedding space. As a side benefit, since we math the distributions class-conditionally, a problem similar to mode collapse is unlikely to occur. Figure 2 visualizes this process using real data. Our proposed framework, named Domain Adaptation with Conditional Alignment of Distributions (DACAD) is summarized in Algorithm 1. V. THEORETICAL ANALYSIS In this section, we employ existing theoretical results on suitability of optimal transport for domain adaptation [29] within our framework and prove why our algorithm can train models that generalize well on the target domain. First note that, the hypothesis class within our framework is the set of all models f θ (·) that are parameterized by θ. For any given model in this hypothesis class, we denote the observed risk on the source domain by e S . Analogously, e T denotes the observed risk on the target domain in the UDA setting. Also, letμ S = 1 N N n=1 δ(x s n ) denote the empirical source distribution, obtained from the observed training samples. We can define the empirical source distributionμ T = 1 M M m=1 δ(x t m ) similarly. Moreover, let f θ * denote the ideal model that minimizes the combined source and target risks e C (θ * ), i.e. θ * = arg min θ e C (θ) = arg min θ {e S +e T }. In the presence of enough labeled target domain data, this is the best joint model that can be learned. We rely on the following theorem [4]. Theorem 1 [29]: Under the assumptions described above for UDA, then for any d > d and ζ < √ 2, there exists a constant number N 0 depending on d such that for any ξ > 0 and min(N, M ) ≥ max(ξ −(d +2),1 ) with probability at least 1 − ξ for all f θ , the following holds: e T ≤e S + W (μ T ,μ S ) + e C (θ * )+ 2 log( 1 ξ )/ζ 1 N + 1 M .(6) For simplicity, Theorem 1 originally is proven in the binary classification setting and consider 0-1 binary loss function L(·) (thresholded binary softmax). We also limit our analysis to this setting but note that these restrictions can be loosen to be broader.The initial consequence of the above theorem might seem that minimizing the Wasserstein distance between the source and the target distributions can improve generalization error on the target domain because it will make the inequality in Eq. (6) tighter. But it is crucial to note that Wasserstein distance cannot be minimized independently from minimizing the source risk. 
Moreover, there is no guarantee that doing so, the learned model would be a good approximate of the joint optimal model f θ * which is important as the third term in the right hand side denotes in Eq. (6). We cannot even approximate e C (θ * ) in UDA framework as the there is no labeled data in the target domain. In fact, this theorem justifies why minimizing the Wasserstein distance is not sufficient, and we should minimize the source empirical risk simultaneously, and learn jointly on both domains to consider all terms in Theorem 1. Using Theorem 1, we demonstrate why our algorithm can learn models that generalize well on the target domain. We also want to highlight once more that, although we minimize SWD in our framework and our theoretical results are driven for the Wasserstein distance, it has been theoretically demonstrated that SWD is a good approximation for computing the Wasserstein distance [2]. Theorem 2: Consider we use the pseudo-labeled target dataset D PL = {x t i ,ŷ t i } M P L i=1 , which we are confident with threshold τ , in an optimization iteration in the algorithm 1. Then, the following inequality holds: e T ≤e S + W (μ S ,μ PL ) + e C (θ * ) + (1 − τ )+ 2 log( 1 ξ )/ζ 1 N + 1 M P L ,(7) where e C (θ * ) denote the expected risk of the optimally joint model f θ * on both the source domain and the confident pseudo-labeled target data points. Proof: since the pseudo-labeled data points are selected according to the threshold τ , if we select a pseudo-labeled data point randomly, then the probability of the pseudo-label to be false is equal to 1 − τ . We can define the difference between the error based on the true labels and the pseudolabel for a particular data point as follows: |L(f θ (x t i ), y t i ) − L(f θ (x t i ),ŷ t i )| = 0, if y t i =ŷ t i . 1, otherwise.(8) We can compute the expectation on the above error as: |e PL − e T | ≤ E |L(f θ (x t i ), y t i ) − L(f θ (x t i ),ŷ t i )| ≤ (1 − τ ).(9) Using Eq. (9) we can deduce: e S + e T = e S + e T + e PL − e PL ≤ e S + e PL + |e T − e PL | ≤ e S + e PL + (1 − τ ).(10) Note that since Eq. (10) is valid for all θ, if we consider the joint optimal parameter θ * in Eq. (10), we deduce: e C (θ * ) ≤ e C (θ) + (1 − τ ).(11) By considering Theorem 1, where the pseudo-labeled data points are the given target dataset, and then applying Eq. (11) on Eq.(6), Theorem 2 follows. Theorem 2 indicates that why our algorithm can potentially learn models that generalize well on the target domain. We can see that at any given iteration, we minimize the upperbound of the target error as given in (7). We minimize the source risk e S through the supervised loss. We minimize the Wasserstein distance by minimizing the SWD loss. The term e C (θ * ) is minimized because the pseudo-labeled data points by definition are selected such that the true labels can be predicted with high probability. Hence, the optimal model with parameter θ * can perform well both on the source domain and the pseudo-labeled data points. The term 1 − τ is also small because we only select the confident data points. If (crucial) at a given iteration, minimizing the upperbound in Eq. (7) reduces the target true risk, then the class-conditional overlap between the latent distributions of source and target domains increases. This is because the trained model performance has improved on both domains (the source risk e S is always minimized directly). 
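The inequalities in Theorems 1 and 2 appear to have lost their square-root symbols in extraction; assuming the standard form of this Wasserstein-based generalization bound, a plausible reconstruction of Eqs. (6) and (7) is:

```latex
% Eq. (6), Theorem 1:
e_T \le e_S + W(\hat{\mu}_T, \hat{\mu}_S) + e_C(\theta^*)
      + \sqrt{2\log(1/\xi)/\zeta}\left(\sqrt{\tfrac{1}{N}} + \sqrt{\tfrac{1}{M}}\right)

% Eq. (7), Theorem 2, with M_{PL} confident pseudo-labeled target points
% and e_C(\theta^*) now taken over the source and pseudo-labeled target data:
e_T \le e_S + W(\hat{\mu}_S, \hat{\mu}_{PL}) + e_C(\theta^*) + (1 - \tau)
      + \sqrt{2\log(1/\xi)/\zeta}\left(\sqrt{\tfrac{1}{N}} + \sqrt{\tfrac{1}{M_{PL}}}\right)
```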
As a result, in the next iteration, the number of samples with confident pseudo-labels increases which in turn makes the upperbound of Eq. (7) tighter. As a result, the constant term in the right hand side of Eq. (7) (in the second line) decreases, making generalization tighter. Hence our algorithm minimizes all the terms in Eq. (7), which would reduce the true risk on the target domain as more optimization iterations are performed. However, this result is conditioned on existence of confident pseudo-labels which means the domains must be related. VI. EXPERIMENTAL VALIDATION We evaluate our algorithm using standard benchmark UDA tasks and compare against several UDA methods. Datasets: We investigate the empirical performance of our proposed method on five commonly used benchmark datasets Fig. 2: The high-level system architecture, shown on the left, illustrates the data paths used during UDA training. On the right, t SNE visualizations demonstrate how the embedding space evolves during training for the S → U task. In the target domain, colored points are examples with assigned pseudo-labels, which increase in number with the confidence of the classifier. in UDA, namely: MNIST (M) [20], USPS (U) [21], Street View House Numbers, i.e., SVHN (S), CIFAR (CI), and STL (ST ). The first three datasets are 10 class (0-9) digit classification datasets. MNIST and USPS are collection of hand written digits whereas SVHN is a collection of real world RGB images of house numbers. STL and CIFAR contain RGB images that share 9 object classes: airplane, car, bird, cat, deer, dog, horse, ship, and truck. For the digit datasets, while six domain adaptation problems can be defined among these datasets, prior works often consider four of these six cases, as knowledge transfer from simple MNIST and USPS datasets to a more challenging SVHN domain does not seem to be tractable. Following the literature, we use 2000 randomly selected images from MNIST and 1800 images from USPS in our experiments for the case of U → M and S → M [24]. In the remaining cases, we used full datasets. All datasets have their images scaled to 32×32 pixels and the SVHN images are converted to grayscale as the encoder network is shared between the domains. CIFAR and STL maintain their RGB components. We report the target classification accuracy across the tasks. Pre-training: Our experiments involve a pre-training stage to initialize the encoder and the classifier networks solely using the source data. This is an essential step because the combined deep network can generate confident pseudolabels on the target domain only if initially trained on the related source domain. In other words, this initially learned network can be served as a naive model on the target domain. We then boost the performance on the target domain using our proposed algorithm, demonstrating that our algorithm is indeed effective for transferring knowledge. Doing so, we investigate a less-explored issue in the UDA literature. Different UDA approaches use considerably different networks, both in terms of complexity, e.g. number of layers and convolution filters, and the structure, e.g. using an autoencoder. Consequently, it is ambiguous whether performance of a particular UDA algorithm is due to successful knowledge transfer from the source domain or just a good baseline network that performs well on the target domain even without considerable knowledge transfer from the source domain. 
To highlight that our algorithm can indeed transfer knowledge, we use three different network architectures: DRCN [11], VGG [39], and a small ResNet [17]. We then show that our algorithm can effectively boost base-line performance (statistically significant) regardless of the underlying network. In most of the domain adaptation tasks, we demonstrate that this boost indeed stems from transferring knowledge from the source domain. In our experiments we used Adam optimizer [19] and set the pseudo-labeling threshold to tr = 0.99. Data Augmentation: Following the literature, we use data augmentation to create additional training data by applying reasonable transformations to input data in an effort to improve generalization [38]. Confirming the reported result in [11], we also found that geometric transformations and noise, applied to appropriate inputs, greatly improves performance and transferability of the source model to the target data. Data augmentation can help to reduce the domain shift between the two domains. The augmentations in this work are limited to translation, rotation, skew, zoom, Gaussian noise, Binomial noise, and inverted pixels. A. Results Figure 2 demonstrates how our algorithm successfully learns an embedding with class-conditional alignment of distributions of both domains. This figure presents the twodimensional t SNE visualization of the source and target domain data points in the shared embedding space for the S → U task. The horizontal axis demonstrates the optimization iterations where each cell presents data visualization after a particular optimization iteration is performed. The top sub-figures visualize the source data points, where each color represents a particular class. The bottom sub-figures visualize the target data points, where the colored data points represent the pseudo-labeled data points at each iteration and the black points represent the rest of the target domain data points. We can see that, due to pre-training initialization, the embedding space is discriminative for the source domain in the beginning, but the target distribution differs from the source distributions. However, the classifier is confident about a portion of target data points. As more optimiza- [42]. tion iterations are performed, since the network becomes a better classifier for the target domain, the number of the target pseudo-labeled data points increase, improving our approximate of Eq. 5. As a result, the discrepancy between the two distributions progressively decreases. Over time, our algorithm learns a shared embedding which is discriminative for both domains, making pseudo-labels a good prediction for the original labels, bottom, right-most sub-figure. This result empirically validates our theoretical justification on applicability of our algorithm to address UDA. We also compare our results against several recent UDA algorithms in Table I. In particular, we compare against the recent adversarial learning algorithms: Generate to Adapt (GtA) [36], CoGAN [22], ADDA [42], CyCADA [18], and I2I-Adapt [25]. We also include FADA [24], which is originally a few-shot learning technique. For FADA, we list the reported one-shot accuracy, which is very close to the UDA setting (but it is arguably a simpler problem). Additionally, we have included results for RevGrad [9], DRCN [11], AUDA [35], OPDA [4], MML [37]. The latter methods are similar to our method because these methods learn an embedding space to couple the domains. 
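As an illustration of the augmentation pipeline described above, a minimal torchvision-based sketch follows; the parameter ranges are placeholders (the text does not list exact values), and `RandomInvert` assumes a recent torchvision release:

```python
import torch
from torchvision import transforms

def add_gaussian_noise(x, std=0.05):
    """Additive Gaussian noise on a [0, 1] image tensor."""
    return (x + std * torch.randn_like(x)).clamp(0.0, 1.0)

def add_binomial_noise(x, p=0.05):
    """Binomial (multiplicative Bernoulli) noise: randomly zero out pixels."""
    return x * (torch.rand_like(x) > p)

augment = transforms.Compose([
    transforms.RandomAffine(degrees=10, translate=(0.1, 0.1),
                            scale=(0.9, 1.1), shear=10),   # rotation, translation, zoom, skew
    transforms.RandomInvert(p=0.2),                        # inverted pixels
    transforms.ToTensor(),
    transforms.Lambda(add_gaussian_noise),
    transforms.Lambda(add_binomial_noise),
])
```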
OPDA and MML are more similar as they match distributions explicitly in the learned embedding. Finally, we have included the performance of fully-supervised (FS) learning on the target domain as an upper-bound for UDA. In our own results, we include the baseline target performance that we obtain by naively employing a DRCN network as well as target performance from VGG and ResNet networks that are learned solely on the source domain. We notice that in Table I, our baseline performance is better than some of the UDA algorithms for some tasks. This is a very crucial observation as it demonstrates that, in some cases, a trained deep network with good data augmentation can extract domain invariant features that make domain adaptation feasible even without any further transfer learning procedure. The last row demonstrates that our method is effective in transferring knowledge to boost the baseline performance. In other words, Table I serves as an ablation study to demonstrate that that effectiveness of our algorithm stems from successful cross-domain knowledge transfer. We can see that our algorithm leads to near-or the state-of-the-art performance across the tasks. Additionally, an important observation is that our method significantly outperforms the methods that match distributions directly and is competent against methods that use adversarial learning. This can be explained as the result of matching distributions class-conditionally and suggests our second contribution can potentially boost performance of these methods. Finally, we note that our proposed method provide a statistically significant boost in all but two of the cases (shown in gray in Table I). VII. CONCLUSIONS AND DISCUSSION We developed a new UDA algorithm based on learning a domain-invariant embedding space. We map data points from two related domains to the embedding space such that discrepancy between the transformed distributions is minimized. We used the sliced Wasserstein distance metric as a measure to match the distributions in the embedding space. As a result, our method is computationally more efficient. Additionally, we matched distributions class-conditionally by assigning pseudo-labels to the target domain data. As a result, our method is more robust and outperforms prior UDA methods that match distributions directly. We provided theoretical justification for effectiveness of our approach and experimental validations to demonstrate that our method is competent against SOA recent UDA methods.
4,538
1907.02271
2954302879
We address the problem of unsupervised domain adaptation (UDA) by learning a cross-domain agnostic embedding space, where the distance between the probability distributions of the source and target visual domains is minimized. We use the output space of a shared cross-domain deep encoder to model the embedding space and use the Sliced-Wasserstein Distance (SWD) to measure and minimize the distance between the embedded distributions of the two source and target domains to enforce the embedding to be domain-agnostic. Additionally, we use the source domain labeled data to train a deep classifier from the embedding space to the label space to enforce the embedding space to be discriminative. As a result of this training scheme, we provide an effective solution to train the deep classification network on the source domain such that it will generalize well on the target domain, where only unlabeled training data is accessible. To mitigate the challenge of class matching, we also align corresponding classes in the embedding space by using high-confidence pseudo-labels for the target domain, i.e., assigning the class for which the source classifier has a high prediction probability. We provide theoretical justification as well as experimental results on UDA benchmark tasks to demonstrate that our method is effective and leads to state-of-the-art performance.
Several important UDA methods use adversarial learning. @cite_30 pioneered and developed an effective method to match two distributions indirectly by using adversarial learning. @cite_22 and @cite_5 use the Generative Adversarial Network (GAN) framework @cite_20 to tackle domain adaptation. The idea is to train two competing (i.e., adversarial) deep neural networks to match the source and target distributions. A generator network maps data points from both domains to the domain-invariant space, and a binary discriminator network is trained to classify the data points, with each domain considered as a class, based on the representations of the target and source data points. The generator network is trained such that eventually the discriminator cannot distinguish between the two domains, i.e., the classification rate drops to chance level ($50\%$).
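As a generic illustration of the adversarial domain-confusion idea described in this paragraph (not the implementation of any specific cited method), assuming PyTorch and hypothetical `encoder`/`discriminator` modules:

```python
import torch
import torch.nn.functional as F

def adversarial_domain_losses(encoder, discriminator, x_s, x_t):
    """Domain-confusion objective: the discriminator separates source from target
    embeddings, while the encoder (generator) is trained to fool it."""
    zs, zt = encoder(x_s), encoder(x_t)
    d_s = discriminator(zs.detach())
    d_t = discriminator(zt.detach())
    # Discriminator step: label source as 1, target as 0.
    d_loss = F.binary_cross_entropy_with_logits(d_s, torch.ones_like(d_s)) + \
             F.binary_cross_entropy_with_logits(d_t, torch.zeros_like(d_t))
    # Encoder step: make target embeddings indistinguishable from source ones.
    g_loss = F.binary_cross_entropy_with_logits(discriminator(zt), torch.ones_like(d_t))
    return d_loss, g_loss
```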
{ "abstract": [ "We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application.", "Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.", "We propose coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to the existing approaches, which require tuples of corresponding images in different domains in the training set, CoGAN can learn a joint distribution without any tuple of corresponding images. It can learn a joint distribution with just samples drawn from the marginal distributions. 
This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint distribution solution over a product of marginal distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. For each task it successfully learns the joint distribution without any tuple of corresponding images. We also demonstrate its applications to domain adaptation and image transformation.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ], "cite_N": [ "@cite_30", "@cite_5", "@cite_22", "@cite_20" ], "mid": [ "1731081199", "2593768305", "2963784072", "2099471712" ] }
Learning a Domain-Invariant Embedding for Unsupervised Domain Adaptation Using Class-Conditioned Distribution Alignment
Deep learning classification algorithms have surpassed performance of humans for a wide range of computer vision applications. However, this achievement is conditioned on availability of high-quality labeled datasets to supervise training deep neural networks. Unfortunately, preparing huge labeled datasets is not feasible for many situations as data labeling and annotation can be expensive [30]. Domain adaptation [12] is a paradigm to address the problem of labeled data scarcity in computer vision, where the goal is to improve learning speed and model generalization as well as to avoid expensive redundant model retraining. The major idea is to overcome labeled data scarcity in a target domain by transferring knowledge from a related auxiliary source domain, where labeled data is easy and cheap to obtain. A common technique in domain adaptation literature is to embed data from the two source and target visual domains in an intermediate embedding space such that common crossdomain discriminative relations are captured in the embedding space. For example, if the data from source and target domains have similar class-conditioned probability distributions in the embedding space, then a classifier trained solely using labeled data from the source domain will generalize well on data points that are drawn from the target domain distribution [29], [31]. In this paper, we propose a novel unsupervised adaptation (UDA) algorithm following the above explained procedure. Our approach is a simpler, yet effective, alternative for adversarial learning techniques that have been more dominant to address probability matching indirectly for UDA [42], [44], [24]. Our contribution is two folds. First, we train the shared encoder by minimizing the Sliced-Wasserstein Distance (SWD) [27] between the source and the target distributions in the embedding space. We also train a classifier network simultaneously using the source domain labeled data. A major benefit of SWD over alternative probability metrics is that it can be computed efficiently. Additionally, SWD is known to be suitable for gradient-based optimization which is essential for deep learning [29]. Our second contribution is to circumvent the class matching challenge [35] by minimizing SWD between conditional distributions in sequential iterations for better performance compared to prior UDA methods that match probabilities explicitly. At each iteration, we assign pseudo-labels only to the target domain data that the classifier predicts the assigned class label with high probability and use this portion of target data to minimize the SWD between conditional distributions. As more learning iterations are performed, the number of target data points with correct pseudo-labels grows and progressively enforces distributions to align class-conditionally. We provide theoretical analysis and experimental results on benchmark problems, including ablation and sensitivity studies, to demonstrate that our method is effective. IV. PROPOSED METHOD We consider the case where the feature extractor, φ v (·), is a deep convolutional encoder with weights v and the classifier h w (·) is a shallow fully connected neural network with weights w. The last layer of the classifier network is a softmax layer that assigns a membership probability distribution to any given data point. It is often the case that the labels of data points are assigned according to the class with maximum predicted probability. 
In short, the encoder network is learned to mix both domains such that the extracted features in the embedding are: 1) domain agnostic in terms of data distributions, and 2) discriminative for the source domain, so that learning $h_w$ is feasible. Figure 1 presents a system-level view of our framework. Following this framework, UDA reduces to solving the following optimization problem for $v$ and $w$:
$$\min_{v,w}\; \sum_{i=1}^{N}\mathcal{L}\big(h_w(\phi_v(x_i^s)),\, y_i^s\big) \;+\; \lambda\, D\big(p_S(\phi_v(X_S)),\, p_T(\phi_v(X_T))\big), \qquad (1)$$
where $D(\cdot,\cdot)$ is a discrepancy measure between the probabilities and $\lambda$ is a trade-off parameter. The first term in Eq. (1) is the empirical risk for classifying the labeled source data points from the embedding space, and the second term is the cross-domain probability matching loss. The encoder's learnable parameters are learned using data points from both domains, and the classifier parameters are simultaneously learned using the labeled source data. A major remaining question is how to select a proper metric. First, note that the actual distributions $p_S(\phi(X_S))$ and $p_T(\phi(X_T))$ are unknown, and we can rely only on observed samples from these distributions. Therefore, a sensible discrepancy measure $D(\cdot,\cdot)$ should be able to measure the dissimilarity between these distributions based only on the drawn samples. In this work, we use the SWD [28], as it can be computed efficiently from samples drawn from the corresponding distributions. More importantly, the SWD is a good approximation of the optimal transport [2], which has gained interest in the deep learning community as an effective distribution metric whose gradient is non-vanishing. The idea behind the SWD is to project two $d$-dimensional probability distributions onto their marginal one-dimensional distributions, i.e., to slice the high-dimensional distributions, and to approximate the Wasserstein distance by integrating the Wasserstein distances between the resulting marginal probability distributions over all possible one-dimensional subspaces. For the distribution $p_S$, a one-dimensional slice of the distribution is defined as:
$$\mathcal{R}p_S(t;\gamma) = \int p_S(x)\,\delta\big(t - \langle \gamma, x\rangle\big)\,dx, \qquad (2)$$
where $\delta(\cdot)$ denotes the Dirac delta function, $\langle \cdot,\cdot\rangle$ denotes the vector dot product, $\mathbb{S}^{d-1}$ is the unit sphere in the $d$-dimensional space, and $\gamma \in \mathbb{S}^{d-1}$ is the projection direction. In other words, $\mathcal{R}p_S(\cdot;\gamma)$ is a marginal distribution of $p_S$ obtained by integrating $p_S$ over the hyperplanes orthogonal to $\gamma$. The SWD can then be computed by integrating the Wasserstein distance between sliced distributions over all $\gamma$:
$$SW(p_S, p_T) = \int_{\mathbb{S}^{d-1}} W\big(\mathcal{R}p_S(\cdot;\gamma),\, \mathcal{R}p_T(\cdot;\gamma)\big)\,d\gamma, \qquad (3)$$
where $W(\cdot,\cdot)$ denotes the Wasserstein distance. The main advantage of using the SWD is that, unlike the Wasserstein distance, its calculation does not require a numerically expensive optimization. This is due to the fact that the Wasserstein distance between two one-dimensional probability distributions has a closed-form solution and is equal to the $\ell_p$-distance between the inverses of their cumulative distribution functions. Since only samples from the distributions are available, the one-dimensional Wasserstein distance can be approximated as the $\ell_p$-distance between the sorted samples [32]. The integral in Eq. (3) is approximated using Monte Carlo style numerical integration.
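The empirical estimator just described can be written in a few lines: draw random directions on the unit sphere, project both sample sets onto each direction, sort the projections, and average the squared differences, which is the closed-form one-dimensional Wasserstein-2 distance. This is a sketch under our own naming and implementation choices (PyTorch, 50 projections by default); it assumes the two batches have the same number of samples.

```python
# Sketch of the empirical sliced Wasserstein distance between two embedded batches.
import torch

def sliced_wasserstein(z_source, z_target, num_projections=50):
    """z_source, z_target: (M, f) tensors with the same number of rows M."""
    f = z_source.size(1)
    # Random directions, normalized so they lie on the unit sphere S^{f-1}.
    gamma = torch.randn(num_projections, f, device=z_source.device)
    gamma = gamma / gamma.norm(dim=1, keepdim=True)
    # One-dimensional projections of both sample sets: shape (M, num_projections).
    proj_s = z_source @ gamma.t()
    proj_t = z_target @ gamma.t()
    # Closed-form 1-D Wasserstein distance: compare sorted projections.
    proj_s, _ = torch.sort(proj_s, dim=0)
    proj_t, _ = torch.sort(proj_t, dim=0)
    # Normalization differs from Eq. (4) only by a constant factor.
    return ((proj_s - proj_t) ** 2).mean()
```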
Doing so, the SWD between the $f$-dimensional samples $\{\phi(x^S_i)\in\mathbb{R}^f \sim p_S\}_{i=1}^{M}$ and $\{\phi(x^T_j)\in\mathbb{R}^f \sim p_T\}_{j=1}^{M}$ can be approximated by the following sum:
$$SW^2(p_S,p_T) \approx \frac{1}{L}\sum_{l=1}^{L}\sum_{i=1}^{M}\Big|\big\langle \gamma_l, \phi(x^S_{s_l[i]})\big\rangle - \big\langle \gamma_l, \phi(x^T_{t_l[i]})\big\rangle\Big|^2, \qquad (4)$$
where $\gamma_l \in \mathbb{S}^{f-1}$ is drawn uniformly at random from the unit sphere in the $f$-dimensional embedding space, and $s_l[i]$ and $t_l[i]$ are the sorted indices of $\{\langle\gamma_l, \phi(x_i)\rangle\}_{i=1}^{M}$ for the source and target domains, respectively. Note that for a fixed dimension $d$, the Monte Carlo approximation error is proportional to $O(1/\sqrt{L})$. We use the SWD as the discrepancy measure between the probability distributions to match them in the embedding space; later, we will also use it as the discrepancy measure between the class-conditional densities $p_S(\phi(x^S)|C_j)$ and $p_T(\phi(x^T)|C_j)$. Next, we discuss a major deficiency of Eq. (1) and our remedy for it.

A. Class-conditional Alignment of Distributions

A main shortcoming of Eq. (1) is that minimizing the discrepancy between $p_S(\phi(X_S))$ and $p_T(\phi(X_T))$ does not guarantee semantic consistency between the two domains. To clarify this point, consider source and target domains consisting of images of printed digits and handwritten digits, respectively. While the feature distributions in the embedding space could have low discrepancy, the classes might not be correctly aligned in this space; e.g., digits from a class in the target domain could be matched to a wrong class of the source domain, or digits from multiple classes in the target domain could be matched to the cluster of a single digit of the source domain. In such cases, the source classifier will not generalize well on the target domain. In other words, the shared embedding space, $\mathcal{Z}$, might not be a semantically meaningful space for the target domain if we solely minimize the SWD between $p_S(\phi(X_S))$ and $p_T(\phi(X_T))$. To address this challenge, the encoder function should be learned such that the class-conditioned probabilities of both domains in the embedding space are similar, i.e., $p_S(\phi(x^S)|C_j) \approx p_T(\phi(x^T)|C_j)$, where $C_j$ denotes a particular class. Given this, we can mitigate the class matching problem by using an adapted version of Eq. (1):
$$\min_{v,w}\; \sum_{i=1}^{N}\mathcal{L}\big(h_w(\phi_v(x_i^s)),\, y_i^s\big) \;+\; \lambda \sum_{j=1}^{k} D\big(p_S(\phi_v(x^S)|C_j),\, p_T(\phi_v(x^T)|C_j)\big), \qquad (5)$$
where the discrepancy between distributions is minimized conditioned on classes, to enforce semantic alignment in the embedding space. Solving Eq. (5), however, is not tractable, as the labels for the target domain are not available and the conditional distribution $p_T(\phi(x^T)|C_j)$ is not known. To tackle this issue, we compute a surrogate of the objective in Eq. (5). Our idea is to approximate $p_T(\phi(x^T)|C_j)$ by generating pseudo-labels for the target data points. The pseudo-labels are obtained from the source classifier prediction, but only for the portion of target data points for which the source classifier provides a confident prediction. More specifically, we solve Eq. (5) in incremental gradient descent iterations. In particular, we first initialize the classifier network by training it on the source data. We then alternate between optimizing the classification loss for the source data and the SWD loss term at each iteration.
[Algorithm 1 (DACAD), excerpt: at each epoch, form the confident pseudo-labeled target set $\mathcal{D}_{PL} = \{(x^t_i, \hat{y}^t_i)\,|\,\hat{y}^t_i = f_\theta(x^t_i),\; p(\hat{y}^t_i|x^t_i) > \tau\}$; then, for $alt = 1,\dots,ALT$, update the encoder parameters by minimizing $\sum_j D\big(p_S(\phi_v(x^S)|C_j),\, p_{PL}(\phi_v(x^T)|C_j)\big)$ and update the entire model via $\hat{v},\hat{w} = \arg\min_{w,v} \sum_{i=1}^{N} \mathcal{L}\big(h_w(\phi_v(x^s_i)),\, y^s_i\big)$.]
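To illustrate this alternation, the sketch below carries out one class-conditional matching step: it assigns pseudo-labels to the target points the classifier is confident about and accumulates the sliced Wasserstein loss class by class between source and pseudo-labeled target embeddings. It reuses the Encoder, Classifier, and sliced_wasserstein sketches given earlier; the confidence threshold, the truncation to equal per-class sample counts, and all names are illustrative simplifications rather than the authors' implementation.

```python
# Sketch of one class-conditional SWD step with confident pseudo-labels.
import torch

def conditional_swd_step(encoder, classifier, x_src, y_src, x_tgt,
                         num_classes=10, tau=0.99):
    z_src, z_tgt = encoder(x_src), encoder(x_tgt)
    probs = torch.softmax(classifier(z_tgt), dim=1)
    conf, pseudo = probs.max(dim=1)
    keep = conf > tau                        # confident target points only
    loss = z_src.new_zeros(())
    for c in range(num_classes):
        src_c = z_src[y_src == c]
        tgt_c = z_tgt[keep & (pseudo == c)]
        m = min(len(src_c), len(tgt_c))      # the SWD sketch expects equal sizes
        if m > 1:
            loss = loss + sliced_wasserstein(src_c[:m], tgt_c[:m])
    return loss                              # added to the source classification loss
```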
At each iteration, we pass the target domain data points through the classifier learned on the source data and analyze the label probability distribution on the softmax layer of the classifier. We choose a threshold $\tau$ and assign pseudo-labels only to those target data points for which the classifier predicts a label with high confidence, i.e., $p(\hat{y}^t_i \mid x^t_i) > \tau$. Since the source and the target domains are related, it is sensible that the source classifier can classify a subset of target data points correctly and with high confidence. We use these data points to approximate $p_T(\phi(x^T)|C_j)$ in Eq. (5) and update the encoder parameters, $v$, accordingly. In our empirical experiments, we have observed that, because the domains are related, as more optimization iterations are performed, the number of data points with confident pseudo-labels increases, our approximation of Eq. (5) improves and becomes more stable, and the source and target distributions are enforced to align class-conditionally in the embedding space. As a side benefit, since we match the distributions class-conditionally, a problem similar to mode collapse is unlikely to occur. Figure 2 visualizes this process using real data. Our proposed framework, named Domain Adaptation with Conditional Alignment of Distributions (DACAD), is summarized in Algorithm 1.

V. THEORETICAL ANALYSIS

In this section, we employ existing theoretical results on the suitability of optimal transport for domain adaptation [29] within our framework and prove why our algorithm can train models that generalize well on the target domain. First, note that the hypothesis class within our framework is the set of all models $f_\theta(\cdot)$ parameterized by $\theta$. For any given model in this hypothesis class, we denote the observed risk on the source domain by $e_S$. Analogously, $e_T$ denotes the observed risk on the target domain in the UDA setting. Also, let $\hat\mu_S = \frac{1}{N}\sum_{n=1}^{N}\delta(x^s_n)$ denote the empirical source distribution, obtained from the observed training samples. We define the empirical target distribution $\hat\mu_T = \frac{1}{M}\sum_{m=1}^{M}\delta(x^t_m)$ similarly. Moreover, let $f_{\theta^*}$ denote the ideal model that minimizes the combined source and target risk $e_C(\theta^*)$, i.e., $\theta^* = \arg\min_\theta e_C(\theta) = \arg\min_\theta \{e_S + e_T\}$. In the presence of enough labeled target domain data, this is the best joint model that can be learned. We rely on the following theorem [4].

Theorem 1 [29]: Under the assumptions described above for UDA, for any $d' > d$ and $\zeta < \sqrt{2}$, there exists a constant $N_0$ depending on $d'$ such that for any $\xi > 0$ and $\min(N, M) \ge N_0\,\max(\xi^{-(d'+2)}, 1)$, with probability at least $1-\xi$, for all $f_\theta$ the following holds:
$$e_T \;\le\; e_S + W(\hat\mu_T, \hat\mu_S) + e_C(\theta^*) + \sqrt{2\log(1/\xi)/\zeta}\left(\sqrt{\tfrac{1}{N}} + \sqrt{\tfrac{1}{M}}\right). \qquad (6)$$
For simplicity, Theorem 1 was originally proven in the binary classification setting with the 0-1 loss function $\mathcal{L}(\cdot)$ (thresholded binary softmax). We also limit our analysis to this setting, but note that these restrictions can be loosened. At first glance, the above theorem might seem to imply that minimizing the Wasserstein distance between the source and the target distributions alone can improve the generalization error on the target domain, because it makes the inequality in Eq. (6) tighter. But it is crucial to note that the Wasserstein distance cannot be minimized independently of minimizing the source risk.
Moreover, there is no guarantee that, by doing so, the learned model would be a good approximation of the jointly optimal model $f_{\theta^*}$, whose importance is captured by the third term on the right-hand side of Eq. (6). We cannot even approximate $e_C(\theta^*)$ in the UDA framework, as there is no labeled data in the target domain. In fact, this theorem justifies why minimizing the Wasserstein distance is not sufficient: we should simultaneously minimize the empirical source risk and learn jointly on both domains, so that all terms in Theorem 1 are taken into account. Using Theorem 1, we demonstrate why our algorithm can learn models that generalize well on the target domain. We also want to highlight once more that, although we minimize the SWD in our framework and our theoretical results are derived for the Wasserstein distance, it has been theoretically demonstrated that the SWD is a good approximation of the Wasserstein distance [2].

Theorem 2: Suppose that, at an optimization iteration of Algorithm 1, we use the pseudo-labeled target dataset $\mathcal{D}_{PL} = \{(x^t_i, \hat{y}^t_i)\}_{i=1}^{M_{PL}}$, obtained with confidence threshold $\tau$. Then the following inequality holds:
$$e_T \;\le\; e_S + W(\hat\mu_S, \hat\mu_{PL}) + e_C(\theta^*) + (1-\tau) + \sqrt{2\log(1/\xi)/\zeta}\left(\sqrt{\tfrac{1}{N}} + \sqrt{\tfrac{1}{M_{PL}}}\right), \qquad (7)$$
where $e_C(\theta^*)$ denotes the expected risk of the jointly optimal model $f_{\theta^*}$ on both the source domain and the confident pseudo-labeled target data points.

Proof: Since the pseudo-labeled data points are selected according to the threshold $\tau$, if we pick a pseudo-labeled data point at random, the probability of its pseudo-label being false is at most $1-\tau$. We can write the difference between the error based on the true label and the error based on the pseudo-label for a particular data point as:
$$\big|\mathcal{L}(f_\theta(x^t_i), y^t_i) - \mathcal{L}(f_\theta(x^t_i), \hat{y}^t_i)\big| = \begin{cases} 0, & \text{if } y^t_i = \hat{y}^t_i,\\ 1, & \text{otherwise.} \end{cases} \qquad (8)$$
Taking expectations of the above error yields:
$$|e_{PL} - e_T| \;\le\; \mathbb{E}\Big[\big|\mathcal{L}(f_\theta(x^t_i), y^t_i) - \mathcal{L}(f_\theta(x^t_i), \hat{y}^t_i)\big|\Big] \;\le\; (1-\tau). \qquad (9)$$
Using Eq. (9) we can deduce:
$$e_S + e_T = e_S + e_T + e_{PL} - e_{PL} \;\le\; e_S + e_{PL} + |e_T - e_{PL}| \;\le\; e_S + e_{PL} + (1-\tau). \qquad (10)$$
Since Eq. (10) is valid for all $\theta$, considering the jointly optimal parameter $\theta^*$ in Eq. (10) gives:
$$e_C(\theta^*) \;\le\; e_C(\theta) + (1-\tau). \qquad (11)$$
By applying Theorem 1 with the pseudo-labeled data points as the given target dataset, and then applying Eq. (11) to Eq. (6), Theorem 2 follows.

Theorem 2 indicates why our algorithm can potentially learn models that generalize well on the target domain. At any given iteration, we minimize the upper bound on the target error given in Eq. (7). We minimize the source risk $e_S$ through the supervised loss. We minimize the Wasserstein distance by minimizing the SWD loss. The term $e_C(\theta^*)$ is minimized because the pseudo-labeled data points are, by definition, selected such that their true labels can be predicted with high probability; hence, the optimal model with parameter $\theta^*$ can perform well on both the source domain and the pseudo-labeled data points. The term $1-\tau$ is also small because we only select confident data points. Crucially, if at a given iteration minimizing the upper bound in Eq. (7) reduces the true target risk, then the class-conditional overlap between the latent distributions of the source and target domains increases, because the trained model's performance has improved on both domains (the source risk $e_S$ is always minimized directly).
As a result, in the next iteration the number of samples with confident pseudo-labels increases, which in turn makes the upper bound in Eq. (7) tighter: the constant term on the right-hand side of Eq. (7) (in the second line) decreases, tightening the generalization bound. Hence, our algorithm minimizes all the terms in Eq. (7), which reduces the true risk on the target domain as more optimization iterations are performed. However, this result is conditioned on the existence of confident pseudo-labels, which means the domains must be related.

VI. EXPERIMENTAL VALIDATION

We evaluate our algorithm using standard benchmark UDA tasks and compare against several UDA methods.
Datasets: We investigate the empirical performance of our proposed method on five commonly used benchmark datasets in UDA, namely: MNIST (M) [20], USPS (U) [21], Street View House Numbers, i.e., SVHN (S), CIFAR (CI), and STL (ST).
[Fig. 2 caption: The high-level system architecture, shown on the left, illustrates the data paths used during UDA training. On the right, t-SNE visualizations demonstrate how the embedding space evolves during training for the S → U task. In the target domain, colored points are examples with assigned pseudo-labels, which increase in number with the confidence of the classifier.]
The first three datasets are 10-class (0-9) digit classification datasets. MNIST and USPS are collections of handwritten digits, whereas SVHN is a collection of real-world RGB images of house numbers. STL and CIFAR contain RGB images that share 9 object classes: airplane, car, bird, cat, deer, dog, horse, ship, and truck. For the digit datasets, while six domain adaptation problems can be defined among these datasets, prior works often consider four of the six cases, as knowledge transfer from the simple MNIST and USPS datasets to the more challenging SVHN domain does not seem to be tractable. Following the literature, we use 2000 randomly selected images from MNIST and 1800 images from USPS in our experiments for the cases U → M and S → M [24]. In the remaining cases, we used the full datasets. All datasets have their images scaled to 32×32 pixels, and the SVHN images are converted to grayscale because the encoder network is shared between the domains. CIFAR and STL maintain their RGB components. We report the target classification accuracy across the tasks.
Pre-training: Our experiments involve a pre-training stage to initialize the encoder and the classifier networks solely using the source data. This is an essential step because the combined deep network can generate confident pseudo-labels on the target domain only if it is initially trained on the related source domain. In other words, this initially learned network serves as a naive model on the target domain. We then boost the performance on the target domain using our proposed algorithm, demonstrating that our algorithm is indeed effective for transferring knowledge. In doing so, we investigate a less-explored issue in the UDA literature. Different UDA approaches use considerably different networks, both in terms of complexity, e.g., the number of layers and convolution filters, and structure, e.g., the use of an autoencoder. Consequently, it is ambiguous whether the performance of a particular UDA algorithm is due to successful knowledge transfer from the source domain or simply to a good baseline network that performs well on the target domain even without considerable knowledge transfer from the source domain.
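The preprocessing described above (all images scaled to 32×32, SVHN converted to grayscale so that a single encoder can be shared) can be expressed, for example, with torchvision transforms. The dataset root path and the use of torchvision are assumptions made for illustration.

```python
# Illustrative preprocessing consistent with the setup described above.
from torchvision import datasets, transforms

digit_tf = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.Grayscale(num_output_channels=1),  # no-op for MNIST/USPS, converts SVHN
    transforms.ToTensor(),
])

mnist = datasets.MNIST("data/", train=True, download=True, transform=digit_tf)
svhn  = datasets.SVHN("data/", split="train", download=True, transform=digit_tf)
```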
To highlight that our algorithm can indeed transfer knowledge, we use three different network architectures: DRCN [11], VGG [39], and a small ResNet [17]. We then show that our algorithm can effectively boost the baseline performance (statistically significantly) regardless of the underlying network. In most of the domain adaptation tasks, we demonstrate that this boost indeed stems from transferring knowledge from the source domain. In our experiments we used the Adam optimizer [19] and set the pseudo-labeling threshold to τ = 0.99.
Data Augmentation: Following the literature, we use data augmentation to create additional training data by applying reasonable transformations to the input data in an effort to improve generalization [38]. Confirming the result reported in [11], we also found that geometric transformations and noise, applied to appropriate inputs, greatly improve the performance and transferability of the source model to the target data. Data augmentation can help reduce the domain shift between the two domains. The augmentations in this work are limited to translation, rotation, skew, zoom, Gaussian noise, binomial noise, and inverted pixels.

A. Results

Figure 2 demonstrates how our algorithm successfully learns an embedding with class-conditional alignment of the distributions of both domains. This figure presents the two-dimensional t-SNE visualization of the source and target domain data points in the shared embedding space for the S → U task. The horizontal axis corresponds to the optimization iterations, where each cell presents the data visualization after a particular optimization iteration. The top sub-figures visualize the source data points, where each color represents a particular class. The bottom sub-figures visualize the target data points, where the colored data points represent the pseudo-labeled data points at each iteration and the black points represent the rest of the target domain data points. We can see that, due to the pre-training initialization, the embedding space is discriminative for the source domain in the beginning, but the target distribution differs from the source distribution. However, the classifier is confident about a portion of the target data points. As more optimization iterations are performed, the network becomes a better classifier for the target domain, so the number of target pseudo-labeled data points increases, improving our approximation of Eq. (5). As a result, the discrepancy between the two distributions progressively decreases. Over time, our algorithm learns a shared embedding which is discriminative for both domains, making the pseudo-labels a good prediction of the original labels (bottom right-most sub-figure). This result empirically validates our theoretical justification of the applicability of our algorithm to UDA. We also compare our results against several recent UDA algorithms in Table I. In particular, we compare against the recent adversarial learning algorithms: Generate to Adapt (GtA) [36], CoGAN [22], ADDA [42], CyCADA [18], and I2I-Adapt [25]. We also include FADA [24], which is originally a few-shot learning technique. For FADA, we list the reported one-shot accuracy, which is very close to the UDA setting (but it is arguably a simpler problem). Additionally, we have included results for RevGrad [9], DRCN [11], AUDA [35], OPDA [4], and MML [37]. The latter methods are similar to our method because they learn an embedding space to couple the domains.
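Returning to the Data Augmentation paragraph above, the listed geometric transformations (translation, rotation, skew, zoom) and pixel noise could be implemented along the following lines; the parameter ranges, the Gaussian-noise helper, and the omission of binomial noise and pixel inversion are our own illustrative choices, not the authors' pipeline.

```python
# Sketch of an augmentation pipeline in the spirit of the one described above.
import torch
from torchvision import transforms

def add_gaussian_noise(x, std=0.05):
    # x is a tensor in [0, 1]; add noise and clamp back to the valid range.
    return (x + std * torch.randn_like(x)).clamp(0.0, 1.0)

augment = transforms.Compose([
    transforms.RandomAffine(degrees=10,            # rotation
                            translate=(0.1, 0.1),  # translation
                            scale=(0.9, 1.1),      # zoom
                            shear=10),             # skew
    transforms.ToTensor(),
    transforms.Lambda(add_gaussian_noise),
])
```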
OPDA and MML are the most similar to ours, as they match distributions explicitly in the learned embedding. Finally, we have included the performance of fully-supervised (FS) learning on the target domain as an upper bound for UDA. In our own results, we include the baseline target performance that we obtain by naively employing a DRCN network, as well as the target performance of VGG and ResNet networks learned solely on the source domain. We notice that in Table I, our baseline performance is better than some of the UDA algorithms for some tasks. This is a crucial observation, as it demonstrates that, in some cases, a trained deep network with good data augmentation can extract domain-invariant features that make domain adaptation feasible even without any further transfer learning procedure. The last row demonstrates that our method is effective in transferring knowledge to boost the baseline performance. In other words, Table I also serves as an ablation study, demonstrating that the effectiveness of our algorithm stems from successful cross-domain knowledge transfer. We can see that our algorithm leads to near- or state-of-the-art performance across the tasks. Additionally, an important observation is that our method significantly outperforms the methods that match distributions directly and is competitive with methods that use adversarial learning. This can be explained as the result of matching distributions class-conditionally, and it suggests that our second contribution could potentially boost the performance of these methods as well. Finally, we note that our proposed method provides a statistically significant boost in all but two of the cases (shown in gray in Table I).

VII. CONCLUSIONS AND DISCUSSION

We developed a new UDA algorithm based on learning a domain-invariant embedding space. We map data points from two related domains to the embedding space such that the discrepancy between the transformed distributions is minimized. We used the sliced Wasserstein distance as the measure for matching the distributions in the embedding space; as a result, our method is computationally more efficient than approaches that rely on the full Wasserstein distance. Additionally, we matched the distributions class-conditionally by assigning pseudo-labels to the target domain data. As a result, our method is more robust and outperforms prior UDA methods that match distributions directly. We provided theoretical justification for the effectiveness of our approach and experimental validation to demonstrate that our method is competitive with recent state-of-the-art UDA methods.
4,538
1907.02271
2954302879
We address the problem of unsupervised domain adaptation (UDA) by learning a domain-agnostic embedding space, where the distance between the probability distributions of the source and target visual domains is minimized. We use the output space of a shared cross-domain deep encoder to model the embedding space and use the Sliced-Wasserstein Distance (SWD) to measure and minimize the distance between the embedded distributions of the source and target domains, enforcing the embedding to be domain-agnostic. Additionally, we use the labeled source domain data to train a deep classifier from the embedding space to the label space, enforcing the embedding space to be discriminative. As a result of this training scheme, we provide an effective solution to train the deep classification network on the source domain such that it will generalize well on the target domain, where only unlabeled training data is accessible. To mitigate the challenge of class matching, we also align corresponding classes in the embedding space by using high-confidence pseudo-labels for the target domain, i.e., assigning the class for which the source classifier has a high prediction probability. We provide theoretical justification as well as experimental results on UDA benchmark tasks to demonstrate that our method is effective and leads to state-of-the-art performance.
As the Wasserstein distance finds more applications in deep learning, its efficient computation has become an active area of research. The reason is that the Wasserstein distance is defined as a linear programming problem, and solving this optimization problem is computationally expensive for high-dimensional data. Although computationally efficient variations and approximations of the Wasserstein distance have recently been proposed @cite_34 @cite_39 @cite_3 , these variations still require an additional optimization in each iteration of the stochastic gradient descent (SGD) steps to match distributions. @cite_27 used a regularized version of the optimal transport for domain adaptation. @cite_13 used a dual stochastic gradient algorithm for solving the regularized optimal transport problem. Alternatively, we propose to address the above challenges using the Sliced Wasserstein Distance (SWD). The definition of the SWD is motivated by the fact that, in contrast to higher dimensions, the Wasserstein distance for one-dimensional distributions has a closed-form solution which can be computed efficiently. This fact is used to approximate the Wasserstein distance by the SWD, which is a computationally efficient approximation and has recently drawn interest from the machine learning and computer vision communities @cite_28 @cite_35 @cite_32 @cite_41 @cite_42 .
{ "abstract": [ "This article details two approaches to compute barycenters of measures using 1-D Wasserstein distances along radial projections of the input measures. The first method makes use of the Radon transform of the measures, and the second is the solution of a convex optimization problem over the space of measures. We show several properties of these barycenters and explain their relationship. We show numerical approximation schemes based on a discrete Radon transform and on the resolution of a non-convex optimization problem. We explore the respective merits and drawbacks of each approach on applications to two image processing problems: color transfer and texture mixing.", "", "Generative Adversarial Nets (GANs) are very successful at modeling distributions from given samples, even in the high-dimensional case. However, their formulation is also known to be hard to optimize and often not stable. While this is particularly true for early GAN formulations, there has been significant empirically motivated and theoretically founded progress to improve stability, for instance, by using the Wasserstein distance rather than the Jenson-Shannon divergence. Here, we consider an alternative formulation for generative modeling based on random projections which, in its simplest form, results in a single objective rather than a saddle-point formulation. By augmenting this approach with a discriminator we improve its accuracy. We found our approach to be significantly more stable compared to even the improved Wasserstein GAN. Further, unlike the traditional GAN loss, the loss formulated in our method is a good measure of the actual distance between the distributions and, for the first time for GAN training, we are able to show estimates for the same.", "By building upon the recent theory that established the connection between implicit generative modeling (IGM) and optimal transport, in this study, we propose a novel parameter-free algorithm for learning the underlying distributions of complicated datasets and sampling from them. The proposed algorithm is based on a functional optimization problem, which aims at finding a measure that is close to the data distribution as much as possible and also expressive enough for generative modeling purposes. We formulate the problem as a gradient flow in the space of probability measures. The connections between gradient flows and stochastic differential equations let us develop a computationally efficient algorithm for solving the optimization problem. We provide formal theoretical analysis where we prove finite-time error guarantees for the proposed algorithm. To the best of our knowledge, the proposed algorithm is the first nonparametric IGM algorithm with explicit theoretical guarantees. Our experimental results support our theory and show that our algorithm is able to successfully capture the structure of different types of data distributions.", "", "An efficient method for computing solutions to the Optimal Transportation (OT) problem with a wide class of cost functions is presented. The standard linear programming (LP) discretization of the continuous problem becomes intractible for moderate grid sizes. A grid refinement method results in a linear cost algorithm. Weak convergence of solutions is stablished. Barycentric projection of transference plans is used to improve the accuracy of solutions. The method is applied to more general problems, including partial optimal transportation, and barycenter problems. 
Computational examples validate the accuracy and efficiency of the method. Optimal maps between nonconvex domains, partial OT free boundaries, and high accuracy barycenters are presented.", "This paper introduces a new class of algorithms for optimization problems involving optimal transportation over geometric domains. Our main contribution is to show that optimal transportation can be made tractable over large domains used in graphics, such as images and triangle meshes, improving performance by orders of magnitude compared to previous work. To this end, we approximate optimal transportation distances using entropic regularization. The resulting objective contains a geodesic distance-based kernel that can be approximated with the heat kernel. This approach leads to simple iterative numerical schemes with linear convergence, in which each iteration only requires Gaussian convolution or the solution of a sparse, pre-factored linear system. We demonstrate the versatility and efficiency of our method on tasks including reflectance interpolation, color transfer, and geometry processing.", "Domain adaptation is one of the most challenging tasks of modern data analytics. If the adaptation is done correctly, models built on a specific data representation become more robust when confronted to data depicting the same classes, but described by another observation system. Among the many strategies proposed, finding domain-invariant representations has shown excellent properties, in particular since it allows to train a unique classifier effective in all domains. In this paper, we propose a regularized unsupervised optimal transportation model to perform the alignment of the representations in the source and target domains. We learn a transportation plan matching both PDFs, which constrains labeled samples of the same class in the source domain to remain close during transport. This way, we exploit at the same time the labeled samples in the source and the distributions observed in both domains. Experiments on toy and challenging real visual adaptation examples show the interest of the method, that consistently outperforms state of the art approaches. In addition, numerical experiments show that our approach leads to better performances on domain invariant deep learning features and can be easily adapted to the semi-supervised case where few labeled samples are available in the target domain.", "Optimal transport distances are a fundamental family of distances for probability measures and histograms of features. Despite their appealing theoretical properties, excellent performance in retrieval tasks and intuitive formulation, their computation involves the resolution of a linear program whose cost can quickly become prohibitive whenever the size of the support of these measures or the histograms' dimension exceeds a few hundred. We propose in this work a new family of optimal transport distances that look at transport problems from a maximum-entropy perspective. We smooth the classic optimal transport problem with an entropic regularization term, and show that the resulting optimum is also a distance which can be computed through Sinkhorn's matrix scaling algorithm at a speed that is several orders of magnitude faster than that of transport solvers. 
We also show that this regularized distance improves upon classic optimal transport distances on the MNIST classification problem.", "This paper presents a novel two-step approach for the fundamental problem of learning an optimal map from one distribution to another. First, we learn an optimal transport (OT) plan, which can be thought as a one-to-many map between the two distributions. To that end, we propose a stochastic dual approach of regularized OT, and show empirically that it scales better than a recent related approach when the amount of samples is very large. Second, we estimate a Monge map as a deep neural network learned by approximating the barycentric projection of the previously-obtained OT plan. This parameterization allows generalization of the mapping outside the support of the input measure. We prove two theoretical stability results of regularized OT which show that our estimations converge to the OT and Monge map between the underlying continuous measures. We showcase our proposed approach on two applications: domain adaptation and generative modeling." ], "cite_N": [ "@cite_35", "@cite_28", "@cite_41", "@cite_42", "@cite_32", "@cite_3", "@cite_39", "@cite_27", "@cite_34", "@cite_13" ], "mid": [ "2019106840", "", "2963398989", "2963506208", "", "2253327477", "2009172320", "1594039573", "2158131535", "2964252913" ] }
Learning a Domain-Invariant Embedding for Unsupervised Domain Adaptation Using Class-Conditioned Distribution Alignment
Deep learning classification algorithms have surpassed performance of humans for a wide range of computer vision applications. However, this achievement is conditioned on availability of high-quality labeled datasets to supervise training deep neural networks. Unfortunately, preparing huge labeled datasets is not feasible for many situations as data labeling and annotation can be expensive [30]. Domain adaptation [12] is a paradigm to address the problem of labeled data scarcity in computer vision, where the goal is to improve learning speed and model generalization as well as to avoid expensive redundant model retraining. The major idea is to overcome labeled data scarcity in a target domain by transferring knowledge from a related auxiliary source domain, where labeled data is easy and cheap to obtain. A common technique in domain adaptation literature is to embed data from the two source and target visual domains in an intermediate embedding space such that common crossdomain discriminative relations are captured in the embedding space. For example, if the data from source and target domains have similar class-conditioned probability distributions in the embedding space, then a classifier trained solely using labeled data from the source domain will generalize well on data points that are drawn from the target domain distribution [29], [31]. In this paper, we propose a novel unsupervised adaptation (UDA) algorithm following the above explained procedure. Our approach is a simpler, yet effective, alternative for adversarial learning techniques that have been more dominant to address probability matching indirectly for UDA [42], [44], [24]. Our contribution is two folds. First, we train the shared encoder by minimizing the Sliced-Wasserstein Distance (SWD) [27] between the source and the target distributions in the embedding space. We also train a classifier network simultaneously using the source domain labeled data. A major benefit of SWD over alternative probability metrics is that it can be computed efficiently. Additionally, SWD is known to be suitable for gradient-based optimization which is essential for deep learning [29]. Our second contribution is to circumvent the class matching challenge [35] by minimizing SWD between conditional distributions in sequential iterations for better performance compared to prior UDA methods that match probabilities explicitly. At each iteration, we assign pseudo-labels only to the target domain data that the classifier predicts the assigned class label with high probability and use this portion of target data to minimize the SWD between conditional distributions. As more learning iterations are performed, the number of target data points with correct pseudo-labels grows and progressively enforces distributions to align class-conditionally. We provide theoretical analysis and experimental results on benchmark problems, including ablation and sensitivity studies, to demonstrate that our method is effective. IV. PROPOSED METHOD We consider the case where the feature extractor, φ v (·), is a deep convolutional encoder with weights v and the classifier h w (·) is a shallow fully connected neural network with weights w. The last layer of the classifier network is a softmax layer that assigns a membership probability distribution to any given data point. It is often the case that the labels of data points are assigned according to the class with maximum predicted probability. 
In short, the encoder network is learned to mix both domains such that the extracted features in the embedding are: 1) domain agnostic in terms of data distributions, and 2) discriminative for the source domain to make learning h w feasible. Figure 1 demonstrates system level presentation of our framework. Following this framework, the UDA reduces to solving the following optimization problem to solve for v and w: min v,w N i=1 L h w (φ v (x s i )), y s i + λD p S (φ v (X S )), p T (φ v (X T )) ,(1) where D(·, ·) is a discrepancy measure between the probabilities and λ is a trade-off parameter. The first term in Eq. (1) is empirical risk for classifying the source labeled data points from the embedding space and the second term is the cross-domain probability matching loss. The encoder's learnable parameters are learned using data points from both domains and the classifier parameters are simultaneously learned using the source domain labeled data. A major remaining question is to select a proper metric. First, note that the actual distributions p S (φ(X S )) and p T (φ(X T )) are unknown and we can rely only on observed samples from these distributions. Therefore, a sensible discrepancy measure, D(·, ·), should be able to measure the dissimilarity between these distributions only based on the drawn samples. In this work, we use the SWD [28] as it is computationally efficient to compute SWD from drawn samples from the corresponding distributions. More importantly, the SWD is a good approximation for the optimal transport [2] which has gained interest in deep learning community as it is an effective distribution metric and its gradient is non-vanishing. The idea behind the SWD is to project two d-dimensional probability distributions into their marginal one-dimensional distributions, i.e., slicing the high-dimensional distributions, and to approximate the Wasserstein distance by integrating the Wasserstein distances between the resulting marginal probability distributions over all possible one-dimensional subspaces. For the distribution p S , a one-dimensional slice of the distribution is defined as: Rp S (t; γ) = S p S (x)δ(t − γ, x )dx,(2) where δ(·) denotes the Kronecker delta function, ·, · denotes the vector dot product, S d−1 is the d-dimensional unit sphere and γ is the projection direction. In other words, Rp S (·; γ) is a marginal distribution of p S obtained from integrating p S over the hyperplanes orthogonal to γ. The SWD then can be computed by integrating the Wasserstein distance between sliced distributions over all γ: SW (p S , p T ) = S d−1 W (Rp S (·; γ), Rp T (·; γ))dγ(3) where W (·) denotes the Wasserstein distance. The main advantage of using the SWD is that, unlike the Wasserstein distance, calculation of the SWD does not require a numerically expensive optimization. This is due to the fact that the Wasserstein distance between two one-dimensional probability distributions has a closed form solution and is equal to the p -distance between the inverse of their cumulative distribution functions Since only samples from distributions are available, the one-dimensional Wasserstein distance can be approximated as the p -distance between the sorted samples [32]. The integral in Eq. (3) is approximated using a Monte Carlo style numerical integration. 
Doing so, the SWD between f -dimensional samples {φ(x S i ) ∈ R f ∼ p S } M i=1 and {φ(x T i ) ∈ R f ∼ p T } M j=1 can be approximated as the following sum: SW 2 (pS , pT ) ≈ 1 L L l=1 M i=1 | γ l , φ(x S s l [i] ) − γ l , φ(x T t l [i] ) | 2 (4) where γ l ∈ S f −1 is uniformly drawn random sample from the unit f -dimensional ball S f −1 , and s l [i] and t l [i] are the sorted indices of {γ l · φ(x i )} M i=1 for source and target domains, respectively. Note that for a fixed dimension d, Monte Carlo approximation error is proportional to O( 1 √ L ). We utilize the SWD as the discrepancy measure between the probability distributions to match them in the embedding space. Next, we discuss a major deficiency in Eq. (1) and our remedy to tackle it. We utilize the SWD as the discrepancy measure between the probability densities, p S (φ(x S )|C j ) and p T (φ(x T )|C j ) . A. Class-conditional Alignment of Distributions A main shortcoming of Eq. (1) is that minimizing the discrepancy between p S (φ(X S )) and p T (φ(X T )) does not guarantee semantic consistency between the two domains. To clarify this point, consider the source and target domains to be images corresponding to printed digits and handwritten digits. While the feature distributions in the embedding space could have low discrepancy, the classes might not be correctly aligned in this space, e.g. digits from a class in the target domain could be matched to a wrong class of the source domain or, even digits from multiple classes in the target domain could be matched to the cluster of a single digit of the source domain. In such cases, the source classifier will not generalize well on the target domain. In other words, the shared embedding space, Z, might not be a semantically meaningful space for the target domain if we solely minimize SWD between p S (φ(X S )) and p T (φ(X T )). To solve this challenge, the encoder function should be learned such that the class-conditioned probabilities of both domains in the embedding space are similar, i.e. p S (φ(x S )|C j ) ≈ p T (φ(x T )|C j ), where C j denotes a particular class. Given this, we can mitigate the class matching problem by using an adapted version of Eq. (1) as: min v,w N i=1 L h w (φ v (x s i )), y s i + λ k j=1 D p S (φ v (x S )|C j ), p T (φ v (x T )|C j ) ,(5) where discrepancy between distributions is minimized conditioned on classes, to enforce semantic alignment in the embedding space. Solving Eq. (5), however, is not tractable as the labels for the target domain are not available and the conditional distribution, p T (φ(x T )|C j ) , is not known. To tackle the above issue, we compute a surrogate of the objective in Eq. (5). Our idea is to approximate p T (φ(x T )|C j ) by generating pseudo-labels for the target DPL = {(x t i ,ŷ t i )|ŷ t i = fθ(x t i ), p(ŷ t i |x t i ) > τ } 6: for alt = 1, . . . , ALT do 7: Update encoder parameters using pseudo-labels: 8:v = j D pS (φv(xS )|Cj), pSL(φv(xT )|Cj) 9: Update entire model: 10:v,ŵ = arg minw,v N i=1 L hw(φˆv(x s i )), y s i 11: end for 12: end for data points. The pseudo-labels are obtained from the source classifier prediction, but only for the portion of target data points that the the source classifier provides confident prediction. More specifically, we solve Eq. (5) in incremental gradient descent iterations. In particular, we first initialize the classifier network by training it on the source data. We then alternate between optimizing the classification loss for the source data and SWD loss term at each iteration. 
At each iteration, we pass the target domain data points into the classifier learned on the source data and analyze the label probability distribution on the softmax layer of the classifier. We choose a threshold τ and assign pseudo-labels only to those target data points that the classifier predicts the pseudo-labels with high confidence, i.e. p(y i |x t i ) > τ . Since the source and the target domains are related, it is sensible that the source classifier can classify a subset of target data points correctly and with high confidence. We use these data points to approximate p T (φ(x T )|C j ) in Eq. (5) and update the encoder parameters, v, accordingly. In our empirical experiments, we have observed that because the domains are related, as more optimization iterations are performed, the number of data points with confident pseudolabels increases and our approximation for Eq. (5) improves and becomes more stable, enforcing the source and the target distributions to align class conditionally in the embedding space. As a side benefit, since we math the distributions class-conditionally, a problem similar to mode collapse is unlikely to occur. Figure 2 visualizes this process using real data. Our proposed framework, named Domain Adaptation with Conditional Alignment of Distributions (DACAD) is summarized in Algorithm 1. V. THEORETICAL ANALYSIS In this section, we employ existing theoretical results on suitability of optimal transport for domain adaptation [29] within our framework and prove why our algorithm can train models that generalize well on the target domain. First note that, the hypothesis class within our framework is the set of all models f θ (·) that are parameterized by θ. For any given model in this hypothesis class, we denote the observed risk on the source domain by e S . Analogously, e T denotes the observed risk on the target domain in the UDA setting. Also, letμ S = 1 N N n=1 δ(x s n ) denote the empirical source distribution, obtained from the observed training samples. We can define the empirical source distributionμ T = 1 M M m=1 δ(x t m ) similarly. Moreover, let f θ * denote the ideal model that minimizes the combined source and target risks e C (θ * ), i.e. θ * = arg min θ e C (θ) = arg min θ {e S +e T }. In the presence of enough labeled target domain data, this is the best joint model that can be learned. We rely on the following theorem [4]. Theorem 1 [29]: Under the assumptions described above for UDA, then for any d > d and ζ < √ 2, there exists a constant number N 0 depending on d such that for any ξ > 0 and min(N, M ) ≥ max(ξ −(d +2),1 ) with probability at least 1 − ξ for all f θ , the following holds: e T ≤e S + W (μ T ,μ S ) + e C (θ * )+ 2 log( 1 ξ )/ζ 1 N + 1 M .(6) For simplicity, Theorem 1 originally is proven in the binary classification setting and consider 0-1 binary loss function L(·) (thresholded binary softmax). We also limit our analysis to this setting but note that these restrictions can be loosen to be broader.The initial consequence of the above theorem might seem that minimizing the Wasserstein distance between the source and the target distributions can improve generalization error on the target domain because it will make the inequality in Eq. (6) tighter. But it is crucial to note that Wasserstein distance cannot be minimized independently from minimizing the source risk. 
Moreover, there is no guarantee that doing so, the learned model would be a good approximate of the joint optimal model f θ * which is important as the third term in the right hand side denotes in Eq. (6). We cannot even approximate e C (θ * ) in UDA framework as the there is no labeled data in the target domain. In fact, this theorem justifies why minimizing the Wasserstein distance is not sufficient, and we should minimize the source empirical risk simultaneously, and learn jointly on both domains to consider all terms in Theorem 1. Using Theorem 1, we demonstrate why our algorithm can learn models that generalize well on the target domain. We also want to highlight once more that, although we minimize SWD in our framework and our theoretical results are driven for the Wasserstein distance, it has been theoretically demonstrated that SWD is a good approximation for computing the Wasserstein distance [2]. Theorem 2: Consider we use the pseudo-labeled target dataset D PL = {x t i ,ŷ t i } M P L i=1 , which we are confident with threshold τ , in an optimization iteration in the algorithm 1. Then, the following inequality holds: e T ≤e S + W (μ S ,μ PL ) + e C (θ * ) + (1 − τ )+ 2 log( 1 ξ )/ζ 1 N + 1 M P L ,(7) where e C (θ * ) denote the expected risk of the optimally joint model f θ * on both the source domain and the confident pseudo-labeled target data points. Proof: since the pseudo-labeled data points are selected according to the threshold τ , if we select a pseudo-labeled data point randomly, then the probability of the pseudo-label to be false is equal to 1 − τ . We can define the difference between the error based on the true labels and the pseudolabel for a particular data point as follows: |L(f θ (x t i ), y t i ) − L(f θ (x t i ),ŷ t i )| = 0, if y t i =ŷ t i . 1, otherwise.(8) We can compute the expectation on the above error as: |e PL − e T | ≤ E |L(f θ (x t i ), y t i ) − L(f θ (x t i ),ŷ t i )| ≤ (1 − τ ).(9) Using Eq. (9) we can deduce: e S + e T = e S + e T + e PL − e PL ≤ e S + e PL + |e T − e PL | ≤ e S + e PL + (1 − τ ).(10) Note that since Eq. (10) is valid for all θ, if we consider the joint optimal parameter θ * in Eq. (10), we deduce: e C (θ * ) ≤ e C (θ) + (1 − τ ).(11) By considering Theorem 1, where the pseudo-labeled data points are the given target dataset, and then applying Eq. (11) on Eq.(6), Theorem 2 follows. Theorem 2 indicates that why our algorithm can potentially learn models that generalize well on the target domain. We can see that at any given iteration, we minimize the upperbound of the target error as given in (7). We minimize the source risk e S through the supervised loss. We minimize the Wasserstein distance by minimizing the SWD loss. The term e C (θ * ) is minimized because the pseudo-labeled data points by definition are selected such that the true labels can be predicted with high probability. Hence, the optimal model with parameter θ * can perform well both on the source domain and the pseudo-labeled data points. The term 1 − τ is also small because we only select the confident data points. If (crucial) at a given iteration, minimizing the upperbound in Eq. (7) reduces the target true risk, then the class-conditional overlap between the latent distributions of source and target domains increases. This is because the trained model performance has improved on both domains (the source risk e S is always minimized directly). 
As a result, in the next iteration, the number of samples with confident pseudo-labels increases which in turn makes the upperbound of Eq. (7) tighter. As a result, the constant term in the right hand side of Eq. (7) (in the second line) decreases, making generalization tighter. Hence our algorithm minimizes all the terms in Eq. (7), which would reduce the true risk on the target domain as more optimization iterations are performed. However, this result is conditioned on existence of confident pseudo-labels which means the domains must be related. VI. EXPERIMENTAL VALIDATION We evaluate our algorithm using standard benchmark UDA tasks and compare against several UDA methods. Datasets: We investigate the empirical performance of our proposed method on five commonly used benchmark datasets Fig. 2: The high-level system architecture, shown on the left, illustrates the data paths used during UDA training. On the right, t SNE visualizations demonstrate how the embedding space evolves during training for the S → U task. In the target domain, colored points are examples with assigned pseudo-labels, which increase in number with the confidence of the classifier. in UDA, namely: MNIST (M) [20], USPS (U) [21], Street View House Numbers, i.e., SVHN (S), CIFAR (CI), and STL (ST ). The first three datasets are 10 class (0-9) digit classification datasets. MNIST and USPS are collection of hand written digits whereas SVHN is a collection of real world RGB images of house numbers. STL and CIFAR contain RGB images that share 9 object classes: airplane, car, bird, cat, deer, dog, horse, ship, and truck. For the digit datasets, while six domain adaptation problems can be defined among these datasets, prior works often consider four of these six cases, as knowledge transfer from simple MNIST and USPS datasets to a more challenging SVHN domain does not seem to be tractable. Following the literature, we use 2000 randomly selected images from MNIST and 1800 images from USPS in our experiments for the case of U → M and S → M [24]. In the remaining cases, we used full datasets. All datasets have their images scaled to 32×32 pixels and the SVHN images are converted to grayscale as the encoder network is shared between the domains. CIFAR and STL maintain their RGB components. We report the target classification accuracy across the tasks. Pre-training: Our experiments involve a pre-training stage to initialize the encoder and the classifier networks solely using the source data. This is an essential step because the combined deep network can generate confident pseudolabels on the target domain only if initially trained on the related source domain. In other words, this initially learned network can be served as a naive model on the target domain. We then boost the performance on the target domain using our proposed algorithm, demonstrating that our algorithm is indeed effective for transferring knowledge. Doing so, we investigate a less-explored issue in the UDA literature. Different UDA approaches use considerably different networks, both in terms of complexity, e.g. number of layers and convolution filters, and the structure, e.g. using an autoencoder. Consequently, it is ambiguous whether performance of a particular UDA algorithm is due to successful knowledge transfer from the source domain or just a good baseline network that performs well on the target domain even without considerable knowledge transfer from the source domain. 
To highlight that our algorithm can indeed transfer knowledge, we use three different network architectures: DRCN [11], VGG [39], and a small ResNet [17]. We then show that our algorithm can effectively boost base-line performance (statistically significant) regardless of the underlying network. In most of the domain adaptation tasks, we demonstrate that this boost indeed stems from transferring knowledge from the source domain. In our experiments we used Adam optimizer [19] and set the pseudo-labeling threshold to tr = 0.99. Data Augmentation: Following the literature, we use data augmentation to create additional training data by applying reasonable transformations to input data in an effort to improve generalization [38]. Confirming the reported result in [11], we also found that geometric transformations and noise, applied to appropriate inputs, greatly improves performance and transferability of the source model to the target data. Data augmentation can help to reduce the domain shift between the two domains. The augmentations in this work are limited to translation, rotation, skew, zoom, Gaussian noise, Binomial noise, and inverted pixels. A. Results Figure 2 demonstrates how our algorithm successfully learns an embedding with class-conditional alignment of distributions of both domains. This figure presents the twodimensional t SNE visualization of the source and target domain data points in the shared embedding space for the S → U task. The horizontal axis demonstrates the optimization iterations where each cell presents data visualization after a particular optimization iteration is performed. The top sub-figures visualize the source data points, where each color represents a particular class. The bottom sub-figures visualize the target data points, where the colored data points represent the pseudo-labeled data points at each iteration and the black points represent the rest of the target domain data points. We can see that, due to pre-training initialization, the embedding space is discriminative for the source domain in the beginning, but the target distribution differs from the source distributions. However, the classifier is confident about a portion of target data points. As more optimiza- [42]. tion iterations are performed, since the network becomes a better classifier for the target domain, the number of the target pseudo-labeled data points increase, improving our approximate of Eq. 5. As a result, the discrepancy between the two distributions progressively decreases. Over time, our algorithm learns a shared embedding which is discriminative for both domains, making pseudo-labels a good prediction for the original labels, bottom, right-most sub-figure. This result empirically validates our theoretical justification on applicability of our algorithm to address UDA. We also compare our results against several recent UDA algorithms in Table I. In particular, we compare against the recent adversarial learning algorithms: Generate to Adapt (GtA) [36], CoGAN [22], ADDA [42], CyCADA [18], and I2I-Adapt [25]. We also include FADA [24], which is originally a few-shot learning technique. For FADA, we list the reported one-shot accuracy, which is very close to the UDA setting (but it is arguably a simpler problem). Additionally, we have included results for RevGrad [9], DRCN [11], AUDA [35], OPDA [4], MML [37]. The latter methods are similar to our method because these methods learn an embedding space to couple the domains. 
OPDA and MML are more similar as they match distributions explicitly in the learned embedding. Finally, we have included the performance of fully-supervised (FS) learning on the target domain as an upper bound for UDA. In our own results, we include the baseline target performance that we obtain by naively employing a DRCN network, as well as the target performance from VGG and ResNet networks that are learned solely on the source domain. We notice that in Table I, our baseline performance is better than some of the UDA algorithms for some tasks. This is a crucial observation, as it demonstrates that, in some cases, a trained deep network with good data augmentation can extract domain-invariant features that make domain adaptation feasible even without any further transfer learning procedure. The last row demonstrates that our method is effective in transferring knowledge to boost the baseline performance. In other words, Table I serves as an ablation study to demonstrate that the effectiveness of our algorithm stems from successful cross-domain knowledge transfer. We can see that our algorithm leads to near- or state-of-the-art performance across the tasks. Additionally, an important observation is that our method significantly outperforms the methods that match distributions directly and is competitive with methods that use adversarial learning. This can be explained as the result of matching distributions class-conditionally, and suggests that our second contribution can potentially boost the performance of these methods. Finally, we note that our proposed method provides a statistically significant boost in all but two of the cases (shown in gray in Table I). VII. CONCLUSIONS AND DISCUSSION We developed a new UDA algorithm based on learning a domain-invariant embedding space. We map data points from two related domains to the embedding space such that the discrepancy between the transformed distributions is minimized. We used the sliced Wasserstein distance metric as a measure to match the distributions in the embedding space. As a result, our method is computationally more efficient. Additionally, we matched distributions class-conditionally by assigning pseudo-labels to the target domain data. As a result, our method is more robust and outperforms prior UDA methods that match distributions directly. We provided theoretical justification for the effectiveness of our approach and experimental validation to demonstrate that our method is competitive with recent state-of-the-art UDA methods.
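Since the conclusions above name the sliced Wasserstein distance and class-conditional matching as the core ingredients, a minimal numpy sketch of that combination follows (a Monte-Carlo approximation with random projections and resampled equal-size point clouds; variable names and defaults are assumptions, and this illustrates the general technique rather than the authors' code):

import numpy as np

def sliced_wasserstein(x, y, n_projections=50, rng=None):
    # Monte-Carlo sliced 2-Wasserstein distance between two point clouds in R^d.
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    dirs = rng.normal(size=(n_projections, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # random unit directions
    m = min(len(x), len(y))                                # equalize sample counts (a simplification)
    xs = x[rng.choice(len(x), m, replace=False)] @ dirs.T
    ys = y[rng.choice(len(y), m, replace=False)] @ dirs.T
    xs.sort(axis=0)                                        # 1-D optimal transport reduces to sorting
    ys.sort(axis=0)
    return float(np.mean((xs - ys) ** 2))

def class_conditional_swd(src_z, src_y, tgt_z, tgt_pseudo_y, n_classes):
    # Average the per-class distances between source embeddings and the
    # confidently pseudo-labeled target embeddings.
    dists = [sliced_wasserstein(src_z[src_y == c], tgt_z[tgt_pseudo_y == c])
             for c in range(n_classes)
             if (src_y == c).any() and (tgt_pseudo_y == c).any()]
    return float(np.mean(dists)) if dists else 0.0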
4,538
1907.02277
2954514064
Discovering communities in complex networks means grouping nodes similar to each other, to uncover latent information about them. There are hundreds of different algorithms to solve the community detection task, each with its own understanding and definition of what a "community" is. Dozens of review works attempt to order such a diverse landscape -- classifying community discovery algorithms by the process they employ to detect communities, by their explicitly stated definition of community, or by their performance on a standardized task. In this paper, we classify community discovery algorithms according to a fourth criterion: the similarity of their results. We create an Algorithm Similarity Network (ASN), whose nodes are the community detection approaches, connected if they return similar groupings. We then perform community detection on this network, grouping algorithms that consistently return the same partitions or overlapping coverage over a span of more than one thousand synthetic and real world networks. This paper is an attempt to create a similarity-based classification of community detection algorithms based on empirical data. It improves over the state of the art by comparing more than seventy approaches, discovering that the ASN contains well-separated groups, making it a sensible tool for practitioners, aiding their choice of algorithms fitting their analytic needs.
The first -- most popular -- category includes works classifying algorithms by the techniques they employ to divide the graph into groups of nodes, i.e. by their process. Examples in this category are @cite_21 , @cite_24 , @cite_33 , @cite_9 , @cite_8 , @cite_26 ; @cite_12 -- focusing on multilayer networks; and @cite_10 -- whose attention narrows down to genetic algorithms. Here, we are agnostic about how an algorithm works, as we are focused on figuring out which algorithms return similar partitions to which others. This is influenced by how they work, but even algorithms based on the philosophy of modularity maximization might end up in different categories.
{ "abstract": [ "With the rapid development of information technologies, various big graphs are prevalent in many real applications (e.g., social media and knowledge bases). An important component of these graphs is the network community. Essentially, a community is a group of vertices which are densely connected internally. Community retrieval can be used in many real applications, such as event organization, friend recommendation, and so on. Consequently, how to efficiently find high-quality communities from big graphs is an important research topic in the era of big data. Recently a large group of research works, called community search, have been proposed. They aim to provide efficient solutions for searching high-quality communities from large networks in real-time. Nevertheless, these works focus on different types of graphs and formulate communities in different manners, and thus it is desirable to have a comprehensive review of these works. In this survey, we conduct a thorough review of existing community search works. Moreover, we analyze and compare the quality of communities under their models, and the performance of different solutions. Furthermore, we point out new research directions. This survey does not only help researchers to have a better understanding of existing community search solutions, but also provides practitioners a better judgment on choosing the proper solutions.", "The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.", "Abstract Community detection in networks is one of the most popular topics of modern network science. Communities, or clusters, are usually groups of vertices having higher probability of being connected to each other than to members of other groups, though other patterns are possible. Identifying communities is an ill-defined problem. There are no universal protocols on the fundamental ingredients, like the definition of community itself, nor on other crucial issues, like the validation of algorithms and the comparison of their performances. This has generated a number of confusions and misconceptions, which undermine the progress in the field. We offer a guided tour through the main aspects of the problem. 
We also point out strengths and weaknesses of popular methods, and give directions to their use.", "", "There has been considerable recent interest in algorithms for finding communities in networks— groups of vertices within which connections are dense, but between which connections are sparser. Here we review the progress that has been made towards this end. We begin by describing some traditional methods of community detection, such as spectral bisection, the Kernighan-Lin algorithm and hierarchical clustering based on similarity measures. None of these methods, however, is ideal for the types of real-world network data with which current research is concerned, such as Internet and web data and biological and social networks. We describe a number of more recent algorithms that appear to work well with these data, including algorithms based on edge betweenness scores, on counts of short loops in networks and on voltage differences in resistor networks.", "We survey some of the concepts, methods, and applications of community detection, which has become an increasingly important area of network science. To help ease newcomers into the field, we provide a guide to available methodology and open problems, and discuss why scientists from diverse backgrounds are interested in these problems. As a running theme, we emphasize the connections of community detection to problems in statistical physics and computational optimization.", "Uncovering community structures of a complex network can help us to understand how the network functions. Over the past few decades, network community detection has attracted growing research interest from many fields. Many community detection methods have been developed. Network community structure detection can be modelled as optimisation problems. Due to their inherent complexity, these problems often cannot be well solved by traditional optimisation methods. For this reason, evolutionary algorithms have been adopted as a major tool for dealing with community detection problems. This paper presents a survey on evolutionary algorithms for network community detection. The evolutionary algorithms in this survey cover both single objective and multiobjective optimisations. The network models involve weighted unweighted, signed unsigned, overlapping non-overlapping and static dynamic ones.", "" ], "cite_N": [ "@cite_26", "@cite_33", "@cite_8", "@cite_9", "@cite_21", "@cite_24", "@cite_10", "@cite_12" ], "mid": [ "2940854948", "2127048411", "2502979434", "", "2125050594", "1942910215", "2479340554", "" ] }
Discovering Communities of Community Discovery
In this paper, we provide a bottom-up, data-driven categorization of community detection algorithms. Community detection in complex networks is the task of finding groups of nodes that are closely related to each other. Doing so usually unveils new knowledge about how nodes connect, helping us predict new links or some latent node characteristic. Community discovery is probably the most prominent and studied problem in network science. This popularity implies that the number of different networks to which community discovery can be applied is vast, and so is the number of its potential analytic objectives. As a result, what a community is in a complex network can take as many different interpretations as the number of people working in the field. Review works on the topic abound and often their reference lists contain hundreds of citations [14]. They usually attempt a classification, grouping community detection algorithms into a manageable set of macro categories. Most of them work towards one of three objectives. They classify community detection algorithms: by process, meaning they explain the inner workings of an algorithm and let the reader decide which method corresponds to their own definition of community -e.g. [14]; by definition, meaning they collect all community discovery definitions ever proposed and create an ontology of them -e.g. [6]; by performance, meaning that they put the algorithms to a standardized task and rank them according to how well they perform on that task -e.g. [18]. This paper also attempts to classify community discovery algorithms, but uses none of these approaches. Instead, we perform a categorization by similarity, i.e., which algorithms, at a practical level, return almost the same communities. As in the process case, we expect the inner workings of an algorithm to make most of the difference, but we do not focus on them. As in the definition case, we aim to build an ontology, but ours is bottom-up and data-driven rather than being imposed top-down. As in the performance case, we define a set of standardized tasks, but we are not interested in which method maximizes a quality function. Here, we are not interested in what works best but in what works similarly. This is useful for practitioners because they might have identified an algorithm that finds the communities they are interested in, with some downsides that make its application impossible (e.g. long running times). With the map provided in this paper, a researcher can identify the set of algorithms outputting almost identical results to their favorite one, but not affected by its specific issues. Maybe they perform slightly worse, but do so at a higher time efficiency. We do so by collecting implementations of community detection algorithms and extracting communities on synthetic benchmarks and real world networks. We then calculate the pairwise similarity of the output groupings, using overlapping mutual information [21], [26] -we need the overlapping variant, because it allows us to compare algorithms which allow communities to share nodes. For each network in which algorithms a_1 and a_2 rank in the top five among each other's most similar outputs, we increase their similarity count by one. Once we have an overall measure of how many times two algorithms provided similar communities, we can reconstruct an affinity graph, which we call the Algorithm Similarity Network (ASN). In ASN, each node is a community discovery method. We weigh each link according to the similarity count, as explained above.
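A minimal sketch of this counting scheme follows (the layout, variable names and top-five bookkeeping are assumptions; scikit-learn's NMI with max-entropy normalization stands in for the overlapping oNMI MAX measure discussed later, and only handles disjoint label vectors):

from itertools import combinations
from collections import Counter
from sklearn.metrics import normalized_mutual_info_score

def similarity(part_a, part_b):
    # Stand-in similarity for disjoint node-label vectors; the paper uses an
    # overlapping NMI (oNMI MAX) so that overlapping coverages compare as well.
    return normalized_mutual_info_score(part_a, part_b, average_method="max")

def update_edge_counts(results, edge_counts, k=5):
    # results: {algorithm_name: node-label vector} for ONE benchmark network.
    # Increments edge_counts[(a, b)] when a and b are in each other's top-k
    # most similar outputs on this network.
    algos = sorted(results)
    sims = {frozenset((a, b)): similarity(results[a], results[b])
            for a, b in combinations(algos, 2)}
    tops = {}
    for a in algos:
        others = sorted((b for b in algos if b != a),
                        key=lambda b: sims[frozenset((a, b))], reverse=True)
        tops[a] = set(others[:k])
    for a, b in combinations(algos, 2):
        if b in tops[a] and a in tops[b]:   # mutual top-k membership
            edge_counts[(a, b)] += 1

edge_counts = Counter()
# For each benchmark network, run every algorithm and then call, e.g.:
# update_edge_counts({"infomap": labels1, "louvain": labels2, ...}, edge_counts)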
We only keep links if this count is significantly different from the null expectation. Once we establish that our reconstruction of ASN is resilient to noise and to our choices, we analyze it. Specifically, we want to find groups of algorithms that work similarly: we discover communities of community discovery algorithms. There are other approaches proposing a data-driven classification of community discovery algorithms [10,11,16]. This paper improves over the state of the art by: exploring more algorithms (73) over more benchmarks (960 synthetic and 819 real-world networks) than other empirical tests; exploring more algorithm types -including overlapping and hierarchical solutions -; looking at the actual similarity of the partitions rather than the distribution of community sizes. Note that we were only able to collect 73 out of the hundreds of community discovery algorithms, because we focused on the papers which provided an easy way to recover their implementation. This paper should not be considered finished as is, but rather as a work in progress. Many prominent algorithms were excluded as it was not possible to find a working implementation -sometimes because they are simply too old. Authors of excluded methods should be assured that we will include their algorithm in ASN if they can contact us at mcos@itu.dk. The most updated version of ASN will then be not in this paper, but available at http://www.michelecoscia.com/?page_id=1640. METHOD The aim of this paper is to build an Algorithm Similarity Network (ASN), whose elements are the similarities between the outputs of community discovery algorithms. Evaluating result similarity is far from trivial, as we need to: (i) test enough scenarios to get a robust similarity measure, and (ii) be able to compare disjoint partitions to overlapping coverages -where nodes can be part of multiple communities. In this section we outline our methodology to build ASN, in three phases: (i) creating benchmark networks; (ii) evaluating the pairwise similarity of results on the benchmark networks; and (iii) extracting ASN's backbone. A note about generating the results for each algorithm: many algorithms require parameters and do not have an explicit test for choosing the optimal ones. In those cases, we operate a grid search, selecting the combination yielding the maximum modularity. This is simpler in the case of algorithms returning disjoint partitions. For algorithms providing an overlapping coverage, there are multiple conflicting definitions of overlapping modularity. For this paper, we choose the one presented in [23]. Benchmarks We have two distinct sets of benchmarks on which to test our community discovery algorithms: synthetic networks and real world networks. Synthetic Networks. In evaluating community discovery algorithms, most researchers agree on using the LFR benchmark generator [22] for synthetic testing. The LFR benchmark creates networks respecting most of the properties of interest of many real world networks. We follow the literature and use the LFR benchmark. We make this choice not without criticism, which we spell out in Section 4.2. To generate an LFR benchmark we need to specify several parameters. Here we focus on two in particular: the number of nodes n and the mixing parameter µ -which is the fraction of edges that span across communities, making the task of finding communities harder. We create a grid, generating networks with n = {50, 60, 70, 80, 90, 100} and µ = {.07, .09, .11, .13, .15, .17, .19, .21}.
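As a rough illustration of the disjoint half of this grid, here is a sketch using networkx's LFR generator (the degree and community-size exponents tau1/tau2 and the maximum-degree rule are assumed values not stated in the text, the remaining parameters are specified next, and the overlapping benchmarks would require the original LFR implementation):

import networkx as nx

def lfr_grid(n_values=(50, 60, 70, 80, 90, 100),
             mu_values=(0.07, 0.09, 0.11, 0.13, 0.15, 0.17, 0.19, 0.21),
             seeds=range(10)):
    # Disjoint half of the benchmark grid; tau1, tau2 and max_degree are assumptions.
    benchmarks = []
    for n in n_values:
        for mu in mu_values:
            for seed in seeds:
                try:
                    g = nx.LFR_benchmark_graph(
                        n, tau1=2.5, tau2=1.5, mu=mu,
                        average_degree=6, max_degree=n // 5, seed=seed)
                    benchmarks.append(((n, mu, seed), g))
                except nx.ExceededMaxIterations:
                    pass  # generation can fail for some parameter/seed combinations
    return benchmarks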
The average degree (k) is set to 6 for all networks, while the maximum degree (K) is a function of n. For each combination of parameters we generate ten independent benchmarks with disjoint communities and ten benchmarks with overlapping communities. In the overlapping case, the number of nodes overlapping between communities (o_n), as well as the number of communities to which they belong (o_m), are also a function of n. We generate 2 (overlapping, disjoint) × 10 (independent benchmarks) × 6 (possible numbers of nodes) × 8 (distinct µ values) = 960 benchmarks. Due to the high number of networks and to the high time complexity of some of the methods, we are unable to have larger benchmarks. The number of benchmarks is necessary to guarantee statistical power to our similarity measure. Real World Networks. The LFR benchmarks have a single definition of community in mind. Therefore the tests are not independent, and if an algorithm follows a different community definition, it might fail in unpredictable ways, which makes our edge creation process prone to noise. To reduce this issue, we collect a number of different real world networks. Communities in real world networks might originate from a vast and variegated set of possible processes. We assembled 819 real world networks, found in the Colorado Index of Complex Networks. We selected a high number of small networks to conform to our needs of statistical significance, as described in the previous subsection. Evaluating Similarity Once we run two community discovery algorithms on a network, we obtain two divisions of nodes into communities. A standard way to estimate how similar these two groupings are is to use normalized mutual information [40] (NMI). Mutual information quantifies the information obtained about one random variable through observing the other. The normalized variant, rather than returning the number of bits, is adjusted to take values between 0 (no mutual information) and 1 (perfect correlation). The standard version of NMI is defined only for disjoint partitions, where nodes can belong to only one community. However, many of the algorithms we test are overlapping, placing nodes in multiple communities. There are several ways to extend NMI to the overlapping case (oNMI), as described in [21] and [26]. We use the three definitions considered in these two papers as our alternative similarity measures. These versions reduce to NMI when their input is two disjoint partitions. This allows us to compare disjoint and overlapping partitions to each other. We label the three variants as MAX, LFK, and SUM, following the original papers. Our default choice is MAX, which normalizes the mutual information between the overlapping results a_1 and a_2 with the maximum of the entropies of a_1 and a_2. Differently from LFK, MAX is corrected for chance: unrelated vectors will have zero oNMI MAX. How do we aggregate the similarity results across our 1,779 benchmarks? We have three options: (i) averaging them, (ii) counting the number of times two algorithms had an oNMI higher than a given threshold, and (iii) counting the number of times two algorithms were in each other's lists of most similar algorithms in a given benchmark. We choose option (iii). Option (i) has both theoretical and practical issues. It is not immediately clear what the semantics of an average normalized mutual information would be. Moreover, we want to emphasize the scenarios in which two algorithms are similar more than those in which they are dissimilar.
There is only one way in which two results can be similar, while there are (almost) infinite ways for two results to be dissimilar. Thus similarity contains more information than dissimilarity. If we take the simple average, dissimilarity is going to drive the results. In option (ii), NMIs will have different expected values for different networks. If we choose a single threshold for all benchmarks, we will overweight some benchmarks over others. This is fixed by option (iii), which counts the cases in which both algorithms agree on the community structure in the network. Note that both algorithms have to agree, thus this method still allows algorithms to be isolated if they are dissimilar to everything else. Suppose a_1 is a very peculiar algorithm. Regardless of its results, it will find a_2 as its most similar companion, even if the results are different. Since the results are different, a_2 will not have a_1 as one of its most similar companions. Thus there will be no edge between a_1 and a_2. We will see in our robustness checks that the three options return comparable results, with option (iii) having the fewest theoretical and practical concerns. Building the Network The result from the previous section is a weighted network, where each edge weight is the number of benchmarks in which two algorithms were in each other's most similar results. Any edge generation choice will generate a certain amount of noise. Algorithms with average results might end up as most similar to other algorithms in a benchmark just by pure chance. This means that there is uncertainty in our estimation of the edge weights -or whether some edges should be present at all. To alleviate the problem, we use the noise corrected (NC) backbone approach [7]. The reason to pick this approach over the alternatives lies in its design. The NC backboning procedure removes noise from edge weight estimates, under specific assumptions about the edge generation process, which fit the way we build our network. ASN is a network where edge weights are counts, broadly distributed -as we show in the Analysis section -and are generated with a hypergeometric "extraction without replacement" approach, which are all assumptions of the NC backboning approach. For this reason, we apply the NC backbone to our ASN. (Figure 1 legend: node size: sum of total edge weights; node color: community affiliation -multicolored nodes belong to multiple communities; edge width: number of times the two algorithms returned similar partitions, only including links exceeding the null expectation; link color: significance, from dark (high) to light (low, but still significant with p < .00001).) The NC backbone requires a parameter δ, which controls for the statistical significance of the edges we include in the resulting network. We set the parameter to the value required to have the minimum possible number of edges, while at the same time ensuring that each node has at least one connection. In our case, we set δ = 19.5, meaning that we only include edges with that particular t-score (or higher), which is roughly equivalent to saying that p < .00001. Again, note that we are not forcing the ASN to be connected in a single component. Under these constraints, ASN could be just a set of small components, each composed of a pair of connected algorithms. ANALYSIS 4.1 The Algorithm Similarity Network We start by taking a look at the resulting ASN network.
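Before moving to the analysis, a minimal sketch of how the aggregated counts could be assembled into ASN and filtered at a significance threshold (the scoring function is left as a placeholder: the actual noise-corrected backbone of [7] computes a proper null model, which is not reproduced here):

import networkx as nx

def build_asn(edge_counts, score, delta=19.5):
    # edge_counts: {(algo_a, algo_b): number of benchmarks in which the two
    #               algorithms were in each other's most similar results}.
    # score: a function returning a significance score for an observed count;
    #        a stand-in for the noise-corrected backbone test of [7].
    asn = nx.Graph()
    for (a, b), weight in edge_counts.items():
        if score(a, b, weight) >= delta:   # keep only edges exceeding the null expectation
            asn.add_edge(a, b, weight=weight)
    return asn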
We show a depiction of the network in Figure 1 -calculated using the oNMI MAX similarity function and setting δ = 19.5 for the noise corrected backboning. The network contains all the results, both from synthetic and from real-world networks. The first remarkable thing about ASN is that it does have a community structure. The network is sparse -by construction, this is not a result -: only 9% of possible edges are in the network. However, and this is surprising, clustering is high -transitivity is 0.47, or 47% of connected node triads have all three edges necessary to close the triangle. For these reasons, we can run a community discovery algorithm on ASN. We choose to run the overlapping Infomap algorithm [38]. The algorithm attempts to compress the information about random walks on the network using community prefix codes: good communities compress the walks better because the random walker is "trapped" inside them. The quality measure is the codelength necessary to encode random walks. The codelength gives us a corroboration of the presence of communities. Without communities, we need ∼ 8.52 bits to encode the random walks. With communities, the codelength reduces to ∼ 4.48. Figure 2 shows the complement of the cumulative distribution (CCDF) of the edge weights of ASN before applying the backboning. We can see that, while the distribution is not a power law -note the log-log scale -, it nevertheless spans multiple orders of magnitude, with a clearly skewed shape. In fact, 50% of the edges have a weight lower than 10 -in only 10 cases out of the possible 960 + 819 were the two algorithms in each other's top five most similar results -, while the three strongest edges (.1% of the network) have weights of 1,453, 1,519, and 1,540, respectively. This means that the distribution could have been a power law, had we performed enough tests. In any case, such a broad distribution justifies our choice of backboning method, which is specifically designed to handle cases with large variance and a lack of well-defined averages. Robustness In developing our framework, we made choices that have repercussions on ASN's shape. How much do these choices impact the final result? We are interested in estimating the amount of change in ASN's topology, specifically whether it is stable: different ASNs calculated with different procedures and parameters should be similar. The first test aims at quantifying the amount of change introduced by using a different oNMI measure. Recall that our official ASN uses the MAX variant. There are two alternatives: LFK and SUM. Figure 3 shows how the ASNs calculated using them correlate with the standard MAX version. It is immediately obvious from the plots that the choice of the specific measure of oNMI has no effect on the shape of ASN. We could have picked any variant and we would have likely observed similar results. In fact, the correlations between the methods are as follows: MAX vs LFK = 0.94; MAX vs SUM = 0.99; LFK vs SUM = 0.97. The second test focuses on the synthetic LFR benchmarks versus the 819 real world networks. Real world networks do not necessarily look like LFR benchmarks -or each other. On the other hand, all LFR benchmarks are similar to each other. Does that create different ASNs? We repeat our correlation test (Figure 4). (Figure 4 caption: correlation between the ASN weights using the LFR benchmarks (x-axis) and the real world networks (y-axis); same legend as Figure 3, for different oNMI variants: (left) MAX, (middle) LFK, (right) SUM.)
As in the previous cases, we observe a significant positive correlation for all tests -albeit lower than before: LFR vs Real (MAX) = 0.55; LFR vs Real (LFK) = 0.51; LFR vs Real (SUM) = 0.51. All these correlations are still statistically significant (p ∼ 0). However, we concede that there is a difference between real world networks and LFR benchmarks. It is worthwhile investigating this difference in future works, as a possible argument against the blind acceptance of LFR as the sole benchmark for testing community discovery algorithms. Third, our edge weights are a count of benchmarks in which two algorithms were in each other's most similar lists. Alternative edge creation procedures might be to take the average oNMI, or to count the similarity between two algorithms only if it exceeds a fixed oNMI threshold. Section 3.2 provides our theoretical reasons. Here we show that, at a practical level, our results are not gravely affected by this choice. We do so by calculating the NMI between ASN's communities obtained with all three techniques. The ASN built by averaging the similarity scores has a 0.63 NMI with our option, while the one obtained by a fixed threshold has a 0.46 NMI. On the basis of these similarities, we conclude that there is an underlying ASN structure, and we think our choices allow us to capture it best. Communities In Figure 1, we show a partition of ASN into communities. A seasoned researcher in the community discovery field would be able to give meaningful labels to those communities. Here, we objectively quantify this meaningfulness along a few dimensions of the many possible. We start by considering a few attributes of community detection algorithms, whether they: return overlapping partitions (in which communities can share nodes), are based on some centrality measure (be it random walks or shortest paths) or spreading process (it will become apparent why we lump these two categories), are based on modularity maximization [29], or are based on a neighborhood similarity approach (e.g. they cluster the adjacency matrix). In Table 1 we calculate the fraction of nodes in a community in each of those categories. (Table 1 caption: features of the communities of ASN. n: # of nodes. Over: % overlapping algorithms. Spr: % algorithms based either on centrality measures (including edge betweenness and random walks) or some sort of spreading process (e.g. label percolation). Q: % algorithms based on modularity maximization. NSim: % algorithms based on neighborhood similarity. Algorithms can be part of multiple/no classes, so the rows do not sum to one.) Note that we count overlapping nodes in all of their communities, so some nodes contribute to up to three communities. As we expect, some communities have a stronger presence of a single category. The largest community (in blue) groups centrality-based algorithms (Infomap [38], Edge betweenness [27], Walktrap [34], etc.) with the ones based on spreading processes (label percolation [36], SLPA [5], Ganxis [42], etc.). Some of these can be overlapping, but the majority of nodes in the community is part of this "spreading" category. This community shows a strong relationship between random walks, centrality-based approaches, and approaches founded on spreading processes. The second largest community (in red) is mostly populated by overlapping approaches (more than 90% of its nodes are overlapping) -BigClam [43], k-Clique [31], and DEMON [8] are some examples.
The third largest community (in purple) is mostly composed of algorithms driven by neighbor similarity (more than 70% of them) rather than the classical "internal density" definition (the two are not necessarily the same). The fourth largest community (in green) exclusively groups modularity maximization algorithms. We now calculate descriptive statistics of the groupings each method returns and then we average them across all the test networks. To facilitate interpretation, we also aggregate at the level of the ASN community, as we show in Figure 1. Table 2 reports those statistics. (Table 2 caption: the averages of various community descriptive statistics per algorithm group. |C|: average number of communities. Avg Size: average number of nodes in the communities. d: average community density. Q: average modularity -when the algorithm is overlapping we use the overlapping modularity instead of the regular definition. c: average conductance -from [24]. Avg Ncut: average normalized cut -from [24].) We also calculate the standard errors, which prove that these differences are significant, but we omit them to reduce clutter. The results from Table 2 can be combined with the knowledge we gathered from Table 1. For instance, consider community 4. We know from Table 1 that this hosts peculiar algorithms working on "neighbor similarity" rather than internal density. This might seem like a small difference, but Table 2 shows its significant repercussions: the average modularity we get from these algorithms is practically zero. Moreover, the algorithms tend to return more -and therefore smaller -communities, which tend to be denser but also to have higher conductance. This is another warning sign against uncritically accepting modularity as the de facto quality measure to look at when evaluating the performance of a community discovery algorithm. It works perfectly for the methods based on the same community definition, but there are other -different and valid -community definitions. Other interesting facts include the almost identical average modularity between community 2 -whose algorithms are explicitly maximizing modularity -and community 3 -which is based on spreading processes. Community 1 has higher internal density, but also higher conductance and normalized cut than average, showing how overlapping approaches can find unusually dense communities, sacrificing the requirement of having few outgoing connections. The categories we discussed are necessarily broad and might group algorithms that have significant differences in other aspects. For instance, there are hundreds of different ways to make your algorithm return overlapping communities -communities sharing nodes. Our approach allows us to focus on such methods to find differences inside the algorithm communities. In practice, we can generate different versions of ASN, by only considering the similarities between the algorithms in the "overlapping" category. Note that this is different from simply inducing the graph from the original ASN, selecting only the overlapping algorithms and all the edges between them. Here we select the nodes and all their similarities and then we apply the backboning, with a different -higher -δ threshold. In this way, we can deploy a more stringent similarity test that is able to distinguish between subcategories of the main category. Figure 5 depicts the result. Infomap divides the overlapping ASN into three communities, proving the point that there are substantial sub-classes in the overlapping coverage category.
There are strong arguments in favor of these classes being meaningful, although a full discussion requires more space and data. For instance, consider the bottom-right community of the network (in blue). It contains all the methods which apply the same strategy to find overlapping communities: rather than clustering nodes, they cluster edges. This is true for Linecomms [12], HLC [1], Ganet+ [33], and OLC [3]. The remaining methods do not cluster links directly, but ASN suggests that their strategies might be comparable. We can conclude that ASN provides a way to narrow down to subcategories of community discovery and find relevant information to motivate one's choice of an algorithm. Ground Truth in Synthetic Networks The version of ASN based on synthetic LFR benchmarks allows an additional analysis. The LFR benchmark generates a network with a known ground truth: it establishes edges according to a planted partition, which it also provides as an output. Thus, we can add a node to the network: the ground truth. We calculate the similarity of the ground truth division into communities with the one provided by each algorithm. We can now evaluate how the algorithms performed, by looking at the edge weights between the ground truth node and the algorithm itself. In the MAX measure, this means the number of times the algorithm was in the top similarity with the ground truth and vice versa. Table 3 shows the ten best algorithms in our sample. (Table 3: the ten nodes with the highest MAX edge weight with the ground truth node in ASN -using exclusively data from the LFR synthetic networks. Rank, algorithm, oNMI MAX weight: 1, linecomms, 165; 2, oslom, 73; 3, infomap-overlap, 64; 4, savi, 62; 5, labelperc, 57; 6, rmcl, 54; 7, edgebetween, 41; 7, leadeig, 41; 7, vbmod, 41; 10, gce, 32.) We do not show the worst algorithms, because MAX is a strict test, and thus there is a long list of (21) algorithms with weight equal to zero, which is not informative. The table shows that the best performing algorithms are Linecomms, OSLOM, and the overlapping version of Infomap. Should we conclude that these are the best community discovery algorithms in the literature? The answer is yes only if we limit ourselves to the task of finding the same type of communities that the LFR benchmark plants in its output network. Crucially, the ten algorithms from Table 3 are not scattered randomly in the network: they tend to be in the same area. Specifically, we know that the ground truth node is located deep inside the blue community, as most of the top ten algorithms from Table 3 are classified in that group. We can quantify this objectively by calculating the average path length between the ten nodes, which is equal to 2.51 -on average you need to cross two and a half edges to go from any of these ten nodes to any other of the ten. This is shorter than the overall average path length in ASN, which is 3.25. We test statistical significance by calculating the expected average path length when selecting ten random nodes in the network. Figure 6 shows the distribution of their distances. Only seven out of a thousand attempts generated a smaller or equal average path length. We conclude this section with a word of caution when using benchmarks to establish the quality of a community discovery algorithm, which is routinely done in review works and when proposing a new approach. If the benchmark does not fit the desired definition of community, it might not return a fair evaluation.
If one is interested in communities based on neighborhood similarity -the green community in Figure 1 -the LFR benchmark is not the correct one to use. Moreover, when deciding to test a new method against the state of the art, one must choose the algorithms in the literature fitting the same community definition, or the benchmark test would be pointless. This warning goes the other way: assuming that all valid communities look like the ones generated by the LFR benchmark would impoverish a field that -as the strong clusters in ASN show -does indeed have significantly different perspectives of what a community is. CONCLUSION In this paper we contributed to the literature on reviewing community discovery algorithms. Rather than classify them by their process, community definition, or performance, here we classify them by their similarity. How similar are the groupings they return? We performed the most comprehensive analysis of community discovery algorithms to date, including 73 algorithms tested over more than a thousand synthetic and real world networks. We were able to reconstruct an Algorithm Similarity Network -ASN -connecting algorithms to each other based on their output similarity. ASN confirms the intuition about the community discovery literature: there are indeed different valid definitions of community, as the strong clustering in the network shows. The clusters are meaningful as they reflect real differences among the algorithms' features. ASN allows us to perform multi-level analysis: by focusing on a specific category, we can apply our framework to discover meaningful sub-categories. Finally, ASN 's topology highlights how projecting the community detection problem on a single definition of community -e.g. "a group of nodes densely connected to each other and sparsely connected with the rest of the network" -does the entire sub-field a disservice, by trivializing a much more diverse set of valid community definitions. By its very nature, this paper will always be a work in progress. We do not claim that there are only 73 algorithms in the community discovery literature that are worth investigating. We only gathered what we could. Future work based on this paper can and will include whatever additions authors in the field feel should be consideredand they are encouraged to help us by sending suggestions and/or working implementations to mcos@itu.dk. The most up to date version of ASN will be available at http://www.michelecoscia.com/ ?page_id=1640. Moreover, for simplicity, here we focused only on algorithms that work on the simplest graph representations. Several algorithms specialize in directed, multilayer, bipartite, and/or metadata-rich graphs. These will be included as we refine the ASN building procedure in the future.
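The significance test on the average path length among the top-ranked algorithms, described in the ground-truth analysis above, can be sketched as follows (a networkx-based illustration under assumed node names and number of random draws; it presumes the relevant component of ASN is connected):

import random
import networkx as nx

def path_length_significance(asn, top_nodes, n_draws=1000, seed=0):
    # Average shortest-path length among `top_nodes`, compared against random
    # sets of the same size drawn from ASN.
    rng = random.Random(seed)
    lengths = dict(nx.all_pairs_shortest_path_length(asn))

    def avg_len(nodes):
        pairs = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]]
        return sum(lengths[u][v] for u, v in pairs) / len(pairs)

    observed = avg_len(list(top_nodes))
    null = [avg_len(rng.sample(list(asn.nodes()), len(top_nodes)))
            for _ in range(n_draws)]
    p_value = sum(1 for x in null if x <= observed) / n_draws
    return observed, p_value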
4,964
1907.02277
2954514064
Discovering communities in complex networks means grouping nodes similar to each other, to uncover latent information about them. There are hundreds of different algorithms to solve the community detection task, each with its own understanding and definition of what a "community" is. Dozens of review works attempt to order such a diverse landscape -- classifying community discovery algorithms by the process they employ to detect communities, by their explicitly stated definition of community, or by their performance on a standardized task. In this paper, we classify community discovery algorithms according to a fourth criterion: the similarity of their results. We create an Algorithm Similarity Network (ASN), whose nodes are the community detection approaches, connected if they return similar groupings. We then perform community detection on this network, grouping algorithms that consistently return the same partitions or overlapping coverage over a span of more than one thousand synthetic and real world networks. This paper is an attempt to create a similarity-based classification of community detection algorithms based on empirical data. It improves over the state of the art by comparing more than seventy approaches, discovering that the ASN contains well-separated groups, making it a sensible tool for practitioners, aiding their choice of algorithms fitting their analytic needs.
The second category includes works classifying community discovery algorithms by the definition of community they are searching for in the network. Notable definition-based review works are @cite_13 , @cite_30 , @cite_22 , @cite_6 , and @cite_32 , the latter three focusing on directed, overlapping, and evolving networks. This is the closest category to ours, as we are also interested in building an ontology of community discovery algorithms. However, the works in this category employ a top-down approach. They take the stated -- theoretical -- definition of community of a paper and use it to classify the paper. Here, we have a data-driven approach: we classify algorithms not by their stated definition, but by their practical results.
{ "abstract": [ "Many real-world networks are intimately organized according to a community structure. Much research effort has been devoted to develop methods and algorithms that can efficiently highlight this hidden structure of a network, yielding a vast literature on what is called today community detection. Since network representation can be very complex and can contain different variants in the traditional graph model, each algorithm in the literature focuses on some of these properties and establishes, explicitly or implicitly, its own definition of community. According to this definition, each proposed algorithm then extracts the communities, which typically reflect only part of the features of real communities. The aim of this survey is to provide a ‘user manual’ for the community discovery problem. Given a meta definition of what a community in a social network is, our aim is to organize the main categories of community discovery methods based on the definition of community they adopt. Given a desired definition of community and the features of a problem (size of network, direction of edges, multidimensionality, and so on) this review paper is designed to provide a set of approaches that researchers could focus on. The proposed classification of community discovery methods is also useful for putting into perspective the many open directions for further research. © 2011 Wiley Periodicals, Inc. Statistical Analysis and Data Mining 4: 512–546, 2011 © 2011 Wiley Periodicals, Inc.", "Abstract Networks (or graphs) appear as dominant structures in diverse domains, including sociology, biology, neuroscience and computer science. In most of the aforementioned cases graphs are directed — in the sense that there is directionality on the edges, making the semantics of the edges nonsymmetric as the source node transmits some property to the target one but not vice versa. An interesting feature that real networks present is the clustering or community structure property, under which the graph topology is organized into modules commonly called communities or clusters. The essence here is that nodes of the same community are highly similar while on the contrary, nodes across communities present low similarity. Revealing the underlying community structure of directed complex networks has become a crucial and interdisciplinary topic with a plethora of relevant application domains. Therefore, naturally there is a recent wealth of research production in the area of mining directed graphs — with clustering being the primary method sought and the primary tool for community detection and evaluation. The goal of this paper is to offer an in-depth comparative review of the methods presented so far for clustering directed networks along with the relevant necessary methodological background and also related applications. The survey commences by offering a concise review of the fundamental concepts and methodological base on which graph clustering algorithms capitalize on. Then we present the relevant work along two orthogonal classifications. The first one is mostly concerned with the methodological principles of the clustering algorithms, while the second one approaches the methods from the viewpoint regarding the properties of a good cluster in a directed network. 
Further, we present methods and metrics for evaluating graph clustering results, demonstrate interesting application domains and provide promising future research directions.", "Several research studies have shown that complex networks modeling real-world phenomena are characterized by striking properties: (i) they are organized according to community structure, and (ii) their structure evolves with time. Many researchers have worked on methods that can efficiently unveil substructures in complex networks, giving birth to the field of community discovery. A novel and fascinating problem started capturing researcher interest recently: the identification of evolving communities. Dynamic networks can be used to model the evolution of a system: nodes and edges are mutable, and their presence, or absence, deeply impacts the community structure that composes them. This survey aims to present the distinctive features and challenges of dynamic community discovery and propose a classification of published approaches. As a “user manual,” this work organizes state-of-the-art methodologies into a taxonomy, based on their rationale, and their specific instantiation. Given a definition of network dynamics, desired community characteristics, and analytical needs, this survey will support researchers to identify the set of approaches that best fit their needs. The proposed classification could also help researchers choose in which direction to orient their future research.", "The detection of overlapping communities is a challenging problem which is gaining increasing interest in recent years because of the natural attitude of individuals, observed in real-world networks, to participate in multiple groups at the same time. This review gives a description of the main proposals in the field. Besides the methods designed for static networks, some new approaches that deal with the detection of overlapping communities in networks that change over time, are described. Methods are classified with respect to the underlying principles guiding them to obtain a network division in groups sharing part of their nodes. For each of them we also report, when available, computational complexity and web site address from which it is possible to download the software implementing the method.", "" ], "cite_N": [ "@cite_30", "@cite_22", "@cite_32", "@cite_6", "@cite_13" ], "mid": [ "2157527521", "2152430833", "2734601503", "2964273740", "" ] }
Discovering Communities of Community Discovery
In this paper, we provide a bottom-up data-driven categorization of community detection algorithms. Community detection in complex networks is the task of finding groups of nodes that are closely related to each other. Doing so usually unveils new knowledge about how nodes connect, helping us predicting new links or some latent node characteristic. Community discovery is probably the most prominent and studied problem in network science. This popularity implies that the number of different networks to which community discovery can be applied is vast and so is the number of its potential analytic objectives. As a result, what a community is in a complex network can take as many different interpretations as the number of people working in the field. Review works on the topic abound and often their reference lists contain hundreds of citations [14]. They usually attempt a classification, grouping community detection algorithms into a manageable set of macro categories. Most of them work towards one of three objectives. They classify community detection algorithms: by process, meaning they explain the inner workings of an algorithm and let the reader decide which method corresponds to their own definition of community -e.g. [14]; by definition, meaning they collect all community discovery definitions ever proposed and create an ontology of them -e.g. [6]; by performance, meaning that they put the algorithms to a standardized task and rank them according to how well they perform on that task -e.g. [18]. This paper also attempts to classify community discovery algorithms, but uses none of these approaches. Instead, we perform a categorization by similarity, e.g. which algorithms, at a practical level, return almost the same communities. As in the process case, we expect the inner workings of an algorithm to make most of the difference, but we do not focus on them. As in the definition case, we aim to build an ontology, but ours is bottom-up data-driven rather than being imposed top-down. As in the performance case, we define a set of standardized tasks, but we are not interested in which method maximizes a quality function. Here, we are not interested in what works best but what works similarly. This is useful for practitioners because they might have identified an algorithm that finds the communities they are interested in, with some downsides that make its application impossible (e.g. long running times). With the map provided in this paper, a researcher can identify the set of algorithms outputting almost identical results to their favorite one, but not affected by its specific issues. Maybe they perform slightly worse, but do so at a higher time efficiency. We do so by collecting implementations of community detection algorithms and extract communities on synthetic benchmarks and real world networks. We then calculate the pairwise similarity of the output groupings, using overlapping mutual information [21], [26] -we need the overlapping variant, because it allows us to compare algorithms which allow communities to share nodes. For each network in which algorithms a 1 and a 2 ranked in the top five among the most similar outputs we increase their similarity count by one. Once we have an overall measure of how many times two algorithms provided similar communities, we can reconstruct an affinity graph, which we call the Algorithm Similarity Network (ASN ). In ASN , each node is a community discovery method. We weigh each link according to the similarity count, as explained above. 
We only keep links if this count is significantly different from null expectation. Once we establish that our reconstruction of ASN is resilient to noise and to our choices, we analyze it. Specifically, we want to find groups of algorithms that work similarly: we discover communities of community discovery algorithms. There are other approaches proposing a data-driven classification of community discovery algorithms [10,11,16]. This paper improves over the state of the art by: exploring more algorithms (73) over more benchmarks (960 synthetic and 819 real-world networks) than other empirical tests; exploring more algorithm types -including overlapping and hierarchical solutions -; looking at the actual similarity of the partitions rather than the distribution of community sizes. Note that we were only able to collect 73 out of the hundreds community discovery algorithms, because we focused on the papers which provided an easy way to recover their implementation. This paper should not be considered finished as is, but rather as a work in progress. Many prominent algorithms were excluded as it was not possible to find a working implementation -sometimes because they are simply too old. Authors of excluded methods should be assured that we will include their algorithm in ASN if they can contact us at mcos@itu.dk. The most updated version of ASN will then be not in this paper, but available at http://www.michelecoscia. com/?page_id=1640. METHOD The aim of this paper is to build an Algorithm Similarity Network (ASN ), whose elements are the similarities between the outputs of community discovery algorithms. To evaluate result similarity is far from trivial, as we need to: (i) test enough scenarios to get a robust similarity measure, and (ii) being able to compare disjoint partitions to overlapping coverages -where nodes can be part of multiple communities. In this section we outline our methodology to build ASN , in three phases: (i) creating benchmark networks; (ii) evaluating the pairwise similarity of results on the benchmark networks; and (iii) extracting ASN 's backbone. A note about generating the results for each algorithm. Many algorithms require parameters and do not have an explicit test for choosing the optimal ones. In those cases, we operate a grid search, selecting the combination yielding the maximum modularity. This is simpler in the case of algorithms returning disjoint partitions. For algorithms providing an overlapping coverage, there are multiple conflicting definitions of overlapping modularity. For this paper, we choose the one presented in [23]. Benchmarks We have two distinct sets of benchmarks on which to test our community discovery algorithms: synthetic networks and real world networks. Synthetic Networks. In evaluating community discovery algorithms, most researchers agree on using the LFR benchmark generator [22] for synthetic testing. The LFR benchmark creates networks respecting most of the properties of interest of many real world networks. We follow the literature and use the LFR benchmark. We make this choice not without criticism, which we spell out in Section 4.2. To generate an LFR benchmark we need to specify several parameters. Here we focus on two in particular: number of nodes n and mixing parameter µ -which is the fraction of edges that span across communities, making the task of finding communities harder. We create a grid, generating networks with n = {50, 60, 70, 80, 90, 100} and µ = {.07, .09, .11, .13, .15, .17, .19, .21}. 
The average degree (k) is set to 6 for all networks, while the maximum degree (K) is a function of n. For each combination of parameters we generate ten independent benchmarks with disjoint communities and ten benchmarks with overlapping communities. In the overlapping case, the number of nodes overlapping between communities (o_n), as well as the number of communities to which they belong (o_m), are also a function of n. We generate 2 (overlapping, disjoint) × 10 (independent benchmarks) × 6 (possible numbers of nodes) × 8 (distinct µ values) = 960 benchmarks. Due to the high number of networks and to the high time complexity of some of the methods, we are unable to have larger benchmarks. The number of benchmarks is necessary to guarantee statistical power to our similarity measure. Real World Networks. The LFR benchmarks have a single definition of community in mind. Therefore the tests are not independent, and if an algorithm follows a different community definition, it might fail in unpredictable ways, which makes our edge creation process prone to noise. To reduce this issue, we collect a number of different real world networks. Communities in real world networks might originate from a vast and variegated set of possible processes. We assembled 819 real world networks, found in the Colorado Index of Complex Networks. We selected a high number of small networks to conform to our needs of statistical significance, as described in the previous subsection. Evaluating Similarity Once we run two community discovery algorithms on a network, we obtain two divisions of nodes into communities. A standard way to estimate how similar these two groupings are is to use normalized mutual information [40] (NMI). Mutual information quantifies the information obtained about one random variable through observing the other. The normalized variant, rather than returning the number of bits, is adjusted to take values between 0 (no mutual information) and 1 (perfect correlation). The standard version of NMI is defined only for disjoint partitions, where nodes can belong to only one community. However, many of the algorithms we test are overlapping, placing nodes in multiple communities. There are several ways to extend NMI to the overlapping case (oNMI), as described in [21] and [26]. We use the three definitions considered in these two papers as our alternative similarity measures. These versions reduce to NMI when their input is two disjoint partitions. This allows us to compare disjoint and overlapping partitions to each other. We label the three variants as MAX, LFK, and SUM, following the original papers. Our default choice is MAX, which normalizes the mutual information between the overlapping results a_1 and a_2 with the maximum of the entropies of a_1 and a_2. Differently from LFK, MAX is corrected for chance: unrelated vectors will have zero oNMI MAX. How do we aggregate the similarity results across our 1,779 benchmarks? We have three options: (i) averaging them, (ii) counting the number of times two algorithms had an oNMI higher than a given threshold, and (iii) counting the number of times two algorithms were in each other's lists of most similar algorithms in a given benchmark. We choose option (iii). Option (i) has both theoretical and practical issues. It is not immediately clear what the semantics of an average normalized mutual information would be. Moreover, we want to emphasize the scenarios in which two algorithms are similar more than those in which they are dissimilar.
There is only one way in which two results can be similar, while there are (almost) infinite ways for two results to be dissimilar. Thus similarity contains more information than dissimilarity. If we take the simple average, dissimilarity is going to drive the results. In option (ii), NMIs will have different expected values for different networks. If we choose a threshold for all benchmarks, we will overweight some benchmarks over others. This is fixed by option (iii), which counts the cases in which both algorithms agree on the community structure in the network. Note that both algorithms have to agree, thus this method still allows algorithms to be isolated if they are dissimilar to everything else. Suppose a 1 is a very peculiar algorithm. Regardless of its results, it will find a 2 as its most similar companion, even if the results are different. Since the results are different, a 2 will not have a 1 as one of its most similar companions. Thus there will be no edge between a 1 and a 2 . We will see in our robustness checks that the three options return comparable results, with option (iii) having the fewest theoretical and practical concerns. Building the Network The result from the previous section is a weighted network, where each edge weight is the number of benchmarks in which two algorithms were in each other most similar results. Any edge generation choice will generate a certain amount of noise. Algorithms with average results might end up as most similar to other algorithms in a benchmark just by pure chance. This means that there is uncertainity in our estimation of the edge weights -or whether some edges should be present at all. To alleviate the problem, we use the noise corrected (NC) backbone approach [7]. The reason to pick this approach over the alternatives lies in its design. The NC backboning procedure removes noise from edge weight estimates, under specific assumptions about the edge generation process, which fit the way we build our network. ASN is a network where edge weights are counts, broadly distributed -as we show in the Analysis section -, and are generated with an hypergeometric "extraction without replacement" approach, which are all assumptions of the NC backboning approach. For this reason, we apply the NC backbone to our ASN . Node size: sum of total edge weights. Node color: community affiliation -multicolored nodes belong to multiple communities. Edge width: number of times the two algorithms returned similar partitions. Only including links exceeding null expectation. Link color: significance, from dark (high) to light (low, but still significant with p < .00001). The NC backbone requires a parameter δ , which controls for the statistical significance of the edges we include in the resulting network. We set the parameter to the value required to have the minimum possible number of edges, while at the same time ensuring that each node has at least one connection. In our case, we set δ = 19.5, meaning that we only include edges with that particular tscore (or higher), which is roughly equivalent to say that p < .00001. Again, note that we are not imposing the ASN to be connected in a single component. Under these constraints, ASN could be just a set of small components, each composed by a pair of connected algorithms. ANALYSIS 4.1 The Algorithm Similarity Network We start by taking a look at the resulting ASN network. 
We show a depiction of the network in Figure 1 -calculated using the oNMI MAX similarity function and setting δ = 19.5 for the noise corrected backboning. The network contains all the results, both from synthetic and from real-world networks. The first remarkable thing about ASN is that it does have a community structure. The network is sparse -by construction, this is not a result -: only 9% of possible edges are in the network. However, and this is surprising, clustering is high -transitivity is 0.47, or 47% of connected node triads have all three edges necessary to close the triangle. For these reasons, we can run a community discovery algorithm on ASN . We choose to run the overlapping Infomap algorithm [38]. The algorithm attempts to compress the information about random walks on the network using community prefix codes: good communities compress the walks better because the random walker is "trapped" inside them. The quality measure is the codelength necessary to encode random walks. The codelength gives us a corroboration of the presence of communities. Without communities, we need ∼ 8.52 bits to encode the random walks. With communities, the codelength reduces to ∼ 4.48. Figure 2 shows the complement of the cumulative distribution (CCDF) of the edge weights of ASN before operating the backboning. We can see that, while the distribution is not a power-lawnote the log-log scale -, it nevertheless spans multiple orders of magnitude, with a clear skewed distribution. In fact, 50% of the edges have a weight lower than 10 -only in 10 cases out of the possible 960 + 819 the two algorithms were in the top five most similar results -, while the three strongest edges (.1% of the network) have weights of 1,453, 1,519, and 1,540, respectively. This means that the distribution could have been a power-law, had we performed enough tests. In any case, such broad distribution justifies our choice of backboning method, which is specifically designed to handle cases with large variance and lack of well-defined averages. Robustness In developing our framework, we made choices that have repercussions ASN 's shape. How much do these choices impact the final result? We are interested in estimating the amount of change in ASN 's topology, specifically whether it is stable: different ASN s calculated with different procedures and parameters are similar. The first test aims at quantifying the amount of change introduced by using a different oNMI measure. Recall that our official ASN uses the MAX variant. There are two alternatives: LFK and Figure 4: Correlation between the ASN weights using the LFR benchmarks (x-axis) and the real world networks (yaxis). Same legend as Figure 3, for different oNMI variants: (left) MAX, (middle) LFK, (right) SUM. Figure 3 shows how ASN s calculated using them correlated with the MAX standard version. SUM. It is immediately obvious from the plots that the choice of the specific measure of oNMI has no effect on the shape of ASN . We could have picked any variant and we would have likely observed similar results. In fact, the correlations between the methods are as follows: MAX vs LFK = 0.94; MAX vs SUM = 0.99; LFK vs SUM = 0.97. The second test focuses on the synthetic LFR benchmarks versus the 819 real world networks. Real world networks do not necessarily look like LFR benchmarks -or each other. On the other hand, all LFR benchmarks are similar to each other. Does that create different ASN s? We repeat our correlation test ( Figure 4). 
As in the previous cases, we observe a significant positive correlation for all testsalbeit lower than before: LFR vs Real (MAX) = 0.55; LFR vs Real (LFK) = 0.51; LFR vs Real (SUM) = 0.51. All these correlations are still statistically significant (p ∼ 0). However, we concede that there is a difference between real world networks and LFR benchmarks. It is worthwhile investigating this difference in future works, as a possible argument against the blind acceptance of LFR as the sole benchmark for testing community discovery algorithms. Third, our edge weights are a count of benchmarks in which two algorithms were in each other most similar lists. Alternative edge creation procedures might be to take the average oNMI, or to count the similarity between two algorithms only if they exceed a fixed oNMI threshold. Section 3.2 provides our theoretical reasons. Here we show that, at a practical level, our results are not gravely affected by such choice. We do so by calculating the NMI between ASN 's communities obtained with all three techniques. The ASN built by averaging the similarity scores has a 0.63 NMI with our option, while the one obtained by a fixed threshold has a 0.46 NMI. On the basis of these similarities, we conclude that there is an underlying ASN structure, and we think our choices allow us to capture it best. Communities In Figure 1, we show a partition of ASN into communities. A seasoned researcher in the community discovery field would be able to give meaningful labels to those communities. Here, we objectively quantify this meaningfulness along a few dimensions of the many possible. We start by considering a few attributes of community detection algorithms, whether they: return overlapping partitions (in which Table 1: Features of the communities of ASN . n: # of nodes. Over: % overlapping algorithms. Spr: % algorithms based either on centrality measures (including edge betweenness and random walks) or some sort of spreading process (e.g. label percolation). Q: % algorithms based on modularity maximization. NSim: % algorithms based on neighborhood similarity. Algorithms can be part of multiple/no classes, so the rows do not sum to one. communities can share nodes), are based on some centrality measure (be it random walks or shortest paths) or spreading process (it will become apparent why we lump these two categories), are based on modularity maximization [29], or are based on a neighborhood similarity approach (e.g. they cluster the adjacency matrix). In Table 1 we calculate the fraction of nodes in a community in each of those categories. Note that we count overlap nodes in all of their communities, so some nodes contribute to up to three communities. As we expect, some communities have a stronger presence of a single category. The largest community (in blue) groups centrality-based algorithms (Infomap [38], Edge betweenness [27], Walktrap [34], etc) with the ones based on spreading processes (label percolation [36], SLPA [5], Ganxis [42], etc). Some of these can be overlapping, but the majority of nodes in the community is part of this "spreading" category. This community shows a strong relationship between random walks, centrality-based approaches, and approaches founded on spreading processes. The second largest community (in red) is mostly populated by overlapping approaches (more than 90% of its nodes are overlapping) -BigClam [43], k-Clique [31], and DEMON [8] are some examples. 
The third largest community (in purple) is mostly composed by algorithms driven by neighbor similarity (more than 70% of them) rather than the classical "internal density" definition (the two are not necessarily the same). The fourth largest community (in green) exclusively groups modularity maximization algorithms. We now calculate descriptive statistics of the groupings each method returns and then we calculate its average across all the test networks. To facilitate interpretation, we also aggregate at the level of the ASN community, as we show in Figure 1. Table 2 reports those statistics. We also calculate the standard errors, which prove that these differences are significant, but we omit them to reduce clutter. The results from Table 2 can be combined from the knowledge we gathered from Table 1. For instance, consider community 4. We know from Table 1 that this hosts peculiar algorithms working on "neighbor similarity" rather than internal density. This might seem like a small difference, but Table 2 shows its significant repercussions: the average modularity we get from these algorithms is practically zero. Moreover, the algorithms tend to return more - Table 2: The averages of various community descriptive statistics per algorithm group.|C |: Average number of communities. Avg Size: Average number of nodes in the communities.d: Average community density.Q: Average modularity -when the algorithm is overlapping we use the overlapping modularity instead of the regular definition.c: Average conductance -from [24]. Avg Ncut: Average normalized cut -from [24]. and therefore smaller -communities, which tend to be denser but also to have higher conductance. 3 This is another warning sign for uncritically accepting modularity as the de facto quality measure to look at when evaluating the performance of a community discovery algorithm. It works perfectly for the methods based on the same community definition, but there are other -different and validcommunity definitions. Other interesting facts include the almost identical average modularity between community 2 -whose algorithms are explicitly maximizing modularity -and community 3 -which is based on spreading processes. Community 1 has higher internal density, but also higher conductance and normalized cut than average, showing how overlapping approaches can find unusually dense communities, sacrificing the requirement of having few outgoing connections. The categories we discussed are necessarily broad and might group algorithms that have significant differences in other aspects. For instance, there are hundreds of different ways to make your algorithm return overlapping communities -communities sharing nodes. Our approach allows us to focus on such methods to find differences inside the algorithm communities. In practice, we can generate different versions of ASN , by only considering the similarities between the algorithms in the "overlapping" category. Note that this is different than simply inducing the graph from the original ASN , selecting only the overlapping algorithms and all the edges between them. Here we select the nodes and all their similarities and then we apply the backboning, with a differenthigher -δ threshold. In this way, we can deploy a more stringent similarity test, that is able to distinguish between subcategories of the main category. Figure 5 depicts the result. Infomap divides the overlapping ASN in three communities, proving the point that there are substantial sub-classes in the overlapping coverage category. 
There are strong arguments in favor of these classes being meaningful, although a full discussion requires more space and data. For instance, consider the bottom-right community of the network (in blue). It contains all the methods which apply the same strategy to find overlapping communities: rather than clustering nodes, they cluster edges. This is true for Linecomms [12], HLC [1], Ganet+ [33], and OLC [3]. The remaining methods do not cluster link directly, but ASN suggests that their strategies might be comparable. We can conclude that ASN provides a way to narrow down to subcategories of community discovery and find relevant information to motivate one's choice of an algorithm. Ground Truth in Synthetic Networks The version of ASN based on synthetic LFR benchmarks allows an additional analysis. The LFR benchmark generates a network with a known ground truth: it establishes edges according to a planted partition, which it also provides as an output. Thus, we can add a node to the network: the ground truth. We calculate the similarity of the ground truth division in communities with the one provided by each algorithm. We now can evaluate how the algorithms performed, by looking at the edge weights between the ground truth node and the algorithm itself. In the MAX measure, this means the number of times the algorithm was in the top similarity with the ground truth and vice versa. Table 3 shows the ten best algorithms in our sample. We do not show the worst algorithms, because MAX is a strict test, and thus there is a long list of (21) algorithms with weight equal to zero, which is not informative. The table shows that the best performing algorithm are Linecomms, OSLOM, and the overlap version of Infomap. Should we conclude that these are the best community discovery algorithms in the literature? The answer is yes only if we limit ourselves to the task of finding the same type of communities that the LFR benchmark plants in its output network. Crucially, oNMI MAX 1 linecomms 165 2 oslom 73 3 infomap-overlap 64 4 savi 62 5 labelperc 57 6 rmcl 54 7 edgebetween 41 7 leadeig 41 7 vbmod 41 10 gce 32 Table 3: The ten nodes with the highest MAX edge weight with the ground truth node in ASN -using exclusively data from the LFR synthetic networks. Table 3 are not scattered randomly in the network: they tend to be in the same area. Specifically we know that the ground truth node is located deep inside the blue community, as most of the top ten algorithms from Table 3 are classified in that group. Rank Algorithm We can quantify this objectively by calculating the average path length between the ten nodes, which is equal to 2.51 -on average you need to cross two and a half edges to go from any of these ten nodes to any other of the ten. This is shorter than the overall average path length in ASN , which is 3.25. We test statistical significance by calculating the expected average path length when selecting ten random nodes in the network. Figure 6 shows the distribution of their distances. Only seven out of a thousand attempts generated a smaller or equal average path length. We conclude this section with a word of caution when using benchmarks to establish the quality of a community discovery algorithm, which is routinely done in review works and when proposing a new approach. If the benchmark does not fit the desired definition of community, it might not return a fair evaluation. 
If one is interested in communities based on neighborhood similarity -the green community in Figure 1 -the LFR benchmark is not the correct one to use. Moreover, when deciding to test a new method against the state of the art, one must choose the algorithms in the literature fitting the same community definition, or the benchmark test would be pointless. This warning goes the other way: assuming that all valid communities look like the ones generated by the LFR benchmark would impoverish a field that -as the strong clusters in ASN show -does indeed have significantly different perspectives of what a community is. CONCLUSION In this paper we contributed to the literature on reviewing community discovery algorithms. Rather than classify them by their process, community definition, or performance, here we classify them by their similarity. How similar are the groupings they return? We performed the most comprehensive analysis of community discovery algorithms to date, including 73 algorithms tested over more than a thousand synthetic and real world networks. We were able to reconstruct an Algorithm Similarity Network -ASN -connecting algorithms to each other based on their output similarity. ASN confirms the intuition about the community discovery literature: there are indeed different valid definitions of community, as the strong clustering in the network shows. The clusters are meaningful as they reflect real differences among the algorithms' features. ASN allows us to perform multi-level analysis: by focusing on a specific category, we can apply our framework to discover meaningful sub-categories. Finally, ASN 's topology highlights how projecting the community detection problem on a single definition of community -e.g. "a group of nodes densely connected to each other and sparsely connected with the rest of the network" -does the entire sub-field a disservice, by trivializing a much more diverse set of valid community definitions. By its very nature, this paper will always be a work in progress. We do not claim that there are only 73 algorithms in the community discovery literature that are worth investigating. We only gathered what we could. Future work based on this paper can and will include whatever additions authors in the field feel should be consideredand they are encouraged to help us by sending suggestions and/or working implementations to mcos@itu.dk. The most up to date version of ASN will be available at http://www.michelecoscia.com/ ?page_id=1640. Moreover, for simplicity, here we focused only on algorithms that work on the simplest graph representations. Several algorithms specialize in directed, multilayer, bipartite, and/or metadata-rich graphs. These will be included as we refine the ASN building procedure in the future.
4,964
1907.02277
2954514064
Discovering communities in complex networks means grouping nodes similar to each other, to uncover latent information about them. There are hundreds of different algorithms to solve the community detection task, each with its own understanding and definition of what a "community" is. Dozens of review works attempt to order such a diverse landscape -- classifying community discovery algorithms by the process they employ to detect communities, by their explicitly stated definition of community, or by their performance on a standardized task. In this paper, we classify community discovery algorithms according to a fourth criterion: the similarity of their results. We create an Algorithm Similarity Network (ASN), whose nodes are the community detection approaches, connected if they return similar groupings. We then perform community detection on this network, grouping algorithms that consistently return the same partitions or overlapping coverage over a span of more than one thousand synthetic and real world networks. This paper is an attempt to create a similarity-based classification of community detection algorithms based on empirical data. It improves over the state of the art by comparing more than seventy approaches, discovering that the ASN contains well-separated groups, making it a sensible tool for practitioners, aiding their choice of algorithms fitting their analytic needs.
The third category -- gaining popularity recently -- includes works classifying community discovery algorithms by giving them a specific task and ranking them by how well they perform in that task. Such tasks can be maximizing modularity or the normalized mutual information of the communities they recover versus some other metadata we have about the nodes. In this category, we can find papers such as @cite_3, @cite_40, @cite_15, @cite_29, @cite_38, @cite_23, @cite_27; and, specifically for overlapping community discovery, @cite_25. In line with this approach, we also use standardized tests and benchmarks. However, we have no interest in which algorithm performs "best" -- whatever the definition of "best" is -- but rather in what works similarly. We have a small ranking discussion, but we use it to criticize the notion of a "best" community discovery algorithm rather than taking the results at face value.
{ "abstract": [ "Algorithms to find communities in networks rely just on structural information and search for cohesive subsets of nodes. On the other hand, most scholars implicitly or explicitly assume that structural communities represent groups of nodes with similar (non-topological) properties or functions. This hypothesis could not be verified, so far, because of the lack of network datasets with information on the classification of the nodes. We show that traditional community detection methods fail to find the metadata groups in many large networks. Our results show that there is a marked separation between structural communities and metadata groups, in line with recent findings. That means that either our current modeling of community structure has to be substantially modified, or that metadata groups may not be recoverable from topology alone.", "Detecting clusters or communities in large real-world graphs such as large social or information networks is a problem of considerable interest. In practice, one typically chooses an objective function that captures the intuition of a network cluster as set of nodes with better internal connectivity than external connectivity, and then one applies approximation algorithms or heuristics to extract sets of nodes that are related to the objective function and that \"look like\" good communities for the application of interest. In this paper, we explore a range of network community detection methods in order to compare them and to understand their relative performance and the systematic biases in the clusters they identify. We evaluate several common objective functions that are used to formalize the notion of a network community, and we examine several different classes of approximation algorithms that aim to optimize such objective functions. In addition, rather than simply fixing an objective and asking for an approximation to the best cluster of any size, we consider a size-resolved version of the optimization problem. Considering community quality as a function of its size provides a much finer lens with which to examine community detection algorithms, since objective functions and approximation algorithms often have non-obvious size-dependent behavior.", "We compare recent approaches to community structure identification in terms of sensitivity and computational cost. The recently proposed modularity measure is revisited and the performance of the methods as applied to ad hoc networks with known community structure, is compared. We find that the most accurate methods tend to be more computationally expensive, and that both aspects need to be considered when choosing a method for practical purposes. The work is intended as an introduction as well as a proposal for a standard benchmark test of community detection methods.", "Uncovering the community structure exhibited by real networks is a crucial step toward an understanding of complex systems that goes beyond the local organization of their constituents. Many algorithms have been proposed so far, but none of them has been subjected to strict tests to evaluate their performance. Most of the sporadic tests performed so far involved small networks with known community structure and or artificial graphs with a simplified structure, which is very uncommon in real systems. Here we test several methods against a recently introduced class of benchmark graphs, with heterogeneous distributions of degree and community size. 
The methods are also tested against the benchmark by Girvan and Newman [Proc. Natl. Acad. Sci. U.S.A. 99, 7821 (2002)] and on random graphs. As a result of our analysis, three recent algorithms introduced by Rosvall and Bergstrom [Proc. Natl. Acad. Sci. U.S.A. 104, 7327 (2007); Proc. Natl. Acad. Sci. U.S.A. 105, 1118 (2008)], [J. Stat. Mech.: Theory Exp. (2008), P10008], and Ronhovde and Nussinov [Phys. Rev. E 80, 016109 (2009)] have an excellent performance, with the additional advantage of low computational complexity, which enables one to analyze large systems.", "Many community detection algorithms have been developed to uncover the mesoscopic properties of complex networks. However how good an algorithm is, in terms of accuracy and computing time, remains still open. Testing algorithms on real-world network has certain restrictions which made their insights potentially biased: the networks are usually small, and the underlying communities are not defined objectively. In this study, we employ the Lancichinetti-Fortunato-Radicchi benchmark graph to test eight state-of-the-art algorithms. We quantify the accuracy using complementary measures and algorithms' computing time. Based on simple network properties and the aforementioned results, we provide guidelines that help to choose the most adequate community detection algorithm for a given network. Moreover, these rules allow uncovering limitations in the use of specific algorithms given macroscopic network properties. Our contribution is threefold: firstly, we provide actual techniques to determine which is the most suited algorithm in most circumstances based on observable properties of the network under consideration. Secondly, we use the mixing parameter as an easily measurable indicator of finding the ranges of reliability of the different algorithms. Finally, we study the dependency with network size focusing on both the algorithm's predicting power and the effective computing time.", "Community detection is a common problem in graph data analytics that consists of finding groups of densely connected nodes with few connections to nodes outside of the group. In particular, identifying communities in large-scale networks is an important task in many scientific domains. In this review, we evaluated eight state-of-the-art and five traditional algorithms for overlapping and disjoint community detection on large-scale real-world networks with known ground-truth communities. These 13 algorithms were empirically compared using goodness metrics that measure the structural properties of the identified communities, as well as performance metrics that evaluate these communities against the ground-truth. Our results show that these two types of metrics are not equivalent. That is, an algorithm may perform well in terms of goodness metrics, but poorly in terms of performance metrics, or vice versa. © 2014 The Authors. WIREs Computational", "Community detection has become a very important part in complex networks analysis. Authors traditionally test their algorithms on a few real or artificial networks. Testing on real networks is necessary, but also limited: the considered real networks are usually small, the actual underlying communities are generally not defined objectively, and it is not possible to control their properties. Generating artificial networks makes it possible to overcome these limitations. 
Until recently though, most works used variations of the classic Erdős-Renyi random model and consequently suffered from the same flaws, generating networks not realistic enough. In this work, we use model, which is able to generate networks with controlled power-law degree and community distributions, to test some community detection algorithms. We analyze the properties of the generated networks and use the normalized mutual information measure to assess the quality of the results and compare the considered algorithms.", "Network communities represent basic structures for understanding the organization of real-world networks. A community (also referred to as a module or a cluster) is typically thought of as a group of nodes with more connections amongst its members than between its members and the remainder of the network. Communities in networks also overlap as nodes belong to multiple clusters at once. Due to the difficulties in evaluating the detected communities and the lack of scalable algorithms, the task of overlapping community detection in large networks largely remains an open problem. In this paper we present BIGCLAM (Cluster Affiliation Model for Big Networks), an overlapping community detection method that scales to large networks of millions of nodes and edges. We build on a novel observation that overlaps between communities are densely connected. This is in sharp contrast with present community detection methods which implicitly assume that overlaps between communities are sparsely connected and thus cannot properly extract overlapping communities in networks. In this paper, we develop a model-based community detection algorithm that can detect densely overlapping, hierarchically nested as well as non-overlapping communities in massive networks. We evaluate our algorithm on 6 large social, collaboration and information networks with ground-truth community information. Experiments show state of the art performance both in terms of the quality of detected communities as well as in speed and scalability of our algorithm." ], "cite_N": [ "@cite_38", "@cite_29", "@cite_3", "@cite_40", "@cite_27", "@cite_23", "@cite_15", "@cite_25" ], "mid": [ "2090372231", "2111002549", "2120043163", "1995996823", "2497752945", "1565608089", "1568213684", "2139694940" ] }
Discovering Communities of Community Discovery
In this paper, we provide a bottom-up, data-driven categorization of community detection algorithms. Community detection in complex networks is the task of finding groups of nodes that are closely related to each other. Doing so usually unveils new knowledge about how nodes connect, helping us predict new links or some latent node characteristic. Community discovery is probably the most prominent and studied problem in network science. This popularity implies that the number of different networks to which community discovery can be applied is vast, and so is the number of its potential analytic objectives. As a result, what a community is in a complex network can take as many different interpretations as the number of people working in the field. Review works on the topic abound and often their reference lists contain hundreds of citations [14]. They usually attempt a classification, grouping community detection algorithms into a manageable set of macro categories. Most of them work towards one of three objectives. They classify community detection algorithms: by process, meaning they explain the inner workings of an algorithm and let the reader decide which method corresponds to their own definition of community - e.g. [14]; by definition, meaning they collect all community discovery definitions ever proposed and create an ontology of them - e.g. [6]; by performance, meaning that they put the algorithms to a standardized task and rank them according to how well they perform on that task - e.g. [18]. This paper also attempts to classify community discovery algorithms, but uses none of these approaches. Instead, we perform a categorization by similarity, i.e. which algorithms, at a practical level, return almost the same communities. As in the process case, we expect the inner workings of an algorithm to make most of the difference, but we do not focus on them. As in the definition case, we aim to build an ontology, but ours is bottom-up and data-driven rather than being imposed top-down. As in the performance case, we define a set of standardized tasks, but we are not interested in which method maximizes a quality function. Here, we are not interested in what works best but in what works similarly. This is useful for practitioners because they might have identified an algorithm that finds the communities they are interested in, but with some downsides that make its application impossible (e.g. long running times). With the map provided in this paper, a researcher can identify the set of algorithms outputting almost identical results to their favorite one, but not affected by its specific issues. Maybe they perform slightly worse, but at a higher time efficiency. We do so by collecting implementations of community detection algorithms and extracting communities on synthetic benchmarks and real world networks. We then calculate the pairwise similarity of the output groupings, using overlapping mutual information [21], [26] - we need the overlapping variant because it allows us to compare algorithms which allow communities to share nodes. For each network in which algorithms a_1 and a_2 ranked in the top five among the most similar outputs, we increase their similarity count by one. Once we have an overall measure of how many times two algorithms provided similar communities, we can reconstruct an affinity graph, which we call the Algorithm Similarity Network (ASN). In ASN, each node is a community discovery method. We weigh each link according to the similarity count, as explained above; a minimal sketch of this counting step is given below.
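The counting step just described can be sketched as follows. This is an illustrative, hypothetical implementation: the `results` dictionary (benchmark -> algorithm -> per-node community labels), the top-5 cutoff, and the use of scikit-learn's plain NMI in place of the overlapping oNMI variants are all assumptions made to keep the example short.

```python
from itertools import combinations

import networkx as nx
from sklearn.metrics import normalized_mutual_info_score as nmi

TOP_K = 5  # "top five" cutoff used in the paper

def top_k_peers(alg, labels_by_alg, k=TOP_K):
    """The k algorithms whose output is most similar to `alg` on this benchmark."""
    scores = {other: nmi(labels_by_alg[alg], labels_by_alg[other])
              for other in labels_by_alg if other != alg}
    return set(sorted(scores, key=scores.get, reverse=True)[:k])

def build_asn(results):
    """Accumulate mutual top-k agreements over all benchmarks into a weighted graph."""
    asn = nx.Graph()
    for labels_by_alg in results.values():           # one benchmark at a time
        peers = {a: top_k_peers(a, labels_by_alg) for a in labels_by_alg}
        for a1, a2 in combinations(labels_by_alg, 2):
            if a2 in peers[a1] and a1 in peers[a2]:  # both algorithms must agree
                w = asn.get_edge_data(a1, a2, {"weight": 0})["weight"]
                asn.add_edge(a1, a2, weight=w + 1)
    return asn
```

In the actual pipeline, the overlapping variants (MAX, LFK, SUM) discussed in the Method section would take the place of the plain NMI call.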
We only keep links if this count is significantly different from the null expectation. Once we establish that our reconstruction of ASN is resilient to noise and to our choices, we analyze it. Specifically, we want to find groups of algorithms that work similarly: we discover communities of community discovery algorithms. There are other approaches proposing a data-driven classification of community discovery algorithms [10,11,16]. This paper improves over the state of the art by: exploring more algorithms (73) over more benchmarks (960 synthetic and 819 real-world networks) than other empirical tests; exploring more algorithm types - including overlapping and hierarchical solutions -; and looking at the actual similarity of the partitions rather than the distribution of community sizes. Note that we were only able to collect 73 out of the hundreds of community discovery algorithms, because we focused on the papers which provided an easy way to recover their implementation. This paper should not be considered finished as is, but rather as a work in progress. Many prominent algorithms were excluded as it was not possible to find a working implementation - sometimes because they are simply too old. Authors of excluded methods should be assured that we will include their algorithm in ASN if they can contact us at mcos@itu.dk. The most up-to-date version of ASN will then not be in this paper, but available at http://www.michelecoscia.com/?page_id=1640.
METHOD
The aim of this paper is to build an Algorithm Similarity Network (ASN), whose elements are the similarities between the outputs of community discovery algorithms. Evaluating result similarity is far from trivial, as we need to: (i) test enough scenarios to get a robust similarity measure, and (ii) be able to compare disjoint partitions to overlapping coverages - where nodes can be part of multiple communities. In this section we outline our methodology to build ASN, in three phases: (i) creating benchmark networks; (ii) evaluating the pairwise similarity of results on the benchmark networks; and (iii) extracting ASN's backbone. A note about generating the results for each algorithm: many algorithms require parameters and do not have an explicit test for choosing the optimal ones. In those cases, we perform a grid search, selecting the combination yielding the maximum modularity. This is simpler in the case of algorithms returning disjoint partitions. For algorithms providing an overlapping coverage, there are multiple conflicting definitions of overlapping modularity. For this paper, we choose the one presented in [23].
Benchmarks
We have two distinct sets of benchmarks on which to test our community discovery algorithms: synthetic networks and real world networks. Synthetic Networks. In evaluating community discovery algorithms, most researchers agree on using the LFR benchmark generator [22] for synthetic testing. The LFR benchmark creates networks respecting most of the properties of interest of many real world networks. We follow the literature and use the LFR benchmark. We make this choice not without criticism, which we spell out in Section 4.2. To generate an LFR benchmark we need to specify several parameters. Here we focus on two in particular: the number of nodes n and the mixing parameter µ - the fraction of edges that span across communities, making the task of finding communities harder. We create a grid, generating networks with n = {50, 60, 70, 80, 90, 100} and µ = {.07, .09, .11, .13, .15, .17, .19, .21}; a hedged generation sketch is given below, with the remaining parameters specified right after it.
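A minimal sketch of this grid using networkx's LFR generator. Note the assumptions: networkx only produces disjoint LFR communities (the overlapping benchmarks require the original LFR implementation), and the power-law exponents tau1/tau2, the form of the maximum degree, and the seeds are not specified by the paper and are chosen here purely for illustration; generation can also fail to converge for some parameter combinations.

```python
import networkx as nx

NS = [50, 60, 70, 80, 90, 100]
MUS = [.07, .09, .11, .13, .15, .17, .19, .21]

benchmarks = []
for n in NS:
    for mu in MUS:
        for rep in range(10):                        # ten independent instances
            try:
                g = nx.LFR_benchmark_graph(
                    n, tau1=3, tau2=1.5, mu=mu,      # exponents: illustrative assumption
                    average_degree=6,                # k = 6 as in the paper
                    max_degree=n // 5,               # K grows with n (assumed form)
                    seed=rep)
            except nx.ExceededMaxIterations:
                continue                             # some combinations fail to converge
            benchmarks.append(((n, mu, rep), g))
```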
The average degree (k) is set to 6 for all networks, while the maximum degree (K) is a function of n. For each combination of parameters we generate ten independent benchmarks with disjoint communities and ten benchmarks with overlapping communities. In the overlapping case, the number of nodes overlapping between communities (o_n), as well as the number of communities to which they belong (o_m), are also a function of n. We generate 2 (overlapping, disjoint) × 10 (independent benchmarks) × 6 (possible numbers of nodes) × 8 (distinct µ values) = 960 benchmarks. Due to the high number of networks and to the high time complexity of some of the methods, we are unable to use larger benchmarks. The number of benchmarks is necessary to guarantee statistical power to our similarity measure. Real World Networks. The LFR benchmarks have a single definition of community in mind. Therefore the tests are not independent, and if an algorithm follows a different community definition, it might fail in unpredictable ways, which makes our edge creation process prone to noise. To reduce this issue, we collect a number of different real world networks. Communities in real world networks might originate from a vast and variegated set of possible processes. We assembled 819 real world networks, which were found in the Colorado Index of Complex Networks. We selected a high number of small networks to conform to our needs of statistical significance, as described in the previous subsection.
Evaluating Similarity
Once we run two community discovery algorithms on a network, we obtain two divisions of nodes into communities. A standard way to estimate how similar these two groupings are is to use normalized mutual information (NMI) [40]. Mutual information quantifies the information obtained about one random variable through observing the other. The normalized variant, rather than returning the number of bits, is adjusted to take values between 0 (no mutual information) and 1 (perfect correlation). The standard version of NMI is defined only for disjoint partitions, where nodes can belong to only one community. However, many of the algorithms we test are overlapping, placing nodes in multiple communities. There are several ways to extend NMI to the overlapping case (oNMI), as described in [21] and [26]. We use the three definitions considered in these two papers as our alternative similarity measures. These versions reduce to NMI when their input is two disjoint partitions. This allows us to compare disjoint and overlapping partitions to each other. We label the three variants as MAX, LFK, and SUM, following the original papers. Our default choice is MAX, which normalizes the mutual information between the overlapping results a_1 and a_2 with the maximum of the entropy of a_1 and a_2. Differently from LFK, MAX is corrected for chance: unrelated vectors will have zero oNMI MAX. How do we aggregate the similarity results across our 1,779 benchmarks? We have three options: (i) averaging them, (ii) counting the number of times two algorithms had an oNMI higher than a given threshold, and (iii) counting the number of times two algorithms were in each other's lists of most similar algorithms in a given benchmark. We choose option (iii). Option (i) has both theoretical and practical issues. It is not immediately clear what the semantics of an average normalized mutual information would be. Moreover, we want to emphasize the scenarios in which two algorithms are similar more than those in which they are dissimilar. Before expanding on this choice, the sketch below illustrates the MAX normalization in the simpler disjoint case.
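A small illustration of the MAX normalization for plain (disjoint) NMI, assuming community labels are non-negative integers; the overlapping variants in [21] and [26] generalize this idea to coverages. This should agree with scikit-learn's normalized_mutual_info_score with average_method="max".

```python
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import mutual_info_score

def nmi_max(labels_a, labels_b):
    """Mutual information divided by the larger of the two partition entropies."""
    i = mutual_info_score(labels_a, labels_b)        # natural-log units
    h_a = entropy(np.bincount(labels_a))
    h_b = entropy(np.bincount(labels_b))
    h = max(h_a, h_b)
    return i / h if h > 0 else 0.0
```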
Returning to the aggregation choice: there is only one way in which two results can be similar, while there are (almost) infinite ways for two results to be dissimilar. Thus similarity contains more information than dissimilarity. If we take the simple average, dissimilarity is going to drive the results. In option (ii), NMIs will have different expected values for different networks. If we choose a single threshold for all benchmarks, we will overweight some benchmarks over others. This is fixed by option (iii), which counts the cases in which both algorithms agree on the community structure in the network. Note that both algorithms have to agree, thus this method still allows algorithms to be isolated if they are dissimilar to everything else. Suppose a_1 is a very peculiar algorithm. Regardless of its results, it will find a_2 as its most similar companion, even if the results are different. Since the results are different, a_2 will not have a_1 as one of its most similar companions. Thus there will be no edge between a_1 and a_2. We will see in our robustness checks that the three options return comparable results, with option (iii) having the fewest theoretical and practical concerns.
Building the Network
The result from the previous section is a weighted network, where each edge weight is the number of benchmarks in which two algorithms were in each other's most similar results. Any edge generation choice will generate a certain amount of noise. Algorithms with average results might end up as most similar to other algorithms in a benchmark just by pure chance. This means that there is uncertainty in our estimation of the edge weights - or in whether some edges should be present at all. To alleviate the problem, we use the noise corrected (NC) backbone approach [7]. The reason to pick this approach over the alternatives lies in its design. The NC backboning procedure removes noise from edge weight estimates, under specific assumptions about the edge generation process, which fit the way we build our network. ASN is a network where edge weights are counts, broadly distributed - as we show in the Analysis section - and are generated with a hypergeometric "extraction without replacement" approach, which are all assumptions of the NC backboning approach. For this reason, we apply the NC backbone to our ASN.
Figure 1 (caption): Node size: sum of total edge weights. Node color: community affiliation - multicolored nodes belong to multiple communities. Edge width: number of times the two algorithms returned similar partitions; only links exceeding the null expectation are included. Link color: significance, from dark (high) to light (low, but still significant with p < .00001).
The NC backbone requires a parameter δ, which controls for the statistical significance of the edges we include in the resulting network. We set the parameter to the value required to have the minimum possible number of edges, while at the same time ensuring that each node has at least one connection. In our case, we set δ = 19.5, meaning that we only include edges with that particular t-score (or higher), which is roughly equivalent to saying that p < .00001. Again, note that we are not forcing ASN to be connected in a single component. Under these constraints, ASN could be just a set of small components, each composed of a pair of connected algorithms.
ANALYSIS
4.1 The Algorithm Similarity Network
We start by taking a look at the resulting ASN network; its basic summary statistics can be computed as in the sketch below.
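A minimal sketch of the summary statistics reported next, assuming `asn_backbone` is the networkx graph kept after the noise-corrected filtering; the overlapping Infomap run used in the paper requires the external infomap package and is not reproduced here.

```python
import networkx as nx

def summarize(asn_backbone):
    """Basic ASN statistics; the paper reports density ~0.09 and transitivity ~0.47."""
    return {
        "density": nx.density(asn_backbone),
        "transitivity": nx.transitivity(asn_backbone),
    }
```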
We show a depiction of the network in Figure 1 - calculated using the oNMI MAX similarity function and setting δ = 19.5 for the noise corrected backboning. The network contains all the results, both from synthetic and from real-world networks. The first remarkable thing about ASN is that it does have a community structure. The network is sparse (by construction, this is not a result): only 9% of the possible edges are in the network. However, and this is surprising, clustering is high - transitivity is 0.47, i.e. 47% of connected node triads have all three edges necessary to close the triangle. For these reasons, we can run a community discovery algorithm on ASN. We choose to run the overlapping Infomap algorithm [38]. The algorithm attempts to compress the information about random walks on the network using community prefix codes: good communities compress the walks better because the random walker is "trapped" inside them. The quality measure is the codelength necessary to encode random walks. The codelength gives us a corroboration of the presence of communities. Without communities, we need ∼ 8.52 bits to encode the random walks. With communities, the codelength reduces to ∼ 4.48. Figure 2 shows the complement of the cumulative distribution (CCDF) of the edge weights of ASN before operating the backboning. We can see that, while the distribution is not a power law - note the log-log scale - it nevertheless spans multiple orders of magnitude, with a clearly skewed distribution. In fact, 50% of the edges have a weight lower than 10 - in only 10 cases out of the possible 960 + 819 were the two algorithms in each other's top five most similar results - while the three strongest edges (.1% of the network) have weights of 1,453, 1,519, and 1,540, respectively. This means that the distribution could have been a power law, had we performed enough tests. In any case, such a broad distribution justifies our choice of backboning method, which is specifically designed to handle cases with large variance and a lack of well-defined averages.
Robustness
In developing our framework, we made choices that have repercussions on ASN's shape. How much do these choices impact the final result? We are interested in estimating the amount of change in ASN's topology, specifically whether it is stable: whether different ASNs calculated with different procedures and parameters are similar. The first test aims at quantifying the amount of change introduced by using a different oNMI measure. Recall that our official ASN uses the MAX variant. There are two alternatives: LFK and SUM. Figure 3 shows how ASNs calculated using them correlate with the standard MAX version. It is immediately obvious from the plots that the choice of the specific measure of oNMI has no effect on the shape of ASN. We could have picked any variant and we would have likely observed similar results. In fact, the correlations between the methods are as follows: MAX vs LFK = 0.94; MAX vs SUM = 0.99; LFK vs SUM = 0.97. The second test focuses on the synthetic LFR benchmarks versus the 819 real world networks. Real world networks do not necessarily look like LFR benchmarks - or like each other. On the other hand, all LFR benchmarks are similar to each other. Does that create different ASNs? We repeat our correlation test (Figure 4); a sketch of this weight-correlation computation is given below.
Figure 4 (caption): Correlation between the ASN weights using the LFR benchmarks (x-axis) and the real world networks (y-axis). Same legend as Figure 3, for different oNMI variants: (left) MAX, (middle) LFK, (right) SUM.
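A hedged sketch of how such a weight correlation between two ASN variants could be computed, assuming both are networkx graphs with a "weight" attribute; edges missing from one variant are counted as weight zero.

```python
from scipy.stats import pearsonr

def weight_correlation(asn_a, asn_b):
    """Pearson correlation of edge weights over the union of the two edge sets."""
    edges = {tuple(sorted(e)) for e in list(asn_a.edges()) + list(asn_b.edges())}
    wa = [asn_a.get_edge_data(u, v, {"weight": 0})["weight"] for u, v in edges]
    wb = [asn_b.get_edge_data(u, v, {"weight": 0})["weight"] for u, v in edges]
    return pearsonr(wa, wb)                          # (correlation, p-value)
```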
As in the previous cases, we observe a significant positive correlation for all tests - albeit lower than before: LFR vs Real (MAX) = 0.55; LFR vs Real (LFK) = 0.51; LFR vs Real (SUM) = 0.51. All these correlations are still statistically significant (p ∼ 0). However, we concede that there is a difference between real world networks and LFR benchmarks. It is worthwhile investigating this difference in future works, as a possible argument against the blind acceptance of LFR as the sole benchmark for testing community discovery algorithms. Third, our edge weights are a count of the benchmarks in which two algorithms were in each other's most similar lists. Alternative edge creation procedures might be to take the average oNMI, or to count the similarity between two algorithms only if it exceeds a fixed oNMI threshold. Section 3.2 provides our theoretical reasons. Here we show that, at a practical level, our results are not gravely affected by this choice. We do so by calculating the NMI between ASN's communities obtained with all three techniques. The ASN built by averaging the similarity scores has a 0.63 NMI with our option, while the one obtained by a fixed threshold has a 0.46 NMI. On the basis of these similarities, we conclude that there is an underlying ASN structure, and we think our choices allow us to capture it best.
Communities
In Figure 1, we show a partition of ASN into communities. A seasoned researcher in the community discovery field would be able to give meaningful labels to those communities. Here, we objectively quantify this meaningfulness along a few of the many possible dimensions. We start by considering a few attributes of community detection algorithms, namely whether they: return overlapping partitions (in which communities can share nodes), are based on some centrality measure (be it random walks or shortest paths) or spreading process (it will become apparent why we lump these two categories), are based on modularity maximization [29], or are based on a neighborhood similarity approach (e.g. they cluster the adjacency matrix). In Table 1 we calculate the fraction of nodes in a community in each of those categories; the sketch after this passage illustrates the computation.
Table 1 (caption): Features of the communities of ASN. n: # of nodes. Over: % overlapping algorithms. Spr: % algorithms based either on centrality measures (including edge betweenness and random walks) or some sort of spreading process (e.g. label percolation). Q: % algorithms based on modularity maximization. NSim: % algorithms based on neighborhood similarity. Algorithms can be part of multiple/no classes, so the rows do not sum to one.
Note that we count overlap nodes in all of their communities, so some nodes contribute to up to three communities. As we expect, some communities have a stronger presence of a single category. The largest community (in blue) groups centrality-based algorithms (Infomap [38], Edge betweenness [27], Walktrap [34], etc.) with the ones based on spreading processes (label percolation [36], SLPA [5], Ganxis [42], etc.). Some of these can be overlapping, but the majority of nodes in the community is part of this "spreading" category. This community shows a strong relationship between random walks, centrality-based approaches, and approaches founded on spreading processes. The second largest community (in red) is mostly populated by overlapping approaches (more than 90% of its nodes are overlapping) - BigClam [43], k-Clique [31], and DEMON [8] are some examples.
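Before moving to the remaining communities, the composition fractions reported in Table 1 can be computed as in this hypothetical sketch; `categories` (a mapping from each algorithm to its set of labels) and `asn_communities` (the node sets found by Infomap on ASN) are assumed inputs.

```python
def composition(asn_communities, categories,
                labels=("overlapping", "spreading", "modularity", "nsim")):
    """Fraction of each ASN community's members falling in each algorithm category."""
    rows = []
    for comm in asn_communities:
        row = {"n": len(comm)}
        for lab in labels:
            row[lab] = sum(lab in categories[a] for a in comm) / len(comm)
        rows.append(row)
    return rows
```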
The third largest community (in purple) is mostly composed of algorithms driven by neighbor similarity (more than 70% of them) rather than the classical "internal density" definition (the two are not necessarily the same). The fourth largest community (in green) exclusively groups modularity maximization algorithms. We now calculate descriptive statistics of the groupings each method returns and then we calculate their averages across all the test networks. To facilitate interpretation, we also aggregate at the level of the ASN community, as we show in Figure 1. Table 2 reports those statistics. We also calculate the standard errors, which show that these differences are significant, but we omit them to reduce clutter.
Table 2 (caption): The averages of various community descriptive statistics per algorithm group. |C|: average number of communities. Avg Size: average number of nodes in the communities. d: average community density. Q: average modularity - when the algorithm is overlapping we use the overlapping modularity instead of the regular definition. c: average conductance - from [24]. Avg Ncut: average normalized cut - from [24].
The results from Table 2 can be combined with the knowledge we gathered from Table 1. For instance, consider community 4. We know from Table 1 that this hosts peculiar algorithms working on "neighbor similarity" rather than internal density. This might seem like a small difference, but Table 2 shows its significant repercussions: the average modularity we get from these algorithms is practically zero. Moreover, the algorithms tend to return more - and therefore smaller - communities, which tend to be denser but also to have higher conductance. This is another warning sign against uncritically accepting modularity as the de facto quality measure to look at when evaluating the performance of a community discovery algorithm. It works perfectly for the methods based on the same community definition, but there are other - different and valid - community definitions. Other interesting facts include the almost identical average modularity between community 2 - whose algorithms are explicitly maximizing modularity - and community 3 - which is based on spreading processes. Community 1 has higher internal density, but also higher conductance and normalized cut than average, showing how overlapping approaches can find unusually dense communities, sacrificing the requirement of having few outgoing connections. The categories we discussed are necessarily broad and might group algorithms that have significant differences in other aspects. For instance, there are hundreds of different ways to make an algorithm return overlapping communities - communities sharing nodes. Our approach allows us to focus on such methods to find differences inside the algorithm communities. In practice, we can generate different versions of ASN by only considering the similarities between the algorithms in the "overlapping" category. Note that this is different from simply inducing the graph from the original ASN, selecting only the overlapping algorithms and all the edges between them. Here we select the nodes and all their similarities and then we apply the backboning, with a different - higher - δ threshold. In this way, we can deploy a more stringent similarity test that is able to distinguish between subcategories of the main category. Figure 5 depicts the result. Infomap divides the overlapping ASN into three communities, proving the point that there are substantial sub-classes in the overlapping coverage category.
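Stepping back briefly to Table 2: a hedged sketch of the per-algorithm statistics it reports, given one benchmark graph and the communities an algorithm returned on it. Modularity and normalized cut are omitted to keep the example short, and degenerate communities are simply skipped.

```python
import networkx as nx

def community_stats(g, communities):
    """Average size, density, and conductance of one algorithm's communities on graph g."""
    sizes = [len(c) for c in communities]
    dens = [nx.density(g.subgraph(c)) for c in communities]
    cond = []
    for c in communities:
        if 0 < len(c) < g.number_of_nodes():
            try:
                cond.append(nx.conductance(g, c))
            except ZeroDivisionError:                # community with zero volume
                pass
    return {
        "num_communities": len(communities),
        "avg_size": sum(sizes) / len(sizes),
        "avg_density": sum(dens) / len(dens),
        "avg_conductance": sum(cond) / len(cond) if cond else float("nan"),
    }
```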
There are strong arguments in favor of these classes being meaningful, although a full discussion requires more space and data. For instance, consider the bottom-right community of the network (in blue). It contains all the methods which apply the same strategy to find overlapping communities: rather than clustering nodes, they cluster edges. This is true for Linecomms [12], HLC [1], Ganet+ [33], and OLC [3]. The remaining methods do not cluster links directly, but ASN suggests that their strategies might be comparable. We can conclude that ASN provides a way to narrow down to subcategories of community discovery and find relevant information to motivate one's choice of an algorithm.
Ground Truth in Synthetic Networks
The version of ASN based on synthetic LFR benchmarks allows an additional analysis. The LFR benchmark generates a network with a known ground truth: it establishes edges according to a planted partition, which it also provides as an output. Thus, we can add a node to the network: the ground truth. We calculate the similarity of the ground-truth division into communities with the one provided by each algorithm. We can now evaluate how the algorithms performed, by looking at the edge weights between the ground truth node and the algorithm itself. In the MAX measure, this means the number of times the algorithm was among the most similar results to the ground truth and vice versa. Table 3 shows the ten best algorithms in our sample. We do not show the worst algorithms, because MAX is a strict test, and thus there is a long list of (21) algorithms with weight equal to zero, which is not informative.
Table 3: The ten nodes with the highest MAX edge weight with the ground truth node in ASN - using exclusively data from the LFR synthetic networks.
Rank | Algorithm       | oNMI MAX
1    | linecomms       | 165
2    | oslom           | 73
3    | infomap-overlap | 64
4    | savi            | 62
5    | labelperc       | 57
6    | rmcl            | 54
7    | edgebetween     | 41
7    | leadeig         | 41
7    | vbmod           | 41
10   | gce             | 32
The table shows that the best performing algorithms are Linecomms, OSLOM, and the overlapping version of Infomap. Should we conclude that these are the best community discovery algorithms in the literature? The answer is yes only if we limit ourselves to the task of finding the same type of communities that the LFR benchmark plants in its output network. Crucially, the algorithms in Table 3 are not scattered randomly in the network: they tend to be in the same area. Specifically, we know that the ground truth node is located deep inside the blue community, as most of the top ten algorithms from Table 3 are classified in that group. We can quantify this objectively by calculating the average path length between the ten nodes, which is equal to 2.51 - on average you need to cross two and a half edges to go from any of these ten nodes to any other of the ten. This is shorter than the overall average path length in ASN, which is 3.25. We test statistical significance by calculating the expected average path length when selecting ten random nodes in the network. Figure 6 shows the distribution of their distances. Only seven out of a thousand attempts generated a smaller or equal average path length. We conclude this section with a word of caution when using benchmarks to establish the quality of a community discovery algorithm, which is routinely done in review works and when proposing a new approach. If the benchmark does not fit the desired definition of community, it might not return a fair evaluation.
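Returning to the path-length significance test above, a minimal sketch: `asn` (the backboned graph) and `top_ten` (the algorithm names from Table 3) are assumed inputs, unreachable pairs are skipped, and the trial count and seed are illustrative.

```python
import random
from itertools import combinations

import networkx as nx

def avg_distance(g, nodes):
    """Average pairwise shortest-path length among `nodes`, skipping unreachable pairs."""
    dists = [nx.shortest_path_length(g, u, v)
             for u, v in combinations(nodes, 2) if nx.has_path(g, u, v)]
    return sum(dists) / len(dists)

def path_length_null_test(asn, top_ten, trials=1000, size=10, seed=0):
    """Compare the observed average distance against random ten-node samples."""
    rng = random.Random(seed)
    observed = avg_distance(asn, top_ten)
    null = [avg_distance(asn, rng.sample(list(asn.nodes()), size)) for _ in range(trials)]
    return observed, sum(d <= observed for d in null) / trials   # ~0.007 in the paper
```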
If one is interested in communities based on neighborhood similarity - the green community in Figure 1 - the LFR benchmark is not the correct one to use. Moreover, when deciding to test a new method against the state of the art, one must choose the algorithms in the literature fitting the same community definition, or the benchmark test would be pointless. This warning goes the other way too: assuming that all valid communities look like the ones generated by the LFR benchmark would impoverish a field that - as the strong clusters in ASN show - does indeed have significantly different perspectives on what a community is. CONCLUSION In this paper we contributed to the literature on reviewing community discovery algorithms. Rather than classifying them by their process, community definition, or performance, here we classify them by their similarity: how similar are the groupings they return? We performed the most comprehensive analysis of community discovery algorithms to date, including 73 algorithms tested over more than a thousand synthetic and real world networks. We were able to reconstruct an Algorithm Similarity Network - ASN - connecting algorithms to each other based on their output similarity. ASN confirms the intuition about the community discovery literature: there are indeed different valid definitions of community, as the strong clustering in the network shows. The clusters are meaningful, as they reflect real differences among the algorithms' features. ASN allows us to perform multi-level analysis: by focusing on a specific category, we can apply our framework to discover meaningful sub-categories. Finally, ASN's topology highlights how projecting the community detection problem onto a single definition of community - e.g. "a group of nodes densely connected to each other and sparsely connected with the rest of the network" - does the entire sub-field a disservice, by trivializing a much more diverse set of valid community definitions. By its very nature, this paper will always be a work in progress. We do not claim that there are only 73 algorithms in the community discovery literature that are worth investigating. We only gathered what we could. Future work based on this paper can and will include whatever additions authors in the field feel should be considered, and they are encouraged to help us by sending suggestions and/or working implementations to mcos@itu.dk. The most up to date version of ASN will be available at http://www.michelecoscia.com/?page_id=1640. Moreover, for simplicity, here we focused only on algorithms that work on the simplest graph representations. Several algorithms specialize in directed, multilayer, bipartite, and/or metadata-rich graphs. These will be included as we refine the ASN building procedure in the future.
4,964
1907.02253
2953622591
We present LumiereNet, a simple, modular, and completely deep-learning-based architecture that synthesizes high-quality, full-pose headshot lecture videos from an instructor's new audio narration of any length. Unlike prior works, LumiereNet is entirely composed of trainable neural network modules that learn mapping functions from audio to video through intermediate, estimated-pose-based, compact and abstract latent codes. Our video demos are available at [22] and [23].
* Visual speech synthesis Over the last two decades, there has been extensive work dedicated to creating realistic animations for speech @cite_27 in 2D or 3D. 2D has the advantage that video cutouts of the mouth area can be used and combined, leading to realistic visualizations @cite_0 @cite_19 . 3D approaches are much more versatile, as viewpoints and illumination can be changed at will @cite_16 . Given that our goal is to produce 2D animation based on audio, instead of formulating the entire mapping as an end-to-end optimization task, we are inherently interested in the intermediate representations. Recent advances in this line have mostly focused on synthesizing only the parts of the face (around the mouth) and borrowing the rest of the subject from existing video footage @cite_9 @cite_8 @cite_24 .
{ "abstract": [ "Given audio of President Barack Obama, we synthesize a high quality video of him speaking with accurate lip sync, composited into a target video clip. Trained on many hours of his weekly address footage, a recurrent neural network learns the mapping from raw audio features to mouth shapes. Given the mouth shape at each time instant, we synthesize high quality mouth texture, and composite it with proper 3D pose matching to change what he appears to be saying in a target video to match the input audio track. Our approach produces photorealistic results.", "Talking heads synthesis with expressions from speech is proposed in this paper. Talking heads synthesis can be considered as a learning problem of sequence-to-sequence mapping, which consists of audio as input and video as output. To synthesize talking heads, we use SAVEE database which consists of videos of multiple sentences speeches recorded from front of face. Audiovisual data can be considered as two parallel sequential data of audio and visual features and it is composed of continuous value. Thus, audio and visual features of our dataset are represented by a regression model. In this research, the regression model is trained with long short-term memory (LSTM) by minimizing mean squared error (MSE). Then, audio features are used as input and visual features are used as target of LSTM. Thereby, talking heads are synthesized from speech. Our method is proposed to use lower level audio features than phonemes and it enables to synthesize talking heads with expressions while existing researches which use phonemes as audio features only can synthesize neutral expression talking heads. With SAVEE database, we achieved the minimum MSE 17.03 on our testing dataset. In experiment, we use mel-frequency cepstral coefficient (MFCC), AMFCC and A2 MFCC with energy as audio feature and active appearance model (AAM) on entire face region as visual feature.", "Video Rewrite uses existing footage to create automatically new video of a person mouthing words that she did not speak in the original footage. This technique is useful in movie dubbing, for example, where the movie sequence can be modified to sync the actors’ lip motions to the new soundtrack. Video Rewrite automatically labels the phonemes in the training data and in the new audio track. Video Rewrite reorders the mouth images in the training footage to match the phoneme sequence of the new audio track. When particular phonemes are unavailable in the training footage, Video Rewrite selects the closest approximations. The resulting sequence of mouth images is stitched into the background footage. This stitching process automatically corrects for differences in head position and orientation between the mouth images and the background footage. Video Rewrite uses computer-vision techniques to track points on the speaker’s mouth in the training footage, and morphing techniques to combine these mouth gestures into the final video sequence. The new video combines the dynamics of the original actor’s articulations with the mannerisms and setting dictated by the background footage. Video Rewrite is the first facial-animation system to automate all the labeling and assembly tasks required to resync existing footage to a new soundtrack.", "We describe how to create with machine learning techniques a generative, speech animation module. A human subject is first recorded using a videocamera as he she utters a predetermined speech corpus. 
After processing the corpus automatically, a visual speech module is learned from the data that is capable of synthesizing the human subject's mouth uttering entirely novel utterances that were not recorded in the original video. The synthesized utterance is re-composited onto a background sequence which contains natural head and eye movement. The final output is videorealistic in the sense that it looks like a video camera recording of the subject. At run time, the input to the system can be either real audio sequences or synthetic audio produced by a text-to-speech system, as long as they have been phonetically aligned.The two key contributions of this paper are 1) a variant of the multidimensional morphable model (MMM) to synthesize new, previously unseen mouth configurations from a small set of mouth image prototypes; and 2) a trajectory synthesis technique based on regularization, which is automatically trained from the recorded video corpus, and which is capable of synthesizing trajectories in MMM space corresponding to any desired utterance.", "We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., Youtube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where Youtube videos are reenacted in real time.", "We present ObamaNet, the first architecture that generates both audio and synchronized photo-realistic lip-sync videos from any new text. Contrary to other published lip-sync approaches, ours is only composed of fully trainable neural modules and does not rely on any traditional computer graphics methods. More precisely, we use three main modules: a text-to-speech network based on Char2Wav, a time-delayed LSTM to generate mouth-keypoints synced to the audio, and a network based on Pix2Pix to generate the video frames conditioned on the keypoints.", "We present a machine learning technique for driving 3D facial animation by audio input in real time and with low latency. Our deep neural network learns a mapping from input waveforms to the 3D vertex coordinates of a face model, and simultaneously discovers a compact, latent code that disambiguates the variations in facial expression that cannot be explained by the audio alone. During inference, the latent code can be used as an intuitive control for the emotional state of the face puppet. We train our network with 3--5 minutes of high-quality animation data obtained using traditional, vision-based performance capture methods. 
Even though our primary goal is to model the speaking style of a single actor, our model yields reasonable results even when driven with audio from other speakers with different gender, accent, or language, as we demonstrate with a user study. The results are applicable to in-game dialogue, low-cost localization, virtual reality avatars, and telepresence." ], "cite_N": [ "@cite_8", "@cite_9", "@cite_0", "@cite_19", "@cite_27", "@cite_24", "@cite_16" ], "mid": [ "2738406145", "2289286917", "2147885303", "2120654454", "2301937176", "2782422271", "2739192055" ] }
LumièreNet: Lecture Video Synthesis from Audio
To meet the increasing need for people to keep learning throughout their careers, massive open online course (MOOC) companies, such as Udacity and Coursera, not only aggressively design new and relevant courses, but also frequently refresh existing course lectures to keep them up-to-date. In particular, instructor-produced lecture videos (i.e., a subject-matter expert delivering a lecture) are central in the current generation of MOOCs [11,12]. Due to their importance in online courses, video production is increasing exponentially, but production techniques are still not nimble enough to quickly shoot, edit, personalize, and internationalize the lecture video. This is because video production today requires considerable resources and processes (i.e., instructor, studio, equipment, and production staff) throughout the development phases. In the current video production pipeline, an AI system which semi- or fully automates lecture video production at scale would be highly valuable to enable agile video content development (rather than re-shooting each new video). To that end, we propose a new method to synthesize lecture videos from audio narration of any length. Given an instructor's lecture audio, we wish to synthesize the corresponding video of any length. This problem of audio-to-video synthesis is, in general, challenging, as we must learn a mapping which goes from lower-dimensional signals (audio) to higher-dimensional (3D), time-varying image sequences (videos). However, with the availability of stock video footage of a subject teaching and significant recent advances in the community, we are able to discover this mapping between audio and corresponding visuals directly in a supervised way. There have been several important attempts in this direction which focus on synthesizing parts of the face (around the mouth) [27,19]. However, as instructors' emotional states are communicated not only with facial expressions, but also through body posture, movement, and gestures [5,20], we introduce a pose-estimation-based latent representation as an intermediate code to synthesize an instructor's face, body, and the background altogether. We design these compact and abstract codes from the human poses extracted for a subject, which allows video frames and audio sequences to be conditionally independent given them. It is convenient to think of the pose detection that each frame yields as providing corresponding pairs of (audio, pose figures) and (pose figures, person images). Our primary contributions are twofold. We present a fully neural-network-based, modular framework which is sufficient to achieve convincing video results. This framework is simpler than prior classic computer-vision-based models. We also illustrate the effects of several important architectural choices for each sub-module network. Even though our approach is developed with the primary intent to support agile video content development, which is crucial in current online MOOC courses, we acknowledge there could be potential misuse of the technology. Nonetheless, we believe it is crucial that any video synthesized with our approach be labeled as synthetic, and it is also imperative to obtain consent from the instructors across the entire production process. Human pose estimation Human pose estimation is the general problem in computer vision of detecting human figures in images and video.
Recent deep-learning-based algorithmic advances enable not only the detection and localization of major body points, but also detailed surface-based human body representations (e.g., OpenPose [3] and DensePose [10]). We use a pretrained DensePose estimator to create body figure RGB images from video frames. Image-to-Image translation Several recent frameworks have used generative adversarial networks (GANs) [8] to learn a parametric translation function between input and output images [15]. Similar ideas have been applied to various tasks, such as generating photographs from sketches, and even to video applications in both paired and unpaired settings [32,4,29,1]. However, none of these techniques is fool-proof, and some limitations often remain. LumièreNet Problem Formulation We consider the problem of synthesizing a video from audio, expressed as: "How do we learn a function mapping an audio sequence x = (x_1, ..., x_T) recorded by an instructor to a video frame sequence y = (y_1, ..., y_T)?". Answering this question requires some assumptions about the video generation process, so we begin by introducing a probabilistic model for video generation. The basic idea is to introduce two hidden (or intermediate) representations for audios and videos: pose estimations w and corresponding compact and abstract codes z, defined as

w = (w_1, ..., w_T), z = (z_1, ..., z_T). (1)

This allows the video frames y and the audio sequence x to be conditionally independent given w (i.e., P(y|w, x) = P(y|w)). Additionally, x and w are also conditionally independent given z (i.e., P(w|z, x) = P(w|z)). With the goal of designing a probabilistic mapping that reflects the entire chain of associations, the conditional independence assumptions in the model imply that

P(y|x) = P(y|w) P(w|z) P(z|x). (2)

Specifically, we consider our neural network modules as one particular instance of this probabilistic generative model. A key advantage is that each neural network represents a single factor, separated from the influence of the other networks, so that each can be trained and improved independently. This also greatly reduces each network's complexity and size, as the dimension of the compact latent codes z (128 in our experiments) is typically much smaller than that of w. In fact, this model can be seen as a generalization of the simpler models in [27] and [19]. Designing P(w|z) DensePose estimator We use a pretrained DensePose system [10] to construct the human pose figures w of the video frame sequences y. Even though DensePose results do not account for fine details explicitly (like eye motion, blink, lips, hair, clothing), structural poses are largely well captured. Of course, DensePose estimation correctness can be compromised by inaccuracies of the system and self-occlusions of the body. VAE model The role of the variational autoencoder (VAE) model is to learn a compressed abstract representation of each estimated DensePose image. Here, one could use a simple model [18] as our VAE model to encode each high-dimensional DensePose image w_i into a low-dimensional latent code z_i. While the main role of the VAE model is to squeeze each DensePose image, we also want the reconstructed pose figures to have good perceptual quality. To that end, instead of using a classic pixel-by-pixel loss, we use a perceptual loss based on an ImageNet-pretrained VGG-19 [25], which has previously been shown to help the VAE's output preserve the spatial correlation characteristics of the input image [14].
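The paper does not spell out the perceptual loss implementation. The following is a minimal PyTorch sketch of a common VGG-19 feature-space reconstruction loss; the chosen feature layer, the expectation of ImageNet-normalized inputs, and the omission of the VAE's KL term are our own simplifications, not the authors' exact choices.

```python
# Hypothetical sketch of a VGG-19 perceptual (feature reconstruction) loss, one common
# way to implement the loss described above; the feature layer is an arbitrary choice.
import torch.nn as nn
import torchvision.models as models

class VGGPerceptualLoss(nn.Module):
    def __init__(self, layer_index=16):  # keep layers up to a mid-level VGG-19 block
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
        self.features = nn.Sequential(*list(vgg.children())[:layer_index]).eval()
        for p in self.features.parameters():
            p.requires_grad = False  # the loss network stays frozen

    def forward(self, reconstruction, target):
        # Both inputs: (batch, 3, H, W) RGB tensors, ImageNet-normalized.
        return nn.functional.mse_loss(self.features(reconstruction), self.features(target))
```

In a full VAE objective this term would be added to the usual KL divergence between the encoder's posterior and the prior, with a weighting factor chosen on validation data.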
Given this VAE model, the decoder part, which produces w for latent codes z, is used as P(w|z). Mapping for P(z|x) Audio features extractor We represent audio signals using log Mel-filterbank energy features. In the log filterbank energy computation, we apply 40 filters of size 1024 with a 44ms-long sliding window at a 33.3ms sampling interval. This configuration is a simple way to match the final video generation rate (30 Hz in our experiments), while keeping the frame shift at 75% of the frame size without adding complications (cf. upsampling DensePose images). Lastly, we apply normalization to the audio features before feeding them to the bidirectional long short-term memory (BLSTM) based neural network [9] described below. BLSTM model With the VAE model's encoder output z, we try to learn a mapping that associates the encoded z with the audio feature sequence x. To ensure the input audio is well-aligned (or conditioned) with the outputs, we employ future and past audio contexts via a single BLSTM layer concatenating the forward- and backward-direction LSTM outputs, followed by a linear fully-connected (FC) layer of the code dimension. Also, to help ensure the outputs are coherent over time without abrupt changes or jumps, we prepare each input to have its own look-back window of length W. One could potentially consider learning to map directly to the pixel-level DensePose image space (not to the latent embedding space). However, this would add high redundancies in the output layer and require much greater memory. Mapping for P(y|w) SeqPix2Pix model Our SeqPix2Pix model builds on the Pix2Pix framework [15], which uses a conditional generative adversarial network [8] to learn a mapping from input to output images. There have been a few attempts to overcome the limitations of applying conventional Pix2Pix algorithms to video settings. Most research focused on the fact that y consists of temporally ordered streams, so a model has to produce not only photorealistic, but also spatio-temporally coherent frames [4,29]. The classic Pix2Pix formulation can be described as learning a mapping G: w → y by solving, for each t,

min_G max_D L_t(G, D), (3)

where L_t(G, D) = log D(y_t) + log(1 − D(G(w_t))). The key assumption of this approach is that each image is synthesized independently across time:

P(y|w) = ∏_{t=1}^{T} P(y_t | w_t). (4)

This would potentially limit the capability for temporal smoothing in the generated output. Even though P(z|x) produces output streams that naturally exhibit temporal continuity, it would be beneficial to have additional temporal smoothness in the P(y|w) model. One way to reflect the memory property of the video sequence is by incorporating the following Markov property with memory length L:

P(y|w) = ∏_{t=1}^{T} P(y_t | y_{t−L}, ..., y_{t−1}; w_{t−L}, ..., w_t). (5)

With this new assumption, we extend the Pix2Pix framework into a sequential setting by introducing a temporal predictor P: (y_{t−L}, ..., y_{t−1}) → y_t (the idea of the temporal predictor was inspired by [1]):

min_{(G,P)} max_D L_t(G, D, P) (6)
subject to
y_t = G(w_t), (7)
y_t = P(G(w_{t−L}), ..., G(w_{t−1})), (8)
y_t = P(y_{t−L}, ..., y_{t−1}), (9)

where L_t(G, D, P) is a modified GAN loss over memory length L defined as Σ_{i=t−L}^{t} [log D(y_i) + log(1 − D(G(w_i)))]. Each additional constraint from Equation 7 to Equation 9 has a different purpose. Equation 7 is what we call the structural consistency constraint, and the last two constraints, Equations 8 and 9, are temporal consistency constraints among the last L samples.
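As a concrete illustration of the audio-to-code mapping described above, the PyTorch sketch below follows the sizes stated in the text (40-dimensional log Mel features, 256 LSTM cells per direction, a 128-dimensional code, look-back window W = 15), while the way windows are batched and the use of the last time step's output are our assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the BLSTM mapping P(z|x): a window of W audio feature frames
# is mapped to the latent code associated with the window's last frame.
import torch
import torch.nn as nn

class AudioToLatent(nn.Module):
    def __init__(self, n_mels=40, hidden=256, code_dim=128):
        super().__init__()
        self.blstm = nn.LSTM(input_size=n_mels, hidden_size=hidden,
                             batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, code_dim)  # concatenated forward/backward outputs

    def forward(self, x):
        # x: (batch, W, n_mels) -- each sample is a look-back window of W audio frames.
        out, _ = self.blstm(x)          # (batch, W, 2 * hidden)
        return self.fc(out[:, -1, :])   # code for the window's last frame

if __name__ == "__main__":
    model = AudioToLatent()
    window = torch.randn(8, 15, 40)     # batch of 8 windows, W = 15 frames, 40 mel bands
    print(model(window).shape)          # torch.Size([8, 128])
```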
In fact, these added constraints act as barrier functions to guide the convergence to better local optima. We can rewrite the final SeqPix2Pix formulation as:

min_{(G,P)} max_D L_t(G, D, P) (10)
+ λ_0 l_{L2}(Φ(y_t), Φ(G(w_t))) (11)
+ λ_1 l_{L1}(y_t, P(G(w_{t−L}), ..., G(w_{t−1}))) (12)
+ λ_2 l_{L1}(y_t, P(y_{t−L}, ..., y_{t−1})), (13)

where l_{L2} is the Euclidean distance between feature representations Φ(·) [16], and l_{L1} is the L1 distance between pixel-level images. Unlike the original Pix2Pix, we opt out of the PatchGAN discriminator [15] because global structural properties are largely settled in w. Experiments Video shooting We filmed our instructor's lecture video for around 4 hours. The footage was filmed in an in-house studio at Udacity and used the same setup as a regular production shoot. The studio had plain grey paper backgrounds, lights, an iPad prompter, and a single close-up C100 camera. Lecture transcripts were all prepared by the instructor and broken up into chunks of about the same length, 3 to 4 minutes each. We had the instructor read from the prompter. We kept a higher error tolerance than during regular shooting. In other words, we didn't stop for mistakes or have the instructor reshoot to fix errors, due to time constraints. Instead, we asked the instructor to continue naturally, even when there were errors in the delivery. This allowed us to complete filming without making mistakes too obvious. For usual production shooting, we would do at least two retakes, or have the instructor continue until the take is perfect. With these guidelines, there was essentially one take of each script, and for four hours of video, it took about 8 hours of shooting. Data preprocessing Our instructor's videos were filmed at 30 frames per second. Each video is about 3 to 4 minutes long at 1920x1080 resolution. We resized each frame to 455x256 resolution and cropped the central 256x256 regions. Audio is extracted from the videos and converted from 48kHz to 16kHz. We use a ResNet-101 based DensePose estimator trained on cropped person instances from the COCO-DensePose dataset. The output consists of 2D fields representing body segments and U, V coordinate spaces aligned with each of the semantic parts of the 3D model. For SeqPix2Pix model training, we used 1 image for every 30 frames. Unlike other works, we did not apply any other manual preprocessing. Network architectures VAE model Both the encoder and decoder networks are based on convolutional neural networks. We use three convolutional layers in the encoder network with 3x3 kernels. Each convolutional layer is followed by a ReLU activation layer and a max-pooling layer of size 2. Then two fully-connected output layers for mean and variance are added to the encoder. For the decoder network, we use 4 convolutional layers with 3x3 kernels, upsampling by a factor of 2 at each layer. We also use ReLU as the activation function. BLSTM model The BLSTM model is composed of forward and backward LSTM layers containing 256 cells per direction, followed by an FC layer of dimension 128. We set the look-back window length W = 15. SeqPix2Pix model We use the U-Net based Pix2Pix generative network architecture for G [25] and the DCGAN discriminator [24] for D. The temporal predictor P concatenates the last L frames (2 in our experiments) as an input to the identical U-Net architecture to predict the next frame. Training details We first train the VAE model to encode DensePose frames into the latent z space.
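For concreteness, a hypothetical PyTorch rendering of the VAE architecture just described is given below. The layer counts follow the text (three 3x3 conv + ReLU + max-pool blocks in the encoder, FC heads for mean and variance, four upsampling conv blocks in the decoder), but the channel widths, the 256x256 RGB input assumption, and the decoder's 16x16 starting resolution are our own choices, not the paper's.

```python
# Hypothetical sketch of the VAE encoder/decoder with the stated layer counts; channel
# widths, input size, and flattening strategy are assumptions, not the authors' values.
import torch
import torch.nn as nn

class PoseVAE(nn.Module):
    def __init__(self, code_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(                     # three conv + ReLU + max-pool blocks
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        feat = 64 * 32 * 32                               # 256x256 input after three poolings
        self.fc_mu = nn.Linear(feat, code_dim)            # mean head
        self.fc_logvar = nn.Linear(feat, code_dim)        # (log-)variance head
        self.fc_dec = nn.Linear(code_dim, 64 * 16 * 16)
        self.decoder = nn.Sequential(                     # four conv blocks, each upsampling x2
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
        recon = self.decoder(self.fc_dec(z).view(-1, 64, 16, 16))
        return recon, mu, logvar
```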
The VAE model is trained using an RMSProp optimizer [13] with lr = 0.00025. After that, we train the BLSTM and SeqPix2Pix models in parallel. The BLSTM model is trained with an L2 loss using an RMSProp optimizer with lr = 0.000001. For the SeqPix2Pix model, we replace the negative log-likelihood in the modified GAN loss by a least-squares loss [21], set (λ_0, λ_1, λ_2) = (0.05, 10.0, 10.0), and use the ADAM [6] optimizer with lr = 0.0002 and (β_1, β_2) = (0.5, 0.999). Experiment Results VAE model We show several qualitative example results in Figure 3 (VAE reconstruction results: five frames in sequential order across nine seconds, compared to the original DensePose frames). Overall, the VAE model is able to reconstruct an almost perfect image when the instructor's face is square, facing directly at the camera. Body shapes are very well translated. The hands look good, but on closer examination, the lines between the two hands' fingers often look blurry, and this is likely one of the major causes of poor final generation outputs. BLSTM model We learned that preparing each audio feature input to have its own look-back window is crucial for improving the validation loss and the visual consistency of the reconstructed pose figures (generated by the VAE decoder). Figure 4(a) shows training and validation losses for different look-back window sizes for the same BLSTM model (i.e., forward and backward LSTM layers containing 256 cells per direction, followed by an FC layer of dimension 128). Figure 4(b) plots the losses during training of the BLSTM model with look-back window length W = 15, which is used in our experiments. SeqPix2Pix model To qualitatively demonstrate how each added constraint helps to converge to perceptually better local optima, we compare three configurations. • Baseline 1 uses the plain GAN loss without the added constraints. This set struggles the most with eye placement and artifact-looking pixels around her mouth. • Baseline 2 adds a structural (perceptual) constraint to Baseline 1 by setting (λ_0, λ_1, λ_2) = (0.05, 0.0, 0.0). The opening and closing frames are almost entirely perfect, but the middle three frames still struggle in the eye and mouth regions. • SeqPix2Pix includes additional temporal consistency constraints by setting (λ_0, λ_1, λ_2) = (0.05, 10.0, 10.0). This set fixes the eyes and mouth incompletely, but slightly blurs the instructor's teeth. Because this occurred during fast motion, the teeth blurriness is not very noticeable, especially when the video is playing, rather than in a still image. Overall, the perceptual improvement from the Baseline 1 set to the SeqPix2Pix set is significant. In particular, the instructor's eye alignment is improved greatly, and mouth alignment as well. We also quantitatively compare those three models by measuring the traditional metrics MSE, PSNR, and SSIM [30]. We evaluate all models on two test datasets (Test Set 1 of 4,641 figures and Test Set 2 of 3,368 figures) and show results in Table 1. MSE, PSNR, and SSIM rely on low-level differences between pixels and operate under the assumption of additive Gaussian noise, which may be invalid for quantitative assessments of the generated images. We therefore emphasize that the goal of these evaluations is to showcase qualitative differences between models trained with and without structural (perceptual) and/or temporal consistency constraints. For example, the SeqPix2Pix model does a very good job at generating realistic and natural faces compared to the other baselines while achieving the lowest SSIM scores.
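As a generic illustration of how such frame-level metrics can be computed (this is not the authors' evaluation code, and the example arrays are synthetic), scikit-image provides all three measures directly:

```python
# Generic sketch: MSE, PSNR, and SSIM between a generated frame and the corresponding
# ground-truth frame, using scikit-image; both frames are uint8 RGB arrays (H, W, 3).
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def frame_metrics(generated, reference):
    return {
        "mse": mean_squared_error(reference, generated),
        "psnr": peak_signal_noise_ratio(reference, generated, data_range=255),
        "ssim": structural_similarity(reference, generated, channel_axis=-1, data_range=255),
    }

if __name__ == "__main__":
    ref = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
    gen = np.clip(ref.astype(int) + np.random.randint(-10, 10, ref.shape), 0, 255).astype(np.uint8)
    print(frame_metrics(gen, ref))
```

Per-set numbers like those in Table 1 would then be averages of these per-frame values over each test set.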
Full Video Demos, Limitations, and Future Work Using two completely different audio narrations from the instructor (different in both length and content), we produce video lectures, available at [22] and [23] respectively. Overall, the proposed LumièreNet model produces very convincing lecture video results. The hand and body gestures are smooth. The body and hair look very realistic and natural. The hands look good, but on further examination the lines between the fingers look blurry and reveal the frames as fake. The most noticeable flaw is in the eyes. Sometimes the eyes are looking in different directions or look uneven on close inspection. While the opening and closing of the lips is in almost perfect sync with the narration, finer movement details are reduced in certain time periods. We see that these shortcomings come partly from the lack of those fine details in the DensePose estimator. Combining our approach with explicit modeling of these details (e.g., face keypoints from OpenPose) might enable better synthesis of them. Moreover, to obtain more diverse gesture results, we think designing more informative latent code spaces (e.g., [31]) would be beneficial. Conclusion In this paper, we have proposed the simple, modular, and fully neural network-based LumièreNet, which produces an instructor's full-pose lecture video given an audio narration input, a problem which, as far as we know, has not been addressed before from a deep learning perspective. Our new framework is capable of effectively creating convincing full-pose video from audio of arbitrary length. Encouraged by this result, many future directions are feasible to explore. One potential direction is to look into a latent embedding space of many instructors' video footage. Given a personalized compact latent code and a few videos of a new instructor, the system would start producing new videos after quick training. We hope that our results will catalyze new developments of deep learning technologies for commercial video content production.
3,240
1907.02253
2953622591
We present LumiereNet, a simple, modular, and completely deep-learning-based architecture that synthesizes high-quality, full-pose headshot lecture videos from an instructor's new audio narration of any length. Unlike prior works, LumiereNet is entirely composed of trainable neural network modules that learn mapping functions from audio to video through intermediate, estimated-pose-based, compact and abstract latent codes. Our video demos are available at [22] and [23].
* Human pose estimation Human pose estimation is the general problem in computer vision of detecting human figures in images and video. Recent deep-learning-based algorithmic advances enable not only the detection and localization of major body points, but also detailed surface-based human body representations (e.g., OpenPose @cite_20 and DensePose @cite_22 ). We use a pre-trained DensePose estimator to create body figure RGB images from video frames.
{ "abstract": [ "In this work we establish dense correspondences between an RGB image and a surface-based representation of the human body, a task we refer to as dense human pose estimation. We gather dense correspondences for 50K persons appearing in the COCO dataset by introducing an efficient annotation pipeline. We then use our dataset to train CNN-based systems that deliver dense correspondence 'in the wild', namely in the presence of background, occlusions and scale variations. We improve our training set's effectiveness by training an inpainting network that can fill in missing ground truth values and report improvements with respect to the best results that would be achievable in the past. We experiment with fully-convolutional networks and region-based models and observe a superiority of the latter. We further improve accuracy through cascading, obtaining a system that delivers highly-accurate results at multiple frames per second on a single gpu. Supplementary materials, data, code, and videos are provided on the project page http: densepose.org.", "We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency." ], "cite_N": [ "@cite_22", "@cite_20" ], "mid": [ "2963876278", "2559085405" ] }
LumièreNet: Lecture Video Synthesis from Audio
3,240
1907.02253
2953622591
We present LumiereNet, a simple, modular, and completely deep-learning-based architecture that synthesizes high-quality, full-pose headshot lecture videos from an instructor's new audio narration of any length. Unlike prior works, LumiereNet is entirely composed of trainable neural network modules that learn mapping functions from audio to video through intermediate, estimated-pose-based, compact and abstract latent codes. Our video demos are available at [22] and [23].
* Image-to-Image translation Several recent frameworks have used generative adversarial networks (GANs) @cite_10 to learn a parametric translation function between input and output images @cite_1 . Similar ideas have been applied to various tasks, such as generating photographs from sketches, and even to video applications in both paired and unpaired settings @cite_21 @cite_6 @cite_5 @cite_30 . However, none of these techniques is fool-proof, and some limitations often remain.
{ "abstract": [ "We introduce a data-driven approach for unsupervised video retargeting that translates content from one domain to another while preserving the style native to a domain, i.e., if contents of John Oliver’s speech were to be transferred to Stephen Colbert, then the generated content speech should be in Stephen Colbert’s style. Our approach combines both spatial and temporal information along with adversarial losses for content translation and style preservation. In this work, we first study the advantages of using spatiotemporal constraints over spatial constraints for effective retargeting. We then demonstrate the proposed approach for the problems where information in both space and time matters such as face-to-face translation, flower-to-flower, wind and cloud synthesis, sunrise and sunset.", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "", "This paper presents a simple method for \"do as I do\" motion transfer: given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We approach this problem as video-to-video translation using pose as an intermediate representation. To transfer the motion, we extract poses from the source subject and apply the learned pose-to-appearance mapping to generate the target subject. We predict two consecutive frames for temporally coherent video results and introduce a separate pipeline for realistic face synthesis. Although our method is quite simple, it produces surprisingly compelling results (see video). This motivates us to also provide a forensics tool for reliable synthetic content detection, which is able to distinguish videos synthesized by our system from real data. In addition, we release a first-of-its-kind open-source dataset of videos that can be legally used for training and motion transfer.", "We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image synthesis problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without understanding temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. 
In this paper, we propose a novel approach for the video-to-video synthesis problem under adversarial learning framework. Through the introduction of new generator and discriminator architectures, coupled with a spatial-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results. Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. In particular, our model is capable of synthesizing 2K resolution videos of street scenes up to 30 seconds long, not possible before our work. Finally, we apply our approach to future video prediction, outperforming several state-of-the-art competing systems. (Note: using Adobe Reader is highly recommended to view the paper.)", "" ], "cite_N": [ "@cite_30", "@cite_21", "@cite_1", "@cite_6", "@cite_5", "@cite_10" ], "mid": [ "2963917969", "2962793481", "", "2888164449", "2963841322", "" ] }
LumièreNet: Lecture Video Synthesis from Audio
To meet the increasing needs for people to keep learning throughout their careers, massive open online course (MOOCs) companies, such as Udacity and Coursera, not only aggressively design new and relevant courses, but also frequently refresh existing course lectures to keep them upto-date. In particular, instructor-produced lecture videos (i.e., a subject-matter expert delivering a lecture) are central in the current generation of MOOCs [11,12]. Due to their importance in online courses, video production is increasing exponentially, but production techniques are still not nimble enough to quickly shoot, edit, personalize, and internationalize the lecture video. This is because video * Work done at Udacity, AI Team production today requires considerable resources and processes (i.e., instructor, studio, equipment, and production staff) throughout the development phases. In current video production pipeline, an AI machinery which semi (or fully) automates lecture video production at scale would be highly valuable to enable agile video content development (rather than re-shooting each new video). To that end, we propose a new method to synthesize lecture videos from any length of audio narration 1 . Given an instructor's lecture audios, we wish to synthesize corresponding video of any length. This problem of audio to video synthesis is, in general, challenging, as we must learn a mapping which goes from lower dimensional signals (audio) to a higher dimensional (3D) time-varying image sequences (videos). However, with the availability of video stock footage of a subject teaching and significant recent advances in the community, we are able to discover this mapping between audio and corresponding visuals directly in a supervised way. There have been several important attempts in this direction by focusing on synthesizing parts of the face (around the mouth) [27,19]. However, as instructors' emotional states are communicated not only with facial expressions, but also through body posture, movement, and gestures [5,20], we introduce a pose estimation based latent represen- tation as an intermediate code to synthesize an instructor's face, body, and the background altogether. We design these compact and abstract codes from the extracted human poses for a subject which allows video image frame and audio sequences to be conditionally independent given them. It is convenient to think of the obtained pose detection that each frame yields as a corresponding set of audio, pose figures) and pose figures, person images). Our primary contributions are twofold. We present a fully neural network-based, modular framework which is sufficient to achieve convincing video results. This framework is simpler than prior classic computer vision based models. We also illustrate the effects of several important architectural choices for each sub-module network. Even though our approach is developed with primary intents to support agile video content development which is crucial in current online MOOC courses, we acknowledge there could be potential misuse of the technologies. Nonetheless, we believe it is crucial synthesized video using our approach requires to indicate as synthetic and it is also imperative to obtain consent from the instructors across the entire production processes. Human pose estimation Human pose estimation is a general problem in computer vision to detect human figures in images and video. 
Recent deep-learning based algorithmic advances enable not only the detection and localization of major body points, but detailed surface-based human body representation (e.g.,OpenPose [3] and DensePose [10]). We use a pretrained DensePose estimator to create body figure RGB images from video frames. Image-to-Image translation Several recent frameworks have used generative adversarial networks (GANs) [8] to learn a parametric translation function between input and output images [15]. Similar ideas have been applied to various tasks, such as generating photographs from sketches, or even video applications for both paired and unpaired cases [32,4,29,1]. However, none of these techniques are fool-proof, and some amount of limitations often remain. LumièreNet Problem Formulation We consider the problem of synthesizing a video from an audio expressed as "How to learn a function to map from an audio sequence x (x 1 , . . . , x T ) recorded by an instructor to video frame sequence y (y 1 , . . . , y T )?" 2 . Answers to this question certainly require some assumptions about the video generation process, so we begin differently by introducing a probabilistic model for video generation. The basic idea is that two hidden (or intermediate) representations of pose-estimation w and corresponding compact and abstract codes z defined as w (w 1 , . . . , w T ), z (z 1 , . . . , z T )(1) are introduced for audios and videos. This allows that the video frames y and audio sequences x to be conditionally independent given w (i.e., P (y|w, x) P (y|w)). Additionally, x and w are also conditionally independent given z (i.e., P (w|z, x) P (w|z)). With the goal to design a probabilistic mapping that reflects entire associations, the conditional independence assumptions in the model imply that P (y|x) P (y|w)P (w|z)P (z|x). ( Specifically, we consider neural network modules as a randomly chosen instance of this problem based on this probabilistic generative model. A key advantage is that each neural network model represents a single factor which separates the influence of other networks that can be trained and improved independently. This also greatly reduces each network's complexity and size as the dimension of compact latent codes z (128 in our experiments) is typically much smaller than w. In fact, this model can be seen as a generalization of simpler models in [27] and [19]. Designing P (w|z) DensePose estimator We use a pretrained DensePose system [10] to construct human pose figures w of video frame sequences y. Even though DensePose results do not account for fine details explicitly (like eye motion, blink, lip, hair, clothing), structural poses are largely well captured. Of course, DensePose estimation correctness can be compromised by inaccuracies of the system and self-occlusions of the body 3 . VAE model The role of the variational auto encoder (VAE) model is to learn a compressed abstract representation of each estimated DensePose image. Here, one could use a simple model [18] as our VAE model to encode each highdimensional DensePose image w i into a low-dimensional latent code z i respectively. While the main role of the VAE model is to squeeze each DensePose image, we also want to have reconstructed pose figures to have better perceptual quality. To that end, instead of using classic pixel-by-pixel loss, we use ImageNet-pretrained VGG-19 [25] based perceptual loss 4 , which has shown previously to help the VAE's output to preserve spatial correlation characteristics of the input image [14]. 
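To make the perceptual-loss idea concrete, here is a minimal sketch of a VGG-19 feature-reconstruction loss of the kind the VAE training could use. The truncation point (roughly relu3_3), the plain L2 distance on features, and the `VGGPerceptualLoss` name are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class VGGPerceptualLoss(nn.Module):
    """Feature-reconstruction loss on ImageNet-pretrained VGG-19 activations.

    The layer cut-off (up to roughly relu3_3 here) and the plain MSE on
    features are illustrative choices, not taken from the paper.
    """
    def __init__(self, layer_idx=16):
        super().__init__()
        vgg = models.vgg19(pretrained=True).features[:layer_idx]
        for p in vgg.parameters():
            p.requires_grad = False          # fixed feature extractor
        self.vgg = vgg.eval()
        self.criterion = nn.MSELoss()

    def forward(self, reconstruction, target):
        # Both inputs are (N, 3, H, W) images normalized for ImageNet.
        return self.criterion(self.vgg(reconstruction), self.vgg(target))

# Sketch of use in a VAE training step (kl_term comes from the encoder output):
#   loss = VGGPerceptualLoss()(x_hat, x) + beta * kl_term
```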
Given this VAE model, the decoder part which produces w for latent codes z is used as P (w|z). Mapping for P (z|x) Audio features extractor We represent audio signals using the log Mel-filterbank energy features. In the log filterbank energy computations, we apply 40 filters of 1024 size with 44ms-length sliding window at a 33.3ms sampling interval. This configuration is a simple way to match the final video generation rate (30 Hz in our experiments), while keeping frame shift to be 75% of the frame size without adding complications (c.f., upsampling DensePose images). Lastly, we apply normalization to the audio features to feed them to the bidirectional long short term memory (BLSTM) based neural networks [9] described below. BLSTM model With the VAE model's encoder output z, we try to learn a mapping that associates encoded z with the audio features sequence x. To ensure the input audios to be wellaligned (or conditioned) with the outputs, we employ future and past audio contexts via a single BLSTM layer concatenating forward and backward direction LSTM outputs, followed by a linear fully-connected (FC) layer of a code dimension. Also, to help ensure the outputs are coherent over time without abrupt changes or jumps, we prepare each input to have its own look-back window of length W . One could potentially consider learning to map directly to the pixel-level DensePose image space (not to latent embedding space). However, this would add high redundancies in the output layer and require much greater memory. Mapping for P (y|w) SeqPix2Pix model Our SeqPix2Pix model builds on the Pix2Pix framework [15], which uses a conditional generative adversarial network [8] to learn a mapping from input to output images. There have been a few attempts to overcome limitations of the application of conventional Pix2Pix algorithms to video settings. Most research focused on the fact that y consists of temporally ordered streams, so a model has to produce not only photorealistic, but spatio-temporally coherent frames as well [4,29]. The classic Pix2Pix formulation can be described as learning a mapping G : w → y, for each t, as min G max D L t (G, D)(3)where L t (G, D) log D(y t ) + log(1 − D(G(w t )). The key assumption of this approach is synthesizing each image independently across time as: P (y|w) T t=1 P (y t |w t ).(4) This would potentially limit the capabilities of temporal smoothing in output generations. Even though P (z|x) produces output streams that naturally exhibit temporal continuity, it would be beneficial to have additional temporal smoothness in the P (y|w) model. One way to reflect the memory property of the video sequence is by incorporating the following Markov property with memory length L: P (y|w) T t=1 P (y t |y t−L , . . . , y t−1 ; w t−L , . . . , w t ). (5) With this new assumption, we extend the Pix2Pix framework into a sequential setting by introducing a temporal predictor P : (y t−L , . . . , y t−1 ) → y t 5 : min (G,P ) max D L t (G, D, P )(6) subject to y t = G(w t ), (7) y t = P (G(w t−L ), . . . , G(w t−1 )), (8) y t = P (y t−L , . . . , y t−1 ), where L t (G, D, P ) is a modifed GAN loss over memory length L defined as t i=t−L log D(y i )+log(1−D(G(w i )). Each additional constraint from Equation 7 to Equation 9 has different purposes. First Equation 7 is what we 5 The idea of temporal predictor was inspired from [1]. call structural consistency constraint, and the last two constraints Equation 8 and 9 are temporal consistency constraints among the last L samples. 
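As a rough illustration of how the structural and temporal consistency constraints of Equations 7-9 could be turned into trainable penalty terms, the sketch below builds the three terms given a generator `G` and a temporal predictor `P`. Concatenating the frame history along the channel axis and using plain L1 distances are assumptions (the paper uses a perceptual distance for the structural term), and the weighting of these terms against the GAN loss follows in the text.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def consistency_terms(G, P, w_seq, y_seq):
    """Penalty terms in the spirit of Eqs. (7)-(9).

    w_seq: list of L+1 pose frames [w_{t-L}, ..., w_t], each (N, C, H, W)
    y_seq: list of L+1 real frames [y_{t-L}, ..., y_t]
    G: pose-to-image generator; P: temporal predictor over the last L frames
       (here assumed to take the frames concatenated along the channel axis).
    """
    y_t = y_seq[-1]
    g_t = G(w_seq[-1])                                    # Eq. (7): y_t ~ G(w_t)
    g_hist = torch.cat([G(w) for w in w_seq[:-1]], dim=1)
    p_from_g = P(g_hist)                                  # Eq. (8): y_t ~ P(G(w_{t-L}),...,G(w_{t-1}))
    p_from_y = P(torch.cat(y_seq[:-1], dim=1))            # Eq. (9): y_t ~ P(y_{t-L},...,y_{t-1})
    structural = l1(g_t, y_t)        # placeholder; the paper uses a perceptual (feature) distance here
    temporal_1 = l1(p_from_g, y_t)
    temporal_2 = l1(p_from_y, y_t)
    return structural, temporal_1, temporal_2
```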
In fact, these added constraints act as barrier functions to guide the convergence to better local optima. We can rewrite the final SeqPix2Pix formulation as: min (G,P ) max D L t (G, D, P )(10) + λ 0 l L2 (Φ(y t ), Φ(G(w t ))) (11) + λ 1 l L1 (y t , P (G(w t−L ), . . . , G(w t−1 ))) (12) + λ 2 l L1 (y t , P (y t−L , . . . , y t−1 ))(13) where l L2 is the Euclidean distance between feature representations Φ(·) 6 [16], and l L1 is the L 1 distance between pixel-level images. Unlike the original Pix2Pix, we opt out of the PatchGAN discriminator [15] because global structural properties are largely settled in w. Experiments Video shootings We filmed our instructor's lecture video for around 4 hours 7 . The footage was filmed in an in-house studio at Udacity and used the same setup as a regular production shoot. The studio had plain grey paper backgrounds, lights, an iPad prompter, and a single close-up C100 camera. Lecture transcripts are all prepared by the instructor, which are broken up into chunks that are about the same length of 3 to 4 minutes. We had the instructor read from the prompter. We kept a higher error tolerance than during regular shooting. In other words, we didn't stop for mistakes or have the instructor reshoot to fix errors, due to time constraints. Instead, we asked to continue naturally, even when there were errors with the instructor's delivery. This allowed us to complete filiming without making mistakes too obvious. For usual production shooting, we would do at least two retakes, or have the instructor continue until the take is perfect. With these guidelines, there was essentially one take of each script, and for four hours of video, it took about 8 hours of shooting. Data prepossessing Our instructor's videos were filmed at 30 frames per second. Each video is about 3 to 4 minutes and 1920x1080 resolution. We resized each frame to 455x256 resolution and cropped the central 256x256 regions. Audios are extracted from videos and converted from 48kHz to 16kHz. We use a ResNet-101 based DensePose estimator trained on cropped person instances from the COCO-DensePose dataset. The output consists of 2D fields representing body segments and U, V coordinate spaces aligned with each of the semantic parts of the 3D model. For SeqPix2Pix model training, we used 1 image for every 30 frames. Unlike other works, we did not apply any other manual prepossessing. Network architectures VAE model Both the encoder and decoder networks are based on convolutional neural networks. We use three convolutional layers in the encoder network with 3x3 kernels. Each convolutional layer is followed by a ReLU activation layer and maxpooling layer of size 2. Then two fully-connected output layers for mean and variance are added to the encoder. For decoder network, we use 4 convolutional layers with 3x3 kernels with upsampling each layer by a factor of 2. We also use ReLU as the activaion function. BLSTM model The BLSTM model is composed of forward and backwards LSTM layers containing 256 cell dimensions per direction and followed by an FC layer of 128 dimension. We set the look-back window length W = 15. SeqPix2Pix model We use the U-Net based Pix2Pix generative network architecture for G [25] and the DCGAN discriminator [24] for D. The temporal predictor P concatenates the last L frames (2 for our experiments) as an input to the identical U-Net architecture to predict next frame. Training details We first train the VAE model to encode DensePose frames into latent z space. 
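A minimal PyTorch sketch of a VAE matching the architecture just described (three 3x3 convolutions with ReLU and 2x max-pooling in the encoder, fully-connected mean/log-variance heads, and four 3x3 convolutions with 2x upsampling in the decoder) might look as follows. The channel widths are assumptions where the text does not specify them; the 128-dimensional code and the 256x256 input follow the earlier description.

```python
import torch
import torch.nn as nn

class PoseVAE(nn.Module):
    """VAE over 256x256 DensePose images; channel widths are illustrative."""
    def __init__(self, z_dim=128):
        super().__init__()
        # Encoder: three 3x3 conv layers, each followed by ReLU and 2x max-pooling.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 256 -> 128
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 128 -> 64
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64  -> 32
        )
        self.fc_mu = nn.Linear(64 * 32 * 32, z_dim)
        self.fc_logvar = nn.Linear(64 * 32 * 32, z_dim)
        # Decoder: FC back to a feature map, then four 3x3 convs with 2x upsampling.
        self.fc_dec = nn.Linear(z_dim, 64 * 16 * 16)
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),  # 16 -> 32
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),  # 32 -> 64
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),  # 64 -> 128
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 3, 3, padding=1),              # 128 -> 256
        )

    def encode(self, x):
        h = self.encoder(x).flatten(1)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, z):
        h = self.fc_dec(z).view(-1, 64, 16, 16)
        return self.decoder(h)   # no final activation; scale to the image range as needed

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar
```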
The VAE model is trained using an RMSProp optimizer [13] with lr = 0.00025. After that, we train the BLSTM and SeqPix2Pix models in parallel. The BLSTM model is trained for L 2 loss using an RMSProp optimizer with lr = 0.000001. For the SeqPix2Pix model, we replace the negative log likelihood in modified GAN loss by a least-squares loss [21], set (λ 0 , λ 1 , λ 2 ) = (0.05, 10.0, 10.0), and use the ADAM [6] optimizer with lr = 0.0002 and (β 1 , β 2 ) = (0.5, 0.999). Experiment Results VAE model We show several qualitative example results in Figure 3. Overall, the VAE model is able to reconstruct an almost perfect image when the instructor's face is square, facing directly at the camera. Body shapes are very well translated. Original Frame Reconstructed Frame Figure 3: VAE reconstruction results. We show five frames in sequential order across nine seconds and compare to the original DensePose frames. The hands look good, but on closer examination, the lines between the two hands' fingers often look blurry and would be one of major causes for poor final generation outputs. BLSTM model We learned that preparing each audio feature input to have its own look-back window is crucial for improving validation loss and visual consistency of the reconstructed pose figures (generated by the VAE decoder). Figure 4(a) shows training and validation losses for different look-back window sizes for the same BLSTM model (i.e., forward and backwards LSTM layers containing 256 cell dimensions per direction and followed by an FC layer of 128 dimension). Figure 4(b) plots the losses during training of the BLSTM model with look-back window length W * =15, which used in our experiments. SeqPix2Pix model To qualitatively demonstrate how each added constraint helps to converge to perceptually better local optima, in This set struggles the most with eye placement and artifact looking pixels around her mouth. • Baseline 2 adds a structural (perceptual) constraint to Baseline 1 by setting (λ 0 , λ 1 , λ 2 ) = (0.05, 0.0, 0.0). The opening and closing frames are almost entirely perfect, but the middle three frames still struggle in the eye and mouth regions. • SeqPix2Pix includes additional temporal consistency constraints by setting (λ 0 , λ 1 , λ 2 ) = (0.05, 10.0, 10.0). This set fixes the eye and mouth incompletely, but slightly blurs the instructor's teeth. Because this occurred during fast motion, the teeth blurriness is not very noticeable, especially if it was playing, rather than a still image. Overall, the perceptual improvement from Baseline 1 set to the SeqPix2Pix set is significant. In particular, the instructor's eye alignment is improved greatly and mouth alignment as well. We also quantitatively compare those three models by measuring traditional metrics MSE, PSNR, and SSIM [30]. We evaluate all models on two test datasets (Test Set 1 of 4,641 figures and Test Set 2 of 3368 figures) and show results in Table 1. MSE, PSNR and SSIM rely on low-level differences between pixels and operate under the assumption of additive Gaussian noise, which may be invalid for quantitative assessments of the generated images. We therefore emphasize that the goal of these evaluations is to showcase qualitative differences between models trained with and without structural (perceptual) and/or temporal consistency constraints. For example, the SeqPix2Pix model does a very good job at generating realistic and natural faces compared to other baselines while achieving lowest SSIM scores. 
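For reference, a small sketch of how the MSE/PSNR/SSIM numbers in such a comparison can be computed with scikit-image; treating frames as paired uint8 RGB arrays with data_range=255 is an assumption.

```python
import numpy as np
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity

def frame_metrics(real_frames, fake_frames):
    """Average MSE / PSNR / SSIM over paired uint8 RGB frames of shape (H, W, 3)."""
    mses, psnrs, ssims = [], [], []
    for real, fake in zip(real_frames, fake_frames):
        mses.append(mean_squared_error(real, fake))
        psnrs.append(peak_signal_noise_ratio(real, fake, data_range=255))
        # channel_axis requires scikit-image >= 0.19; older versions use multichannel=True.
        ssims.append(structural_similarity(real, fake, data_range=255, channel_axis=-1))
    return np.mean(mses), np.mean(psnrs), np.mean(ssims)
```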
Full Video Demos, Limitations, and Future Work With two completely different audio narrations from the instructor (differing in both length and content), video lectures are produced and are available at [22] and [23], respectively. Overall, the proposed LumièreNet model produces very convincing lecture video results. The hand and body gestures are smooth, and the body and hair look realistic and natural. The hands look good, but on closer examination the lines between the fingers appear blurry and reveal the frames as synthetic. The most noticeable flaw is in the eyes: they sometimes look in different directions or appear uneven on close inspection. While the opening and closing of the lips is almost perfectly in sync with the narrations, finer movement details are reduced in certain time periods. We believe these shortcomings stem partly from the lack of such fine details in the DensePose estimator. Combining it with explicit modeling of those details (e.g., face keypoints from OpenPose) might enable better synthesis. Moreover, to obtain more diverse gesture results, we think designing more informative latent code spaces (e.g., [31]) would be beneficial. Conclusion In this paper, we have proposed LumièreNet, a simple, modular, and fully neural network-based framework that produces an instructor's full-pose lecture video given an audio narration input, a problem which, to our knowledge, has not been addressed before from a deep learning perspective. Our framework is capable of creating convincing full-pose video from audio of arbitrary length. Encouraged by this result, many future directions are feasible to explore. One potential direction is to look into a latent embedding space built from many instructors' video footage: given a personalized compact latent code and a few videos of a new instructor, the system could start producing new videos after quick training. We hope that our results will catalyze new developments of deep learning technologies for commercial video content production.
3,240
1812.06228
2953137836
Since the labelling for the positive images videos is ambiguous in weakly supervised segment annotation, negative mining based methods that only use the intra-class information emerge. In these methods, negative instances are utilized to penalize unknown instances to rank their likelihood of being an object, which can be considered as a voting in terms of similarity. However, these methods 1) ignore the information contained in positive bags, 2) only rank the likelihood but cannot generate an explicit decision function. In this paper, we propose a voting scheme involving not only the definite negative instances but also the ambiguous positive instances to make use of the extra useful information in the weakly labelled positive bags. In the scheme, each instance votes for its label with a magnitude arising from the similarity, and the ambiguous positive instances are assigned soft labels that are iteratively updated during the voting. It overcomes the limitations of voting using only the negative bags. We also propose an expectation kernel density estimation (eKDE) algorithm to gain further insight into the voting mechanism. Experimental results demonstrate the superiority of our scheme beyond the baselines.
Negative mining methods train a classifier based on the strongly labelled negative training data. Exploiting inter-class information, NegMin @cite_30 computes, for each instance in a positive bag, its similarities with all of the negative instances, and selects the instance with the minimum max-similarity as the instance of interest. CRANE @cite_33 selects negative instances to vote against an unknown instance according to a similarity threshold, and improves robustness to labelling noise among the negative instances. @cite_22 also make use of the similarity information, as a pre-processing heuristic for bag-level classification: they select the instances least similar to the negative bags and use them to initialize cluster centers, which are then used to create the bag-level feature descriptors of @cite_23 . Moreover, Jiang @cite_20 trains a one-class SVM on negative instances and then ranks saliency according to the distances to its decision boundary.
{ "abstract": [ "We propose a novel approach to annotating weakly labelled data. In contrast to many existing approaches that perform annotation by seeking clusters of self-similar exemplars (minimising intra-class variance), we perform image annotation by selecting exemplars that have never occurred before in the much larger, and strongly annotated, negative training set (maximising inter-class variance). Compared to existing methods, our approach is fast, robust, and obtains state of the art results on two challenging data-sets --- voc2007 (all poses), and the msr2 action data-set, where we obtain a 10 increase. Moreover, this use of negative mining complements existing methods, that seek to minimize the intra-class variance, and can be readily integrated with many of them.", "Multiple instance learning (MIL) is a paradigm in supervised learning that deals with the classification of collections of instances called bags. Each bag contains a number of instances from which features are extracted. The complexity of MIL is largely dependent on the number of instances in the training data set. Since we are usually confronted with a large instance space even for moderately sized real-world data sets applications, it is important to design efficient instance selection techniques to speed up the training process without compromising the performance. In this paper, we address the issue of instance selection in MIL. We propose MILIS, a novel MIL algorithm based on adaptive instance selection. We do this in an alternating optimization framework by intertwining the steps of instance selection and classifier learning in an iterative manner which is guaranteed to converge. Initial instance selection is achieved by a simple yet effective kernel density estimator on the negative instances. Experimental results demonstrate the utility and efficiency of the proposed approach as compared to the state of the art.", "The ubiquitous availability of Internet video offers the vision community the exciting opportunity to directly learn localized visual concepts from real-world imagery. Unfortunately, most such attempts are doomed because traditional approaches are ill-suited, both in terms of their computational characteristics and their inability to robustly contend with the label noise that plagues uncurated Internet content. We present CRANE, a weakly supervised algorithm that is specifically designed to learn under such conditions. First, we exploit the asymmetric availability of real-world training data, where small numbers of positive videos tagged with the concept are supplemented with large quantities of unreliable negative data. Second, we ensure that CRANE is robust to label noise, both in terms of tagged videos that fail to contain the concept as well as occasional negative videos that do. Finally, CRANE is highly parallelizable, making it practical to deploy at large scale without sacrificing the quality of the learned solution. Although CRANE is general, this paper focuses on segment annotation, where we show state-of-the-art pixel-level segmentation results on two datasets, one of which includes a training set of spatiotemporal segments from more than 20,000 videos.", "Multiple-instance problems arise from the situations where training class labels are attached to sets of samples (named bags), instead of individual samples within each bag (called instances). 
Most previous multiple-instance learning (MIL) algorithms are developed based on the assumption that a bag is positive if and only if at least one of its instances is positive. Although the assumption works well in a drug activity prediction problem, it is rather restrictive for other applications, especially those in the computer vision area. We propose a learning method, MILES (multiple-instance learning via embedded instance selection), which converts the multiple-instance learning problem to a standard supervised learning problem that does not impose the assumption relating instance labels to bag labels. MILES maps each bag into a feature space defined by the instances in the training bags via an instance similarity measure. This feature mapping often provides a large number of redundant or irrelevant features. Hence, 1-norm SVM is applied to select important features as well as construct classifiers simultaneously. We have performed extensive experiments. In comparison with other methods, MILES demonstrates competitive classification accuracy, high computation efficiency, and robustness to labeling uncertainty", "Recent advances of supervised salient object detection models demonstrate significant performance on benchmark datasets. Training such models, however, requires expensive pixel-wise annotations of salient objects. Moreover, many existing salient object detection models assume that at least a salient object exists in the input image. Such an impractical assumption leads to less appealing saliency maps on the background images, which contain no salient objects at all. To avoid expensive strong saliency annotations, in this paper, we study weakly supervised learning approaches for salient object detection. In specific, given a set of background images and or salient object images, where we only have annotations of salient object existence, we propose two approaches to train salient object detection models. In the first approach, we train a one-class SVM based on background superpixels. The further a superpixel is from the decision boundary of the one-class SVM, the more salient it is. The most interesting property of this approach is that we can effortlessly synthesize a set of background images to train the model. In the second approach, we present a solution toward jointly addressing salient object existence and detection tasks. We formulate salient object detection as an image labeling problem, where saliency labels of superpixels are modeled as hidden variables in the latent structural SVM framework. Experimental results on benchmark datasets validate the effectiveness of our proposed approaches." ], "cite_N": [ "@cite_30", "@cite_22", "@cite_33", "@cite_23", "@cite_20" ], "mid": [ "1575299770", "2127608660", "2105297725", "2098166271", "1645451374" ] }
Weakly supervised segment annotation via expectation kernel density estimation
With the development of communication technology and the popularity of digital cameras, one can easily access massive images/videos. Although these digital multimedia are usually associated with semantic tags indicating certain visual concepts appearing inside, the exact locations remain unknown, leading to their infeasibility for training traditional supervised visual recognition models. As a result, there has been a great interest in object localization for images/videos with weak labels [1], [2], [3], [4], [5]. An alternative is weakly supervised segment annotation (WSSA) [6], [7], [8], [9]. For images/videos with weak labels, those with objects of interest inside are considered as positive bags, while those without objects of interest are negative. Based on unsupervised over-segmentation, images/videos are transformed into segments, and the task is to distinguish whether they correspond to a given visual concept. Among the state-of-the-art methods for weakly supervised segment annotation (WSSA), there is a simple yet effective branch [2], [10], [8], [11]. They employ the inter-class or intra-class information by measuring similarities between instances based on two rules: 1) Positive instances are similar patterns existing in different positive bags; 2) Positive instances are dissimilar to all the instances in negative bags. For an unknown instance, they iterates through the labelled instances, and each gives a vote for or against its being a target. In [2], [8], [12], [13], the authors insist that inter-class information is more useful in a MIL setting, and propose to use negative instances to vote against the unknown instances, and select that least penalized as the instance of interest. In these methods, only negative instances with definite labels are eligible to vote. It is true that the number of negative instances is much larger than that of potential positive instances, and the labels are also more definite. However useful information in positive bags is ignored. However, there are two limitations for these methods. 1) Useful information in positive bags is ignored. 2) Only a ranking of likelihood instead of an explicit decision function is output. Although thresholding the ranking can generate a classification, there is not a strategy to theoretically decide the threshold value. In this paper, we argue that extra useful information can be mined from the weakly labelled positive bags besides the definite negative bags. Consequently the instances can be annotated by looking at the weakly labelled data themselves. Therefore we proposed a self-voting scheme, where all the instances are involved. The contributions of this paper are as follows: 1) A voting scheme involving both negative instances and ambiguous instances in positive bags is proposed. 2) The proposed voting scheme can output discriminant results beyond just ranking. 3) An expectation kernel density estimation (eKDE) algorithm is proposed to handle weakly labelled data. A deep interpretation is provided from the maximum posterior criterion (MAP) and eKDE for the proposed voting scheme 4) Relations to existing methods including negative mining, supervised KDE and semi-supervised KDE, are analyzed. In a WSSA task, two sets of images (the same for videos) are given with image-level labels. Each image in the positive set contains an instance of an identical object category, and each image in the negative set does not contain any instance of the object category. 
Negative mining methods determine, for a region in a positive image, the likelihood of its being an object of interest by its dissimilarity to the negative regions. Besides this inter-class information, our method further takes into account the intra-class information that all object regions in different positive images should have high mutual similarity, because they come from an identical object category. The extra information improves performance compared to negative mining. The remainder of this paper is organized as follows. Section II reviews the related works. We then detail the methodology in Section III: we first revisit the negative mining methods in a voting framework (III-A), then propose our weighted self-voting scheme (III-B). To get insight into the mechanism of our scheme, we derive an interpretation from MAP and eKDE (III-C). Differences from other existing methods are also analysed (III-E). Experimental results are reported in Section IV. Section V concludes this work. III. METHODOLOGY In a weakly supervised learning scenario, a label is given at a coarser level and accounts for a collection of instances rather than for an individual instance, usually for the purpose of reducing annotation effort. A positive label indicates that the collection contains at least one instance of interest, while a negative one indicates that none of the collection is of interest. Such data can be naturally represented by bags, as arise in multiple-instance learning. Without loss of generality, we denote such data by $\mathcal{D} = \{\mathbf{B}_i, y_i\}_{i=1}^{m}$, where $\mathbf{B}_i = \{X_{ij}\}_{j=1}^{|\mathbf{B}_i|}$ is a bag, with $X_{ij} \in \mathbb{R}^{D}$ an instance and $y_i \in \{1, -1\}$ a label. Data annotation amounts to predicting $y_{ij} \in \{1, -1\}$ for each instance. For the sake of clarity, we separate the notation of positive and negative bags, so that the sample set is $\mathcal{D} = \{\mathbf{B}_i^+\}_{i=1}^{p} \cup \{\mathbf{B}_i^-\}_{i=1}^{n}$, where the numbers of positive and negative bags are $p$ and $n = m - p$, respectively. A. Negative mining revisited Negative mining methods [2], [8] insist that, in the scenario of WSSA, the much larger amount of negative instances provides more useful information. Therefore they make use only of the negative bags with definite labels, and ignore the ambiguous information in positive bags, to localize objects of interest. For a given positive bag, NegMin [2] selects the instance that minimizes the similarity to its nearest neighbour in the collection of negative instances. Let $s_{ij} \triangleq s(X, X_{ij}) > 0$ denote the similarity of $X$ and $X_{ij}$. The notion of NegMin can then be formalized as follows. It scores an instance by
$$f_{\mathrm{NegMin}}(X) = \min_{\{u_{ij}\}} \sum_{i=1}^{n}\sum_{j}^{|\mathbf{B}_i^-|} -u_{ij} \cdot s_{ij} \quad (1)$$
with $u_{ij} \in \{0,1\}$ and $\sum_{j=1}^{|\mathbf{B}_i^-|} u_{ij} = 1\ \forall i$. Then the $j^*$-th instance with the maximum score in a positive bag $\mathbf{B}_i^+$ is considered the instance of interest:
$$j^* = \arg\max_{j \in \{1,\cdots,|\mathbf{B}_i^+|\}} f_{\mathrm{NegMin}}(X_{ij}), \quad i = 1,\cdots,p. \quad (2)$$
Similarly, from the negative mining perspective, CRANE [8] selects instances from the negative bags to penalize their nearby instances in the positive bags, using the following scoring strategy:
$$f_{\mathrm{CRANE}}(X) = \sum_{i=1}^{n}\sum_{j}^{|\mathbf{B}_i^-|} -s_{\mathrm{cut}}(s_{ij}) \cdot \delta(s_{ij} < \Delta). \quad (3)$$
A naive constant $s_{\mathrm{cut}}(\cdot) = 1$ is used in [8]. $\delta(\cdot)$ denotes the indicator function, and $\Delta = \max_t s(X_{ij}, X_t)$ ensures that only those negative instances which have $X$ as their nearest neighbour in the positive bags can vote a penalty. In the ambiguous positive bags, negative instances are usually similar to those in the negative bags, while the concept instances are rarely the closest to negative instances.
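To make these two negative-mining scores concrete, here is a small numpy sketch in the spirit of Eqs. (1)-(3). The Gaussian similarity, its bandwidth, and the simplification of NegMin to a single nearest negative over all negative bags are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import cdist

def gaussian_similarity(A, B, sigma=1.0):
    """s(x, x') = exp(-||x - x'||^2 / (2 sigma^2)); sigma is a placeholder choice."""
    return np.exp(-cdist(A, B, "sqeuclidean") / (2.0 * sigma ** 2))

def negmin_scores(pos_instances, neg_instances, sigma=1.0):
    """NegMin-style score: an instance ranks high when even its most similar
    negative instance is dissimilar (Eqs. (1)-(2), simplified to the single
    nearest negative over all negative bags)."""
    S = gaussian_similarity(pos_instances, neg_instances, sigma)  # (P, N)
    return -S.max(axis=1)

def crane_scores(pos_instances, neg_instances, sigma=1.0):
    """CRANE-style score: each negative instance penalizes only the positive-bag
    instance it is closest to (Eq. (3) with the constant cut-off s_cut = 1)."""
    S = gaussian_similarity(pos_instances, neg_instances, sigma)  # (P, N)
    scores = np.zeros(len(pos_instances))
    nearest = S.argmax(axis=0)   # for every negative, its nearest positive-bag instance
    for p in nearest:
        scores[p] -= 1.0
    return scores
```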
As a result, negative instances are penalized more heavily and scored lower than the potential concept instances. For both (1) and (3), an instance scored higher is more likely a concept instance. For NegMin [2], the instance with the maximum score is considered to be the object, which makes it infeasible for multiple-instance detection [30]. Although CRANE [8] is able to rank the likelihood of instances being of interest, there is no explicit classification boundary, so a threshold must be set manually to produce concept instances. Moreover, these methods are usually sensitive to outliers, since they employ only the instances with extreme similarities for voting. B. Weighted self-voting In order to address the above limitations, we seek a voting scheme that uses both the inter-class information and the underlying intra-class information of positive instances. Suppose for a moment that we already have instances with definite labels. To develop a reasonable voting scheme, each instance should vote on an unknown $X$ for its own label, according to their similarity: a more similar instance should vote with a larger magnitude, and vice versa. We therefore weight the voting by similarity, yielding a voting term of $X_{ij}$ with label $y_{ij}$ for $X$:
$$f_{ij}(X) = y_{ij} \cdot s_{ij}. \quad (4)$$
In the case of weakly labelled data, the labels of some instances are ambiguous. We therefore introduce another weight $w_{ij} \in [0,1]$ to denote the likelihood of $X_{ij}$ having a positive label, and change the voting term to
$$f_{ij}(X) = \begin{cases} w_{ij} \cdot 1 \cdot s_{ij} & \text{for } y_{ij} = 1; \\ (1 - w_{ij}) \cdot (-1) \cdot s_{ij} & \text{for } y_{ij} = -1. \end{cases} \quad (5)$$
In other words, $w_{ij} \triangleq p(y_{ij} = 1 \mid X_{ij})$. Then, given a set of weakly labelled bags, we obtain the voting score for an unknown instance $X$ as
$$f(X) = \sum_{i=1}^{p}\sum_{j}^{|\mathbf{B}_i^+|} w_{ij} s_{ij} - \left[ \sum_{i=1}^{n}\sum_{j}^{|\mathbf{B}_i^-|} s_{ij} + \sum_{i=1}^{p}\sum_{j}^{|\mathbf{B}_i^+|} (1 - w_{ij}) s_{ij} \right], \quad (6)$$
where we can see that each instance votes with magnitude $s_{ij}$ for its own label, be it definite or ambiguous. A negative instance votes for a definite $-1$, and $w_{ij}$ can be regarded as a soft label introduced for the ambiguous ones. Here we explain intuitively why letting ambiguous instances vote is reasonable; a more formal interpretation from the viewpoints of MAP and eKDE is given in a later section. For an instance $X$, every instance votes for its own label with a value measuring their similarity, as in (4) and (6). A potential object instance has many strong supporters in each positive image, because all positive images contain the same class of object. In other words, among all of the votes, the positive values from its supporters are large due to high intra-class similarities, and the negative votes from its protesters are small due to low inter-class similarities. A potential negative instance, by contrast, does not have many supporters, because its pattern does not appear in all of the positive images, so all of its positive votes tend to be small; it is more likely to be similar to the background and thus receives large negative votes that suppress the small positive ones. We expect (6) to generate an explicit label for instance $X$ via
$$\hat{y} = \mathrm{sgn}(f(X)). \quad (7)$$
Intuitively, for a segment, when the votes for its being an object overwhelm those against, Eq. (6) yields a positive value and the segment is classified as an object, and vice versa. Later in this paper, we will demonstrate that (6) in fact complies with the MAP criterion under an expectation kernel density estimation algorithm.
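A compact numpy sketch of the voting score in Eq. (6) and the decision rule in Eq. (7), assuming a Gaussian similarity (a placeholder choice) and given current soft labels for the positive-bag instances:

```python
import numpy as np
from scipy.spatial.distance import cdist

def similarity(A, B, sigma=1.0):
    # Gaussian similarity; the kernel choice and bandwidth are placeholder assumptions.
    return np.exp(-cdist(A, B, "sqeuclidean") / (2.0 * sigma ** 2))

def self_voting(X, pos_instances, w_pos, neg_instances, sigma=1.0):
    """Weighted self-voting score of Eq. (6) and the decision of Eq. (7).

    X: (M, D) instances to score; pos_instances: (P, D) instances from positive
    bags with soft labels w_pos in [0, 1]; neg_instances: (N, D) definite negatives.
    """
    S_pos = similarity(X, pos_instances, sigma)   # (M, P)
    S_neg = similarity(X, neg_instances, sigma)   # (M, N)
    f = (S_pos * w_pos).sum(axis=1) - (S_neg.sum(axis=1) + (S_pos * (1.0 - w_pos)).sum(axis=1))
    return f, np.sign(f)                          # positive sign -> object segment
```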
Note that our voting scheme (6) makes use of the ambiguous positive bags as well as the definite negative instances. Both NegMin and CRANE are special cases of formulation (6) that involve only negative instances. If we use only the negative instances with definite labels, (6) becomes
$$f_{\mathrm{neg}}(X) = -\sum_{i=1}^{n}\sum_{j}^{|\mathbf{B}_i^-|} s_{ij}, \quad (8)$$
which is simply the aggregated vote of all the negative instances, and is a reduced version of [13]. NegMin picks only the minimum of the votes, as seen in (1). CRANE selects a subset of the negative instances to vote via $\delta$, and the voting magnitude is cut off by $s_{\mathrm{cut}}$. Since NegMin and CRANE use instances with extreme similarities to vote, they are sensitive to outliers, whereas our voting scheme is much more robust because it considers all of the samples. In addition, our scheme is able to mine the useful information contained in the ambiguous bags and to output the category of an instance. C. Interpretation from MAP and eKDE We interpret the scoring scheme (6) and (7) from the viewpoints of MAP and eKDE. Given an instance $X_{ij}$, we regard its label as a binary random variable $y_{ij} \in \{1, -1\}$, where $1$ and $-1$ represent the positive and negative classes, respectively; note that this is a Bernoulli distribution. Describing the probability of $y_{ij} = 1$ by the parameter $w_{ij}$, the distribution can be written in the form
$$p(y_{ij} \mid X_{ij}) = w_{ij}^{\frac{1+y_{ij}}{2}} (1 - w_{ij})^{\frac{1-y_{ij}}{2}}. \quad (9)$$
Suppose for now that we already have the labels $y_{ij}$ for the instances $X_{ij}$ in each $\mathbf{B}_i^+$, and denote a kernel function by $k_{ij} \triangleq k(X, X_{ij})$. We can then estimate the class-conditional probabilities using conventional KDE as follows:
$$p^*(X \mid y=1) = \frac{\sum_{i=1}^{p}\sum_{j}^{|\mathbf{B}_i^+|} \frac{1+y_{ij}}{2} \cdot k_{ij}}{\sum_{i=1}^{p}\sum_{j}^{|\mathbf{B}_i^+|} \frac{1+y_{ij}}{2}}, \quad (10)$$
$$p^*(X \mid y=-1) = \frac{\sum_{i=1}^{n}\sum_{j}^{|\mathbf{B}_i^-|} k_{ij} + \sum_{i=1}^{p}\sum_{j}^{|\mathbf{B}_i^+|} \frac{1-y_{ij}}{2} \cdot k_{ij}}{\sum_{i=1}^{n}\sum_{j}^{|\mathbf{B}_i^-|} 1 + \sum_{i=1}^{p}\sum_{j}^{|\mathbf{B}_i^+|} \frac{1-y_{ij}}{2}}. \quad (11)$$
The difference from a fully conventional KDE is that the $y_{ij}$ here are random variables rather than constants. Consequently, we compute the density as an expectation over the random variables $y_{ij}$:
$$p(X \mid y=1) = \mathbb{E}_{y_{ij}}[p^*(X \mid y=1)] = \frac{\sum_{i=1}^{p}\sum_{j}^{|\mathbf{B}_i^+|} w_{ij} \cdot k_{ij}}{\sum_{i=1}^{p}\sum_{j}^{|\mathbf{B}_i^+|} w_{ij}}, \qquad p(X \mid y=-1) = \mathbb{E}_{y_{ij}}[p^*(X \mid y=-1)] = \frac{\sum_{i=1}^{n}\sum_{j}^{|\mathbf{B}_i^-|} k_{ij} + \sum_{i=1}^{p}\sum_{j}^{|\mathbf{B}_i^+|} (1-w_{ij}) \cdot k_{ij}}{\sum_{i=1}^{n}\sum_{j}^{|\mathbf{B}_i^-|} 1 + \sum_{i=1}^{p}\sum_{j}^{|\mathbf{B}_i^+|} (1-w_{ij})}. \quad (12)$$
Eq. (12) estimates a probability density with kernel functions under an expectation over extra random variables; we call this expectation kernel density estimation (eKDE). The decision scheme given by (6) and (7) then has an interpretation under the MAP criterion. For an instance $X$, MAP decides its label by
$$\hat{y} = \arg\max_{y \in \{-1,1\}} p(y \mid X). \quad (13)$$
From Bayes' theorem we have
$$p(y \mid X) \propto p(X \mid y)\, p(y), \quad (14)$$
so (13) is equivalent to
$$\hat{y} = \mathrm{sgn}\big(p(X \mid y=1)\,p(y=1) - p(X \mid y=-1)\,p(y=-1)\big). \quad (15)$$
As is typical in machine learning, we can aggregate the posterior probabilities to approximate the effective number of points assigned to each class, and estimate the class priors $p(y)$ by the fractions of data points assigned to each class:
$$p(y=1) = \sum_{i=1}^{p}\sum_{j}^{|\mathbf{B}_i^+|} w_{ij} \big/ N, \qquad p(y=-1) = \Big(\sum_{i=1}^{n}\sum_{j}^{|\mathbf{B}_i^-|} 1 + \sum_{i=1}^{p}\sum_{j}^{|\mathbf{B}_i^+|} (1-w_{ij})\Big) \big/ N, \quad (16)$$
where $N$ denotes the total number of data points and can be omitted when computing the decision values. Using $k_{ij}$ as the similarity measure $s_{ij}$, and substituting (12) and (16) into (15), we obtain a discriminant function exactly the same as (7). Therefore our weighted voting scheme complies with the MAP criterion when the proposed eKDE is used for weakly supervised density estimation. D.
Algorithm From the above demonstration, we can determine the label for a segment using Eqs. (6) and (7) equivalent to Eq. (15) that is a weighted difference of class conditional probability densities. On the one hand, the estimation of class conditional probability density p(X|y) is dependent on the post probabilities w ij through (12). On the other hand, the post probabilities w ij are dependent on p(X|y) through a simple deduction using the Bayes' theorem and the sum rule of probability: w ij = p(X ij |y = 1)p(y = 1) p(X ij |y = 1)p(y = 1) + p(X ij |y = −1)p(y = −1) , This mutual dependency naturally induces an iteratively method to solve the problem, which is described in Algorithm 1. To keep consistency with NegMin and CRANE that use L p distance, we adopt Gaussian kernel to measure the similarity, and restrict the covariance matrix to be isotropic, Σ = σ 2 I. We set different bandwidth values for positive class and negative class, and maximizing the overall class density difference to choose values from {0.001, 0.01, 0.1, 1, 10, 100, 1000}. The instance labels are initialized with bag labels, i.e., w ij = 1. The algorithm is terminated when w ij is not changed, which only needs a few iterations in practice. Algorithm 1: WSSA via eKDE. Input: A set of bags D = { B + i } p i=1 { B − i } n i=1 . Output: Instance labels y ij in positive bags. 1 Initialize w ij = 1 ; 2 while not converged do 3 Update p(X ij |y = 1) and p(X ij |y = −1) using (12); 4 Update w ij using(17); 5 end 6 Calculate voting score f (X ij ) for each instance using (6); 7 Return instances labels y ij with (7); As for the convergence, a similar formulation where class conditional probability density and posterior probability are coupled is proposed in [23], and its closed-form solution is derived. In practice, they use an EM-style iterative method to avoid the expensive solution and have proven the convergence of the iterative process. Our eKDE can be considered as a variant of SSKDE in a weakly supervised scenario (their difference is analyzed in Section 3.5), therefore the convergence can be guaranteed. In our experiments, the algorithm usually terminates in a few iterations. Computation cost: Although NegMin and CRANE do not use all of the negative instances to vote, they need to iterate through all of the instances to select the instances eligible to vote. Therefore, the computation cost of our voting scheme on the negative instances is theoretically identical to these negative mining methods. The computation from the instances in the positive bags increases our computation cost, making our method slower than the baselines. E. Difference from existing methods Given that our voting scheme has an interpretation from the eKDE perspective, we analyze its difference from SKDE [20] and SSKDE [23]. By manually defining a conditional probability p(X|X ij ), SKDE obtains a density estimation p(X) from observations with labels, and employs supervised mean shift to seek modes. The main difference is that we try to estimate classspecific density p(X|y) with weak labels, while SKDE aims at marginal density p(X) under full supervision. This difference leads to SKDE cannot output category for instances, and it needs to try various starting points to seek different local maxima to obtain the key instances. Our eKDE interpretation is also relative to SSKDE. It extends the conventional KDE to estimate the posterior probability p(y|X) in a semi-supervised setting. 
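Before turning to the comparison with semi-supervised KDE in detail, the iterative procedure of Algorithm 1 can be sketched compactly in numpy as follows. The isotropic Gaussian kernel, the fixed bandwidths, the convergence tolerance, and the use of the class-specific bandwidths for the final voting similarities are simplifications of the bandwidth search and stopping rule described above.

```python
import numpy as np
from scipy.spatial.distance import cdist

def gaussian_kernel(A, B, sigma):
    return np.exp(-cdist(A, B, "sqeuclidean") / (2.0 * sigma ** 2))

def ekde_annotate(X_pos, X_neg, sigma_pos=1.0, sigma_neg=1.0, max_iter=20):
    """Weakly supervised segment annotation via eKDE (Algorithm 1, simplified).

    X_pos: (P, D) instances from positive bags; X_neg: (N, D) instances from negative bags.
    Returns the soft labels w and hard labels y for the positive-bag instances.
    """
    P, N = len(X_pos), len(X_neg)
    K_pp = gaussian_kernel(X_pos, X_pos, sigma_pos)      # kernels for the positive-class density
    K_pp_neg = gaussian_kernel(X_pos, X_pos, sigma_neg)  # kernels for the negative-class density
    K_pn = gaussian_kernel(X_pos, X_neg, sigma_neg)      # kernels to the negative instances
    w = np.ones(P)                                       # soft labels, initialized with the bag label
    for _ in range(max_iter):
        # Eq. (12): eKDE of the class-conditional densities at each positive-bag instance.
        p_pos = (K_pp * w).sum(axis=1) / max(w.sum(), 1e-12)
        p_neg = (K_pn.sum(axis=1) + (K_pp_neg * (1.0 - w)).sum(axis=1)) / (N + (1.0 - w).sum())
        # Eq. (16): class priors from the effective number of points per class (common factor N dropped).
        prior_pos = w.sum()
        prior_neg = N + (1.0 - w).sum()
        # Eq. (17): posterior of being a positive instance.
        num = p_pos * prior_pos
        w_new = num / (num + p_neg * prior_neg + 1e-12)
        if np.allclose(w_new, w, atol=1e-4):             # stop once the soft labels no longer change
            w = w_new
            break
        w = w_new
    # Eqs. (6)-(7): final voting score and hard decision.
    f = (K_pp * w).sum(axis=1) - K_pn.sum(axis=1) - (K_pp_neg * (1.0 - w)).sum(axis=1)
    return w, np.sign(f)
```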
In a semi-supervised setting, a little fraction of positive and negative instances are labelled to utilize a large amount of data that are totally unlabelled. While in a weakly supervised setting, a large amount of definite negative instances are available, and positive instances are given at bag level containing noises. This causes the difference in the way of using the labelled samples. For labelled instance, SSKDE calculates its posterior probability based partially on unlabelled set, whose relative importance is manually set by a parameter t in [23]. While in a weakly supervised setting, negative sample is large and their labels are definite, so their posterior probabilities do not rely on unlabelled sample. In addition, the weak labels provide good initialization for ambiguous instances to speed up the convergence. F. Second voting for refinement The above Algorithm 1 realizes weakly supervised segment annotation. It deals with each segment separately and cannot ensure the connectivity of the detected segments. However, an object in an image/video must be a continues region. In other words, a set of adjacent regions form an object. We therefore design a second round voting to integrate the fact and refine the annotation results. We first explore the adjacency of regions in an image/video, then for each region, the score is tuned with the mean score of its neighbours. Through this fine tuning, we encourage the adjacent regions to have similar voting scores, and expect the region misclassified as background due to the similar appearance (e.g. a car window versus house window eKDE OBoW [31] MILBoost [6] CRANE [8] when detecting a car) is expected to be corrected by location cues. In our experiments, such a refinement improves the results slightly. IV. EXPERIMENTS Setup and implementations. We compare our method with existing methods: OBoW [31], CRANE [8], NegMin [2], MILBoost [6]. Our weighted self-voting scheme is referred to as eKDE. We consider Pittsburgh Car (PittCar) [1] and YouTube-Objects (YTO) manually annotated in [8]. PittCar dataset consists of 400 images, where 200 images contain cars. The background in the car images are similar street scenes to the non-car images, which is well suited to evaluate negative mining methods. Some examples are shown in the first row of Fig. 1. YTO dataset contains ten classes of videos collected from YouTube, see the last two rows of Fig. 1 for some examples. Tang et al. [8] generated a groundtruthed set by manually annotating the segments for 151 selected shots. To keep consistency with NegMin and CRANE that use L p distance, we adopt Gaussian kernel, and restrict the covariance matrix to be isotropic, Σ = σ 2 I. We use unsupervised methods [32] and [33] to obtain over-segmentation for images and videos respectively. We represent each segment using histogram of bag-of-visual-words obtained by mapping dense SIFT features [34] into 1000 words. For each description vector, we use L2 normalization before feeding to each model. CRANE sweeps the threshold to generate precision/recall (PR) curves to conduct an evaluation. In order to evaluate the discriminant performance of our method, we adopt the more popular evaluation metrics for object localization: the annotation is considered correct when the overlapping of the selected region and the ground-truth is larger than 0.5 for images and 0.125 for videos, then the average precision (the fraction of the correctly annotated images) is calculated. 
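A small sketch of the localization evaluation just described; interpreting "overlapping" as intersection-over-union and representing regions as boolean masks are assumptions.

```python
import numpy as np

def localization_accuracy(pred_masks, gt_masks, thresh=0.5):
    """Fraction of positive images whose selected region overlaps the ground truth enough.

    pred_masks / gt_masks: lists of boolean (H, W) arrays, one pair per positive image.
    thresh: 0.5 for images and 0.125 for videos, as in the protocol above.
    """
    correct = 0
    for pred, gt in zip(pred_masks, gt_masks):
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        if union > 0 and inter / union >= thresh:
            correct += 1
    return correct / len(pred_masks)
```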
For fair comparison, we decide the threshold value for CRANE such that the number of the detected segments are the same as ours. Results and analysis. We list the average precision in Table I, where we can see that our method obtain better results than the baselines. In order to analyze the above quantitative results, we visualize some annotation results in Fig. 2. As expected, our weighted voting method generates the best ranking of the segments belonging to an object. MILBoost is usually able to locate the most discriminant region precisely, but the correctly annotated object regions are often too sparse, which leads to bad AP. For CRANE, only negative instances that are nearby a segment could vote a penalty. This leads to many background regions in a positive image not penalized, and these segments jointly have the identical maximum score 0. Combined with deep features. Following [27], [28], [29], we leverage the DCNN models pre-trained for large scale classification task. We adopt the VGG-NET [35] pre-trained on the ImageNet in our method. For the relatively simple PittCar dataset, we directly extract the feature maps using the original CNN parameters. For YTO dataset, we fine-tune the parameter before extracting features. Please note that we did not use the pixel-wise ground-truth during the tuning to ensure that our method is still weakly supervised. For each image/frame, we resize it to 224 × 224 and extract feature through the VGG model. The feature maps of Conv5-4, Conv4-4, and Conv3-4 layers are collected, and are up-sampled to restore the original size. Then they are concatenated to a h*w*1280 3D tensor. We then max-pool the vectors in a super-pixel to obtain a 1280-dimensional feature representation. Our method using these deep features are referred to as eKDE deep. As shown in Table I, replacing the SIFT feature by deep features in our voting can greatly improve the performance of segment annotation. This demonstrates that our algorithm can take advantage of deep CNN features and obtain much better results. Note that we adopt different evaluation metrics from [29], therefore higher values do not mean our method is better than theirs. V. CONCLUSION AND DISCUSSION In this paper, we revisited the negative mining based methods under a voting framework. These methods can be considered as voting through only negative instances, which leads to their limitations: missing the useful information in positive bags and inability to determine the label of an instance. To overcome these limitations, we proposed a self-voting scheme involving the ambiguous instances as well as the definite negative ones. Each instance voted for its label with a weight computed from similarity. The ambiguous instances were assigned soft labels that were iteratively updated. We also derive an interpretation from eKDE and MAP, and analyzed the difference from the existing methods. In addition, deep CNN features can be included into the method to boost performance significantly. In future work, we will investigate how to construct end-to-end CNN for segment annotation.
4,221
1812.06228
2953137836
Since the labelling for the positive images videos is ambiguous in weakly supervised segment annotation, negative mining based methods that only use the intra-class information emerge. In these methods, negative instances are utilized to penalize unknown instances to rank their likelihood of being an object, which can be considered as a voting in terms of similarity. However, these methods 1) ignore the information contained in positive bags, 2) only rank the likelihood but cannot generate an explicit decision function. In this paper, we propose a voting scheme involving not only the definite negative instances but also the ambiguous positive instances to make use of the extra useful information in the weakly labelled positive bags. In the scheme, each instance votes for its label with a magnitude arising from the similarity, and the ambiguous positive instances are assigned soft labels that are iteratively updated during the voting. It overcomes the limitations of voting using only the negative bags. We also propose an expectation kernel density estimation (eKDE) algorithm to gain further insight into the voting mechanism. Experimental results demonstrate the superiority of our scheme beyond the baselines.
Besides using inter-class information, key instance detection can be accomplished by searching for similar patterns across diverse positive bags. The most classical framework is diverse density (DD) @cite_8 . It defines a similarity-based conditional probability and uses the noisy-or model to define a diverse density, selecting instances with high similarity to many different positive bags and low similarity to negative bags. DD has been widely used as a basis for many methods, including EM-DD @cite_18 , GEM-DD @cite_12 and DD-SVM @cite_4 . However, DD is sensitive to labelling noise. Evidence confidence @cite_3 is proposed to seek the mode over the observed instances rather than in a continuous space, which both simplifies the computation and alleviates this sensitivity. @cite_17 exploit similarity among class-specific features to determine prototypes, which are then used in a voting-based mechanism to select instances with high diverse occurrence.
{ "abstract": [ "We present a new multiple-instance (MI) learning technique (EM-DD) that combines EM with the diverse density (DD) algorithm. EM-DD is a general-purpose MI algorithm that can be applied with boolean or real-value labels and makes real-value predictions. On the boolean Musk benchmarks, the EM-DD algorithm without any tuning significantly outperforms all previous algorithms. EM-DD is relatively insensitive to the number of relevant attributes in the data set and scales up well to large bag sizes. Furthermore, EM-DD provides a new framework for MI learning, in which the MI problem is converted to a single-instance setting by using EM to estimate the instance responsible for the label of the bag.", "Designing computer programs to automatically categorize images using low-level features is a challenging research topic in computer vision. In this paper, we present a new learning technique, which extends Multiple-Instance Learning (MIL), and its application to the problem of region-based image categorization. Images are viewed as bags, each of which contains a number of instances corresponding to regions obtained from image segmentation. The standard MIL problem assumes that a bag is labeled positive if at least one of its instances is positive; otherwise, the bag is negative. In the proposed MIL framework, DD-SVM, a bag label is determined by some number of instances satisfying various properties. DD-SVM first learns a collection of instance prototypes according to a Diverse Density (DD) function. Each instance prototype represents a class of instances that is more likely to appear in bags with the specific label than in the other bags. A nonlinear mapping is then defined using the instance prototypes and maps every bag to a point in a new feature space, named the bag feature space. Finally, standard support vector machines are trained in the bag feature space. We provide experimental results on an image categorization problem and a drug activity prediction problem.", "Multiple-instance learning is a variation on supervised learning, where the task is to learn a concept given positive and negative bags of instances. Each bag may contain many instances, but a bag is labeled positive even if only one of the instances in it falls within the concept. A bag is labeled negative only if all the instances in it are negative. We describe a new general framework, called Diverse Density, for solving multiple-instance learning problems. We apply this framework to learn a simple description of a person from a series of images (bags) containing that person, to a stock selection problem, and to the drug activity prediction problem.", "Visual learning problems, such as object classification and action recognition, are typically approached using extensions of the popular bag-of-words (BoWs) model. Despite its great success, it is unclear what visual features the BoW model is learning. Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and inspiring better models for visual recognition. To answer these questions, this paper presents a method for feature selection and region selection in the visual BoW model. This allows for an intermediate visualization of the features and regions that are important for visual learning. 
The main idea is to assign latent weights to the features or regions, and jointly optimize these latent variables with the parameters of a classifier (e.g., support vector machine). There are four main benefits of our approach: 1) our approach accommodates non-linear additive kernels, such as the popular χ 2 and intersection kernel; 2) our approach is able to handle both regions in images and spatio-temporal regions in videos in a unified way; 3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; and 4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. Experimental results in the PASCAL VOC 2007, MSR Action Dataset II and YouTube illustrate the benefits of our approach.", "We define localized content-based image retrieval as a CBIR task where the user is only interested in a portion of the image, and the rest of the image is irrelevant. In this paper we present a localized CBIR system, ACCIO, that uses labeled images in conjunction with a multiple-instance learning algorithm to first identify the desired object and weight the features accordingly, and then to rank images in the database using a similarity measure that is based upon only the relevant portions of the image. A challenge for localized CBIR is how to represent the image to capture the content. We present and compare two novel image representations, which extend traditional segmentation-based and salient point-based techniques respectively, to capture content in a localized CBIR setting.", "" ], "cite_N": [ "@cite_18", "@cite_4", "@cite_8", "@cite_3", "@cite_12", "@cite_17" ], "mid": [ "2163474322", "2136595724", "2154318594", "1614115966", "2080928936", "" ] }
Weakly supervised segment annotation via expectation kernel density estimation
With the development of communication technology and the popularity of digital cameras, one can easily access massive collections of images and videos. Although such digital media are usually associated with semantic tags indicating the visual concepts that appear inside them, the exact locations of those concepts remain unknown, which makes the data unsuitable for training traditional, fully supervised visual recognition models. As a result, there has been great interest in object localization for images/videos with weak labels [1], [2], [3], [4], [5]. An alternative is weakly supervised segment annotation (WSSA) [6], [7], [8], [9]. For images/videos with weak labels, those containing objects of interest are considered positive bags, while those without objects of interest are negative. Based on unsupervised over-segmentation, images/videos are transformed into segments, and the task is to decide whether each segment corresponds to a given visual concept. Among the state-of-the-art methods for WSSA, there is a simple yet effective branch [2], [10], [8], [11]. These methods employ inter-class or intra-class information by measuring similarities between instances, based on two rules: 1) positive instances are similar patterns that exist in different positive bags; 2) positive instances are dissimilar to all the instances in negative bags. For an unknown instance, they iterate through the labelled instances, and each labelled instance gives a vote for or against the unknown instance being a target. In [2], [8], [12], [13], the authors argue that inter-class information is more useful in a MIL setting, propose to use negative instances to vote against the unknown instances, and select the least penalized instance as the instance of interest. In these methods, only negative instances with definite labels are eligible to vote. It is true that the number of negative instances is much larger than that of potential positive instances, and their labels are also more definite. However, these methods have two limitations: 1) useful information in the positive bags is ignored; 2) only a ranking of likelihood, rather than an explicit decision function, is produced. Although thresholding the ranking can generate a classification, there is no principled strategy for deciding the threshold value. In this paper, we argue that extra useful information can be mined from the weakly labelled positive bags in addition to the definite negative bags, so that instances can be annotated by looking at the weakly labelled data themselves. We therefore propose a self-voting scheme in which all the instances are involved. The contributions of this paper are as follows: 1) a voting scheme involving both negative instances and the ambiguous instances in positive bags is proposed; 2) the proposed voting scheme outputs discriminant results beyond a mere ranking; 3) an expectation kernel density estimation (eKDE) algorithm is proposed to handle weakly labelled data, and an interpretation of the proposed voting scheme is derived from the maximum a posteriori (MAP) criterion and eKDE; 4) relations to existing methods, including negative mining, supervised KDE and semi-supervised KDE, are analyzed. In a WSSA task, two sets of images (the same holds for videos) are given with image-level labels. Each image in the positive set contains an instance of an identical object category, and each image in the negative set does not contain any instance of that object category.
Negative mining methods determine the likelihood of a region in a positive image being an object of interest by its dissimilarity to the negative regions. Besides this inter-class information, our method further takes into account the intra-class information that all the object regions in different positive images should be highly similar, because they come from an identical object category. This extra information improves performance compared with negative mining. The remainder of this paper is organized as follows. Section II reviews the related works. We then detail the methodology in Section III: we first revisit the negative mining methods in a voting framework (III-A), then propose our weighted self-voting scheme (III-B). To gain insight into the mechanism of our scheme, we derive an interpretation from MAP and eKDE (III-C). Differences from other existing methods are also analysed (III-E). Experimental results are reported in Section IV, and Section V concludes this work.

III. METHODOLOGY
In a weakly supervised learning scenario, a label is given at a coarser level and accounts for a collection of instances rather than for an individual instance, usually to reduce annotation effort. A positive label indicates that the collection contains at least one instance of interest, while a negative one indicates that none of the collection is of interest. Such data can be naturally represented by the bags that arise in multiple-instance learning. Without loss of generality, we denote the data by $D = \{B_i, y_i\}_{i=1}^{m}$, where $B_i = \{X_{ij}\}_{j=1}^{|B_i|}$ is a bag, with $X_{ij} \in \mathbb{R}^{D}$ an instance and $y_i \in \{1, -1\}$ a label. The annotation task is to predict $y_{ij} \in \{1, -1\}$ for each instance. For clarity, we separate the notation of positive and negative bags, so the sample set is $D = \{B_i^+\}_{i=1}^{p} \cup \{B_i^-\}_{i=1}^{n}$, where we assume the numbers of positive and negative bags are $p$ and $n = m - p$, respectively.

A. Negative mining revisited
Negative mining methods [2], [8] argue that, in the WSSA scenario, the much larger number of negative instances provides more useful information. They therefore only make use of the negative bags with definite labels, and ignore the ambiguous information in positive bags, to localize objects of interest. For a given positive bag, NegMin [2] selects the instance that minimizes the similarity to its nearest neighbour in the collection of negative instances. Let $s_{ij} \triangleq s(X, X_{ij}) > 0$ denote the similarity of $X$ and $X_{ij}$. NegMin can then be formalized as scoring an instance by
$f_{NegMin}(X) = \min_{i=1}^{n} \sum_{j=1}^{|B_i^-|} -u_{ij} \cdot s_{ij}$, (1)
with $u_{ij} \in \{0, 1\}$ and $\sum_{j=1}^{|B_i^-|} u_{ij} = 1$ for all $i$. Then the $j^*$-th instance with the maximum score in a positive bag $B_i^+$ is considered the instance of interest:
$j^* = \arg\max_{j \in \{1, \cdots, |B_i^+|\}} f_{NegMin}(X_{ij}), \quad i = 1, \cdots, p$. (2)
Similarly, from the negative mining perspective, CRANE [8] selects instances from the negative bags to penalize their nearby instances in the positive bags, using the scoring strategy
$f_{CRANE}(X) = \sum_{i=1}^{n} \sum_{j=1}^{|B_i^-|} -s_{cut}(s_{ij}) \cdot \delta(s_{ij} < \Delta)$. (3)
A naive constant $s_{cut}(\cdot) = 1$ is used in [8]; $\delta(\cdot)$ denotes the indicator function, and $\Delta = \max_t s(X_{ij}, X_t)$ ensures that only the negative instances that have $X$ as their nearest neighbour among the positive bags can vote a penalty. (A minimal sketch of both baseline scores is given below.) In the ambiguous positive bags, negative instances are usually similar to those in the negative bags, while the concept instances are rarely the closest to negative instances.
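For concreteness, here is a small NumPy sketch of the two baseline scores as they are described above. The Gaussian similarity, the nested-list bag layout and the function names are illustrative assumptions rather than the authors' implementation; the NegMin score follows the reading of Eq. (1) as minus the similarity to the nearest negative neighbour, and the CRANE variant follows the textual description (each negative instance penalizes only its nearest neighbour among the candidate positive instances) with the naive constant $s_{cut}(\cdot)=1$.

```python
import numpy as np

def gaussian_similarity(x, z, sigma=1.0):
    # illustrative choice of s(x, z); the paper later adopts an isotropic Gaussian kernel
    return float(np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2)))

def negmin_score(x, neg_bags, sigma=1.0):
    """Eq. (1): minus the similarity to the nearest negative neighbour, so an
    instance that is far from every negative instance receives the highest score."""
    nearest = max(gaussian_similarity(x, z, sigma) for bag in neg_bags for z in bag)
    return -nearest

def crane_scores(pos_instances, neg_bags, sigma=1.0):
    """Eq. (3) with s_cut(.) = 1: each negative instance votes a unit penalty
    against its nearest neighbour among the candidate positive-bag instances."""
    scores = np.zeros(len(pos_instances))
    for bag in neg_bags:
        for z in bag:
            sims = [gaussian_similarity(x, z, sigma) for x in pos_instances]
            scores[int(np.argmax(sims))] -= 1.0
    return scores

# toy usage: three candidate segments from a positive image, two negative bags
rng = np.random.default_rng(0)
neg_bags = [rng.normal(0.0, 1.0, size=(4, 5)) for _ in range(2)]
candidates = [rng.normal(0.0, 1.0, size=5), rng.normal(3.0, 1.0, size=5), rng.normal(0.0, 1.0, size=5)]
print([round(negmin_score(x, neg_bags), 3) for x in candidates])
print(crane_scores(candidates, neg_bags))
```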
Scored in this way, negative instances in the positive bags are penalized more heavily and ranked lower than the potential concept instances. For both (1) and (3), an instance with a higher score is more likely to be a concept instance. For NegMin [2], the instance with the maximum score is considered the object, which makes it infeasible for multiple-instance detection [30]. Although CRANE [8] is able to rank the likelihood of instances being of interest, there is no explicit classification boundary, so a threshold has to be set manually to obtain concept instances. Moreover, these methods are usually sensitive to outliers, since they only employ the instances with extreme similarities for voting.

B. Weighted self-voting
To address the above limitations, we seek a voting scheme that uses both the inter-class information and the underlying intra-class information of positive instances. Suppose we already had instances with definite labels; in a reasonable voting scheme, each instance should vote on an unknown $X$ for its own label according to their similarity, i.e., a more similar instance should vote with a larger magnitude, and vice versa. We therefore weight the vote by similarity, which yields the voting term of $X_{ij}$ with label $y_{ij}$ for $X$:
$f_{ij}(X) = y_{ij} \cdot s_{ij}$. (4)
For weakly labelled data, the labels of some instances are ambiguous. We therefore introduce another weight $w_{ij} \in [0, 1]$ to denote the likelihood of $X_{ij}$ having a positive label, and change the voting term to
$f_{ij}(X) = w_{ij} \cdot 1 \cdot s_{ij}$ for $y_{ij} = 1$; $\quad f_{ij}(X) = (1 - w_{ij}) \cdot (-1) \cdot s_{ij}$ for $y_{ij} = -1$. (5)
In other words, $w_{ij} \triangleq p(y_{ij} = 1 \mid X_{ij})$. Then, given a set of weakly labelled bags, the voting score for an unknown instance $X$ is
$f(X) = \sum_{i=1}^{p} \sum_{j=1}^{|B_i^+|} w_{ij} s_{ij} - \Big( \sum_{i=1}^{n} \sum_{j=1}^{|B_i^-|} s_{ij} + \sum_{i=1}^{p} \sum_{j=1}^{|B_i^+|} (1 - w_{ij}) s_{ij} \Big)$, (6)
where each instance votes with magnitude $s_{ij}$ for its own label, be it definite or ambiguous. A negative instance votes for a definite $-1$, and $w_{ij}$ can be regarded as a soft label introduced for the ambiguous labels. Here we explain intuitively why letting ambiguous instances vote is reasonable; a more formal interpretation from the viewpoints of MAP and eKDE is given in a later section. For an instance $X$, each instance votes for its own label with a value measuring their similarity, as in (4) and (6). A potential object instance has many strong supporters in every positive image, because all the positive images contain the same class of objects. In other words, among all of the votes, the positive values from its supporters are large due to high intra-class similarities, and the negative votes from its protesters are small due to low inter-class similarities. In contrast, a potential negative instance does not have many supporters, because its pattern does not appear in all of the positive images, so all of its positive votes tend to be small. At the same time, it is more likely to be similar to the background, and therefore receives large negative votes that suppress the small positive ones. We expect (6) to generate an explicit label for instance $X$ via
$\hat{y} = \mathrm{sgn}(f(X))$. (7)
Intuitively, for a segment, when the votes for its being an object overwhelm those against, Eq. (6) gives a positive value and the segment is classified as an object, and vice versa. Later in this paper, we will demonstrate that (6) actually complies with the MAP criterion under an expectation kernel density estimation algorithm. A minimal sketch of the score (6) and the decision (7) follows below.
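The aggregated vote in Eq. (6) and the sign decision in Eq. (7) translate almost line by line into code. The sketch below assumes the same nested-list bag layout as before and a user-supplied similarity function; the helper names are hypothetical.

```python
def voting_score(x, pos_bags, neg_bags, w, similarity):
    """Eq. (6): every instance votes for its own (possibly soft) label,
    with a magnitude given by its similarity to the query instance x.
    `w[i][j]` is the soft label of the j-th instance of the i-th positive bag."""
    score = 0.0
    for i, bag in enumerate(pos_bags):          # ambiguous positive-bag instances
        for j, z in enumerate(bag):
            s = similarity(x, z)
            score += w[i][j] * s                # vote for the positive class
            score -= (1.0 - w[i][j]) * s        # complementary vote for the negative class
    for bag in neg_bags:                        # definite negative instances
        for z in bag:
            score -= similarity(x, z)
    return score

def predict_label(x, pos_bags, neg_bags, w, similarity):
    """Eq. (7): the sign of the aggregated vote is the predicted instance label."""
    return 1 if voting_score(x, pos_bags, neg_bags, w, similarity) > 0 else -1
```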
Note that our voting scheme (6) makes use of the ambiguous positive bags as well as the definite negative instances. Both NegMin and CRANE are special cases of formulation (6) that involve only the negative instances. If we only use the negative instances with definite labels, (6) becomes
$f_{neg}(X) = - \sum_{i=1}^{n} \sum_{j=1}^{|B_i^-|} s_{ij}$, (8)
which is simply the aggregated vote of all the negative instances, and is a reduced version of [13]. NegMin picks only the minimum of the votes, as seen in (1). CRANE selects part of the negative instances through $\delta$ to vote, and the voting magnitude is cut off by $s_{cut}$. Since NegMin and CRANE use instances with extreme similarities to vote, they are sensitive to outliers, while our voting scheme is much more robust because it considers all of the samples. In addition, our scheme is able to mine the useful information contained in the ambiguous bags and to output the category of an instance.

C. Interpretation from MAP and eKDE
We interpret the scoring scheme (6) and (7) from the viewpoints of MAP and eKDE. Given an instance $X_{ij}$, we consider its label a binary random variable $y_{ij} \in \{1, -1\}$, where $1$ and $-1$ represent the positive and negative classes, respectively; note that this is a Bernoulli distribution. Describing the probability of $y_{ij} = 1$ by the parameter $w_{ij}$, the distribution can be written as
$p(y_{ij} \mid X_{ij}) = w_{ij}^{\frac{1 + y_{ij}}{2}} (1 - w_{ij})^{\frac{1 - y_{ij}}{2}}$. (9)
Suppose we already had the labels $y_{ij}$ for the instances $X_{ij}$ in each $B_i^+$, and denote a kernel function by $k_{ij} \triangleq k(X, X_{ij})$. We could then estimate the class-conditional probabilities using conventional KDE:
$p^*(X \mid y = 1) = \dfrac{\sum_{i=1}^{p} \sum_{j=1}^{|B_i^+|} \frac{1 + y_{ij}}{2} \, k_{ij}}{\sum_{i=1}^{p} \sum_{j=1}^{|B_i^+|} \frac{1 + y_{ij}}{2}}$, (10)
$p^*(X \mid y = -1) = \dfrac{\sum_{i=1}^{n} \sum_{j=1}^{|B_i^-|} k_{ij} + \sum_{i=1}^{p} \sum_{j=1}^{|B_i^+|} \frac{1 - y_{ij}}{2} \, k_{ij}}{\sum_{i=1}^{n} \sum_{j=1}^{|B_i^-|} 1 + \sum_{i=1}^{p} \sum_{j=1}^{|B_i^+|} \frac{1 - y_{ij}}{2}}$. (11)
In contrast to fully conventional KDE, the difference is that the $y_{ij}$ here are random variables rather than constants. Consequently, we have to compute the density as the expectation over the random variables $y_{ij}$:
$p(X \mid y = 1) = \mathbb{E}_{y_{ij}}[p^*(X \mid y = 1)] = \dfrac{\sum_{i=1}^{p} \sum_{j=1}^{|B_i^+|} w_{ij} \, k_{ij}}{\sum_{i=1}^{p} \sum_{j=1}^{|B_i^+|} w_{ij}}$,
$p(X \mid y = -1) = \mathbb{E}_{y_{ij}}[p^*(X \mid y = -1)] = \dfrac{\sum_{i=1}^{n} \sum_{j=1}^{|B_i^-|} k_{ij} + \sum_{i=1}^{p} \sum_{j=1}^{|B_i^+|} (1 - w_{ij}) \, k_{ij}}{\sum_{i=1}^{n} \sum_{j=1}^{|B_i^-|} 1 + \sum_{i=1}^{p} \sum_{j=1}^{|B_i^+|} (1 - w_{ij})}$. (12)
Eq. (12) estimates the probability densities using kernel functions with an expectation over the extra random variables; we call this expectation kernel density estimation (eKDE). The decision scheme (6) and (7) then has an interpretation under the MAP criterion. For an instance $X$, MAP decides its label by
$\hat{y} = \arg\max_{y \in \{-1, 1\}} p(y \mid X)$. (13)
From Bayes' theorem, $p(y \mid X) \propto p(X \mid y) \, p(y)$, so (13) is equivalent to
$\hat{y} = \mathrm{sgn}\big(p(X \mid y = 1) \, p(y = 1) - p(X \mid y = -1) \, p(y = -1)\big)$. (15)
As is typical in machine learning, we can aggregate the posterior probabilities to approximate the effective number of points assigned to each class, and estimate the class priors $p(y)$ by the fractions of data points assigned to each class:
$p(y = 1) = \sum_{i=1}^{p} \sum_{j=1}^{|B_i^+|} w_{ij} \, / \, N$, $\quad p(y = -1) = \Big( \sum_{i=1}^{n} \sum_{j=1}^{|B_i^-|} 1 + \sum_{i=1}^{p} \sum_{j=1}^{|B_i^+|} (1 - w_{ij}) \Big) / N$, (16)
where $N$ denotes the total number of data points and can be omitted when computing the decision values. Taking the kernel $k_{ij}$ as the similarity $s_{ij}$, and substituting (12) and (16) into (15), we obtain a discriminant function exactly the same as (7). Therefore, our weighted voting scheme complies with the MAP criterion when the proposed eKDE is used for weakly supervised density estimation.
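The eKDE densities in Eq. (12) and the priors in Eq. (16) can be transcribed almost literally. The following sketch does so under the Gaussian-kernel assumption adopted later in the paper, with separate bandwidths for the two classes; the function and variable names are illustrative.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    # isotropic Gaussian kernel, used as both k(., .) and the similarity s(., .)
    return float(np.exp(-np.sum((np.asarray(x) - np.asarray(z)) ** 2) / (2.0 * sigma ** 2)))

def ekde_densities(x, pos_bags, neg_bags, w, sigma_pos=1.0, sigma_neg=1.0):
    """Eq. (12): expectation KDE of p(x | y = +1) and p(x | y = -1).
    `pos_bags` / `neg_bags` are lists of bags (lists of feature vectors);
    `w[i][j]` is the soft label of the j-th instance of the i-th positive bag."""
    num_pos = den_pos = num_neg = den_neg = 0.0
    for i, bag in enumerate(pos_bags):
        for j, z in enumerate(bag):
            num_pos += w[i][j] * gaussian_kernel(x, z, sigma_pos)
            den_pos += w[i][j]
            num_neg += (1.0 - w[i][j]) * gaussian_kernel(x, z, sigma_neg)
            den_neg += (1.0 - w[i][j])
    for bag in neg_bags:
        for z in bag:
            num_neg += gaussian_kernel(x, z, sigma_neg)
            den_neg += 1.0
    return num_pos / max(den_pos, 1e-12), num_neg / max(den_neg, 1e-12)

def class_priors(pos_bags, neg_bags, w):
    """Eq. (16) without the common normaliser N, which cancels in the decision rule (15)."""
    n_pos = sum(w[i][j] for i, bag in enumerate(pos_bags) for j in range(len(bag)))
    n_neg = sum(len(bag) for bag in neg_bags)
    n_neg += sum(1.0 - w[i][j] for i, bag in enumerate(pos_bags) for j in range(len(bag)))
    return n_pos, n_neg
```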
D. Algorithm
From the above derivation, we can determine the label of a segment using Eqs. (6) and (7), which are equivalent to Eq. (15), a weighted difference of class-conditional probability densities. On the one hand, the estimation of the class-conditional density $p(X \mid y)$ depends on the posterior probabilities $w_{ij}$ through (12). On the other hand, the posterior probabilities $w_{ij}$ depend on $p(X \mid y)$ through a simple deduction from Bayes' theorem and the sum rule of probability:
$w_{ij} = \dfrac{p(X_{ij} \mid y = 1) \, p(y = 1)}{p(X_{ij} \mid y = 1) \, p(y = 1) + p(X_{ij} \mid y = -1) \, p(y = -1)}$. (17)
This mutual dependency naturally induces an iterative method, described in Algorithm 1. To keep consistency with NegMin and CRANE, which use the $L_p$ distance, we adopt a Gaussian kernel to measure the similarity and restrict the covariance matrix to be isotropic, $\Sigma = \sigma^2 I$. We set different bandwidth values for the positive and negative classes, choosing them from $\{0.001, 0.01, 0.1, 1, 10, 100, 1000\}$ by maximizing the overall class density difference. The instance labels are initialized with the bag labels, i.e., $w_{ij} = 1$. The algorithm terminates when the $w_{ij}$ no longer change, which in practice only takes a few iterations.

Algorithm 1: WSSA via eKDE.
Input: a set of bags $D = \{B_i^+\}_{i=1}^{p} \cup \{B_i^-\}_{i=1}^{n}$.
Output: instance labels $y_{ij}$ in the positive bags.
1 Initialize $w_{ij} = 1$;
2 while not converged do
3   Update $p(X_{ij} \mid y = 1)$ and $p(X_{ij} \mid y = -1)$ using (12);
4   Update $w_{ij}$ using (17);
5 end
6 Calculate the voting score $f(X_{ij})$ for each instance using (6);
7 Return the instance labels $y_{ij}$ using (7);

As for convergence, a similar formulation in which the class-conditional probability density and the posterior probability are coupled is proposed in [23], and its closed-form solution is derived there. In practice, the authors use an EM-style iterative method to avoid the expensive closed-form solution and prove the convergence of the iterative process. Our eKDE can be considered a variant of SSKDE in a weakly supervised scenario (their difference is analyzed in Section III-E), so convergence is likewise guaranteed; in our experiments the algorithm usually terminates within a few iterations. Computation cost: although NegMin and CRANE do not use all of the negative instances to vote, they still need to iterate through all of the instances to select those eligible to vote. Therefore, the computation cost of our voting scheme on the negative instances is theoretically identical to that of these negative mining methods. The computation over the instances in the positive bags increases our cost, making our method slower than the baselines.
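Putting the pieces together, the following self-contained sketch mirrors Algorithm 1 in vectorised form: all positive-bag instances are stacked into one array, the soft labels are initialised with the bag label, and Eqs. (12), (16) and (17) are alternated until the soft labels stop changing, after which Eqs. (6) and (7) give the hard labels. The flat array layout, the fixed bandwidths and the convergence tolerance are simplifications of the bag-wise description above, not the authors' implementation.

```python
import numpy as np

def wssa_ekde(X_pos, X_neg, sigma_pos=1.0, sigma_neg=1.0, max_iter=50, tol=1e-6):
    """Vectorised sketch of Algorithm 1 (WSSA via eKDE).
    X_pos : (Np, D) array stacking every instance of all positive bags.
    X_neg : (Nn, D) array stacking every instance of all negative bags."""
    def kmat(A, B, sigma):
        # Gaussian kernel matrix between the rows of A and B
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    K_pp = kmat(X_pos, X_pos, sigma_pos)       # kernels feeding the positive-class density
    K_pp_neg = kmat(X_pos, X_pos, sigma_neg)   # same pairs, negative-class bandwidth
    K_pn = kmat(X_pos, X_neg, sigma_neg)       # kernels to the definite negative instances
    w = np.ones(len(X_pos))                    # step 1: soft labels start at the bag label

    for _ in range(max_iter):
        # eq. (12): eKDE of the class-conditional densities at every positive-bag instance
        p_pos = K_pp @ w / max(w.sum(), 1e-12)
        p_neg = (K_pn.sum(axis=1) + K_pp_neg @ (1.0 - w)) / (len(X_neg) + (1.0 - w).sum())
        # eq. (16): class priors from the effective number of points (the 1/N cancels)
        prior_pos, prior_neg = w.sum(), len(X_neg) + (1.0 - w).sum()
        # eq. (17): posterior update of the soft labels
        w_new = p_pos * prior_pos / (p_pos * prior_pos + p_neg * prior_neg + 1e-12)
        if np.max(np.abs(w_new - w)) < tol:    # terminate when w no longer changes
            w = w_new
            break
        w = w_new

    # eqs. (6)-(7): aggregated votes and hard labels for the positive-bag instances
    f = K_pp @ w - K_pn.sum(axis=1) - K_pp_neg @ (1.0 - w)
    return np.where(f > 0, 1, -1), f, w

# toy usage: 6 candidate segments from positive bags, 8 definite negative segments
rng = np.random.default_rng(0)
labels, scores, soft = wssa_ekde(rng.normal(size=(6, 5)), rng.normal(size=(8, 5)))
```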
E. Difference from existing methods
Given that our voting scheme has an eKDE interpretation, we analyze its difference from SKDE [20] and SSKDE [23]. By manually defining a conditional probability $p(X \mid X_{ij})$, SKDE obtains a density estimate $p(X)$ from labelled observations and employs supervised mean shift to seek modes. The main difference is that we estimate the class-specific density $p(X \mid y)$ with weak labels, whereas SKDE targets the marginal density $p(X)$ under full supervision. As a consequence, SKDE cannot output a category for an instance, and it has to try various starting points to reach different local maxima in order to obtain the key instances. Our eKDE interpretation is also related to SSKDE, which extends conventional KDE to estimate the posterior probability $p(y \mid X)$ in a semi-supervised setting. In a semi-supervised setting, a small fraction of positive and negative instances are labelled in order to exploit a large amount of completely unlabelled data, whereas in a weakly supervised setting a large number of definite negative instances are available and the positive instances are only given at the bag level, with noise. This causes a difference in how the labelled samples are used. For a labelled instance, SSKDE computes its posterior probability partially from the unlabelled set, whose relative importance is set manually by a parameter $t$ in [23]. In the weakly supervised setting, the negative sample set is large and its labels are definite, so the posterior probabilities of negative instances do not rely on the unlabelled samples. In addition, the weak labels provide a good initialization for the ambiguous instances, which speeds up convergence.

F. Second voting for refinement
Algorithm 1 above realizes weakly supervised segment annotation. It deals with each segment separately and therefore cannot ensure the connectivity of the detected segments. However, an object in an image/video must be a contiguous region; in other words, a set of adjacent regions forms an object. We therefore design a second round of voting to integrate this fact and refine the annotation results. We first determine the adjacency of the regions in an image/video; then, for each region, its score is tuned with the mean score of its neighbours. Through this fine-tuning we encourage adjacent regions to have similar voting scores, and expect regions misclassified as background because of a similar appearance (e.g., a car window versus a house window when detecting a car) to be corrected by location cues. In our experiments, such a refinement improves the results slightly. A minimal sketch of this refinement step is given below.
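The second-round voting is only described qualitatively above ("the score is tuned with the mean score of its neighbours"), so the following sketch is one plausible reading: each segment's score is blended with the mean score of its adjacent segments. The blending weight `alpha` and the adjacency representation are assumptions, since the paper does not specify them.

```python
import numpy as np

def refine_with_neighbours(scores, adjacency, alpha=0.5):
    """Second-round voting: blend each segment's score with the mean score of its
    spatially adjacent segments, so that connected regions receive similar scores.
    `adjacency[i]` lists the neighbour indices of segment i; `alpha` is an assumed
    blending weight (the paper does not state how the neighbour mean is mixed in)."""
    scores = np.asarray(scores, dtype=float)
    refined = scores.copy()
    for i, neighbours in enumerate(adjacency):
        if len(neighbours) > 0:
            refined[i] = (1.0 - alpha) * scores[i] + alpha * scores[list(neighbours)].mean()
    return refined

# toy usage: segment 2 looks like background on its own but sits between object segments
print(refine_with_neighbours([2.0, 1.5, -0.2, -1.0], adjacency=[[1], [0, 2], [1, 3], [2]]))
```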
IV. EXPERIMENTS
Setup and implementation. We compare our method with existing methods: OBoW [31], CRANE [8], NegMin [2] and MILBoost [6]; our weighted self-voting scheme is referred to as eKDE. We consider the Pittsburgh Car (PittCar) dataset [1] and the YouTube-Objects (YTO) dataset manually annotated in [8]. The PittCar dataset consists of 400 images, 200 of which contain cars. The backgrounds of the car images are street scenes similar to the non-car images, which makes the dataset well suited to evaluating negative mining methods; some examples are shown in the first row of Fig. 1. The YTO dataset contains ten classes of videos collected from YouTube (see the last two rows of Fig. 1 for examples); Tang et al. [8] generated a ground-truthed set by manually annotating the segments of 151 selected shots. To keep consistency with NegMin and CRANE, which use the $L_p$ distance, we adopt a Gaussian kernel and restrict the covariance matrix to be isotropic, $\Sigma = \sigma^2 I$. We use the unsupervised methods [32] and [33] to obtain over-segmentations for images and videos, respectively. We represent each segment with a bag-of-visual-words histogram obtained by mapping dense SIFT features [34] onto 1000 words, and L2-normalize each descriptor before feeding it to each model. CRANE sweeps the threshold to generate precision/recall (PR) curves for evaluation. To evaluate the discriminant performance of our method, we adopt the more common evaluation metric for object localization: an annotation is considered correct when the overlap between the selected region and the ground truth is larger than 0.5 for images and 0.125 for videos, and the average precision (the fraction of correctly annotated images) is then calculated. For a fair comparison, we set the threshold for CRANE such that the number of detected segments is the same as ours.

Results and analysis. The average precision is listed in Table I, where we can see that our method obtains better results than the baselines. To analyze these quantitative results, we visualize some annotations in Fig. 2. As expected, our weighted voting method generates the best ranking of the segments belonging to an object. MILBoost is usually able to locate the most discriminant region precisely, but the correctly annotated object regions are often too sparse, which leads to a poor AP. For CRANE, only negative instances that are near a segment can vote a penalty; consequently, many background regions in a positive image are never penalized, and these segments jointly share the identical maximum score of 0.

Combined with deep features. Following [27], [28], [29], we leverage DCNN models pre-trained for a large-scale classification task, adopting the VGG-NET [35] pre-trained on ImageNet. For the relatively simple PittCar dataset, we extract the feature maps directly with the original CNN parameters; for the YTO dataset, we fine-tune the parameters before extracting features. Note that we did not use the pixel-wise ground truth during this tuning, so our method remains weakly supervised. For each image/frame, we resize it to 224 × 224 and pass it through the VGG model. The feature maps of the Conv5-4, Conv4-4 and Conv3-4 layers are collected, up-sampled to the original size, and concatenated into an h × w × 1280 3D tensor. We then max-pool the vectors within each super-pixel to obtain a 1280-dimensional feature representation (a sketch of this pipeline is given below). Our method using these deep features is referred to as eKDE deep. As shown in Table I, replacing the SIFT features by deep features greatly improves segment annotation, demonstrating that our algorithm can take advantage of deep CNN features and obtain much better results. Note that we adopt different evaluation metrics from [29], so higher values do not imply that our method is better than theirs.

V. CONCLUSION AND DISCUSSION
In this paper, we revisited the negative mining based methods under a voting framework. These methods can be regarded as voting with only the negative instances, which leads to their limitations: they miss the useful information in the positive bags and cannot determine the label of an instance. To overcome these limitations, we proposed a self-voting scheme involving the ambiguous instances as well as the definite negative ones. Each instance votes for its own label with a weight computed from similarity, and the ambiguous instances are assigned soft labels that are iteratively updated. We also derived an interpretation from eKDE and MAP, and analyzed the differences from existing methods. In addition, deep CNN features can be incorporated into the method to boost performance significantly. In future work, we will investigate how to construct an end-to-end CNN for segment annotation.
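The deep-feature construction described above (Conv3-4/Conv4-4/Conv5-4 activations of a pre-trained VGG, upsampled, concatenated to 1280 channels and max-pooled inside each super-pixel) could be sketched as follows with torchvision. The use of VGG-19 and the specific torchvision layer indices are assumptions, since the paper only names the layers; this is an illustrative sketch, not the authors' code.

```python
import torch
import torch.nn.functional as F
import torchvision

def superpixel_deep_features(image, segments, layer_ids=(17, 26, 35)):
    """Collect the (assumed) ReLU outputs of Conv3-4, Conv4-4 and Conv5-4 from an
    ImageNet-pretrained VGG-19, upsample them to the input resolution, concatenate
    them into a 256+512+512 = 1280-channel map, and max-pool inside every super-pixel.
    `image` is a (3, 224, 224) float tensor, `segments` a (224, 224) integer label
    map produced by the unsupervised over-segmentation."""
    backbone = torchvision.models.vgg19(weights="IMAGENET1K_V1").features.eval()
    feats, x = [], image.unsqueeze(0)
    with torch.no_grad():
        for idx, layer in enumerate(backbone):
            x = layer(x)
            if idx in layer_ids:
                feats.append(F.interpolate(x, size=image.shape[1:], mode="bilinear",
                                           align_corners=False))
    fmap = torch.cat(feats, dim=1)[0]                         # (1280, 224, 224)
    descriptors = [fmap[:, segments == s].max(dim=1).values   # max-pool per super-pixel
                   for s in segments.unique()]
    return torch.stack(descriptors)                           # (#segments, 1280)
```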
4,221
1812.06228
2953137836
Since the labelling of the positive images/videos is ambiguous in weakly supervised segment annotation, negative mining based methods that use only the inter-class information have emerged. In these methods, negative instances are utilized to penalize unknown instances in order to rank their likelihood of being an object, which can be viewed as voting in terms of similarity. However, these methods 1) ignore the information contained in the positive bags, and 2) only rank the likelihood but cannot produce an explicit decision function. In this paper, we propose a voting scheme involving not only the definite negative instances but also the ambiguous positive instances, so as to exploit the extra useful information in the weakly labelled positive bags. In this scheme, each instance votes for its own label with a magnitude arising from similarity, and the ambiguous positive instances are assigned soft labels that are iteratively updated during the voting. This overcomes the limitations of voting with only the negative bags. We also propose an expectation kernel density estimation (eKDE) algorithm to gain further insight into the voting mechanism. Experimental results demonstrate the superiority of our scheme over the baselines.
Since we derive a KDE interpretation for our voting scheme, we also review the literature on this subject. KDE possesses the advantages of nonparametric methods for unsupervised density estimation. @cite_35 propose a supervised KDE to make use of labels, and extend mean shift @cite_32 to a supervised version to seek modes. In order to make full use of unlabelled data, @cite_36 @cite_21 propose a semi-supervised KDE to estimate class-specific densities based on a small fraction of labelled data. SSKDE was later extended to manifold structures @cite_5 .
{ "abstract": [ "Multiple-instance learning (MIL) is a variation on supervised learning. Instead of receiving a set of labeled instances, the learner receives a set of bags that are labeled. Each bag contains many instances. The aim of MIL is to classify new bags or instances. In this work, we propose a novel algorithm, MIL-SKDE (multiple-instance learning with supervised kernel density estimation), which addresses MIL problem through an extended framework of ''KDE (kernel density estimation)+mean shift''. Since the KDE+mean shift framework is an unsupervised learning method, we extend KDE to its supervised version, called supervised KDE (SKDE), by considering class labels of samples. To seek the modes (local maxima) of SKDE, we also extend mean shift to a supervised version by taking into account sample labels. SKDE is an alternative of the well-known diverse density estimation (DDE) whose modes are called concepts. Comparing to DDE, SKDE is more convenient to learn multi-modal concepts and robust to labeling noise (mistakenly labeled bags). Finally, each bag is mapped into a concept space where the multi-class SVM classifiers are learned. Experimental results demonstrate that our approach outperforms the state-of-the-art MIL approaches.", "Insufficiency of labeled training data is a major obstacle for automatically annotating large-scale video databases with semantic concepts. Existing semi-supervised learning algorithms based on parametric models try to tackle this issue by incorporating the information in a large amount of unlabeled data. However, they are based on a \"model assumption\" that the assumed generative model is correct, which usually cannot be satisfied in automatic video annotation due to the large variations of video semantic concepts. In this paper, we propose a novel semi-supervised learning algorithm, named Semi Supervised Learning by Kernel Density Estimation (SSLKDE), which is based on a non-parametric method, and therefore the \"model assumption\" is avoided. While only labeled data are utilized in the classical Kernel Density Estimation (KDE) approach, in SSLKDE both labeled and unlabeled data are leveraged to estimate class conditional probability densities based on an extended form of KDE. We also investigate the connection between SSLKDE and existing graph-based semi-supervised learning algorithms. Experiments prove that SSLKDE significantly outperforms existing supervised methods for video annotation.", "Insufficiency of labeled training data is a major obstacle for automatic video annotation. Semi-supervised learning is an effective approach to this problem by leveraging a large amount of unlabeled data. However, existing semi-supervised learning algorithms have not demonstrated promising results in large-scale video annotation due to several difficulties, such as large variation of video content and intractable computational cost. In this paper, we propose a novel semi-supervised learning algorithm named semi-supervised kernel density estimation (SSKDE) which is developed based on kernel density estimation (KDE) approach. While only labeled data are utilized in classical KDE, in SSKDE both labeled and unlabeled data are leveraged to estimate class conditional probability densities based on an extended form of KDE. It is a non-parametric method, and it thus naturally avoids the model assumption problem that exists in many parametric semi-supervised methods. Meanwhile, it can be implemented with an efficient iterative solution process. 
So, this method is appropriate for video annotation. Furthermore, motivated by existing adaptive KDE approach, we propose an improved algorithm named semi-supervised adaptive kernel density estimation (SSAKDE). It employs local adaptive kernels rather than a fixed kernel, such that broader kernels can be applied in the regions with low density. In this way, more accurate density estimates can be obtained. Extensive experiments have demonstrated the effectiveness of the proposed methods.", "A general non-parametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure: the mean shift. For discrete data, we prove the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and, thus, its utility in detecting the modes of the density. The relation of the mean shift procedure to the Nadaraya-Watson estimator from kernel regression and the robust M-estimators; of location is also established. Algorithms for two low-level vision tasks discontinuity-preserving smoothing and image segmentation - are described as applications. In these algorithms, the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.", "The insufficiency of labeled training data is a major obstacle in automatic image annotation. To tackle this problem, we propose a semi-supervised manifold kernel density estimation (SSMKDE) approach based on a recently proposed manifold KDE method. Our contributions are twofold. First, SSMKDE leverages both labeled and unlabeled samples and formulates all data in a manifold structure, which enables a more accurate label prediction. Second, the relationship between KDE-based methods and graph-based semi-supervised learning (SSL) methods is analyzed, which helps to better understand graph-based SSL methods. Extensive experiments demonstrate the superiority of SSMKDE over existing KDE-based and graph-based SSL methods." ], "cite_N": [ "@cite_35", "@cite_36", "@cite_21", "@cite_32", "@cite_5" ], "mid": [ "2139827065", "2075807547", "2018184190", "2067191022", "2009883891" ] }
Shallow learning methods have been outperformed by deep convolutional neural networks (DCNN) significantly on the visual recognition tasks resulting from their powerful feature representation @cite_24 . One approach to boosting the performance of shallow methods is using deep features from pre-trained DCNN models. R-CNN @cite_28 combined SVM with DCNN features to boost the object detection performance. DCNN features have also been incorporated into weakly supervised visual recognition tasks. @cite_7 concatenate multiple convolutional outputs and max-pool them to represent the super-pixel features. Observing that a region probably belongs to an object if many channels of the hidden-layer activation fire simultaneously, @cite_27 select the object regions using aggregation map, then max-pool the concatenation of multiple-layer activations to represent the image. Similarly, based on the findings that the hidden-layer activations of a pre-trained object recognition network usually fire up on objects rather than background, @cite_2 leverage these masks for weakly supervised semantic segmentation.
{ "abstract": [ "As an interesting and emerging topic, co-saliency detection aims at simultaneously extracting common salient objects from a group of images. On one hand, traditional co-saliency detection approaches rely heavily on human knowledge for designing hand-crafted metrics to possibly reflect the faithful properties of the co-salient regions. Such strategies, however, always suffer from poor generalization capability to flexibly adapt various scenarios in real applications. On the other hand, most current methods pursue co-saliency detection in unsupervised fashions. This, however, tends to weaken their performance in real complex scenarios because they are lack of robust learning mechanism to make full use of the weak labels of each image. To alleviate these two problems, this paper proposes a new SP-MIL framework for co-saliency detection, which integrates both multiple instance learning (MIL) and self-paced learning (SPL) into a unified learning framework. Specifically, for the first problem, we formulate the co-saliency detection problem as a MIL paradigm to learn the discriminative classifiers to detect the co-saliency object in the “instance-level”. The formulated MIL component facilitates our method capable of automatically producing the proper metrics to measure the intra-image contrast and the inter-image consistency for detecting co-saliency in a purely self-learning way. For the second problem, the embedded SPL paradigm is able to alleviate the data ambiguity under the weak supervision of co-saliency detection and guide a robust learning manner in complex scenarios. Experiments on benchmark datasets together with multiple extended computer vision applications demonstrate the superiority of the proposed framework beyond the state-of-the-arts.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "The latest generation of Convolutional Neural Networks (CNN) have achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest of the community in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. 
This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on a common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost. Source code and models to reproduce the experiments in the paper is made publicly available.", "", "Pixel-level annotations are expensive and time consuming to obtain. Hence, weak supervision using only image tags could have a significant impact in semantic segmentation. Recently, CNN-based methods have proposed to fine-tune pre-trained networks using image tags. Without additional information, this leads to poor localization accuracy. This problem, however, was alleviated by making use of objectness priors to generate foreground background masks. Unfortunately these priors either require pixel-level annotations bounding boxes, or still yield inaccurate object boundaries. Here, we propose a novel method to extract accurate masks from networks pre-trained for the task of object recognition, thus forgoing external objectness modules. We first show how foreground background masks can be obtained from the activations of higher-level convolutional layers of a network. We then show how to obtain multi-class masks by the fusion of foreground background ones with information extracted from a weakly-supervised localization network. Our experiments evidence that exploiting these masks in conjunction with a weakly-supervised training loss yields state-of-the-art tag-based weakly-supervised semantic segmentation results." ], "cite_N": [ "@cite_7", "@cite_28", "@cite_24", "@cite_27", "@cite_2" ], "mid": [ "2358876993", "2102605133", "1994002998", "", "2624650145" ] }
Weakly supervised segment annotation via expectation kernel density estimation
With the development of communication technology and the popularity of digital cameras, one can easily access massive images/videos. Although these digital multimedia are usually associated with semantic tags indicating certain visual concepts appearing inside, the exact locations remain unknown, leading to their infeasibility for training traditional supervised visual recognition models. As a result, there has been a great interest in object localization for images/videos with weak labels [1], [2], [3], [4], [5]. An alternative is weakly supervised segment annotation (WSSA) [6], [7], [8], [9]. For images/videos with weak labels, those with objects of interest inside are considered as positive bags, while those without objects of interest are negative. Based on unsupervised over-segmentation, images/videos are transformed into segments, and the task is to distinguish whether they correspond to a given visual concept. Among the state-of-the-art methods for weakly supervised segment annotation (WSSA), there is a simple yet effective branch [2], [10], [8], [11]. They employ the inter-class or intra-class information by measuring similarities between instances based on two rules: 1) Positive instances are similar patterns existing in different positive bags; 2) Positive instances are dissimilar to all the instances in negative bags. For an unknown instance, they iterates through the labelled instances, and each gives a vote for or against its being a target. In [2], [8], [12], [13], the authors insist that inter-class information is more useful in a MIL setting, and propose to use negative instances to vote against the unknown instances, and select that least penalized as the instance of interest. In these methods, only negative instances with definite labels are eligible to vote. It is true that the number of negative instances is much larger than that of potential positive instances, and the labels are also more definite. However useful information in positive bags is ignored. However, there are two limitations for these methods. 1) Useful information in positive bags is ignored. 2) Only a ranking of likelihood instead of an explicit decision function is output. Although thresholding the ranking can generate a classification, there is not a strategy to theoretically decide the threshold value. In this paper, we argue that extra useful information can be mined from the weakly labelled positive bags besides the definite negative bags. Consequently the instances can be annotated by looking at the weakly labelled data themselves. Therefore we proposed a self-voting scheme, where all the instances are involved. The contributions of this paper are as follows: 1) A voting scheme involving both negative instances and ambiguous instances in positive bags is proposed. 2) The proposed voting scheme can output discriminant results beyond just ranking. 3) An expectation kernel density estimation (eKDE) algorithm is proposed to handle weakly labelled data. A deep interpretation is provided from the maximum posterior criterion (MAP) and eKDE for the proposed voting scheme 4) Relations to existing methods including negative mining, supervised KDE and semi-supervised KDE, are analyzed. In a WSSA task, two sets of images (the same for videos) are given with image-level labels. Each image in the positive set contains an instance of an identical object category, and each image in the negative set does not contain any instance of the object category. 
Negative mining methods determine the likelihood that a region in a positive image is an object of interest by its dissimilarity to the negative regions. Besides this inter-class information, our method further takes into account the intra-class information that all the object regions in different positive images should have high similarity because they come from an identical object category. The extra information improves the performance compared to negative mining. The remainder of this paper is organized as follows. Section II reviews the related works. We then detail the methodology in Section III. We first revisit the negative mining methods in a voting framework (III-A), then propose our weighted self-voting scheme (III-B). To get an insight into the mechanism of our scheme, we derive an interpretation from MAP and eKDE (III-C). Differences from other existing methods are also analysed (III-E). Experimental results are reported in Section IV. Section V concludes this work. III. METHODOLOGY In a weakly supervised learning scenario, a label is given at a coarser level and accounts for a collection of instances rather than for individual instances, usually to reduce the labelling effort. A positive label indicates that the collection contains at least one instance of interest, while a negative one indicates that none of the collection is of interest. Such data can be naturally represented by bags that arise from multiple-instance learning. Without loss of generality, we denote such data by $D = \{B_i, y_i\}_{i=1}^{m}$, where $B_i = \{X_{ij}\}_{j=1}^{|B_i|}$ is a bag, with $X_{ij} \in \mathbb{R}^{D}$ an instance and $y_i \in \{1, -1\}$ a label. The data annotation is to predict $y_{ij} \in \{1, -1\}$ for each instance. For the sake of clarity, we separate the notations of positive and negative bags, so the sample set is $D = \{B_i^+\}_{i=1}^{p} \cup \{B_i^-\}_{i=1}^{n}$, where we assume that the numbers of positive bags and negative bags are $p$ and $n = m - p$, respectively. A. Negative mining revisited Negative mining methods [2], [8] insist that, in the scenario of WSSA, the much larger amount of negative instances provides more useful information. Therefore they only make use of the negative bags with definite labels, and ignore the ambiguous information of positive bags, to localize objects of interest. For a given positive bag, NegMin [2] selects the instance that minimizes the similarity to the nearest neighbour in the collection of the negative instances. Let $s_{ij} \triangleq s(X, X_{ij}) > 0$ denote the similarity of $X$ and $X_{ij}$. The notion of NegMin can be formalized as follows. It scores an instance by $f_{\mathrm{NegMin}}(X) = \min \sum_{i=1}^{n} \sum_{j}^{|B_i^-|} -u_{ij} \cdot s_{ij}$ (1), with $u_{ij} \in \{0, 1\}$ and $\sum_{j=1}^{|B_i^-|} u_{ij} = 1 \ \forall i$. Then the $j^*$-th instance with the maximum score in a positive bag $B_i^+$ is considered as the instance of interest: $j^* = \arg\max_{j \in \{1, \cdots, |B_i^+|\}} f_{\mathrm{NegMin}}(X_{ij}), \ i = 1, \cdots, p$ (2). Similarly, from the negative mining perspective, CRANE [8] selects instances from the negative bags to penalize their nearby instances in the positive bags, by the following scoring strategy: $f_{\mathrm{CRANE}}(X) = \sum_{i=1}^{n} \sum_{j}^{|B_i^-|} -s_{\mathrm{cut}}(s_{ij}) \cdot \delta(s_{ij} < \Delta)$ (3). A naive constant $s_{\mathrm{cut}}(\cdot) = 1$ is used in [8]. $\delta(\cdot)$ denotes the indicator function, and $\Delta = \max_t s(X_{ij}, X_t)$ ensures that only the negative instances that have $X$ as their nearest neighbour in the positive bags can vote a penalty. In the ambiguous positive bags, negative instances are usually similar to those in negative bags, while the concept instances are rarely the closest to negative instances.
As a result, negative instances will be penalized more, and scored lower than the potential concept instances. For both (1) and (3), instances scored higher are more likely concept instances. For NegMin [2], the instance with the maximum score is considered as the object, which makes it infeasible for multiple instance detection [30]. Although CRANE [8] is able to rank the likelihood of the instances being of interest, there is no explicit classification boundary, so a threshold has to be set manually to generate concept instances. Moreover, these methods are usually sensitive to outliers, since they only employ the instances with extreme similarities for voting. B. Weighted self-voting In order to address the above limitations, we seek a voting scheme using both inter-class information and the underlying intra-class information of positive instances. Suppose we already have instances with definite labels; to develop a reasonable voting scheme, each instance should vote on an unknown $X$ for its own label according to their similarity, i.e., for a more similar instance, its voting magnitude should be larger, and vice versa. We therefore weight the voting by similarity, and obtain a voting term of $X_{ij}$ with label $y_{ij}$ for $X$: $f_{ij}(X) = y_{ij} \cdot s_{ij}$ (4). For the case of weakly labelled data, the labels of some instances are ambiguous. We therefore introduce another weight $w_{ij} \in [0, 1]$ to denote the likelihood of $X_{ij}$ having a positive label, and change the voting term to: $f_{ij}(X) = w_{ij} \cdot 1 \cdot s_{ij}$ for $y_{ij} = 1$; $f_{ij}(X) = (1 - w_{ij}) \cdot (-1) \cdot s_{ij}$ for $y_{ij} = -1$ (5). In other words, $w_{ij} \triangleq p(y_{ij} = 1 | X_{ij})$. Then, given a set of weakly labelled bags, we can obtain the voting score for an unknown instance $X$ as follows: $f(X) = \sum_{i=1}^{p} \sum_{j}^{|B_i^+|} w_{ij} s_{ij} - \big( \sum_{i=1}^{n} \sum_{j}^{|B_i^-|} s_{ij} + \sum_{i=1}^{p} \sum_{j}^{|B_i^+|} (1 - w_{ij}) s_{ij} \big)$ (6), where we can see that each instance votes with a magnitude $s_{ij}$ for its own label, which is either definite or ambiguous. A negative instance votes for a definite $-1$, and $w_{ij}$ can be considered as a soft label introduced for ambiguous labels. Here we intuitively explain why employing ambiguous instances to vote is reasonable; a more formal interpretation from the viewpoints of MAP and eKDE is given in a later section. For an instance $X$, each instance votes for its own label with a value measuring their similarity, as in (4) and (6). A potential object instance has many strong supporters in each positive image, because all the positive images contain the same class of objects. In other words, among all of the votes, the positive values from its supporters are large due to high intra-class similarities, and the negative votes from its protesters are small due to low inter-class similarities. In contrast, a potential negative instance does not have many supporters, because its pattern does not appear in all of the positive images, and all of its positive votes tend to be small. It is also more likely to be similar to the background, and thus obtains large negative vote values that suppress the small positive ones. We expect (6) to be able to generate an explicit label for instance $X$ by: $y = \mathrm{sgn}(f(X))$ (7). Intuitively, for a segment, when the voting for its being an object overwhelms that against its being an object, Eq. (6) gives a positive value and classifies it as an object, and vice versa. Later in this paper, we will demonstrate that (6) actually complies with the MAP criterion under an expectation kernel density estimation algorithm.
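The voting rules above reduce to a few lines of array arithmetic. The following sketch is illustrative only and is not the authors' code: it assumes segments are already represented as feature vectors, uses a Gaussian similarity as in the paper, and shows the purely negative aggregate vote of Eq. (8) next to the weighted self-vote of Eqs. (6)-(7) with the soft labels w held fixed.

```python
import numpy as np

def gaussian_similarity(x, Y, sigma=1.0):
    """Similarity s(x, Y_j) = exp(-||x - Y_j||^2 / (2 sigma^2)) for every row of Y."""
    d2 = np.sum((Y - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def negative_vote(x, X_neg, sigma=1.0):
    """Eq. (8): aggregate penalty from all definite negative instances."""
    return -np.sum(gaussian_similarity(x, X_neg, sigma))

def weighted_self_vote(x, X_pos, w_pos, X_neg, sigma=1.0):
    """Eqs. (6)-(7): positives vote +w*s, negatives and the (1-w) share of positives vote -s."""
    s_pos = gaussian_similarity(x, X_pos, sigma)
    s_neg = gaussian_similarity(x, X_neg, sigma)
    f = np.sum(w_pos * s_pos) - (np.sum(s_neg) + np.sum((1.0 - w_pos) * s_pos))
    return f, np.sign(f)   # score and the explicit label of Eq. (7)

# Toy example with hypothetical 2-D "segment descriptors".
rng = np.random.default_rng(0)
X_pos = rng.normal(loc=2.0, size=(20, 2))    # instances from positive bags
w_pos = np.ones(len(X_pos))                  # soft labels, initialised to the bag label
X_neg = rng.normal(loc=-2.0, size=(50, 2))   # definite negative instances
query = np.array([1.8, 2.1])
print(negative_vote(query, X_neg))
print(weighted_self_vote(query, X_pos, w_pos, X_neg))
```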
Note that our voting scheme (6) makes use of the ambiguous positive bags as well as the definite negative instances. Both NegMin and CRANE are special cases of the formulation (6) that only involve negative instances. If we only use the negative instances with definite labels, (6) becomes: $f_{\mathrm{neg}}(X) = -\sum_{i=1}^{n} \sum_{j}^{|B_i^-|} s_{ij}$ (8), which is simply the aggregated vote of all the negative instances, and is a reduced version of [13]. NegMin only picks the minimum of the voting, as seen in (1). CRANE selects part of the negative instances by $\delta$ to vote, and the voting magnitude is cut off by $s_{\mathrm{cut}}$. Since NegMin and CRANE use instances with extreme similarities to vote, they are sensitive to outliers, while our voting scheme is much more robust because it considers all of the samples. In addition, our scheme is able to mine the useful information contained in the ambiguous bags, and to output the category of an instance. C. Interpretation from MAP and eKDE We interpret the scoring scheme (6) and (7) from the viewpoints of MAP and eKDE. Given an instance $X_{ij}$, we consider its label as a binary random variable $y_{ij} \in \{1, -1\}$, where $1$ and $-1$ represent the positive class and the negative class respectively. Note that this is a Bernoulli distribution. When we describe the probability of $y_{ij} = 1$ by the parameter $w_{ij}$, the probability distribution can be written in the form $p(y_{ij} | X_{ij}) = w_{ij}^{\frac{1 + y_{ij}}{2}} (1 - w_{ij})^{\frac{1 - y_{ij}}{2}}$ (9). Suppose we already had the labels $y_{ij}$ for the instances $X_{ij}$ in each $B_i^+$, and denote a kernel function by $k_{ij} \triangleq k(X, X_{ij})$; we could then estimate the class conditional probabilities using conventional KDE as follows: $p^*(X | y = 1) = \frac{\sum_{i=1}^{p} \sum_{j}^{|B_i^+|} \frac{1 + y_{ij}}{2} \cdot k_{ij}}{\sum_{i=1}^{p} \sum_{j}^{|B_i^+|} \frac{1 + y_{ij}}{2}}$ (10), $p^*(X | y = -1) = \frac{\sum_{i=1}^{n} \sum_{j}^{|B_i^-|} k_{ij} + \sum_{i=1}^{p} \sum_{j}^{|B_i^+|} \frac{1 - y_{ij}}{2} \cdot k_{ij}}{\sum_{i=1}^{n} \sum_{j}^{|B_i^-|} 1 + \sum_{i=1}^{p} \sum_{j}^{|B_i^+|} \frac{1 - y_{ij}}{2}}$ (11). In contrast to fully supervised conventional KDE, the difference is that the $y_{ij}$ here are random variables rather than constants. Consequently we have to compute the density using the expectation over the random variables $y_{ij}$: $p(X | y = 1) = E_{y_{ij}}[p^*(X | y = 1)] = \frac{\sum_{i=1}^{p} \sum_{j}^{|B_i^+|} w_{ij} \cdot k_{ij}}{\sum_{i=1}^{p} \sum_{j}^{|B_i^+|} w_{ij}}$, $p(X | y = -1) = E_{y_{ij}}[p^*(X | y = -1)] = \frac{\sum_{i=1}^{n} \sum_{j}^{|B_i^-|} k_{ij} + \sum_{i=1}^{p} \sum_{j}^{|B_i^+|} (1 - w_{ij}) \cdot k_{ij}}{\sum_{i=1}^{n} \sum_{j}^{|B_i^-|} 1 + \sum_{i=1}^{p} \sum_{j}^{|B_i^+|} (1 - w_{ij})}$ (12). Eq. (12) estimates probability densities using kernel functions with an expectation over extra random variables; we call this expectation kernel density estimation (eKDE). The decision scheme (6) and (7) then has an interpretation in terms of the MAP criterion. For an instance $X$, MAP decides its label by $\hat{y} = \arg\max_{y \in \{-1, 1\}} p(y | X)$ (13). From Bayes' theorem, we have $p(y | X) \propto p(X | y) p(y)$ (14). Then (13) is equivalent to $\hat{y} = \mathrm{sgn}\big(p(X | y = 1) p(y = 1) - p(X | y = -1) p(y = -1)\big)$ (15). As a typical approach in machine learning, we can aggregate the posterior probabilities to approximate the effective number of points assigned to a class, and estimate the class priors $p(y)$ by the fractions of the data points assigned to each of the classes: $p(y = 1) = \sum_{i=1}^{p} \sum_{j}^{|B_i^+|} w_{ij} / N$, $p(y = -1) = \big( \sum_{i=1}^{n} \sum_{j}^{|B_i^-|} 1 + \sum_{i=1}^{p} \sum_{j}^{|B_i^+|} (1 - w_{ij}) \big) / N$ (16), where $N$ denotes the total number of data points and can be omitted when computing the decision values. Using $k_{ij}$ to measure the similarity $s_{ij}$, and substituting (12) and (16) into (15), we obtain a discriminant function exactly the same as (7). Therefore our weighted voting scheme complies with the MAP criterion when the proposed eKDE is used for weakly supervised density estimation.
D. Algorithm From the above demonstration, we can determine the label of a segment using Eqs. (6) and (7), equivalently Eq. (15), i.e. a weighted difference of class conditional probability densities. On the one hand, the estimate of the class conditional probability density $p(X|y)$ depends on the posterior probabilities $w_{ij}$ through (12). On the other hand, the posterior probabilities $w_{ij}$ depend on $p(X|y)$ through a simple deduction using Bayes' theorem and the sum rule of probability: $w_{ij} = \frac{p(X_{ij} | y = 1) p(y = 1)}{p(X_{ij} | y = 1) p(y = 1) + p(X_{ij} | y = -1) p(y = -1)}$ (17). This mutual dependency naturally induces an iterative method to solve the problem, which is described in Algorithm 1. To keep consistency with NegMin and CRANE that use the $L_p$ distance, we adopt a Gaussian kernel to measure the similarity, and restrict the covariance matrix to be isotropic, $\Sigma = \sigma^2 I$. We set different bandwidth values for the positive class and the negative class, and maximize the overall class density difference to choose values from $\{0.001, 0.01, 0.1, 1, 10, 100, 1000\}$. The instance labels are initialized with the bag labels, i.e., $w_{ij} = 1$. The algorithm is terminated when the $w_{ij}$ no longer change, which only needs a few iterations in practice. Algorithm 1 (WSSA via eKDE). Input: a set of bags $D = \{B_i^+\}_{i=1}^{p} \cup \{B_i^-\}_{i=1}^{n}$; Output: instance labels $y_{ij}$ in positive bags. 1) Initialize $w_{ij} = 1$; 2) while not converged do: 3) update $p(X_{ij} | y = 1)$ and $p(X_{ij} | y = -1)$ using (12); 4) update $w_{ij}$ using (17); 5) end; 6) calculate the voting score $f(X_{ij})$ for each instance using (6); 7) return the instance labels $y_{ij}$ with (7). As for the convergence, a similar formulation in which the class conditional probability density and the posterior probability are coupled is proposed in [23], and its closed-form solution is derived. In practice, they use an EM-style iterative method to avoid the expensive solution and have proven the convergence of the iterative process. Our eKDE can be considered as a variant of SSKDE in a weakly supervised scenario (their difference is analyzed in Section III-E), therefore the convergence can be guaranteed. In our experiments, the algorithm usually terminates in a few iterations. Computation cost: Although NegMin and CRANE do not use all of the negative instances to vote, they need to iterate through all of the instances to select those eligible to vote. Therefore, the computation cost of our voting scheme on the negative instances is theoretically identical to that of these negative mining methods. The computation for the instances in the positive bags increases our computation cost, making our method slower than the baselines. E. Difference from existing methods Given that our voting scheme has an interpretation from the eKDE perspective, we analyze its difference from SKDE [20] and SSKDE [23]. By manually defining a conditional probability $p(X | X_{ij})$, SKDE obtains a density estimate $p(X)$ from labelled observations, and employs supervised mean shift to seek modes. The main difference is that we try to estimate the class-specific density $p(X|y)$ with weak labels, while SKDE aims at the marginal density $p(X)$ under full supervision. As a result, SKDE cannot output a category for an instance, and it needs to try various starting points to seek different local maxima to obtain the key instances. Our eKDE interpretation is also related to SSKDE, which extends conventional KDE to estimate the posterior probability $p(y|X)$ in a semi-supervised setting.
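Referring back to Algorithm 1 in subsection D, the coupling between Eqs. (12) and (17) can be written as a short fixed-point loop. The following NumPy sketch is an illustration under simplifying assumptions and is not the authors' implementation: bags are flattened into one positive-bag matrix and one negative matrix, a single shared Gaussian bandwidth replaces the per-class bandwidth search, and convergence is declared when w stops changing.

```python
import numpy as np

def kernel_matrix(A, B, sigma=1.0):
    """Gaussian kernel k(a, b) between every row of A and every row of B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def ekde_wssa(X_pos, X_neg, sigma=1.0, max_iter=50, tol=1e-4):
    """Iterate eKDE density estimates (Eq. 12) and soft labels (Eq. 17), as in Algorithm 1."""
    w = np.ones(len(X_pos))                    # soft labels initialised with the bag label
    K_pp = kernel_matrix(X_pos, X_pos, sigma)  # kernels among positive-bag instances
    K_pn = kernel_matrix(X_pos, X_neg, sigma)  # kernels to the definite negative instances
    n_neg = X_neg.shape[0]
    for _ in range(max_iter):
        # Eq. (12): class-conditional densities at every positive-bag instance.
        p_x_pos = (K_pp @ w) / max(w.sum(), 1e-12)
        neg_mass = 1.0 - w
        p_x_neg = (K_pn.sum(1) + K_pp @ neg_mass) / (n_neg + neg_mass.sum())
        # Eq. (16): class priors from the effective number of points per class (N omitted).
        prior_pos = w.sum()
        prior_neg = n_neg + neg_mass.sum()
        # Eq. (17): Bayes update of the soft labels.
        num = p_x_pos * prior_pos
        w_new = num / (num + p_x_neg * prior_neg + 1e-12)
        if np.max(np.abs(w_new - w)) < tol:    # stop when w no longer changes
            w = w_new
            break
        w = w_new
    # Eqs. (6)-(7): final voting score and hard labels for the positive-bag instances.
    f = K_pp @ w - (K_pn.sum(1) + K_pp @ (1.0 - w))
    return np.sign(f), w

rng = np.random.default_rng(1)
X_pos = np.vstack([rng.normal(2.0, 1.0, (15, 2)),    # object-like segments
                   rng.normal(-2.0, 1.0, (15, 2))])  # background segments inside positive bags
X_neg = rng.normal(-2.0, 1.0, (60, 2))
labels, soft = ekde_wssa(X_pos, X_neg)
print(labels[:5], soft[:5])
```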
In a semi-supervised setting, a small fraction of positive and negative instances are labelled in order to exploit a large amount of totally unlabelled data, while in a weakly supervised setting a large number of definite negative instances are available and the positive instances are given at bag level and contain noise. This causes a difference in the way the labelled samples are used. For a labelled instance, SSKDE calculates its posterior probability based partially on the unlabelled set, whose relative importance is manually set by a parameter $t$ in [23]. In a weakly supervised setting, by contrast, the negative sample is large and its labels are definite, so the posterior probabilities of negative instances do not rely on the unlabelled sample. In addition, the weak labels provide a good initialization for the ambiguous instances, which speeds up the convergence. F. Second voting for refinement Algorithm 1 above realizes weakly supervised segment annotation. It deals with each segment separately and cannot ensure the connectivity of the detected segments. However, an object in an image/video must be a contiguous region; in other words, a set of adjacent regions forms an object. We therefore design a second round of voting to incorporate this fact and refine the annotation results. We first explore the adjacency of the regions in an image/video; then, for each region, the score is tuned with the mean score of its neighbours. Through this fine tuning, we encourage adjacent regions to have similar voting scores, and a region misclassified as background because of a similar appearance (e.g. a car window versus a house window when detecting a car) is expected to be corrected by location cues. In our experiments, such a refinement improves the results slightly. IV. EXPERIMENTS Setup and implementations. We compare our method with existing methods: OBoW [31], CRANE [8], NegMin [2], MILBoost [6]. Our weighted self-voting scheme is referred to as eKDE. We consider Pittsburgh Car (PittCar) [1] and YouTube-Objects (YTO) manually annotated in [8]. The PittCar dataset consists of 400 images, of which 200 contain cars. The backgrounds in the car images are street scenes similar to the non-car images, which makes the dataset well suited to evaluating negative mining methods. Some examples are shown in the first row of Fig. 1. The YTO dataset contains ten classes of videos collected from YouTube; see the last two rows of Fig. 1 for some examples. Tang et al. [8] generated a ground-truthed set by manually annotating the segments of 151 selected shots. To keep consistency with NegMin and CRANE that use the $L_p$ distance, we adopt a Gaussian kernel and restrict the covariance matrix to be isotropic, $\Sigma = \sigma^2 I$. We use the unsupervised methods [32] and [33] to obtain over-segmentations for images and videos respectively. We represent each segment using a bag-of-visual-words histogram obtained by mapping dense SIFT features [34] into 1000 words. Each description vector is L2-normalized before being fed to each model. CRANE sweeps the threshold to generate precision/recall (PR) curves for evaluation. In order to evaluate the discriminative performance of our method, we adopt the more popular evaluation metric for object localization: the annotation is considered correct when the overlap between the selected region and the ground truth is larger than 0.5 for images and 0.125 for videos, and the average precision (the fraction of correctly annotated images) is then calculated.
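Returning to the second-round voting of Section III-F, the refinement only needs an adjacency structure over the segments. A minimal sketch, assuming adjacency is given as per-segment neighbour lists and that the blending weight alpha between a segment's own score and its neighbours' mean score is a free choice (the paper does not specify one):

```python
import numpy as np

def refine_scores(scores, neighbours, alpha=0.5):
    """Second-round voting: blend each segment's score with the mean score of its
    adjacent segments. 'alpha' is an illustrative choice, not a value from the paper."""
    refined = np.array(scores, dtype=float)
    for i, nbrs in enumerate(neighbours):
        if nbrs:                                   # keep isolated segments unchanged
            refined[i] = alpha * scores[i] + (1 - alpha) * np.mean([scores[j] for j in nbrs])
    return refined

# Toy example: segment 2 looks like background (negative score) but all its
# neighbours are confidently object-like, so refinement pulls it up.
scores = np.array([1.2, 0.9, -0.3, 1.1, -1.5])
neighbours = [[1], [0, 2], [1, 3], [2], []]        # hypothetical adjacency lists
print(refine_scores(scores, neighbours))
```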
For fair comparison, we decide the threshold value for CRANE such that the number of the detected segments are the same as ours. Results and analysis. We list the average precision in Table I, where we can see that our method obtain better results than the baselines. In order to analyze the above quantitative results, we visualize some annotation results in Fig. 2. As expected, our weighted voting method generates the best ranking of the segments belonging to an object. MILBoost is usually able to locate the most discriminant region precisely, but the correctly annotated object regions are often too sparse, which leads to bad AP. For CRANE, only negative instances that are nearby a segment could vote a penalty. This leads to many background regions in a positive image not penalized, and these segments jointly have the identical maximum score 0. Combined with deep features. Following [27], [28], [29], we leverage the DCNN models pre-trained for large scale classification task. We adopt the VGG-NET [35] pre-trained on the ImageNet in our method. For the relatively simple PittCar dataset, we directly extract the feature maps using the original CNN parameters. For YTO dataset, we fine-tune the parameter before extracting features. Please note that we did not use the pixel-wise ground-truth during the tuning to ensure that our method is still weakly supervised. For each image/frame, we resize it to 224 × 224 and extract feature through the VGG model. The feature maps of Conv5-4, Conv4-4, and Conv3-4 layers are collected, and are up-sampled to restore the original size. Then they are concatenated to a h*w*1280 3D tensor. We then max-pool the vectors in a super-pixel to obtain a 1280-dimensional feature representation. Our method using these deep features are referred to as eKDE deep. As shown in Table I, replacing the SIFT feature by deep features in our voting can greatly improve the performance of segment annotation. This demonstrates that our algorithm can take advantage of deep CNN features and obtain much better results. Note that we adopt different evaluation metrics from [29], therefore higher values do not mean our method is better than theirs. V. CONCLUSION AND DISCUSSION In this paper, we revisited the negative mining based methods under a voting framework. These methods can be considered as voting through only negative instances, which leads to their limitations: missing the useful information in positive bags and inability to determine the label of an instance. To overcome these limitations, we proposed a self-voting scheme involving the ambiguous instances as well as the definite negative ones. Each instance voted for its label with a weight computed from similarity. The ambiguous instances were assigned soft labels that were iteratively updated. We also derive an interpretation from eKDE and MAP, and analyzed the difference from the existing methods. In addition, deep CNN features can be included into the method to boost performance significantly. In future work, we will investigate how to construct end-to-end CNN for segment annotation.
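As a side note on the eKDE deep variant described in the experiments above, the 1280-dimensional segment features can be assembled with a few array operations once the three VGG feature maps are available (512 + 512 + 256 = 1280 channels). The sketch below is illustrative only: the feature maps are faked with random arrays of the stated shapes, nearest-neighbour upsampling stands in for whatever interpolation was actually used, and the super-pixel label map is random.

```python
import numpy as np

def upsample_nearest(fmap, out_h, out_w):
    """Nearest-neighbour upsampling of a (h, w, c) feature map to (out_h, out_w, c)."""
    h, w, _ = fmap.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return fmap[rows][:, cols]

def superpixel_features(fmaps, segments):
    """Concatenate upsampled maps channel-wise, then max-pool inside each super-pixel."""
    H, W = segments.shape
    stacked = np.concatenate([upsample_nearest(f, H, W) for f in fmaps], axis=2)
    feats = {}
    for sp in np.unique(segments):
        mask = segments == sp
        feats[sp] = stacked[mask].max(axis=0)      # one 1280-d vector per segment
    return feats

# Stand-ins for conv3_4 (256 ch), conv4_4 (512 ch), conv5_4 (512 ch) of a 224x224 input.
rng = np.random.default_rng(2)
fmaps = [rng.random((56, 56, 256)), rng.random((28, 28, 512)), rng.random((14, 14, 512))]
segments = rng.integers(0, 50, size=(224, 224))    # hypothetical over-segmentation labels
feats = superpixel_features(fmaps, segments)
print(len(feats), next(iter(feats.values())).shape)   # number of segments, each 1280-d
```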
4,221
1812.05586
2954062199
We propose a novel approach for generating region proposals for performing face-detection. Instead of classifying anchor boxes using features from a pixel in the convolutional feature map, we adopt a pooling-based approach for generating region proposals. However, pooling hundreds of thousands of anchors which are evaluated for generating proposals becomes a computational bottleneck during inference. To this end, an efficient anchor placement strategy for reducing the number of anchor-boxes is proposed. We then show that proposals generated by our network (Floating Anchor Region Proposal Network, FA-RPN) are better than RPN for generating region proposals for face detection. We discuss several beneficial features of FA-RPN proposals like iterative refinement, placement of fractional anchors and changing anchors which can be enabled without making any changes to the trained model. Our face detector based on FA-RPN obtains 89.4 mAP with a ResNet-50 backbone on the WIDER dataset.
Generating class agnostic region proposals has been investigated in computer vision for more than a decade. Initial methods include multi-scale combinatorial grouping @cite_6 , constrained parametric min-cuts @cite_21 , selective search @cite_16 . These methods generate region proposals which obtain high recall for objects in a category agnostic fashion. They were also very successful in the pre-deep learning era and obtained state-of-the-art performance even with a bag-of-words model @cite_21 . Using region proposals based on selective search @cite_21 , R-CNN @cite_26 was the first deep learning based detector. Unsupervised region proposals were also used in later detectors like Fast-RCNN @cite_36 but since the Faster-RCNN detector @cite_2 generated region proposals using a convolutional neural network, it has become the de-facto algorithm for generating region proposals.
{ "abstract": [ "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "", "", "We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "We present a novel framework for generating and ranking plausible objects hypotheses in an image using bottom-up processes and mid-level cues. 
The object hypotheses are represented as figure-ground segmentations, and are extracted automatically, without prior knowledge about properties of individual object classes, by solving a sequence of constrained parametric min-cut problems (CPMC) on a regular image grid. We then learn to rank the object hypotheses by training a continuous model to predict how plausible the segments are, given their mid-level region properties. We show that this algorithm significantly outperforms the state of the art for low-level segmentation in the VOC09 segmentation dataset. It achieves the same average best segmentation covering as the best performing technique to date [2], 0.61 when using just the top 7 ranked segments, instead of the full hierarchy in [2]. Our method achieves 0.78 average best covering using 154 segments. In a companion paper [18], we also show that the algorithm achieves state-of-the art results when used in a segmentation-based recognition pipeline." ], "cite_N": [ "@cite_26", "@cite_36", "@cite_21", "@cite_6", "@cite_2", "@cite_16" ], "mid": [ "2102605133", "", "", "1991367009", "2953106684", "2017691720" ] }
FA-RPN: Floating Region Proposals for Face Detection
Face detection is an important computer vision problem and has multiple applications in surveillance, tracking, consumer facing devices like iPhones etc. Hence, various approaches have been proposed towards solving it [39,41,16,43,17,34,42,27,23] and successful solutions have also been deployed in practice. So, expectations from face detection algorithms are much higher and error rates today are quite low. Algorithms need to detect faces which are as small as 5 pixels to 500 pixels in size. As localization is essential for detection, evaluating every small region of the image is important. Face detection datasets can have up to a thousand faces in a single image, which is not common in generic object detection. Detectors like Faster-RCNN [28] employ a region proposal network (RPN) which places anchor boxes of different sizes and aspect ratios uniformly on the image and classifies them for generating object-like regions. However, RPN only uses a single pixel in the convolutional feature * Equal Contribution map for evaluating the proposal hypotheses, independent of the size of the object. Therefore, the feature representation in RPN entirely relies on the contextual information encoded in the high-dimensional feature representation generated at the pixel. It does not pool features from the entire extent of an object while generating the feature representation, see Fig 1. Thus, it can miss object regions or generate proposals which are not well localized. Further, it is not possible to iterate and refine the positions of the anchorboxes as part of the proposal network. If objects of different scale/aspect-ratios are to be learned or if we want to place anchors at sub-pixel resolution, filters specific to each of these conditions need to be added during training. Generating proposals using a pooling based algorithm can alleviate such problems easily. There are predominantly two pooling based methods for the final classification of RoIs in an image -Fast-RCNN [12] and R-FCN [9]. Fast-RCNN projects the regionproposals to the convolutional feature-map, and pools the features inside the region of interest (RoI) to a fixed size grid (typically 7×7) and applies two fully connected layers which perform classification and regression. Due to computational constraints, this approach is practically infeasible for proposal generation as one would need to apply it to hundreds of thousands of regions -which is the number of region candidates which are typically evaluated by a region proposal algorithm. To reduce the dependence on fully connected layers, R-FCN performs local convolutions (7×7) inside an RoI for capturing the spatial-extent of each object. Since each of these local filters can be applied to the previous featuremap, we just need to pool the response from the appropriate region corresponding to each local filter. This makes it a good candidate for a pooling-based proposal approach as it is possible to apply it to a large number of RoIs efficiently. However, in high resolution images, proposal algorithms like RPN evaluate hundreds of thousands of anchors during inference. It is computationally infeasible to perform pooling on that many regions. Luckily, many anchors are not necessary (e.g. large anchors which are very close to each other). In this paper, we show that careful anchor placement strategies can reduce the number of proposals significantly to the point where a pooling-based algorithm becomes feasible for proposal generation. 
This yields an efficient and effective objectness detector which does not suffer from the aforementioned problems present in RPN designs. A pooling-based proposal method based on R-FCN which relies on position sensitive filters is particularly well suited for face detection. While objects deform and positional correspondence between different parts is often lostfaces are rigid, structured and parts have positional semantic correspondence (e.g. nose, eyes, lips). Moreover, it is possible to place anchor boxes of different size and aspect ratios without adding more filters. We can also place fractional anchor boxes and perform bilinear interpolation while pooling features for computing objectness. We can further improve localization performance of the proposal candidates by iteratively pooling again from the generated RoIs and all these design changes can be made during inference! Due to these reasons, we refer to our proposal network as Floating Anchor Region Proposal Network (FA-RPN). We highlight these advantages in Fig. 1 and Fig. 2. On the WIDER dataset [40] we show that FA-RPN proposals are better than RPN proposals. FA-RPN also obtains state-of-the-art results on WIDER and PascalFaces which demonstrates its effectiveness for face detection. Related Work Generating class agnostic region proposals has been investigated in computer vision for more than a decade. Initial methods include multi-scale combinatorial grouping [2], constrained parametric min-cuts [36], selective search [7] etc. These methods generate region proposals which obtain high recall for objects in a category agnostic fashion. They were also very successful in the pre-deep learning era and obtained state-of-the-art performance even with a bagof-words model [36]. Using region proposals based on selective search [36], R-CNN [13] was the first deep learning based detector. Unsupervised region proposals were also used in later detectors like Fast-RCNN [12] but since the Faster-RCNN detector [28] generated region proposals using a convolutional neural network, it has become the defacto algorithm for generating region proposals. To improve RPN, several modifications have been proposed. State-of-the-art detectors can also detect objects in a single step. Detectors like SSH [23], SSD [20], RetinaNet [19], MS-CNN [5] generate multi-scale feature maps to classify and regress anchors placed on these feature-maps. These single-shot detectors are closely related to the region proposal network as they have specific filters to detect objects of different sizes and aspect ratios but also combine feature-maps from multiple layers of the deep neural network. No further refinement is performed after the initial offsets generated by the network are applied. Another class of detectors are iterative, like G-CNN [22], Cascade-RCNN [6], LocNet [11], FPN [18], RFCN-3000 [32], Faster-RCNN [28]. These detectors refine a pre-defined set of anchor-boxes in multiple stages and have more layers to further improve classification and localization of regressed anchors. One should note that even in these networks, the first stage comprises of the region proposal network which eliminates the major chunk of background regions. FA-RPN is closer to this line of work but, in contrast, it supports iterative refinement of region proposals during inference. We briefly review some recent work on face detection. 
With the availability of large scale datasets like WIDER [40] which contain many small faces in high resolution images, multiple new techniques for face detection have been proposed [39,41,16,43,17,34,42,27,3]. A lot of focus has been on scale, combining features of different layers [16,42,23,41] and improving configurations of the region proposal network [42,41]. For example, in finding tiny faces [16], it is proposed to perform detection on an image pyramid and have different scale filters for objects of different sizes. SSH [23] and S3FD [41] efficiently utilize all the intermediate layers of the network. PyramidBox [35] replaces the context module in SSH by deeper and wider sub-networks to better capture the contextual information for face detection. Recently, even GANs [14] have been used to improve the performance on tiny faces [3]. In face detection, the choice of anchors and their placement on the image is very important [42,41]. For example, using extra strided anchors were shown to be beneficial [42]. Geometric constraints of the scene have also been used to prune region proposals [1]. Some of these changes require re-training RPN again. In our framework, design decisions such as evaluating different anchor scales, changing the stride of anchors, and adding fractional anchors can simply be made during inference as we share filters for all object sizes and only pooling is performed for them. Moreover, a pooling based design also provides precise spatial FA-RPN -Floating Anchor Region Proposal Network In this section, we discuss training of FA-RPN, which performs iterative classification and regression of anchors placed on an image for generating accurate region proposals. An overview of our approach is shown in Fig. 3. Anchor Placement In this architecture, classification is not performed using a single high-dimensional feature vector but by pooling fea-tures inside the RoI. Hence, there are no restrictions on how RoIs can be placed during training and inference. As long as the convolutional filters can learn objectness, we can apply the model on RoIs of different sizes and aspect ratios, even if the network was not trained explicitly for those particular scales and aspect-ratios. FA-RPN places anchors of different scales and aspect ratios on a grid, as generated in the region proposal network, and clips the anchors which extend beyond the image. While placing anchors, we vary the spatial stride as we increase the anchor size. Since nearby anchors at larger scales have a very high overlap, including them is not necessary. We change the stride of anchor-boxes to max(c, s/d), where s is square-root of the area of an anchor-box, c is a constant and d is the scaling factor, shown in Fig 3. In practice, we set c to 16 and d to 5. This ensures that not too many overlapping anchor-boxes are placed on the image, while ensuring significant overlap between adjacent anchors to cover all objects. Naive placement of anchor boxes of 3 aspect ratios and 5 scales with stride equaling 16 pixels in a 800 × 1280 image leads to a 2-3 × slow-down when performing inference. With the proposed placement method, we reduce the number of RoIs per image from 400,000 to 100,000 for a 1280 × 1280 image for the above mentioned anchor configuration. When we increase the image size, computation for convolution also increases proportionally, so as long as the time required for pooling is not significant compared to convolution, we will not observe a noticeable difference in performance. 
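The size-dependent anchor stride is easy to reproduce. The sketch below is an illustration rather than the released implementation: it places square anchors only (no aspect ratios), uses the constants c = 16 and d = 5 quoted above, and the list of anchor sizes is our own choice; it simply reports how much the anchor count drops relative to a fixed 16-pixel stride.

```python
import numpy as np

def place_anchors(img_h, img_w, scales, c=16, d=5, size_dependent_stride=True):
    """Centres of square anchors; stride grows with anchor size as max(c, s / d)."""
    anchors = []
    for s in scales:                               # s = sqrt(anchor area) = side of a square anchor
        stride = max(c, s / d) if size_dependent_stride else c
        ys = np.arange(stride / 2, img_h, stride)
        xs = np.arange(stride / 2, img_w, stride)
        for y in ys:
            for x in xs:
                # clip boxes that extend beyond the image, as described in the paper
                x1, y1 = max(0, x - s / 2), max(0, y - s / 2)
                x2, y2 = min(img_w, x + s / 2), min(img_h, y + s / 2)
                anchors.append((x1, y1, x2, y2, s))
    return anchors

scales = [16, 32, 64, 128, 256, 512]               # illustrative anchor sizes
dense = place_anchors(1280, 1280, scales, size_dependent_stride=False)
sparse = place_anchors(1280, 1280, scales, size_dependent_stride=True)
print(len(dense), len(sparse))                     # the size-dependent stride is several times sparser
```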
There is no restriction that the stride of anchors should be the same as the stride of the convolutional feature-map. We can even place RoIs between two pixels in the convolutional feature-map without making any architectural change to the network. This allows us to augment the groundtruth bounding boxes as positive RoIs during training. This is unlike RPN, where the maximum overlapping anchor is Sampling Since there are hundreds of thousands of anchors which can be placed on an image, we sample anchors during training. We observe that using focal loss [19] reduced recall for RPN (hyper-parameter tuning could be a reason), so we did not use it for FA-RPN. We use the commonly used technique of sampling RoIs for handing class imbalance. In FA-RPN, an anchor-box is marked as positive if its overlap with a ground truth box is greater than 0.5. An anchor is marked as negative if its overlap is less than 0.4. A maximum of 128 positive and negative anchors are sampled in a batch. Since the probability of a random anchor being an easy sample is high, we also sample 32 anchor-boxes which have an overlap of at-least 0.1 with the ground-truth boxes as hard negatives. Just for training FA-RPN proposals, all other RoIs can be ignored. However, for training an end-toend detector, we also need to score other RoIs in the image. When training an end-to-end detector, we select a maximum of 50,000 RoIs in an image (prioritizing those which have at-least 0.1 overlap with ground-truth boxes first). Iterative Refinement The initial set of placed anchors are expected to cover the ground-truth objects present in the image. However, these anchors may not always have an overlap greater than 0.5 with all objects and hence would be given low scores by the classifier. This problem is amplified for small object instances as mentioned in several methods [41,16]. In this case, no anchor-boxes may have a high score for some ground-truth boxes. Therefore, the ground-truth boxes may not be covered in the top 500 to 1000 proposals generated in the image. In FA-RPN, rather than selecting the top 1000 proposals, we generate 20000 proposals during inference and then perform pooling again on these 20000 proposals from the same feature-map (we can also have another convolutional layer which refines the first stage region proposals). The hypothesis is that after refinement, the anchors would be better localized and hence the scores which we obtain after pooling features inside an RoI would be more reliable. Therefore, after refinement, the ordering of the top 1000 proposals would be different because scores are pooled from refined anchor-boxes rather than the anchorboxes which were placed uniformly on a grid. Since we only need to perform pooling for this operation, it is efficient and can be easily implemented when the number of RoIs is close to 100,000. Note that our method is entirely pooling based and does not have any fully connected lay-ers like cascade-RCNN [6] or G-CNN [22]. Therefore, it is much more efficient for iterative refinement. Complexity and Speed FA-RPN is very efficient. On 800 × 1280 size images, it takes 50 milliseconds to perform forward propagation for our network on a P6000 GPU. We also discuss how much time it takes to use R-FCN for end-to-end detection. For general object detection, when the number of classes is increased, to say 100, the contribution from the pooling layer also increases. This is because the complexity for pooling is linear in the number of classes. 
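Returning to the iterative refinement described above, the inference-time loop is essentially score, keep the best, regress, and score again. A schematic sketch under stated assumptions: score_fn and regress_fn are hypothetical stand-ins for the PSRoI-pooled objectness and box regression of the trained network, and the toy example replaces them with simple closed-form functions.

```python
import numpy as np

def iterative_proposal_refinement(boxes, score_fn, regress_fn, keep=20000, top=1000, rounds=2):
    """Schematic FA-RPN inference: score all placed anchors, keep the best 'keep' boxes,
    apply regression offsets, and score again on the refined boxes before taking the
    final 'top' proposals. 'score_fn' and 'regress_fn' stand in for the trained network."""
    boxes = np.asarray(boxes, dtype=float)
    for _ in range(rounds):
        scores = score_fn(boxes)                   # objectness pooled inside each box
        order = np.argsort(-scores)[:keep]         # keep the highest-scoring candidates
        boxes = regress_fn(boxes[order])           # refine their coordinates
    final_scores = score_fn(boxes)
    order = np.argsort(-final_scores)[:top]
    return boxes[order], final_scores[order]

# Toy stand-ins: score boxes by closeness to a "true" face, regress them halfway towards it.
true_box = np.array([100.0, 120.0, 180.0, 200.0])
score_fn = lambda b: -np.abs(b - true_box).sum(axis=1)
regress_fn = lambda b: b + 0.5 * (true_box - b)
anchors = np.random.default_rng(3).uniform(0, 300, size=(5000, 4))
proposals, scores = iterative_proposal_refinement(anchors, score_fn, regress_fn, keep=500, top=10)
print(proposals[0], scores[0])
```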
So, if we increase the number of classes to 100, this operation would become 100 times slower and at that stage, pooling will account for a significant portion of the time in forward-propagation. For instance, without our anchor placement strategy, it takes 100 seconds to perform inference for 100 classes in a single image on a V100 GPU. However, as for face detection, we only need to perform pooling for 2 classes and use a different anchor placement scheme, we do not face this problem and objectness can be efficiently computed even with tens of thousands of anchor boxes. Scale Normalized Training The positional correspondence of R-FCN is lost when RoI bins become too small. The idea of local convolution or having filters specific to different parts of an object is relevant when each bin corresponds to a unique region in the convolutional feature-map. The position-sensitive filters implicitly assume that features in the previous layer have a resolution which is similar to that after PSRoIPooling. Otherwise, if the RoI is too small, then all the position sensitive filters will pool from more or less the same position, nullifying the hypothesis that these filters are position sensitive. Therefore, we perform scale normalized training [31], which performs selective gradient propagation for RoIs which are close to a resolution of 224 × 224 and excludes those RoIs which can be observed at a better resolution during training. In this setting, the position-sensitive nature of filters is preserved to some extent, which helps in improving the performance of FA-RPN. Datasets We perform experiments on three benchmark datasets, WIDER [40], AFW [44], and Pascal Faces [38]. The WIDER dataset contains 32,203 images with 393,703 annotated faces, 158,989 of which are in the train set, 39,496 in the validation set, and the rest are in the test set. The validation and test set are divided into "easy", "medium", and "hard" subsets cumulatively (i.e. the "hard" set contains all faces and "medium" contains "easy" and "medium"). This is the most challenging public face dataset mainly due to the significant variation in the scale of faces and occlusion. We train all models on the train set of the WIDER dataset and evaluate on the validation set. We mention in our experiments when initialization of our pre-trained model is from ImageNet or COCO. Ablation studies are also performed on the the validation set (i.e. "hard" subset which contains the whole dataset). Pascal Faces and AFW have 1335 and 473 faces respectively. We use Pascal Faces and AFW only as test sets for evaluating the generalization of our trained models. When performing experiments on these datasets, we apply the model trained on the WIDER train set out of the box. Experiments We train a ResNet-50 [15] based Faster-RCNN detector with deformable convolutions [10] and SNIP [31]. FA-RPN proposals are generated on the concatenated conv4 and conv5 features. On WIDER we train on the following image resolutions (1800, 2800), (1024, 1440) and (512, 800). The SNIP ranges we use for WIDER are as follows, [0, 200) for (1800, 2800), [32,300) for (1024, 1440) and [80, ∞) for (512, 800) as the size of the shorter side of the image is around 1024. We train for 8 epochs with a stepdown at 5.33 epochs. In all experiments we use a learning rate and weight decay of 0.0005 and train on 8 GPUs. We use the same learning rate and training schedule even when training on 4 GPUs. 
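The SNIP ranges quoted above amount to a small per-resolution validity check: a ground-truth face only contributes to training at the resolutions whose range covers its size. A minimal sketch, assuming the size is the square root of the box area and measured on the original image (the exact definition is not specified here):

```python
import math

# (training resolution, valid size range in pixels) as quoted for WIDER training.
SNIP_RANGES = [
    ((1800, 2800), (0, 200)),
    ((1024, 1440), (32, 300)),
    ((512, 800), (80, float("inf"))),
]

def valid_resolutions(box_w, box_h):
    """Return the training resolutions for which this face is a valid (non-ignored) sample."""
    size = math.sqrt(box_w * box_h)                # assumed notion of object size
    return [res for res, (lo, hi) in SNIP_RANGES if lo <= size < hi]

print(valid_resolutions(20, 25))    # tiny face: only the upsampled 1800x2800 resolution
print(valid_resolutions(250, 260))  # larger face: valid at the two lower resolutions
print(valid_resolutions(400, 500))  # very large face: only the downsampled 512x800 resolution
```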
In all our experiments, we use online hard example mining (OHEM) [30] to train the 2 fully connected layers in our detector. For the detector, we perform hard example mining on 900 proposals with a batch size of 256. RoIs greater than 0.5 overlap with ground-truth bounding boxes are marked as positive and anything less than that is labelled as negative. No hard-example mining is performed for training the Faster-RCNN head. We use Soft-NMS [4] with σ = 0.35 when performing inference. Since Pascal Faces and AFW contain low resolution images and also do not contain faces as small as the WIDER dataset, we do not perform inference on the 1800 × 2800 resolution. All other parameters remain the same as the experiments on the WIDER dataset. On the WIDER dataset, we remove anchors for different aspect ratios (i.e. we only have one anchor per scale with an aspect ratio of 1) and add a 16 × 16 size anchor for improving the recall for small faces. Note that extreme size anchors are removed during training with SNIP using the same rules which are used for training Faster-RCNN. With these settings, we outperform state-of-the-art results on the WIDER dataset demonstrating the effectiveness of FA-RPN. However, the objective of this paper is not to show that FA-RPN is necessary to obtain state-of-the-art performance. FA-RPN is an elegant and efficient alternative to RPN and can be combined with multi-stage face detection methods to improve performance. Table 1: Ablation analysis with different core-components of our face detector on the hard-set of the WIDER dataset (hard-set contains all images in the dataset). Effect of Multiple Iterations in FA-RPN We evaluate FA-RPN on WIDER when we perform multiple iterations during inference. Since FA-RPN operates on RoIs rather than classifying single-pixel feature-maps like RPN, we can further refine the RoIs which are generated after applying the regression offsets. As the initial set of anchor boxes are coarse, the RoIs generated after the first step are not very well localized. Performing another level of pooling on the generated RoIs helps to improve recall for our proposals. As can be seen in Table 1 and the left-hand side plot in Fig. 5, this refinement step helps to improve the precision and recall. We also generate anchors with different strides -16 and 32 pixels -and show how the final detection performance improves as we refine proposals. Evaluating different Anchors and Strides during Inference In this section, we show the flexibility of FA-RPN for generating region proposals. We train our network with a stride of 32 pixels and during inference, we generate anchors at a stride of 16 pixels on the WIDER dataset. The result is shown in the right-hand side plot in Fig. 5. We notice that the dense anchors improve performance by 3.8%. On the left side of the plot we show the effect of iterative refinement of FA-RPN proposals. This further provides a boost of 1.4% on top of the denser anchors. This shows that our network is robust to changes in anchor configuration, and can detect faces even on anchor sizes which were not provided during training. To achieve this with RPN, one would need to re-train it again, while in FA-RPN it is a simple inference time hyper-parameter which can be tuned on a validation set even after the training phase. Effect of Scale and COCO pre-training on Face Detection Variation of scale is among the main challenges in detection datasets. 
Datasets like WIDER consist of many small faces which can be hard to detect for a CNN at the original image scale. Therefore, upsampling images is crucial to obtaining good performance. However, as shown in [31], when we upsample images, large objects become hard to classify, and when we downsample images to detect large objects, small objects become harder to classify. Therefore, standard multi-scale training is not effective when training on extreme resolutions. In Table 1 we show the effect of performing SNIP-based multi-scale training in our FA-RPN based Faster-RCNN detector. When performing inference on the same resolutions, we observe an improvement in detection performance on the WIDER dataset of 1%. Note that this improvement is on top of multi-scale inference. We also initialized our detector with a ResNet-50 model pretrained on the COCO detection dataset. We show that even pre-training on object detection helps in improving the performance of face detectors by a significant amount (Table 1). Figure 6: We compare with recently published methods on the WIDER dataset. The plots are for "easy", "medium" and "hard" respectively, from left to right. As can be seen, FA-RPN outperforms published baselines on this dataset. Note that the "hard" set contains the whole dataset while "easy" and "medium" are subsets. Comparison on the WIDER dataset We compare our method with MSCNN [5], HR [16], SSH [23], S3FD [41], MSO [42], and PyramidBox [35], which are the published state-of-the-art methods on the WIDER dataset. Our simple detector outperforms all existing methods on this dataset. On the "hard" set, which includes all the annotations in the WIDER dataset, our performance (average precision) is 89.4%, which is the best among all methods. We also perform well in the easy and medium sets. The precision-recall plots for each of these cases are shown in Fig. 6. Note that we did not use feature pyramids or lower layer features from conv2 and conv3 [23,41,16], enhance predictions with context [16], or use deeper networks like ResNeXt-152 [37] / Xception [8] for obtaining these results. This result demonstrates that FA-RPN is competitive with existing proposal techniques as it can lead to a state-of-the-art detector. We also do not use recently proposed techniques like stochastic face lifting [42], having different filters for different size objects [16] or the maxout background loss [41]. Our performance can be further improved if the above-mentioned architectural changes are made to our network or if better training methods which also fine-tune batch-normalization statistics are used [25,33]. Comparison on the PascalFaces and AFW datasets To show the generalization of our trained detector, we also apply it out-of-the-box to the Pascal Faces [38] and AFW [44] datasets without fine-tuning. The performance of FA-RPN is compared with the SSH [23], Face-Magnet [29], HyperFace [27], HeadHunter [21], and DPM [26] detectors, which reported results on these datasets. The results are shown in Fig. 7. Compared to WIDER, the resolution of PASCAL images is lower and they do not contain many small faces, so it is sufficient to apply FA-RPN to the two lower resolutions in the pyramid. This also leads to faster inference. As can be seen, FA-RPN out-of-the-box generalizes well to these datasets. FA-RPN achieves state-of-the-art results on PascalFaces and reduces the error rate to 0.68% on this dataset. Efficiency Our FA-RPN based detector is efficient and takes less than 0.05 seconds to perform inference on an image of size 800 × 1280.
With advances in GPUs over the last few years, performing inference even at very high resolutions (1800 × 2800) is efficient and takes less than 0.4 seconds on a 1080Ti GPU. With improved GPU architectures like the 2080Ti, and with the use of lower precision like 16 or 8 bits, the speed can be further improved by two to four times (depending on the precision used in inference) at the same cost. Multi-scale inference can be further accelerated with AutoFocus [24]. Qualitative Results Figure 8 shows qualitative results on the WIDER validation subset. We picked 20 diverse images to highlight the results generated by FA-RPN. Detections are shown by green rectangles and the brightness encodes the confidence. As can be seen, our face detector works very well in crowded scenes and can find hundreds of small faces in a wide variety of images. This shows that FA-RPN has a very high recall and can detect faces accurately. It generalizes well in both indoor and outdoor scenes and under different lighting conditions. Our performance across a wide range of scales is also good without using diverse features from different layers of the network. It is also robust to changes in pose, occlusion, blur and even works on old photographs! Conclusion We introduced FA-RPN, a novel method for generating pooling-based proposals for face detection. We proposed techniques for anchor placement and label assignment which were essential in the design of such a pooling-based proposal algorithm. FA-RPN has several benefits like efficient iterative refinement, flexibility in selecting scale and anchor stride during inference, sub-pixel anchor placement etc. Using FA-RPN, we obtained state-of-the-art results on the challenging WIDER dataset, showing the effectiveness of FA-RPN for this task. FA-RPN also achieved state-of-the-art results out-of-the-box on datasets like PascalFaces, showing its generalizability.
4,196
1812.05586
2954062199
We propose a novel approach for generating region proposals for performing face-detection. Instead of classifying anchor boxes using features from a pixel in the convolutional feature map, we adopt a pooling-based approach for generating region proposals. However, pooling hundreds of thousands of anchors which are evaluated for generating proposals becomes a computational bottleneck during inference. To this end, an efficient anchor placement strategy for reducing the number of anchor-boxes is proposed. We then show that proposals generated by our network (Floating Anchor Region Proposal Network, FA-RPN) are better than RPN for generating region proposals for face detection. We discuss several beneficial features of FA-RPN proposals like iterative refinement, placement of fractional anchors and changing anchors which can be enabled without making any changes to the trained model. Our face detector based on FA-RPN obtains 89.4 mAP with a ResNet-50 backbone on the WIDER dataset.
To improve RPN, several modifications have been proposed. State-of-the-art detectors can also detect objects in a single step. Detectors like SSH @cite_24 , SSD @cite_33 , RetinaNet @cite_7 , MS-CNN @cite_31 generate multi-scale feature maps to classify and regress anchors placed on these feature-maps. These single-shot detectors are closely related to the region proposal network as they have specific filters to detect objects of different sizes and aspect ratios but also combine feature-maps from multiple layers of the deep neural network. No further refinement is performed after the initial offsets generated by the network are applied. Another class of detectors is iterative, like G-CNN @cite_4 , Cascade-RCNN @cite_17 , LocNet @cite_1 , FPN @cite_28 , RFCN-3000 @cite_14 , Faster-RCNN @cite_2 . These detectors refine a pre-defined set of anchor-boxes in multiple stages and have more layers to further improve classification and localization of regressed anchors. One should note that even in these networks, the first stage comprises the region proposal network, which eliminates the major chunk of background regions. FA-RPN is closer to this line of work but, in contrast, it supports iterative refinement of region proposals during inference.
{ "abstract": [ "", "We introduce G-CNN, an object detection technique based on CNNs which works without proposal algorithms. G-CNN starts with a multi-scale grid of fixed bounding boxes. We train a regressor to move and scale elements of the grid towards objects iteratively. G-CNN models the problem of object detection as finding a path from a fixed grid to boxes tightly surrounding the objects. G-CNN with around 180 boxes in a multi-scale grid performs comparably to Fast R-CNN which uses around 2K bounding boxes generated with a proposal technique. This strategy makes detection faster by removing the object proposal stage as well as reducing the number of boxes to be processed.", "", "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL", "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.", "We propose a novel object localization methodology with the purpose of boosting the localization accuracy of state-of-the-art object detection systems. Our model, given a search region, aims at returning the bounding box of an object of interest inside this region. 
To accomplish its goal, it relies on assigning conditional probabilities to each row and column of this region, where these probabilities provide useful information regarding the location of the boundaries of the object inside the search region and allow the accurate inference of the object bounding box under a simple probabilistic framework. For implementing our localization model, we make use of a convolutional neural network architecture that is properly adapted for this task, called LocNet. We show experimentally that LocNet achieves a very significant improvement on the mAP for high IoU thresholds on PASCAL VOC2007 test set and that it can be very easily coupled with recent state-of-the-art object detection systems, helping them to boost their performance. Finally, we demonstrate that our detection approach can achieve high detection accuracy even when it is given as input a set of sliding windows, thus proving that it is independent of box proposal methods.", "We introduce the Single Stage Headless (SSH) face detector. Unlike two stage proposal-classification detectors, SSH detects faces in a single stage directly from the early convolutional layers in a classification network. SSH is headless. That is, it is able to achieve state-of-the-art results while removing the \"head\" of its underlying classification network -- i.e. all fully connected layers in the VGG-16 which contains a large number of parameters. Additionally, instead of relying on an image pyramid to detect faces with various scales, SSH is scale-invariant by design. We simultaneously detect faces with different scales in a single forward pass of the network, but from different layers. These properties make SSH fast and light-weight. Surprisingly, with a headless VGG-16, SSH beats the ResNet-101-based state-of-the-art on the WIDER dataset. Even though, unlike the current state-of-the-art, SSH does not use an image pyramid and is 5X faster. Moreover, if an image pyramid is deployed, our light-weight network achieves state-of-the-art on all subsets of the WIDER dataset, improving the AP by 2.5 . SSH also reaches state-of-the-art results on the FDDB and Pascal-Faces datasets while using a small input size, leading to a runtime of 50 ms image on a GPU. The code is available at this https URL.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. 
In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "A unified deep neural network, denoted the multi-scale CNN (MS-CNN), is proposed for fast multi-scale object detection. The MS-CNN consists of a proposal sub-network and a detection sub-network. In the proposal sub-network, detection is performed at multiple output layers, so that receptive fields match objects of different scales. These complementary scale-specific detectors are combined to produce a strong multi-scale object detector. The unified network is learned end-to-end, by optimizing a multi-task loss. Feature upsampling by deconvolution is also explored, as an alternative to input upsampling, to reduce the memory and computation costs. State-of-the-art object detection performance, at up to 15 fps, is reported on datasets, such as KITTI and Caltech, containing a substantial number of small objects.", "In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector, trained with low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade with increasing the IoU thresholds. Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. It consists of a sequence of detectors trained with increasing IoU thresholds, to be sequentially more selective against close false positives. The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next higher quality detector. The resampling of progressively improved hypotheses guarantees that all detectors have a positive set of examples of equivalent size, reducing the overfitting problem. The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality of each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code is available at https: github.com zhaoweicai cascade-rcnn." ], "cite_N": [ "@cite_14", "@cite_4", "@cite_33", "@cite_7", "@cite_28", "@cite_1", "@cite_24", "@cite_2", "@cite_31", "@cite_17" ], "mid": [ "", "2237643543", "", "2743473392", "2949533892", "2177544419", "2747648373", "2953106684", "2490270993", "2964241181" ] }
FA-RPN: Floating Region Proposals for Face Detection
Face detection is an important computer vision problem and has multiple applications in surveillance, tracking, consumer-facing devices like iPhones, etc. Hence, various approaches have been proposed towards solving it [39,41,16,43,17,34,42,27,23] and successful solutions have also been deployed in practice. So, expectations from face detection algorithms are much higher and error rates today are quite low. Algorithms need to detect faces ranging from as small as 5 pixels up to 500 pixels in size. As localization is essential for detection, evaluating every small region of the image is important. Face detection datasets can have up to a thousand faces in a single image, which is not common in generic object detection. Detectors like Faster-RCNN [28] employ a region proposal network (RPN) which places anchor boxes of different sizes and aspect ratios uniformly on the image and classifies them for generating object-like regions. However, RPN only uses a single pixel in the convolutional feature map for evaluating the proposal hypotheses, independent of the size of the object. Therefore, the feature representation in RPN entirely relies on the contextual information encoded in the high-dimensional feature representation generated at the pixel. It does not pool features from the entire extent of an object while generating the feature representation, see Fig. 1. Thus, it can miss object regions or generate proposals which are not well localized. Further, it is not possible to iterate and refine the positions of the anchor-boxes as part of the proposal network. If objects of different scales/aspect-ratios are to be learned or if we want to place anchors at sub-pixel resolution, filters specific to each of these conditions need to be added during training. Generating proposals using a pooling-based algorithm can alleviate such problems easily. There are predominantly two pooling-based methods for the final classification of RoIs in an image - Fast-RCNN [12] and R-FCN [9]. Fast-RCNN projects the region proposals to the convolutional feature-map, pools the features inside the region of interest (RoI) to a fixed size grid (typically 7×7) and applies two fully connected layers which perform classification and regression. Due to computational constraints, this approach is practically infeasible for proposal generation, as one would need to apply it to hundreds of thousands of regions - which is the number of region candidates typically evaluated by a region proposal algorithm. To reduce the dependence on fully connected layers, R-FCN performs local convolutions (7×7) inside an RoI for capturing the spatial extent of each object. Since each of these local filters can be applied to the previous feature-map, we just need to pool the response from the appropriate region corresponding to each local filter. This makes it a good candidate for a pooling-based proposal approach as it is possible to apply it to a large number of RoIs efficiently. However, in high resolution images, proposal algorithms like RPN evaluate hundreds of thousands of anchors during inference. It is computationally infeasible to perform pooling on that many regions. Luckily, many anchors are not necessary (e.g. large anchors which are very close to each other). In this paper, we show that careful anchor placement strategies can reduce the number of proposals significantly, to the point where a pooling-based algorithm becomes feasible for proposal generation.
This yields an efficient and effective objectness detector which does not suffer from the aforementioned problems present in RPN designs. A pooling-based proposal method based on R-FCN, which relies on position-sensitive filters, is particularly well suited for face detection. While objects deform and positional correspondence between different parts is often lost, faces are rigid, structured, and their parts have positional semantic correspondence (e.g. nose, eyes, lips). Moreover, it is possible to place anchor boxes of different sizes and aspect ratios without adding more filters. We can also place fractional anchor boxes and perform bilinear interpolation while pooling features for computing objectness. We can further improve the localization performance of the proposal candidates by iteratively pooling again from the generated RoIs, and all these design changes can be made during inference! Due to these reasons, we refer to our proposal network as Floating Anchor Region Proposal Network (FA-RPN). We highlight these advantages in Fig. 1 and Fig. 2. On the WIDER dataset [40] we show that FA-RPN proposals are better than RPN proposals. FA-RPN also obtains state-of-the-art results on WIDER and PascalFaces, which demonstrates its effectiveness for face detection. Related Work Generating class-agnostic region proposals has been investigated in computer vision for more than a decade. Initial methods include multi-scale combinatorial grouping [2], constrained parametric min-cuts [36], selective search [7] etc. These methods generate region proposals which obtain high recall for objects in a category-agnostic fashion. They were also very successful in the pre-deep learning era and obtained state-of-the-art performance even with a bag-of-words model [36]. Using region proposals based on selective search [36], R-CNN [13] was the first deep learning based detector. Unsupervised region proposals were also used in later detectors like Fast-RCNN [12], but since the Faster-RCNN detector [28] generated region proposals using a convolutional neural network, it has become the de facto algorithm for generating region proposals. To improve RPN, several modifications have been proposed. State-of-the-art detectors can also detect objects in a single step. Detectors like SSH [23], SSD [20], RetinaNet [19], MS-CNN [5] generate multi-scale feature maps to classify and regress anchors placed on these feature-maps. These single-shot detectors are closely related to the region proposal network as they have specific filters to detect objects of different sizes and aspect ratios but also combine feature-maps from multiple layers of the deep neural network. No further refinement is performed after the initial offsets generated by the network are applied. Another class of detectors is iterative, like G-CNN [22], Cascade-RCNN [6], LocNet [11], FPN [18], RFCN-3000 [32], Faster-RCNN [28]. These detectors refine a pre-defined set of anchor-boxes in multiple stages and have more layers to further improve classification and localization of regressed anchors. One should note that even in these networks, the first stage comprises the region proposal network, which eliminates the major chunk of background regions. FA-RPN is closer to this line of work but, in contrast, it supports iterative refinement of region proposals during inference. We briefly review some recent work on face detection.
With the availability of large scale datasets like WIDER [40], which contain many small faces in high resolution images, multiple new techniques for face detection have been proposed [39,41,16,43,17,34,42,27,3]. A lot of focus has been on scale, combining features of different layers [16,42,23,41] and improving configurations of the region proposal network [42,41]. For example, in finding tiny faces [16], it is proposed to perform detection on an image pyramid and have different scale filters for objects of different sizes. SSH [23] and S3FD [41] efficiently utilize all the intermediate layers of the network. PyramidBox [35] replaces the context module in SSH by deeper and wider sub-networks to better capture the contextual information for face detection. Recently, even GANs [14] have been used to improve the performance on tiny faces [3]. In face detection, the choice of anchors and their placement on the image is very important [42,41]. For example, using extra strided anchors was shown to be beneficial [42]. Geometric constraints of the scene have also been used to prune region proposals [1]. Some of these changes require re-training RPN. In our framework, design decisions such as evaluating different anchor scales, changing the stride of anchors, and adding fractional anchors can simply be made during inference, as we share filters for all object sizes and only pooling is performed for them. Moreover, a pooling-based design also provides precise spatial information. FA-RPN - Floating Anchor Region Proposal Network In this section, we discuss training of FA-RPN, which performs iterative classification and regression of anchors placed on an image for generating accurate region proposals. An overview of our approach is shown in Fig. 3. Anchor Placement In this architecture, classification is not performed using a single high-dimensional feature vector but by pooling features inside the RoI. Hence, there are no restrictions on how RoIs can be placed during training and inference. As long as the convolutional filters can learn objectness, we can apply the model on RoIs of different sizes and aspect ratios, even if the network was not trained explicitly for those particular scales and aspect ratios. FA-RPN places anchors of different scales and aspect ratios on a grid, as generated in the region proposal network, and clips the anchors which extend beyond the image. While placing anchors, we vary the spatial stride as we increase the anchor size. Since nearby anchors at larger scales have a very high overlap, including them is not necessary. We change the stride of anchor-boxes to max(c, s/d), where s is the square-root of the area of an anchor-box, c is a constant and d is a scaling factor, as shown in Fig. 3. In practice, we set c to 16 and d to 5. This ensures that not too many overlapping anchor-boxes are placed on the image, while ensuring significant overlap between adjacent anchors to cover all objects. Naive placement of anchor boxes of 3 aspect ratios and 5 scales with a stride of 16 pixels in an 800 × 1280 image leads to a 2-3× slow-down when performing inference. With the proposed placement method, we reduce the number of RoIs per image from 400,000 to 100,000 for a 1280 × 1280 image for the above-mentioned anchor configuration. When we increase the image size, computation for convolution also increases proportionally, so as long as the time required for pooling is not significant compared to convolution, we will not observe a noticeable difference in performance.
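A minimal sketch of the scale-dependent anchor placement described above, i.e. stride = max(c, s/d) with c = 16 and d = 5. The anchor scales, the square aspect ratio, and the grid helper are assumptions for illustration only, not the paper's exact configuration.

```python
def place_anchors(img_w, img_h, scales=(16, 32, 64, 128, 256, 512), c=16, d=5):
    """Lay out square anchors of side s with stride max(c, s / d),
    so that large anchors are placed more sparsely, and clip them to the image."""
    anchors = []
    for s in scales:                      # s plays the role of sqrt(anchor area)
        stride = max(c, s / d)            # sparser placement for large anchors
        y = 0.0
        while y < img_h:
            x = 0.0
            while x < img_w:
                x1, y1 = max(x - s / 2, 0), max(y - s / 2, 0)
                x2, y2 = min(x + s / 2, img_w), min(y + s / 2, img_h)
                if x2 > x1 and y2 > y1:   # keep only anchors with non-empty clip
                    anchors.append((x1, y1, x2, y2))
                x += stride
            y += stride
    return anchors

# Far fewer boxes than placing every scale at a fixed 16-pixel stride.
print(len(place_anchors(1280, 1280)))
```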
There is no restriction that the stride of anchors should be the same as the stride of the convolutional feature-map. We can even place RoIs between two pixels in the convolutional feature-map without making any architectural change to the network. This allows us to augment the ground-truth bounding boxes as positive RoIs during training. This is unlike RPN, where the maximum overlapping anchor is used as the positive sample instead. Sampling Since there are hundreds of thousands of anchors which can be placed on an image, we sample anchors during training. We observe that using focal loss [19] reduced recall for RPN (hyper-parameter tuning could be a reason), so we did not use it for FA-RPN. We use the commonly used technique of sampling RoIs for handling class imbalance. In FA-RPN, an anchor-box is marked as positive if its overlap with a ground truth box is greater than 0.5. An anchor is marked as negative if its overlap is less than 0.4. A maximum of 128 positive and negative anchors are sampled in a batch. Since the probability of a random anchor being an easy sample is high, we also sample 32 anchor-boxes which have an overlap of at least 0.1 with the ground-truth boxes as hard negatives. Just for training FA-RPN proposals, all other RoIs can be ignored. However, for training an end-to-end detector, we also need to score other RoIs in the image. When training an end-to-end detector, we select a maximum of 50,000 RoIs in an image (prioritizing those which have at least 0.1 overlap with ground-truth boxes first). Iterative Refinement The initial set of placed anchors is expected to cover the ground-truth objects present in the image. However, these anchors may not always have an overlap greater than 0.5 with all objects and hence would be given low scores by the classifier. This problem is amplified for small object instances, as mentioned in several methods [41,16]. In this case, no anchor-boxes may have a high score for some ground-truth boxes. Therefore, the ground-truth boxes may not be covered in the top 500 to 1000 proposals generated in the image. In FA-RPN, rather than selecting the top 1000 proposals, we generate 20000 proposals during inference and then perform pooling again on these 20000 proposals from the same feature-map (we can also have another convolutional layer which refines the first stage region proposals). The hypothesis is that after refinement, the anchors would be better localized and hence the scores which we obtain after pooling features inside an RoI would be more reliable. Therefore, after refinement, the ordering of the top 1000 proposals would be different, because scores are pooled from refined anchor-boxes rather than the anchor-boxes which were placed uniformly on a grid. Since we only need to perform pooling for this operation, it is efficient and can be easily implemented when the number of RoIs is close to 100,000. Note that our method is entirely pooling based and does not have any fully connected layers like Cascade-RCNN [6] or G-CNN [22]. Therefore, it is much more efficient for iterative refinement. Complexity and Speed FA-RPN is very efficient. On 800 × 1280 size images, it takes 50 milliseconds to perform forward propagation for our network on a P6000 GPU. We also discuss how much time it takes to use R-FCN for end-to-end detection. For general object detection, when the number of classes is increased, to say 100, the contribution from the pooling layer also increases. This is because the complexity for pooling is linear in the number of classes.
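A toy sketch of the anchor labelling and sampling rules described in the Sampling paragraph above. The IoU thresholds follow the text; the exact positive/negative split (128 of each) and the helper signatures are our assumptions.

```python
import random

def assign_and_sample(anchors, gts, iou_fn, n_pos=128, n_neg=128, n_hard=32):
    """anchors, gts: boxes; iou_fn(anchor, gt) -> IoU in [0, 1]."""
    pos, neg, hard = [], [], []
    for i, a in enumerate(anchors):
        best = max((iou_fn(a, g) for g in gts), default=0.0)
        if best > 0.5:
            pos.append(i)            # positive anchors
        elif best < 0.4:
            neg.append(i)            # background anchors
            if best >= 0.1:
                hard.append(i)       # near-miss anchors usable as hard negatives
        # anchors with 0.4 <= IoU <= 0.5 are ignored
    sampled_pos = random.sample(pos, min(n_pos, len(pos)))
    sampled_neg = random.sample(neg, min(n_neg, len(neg)))
    sampled_hard = random.sample(hard, min(n_hard, len(hard)))
    return sampled_pos, sampled_neg + sampled_hard
```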
So, if we increase the number of classes to 100, this operation would become 100 times slower and at that stage, pooling will account for a significant portion of the time in forward-propagation. For instance, without our anchor placement strategy, it takes 100 seconds to perform inference for 100 classes in a single image on a V100 GPU. However, since face detection only requires pooling for 2 classes and uses a different anchor placement scheme, we do not face this problem and objectness can be efficiently computed even with tens of thousands of anchor boxes. Scale Normalized Training The positional correspondence of R-FCN is lost when RoI bins become too small. The idea of local convolution or having filters specific to different parts of an object is relevant when each bin corresponds to a unique region in the convolutional feature-map. The position-sensitive filters implicitly assume that features in the previous layer have a resolution which is similar to that after PSRoIPooling. Otherwise, if the RoI is too small, then all the position-sensitive filters will pool from more or less the same position, nullifying the hypothesis that these filters are position sensitive. Therefore, we perform scale normalized training [31], which performs selective gradient propagation for RoIs which are close to a resolution of 224 × 224 and excludes those RoIs which can be observed at a better resolution during training. In this setting, the position-sensitive nature of filters is preserved to some extent, which helps in improving the performance of FA-RPN. Datasets We perform experiments on three benchmark datasets, WIDER [40], AFW [44], and Pascal Faces [38]. The WIDER dataset contains 32,203 images with 393,703 annotated faces, 158,989 of which are in the train set, 39,496 in the validation set, and the rest are in the test set. The validation and test sets are divided into "easy", "medium", and "hard" subsets cumulatively (i.e. the "hard" set contains all faces and the "medium" set also includes the "easy" faces). This is the most challenging public face dataset, mainly due to the significant variation in the scale of faces and occlusion. We train all models on the train set of the WIDER dataset and evaluate on the validation set. We mention in our experiments whether the initialization of our pre-trained model is from ImageNet or COCO. Ablation studies are also performed on the validation set (i.e. the "hard" subset, which contains the whole dataset). Pascal Faces and AFW have 1335 and 473 faces respectively. We use Pascal Faces and AFW only as test sets for evaluating the generalization of our trained models. When performing experiments on these datasets, we apply the model trained on the WIDER train set out of the box. Experiments We train a ResNet-50 [15] based Faster-RCNN detector with deformable convolutions [10] and SNIP [31]. FA-RPN proposals are generated on the concatenated conv4 and conv5 features. On WIDER we train on the following image resolutions: (1800, 2800), (1024, 1440) and (512, 800). The SNIP ranges we use for WIDER are as follows: [0, 200) for (1800, 2800), [32, 300) for (1024, 1440) and [80, ∞) for (512, 800), as the size of the shorter side of the image is around 1024. We train for 8 epochs with a stepdown at 5.33 epochs. In all experiments we use a learning rate and weight decay of 0.0005 and train on 8 GPUs. We use the same learning rate and training schedule even when training on 4 GPUs.
In all our experiments, we use online hard example mining (OHEM) [30] to train the 2 fully connected layers in our detector. For the detector, we perform hard example mining on 900 proposals with a batch size of 256. RoIs with greater than 0.5 overlap with ground-truth bounding boxes are marked as positive and anything less than that is labelled as negative. No hard-example mining is performed for training FA-RPN. We use Soft-NMS [4] with σ = 0.35 when performing inference. Since Pascal Faces and AFW contain low resolution images and also do not contain faces as small as the WIDER dataset, we do not perform inference on the 1800 × 2800 resolution. All other parameters remain the same as in the experiments on the WIDER dataset. On the WIDER dataset, we remove anchors for different aspect ratios (i.e. we only have one anchor per scale, with an aspect ratio of 1) and add a 16 × 16 size anchor for improving the recall for small faces. Note that extreme size anchors are removed during training with SNIP using the same rules which are used for training Faster-RCNN. With these settings, we outperform state-of-the-art results on the WIDER dataset, demonstrating the effectiveness of FA-RPN. However, the objective of this paper is not to show that FA-RPN is necessary to obtain state-of-the-art performance. FA-RPN is an elegant and efficient alternative to RPN and can be combined with multi-stage face detection methods to improve performance. Table 1: Ablation analysis with different core components of our face detector on the hard set of the WIDER dataset (the hard set contains all images in the dataset). Effect of Multiple Iterations in FA-RPN We evaluate FA-RPN on WIDER when we perform multiple iterations during inference. Since FA-RPN operates on RoIs rather than classifying single-pixel feature-maps like RPN, we can further refine the RoIs which are generated after applying the regression offsets. As the initial set of anchor boxes is coarse, the RoIs generated after the first step are not very well localized. Performing another level of pooling on the generated RoIs helps to improve recall for our proposals. As can be seen in Table 1 and the left-hand side plot in Fig. 5, this refinement step helps to improve the precision and recall. We also generate anchors with different strides - 16 and 32 pixels - and show how the final detection performance improves as we refine proposals. Evaluating different Anchors and Strides during Inference In this section, we show the flexibility of FA-RPN for generating region proposals. We train our network with a stride of 32 pixels and, during inference, we generate anchors at a stride of 16 pixels on the WIDER dataset. The result is shown in the right-hand side plot in Fig. 5. We notice that the dense anchors improve performance by 3.8%. On the left side of the plot we show the effect of iterative refinement of FA-RPN proposals. This further provides a boost of 1.4% on top of the denser anchors. This shows that our network is robust to changes in anchor configuration, and can detect faces even with anchor sizes which were not provided during training. To achieve this with RPN, one would need to re-train it, while in FA-RPN it is a simple inference-time hyper-parameter which can be tuned on a validation set even after the training phase. Effect of Scale and COCO pre-training on Face Detection Variation of scale is among the main challenges in detection datasets.
Datasets like WIDER consist of many small faces which can be hard to detect for a CNN at the original image scale. Therefore, upsampling images is crucial to obtaining good performance. However, as shown in [31], when we upsample images, large objects become hard to classify, and when we downsample images to detect large objects, small objects become harder to classify. Therefore, standard multi-scale training is not effective when training on extreme resolutions. In Table 1 we show the effect of performing SNIP-based multi-scale training in our FA-RPN based Faster-RCNN detector. When performing inference on the same resolutions, we observe an improvement in detection performance on the WIDER dataset of 1%. Note that this improvement is on top of multi-scale inference. We also initialized our detector with a ResNet-50 model pretrained on the COCO detection dataset. We show that even pre-training on object detection helps in improving the performance of face detectors by a significant amount (Table 1). Figure 6: We compare with recently published methods on the WIDER dataset. The plots are for "easy", "medium" and "hard" respectively, from left to right. As can be seen, FA-RPN outperforms published baselines on this dataset. Note that the "hard" set contains the whole dataset while "easy" and "medium" are subsets. Comparison on the WIDER dataset We compare our method with MSCNN [5], HR [16], SSH [23], S3FD [41], MSO [42], and PyramidBox [35], which are the published state-of-the-art methods on the WIDER dataset. Our simple detector outperforms all existing methods on this dataset. On the "hard" set, which includes all the annotations in the WIDER dataset, our performance (average precision) is 89.4%, which is the best among all methods. We also perform well in the easy and medium sets. The precision-recall plots for each of these cases are shown in Fig. 6. Note that we did not use feature pyramids or lower layer features from conv2 and conv3 [23,41,16], enhance predictions with context [16], or use deeper networks like ResNeXt-152 [37] / Xception [8] for obtaining these results. This result demonstrates that FA-RPN is competitive with existing proposal techniques as it can lead to a state-of-the-art detector. We also do not use recently proposed techniques like stochastic face lifting [42], having different filters for different size objects [16] or the maxout background loss [41]. Our performance can be further improved if the above-mentioned architectural changes are made to our network or if better training methods which also fine-tune batch-normalization statistics are used [25,33]. Comparison on the PascalFaces and AFW datasets To show the generalization of our trained detector, we also apply it out-of-the-box to the Pascal Faces [38] and AFW [44] datasets without fine-tuning. The performance of FA-RPN is compared with the SSH [23], Face-Magnet [29], HyperFace [27], HeadHunter [21], and DPM [26] detectors, which reported results on these datasets. The results are shown in Fig. 7. Compared to WIDER, the resolution of PASCAL images is lower and they do not contain many small faces, so it is sufficient to apply FA-RPN to the two lower resolutions in the pyramid. This also leads to faster inference. As can be seen, FA-RPN out-of-the-box generalizes well to these datasets. FA-RPN achieves state-of-the-art results on PascalFaces and reduces the error rate to 0.68% on this dataset. Efficiency Our FA-RPN based detector is efficient and takes less than 0.05 seconds to perform inference on an image of size 800 × 1280.
With advances in GPUs over the last few years, performing inference even at very high resolutions (1800 × 2800) is efficient and takes less than 0.4 seconds on a 1080Ti GPU. With improved GPU architectures like the 2080Ti, and with the use of lower precision like 16 or 8 bits, the speed can be further improved by two to four times (depending on the precision used in inference) at the same cost. Multi-scale inference can be further accelerated with AutoFocus [24]. Qualitative Results Figure 8 shows qualitative results on the WIDER validation subset. We picked 20 diverse images to highlight the results generated by FA-RPN. Detections are shown by green rectangles and the brightness encodes the confidence. As can be seen, our face detector works very well in crowded scenes and can find hundreds of small faces in a wide variety of images. This shows that FA-RPN has a very high recall and can detect faces accurately. It generalizes well in both indoor and outdoor scenes and under different lighting conditions. Our performance across a wide range of scales is also good without using diverse features from different layers of the network. It is also robust to changes in pose, occlusion, blur and even works on old photographs! Conclusion We introduced FA-RPN, a novel method for generating pooling-based proposals for face detection. We proposed techniques for anchor placement and label assignment which were essential in the design of such a pooling-based proposal algorithm. FA-RPN has several benefits like efficient iterative refinement, flexibility in selecting scale and anchor stride during inference, sub-pixel anchor placement etc. Using FA-RPN, we obtained state-of-the-art results on the challenging WIDER dataset, showing the effectiveness of FA-RPN for this task. FA-RPN also achieved state-of-the-art results out-of-the-box on datasets like PascalFaces, showing its generalizability.
4,196
1907.01862
2953574061
Being able to check whether an online advertisement has been targeted is essential for resolving privacy controversies and implementing in practice data protection regulations like GDPR, CCPA, and COPPA. In this paper we describe the design, implementation, and deployment of an advertisement auditing system called iWnder that uses crowdsourcing to reveal in real time whether a display advertisement has been targeted or not. Crowdsourcing simplifies the detection of targeted advertising, but requires reporting to a central repository the impressions seen by different users, thereby jeopardising their privacy. We break this deadlock with a privacy preserving data sharing protocol that allows iWnder to compute global statistics required to detect targeting, while keeping the advertisements seen by individual users and their browsing history private. We conduct a simulation study to explore the effect of different parameters and a live validation to demonstrate the accuracy of our approach. Unlike previous solutions, iWnder can even detect indirect targeting, i.e., marketing campaigns that promote a product or service whose description bears no semantic overlap with its targeted audience.
Topic-based solutions perform content-based analysis to extract the relevant topics of a user's browsing history and of the ads he receives. Then, using different heuristics and statistical means, targeted ads are identified as those having topics that share some semantic overlap with the user's browsing history. Topic-based detection could, in principle, be applied to real users, as we have done for evaluation purposes in . Existing work, however, has only used it in conjunction with artificially constructed robots that browse the web imitating very specific (single-topic) demographic groups @cite_14 @cite_17 , or to emulate real users offline using click-streams @cite_5 .
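A hypothetical illustration of the topic-overlap heuristic sketched above: an ad is flagged as interest-targeted when the topics of its landing page overlap enough with the topics of the user's browsing history. The topic sets, the similarity measure, and the threshold are placeholders, not taken from the cited works.

```python
def jaccard(a, b):
    """Jaccard similarity between two topic sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def looks_targeted(ad_topics, history_topics, threshold=0.2):
    """Flag an ad as interest-targeted when its topics overlap enough
    with the topics extracted from the user's browsing history."""
    return jaccard(ad_topics, history_topics) >= threshold

history = ["travel", "hotels", "flights", "photography"]
ad = ["travel", "insurance"]
print(looks_targeted(ad, history))   # True under this toy threshold
```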
{ "abstract": [ "To address the pressing need to provide transparency into the online targeted advertising ecosystem, we present AdReveal, a practical measurement and analysis framework, that provides a first look at the prevalence of different ad targeting mechanisms. We design and implement a browser based tool that provides detailed measurements of online display ads, and develop analysis techniques to characterize the contextual, behavioral and re-marketing based targeting mechanisms used by advertisers. Our analysis is based on a large dataset consisting of measurements from 103K webpages and 139K display ads. Our results show that advertisers frequently target users based on their online interests; almost half of the ad categories employ behavioral targeting. Ads related to Insurance, Real Estate and Travel and Tourism make extensive use of behavioral targeting. Furthermore, up to 65 of ad categories received by users are behaviorally targeted. Finally, our analysis of re-marketing shows that it is adopted by a wide range of websites and the most commonly targeted re-marketing based ads are from the Travel and Tourism and Shopping categories.", "Over the past decade, advertising has emerged as the primary source of revenue for many web sites and apps. In this paper we report a first-of-its-kind study that seeks to broadly understand the features, mechanisms and dynamics of display advertising on the web - i.e., the Adscape. Our study takes the perspective of users who are the targets of display ads shown on web sites. We develop a scalable crawling capability that enables us to gather the details of display ads including creatives and landing pages. Our crawling strategy is focused on maximizing the number of unique ads harvested. Of critical importance to our study is the recognition that a user's profile (i.e., browser profile and cookies) can have a significant impact on which ads are shown. We deploy our crawler over a variety of websites and profiles and this yields over 175K distinct display ads. We find that while targeting is widely used, there remain many instances in which delivered ads do not depend on user profile; further, ads vary more over user profiles than over websites. We also assess the population of advertisers seen and identify over 3.7K distinct entities from a variety of business segments. Finally, we find that when targeting is used, the specific types of ads delivered generally correspond with the details of user profiles, and also on users' patterns of visit.", "" ], "cite_N": [ "@cite_5", "@cite_14", "@cite_17" ], "mid": [ "2100074487", "2056201388", "" ] }
0
1907.01862
2953574061
Being able to check whether an online advertisement has been targeted is essential for resolving privacy controversies and implementing in practice data protection regulations like GDPR, CCPA, and COPPA. In this paper we describe the design, implementation, and deployment of an advertisement auditing system called iWnder that uses crowdsourcing to reveal in real time whether a display advertisement has been targeted or not. Crowdsourcing simplifies the detection of targeted advertising, but requires reporting to a central repository the impressions seen by different users, thereby jeopardising their privacy. We break this deadlock with a privacy preserving data sharing protocol that allows iWnder to compute global statistics required to detect targeting, while keeping the advertisements seen by individual users and their browsing history private. We conduct a simulation study to explore the effect of different parameters and a live validation to demonstrate the accuracy of our approach. Unlike previous solutions, iWnder can even detect indirect targeting, i.e., marketing campaigns that promote a product or service whose description bears no semantic overlap with its targeted audience.
The only topic-based solution meant to be used by real users is MyAdchoice @cite_18 , which has been implemented in the form of a browser extension. This extension is available only upon request, and based on the information reported in the paper, it has only been used in a beta-testing phase by a few tens of friends and colleagues. Independently of the specific pros and cons of individual solutions, topic-based detection presents some common limitations. The most important is that it can only detect direct interest-based targeted advertising. It is unable to detect other forms of targeting based on demographic or geographic parameters, as well as indirect targeting (see for definitions).
{ "abstract": [ "The intrusiveness and the increasing invasiveness of online advertising have, in the last few years, raised serious concerns regarding user privacy and Web usability. As a reaction to these concerns, we have witnessed the emergence of a myriad of ad-blocking and antitracking tools, whose aim is to return control to users over advertising. The problem with these technologies, however, is that they are extremely limited and radical in their approach: users can only choose either to block or allow all ads. With around 200 million people regularly using these tools, the economic model of the Web—in which users get content free in return for allowing advertisers to show them ads—is at serious peril. In this article, we propose a smart Web technology that aims at bringing transparency to online advertising, so that users can make an informed and equitable decision regarding ad blocking. The proposed technology is implemented as a Web-browser extension and enables users to exert fine-grained control over advertising, thus providing them with certain guarantees in terms of privacy and browsing experience, while preserving the Internet economic model. Experimental results in a real environment demonstrate the suitability and feasibility of our approach, and provide preliminary findings on behavioral targeting from real user browsing profiles." ], "cite_N": [ "@cite_18" ], "mid": [ "2266124366" ] }
0
1907.01862
2953574061
Being able to check whether an online advertisement has been targeted is essential for resolving privacy controversies and implementing in practice data protection regulations like GDPR, CCPA, and COPPA. In this paper we describe the design, implementation, and deployment of an advertisement auditing system called iWnder that uses crowdsourcing to reveal in real time whether a display advertisement has been targeted or not. Crowdsourcing simplifies the detection of targeted advertising, but requires reporting to a central repository the impressions seen by different users, thereby jeopardising their privacy. We break this deadlock with a privacy preserving data sharing protocol that allows iWnder to compute global statistics required to detect targeting, while keeping the advertisements seen by individual users and their browsing history private. We conduct a simulation study to explore the effect of different parameters and a live validation to demonstrate the accuracy of our approach. Unlike previous solutions, iWnder can even detect indirect targeting, i.e., marketing campaigns that promote a product or service whose description bears no semantic overlap with its targeted audience.
Correlation-based solutions treat the online advertising ecosystem as a black box and apply machine learning and statistical methods to detect correlations between the browsing behavior and other characteristics of a user (OS, device type, location, etc.) and the ads he sees. For instance, XRay @cite_37 and Sunlight @cite_47 create for each persona several shadow accounts. Each shadow account performs a subset of the actions performed by the original persona. By analyzing the common actions performed by shadow accounts receiving the same reaction from the ecosystem (e.g., the same ad), the authors can infer the cause of a targeting event. AdFisher @cite_53 uses similar concepts to find discrimination practices, for instance, in the ads shown to men vs. women. As with topic-based detection, these techniques present important challenges related to scalability and practical implementation. Moreover, they are not suitable for real-time targeting detection. With the exception of @cite_18 , no previous work has been implemented as a tool for end-users. Most of them, including @cite_18 , rely on content-based analysis, thereby suffering from scalability issues and an inability to detect indirect targeting.
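A toy illustration of the shadow-account methodology described above: each shadow account replays a subset of the persona's actions, and a candidate cause for an ad is an action shared by every account that saw it and absent from the accounts that did not. The data layout and the strong intersection rule are simplifications, not the actual XRay or Sunlight algorithms.

```python
def candidate_causes(accounts, ad_id):
    """accounts: list of (set_of_actions, set_of_ads_seen) pairs."""
    seen = [acts for acts, ads in accounts if ad_id in ads]
    unseen = [acts for acts, ads in accounts if ad_id not in ads]
    if not seen:
        return set()
    common = set.intersection(*seen)      # actions shared by every viewer of the ad
    for acts in unseen:
        common -= acts                    # drop actions that also occur without the ad
    return common

accounts = [
    ({"visit:sports", "visit:cars"}, {"ad42"}),
    ({"visit:cars", "visit:news"},   {"ad42"}),
    ({"visit:sports", "visit:news"}, set()),
]
print(candidate_causes(accounts, "ad42"))   # {'visit:cars'}
```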
{ "abstract": [ "Today's Web services - such as Google, Amazon, and Facebook - leverage user data for varied purposes, including personalizing recommendations, targeting advertisements, and adjusting prices. At present, users have little insight into how their data is being used. Hence, they cannot make informed choices about the services they choose. To increase transparency, we developed XRay, the first fine-grained, robust, and scalable personal data tracking system for the Web. XRay predicts which data in an arbitrary Web account (such as emails, searches, or viewed products) is being used to target which outputs (such as ads, recommended products, or prices). XRay's core functions are service agnostic and easy to instantiate for new services, and they can track data within and across services. To make predictions independent of the audited service, XRay relies on the following insight: by comparing outputs from different accounts with similar, but not identical, subsets of data, one can pinpoint targeting through correlation. We show both theoretically, and through experiments on Gmail, Amazon, and YouTube, that XRay achieves high precision and recall by correlating data from a surprisingly small number of extra accounts.", "We present Sunlight, a system that detects the causes of targeting phenomena on the web -- such as personalized advertisements, recommendations, or content -- at large scale and with solid statistical confidence. Today's web is growing increasingly complex and impenetrable as myriad of services collect, analyze, use, and exchange users' personal information. No one can tell who has what data, for what purposes they are using it, and how those uses affect the users. The few studies that exist reveal problematic effects -- such as discriminatory pricing and advertising -- but they are either too small-scale to generalize or lack formal assessments of confidence in the results, making them difficult to trust or interpret. Sunlight brings a principled and scalable methodology to personal data measurements by adapting well-established methods from statistics for the specific problem of targeting detection. Our methodology formally separates different operations into four key phases: scalable hypothesis generation, interpretable hypothesis formation, statistical significance testing, and multiple testing correction. Each phase bears instantiations from multiple mechanisms from statistics, each making different assumptions and tradeoffs. Sunlight offers a modular design that allows exploration of this vast design space. We explore a portion of this space, thoroughly evaluating the tradeoffs both analytically and experimentally. Our exploration reveals subtle tensions between scalability and confidence. Sunlight's default functioning strikes a balance to provide the first system that can diagnose targeting at fine granularity, at scale, and with solid statistical justification of its results. We showcase our system by running two measurement studies of targeting on the web, both the largest of their kind. Our studies -- about ad targeting in Gmail and on the web -- reveal statistically justifiable evidence that contradicts two Google statements regarding the lack of targeting on sensitive and prohibited topics.", "The intrusiveness and the increasing invasiveness of online advertising have, in the last few years, raised serious concerns regarding user privacy and Web usability. 
As a reaction to these concerns, we have witnessed the emergence of a myriad of ad-blocking and antitracking tools, whose aim is to return control to users over advertising. The problem with these technologies, however, is that they are extremely limited and radical in their approach: users can only choose either to block or allow all ads. With around 200 million people regularly using these tools, the economic model of the Web—in which users get content free in return for allowing advertisers to show them ads—is at serious peril. In this article, we propose a smart Web technology that aims at bringing transparency to online advertising, so that users can make an informed and equitable decision regarding ad blocking. The proposed technology is implemented as a Web-browser extension and enables users to exert fine-grained control over advertising, thus providing them with certain guarantees in terms of privacy and browsing experience, while preserving the Internet economic model. Experimental results in a real environment demonstrate the suitability and feasibility of our approach, and provide preliminary findings on behavioral targeting from real user browsing profiles.", "To partly address people's concerns over web tracking, Google has created the Ad Settings webpage to provide information about and some choice over the profiles Google creates on users. We present AdFisher, an automated tool that explores how user behaviors, Google's ads, and Ad Settings interact. AdFisher can run browser-based experiments and analyze data using machine learning and significance tests. Our tool uses a rigorous experimental design and statistical analysis to ensure the statistical soundness of our results. We use AdFisher to find that the Ad Settings was opaque about some features of a user's profile, that it does provide some choice on ads, and that these choices can lead to seemingly discriminatory ads. In particular, we found that visiting webpages associated with substance abuse changed the ads shown but not the settings page. We also found that setting the gender to female resulted in getting fewer instances of an ad related to high paying jobs than setting it to male. We cannot determine who caused these findings due to our limited visibility into the ad ecosystem, which includes Google, advertisers, websites, and users. Nevertheless, these results can form the starting point for deeper investigations by either the companies themselves or by regulatory bodies." ], "cite_N": [ "@cite_37", "@cite_47", "@cite_18", "@cite_53" ], "mid": [ "1630743940", "2045645172", "2266124366", "2951240445" ] }
0
1907.01824
2954312627
Automatic cover detection -- the task of finding in an audio database all the covers of one or several query tracks -- has long been seen as a challenging theoretical problem in the MIR community and as an acute practical problem for authors and composers societies. Original algorithms proposed for this task have proven their accuracy on small datasets, but are unable to scale up to modern real-life audio corpora. On the other hand, faster approaches designed to process thousands of pairwise comparisons resulted in lower accuracy, making them unsuitable for practical use. In this work, we propose a neural network architecture that is trained to represent each track as a single embedding vector. The computation burden is therefore left to the embedding extraction -- that can be conducted offline and stored, while the pairwise comparison task reduces to a simple Euclidean distance computation. We further propose to extract each track's embedding out of its dominant melody representation, obtained by another neural network trained for this task. We then show that this architecture improves state-of-the-art accuracy both on small and large datasets, and is able to scale to query databases of thousands of tracks in a few seconds.
Another type of method has been proposed to alleviate the cost of the comparison function and to shift the burden to the audio features extraction function -- which can be done offline and stored. The general principle is to encode each audio track as a single scalar or vector -- its embedding -- and to reduce the similarity computation to a simple Euclidean distance between embeddings. Originally, embeddings were for instance computed as a single hash encoding a succession of pitch landmarks @cite_43 , or as a vector obtained by PCA dimensionality reduction of a chromagram's 2D-DFT @cite_32 or with locality-sensitive hashing of melodic excerpts @cite_25 .
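To make one of these classical embedding recipes concrete, the following numpy/scikit-learn sketch computes magnitude 2D-DFTs of fixed-size chromagram patches, aggregates them, and reduces the result with PCA so that similarity becomes a Euclidean distance between small vectors. Patch length, aggregation and PCA dimension are illustrative choices, not the exact settings of the cited works.

```python
# Rough sketch of a 2D-DFT + PCA chroma embedding for cover retrieval.
import numpy as np
from sklearn.decomposition import PCA

def dft_feature(chroma, patch_len=75):
    """chroma: (12, T) beat-synchronous chromagram -> fixed-length feature."""
    patches = [chroma[:, i:i + patch_len]
               for i in range(0, chroma.shape[1] - patch_len + 1, patch_len)]
    mags = [np.abs(np.fft.fft2(p)).ravel() for p in patches]  # shift-invariant
    return np.median(mags, axis=0)

rng = np.random.default_rng(0)
chromas = [rng.random((12, 300)) for _ in range(8)]           # stand-in tracks
feats = np.stack([dft_feature(c) for c in chromas])
embeddings = PCA(n_components=4).fit_transform(feats)         # compact vectors

# cover likelihood ~ small Euclidean distance between embeddings
d01 = np.linalg.norm(embeddings[0] - embeddings[1])
```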
{ "abstract": [ "", "Searching audio collections using high-level musical descriptors is a difficult problem, due to the lack of reliable methods for extracting melody, harmony, rhythm, and other such descriptors from unstructured audio signals. In this paper, we present a novel approach to melody-based retrieval in audio collections. Our approach supports audio, as well as symbolic queries and ranks results according to melodic similarity to the query. We introduce a beat-synchronous melodic representation consisting of salient melodic lines, which are extracted from the analyzed audio signal. We propose the use of a 2D shift-invariant transform to extract shift-invariant melodic fragments from the melodic representation and demonstrate how such fragments can be indexed and stored in a song database. An efficient search algorithm based on locality-sensitive hashing is used to perform retrieval according to similarity of melodic fragments. On the cover song detection task, good results are achieved for audio, as well as for symbolic queries, while fast retrieval performance makes the proposed system suitable for retrieval in large databases.", "Large-scale cover song recognition involves calculating itemto-item similarities that can accommodate differences in timing and tempo, rendering simple Euclidean measures unsuitable. Expensive solutions such as dynamic time warping do not scale to million of instances, making them inappropriate for commercial-scale applications. In this work, we transform a beat-synchronous chroma matrix with a 2D Fourier transform and show that the resulting representation has properties that fit the cover song recognition task. We can also apply PCA to efficiently scale comparisons. We report the best results to date on the largest available dataset of around 18,000 cover songs amid one million tracks, giving a mean average precision of 3.0 ." ], "cite_N": [ "@cite_43", "@cite_25", "@cite_32" ], "mid": [ "", "2097461116", "1486009449" ] }
COVER DETECTION USING DOMINANT MELODY EMBEDDINGS
Covers are different interpretations of the same original musical work. They usually share a similar melodic line, but typically differ greatly in one or several other dimensions, such as their structure, tempo, key, instrumentation, genre, etc. Automatic cover detection -the task of finding in an audio database all the covers of one or several query tracks -has long been seen as a challenging theoretical problem in MIR. It is also now an acute practical problem for copyright owners facing continuous expansion of usergenerated online content. Cover detection is not stricto sensu a classification problem: due to the ever growing amount of musical works (the classes) and the relatively small number of covers per work, the actual question is not so much "to which work this track belongs to ?" as "to which other tracks this track is the most similar ?". Formally, cover detection therefore requires to establish a similarity relationship S ij between a query track A i and a reference track B j . It implies the composite of a feature extraction function f followed by a pairwise comparison function g, expressed as S ij = g(f (A i ), f (B j )). If f and g are independent, the feature extraction of the reference tracks B j can be done offline and stored. The online feature extraction cost is then linear in the number of queries, while pairwise comparisons cost without optimisation scales quadratically in the number of tracks [16]. Efficient cover detection algorithms thus require a fast pairwise comparison function g. Comparing pairs of entire sequences, as DTW does, scales quadratically in the length of the sequences and becomes quickly prohibitive. At the opposite, reducing g to a simple Euclidean distance computation between tracks embeddings is independent of the length of the sequences. In this case, the accuracy of the detection entirely relies on the ability of f to extract the common musical facets between different covers. In this work, we describe a neural network architecture mapping each track to a single embedding vector, and trained to minimize cover pairs Euclidean distance in the embeddings space, while maximizing it for noncover pairs. We leverage on recent breakthroughs in dominant melody extraction, and show that the use of dominant melody embeddings yield promising performances both in term of accuracy and scalability. The rest of the paper is organized as follow: we review in §2 the main concepts used in this work. We detail our method in §3, and describe and discuss in §4 and §5 the different experiments conducted and their results. We finally present a comparison with existing methods in §6. We conclude with future improvements to bring to our method. Cover detection Successful approaches in cover detection used an input representation preserving common musical facets between different versions, in particular dominant melody [19,27,40], tonal progression -typically a sequence of chromas [10,12,33,39] or chords [2], or a fusion of both [11,29]. Most of these approaches then computed a similarity score between pairs of melodic and/or harmonic sequences, typically a cross-correlation [10], a variant of the DTW algorithm [12,20,33,39], or a combination of both [25]. These approaches lead to good results when evaluated on small datasets -at most a few hundreds of tracks, but are not scalable beyond due to their expensive comparison function. 
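The f / g decomposition discussed above can be sketched in a few lines: reference embeddings are computed once offline and stored, and each query then costs one feature extraction plus a vectorized Euclidean distance against the stored matrix. In this sketch random unit vectors stand in for the output of any extractor f (such as the network proposed in this paper).

```python
# Offline reference indexing + online Euclidean lookup (stand-in embeddings).
import numpy as np

rng = np.random.default_rng(0)
ref_embeddings = rng.normal(size=(10_000, 512))                # f(B_j), offline
ref_embeddings /= np.linalg.norm(ref_embeddings, axis=1, keepdims=True)

def query(query_embedding, ref_embeddings, top_k=10):
    """Online g: rank references by Euclidean distance to the query embedding."""
    dists = np.linalg.norm(ref_embeddings - query_embedding, axis=1)
    order = np.argsort(dists)[:top_k]
    return order, dists[order]

q = ref_embeddings[42] + 0.01 * rng.normal(size=512)           # a perturbed "cover"
idx, d = query(q / np.linalg.norm(q), ref_embeddings)
print(idx[0])                                                  # -> 42, the original
```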
Faster methods have recently been proposed, based on efficient comparison of all possible subsequences pairs between chroma representations [34], or similarity search between 2D-DFT sequences derived from CQTs overlapping windows [31], but remain too costly to be scalable to query large modern audio databases. Another type of method has been proposed to alleviate the cost of the comparison function and to shift the burden to the audio features extraction function -which can be done offline and stored. The general principle is to encode each audio track as a single scalar or vector -its embedding -and to reduce the similarity computation to a simple Euclidean distance between embeddings. Originally, embeddings were for instance computed as a single hash encoding a succession of pitch landmarks [3], or as a vector obtained by PCA dimensionality reduction of a chromagram's 2D-DFT [4] or with locality-sensitive hashing of melodic excerpts [19]. As for many other MIR applications, ad-hoc -and somewhat arbitrary -hand-crafted features extraction was progressively replaced with data-driven automatic feature learning [15]. Different attempts to learn common features between covers have since been proposed: in particular, training a k-means algorithm to learn to extract an embedding out of chromagram's 2D-DFT lead to significant results improvements on large datasets [16]. Similar approaches, commonly referred to as metric learning approaches, have been used in different MIR contexts, such as music recommendation [21,41], live song identification [38], music similarity search [24], and recently cover detection [23]. Metric learning Although the concept can be traced back to earlier works [1,8], the term of metric learning was probably coined first in [43] to address this type of clustering tasks where the objective is merely to assess whether different samples are similar or dissimilar. It has since been extensively used in the image recognition field in particular [14,36,37]. The principle is to learn a mapping between the input space and a latent manifold where a simple distance measure (such as Euclidean distance) should approximate the neighborhood relationships in the input space. There is however a trivial solution to the problem, where the function ends up mapping all the examples to the same point. Contrastive Loss was introduced to circumvent this problem, aiming at simultaneously pulling similar pairs together and pushing dissimilar pairs apart [13]. However, when the amount of labels becomes larger, the number of dissimilar pairs becomes quickly intractable. It was moreover observed in practice that once the network has become reasonably good, negative pairs become relatively easy to discern, which stalls the training of the discriminative model. Pair mining is the strategy of training the model only with hard pairs, i.e. positive (resp. nega-tive) pairs with large (resp. small) distances [35]. Further improvement was introduced with the triplet loss, which is used to train a model to map each sample to an embedding that is closer to all of its positive counterparts than it is to all of its negative counterparts [30]. Formally, for all triplets {a, p, n} where a is an anchor, and p or n is one of its positive or negative example, respectively, the loss to minimize is expressed as = max(0, d ap + α − d an ), where α is a margin and d ap and d an are the distances between each anchor a and p or n, respectively. 
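The triplet loss recalled at the end of the previous paragraph is a one-liner; the minimal numpy transcription below uses squared Euclidean distances and a margin alpha. In practice it is computed batch-wise over mined triplets, as described later.

```python
# Triplet loss: max(0, d_ap + alpha - d_an) with squared Euclidean distances.
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=1.0):
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(0.0, d_ap + alpha - d_an)

a = np.array([0.1, 0.9]); p = np.array([0.2, 0.8]); n = np.array([0.9, 0.1])
print(triplet_loss(a, p, n))  # 0.0: the negative is already far enough
```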
Dominant melody extraction Dominant melody extraction has long been another challenging problem in the MIR community [18,28,42]. A major breakthrough was brought recently with the introduction of a convolutional network that learns to extract the dominant melody out of the audio Harmonic CQT (HCQT) [7]. The HCQT is an elegant and astute representation of the audio signal in 3 dimensions (time, frequency, harmonic), stacking along the third dimension several standard CQTs computed at different minimal multiple frequencies. Harmonic components of the audio signal are thus represented along the third dimension and localized at the same location along the first and second dimensions. This representation is particularly suitable for melody detection, as it can be directly processed by convolutional networks, whose 3-D filters can be trained to localize the harmonic components in the time and frequency plane. In a recent work [9], we suggested in an analogy with image processing that dominant melody extraction can be seen as a type of image segmentation, where contours of the melody have to be isolated from the surrounding background. We have thus proposed for dominant melody estimation an adaptation of U-Net [26] -a model originally designed for medical image segmentation -which slightly improves over [7]. PROPOSED METHOD We present here the input data used to train our network, the network architecture itself and its training loss. Input data We have used as input data the dominant melody 2D representation (F0-CQT) obtained by the network we proposed in [9]. The frequency and time resolutions required for melody extraction (60 bins per octave and 11 ms per time frame) are not needed for cover detection. Moreover, efficient triplet loss training requires large training batches, as we will see later, so we reduced the data dimensionality as depicted in Figure 2. The F0-CQT is a) trimmed to keep only 3 octaves around its mean pitch (180 bins along the frequency axis), and only the first 3 minutes of the track (15500 time frames) -if shorter, the duration is not changed. The resulting matrix is then b) downsampled via bilinear 2D interpolation by a factor of 5. On the frequency axis, the semi-tone resolution is thus reduced from five bins to one, which we considered adequate for cover detection. On the time axis, it is equivalent to a regular downsampling. Finally, as the representations of different tracks with possibly different durations must be batched together during training, the downsampled F0-CQT is c) shrunk or stretched along the time axis by another bilinear interpolation to a fixed number of bins (1024). This operation is equivalent to a tempo change: for the 3 minutes trimmed, shrinking is equivalent to multiplying the tempo by a factor of 3. We argue here that an accelerated or decelerated version of a cover is still a cover of the original track. [Figure 1: the model's successive tensor shapes, from the 1024 × 36 × 1 input down to 5 × 2 × 16K, then average pooling (1 × 1 × 16K) and a dense + L2-normalization layer producing the 1 × 1 × E embedding; each block is batch norm + conv2d + pool2d.] Model The proposed model is a simple convolutional network pictured in Figure 1. As we are constrained by the input data shape, whose time dimension is much larger than its frequency dimension, only five layer blocks are needed. Each layer block consists of a batch normalization layer, a convolution layer with 3 × 3 kernels and a mean-pooling layer with a 3 × 2 kernel and 3 × 2 stride in order to reduce time dimensionality faster than frequency dimensionality.
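A rough PyTorch sketch of this architecture is given below: five blocks of batch norm, 3 × 3 convolution and 3 × 2 mean pooling (time reduced faster than frequency), with the channel width doubling from K at each block, then global average pooling and a dense layer followed by L2 normalization. Padding choices are guesses, and the dropout described next is omitted for brevity; this is not the authors' exact implementation.

```python
# Sketch of the cover-embedding CNN (K=64, E=512 as in the paper's experiments).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoverEmbeddingNet(nn.Module):
    def __init__(self, k=64, embed_dim=512, n_blocks=5):
        super().__init__()
        blocks, in_ch = [], 1
        for i in range(n_blocks):
            out_ch = k * (2 ** i)                       # K, 2K, 4K, 8K, 16K
            blocks += [
                nn.BatchNorm2d(in_ch),
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.AvgPool2d(kernel_size=(2, 3), stride=(2, 3)),  # (freq, time)
            ]
            in_ch = out_ch
        self.blocks = nn.Sequential(*blocks)
        self.fc = nn.Linear(in_ch, embed_dim)

    def forward(self, x):                  # x: (batch, 1, 36 freq bins, 1024 frames)
        h = self.blocks(x)
        h = h.mean(dim=(2, 3))             # average over frequency and time
        return F.normalize(self.fc(h), p=2, dim=1)      # L2-normalized embedding

emb = CoverEmbeddingNet()(torch.randn(2, 1, 36, 1024))  # -> shape (2, 512)
```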
A dropout rate of 0.1, 0.1, 0.2 and 0.3 is applied to the blocks 2, 3, 4 and 5, respectively. The first convolutional layer has K kernels, and this number is doubled at each level (i.e. the deeper layer outputs 2 4 K-depth tensors). The penultimate layer averages along frequency and time axes to obtain a vector. A last dense layer outputs and L2-normalizes the final embedding vector of size E. Our assumption behind the choice of this convolutional architecture is that we expect it to learn similar patterns in the dominant melody, at different scales (tempo invariance) and locations (key and structure invariance). Objective loss We use a triplet loss with online semi-hard negative pairs mining as in [30]. In practice, triplet mining is done within each training batch: instead of using all possible triplets, each track in the batch is successively considered as the anchor, and compared with all its covers in the batch. For each of these positives pairs, if there are negatives such as d an < d ap , then only the one with the highest d an is kept. If no such negative exist, then only the one with the lowest d an is kept. Other negatives are not considered. Model is fit with Adam optimizer [17], with initial learning rate at 1e −4 , divided by 2 each time the loss on the evaluation set does not decrease after 5k training steps. Training is stopped after 100k steps, or if the learning rate falls below 1e −7 . The triplet loss was computed using squared Euclidean distances (i.e. distances are within the [0, 4] range), and the margin was set to α = 1. Dataset As metric learning typically requires large amount of data, we fetched from internet the audio of cover tracks provided by the SecondHandSongs website API 1 . Only works with 5 to 15 covers, and only tracks lasting between 60 and 300 seconds where considered, for a total of W = 7460 works and T = 62310 tracks. The HCQT was computed for those 62310 tracks as detailed in [7], i.e. with f min = 32.7 Hz and 6 harmonics. Each CQT spans 6 octaves with a resolution of 5 bins per semi-tone, and a frame duration of~11 ms. The implementation was done with the Librosa library [22]. The dominant melody was extracted for these 62310 HCQT with the network we described in [9], and the output was trimmed, downsampled and resized as described in §3.1. PRELIMINARY EXPERIMENTS We present here some experiments conducted to develop the system. The 7460 works were split into disjoint train and evaluation sets, with respectively 6216 and 1244 works and five covers per work. The evaluation set represents 20% of the training set, which we considered fair enough given the total amount of covers. The same split has been used for all preliminary experiments. Metrics Ideally, we expect the model to produce embeddings such that cover pair distances are low and non-cover pair distances are high, with a large gap between the two distributions. In the preliminary experiments, we have thus evaluated the separation of the cover pairs distance distribution p c (d) from the non-cover pairs distance distribution p nc (d) with two metrics: -the ROC curve plots the true positives rate (covers, TPR) versus the false positive rate (non-covers, FPR) for different distance d thresholds. We report the area under the ROC curve (AuC), which gives a good indication about the distributions separation. We also report the TPR corresponding to an FPR of 5% (TPR@5%), as it gives an operational indication about the model's discriminative power. 
-we also report the Bhattacharyya coefficient (BC), expressed as d p c (d)p nc (d), as it directly measures the separation between the distributions (smaller is better) [6]. Influence of input data We first compared the results obtained for different inputs data: chromas and CQT computed using Librosa [22], and the dominant melody computed as described in 3.1. As shown on Figure 3 (left), dominant melody yields the best results. It does not imply that melody features are more suited than tonal features for cover detection, but shows that convolutional kernels are better at learning similar patterns at different scales and locations across different tracks when the input data is sparse, which is not the case for chromas and CQT. Results obtained when trimming the F0-CQT with various octaves and time spans are also shown Figure 3. It appears that keeping 3 octaves around the mean pitch of the dominant melody and a duration of 2 to 3 minutes yields the best results. Smaller spans do not include enough information, while larger spans generate confusion. All other results presented below are thus obtained with the dominant melody 2D representation as input data, and a span of 3 octaves and 180 seconds for each track. Influence of model and training parameters We then compared the results obtained for different numbers of kernels in the first layer (K) and the corresponding sizes of the embeddings (E). As shown on Figure 4 (left), results improve for greater K, which was expected. However, increasing K above a certain point does not improve the results further, as the model has probably already enough freedom to encode common musical facets. We have then compared the results obtained for different sizes of training batches (B). As shown on Figure 4 (right), results improve with larger B: within larger batches, each track will be compared with a greater number of non-covers, improving the separation between clusters of works. A closer look at the distances shows indeed that the negative pairs distance distribution p nc (d) gets narrower for larger batches (not showed here). Due to GPU memory constraints, we have not investigated values above B=100. All other results presented below are obtained with K=64, E=512 and B=100. LARGE SCALE LOOKUP EXPERIMENTS We now present experiments investigating the realistic use case, i.e. large audio collections lookup. When querying an audio collection, each query track can be of three kinds: a) it is already present in the database, b) it is a cover of some other track(s) already in the database, or c) it is a track that has no cover in the database. The case a) corresponds to the trivial case, where the query will produce a distance equal to zero when compared with itself, while case c) corresponds to the hard case where neither the query or any cover of the query have been seen during training. We investigate here the case b), where the query track itself has never been seen during training, but of which at least one cover has been seen during training. Metrics In these experiments, we are interested in measuring our method's ability to find covers in the reference set when queried with various unknown tracks. This is commonly addressed with the metrics proposed by MIREX 2 for the cover song identification task: the mean rank of first correct result (MR1), the mean number of true positives in the top ten positions (MT10) and the Mean Average Precision (MAP). We refer the reader to [32] for a detailed review of these standard metrics. 
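The separation metrics used in the preliminary experiments can be computed directly from the two sets of pair distances, as sketched below with numpy and scikit-learn (covers are the positive class, and smaller distance means a higher score). Note that the expression for BC in the text above lost its square root during extraction; the standard definition is BC = ∫ sqrt(p_c(d) p_nc(d)) dd, approximated here with histograms. Bin count and conversions are illustrative.

```python
# ROC AuC, TPR@5% FPR and Bhattacharyya coefficient from pair distances.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def separation_metrics(cover_dists, noncover_dists, n_bins=100):
    """cover_dists, noncover_dists: 1-D arrays of pair distances."""
    cover_dists, noncover_dists = np.asarray(cover_dists), np.asarray(noncover_dists)
    y_true = np.r_[np.ones_like(cover_dists), np.zeros_like(noncover_dists)]
    scores = -np.r_[cover_dists, noncover_dists]        # small distance = cover
    auc = roc_auc_score(y_true, scores)
    fpr, tpr, _ = roc_curve(y_true, scores)
    tpr_at_5 = np.interp(0.05, fpr, tpr)                # TPR at 5% FPR

    bins = np.linspace(0, max(cover_dists.max(), noncover_dists.max()), n_bins + 1)
    p_c, _ = np.histogram(cover_dists, bins=bins, density=True)
    p_nc, _ = np.histogram(noncover_dists, bins=bins, density=True)
    bc = np.sum(np.sqrt(p_c * p_nc) * np.diff(bins))    # Bhattacharyya coefficient
    return auc, tpr_at_5, bc
```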
We also report here the TPR@5%, already used in the premilinary experiments. Structuring the embeddings space We study here the role of the training set in structuring the embeddings space, and in particular the role of the number of covers of each work. More precisely, we tried to show evidence of the pushing effect (when a query is pushed away from all its non-covers clusters) and the pulling effect (when a query is pulled towards its unique covers cluster). To this aim, we built out of our dataset a query and a reference set. The query set includes 1244 works with five covers each. The reference set includes P of the remaining covers for each of the 1244 query works, and N covers for each other work not included in the query set ( Figure 5). Pushing covers We first train our model on the reference set with fixed P =5. We compute query tracks embeddings with the trained model, compute pairwise distances between query and reference embeddings, as well as the different metrics. We repeat this operation for different values of N ∈ [2, ..., 10], and report results on Figure 6 (left). We report MR1's percentile (defined here as MR1 divided by the total of reference tracks, in percent) instead of MR1, because the number of reference tracks varies with N . The MAP only slightly decreases as N increases, which indicates that the precision remains stable, even though the number of examples to sort and to rank is increasing. Moreover, the MR1 percentile and the TPR@5% clearly improve as N increases. As P is fixed, it means that the ranking and the separation between covers and non-covers clusters is improving as the non-queries clusters are consolidated, which illustrates the expected pushing effect. Pulling covers We reproduce the same protocol again, but now with N =5 fixed and for different values of P ∈ [2, ..., 10]. We report results on Figure 6 (right). It appears clearly that all metrics improve steadily as P increases, even though the actual query itself has never been seen during training. As N is fixed, this confirms the intuition that the model will get better in locating unseen tracks closer to their work's cluster if trained with higher number of covers of this work, which illustrates the expected pulling effect. Operational meaning of p c (d) and p nc (d) We now investigate further the distance distributions of cover and non-cover pairs. To this aim, we randomly split our entire dataset into a query and a reference set with a 1:5 ratio (resp. 10385 and 51925 tracks). Query tracks are thus not seen during training, but might have zero or more covers in the reference set. Covers probability Computing queries vs. references pairwise distances gives the distributions p c (d) and p nc (d) shown on Figure 7 (left). Using Bayes' theorem, it is straightforward to derive from p c (d) and p nc (d) the probability for a pair of tracks to be covers given their distance d (Figure 7, right). This curve has an operational meaning, as it maps a pair's distance with a probability of being covers without having to rank it among the entire dataset. Easy and hard covers We repeat the previous test five times with random splits, and report metrics in Table 1. At first sight, MR1 and MT@10 could seem inconsistent, but a closer look at the results gives an explanation. To illustrate what happens, imagine a set of five queries where the first query ranks ten covers correctly in the first ten positions, e.g. 
because they are all very similar, while all other four queries have their first correct answer at rank 100. This would yield to MT@10=2.0, and MR1=80.2. This kind of discrepancy between MR1 and MT@10 reflects the fact that some works in our dataset have similar covers that are easily clustered, while other are much more difficult to discriminate. This can be observed on the positive pairs distribution p c (d) on Figure 7 ( 6. COMPARISON WITH OTHER METHODS Comparison on small dataset We first compared with two recent methods [31,34], who reported results for a small dataset of 50 works with 7 covers each. The query set includes five covers of each work (250 tracks), while the reference set includes each work's remaining two covers (100 tracks). As this dataset is not publicly available anymore, we have mimicked it extracting randomly 350 tracks out of own dataset 3 . Our data-driven model can however not be trained with only 100 tracks of the reference set, as it would overfit immediately. We have thus trained our model on our full dataset, with two different setups: a) excluding the 350 tracks reserved for the query and reference sets. b) excluding the 250 tracks of the query set, but including the 100 tracks of the reference set. We repeated this operation ten times for each setup, and report the mean and standard deviation on Table 2 for the same metrics used in [31,34], as well as the p-value obtained by a statistical significance t-test carried out on results series. Table 2: Comparison between recent method [31,34] and our proposed method on a small dataset (precision at 10 P@10 is reported instead of MT@10. As there are only two covers per work in the reference set, P@10 maximum value is 0.2). Our method significantly improve previous results: for the hardest case a) where the model has not seen any queries work during training, embeddings space has been sufficiently structured to discriminate the unseen works from the other training clusters (pushing effect). For the easier case b), the pulling effect from the known queries covers provides further improvement. Comparison on large dataset We also compared with [16], who is to our knowledge the last attempt to report results for thousands of queries and references -a more realistic use case. This paper reported results on the SecondHandSong (SHS) subset of the Mil-lionSong dataset (MSD) [5] for two experiments: a) only the training set of 12960 covers of 4128 works was used both as the query and reference sets. b) the SHS MSD test set of 5236 covers of 1726 works was used to query the entire MSD used as reference. The SHS MSD is not available anymore. However, as our dataset has also been built from the SHS covers list, we consider that results can be compared 3 . We have therefore randomly generated out of our dataset a training and a test set mimicking the original ones. We trained our model on the training set, and perform the pairwise distances computation between the query and reference sets (as the query set is included in the reference set, we excluded for comparison the pairs of the same track). For experiment b), we have used our entire dataset as reference set as we do not have one million songs. We have repeated this operation five times and report in Table 3 the mean and standard deviations for the same metrics used in [16], as well as MR1, MT@10 and the p-value of the t-test carried out. Our method significantly improve previous results. 
For case a), results are notably good, which is not surprising as the model has already seen all the queries during the training. Case b) is on the other hand the hardest possible configuration, where the model has not seen any covers of the queries works during training, and clusterisation of unseen tracks entirely relies on the pushing effect. As to our method's computation times, we observed on a single Nvidia GPU Titan XP for a~3 mn audio track: 10 sec for F0 extraction,~1 sec for embeddings computation, and less than 0.2 sec for distances computation with the full dataset embeddings (previously computed offline). CONCLUSION In this work, we presented a method for cover detection, using a convolutional network which encodes each track as a single vector, and is trained to minimize cover pairs Euclidean distance in the embeddings space, while maximizing it for non-covers. We show that extracting embeddings out of the dominant melody 2D representation drastically yields better results compared to other spectral representations: the convolutional model learns to identify similar patterns in the dominant melody at different scales and locations (tempo, key and structure invariance). We have also shown that our method scales to audio databases of thousands of tracks. Once trained for a given database, it can be used to assess the probability for an unseen track to be a cover of any known track without having to be compared to the entire database. We have finally shown that our method improves previous methods both on small and large datasets. In the future, we plan to grow our training dataset to address the realistic use case where collections of millions of tracks should be queried: as for many other data-driven problems, will the cover detection problem be solved if the embeddings space is sufficiently structured?
4,391
1907.01824
2954312627
Automatic cover detection -- the task of finding in an audio database all the covers of one or several query tracks -- has long been seen as a challenging theoretical problem in the MIR community and as an acute practical problem for authors and composers societies. Original algorithms proposed for this task have proven their accuracy on small datasets, but are unable to scale up to modern real-life audio corpora. On the other hand, faster approaches designed to process thousands of pairwise comparisons resulted in lower accuracy, making them unsuitable for practical use. In this work, we propose a neural network architecture that is trained to represent each track as a single embedding vector. The computation burden is therefore left to the embedding extraction -- that can be conducted offline and stored, while the pairwise comparison task reduces to a simple Euclidean distance computation. We further propose to extract each track's embedding out of its dominant melody representation, obtained by another neural network trained for this task. We then show that this architecture improves state-of-the-art accuracy both on small and large datasets, and is able to scale to query databases of thousands of tracks in a few seconds.
The principle is to learn a mapping between the input space and a latent manifold where a simple distance measure (such as Euclidean distance) should approximate the neighborhood relationships in the input space. There is however a trivial solution to the problem, where the function ends up mapping all the examples to the same point. Contrastive Loss was introduced to circumvent this problem, aiming at simultaneously pulling similar pairs together and pushing dissimilar pairs apart @cite_35 .
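A minimal numpy sketch of this contrastive loss is given below: similar pairs are penalized by their squared distance, dissimilar pairs are pushed apart up to a margin. The margin value and the absence of the usual 1/2 factor are illustrative simplifications.

```python
# Contrastive loss: pull similar pairs together, push dissimilar pairs apart.
import numpy as np

def contrastive_loss(x1, x2, is_similar, margin=1.0):
    d = np.linalg.norm(x1 - x2)
    if is_similar:
        return d ** 2                       # pull similar pairs together
    return max(0.0, margin - d) ** 2        # push dissimilar pairs up to the margin
```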
{ "abstract": [ "Dimensionality reduction involves mapping a set of high dimensional input points onto a low dimensional manifold so that 'similar\" points in input space are mapped to nearby points on the manifold. We present a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold. The learning relies solely on neighborhood relationships and does not require any distancemeasure in the input space. The method can learn mappings that are invariant to certain transformations of the inputs, as is demonstrated with a number of experiments. Comparisons are made to other techniques, in particular LLE." ], "cite_N": [ "@cite_35" ], "mid": [ "2138621090" ] }
COVER DETECTION USING DOMINANT MELODY EMBEDDINGS
Covers are different interpretations of the same original musical work. They usually share a similar melodic line, but typically differ greatly in one or several other dimensions, such as their structure, tempo, key, instrumentation, genre, etc. Automatic cover detection -the task of finding in an audio database all the covers of one or several query tracks -has long been seen as a challenging theoretical problem in MIR. It is also now an acute practical problem for copyright owners facing continuous expansion of usergenerated online content. Cover detection is not stricto sensu a classification problem: due to the ever growing amount of musical works (the classes) and the relatively small number of covers per work, the actual question is not so much "to which work this track belongs to ?" as "to which other tracks this track is the most similar ?". Formally, cover detection therefore requires to establish a similarity relationship S ij between a query track A i and a reference track B j . It implies the composite of a feature extraction function f followed by a pairwise comparison function g, expressed as S ij = g(f (A i ), f (B j )). If f and g are independent, the feature extraction of the reference tracks B j can be done offline and stored. The online feature extraction cost is then linear in the number of queries, while pairwise comparisons cost without optimisation scales quadratically in the number of tracks [16]. Efficient cover detection algorithms thus require a fast pairwise comparison function g. Comparing pairs of entire sequences, as DTW does, scales quadratically in the length of the sequences and becomes quickly prohibitive. At the opposite, reducing g to a simple Euclidean distance computation between tracks embeddings is independent of the length of the sequences. In this case, the accuracy of the detection entirely relies on the ability of f to extract the common musical facets between different covers. In this work, we describe a neural network architecture mapping each track to a single embedding vector, and trained to minimize cover pairs Euclidean distance in the embeddings space, while maximizing it for noncover pairs. We leverage on recent breakthroughs in dominant melody extraction, and show that the use of dominant melody embeddings yield promising performances both in term of accuracy and scalability. The rest of the paper is organized as follow: we review in §2 the main concepts used in this work. We detail our method in §3, and describe and discuss in §4 and §5 the different experiments conducted and their results. We finally present a comparison with existing methods in §6. We conclude with future improvements to bring to our method. Cover detection Successful approaches in cover detection used an input representation preserving common musical facets between different versions, in particular dominant melody [19,27,40], tonal progression -typically a sequence of chromas [10,12,33,39] or chords [2], or a fusion of both [11,29]. Most of these approaches then computed a similarity score between pairs of melodic and/or harmonic sequences, typically a cross-correlation [10], a variant of the DTW algorithm [12,20,33,39], or a combination of both [25]. These approaches lead to good results when evaluated on small datasets -at most a few hundreds of tracks, but are not scalable beyond due to their expensive comparison function. 
Faster methods have recently been proposed, based on efficient comparison of all possible subsequences pairs between chroma representations [34], or similarity search between 2D-DFT sequences derived from CQTs overlapping windows [31], but remain too costly to be scalable to query large modern audio databases. Another type of method has been proposed to alleviate the cost of the comparison function and to shift the burden to the audio features extraction function -which can be done offline and stored. The general principle is to encode each audio track as a single scalar or vector -its embedding -and to reduce the similarity computation to a simple Euclidean distance between embeddings. Originally, embeddings were for instance computed as a single hash encoding a succession of pitch landmarks [3], or as a vector obtained by PCA dimensionality reduction of a chromagram's 2D-DFT [4] or with locality-sensitive hashing of melodic excerpts [19]. As for many other MIR applications, ad-hoc -and somewhat arbitrary -hand-crafted features extraction was progressively replaced with data-driven automatic feature learning [15]. Different attempts to learn common features between covers have since been proposed: in particular, training a k-means algorithm to learn to extract an embedding out of chromagram's 2D-DFT lead to significant results improvements on large datasets [16]. Similar approaches, commonly referred to as metric learning approaches, have been used in different MIR contexts, such as music recommendation [21,41], live song identification [38], music similarity search [24], and recently cover detection [23]. Metric learning Although the concept can be traced back to earlier works [1,8], the term of metric learning was probably coined first in [43] to address this type of clustering tasks where the objective is merely to assess whether different samples are similar or dissimilar. It has since been extensively used in the image recognition field in particular [14,36,37]. The principle is to learn a mapping between the input space and a latent manifold where a simple distance measure (such as Euclidean distance) should approximate the neighborhood relationships in the input space. There is however a trivial solution to the problem, where the function ends up mapping all the examples to the same point. Contrastive Loss was introduced to circumvent this problem, aiming at simultaneously pulling similar pairs together and pushing dissimilar pairs apart [13]. However, when the amount of labels becomes larger, the number of dissimilar pairs becomes quickly intractable. It was moreover observed in practice that once the network has become reasonably good, negative pairs become relatively easy to discern, which stalls the training of the discriminative model. Pair mining is the strategy of training the model only with hard pairs, i.e. positive (resp. nega-tive) pairs with large (resp. small) distances [35]. Further improvement was introduced with the triplet loss, which is used to train a model to map each sample to an embedding that is closer to all of its positive counterparts than it is to all of its negative counterparts [30]. Formally, for all triplets {a, p, n} where a is an anchor, and p or n is one of its positive or negative example, respectively, the loss to minimize is expressed as = max(0, d ap + α − d an ), where α is a margin and d ap and d an are the distances between each anchor a and p or n, respectively. 
Dominant melody extraction Dominant melody extraction has long been another challenging problem in the MIR community [18,28,42]. A major breakthrough was brought recently with the introduction of a convolutional network that learns to extract the dominant melody out of the audio Harmonic CQT (HCQT) [7]. The HCQT is an elegant and astute representation of the audio signal in 3 dimensions (time, frequency, harmonic), stacking along the third dimension several standard CQTs computed at different minimal multiple frequencies. Harmonic components of the audio signal are thus represented along the third dimension and localized at the same location along the first and second dimensions. This representation is particularly suitable for melody detection, as it can be directly processed by convolutional networks, whose 3-D filters can be trained to localize the harmonic components in the time and frequency plane. In a recent work [9], we suggested in an analogy with image processing that dominant melody extraction can be seen as a type of image segmentation, where contours of the melody have to be isolated from the surrounding background. We have thus proposed for dominant melody estimation an adaptation of U-Net [26] -a model originally designed for medical image segmentation -which slightly improves over [7]. PROPOSED METHOD We present here the input data used to train our network, the network architecture itself and its training loss. Input data We have used as input data the dominant melody 2D representation (F0-CQT) obtained by the network we proposed in [9]. The frequency and time resolutions required for melody extraction (60 bins per octave and 11 ms per time frame) are not needed for cover detection. Moreover, efficient triplet loss training requires large training batches, as we will see later, so we reduced the data dimensionality as depicted in Figure 2. The F0-CQT is a) trimmed to keep only 3 octaves around its mean pitch (180 bins along the frequency axis), and only the first 3 minutes of the track (15500 time frames) -if shorter, the duration is not changed. The resulting matrix is then b) downsampled via bilinear 2D interpolation by a factor of 5. On the frequency axis, the semi-tone resolution is thus reduced from five bins to one, which we considered adequate for cover detection. On the time axis, it is equivalent to a regular downsampling. Finally, as the representations of different tracks with possibly different durations must be batched together during training, the downsampled F0-CQT is c) shrunk or stretched along the time axis by another bilinear interpolation to a fixed number of bins (1024). This operation is equivalent to a tempo change: for the 3 minutes trimmed, shrinking is equivalent to multiplying the tempo by a factor of 3. We argue here that an accelerated or decelerated version of a cover is still a cover of the original track. [Figure 1: the model's successive tensor shapes, from the 1024 × 36 × 1 input down to 5 × 2 × 16K, then average pooling (1 × 1 × 16K) and a dense + L2-normalization layer producing the 1 × 1 × E embedding; each block is batch norm + conv2d + pool2d.] Model The proposed model is a simple convolutional network pictured in Figure 1. As we are constrained by the input data shape, whose time dimension is much larger than its frequency dimension, only five layer blocks are needed. Each layer block consists of a batch normalization layer, a convolution layer with 3 × 3 kernels and a mean-pooling layer with a 3 × 2 kernel and 3 × 2 stride in order to reduce time dimensionality faster than frequency dimensionality.
A dropout rate of 0.1, 0.1, 0.2 and 0.3 is applied to the blocks 2, 3, 4 and 5, respectively. The first convolutional layer has K kernels, and this number is doubled at each level (i.e. the deeper layer outputs 2 4 K-depth tensors). The penultimate layer averages along frequency and time axes to obtain a vector. A last dense layer outputs and L2-normalizes the final embedding vector of size E. Our assumption behind the choice of this convolutional architecture is that we expect it to learn similar patterns in the dominant melody, at different scales (tempo invariance) and locations (key and structure invariance). Objective loss We use a triplet loss with online semi-hard negative pairs mining as in [30]. In practice, triplet mining is done within each training batch: instead of using all possible triplets, each track in the batch is successively considered as the anchor, and compared with all its covers in the batch. For each of these positives pairs, if there are negatives such as d an < d ap , then only the one with the highest d an is kept. If no such negative exist, then only the one with the lowest d an is kept. Other negatives are not considered. Model is fit with Adam optimizer [17], with initial learning rate at 1e −4 , divided by 2 each time the loss on the evaluation set does not decrease after 5k training steps. Training is stopped after 100k steps, or if the learning rate falls below 1e −7 . The triplet loss was computed using squared Euclidean distances (i.e. distances are within the [0, 4] range), and the margin was set to α = 1. Dataset As metric learning typically requires large amount of data, we fetched from internet the audio of cover tracks provided by the SecondHandSongs website API 1 . Only works with 5 to 15 covers, and only tracks lasting between 60 and 300 seconds where considered, for a total of W = 7460 works and T = 62310 tracks. The HCQT was computed for those 62310 tracks as detailed in [7], i.e. with f min = 32.7 Hz and 6 harmonics. Each CQT spans 6 octaves with a resolution of 5 bins per semi-tone, and a frame duration of~11 ms. The implementation was done with the Librosa library [22]. The dominant melody was extracted for these 62310 HCQT with the network we described in [9], and the output was trimmed, downsampled and resized as described in §3.1. PRELIMINARY EXPERIMENTS We present here some experiments conducted to develop the system. The 7460 works were split into disjoint train and evaluation sets, with respectively 6216 and 1244 works and five covers per work. The evaluation set represents 20% of the training set, which we considered fair enough given the total amount of covers. The same split has been used for all preliminary experiments. Metrics Ideally, we expect the model to produce embeddings such that cover pair distances are low and non-cover pair distances are high, with a large gap between the two distributions. In the preliminary experiments, we have thus evaluated the separation of the cover pairs distance distribution p c (d) from the non-cover pairs distance distribution p nc (d) with two metrics: -the ROC curve plots the true positives rate (covers, TPR) versus the false positive rate (non-covers, FPR) for different distance d thresholds. We report the area under the ROC curve (AuC), which gives a good indication about the distributions separation. We also report the TPR corresponding to an FPR of 5% (TPR@5%), as it gives an operational indication about the model's discriminative power. 
-we also report the Bhattacharyya coefficient (BC), expressed as d p c (d)p nc (d), as it directly measures the separation between the distributions (smaller is better) [6]. Influence of input data We first compared the results obtained for different inputs data: chromas and CQT computed using Librosa [22], and the dominant melody computed as described in 3.1. As shown on Figure 3 (left), dominant melody yields the best results. It does not imply that melody features are more suited than tonal features for cover detection, but shows that convolutional kernels are better at learning similar patterns at different scales and locations across different tracks when the input data is sparse, which is not the case for chromas and CQT. Results obtained when trimming the F0-CQT with various octaves and time spans are also shown Figure 3. It appears that keeping 3 octaves around the mean pitch of the dominant melody and a duration of 2 to 3 minutes yields the best results. Smaller spans do not include enough information, while larger spans generate confusion. All other results presented below are thus obtained with the dominant melody 2D representation as input data, and a span of 3 octaves and 180 seconds for each track. Influence of model and training parameters We then compared the results obtained for different numbers of kernels in the first layer (K) and the corresponding sizes of the embeddings (E). As shown on Figure 4 (left), results improve for greater K, which was expected. However, increasing K above a certain point does not improve the results further, as the model has probably already enough freedom to encode common musical facets. We have then compared the results obtained for different sizes of training batches (B). As shown on Figure 4 (right), results improve with larger B: within larger batches, each track will be compared with a greater number of non-covers, improving the separation between clusters of works. A closer look at the distances shows indeed that the negative pairs distance distribution p nc (d) gets narrower for larger batches (not showed here). Due to GPU memory constraints, we have not investigated values above B=100. All other results presented below are obtained with K=64, E=512 and B=100. LARGE SCALE LOOKUP EXPERIMENTS We now present experiments investigating the realistic use case, i.e. large audio collections lookup. When querying an audio collection, each query track can be of three kinds: a) it is already present in the database, b) it is a cover of some other track(s) already in the database, or c) it is a track that has no cover in the database. The case a) corresponds to the trivial case, where the query will produce a distance equal to zero when compared with itself, while case c) corresponds to the hard case where neither the query or any cover of the query have been seen during training. We investigate here the case b), where the query track itself has never been seen during training, but of which at least one cover has been seen during training. Metrics In these experiments, we are interested in measuring our method's ability to find covers in the reference set when queried with various unknown tracks. This is commonly addressed with the metrics proposed by MIREX 2 for the cover song identification task: the mean rank of first correct result (MR1), the mean number of true positives in the top ten positions (MT10) and the Mean Average Precision (MAP). We refer the reader to [32] for a detailed review of these standard metrics. 
We also report here the TPR@5%, already used in the premilinary experiments. Structuring the embeddings space We study here the role of the training set in structuring the embeddings space, and in particular the role of the number of covers of each work. More precisely, we tried to show evidence of the pushing effect (when a query is pushed away from all its non-covers clusters) and the pulling effect (when a query is pulled towards its unique covers cluster). To this aim, we built out of our dataset a query and a reference set. The query set includes 1244 works with five covers each. The reference set includes P of the remaining covers for each of the 1244 query works, and N covers for each other work not included in the query set ( Figure 5). Pushing covers We first train our model on the reference set with fixed P =5. We compute query tracks embeddings with the trained model, compute pairwise distances between query and reference embeddings, as well as the different metrics. We repeat this operation for different values of N ∈ [2, ..., 10], and report results on Figure 6 (left). We report MR1's percentile (defined here as MR1 divided by the total of reference tracks, in percent) instead of MR1, because the number of reference tracks varies with N . The MAP only slightly decreases as N increases, which indicates that the precision remains stable, even though the number of examples to sort and to rank is increasing. Moreover, the MR1 percentile and the TPR@5% clearly improve as N increases. As P is fixed, it means that the ranking and the separation between covers and non-covers clusters is improving as the non-queries clusters are consolidated, which illustrates the expected pushing effect. Pulling covers We reproduce the same protocol again, but now with N =5 fixed and for different values of P ∈ [2, ..., 10]. We report results on Figure 6 (right). It appears clearly that all metrics improve steadily as P increases, even though the actual query itself has never been seen during training. As N is fixed, this confirms the intuition that the model will get better in locating unseen tracks closer to their work's cluster if trained with higher number of covers of this work, which illustrates the expected pulling effect. Operational meaning of p c (d) and p nc (d) We now investigate further the distance distributions of cover and non-cover pairs. To this aim, we randomly split our entire dataset into a query and a reference set with a 1:5 ratio (resp. 10385 and 51925 tracks). Query tracks are thus not seen during training, but might have zero or more covers in the reference set. Covers probability Computing queries vs. references pairwise distances gives the distributions p c (d) and p nc (d) shown on Figure 7 (left). Using Bayes' theorem, it is straightforward to derive from p c (d) and p nc (d) the probability for a pair of tracks to be covers given their distance d (Figure 7, right). This curve has an operational meaning, as it maps a pair's distance with a probability of being covers without having to rank it among the entire dataset. Easy and hard covers We repeat the previous test five times with random splits, and report metrics in Table 1. At first sight, MR1 and MT@10 could seem inconsistent, but a closer look at the results gives an explanation. To illustrate what happens, imagine a set of five queries where the first query ranks ten covers correctly in the first ten positions, e.g. 
because they are all very similar, while all other four queries have their first correct answer at rank 100. This would yield to MT@10=2.0, and MR1=80.2. This kind of discrepancy between MR1 and MT@10 reflects the fact that some works in our dataset have similar covers that are easily clustered, while other are much more difficult to discriminate. This can be observed on the positive pairs distribution p c (d) on Figure 7 ( 6. COMPARISON WITH OTHER METHODS Comparison on small dataset We first compared with two recent methods [31,34], who reported results for a small dataset of 50 works with 7 covers each. The query set includes five covers of each work (250 tracks), while the reference set includes each work's remaining two covers (100 tracks). As this dataset is not publicly available anymore, we have mimicked it extracting randomly 350 tracks out of own dataset 3 . Our data-driven model can however not be trained with only 100 tracks of the reference set, as it would overfit immediately. We have thus trained our model on our full dataset, with two different setups: a) excluding the 350 tracks reserved for the query and reference sets. b) excluding the 250 tracks of the query set, but including the 100 tracks of the reference set. We repeated this operation ten times for each setup, and report the mean and standard deviation on Table 2 for the same metrics used in [31,34], as well as the p-value obtained by a statistical significance t-test carried out on results series. Table 2: Comparison between recent method [31,34] and our proposed method on a small dataset (precision at 10 P@10 is reported instead of MT@10. As there are only two covers per work in the reference set, P@10 maximum value is 0.2). Our method significantly improve previous results: for the hardest case a) where the model has not seen any queries work during training, embeddings space has been sufficiently structured to discriminate the unseen works from the other training clusters (pushing effect). For the easier case b), the pulling effect from the known queries covers provides further improvement. Comparison on large dataset We also compared with [16], who is to our knowledge the last attempt to report results for thousands of queries and references -a more realistic use case. This paper reported results on the SecondHandSong (SHS) subset of the Mil-lionSong dataset (MSD) [5] for two experiments: a) only the training set of 12960 covers of 4128 works was used both as the query and reference sets. b) the SHS MSD test set of 5236 covers of 1726 works was used to query the entire MSD used as reference. The SHS MSD is not available anymore. However, as our dataset has also been built from the SHS covers list, we consider that results can be compared 3 . We have therefore randomly generated out of our dataset a training and a test set mimicking the original ones. We trained our model on the training set, and perform the pairwise distances computation between the query and reference sets (as the query set is included in the reference set, we excluded for comparison the pairs of the same track). For experiment b), we have used our entire dataset as reference set as we do not have one million songs. We have repeated this operation five times and report in Table 3 the mean and standard deviations for the same metrics used in [16], as well as MR1, MT@10 and the p-value of the t-test carried out. Our method significantly improve previous results. 
For case a), results are notably good, which is not surprising as the model has already seen all the queries during training. Case b) is on the other hand the hardest possible configuration, where the model has not seen any covers of the query works during training, and the clustering of unseen tracks relies entirely on the pushing effect. Regarding our method's computation times, we observed on a single Nvidia Titan XP GPU, for a ~3 min audio track: 10 s for F0 extraction, ~1 s for embedding computation, and less than 0.2 s for distance computation against the full dataset embeddings (previously computed offline). CONCLUSION In this work, we presented a method for cover detection, using a convolutional network which encodes each track as a single vector, and is trained to minimize the Euclidean distance between cover pairs in the embeddings space, while maximizing it for non-covers. We showed that extracting embeddings out of the dominant melody 2D representation yields drastically better results compared to other spectral representations: the convolutional model learns to identify similar patterns in the dominant melody at different scales and locations (tempo, key and structure invariance). We have also shown that our method scales to audio databases of thousands of tracks. Once trained for a given database, it can be used to assess the probability for an unseen track to be a cover of any known track without having to be compared to the entire database. We have finally shown that our method improves over previous methods both on small and large datasets. In the future, we plan to grow our training dataset to address the realistic use case where collections of millions of tracks have to be queried: as for many other data-driven problems, will the cover detection problem be solved if the embeddings space is sufficiently structured?
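As a complement to the cover-probability mapping discussed in the "Covers probability" paragraph above, the following is a minimal sketch of how such a curve can be derived from histogram estimates of p c (d) and p nc (d). It is an illustration under our own assumptions (numpy histograms, a user-supplied prior on the rate of cover pairs), not the authors' implementation.

import numpy as np

def cover_probability_curve(cover_dists, noncover_dists, n_bins=100, prior_cover=None):
    """Estimate P(cover | distance) from observed pair distances via Bayes' theorem.

    cover_dists / noncover_dists: 1-D arrays of embedding distances for known
    cover and non-cover pairs (e.g. squared Euclidean distances in [0, 4]).
    prior_cover: prior probability that a random pair is a cover pair; if None,
    it is taken from the relative sizes of the two arrays.
    """
    lo = min(cover_dists.min(), noncover_dists.min())
    hi = max(cover_dists.max(), noncover_dists.max())
    bins = np.linspace(lo, hi, n_bins + 1)

    # Histogram estimates of the conditional densities p_c(d) and p_nc(d).
    p_c, _ = np.histogram(cover_dists, bins=bins, density=True)
    p_nc, _ = np.histogram(noncover_dists, bins=bins, density=True)

    if prior_cover is None:
        prior_cover = len(cover_dists) / (len(cover_dists) + len(noncover_dists))

    # Bayes' theorem: P(cover | d) = p_c(d) * pi / (p_c(d) * pi + p_nc(d) * (1 - pi)).
    num = p_c * prior_cover
    den = num + p_nc * (1.0 - prior_cover)
    with np.errstate(invalid="ignore", divide="ignore"):
        posterior = np.where(den > 0, num / den, np.nan)

    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, posterior

# Usage sketch: map a new pair's distance to a cover probability without
# ranking it against the whole reference set.
# centers, post = cover_probability_curve(cover_d, noncover_d)
# p = np.interp(new_pair_distance, centers, post)

In practice the choice of prior matters, since non-cover pairs vastly outnumber cover pairs in a realistic collection.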
4,391
1907.01824
2954312627
Automatic cover detection -- the task of finding in an audio database all the covers of one or several query tracks -- has long been seen as a challenging theoretical problem in the MIR community and as an acute practical problem for authors and composers societies. Original algorithms proposed for this task have proven their accuracy on small datasets, but are unable to scale up to modern real-life audio corpora. On the other hand, faster approaches designed to process thousands of pairwise comparisons resulted in lower accuracy, making them unsuitable for practical use. In this work, we propose a neural network architecture that is trained to represent each track as a single embedding vector. The computation burden is therefore left to the embedding extraction -- that can be conducted offline and stored, while the pairwise comparison task reduces to a simple Euclidean distance computation. We further propose to extract each track's embedding out of its dominant melody representation, obtained by another neural network trained for this task. We then show that this architecture improves state-of-the-art accuracy both on small and large datasets, and is able to scale to query databases of thousands of tracks in a few seconds.
However, when the number of labels becomes larger, the number of dissimilar pairs quickly becomes intractable. It was moreover observed in practice that once the network has become reasonably good, negative pairs become relatively easy to discern, which stalls the training of the discriminative model. Pair mining is the strategy of training the model only with hard pairs, i.e. positive (resp. negative) pairs with large (resp. small) distances @cite_39 . A further improvement was introduced with the triplet loss, which is used to train a model to map each sample to an embedding that is closer to all of its positive counterparts than it is to all of its negative counterparts @cite_22 . Formally, for all triplets @math , @math , @math where @math is an anchor, and @math or @math is one of its positive or negative examples, respectively, the loss to minimize is expressed as @math , where @math is a margin and @math and @math are the distances between each anchor @math and @math or @math , respectively.
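To make the triplet loss above concrete, here is a minimal numpy sketch of the hinge formulation max(0, d_ap + margin - d_an); the function names and the use of squared Euclidean distances are our own assumptions, not prescribed by the cited works.

import numpy as np

def squared_euclidean(x, y):
    """Squared Euclidean distance between two embedding vectors."""
    diff = x - y
    return float(np.dot(diff, diff))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge triplet loss: max(0, d_ap + margin - d_an).

    The loss is zero once the negative is farther from the anchor than the
    positive by at least `margin`; otherwise the remaining gap is penalized.
    """
    d_ap = squared_euclidean(anchor, positive)
    d_an = squared_euclidean(anchor, negative)
    return max(0.0, d_ap + margin - d_an)

# Toy usage with random L2-normalized embeddings.
rng = np.random.default_rng(0)
a, p, n = (v / np.linalg.norm(v) for v in rng.normal(size=(3, 8)))
print(triplet_loss(a, p, n, margin=1.0))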
{ "abstract": [ "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.", "Deep learning has revolutionalized image-level tasks such as classification, but patch-level tasks, such as correspondence, still rely on hand-crafted features, e.g. SIFT. In this paper we use Convolutional Neural Networks (CNNs) to learn discriminant patch representations and in particular train a Siamese network with pairs of (non-)corresponding patches. We deal with the large number of potential pairs with the combination of a stochastic sampling of the training set and an aggressive mining strategy biased towards patches that are hard to classify. By using the L2 distance during both training and testing we develop 128-D descriptors whose euclidean distances reflect patch similarity, and which can be used as a drop-in replacement for any task involving SIFT. We demonstrate consistent performance gains over the state of the art, and generalize well against scaling and rotation, perspective transformation, non-rigid deformation, and illumination changes. Our descriptors are efficient to compute and amenable to modern GPUs, and are publicly available." ], "cite_N": [ "@cite_22", "@cite_39" ], "mid": [ "2096733369", "1869500417" ] }
COVER DETECTION USING DOMINANT MELODY EMBEDDINGS
Covers are different interpretations of the same original musical work. They usually share a similar melodic line, but typically differ greatly in one or several other dimensions, such as their structure, tempo, key, instrumentation, genre, etc. Automatic cover detection -the task of finding in an audio database all the covers of one or several query tracks -has long been seen as a challenging theoretical problem in MIR. It is also now an acute practical problem for copyright owners facing continuous expansion of usergenerated online content. Cover detection is not stricto sensu a classification problem: due to the ever growing amount of musical works (the classes) and the relatively small number of covers per work, the actual question is not so much "to which work this track belongs to ?" as "to which other tracks this track is the most similar ?". Formally, cover detection therefore requires to establish a similarity relationship S ij between a query track A i and a reference track B j . It implies the composite of a feature extraction function f followed by a pairwise comparison function g, expressed as S ij = g(f (A i ), f (B j )). If f and g are independent, the feature extraction of the reference tracks B j can be done offline and stored. The online feature extraction cost is then linear in the number of queries, while pairwise comparisons cost without optimisation scales quadratically in the number of tracks [16]. Efficient cover detection algorithms thus require a fast pairwise comparison function g. Comparing pairs of entire sequences, as DTW does, scales quadratically in the length of the sequences and becomes quickly prohibitive. At the opposite, reducing g to a simple Euclidean distance computation between tracks embeddings is independent of the length of the sequences. In this case, the accuracy of the detection entirely relies on the ability of f to extract the common musical facets between different covers. In this work, we describe a neural network architecture mapping each track to a single embedding vector, and trained to minimize cover pairs Euclidean distance in the embeddings space, while maximizing it for noncover pairs. We leverage on recent breakthroughs in dominant melody extraction, and show that the use of dominant melody embeddings yield promising performances both in term of accuracy and scalability. The rest of the paper is organized as follow: we review in §2 the main concepts used in this work. We detail our method in §3, and describe and discuss in §4 and §5 the different experiments conducted and their results. We finally present a comparison with existing methods in §6. We conclude with future improvements to bring to our method. Cover detection Successful approaches in cover detection used an input representation preserving common musical facets between different versions, in particular dominant melody [19,27,40], tonal progression -typically a sequence of chromas [10,12,33,39] or chords [2], or a fusion of both [11,29]. Most of these approaches then computed a similarity score between pairs of melodic and/or harmonic sequences, typically a cross-correlation [10], a variant of the DTW algorithm [12,20,33,39], or a combination of both [25]. These approaches lead to good results when evaluated on small datasets -at most a few hundreds of tracks, but are not scalable beyond due to their expensive comparison function. 
Faster methods have recently been proposed, based on efficient comparison of all possible subsequences pairs between chroma representations [34], or similarity search between 2D-DFT sequences derived from CQTs overlapping windows [31], but remain too costly to be scalable to query large modern audio databases. Another type of method has been proposed to alleviate the cost of the comparison function and to shift the burden to the audio features extraction function -which can be done offline and stored. The general principle is to encode each audio track as a single scalar or vector -its embedding -and to reduce the similarity computation to a simple Euclidean distance between embeddings. Originally, embeddings were for instance computed as a single hash encoding a succession of pitch landmarks [3], or as a vector obtained by PCA dimensionality reduction of a chromagram's 2D-DFT [4] or with locality-sensitive hashing of melodic excerpts [19]. As for many other MIR applications, ad-hoc -and somewhat arbitrary -hand-crafted features extraction was progressively replaced with data-driven automatic feature learning [15]. Different attempts to learn common features between covers have since been proposed: in particular, training a k-means algorithm to learn to extract an embedding out of chromagram's 2D-DFT lead to significant results improvements on large datasets [16]. Similar approaches, commonly referred to as metric learning approaches, have been used in different MIR contexts, such as music recommendation [21,41], live song identification [38], music similarity search [24], and recently cover detection [23]. Metric learning Although the concept can be traced back to earlier works [1,8], the term of metric learning was probably coined first in [43] to address this type of clustering tasks where the objective is merely to assess whether different samples are similar or dissimilar. It has since been extensively used in the image recognition field in particular [14,36,37]. The principle is to learn a mapping between the input space and a latent manifold where a simple distance measure (such as Euclidean distance) should approximate the neighborhood relationships in the input space. There is however a trivial solution to the problem, where the function ends up mapping all the examples to the same point. Contrastive Loss was introduced to circumvent this problem, aiming at simultaneously pulling similar pairs together and pushing dissimilar pairs apart [13]. However, when the amount of labels becomes larger, the number of dissimilar pairs becomes quickly intractable. It was moreover observed in practice that once the network has become reasonably good, negative pairs become relatively easy to discern, which stalls the training of the discriminative model. Pair mining is the strategy of training the model only with hard pairs, i.e. positive (resp. nega-tive) pairs with large (resp. small) distances [35]. Further improvement was introduced with the triplet loss, which is used to train a model to map each sample to an embedding that is closer to all of its positive counterparts than it is to all of its negative counterparts [30]. Formally, for all triplets {a, p, n} where a is an anchor, and p or n is one of its positive or negative example, respectively, the loss to minimize is expressed as = max(0, d ap + α − d an ), where α is a margin and d ap and d an are the distances between each anchor a and p or n, respectively. 
Dominant melody extraction Dominant melody extraction has long been another challenging problem in the MIR community [18,28,42]. A major breakthrough was brought recently with the introduction of a convolutional network that learns to extract the dominant melody out of the audio Harmonic CQT [7]. The HCQT is an elegant and astute representation of the audio signal in 3 dimensions (time, frequency, harmonic), stacking along the third dimension several standard CQTs computed at different multiples of a minimal frequency. Harmonic components of the audio signal are thus represented along the third dimension and localized at the same location along the first and second dimensions. This representation is particularly suitable for melody detection, as it can be directly processed by convolutional networks, whose 3-D filters can be trained to localize the harmonic components in the time and frequency plane. In a recent work [9], we suggested, in an analogy with image processing, that dominant melody extraction can be seen as a type of image segmentation, where contours of the melody have to be isolated from the surrounding background. We have thus proposed for dominant melody estimation an adaptation of U-Net [26] - a model originally designed for medical image segmentation - which slightly improves over [7]. PROPOSED METHOD We present here the input data used to train our network, the network architecture itself and its training loss. Input data We have used as input data the dominant melody 2D representation (F0-CQT) obtained by the network we proposed in [9]. The frequency and time resolutions required for melody extraction (60 bins per octave and 11 ms per time frame) are not needed for cover detection. Moreover, efficient triplet loss training requires large training batches, as we will see later, so we reduced data dimensionality as depicted in Figure 2. The F0-CQT is a) trimmed to keep only 3 octaves around its mean pitch (180 bins along the frequency axis), and only the first 3 minutes of the track (15500 time frames) - if shorter, the duration is not changed. The resulting matrix is then b) downsampled via bilinear 2D interpolation with a factor 5. On the frequency axis, the semi-tone resolution is thus reduced from five to one bin, which we considered adequate for cover detection. On the time axis, it is equivalent to a regular downsampling. Finally, as the representations of different tracks with possibly different durations shall be batched together during training, the downsampled F0-CQT is c) shrunk or stretched along the time axis by another bilinear interpolation to a fixed number of bins (1024). This operation is equivalent to a tempo change: for the 3 minutes trimmed, shrinking is equivalent to multiplying the tempo by a factor of 3. We argue here that an accelerated or decelerated version of a cover is still a cover of the original track. [Figure 1: model architecture - successive tensor shapes from the 1024 × 36 × 1 input down to the 1 × 1 × E embedding; each block applies batch norm + conv2d + pool2d, followed by averaging and a dense + L2-norm layer.] Model The proposed model is a simple convolutional network pictured in Figure 1. As we are constrained by the input data shape, whose time dimension is much larger than its frequency dimension, only five layer blocks are needed. Each layer block consists of a batch normalization layer, a convolution layer with 3 × 3 kernels and a mean-pooling layer with a 3 × 2 kernel and 3 × 2 stride, in order to reduce time dimensionality faster than frequency dimensionality.
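Before the remaining architecture details, the following is a minimal sketch of the input reduction described above (trimming to 3 octaves around the mean pitch and to the first 3 minutes, bilinear downsampling by a factor 5, then resizing the time axis to 1024 bins). It assumes scipy's zoom for the bilinear interpolation, and the way the mean pitch is estimated is our own choice; this is an illustration, not the authors' code.

import numpy as np
from scipy.ndimage import zoom

BINS_PER_OCTAVE = 60  # the F0-CQT has 5 bins per semi-tone

def reduce_f0_cqt(f0_cqt, n_octaves=3, max_time_frames=15500, target_time_bins=1024):
    """Trim, downsample and resize a dominant-melody F0-CQT (frequency x time)."""
    n_freq, _ = f0_cqt.shape

    # a) Trim to `n_octaves` octaves around the mean pitch and to the first
    #    `max_time_frames` frames (~3 minutes at ~11 ms per frame). The mean
    #    pitch is estimated here as the energy-weighted mean bin (our assumption).
    half_span = (n_octaves * BINS_PER_OCTAVE) // 2
    energy = f0_cqt.sum(axis=1) + 1e-9
    mean_bin = int(np.round(np.average(np.arange(n_freq), weights=energy)))
    lo = int(np.clip(mean_bin - half_span, 0, max(0, n_freq - 2 * half_span)))
    trimmed = f0_cqt[lo:lo + 2 * half_span, :max_time_frames]

    # b) Bilinear 2-D downsampling by a factor 5 (semi-tone resolution: 5 bins -> 1 bin).
    small = zoom(trimmed, 1 / 5, order=1)

    # c) Shrink or stretch the time axis to a fixed number of bins, so that tracks
    #    of different durations can be batched together (equivalent to a tempo change).
    time_factor = target_time_bins / small.shape[1]
    return zoom(small, (1.0, time_factor), order=1)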
A dropout rate of 0.1, 0.1, 0.2 and 0.3 is applied to the blocks 2, 3, 4 and 5, respectively. The first convolutional layer has K kernels, and this number is doubled at each level (i.e. the deeper layer outputs 2 4 K-depth tensors). The penultimate layer averages along frequency and time axes to obtain a vector. A last dense layer outputs and L2-normalizes the final embedding vector of size E. Our assumption behind the choice of this convolutional architecture is that we expect it to learn similar patterns in the dominant melody, at different scales (tempo invariance) and locations (key and structure invariance). Objective loss We use a triplet loss with online semi-hard negative pairs mining as in [30]. In practice, triplet mining is done within each training batch: instead of using all possible triplets, each track in the batch is successively considered as the anchor, and compared with all its covers in the batch. For each of these positives pairs, if there are negatives such as d an < d ap , then only the one with the highest d an is kept. If no such negative exist, then only the one with the lowest d an is kept. Other negatives are not considered. Model is fit with Adam optimizer [17], with initial learning rate at 1e −4 , divided by 2 each time the loss on the evaluation set does not decrease after 5k training steps. Training is stopped after 100k steps, or if the learning rate falls below 1e −7 . The triplet loss was computed using squared Euclidean distances (i.e. distances are within the [0, 4] range), and the margin was set to α = 1. Dataset As metric learning typically requires large amount of data, we fetched from internet the audio of cover tracks provided by the SecondHandSongs website API 1 . Only works with 5 to 15 covers, and only tracks lasting between 60 and 300 seconds where considered, for a total of W = 7460 works and T = 62310 tracks. The HCQT was computed for those 62310 tracks as detailed in [7], i.e. with f min = 32.7 Hz and 6 harmonics. Each CQT spans 6 octaves with a resolution of 5 bins per semi-tone, and a frame duration of~11 ms. The implementation was done with the Librosa library [22]. The dominant melody was extracted for these 62310 HCQT with the network we described in [9], and the output was trimmed, downsampled and resized as described in §3.1. PRELIMINARY EXPERIMENTS We present here some experiments conducted to develop the system. The 7460 works were split into disjoint train and evaluation sets, with respectively 6216 and 1244 works and five covers per work. The evaluation set represents 20% of the training set, which we considered fair enough given the total amount of covers. The same split has been used for all preliminary experiments. Metrics Ideally, we expect the model to produce embeddings such that cover pair distances are low and non-cover pair distances are high, with a large gap between the two distributions. In the preliminary experiments, we have thus evaluated the separation of the cover pairs distance distribution p c (d) from the non-cover pairs distance distribution p nc (d) with two metrics: -the ROC curve plots the true positives rate (covers, TPR) versus the false positive rate (non-covers, FPR) for different distance d thresholds. We report the area under the ROC curve (AuC), which gives a good indication about the distributions separation. We also report the TPR corresponding to an FPR of 5% (TPR@5%), as it gives an operational indication about the model's discriminative power. 
-we also report the Bhattacharyya coefficient (BC), expressed as d p c (d)p nc (d), as it directly measures the separation between the distributions (smaller is better) [6]. Influence of input data We first compared the results obtained for different inputs data: chromas and CQT computed using Librosa [22], and the dominant melody computed as described in 3.1. As shown on Figure 3 (left), dominant melody yields the best results. It does not imply that melody features are more suited than tonal features for cover detection, but shows that convolutional kernels are better at learning similar patterns at different scales and locations across different tracks when the input data is sparse, which is not the case for chromas and CQT. Results obtained when trimming the F0-CQT with various octaves and time spans are also shown Figure 3. It appears that keeping 3 octaves around the mean pitch of the dominant melody and a duration of 2 to 3 minutes yields the best results. Smaller spans do not include enough information, while larger spans generate confusion. All other results presented below are thus obtained with the dominant melody 2D representation as input data, and a span of 3 octaves and 180 seconds for each track. Influence of model and training parameters We then compared the results obtained for different numbers of kernels in the first layer (K) and the corresponding sizes of the embeddings (E). As shown on Figure 4 (left), results improve for greater K, which was expected. However, increasing K above a certain point does not improve the results further, as the model has probably already enough freedom to encode common musical facets. We have then compared the results obtained for different sizes of training batches (B). As shown on Figure 4 (right), results improve with larger B: within larger batches, each track will be compared with a greater number of non-covers, improving the separation between clusters of works. A closer look at the distances shows indeed that the negative pairs distance distribution p nc (d) gets narrower for larger batches (not showed here). Due to GPU memory constraints, we have not investigated values above B=100. All other results presented below are obtained with K=64, E=512 and B=100. LARGE SCALE LOOKUP EXPERIMENTS We now present experiments investigating the realistic use case, i.e. large audio collections lookup. When querying an audio collection, each query track can be of three kinds: a) it is already present in the database, b) it is a cover of some other track(s) already in the database, or c) it is a track that has no cover in the database. The case a) corresponds to the trivial case, where the query will produce a distance equal to zero when compared with itself, while case c) corresponds to the hard case where neither the query or any cover of the query have been seen during training. We investigate here the case b), where the query track itself has never been seen during training, but of which at least one cover has been seen during training. Metrics In these experiments, we are interested in measuring our method's ability to find covers in the reference set when queried with various unknown tracks. This is commonly addressed with the metrics proposed by MIREX 2 for the cover song identification task: the mean rank of first correct result (MR1), the mean number of true positives in the top ten positions (MT10) and the Mean Average Precision (MAP). We refer the reader to [32] for a detailed review of these standard metrics. 
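For reference, here is a small sketch of how MR1, MT@10 and MAP can be computed from a query-versus-reference distance matrix and a binary cover-relation matrix, following the standard MIREX definitions; variable names and conventions (smaller distance means better match) are our own assumptions.

import numpy as np

def lookup_metrics(dist, is_cover, k=10):
    """Compute MR1, MT@k and MAP for a cover lookup task.

    dist: (n_queries, n_refs) matrix of embedding distances.
    is_cover: boolean matrix of the same shape, True where the reference
    is a cover of the query. Queries without any cover are skipped.
    """
    mr1, mtk, aps = [], [], []
    for q in range(dist.shape[0]):
        rel = is_cover[q]
        if not rel.any():
            continue
        order = np.argsort(dist[q])              # best (smallest distance) first
        rel_sorted = rel[order]
        ranks = np.flatnonzero(rel_sorted) + 1   # 1-based ranks of the covers

        mr1.append(ranks[0])                     # rank of the first correct result
        mtk.append(int(rel_sorted[:k].sum()))    # true positives in the top k
        # Average precision: mean of precision at the rank of each relevant item.
        precisions = np.arange(1, len(ranks) + 1) / ranks
        aps.append(precisions.mean())

    return {"MR1": float(np.mean(mr1)),
            f"MT@{k}": float(np.mean(mtk)),
            "MAP": float(np.mean(aps))}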
We also report here the TPR@5%, already used in the premilinary experiments. Structuring the embeddings space We study here the role of the training set in structuring the embeddings space, and in particular the role of the number of covers of each work. More precisely, we tried to show evidence of the pushing effect (when a query is pushed away from all its non-covers clusters) and the pulling effect (when a query is pulled towards its unique covers cluster). To this aim, we built out of our dataset a query and a reference set. The query set includes 1244 works with five covers each. The reference set includes P of the remaining covers for each of the 1244 query works, and N covers for each other work not included in the query set ( Figure 5). Pushing covers We first train our model on the reference set with fixed P =5. We compute query tracks embeddings with the trained model, compute pairwise distances between query and reference embeddings, as well as the different metrics. We repeat this operation for different values of N ∈ [2, ..., 10], and report results on Figure 6 (left). We report MR1's percentile (defined here as MR1 divided by the total of reference tracks, in percent) instead of MR1, because the number of reference tracks varies with N . The MAP only slightly decreases as N increases, which indicates that the precision remains stable, even though the number of examples to sort and to rank is increasing. Moreover, the MR1 percentile and the TPR@5% clearly improve as N increases. As P is fixed, it means that the ranking and the separation between covers and non-covers clusters is improving as the non-queries clusters are consolidated, which illustrates the expected pushing effect. Pulling covers We reproduce the same protocol again, but now with N =5 fixed and for different values of P ∈ [2, ..., 10]. We report results on Figure 6 (right). It appears clearly that all metrics improve steadily as P increases, even though the actual query itself has never been seen during training. As N is fixed, this confirms the intuition that the model will get better in locating unseen tracks closer to their work's cluster if trained with higher number of covers of this work, which illustrates the expected pulling effect. Operational meaning of p c (d) and p nc (d) We now investigate further the distance distributions of cover and non-cover pairs. To this aim, we randomly split our entire dataset into a query and a reference set with a 1:5 ratio (resp. 10385 and 51925 tracks). Query tracks are thus not seen during training, but might have zero or more covers in the reference set. Covers probability Computing queries vs. references pairwise distances gives the distributions p c (d) and p nc (d) shown on Figure 7 (left). Using Bayes' theorem, it is straightforward to derive from p c (d) and p nc (d) the probability for a pair of tracks to be covers given their distance d (Figure 7, right). This curve has an operational meaning, as it maps a pair's distance with a probability of being covers without having to rank it among the entire dataset. Easy and hard covers We repeat the previous test five times with random splits, and report metrics in Table 1. At first sight, MR1 and MT@10 could seem inconsistent, but a closer look at the results gives an explanation. To illustrate what happens, imagine a set of five queries where the first query ranks ten covers correctly in the first ten positions, e.g. 
because they are all very similar, while all other four queries have their first correct answer at rank 100. This would yield to MT@10=2.0, and MR1=80.2. This kind of discrepancy between MR1 and MT@10 reflects the fact that some works in our dataset have similar covers that are easily clustered, while other are much more difficult to discriminate. This can be observed on the positive pairs distribution p c (d) on Figure 7 ( 6. COMPARISON WITH OTHER METHODS Comparison on small dataset We first compared with two recent methods [31,34], who reported results for a small dataset of 50 works with 7 covers each. The query set includes five covers of each work (250 tracks), while the reference set includes each work's remaining two covers (100 tracks). As this dataset is not publicly available anymore, we have mimicked it extracting randomly 350 tracks out of own dataset 3 . Our data-driven model can however not be trained with only 100 tracks of the reference set, as it would overfit immediately. We have thus trained our model on our full dataset, with two different setups: a) excluding the 350 tracks reserved for the query and reference sets. b) excluding the 250 tracks of the query set, but including the 100 tracks of the reference set. We repeated this operation ten times for each setup, and report the mean and standard deviation on Table 2 for the same metrics used in [31,34], as well as the p-value obtained by a statistical significance t-test carried out on results series. Table 2: Comparison between recent method [31,34] and our proposed method on a small dataset (precision at 10 P@10 is reported instead of MT@10. As there are only two covers per work in the reference set, P@10 maximum value is 0.2). Our method significantly improve previous results: for the hardest case a) where the model has not seen any queries work during training, embeddings space has been sufficiently structured to discriminate the unseen works from the other training clusters (pushing effect). For the easier case b), the pulling effect from the known queries covers provides further improvement. Comparison on large dataset We also compared with [16], who is to our knowledge the last attempt to report results for thousands of queries and references -a more realistic use case. This paper reported results on the SecondHandSong (SHS) subset of the Mil-lionSong dataset (MSD) [5] for two experiments: a) only the training set of 12960 covers of 4128 works was used both as the query and reference sets. b) the SHS MSD test set of 5236 covers of 1726 works was used to query the entire MSD used as reference. The SHS MSD is not available anymore. However, as our dataset has also been built from the SHS covers list, we consider that results can be compared 3 . We have therefore randomly generated out of our dataset a training and a test set mimicking the original ones. We trained our model on the training set, and perform the pairwise distances computation between the query and reference sets (as the query set is included in the reference set, we excluded for comparison the pairs of the same track). For experiment b), we have used our entire dataset as reference set as we do not have one million songs. We have repeated this operation five times and report in Table 3 the mean and standard deviations for the same metrics used in [16], as well as MR1, MT@10 and the p-value of the t-test carried out. Our method significantly improve previous results. 
For case a), results are notably good, which is not surprising as the model has already seen all the queries during the training. Case b) is on the other hand the hardest possible configuration, where the model has not seen any covers of the queries works during training, and clusterisation of unseen tracks entirely relies on the pushing effect. As to our method's computation times, we observed on a single Nvidia GPU Titan XP for a~3 mn audio track: 10 sec for F0 extraction,~1 sec for embeddings computation, and less than 0.2 sec for distances computation with the full dataset embeddings (previously computed offline). CONCLUSION In this work, we presented a method for cover detection, using a convolutional network which encodes each track as a single vector, and is trained to minimize cover pairs Euclidean distance in the embeddings space, while maximizing it for non-covers. We show that extracting embeddings out of the dominant melody 2D representation drastically yields better results compared to other spectral representations: the convolutional model learns to identify similar patterns in the dominant melody at different scales and locations (tempo, key and structure invariance). We have also shown that our method scales to audio databases of thousands of tracks. Once trained for a given database, it can be used to assess the probability for an unseen track to be a cover of any known track without having to be compared to the entire database. We have finally shown that our method improves previous methods both on small and large datasets. In the future, we plan to grow our training dataset to address the realistic use case where collections of millions of tracks should be queried: as for many other data-driven problems, will the cover detection problem be solved if the embeddings space is sufficiently structured?
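To make the encoder of the "Model" paragraph concrete, here is a compact sketch of a five-block convolutional network matching the description (batch normalization, 3 × 3 convolutions, 3 × 2 mean pooling, dropout, averaging, and an L2-normalized dense embedding), with K=64 and E=512. Keras, the ReLU activations and the time-major input shape are our own assumptions; this is an illustrative reimplementation, not the authors' code.

import tensorflow as tf
from tensorflow.keras import layers

def build_encoder(input_shape=(1024, 36, 1), k=64, embedding_size=512):
    """Five-block convolutional encoder producing an L2-normalized embedding."""
    dropout_rates = [0.0, 0.1, 0.1, 0.2, 0.3]    # per block, as described in the text
    inputs = tf.keras.Input(shape=input_shape)   # (time, frequency, 1)
    x = inputs
    for level, rate in enumerate(dropout_rates):
        x = layers.BatchNormalization()(x)
        x = layers.Conv2D(k * (2 ** level), kernel_size=3, padding="same",
                          activation="relu")(x)
        # 3x2 mean pooling: the time dimension shrinks faster than the frequency one.
        x = layers.AveragePooling2D(pool_size=(3, 2), strides=(3, 2),
                                    padding="same")(x)
        if rate > 0:
            x = layers.Dropout(rate)(x)
    x = layers.GlobalAveragePooling2D()(x)       # average over time and frequency
    x = layers.Dense(embedding_size)(x)
    outputs = layers.Lambda(lambda v: tf.math.l2_normalize(v, axis=1))(x)
    return tf.keras.Model(inputs, outputs, name="cover_encoder")

# model = build_encoder()
# model.summary()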
4,391
1907.01824
2954312627
Automatic cover detection -- the task of finding in an audio database all the covers of one or several query tracks -- has long been seen as a challenging theoretical problem in the MIR community and as an acute practical problem for authors and composers societies. Original algorithms proposed for this task have proven their accuracy on small datasets, but are unable to scale up to modern real-life audio corpora. On the other hand, faster approaches designed to process thousands of pairwise comparisons resulted in lower accuracy, making them unsuitable for practical use. In this work, we propose a neural network architecture that is trained to represent each track as a single embedding vector. The computation burden is therefore left to the embedding extraction -- that can be conducted offline and stored, while the pairwise comparison task reduces to a simple Euclidean distance computation. We further propose to extract each track's embedding out of its dominant melody representation, obtained by another neural network trained for this task. We then show that this architecture improves state-of-the-art accuracy both on small and large datasets, and is able to scale to query databases of thousands of tracks in a few seconds.
In a recent work @cite_42 , we suggested in an analogy with image processing that dominant melody extraction can be seen as a type of image segmentation, where contours of the melody have to be isolated from the surrounding background. We have thus proposed for dominant melody estimation an adaptation of U-Net @cite_40 -- a model originally designed for medical image segmentation -- which slightly improves over @cite_45 .
{ "abstract": [ "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net .", "Estimation of dominant melody in polyphonic music remains a difficult task, even though promising breakthroughs have been done recently with the introduction of the Harmonic CQT and the use of fully convolutional networks. In this paper, we build upon this idea and describe how U-Net- a neural network originally designed for medical image segmentation - can be used to estimate the dominant melody in polyphonic audio. We propose in particular the use of an original layer-by-layer sequential training method, and show that this method used along with careful training data conditioning improve the results compared to plain convolutional networks.", "" ], "cite_N": [ "@cite_40", "@cite_42", "@cite_45" ], "mid": [ "1901129140", "2921083967", "2773294482" ] }
COVER DETECTION USING DOMINANT MELODY EMBEDDINGS
Covers are different interpretations of the same original musical work. They usually share a similar melodic line, but typically differ greatly in one or several other dimensions, such as their structure, tempo, key, instrumentation, genre, etc. Automatic cover detection -the task of finding in an audio database all the covers of one or several query tracks -has long been seen as a challenging theoretical problem in MIR. It is also now an acute practical problem for copyright owners facing continuous expansion of usergenerated online content. Cover detection is not stricto sensu a classification problem: due to the ever growing amount of musical works (the classes) and the relatively small number of covers per work, the actual question is not so much "to which work this track belongs to ?" as "to which other tracks this track is the most similar ?". Formally, cover detection therefore requires to establish a similarity relationship S ij between a query track A i and a reference track B j . It implies the composite of a feature extraction function f followed by a pairwise comparison function g, expressed as S ij = g(f (A i ), f (B j )). If f and g are independent, the feature extraction of the reference tracks B j can be done offline and stored. The online feature extraction cost is then linear in the number of queries, while pairwise comparisons cost without optimisation scales quadratically in the number of tracks [16]. Efficient cover detection algorithms thus require a fast pairwise comparison function g. Comparing pairs of entire sequences, as DTW does, scales quadratically in the length of the sequences and becomes quickly prohibitive. At the opposite, reducing g to a simple Euclidean distance computation between tracks embeddings is independent of the length of the sequences. In this case, the accuracy of the detection entirely relies on the ability of f to extract the common musical facets between different covers. In this work, we describe a neural network architecture mapping each track to a single embedding vector, and trained to minimize cover pairs Euclidean distance in the embeddings space, while maximizing it for noncover pairs. We leverage on recent breakthroughs in dominant melody extraction, and show that the use of dominant melody embeddings yield promising performances both in term of accuracy and scalability. The rest of the paper is organized as follow: we review in §2 the main concepts used in this work. We detail our method in §3, and describe and discuss in §4 and §5 the different experiments conducted and their results. We finally present a comparison with existing methods in §6. We conclude with future improvements to bring to our method. Cover detection Successful approaches in cover detection used an input representation preserving common musical facets between different versions, in particular dominant melody [19,27,40], tonal progression -typically a sequence of chromas [10,12,33,39] or chords [2], or a fusion of both [11,29]. Most of these approaches then computed a similarity score between pairs of melodic and/or harmonic sequences, typically a cross-correlation [10], a variant of the DTW algorithm [12,20,33,39], or a combination of both [25]. These approaches lead to good results when evaluated on small datasets -at most a few hundreds of tracks, but are not scalable beyond due to their expensive comparison function. 
Faster methods have recently been proposed, based on efficient comparison of all possible subsequences pairs between chroma representations [34], or similarity search between 2D-DFT sequences derived from CQTs overlapping windows [31], but remain too costly to be scalable to query large modern audio databases. Another type of method has been proposed to alleviate the cost of the comparison function and to shift the burden to the audio features extraction function -which can be done offline and stored. The general principle is to encode each audio track as a single scalar or vector -its embedding -and to reduce the similarity computation to a simple Euclidean distance between embeddings. Originally, embeddings were for instance computed as a single hash encoding a succession of pitch landmarks [3], or as a vector obtained by PCA dimensionality reduction of a chromagram's 2D-DFT [4] or with locality-sensitive hashing of melodic excerpts [19]. As for many other MIR applications, ad-hoc -and somewhat arbitrary -hand-crafted features extraction was progressively replaced with data-driven automatic feature learning [15]. Different attempts to learn common features between covers have since been proposed: in particular, training a k-means algorithm to learn to extract an embedding out of chromagram's 2D-DFT lead to significant results improvements on large datasets [16]. Similar approaches, commonly referred to as metric learning approaches, have been used in different MIR contexts, such as music recommendation [21,41], live song identification [38], music similarity search [24], and recently cover detection [23]. Metric learning Although the concept can be traced back to earlier works [1,8], the term of metric learning was probably coined first in [43] to address this type of clustering tasks where the objective is merely to assess whether different samples are similar or dissimilar. It has since been extensively used in the image recognition field in particular [14,36,37]. The principle is to learn a mapping between the input space and a latent manifold where a simple distance measure (such as Euclidean distance) should approximate the neighborhood relationships in the input space. There is however a trivial solution to the problem, where the function ends up mapping all the examples to the same point. Contrastive Loss was introduced to circumvent this problem, aiming at simultaneously pulling similar pairs together and pushing dissimilar pairs apart [13]. However, when the amount of labels becomes larger, the number of dissimilar pairs becomes quickly intractable. It was moreover observed in practice that once the network has become reasonably good, negative pairs become relatively easy to discern, which stalls the training of the discriminative model. Pair mining is the strategy of training the model only with hard pairs, i.e. positive (resp. nega-tive) pairs with large (resp. small) distances [35]. Further improvement was introduced with the triplet loss, which is used to train a model to map each sample to an embedding that is closer to all of its positive counterparts than it is to all of its negative counterparts [30]. Formally, for all triplets {a, p, n} where a is an anchor, and p or n is one of its positive or negative example, respectively, the loss to minimize is expressed as = max(0, d ap + α − d an ), where α is a margin and d ap and d an are the distances between each anchor a and p or n, respectively. 
Dominant melody extraction Dominant melody extraction has long been another challenging problem in the MIR community [18,28,42]. A major breakthrough was brought recently with the introduction of a convolutional network that learns to extract the dominant melody out of the audio Harmonic CQT [7]. The HCQT is an elegant and astute representation of the audio signal in 3 dimensions (time, frequency, harmonic), stacking along the third dimension several standard CQTs computed at different minimal multiple frequencies. Harmonic components of audio signal will thus be represented along the third dimension and be localized at the same location along the first and second dimensions. This representation is particularly suitable for melody detection, as it can be directly processed by convolutional networks, whose 3-D filters can be trained to localize in the time and frequency plan the harmonic components. In a recent work [9], we suggested in an analogy with image processing that dominant melody extraction can be seen as a type of image segmentation, where contours of the melody have to be isolated from the surrounding background. We have thus proposed for dominant melody estimation an adaptation of U-Net [26] -a model originally designed for medical image segmentation -which slightly improves over [7]. PROPOSED METHOD We present here the input data used to train our network, the network architecture itself and its training loss. Input data We have used as input data the dominant melody 2D representation (F0-CQT) obtained by the network we proposed in [9]. The frequency and time resolutions required for melody extraction (60 bins per octave and 11 ms per time frame) are not needed for cover detection. Moreover, efficient triplet loss training requires large training batches, as we will see later, so we reduced data dimensionality as depicted on Figure 2. The F0-CQT is a) trimmed to keep only 3 octaves around its mean pitch (180 bins along the frequency axis), and only the first 3 minutes of the track (15500 time frames) -if shorter, the duration is not changed. The resulting matrix is then b) downsampled via bilinear 2D interpolation with a factor 5. On the frequency axis, the semi-tone resolution is thus reduced from five to one bin, which we considered adequate for cover detection. On the time axis, it is equivalent to a regular downsampling. Finally, as the representation of different tracks with possibly different durations shall be batched together during training, the downsampled F0-CQT is c) shrunk or stretched along the time axis by another bilinear interpolation to a fixed amount of bins (1024). This operation is equivalent to a tempo change: for the 3 minutes trimmed, shrinking is equivalent to multiply the tempo by a factor 3. We argue here that accelerated or decelerated version of a cover is still a cover of the original track. 3 4 2 × 1 8 × 1 0 2 4 × 3 6 × 1 1 1 4 × 9 × 2 3 8 × 5 × 4 1 3 × 3 × 8 5 × 2 × 1 6 1 × 1 × 1 6 1 × 1 × batch norm + conv2d + pool2d average dense + L2-norm. Model The proposed model is a simple convolutional network pictured in Figure 1. As we are constrained by the input data shape, whose time dimension is much larger than its frequency dimension, only five layers blocks are needed. Each layer block consists of a batch normalization layer, a convolution layer with 3 × 3 kernels and a mean-pooling layer with a 3 × 2 kernel and 3 × 2 stride in order to reduce time dimensionality faster than frequency dimensionality. 
A dropout rate of 0.1, 0.1, 0.2 and 0.3 is applied to the blocks 2, 3, 4 and 5, respectively. The first convolutional layer has K kernels, and this number is doubled at each level (i.e. the deeper layer outputs 2 4 K-depth tensors). The penultimate layer averages along frequency and time axes to obtain a vector. A last dense layer outputs and L2-normalizes the final embedding vector of size E. Our assumption behind the choice of this convolutional architecture is that we expect it to learn similar patterns in the dominant melody, at different scales (tempo invariance) and locations (key and structure invariance). Objective loss We use a triplet loss with online semi-hard negative pairs mining as in [30]. In practice, triplet mining is done within each training batch: instead of using all possible triplets, each track in the batch is successively considered as the anchor, and compared with all its covers in the batch. For each of these positives pairs, if there are negatives such as d an < d ap , then only the one with the highest d an is kept. If no such negative exist, then only the one with the lowest d an is kept. Other negatives are not considered. Model is fit with Adam optimizer [17], with initial learning rate at 1e −4 , divided by 2 each time the loss on the evaluation set does not decrease after 5k training steps. Training is stopped after 100k steps, or if the learning rate falls below 1e −7 . The triplet loss was computed using squared Euclidean distances (i.e. distances are within the [0, 4] range), and the margin was set to α = 1. Dataset As metric learning typically requires large amount of data, we fetched from internet the audio of cover tracks provided by the SecondHandSongs website API 1 . Only works with 5 to 15 covers, and only tracks lasting between 60 and 300 seconds where considered, for a total of W = 7460 works and T = 62310 tracks. The HCQT was computed for those 62310 tracks as detailed in [7], i.e. with f min = 32.7 Hz and 6 harmonics. Each CQT spans 6 octaves with a resolution of 5 bins per semi-tone, and a frame duration of~11 ms. The implementation was done with the Librosa library [22]. The dominant melody was extracted for these 62310 HCQT with the network we described in [9], and the output was trimmed, downsampled and resized as described in §3.1. PRELIMINARY EXPERIMENTS We present here some experiments conducted to develop the system. The 7460 works were split into disjoint train and evaluation sets, with respectively 6216 and 1244 works and five covers per work. The evaluation set represents 20% of the training set, which we considered fair enough given the total amount of covers. The same split has been used for all preliminary experiments. Metrics Ideally, we expect the model to produce embeddings such that cover pair distances are low and non-cover pair distances are high, with a large gap between the two distributions. In the preliminary experiments, we have thus evaluated the separation of the cover pairs distance distribution p c (d) from the non-cover pairs distance distribution p nc (d) with two metrics: -the ROC curve plots the true positives rate (covers, TPR) versus the false positive rate (non-covers, FPR) for different distance d thresholds. We report the area under the ROC curve (AuC), which gives a good indication about the distributions separation. We also report the TPR corresponding to an FPR of 5% (TPR@5%), as it gives an operational indication about the model's discriminative power. 
-we also report the Bhattacharyya coefficient (BC), expressed as d p c (d)p nc (d), as it directly measures the separation between the distributions (smaller is better) [6]. Influence of input data We first compared the results obtained for different inputs data: chromas and CQT computed using Librosa [22], and the dominant melody computed as described in 3.1. As shown on Figure 3 (left), dominant melody yields the best results. It does not imply that melody features are more suited than tonal features for cover detection, but shows that convolutional kernels are better at learning similar patterns at different scales and locations across different tracks when the input data is sparse, which is not the case for chromas and CQT. Results obtained when trimming the F0-CQT with various octaves and time spans are also shown Figure 3. It appears that keeping 3 octaves around the mean pitch of the dominant melody and a duration of 2 to 3 minutes yields the best results. Smaller spans do not include enough information, while larger spans generate confusion. All other results presented below are thus obtained with the dominant melody 2D representation as input data, and a span of 3 octaves and 180 seconds for each track. Influence of model and training parameters We then compared the results obtained for different numbers of kernels in the first layer (K) and the corresponding sizes of the embeddings (E). As shown on Figure 4 (left), results improve for greater K, which was expected. However, increasing K above a certain point does not improve the results further, as the model has probably already enough freedom to encode common musical facets. We have then compared the results obtained for different sizes of training batches (B). As shown on Figure 4 (right), results improve with larger B: within larger batches, each track will be compared with a greater number of non-covers, improving the separation between clusters of works. A closer look at the distances shows indeed that the negative pairs distance distribution p nc (d) gets narrower for larger batches (not showed here). Due to GPU memory constraints, we have not investigated values above B=100. All other results presented below are obtained with K=64, E=512 and B=100. LARGE SCALE LOOKUP EXPERIMENTS We now present experiments investigating the realistic use case, i.e. large audio collections lookup. When querying an audio collection, each query track can be of three kinds: a) it is already present in the database, b) it is a cover of some other track(s) already in the database, or c) it is a track that has no cover in the database. The case a) corresponds to the trivial case, where the query will produce a distance equal to zero when compared with itself, while case c) corresponds to the hard case where neither the query or any cover of the query have been seen during training. We investigate here the case b), where the query track itself has never been seen during training, but of which at least one cover has been seen during training. Metrics In these experiments, we are interested in measuring our method's ability to find covers in the reference set when queried with various unknown tracks. This is commonly addressed with the metrics proposed by MIREX 2 for the cover song identification task: the mean rank of first correct result (MR1), the mean number of true positives in the top ten positions (MT10) and the Mean Average Precision (MAP). We refer the reader to [32] for a detailed review of these standard metrics. 
We also report here the TPR@5%, already used in the premilinary experiments. Structuring the embeddings space We study here the role of the training set in structuring the embeddings space, and in particular the role of the number of covers of each work. More precisely, we tried to show evidence of the pushing effect (when a query is pushed away from all its non-covers clusters) and the pulling effect (when a query is pulled towards its unique covers cluster). To this aim, we built out of our dataset a query and a reference set. The query set includes 1244 works with five covers each. The reference set includes P of the remaining covers for each of the 1244 query works, and N covers for each other work not included in the query set ( Figure 5). Pushing covers We first train our model on the reference set with fixed P =5. We compute query tracks embeddings with the trained model, compute pairwise distances between query and reference embeddings, as well as the different metrics. We repeat this operation for different values of N ∈ [2, ..., 10], and report results on Figure 6 (left). We report MR1's percentile (defined here as MR1 divided by the total of reference tracks, in percent) instead of MR1, because the number of reference tracks varies with N . The MAP only slightly decreases as N increases, which indicates that the precision remains stable, even though the number of examples to sort and to rank is increasing. Moreover, the MR1 percentile and the TPR@5% clearly improve as N increases. As P is fixed, it means that the ranking and the separation between covers and non-covers clusters is improving as the non-queries clusters are consolidated, which illustrates the expected pushing effect. Pulling covers We reproduce the same protocol again, but now with N =5 fixed and for different values of P ∈ [2, ..., 10]. We report results on Figure 6 (right). It appears clearly that all metrics improve steadily as P increases, even though the actual query itself has never been seen during training. As N is fixed, this confirms the intuition that the model will get better in locating unseen tracks closer to their work's cluster if trained with higher number of covers of this work, which illustrates the expected pulling effect. Operational meaning of p c (d) and p nc (d) We now investigate further the distance distributions of cover and non-cover pairs. To this aim, we randomly split our entire dataset into a query and a reference set with a 1:5 ratio (resp. 10385 and 51925 tracks). Query tracks are thus not seen during training, but might have zero or more covers in the reference set. Covers probability Computing queries vs. references pairwise distances gives the distributions p c (d) and p nc (d) shown on Figure 7 (left). Using Bayes' theorem, it is straightforward to derive from p c (d) and p nc (d) the probability for a pair of tracks to be covers given their distance d (Figure 7, right). This curve has an operational meaning, as it maps a pair's distance with a probability of being covers without having to rank it among the entire dataset. Easy and hard covers We repeat the previous test five times with random splits, and report metrics in Table 1. At first sight, MR1 and MT@10 could seem inconsistent, but a closer look at the results gives an explanation. To illustrate what happens, imagine a set of five queries where the first query ranks ten covers correctly in the first ten positions, e.g. 
because they are all very similar, while all other four queries have their first correct answer at rank 100. This would yield to MT@10=2.0, and MR1=80.2. This kind of discrepancy between MR1 and MT@10 reflects the fact that some works in our dataset have similar covers that are easily clustered, while other are much more difficult to discriminate. This can be observed on the positive pairs distribution p c (d) on Figure 7 ( 6. COMPARISON WITH OTHER METHODS Comparison on small dataset We first compared with two recent methods [31,34], who reported results for a small dataset of 50 works with 7 covers each. The query set includes five covers of each work (250 tracks), while the reference set includes each work's remaining two covers (100 tracks). As this dataset is not publicly available anymore, we have mimicked it extracting randomly 350 tracks out of own dataset 3 . Our data-driven model can however not be trained with only 100 tracks of the reference set, as it would overfit immediately. We have thus trained our model on our full dataset, with two different setups: a) excluding the 350 tracks reserved for the query and reference sets. b) excluding the 250 tracks of the query set, but including the 100 tracks of the reference set. We repeated this operation ten times for each setup, and report the mean and standard deviation on Table 2 for the same metrics used in [31,34], as well as the p-value obtained by a statistical significance t-test carried out on results series. Table 2: Comparison between recent method [31,34] and our proposed method on a small dataset (precision at 10 P@10 is reported instead of MT@10. As there are only two covers per work in the reference set, P@10 maximum value is 0.2). Our method significantly improve previous results: for the hardest case a) where the model has not seen any queries work during training, embeddings space has been sufficiently structured to discriminate the unseen works from the other training clusters (pushing effect). For the easier case b), the pulling effect from the known queries covers provides further improvement. Comparison on large dataset We also compared with [16], who is to our knowledge the last attempt to report results for thousands of queries and references -a more realistic use case. This paper reported results on the SecondHandSong (SHS) subset of the Mil-lionSong dataset (MSD) [5] for two experiments: a) only the training set of 12960 covers of 4128 works was used both as the query and reference sets. b) the SHS MSD test set of 5236 covers of 1726 works was used to query the entire MSD used as reference. The SHS MSD is not available anymore. However, as our dataset has also been built from the SHS covers list, we consider that results can be compared 3 . We have therefore randomly generated out of our dataset a training and a test set mimicking the original ones. We trained our model on the training set, and perform the pairwise distances computation between the query and reference sets (as the query set is included in the reference set, we excluded for comparison the pairs of the same track). For experiment b), we have used our entire dataset as reference set as we do not have one million songs. We have repeated this operation five times and report in Table 3 the mean and standard deviations for the same metrics used in [16], as well as MR1, MT@10 and the p-value of the t-test carried out. Our method significantly improve previous results. 
For case a), results are notably good, which is not surprising as the model has already seen all the queries during training. Case b) is, on the other hand, the hardest possible configuration, where the model has not seen any covers of the query works during training, and the clustering of unseen tracks relies entirely on the pushing effect. As to our method's computation times, we observed on a single Nvidia Titan XP GPU, for a ~3 min audio track: 10 sec for F0 extraction, ~1 sec for embeddings computation, and less than 0.2 sec for distance computation against the full dataset embeddings (previously computed offline). CONCLUSION In this work, we presented a method for cover detection, using a convolutional network which encodes each track as a single vector, and is trained to minimize the Euclidean distance between cover pairs in the embeddings space, while maximizing it for non-cover pairs. We showed that extracting embeddings out of the dominant melody 2D representation yields drastically better results compared to other spectral representations: the convolutional model learns to identify similar patterns in the dominant melody at different scales and locations (tempo, key and structure invariance). We have also shown that our method scales to audio databases of thousands of tracks. Once trained for a given database, it can be used to assess the probability for an unseen track to be a cover of any known track without having to be compared to the entire database. We have finally shown that our method improves on previous methods on both small and large datasets. In the future, we plan to grow our training dataset to address the realistic use case where collections of millions of tracks should be queried: as for many other data-driven problems, will the cover detection problem be solved if the embeddings space is sufficiently structured?
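For reference, the Bayes step mentioned in the "Covers probability" paragraph above can be written explicitly as P(cover | d) = π p_c(d) / (π p_c(d) + (1 − π) p_nc(d)), where π denotes the prior probability that a randomly drawn query-reference pair is a cover pair; the paper does not specify how this prior is estimated, so this explicit form is an assumption about the derivation rather than a quote from it.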
4,391
1907.01957
2955363999
State-of-the-art end-to-end automatic speech recognition (ASR) extracts acoustic features from the input speech signal every 10 ms, which corresponds to a frame rate of 100 frames/second. In this report, we investigate the use of high-frame-rate features extraction in end-to-end ASR. High frame rates of 200 and 400 frames/second are used in the features extraction and provide additional information for end-to-end ASR. The effectiveness of high-frame-rate features extraction is evaluated independently and in combination with speed perturbation based data augmentation. Experiments performed on two speech corpora, Wall Street Journal (WSJ) and CHiME-5, show that using high-frame-rate features extraction yields improved performance for end-to-end ASR, both independently and in combination with speed perturbation. On the WSJ corpus, the relative reductions of word error rate (WER) yielded by high-frame-rate features extraction independently and in combination with speed perturbation are up to 21.3% and 24.1%, respectively. On the CHiME-5 corpus, the corresponding relative WER reductions are up to 2.8% and 7.9%, respectively, on the test data recorded by microphone arrays, and up to 11.8% and 21.2%, respectively, on the test data recorded by binaural microphones.
Speed perturbation @cite_5 is a data augmentation technique which creates time-warped signals in addition to the original speech signals. Given an audio signal of length @math and a warping factor @math , speed perturbation creates a new signal with duration @math by resampling the original signal with a sampling rate of @math , where @math is the sampling rate of the original signal. Speed perturbation shifts the speech spectrum and also results in a change in the number of frames, as the duration of the resulting signal is different @cite_5 .
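As a rough illustration of this resampling view of speed perturbation (a minimal sketch, not the implementation used in @cite_5; function and variable names are hypothetical):

```python
import numpy as np
from scipy.signal import resample

def speed_perturb(x, factor):
    """Return a copy of the 1-D signal x played back 'factor' times faster.

    Keeping the playback sampling rate fixed while resampling the waveform to
    round(len(x) / factor) samples changes the duration (and hence the number
    of feature frames) and shifts the speech spectrum.
    """
    return resample(x, int(round(len(x) / factor)))

x = np.random.randn(16000)       # stand-in for one second of 16 kHz audio
slow = speed_perturb(x, 0.9)     # ~17778 samples: longer, spectrum shifted down
fast = speed_perturb(x, 1.1)     # ~14545 samples: shorter, spectrum shifted up
```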
{ "abstract": [ "Data augmentation is a common strategy adopted to increase the quantity of training data, avoid overfitting and improve robustness of the models. In this paper, we investigate audio-level speech augmentation methods which directly process the raw signal. The method we particularly recommend is to change the speed of the audio signal, producing 3 versions of the original signal with speed factors of 0.9, 1.0 and 1.1. The proposed technique has a low implementation cost, making it easy to adopt. We present results on 4 different LVCSR tasks with training data ranging from 100 hours to 1000 hours, to examine the effectiveness of audio augmentation in a variety of data scenarios. An average relative improvement of 4.3 was observed across the 4 tasks." ], "cite_N": [ "@cite_5" ], "mid": [ "2407080277" ] }
End-to-End Speech Recognition with High-Frame-Rate Features Extraction
End-to-end automatic speech recognition (ASR) uses a single neural network architecture within a deep learning framework to perform the speech-to-text task [1]. There are two major approaches for end-to-end ASR: the attention-based approach uses an attention mechanism to create the required alignments between acoustic frames and output symbols, which have different lengths, while the connectionist temporal classification (CTC) approach uses Markov assumptions to address the sequential problem with dynamic programming [1,2]. In the attention-based end-to-end approach, an encoder-decoder architecture is used to solve the speech-to-text problem, which is formulated as a sequence mapping from a speech feature sequence to text [3,4]. In the encoder-decoder architecture, the input feature vectors are converted into frame-wise hidden vectors by the encoder. In this architecture, a bidirectional long short-term memory (BLSTM) network [5,1] is often used as the encoder [2]. A pyramid BLSTM (pBLSTM) encoder with subsampling was found to yield better performance than the BLSTM encoder [4]. In [6], the initial layers of the VGG net architecture (a deep convolutional neural network (CNN)) [7,8] were found to be helpful when used prior to the BLSTM in the encoder network. The encoder consisting of the VGG net and the pBLSTM yields better performance than the pBLSTM encoder in many cases [9]. State-of-the-art end-to-end ASR extracts acoustic features from the input speech signal every 10 ms, which corresponds to a frame rate of 100 frames/second. Extracting acoustic features at frame rates higher than 100 frames/second could capture more information from the input speech signal. The temporal resolution of the feature matrices is also increased, which could be useful for end-to-end ASR using the VGG net and pBLSTM as the encoder, because these networks make use of the temporal information in the input features. In this report, we investigate the use of high-frame-rate features extraction in end-to-end ASR. High frame rates of 200 and 400 frames/second are used in the features extraction for end-to-end ASR with the hybrid CTC/attention architecture [2]. The effectiveness of the high-frame-rate features extraction is evaluated independently and in combination with speed perturbation based data augmentation [10]. Experiments are carried out with two speech corpora, the Wall Street Journal (WSJ) corpus [11] and the CHiME-5 corpus which was used for the CHiME 2018 speech separation and recognition challenge [12]. CHiME-5 is a large scale corpus of real multi-speaker conversational speech recorded via multi-microphone hardware in multiple homes. The main difficulty of this corpus comes from the distance between the sources and the microphones, in addition to the spontaneous and overlapped nature of the speech [12]. We show the effectiveness of using high-frame-rate features extraction in end-to-end ASR, independently and in combination with speed perturbation based data augmentation. High-frame-rate features extraction State-of-the-art end-to-end ASR systems typically extract feature vectors every 10 ms, which corresponds to a frame rate of 100 frames/second. When high-frame-rate features extraction at 200 and 400 frames/second is used, feature vectors are extracted every 5 and 2.5 ms, respectively. When the hop size is reduced, more feature vectors are extracted and the temporal resolution of the feature matrices is increased. In this work, Mel filter-bank (FBANK) features [16,17] of 40 dimensions are used. 
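Before the exact Kaldi-based recipe is described below, here is a minimal sketch of how the frame rate only changes the hop size in an otherwise standard log Mel filter-bank pipeline (shown with librosa purely for illustration; parameter choices such as n_fft are assumptions, not the configuration actually used in the experiments):

```python
import numpy as np
import librosa

def fbank(y, sr=16000, frames_per_second=100, n_mels=40):
    """Log Mel filter-bank features with a configurable frame rate.

    25 ms analysis windows; the hop size follows from the frame rate:
    100, 200 and 400 frames/second give hops of 10, 5 and 2.5 ms.
    """
    hop = int(round(sr / frames_per_second))    # 160 / 80 / 40 samples at 16 kHz
    win = int(round(0.025 * sr))                # 25 ms window
    y = np.append(y[0], y[1:] - 0.97 * y[:-1])  # pre-emphasis, H[z] = 1 - 0.97 z^-1
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=512, hop_length=hop, win_length=win,
        window="hamming", n_mels=n_mels, power=1.0)
    return np.log(mel + 1e-10).T                # shape: (num_frames, n_mels)

y = np.random.randn(16000)                      # stand-in for one second of audio
print(fbank(y, frames_per_second=100).shape)    # roughly (100, 40)
print(fbank(y, frames_per_second=400).shape)    # roughly (400, 40)
```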
The FBANK features are extracted in a conventional manner as follows: the speech signal is first pre-emphasized using a filter with transfer function H[z] = 1 − 0.97z^{-1}. Speech frames of 25 ms are then extracted at a given frame rate and multiplied with Hamming windows. The discrete Fourier transform (DFT) is used to transform the speech frames into the spectral domain. Sums of the element-wise multiplication between the magnitude spectrum and the Mel-scale filter-bank are computed. The FBANK coefficients are obtained by taking the logarithm of these sums. The FBANK features are augmented with 3-dimensional pitch features which include the value of pitch, delta-pitch and the probability of voicing at each frame [18,12]. In this work, the FBANK and pitch features are extracted using the Kaldi speech recognition toolkit [19]. Figs. 1 (b), 1 (c), 1 (d) show examples of the 43-dimensional FBANK+pitch feature matrices extracted from a speech utterance (Fig. 1 (a)) in the WSJ corpus at frame rates of 100, 200, and 400 frames/second, respectively. It can be observed from these figures that the temporal resolution of the feature matrices increases when the frame rate increases. This higher temporal resolution could provide additional temporal information for the encoder network using the VGG net and pBLSTM, which make use of the temporal information in the input features. ASR experiments are carried out to examine which temporal resolutions of the feature matrices are useful for end-to-end ASR. Speech corpora We carry out experiments on two speech corpora, the Wall Street Journal (WSJ) corpus [11] and the CHiME-5 corpus which was used for the CHiME 2018 speech separation and recognition challenge [12]. These two different ASR tasks, one consisting of clean speech recorded by a single microphone (WSJ task) and another consisting of conversational speech recorded by both distant microphone arrays and binaural microphones (CHiME-5 task), are suitable for evaluating the effectiveness of high-frame-rate features extraction for end-to-end ASR in different scenarios. WSJ corpus WSJ is a corpus of read speech [11]. The speech utterances in the corpus are quite clean. We use the standard configuration: the train si284 set for training, test dev93 for validation and test eval92 for test evaluation. The training, development, and evaluation sets consist of 37318, 503, and 333 utterances, respectively. These training, development, and evaluation sets are consistent with the definitions in the Kaldi [19] and ESPnet [9] recipes for this corpus. CHiME-5 corpus Recording scenario CHiME-5 is the first large-scale corpus of real multi-speaker conversational speech recorded via commercially available multi-microphone hardware in multiple homes [12]. Natural conversational speech from a dinner party of 4 participants was recorded for transcription. Each party was recorded with 6 distant Microsoft Kinect microphone arrays and 4 binaural microphone pairs worn by the participants. There are in total 20 different parties recorded in 20 real homes. This corpus was designed for the CHiME 2018 challenge [12]. Each party has a minimum duration of 2 hours and is composed of three phases, each corresponding to a different location: i) kitchen - preparing the meal in the kitchen area; ii) dining - eating the meal in the dining area; iii) living - a post-dinner period in a separate living room area. The participants can move naturally within the home between the different locations, but should stay in each location for at least 30 minutes. 
There is no constraint on the topic of the conversations. The conversational speech is thus spontaneous. Audio and transcriptions The audio of the parties was recorded with a set of six Microsoft Kinect devices which were strategically placed so that each conversation was captured by at least two devices in each location. Each Kinect device has a linear array of 4 sample-synchronized microphones and a camera. The audio was also recorded with the Soundman OKM II Classic Studio binaural microphones worn by each participant [12]. Manual transcriptions were produced for all the recorded audio. The start and end times and the word sequences of an utterance produced by a speaker are manually obtained by listening to the speaker's binaural recording. This information is used for the same utterance recorded by the other recording devices, but the start and end times are shifted by an amount that compensates for the asynchronization between devices. Data for training and test Training, development and evaluation sets are created from the 20 parties. Data from 16 parties are used for training. The data used for training ASR systems combines both left and right channels of the binaural microphone data and a subset of all Kinect microphone data from these 16 parties. In this report, the total amount of speech used in the training set is around 167 hours (the data/train worn u200k set [12]). Each of the development and evaluation sets is created from 2 parties, of around 4.5 and 5.2 hours of speech, respectively. The speakers in the training, development and evaluation sets do not overlap. For the development and evaluation data, information about the location of the speaker and the reference array is provided. The reference array is chosen to be the one that is situated in the same area. In this work, the results are reported for the single-array track [12], where only the data recorded by the reference array is used for recognition. The results on this corpus in the present report are obtained on the development sets consisting of speech data recorded by the binaural microphones (dev-binaural) and the microphone arrays (dev-array), because the transcriptions of the evaluation set are not publicly available at the time of this submission. Utterances containing overlapped speech are not excluded from the training and the development sets. In total, the training set consists of around 318K utterances and each development set consists of around 7.4K utterances. The dev-binaural set consists of only signals from the left channel of the binaural microphones [12,9]. Data augmentation Training data can be augmented to avoid overfitting and improve the robustness of the models [10]. Generally, adding more training data helps improve the system's performance. In this work, we apply the speed perturbation based data augmentation technique [10,20] to increase the amount of training data of the WSJ and CHiME-5 corpora. The speed perturbation technique creates new training data by resampling the original data. Two additional copies of the original training sets are created by modifying the speed of speech to 90% and 110% of the original rate. For each corpus, the whole training set after data augmentation is 3 times larger than the original training set. For CHiME-5, due to the change in the length of the signals after resampling, the start and end times of the speech utterances in the parties are automatically updated by scaling the original start and end times with the resampling rates. 
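A minimal sketch of this timestamp bookkeeping (the exact script used in the recipe is not shown in the report; names are hypothetical): a speed factor of 0.9 stretches the recording by 1/0.9, so boundaries measured on the original recording are divided by the factor.

```python
def rescale_segment(start_sec, end_sec, speed_factor):
    """Map utterance boundaries from the original recording onto a
    speed-perturbed copy (duration scales by 1 / speed_factor)."""
    return start_sec / speed_factor, end_sec / speed_factor

# An utterance at 12.0-14.5 s lies at ~13.33-16.11 s in the 0.9x copy
# and at ~10.91-13.18 s in the 1.1x copy.
print(rescale_segment(12.0, 14.5, 0.9))
print(rescale_segment(12.0, 14.5, 1.1))
```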
For WSJ, this change does not affect the features extraction, as the feature vectors are extracted from the whole utterances. Experiments Speech recognition systems Front-end processing Acoustic features are extracted from the training, development, and evaluation sets for training and testing of ASR systems on the WSJ and CHiME-5 corpora. Utterance-level mean normalization is applied to the features. For WSJ, the FBANK+pitch features are extracted from the whole speech utterances. For CHiME-5, the FBANK+pitch features are extracted from speech utterances which are located in long audio sequences by using the provided start and end times. In the training set, individual speech signals from each microphone in each Kinect microphone array are used directly. In the development set using speech from the reference microphone array, the speech signals from the four microphones in the array are processed with a weighted delay-and-sum beamformer (BeamformIt [21]) for enhancement prior to features extraction. Three frame rates are examined in the features extraction: the conventional frame rate of 100 frames/second and two high frame rates of 200 and 400 frames/second. Speed perturbation is applied only to the training sets, whereas high-frame-rate features extraction is applied to the training, development, and evaluation sets. End-to-end ASR architecture Hybrid CTC/attention end-to-end ASR systems [2] are built using the ESPnet toolkit [9]. The system architecture is depicted in Fig. 2. We examine two types of shared encoder: one consists of the initial layers of the VGG net architecture (deep CNN) [7,8] followed by a 4-layer pBLSTM [5,4], as in [6], and the other consists of the 4-layer pBLSTM only. The objective is to examine whether increasing the temporal resolution of the input features could be useful for the VGG net and the pBLSTM, which make use of the temporal information in the input features. Figure 2: Hybrid CTC/attention architecture [6,2] of the end-to-end ASR systems used in this report. The shared encoder could include either the pBLSTM or the VGG net + pBLSTM. We use a 6-layer CNN architecture which consists of two consecutive 2D convolutional layers followed by one 2D max-pooling layer, then another two 2D convolutional layers followed by one 2D max-pooling layer. The 2D filters used in the convolutional layers have the same size of 3×3. The max-pooling layers have a patch size of 3×3 and a stride of 2×2. The 4-layer pBLSTM has 320 cells in each layer and direction, and a linear projection follows each BLSTM layer. The subsampling factor performed by the pBLSTM is 4 [6]. In this report, a location-based attention mechanism [3] is used in the hybrid CTC/attention architecture. This mechanism uses 10 centered convolution filters of width 100 to extract the convolutional features. The decoder network is a 1-layer LSTM with 300 cells. The hybrid CTC/attention architecture is trained within a multi-objective training framework by combining CTC and attention-based cross entropy to improve robustness and achieve fast convergence [9]. The training is performed for 15 epochs using the Chainer deep learning toolkit [22]. The AdaDelta algorithm [23] with gradient clipping [24] is used for the optimization. We use λ = 0.2 for WSJ and λ = 0.1 for CHiME-5 in the multi-objective learning framework for training the hybrid CTC/attention systems [2], consistent with the ESPnet training recipes for these corpora [9]. 
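As a rough sketch of the multi-objective combination just described (written in PyTorch rather than the Chainer-based ESPnet implementation, with padding and label smoothing omitted; this is an illustrative assumption, not the actual training code):

```python
import torch
import torch.nn.functional as F

def hybrid_ctc_attention_loss(ctc_log_probs, att_logits, targets,
                              input_lengths, target_lengths, lam=0.2):
    """lam * CTC loss + (1 - lam) * attention cross-entropy.

    lam = 0.2 for WSJ and 0.1 for CHiME-5 in the setups described above.
    ctc_log_probs: (T, batch, vocab) log-probabilities from the shared encoder.
    att_logits:    (batch, L, vocab) decoder outputs aligned with the targets.
    targets:       (batch, L) token ids (no blank symbol).
    """
    ctc = F.ctc_loss(ctc_log_probs, targets, input_lengths, target_lengths)
    att = F.cross_entropy(att_logits.transpose(1, 2), targets)
    return lam * ctc + (1.0 - lam) * att

# Dummy shapes only, to show the expected tensor layout.
T, B, V, L = 50, 2, 30, 10
loss = hybrid_ctc_attention_loss(
    torch.randn(T, B, V).log_softmax(-1),
    torch.randn(B, L, V),
    torch.randint(1, V, (B, L)),
    torch.full((B,), T, dtype=torch.long),
    torch.full((B,), L, dtype=torch.long),
    lam=0.2)
```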
During joint decoding, the CTC and attention-based scores are combined in a one-pass beam search algorithm [9]. A recurrent neural network language model (RNN-LM), which is a 1-layer LSTM, is trained on the transcriptions of the training data for each corpus. This RNN-LM is used in the joint decoding, where its log probability is combined with the CTC and attention scores [9]. The weight of the RNN-LM's log probability is set to 0.1 and the beam width is set to 20 during decoding. Experimental results Tabs. 1 and 2 show the results in terms of word error rates (WERs) on the WSJ corpus, for the systems using the pBLSTM and the VGG net + pBLSTM encoders, respectively. Tabs. 3 and 4 show the corresponding WERs on the CHiME-5 corpus. In these tables, the results with "+SP" are obtained when speed perturbation (SP) is used to augment the training sets. On both corpora, using features extraction with frame rates higher than the conventional frame rate of 100 frames/second appears to be helpful in reducing the WERs, with both the pBLSTM and the VGG net + pBLSTM encoders. Also, high-frame-rate features extraction and speed perturbation based data augmentation are complementary, because the gains obtained when using the two methods together are higher than those obtained with each method used separately. In addition, the systems using the VGG net + pBLSTM encoder have lower WERs than those using the pBLSTM encoder, on both corpora. WSJ On WSJ, increasing the frame rate from 200 to 400 frames/second still yields a small WER reduction, but not always. When the VGG net + pBLSTM encoder is used (see Tab. 2), the best relative WER reductions on the Dev93 and Eval92 sets obtained with high-frame-rate features extraction are 21.3% and 12.1%, respectively. When using high-frame-rate features extraction with speed perturbation, the best relative WER reductions on the Dev93 and Eval92 sets are 24.1% and 15.1%, respectively. CHiME-5 CHiME-5 is a challenging task with high WERs on the development sets. On this corpus, increasing the frame rate from 200 to 400 frames/second generally does not yield a WER reduction. When the VGG net + pBLSTM encoder is used (see Tab. 4), the baseline system has WERs of 61.1% and 89.6% on the dev-binaural and dev-array sets, respectively. The WERs of the baseline system introduced by the challenge organizers on the same sets were 67.2% and 94.7%, respectively [12]. In the architecture using the VGG net + pBLSTM encoder, the best relative WER reductions on the dev-binaural and dev-array sets obtained with high-frame-rate features extraction are 11.8% and 2.8%, respectively. When using high-frame-rate features extraction in combination with speed perturbation, the relative WER reductions on the dev-binaural and dev-array sets are 21.2% and 7.9%, respectively. Conclusion This report investigated the use of high-frame-rate features extraction in end-to-end speech recognition. Experimental results on the WSJ and CHiME-5 corpora showed that improved ASR performance was achieved when using features extraction at a frame rate higher than 100 frames/second. These results showed that end-to-end ASR using the pBLSTM and VGG net + pBLSTM encoders can make use of additional information from input feature matrices of higher temporal resolution than those extracted with the conventional 100 frames/second frame rate. Using high-frame-rate features extraction in combination with speed perturbation based data augmentation yielded complementary gains. 
The relative WER reductions obtained by the combination of these two methods were up to 24.1% and 21.2% on the WSJ and CHiME-5 corpora, respectively.
2,730