aid
string
mid
string
abstract
string
related_work
string
ref_abstract
dict
title
string
text_except_rw
string
total_words
int64
1811.03325
2900399720
This paper presents the discovery that the lengths of entities in various datasets follow a family of scale-free power-law distributions. The concept of entity here broadly includes the named entities, entity mentions, time expressions, aspect terms, and domain-specific entities that are well investigated in natural language processing and related areas. The entity length denotes the number of words in an entity. The power-law distributions in entity length possess the scale-free property and have well-defined means and finite variances. We explain the phenomenon of power laws in entity length by the principle of least effort in communication and the preferential mechanism.
According to the review by Grotjahn and Altmann, Fucks first demonstrated empirically and theoretically that the word length in a corpus follows a variant of Poisson distributions. The word length of natural corpora has been observed to follow variants of Poisson distributions in more than 32 languages @cite_17 .
{ "abstract": [ "Abstract In this study, word length distributions are investigated in Old Icelandic songs and prose texts. Although Old Icelandic is separated from the other languages studied so far within the framework of the Gottingen word length project, the same model, viz. the hyper‐Poisson distribution, can successfully be fitted to the data." ], "cite_N": [ "@cite_17" ], "mid": [ "2167237149" ] }
Marshall-Olkin Power-Law Distributions in Length-Frequency of Entities
Estoup (1916) and Zipf (1936, 1949) found a very long time ago that the rank-frequency of words in natural languages follows a family of power-law distributions. During his exploration, Zipf also found that the meaning-frequency of words follows power-law distributions as well. The rank-frequency distribution of words is later credited as Zipf's law and provides a direction to understand the use of languages in our communicative system. Zipf's law has been observed in many languages (Zipf, 1949; Corominas-Murtra and Solé, 2010) and has attracted tremendous attention from researchers in diverse areas for more than eighty years (Piantadosi, 2014). The Zipf distribution has a linear behavior in the log-log scale and is widely used to model phenomena such as word frequencies, city sizes, income distribution, and network structures. However, the Zipf distribution may not fit well the probabilities of the first positive integer numbers, which are often observed to be higher or lower than expected by the linear model. Besides the rank-frequency and meaning-frequency of words, Zipf also analyzed word length, sentence length, and phonemes (Zipf, 1949). Although Zipf explained the use of these three language units under the same principle of least effort as he explained word frequency and word meaning in a qualitative way, extensive studies have unfortunately demonstrated that the frequencies of these three language units do not follow a power-law distribution, but follow variants of Poisson distributions, log-normal distributions, or gamma distributions (Williams, 1940; Fucks, 1955, 1956; Wake, 1957; Miller et al., 1958; Williams, 1975; Grotjahn and Altmann, 1993; Wimmer et al., 1994; Best, 1996; Sigurd et al., 2004). In the last two decades, the field of natural language processing and related areas has constructed numerous datasets for diverse linguistic tasks (Manning and Schutze, 1999; Jurafsky and Martin, 2008, 2020). Those datasets provide us opportunities to analyze some other forms of languages, among which entity is an important one. An entity is a real-world object, such as a person, location, or organization (Chinchor, 1997; Sang and Meulder, 2003). Entities generally involve important concepts with concrete meanings and usually act as (part of) the subject or the object, or even both, in a sentence. For example, in the sentence "Michael Jordan could be an NBA player, or a professor of University of California, Berkeley," the entity "Michael Jordan" acts as the subject while the other two entities "NBA" and "University of California, Berkeley" are parts of the object. Because of their importance in language, entities have been extensively studied and are involved in diverse linguistic tasks, such as named entity recognition (Chinchor, 1997; Sang and Meulder, 2003) and entity linking (Ji and Grishman, 2011; 2015).
Table 1: Some examples of entities in English and their corresponding entity lengths (l). Symbols and punctuations in entities are taken into account during the calculation.
Entity | Entity Length (l)
NBA | 1
Michael Jordan | 2
United Arab Emirates | 3
University of California , Berkeley | 5
10:00 p.m. on August 20 , 1940 | 7
human cytomegalovirus ( HCMV ) major immediate | 7
To the best of our knowledge, however, there is no existing literature that investigates the underlying distribution(s) of entities, which may provide a better understanding of language use and insights into designing effective and efficient algorithms for entity-related linguistic tasks.
In this paper, we fill this gap and conduct a thorough investigation of the length-frequency distributions of entities in different types and different languages. We aim to fit the length-frequency of entities with a uniform model or a family of models. Entity length is defined as the number of words in an entity and is an important feature in natural language processing that reflects the complexity and structure of texts. Table 1 presents some examples of entities and their corresponding lengths. After a careful exploration, we find that the length-frequency of entities cannot be well characterized by pure power-law models, but can be well characterized by the Marshall-Olkin power-law (MOPL) models that are developed by Pérez-Casany and Casellas (2013). MOPL models are a family of generalized power-law models. Compared with pure power-law models, MOPL models have more flexibility to adjust the probabilities of the first few data points while keeping the linearity of the remaining probabilities. Specifically, we collect twelve datasets about different types of entities (e.g., named entities and time expressions) and eighteen datasets about entities in different languages (e.g., English and French). Those datasets differ dramatically from each other in terms of their sources, domains, text genres, generated time, corpus sizes, and entity types, and those languages have significant differences in their phonetic and spelling systems (see Section 4.1 for details). However, we find that the lengths of these diverse entities demonstrate some similar characteristics, and the length-frequency distributions of these diverse entities can be well characterized by a family of MOPL models. To evaluate the quality of MOPL models fitting the length-frequency of diverse entities, we use the Kolmogorov-Smirnov (KS) test (Smirnov, 1948; Stephens, 1974), define an average-error metric to evaluate the goodness-of-fit of the MOPL models, and compare the fitting results with those of two state-of-the-art power-law models, namely CSN2009 (Clauset et al., 2009) and $LS_{avg}$ (Zhong et al., 2022b), and an alternative log-normal model. We conduct experiments on thirty datasets about entities in different types and different languages, and experimental results demonstrate that MOPL models well characterize the length-frequency distributions of diverse entities and that the fitting results of MOPL are much better than those of the three compared models. Specifically, MOPL achieves much better results in the KS test and the average-error metric than the three compared models. Experimental results also demonstrate that MOPL models fit the length-frequency of entities in an individual dataset in less than one minute, which is comparable with the most efficient model $LS_{avg}$ and much better than the CSN2009 model. This indicates that MOPL models are more suitable than the three compared models to characterize the length-frequency of diverse entities, and that MOPL models are scalable to entities in large-scale real-world datasets. To summarize, we make the following main contributions in this paper. • We investigate the underlying distributions of diverse entities, finding that the length-frequency of entities in different types and languages can be characterized by MOPL models. Our finding adds a piece of stable knowledge to the field of language and provides insights for entity-related linguistic tasks.
• We demonstrate the superiority of MOPL models over two state-of-the-art power-law models and a log-normal model in fitting the length-frequency of diverse entities in different types and languages. • Experiments demonstrate that MOPL is scalable to large-scale real-world datasets, with a runtime that increases neither linearly nor exponentially as the number of entities increases. The remainder of this paper is organized as follows. Section 2 reviews the literature about power-law distributions in languages. Section 3 introduces the MOPL models that we use to characterize the length-frequency of diverse entities. Section 4 reports experimental results and the computational efficiency of MOPL models and compared models fitting the length-frequency distributions of entities in different types and different languages. Section 5 discusses possible implications and limitations of this paper, while Section 6 draws the conclusion. Power-Law Distributions in Languages The most famous power-law distribution in languages is the one in the rank-frequency of words. This linguistic phenomenon was originally discovered by Jean-Baptiste Estoup (Estoup, 1916) and then further explored by George K. Zipf (Zipf, 1936, 1949); the phenomenon is later credited as Zipf's law. Zipf's law reveals that the $r$-th most frequently occurring word in a corpus has the frequency defined by $f(r) \propto r^{-z}$, where $r$ denotes the frequency rank of a word in the corpus and $f(r)$ denotes its frequency. Zipf's law has been observed in many languages (Zipf, 1949; Li, 2002; Corominas-Murtra and Solé, 2010; Piantadosi, 2014), and the scaling exponent $z$ is observed to be close to 1. During his exploration, Zipf found as well that the meaning-frequency of words in a corpus also follows a family of power-law distributions. Besides real languages, researchers have also explored randomly generated texts and genetic regulatory networks (Pratap et al., 2019; Anbalagan et al., 2021; Pratap et al., 2022). Miller (1957, 1965) and Li (1992) found that the rank-frequency of random texts also follows power-law distributions. Malone and Maher (2012) and Wang et al. (2017) found that the rank-frequency of user passwords from different websites can be characterized by power-law distributions. We now discover another form of human languages, namely entities, whose length-frequency distributions can be characterized by the Marshall-Olkin extended power-law distributions. There are significant differences between the power-law distributions in the length-frequency of entities and those in the rank-frequency of words. Firstly, the meanings and functions of words and of entities in a sentence are different. In the rank-frequency of words, the most frequent words are always auxiliary words without concrete meanings (random texts and user passwords have no concrete meanings either), while entities generally involve important concepts with concrete meanings and play important roles in a sentence, such as the subject and the object. Secondly, the numbers of their data points are different. In the rank-frequency of words, each $r$-rank word appears as a data point, while in the length-frequency of entities, all the $l$-length entities together constitute a single data point.
So the number of data points in the rank-frequency of words is as large as the vocabulary size of a corpus, while the number of data points in the length-frequency of entities is generally less than 100; our analysis shows that in about 93.3% of datasets (28 out of 30), the longest entity contains no more than 100 words (see Tables 2 and 3). Thirdly, the scaling exponents of these two kinds of power-law distributions are different. The scaling exponents in the rank-frequency of words are observed to be approximately 1, indicating that these power-law distributions have neither theoretical means nor finite variances. By contrast, the exponents in the length-frequency of entities are greater than 2, theoretically indicating well-defined means in all these power-law distributions; and in real-world datasets, these power-law distributions have finite means and variances. Length-Frequency Distributions of Words and Sentences A line of research somewhat related to our work concerns the length distributions of words and sentences. According to a review article by Grotjahn and Altmann (1993), Fucks (1955, 1956) first theoretically and experimentally demonstrated that the length-frequency of words in a corpus follows a family of Poisson distributions. This linguistic phenomenon has been observed in more than 32 languages (Best, 1996). On the other hand, Williams (1940) and Wake (1957) observed that the length-frequency of sentences in different languages can be characterized by a family of log-normal distributions. Sigurd et al. (2004) observed that the length-frequencies of words and sentences from English, Swedish, and German corpora can be characterized by variants of log-normal distributions or gamma distributions. Unlike the length-frequency of words and sentences, which can be characterized by variants of Poisson distributions, log-normal distributions, or gamma distributions, we find from experiments on datasets about entities in different types and different languages that the length-frequency of entities cannot be characterized by Poisson distributions nor log-normal distributions, but is well characterized by a family of Marshall-Olkin power-law (MOPL) distributions. Moreover, our extensive experiments demonstrate that MOPL models characterize the length-frequency of entities much better than two state-of-the-art power-law models and one alternative log-normal model, and that MOPL models are scalable to the length-frequency of entities in large-scale real-world datasets. Methodology We first briefly introduce the discrete power-law distributions and then detail the Marshall-Olkin power-law (MOPL) models that we use to characterize the length-frequency distributions of entities in different types and different languages. After that, we introduce the Kolmogorov-Smirnov (KS) test (Smirnov, 1948; Stephens, 1974) and the average-error metric that are used to evaluate the goodness-of-fit. Discrete Power-Law Distribution The discrete power-law distribution is a special case of power-law distributions with discrete values. It is defined by Eq. (1): $P(X = x) = \frac{x^{-\alpha}}{\zeta(\alpha)}$ (1), where $x \in \mathbb{N}^+$, $\alpha > 1$ is the scaling exponent, and $\zeta(\alpha) = \sum_{k=1}^{\infty} k^{-\alpha}$ is the Riemann zeta function. Eq. (1) can be written as Eq. (2), which demonstrates the linear behavior in the log-log scale: $\log P(X = x) = -\alpha \log x - \log \zeta(\alpha)$ (2).
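As an illustration of Eqs. (1)-(2) (a minimal sketch of ours, not code from the paper), the PMF can be evaluated with SciPy's zeta function, and a straight-line fit in log-log scale recovers the exponent:

```python
import numpy as np
from scipy.special import zeta

def powerlaw_pmf(x, alpha):
    """Discrete power-law PMF, Eq. (1): P(X = x) = x^(-alpha) / zeta(alpha)."""
    return x ** (-alpha) / zeta(alpha, 1)  # zeta(alpha, 1) is the Riemann zeta

x = np.arange(1, 101)
p = powerlaw_pmf(x, alpha=2.5)
# Eq. (2): log P(X = x) = -alpha * log x - log zeta(alpha),
# so a linear fit in log-log scale recovers -alpha as the slope.
slope, intercept = np.polyfit(np.log(x), np.log(p), 1)
print(f"fitted log-log slope: {slope:.3f}")  # approximately -2.5
```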
The survival function (SF) of the power-law distribution is given by Eq. (3): $\bar{F}(x) = P(X > x) = \frac{\zeta(\alpha, x+1)}{\zeta(\alpha)}$ (3), where $\zeta(\alpha, x) = \sum_{k=x}^{\infty} k^{-\alpha}$ is the Hurwitz zeta function. Marshall-Olkin Power-Law Distribution Pérez-Casany and Casellas (2013) explore a new form of power-law distributions by extending the original power-law function through the Marshall-Olkin transformation; the resulting more general distribution is called the Marshall-Olkin power-law distribution. This distribution has two parameters, $\alpha$ and $\beta$, and its survival function (SF) is given by Eq. (4): $P(X > x) = \bar{G}(x; \alpha, \beta) = \frac{\beta \bar{F}(x)}{1 - \bar{\beta} \bar{F}(x)} = \frac{\beta \zeta(\alpha, x+1)}{\zeta(\alpha) - \bar{\beta} \zeta(\alpha, x+1)}$ (4), where $\beta > 0$, $\alpha > 1$, and $\bar{\beta} = 1 - \beta$. The probability mass function (PMF) can be computed through Eq. (5): $P(X = x) = \bar{G}(x-1; \alpha, \beta) - \bar{G}(x; \alpha, \beta) = \frac{\beta \zeta(\alpha)\, x^{-\alpha}}{[\zeta(\alpha) - \bar{\beta} \zeta(\alpha, x)][\zeta(\alpha) - \bar{\beta} \zeta(\alpha, x+1)]}$ (5), where $x \in \mathbb{N}^+$ and $\zeta(\alpha, x)$ is the Hurwitz zeta function defined above. The Marshall-Olkin power-law (MOPL) distributions are a generalization of power-law distributions and overcome some limitations of pure power-law distributions by introducing the parameter $\beta$. This parameter allows more flexibility in adjusting the probabilities of small values while keeping the linearity in the tails. MOPL models are capable of fitting the concave and convex deviations encountered in realistic situations, and have been applied to characterize various data such as music compositions and web page visits (Pérez-Casany and Casellas, 2013). In this paper, we use MOPL models to characterize the length-frequency distributions of entities in different types and different languages. Kolmogorov-Smirnov Test Like many previous studies (Clauset et al., 2009; Hanel et al., 2017; Wang et al., 2017; Gerlach and Altmann, 2019; Artico et al., 2020; Nettasinghe and Krishnamurthy, 2021; Zhong et al., 2022b), we employ the Kolmogorov-Smirnov (KS) test (Smirnov, 1948; Stephens, 1974) to examine the goodness-of-fit. The KS statistic ($D_n$) quantifies the distance between the cumulative distribution function (CDF) of a set of data points, $F_n(l)$, and the CDF of a theoretic distribution, $F(l)$, as defined by Eq. (6): $D_n = \sup_l |F_n(l) - F(l)|$ (6), where $\sup_l$ is the supremum of the set of distances. The KS statistic $D_n \in [0, 1]$ is the maximal distance between the two CDF curves $F_n(l)$ and $F(l)$; the smaller the $D_n$ value, the better the theoretic distribution fits the data points. The KS test can also be used to examine whether two underlying distributions are significantly different. In that case, the two-sample KS statistic ($D_{n,m}$) is defined by Eq. (7): $D_{n,m} = \sup_l |F_n(l) - F_m(l)|$ (7), where $F_n(l)$ and $F_m(l)$ are the CDF curves of two sets of data points. In the KS test, the null hypothesis ($H_0$) is that the data points are drawn from a theoretic distribution, which can be any parametric distribution, such as a Zipf distribution, normal distribution, power-law distribution, or log-normal distribution; the alternative ($H_1$) is that the data points are not drawn from that theoretic distribution. A larger p-value suggests that it is safer to conclude that the data points are not significantly different from the hypothesized distribution. In the two-sample KS test, the null hypothesis ($H'_0$) is that the two sets of data points are drawn from the same underlying distribution, while the alternative ($H'_1$) is that they are not from the same distribution. Similarly, a larger p-value suggests that it is safer to conclude that the two sets of data points are drawn from the same underlying distribution.
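A companion sketch of the MOPL survival function and PMF of Eqs. (4)-(5), under our reading of the reconstructed equations (the paper itself uses the zipfextR package in R, not this code):

```python
import numpy as np
from scipy.special import zeta  # zeta(a, q): Hurwitz zeta, sum_{k>=0} (k+q)^(-a)

def mopl_sf(x, alpha, beta):
    """MOPL survival function, Eq. (4)."""
    z = zeta(alpha, 1)                  # Riemann zeta(alpha)
    f_bar = zeta(alpha, x + 1) / z      # pure power-law SF, Eq. (3)
    return beta * f_bar / (1.0 - (1.0 - beta) * f_bar)

def mopl_pmf(x, alpha, beta):
    """MOPL probability mass function, Eq. (5)."""
    z = zeta(alpha, 1)
    bb = 1.0 - beta                     # beta-bar
    return (beta * z * x ** (-alpha)
            / ((z - bb * zeta(alpha, x)) * (z - bb * zeta(alpha, x + 1))))

x = np.arange(1, 200)
# With beta = 1 the MOPL reduces to the pure power law of Eq. (1).
assert np.allclose(mopl_pmf(x, 2.8, 1.0), x ** -2.8 / zeta(2.8, 1))
# The PMF telescopes from the SF, so the partial sum equals 1 - SF(max x).
assert np.isclose(mopl_pmf(x, 2.8, 0.5).sum(), 1 - mopl_sf(199, 2.8, 0.5))
```

At $\beta = 1$ the extra parameter vanishes, while other values of $\beta$ reshape the head of the distribution without changing the tail slope $-\alpha$, which matches the flexibility described above.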
Average Error Besides the KS test, we also define a metric called the average error to examine the goodness-of-fit. The average error is defined by Eq. (8): $E_{avg} = \frac{1}{N} \sum_{x_i} \frac{|p_N(x_i) - p(x_i)|}{p_N(x_i) \cdot p(x_i)}$ (8), where $p_N(x)$ and $p(x)$ are the probability density functions (PDF) of the raw data and the hypothesized data, respectively, and $N = |\{(x_i, p_N(x_i))\}|$ stands for the number of data points. Defining the average-error metric by Eq. (8) removes the impact of different sample sizes. For different models fitting the same dataset, the smaller the $E_{avg}$ a model achieves, the better the model fits the dataset.
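For illustration, both goodness-of-fit measures can be sketched as follows (our sketch: the synthetic data, the Zipf stand-in model, and the parameter values are assumptions, Eq. (8) follows our reconstruction above, and p-values of the continuous KS test are only approximate for discrete data, which is why the paper uses dedicated R packages):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lengths = rng.zipf(2.8, size=10_000)  # stand-in for observed entity lengths

# KS statistic, Eq. (6), against a hypothesized discrete distribution
# (a pure Zipf here for illustration).
model = stats.zipf(2.8)
d_n, p_value = stats.kstest(lengths, model.cdf)
print(f"D_n = {d_n:.4f}, p = {p_value:.3f}")

# Average error, Eq. (8), between the empirical PMF p_N and the model PMF p.
xs, counts = np.unique(lengths, return_counts=True)
p_emp = counts / counts.sum()
e_avg = np.mean(np.abs(p_emp - model.pmf(xs)) / (p_emp * model.pmf(xs)))
print(f"E_avg = {e_avg:.3f}")
```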
Experiments We fit Marshall-Olkin power-law (MOPL) models to twelve datasets about different types of entities and eighteen datasets about entities in different languages, and compare the fitting results of MOPL with those of two state-of-the-art models, namely CSN2009 (Clauset et al., 2009) and $LS_{avg}$ (Zhong et al., 2022b), and an alternative log-normal model. Datasets The datasets we use in this paper mainly involve two kinds: (1) entities in different types and (2) entities in different languages. Most of these datasets contain manually annotated entities, while some contain automatically annotated entities. We collect entities from both the training and test sets of these datasets. Entities in Different Types This kind contains twelve datasets regarding different types of entities collected from dramatically diverse sources, including general named entities (Grishman and Sundheim, 1996; Chinchor, 1997; Sang and Meulder, 2003), entity mentions (Ling and Weld, 2012; Pradhan et al., 2013), time expressions (Pustejovsky et al., 2003a,b; Zhong and Cambria, 2023), aspect terms (Liu, 2012; Pontiki et al., 2014), literary entities (Bamman et al., 2019), defense entities, informal entities (Ritter et al., 2011; Derczynski et al., 2016), and domain-specific entities (Fukuda et al., 1998; Takeuchi and Collier, 2005) that are well studied in the field of natural language processing and related areas. In this paper, we use the term "entity" to broadly represent these diverse concepts, and these specific concepts are treated as different types of entities. Within a specific type of entities, researchers may also assign some pre-defined labels (e.g., PERSON, LOCATION, and ORGANIZATION) to the entities. We use "different types of entities" or "entity types" to refer to the above general named entities, time expressions, aspect terms, etc., while we use "different categories of entities" or "entity categories" to refer to those pre-defined labels. In our analysis, we are concerned with "different types of entities" and do not care much about "different categories of entities," because each type of entities may contain different categories/labels and can reveal general habits of humans in using language, while a certain category of entities reveals only specific and narrow habits. In this paper, we care more about the general habits and principles than the specific and narrow ones. Since English is the most studied language in natural language processing and related areas, we analyze these different types of entities in English. The twelve datasets are (1) ABSA (Pontiki et al., 2014, 2015), (2) ACE04 (Doddington et al., 2004), (3) BBN (Weischedel and Brunstein, 2005), (4) BioMed (Crichton et al., 2017), (5) CoNLL03 (Sang and Meulder, 2003), (6) COVID19 (Wang et al., 2020), (7) LitBank (Bamman et al., 2019), (8) OntoNotes5 (Pradhan et al., 2013), (9) Re3d, (10) TimeExp (Pustejovsky et al., 2003b; Mazur and Dale, 2010; UzZaman et al., 2013; Zhong et al., 2017; Zhong and Cambria, 2018), (11) Twitter (Strauss et al., 2016; Derczynski et al., 2016), and (12) WikiAnchor (Ling and Weld, 2012). They are briefly described below in alphabetical order. ABSA contains two corpora that are used in SemEval-2014 (Pontiki et al., 2014) and SemEval-2015 (Pontiki et al., 2015) for aspect-based sentiment analysis. While the two corpora contain several language units for different tasks, we are concerned with aspect terms and collect them for the analysis of their length-frequency distribution. ACE04 is a benchmark dataset used for the 2004 Automatic Content Extraction (ACE) technology evaluation (Doddington et al., 2004). It consists of various types of data collected from different sources (e.g., newswire and broadcast news) for the analysis of entities and relations in three languages: Arabic, Chinese, and English. We use its English entities for the analysis of different types of entities, and its Arabic entities for the analysis of entities in different languages. BBN consists of Wall Street Journal articles for pronoun co-reference and entity analysis (Weischedel and Brunstein, 2005). It includes 28 entity categories in total; we collect all of its entities for analysis, without considering the entity categories. BioMed contains fourteen corpora that are developed for the analysis of biomedical entities; Crichton et al. (2017) collected these fourteen corpora, from which we obtain the biomedical entities. CoNLL03 is a benchmark dataset with 1,393 news articles derived from the Reuters RCV1 Corpus, which was collected between August 1996 and August 1997 (Sang and Meulder, 2003). We collect its entities, without entity categories, for the analysis of the length-frequency distribution. COVID19 is a newly constructed dataset for the analysis of entities related to the COVID-19 pandemic (Wang et al., 2020); we collect its entities for the length-frequency analysis. LitBank is a dataset collected from 100 different English-language literary articles over a long period of time, and it is developed for the analysis of literary entities (Bamman et al., 2019). OntoNotes5 is a large-scale dataset collected from different sources (e.g., news articles, newswire, and web data) over a long period of time for the comprehensive analysis of syntax, co-reference, propositions, word senses, and named entities in three languages (i.e., English, Chinese, and Arabic) (Pradhan et al., 2013). In this paper, we are concerned with its English entities. Re3d is a dataset with various documents relevant to the conflict in Syria and Iraq; it is constructed for the analysis of entity and relation extraction in the domain of defense and security. We collect its entities for analysis. TimeExp consists of three corpora that are developed for the analysis of time expressions (Zhong et al., 2017; Zhong and Cambria, 2018; Zhong et al., 2020). These corpora include TempEval-3 (including TimeBank (Pustejovsky et al., 2003b), TE3-Silver, AQUAINT, and the Platinum corpus) (UzZaman et al., 2013), WikiWars (Mazur and Dale, 2010), and Tweets (Zhong et al., 2017). Twitter consists of two corpora whose text is collected from Twitter: WNUT16 (Strauss et al., 2016) and the Broad Twitter Corpus (Derczynski et al., 2016).
These two corpora are developed for the analysis of entities in informal text. WikiAnchor treats the anchor text (i.e., the text in hyperlinks) from Wikipedia (the 20110513 version) as entity mentions (Ling and Weld, 2012); we collect these entity mentions (i.e., anchor text) for the length-frequency analysis. For each of the datasets that contain two or more corpora (i.e., ABSA, BioMed, TimeExp, and Twitter), we simply merge all the entities from the whole corpora. Note again that we collect from these datasets only their entities for the analysis of length-frequency distributions; we do not care about their entity categories (or pre-defined labels). Table 2 reports the entity types and statistics of the twelve datasets. As mentioned in Section 3.2, the entity length $l$ is defined as the number of words in an entity. Table 2 shows that the numbers of entities in the twelve datasets vary dramatically, ranging from 3,394 (Re3d) to 10,260,797 (COVID19); the maximal lengths and standard deviations of these entities are also diverse: the maximal lengths vary from 14 to 129 and the standard deviations vary from 0.36 to 19.66. However, the average lengths of these entities are comparable and all lie around 2 (from 1.26 to 2.93). This indicates that the average length is a common characteristic among these diverse entities. Entities in Different Languages This kind contains named entities in eighteen different languages. These datasets are collected from the 2004 Automatic Content Extraction (ACE) evaluation (Doddington et al., 2004), European Newspapers, the NCHLT Afrikaans Named Entity Annotated Corpus, Basque EIEC (version 1.0), BSNLP 2017, Italian KIND (Paccosi and Aprosio, 2021), Norwegian Navnkjenner (Johansen, 2019), and RONEC (Dumitrescu and Avram, 2019). The eighteen languages are (1) Afrikaans, (2) Arabic, (3) Basque, (4) Bokmål, (5) Croatian, (6) Czech, (7) French, (8) German, (9) Italian, (10) Dutch, (11) Nynorsk, (12) Polish, (13) Romanian, (14) Russian, (15) Samnorsk, (16) Slovak, (17) Slovene, and (18) Ukrainian. We do not include English in this kind of datasets because the different types of entities are already analyzed in English. Table 3 summarizes the statistics of entities in the eighteen languages. It shows that the numbers of these entities are significantly diverse, ranging from 4,748 (Basque) to 21,105,675 (Croatian). The maximal lengths and standard deviations of these entities in different languages are somewhat diverse but not that dramatic, while the average lengths of these entities are comparable, again around 2 (specifically, from 1.10 to 2.35). These statistics are consistent with the corresponding ones of different types of entities reported in Table 2. This indicates that entities across different types and different languages share some similar characteristics. Compared Methods We evaluate the quality of MOPL models in fitting the length-frequency distributions of entities against two state-of-the-art models, namely CSN2009 (Clauset et al., 2009) and $LS_{avg}$ (Zhong et al., 2022b), and an alternative log-normal model. CSN2009: Clauset et al. (2009) propose a maximum-likelihood fitting method, denoted by CSN2009, combined with goodness-of-fit tests based on the Kolmogorov-Smirnov statistic, to fit power-law distributions to empirical data. CSN2009 estimates the exponent of a power-law model and the minimal value from which the power-law distribution starts.
Besides data fitting, CSN2009 also adopts the KS test with likelihood ratios to evaluate how well a model fits the data. CSN2009 has been the most popular method for fitting power-law distributions in the last decade. $LS_{avg}$: Zhong et al. (2022b) demonstrate through extensive experiments that least-squares methods can accurately fit power-law distributions. They propose a least-squares method to fit power-law distributions to empirical data and use an averaging strategy to reduce the impact of noisy data that deviate from the fitted line. LogNormal: Log-normal distributions are the alternative distributions that researchers usually use to fit data when considering power-law distributions. Therefore, besides CSN2009 and $LS_{avg}$, we also compare MOPL models with the log-normal model in fitting the length-frequency of entities. Implementation Details For the data-fitting experiments, we use the zipfextR package (Pérez-Casany and Casellas, 2013) in the R programming language to implement our method, and apply the published code of CSN2009 and $LS_{avg}$ to the datasets. For the KS test, we use the dgof and KSgeneral (Dimitrova et al., 2020) packages in R for MOPL, $LS_{avg}$, and the log-normal model, while we use CSN2009's KS-test module for CSN2009. In experiments, we find that for the same model on the same dataset, dgof and KSgeneral achieve the same $D_n$ value (i.e., the KS statistic) but different p-values. This suggests that the $D_n$ values are accurate while the p-values may not be. In this paper, we use the dgof package to report the $D_n$ values and make the final Accept/Reject decisions. All our experiments are conducted on a Dell PowerEdge R740 server with 96 CPU cores, 256GB of memory, and the CentOS 7 system. Experimental Results Tables 4 and 5 report the fitting and goodness-of-fit testing results of MOPL and the three compared models on the length-frequency distributions of entities in different types. Specifically, Table 4 reports the estimated parameters of the models and their coverages (i.e., the percentages of data that the models cover), while Table 5 reports the goodness-of-fit testing results of the models on the datasets, including $D_n$, $E_{avg}$, and DEC, where DEC indicates the decision to accept or reject the hypothesis $H_0$. Figure 1 visualizes the results of MOPL and the three compared models fitting the length-frequency distributions of entities in different types. Table 6 reports the fitting results and Table 7 reports the goodness-of-fit testing results of MOPL and the three compared models fitting the length-frequency of entities in different languages. Figures 2 and 3 visualize those fittings. What follows are separate discussions of the model fitting and testing results on the length-frequency of entities in different types and in different languages. Results on the length-frequency of entities in different types Let us first look at the three measures that examine the goodness-of-fit in Table 5: $D_n$, $E_{avg}$, and DEC. Table 5 shows that MOPL achieves the best results in all three measures on all twelve datasets, in comparison with the three compared models. Specifically, MOPL achieves $D_n$ values in the range from 7.88E-05 to 1.22E-02 and $E_{avg}$ values from 0.18 to 1.40, as well as all "Accept" decisions across the twelve datasets.
By contrast, $LS_{avg}$ achieves $D_n$ values in the range from 2.73E-01 to 8.00E-01 and $E_{avg}$ values from 1.12 to 4.57, as well as all "Reject" decisions across the datasets. The three measures that CSN2009 achieves are 4.46E-03∼6.02E-02 for $D_n$, 0.25∼0.66 for $E_{avg}$, and 5 "Accept" and 7 "Reject" for DEC. The three measures of LogNormal are 1.76E-02∼1.21E-01 for $D_n$, 0.36∼11.27 for $E_{avg}$, and all 12 "Reject" for DEC. This indicates that MOPL fits the length-frequency distributions of entities in different types much better than $LS_{avg}$ and CSN2009, which are developed to fit power-law distributions, and LogNormal, which is often used as an alternative to power-law models for fitting empirical data. Figure 1 intuitively visualizes the difference between MOPL and the three compared models in fitting the length-frequency distributions of entities on the twelve datasets. From Figure 1 we can see that the fittings of MOPL are much better than those of the three compared models. More importantly, the fact that MOPL achieves all "Accept" decisions on the twelve datasets indicates that MOPL is a suitable model to characterize the length-frequency of entities in different types. The fact that MOPL achieves the best goodness-of-fit testing results indicates that MOPL obtains the most accurate estimated parameters. As shown in Table 4, therefore, the $\hat{\alpha}$ of MOPL should be considered as relatively accurate estimates of the exponents of the power-law segments of the length-frequency distributions of entities in different types. All the $\hat{\alpha}$ of MOPL fitting these different types of entities range from 2.69 to 5.83, and most of them range from 2.69 to 4.74. This indicates that the length-frequency of entities in different types has a stable scaling property. Let us now look at the fittings of the two state-of-the-art compared models, $LS_{avg}$ and CSN2009. The $\hat{\alpha}$ of $LS_{avg}$ deviate relatively far from the $\hat{\alpha}$ of MOPL. The reason is that $LS_{avg}$ assumes that a power law starts from the very beginning of an empirical dataset, but Figure 1 shows that this assumption does not hold for the length-frequency of entities. This indicates that a pure power-law model is unsuitable for characterizing the length-frequency of entities in different types. On the other hand, the $\hat{\alpha}$ of CSN2009 deviate only slightly from the $\hat{\alpha}$ of MOPL. The reason is that CSN2009 adopts a minimum-KS-statistic strategy that chooses a larger lower bound (i.e., $\hat{x}_{min}$) and fits only the long tails. Consequently, CSN2009 discards the majority of the data and achieves low coverages of only 1.23% to 70.99%, while the other models cover more than 98.70% of the data. This result that CSN2009 achieves low coverage in fitting empirical data is consistent with the observation reported in Zhong et al. (2022b). Results on the length-frequency of entities in different languages Let us first look at the three goodness-of-fit testing measures in Table 7 as well: $D_n$, $E_{avg}$, and DEC. Table 7 shows that none of the four models (i.e., MOPL, $LS_{avg}$, CSN2009, and LogNormal) can perfectly characterize the length-frequency distributions of entities in the eighteen languages. The fittings to the length-frequency of entities in different languages are much worse than the fittings to the length-frequency of entities in different types. A possible reason is that some of the datasets in non-English languages contain a large amount of noise.
As mentioned above, English is the most studied language in the field of natural language processing and related areas; other languages are also studied, but their annotated datasets may not be as accurate as the English ones. Another possible reason is that none of the authors are familiar with those languages, so we cannot guarantee the accuracy of the annotations in these datasets. Let us now look at the comparison among the four models fitting the length-frequency of entities. While MOPL does not perfectly characterize the length-frequency distributions of entities in all eighteen languages, it outperforms the three compared models. Specifically, MOPL achieves $D_n$ values in the range from 1.72E-03 to 4.01E-02, $E_{avg}$ values in the range from 0.17 to 2.47, and 8 "Accept" and 10 "Reject" decisions across the eighteen languages. By contrast, $LS_{avg}$ achieves $D_n$ values from 1.00E-01 to 7.69E-01, $E_{avg}$ values from 0.33 to 23.99, and all 18 "Reject" decisions across the eighteen languages. CSN2009 achieves $D_n$ values from 4.92E-03 to 5.69E-02, $E_{avg}$ values from 0.15 to 3.18, and 6 "Accept" and 12 "Reject" decisions. LogNormal achieves $D_n$ values from 1.70E-02 to 1.24E-01, $E_{avg}$ values from 0.34 to 6.81, and all 18 "Reject" decisions. The comparison among the four models is intuitively visualized in Figures 2 and 3. The fitting and testing results indicate that MOPL is more suitable for characterizing the length-frequency distributions of entities in different languages than $LS_{avg}$, CSN2009, and LogNormal. Table 6 shows that the $\hat{\alpha}$ of MOPL fitting the length-frequency distributions of entities in different languages range only from 2.66 to 5.12, which is consistent with the $\hat{\alpha}$ of MOPL fitting different types of entities shown in Table 4. This indicates that the length-frequency distributions of entities in different languages also have a stable scaling property. In terms of data coverage, MOPL, $LS_{avg}$, and LogNormal cover almost all the data (i.e., from 99.91% to 100%), while CSN2009 achieves relatively low coverages (as low as 0.60%). Specifically, CSN2009 discards at least 50% of the data in 13 out of 18 languages, and at least 90% of the data in 8 out of 18 languages. The low coverage of CSN2009 on the length-frequency of entities in different languages is consistent with its coverage on the length-frequency of entities in different types reported in Table 4, as well as with the observation reported in Zhong et al. (2022b). Computational Efficiency Table 8 reports the runtimes of MOPL, $LS_{avg}$, CSN2009, and LogNormal fitting the length-frequency distributions of entities in different types and different languages. Table 8 shows that while MOPL fits the length-frequency of entities in both different types and different languages less efficiently than $LS_{avg}$ and LogNormal, it is significantly more efficient than CSN2009. Moreover, while the number of entities in an individual dataset ranges from 3,394 to 10,260,797 across different types (see Table 2) and from 4,748 to 21,105,675 across different languages (see Table 3), the runtime of MOPL on an individual dataset ranges only from 41.71 to 409.67 milliseconds, which is less than one second in every case. This means the runtime of MOPL increases neither linearly nor exponentially as the number of entities increases.
This suggests that MOPL can be easily applied to large-scale datasets with high efficiency. Discussion Some Implications for Entity-related Linguistic Tasks We here briefly discuss some implications of this linguistic phenomenon (i.e., that the length-frequency of entities in different types and different languages can be characterized by Marshall-Olkin power-law distributions) for entity-related linguistic tasks. This linguistic phenomenon may explain why many statistical models and deep-learning models, such as conditional random fields (Lafferty et al., 2001), long short-term memory networks (Hochreiter and Schmidhuber, 1997), and transformers (Devlin et al., 2018), can be applied to recognizing all these different types of entities from unstructured text (Fukuda et al., 1998; Sang and Meulder, 2003; Takeuchi and Collier, 2005; Nadeau and Sekine, 2007; Ritter et al., 2011; Liu, 2012; Pontiki et al., 2014; Krallinger et al., 2015; Derczynski et al., 2016; Yadav and Bethard, 2018; Zhong, 2020; Zhong et al., 2022a). This linguistic phenomenon may also provide insights into analyzing low-resource languages. Since entities in different types and different languages share many common characteristics (e.g., their length-frequency distributions, average lengths, and scaling property), we could transfer knowledge and resources available in well-studied languages to low-resource languages. We could also apply statistical models and deep-learning models that have been demonstrated to be effective and efficient in well-studied languages to low-resource languages. Distilling this knowledge about the length-frequency distributions of entities can also drive us to design effective and efficient algorithms for specific linguistic tasks. For example, Zhong et al. (2017) found that an average time expression contains only about two words, of which one is a time token and the other is a modifier or numeral, and they then designed proper rules to recognize time expressions from unstructured text. To apply this linguistic knowledge and achieve more progress in linguistic tasks, however, we still need a deeper understanding of this linguistic phenomenon. Limitations While we find that the length-frequency distributions of entities in different types can be well characterized by Marshall-Olkin power-law (MOPL) models, and that the ones in different languages can also be roughly characterized by MOPL models, we should note that our analysis of the datasets in different languages may be inaccurate, because many of these languages are not well studied in the field of natural language processing and related areas, and we authors do not have sufficient expert knowledge of these languages to verify our analysis. Conclusion In this paper, we discover that the length-frequency distributions of entities in different types and different languages can be characterized by a family of Marshall-Olkin power-law (MOPL) models. Our discovery adds a piece of stable knowledge to the field of language, provides some insights into conducting entity-related linguistic tasks, and may also provide a new perspective for future research on understanding language use. Experimental results on the length-frequency of entities in both different types and different languages demonstrate the superiority of MOPL models over a log-normal model and two state-of-the-art power-law models, namely $LS_{avg}$, developed by Zhong et al.
(2022b), and CSN2009, developed by Clauset et al. (2009). Experimental results also demonstrate that MOPL models are scalable to the length-frequency of entities in large-scale real-world datasets.
6,482
1811.02722
2964195534
Subspace clustering aims to find groups of similar objects (clusters) that exist in lower dimensional subspaces of a high dimensional dataset. It has a wide range of applications, such as analysing high dimensional sensor data or DNA sequences. However, existing algorithms have limitations in finding clusters in non-disjoint subspaces and scaling to large data, which impinge on their applicability in areas such as bioinformatics and the Internet of Things. We aim to address these limitations by proposing a subspace clustering algorithm that uses a bottom-up strategy. Our algorithm first searches for base clusters in low dimensional subspaces. It then forms clusters in higher-dimensional subspaces using these base clusters, which we formulate as a frequent pattern mining problem. This formulation enables efficient search for clusters in higher-dimensional subspaces, which is done using FP-trees. The proposed algorithm is evaluated against traditional bottom-up clustering algorithms and state-of-the-art subspace clustering algorithms. The experimental results show that the proposed algorithm produces clusters with high accuracy, and scales well to large volumes of data. We also demonstrate the algorithm's performance using ten real-life genomic datasets.
From an algorithmic point of view, clustering algorithms can be classified into bottom-up algorithms and top-down algorithms @cite_9 . As our algorithm follows a bottom-up strategy, we briefly discuss the relevant algorithms of this class to highlight our contributions.
{ "abstract": [ "As a prolific research area in data mining, subspace clustering and related problems induced a vast quantity of proposed solutions. However, many publications compare a new proposition—if at all—with one or two competitors, or even with a so-called “naive” ad hoc solution, but fail to clarify the exact problem definition. As a consequence, even if two solutions are thoroughly compared experimentally, it will often remain unclear whether both solutions tackle the same problem or, if they do, whether they agree in certain tacit assumptions and how such assumptions may influence the outcome of an algorithm. In this survey, we try to clarify: (i) the different problem definitions related to subspace clustering in general; (ii) the specific difficulties encountered in this field of research; (iii) the varying assumptions, heuristics, and intuitions forming the basis of different approaches; and (iv) how several prominent solutions tackle different problems." ], "cite_N": [ "@cite_9" ], "mid": [ "2079361215" ] }
Scalable Bottom-up Subspace Clustering using FP-Trees for High Dimensional Data
Subspace clustering aims to find groups of similar objects, or clusters, that exist in lower dimensional subspaces of a high dimensional dataset. This has a wide range of applications, including the rapidly growing fields of the Internet of Things (IoT) [1] and bioinformatics [2]. Applications such as these generate large volumes of high dimensional data, which bring new challenges to the subspace clustering problem. In this paper we propose a novel approach to subspace clustering that addresses two key challenges in these applications: scalability to large datasets and non-disjoint subspaces. The first challenge lies in handling large inputs. This is essential for many applications nowadays, since the captured data can grow to millions of records in a short period of time. It has been shown [3], [4] that many existing algorithms have high computational costs and take considerable time to cluster relatively small inputs; e.g., STATPC [3] needs more than 13 hours to cluster 7,500 records of 16 dimensions. Table I illustrates how our algorithm can scale to inputs with large volumes of data, in comparison to the state-of-the-art subspace clustering algorithms SWCC [2], SSC [5], and LRR [6]. The running time of our algorithm over 100,000 data points is half that required by SWCC (which is a highly efficient co-clustering algorithm, but cannot find clusters in non-disjoint subspaces). The state-of-the-art subspace clustering algorithms SSC and LRR also suffer as the number of data points increases: SSC triggers memory errors when the number of data points reaches 15,000, while LRR cannot terminate within 12 hours for just 5,000 points. The second challenge involves finding clusters in non-disjoint subspaces [7]. Many recent algorithms [5], [6] assume that clusters are located in disjoint subspaces, which do not have any intersection except for the origin. This is a strong assumption that can be unrealistic, because real-life data may be correlated in different overlapping subsets of dimensions, a property also known as local feature relevance [8]. For example, with gene expression data, a particular gene can be involved in multiple genetic pathways, which can result in different symptoms among different sets of patients [9]. Hence, a gene can belong to different clusters that have dimensions in common while differing in other dimensions [10]. Figure 1 presents another example of clusters in non-disjoint subspaces, observed in data collected from IoT applications. The heatmap visualizes the subspace clustering results of a car parking occupancy dataset at 10 locations from 9am to 1pm, where each column represents a car parking bay and each row represents an hour of the day. It can be observed that clusters $C_1$ and $C_2$ are in non-disjoint subspaces, since they share the dimensions of parking bays P2 and P3 in common. In the case of $C_1$, this can be interpreted as the utilisation of these two parking bays following some pattern that is also observed at P1 between 9am-10am. On the other hand, cluster $C_2$ shows that P2 and P3 follow a different pattern between 11am-1pm, and share that pattern with P4 and P5. Further analysis of the data can suggest that {P2, P3, P1} are busy parking bays during morning peaks, whereas {P2, P3, P4, P5} have higher occupancy levels during lunch time. To address these challenges, we propose a novel algorithm that can find clusters in non-disjoint subspaces and scale well with large inputs.
The algorithm follows a bottom-up strategy and comprises two phases. First, it searches for potential clusters in low dimensional subspaces, which we call base clusters. We start with base clusters instead of the dense units in separate dimensions that are used in existing bottom-up clustering algorithms [8]. This allows our algorithm to preserve the covariance of data between different dimensions, which is also a critical factor when clustering high dimensional data, as we further elaborate in Section 4.1. In addition, this approach makes our algorithm more stable and tolerant to variations in parameter settings. In the second phase, base clusters that share similar sets of data points are aggregated together to form clusters in higher dimensional subspaces. This process of aggregation is non-trivial. One of the main challenges lies in keeping the number of aggregated clusters tractable. This not only directly affects the computational costs of the algorithm, but also ensures that the final result is presented as an appropriate number of meaningful clusters. Many existing algorithms [11], [12] depend on combinatorial search to combine low dimensional clusters (dense units). If there are on average $m$ dense units in each dimension, the first level of aggregation of CLIQUE [11] (to combine one-dimensional dense units into two-dimensional clusters) would need to check $|m|^d$ possible pairwise aggregations, where $d$ is the number of dimensions; further aggregation would need to be applied sequentially for each subsequent higher dimension. We alleviate this heavy computation by transforming the aggregation problem into a frequent pattern mining problem [13] to achieve efficient and robust aggregation of base clusters. This approach also allows us to avoid the construction of a similarity matrix, which has quadratic complexity with respect to the input volume. Therefore, we reduce both time and space complexity and enable the algorithm to work with very large inputs. During this process, a base cluster may be aggregated into more than one cluster in different higher dimensional subspaces that have overlapping dimensions, which enables us to find non-disjoint subspace clusters. The general steps of our algorithm are summarized in Figure 2 and detailed in Section 4. We make the following contributions: • We propose a novel subspace clustering algorithm that can find clusters in non-disjoint subspaces and handle very large inputs. The novelty of our approach is reflected in both phases of the algorithm. First, we search for base clusters in low dimensional subspaces to preserve the covariance of data between different dimensions. Second, we transform the process of sequential aggregation of low dimensional clusters into a frequent pattern mining problem to construct high dimensional clusters. • We demonstrate that the proposed algorithm outperforms traditional subspace clustering algorithms using bottom-up strategies, as well as state-of-the-art algorithms with other clustering strategies, in terms of accuracy and scalability on large volumes of data. • We conduct a range of experiments to demonstrate the effectiveness of our algorithm in different practical applications. Specifically, we present how the algorithm can be applied to (1) real-life sensor data from the City of Melbourne, Australia [14], and (2) 10 different gene expression datasets [9], and produce comparable or better results than state-of-the-art algorithms. III. PROBLEM STATEMENT We first present the notation used in this paper.
• $S_i^{(k)}$ is a subspace of $k$ dimensions, represented as the set of its component dimensions: $S_i^{(k)} = \{d_{i1}, \ldots, d_{ij}, \ldots, d_{ik}\}$, where $d_{ij}$ represents the $j$-th dimension. • $X_j$ or $\{x_j\}$ is a set of points; $x_j$ denotes a point: $x_j = \{x_{ji}\}_{i=1}^{k}$, where $x_{ji}$ is the coordinate in the $i$-th dimension. • $C_{S_i}^{X_j}$ is a cluster formed by the points $X_j$ in subspace $S_i$. Let $X = \{x_i \in \mathbb{R}^d : i = 1..n\}$ be a set of $n$ points in a $d$-dimensional space, and let $X_j$ be a subset of $X$. The set of all subspace clusters is denoted as $Y = \{C_{S_i}^{X_j}, i : 1..s, j : 1..c\}$. Here, $s$ denotes the number of subspaces containing clusters, and $c$ denotes the number of all clusters. More than one cluster can exist in a subspace, i.e., $c \geq s$. Our subspace clustering algorithm finds all clusters by identifying their corresponding subspaces and point sets. We take a bottom-up approach to find the clusters in subspaces, starting from finding base clusters in low dimensional subspaces. The algorithm used to find the base clusters is orthogonal to our study; we use k-means in the experiments for simplicity, although any low dimensional clustering algorithm may be used. Once the base clusters are found, our algorithm aggregates them to form clusters in higher-dimensional subspaces. We follow a probabilistic approach together with the downward closure property of density to guarantee the validity of the formation of clusters in higher dimensional subspaces. This is formulated as Lemma 1. Lemma 1: Given two points $x_1$ and $x_2$ in subspace $S_i$, the probability that $x_1$ and $x_2$ belong to the same cluster in subspace $S_i$ is proportional to the cardinality $|\{S_{i'}\}|$ ($S_{i'} \subset S_i$) of the subspaces in which $x_1$ and $x_2$ belong to the same cluster. Proof: Let $C_{S_i}$ denote the event that the two points $x_1$ and $x_2$ belong to the same cluster in subspace $S_i$. Assume that we have already performed clustering in lower dimensional subspaces and found that these two points belong to the same cluster in a set of $p$ subspaces $S = \{S_{i1}, \ldots, S_{ij}, \ldots, S_{ip}\}$ ($S_{ij} \subset S_i$). Given this knowledge, the probability that $x_1$ and $x_2$ belong to the same cluster in $S_i$ is: $P_1 = P(C_{S_i} \mid C_{S_{i1}}, \ldots, C_{S_{ip}}) = \frac{P(C_{S_i}, C_{S_{i1}}, \ldots, C_{S_{ip}})}{P(C_{S_{i1}}, \ldots, C_{S_{ip}})}$. We show that the probability $P_1$ increases as new evidence of the cluster formation of $x_1$ and $x_2$ is found in other subspaces of $S_i$. Specifically, let these two points also belong to a cluster in a certain subspace $S_{im} \subset S_i$ ($S_{im} \notin S$, i.e., $S_{im}$ is indeed a newly discovered subspace in which $x_1$ and $x_2$ belong to the same cluster). The probability of them belonging to the same cluster in $S_i$ becomes: $P_2 = P(C_{S_i} \mid C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}}) = \frac{P(C_{S_i}, C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}})}{P(C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}})}$. By applying the chain rule, we can show that $P_2 > P_1$: $\frac{P_2}{P_1} = \frac{P(C_{S_i}, C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}})}{P(C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}})} \times \frac{P(C_{S_{i1}}, \ldots, C_{S_{ip}})}{P(C_{S_i}, C_{S_{i1}}, \ldots, C_{S_{ip}})}$. According to the downward closure property of density, if $x_1$ and $x_2$ are near in $S_i$, they are also near in all subspaces of $S_i$, including $S_{im}$. Hence, $P(C_{S_{im}} \mid C_{S_i}) = 1$, or $P(C_{S_{im}}, C_{S_i}) = P(C_{S_i})$. Therefore, $P(C_{S_i}, C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}}) = P(C_{S_i}, C_{S_{i1}}, \ldots, C_{S_{ip}})$. The previous equation can then be rewritten as: $\frac{P_2}{P_1} = \frac{P(C_{S_{i1}}, \ldots, C_{S_{ip}})}{P(C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}})} = \frac{\sum_{C_{S_{im}}} P(C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}})}{P(C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}})}$. By marginalising the numerator over $C_{S_{im}}$, we can deduce that $\frac{P_2}{P_1} \geq 1$.
We therefore show that additional evidence of $x_1$ and $x_2$ belonging to the same cluster in another subspace $S_{im} \subset S_i$ increases the probability that these two points belong to the same cluster in $S_i$. Thus, Lemma 1 is proved. The intuition behind Lemma 1 is that the formation of clusters in lower dimensional subspaces can be used as evidence to reinforce and increase the posterior probability of the formation of a cluster for the same set of points in a higher dimensional super subspace. Therefore, we say that there is a high probability that a set of points forms a cluster in a high dimensional subspace if they form clusters in a sufficiently large number of its subspaces. IV. PROPOSED METHOD We propose a two-phase subspace clustering algorithm, as summarised in Algorithm 1. A. Phase 1: Base Cluster Search Our first phase searches for lower dimensional clusters. These are called base clusters, as they are the basis that forms higher dimensional clusters. Unlike traditional bottom-up subspace clustering algorithms such as CLIQUE [11], ENCLUS, and MAFIA [12] that search for dense units in individual dimensions, we search for base clusters in subspaces with two or more dimensions. This approach can preserve the covariance between different dimensions: not only is the proximity between points in each dimension important, but the covariance of values across different dimensions is also critical in deciding the formation of clusters. Figure 3a shows a distribution of 300 points in a 3-dimensional space. Points $\{x_i\}_{i=1}^{100}$ are from a normal distribution $N(1, 2)$ and form a dense unit in dimension $d_1$. Similarly, $\{x_i\}_{i=101}^{200}$ and $\{x_i\}_{i=201}^{300}$ follow two normal distributions $N(7, 2)$ and $N(10, 2)$ in $d_2$ and $d_3$, and form two dense units in these dimensions, respectively. When clustering these points in 2D and 3D spaces, where covariance is implicitly implied, these points do not form any cluster, as confirmed by k-means and by visual inspection of Figure 3a. This can be explained with the normal probability density distributions in Figure 3b. While the first 100 points $\{x_i\}_{i=1}^{100}$ are close to each other in $d_1$, the same set of points has large variances in $d_2$ and $d_3$, and cannot be considered close in the higher dimensional space. The correlation between different dimensions is omitted when each dimension is considered separately. Figure 3c shows an example where missing clusters can be prevented. It contains 300 points whose coordinates in each dimension are drawn from the same three normal distributions $N(1, 2)$, $N(7, 2)$, and $N(10, 2)$; in fact, if we consider each dimension separately, the values in each dimension are the same as in the previous distribution shown in Figure 3a. However, in this example, we enforce that for each point $x_i$, its coordinates in all dimensions must be drawn from the same distribution. No dense units are found in any individual dimension, since the points are normally distributed, as can be observed from the probability density distributions in Figure 3d (which do not show any significant peaks, compared to Figure 3b). With no dense units, no cluster is found by the aforementioned methods. However, it is visually evident that 3 clusters exist in this dataset. Note that the dimensionality of the final clusters is higher than $p$ if the search for base clusters starts with $p$-dimensional subspaces. For example, if the algorithm performs phase 1 with 3-dimensional (3D) subspaces, it assumes there is no cluster in 2D or 1D subspaces. For this reason, it is ideal to start phase 1 in subspaces that are low dimensional, i.e., to keep $p$ small.
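To make phase 1 concrete, here is a minimal sketch (our illustration, not the paper's implementation, which uses MATLAB): it generates data in the spirit of the Figure 3c example and searches for base clusters with k-means in every 2D subspace, with the number of clusters per subspace fixed to 3 for simplicity:

```python
import itertools
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Data like the Figure 3c example: each point draws all of its coordinates
# from one of three normal distributions, so clusters exist in 2D/3D even
# though no dense unit stands out in any single dimension.
X = np.vstack([rng.normal(mu, 2, size=(100, 3)) for mu in (1, 7, 10)])

# Phase 1: base clusters in every 2D subspace (the low dimensional
# clusterer is orthogonal to the method; k-means is used for simplicity).
base_clusters = {}  # (subspace, cluster id) -> indices of covered points
for subspace in itertools.combinations(range(X.shape[1]), 2):
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[:, subspace])
    for cid in np.unique(labels):
        base_clusters[(subspace, cid)] = np.where(labels == cid)[0]

print(f"found {len(base_clusters)} base clusters in "
      f"{X.shape[1] * (X.shape[1] - 1) // 2} 2D subspaces")
```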
Another factor that affects the algorithm is the number of subspaces that need to be searched. If the dimensionality of the full space is low, it is feasible to perform the search in all of its p-dimensional subspaces. As an example with a 50D dataset, the total number of 2D subspaces is 50 2 = 1225. If the number of dimensions is high, it is possible to perform sampling of subspaces instead of considering all of them, as long as each dimension is sampled sufficiently frequently. In this paper, we search for base clusters in all 2D subspaces if the number of dimensions is less than 100, while in higher dimensional datasets we perform subspace sampling. We find that in practice this provides a good balance between clustering quality and computational complexity. Table II shows an example of the output of phase 1. Note that we use the following notation: C Si,j denotes the j th cluster in the subspace S i . It searches for clusters in 6 subspaces {S 1 , ..., S 6 } of the full data space S. Points x 1 , x 2 and x 3 belong to the same cluster C S1,1 in subspace S 1 . They also belong to cluster C S2,1 in subspace S 2 , while sharing no common cluster in other subspaces. The base clusters found that cover similar sets of data points are aggregated together to form clusters in higher dimensional subspaces. Subspace S i of a high dimensional cluster is constituted of all the dimensions of its aggregated base clusters. According to Lemma 1, these base clusters can be considered as evidence to increase the posterior probability of the formation of the high dimensional cluster. B. Phase 2: High Dimensional Cluster Construction Phase 2 learns the patterns of proximity among the points from the output of phase 1, which is denoted as Z (Table II), to derive the final clusters and present them in a succinct and interpretable way. To this end, we consider Z as a transaction database where each point corresponds to a transaction and the base clusters covering that point are the items of that transaction. From Table II, the first row is the transaction of point x 1 , and the corresponding items are C S11 , C S21 , C S31 , C S61 . Subsequently, we use Z as the input to build an FP-Tree [13], in which each branch is an aggregation of base clusters and represents a high dimensional cluster. Effectively, each frequent pattern mined from the tree indicates a sufficiently large group of points that form clusters in a high dimensional subspace. The minimal size of a cluster is controlled by the minimum support (min_sup) [13] of the frequent pattern mining process. In practice, the choice of the min_sup parameter can be guided by the expected minimum cluster size. Note that not all frequent patterns are useful as they can produce redundant clusters. For any cluster defined by the frequent pattern F i , all subsets of F i are also frequent, and correspond to clusters in lower dimensions, but none of them form a cluster as complete as F i does. Therefore, we only need to mine the maximal frequent patterns. In addition, it is important to control the number of frequent patterns since these can quickly grow. Prior to the extraction of maximal frequent patterns, phase 2 analyses the frequencies of patterns at different levels of the FP-Tree, and prunes small branches with low frequencies. These branches correspond to insignificant patterns and only reflect the characteristics of a small portion of the points that do not justify a cluster. This Figure 4. 
Subspaces of base clusters Points S 1 S 2 S 3 S 4 S 5 S 6 x 1 C S 1 ,1 C S 2 ,1 C S 3 ,1 ∅ ∅ C S 6 ,1 x 2 C S 1 ,1 C S 2 ,1 C S 3 ,2 C S 4 ,1 ∅ C S 6 ,1 x 3 C S 1 1 C S 2 ,1 C S 3 ,3 ∅ C S 5 ,1 C S 6 ,1 x 4 C S 1 ,2 C S 2 ,2 C S 3 ,4 C S 4 ,2 C S 5 ,1 C S 6 ,1 x 5 C S 1 ,2 C S 2 ,2 C S 3 ,4 C S 4 ,2 C S 5 ,2 C S 6 ,1C 1 {x 1 , x 2 , x 3 } {C S 6 ,1 , C S 1 ,1 , C S 2 ,1 } C 2 {x 4 , x 5 } {C S 6 ,1 , C S 1 ,2 , C S 2 ,2 , C S 3 ,4 } is essential to prevent the algorithm from producing a huge number of small and meaningless clusters. To this end, phase 2 first performs a scan on the FP-Tree and records the frequency on each branch at each depth level of the tree. It then finds the knee-point [30], which indicates the level after which the frequencies significantly drop. Subsequently, the remainder of that branch is pruned. We present a running example using Table II as the input, with min_sup set to 0.4. Figure 4 shows the FP-Tree before being pruned. The pruning eliminates the node C S51 , which has a frequency of 1 (i.e., the patterns only apply to x 3 ) and hence should not justify a separate cluster. The branch that starts at node C S51 on the right branch of the tree is also pruned (the patterns only apply to x 4 ). Eventually, two clusters are found as presented in Table III. The process of building the tree and mining maximal frequent patterns only requires two passes over the input Z. The process of pruning the tree performs one traversal of the tree, which is linear with respect to the size of Z. This contributes to the low computational complexity and therefore improves the scalability of the algorithm. V. EVALUATION We evaluate our algorithm using real-life datasets from a variety of applications. First, we apply our algorithm to ten gene expression datasets, and compare its accuracy with six clustering algorithms that are commonly used for biomedical data. Next, we apply the algorithm to a real-life dataset of car parking occupancy in a major city, and quantitatively evaluate the result. Finally, we evaluate the algorithm using synthetic datasets of different sizes and dimensions, and compare the results with traditional bottom-up clustering algorithms [3] as well as other state-of-the-art subspace clustering algorithms [5], [6]. We also evaluate the scalability of our algorithm on large datasets. All experiments are conducted with MATLAB on an Intel Core i7-4790 3.6GHz CPU and 16GB of RAM. A. Clustering Gene Expression Data We first perform clustering on ten gene expression datasets that were widely used in different studies [2]. The sizes and characteristics of these datasets are summarised in Table IV. The performance of our proposed algorithm is compared with 7 other algorithms, including EWKM [31], BBAC-S [26], ITCC [27], FFCFW [28], HICC [29], and SWCC [2]. The metric used to measure the correctness of the result is normalised mutual information (NMI) [32]. Note that we also used precision, recall, f-measure, and accuracy to evaluate the clustering results but do not present the comparison numbers here because they are not directly comparable to those presented in the previous papers [2]. Our approach to true/false positives and true/false negatives for clustering is slightly different from the one used in the aforementioned papers. After finding the clusters, these algorithms use the Hungarian algorithm [33] to find the best mapping between the clustering result and the given labels. 
However, the Hungarian algorithm requires that the algorithms find the correct number of clusters, which is guaranteed in [2] because this is given as an input parameter. Our algorithm does not require the number of clusters to be specified in advance, and hence it is not always guaranteed to produce the correct number of clusters. Instead, we use the approach presented in [32] to determine true/false positives and true/false negatives. Next, we present the parameter settings for the algorithms in this experiment. In phase 1, we start the search for base clusters in two-dimensional subspaces (2D), and use k-means to find the base clusters in each of these subspaces. Therefore, there are only two parameters required by our algorithm: the number of base clusters k in each subspace, and the expected minimum size of a cluster, reflected in min sup. We We compute NMI for each clustering result and compare the average results of all algorithms in Table V. A t-test [34] is performed with a significance level of 5% to determine if the average NMI values produced by our algorithm are significantly different from those produced by the other algorithms. In Table V, the cells of the other algorithms are color-coded to highlight the relative performance of our algorithm. A white cell of a baseline algorithm indicates that the baseline algorithm performs worse than ours with statistical significance, a black cell indicates the baseline algorithm has a higher NMI value than ours, whereas a grey cell shows no statistical difference between the results. For example, the last row of the table indicates that the result of our algorithm is better than most of the other algorithms, has no statistical difference compared to BBAC-S, and is worse than k-means. It can be observed from the results that our algorithm produces comparable or better results than all other algorithms for the datasets of ADE, BR2, COL, PRO, and SRB (except for k-means). Our algorithm also performs better than ITCC, FFCFW, and HICC on all datasets. In summary, this demonstrates that we can achieve as good or better accuracy than state-of-the-art algorithms over a variety of genomic datasets. B. Clustering Car Parking Occupancy Data Next, we demonstrate the capability of our algorithm to work with data collected from a real-life IoT application. The City of Melbourne has deployed sensors to record parking events at parking bays around the central business district (CBD). We extract the start and end time of all parking events to compile the parking occupancy at 276 locations at 15 minutes intervals between 09:00-18:00, yielding an input of size 276×36 for each day. The aim is to find clusters of car parking spots that have similar patterns of occupancy at certain times of the day. Each clustering task is performed on five days worth of data to find the patterns of parking occupancy during weekdays. Parking occupancy is an important metric that indicates the efficiency of car park utilisation [35], which heavily affects traffic, ease of commute and business in the CBD. Analysing the car occupancy can reveal patterns in parking behaviour at different car parks during different times of the day, which can then be used to review the parking hotspots or tariffs. By clustering the parking occupancy data, each cluster C Xj Si represents a parking pattern observed at the locations (points) X j during the times (dimensions) defined by S i . The results are evaluated using two methods. 
First, we analysed the coherence of each cluster by statistically verifying whether the clustered parking bays have small deviations in the values of parking occupancy during the corresponding time periods, compared to the rest of the data. The examples of two clusters are shown in Figure 5, where each blue bar represents the mean and standard deviation of the parking occupancy at a certain time of the day, observed at parking bays grouped by the cluster. For example, Cluster 1 in Figure 5a shows the pattern shared by a group of parking bays during 9:00-10:30 and 14:45-17:45 with small standard deviations, compared to significant deviations at other times of the day. Similarly, Cluster 2 shows another pattern that has an occupancy rate of 55% around midday, while such correlation is not observed at other times of the day. Second, to quantify the effectiveness of the method, we use the clusering result to construct an ensemble prediction model to predict the parking occupancy over the next few hours, and compare the accuracy of our model with other models. The details of the prediction models are as follows: • Model 1 applies decision tree regression [36] directly on the occupancy data. • Model 2 first clusters the data using the proposed algorithm and then fits a decision tree regression on the set of car parks in each cluster separately. • Model 3 follows the same approach as Model 2 except that it uses the k-means algorithm in the first phase. Each cluster ideally represents a pattern of parking occupancy shared by a group of parking bays. Fitting a submodel to each cluster allows each submodel to learn the data in more detail and predict with higher accuracy if the values are coherent. Therefore, the accuracy of the prediction model directly reflects the quality of the clusters. This approach of using clustering in an ensemble prediction model has previously been used in [37], [38]. Each prediction model uses the values between 09:00-12:45 as training data to predict the occupancy rates of the next two hours. The coefficient of determination (R2) [39] is used to measure the accuracy. Figure 6 shows that our model (m2) outperforms the other two, reflected in higher R2 scores. It can also be observed that Model 3, which relies on k-means, is not as accurate as Model 1, which implies that fitting submodels to the input does not always translate to higher accuracy. In fact, the accuracy can deteriorate if the values in each submodel are not coherent. In summary, by incorporating the clustering results into decision tree regression to improve the prediction accuracy, we quantitatively show that our clustering algorithm can cluster data into meaningful partitions that share similar patterns. It also demonstrates its capability of handling real datasets with high levels of noise and outliers. C. Experiments with Synthetic Data We further evaluate our algorithm on a variety of synthetic datasets in order to assess (1) its capability to find clusters in disjoint and non-disjoint subspaces, and (2) its capability to scale with large inputs. Figure 7 shows the grayscale heatmap of a sample dataset containing 900 points in a 35-dimensional space. The points {x i } 300 i=1 form a cluster in the subspace S 1:10 , which is constituted by the dimensions {d 1 , ...d 10 }; points {x i } 600 i=301 form a cluster in the subspace S 11:30 , which is constituted by the dimensions {d 11 , ..., d 30 }. 
The points within the same clusters are more coherent, which is reflected by the more uniform shade of gray of the heatmap. These two clusters intersect only at the origin and hence are disjoint. On the other hand, Figure 7b is an example of data having clusters In this experiment, we start the search for base clusters in 2D subspaces. k-means is used to find base clusters in phase 1. Two parameters are required for our algorithm, which is the number of base clusters k in each subspace, and the minimum support min_sup required for the construction of the FP-Tree. Note that the value of min_sup can be deduced from the minimum expected number of points of a cluster. Setting an appropriate value for k is non-trivial. As we argued earlier, the purpose of phase 1 is to find the similarity in cluster membership of the points in the low dimensional subspaces, rather than the exact cluster of each point. We invoke 12 iterations of our algorithm with k ∈ {3, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55} and take the best result. For the baseline algorithms, we also analyse the properties of the synthetic data to derive the data density, the correct number of clusters, and the average dimensions of clusters to provide the ideal range of parameters. The parameters for CLIQUE, SUBCLU, DOC, and STATPC are replicated from [3]. Each of the baseline algorithms is executed 30 times and the average results are recorded. 1) Initial Tests against Baseline Algorithms: In this section we benchmark our algorithm with clustering algorithms including CLIQUE, SUBCLU, DOC, P3C, and STATPC [3], as well as state-of-the-art algorithms including SSC [5], LRR [6], and SSWC [2]. The number of points of the datasets is set to 1000 and the number of dimensions varies from 10 to 100. The running time limit of each algorithm is set to 30 minutes. The result is summarized in Table VI. It can be observed that our algorithm produces comparable or better results compared to SSC and SWCC across all the datasets. These three algorithms, along with STATPC, are the only algorithms that can run to completion within the time threshold. DOC gives consistently high accuracy provided that all five parameters of the algorithm are well-tuned. However, it has significantly higher running time and cannot cluster data larger than 1000 × 40 within 30 mintues. We also analyse the effect of the setting for the parameter k on the clustering results, as shows in Figure 8a. This shows that the clustering results of our algorithm are reasonably insensitive to the setting of k over a wide range of values. For each dataset, there is a value of k at which the clustering result peaks, after which the result deteriorates. We can also observe there is a wide range of k values for which the clustering results are reasonably stable. In practice, the algorithm can be set to run multiple times with different parameters to find the ideal setting. 2) Clustering Non-disjoint Subspaces: We verify the capability of our algorithm to find clusters in non-disjoint subspaces. In this evaluation we use 1000 data points, where the number of dimensions varies between 20 and 100, and the clusters reside in overlapping subspaces, as illustrated Figure 7b. The other algorithms that produce comparable results in the previous section are not included since they are not able to find clusters in overlapping subspaces: SSC and LRR are only able to find clusters in disjoint subspaces [5] [6]. 
Moreover, SWCC assigns weights for each column according to its membership to all clusters and the weights of each column are summed up to 1. This indicates that the memberships of each column to different clusters are exclusive. The result of this evaluation is presented in Figure 8b. The consistently high NMI values (≥ 0.5) confirm the capability of the proposed algorithm in finding clusters in nondisjoint subspaces. 3) Scalability Tests against SSC and SWCC: We evaluate the scalability of our algorithm to the number of data points by generating data having 10 dimensions and varying the number of data points from 1,000 to 1,000,000. We include only SSC and SSWC in this scalability evaluation because they are the fastest baseline algorithms with high accuracy. The execution time is presented in Figure 8c. It shows that our algorithm and SWCC can cluster up to 1 million data points while SSC triggers memory errors when the number of points exceeds 15,000. In summary, these tests on the synthetic datasets demonstrate that our algorithm is relatively insensitive to the choice of parameter settings, while achieving the best overall performance as the number of data points increases. VI. CONCLUSION We proposed a subspace clustering algorithm to find clusters in non-disjoint subspaces. Unlike traditional bottom-up clustering algorithms, our algorithm starts the search for base clusters in low dimensional subspaces instead of in individual dimensions, in order to capture the covariances of values between dimensions, and to increase the tolerance of the algorithm to variations in the parameter settings. Our algorithm aggregates the base clusters to form clusters in higher dimensional subspaces based on the technique of frequent pattern mining. Our approach not only avoids the combinatorial complexity of existing bottom-up algorithms, but also ensures more meaningful clustering results by keeping the numbers of final clusters tractable. Our experiments show that the proposed algorithm finds subspace clusters with high accuracy and scales to large inputs, in terms of both the number of records and the number of dimensions. This makes the algorithm practical to many applications in real life, as demonstrated in clustering gene expression data and car parking occupancy data.
5,950
1811.02722
2964195534
Subspace clustering aims to find groups of similar objects (clusters) that exist in lower dimensional subspaces from a high dimensional dataset. It has a wide range of applications, such as analysing high dimensional sensor data or DNA sequences. However, existing algorithms have limitations in finding clusters in non-disjoint subspaces and scaling to large data, which impinges on their applicability in areas such as bioinformatics and the Internet of Things. We aim to address such limitations by proposing a subspace clustering algorithm using a bottom-up strategy. Our algorithm first searches for base clusters in low dimensional subspaces. It then forms clusters in higher-dimensional subspaces using these base clusters, which we formulate as a frequent pattern mining problem. This formulation enables efficient search for clusters in higher-dimensional subspaces, which is done using FP-trees. The proposed algorithm is evaluated against traditional bottom-up clustering algorithms and state-of-the-art subspace clustering algorithms. The experimental results show that the proposed algorithm produces clusters with high accuracy, and scales well to large volumes of data. We also demonstrate the algorithm's performance using ten real-life genomic datasets.
Another relevant topic is co-clustering (a.k.a. bi-clustering or pattern-based clustering) @cite_35 . Co-clustering can be considered a more general class of methods for clustering high dimensional data that simultaneously cluster rows (points) and columns (dimensions). The main points that differentiate co-clustering from subspace clustering lie in the approach to the problem and in the homogeneous methodology used to find clusters in both axis-parallel and arbitrarily oriented subspaces @cite_9 . In this paper, we also compare the performance of our algorithm on gene expression data with a range of co-clustering algorithms, including SWCC @cite_23 , BBAC-S @cite_5 , ITCC @cite_18 , FFCFW @cite_34 , and HICC @cite_36 .
{ "abstract": [ "", "", "Two-dimensional contingency tables or co-occurrence matrices arise frequently in various important applications such as text analysis and web-log mining. As a fundamental research topic, co-clustering aims to generate a meaningful partition of the contingency table to reveal hidden relationships between rows and columns. Traditional co-clustering algorithms usually produce a predefined number of flat partition of both rows and columns, which do not reveal relationship among clusters. To address this limitation, hierarchical co-clustering algorithms have attracted a lot of research interests recently. Although successful in various applications, the existing hierarchical co-clustering algorithms are usually based on certain heuristics and do not have solid theoretical background. In this paper, we present a new co-clustering algorithm, HICC, with solid theoretical background. It simultaneously constructs a hierarchical structure of both row and column clusters, which retains sufficient mutual information between rows and columns of the contingency table. An efficient and effective greedy algorithm is developed, which grows a co-cluster hierarchy by successively performing row-wise or column-wise splits that lead to the maximal mutual information gain. Extensive experiments on both synthetic and real datasets demonstrate that our algorithm can reveal essential relationships of row (and column) clusters and has better clustering precision than existing algorithms. Moreover, the experiments on real dataset show that HICC can effectively reveal hidden relationships between rows and columns in the contingency table.", "As a prolific research area in data mining, subspace clustering and related problems induced a vast quantity of proposed solutions. However, many publications compare a new proposition—if at all—with one or two competitors, or even with a so-called “naive” ad hoc solution, but fail to clarify the exact problem definition. As a consequence, even if two solutions are thoroughly compared experimentally, it will often remain unclear whether both solutions tackle the same problem or, if they do, whether they agree in certain tacit assumptions and how such assumptions may influence the outcome of an algorithm. In this survey, we try to clarify: (i) the different problem definitions related to subspace clustering in general; (ii) the specific difficulties encountered in this field of research; (iii) the varying assumptions, heuristics, and intuitions forming the basis of different approaches; and (iv) how several prominent solutions tackle different problems.", "Microarray technology enables the collection of vast amounts of gene expression data from biological experiments. Clustering algorithms have been successfully applied to exploring the gene expression data. Since a set of genes may be only correlated to a subset of samples, it is useful to use co-clustering to recover co-clusters in the gene expression data. In this paper, we propose a novel algorithm, called Subspace Weighting Co-Clustering (SWCC), for high dimensional gene expression data. In SWCC, a gene subspace weight matrix is introduced to identify the contribution of gene objects in distinguishing different sample clusters. We design a new co-clustering objective function to recover the co-clusters in the gene expression data, in which the subspace weight matrix is introduced. 
An iterative algorithm is developed to solve the objective function, in which the subspace weight matrix is automatically computed during the iterative co-clustering process. Our empirical study shows encouraging results of the proposed algorithm in comparison with six state-of-the-art clustering algorithms on ten gene expression data sets. We also propose to use SWCC for gene clustering and selection. The experimental results show that the selected genes can improve the classification performance of Random Forests.", "Co-clustering, or simultaneous clustering of rows and columns of a two-dimensional data matrix, is rapidly becoming a powerful data analysis technique. Co-clustering has enjoyed wide success in varied application domains such as text clustering, gene-microarray analysis, natural language processing and image, speech and video analysis. In this paper, we introduce a partitional co-clustering formulation that is driven by the search for a good matrix approximation---every co-clustering is associated with an approximation of the original data matrix and the quality of co-clustering is determined by the approximation error. We allow the approximation error to be measured using a large class of loss functions called Bregman divergences that include squared Euclidean distance and KL-divergence as special cases. In addition, we permit multiple structurally different co-clustering schemes that preserve various linear statistics of the original data matrix. To accomplish the above tasks, we introduce a new minimum Bregman information (MBI) principle that simultaneously generalizes the maximum entropy and standard least squares principles, and leads to a matrix approximation that is optimal among all generalized additive models in a certain natural parameter space. Analysis based on this principle yields an elegant meta algorithm, special cases of which include most previously known alternate minimization based clustering algorithms such as kmeans and co-clustering algorithms such as information theoretic (, 2003b) and minimum sum-squared residue co-clustering (, 2004). To demonstrate the generality and flexibility of our co-clustering framework, we provide examples and empirical evidence on a variety of problem domains and also describe novel co-clustering applications such as missing value prediction and compression of categorical data matrices.", "Fuzzy co-clustering is an unsupervised technique that performs simultaneous fuzzy clustering of objects and features. In this paper, we propose a new flexible fuzzy co-clustering algorithm which incorporates feature-cluster weighting in the formulation. We call it Flexible Fuzzy Co-clustering with Feature-cluster Weighting (FFCFW). By flexible we mean the algorithm allows the number of object clusters to be different from the number of feature clusters. There are two motivations behind this work. First, in the fuzzy framework, many co-clustering algorithms still require the number of object clusters to be the same as the number of feature clusters [1][2][3][4]. This is despite the fact that such rigid structure is hardly found in real-world applications. The second motivation is that while there have been numerous attempts for flexible co-clustering, it is common that in such scheme the relationships between object and feature clusters are not clearly represented. 
For this reason we incorporate a feature-cluster weighting scheme for each object cluster generated by FFCFW so that the relationships between the two types of clusters are manifested in the feature-cluster weights. This enables the new algorithm to generate more accurate representation of fuzzy co-clusters. FFCFW is formulated by fusing together the core components of two existing algorithms [2][5]. Like its predecessors, FFCFW adopts an iterative optimization procedure. We discuss in details the derivation of the proposed algorithm and the advantages it has over other existing works. Experiments on several large benchmark document datasets reveal the feasibility of our proposed algorithm." ], "cite_N": [ "@cite_35", "@cite_18", "@cite_36", "@cite_9", "@cite_23", "@cite_5", "@cite_34" ], "mid": [ "", "", "2007405844", "2079361215", "2616227410", "1929593512", "2152814272" ] }
Scalable Bottom-up Subspace Clustering using FP-Trees for High Dimensional Data
Subspace clustering aims to find groups of similar objects, or clusters, that exist in lower dimensional subspaces from a high dimensional dataset. This has a wide range of applications, including the rapidly growing fields of the Internet of Things (IoT) [1] and bioinformatics [2]. Applications such as these generate large volumes of high dimensional data, which bring new challenges to the subspace clustering problem. In this paper we propose a novel approach to subspace clustering that addresses two key challenges in these applications: scalability to large datasets and non-disjoint subspaces.

The first challenge lies in handling large inputs. This is essential for many applications nowadays since the captured data can grow to millions of records in a short period of time. It has been shown [3], [4] that many existing algorithms have high computational costs and take considerable time to cluster relatively small inputs, e.g., STATPC [3] needs more than 13 hours to cluster 7,500 records of 16 dimensions. Table I illustrates how our algorithm can scale to inputs with large volumes of data, in comparison to the state-of-the-art subspace clustering algorithms SWCC [2], SSC [5], and LRR [6]. The running time of our algorithm over 100,000 data points is half that required by SWCC (which is a highly efficient co-clustering algorithm, but cannot find clusters in non-disjoint subspaces). The state-of-the-art subspace clustering algorithms SSC and LRR also suffer as the number of data points increases. SSC triggers memory errors when the number of data points reaches 15,000, while LRR cannot terminate in 12 hours for just 5,000 points.

The second challenge involves finding clusters in non-disjoint subspaces [7]. Many recent algorithms [5], [6] assume that clusters are located in disjoint subspaces, which do not have any intersection except for the origin. This is a strong assumption that can be unrealistic, because real-life data may be correlated in different overlapping subsets of dimensions, a property also known as local feature relevance [8]. For example, with gene expression data, a particular gene can be involved in multiple genetic pathways, which can result in different symptoms among different sets of patients [9]. Hence, a gene can belong to different clusters that have dimensions in common while differing in other dimensions [10]. Figure 1 presents another example of clusters in non-disjoint subspaces, observed in data collected from IoT applications. The heatmap visualizes the subspace clustering results of a car parking occupancy dataset at 10 locations from 9am to 1pm, where each column represents a car parking bay, and each row represents an hour of the day. It can be observed that clusters $C_1$ and $C_2$ are in non-disjoint subspaces since they share the dimensions of parking bays P2 and P3. In the case of $C_1$, this can be interpreted as the utilisation of these two parking bays following a pattern that is also observed at P1 between 9am-10am. On the other hand, cluster $C_2$ shows that P2 and P3 follow a different pattern between 11am-1pm, and share that pattern with P4 and P5. Further analysis of the data can suggest that {P2, P3, P1} are busy parking bays during morning peaks, whereas {P2, P3, P4, P5} have higher occupancy levels during lunch time. To address these challenges, we propose a novel algorithm that can find clusters in non-disjoint subspaces and scale well with large inputs.
The algorithm follows a bottom-up strategy and comprises two phases. First, it searches for potential clusters in low dimensional subspaces, which we call base clusters. We start with base clusters instead of dense units in separate dimensions, which are used in existing bottom-up clustering algorithms [8]. This allows our algorithm to preserve the covariance of data between different dimensions, which is also a critical factor when clustering high dimensional data, as we further elaborate in Section IV-A. In addition, this approach makes our algorithm more stable and tolerant to variations in parameter settings. In the second phase, base clusters that share similar sets of data points are aggregated together to form clusters in higher dimensional subspaces.

This process of aggregation is non-trivial. One of the main challenges lies in keeping the number of aggregated clusters tractable. This not only directly affects the computational costs of the algorithm, but also ensures that the final result is presented as an appropriate number of meaningful clusters. Many existing algorithms [11], [12] depend on combinatorial search to combine low dimensional clusters (dense units). If there are on average m dense units in each dimension, the first level of aggregation of CLIQUE [11] (to combine one-dimensional dense units into two-dimensional clusters) would need to check on the order of $\binom{md}{2}$ pairwise possible aggregations, where d is the number of dimensions. Further aggregation would need to be applied sequentially for each subsequent higher dimension. We alleviate this heavy computation by transforming the aggregation problem into a frequent pattern mining problem [13] to achieve efficient and robust aggregation of base clusters. This approach also allows us to avoid the construction of a similarity matrix, which has quadratic complexity with respect to the input volume. Therefore, we reduce both time and space complexity and enable the algorithm to work with very large inputs. During this process, a base cluster may be aggregated into more than one cluster in different higher dimensional subspaces that have overlapping dimensions, which enables us to find non-disjoint subspace clusters. The general steps of our algorithm are summarized in Figure 2 and are detailed in Section IV.

We make the following contributions:
• We propose a novel subspace clustering algorithm that can find clusters in non-disjoint subspaces and handle very large inputs. The novelty of our approach is reflected in both phases of the algorithm. First, we search for base clusters in low dimensional subspaces to preserve the covariance of data between different dimensions. Second, we transform the process of sequential aggregation of low dimensional clusters into a problem of frequent pattern mining to construct high dimensional clusters.
• We demonstrate that the proposed algorithm outperforms traditional subspace clustering algorithms using bottom-up strategies, as well as state-of-the-art algorithms with other clustering strategies, in terms of accuracy and scalability on large volumes of data.
• We conduct a range of experiments to demonstrate the effectiveness of our algorithm in different practical applications. Specifically, we present how the algorithm can be applied to (1) real-life sensor data from the City of Melbourne, Australia [14], and (2) 10 different gene expression datasets [9], and produce comparable or better results than state-of-the-art algorithms.

III. PROBLEM STATEMENT

We first present the notation used in this paper.
• $S^{(k)}_i$ is a subspace of k dimensions, which is represented as a set of its component dimensions: $S^{(k)}_i = \{d_{i1}, ..., d_{ij}, ..., d_{ik}\}$, where $d_{ij}$ represents the $j$th dimension.
• $X_j$ or $\{x_j\}$ is a set of points; $x_j$ denotes a point: $x_j = \{x_{ji}\}_{i=1}^{k}$, where $x_{ji}$ is the coordinate in the $i$th dimension.
• $C^{X_j}_{S_i}$ is a cluster formed by points $X_j$ in subspace $S_i$.

Let $X = \{x_i \in \mathbb{R}^d : i = 1..n\}$ be a set of n points in a d-dimensional space, and $X_j$ be a subset of X. The set of all subspace clusters is denoted as $Y = \{C^{X_j}_{S_i}, i : 1..s, j : 1..c\}$. Here, s denotes the number of subspaces containing clusters, and c denotes the number of all clusters. More than one cluster can exist in a subspace, i.e., $c \geq s$. Our subspace clustering algorithm finds all clusters by identifying their corresponding subspaces and point sets.

We take a bottom-up approach to find the clusters in subspaces, starting from finding base clusters in low dimensional subspaces. The algorithm to find the base clusters is orthogonal to our study. We use k-means in the experiments for simplicity, although any low dimensional clustering algorithm may be used. Once the base clusters are found, our algorithm aggregates them to form clusters in higher-dimensional subspaces. We follow a probabilistic approach together with the downward closure property of density to guarantee the validity of the formation of clusters in higher dimensional subspaces. This is formulated as Lemma 1.

Lemma 1: Given two points $x_1$ and $x_2$ in subspace $S_i$, the probability that $x_1$ and $x_2$ belong to the same cluster in subspace $S_i$ is proportional to the cardinality of the set of subspaces $\{S_{i'}\}$ ($S_{i'} \subset S_i$) in which $x_1$ and $x_2$ belong to the same cluster.

Proof: Let $C_{S_i}$ denote the event that the two points $x_1$ and $x_2$ belong to the same cluster in subspace $S_i$. Assume that we have already performed clustering in lower dimensional subspaces and found that these two points belong to the same cluster in a set of p subspaces $S = \{S_{i1}, ..., S_{ij}, ..., S_{ip}\}$ ($S_{ij} \subset S_i$). Given this knowledge, the probability that $x_1$ and $x_2$ belong to the same cluster in $S_i$ is:

$$P_1 = P(C_{S_i} \mid C_{S_{i1}}, \ldots, C_{S_{ip}}) = \frac{P(C_{S_i}, C_{S_{i1}}, \ldots, C_{S_{ip}})}{P(C_{S_{i1}}, \ldots, C_{S_{ip}})}$$

We show that the probability $P_1$ increases as new evidence of the cluster formation of $x_1$ and $x_2$ is found in other subspaces of $S_i$. Specifically, let these two points also belong to a cluster in a certain subspace $S_{im} \subset S_i$ ($S_{im} \notin S$, i.e., $S_{im}$ is indeed a newly discovered subspace in which $x_1$ and $x_2$ belong to the same cluster). The probability of them belonging to the same cluster in $S_i$ becomes:

$$P_2 = P(C_{S_i} \mid C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}}) = \frac{P(C_{S_i}, C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}})}{P(C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}})}$$

By applying the chain rule, we can show that $P_2 \geq P_1$:

$$\frac{P_2}{P_1} = \frac{P(C_{S_i}, C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}})}{P(C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}})} \times \frac{P(C_{S_{i1}}, \ldots, C_{S_{ip}})}{P(C_{S_i}, C_{S_{i1}}, \ldots, C_{S_{ip}})}$$

According to the downward closure property of density, if $x_1$ and $x_2$ are near each other in $S_i$, they are also near each other in all subspaces of $S_i$, including $S_{im}$. Hence, $P(C_{S_{im}} \mid C_{S_i}) = 1$, or $P(C_{S_{im}}, C_{S_i}) = P(C_{S_i})$. Therefore, $P(C_{S_i}, C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}}) = P(C_{S_i}, C_{S_{i1}}, \ldots, C_{S_{ip}})$. The previous equation can then be rewritten as:

$$\frac{P_2}{P_1} = \frac{P(C_{S_{i1}}, \ldots, C_{S_{ip}})}{P(C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}})} = \frac{\sum_{C_{S_{im}}} P(C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}})}{P(C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}})}$$

By marginalising the numerator over $C_{S_{im}}$, we can deduce that $\frac{P_2}{P_1} \geq 1$.
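Equivalently (this is only a restatement of the final step above, not an additional claim), the last ratio is the reciprocal of a conditional probability, which makes the bound immediate:

```latex
\frac{P_2}{P_1}
  = \frac{P(C_{S_{i1}}, \ldots, C_{S_{ip}})}{P(C_{S_{i1}}, \ldots, C_{S_{ip}}, C_{S_{im}})}
  = \frac{1}{P(C_{S_{im}} \mid C_{S_{i1}}, \ldots, C_{S_{ip}})}
  \;\geq\; 1
```

with equality exactly when $P(C_{S_{im}} \mid C_{S_{i1}}, \ldots, C_{S_{ip}}) = 1$, i.e., when the newly discovered subspace $S_{im}$ carries no additional information.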
We therefore show that additional evidence of $x_1$ and $x_2$ belonging to the same cluster in another subspace $S_{im} \subset S_i$ increases the probability that these two points belong to the same cluster in $S_i$. Thus, Lemma 1 is proved.

The intuition of Lemma 1 is that the formation of clusters in lower dimensional subspaces can be used as evidence to reinforce and increase the posterior probability of the formation of a cluster for the same set of points in the higher dimensional super subspaces. Therefore, we say that there is a high probability that a set of points form a cluster in a high dimensional subspace if they form clusters in a sufficiently large number of its subspaces.

IV. PROPOSED METHOD

We propose a two-phase subspace clustering algorithm, as summarised in Algorithm 1.

A. Phase 1: Base Cluster Search

Our first phase searches for lower dimensional clusters. These are called base clusters, as they are the basis from which higher dimensional clusters are formed. Unlike traditional bottom-up subspace clustering algorithms such as CLIQUE [11], ENCLUS and MAFIA [12], which search for dense units in individual dimensions, we search for base clusters in subspaces with two or more dimensions. This approach can preserve the covariance between different dimensions. Not only is the proximity between points in each dimension important, but the covariances of values in different dimensions are also critical in deciding the formation of clusters.

Figure 3a shows a distribution of 300 points in a 3-dimensional space. Points $\{x_i\}_{i=1}^{100}$ are drawn from a normal distribution N(1, 2) and form a dense unit in dimension $d_1$. Similarly, $\{x_i\}_{i=101}^{200}$ and $\{x_i\}_{i=201}^{300}$ follow two normal distributions N(7, 2) and N(10, 2) in $d_2$ and $d_3$, and form two dense units in these dimensions respectively. When clustering these points in 2D and 3D spaces, where covariance is implicitly taken into account, these points do not form any cluster, as confirmed by k-means or by visual inspection of Figure 3a. This can be explained with the normal probability density distributions in Figure 3b. While the first 100 points $\{x_i\}_{i=1}^{100}$ are close to each other in $d_1$, the same set of points has large variances in $d_2$ and $d_3$, and cannot be considered close in the higher dimensional space. The correlation between different dimensions is omitted when each dimension is considered separately.

Figure 3c shows an example where missing clusters can be prevented. It contains 300 points whose coordinates in each dimension are drawn from the three normal distributions N(1, 2), N(7, 2), and N(10, 2). In fact, if we consider each dimension separately, the values in each dimension are the same as in the previous distribution shown in Figure 3a. However, in this example, we enforce that for each point $x_i$, its coordinates in all dimensions must be drawn from the same distribution. No dense units are found in each individual dimension since the points are normally distributed, as can be observed from the probability density distributions in Figure 3d (which do not show any significant peaks, compared to Figure 3b). With no dense unit, no cluster is found by the aforementioned methods. However, it is visually evident that 3 clusters exist in this dataset.

Note that the dimensionality of the final clusters is higher than p if the search for base clusters starts with p-dimensional subspaces. For example, if the algorithm performs phase 1 with 3-dimensional (3D) subspaces, it assumes there is no cluster in 2D or 1D subspaces. For this reason, it is ideal to start phase 1 in subspaces that are low dimensional, i.e., keeping p small.
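The contrast behind Figure 3 can be reproduced with a short simulation. The sketch below is our own simplified reconstruction (the seed, variable names, and exact mixture construction are ours, not the paper's data): both constructions have the same 1D marginals, but only the second ties all coordinates of a point to a single component, producing genuine clusters in the joint space that per-dimension dense-unit methods cannot see.

```python
import numpy as np

rng = np.random.default_rng(0)
means, std, n = np.array([1.0, 7.0, 10.0]), 2.0, 300

# Construction (a): each coordinate picks its mixture component
# independently per dimension -- the 1D marginals look the same as in
# construction (c), but no coherent cluster exists in the 3D joint space.
comp_a = rng.integers(0, 3, size=(n, 3))
X_a = means[comp_a] + rng.normal(0.0, std, (n, 3))

# Construction (c): all coordinates of a point come from the SAME
# component -- the 1D marginals are unchanged, yet three genuine
# clusters now exist in the 3D joint space.
comp_c = np.repeat([0, 1, 2], n // 3)
X_c = means[comp_c][:, None] + rng.normal(0.0, std, (n, 3))
```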
Another factor that affects the algorithm is the number of subspaces that need to be searched. If the dimensionality of the full space is low, it is feasible to perform the search in all of its p-dimensional subspaces. As an example, with a 50D dataset, the total number of 2D subspaces is $\binom{50}{2} = 1225$. If the number of dimensions is high, it is possible to sample subspaces instead of considering all of them, as long as each dimension is sampled sufficiently frequently. In this paper, we search for base clusters in all 2D subspaces if the number of dimensions is less than 100, while in higher dimensional datasets we perform subspace sampling. We find that in practice this provides a good balance between clustering quality and computational complexity.

Table II shows an example of the output of phase 1. Note that we use the following notation: $C_{S_i,j}$ denotes the $j$th cluster in the subspace $S_i$. Phase 1 searches for clusters in 6 subspaces $\{S_1, ..., S_6\}$ of the full data space S. Points $x_1$, $x_2$ and $x_3$ belong to the same cluster $C_{S_1,1}$ in subspace $S_1$. They also belong to cluster $C_{S_2,1}$ in subspace $S_2$, while sharing no common cluster in the other subspaces. Base clusters that cover similar sets of data points are aggregated together to form clusters in higher dimensional subspaces. The subspace $S_i$ of a high dimensional cluster is constituted of all the dimensions of its aggregated base clusters. According to Lemma 1, these base clusters can be considered as evidence that increases the posterior probability of the formation of the high dimensional cluster.

B. Phase 2: High Dimensional Cluster Construction

Phase 2 learns the patterns of proximity among the points from the output of phase 1, denoted as Z (Table II), to derive the final clusters and present them in a succinct and interpretable way. To this end, we consider Z as a transaction database where each point corresponds to a transaction and the base clusters covering that point are the items of that transaction. In Table II, the first row is the transaction of point $x_1$, and the corresponding items are $C_{S_1,1}$, $C_{S_2,1}$, $C_{S_3,1}$, and $C_{S_6,1}$. Subsequently, we use Z as the input to build an FP-Tree [13], in which each branch is an aggregation of base clusters and represents a high dimensional cluster. Effectively, each frequent pattern mined from the tree indicates a sufficiently large group of points that form clusters in a high dimensional subspace. The minimal size of a cluster is controlled by the minimum support (min_sup) [13] of the frequent pattern mining process. In practice, the choice of the min_sup parameter can be guided by the expected minimum cluster size.

Note that not all frequent patterns are useful, as they can produce redundant clusters. For any cluster defined by the frequent pattern $F_i$, all subsets of $F_i$ are also frequent and correspond to clusters in lower dimensions, but none of them form a cluster as complete as $F_i$ does. Therefore, we only need to mine the maximal frequent patterns. In addition, it is important to control the number of frequent patterns, since these can grow quickly. Prior to the extraction of maximal frequent patterns, phase 2 analyses the frequencies of patterns at different levels of the FP-Tree, and prunes small branches with low frequencies. These branches correspond to insignificant patterns and only reflect the characteristics of a small portion of the points that do not justify a cluster; this pruning is illustrated in Figure 4.
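Before walking through the running example, the two phases can be made concrete with the following minimal sketch. It is our own rendering, not the paper's implementation: scikit-learn's k-means serves as the base clusterer, mlxtend's FP-growth-based fpmax stands in for the FP-Tree mining of maximal patterns, the knee-point pruning is omitted for brevity, and all function names are ours.

```python
from itertools import combinations
import pandas as pd
from sklearn.cluster import KMeans
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpmax

def phase1(X, k):
    """Search all 2D subspaces with k-means; Z[j] is the transaction of
    point x_j, i.e., the base clusters (items) that cover it."""
    n, d = X.shape
    Z = [[] for _ in range(n)]
    for dims in combinations(range(d), 2):          # all 2D subspaces
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(X[:, dims])
        for j, c in enumerate(labels):
            Z[j].append(f"C_{dims}_{c}")            # item id: (subspace, cluster)
    return Z

def phase2(Z, min_sup=0.4):
    """Mine maximal frequent itemsets from the transaction database Z;
    each itemset aggregates base clusters into one subspace cluster."""
    te = TransactionEncoder()
    onehot = pd.DataFrame(te.fit_transform(Z), columns=te.columns_)
    patterns = fpmax(onehot, min_support=min_sup, use_colnames=True)
    clusters = []
    for items in patterns["itemsets"]:              # frozensets of item ids
        members = [j for j, t in enumerate(Z) if items.issubset(t)]
        clusters.append((items, members))           # (base clusters, points)
    return clusters
```

On the running example of Table II with min_sup = 0.4, the maximal patterns found this way closely match Table III, up to the low-frequency branches that the paper's knee-point pruning removes.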
Table II. Example output of phase 1: base cluster memberships of points $x_1$-$x_5$ in subspaces $S_1$-$S_6$.

Points | $S_1$ | $S_2$ | $S_3$ | $S_4$ | $S_5$ | $S_6$
$x_1$ | $C_{S_1,1}$ | $C_{S_2,1}$ | $C_{S_3,1}$ | ∅ | ∅ | $C_{S_6,1}$
$x_2$ | $C_{S_1,1}$ | $C_{S_2,1}$ | $C_{S_3,2}$ | $C_{S_4,1}$ | ∅ | $C_{S_6,1}$
$x_3$ | $C_{S_1,1}$ | $C_{S_2,1}$ | $C_{S_3,3}$ | ∅ | $C_{S_5,1}$ | $C_{S_6,1}$
$x_4$ | $C_{S_1,2}$ | $C_{S_2,2}$ | $C_{S_3,4}$ | $C_{S_4,2}$ | $C_{S_5,1}$ | $C_{S_6,1}$
$x_5$ | $C_{S_1,2}$ | $C_{S_2,2}$ | $C_{S_3,4}$ | $C_{S_4,2}$ | $C_{S_5,2}$ | $C_{S_6,1}$

Table III. Final clusters derived from Table II.

Cluster | Points | Aggregated base clusters
$C_1$ | $\{x_1, x_2, x_3\}$ | $\{C_{S_6,1}, C_{S_1,1}, C_{S_2,1}\}$
$C_2$ | $\{x_4, x_5\}$ | $\{C_{S_6,1}, C_{S_1,2}, C_{S_2,2}, C_{S_3,4}\}$

Pruning these branches is essential to prevent the algorithm from producing a huge number of small and meaningless clusters. To this end, phase 2 first performs a scan of the FP-Tree and records the frequency of each branch at each depth level of the tree. It then finds the knee-point [30], which indicates the level after which the frequencies drop significantly. Subsequently, the remainder of the branch is pruned.

We present a running example using Table II as the input, with min_sup set to 0.4. Figure 4 shows the FP-Tree before being pruned. The pruning eliminates the node $C_{S_5,1}$, which has a frequency of 1 (i.e., the pattern only applies to $x_3$) and hence does not justify a separate cluster. The branch that starts at node $C_{S_5,1}$ on the right branch of the tree is also pruned (the pattern only applies to $x_4$). Eventually, two clusters are found, as presented in Table III.

The process of building the tree and mining maximal frequent patterns requires only two passes over the input Z. The process of pruning the tree performs one traversal of the tree, which is linear with respect to the size of Z. This contributes to the low computational complexity and therefore improves the scalability of the algorithm.

V. EVALUATION

We evaluate our algorithm using real-life datasets from a variety of applications. First, we apply our algorithm to ten gene expression datasets, and compare its accuracy with six clustering algorithms that are commonly used for biomedical data. Next, we apply the algorithm to a real-life dataset of car parking occupancy in a major city, and quantitatively evaluate the result. Finally, we evaluate the algorithm using synthetic datasets of different sizes and dimensions, and compare the results with traditional bottom-up clustering algorithms [3] as well as other state-of-the-art subspace clustering algorithms [5], [6]. We also evaluate the scalability of our algorithm on large datasets. All experiments are conducted with MATLAB on an Intel Core i7-4790 3.6GHz CPU and 16GB of RAM.

A. Clustering Gene Expression Data

We first perform clustering on ten gene expression datasets that have been widely used in different studies [2]. The sizes and characteristics of these datasets are summarised in Table IV. The performance of our proposed algorithm is compared with 7 other algorithms, including EWKM [31], BBAC-S [26], ITCC [27], FFCFW [28], HICC [29], and SWCC [2]. The metric used to measure the correctness of the result is normalised mutual information (NMI) [32]. Note that we also used precision, recall, f-measure, and accuracy to evaluate the clustering results, but do not present those comparison numbers here because they are not directly comparable to those presented in the previous papers [2]: our approach to true/false positives and true/false negatives for clustering is slightly different from the one used in the aforementioned papers. After finding the clusters, these algorithms use the Hungarian algorithm [33] to find the best mapping between the clustering result and the given labels.
However, the Hungarian algorithm requires that the algorithms find the correct number of clusters, which is guaranteed in [2] because this is given as an input parameter. Our algorithm does not require the number of clusters to be specified in advance, and hence it is not always guaranteed to produce the correct number of clusters. Instead, we use the approach presented in [32] to determine true/false positives and true/false negatives.

Next, we present the parameter settings for the algorithms in this experiment. In phase 1, we start the search for base clusters in two-dimensional subspaces (2D), and use k-means to find the base clusters in each of these subspaces. Therefore, there are only two parameters required by our algorithm: the number of base clusters k in each subspace, and the expected minimum size of a cluster, reflected in min_sup.

We compute NMI for each clustering result and compare the average results of all algorithms in Table V. A t-test [34] is performed with a significance level of 5% to determine whether the average NMI values produced by our algorithm are significantly different from those produced by the other algorithms. In Table V, the cells of the other algorithms are color-coded to highlight the relative performance of our algorithm. A white cell of a baseline algorithm indicates that the baseline algorithm performs worse than ours with statistical significance, a black cell indicates that the baseline algorithm has a higher NMI value than ours, whereas a grey cell shows no statistical difference between the results. For example, the last row of the table indicates that the result of our algorithm is better than most of the other algorithms, has no statistical difference compared to BBAC-S, and is worse than k-means. It can be observed from the results that our algorithm produces comparable or better results than all other algorithms for the datasets ADE, BR2, COL, PRO, and SRB (except for k-means). Our algorithm also performs better than ITCC, FFCFW, and HICC on all datasets. In summary, this demonstrates that we can achieve accuracy as good as or better than state-of-the-art algorithms over a variety of genomic datasets.
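The statistical comparison described above amounts to the following short sketch (ours, using scikit-learn and SciPy; it assumes the per-run NMI scores have already been collected for each dataset):

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.metrics import normalized_mutual_info_score

# Per run: score = normalized_mutual_info_score(true_labels, pred_labels)

def significance(nmi_ours, nmi_other, alpha=0.05):
    """Classify one baseline cell of Table V from two arrays of per-run NMI."""
    _, p = ttest_ind(nmi_ours, nmi_other)
    if p >= alpha:
        return "no significant difference"          # grey cell
    return ("baseline worse"                        # white cell
            if np.mean(nmi_ours) > np.mean(nmi_other)
            else "baseline better")                 # black cell
```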
B. Clustering Car Parking Occupancy Data

Next, we demonstrate the capability of our algorithm to work with data collected from a real-life IoT application. The City of Melbourne has deployed sensors to record parking events at parking bays around the central business district (CBD). We extract the start and end times of all parking events to compile the parking occupancy at 276 locations at 15-minute intervals between 09:00-18:00, yielding an input of size 276×36 for each day. The aim is to find clusters of car parking spots that have similar patterns of occupancy at certain times of the day. Each clustering task is performed on five days' worth of data to find the patterns of parking occupancy during weekdays. Parking occupancy is an important metric that indicates the efficiency of car park utilisation [35], which heavily affects traffic, ease of commute and business in the CBD. Analysing the occupancy can reveal patterns in parking behaviour at different car parks during different times of the day, which can then be used to review parking hotspots or tariffs. By clustering the parking occupancy data, each cluster $C^{X_j}_{S_i}$ represents a parking pattern observed at the locations (points) $X_j$ during the times (dimensions) defined by $S_i$. The results are evaluated using two methods.

First, we analysed the coherence of each cluster by statistically verifying whether the clustered parking bays have small deviations in the values of parking occupancy during the corresponding time periods, compared to the rest of the data. Examples of two clusters are shown in Figure 5, where each blue bar represents the mean and standard deviation of the parking occupancy at a certain time of the day, observed at the parking bays grouped by the cluster. For example, Cluster 1 in Figure 5a shows the pattern shared by a group of parking bays during 9:00-10:30 and 14:45-17:45 with small standard deviations, compared to significant deviations at other times of the day. Similarly, Cluster 2 shows another pattern that has an occupancy rate of 55% around midday, while such correlation is not observed at other times of the day.

Second, to quantify the effectiveness of the method, we use the clustering result to construct an ensemble prediction model to predict the parking occupancy over the next few hours, and compare the accuracy of our model with other models. The details of the prediction models are as follows:
• Model 1 applies decision tree regression [36] directly on the occupancy data.
• Model 2 first clusters the data using the proposed algorithm and then fits a decision tree regression on the set of car parks in each cluster separately (a sketch of this model is given at the end of this subsection).
• Model 3 follows the same approach as Model 2 except that it uses the k-means algorithm in the first phase.

Each cluster ideally represents a pattern of parking occupancy shared by a group of parking bays. Fitting a submodel to each cluster allows each submodel to learn the data in more detail and predict with higher accuracy if the values are coherent. Therefore, the accuracy of the prediction model directly reflects the quality of the clusters. This approach of using clustering in an ensemble prediction model has previously been used in [37], [38]. Each prediction model uses the values between 09:00-12:45 as training data to predict the occupancy rates of the next two hours. The coefficient of determination (R2) [39] is used to measure the accuracy. Figure 6 shows that our model (Model 2) outperforms the other two, reflected in higher R2 scores. It can also be observed that Model 3, which relies on k-means, is not as accurate as Model 1, which implies that fitting submodels to the input does not always translate to higher accuracy. In fact, the accuracy can deteriorate if the values in each submodel are not coherent. In summary, by incorporating the clustering results into decision tree regression to improve the prediction accuracy, we quantitatively show that our clustering algorithm can cluster data into meaningful partitions that share similar patterns. It also demonstrates the capability of handling real datasets with high levels of noise and outliers.
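Model 2 can be rendered as the following sketch (ours, not the paper's code; `clusters` stands in for the output of the proposed algorithm as lists of row indices of parking bays, the arrays follow the 09:00-12:45 training window described above, and the global-tree fallback for uncovered bays is our simplification):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

def model2(clusters, X_train, y_train, X_test, y_test):
    """Ensemble model: one decision tree regressor per cluster of bays.
    Bays not covered by any cluster fall back to a single global tree
    (i.e., Model 1 behaviour)."""
    y_pred = DecisionTreeRegressor().fit(X_train, y_train).predict(X_test)
    for bays in clusters:                   # bays: row indices of one cluster
        tree = DecisionTreeRegressor().fit(X_train[bays], y_train[bays])
        y_pred[bays] = tree.predict(X_test[bays])
    return r2_score(np.ravel(y_test), np.ravel(y_pred))
```

Fitting one tree per coherent cluster is what lets the submodels specialise; if the clusters are not coherent, the per-cluster trees can do worse than the single global tree, which is exactly the Model 3 versus Model 1 effect reported above.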
The points within the same clusters are more coherent, which is reflected by the more uniform shade of gray in the heatmap. These two clusters intersect only at the origin and hence are disjoint. On the other hand, Figure 7b is an example of data having clusters in non-disjoint (overlapping) subspaces.

In this experiment, we start the search for base clusters in 2D subspaces. k-means is used to find the base clusters in phase 1. Two parameters are required for our algorithm: the number of base clusters k in each subspace, and the minimum support min_sup required for the construction of the FP-Tree. Note that the value of min_sup can be deduced from the minimum expected number of points of a cluster. Setting an appropriate value for k is non-trivial. As we argued earlier, the purpose of phase 1 is to find the similarity in cluster membership of the points in the low dimensional subspaces, rather than the exact cluster of each point. We invoke 12 iterations of our algorithm with k ∈ {3, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55} and take the best result. For the baseline algorithms, we also analyse the properties of the synthetic data to derive the data density, the correct number of clusters, and the average dimensions of clusters to provide the ideal range of parameters. The parameters for CLIQUE, SUBCLU, DOC, and STATPC are replicated from [3]. Each of the baseline algorithms is executed 30 times and the average results are recorded.

1) Initial Tests against Baseline Algorithms: In this section we benchmark our algorithm against clustering algorithms including CLIQUE, SUBCLU, DOC, P3C, and STATPC [3], as well as state-of-the-art algorithms including SSC [5], LRR [6], and SWCC [2]. The number of points of the datasets is set to 1000 and the number of dimensions varies from 10 to 100. The running time limit of each algorithm is set to 30 minutes. The results are summarized in Table VI. It can be observed that our algorithm produces comparable or better results than SSC and SWCC across all the datasets. These three algorithms, along with STATPC, are the only algorithms that can run to completion within the time threshold. DOC gives consistently high accuracy provided that all five parameters of the algorithm are well-tuned. However, it has significantly higher running time and cannot cluster data larger than 1000 × 40 within 30 minutes.

We also analyse the effect of the setting of the parameter k on the clustering results, as shown in Figure 8a. This shows that the clustering results of our algorithm are reasonably insensitive to the setting of k over a wide range of values. For each dataset, there is a value of k at which the clustering result peaks, after which the result deteriorates. We can also observe that there is a wide range of k values for which the clustering results are reasonably stable. In practice, the algorithm can be run multiple times with different parameters to find the ideal setting.

2) Clustering Non-disjoint Subspaces: We verify the capability of our algorithm to find clusters in non-disjoint subspaces. In this evaluation we use 1000 data points, where the number of dimensions varies between 20 and 100, and the clusters reside in overlapping subspaces, as illustrated in Figure 7b. The other algorithms that produce comparable results in the previous section are not included since they are not able to find clusters in overlapping subspaces: SSC and LRR are only able to find clusters in disjoint subspaces [5], [6].
Moreover, SWCC assigns weights to each column according to its membership in all clusters, and the weights of each column sum to 1. This means that the memberships of each column in different clusters are mutually exclusive. The result of this evaluation is presented in Figure 8b. The consistently high NMI values (≥ 0.5) confirm the capability of the proposed algorithm to find clusters in non-disjoint subspaces.

3) Scalability Tests against SSC and SWCC: We evaluate the scalability of our algorithm with respect to the number of data points by generating data having 10 dimensions and varying the number of data points from 1,000 to 1,000,000. We include only SSC and SWCC in this scalability evaluation because they are the fastest baseline algorithms with high accuracy. The execution times are presented in Figure 8c. The figure shows that our algorithm and SWCC can cluster up to 1 million data points, while SSC triggers memory errors when the number of points exceeds 15,000. In summary, these tests on the synthetic datasets demonstrate that our algorithm is relatively insensitive to the choice of parameter settings, while achieving the best overall performance as the number of data points increases.

VI. CONCLUSION

We proposed a subspace clustering algorithm to find clusters in non-disjoint subspaces. Unlike traditional bottom-up clustering algorithms, our algorithm starts the search for base clusters in low dimensional subspaces instead of in individual dimensions, in order to capture the covariances of values between dimensions and to increase the tolerance of the algorithm to variations in the parameter settings. Our algorithm aggregates the base clusters to form clusters in higher dimensional subspaces based on the technique of frequent pattern mining. This approach not only avoids the combinatorial complexity of existing bottom-up algorithms, but also ensures more meaningful clustering results by keeping the number of final clusters tractable. Our experiments show that the proposed algorithm finds subspace clusters with high accuracy and scales to large inputs, in terms of both the number of records and the number of dimensions. This makes the algorithm practical for many real-life applications, as demonstrated by clustering gene expression data and car parking occupancy data.
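To make the two-phase approach just summarized concrete, the sketch below builds base clusters with k-means in every 2D subspace and then mines frequently co-occurring base-cluster memberships with FP-growth. It is a minimal approximation under stated assumptions, not the authors' implementation: scikit-learn's KMeans and mlxtend's fpgrowth are used as off-the-shelf stand-ins, and all names and parameter values are illustrative.

```python
# Hedged sketch of the two-phase pipeline: phase 1 finds base clusters with
# k-means in each 2D subspace; phase 2 mines frequently co-occurring
# base-cluster memberships with FP-growth. Illustrative only.
from itertools import combinations
import pandas as pd
from sklearn.cluster import KMeans
from mlxtend.frequent_patterns import fpgrowth

def two_phase_subspace_clusters(X, k=10, min_sup=0.1):
    n, d = X.shape
    # Phase 1: one boolean transaction column per (2D subspace, base cluster).
    columns = {}
    for i, j in combinations(range(d), 2):
        labels = KMeans(n_clusters=k, n_init=5).fit_predict(X[:, [i, j]])
        for c in range(k):
            columns[f"S({i},{j})#c{c}"] = labels == c
    transactions = pd.DataFrame(columns)  # n points x base clusters
    # Phase 2: frequent itemsets of base clusters = candidate subspace clusters.
    return fpgrowth(transactions, min_support=min_sup, use_colnames=True)
```

Each frequent itemset returned by the second phase names a set of base clusters whose shared points form a candidate cluster in the union of the corresponding 2D subspaces, which mirrors how the base clusters are aggregated into higher dimensional subspaces above.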
5,950
1811.02616
2899821082
Real-world social networks and digital platforms are comprised of individuals (nodes) that are linked to other individuals or entities through multiple types of relationships (links). Sub-networks of such a network based on each type of link correspond to distinct views of the underlying network. In real-world applications, each node is typically linked to only a small subset of other nodes. Hence, practical approaches to problems such as node labeling have to cope with the resulting sparse networks. While low-dimensional network embeddings offer a promising approach to this problem, most of the current network embedding methods focus primarily on single view networks. We introduce a novel multi-view network embedding (MVNE) algorithm for constructing low-dimensional node embeddings from multi-view networks. MVNE adapts and extends an approach to single view network embedding (SVNE) based on graph factorization clustering (GFC) to the multi-view setting, using an objective function that maximizes the agreement between views based on both the local and global structure of the underlying multi-view graph. Our experiments with several benchmark real-world single view networks show that GFC-based SVNE yields network embeddings that are competitive with or superior to those produced by the state-of-the-art single view network embedding methods when the embeddings are used for labeling unlabeled nodes in the networks. Our experiments with several multi-view networks show that MVNE substantially outperforms the single view methods applied to the integrated view, as well as the state-of-the-art multi-view methods. We further show that even when the goal is to predict labels of nodes within a single target view, MVNE outperforms its single-view counterpart, suggesting that MVNE is able to extract, from all of the views, information that is useful for labeling nodes in the target view.
There is a growing body of recent work on multi-view learning algorithms, e.g., @cite_41 @cite_10 @cite_31 , that attempt to integrate information across multiple views to optimize the predictive performance of the classifier (see @cite_0 @cite_12 ). Some multi-view learning methods seek to maximize the agreement between views using regularization @cite_13 @cite_20 , whereas others seek to optimally select subsets of features from different views for each prediction task @cite_5 @cite_32 . However, these methods were not designed for network embedding. Most of the existing multi-view learning algorithms are either not directly applicable to multi-view networks or are not designed to cope with high degrees of data sparsity, a key challenge in modeling real-world multi-view networks.
{ "abstract": [ "The Co-Training algorithm uses unlabeled examples in multiple views to bootstrap classifiers in each view, typically in a greedy manner, and operating under assumptions of view-independence and compatibility. In this paper, we propose a Co-Regularization framework where classifiers are learnt in each view through forms of multi-view regularization. We propose algorithms within this framework that are based on optimizing measures of agreement and smoothness over labeled and unlabeled examples. These algorithms naturally extend standard regularization methods like Support Vector Machines (SVM) and Regularized Least squares (RLS) for multi-view semi-supervised learning, and inherit their benefits and applicability to high-dimensional classification problems. An empirical investigation is presented that confirms the promise of this approach.", "Many real-world datasets are comprised of different representations or views which often provide information complementary to each other. To integrate information from multiple views in the unsupervised setting, multiview clustering algorithms have been developed to cluster multiple views simultaneously to derive a solution which uncovers the common latent structure shared by multiple views. In this paper, we propose a novel NMFbased multi-view clustering algorithm by searching for a factorization that gives compatible clustering solutions across multiple views. The key idea is to formulate a joint matrix factorization process with the constraint that pushes clustering solution of each view towards a common consensus instead of fixing it directly. The main challenge is how to keep clustering solutions across different views meaningful and comparable. To tackle this challenge, we design a novel and effective normalization strategy inspired by the connection between NMF and PLSA. Experimental results on synthetic and several real datasets demonstrate the effectiveness of our approach.", "Real-world relations among entities can often be observed and determined by different perspectives views. For example, the decision made by a user on whether to adopt an item relies on multiple aspects such as the contextual information of the decision, the item’s attributes, the user’s profile and the reviews given by other users. Different views may exhibit multi-way interactions among entities and provide complementary information. In this paper, we introduce a multi-tensor-based approach that can preserve the underlying structure of multi-view data in a generic predictive model. Specifically, we propose structural factorization machines (SFMs) that learn the common latent spaces shared by multi-view tensors and automatically adjust the importance of each view in the predictive model. Furthermore, the complexity of SFMs is linear in the number of parameters, which make SFMs suitable to large-scale problems. Extensive experiments on real-world datasets demonstrate that the proposed SFMs outperform several state-of-the-art methods in terms of prediction accuracy and computational cost.", "In recent years, a great many methods of learning from multi-view data by considering the diversity of different views have been proposed. These views may be obtained from multiple sources or different feature subsets. 
In trying to organize and highlight similarities and differences between the variety of multi-view learning approaches, we review a number of representative multi-view learning algorithms in different areas and classify them into three groups: 1) co-training, 2) multiple kernel learning, and 3) subspace learning. Notably, co-training style algorithms train alternately to maximize the mutual agreement on two distinct views of the data; multiple kernel learning algorithms exploit kernels that naturally correspond to different views and combine kernels either linearly or non-linearly to improve learning performance; and subspace learning algorithms aim to obtain a latent subspace shared by multiple views by assuming that the input views are generated from this latent subspace. Though there is significant variance in the approaches to integrating multiple views to improve learning performance, they mainly exploit either the consensus principle or the complementary principle to ensure the success of multi-view learning. Since accessing multiple views is the fundament of multi-view learning, with the exception of study on learning a model from multiple views, it is also valuable to study how to construct multiple views and how to evaluate these views. Overall, by exploring the consistency and complementary properties of different views, multi-view learning is rendered more effective, more promising, and has better generalization ability than single-view learning.", "", "Multi-view clustering has become a widely studied problem in the area of unsupervised learning. It aims to integrate multiple views by taking advantages of the consensus and complimentary information from multiple views. Most of the existing works in multi-view clustering utilize the vector-based representation for features in each view. However, in many real-world applications, instances are represented by graphs, where those vector-based models cannot fully capture the structure of the graphs from each view. To solve this problem, in this paper we propose a Multi-view Clustering framework on graph instances with Graph Embedding (MCGE). Specifically, we model the multi-view graph data as tensors and apply tensor factorization to learn the multi-view graph embeddings, thereby capturing the local structure of graphs. We build an iterative framework by incorporating multi-view graph embedding into the multi-view clustering task on graph instances, jointly performing multi-view clustering and multi-view graph embedding simultaneously. The multi-view clustering results are used for refining the multi-view graph embedding, and the updated multi-view graph embedding results further improve the multi-view clustering. Extensive experiments on two real brain network datasets (i.e., HIV and Bipolar) demonstrate the superior performance of the proposed MCGE approach in multi-view connectome analysis for clinical investigation and application.", "In many clustering problems, we have access to multiple views of the data each of which could be individually used for clustering. Exploiting information from multiple views, one can hope to find a clustering that is more accurate than the ones obtained using the individual views. Often these different views admit same underlying clustering of the data, so we can approach this problem by looking for clusterings that are consistent across the views, i.e., corresponding data points in each view should have same cluster membership. 
We propose a spectral clustering framework that achieves this goal by co-regularizing the clustering hypotheses, and propose two co-regularization schemes to accomplish this. Experimental comparisons with a number of baselines on two synthetic and three real-world datasets establish the efficacy of our proposed approaches.", "Graph classification has traditionally focused on graphs generated from a single feature view. In many applications, it is common to have useful information from different channels views to describe objects, which naturally results in a new representation with multiple graphs generated from different feature views being used to describe one object. In this paper, we formulate a new Multi-Graph-View learning task for graph classification, where each object to be classified contains graphs from multiple graph-views. This problem setting is essentially different from traditional single-graph-view graph classification, where graphs are from one single feature view. To solve the problem, we propose a Cross Graph-View Sub graph Feature based Learning (gCGVFL) algorithm that explores an optimal set of sub graphs, across multiple graph-views, as features to represent graphs. Specifically, we derive an evaluation criterion to estimate the discriminative power and the redundancy of sub graph features across all views, and assign proper weight values to each view to indicate its importance for graph classification. The iterative cross graph-view sub graph scoring and graph-view weight updating form a closed loop to find optimal sub graphs to represent graphs for multi-graph-view learning. Experiments and comparisons on real-world tasks demonstrate the algorithm's performance.", "Multi-view learning or learning with multiple distinct feature sets is a rapidly growing direction in machine learning with well theoretical underpinnings and great practical success. This paper reviews theories developed to understand the properties and behaviors of multi-view learning and gives a taxonomy of approaches according to the machine learning mechanisms involved and the fashions in which multiple views are exploited. This survey aims to provide an insightful organization of current developments in the field of multi-view learning, identify their limitations, and give suggestions for further research. One feature of this survey is that we attempt to point out specific open problems which can hopefully be useful to promote the research of multi-view machine learning." ], "cite_N": [ "@cite_13", "@cite_41", "@cite_32", "@cite_0", "@cite_5", "@cite_31", "@cite_20", "@cite_10", "@cite_12" ], "mid": [ "2133348086", "2405459681", "2788522804", "1670132599", "", "2767892721", "2154415691", "2016440973", "2085789144" ] }
Multi-View Network Embedding Via Graph Factorization Clustering and Co-Regularized Multi-View Agreement
Abstract-Real-world social networks and digital platforms are comprised of individuals (nodes) that are linked to other individuals or entities through multiple types of relationships (links). Sub-networks of such a network based on each type of link correspond to distinct views of the underlying network. In real-world applications, each node is typically linked to only a small subset of other nodes. Hence, practical approaches to problems such as node labeling have to cope with the resulting sparse networks. While low-dimensional network embeddings offer a promising approach to this problem, most of the current network embedding methods focus primarily on single view networks. We introduce a novel multi-view network embedding (MVNE) algorithm for constructing low-dimensional node embeddings from multi-view networks. MVNE adapts and extends an approach to single view network embedding (SVNE) based on graph factorization clustering (GFC) to the multi-view setting, using an objective function that maximizes the agreement between views based on both the local and global structure of the underlying multi-view graph. Our experiments with several benchmark real-world single view networks show that GFC-based SVNE yields network embeddings that are competitive with or superior to those produced by the state-of-the-art single view network embedding methods when the embeddings are used for labeling unlabeled nodes in the networks. Our experiments with several multi-view networks show that MVNE substantially outperforms the single view methods applied to the integrated view, as well as the state-of-the-art multi-view methods. We further show that even when the goal is to predict labels of nodes within a single target view, MVNE outperforms its single-view counterpart, suggesting that MVNE is able to extract, from all of the views, information that is useful for labeling nodes in the target view.

Index Terms-multi-view learning, network embedding, representation learning

I. INTRODUCTION

Social networks, e.g., Facebook, social media, e.g., Flickr, and e-commerce platforms, e.g., Amazon, can be seen as very large heterogeneous networks where the nodes correspond to diverse types of entities, e.g., articles, images, videos, music, etc. In such networks, an individual can link to multiple other individuals via different types of social or other relationships, e.g., friendship, co-authorship, etc. [4], [12], [37]. Examples include Google+, which allows members to specify different 'circles' that correspond to different types of social relationships, and DBLP, which contains multiple types of relationships that link authors to articles, publication venues, institutions, etc. Such networks are naturally represented as multi-view networks wherein the nodes denote individuals and the links denote relationships, such that each network view corresponds to a single type of relationship, e.g., friendship, family membership, etc. [2], [6], [17], [33]. Such networks present several problems of interest, e.g., recommending products, activities, or membership in specific interest groups to individuals based on the attributes of the individuals, the multiple relationships that link them to entities or other individuals, etc. [3], [13]. When multiple sources of data are available about entities of interest, multi-view learning offers a promising approach to integrating complementary information provided by the different data sources (views) to optimize the performance of predictive models [36], [40].
Examples of such multi-view learning algorithms include multi-view support vector machines [7], [20], multi-view matrix (tensor) factorization [23], [24], and multi-view clustering via canonical correlation analysis [9], [11]. However, most of the existing multi-view learning algorithms are not (i) directly applicable to multi-view networks; and (ii) designed to cope with data sparsity, which is one of the key challenges in modeling real-world multi-view networks: although the number of nodes in real-world networks is often in the millions, typically each node is linked to only a small subset of other nodes. Low-dimensional network embeddings offer a promising approach to dealing with such sparse networks [10]. However, barring a few exceptions [6], [25], [31], [34], most of the work on network embedding has focused on methods for single view networks [16], [29], [37].

Against this background, the key contributions of this paper are as follows:

1) We introduce a novel multi-view network embedding (MVNE) algorithm for constructing low-dimensional embeddings of nodes in multi-view networks. MVNE exploits a recently discovered connection between network adjacency matrix factorization and network embedding [30]. Specifically, we use the graph factorization clustering (GFC) [41] algorithm to obtain single view network embeddings. MVNE extends the resulting single view network node embedding algorithm (SVNE) to the multi-view setting. Inspired by [19], MVNE integrates both the local and the global context of nodes in networks to construct effective embeddings of multi-view networks. Specifically, MVNE uses a novel objective function that maximizes the agreement between views based on both the local and global structure of the underlying multi-view graph.

2) We present results of experiments with several benchmark real-world datasets that demonstrate the effectiveness of MVNE relative to state-of-the-art network embedding methods. Specifically, we show that (i) SVNE is competitive with or superior to the state-of-the-art single view graph embedding methods when the embeddings are used for labeling unlabeled nodes in single view networks; (ii) MVNE substantially outperforms the state-of-the-art single view and multi-view embedding methods for aggregating information from multiple views, when the embeddings are used for labeling nodes in multi-view networks; and (iii) MVNE is able to augment information from any target view with relevant information extracted from other views so as to improve node labeling performance on the target view in multi-view networks.

The rest of the paper is organized as follows. In Section 2, we formally define the problem of multi-view network embedding. In Section 3, we describe the proposed MVNE framework. In Section 4, we present the results of experiments that compare the performance of MVNE with state-of-the-art single view network node embedding methods and their multi-view extensions. In Section 5, we conclude with a summary, a discussion of related work, and some directions for further research.

II. PRELIMINARIES

Definition 1. (Multi-view Network) A multi-view network is defined by a 6-tuple $G = (V, E, T_V, T_E, \phi_V, \phi_E)$, where $V$ is a set of nodes, $E$ is a set of edges, $T_V$ and $T_E$ respectively denote the sets of node and relation types, and $\phi_V : V \to \mathcal{P}(T_V)$ and $\phi_E : E \to T_E$ (where $\mathcal{P}(S)$ is the power set of a set $S$) are functions that associate each node $v \in V$ with a subset of types in $T_V$ and each edge $e \in E$ with its corresponding type in $T_E$, respectively.
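To make Definition 1 concrete, here is a minimal sketch of one way such a multi-view network might be represented in code. The edge types, node types, and sizes are purely illustrative assumptions, not drawn from the paper's implementation.

```python
# Illustrative (hypothetical) representation of Definition 1: one sparse
# adjacency matrix per edge type, so each entry of `views` is one view G^(t).
from scipy.sparse import lil_matrix

n_nodes = 1000                                   # |V|, example size
edge_types = ["friendship", "co-authorship"]     # T_E, illustrative
views = {t: lil_matrix((n_nodes, n_nodes)) for t in edge_types}

views["friendship"][3, 7] = 1.0                  # an edge e with phi_E(e) = "friendship"
node_types = {3: {"author"}, 7: {"author", "professor"}}  # phi_V maps nodes to subsets of T_V
```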
Note that a node can have multiple types. For example, in an academic network with node types author (A), professor (R), paper (P), venue (V), organization (O), and topic (T), the relation types may denote the co-author (A-A), publish (A-P), published-in (P-V), has-expertise (R-T), and affiliation (O-A) relationships. An individual in an academic network can be an author, a professor, or both. Note that the node types induce, from the set $V$ of nodes, $|T_V|$ (potentially overlapping) subsets $V^{(1)}, V^{(2)}, \ldots, V^{(|T_V|)}$. Each view of a multi-view network is represented by an adjacency matrix for the corresponding type of edge $t \in T_E$. For an edge type that denotes relationships between nodes in $V^{(i)}$, the corresponding adjacency matrix $W^{(t)}$ will be of size $|V^{(i)}| \times |V^{(i)}|$. Thus, a multi-view network $G$ can be represented by a set of single view networks $G^{(1)}, \ldots, G^{(|T_E|)}$, where $G^{(t)}$ is represented by the adjacency matrix $W^{(t)}$.

Definition 2. (Node label prediction problem) Suppose we are given a multi-view network $G$ in which only some of the nodes of each node type $t \in T_V$ are assigned a finite subset of labels in $L_t$, where $L_t$ is the set of possible labels for nodes of type $t$. Given such a network $G$, node label prediction entails completing the labeling of $G$, that is, for each node of type $t$ that does not already have a label $l \in L_t$, specifying whether it should be labeled with $l$ based on the information provided by the nodes and edges of the multi-view network $G$.

In the academic network described above, given a subset of papers that have been labeled as high impact papers and/or review papers, node labeling might require, for example, predicting which among the rest of the papers are also likely to be high impact papers and/or review papers. The link (label) prediction problem can be defined analogously.

In the case of real-world multi-view networks, because each node is typically linked to only a small subset of the other nodes, a key challenge that needs to be addressed in solving the node (and link) labeling problems has to do with the sparsity of the underlying network. A related problem has to do with the computational challenge of working with very large adjacency matrices. Network embeddings, i.e., low-dimensional representations of the network nodes that summarize the information provided about each node by the rest of the network, offer a promising approach to addressing both of these problems.

Definition 3. (Multi-view Network Embedding) Given a multi-view network $G$, multi-view network embedding entails learning a $d$-dimensional latent representation $X \in \mathbb{R}^{|V| \times d}$, where $d \ll |V|$, that preserves the structural and semantic relations among the nodes adequately for performing one or more tasks, e.g., node label prediction.

The quality of specific network embeddings (and hence that of the algorithms that produce them) invariably has to be evaluated in the context of specific applications, e.g., the predictive performance of node label predictors trained using the low-dimensional representations of nodes along with their labels, evaluated on nodes that were not part of the training data. The key challenge presented by multi-view network embedding, over and above that of single view embedding, has to do with the integration of information from multiple views. Here, we can draw inspiration from multi-view learning [5], [36], [40], where in the simplest case, each view corresponds to a different subset of features, perhaps obtained from a different modality.
Multi-view learning algorithms [22], [27] typically aim to maximize the agreement between the views (with respect to the outputs of classifiers trained on each view, the similarity of, or the mutual information between, the low-dimensional latent representations of each view, etc.).

III. MULTI-VIEW NETWORK EMBEDDING

As noted already, our approach to solving the multi-view network embedding problem leverages a single view network embedding (SVNE) method inspired by a graph soft clustering algorithm, namely, graph factorization clustering (GFC) [41]. To solve the multi-view embedding problem, MVNE combines the information from the multiple views into a co-regularized factorization, wherein the agreement between the multiple views is maximized using a suitably designed objective function.

A. Single view network embedding

Consider a single view network $G = (V, E)$ consisting of nodes $V$ and edges $E$. Let $K(V, U, F)$ be a bipartite graph where $U$ is a set of nodes that is disjoint from $V$ and $F$ contains all the edges connecting nodes in $V$ with nodes in $U$. Let $B = \{b_{ij}\}$ denote the $|V| \times |U|$ adjacency matrix with $b_{ij} \geq 0$ being the weight of the edge between $v_i \in V$ and $u_j \in U$. The bipartite graph $K$ induces a weight between $v_i$ and $v_j$:

$$w_{ij} = \sum_p \frac{b_{ip} b_{jp}}{\lambda_p} = (B \Lambda^{-1} B^T)_{ij} \quad (1)$$

where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_{|U|})$ with $\lambda_p = \sum_i b_{ip}$ denoting the degree of vertex $u_p \in U$. We can normalize $W$ in Eq. (1) such that $\sum_{ij} w_{ij} = 1$ and $w_{ij} = p(v_i, v_j)$, according to the stationary probability of transition between $v_i$ and $v_j$ [41]. Because in a bipartite graph $K(V, U, F)$ there are no direct links between nodes in $V$, and all the paths from $v_i$ to $v_j$ must pass through nodes in $U$, we have:

$$p(v_i, v_j) = p(v_i | v_j)\, p(v_j) \quad (2)$$

We can estimate this distribution as $p(v_i, v_j) = \frac{w_{ij}}{\sum_{ij} w_{ij}}$; $p(v_j)$ is given by $\frac{\deg(v_j)}{\sum_{ij} w_{ij}}$, where $\deg(v_j)$ denotes the degree of $v_j$, and $p(v_i | v_j) = \sum_{p=1}^{|U|} p(v_i | u_p)\, p(u_p | v_j)$. The transition probabilities between the graph $G$ and the communities $U$ (the nodes of the bipartite graph) are given by $p(v_i | u_p) = \frac{b_{ip}}{\lambda_p}$ and $p(u_p | v_j) = \frac{b_{jp}}{\deg(v_j)}$, where the matrix $B$ denotes the weights between the graph $G$ and $U$, and $\lambda_p$ denotes the degree of $u_p$. Hence, the transition probability between two nodes $v_i$, $v_j$ is given by:

$$w_{ij} = \sum_{p=1}^{d} \frac{b_{ip} b_{jp}}{\lambda_p} = (B \Lambda^{-1} B^T)_{ij} \quad (3)$$

Both the local and the global information in $G$ are thus encoded by the matrix $B$ and the diagonal matrix $\Lambda$. We can optimally preserve the information in $G$ by minimizing the objective function $L(W, B\Lambda^{-1}B^T)$, where $L(X, Y) = \sum_{ij} \left( x_{ij} \log \frac{x_{ij}}{y_{ij}} - x_{ij} + y_{ij} \right)$ is a variant of the K-L divergence. Replacing $B$ by $H\Lambda$, we obtain the following objective function:

$$\min_{H, \Lambda} L(W, H \Lambda H^T) \quad (4)$$

The objective function in Eq. (4) is proved to be non-increasing under the update rules in Eq. (5) and Eq. (6) for $H$ and $\Lambda$ [41]:

$$\tilde{h}_{ip} \propto h_{ip} \sum_j \log \frac{W_{ij}}{(H \Lambda H^T)_{ij}} \lambda_p h_{jp} \quad \text{s.t.} \quad \sum_{p=1}^{d} \tilde{h}_{ip} = 1 \quad (5)$$

$$\tilde{\lambda}_p \propto \lambda_p \sum_{ij} \log \frac{W_{ij}}{(H \Lambda H^T)_{ij}} h_{ip} h_{jp} \quad \text{s.t.} \quad \sum_{p=1}^{d} \tilde{\lambda}_p = \sum_{ij} W_{ij} \quad (6)$$

In SVNE, the factor $H \in \mathbb{R}^{n \times d}$ corresponds to the single view network embedding, where $d$ is the embedding dimension. Because the size of the adjacency matrix representation of the network is quadratic in the number of nodes, matrix-factorization-based embedding methods typically do not scale to large networks.
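As a concrete illustration of these update rules, here is a minimal dense-matrix sketch. Two caveats: it uses the plain ratio $W_{ij}/(H\Lambda H^T)_{ij}$, the form standard for such K-L-type factorizations, in place of the log term printed in Eqs. (5)-(6), and, as just noted, a dense implementation like this would not scale to large networks. All names are illustrative.

```python
# Hedged sketch of GFC-style multiplicative updates for W ~ H Lambda H^T
# (Eq. 4). Assumption: the ratio W_ij/(H Lambda H^T)_ij replaces the log
# term printed in Eqs. (5)-(6); dense numpy arrays; names illustrative.
import numpy as np

def gfc_embed(W, d, n_iter=200, eps=1e-12):
    n = W.shape[0]
    rng = np.random.default_rng(0)
    H = rng.random((n, d))
    H /= H.sum(axis=1, keepdims=True)        # rows of H sum to 1 (Eq. 5)
    lam = np.full(d, W.sum() / d)            # diagonal of Lambda
    for _ in range(n_iter):
        R = W / (H * lam @ H.T + eps)        # ratio W_ij / (H Lambda H^T)_ij
        H_new = H * (R @ (H * lam))          # h_ip * sum_j R_ij lam_p h_jp
        H_new /= H_new.sum(axis=1, keepdims=True) + eps
        lam_new = lam * np.einsum('ij,ip,jp->p', R, H, H)
        lam_new *= W.sum() / (lam_new.sum() + eps)  # sum_p lam_p = sum_ij W_ij (Eq. 6)
        H, lam = H_new, lam_new
    return H, lam                            # rows of H: node embeddings
```

On a normalized adjacency matrix W, the rows of H then serve as the d-dimensional node embeddings.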
Hence, inspired by [15], we make use of a more efficient encoding of the network structure: instead of feeding the adjacency matrix directly into the factorization, we use a vectorized (adjacency-list-style) representation of the adjacency matrix to perform the matrix factorization.

B. Multi-view Network Embedding

Given a multi-view network $G = \{G^{(1)}, G^{(2)}, \ldots, G^{(k)}\}$, the key idea behind extending SVNE to MVNE is to design a co-regularized objective function that, in addition to preserving the information in each view, seeks to maximize the agreement between the views. To accomplish this goal, we propose the co-regularized objective function in Eq. (7), which is designed to minimize the cost in each view:

$$\min_{H^{(i)}, \Lambda^{(i)}} \sum_{i=1}^{k} \beta_i L(W^{(i)}, H^{(i)} \Lambda^{(i)} H^{(i)T}) + \alpha \sum_{p,q=1}^{k} \| H^{(p)} \Lambda^{(p)} - H^{(q)} \Lambda^{(q)} \|^2 \quad \text{s.t.} \quad \sum_{i=1}^{k} \beta_i = 1 \quad (7)$$

Here, $H^{(i)}$ and $\Lambda^{(i)}$ represent the matrix factorization of view $i$, and $\alpha$ denotes the regularization hyperparameter. $\beta_i$ is the parameter used to tune the relative importance of the different views and the role they play in maximizing the agreement between views. If we know that some views are more informative than others, we might want to set the $\beta_i$ accordingly. In contrast, if we know that some views are likely to be noisy, we might want to deemphasize such views by setting their $\beta_i$ values to be small compared to those of the other views. In the absence of any information about the relative importance or reliability of the different views, we set $\beta_i$ equal to $\frac{|V^{(i)}|}{\sum_{i=1}^{k} |V^{(i)}|}$.

To minimize the cost and maximize the agreement, we constrain the matrix factorizations of all views to share the latent factors $H$ and $\Lambda$. This yields the objective function shown in Eq. (8):

$$\min_{H, \Lambda} \sum_{i=1}^{k} \beta_i L(W^{(i)}, H \Lambda H^T) \quad (8)$$

We find that minimizing the objective function in Eq. (8) is equivalent, after ignoring a constant term, to minimizing:

$$\min_{H, \Lambda} L\Big( \sum_{i=1}^{k} \beta_i W^{(i)}, H \Lambda H^T \Big) \quad (9)$$

That is, we co-regularize the views by choosing $\bar{W} = \sum_{i=1}^{k} \beta_i W^{(i)}$ to maximize the agreement across views. The corresponding update rules are obtained analogously to the single view case in Eq. (5) and Eq. (6) by replacing $W$ with $\bar{W}$.

Computational Complexity: In a naive implementation of MVNE, each optimization iteration takes $O(d|V|^2)$ time, where $|V|$ is the total number of nodes and $d$ is the dimension of the embedding space. However, in typical applications, $G$ is usually very sparse. In this case, the time complexity of one optimization iteration using an adjacency-list-based representation of the adjacency matrices [15] is $O(|V| + |E|)$ (with $d$ assumed to be constant), where $|E|$ denotes the total number of edges across all of the views.

IV. EXPERIMENTAL RESULTS

We report the results of experiments designed to address the following questions. Some basic statistics about the datasets described above are summarized in Table I. The results of our analyses of the Last.fm and Flickr data suggest that their node degree distributions obey a power law, a desirable property for the application of skip-gram-based models [29].

• Parameter Tuning: SVNE (and MVNE) are compared with other single view methods (and their multi-view extensions) using the code provided by the authors of the respective methods (with the relevant parameters set or tuned as specified in the respective papers). We explored several different settings for d, the dimension of the embedding space (64, 128, 256, 512), for all the methods. We used grid search over γ ∈ {40, 80} for Deepwalk and p, q ∈ {0.25, 0.50, 1, 2, 4} for node2vec.
• Performance Evaluation: In Experiments 1-2, we measure the performance on the node label prediction task using different fractions of the available data (10% to 90%, in increments of 10%) for training and the remainder for testing the predictors. In Experiment 3, we use 50% of the nodes in each view for training and the rest for testing. We repeat this procedure 10 times and report the performance (as measured by Micro-F1 and Macro-F1) averaged across the 10 runs. In each case, the embeddings are evaluated with respect to the performance of standard one-versus-rest L2-regularized sparse logistic regression classifiers [14] trained to perform node label prediction.

B. Exp. 1: Single view methods compared

Experiment 1 compares SVNE with three state-of-the-art single view embedding methods on the three standard single view benchmark datasets mentioned above (note that MVNE applied to a single view dataset yields a single view embedding):

• Deepwalk, which constructs a network embedding such that two nodes are close in the embedding if the short random walks originating at the nodes are similar (i.e., generated by similar language models) [29].

• LINE, which constructs a network embedding such that two nodes are close in the embedding space if their first- and second-order network neighborhoods are similar [37].

• Node2Vec, which constructs a network embedding that maximizes the likelihood of preserving the network neighborhoods of nodes, using a biased random walk procedure to efficiently explore diverse neighborhoods [16].

Results: The results of the comparison of SVNE with Deepwalk, LINE, and Node2Vec are shown in Figure 1. In the case of LINE, we report results for LINE(1st+2nd) (which uses first- and second-order neighborhoods), the best performing of the three variants of LINE in our experiments, with d = 256. In the case of Deepwalk, we report the best results, obtained with γ = 40, w = 10, t = 40, and d = 128. For Node2Vec, we report the best results, obtained with p, q = 1. For SVNE, we report the results with the optimal d, which was found to be 128 for Blogcatalog, PPI, and Wikipedia.

The results summarized in Figure 1 show that on the Blogcatalog data, SVNE consistently outperforms Node2Vec and LINE and is competitive with Deepwalk. On the PPI data, SVNE outperforms all other methods in terms of Micro-F1, and in terms of Macro-F1 when more than 50% of the nodes are labeled. On the Wikipedia data, SVNE performs better than LINE(1st+2nd) and Deepwalk and is competitive with Node2Vec.

C. Exp. 2: MVNE Compared with the State-of-the-Art Multi-View Methods

We first compare MVNE with traditional network embedding methods such as Deepwalk, LINE, and Node2Vec on the two multi-view datasets Last.fm and Flickr. Since these methods are designed to work with single view networks, we combine the multiple views to obtain an integrated view such that each pair of nodes is linked by an edge in the integrated view if the corresponding pair is linked by an edge in at least one of the constituent views.
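The evaluation protocol above and the union-of-edges integrated view just described are straightforward to reproduce. The sketch below is a hedged approximation using scikit-learn as a stand-in; the paper's exact classifier settings may differ, and X (node embeddings) and Y (binary label indicators) are assumed, illustrative inputs.

```python
# Hedged sketch of the evaluation protocol: a one-vs-rest L2-regularized
# logistic regression on node embeddings, scored with Micro/Macro F1.
# X: n_nodes x d embedding matrix; Y: n_nodes x n_labels binary indicators.
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def evaluate_embedding(X, Y, train_fraction=0.5, seed=0):
    X_tr, X_te, Y_tr, Y_te = train_test_split(
        X, Y, train_size=train_fraction, random_state=seed)
    clf = OneVsRestClassifier(LogisticRegression(penalty='l2', max_iter=1000))
    clf.fit(X_tr, Y_tr)
    Y_hat = clf.predict(X_te)
    return (f1_score(Y_te, Y_hat, average='micro'),
            f1_score(Y_te, Y_hat, average='macro'))

# Union-of-edges integrated view for the single view baselines, as described
# above: an edge exists if it exists in at least one constituent view.
def integrated_view(W_views):
    return (sum(W_views) > 0).astype(float)
```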
We next compare MVNE with three other baseline multi-view learning methods:

• Co-RegSC, which constructs a representation of the multi-view network using co-regularized eigenvectors of the graph Laplacians of the individual views [18].

• MultiNMF, which constructs a latent representation of the multi-view network wherein the common subspace is obtained by regularized joint matrix factorization of each of the views [21].

• MVWE, which constructs a multi-view network embedding by combining the single view embeddings using a weighted voting scheme [31].

Similar to previous work [31], in our experiments we use the centroid eigenvectors produced by Co-RegSC and the consensus matrix produced by MultiNMF, respectively, as the multi-view network embedding. We explored several different settings for d, the dimension of the embedding space (64, 128, 256), for the three baseline methods.

Results: The results of the comparison of MVNE with the other methods are shown in Tables II and III. MVNE consistently, and often substantially, outperforms both (i) the state-of-the-art single view methods on the integrated view and (ii) Co-RegSC, MultiNMF, and MVWE. We observe that the performance of MVWE deteriorates as the views become increasingly incomplete (i.e., large fractions of the nodes appear in only small subsets of the views). In contrast, MVNE copes with incomplete views through co-regularization of the nodes that are missing from individual views.

D. Exp. 3: MVNE compared with SVNE on Node Labeling in a Single Target View

Experiment 3 investigates whether MVNE outperforms SVNE on node label prediction in any single target view by leveraging information from all of the views. Considering each view of the Last.fm and Flickr data as the target view, we compare the node labeling performance of embeddings obtained by SVNE applied to the target view alone with that of MVNE, which integrates information from all of the views.

Results: Because of space constraints, we show only the results of the comparison of MVNE with SVNE when each of the 5 views of the Flickr dataset, and each of 6 views selected from the 12 views of the Last.fm dataset (one with the most nodes (Userview), one with the most edges (Event), two with the most edges per node (TagView, TopTagView), and two with the fewest edges per node (NeighborView, ShoutView)), is designated as the target view. The results summarized in Figure 2 show that MVNE consistently outperforms SVNE on each target view. We conclude that even when the goal is to predict the labels of nodes in a single target view, MVNE is able to leverage information from all of the views to outperform SVNE applied only to the target view, by 10 percentage points or more. Similar results were observed for MVNE relative to SVNE when tested on the rest of the views of the Last.fm data (results not shown). Furthermore, similar trends were observed for all the multi-view embedding methods considered in the paper relative to their single view counterparts (results not shown).

V. SUMMARY AND DISCUSSION

We have introduced MVNE, a novel multi-view network embedding algorithm for constructing low-dimensional embeddings of multi-view networks. MVNE uses a novel objective function that maximizes the agreement between views based on both the local and global structure of the underlying multi-view network.
We have shown that (i) SVNE, the single view version of MVNE, is competitive with or superior to the state-of-the-art single view network embedding methods when the embeddings are used for labeling unlabeled nodes in the networks; (ii) MVNE substantially outperforms single view methods applied to the integrated view, as well as state-of-the-art multi-view graph methods for aggregating information from multiple views, when the embeddings are used for labeling nodes in multi-view networks; and (iii) MVNE outperforms SVNE when used to predict node labels in any target view, suggesting that it is able to effectively integrate, from all of the views, information that is useful for labeling nodes in the target view.

B. Future Directions

Work in progress is aimed at extending MVNE (i) to cope with dynamic updates of graphs, e.g., using asynchronous stochastic gradient descent (SGD) to update the latent space with only the newly added or deleted edges or nodes; and (ii) to work with multi-modal networks that include richly structured digital objects (text, images, videos, etc.).
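Putting the pieces above together, a minimal end-to-end sketch of the pipeline this summary describes might look as follows. It reuses the illustrative gfc_embed and evaluate_embedding functions from the earlier sketches, with the view weights set by view size as in Section III, and it carries the same stated assumptions; it is not the authors' released implementation.

```python
# Hedged end-to-end MVNE sketch: aggregate the views as in Eq. (9),
# W_bar = sum_i beta_i W^(i) with beta_i proportional to |V^(i)|, factorize
# W_bar with the single-view routine, and evaluate the resulting embedding
# on node labeling. gfc_embed / evaluate_embedding are the illustrative
# functions sketched earlier in this document.
import numpy as np

def mvne_pipeline(view_matrices, view_sizes, Y, d=128):
    betas = np.asarray(view_sizes, dtype=float)
    betas /= betas.sum()                              # sum_i beta_i = 1
    W_bar = sum(b * W for b, W in zip(betas, view_matrices))
    H, lam = gfc_embed(W_bar, d)                      # rows of H: node embeddings
    return evaluate_embedding(H, Y)                   # (micro-F1, macro-F1)
```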
4,184
1811.02616
2899821082
Network embedding methods aim to produce information-preserving low-dimensional embeddings of the nodes in large networks. State-of-the-art network embedding methods, including Deepwalk @cite_3 , LINE @cite_40 , and node2vec @cite_18 , are limited to single view networks, i.e., networks with a single type of links. However, most real-world networks are comprised of multiple types of nodes and links @cite_25 @cite_40 @cite_38 , wherein each type of link induces a view. Hence, there is growing interest in network embedding methods for multi-view networks @cite_11 @cite_29 @cite_39 @cite_15 . Some multi-view network embedding methods use canonical correlation analysis (CCA) @cite_9 @cite_2 @cite_8 to integrate information from multiple views. Others construct multi-view embeddings by integrating embeddings obtained from the individual views. Examples include MVWE @cite_14 , which uses a weighted voting mechanism to combine information from multiple views; mvn2vec @cite_21 , which attempts to balance the preservation of unique information provided by specific views against information that is shared by multiple views; and DMNE @cite_28 , which uses a co-regularized cost function to combine information from different views. MVWE, mvn2vec, and DMNE use deep neural network models at their core. Specifically, MVWE and mvn2vec are based on a skip-gram model, and DMNE is based on an autoencoder.
{ "abstract": [ "Complex networks have been receiving increasing attention by the scientific community, thanks also to the increasing availability of real-world network data. So far, network analysis has focused on the characterization and measurement of local and global properties of graphs, such as diameter, degree distribution, centrality, and so on. In the last years, the multidimensional nature of many real world networks has been pointed out, i.e. many networks containing multiple connections between any pair of nodes have been analyzed. Despite the importance of analyzing this kind of networks was recognized by previous works, a complete framework for multidimensional network analysis is still missing. Such a framework would enable the analysts to study different phenomena, that can be either the generalization to the multidimensional setting of what happens in monodimensional networks, or a new class of phenomena induced by the additional degree of complexity that multidimensionality provides in real networks. The aim of this paper is then to give the basis for multidimensional network analysis: we present a solid repertoire of basic concepts and analytical measures, which take into account the general structure of multidimensional networks. We tested our framework on different real world multidimensional networks, showing the validity and the meaningfulness of the measures introduced, that are able to extract important and non-random information about complex phenomena in such networks.", "Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.", "Learning distributed node representations in networks has been attracting increasing attention recently due to its effectiveness in a variety of applications. Existing approaches usually study networks with a single type of proximity between nodes, which defines a single view of a network. However, in reality there usually exists multiple types of proximities between nodes, yielding networks with multiple views. This paper studies learning node representations for networks with multiple views, which aims to infer robust node representations across different views. 
We propose a multi-view representation learning approach, which promotes the collaboration of different views and lets them vote for the robust representations. During the voting process, an attention mechanism is introduced, which enables each node to focus on the most informative views. Experimental results on real-world networks show that the proposed approach outperforms existing state-of-the-art approaches for network representation learning with a single view and other competitive approaches with multiple views.", "Low-dimensional vector representations are widely used as stand-ins for the text of words, sentences, and entire documents. These embeddings are used to identify similar words or make predictions about documents. In this work, we consider embeddings for social media users and demonstrate that these can be used to identify users who behave similarly or to predict attributes of users. In order to capture information from all aspects of a user’s online life, we take a multiview approach, applying a weighted variant of Generalized Canonical Correlation Analysis (GCCA) to a collection of over 100,000 Twitter users. We demonstrate the utility of these multiview embeddings on three downstream tasks: user engagement, friend selection, and demographic attribute prediction.", "Network embedding aims to learn a low-dimensional vector representation (or embedding) for each node in the social and information networks, with the constraint to preserve network structures. Most existing methods focus on single network embedding, ignoring the relationship between multiple networks. In many real-world applications, however, related multiple networks (e.g., social networks from different platforms) may contain complementary information which can lead to further refined node embeddings. Thus, in this paper, we propose a novel multi-network embedding method, DMNE. DMNE is flexible, which allows different networks to have different sizes, to be (un)weighted and (un)directed. It leverages multiple networks via cross-network relationships between nodes in different networks, which may form many-to-many node mappings, and be associated with weights. To model the non-linearity of the network data, we develop DMNE to have a new deep learning architecture, which coordinates multiple neural networks (one for each input network data) with a co-regularized loss function to manipulate cross-network relationships. With multiple layers of non-linear mappings, DMNE progressively transforms each input network into a highly non-linear latent space, and in the meantime, adapts different latent spaces to each other through a co-regularized learning schema. Extensive experimental results on four real-life datasets demonstrate the effectiveness of our method.", "Networks are a convenient way to represent complex systems of interacting entities. Many networks contain “communities” of nodes that are more densely connected to each other than to nodes in the rest of the network. In this paper, we investigate the detection of communities in temporal networks represented as multilayer networks. As a focal example, we study time-dependent financial-asset correlation networks. We first argue that the use of the “modularity” quality function---which is defined by comparing edge weights in an observed network to expected edge weights in a “null network''---is application-dependent. 
We differentiate between “null networks” and “null models” in our discussion of modularity maximization, and we highlight that the same null network can correspond to different null models. We then investigate a multilayer modularity-maximization problem to identify communities in temporal networks. Our multilayer analysis depends only on the form of the maximization problem and not on the specific quality function that one chooses. We introduce a diagnostic to measure persistence of community structure in a multilayer network partition. We prove several results that describe how the multilayer maximization problem measures a trade-off between static community structure within layers and larger values of persistence across layers. We also discuss some computational issues that the popular “Louvain” heuristic faces with temporal multilayer networks and suggest ways to mitigate them.", "We introduce Deep Canonical Correlation Analysis (DCCA), a method to learn complex nonlinear transformations of two views of data such that the resulting representations are highly linearly correlated. Parameters of both transformations are jointly learned to maximize the (regularized) total correlation. It can be viewed as a nonlinear extension of the linear method canonical correlation analysis (CCA). It is an alternative to the nonparametric method kernel canonical correlation analysis (KCCA) for learning correlated nonlinear transformations. Unlike KCCA, DCCA does not require an inner product, and has the advantages of a parametric method: training time scales well with data size and the training data need not be referenced when computing the representations of unseen instances. In experiments on two real-world datasets, we find that DCCA learns representations with significantly higher correlation than those learned by CCA and KCCA. We also introduce a novel non-saturating sigmoid function based on the cube root that may be useful more generally in feedforward neural networks.", "Multi-view networks are ubiquitous in real-world applications. In order to extract knowledge or business value, it is of interest to transform such networks into representations that are easily machine-actionable. Meanwhile, network embedding has emerged as an effective approach to generate distributed network representations. Therefore, we are motivated to study the problem of multi-view network embedding, with a focus on the characteristics that are specific and important in embedding this type of networks. In our practice of embedding real-world multi-view networks, we identify two such characteristics, which we refer to as preservation and collaboration. We then explore the feasibility of achieving better embedding quality by simultaneously modeling preservation and collaboration, and propose the mvn2vec algorithms. With experiments on a series of synthetic datasets, an internal Snapchat dataset, and two public datasets, we further confirm the presence and importance of preservation and collaboration. These experiments also demonstrate that better embedding can be obtained by simultaneously modeling the two characteristics, while not over-complicating the model or requiring additional supervision.", "We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. 
DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "This paper reviews the development of social network analysis and examines its major areas of application in sociology. Current developments, including those from outside the social sciences, are examined and their prospects for advances in substantive knowledge are considered. A concluding section looks at the implications of data mining techniques and highlights the need for interdisciplinary cooperation if significant work is to ensue.", "This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. Most existing graph embedding methods do not scale for real world information networks which usually contain millions of nodes. In this paper, we propose a novel network embedding method called the LINE,'' which is suitable for arbitrary types of information networks: undirected, directed, and or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of the classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference. Empirical experiments prove the effectiveness of the LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient, which is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. The source code of the LINE is available online https: github.com tangjianpku LINE .", "", "Real world social networks typically consist of actors (individuals) that are linked to other actors or different types of objects via links of multiple types. Different types of relationships induce different views of the underlying social network. We consider the problem of labeling actors in such multi-view networks based on the connections among them. Given a social network in which only a subset of the actors are labeled, our goal is to predict the labels of the rest of the actors. We introduce a new random walk kernel, namely the Inter-Graph Random Walk Kernel (IRWK), for labeling actors in multi-view social networks. 
IRWK combines information from within each of the views as well as the links across different views. The results of our experiments on two real-world multi-view social networks show that: (i) IRWK classifiers outperform or are competitive with several state-of-the-art methods for labeling actors in a social network; (ii) IRWKs are robust with respect to different choices of user-specified parameters; and (iii) IRWK kernel computation converges very fast within a few iterations.", "Multilayer networks, in particular multilayer social networks, where users belong to and interact on different networks at the same time, are an active research area in social network analysis, computer science, and physics. These networks have traditionally been studied within these separate research communities, leading to the development of several independent models and methods to deal with the same set of problems. This book unifies and consolidates existing practical and theoretical knowledge on multilayer networks including data collection and analysis, modeling, and mining of multilayer social network systems, the evolution of interconnected social networks, and dynamic processes such as information spreading. A single real dataset is used to illustrate the concepts presented throughout the book, demonstrating both the practical utility and the potential shortcomings of the various methods. Researchers from all areas of network analysis will learn new aspects and future directions of this emerging field.", "" ], "cite_N": [ "@cite_38", "@cite_18", "@cite_14", "@cite_8", "@cite_28", "@cite_29", "@cite_9", "@cite_21", "@cite_3", "@cite_39", "@cite_40", "@cite_2", "@cite_15", "@cite_25", "@cite_11" ], "mid": [ "2059270055", "2962756421", "2962779748", "2512549881", "2788816357", "2963104673", "1523385540", "2784777439", "2154851992", "2010133698", "1888005072", "", "2583417665", "2492277691", "" ] }
Multi-View Network Embedding Via Graph Factorization Clustering and Co-Regularized Multi-View Agreement
Abstract-Real-world social networks and digital platforms are comprised of individuals (nodes) that are linked to other individuals or entities through multiple types of relationships (links). Sub-networks of such a network based on each type of link correspond to distinct views of the underlying network. In real-world applications, each node is typically linked to only a small subset of other nodes. Hence, practical approaches to problems such as node labeling have to cope with the resulting sparse networks. While low-dimensional network embeddings offer a promising approach to this problem, most of the current network embedding methods focus primarily on single view networks. We introduce a novel multi-view network embedding (MVNE) algorithm for constructing low-dimensional node embeddings from multi-view networks. MVNE adapts and extends an approach to single view network embedding (SVNE) using graph factorization clustering (GFC) to the multi-view setting, using an objective function that maximizes the agreement between views based on both the local and global structure of the underlying multi-view graph. Our experiments with several benchmark real-world single view networks show that GFC-based SVNE yields network embeddings that are competitive with or superior to those produced by the state-of-the-art single view network embedding methods when the embeddings are used for labeling unlabeled nodes in the networks. Our experiments with several multi-view networks show that MVNE substantially outperforms the single view methods on the integrated view as well as the state-of-the-art multi-view methods. We further show that even when the goal is to predict labels of nodes within a single target view, MVNE outperforms its single-view counterpart, suggesting that MVNE is able to extract the information that is useful for labeling nodes in the target view from all of the views. Index Terms-multi-view learning, network embedding, representation learning I. INTRODUCTION Social networks (e.g., Facebook), social media (e.g., Flickr), and e-commerce platforms (e.g., Amazon) can be seen as very large heterogeneous networks where the nodes correspond to diverse types of entities, e.g., articles, images, videos, music, etc. In such networks, an individual can link to multiple other individuals via different types of social or other relationships, e.g., friendship, co-authorship, etc. [4], [12], [37]. Examples include Google+, which allows members to specify different 'circles' that correspond to different types of social relationships, and DBLP, which contains multiple types of relationships that link authors to articles, publication venues, institutions, etc. Such networks are naturally represented as multi-view networks wherein the nodes denote individuals and the links denote relationships, such that each network view corresponds to a single type of relationship, e.g., friendship, family membership, etc. [2], [6], [17], [33]. Such networks present several problems of interest, e.g., recommending products, activities, or membership in specific interest groups to individuals based on the attributes of the individuals, the multiple relationships that link them to entities or other individuals, etc. [3], [13]. When multiple sources of data are available about entities of interest, multi-view learning offers a promising approach to integrating the complementary information provided by the different data sources (views) to optimize the performance of predictive models [36], [40].
Examples of such multi-view learning algorithms include multi-view support vector machines [7], [20], multi-view matrix (tensor) factorization [23], [24], and multi-view clustering via canonical correlation analysis [9], [11]. However, most of the existing multi-view learning algorithms are not (i) directly applicable to multi-view networks, and (ii) designed to cope with data sparsity, which is one of the key challenges in modeling real-world multi-view networks: although the number of nodes in real-world networks is often in the millions, typically each node is linked to only a small subset of other nodes. Low-dimensional network embeddings offer a promising approach to dealing with such sparse networks [10]. However, barring a few exceptions [6], [25], [31], [34], most of the work on network embedding has focused on methods for single view networks [16], [29], [37]. Against this background, the key contributions of this paper are as follows: 1) We introduce a novel multi-view network embedding (MVNE) algorithm for constructing low-dimensional embeddings of nodes in multi-view networks. MVNE exploits a recently discovered connection between network adjacency matrix factorization and network embedding [30]. Specifically, we use the graph factorization clustering (GFC) [41] algorithm to obtain single view network embeddings. MVNE extends the resulting single view network node embedding algorithm (SVNE) to the multi-view setting. Inspired by [19], MVNE integrates both the local and the global context of nodes in networks to construct effective embeddings of multi-view networks. Specifically, MVNE uses a novel objective function that maximizes the agreement between views based on both the local and global structure of the underlying multi-view graph. 2) We present results of experiments with several benchmark real-world datasets that demonstrate the effectiveness of MVNE relative to state-of-the-art network embedding methods. Specifically, we show that: (i) SVNE is competitive with or superior to the state-of-the-art single view graph embedding methods when the embeddings are used for labeling unlabeled nodes in single view networks. (ii) MVNE substantially outperforms the state-of-the-art single view and multi-view embedding methods for aggregating information from multiple views, when the embeddings are used for labeling nodes in multi-view networks. (iii) MVNE is able to augment information from any target view with relevant information extracted from other views so as to improve node labeling performance on the target view in multi-view networks. The rest of the paper is organized as follows. In Section 2, we formally define the problem of multi-view network embedding. In Section 3, we describe the proposed MVNE framework. In Section 4, we present results of experiments that compare the performance of MVNE with state-of-the-art single view network node embedding methods and their multi-view extensions. In Section 5, we conclude with a summary, a discussion of related work, and some directions for further research. II. PRELIMINARIES Definition 1. (Multi-view Network) A multi-view network is defined by a 6-tuple $G = (V, E, T_V, T_E, \phi_V, \phi_E)$ where $V$ is a set of nodes, $E$ is a set of edges, $T_V$ and $T_E$ respectively denote sets of node and relation types, and $\phi_V : V \to \mathcal{P}(T_V)$ and $\phi_E : E \to T_E$ (where $\mathcal{P}(S)$ is the power set of set $S$) are functions that associate each node $v \in V$ with a subset of types in $T_V$ and each edge $e \in E$ with its corresponding type in $T_E$, respectively.
Note that a node can have multiple types. For example, in an academic network with node types authors (A), professors (R), papers (P), venues (V), organizations (O), and topics (T), relation types may denote the coauthor (A-A), publish (A-P), published-in (P-V), has-expertise (R-T), and affiliation (O-A) relationships. An individual in an academic network can be an author, a professor, or both. Note that the node types induce, from the set $V$ of nodes, $|T_V|$ (potentially overlapping) subsets $V^{(1)}, V^{(2)}, \ldots, V^{(|T_V|)}$. Each view of a multi-view network is represented by an adjacency matrix for each type of edge $t \in T_E$. For an edge type that denotes relationships between nodes in $V^{(i)}$, the corresponding adjacency matrix $W^{(t)}$ will be of size $|V^{(i)}| \times |V^{(i)}|$. Thus, a multi-view network $G$ can be represented by a set of single view networks $G^{(1)}, \ldots, G^{(|T_E|)}$ where $G^{(t)}$ is represented by the adjacency matrix $W^{(t)}$. Definition 2. (Node Label Prediction Problem) Suppose we are given a multi-view network $G$ in which only some of the nodes of each node type $t \in T_V$ are assigned a finite subset of labels in $L_t$, where $L_t$ is the set of possible labels for nodes of type $t$. Given such a network $G$, node label prediction entails completing the labeling of $G$; that is, for each node of type $t$ that does not already have a label $l \in L_t$, specifying whether it should be labeled with $l$ based on the information provided by the nodes and edges of the multi-view network $G$. In the academic network described above, given a subset of papers that have been labeled as high impact papers and/or review papers, node labeling might require, for example, predicting which among the rest of the papers are also likely to be high impact papers and/or review papers. The link (label) prediction problem can be defined analogously. In the case of real-world multi-view networks, because each node is typically linked to only a small subset of the other nodes, a key challenge that needs to be addressed in solving the node (and link) labeling problems has to do with the sparsity of the underlying network. A related problem has to do with the computational challenge of working with very large adjacency matrices. Network embeddings, or low-dimensional representations of each network node that summarize the information provided about the node by the rest of the network, offer a promising approach to addressing both of these problems. Definition 3. (Multi-view Network Embedding) Given a multi-view network $G$, multi-view network embedding entails learning d-dimensional latent representations $X \in \mathbb{R}^{|V| \times d}$, where $d \ll |V|$, that preserve the structural and semantic relations among the nodes adequately for performing one or more tasks, e.g., node label prediction. The quality of specific network embeddings (and hence that of the algorithms that produce them) invariably has to be evaluated in the context of specific applications, e.g., the predictive performance of node label predictors trained using the low-dimensional representations of nodes along with their labels, evaluated on nodes that were not part of the training data. The key challenge presented by multi-view network embedding over and above that of single view embedding has to do with the integration of information from multiple views. Here, we can draw inspiration from multi-view learning [5], [36], [40], where in the simplest case each view corresponds to a different subset of features, perhaps obtained from a different modality.
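To make Definitions 1-3 concrete, the sketch below shows one minimal way a multi-view network, a partial node labeling, and the embedding dimension could be represented in code. The view names, toy edges, and labels are hypothetical illustrations rather than data from the paper.

```python
# A minimal, hypothetical rendering of Definition 1: a multi-view network
# stored as one sparse |V| x |V| adjacency matrix W^(t) per edge type t.
import numpy as np
from scipy.sparse import csr_matrix

def build_view(num_nodes, edges):
    """Build a symmetric adjacency matrix for one (undirected) edge type."""
    rows, cols = zip(*edges)
    w = csr_matrix((np.ones(len(edges)), (rows, cols)),
                   shape=(num_nodes, num_nodes))
    return w + w.T

num_nodes = 5
views = {                                   # T_E = {friendship, co-authorship}
    "friendship":    build_view(num_nodes, [(0, 1), (1, 2), (3, 4)]),
    "co-authorship": build_view(num_nodes, [(0, 2), (2, 3)]),
}
labels = {0: {"high-impact"}, 3: {"review"}}  # labels known for a node subset
d = 2                                         # embedding dim, d << |V| (Def. 3)
```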
Multi-view learning algorithms [22], [27] typically aim to maximize the agreement between views (with respect to the outputs of classifiers trained on each view, the similarity of, or the mutual information between, the low-dimensional latent representations of each view, etc.). III. MULTI-VIEW NETWORK EMBEDDING As noted already, our approach to solving the multi-view network embedding problem leverages a single view network embedding (SVNE) method inspired by a graph soft clustering algorithm, namely graph factorization clustering (GFC) [41]. To solve the multi-view embedding problem, MVNE combines the information from the multiple views into a co-regularized factorization space, wherein the agreement between the multiple views is maximized using a suitably designed objective function. A. Single view network embedding Consider a single view network $G = (V, E)$ consisting of nodes $V$ and edges $E$. Let $K(V, U, F)$ be a bipartite graph where $U$ is a set of nodes that is disjoint from $V$ and $F$ contains all the edges connecting nodes in $V$ with nodes in $U$. Let $B = \{b_{ij}\}$ denote the $|V| \times |U|$ adjacency matrix with $b_{ij} \geq 0$ being the weight of the edge between $v_i \in V$ and $u_j \in U$. The bipartite graph $K$ induces a weight between $v_i$ and $v_j$: $w_{ij} = \sum_p \frac{b_{ip} b_{jp}}{\lambda_p} = (B \Lambda^{-1} B^T)_{ij}$ (1), where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_{|U|})$ with $\lambda_p = \sum_i b_{ip}$ denoting the degree of vertex $u_p \in U$. We can normalize $W$ in Eq. (1) such that $\sum_{ij} w_{ij} = 1$ and $w_{ij} = p(v_i, v_j)$ according to the stationary probability of transition between $v_i$ and $v_j$ [41]. Because in a bipartite graph $K(V, U, F)$ there are no direct links between nodes in $V$, and all the paths from $v_i$ to $v_j$ must pass through nodes in $U$, we have: $p(v_i, v_j) = p(v_i | v_j) p(v_j)$ (2). We can estimate this distribution as $p(v_i, v_j) = \frac{w_{ij}}{\sum_{ij} w_{ij}}$; $p(v_j)$ is given by $\frac{\deg(v_j)}{\sum_{ij} w_{ij}}$, where $\deg(v_j)$ denotes the degree of $v_j$; and $p(v_i | v_j) = \sum_{p=1}^{|U|} p(v_i | u_p) p(u_p | v_j)$. The transition probabilities between the graph $G$ and the communities $U$ (the nodes of the bipartite graph) are given by $p(v_i | u_p) = \frac{b_{ip}}{\lambda_p}$ and $p(u_p | v_j) = \frac{b_{jp}}{\deg(v_j)}$, where the matrix $B$ denotes the weights between graph $G$ and $U$ and $\lambda_p$ denotes the degree of $u_p$. Hence, the transition probability between two nodes $v_i, v_j$ is given by: $w_{ij} = \sum_{p=1}^{d} \frac{b_{ip} b_{jp}}{\lambda_p} = (B \Lambda^{-1} B^T)_{ij}$ (3). Both the local and the global information in $G$ are thus encoded by the matrix $B$ and the diagonal matrix $\Lambda$. We can optimally preserve the information in $G$ by minimizing the objective function $L(W, B\Lambda^{-1}B^T)$, where $L(X, Y) = \sum_{ij} \left( x_{ij} \log \frac{x_{ij}}{y_{ij}} - x_{ij} + y_{ij} \right)$ is a variant of the K-L divergence. Replacing $B$ by $H\Lambda$, we obtain the following objective function: $\min_{H, \Lambda} L(W, H \Lambda H^T)$ (4). The objective function in Eq. (4) is proved to be non-increasing under the update rules in Eq. (5) and Eq. (6) for $H$ and $\Lambda$ [41]: $\tilde{h}_{ip} \propto h_{ip} \sum_j \log\frac{W_{ij}}{(H \Lambda H^T)_{ij}} \lambda_p h_{jp}$ s.t. $\sum_{p=1}^{d} \tilde{h}_{ip} = 1$ (5), and $\tilde{\lambda}_p \propto \lambda_p \sum_{ij} \log\frac{W_{ij}}{(H \Lambda H^T)_{ij}} h_{ip} h_{jp}$ s.t. $\sum_{p=1}^{d} \tilde{\lambda}_p = \sum_{ij} W_{ij}$ (6). In SVNE, the factor $H \in \mathbb{R}^{n \times d}$ corresponds to the single view network embedding, where $d$ is the embedding dimension. Because the size of the adjacency matrix representation of the network is quadratic in the number of nodes, matrix-factorization based embedding methods typically do not scale to large networks.
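To make the preceding factorization concrete, the following minimal NumPy sketch implements SVNE, together with a small wrapper for the multi-view extension developed in Section III-B below, which runs the same updates on a $\beta$-weighted average of the view matrices. For numerical robustness the sketch uses the classical ratio-form multiplicative updates of GFC [41] rather than a literal transcription of Eq. (5) and Eq. (6); all names and the default $\beta$ choice are ours, and this is an illustration, not the authors' released code.

```python
# Minimal sketch of SVNE (ratio-form GFC multiplicative updates [41]) plus
# the multi-view wrapper of Section III-B. Illustrative only; the ratio-form
# updates (used instead of a literal transcription of Eq. (5)-(6)) and all
# names are our assumptions.
import numpy as np

def svne(W, d, n_iter=200, eps=1e-12):
    """Factorize a nonnegative adjacency matrix W ~ H diag(lam) H^T and
    return H (|V| x d), used as the node embedding, and lam (d,)."""
    n = W.shape[0]
    rng = np.random.default_rng(0)
    H = rng.random((n, d))
    H /= H.sum(axis=1, keepdims=True)            # rows of H sum to one
    lam = np.full(d, W.sum() / d)                # degrees of bipartite nodes u_p
    for _ in range(n_iter):
        R = W / ((H * lam) @ H.T + eps)          # ratio W_ij / (H Lam H^T)_ij
        H = H * (R @ (H * lam))                  # update h_ip, then renormalize
        H /= H.sum(axis=1, keepdims=True) + eps
        lam = lam * np.einsum('ij,ip,jp->p', R, H, H)
        lam *= W.sum() / (lam.sum() + eps)       # enforce sum_p lam_p = sum(W)
    return H, lam

def mvne(views, d, betas=None, **kwargs):
    """Embed a multi-view network by factorizing W_bar = sum_i beta_i W^(i)
    (the reduction of Eq. (8)-(9) below); views are aligned |V| x |V| arrays."""
    if betas is None:
        # Default beta_i = |V^(i)| / sum_i |V^(i)|; taking the non-isolated
        # node count of a view as |V^(i)| is our assumption.
        sizes = np.array([np.count_nonzero(W.sum(0) + W.sum(1))
                          for W in views], dtype=float)
        betas = sizes / sizes.sum()
    W_bar = sum(b * W for b, W in zip(betas, views))
    return svne(W_bar, d, **kwargs)              # same updates with W -> W_bar
```

Applied to the toy network above, `H, lam = mvne([w.toarray() for w in views.values()], d)` produces one d-dimensional row per node.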
Hence, inspired by [15], we make use of a more efficient encoding of the network structure: instead of directly inputting the adjacency matrix, we use a vectorized representation of the adjacency matrix to perform the matrix factorization. B. Multi-view Network Embedding Given a multi-view network $G = \{G^{(1)}, G^{(2)}, \ldots, G^{(k)}\}$, the key idea behind extending SVNE to MVNE is to design a co-regularized objective function that, in addition to preserving the information in each view, seeks to maximize the agreement between the views. To accomplish this goal, we propose the co-regularized objective function in Eq. (7), which is designed to minimize the cost in each view: $\min_{H^{(i)}, \Lambda^{(i)}} \sum_{i=1}^{k} \beta_i L(W^{(i)}, H^{(i)} \Lambda^{(i)} H^{(i)T}) + \alpha \sum_{p,q=1}^{k} \| H^{(p)} \Lambda^{(p)} - H^{(q)} \Lambda^{(q)} \|_2$ s.t. $\sum_{i=1}^{k} \beta_i = 1$ (7). Here, $H^{(i)}$ and $\Lambda^{(i)}$ represent the matrix factorization in view $i$, $\alpha$ denotes the regularization hyperparameter, and $\beta_i$ is the parameter used to tune the relative importance of the different views and the role they play in maximizing the agreement between views. If we know that some views are more informative than others, one might want to set the $\beta_i$ accordingly. In contrast, if we know that some views are likely to be noisy, we might want to deemphasize such views by setting the respective $\beta_i$ values to be small as compared to those of the other views. In the absence of any information about the relative importance or reliability of the different views, we set $\beta_i$ equal to $\frac{|V^{(i)}|}{\sum_{i=1}^{k} |V^{(i)}|}$. To minimize the cost and maximize the agreement, we constrain the matrix factorization in each view to be the shared latent factorization $H$ and $\Lambda$. This yields the objective function shown in Eq. (8): $\min_{H, \Lambda} \sum_{i=1}^{k} \beta_i L(W^{(i)}, H \Lambda H^T)$ (8). We find that minimizing the objective function in Eq. (8) is equivalent, ignoring constant terms, to: $\min_{H, \Lambda} L\left( \sum_{i=1}^{k} \beta_i W^{(i)}, H \Lambda H^T \right)$ (9). We thus co-regularize the views by choosing $\bar{W} = \sum_{i=1}^{k} \beta_i W^{(i)}$ to maximize the agreement across views. The corresponding update rules are obtained analogously to the single view case in Eq. (5) and Eq. (6) by replacing $W$ with $\bar{W}$. Computational Complexity: In a naive implementation of MVNE, each optimization iteration takes $O(d|V|^2)$ time, where $|V|$ is the total number of nodes and $d$ is the dimension of the embedding space. However, in typical applications, $G$ is usually very sparse. In this case, the time complexity of one optimization iteration using an adjacency-list based representation of the adjacency matrices [15] is $O(|V| + |E|)$ (with $d$ assumed to be constant), where $|E|$ denotes the total number of edges across all of the views. IV. EXPERIMENTAL RESULTS We report the results of experiments designed to address three questions, corresponding to Experiments 1-3 below. Some basic statistics about the datasets are summarized in Table I. The results of our analyses of the Last.fm and Flickr data suggest that their node degree distributions obey the power law, a desirable property for the application of skip-gram based models [29]. • Parameter Tuning: SVNE (and MVNE) are compared with other single view methods (and their multi-view extensions) using the code provided by the authors of the respective methods (with the relevant parameters set or tuned as specified in the respective papers). We explored several different settings for $d$, the dimension of the embedding space (64, 128, 256, 512), for all the methods. We used grid search over $\gamma \in \{40, 80\}$ for Deepwalk and $p, q \in \{0.25, 0.50, 1, 2, 4\}$ for node2vec.
• Performance Evaluation: In Experiments 1-2, we measure performance on the node label prediction task using different fractions of the available data (10% to 90%, in increments of 10%) for training and the remainder for testing the predictors. In Experiment 3, we use 50% of the nodes in each view for training and the rest for testing. We repeat this procedure 10 times and report the performance (as measured by Micro-F1 and Macro-F1) averaged across the 10 runs. In each case, the embeddings are evaluated with respect to the performance of standard one-versus-rest L2-regularized sparse logistic regression classifiers [14] trained to perform node label prediction. B. Exp. 1: Single view methods compared Experiment 1 compares SVNE with three state-of-the-art single view embedding methods on the three standard single view benchmark datasets mentioned above (note that MVNE applied to a single view dataset yields a single view embedding): • Deepwalk, which constructs a network embedding such that two nodes are close in the embedding space if the short random walks originating in the nodes are similar (i.e., generated by similar language models) [29]. • LINE, which constructs a network embedding such that two nodes are close in the embedding space if their first- and second-order network neighborhoods are similar [37]. • Node2Vec, which constructs a network embedding that maximizes the likelihood of preserving the network neighborhoods of nodes, using a biased random walk procedure to efficiently explore diverse neighborhoods [16]. Results: The results of the comparison of SVNE with Deepwalk, LINE, and Node2Vec are shown in Figure 1. In the case of LINE, we report results for LINE(1st+2nd) (which uses first- and second-order neighborhoods), the best performing of the three LINE variants in our experiments, with d = 256. In the case of Deepwalk, we report the best results, obtained with γ = 40, w = 10, t = 40, and d = 128. For Node2Vec, we report the best results, obtained with p, q = 1. For SVNE, we report the results with the optimal d, which was found to be 128 for BlogCatalog, PPI, and Wikipedia. The results summarized in Figure 1 show that on BlogCatalog data, SVNE consistently outperforms Node2Vec and LINE and is competitive with Deepwalk. On PPI data, SVNE outperforms all the other methods in terms of Micro-F1 score, and in terms of Macro-F1 when more than 50% of the nodes are labeled. On Wikipedia data, SVNE performs better than the LINE(1st+2nd) and Deepwalk methods and is competitive with Node2Vec. C. Exp. 2: MVNE Compared with the State-of-the-Art Multi-View Methods We first compare MVNE with traditional network embedding methods such as Deepwalk, LINE, and Node2Vec on two multi-view datasets, Last.fm and Flickr. Since these methods are designed to work with single view networks, we combine the multiple views to obtain an integrated view such that each pair of nodes is linked by an edge in the integrated view if the corresponding pair is linked by an edge in at least one of the constituent views.
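As a concrete reading of the evaluation protocol used throughout these experiments, the sketch below evaluates a given embedding with one-versus-rest L2-regularized logistic regression and averages Micro-F1 and Macro-F1 over repeated random splits; the solver settings are illustrative assumptions rather than the exact configuration of the classifier of [14].

```python
# Hedged sketch of the node-label evaluation protocol: one-vs-rest
# L2-regularized logistic regression on the embeddings, with Micro-F1 and
# Macro-F1 averaged over repeated random splits. Hyperparameters are
# illustrative, not the authors' exact settings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

def evaluate_embedding(X, Y, train_frac=0.5, runs=10, seed=0):
    """X: |V| x d node embeddings; Y: |V| x L binary label-indicator matrix."""
    micro, macro = [], []
    for r in range(runs):
        X_tr, X_te, Y_tr, Y_te = train_test_split(
            X, Y, train_size=train_frac, random_state=seed + r)
        clf = OneVsRestClassifier(
            LogisticRegression(penalty="l2", max_iter=1000))
        clf.fit(X_tr, Y_tr)
        Y_hat = clf.predict(X_te)
        micro.append(f1_score(Y_te, Y_hat, average="micro"))
        macro.append(f1_score(Y_te, Y_hat, average="macro"))
    return float(np.mean(micro)), float(np.mean(macro))
```

Because Y is a binary indicator matrix, the same routine covers the multi-label datasets used in the paper.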
We next compare MVNE with three other baseline multi-view learning methods: • Co-RegSC, which constructs a representation of the multi-view network using co-regularized eigenvectors of the graph Laplacians of each view [18]; • MultiNMF, which constructs a latent representation of the multi-view network wherein the common subspace is obtained by regularized joint matrix factorization of each of the views [21]; • MVWE, which constructs a multi-view network embedding by combining the single view embeddings using a weighted voting scheme [31]. Similar to previous work [31], in our experiments we use the centroid eigenvectors produced by Co-RegSC and the consensus matrix produced by MultiNMF, respectively, as the multi-view network embedding. We explored several different settings for d, the dimension of the embedding space (64, 128, 256), for the three baseline methods. Results: The results of the comparison of MVNE with the other methods are shown in Tables II and III. MVNE consistently, and often substantially, outperforms both (i) the state-of-the-art single view methods on the integrated view and (ii) Co-RegSC, MultiNMF, and MVWE. We observe that the performance of MVWE deteriorates as the views become increasingly incomplete (i.e., large fractions of the nodes appear in only small subsets of the views). In contrast, MVNE copes with incomplete views through co-regularization of the nodes that are missing in each of the views. D. Exp. 3: MVNE compared with SVNE on Node Labeling in a Single Target View Experiment 3 investigates whether MVNE outperforms SVNE on node label prediction on any single target view by leveraging information from all of the views. Considering each view of the Last.fm and Flickr data as the target view, we compare the node labeling performance using embeddings obtained using SVNE applied to the target view alone with that of MVNE, which integrates information from all of the views. Results: Because of space constraints, we show only the results of the comparison of MVNE with SVNE when each of the 5 views of the Flickr dataset and each of 6 views selected from the 12 views of the Last.fm dataset (one with the most nodes (UserView), one with the most edges (Event), two with the most edges per node (TagView, TopTagView), and two with the fewest edges per node (NeighborView, ShoutView)) are designated as the target view. The results summarized in Figure 2 show that MVNE consistently outperforms SVNE on each target view. We conclude that even when the goal is to predict the labels of nodes in a single target view, MVNE is able to leverage information from all of the views to outperform SVNE applied only to the target view, by 10 percentage points or better. Similar results were observed for MVNE relative to SVNE when tested on the rest of the views of the Last.fm data (results not shown). Furthermore, similar trends were observed for all the multi-view embedding methods considered in the paper relative to their single view counterparts (results not shown). V. SUMMARY AND DISCUSSION We have introduced MVNE, a novel multi-view network embedding algorithm for constructing low-dimensional embeddings of multi-view networks. MVNE uses a novel objective function that maximizes the agreement between views based on both the local and global structure of the underlying multi-view network.
We have shown that: (i) SVNE, the single view version of MVNE, is competitive with or superior to the state-of-the-art single view network embedding methods when the embeddings are used for labeling unlabeled nodes in the networks; (ii) MVNE substantially outperforms the single view methods on the integrated view, as well as the state-of-the-art multi-view graph methods for aggregating information from multiple views, when the embeddings are used for labeling nodes in multi-view networks; and (iii) MVNE outperforms SVNE when used to predict node labels in any target view, suggesting that it is able to effectively integrate, from all of the views, information that is useful for labeling nodes in the target view. B. Future Directions Work in progress is aimed at extending MVNE (i) to cope with dynamic updates of graphs, e.g., using asynchronous stochastic gradient descent (SGD) to update the latent space with only the newly added or deleted edges or nodes; and (ii) to work with multi-modal networks that include richly structured digital objects (text, images, videos, etc.).
4,184
1811.02616
2899821082
Real-world social networks and digital platforms are comprised of individuals (nodes) that are linked to other individuals or entities through multiple types of relationships (links). Sub-networks of such a network based on each type of link correspond to distinct views of the underlying network. In real-world applications, each node is typically linked to only a small subset of other nodes. Hence, practical approaches to problems such as node labeling have to cope with the resulting sparse networks. While low-dimensional network embeddings offer a promising approach to this problem, most of the current network embedding methods focus primarily on single view networks. We introduce a novel multi-view network embedding (MVNE) algorithm for constructing low-dimensional node embeddings from multi-view networks. MVNE adapts and extends an approach to single view network embedding (SVNE) using graph factorization clustering (GFC) to the multi-view setting, using an objective function that maximizes the agreement between views based on both the local and global structure of the underlying multi-view graph. Our experiments with several benchmark real-world single view networks show that GFC-based SVNE yields network embeddings that are competitive with or superior to those produced by the state-of-the-art single view network embedding methods when the embeddings are used for labeling unlabeled nodes in the networks. Our experiments with several multi-view networks show that MVNE substantially outperforms the single view methods on the integrated view as well as the state-of-the-art multi-view methods. We further show that even when the goal is to predict labels of nodes within a single target view, MVNE outperforms its single-view counterpart, suggesting that MVNE is able to extract the information that is useful for labeling nodes in the target view from all of the views.
In contrast to the existing multi-view network embedding methods, MVNE exploits a recently discovered connection between network adjacency matrix factorization and network embedding @cite_34 to utilize GFC @cite_7 , a graph factorization method, to perform single view network embedding. MVNE extends the resulting single view network embedding algorithm to the multi-view setting. Inspired by @cite_17 , MVNE uses a novel objective function that maximizes the agreement between views while combining information derived from the local as well as the global structure of the underlying multi-view networks. Like DMNE @cite_28 , MVNE uses a co-regularized objective function to maximize the agreement in the embedding space and to control the embedding dimension. Unlike DMNE, which requires computationally expensive training of a deep neural network, MVNE is considerably more efficient and hence scalable to large networks.
{ "abstract": [ "Network embedding aims to learn a low-dimensional vector representation (or embedding) for each node in the social and information networks, with the constraint to preserve network structures. Most existing methods focus on single network embedding, ignoring the relationship between multiple networks. In many real-world applications, however, related multiple networks (e.g., social networks from different platforms) may contain complementary information which can lead to further refined node embeddings. Thus, in this paper, we propose a novel multi-network embedding method, DMNE. DMNE is flexible, which allows different networks to have different sizes, to be (un)weighted and (un)directed. It leverages multiple networks via cross-network relationships between nodes in different networks, which may form many-to-many node mappings, and be associated with weights. To model the non-linearity of the network data, we develop DMNE to have a new deep learning architecture, which coordinates multiple neural networks (one for each input network data) with a co-regularized loss function to manipulate cross-network relationships. With multiple layers of non-linear mappings, DMNE progressively transforms each input network into a highly non-linear latent space, and in the meantime, adapts different latent spaces to each other through a co-regularized learning schema. Extensive experimental results on four real-life datasets demonstrate the effectiveness of our method.", "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks» Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.", "We propose a simple clustering framework on graphs encoding pairwise data similarities. Unlike usual similarity-based methods, the approach softly assigns data to clusters in a probabilistic way. More importantly, a hierarchical clustering is naturally derived in this framework to gradually merge lower-level clusters into higher-level ones. A random walk analysis indicates that the algorithm exposes clustering structures in various resolutions, i.e., a higher level statistically models a longer-term diffusion on graphs and thus discovers a more global clustering structure. 
Finally, we provide very encouraging experimental results.", "We investigate an unsupervised generative approach for network embedding. A multi-task Siamese neural network structure is formulated to connect embedding vectors, and our objective is to preserve the global node ranking and local proximity of nodes. We provide a deeper analysis to connect the proposed proximity objective to link prediction and community detection in the network. We show our model can satisfy the following design properties: scalability, asymmetry, unity, and simplicity. Experimental results not only verify the above design properties but also demonstrate the superior performance in learning-to-rank, classification, regression, and link prediction tasks." ], "cite_N": [ "@cite_28", "@cite_34", "@cite_7", "@cite_17" ], "mid": [ "2788816357", "2761896323", "2146058975", "2770604839" ] }
Multi-View Network Embedding Via Graph Factorization Clustering and Co-Regularized Multi-View Agreement
Abstract-Real-world social networks and digital platforms are comprised of individuals (nodes) that are linked to other individuals or entities through multiple types of relationships (links). Sub-networks of such a network based on each type of link correspond to distinct views of the underlying network. In real-world applications, each node is typically linked to only a small subset of other nodes. Hence, practical approaches to problems such as node labeling have to cope with the resulting sparse networks. While low-dimensional network embeddings offer a promising approach to this problem, most of the current network embedding methods focus primarily on single view networks. We introduce a novel multi-view network embedding (MVNE) algorithm for constructing low-dimensional node embeddings from multi-view networks. MVNE adapts and extends an approach to single view network embedding (SVNE) using graph factorization clustering (GFC) to the multi-view setting using an objective function that maximizes the agreement between views based on both the local and global structure of the underlying multi-view graph. Our experiments with several benchmark real-world single view networks show that GFCbased SVNE yields network embeddings that are competitive with or superior to those produced by the state-of-the-art single view network embedding methods when the embeddings are used for labeling unlabeled nodes in the networks. Our experiments with several multi-view networks show that MVNE substantially outperforms the single view methods on integrated view and the state-of-the-art multi-view methods. We further show that even when the goal is to predict labels of nodes within a single target view, MVNE outperforms its single-view counterpart suggesting that the MVNE is able to extract the information that is useful for labeling nodes in the target view from the all of the views. Index Terms-multi-view learning, network embedding, representation learning I. INTRODUCTION Social networks e.g., Facebook, social media e.g., Flickr, and e-commerce platforms, e.g., Amazon, can be seen as very large heterogeneous networks where the nodes correspond to diverse types of entities, e.g., articles, images, videos, music, etc. In such networks, an individual can link to multiple other individuals via different types of social or other relationships e.g., friendship, co-authorship, etc [4], [12], [37]. Examples include Google+ which allows members to specify different 'circles' that correspond to different types of social relation-ships; DBLP which contains multiple types of relationships that link authors to articles, publication venues, institutions, etc. Such networks are naturally represented as multi-view networks wherein the nodes denote individuals and links denote relationships such that each network view corresponds to a single type of relationship, e.g., friendship, family membership, etc [2], [6], [17], [33]. Such networks present several problems of interest, e.g., recommending products, activities or membership in specific interest groups to individuals based on the attributes of individuals, the multiple relationships that link them to entities or other individuals, etc. [3], [13]. When multiple sources of data are available about entities of interest, multi-view learning offers a promising approach to integrating complementary information provided by the different data sources (views) to optimize the performance of predictive models [36], [40]. 
Examples of such multiview learning algorithms include: multi-view support vector machines [7], [20], multi-view matrix (tensor) factorization [23], [24], and multi-view clustering via canonical correlation analysis [9], [11]. However, most of the existing multi-view learning algorithms are not (i) directly applicable to multiview networks; and (ii) designed to cope with data sparsity, which is one of the key challenges in modeling real-world multi-view networks: although the number of nodes in realworld networks is often in the millions, typically each node is linked to only a small subset of other nodes. Low-dimensional network embeddings offer a promising approach to dealing with such sparse networks [10]. However, barring a few exceptions [6], [25], [31], [34], most of the work on network embedding has focused on methods for single view networks [16], [29], [37]. Against this background, the key contributions of this paper are as follows: 1) We introduce a novel multi-view network embedding (MVNE) algorithm for constructing low-dimensional embeddings of nodes in multi-view networks. MVNE exploits recently discovered connection between network adjacency matrix factorization and network embedding [30]. Specifically, we use the graph factorization clustering (GFC) [41] algorithm to obtain single view network embedding. MVNE extends the resulting single view network node embedding algorithm (SVNE) to the multi-view setting. Inspired by [19], MVNE integrates both local and global context of nodes in networks to construct effective embeddings of multi-view networks. Specifically, MVNE uses a novel objective function that maximizes the agreement between views based on both the local and global structure of the underlying multiview graph. 2) We present results of experiments with several benchmark real-world data that demonstrate the effectiveness of MVNE relative to state-of-the-art network embedding methods. Specifically, we show that (i) SVNE is competitive with or superior to the state-of-the-art single view graph embedding methods when the embeddings are used for labeling unlabeled nodes in single view networks. (ii) MVNE substantially outperforms the state-ofthe-art single view and multi-view embedding methods for aggregating information from multiple views, when the embeddings are used for labeling nodes in multiview networks. (iii) MVNE is able to augment information from any target view with relevant information extracted from other views so as to improve node labeling performance on the target view in multi-view networks. The rest of the paper is organized as follows. In Section 2, we formally define the problem of multi-view network embedding. In Section 3, we describe the proposed MVNE framework. In Section 4, we present results of experiments that compare the performance of MVNE with state-of-the-art single view network node embedding methods and their multiview extensions. In Section 5, we conclude with a summary, discussion of related work, and some directions for further research. II. PRELIMINARIES Definition 1. (Multi-view Network) A multi-view network is defined by 6-tuple G = (V, E, T V , T E , φ V , φ E ) where V is a set of nodes, E is a set of edges, T V and T E respectively denote sets of node and relation types, and φ V : V → P(T V ) and φ E : E → T E (where P(S) is the power set of set S), are functions that associate each node v ∈ V with a subset of types in T V and each edge e ∈ E with their corresponding type in T E respectively. 
Note that a node can have multiple types. For example, in an academic network with nodes types authors (A), professors (R), papers (P), venues (V), organizations (O), topics (T), relation types may denote the coauthor (A-A), publish (A-P), published-in (P-V), has-expertise (R-T), and affiliation (O-A) relationships. An individual in an academic network can be an author, professor, or both. Note that the node types are selected from the set V of nodes |T V | (potentially overlapping) subsets V (1) , V (2) · · · V (|TV |) . Each view of a multi-view network is represented by an adjacency matrix for each type of edge t ∈ T E . For an edge type that denotes relationships between nodes in V (i) , the corresponding adjacency matrix W (t) will be of size |V (i) | × |V (i) |. Thus, a multi-view network G can be represented by a set of single view networks G (1) · · · G (|TE |) where G (t) is represented by the adjacency matrix W (t) . Definition 2. (Node label prediction problem) Suppose we are given a multi-view network G in which only some of the nodes of each node type t ∈ T V are assigned a finite subset of labels in L t , where L t is the set of possible labels for nodes of type t. Given such a network G, node label prediction entails completing the labeling of G, that is, for each node of type t that does not already have a label l ∈ L t , specifying whether it should be labeled with l based on the information provided by the nodes and edges of the multi-view network G. In the academic network described above, given a subset of papers that have been labeled as high impact papers, and/or review papers, node labeling might require, for example, predicting which among the rest of papers are also likely to be high impact papers and/or review papers. The link (label) prediction problem can be analogously defined. In the case of real-world multi-view networks, because each node is typically linked to only a small subset of the other nodes, a key challenge that needs to be addressed in solving the node (and link) labeling problems has to do with the sparsity of the underlying network. A related problem has to do with the computational challenge of working with very large adjacency matrices. Network embeddings, or low-dimensional representation of each network node that summarizes the information provided about the node by the rest of the network, offers a promising approach to addressing both these problems. Definition 3. (Multi-view Network Embedding) Given a multi-view network G, multi-view network embedding entails learning of d-dimensional latent representations X ∈ ℜ |V |×d , where d << |V | that preserve the structural and semantic relations among them adequately for performing one or more tasks, e.g., node label prediction. The quality of specific network embeddings (and hence that of the algorithms that produce them) have to be invariably evaluated in the context of specific applications, e.g., the predictive performance of node label predictors trained using the low-dimensional representations of nodes along with their labels, evaluated on nodes that were not part of the training data. The key challenge presented by multi-view network embedding over and above that of single view embedding has to do with integration of information from multiple views. Here, we can draw inspiration from multi-view learning [5], [36], [40], where in the simplest case, each view corresponds to a different subset of features, perhaps obtained from a different modality. 
Multi-view learning algorithms [22], [27] typically aim to maximize the agreement (with respect to the output of classifiers trained on each view, similarity of, or mutual information between low-dimensional latent representations of each view, etc). III. MULTI-VIEW NETWORK EMBEDDING As noted already, our approach to solving multi-view network embedding problem leverages a single view network embedding (SVNE) method inspired by a graph soft clustering algorithm, namely, the graph factorization clustering (GFC) [41]. To solve the multi-view embedding problem, MVNE combines the information from the multiple views into the coregularized factorization wherein the agreement between the multiple views is maximized using suitably designed objective function. MVNE combines the information from multiple views into the co-regularized factorization space. A. Single view network embedding Consider a single view network G = (V, E) consisting of nodes V and edges E. Let K(V, U, F ) be a bipartite graph where U is a set of nodes that is disjoint from V and F contains all the edges connecting nodes in V with nodes in U . Let B = {b ij } denote the |V | × |U | adjacency matrix with b ij ≥ 0 being the weight for the edge between v i ∈ V and u j ∈ U . The bipartite graph K induces a weight between v i and v j w ij = p b ip b jp = (BΛ −1 B T ) ij (1) where Λ = diag(λ 1 . . . λ |U| ) with λ p = i b ip denotes the degree of vertex u p ∈ U . We can normalize W in Eq. (1) such that ij w ij = 1 and w ij = p(v i , v j ) according to the stationary probability of transition between v i and v j [41]. Because in a bipartite graph K(V, U, F ), there are no direct links between nodes in V , and all the paths from v i to v j must pass through nodes in U , we have: p(v i , v j ) = p(v i |v j )p(v j )(2) We can estimate this distribution as: p(v i , v j ) = wij ij wij , p(v j ) is given by deg(vj ) ij wij where deg(v j ) represents the degree of v j and p(v i |v j ) = |U| p=1 p(v i |u p )p(u p |v j ). The transition probabilities between the graph G and the communities U (nodes of the bipartite graph) are given by p(v i |u p ) = bip λp and p(u p |v j ) = bpj deg(vj ) where matrix B denotes the weights between graph G and U and λ p denotes the degree of u p . Hence, the transition probability between two nodes v i , v j is given by: w ij = d p=1 b ip b pj λ p = (BΛ −1 B T ) ij(3) Both the local and the global information in G are thus encoded by matrix B and diagonal matrix Λ. We can optimally preserve the information in G by minimizing the objective function L(W, BΛ −1 B T ) where L(X, Y ) = Σ ij (x ij log xij yij − x ij + y ij ) is a variant of the K-L divergence. Replacing B by HΛ, we obtain the following objective function: min H,Λ L(W, HΛH T )(4) The objective function Eq.(4) is proved to be non-increasing under the update rules Eq.(5) and Eq. (6) for H and Λ [41]: h ip ∝h ip Σ j log W ij (HΛH T ) ij λ p h jp s.t. d p=1h ip = 1 (5) λ p ∝λ p Σ j log W ij (HΛH T ) ij h ip h jp s.t. d p=1λ p = ij W ij(6) In SVNE, the factorization H ∈ R n×d corresponds to the the single view network embedding where d is the embedding dimension. Because the size of the adjacency matrix representation of the network is quadratic in the number of nodes, matrix-factorization based embedding methods typically do not scale to large networks. 
Hence, inspired by [15], we make use of more efficient encodings of the network structure: Instead of directly input the adjacent matrix, we use a vectorized representation of adjacency matrix to perform matrix factorization. B. Multi-view Network Embedding Given a multi-view network G = {G (1) , G (2) , . . . G (k) }, the key idea behind extending SVNE to MVNE is to design the co-regularized objective function that in addition to preserving the information in each view, seeks to maximize the agreement between the views. To accomplish this goal, we propose the following co-regularized objective function in Eq. (7) which is designed to minimize the cost in each view: min H (i) ,Λ (i) k i=1 βiL(W (i) , H (i) Λ (i) H (i) T ) +α k p,q=1 ||H (p) Λ (p) − H (q) Λ (q) ||2 s.t. k i=1 βi = 1(7) Here, H (i) and Λ (i) represents the matrix factorization in view i. α denotes the regularization hyperparameter. β i is the parameter used to tune the relative importance of the different views and the role they play in maximizing the agreement between views. If we know that some views are more informative than others, one might want to set the β i accordingly. In contrast, if we know that some views are likely to be noisy, we might want to deemphasize such views by setting the respective β i values to be small as compared to those of other views. In the absence of any information about the relative importance or reliability of the different views, we set β i equal to |V (i) | k i=1 |V (i) | . To minimize the cost and maximize the agreement, we constrain the matrix factorization in each view to be the latent matrix factorization H and Λ. This yields the objective function shown in Eq. (9): min H,Λ k i=1 β i L(W (i) , HΛH T )(8) We find that minimizing the objective function in Eq.(9) is equivalent to the following equation by ignoring the constant term: min H,Λ L( k i=1 β i W (i) , HΛH T )(9) We co-regularize the views by choosingW = k i=1 β i W (j) to maximize the agreement across views. The corresponding update rules are obtained analogous to the single view case in Eq. (5) and Eq.(6) by replacing W withW . Computational Complexity In the naive implementation of MVNE, each optimization iteration takes O(d|V | 2 ) time where |V | is the total number of nodes and d is dimension of embedding space. However, in typical applications, G is usually very sparse. In this case the time complexity of one optimization iteration using adjacency list based representation of the adjacency matrices [15] is O(|V | + |E|) (with d assumed to be constant), where |E| denotes the total number of edges across all of the views. IV. EXPERIMENTAL RESULTS We report results of experiments designed to address the following questions: . Some basic statistics about the datasets described above are summarized in Table I. The results of our analyses of Last.fm and Flickr data suggest that their node degree distributions obey the power law, a desirable property, for the application of skip-gram based models [29]. • Parameter Tuning: SVNE (and MVNE) are compared with other single view methods (and their multi-view extensions) using the code provided by the authors of the respective methods (with the relevant parameters set or tuned as specified in the respective papers). We explored several different settings for d, the dimension of the embedding space (64, 128, 256, 512) for all the methods. We used grid search over γ ∈ {40, 80} for Deepwalk and p, q ∈ {0.25, 0.50, 1, 2, 4} for node2vec. 
Performance Evaluation: In experiments 1-2, we measure the performance on the node label prediction task using different fractions of the available data (10% to 90% in increments of 10%) for training and the remaining for testing the predictors. In experiment 3, we use 50% of the nodes in each view for training and the rest for testing. We repeat this procedure 10 times, and report the performance (as measured by Micro F1 and Macro F1) averaged across the 10 runs. In each case, the embeddings are evaluated with respect to the performance of a standard one-versus-rest L2-regularized sparse logistic regression classifiers [14] trained to perform node label prediction. B. Exp. 1: Single view methods compared Experiment compares SVNE with three state-of-the-art single view embedding methods on three standard single view benchmark datasets mentioned above (Note that MVNE applied to a single view dataset yields a single view embedding): • Deepwalk which constructs a network embedding such that two nodes are close in the embedding if the short random walks originating in the nodes are similar (i.e., generated by similar language models) [29]. • LINE which constructs a network embedding such that two nodes are close in the embedding space if their first and second order network neighborhoods are similar [37]. • Node2Vec which constructs a network embedding that maximizes the likelihood of preserving network neighborhoods of nodes using a biased random walk procedure to efficiently explores diverse neighborhoods [16]. Results: The results of comparison of SVNE with Deepwalk, LINE, and Node2Vec are shown in Figure 1. In the case of LINE, we report results for LINE(1st+2nd) (which uses 1st and 2nd order neighborhoods), in our experiments, the best performing of the 3 variants of LINE, with d = 256. In the case of Deepwalk, we report the best results obtained with γ = 40, w = 10, t = 40 and d = 128. For node2vec, we report the best results obtained with p, q = 1. For SVNE, we report the results with optimal d, which was found to be 128 for Blogcatalog, PPI and Wikipedia. The results summarized in Figure 1 show that on Blogcatalog data, SVNE consistently outperforms Node2vec and LINE and is competitive with Deepwalk. On PPI data, SVNE outperforms all other methods in terms of Micro-F1 score and in terms of Macro-F1 when more than 50% of the nodes are labeled. On wikipedia data, SVNE performs better than LINE(1st+2nd) and Deepwalk methods and is competitive with Node2vec. C. Exp. 2: MVNE Compared with the State-of-the-Art Multi-View Methods We first compare MVNE with traditional network embeddings methods such as Deepwalk, LINE and node2vec on two multi-view datasets Last.fm and Flickr. Since the methods are designed to work with single view networks, we combine multiple views to obtain an integrated view such that each pair of nodes is linked by an edge in the integrated view if the corresponding pair is linked by an edge in at least one of the constituent views. 
We next compare MVNE with three other baseline multiview learning methods: • Co-RegSC which constructs a representation of the multi-view network using co-regularized eigenvectors of the graph Laplacians of each view [18] • MultiNMF which constructs a latent representation of the multi-view network where in the common subspace is obtained by regularized joint matrix factorization of each of the views [21] • MVWE which constructs a multi-view network embedding by combining the single view embeddings using a weighted voting scheme [31] Similar to the previous works [31], in our experiments, we use the centroid eigenvectors produced by Co-RegSC and consensus matrix produced by MultiNMF respectively as the multi-view network embedding. We explored several different settings for d, the dimension of the embedding space (64, 128, 256) for the three baseline methods. Results: The results of comparison of MVNE with other methods are shown in Tables II and III. MVNE consistently, and often substantially, outperforms both (i) the state-of-the-art single view methods on the integrated view and (ii) Co-RegSC, MultiNMF, MVWE. We observe that the performance of MVWE deteriorates as the views become increasingly incomplete (i.e., large fractions of the nodes appear in only small subsets of the views). In contrast, MVNE copes with incomplete views through coregularization of nodes that are missing in each of the views. D. Exp. 3: MVNE compared with SVNE on Node Labeling in a Single Target View Experiment 3 investigates whether MVNE outperforms SVNE on node label prediction on any single target view by leveraging information from the all of the views. Considering each view of the Last.fm and Flickr data as the target view, we compare the node labeling performance using embeddings obtained using SVNE applied to the target view alone with MVNE that integrates information from all of the views. Results: Because of space constraints, we show only the results of comparison of MVNE with SVNE when each of the 5 views of the Flickr dataset and each of the 6 views (1 with the most nodes (Userview), one with the most edges (Event), two with most edges per node (TagView, TopTagView), and two with the fewest edges per node(NeighborView, ShoutView)) selected from the 12 views of the Last.fm dataset are designated as the target view. The results summarized in Figure 2 show that MVNE consistently outperforms SVNE on each target view. We conclude that even when the goal is to predict the labels of nodes in a single target view, MVNE is able to leverage information from all of the views to outperform SVNE applied only to the target view, by 10% points or better. Similar results were observed with MVNE relative to SVNE when tested on the rest of the views of last.fm data (results not shown). Furthermore, similar trends were observed for all the multi-view embedding methods considered in the paper relative to their single view counterparts (results not shown). V. SUMMARY AND DISCUSSION We have introduced MVNE, a novel Multi-View Network Embedding (MVNE) algorithm for constructing lowdimensional embeddings of multi-view networks. MVNE uses a novel objective function that maximizes the agreement between views based on both the local and global structure of the underlying multi-view network. 
We have shown that (i) SVNE, the single view version of MVNE, is competitive with or superior to the state-of-the-art single view network embedding methods when the embeddings are used for labeling unlabeled nodes in the networks; (ii) MVNE substantially outperforms the single view methods applied to the integrated view, as well as the state-of-the-art multi-view graph methods for aggregating information from multiple views, when the embeddings are used for labeling nodes in multi-view networks; and (iii) MVNE outperforms SVNE when used to predict node labels in any target view, suggesting that it is able to effectively integrate, from all of the views, information that is useful for labeling nodes in the target view.

B. Future Directions

Work in progress is aimed at extending MVNE (i) to cope with dynamic updates of graphs, e.g., using asynchronous stochastic gradient descent (SGD) to update the latent space with only the newly added or deleted edges or nodes; and (ii) to work with multi-modal networks that include richly structured digital objects (text, images, videos, etc.).
4,184
1811.02328
2964167901
Face hallucination is a generative task to super-resolve the facial image with low resolution while human perception of face heavily relies on identity information. However, previous face hallucination approaches largely ignore facial identity recovery. This paper proposes Super-Identity Convolutional Neural Network (SICNN) to recover identity information for generating faces close to the real identity. Specifically, we define a super-identity loss to measure the identity difference between a hallucinated face and its corresponding high-resolution face within the hypersphere identity metric space. However, directly using this loss will lead to a Dynamic Domain Divergence problem, which is caused by the large margin between the high-resolution domain and the hallucination domain. To overcome this challenge, we present a domain-integrated training approach by constructing a robust identity metric for faces from these two domains. Extensive experimental evaluations demonstrate that the proposed SICNN achieves superior visual quality over the state-of-the-art methods on a challenging task to super-resolve 12×14 faces with an 8× upscaling factor. In addition, SICNN significantly improves the recognizability of ultra-low-resolution faces.
For subspace-based methods, @cite_27 employed a Principal Component Analysis (PCA) based global appearance model to hallucinate LR faces and a local non-parametric model to enhance the details. @cite_41 used multiple local exemplar patches sampled from aligned HR facial images to hallucinate LR faces. @cite_42 resorted to sparse representations of local face patches. These subspace-based methods require precisely aligned reference HR and LR facial images with the same pose and facial expression.
{ "abstract": [ "A novel face hallucination method is proposed in this paper for the reconstruction of a high-resolution face image from a low-resolution observation based on a set of high- and low-resolution training image pairs. Different from most of the established methods based on probabilistic or manifold learning models, the proposed method hallucinates the high-resolution image patch using the same position image patches of each training image. The optimal weights of the training image position-patches are estimated and the hallucinated patches are reconstructed using the same weights. The final high-resolution facial image is formed by integrating the hallucinated patches. The necessity of two-step framework or residue compensation and the differences between hallucination based on patch and global image are discussed. Experiments show that the proposed method without residue compensation generates higher-quality images and costs less computational time than some recent face image super-resolution (hallucination) techniques.", "In this paper, we study face hallucination, or synthesizing a high-resolution face image from an input low-resolution image, with the help of a large collection of other high-resolution face images. Our theoretical contribution is a two-step statistical modeling approach that integrates both a global parametric model and a local nonparametric model. At the first step, we derive a global linear model to learn the relationship between the high-resolution face images and their smoothed and down-sampled lower resolution ones. At the second step, we model the residue between an original high-resolution image and the reconstructed high-resolution image after applying the learned linear model by a patch-based non-parametric Markov network to capture the high-frequency content. By integrating both global and local models, we can generate photorealistic face images. A practical contribution is a robust warping algorithm to align the low-resolution face images to obtain good hallucination results. The effectiveness of our approach is demonstrated by extensive experiments generating high-quality hallucinated face images from low-resolution input with no manual alignment.", "In this paper, we propose a face-hallucination method, namely face hallucination based on sparse local-pixel structure. In our framework, a high resolution (HR) face is estimated from a single frame low resolution (LR) face with the help of the facial dataset. Unlike many existing face-hallucination methods such as the from local-pixel structure to global image super-resolution method (LPS-GIS) and the super-resolution through neighbor embedding, where the prior models are learned by employing the least-square methods, our framework aims to shape the prior model using sparse representation. Then this learned prior model is employed to guide the reconstruction process. Experiments show that our framework is very flexible, and achieves a competitive or even superior performance in terms of both reconstruction error and visual quality. Our method still exhibits an impressive ability to generate plausible HR facial images based on their sparse local structures. Our framework aims to shape the prior model using sparse representation.Global structure and local-pixel structure are incorporated to produce plausible facial details.A method to learn local-pixel structures based on sparse representation is proposed.The proposed method is competitive with other, state-of-the-art face-hallucination methods." 
], "cite_N": [ "@cite_41", "@cite_27", "@cite_42" ], "mid": [ "1972002222", "2003749430", "2070038402" ] }
Super-Identity Convolutional Neural Network for Face Hallucination
Face hallucination, which generates high-resolution (HR) facial images from low-resolution (LR) inputs, has attracted great interest in the past few years. However, most existing works do not take the recovery of identity information into consideration, so they cannot generate faces close to the real identity. Fig. 1 shows some examples of hallucinated facial images generated by bicubic interpolation and several state-of-the-art methods. Though they generate clearer facial images than bicubic interpolation, the identity similarities are still low, which means that they cannot recover accurate identity-related facial details. On the other hand, human perception of faces heavily relies on identity information [3]. Pixel-level cues cannot fully account for the perception process of the brain. These facts suggest that recovering identity information may improve both the recognizability and the performance of hallucination. Motivated by the above observations, this paper proposes the Super-Identity Convolutional Neural Network (SICNN) for identity-enhanced face hallucination. Different from previous methods, we additionally minimize the identity difference between the hallucinated face and its corresponding high-resolution face. To do so, (i) we introduce a robust identity metric space into the training process; (ii) we define a super-identity loss to measure the identity difference; (iii) we propose a novel training approach to efficiently utilize the super-identity loss. More details are as follows. For the identity metric space, we use a hypersphere space [20] due to its state-of-the-art performance in facial identity representation. Specifically, our SICNN is composed of a face hallucination network cascaded with a recognition network that extracts identity-related features, and a Euclidean normalization operation that projects the features onto the hypersphere. For the loss function, the perceptual loss [12], computed as a Euclidean distance between features, can construct convincing HR images. Differently, in our work, we need to minimize the identity distance of face pairs in the metric space. Here, we modify the perceptual loss into the super-identity loss, calculated as the normalized Euclidean distance (equivalent to the geodesic distance) between the hallucinated face and its corresponding high-resolution face in the hypersphere identity metric space. This also facilitates our analysis of the training process (see Sec. 3.5). For the training approach, using conventional training approaches to directly train the model with the super-identity loss is difficult due to the large margin between the hallucination domain and the HR domain in the hypersphere identity metric space. This is critical during the early training stage, when the face hallucination network cannot yet predict high-quality hallucinated faces. Moreover, the hallucination domain keeps changing as the hallucination network learns, which makes training with the super-identity loss unstable. We summarize this challenge as the dynamic domain divergence problem. To overcome it, we propose a Domain-Integrated Training algorithm that alternately updates the face recognition network and the hallucination network by minimizing different losses in each iteration. In this alternating optimization, the hallucinated faces and HR faces gradually move closer to each other in the hypersphere identity metric space, while the discrimination of this metric space is preserved.
The main contributions of this paper are summarized as follows:
- We propose the Super-Identity Convolutional Neural Network (SICNN) for enhancing identity information in face hallucination.
- We propose the Domain-Integrated Training method to overcome the problem caused by dynamic domain divergence when training SICNN.
- Compared with existing state-of-the-art hallucination methods, SICNN achieves superior visual quality and identity recognizability when super-resolving a facial image of size 12×14 pixels with an 8× upscaling factor.

Super-Identity CNN

In this section, we first describe the architecture of our face hallucination network. Then we introduce the proposed super-resolution loss and super-identity loss for identity recovery. After that, we analyze the dynamic domain divergence problem that arises when training with the super-identity loss. Finally, we introduce the proposed domain-integrated training algorithm to overcome this challenge.

Face Hallucination Network Architecture

As shown in Fig. 2 (a), the face hallucination network can be decomposed into feature extraction, deconvolution, mapping, and reconstruction. We use a dense block [10] to extract semantic features from LR inputs. More specifically, in the dense block, we set the growth rate to 32 and the kernel size to 3×3. The deconvolution layer consists of learnable upscaling filters that enlarge the resolution of the input features. Mapping is implemented by a convolutional layer that reduces the feature dimension to lower the computational cost. Reconstruction also exploits a convolutional layer to predict HR images from semantic features. Here, we denote a convolutional layer as Conv(s, c) and a deconvolutional layer as DeConv(s, c), where the variables s and c represent the filter size and the number of channels, respectively. In addition, the PReLU [8] activation function achieves promising performance in CNN-based super-resolution [6], and we use it after each layer except the reconstruction stage.

Super-Resolution Loss

We use the pixel-wise Euclidean loss, called the super-resolution loss, to constrain the overall visual appearance. For an LR face input $I_i^{LR}$, we penalize the pixel-wise Euclidean distance between the hallucinated face and its corresponding HR face:

$$L_{SR}(I_i^{LR}, I_i^{HR}) = \left\| CNN_H(I_i^{LR}) - I_i^{HR} \right\|_2^2 \quad (1)$$

where $CNN_H$ denotes the hallucination network.
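For concreteness, the following is a minimal sketch, assuming PyTorch, of a hallucination network with this feature-extraction / deconvolution / mapping / reconstruction layout together with the pixel-wise loss of Eq. (1). SimpleDenseBlock is a simplified stand-in for the dense block of [10], and all layer widths and the three-stage 8× layout are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SimpleDenseBlock(nn.Module):
    # Simplified stand-in for the dense block [10]: each 3x3 conv adds
    # `growth` feature maps, and its input is concatenated to its output.
    def __init__(self, in_ch, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.PReLU()))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)
        return x

class HallucinationNet(nn.Module):
    # Feature extraction -> deconvolution (2x upscaling per stage) ->
    # mapping (1x1 conv reducing channels) -> reconstruction.
    # Three stages give the paper's 8x overall upscaling factor.
    def __init__(self, stages=3):
        super().__init__()
        body, ch = [], 3
        for _ in range(stages):
            dense = SimpleDenseBlock(ch)
            body += [dense,
                     nn.ConvTranspose2d(dense.out_channels, 64,
                                        kernel_size=4, stride=2, padding=1),
                     nn.PReLU(),
                     nn.Conv2d(64, 32, kernel_size=1),   # mapping
                     nn.PReLU()]
            ch = 32
        self.body = nn.Sequential(*body)
        self.reconstruct = nn.Conv2d(32, 3, kernel_size=3, padding=1)

    def forward(self, lr_img):
        return self.reconstruct(self.body(lr_img))

def super_resolution_loss(sr_img, hr_img):
    # Eq. (1): squared pixel-wise Euclidean distance, averaged over the batch.
    return ((sr_img - hr_img) ** 2).flatten(1).sum(dim=1).mean()

# Usage: a 12x14 LR input is mapped to a 96x112 output (8x upscaling).
cnn_h = HallucinationNet()
lr = torch.randn(2, 3, 14, 12)     # (batch, channels, H=14, W=12)
sr = cnn_h(lr)                     # -> (2, 3, 112, 96)
loss = super_resolution_loss(sr, torch.randn(2, 3, 112, 96))
```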
Hypersphere Identity Metric Space

The super-resolution loss constrains pixel-level appearance, and we further add a constraint at the identity level. To measure the identity-level difference, the first step is to find a robust identity metric space. Here we employ the hypersphere space [20] due to its state-of-the-art performance in identity representation. As shown in Fig. 2 (b), our hallucination network is cascaded with a face recognition network (i.e., $CNN_R$) and a Euclidean normalization operation that projects faces into the constructed hypersphere identity metric space. $CNN_R$ is a ResNet-like [9] CNN (see Tab. 1). It is trained with the A-Softmax loss function [20], which encourages the CNN to learn discriminative identity features (i.e., maximizing inter-class distance and minimizing intra-class distance) through an angular margin. In this paper, we denote this loss function as the recognition loss $L_{FR}$. For a face input $I_i$ belonging to the $y_i$-th identity, the face recognition loss is represented as:

$$L_{FR}(I_i) = -\log\left( \frac{e^{\| CNN_R(I_i) \| \, \varphi(m\Theta_{y_i})}}{e^{\| CNN_R(I_i) \| \, \varphi(m\Theta_{y_i})} + \sum_{j \neq y_i} e^{\| CNN_R(I_i) \| \, \varphi(\Theta_j)}} \right) \quad (2)$$

where $\Theta_{y_i}$ denotes the learned angle for identity $y_i$, $\varphi(\Theta_{y_i})$ is a monotonically decreasing function generalized from $\cos(\Theta_{y_i})$, and $m$ is the hyperparameter of the angular margin constraint. More details can be found in SphereFace [20].

Table 1. The architecture of $CNN_R$, a ResNet-like CNN [9]. We use the PReLU [8] activation function after each convolution layer. The output of FC1 is the identity representation.

Super-Identity Loss

To impose identity information on the training process, one choice is a loss computed as the Euclidean distance between the features of face pairs, such as the perceptual loss [12]. However, since our goal is to minimize the identity distance in the hypersphere metric space, the original perceptual loss, computed as an L2 distance, is not the best choice for our task. Therefore, we propose a modified perceptual loss, called the Super-Identity (SI) loss, which computes the normalized Euclidean distance (equivalent to the geodesic distance). This modification makes the loss directly related to identity in the hypersphere space and facilitates our investigation in Sec. 3.5. For an LR face input $I_i^{LR}$, we penalize the normalized Euclidean distance between the hallucinated face and its corresponding HR face in the constructed hypersphere identity metric space:

$$L_{SI}(I_i^{LR}, I_i^{HR}) = \left\| \overline{CNN_R}(I_i^{SR}) - \overline{CNN_R}(I_i^{HR}) \right\|_2^2 \quad (3)$$

where $CNN_R(I_i^{SR})$ and $CNN_R(I_i^{HR})$ are the identity features extracted by the face recognition model ($CNN_R$) from the facial images $I_i^{SR}$ and $I_i^{HR}$, respectively, and $\overline{CNN_R}(I_i^{SR}) = \frac{CNN_R(I_i^{SR})}{\| CNN_R(I_i^{SR}) \|_2}$ is the identity representation projected onto the unit hypersphere. In addition to $L_{SI}$, we offer some discussion of the perceptual loss beyond our work. In general, the perceptual loss is computed as an L2 distance. However, in most CNNs, the inner-product operation is used in fully-connected and convolutional layers, and its outputs depend on the feature norm, the weight norm, and the angle between them. Therefore, for different tasks and different metric spaces (e.g., [21,5,25]), modifications to the metric space in which the perceptual loss is computed are necessary ($L_{SI}$ is one such case).
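A minimal sketch of Eq. (3), assuming PyTorch and that $CNN_R$ returns a (batch, dim) identity-feature matrix; the comments also spell out why the normalized distance is monotone in the geodesic (angular) distance.

```python
import torch.nn.functional as F

def super_identity_loss(feat_sr, feat_hr):
    # Eq. (3): squared Euclidean distance between identity features after
    # projection onto the unit hypersphere. For unit vectors u and v,
    #   ||u - v||_2^2 = 2 - 2*cos(theta),
    # so minimizing it is equivalent to minimizing the geodesic
    # (angular) distance between the two identity representations.
    u = F.normalize(feat_sr, p=2, dim=1)
    v = F.normalize(feat_hr, p=2, dim=1)
    return ((u - v) ** 2).sum(dim=1).mean()

# Usage (cnn_r is the recognition network, sr/hr are face batches):
# loss_si = super_identity_loss(cnn_r(sr), cnn_r(hr))
```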
Challenges of Training with Super-Identity Loss

The super-identity loss imposes an identity-level constraint. We examine different training methods as follows.

Baseline training approach I. A straightforward way to train our framework is to jointly use $L_{SR}$, $L_{SI}$, and $L_{FR}$ to train both $CNN_H$ and $CNN_R$ from scratch. The optimization objective can be represented as:

$$\min_{\theta_{CNN_H},\, \theta_{CNN_R}} \frac{1}{n} \sum_{i=1}^{n} L_{SR}(I_i^{LR}, I_i^{HR}) + \alpha L_{SI}(I_i^{LR}, I_i^{HR}) + \beta L_{FR}(I_i^{SR}, I_i^{HR}) \quad (4)$$

where $\alpha$ and $\beta$ denote the loss weights of $L_{SI}$ and $L_{FR}$, respectively, and $\theta_{CNN_H}$ and $\theta_{CNN_R}$ denote the learnable parameters.

Observation I. This training approach generates artifacts (see Fig. 3, first column), and the loss is very difficult to converge. The reasons may be: (1) In the early training stage, the hallucinated faces are quite different from HR faces, so $CNN_R$ is too difficult to optimize from scratch. (2) The objective of $L_{FR}$ (i.e., minimizing the intra-class variance) differs from the objective of the $L_{SI}$ and $L_{SR}$ losses (minimizing the pair-wise distance), which is disadvantageous for learning $CNN_R$ and $CNN_H$. Hence, we cannot use $L_{SI}$ when learning $CNN_R$, and we also cannot use $L_{FR}$ when learning $CNN_H$.

Baseline training approach II. To solve the above problems, one possible training approach, used with the perceptual loss [12], can be adopted. In particular, we train a $CNN_R$ on HR faces and then jointly use $L_{SR}$ and $L_{SI}$ to train $CNN_H$. The joint objective of $L_{SI}$ and $L_{SR}$ can be represented as:

$$\min_{\theta_{CNN_H}} \frac{1}{n} \sum_{i=1}^{n} L_{SR}(I_i^{LR}, I_i^{HR}) + \alpha L_{SI}(I_i^{LR}, I_i^{HR}) \quad (5)$$

Observation II. We make two observations when using this training approach: (1) $L_{SI}$ is difficult to converge. (2) The visual results are noisy (see Fig. 3, second column). To investigate these challenges, we visualized the learned identity features (after Euclidean normalization, as shown in Fig. 4) and found that there exists a large margin between the hallucination domain and the HR domain. We formulate this challenge as the domain divergence problem. It reflects the failure of a $CNN_R$ trained only on HR faces to project faces from the hallucination domain into a measurable hypersphere identity metric space. In other words, this face recognition model cannot extract effective identity representations for hallucinated faces. This makes $L_{SI}$ very difficult to converge and prone to getting stuck in local minima (i.e., much noise appears in the hallucination results).

Fig. 4. The distribution of identity features (after Euclidean normalization) from the hallucination domain (triangles) and the HR domain (dots). These identities are randomly selected from the training set, and different colors denote different identities. We use t-SNE [32] to reduce the dimensionality for better understanding. We can observe a large gap between the two domains in the identity metric space.

Baseline training approach III. To overcome the domain divergence challenge, a straightforward alternating training strategy can be used. In particular, we first train a $CNN_H$ using only $L_{SR}$. Then we train a $CNN_R$ using hallucinated faces and HR faces. Finally, we fine-tune the $CNN_H$ jointly using $L_{SR}$ and $L_{SI}$, following baseline training approach II.

Observation III. Although this alternating training strategy seems able to overcome the domain divergence problem, it still produces artifacts (as shown in Fig. 3, third column). The reason is that the hallucination domain keeps changing while $CNN_H$ is being updated. Once the hallucination domain has changed, the face recognition model can no longer extract effective and measurable identity representations of hallucinated faces. In short, the above observations can be summarized as a dynamic domain divergence problem: a large margin exists between the hallucination domain and the HR domain, and the hallucination domain keeps changing as long as the hallucination model keeps learning.

Domain-Integrated Training Algorithm

To overcome the dynamic domain divergence problem, we propose a new training procedure. From the above observations, we see that an alternating training strategy (baseline training approach III) can alleviate the dynamic domain divergence problem. We further propose to perform this alternating training in each iteration. More specifically, we first train a $CNN_R$ using HR facial images and a $CNN_H$ using $L_{SR}$.
Then, we propose to use the domain-integrated training approach (Algorithm 1) to fine-tune $CNN_R$ and $CNN_H$ alternately in each iteration.

Algorithm 1. Mini-batch SGD based domain-integrated training approach
1: while not converged do
2:  Sample a mini-batch of $N$ LR/HR face pairs $(I_i^{LR}, I_i^{HR})$
3:  Compute the hallucinated faces $I_i^{SR} = CNN_H(I_i^{LR})$
4:  Update the recognition model $CNN_R$ by descending its stochastic gradient: $\nabla_{\theta_{CNN_R}} \frac{1}{N} \sum_{i=1}^{N} L_{FR}(I_i^{SR}, I_i^{HR})$
5:  Update the hallucination model $CNN_H$ by descending its stochastic gradient: $\nabla_{\theta_{CNN_H}} \frac{1}{N} \sum_{i=1}^{N} \left[ L_{SR}(I_i^{LR}, I_i^{HR}) + \alpha L_{SI}(I_i^{LR}, I_i^{HR}) \right]$
6: end while

In particular, in each iteration, we first update $CNN_R$ using the recognition loss, which allows $CNN_R$ to produce accurate identity representations for the current mini-batch of faces from the two domains. Then we jointly use $L_{SR}$ and $L_{SI}$ to update $CNN_H$. This training approach encourages $CNN_R$ to maintain a robust mapping from faces to the measurable hypersphere identity metric space in each iteration for $L_{SI}$ optimization, no matter how $CNN_H$ changes. The alternating optimization is conducted until convergence. Some hallucination examples are shown in Fig. 3, fourth column, where we observe much better visual results with this training approach.

Comparison to Adversarial Training

Domain-Integrated (DI) training and adversarial training [7] are related through their alternating learning strategy, but they are quite different in several aspects: (1) Generally speaking, DI training is essentially a cooperative process in which $CNN_H$ collaborates with $CNN_R$ to minimize the identity difference; the learning objective is the same in each sub-iteration. In adversarial training, by contrast, the generator and discriminator compete against each other to improve performance, so the two models optimize opposing objectives as they learn. (2) The loss functions and optimization style are different. In DI training, we minimize $L_{FR}$ so that $CNN_R$ constructs a margin-based identity metric space, and then minimize $L_{SI}$ so that $CNN_H$ reduces the pair-wise identity difference. In adversarial training, by contrast, the classification loss is minimized for discriminator learning and maximized for generator learning.

Experiments

In this section, we first describe the training and testing details. Then we perform an ablation study to evaluate the effectiveness of the proposed super-identity loss and domain-integrated training. Further, we compare our proposed method with other state-of-the-art methods. After that, we evaluate our method on a higher input resolution. Finally, we evaluate the benefit of our method for low-resolution face recognition.

Training Details

Training data. For a fair comparison with other state-of-the-art methods, we perform face alignment on the facial images. In particular, we use a similarity transformation based on five landmarks detected by MTCNN [41]. We removed the images and identities that overlap between training and testing. For face recognition training, we use web-collected facial images including CASIA-WebFace [38], CACD2000 [4], CelebA [22], and VGG Faces [24] as Set A. This amounts to roughly 1.5M images of 17,680 unique persons. For face hallucination training, we select 1.1M HR facial images (larger than 96×112 pixels) from the same 1.5M images as Set B.

Training details. For recognition model training, we use Set A with a batch size of 512 and $m$ (the angular margin constraint in Eq. 2) set to 4. The learning rate starts from 0.1 and is divided by 10 at 20K and 30K iterations; training finishes at 35K iterations. For hallucination model training, we use Set B with a batch size of 128. The learning rate starts from 0.02 and is divided by 10 at 30K and 60K iterations; a complete training run finishes at 80K iterations. For domain-integrated training, we use Set B with a batch size of 128 for $CNN_H$ and 256 for $CNN_R$. The learning rate starts from 0.01 and is divided by 10 at 6K iterations; a complete training run finishes at 9K iterations.
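A minimal sketch of one iteration of Algorithm 1, assuming PyTorch. Here recognition_head is a hypothetical stand-in for the A-Softmax classification head computing $L_{FR}$ from identity features and labels, and super_resolution_loss / super_identity_loss are the sketches given earlier; the default alpha of 8 follows the ablation study below.

```python
import torch

def domain_integrated_step(cnn_h, cnn_r, recognition_head,
                           opt_r, opt_h, lr_faces, hr_faces, labels,
                           alpha=8.0):
    # Algorithm 1, line 4: update CNN_R on hallucinated + HR faces so the
    # identity metric space stays measurable for the current hallucination
    # domain. CNN_H is frozen for this step via detach().
    sr_faces = cnn_h(lr_faces).detach()
    feats = cnn_r(torch.cat([sr_faces, hr_faces]))
    loss_r = recognition_head(feats, torch.cat([labels, labels]))
    opt_r.zero_grad()
    loss_r.backward()
    opt_r.step()

    # Algorithm 1, line 5: update CNN_H with L_SR + alpha * L_SI.
    # opt_h holds only CNN_H's parameters, so CNN_R stays fixed here.
    sr_faces = cnn_h(lr_faces)
    loss_h = (super_resolution_loss(sr_faces, hr_faces)
              + alpha * super_identity_loss(cnn_r(sr_faces),
                                            cnn_r(hr_faces)))
    opt_h.zero_grad()
    loss_h.backward()
    opt_h.step()
    return loss_r.item(), loss_h.item()
```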
Testing Details

Testing data. We randomly select 1,000 identities with 10,000 HR facial images (larger than 96×112 pixels) from the UMD-Face [1] dataset as Set C. This set is used for the face hallucination and identity recovery evaluations.

Evaluation protocols. In this section, we perform three kinds of evaluations. For identity recovery, we evaluate the performance of recovering identity information while super-resolving faces. In particular, we use the $CNN_R$ trained on Set A as the identity feature extractor; the identity features are taken from the output of the first fully connected layer. We then compute the identity similarity (i.e., cosine similarity) between each hallucinated face and its corresponding HR face on Set C, and report the average similarity over the testing set. For identity recognizability, we evaluate the recognizability of hallucinated faces. In particular, we first downsample Set A to 12×14 pixels as Set A-LR. Then we use the different methods to super-resolve Set A-LR to 96×112 pixels, yielding a different Set A-SR per method. Finally, we use each Set A-SR to train a $CNN_R$ and evaluate it on LFW [11] and YTF [36].
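A minimal sketch of the identity-recovery protocol just described, assuming PyTorch and that cnn_r returns the first fully connected layer's output:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def average_identity_similarity(cnn_h, cnn_r, lr_faces, hr_faces):
    # Cosine similarity between the identity features of each hallucinated
    # face and its corresponding HR face, averaged over the test set.
    feat_sr = cnn_r(cnn_h(lr_faces))
    feat_hr = cnn_r(hr_faces)
    return F.cosine_similarity(feat_sr, feat_hr, dim=1).mean().item()
```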
Fig. 5. Hallucination results for different loss weights: LR input, HR ground truth, and α = 0, 2, 4, 8, 16, 32.

Ablation Experiment

Loss weight. The hyperparameter α (see Algorithm 1) dominates identity recovery. To verify the effectiveness of the proposed super-identity loss, we vary α from 0 (i.e., using only the super-resolution loss) to 32 to learn different models. From Tab. 2 and Fig. 5, we observe that a larger α makes the facial images sharper with more details and brings better identity recovery and recognizability, but a too-large α also makes the texture look slightly unnatural. Since the identity recovery and identity recognizability performance is stable once α is at least 8, we fix α to 8 in the other experiments.

Table 2. Quantitative comparison of different α on the identity recovery and identity recognizability evaluations. A larger α brings better performance, which is stable when α is larger than 8.

Training approach. We evaluate the different training approaches introduced in Sec. 3.5 and Sec. 3.6. Some visual results are shown in Fig. 3, where we can see that domain-integrated training achieves the best visual results. Moreover, from Tab. 3, domain-integrated training also achieves the best identity recovery and identity recognizability.

Evaluation on Face Hallucination

We compare SICNN with other state-of-the-art methods and bicubic interpolation on Set C for face hallucination. In particular, following EnhanceNet [26], we train another UR-DGN, called UR-DGN*, with an additional perceptual loss computed at the end of the second and the last ResBlock of $CNN_R$. All methods are re-trained on the same training set, Set B. Some visual examples are shown in Fig. 6; more visual results are included in our supplementary material. We also report the average Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) in Tab. 4, but, as other works have argued [12,26,14], PSNR and SSIM are of little use for evaluating semantic super-resolution, whereas visual quality and recognizability are more valuable. From the visual results, it is clear that our method achieves the best results among the compared methods. We analyze the results as follows: (1) For Ma et al.'s method, which is based on exemplar patches, the results are over-smooth and suffer from obvious blocking artifacts for such a low-resolution input with a large upsampling scale. (2) LapSRN [13], being based on an L2 pixel-wise loss, makes the hallucinated faces over-smooth. (3) UR-DGN [39] jointly uses a pixel-wise Euclidean loss and an adversarial loss to generate a realistic facial image closest to the average of all potential images; thus, though the generated facial images look realistic, they are quite different from the original HR images. (4) UR-DGN* uses an additional loss, the perceptual loss computed in our $CNN_R$, as a pair-wise semantic loss for identity recovery. Though this combination of pixel-wise, adversarial, and perceptual losses is the state-of-the-art super-resolution training approach (i.e., EnhanceNet [26]), its identity recovery is still inferior to our method (see Sec. 4.6).

Evaluation on Higher Input Resolution

For a more comprehensive analysis, in this section we train our model for 24×28 inputs with a 4× upscaling factor. Specifically, we modify the hallucination network (i.e., $CNN_H$) by removing the first DB, DeConv, and Conv layers. As shown in Fig. 7, our method achieves very good visual quality on higher-resolution inputs with a 4× upscaling factor. For identity recovery and identity recognizability, our method also achieves very good results: an average identity similarity of 0.8868, LFW accuracy of 99.21%, and YTF accuracy of 94.86%, which are very close to the performance on HR faces.

Evaluation on Identity Recovery

We evaluate identity recovery against the other state-of-the-art methods. All models in this evaluation are the same as in the previous experiment (i.e., Sec. 4.4). From Tab. 5, we observe that our method achieves the best performance. We also observe that UR-DGN, trained with pixel-wise and adversarial losses, performs even worse than LapSRN despite its sharper visual results (see Sec. 4.4). This means that UR-DGN loses some identity information while super-resolving a face, because the adversarial loss is not a pair-wise loss. Adding the perceptual loss (i.e., UR-DGN*), a pair-wise semantic loss, improves the results, but they remain inferior to our method.

Evaluation on Identity Recognizability

Following the previous two experiments (i.e., Secs. 4.4 and 4.6), we further evaluate identity recognizability against the other state-of-the-art methods. From Tab. 5, we observe that our method achieves the best performance, and we obtain observations similar to those of the previous experiment. We also observe that although several methods (LapSRN, Ma et al., and UR-DGN) obtain better visual results than bicubic interpolation, the identity recognizability of their super-resolved faces is similar or even inferior. This means that these methods cannot generate discriminative faces with better identity recognizability.

Table 6. Face verification performance of different methods on the LFW [11] and YTF [36] benchmarks. It shows that our method can help the recognition model achieve high accuracy with ultra-low-resolution inputs.

Evaluation on Low-Resolution Face Recognition

To evaluate the benefit of our method for low-resolution face recognition, we compare our method (SICNN + $CNN_R$) with other state-of-the-art recognition methods on the LFW [11] and YTF [36] benchmarks. From the results in Table 6, we find that these methods' input sizes are relatively large (area sizes from 15.3× to 298× compared with our method).
Moreover, using our face hallucination method, the recognition model can still achieve reasonable results at such an ultra-low resolution. We also tried using unaligned faces in training and testing, and our proposed method still achieves a similar performance improvement.

Conclusion

In this paper, we present the Super-Identity CNN (SICNN) to enhance identity information when super-resolving face images of size 12×14 pixels with an 8× upscaling factor. Specifically, SICNN aims to minimize the identity difference between the hallucinated face and its corresponding HR face. In addition, we propose a domain-integrated training approach to overcome the dynamic domain divergence problem when training SICNN. Extensive experiments demonstrate that SICNN not only achieves superior hallucination results but also significantly improves the performance of low-resolution face recognition.

Acknowledgement

This work was supported in part by MediaTek Inc. and the Ministry of Science and Technology, Taiwan, under Grant MOST 107-2634-F-002-007. We also benefited from grants from NVIDIA and the NVIDIA DGX-1 AI Supercomputer.
4,096
1811.02328
2964167901
Recently, deep convolutional neural networks (DCNNs) have achieved remarkable progress in a variety of face analysis tasks, such as face recognition @cite_34 @cite_20 @cite_14 , face detection @cite_3 @cite_25 , and facial attribute recognition @cite_8 @cite_1 @cite_43 @cite_18 . @cite_2 proposed a bi-channel CNN to hallucinate blurry facial images in the wild. For unaligned faces, @cite_4 proposed to jointly learn face hallucination and dense facial spatial correspondence field estimation. The approach of @cite_0 is a GAN-based method that generates realistic facial images. These works ignore identity information recovery, which is important for recognizability and hallucination quality. @cite_23 and @cite_38 relied on a perceptual loss, which is closer to perceptual similarity, to recover visually more convincing HR images for general image SR. In this paper, we modify the perceptual loss to fit the identity hypersphere space and propose a novel training approach to overcome the challenges of using this loss.
{ "abstract": [ "Inverse problems in image and audio, and super-resolution in particular, can be seen as high-dimensional structured prediction problems, where the goal is to characterize the conditional distribution of a high-resolution output given its low-resolution corrupted observation. When the scaling ratio is small, point estimates achieve impressive performance, but soon they suffer from the regression-to-the-mean problem, result of their inability to capture the multi-modality of this conditional distribution. Modeling high-dimensional image and audio distributions is a hard task, requiring both the ability to model complex geometrical structures and textured regions. In this paper, we propose to use as conditional model a Gibbs distribution, where its sufficient statistics are given by deep convolutional neural networks. The features computed by the network are stable to local deformation, and have reduced variance when the input is a stationary texture. These properties imply that the resulting sufficient statistics minimize the uncertainty of the target signals given the degraded observations, while being highly informative. The filters of the CNN are initialized by multiscale complex wavelets, and then we propose an algorithm to fine-tune them by estimating the gradient of the conditional log-likelihood, which bears some similarities with Generative Adversarial Networks. We evaluate experimentally the proposed approach in the image super-resolution task, but the approach is general and could be used in other challenging ill-posed problems such as audio bandwidth extension.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. 
Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks.", "We present a novel framework for hallucinating faces of unconstrained poses and with very low resolution (face size as small as 5pxIOD). In contrast to existing studies that mostly ignore or assume pre-aligned face spatial configuration (e.g. facial landmarks localization or dense correspondence field), we alternatingly optimize two complementary tasks, namely face hallucination and dense correspondence field estimation, in a unified framework. In addition, we propose a new gated deep bi-network that contains two functionality-specialized branches to recover different levels of texture details. Extensive experiments demonstrate that such formulation allows exceptional hallucination quality on in-the-wild low-res faces with significant pose and illumination variations.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.", "Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts.", "Face detection and alignment in unconstrained environment are challenging due to various poses, illuminations, and occlusions.
Recent studies show that deep learning approaches can achieve impressive performance on these two tasks. In this letter, we propose a deep cascaded multitask framework that exploits the inherent correlation between detection and alignment to boost up their performance. In particular, our framework leverages a cascaded architecture with three stages of carefully designed deep convolutional networks to predict face and landmark location in a coarse-to-fine manner. In addition, we propose a new online hard sample mining strategy that further improves the performance in practice. Our method achieves superior accuracy over the state-of-the-art techniques on the challenging face detection dataset and benchmark and WIDER FACE benchmarks for face detection, and annotated facial landmarks in the wild benchmark for face alignment, while keeps real-time performance.", "Conventional face super-resolution methods, also known as face hallucination, are limited up to 2×-4× scaling factors where 4-16 additional pixels are estimated for each given pixel. Besides, they become very fragile when the input low-resolution image size is too small that only little information is available in the input image. To address these shortcomings, we present a discriminative generative network that can ultra-resolve a very low resolution face image of size 16×16 pixels to its 8× larger version by reconstructing 64 pixels from a single pixel. We introduce a pixel-wise ℓ2 regularization term to the generative model and exploit the feedback of the discriminative network to make the upsampled face images more similar to real ones. In our framework, the discriminative network learns the essential constituent parts of the faces and the generative network blends these parts in the most accurate fashion to the input image. Since only frontal and ordinary aligned images are used in training, our method can ultra-resolve a wide range of very low-resolution images directly regardless of pose and facial expression variations. Our extensive experimental evaluations demonstrate that the presented ultra-resolution by discriminative generative networks (UR-DGN) achieves more appealing results than the state-of-the-art.", "", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "Face hallucination method is proposed to generate high-resolution images from low-resolution ones for better visualization. However, conventional hallucination methods are often designed for controlled settings and cannot handle varying conditions of pose, resolution degree, and blur.
In this paper, we present a new method of face hallucination, which can consistently improve the resolution of face images even with large appearance variations. Our method is based on a novel network architecture called Bi-channel Convolutional Neural Network (Bi-channel CNN). It extracts robust face representations from raw input by using deep convolutional network, then adaptively integrates two channels of information (the raw input image and face representations) to predict the high-resolution image. Experimental results show our system outperforms the prior state-of-the-art methods.", "", "", "Convolutional neural networks (CNNs) have been widely used in computer vision community, significantly improving the state-of-the-art. In most of the available CNNs, the softmax loss function is used as the supervision signal to train the deep model. In order to enhance the discriminative power of the deeply learned features, this paper proposes a new supervision signal, called center loss, for face recognition task. Specifically, the center loss simultaneously learns a center for deep features of each class and penalizes the distances between the deep features and their corresponding class centers. More importantly, we prove that the proposed center loss function is trainable and easy to optimize in the CNNs. With the joint supervision of softmax loss and center loss, we can train a robust CNNs to obtain the deep features with the two key learning objectives, inter-class dispension and intra-class compactness as much as possible, which are very essential to face recognition. It is encouraging to see that our CNNs (with such joint supervision) achieve the state-of-the-art accuracy on several important face recognition benchmarks, Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and MegaFace Challenge. Especially, our new approach achieves the best results on MegaFace (the largest public domain face benchmark) under the protocol of small training set (contains under 500000 images and under 20000 persons), significantly improving the previous results and setting new state-of-the-art for both face recognition and face verification tasks." ], "cite_N": [ "@cite_38", "@cite_18", "@cite_14", "@cite_4", "@cite_8", "@cite_1", "@cite_3", "@cite_0", "@cite_43", "@cite_23", "@cite_2", "@cite_34", "@cite_25", "@cite_20" ], "mid": [ "2196707239", "2613718673", "2963466847", "2507235960", "2163605009", "1834627138", "2341528187", "2520930090", "", "2331128040", "2201706299", "", "", "2520774990" ] }
Super-Identity Convolutional Neural Network for Face Hallucination
Face hallucination, which generates high-resolution (HR) facial images from lowresolution (LR) inputs, has attracted great interests in the past few years. However, most of existing works do not take the recovery of identity information into consideration such that they cannot generate faces closed to the real identity. Fig. 1 shows some examples of hallucinated facial images generated by bicubic and several state-of-the-art methods. Though they generate clearer facial images than bicubic, the identity similarities are still low, which means that they cannot recover accurate identity-related facial details. On the other hand, human perception of face heavily relies on identity information [3]. Pixel-level cues cannot fully account for the perception process of the brain. These facts suggest that recovering identity information may improve both the recognizability and performance of hallucination. Motivated by the above observations, this paper proposes Super-Identity Convolutional Neural Network (SICNN) for identity-enhanced face hallucination. Different from previous methods, we additionally minimize the identity difference between the hallucinated face and its corresponding high-resolution face. To do so, (i) we introduce a robust identity metric space in the training process; (ii) we define a super-identity loss to measure the identity difference; (iii) we propose a novel training approach to efficiently utilize the super-identity loss. More details as follows: For identity metric space, we use a hypersphere space [20] as the identity metric space due to its state-of-the-art performance of facial identity representation. Specifically, our SICNN is composed of a face hallucination network cascaded with a recognition network to extract identity-related feature, and an Euclidean normalization operation to project the feature into the hypersphere space. For loss function, perceptual loss [12], computed by feature Euclidean distance, can construct convincing HR images. Differently, in our work, we need to minimize the identity distance of face pairs in the metric space. Here, we modified the perceptual loss to the super-identity loss calculated by normalized Euclidean distance (equivalent to geodesic distance) between the hallucinated face and its corresponding high-resolution face in the hypersphere identity metric space. This also facilitates our analysis on the training process (see Sec. 3.5). For training approach, using conventional training approaches to directly train the model with super-identity loss is difficult due to the large margin between the hallucination domain and the HR domain in the hypersphere identity metric space. This is critical during the early training stage when face hallucina-tion network cannot predict high quality hallucinated face images. Moreover, the hallucination domain keeps changing during the hallucination network learning, which makes the training with super-identity loss unstable. We summarize this challenge as a dynamic domain divergence problem. To overcome this problem, we propose a Domain Integrated Training algorithm that alternately updates the face recognition network and the hallucination network by minimizing the different loss in each iteration. In this alterative optimization, the hallucinated face and HR face will gradually move closer to each other in the hypersphere identity metric space while keep the discrimination of this metric space. 
The main contributions of this paper are as summarized as follows: -We propose Super-identity Convolutional Neural Network (SICNN) for enhancing the identity information in face hallucination. -We propose Domain-Integrated Training method to overcome the problem caused by dynamic domain divergence when training SICNN. -Compared with existing state-of-the-art hallucination methods, the SICNN achieves superior visual quality and identity recognizability when superresolving a facial image of size 12×14 pixels with an 8× upscaling factor. Super-Identity CNN In this section, we will first describe the architecture of our face hallucination network. Then we will introduce the proposed super-resolution loss and superidentity loss for identity recovery. After that, we will analyze the challenge, dynamic domain divergence problem, in super-identity training. At the last, we introduce the proposed domain-integrated training algorithm to overcome this challenge. Face Hallucination Network Architecture As shown in Fig. 2 (a), the face hallucination network can be decomposed into feature extraction, deconvolution, mapping, and reconstruction. We use dense block [10] to extract semantic features from LR inputs. More specifically, in the dense block, we set the growth rate to 32 and the kernel size to 3×3. Deconvolution layer consists of learnable upscaling filters to enlarge the resolutions of input features. Mapping is implemented by a convolutional layer to reduce the dimension of features to reduce computational cost. Reconstruction also exploits a convolutional layer to predict HR images from semantic features. Here, we denote a convolutional layer as Conv(s, c) and a deconvolutional layer as DeConv(s, c), where the variables s and c represent the filter size and the number of channels, respectively. In addition, PReLU [8] activation function achieves promising performance in CNN-based super-resolution [6] and we use it after each layer except the reconstruction stage. Super-Resolution Loss We use the pixel-wise Euclidean loss, called super-resolution loss, to constrain the overall visual appearance. For LR face input I LR i , we penalize the pixel-wise Euclidean distance between the hallucinated face and its corresponding HR face: L SR (I LR i , I HR i ) = CN N H (I LR i ) − I HR i 2 2 ,(1) Hypersphere Identity Metric Space Super-resolution loss can constrain pixel-level appearance. And we further use a constrain on the identity level. To measure the identity level difference, the first step is to find a robust identity metric space. Here we employ the hypersphere space [20] due to its state-of-the-art performance on identity representation. As shown in Fig. 2 (b), our hallucination network is cascaded with a face recognition network (i.e. CN N R ) and an Euclidean normalization operation that projects faces to the constructed hypersphere identity metric space. CN N R is a Resnet-like [9] CNN (see Tab. 1). It is trained by A-Softmax loss function [20] which encourages the CNN to learn discriminate identity features (i.e. maximizing inter-class distance and minimizing intra-class distance) by an angular margin. In this paper, we denote this loss function as the recognition loss L F R . For a face input I i belonging to the y i -th identity. 
The face recognition loss is represented as: L F R (I i ) = − log( e CN N R (Ii) ϕ(mΘy i ) e CN N R (Ii) ϕ(mΘy i ) + j =yi e CN N R (Ii) ϕ(Θj ) ),(2) where the Θ yi denotes the learned angle for identity y i , ϕ(Θ yi ) is a monotonically decreasing function generalized from cos(Θ yi ), and m is the hyper parameter of angular margin constrain. More details can be found in Sphereface [20]. [9]. We use PReLU [8] activation function after each convolution layer. The output of FC1 is the identity representation. Super-Identity Loss To impose the identity information in the training process, one choice is to use a loss computed by features Euclidean distance between face pairs, such as perceptual loss [12]. However, in this paper, since our goal is to minimize identity distance in hypersphere metric space, the original perceptual loss, computed by L2 distance is not the best choice in our task. Therefore, we propose a modified perceptual loss, called Super-Identity (SI) loss, to compute the normalized Euclidean distance (equivalent to geodesic distance). This modification makes the loss directly related to identity in hypersphere space and facilitate our investigation in Sec. 3.5. For a LR face input I LR i , we penalize the normalized Euclidean distance between the hallucinated face and its corresponding HR face in the constructed hypersphere identity metric space: L SI (I LR i , I HR i ) = CN N R (I SR i ) − CN N R (I HR i ) 2 2(3) where CN N R (I SR i ) and CN N R (I HR i ) are the identity features extracted from face recognition model (CN N R ) for facial images I SR i and I HR i , respectively. CN N R (I SR i ) = CN N R (I SR i ) CN N R (I SR i ) 2 is the identity representation projected to the unit hypersphere. In addition to L SI , we want to have some discussions about perceptual loss beyond our work. In general, the perceptual loss is computed by L2 distance. However, in most CNNs, inner-product operation is used in fully-connected and convolutional layers. These outputs are related to the feature's norm, weight's norm and the angular between them. Therefore, for different tasks and different metric space (e.g. [21,5,25]), some modifications about computational metric space of perceptual loss are necessary (L SI is one of the cases). Challenges of Training with Super-Identity Loss Super-identity loss imposes an identity level constrain. We examine different training methods as follows: Baseline training approach I. A straightforward way to train our framework is jointly using the L SR , L SI and L F R to train both CN N H and CN N R from scratch. The optimization objective can be represented as: min θ CN N H θ CN N R 1 n n i=1 L SR (I LR i , I HR i ) + αL SI (I LR i , I HR i ) + βL F R (I SR i , I HR i ),(4) where α and β denotes the loss weight of the L SI and L F R respectively, θ CN N H and θ CN N R denotes the learnable parameters. Observation I. This training approach generates artifacts (see Fig. 3, first column) and the loss is too difficult to converge. The reasons may come from: (1) In the early training stage, the hallucinated faces are quite different from HR faces, so the CN N R is too difficult to be optimized from scratch. (2) The objective of L F R (i.e. minimizing the intra-class variance) is different from the objective of L SI and L SR loss (minimizing the pair-wise distance), which is disadvantageous to CN N R and CN N H learning. So, we cannot use the L SI in CN N R learning and also cannot use the L F R in CN N H learning. Baseline training approach II. 
Challenges of Training with Super-Identity Loss

The super-identity loss imposes an identity-level constraint. We examine different training methods as follows:

Baseline training approach I. A straightforward way to train our framework is to jointly use $L_{SR}$, $L_{SI}$, and $L_{FR}$ to train both $CNN_H$ and $CNN_R$ from scratch. The optimization objective can be represented as:

$$\min_{\theta_{CNN_H},\, \theta_{CNN_R}} \frac{1}{n}\sum_{i=1}^{n} L_{SR}(I^{LR}_i, I^{HR}_i) + \alpha L_{SI}(I^{LR}_i, I^{HR}_i) + \beta L_{FR}(I^{SR}_i, I^{HR}_i), \quad (4)$$

where $\alpha$ and $\beta$ denote the loss weights of $L_{SI}$ and $L_{FR}$ respectively, and $\theta_{CNN_H}$ and $\theta_{CNN_R}$ denote the learnable parameters.

Observation I. This training approach generates artifacts (see Fig. 3, first column), and the loss is difficult to converge. The reasons may be the following: (1) in the early training stage, the hallucinated faces are quite different from the HR faces, so $CNN_R$ is too difficult to optimize from scratch; (2) the objective of $L_{FR}$ (minimizing the intra-class variance) differs from the objective of $L_{SI}$ and $L_{SR}$ (minimizing the pair-wise distance), which is disadvantageous for learning $CNN_R$ and $CNN_H$. Consequently, we cannot use $L_{SI}$ for $CNN_R$ learning, and we also cannot use $L_{FR}$ for $CNN_H$ learning.

Baseline training approach II. To solve the above problems, the training approach used for the perceptual loss [12] can be adopted. In particular, we train $CNN_R$ using HR faces and then jointly use $L_{SR}$ and $L_{SI}$ to train $CNN_H$. The joint objective of $L_{SI}$ and $L_{SR}$ can be represented as:

$$\min_{\theta_{CNN_H}} \frac{1}{n}\sum_{i=1}^{n} L_{SR}(I^{LR}_i, I^{HR}_i) + \alpha L_{SI}(I^{LR}_i, I^{HR}_i). \quad (5)$$

Observation II. We make two observations when using this training approach: (1) $L_{SI}$ is difficult to converge; (2) the visual results are noisy (see Fig. 3, second column). To investigate these challenges, we first visualized the learned identity features (after Euclidean normalization, as shown in Fig. 4) and found that a large margin exists between the hallucination domain and the HR domain. We call this challenge the domain divergence problem. It denotes the failure of $CNN_R$, trained only on HR faces, to project faces from the hallucination domain onto a measurable hypersphere identity metric space. In other words, this face recognition model cannot extract effective identity representations for hallucinated faces, which makes $L_{SI}$ very difficult to converge and prone to getting stuck in local minima (i.e., many noisy artifacts appear in the hallucination results).

Fig. 4: The distribution of identity features (after Euclidean normalization) from the hallucination domain (triangles) and the HR domain (dots). The identities are randomly selected from the training set, and different colors denote different identities. We use t-SNE [32] to reduce the dimensions for better visualization. We can observe a large gap between the two domains in the identity metric space.

Algorithm 1 (Mini-batch SGD based domain-integrated training): while not converged, (i) update the recognition model $CNN_R$ by descending its stochastic gradient $\nabla_{\theta_{CNN_R}} \frac{1}{N}\sum_{i=1}^{N} L_{FR}(I^{SR}_i, I^{HR}_i)$; (ii) update the hallucination model $CNN_H$ by descending its stochastic gradient $\nabla_{\theta_{CNN_H}} \frac{1}{N}\sum_{i=1}^{N} \big[ L_{SR}(I^{LR}_i, I^{HR}_i) + \alpha L_{SI}(I^{LR}_i, I^{HR}_i) \big]$.

Baseline training approach III. To overcome the domain divergence challenge, a straightforward alternating training strategy can be used. In particular, we first train $CNN_H$ using only $L_{SR}$. Then we train $CNN_R$ using hallucinated faces and HR faces. Finally, we fine-tune $CNN_H$ jointly using $L_{SR}$ and $L_{SI}$, following baseline training approach II.

Observation III. Although this alternating training strategy may seem to overcome the domain divergence problem, it still produces artifacts (as shown in Fig. 3, third column). The reason is that the hallucination domain keeps changing while $CNN_H$ is being updated: once the hallucination domain has changed, the face recognition model can no longer extract effective and measurable identity representations of hallucinated faces.

In short, the above observations can be summarized as a dynamic domain divergence problem: a large margin exists between the hallucination domain and the HR domain, and the hallucination domain keeps changing as long as the hallucination model keeps learning.

Domain-Integrated Training Algorithm

To overcome the dynamic domain divergence problem, we propose a new training procedure. From the above observations, we see that an alternating training strategy (baseline training approach III) can alleviate the dynamic domain divergence problem, and we propose to perform this alternation in each iteration. More specifically, we first train a $CNN_R$ using HR facial images and a $CNN_H$ using $L_{SR}$.
Then, we use the proposed domain-integrated training approach (Algorithm 1) to fine-tune $CNN_R$ and $CNN_H$ alternately in each iteration. In particular, in each iteration, we first update $CNN_R$ using the recognition loss, which allows $CNN_R$ to produce accurate identity representations for the current mini-batch of faces from both domains. Then, we jointly use $L_{SR}$ and $L_{SI}$ to update $CNN_H$. This training approach encourages $CNN_R$, at every iteration, to maintain a robust mapping from faces to the measurable hypersphere identity metric space for the $L_{SI}$ optimization, no matter how $CNN_H$ changes. This alternating optimization is conducted until convergence. Some hallucination examples are shown in Fig. 3, fourth column, where we observe much better visual results with this training approach.
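The per-iteration alternation of Algorithm 1 can be sketched as follows (a minimal illustration, not the authors' implementation; `sgd_step_r` and `sgd_step_h` are hypothetical helpers standing in for one SGD step on the respective losses):

```python
def domain_integrated_training(cnn_h, cnn_r, batches, sgd_step_r, sgd_step_h, alpha):
    """Sketch of Algorithm 1 (domain-integrated training).

    cnn_h / cnn_r: hallucination and recognition models (assumed callables).
    sgd_step_r(cnn_r, sr, hr): one SGD step descending L_FR on faces from
        both the hallucination and HR domains.
    sgd_step_h(cnn_h, cnn_r, lr, hr, alpha): one SGD step descending
        L_SR + alpha * L_SI through the current recognition model.
    """
    for lr_batch, hr_batch in batches:
        sr_batch = cnn_h(lr_batch)          # current hallucination results
        # Step 1: refresh CNN_R so the hypersphere metric stays measurable
        # for the moving hallucination domain.
        sgd_step_r(cnn_r, sr_batch, hr_batch)
        # Step 2: update CNN_H against the freshly updated identity metric.
        sgd_step_h(cnn_h, cnn_r, lr_batch, hr_batch, alpha)
    return cnn_h, cnn_r
```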
Comparison to Adversarial Training

Domain-Integrated (DI) training and adversarial training [7] are related through their alternating learning strategy, but they are quite different in several aspects: (1) DI training is essentially a cooperative process in which $CNN_H$ collaborates with $CNN_R$ to minimize the identity difference, so the learning objective is the same in each sub-iteration. In adversarial training, by contrast, the generator and the discriminator compete against each other, and the two models optimize opposing objectives as learning alternates. (2) The loss functions and the optimization style are different. In DI training, we minimize $L_{FR}$ for $CNN_R$, constructing a margin-based identity metric space, and then minimize $L_{SI}$ for $CNN_H$, reducing the pair-wise identity difference. In adversarial training, the classification loss is minimized for the discriminator and maximized for the generator.

Experiments

In this section, we first describe the training and testing details. We then perform an ablation study to evaluate the effectiveness of the proposed super-identity loss and domain-integrated training. Next, we compare our method with other state-of-the-art methods and evaluate it on a higher input size. Finally, we evaluate the benefit of our method for low-resolution face recognition.

Training Details

Training data. For a fair comparison with other state-of-the-art methods, we perform face alignment on the facial images. In particular, we use a similarity transformation based on five landmarks detected by MTCNN [41]. We removed the images and identities that overlap between training and testing. For face recognition training, we use web-collected facial images, including CASIA-WebFace [38], CACD2000 [4], CelebA [22], and VGG Faces [24], as Set A. This amounts to roughly 1.5M images of 17,680 unique persons. For face hallucination training, we select 1.1M HR facial images (larger than 96×112 pixels) from the same 1.5M images as Set B.

Training details. For recognition model training, we use Set A with a batch size of 512 and m (the angular margin constraint in Eq. 2) set to 4. The learning rate starts at 0.1 and is divided by 10 at 20K and 30K iterations; training finishes at 35K iterations. For hallucination model training, we use Set B with a batch size of 128. The learning rate starts at 0.02 and is divided by 10 at 30K and 60K iterations; a complete training run finishes at 80K iterations. For domain-integrated training, we use Set B with a batch size of 128 for $CNN_H$ and 256 for $CNN_R$. The learning rate starts at 0.01 and is divided by 10 at 6K iterations; a complete training run finishes at 9K iterations.

Testing Details

Testing data. We randomly select 1,000 identities with 10,000 HR facial images (larger than 96×112 pixels) from the UMD-Face [1] dataset as Set C. This dataset is used for the face hallucination and identity recovery evaluations.

Evaluation protocols. In this section, we perform three kinds of evaluations. For identity recovery, we evaluate the performance of recovering identity information while super-resolving faces. In particular, we use the $CNN_R$ trained on Set A as the identity feature extractor, taking the identity features from the output of the first fully connected layer; we then compute the identity similarity (i.e., cosine similarity) between each hallucinated face and its corresponding HR face on Set C, and report the average similarity over the testing set. For identity recognizability, we evaluate the recognizability of hallucinated faces. In particular, we first downsample Set A to 12×14 pixels as Set A-LR. Then we use different methods to super-resolve Set A-LR to 96×112 pixels, yielding different versions of Set A-SR. Finally, we use each Set A-SR to train a $CNN_R$ and evaluate it on LFW [11] and YTF [36].
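The identity-recovery metric above reduces to a cosine similarity between identity embeddings; a minimal sketch (ours, with assumed feature arrays) is:

```python
import numpy as np

def identity_similarity(feat_sr: np.ndarray, feat_hr: np.ndarray) -> float:
    """Cosine similarity between hallucinated-face and HR-face identity features."""
    return float(np.dot(feat_sr, feat_hr) /
                 (np.linalg.norm(feat_sr) * np.linalg.norm(feat_hr) + 1e-12))

def average_identity_similarity(pairs) -> float:
    """Average identity similarity over a test set of (SR, HR) feature pairs."""
    return float(np.mean([identity_similarity(a, b) for a, b in pairs]))
```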
Fig. 5: Hallucination results for the LR input, the HR ground truth, and models trained with α = 0, 2, 4, 8, 16, and 32.

Ablation Experiment

Loss weight. The hyper-parameter α (see Algorithm 1) dominates identity recovery. To verify the effectiveness of the proposed super-identity loss, we vary α from 0 (i.e., using only the super-resolution loss) to 32 to learn different models. From Tab. 2 and Fig. 5, we observe that a larger α makes the facial images sharper with more details and yields better identity recovery and recognizability, although a too-large α also makes the texture look slightly unnatural. Since the performance of identity recovery and identity recognizability is stable once α exceeds 8, we fix α to 8 in the other experiments.

Table 2: Quantitative comparison of different α on identity recovery and identity recognizability. Larger α brings better performance, which stabilizes once α exceeds 8.

Training approach. We evaluate the different training approaches introduced in Sec. 3.5 and Sec. 3.6. Some visual results are shown in Fig. 3: Domain-Integrated training achieves the best visual results. Moreover, from Tab. 3, Domain-Integrated training also achieves the best identity recovery and identity recognizability.

Evaluation on Face Hallucination

We compare SICNN with other state-of-the-art methods and bicubic interpolation on Set C for face hallucination. In particular, following EnhanceNet [26], we train another UR-DGN, called UR-DGN*, with an additional perceptual loss computed at the end of the second and the last ResBlock of $CNN_R$. All methods are re-trained on the same training set, Set B. Some visual examples are shown in Fig. 6; more visual results are included in our supplementary material. We also report the average Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) in Tab. 4. However, as other works argue [12,26,14], PSNR and SSIM are of limited use for evaluating semantic super-resolution, whereas visual quality and recognizability are more informative. From the visual results, it is clear that our method achieves the best results among all the methods.

We analyze the results as follows: (1) Ma et al.'s method is based on exemplar patches; its results are over-smooth and suffer from obvious blocking for such a low-resolution input with a large up-sampling scale. (2) LapSRN [13] is based on an L2 pixel-wise loss, which makes the hallucinated faces over-smooth. (3) UR-DGN [39] jointly uses a pixel-wise Euclidean loss and an adversarial loss to generate a realistic facial image closest to the average of all potential images; thus, although the generated facial images look realistic, they are quite different from the original HR images. (4) UR-DGN* adds the perceptual loss computed in our $CNN_R$ as a pair-wise semantic loss for identity recovery. Although this pixel-wise + adversarial + perceptual loss combination is the state-of-the-art super-resolution training approach (i.e., EnhanceNet [26]), its results remain inferior to ours.

Evaluation on Higher Input Resolution

For a more comprehensive analysis, we also trained our model on 24×28 inputs with a 4× upscaling factor. Specifically, we modify the hallucination network (i.e., $CNN_H$) by removing the first DB, DeConv, and Conv layers. As shown in Fig. 7, our method achieves very good visual quality on higher-resolution inputs with a 4× upscaling factor. For identity recovery and identity recognizability, our method also achieves very good results: an average identity similarity of 0.8868, an LFW accuracy of 99.21%, and a YTF accuracy of 94.86%, which are very close to the performance on HR faces.

Evaluation on Identity Recovery

We evaluate identity recovery against other state-of-the-art methods. All models in this evaluation are the same as in the previous experiment (i.e., Sec. 4.4). From Tab. 5, we observe that our method achieves the best performance. Besides, we also observe that UR-DGN, trained with a pixel-wise loss and an adversarial loss, performs even worse than LapSRN despite its sharper visual results (see Sec. 4.4). This means that UR-DGN loses some identity information while super-resolving a face, because the adversarial loss is not a pair-wise loss. Adding the perceptual loss (i.e., UR-DGN*), a pair-wise semantic loss, improves the results, but they remain inferior to our method.

Evaluation on Identity Recognizability

Following the last two experiments (i.e., Sec. 4.4 and 4.6), we further evaluate identity recognizability against other state-of-the-art methods. From Tab. 5, we observe that our method achieves the best performance, and we obtain observations similar to the previous experiment. Besides, although several methods (LapSRN, Ma et al., and UR-DGN) obtain better visual results than bicubic interpolation, the identity recognizability of their super-resolved faces is similar or even inferior. This means that these methods cannot generate discriminative faces with better identity recognizability.

Table 6: Face verification performance of different methods on the LFW [11] and YTF [36] benchmarks. It shows that our method helps the recognition model achieve high accuracy with ultra-low-resolution inputs.

Evaluation on Low-Resolution Face Recognition

To evaluate the benefit of our method for low-resolution face recognition, we compare our method (SICNN + $CNN_R$) with other state-of-the-art recognition methods on the LFW [11] and YTF [36] benchmarks. From the results in Table 6, we find that these methods' input sizes are relatively large (area sizes from 15.3× to 298× that of our method).
Moreover, using our face hallucination method, the recognition model can still achieve reasonable results at such an ultra-low resolution. We also tried using unaligned faces in training and testing, and our proposed method still achieves a similar performance improvement.

Conclusion

In this paper, we presented the Super-Identity CNN (SICNN) to enhance identity information when super-resolving face images of size 12×14 pixels with an 8× upscaling factor. Specifically, SICNN aims to minimize the identity difference between the hallucinated face and its corresponding HR face. In addition, we proposed a domain-integrated training approach to overcome the dynamic domain divergence problem when training SICNN. Extensive experiments demonstrate that SICNN not only achieves superior hallucination results but also significantly improves the performance of low-resolution face recognition.

Acknowledgement

This work was supported in part by MediaTek Inc. and the Ministry of Science and Technology, Taiwan, under Grant MOST 107-2634-F-002-007. We also benefit from grants from NVIDIA and the NVIDIA DGX-1 AI Supercomputer.
In this paper, we study the problem of dynamic channel allocation for URLLC traffic in a multi-user multi-channel wireless network where urgent packets have to be successfully received in a timely manner. We formulate the problem as a finite-horizon Markov Decision Process with a stochastic constraint related to the QoS requirement, defined in terms of the packet loss rate of each user. We propose a novel weighted formulation that takes into account both the total expected reward (the number of successfully decoded packets) and the risk, which we define as the violation of the QoS requirement. First, we use the value iteration algorithm to find the optimal policy, assuming the controller has perfect knowledge of all the parameters, namely the channel statistics. We then propose a Q-learning algorithm in which the controller learns the optimal policy without knowledge of either the CSI or the channel statistics. We illustrate the performance of our algorithms with numerical studies.
The issue of deadline-constrained traffic scheduling has been investigated in several works, including @cite_20 @cite_18 @cite_14 @cite_16 . For example, in @cite_16 , the authors study the problem of dynamic channel allocation in a single-user multi-channel system with service costs and deadline-constrained traffic. They propose online algorithms, based on Thompson sampling for multi-armed bandit problems, that enable the controller to learn the optimal policy. MDP frameworks and reinforcement learning approaches for downlink packet scheduling are considered in @cite_20 @cite_14 @cite_13 @cite_4 @cite_8 @cite_12 . In @cite_20 , the authors propose an MDP for the deadline-constrained packet scheduling problem and use dynamic programming to find the optimal scheduling policies; however, they do not consider QoS constraints in the scheduling problem.
{ "abstract": [ "", "This paper studies Ultra-Reliable Low-Latency Communications (URLLC), an important service class of emerging 5G networks. In this class, multiple unreliable transmissions must be combined to achieve reliable latency: a user experiences a frame success when the entire L bits are received correctly within a deadline, and its latency performance is reliable when the frame success rate is above a threshold. When jointly serving multiple users, a natural URLLC scheduling question arises: given the uncertainty of the wireless channel, can we find a scheduling policy that allows all users to meet a target reliable latency objective? This is called the URLLC SLA Satisfaction (USS) problem. The USS problem is an infinite horizon constrained Markov Decision Process, for which, after establishing a convenient property, we are able to derive an optimal policy based on dynamic programming. Our policy suffers from the curse of dimensionality, hence for large instances we propose a class of knapsack-inspired computationally efficient — but not necessarily optimal — policies. We prove that every policy in that class becomes optimal in a fluid regime, where both the deadline and L scale to infinity, while our simulations show that the policies perform well even in small practical instances of the USS problem.", "We consider the problem of resource allocation in downlink OFDMA systems for multi service and unknown environment. Due to users' mobility and intercell interference, the base station cannot predict neither the Signal to Noise Ratio (SNR) of each user in future time slots nor their probability distribution functions. In addition, the traffic is bursty in general with unknown arrival. The probability distribution functions of the SNR, channel state and traffic arrival density are then unknown. Achieving a multi service Quality of Service (QoS) while optimizing the performance of the system (e.g. total throughput) is a hard and interesting task since it depends on the unknown future traffic and SNR values. In this paper we solve this problem by modeling the multiuser queuing system as a discrete time linear dynamic system. We develop a robust H∞ controller to regulate the queues of different users. The queues and Packet Drop Rates (PDR) are controlled by proposing a minimum data rate according to the demanded service type of each user. The data rate vector proposed by the controller is then fed as a constraint to an instantaneous resource allocation framework. This instantaneous problem is formulated as a convex optimization problem for instantaneous subcarrier and power allocation decisions. Simulation results show small delays and better fairness among users.", "In this paper, we study resource allocation in a downlink OFDMA system assuming imperfect channel state information (CSI) at the transmitter. To achieve the individual QoS of the users in OFDMA system, adaptive resource allocation is very important, and has therefore been an active area of research. However, in most of the the previous work perfect CSI at the transmitter is assumed which is rarely possible due to channel estimation error and feedback delay. In this paper, we study the effect of channel estimation error on resource allocation in a downlink OFDMA system. We assume that each user terminal estimates its channel by using an MMSE estimator and sends its CSI back to the base station through a feedback channel. 
We approach the problem by using convex optimization framework, provide an explicit closed form expression for the users' transmit power and then develop an optimal margin adaptive resource allocation algorithm. Our proposed algorithm minimizes the total transmit power of the system subject to constraints on users' average data rate. The algorithm has polynomial complexity and solves the problem with zero optimality gaps. Simulation results show that our algorithm highly improves the system performance in the presence of imperfect channel estimation.", "", "In OFDMA downlink resource allocation, the base station exploits knowledge of the users' channel realizations in order to opportunistically assign users to appropriate subchannels, as well as to optimize the rates and powers across those subchannels. Because reverse-link bandwidth is scarce, the base station's channel knowledge must be obtained via some form of limited feedback. While the typical assumption for this feedback is that it comes in the form of heavily quantized SNR estimates computed at the user terminals, we propose to use ACK NAK feedback that is already provided by higher-layer ARQ. Towards this aim, we propose a greedy resource allocation scheme, based on distributional (rather than point) estimates of SNR. We also show how these SNR distributions can be updated recursively for Markov time-varying channels.1", "In this paper, we study resource allocation and relay selection in a decode-and-forward downlink OFDMA cooperative network assuming imperfect channel state information (CSI) at the base station (BS). We assume that due to feedback delay, the BS has only outdated CSI of BS to users and inter-user links. We propose a centralized optimization framework in which the BS takes decisions on the basis of available outdated CSI given the conditional probability distribution of the current CSI. We approach the problem by using dual optimization framework, decompose it into per-subcarrier optimization problem, and develop an optimal resource allocation and relay selection algorithm. The proposed algorithm minimizes the total transmit power of the system by jointly selecting the best relay node for each source-destination pair, and optimally allocating power and subcarriers to the source and relay nodes under constraints on users' conditional expected data rates. The duality gaps of the solution are virtually zero. Furthermore, we show that the per-subcarrier decomposition of the problem reduces the complexity of the solution from exponential to polynomial. Simulation results show that the system performance may be badly affected by feedback delay. Simulation results also show that users' cooperation significantly reduces the power consumption compared to non-cooperative communication.", "Next-generation cellular wireless communication networks aim to provide a variety of quality-of-service (QoS)-sensitive packet-based services to downlink users. Included among these are real-time multimedia services, which have stringent delay requirements. Downlink packet scheduling at the base station plays a key role in efficiently allocating system resources to meet the desired level of QoS for various users. In this paper, we employ dynamic programming (DP) to study the design of a downlink packet scheduler capable of supporting real-time multimedia applications. Under well-justified modeling reductions, we extensively characterize structural properties of the optimal control associated with the DP problem. 
We leverage intuition gained from these properties to propose a heuristic scheduling policy, namely, Channel-Aware Earliest Due Date (CA-EDD), which is based on a \"quasi- static\" approach to scheduling. The per-time-slot implementation complexity of CA-EDD is only O(K) for a system with K downlink users. Experimental results show that CA-EDD delivers up to 50 percent of performance gains over benchmark schedulers. CA-EDD achieves these performance gains by using channel and deadline information in conjunction with application layer information (relative importance of packets) in a systematic and unified way for scheduling." ], "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_8", "@cite_16", "@cite_13", "@cite_12", "@cite_20" ], "mid": [ "", "2803466884", "2136340918", "1987954156", "", "2115334467", "2144593186", "2109672207" ] }
Risk-Sensitive Reinforcement Learning for URLLC Traffic in Wireless Networks
In fifth generation (5G) wireless networks, there are new service categories with heterogeneous and challenging requirements, among them Ultra-Reliable Low-Latency Communication (URLLC) traffic [6], designed for delay- and reliability-sensitive applications such as real-time remote control, autonomous driving, and mission-critical traffic. For URLLC traffic, the End-to-End (E2E) latency defined by 3GPP is lower than 1 ms, along with a reliability requirement of $1 - 10^{-5}$ to $1 - 10^{-9}$ [6], [15]. A plausible way to address the latency requirement is to transmit without Channel State Information (CSI) knowledge at the transmitter side. To increase reliability, exploiting frequency diversity is beneficial: the same packet is transmitted in parallel over different subcarriers of an Orthogonal Frequency Division Multiplexing (OFDM) system, where each subcarrier experiences different channel characteristics. However, this solution is costly in terms of system capacity. Therefore, the number of parallel transmissions should not be fixed in advance but should rather be variable, depending on many parameters such as the position of a user in the cell or the statistics of its packet losses over the previous time slots. For example, a user that experienced a high number of packet losses in the previous time slots should be allocated a high number of subchannels to increase its success probability, whereas a user with a low number of dropped packets may be assigned a low number of subcarriers. Hence, it is crucial to design efficient dynamic schemes able to adapt the number of parallel transmissions of each user to its experienced QoS.

In this work, we study the problem of dynamic channel allocation for URLLC traffic in a multi-user multi-channel wireless network under QoS constraints. A channel here refers to a frequency band or a subcarrier in an OFDM system, and the QoS is related to the packet loss rate of each user, defined as the average fraction of dropped packets. Besides, we introduce a notion of risk related to the violation of the QoS requirements: a risk occurs or, equivalently, a risk state is reached when the QoS requirement is violated for a user. Furthermore, we consider that the transmitter has neither the CSI nor the channel statistics at the transmission moment. In fact, due to the urgency of URLLC packets mentioned previously, there is not enough time for the BS to perform channel estimation and probing as in conventional wireless communications.

B. Addressed Issues and Contribution

In this work, we address the following issues:
• We formulate the dynamic channel allocation problem for URLLC traffic as a finite-horizon MDP wherein the state represents the QoS of the users, that is, the average number of dropped packets or packet loss rate of the users. The decision variable is the number of channels to assign to each user. We define a risk state as any state where the QoS requirement is violated for at least one user, and we define a stochastic constraint related to the risk-state visitation probability.
• Assuming the channel statistics are known to the controller, we use the finite-horizon value iteration algorithm to find the optimal policy for the weighted formulation of the problem, which takes into account both the total expected reward over the planning horizon and the risk criterion (the QoS requirement violation probability).
• When the channel statistics are unknown to the controller, we propose a reinforcement learning algorithm (Q-learning) for the weighted formulation of the problem, which enables the controller to learn the optimal policy. We illustrate the performance of our algorithms with numerical studies.

C. Paper Structure

In Section II, we present the system model for the multi-user multi-channel wireless network with URLLC packets and time-varying channels, along with the QoS definition. In Section III, we introduce the constrained MDP formulation with all its components. In Section IV, we present both the finite-horizon value iteration algorithm and the reinforcement learning algorithm. Section V is devoted to numerical results. Finally, we conclude the paper in Section VI.

II. SYSTEM MODEL

We consider a multi-user multi-channel wireless network where URLLC packets have to be transmitted over time-varying, fading channels. Due to the strict latency requirement of URLLC packets in 5G networks mentioned previously, there is not enough time for the BS to estimate the channel, and the packets are thus immediately transmitted in the absence of CSI at the transmitter side. When a packet is successfully decoded, the receiver sends an acknowledgment feedback, which is assumed to be instantaneous and error-free. We consider a centralized controller which dynamically distributes the channels to the users based on their QoS (see Fig. 1).

Fig. 1: The centralized controller assigns ℓ_k(t) channels to each user k = 1, .., K with packet arrivals a_k(t), and observes the resulting packet loss rates ρ_k(t+1).

Furthermore, we make the following assumptions:

Packet arrival process: the packet arrival process is an independent and identically distributed (i.i.d.) random process over a finite set I = {0, 1, .., A_max}, where A_max is a positive constant, and is identical for all users. Let α_a denote the probability that a ∈ I packets arrive for a given user at the beginning of a time slot.

Deadline-constrained traffic: given the strict URLLC latency requirement specified by 3GPP (lower than 1 ms), each packet has a lifetime of one time slot and can either be served or dropped: if there are available channels, the packet is transmitted; otherwise, it is dropped, because after one time slot it becomes outdated and useless. Furthermore, one packet is transmitted per channel.

Channel model: we consider i.i.d. Bernoulli channels with mean µ ∈ [0, 1]. In millimeter-wave communications, links are characterized by their intermittence and high sensitivity, and this channel model reflects the existence of a line-of-sight (LOS) channel state [4], [8]. To increase reliability, a user can be assigned more channels than its number of waiting packets (depending on its experienced QoS); some packets are then simultaneously sent over multiple parallel channels.

Channel split: for each user, all packets are equally important: when the number of available channels is larger than the number of waiting packets, we assume that some packets are picked uniformly at random to be replicated. A packet is obviously more likely to be successfully transmitted when sent over many channels simultaneously. However, assigning more channels to a user affects the QoS experienced by the other users. Note that the channel split across the packets (which occurs in the same manner for all users) should not be confused with the channel split across the users (which takes into account the QoS perceived by the users).
For user k, the distribution of the available channels $\ell_k$ over the waiting packets $a_k$ occurs as follows: each packet is transmitted over $(\ell_k \wedge a_k)$ channels and may furthermore be replicated once with probability $\frac{\ell_k \vee a_k}{a_k}$, where the symbol $\ell_k \wedge a_k$ denotes the largest integer $m$ such that $m\,a_k \le \ell_k$, and $\ell_k \vee a_k$ denotes the remainder of the division of $\ell_k$ by $a_k$. The probability that a packet is successfully transmitted, given that there are $a_k$ waiting packets at the transmitter and $\ell_k$ assigned channels, can then be expressed as

$$\nu_k(a_k, \ell_k) = \left(1 - \frac{\ell_k \vee a_k}{a_k}\right)\left[1 - (1-\mu)^{(\ell_k \wedge a_k)}\right] + \frac{\ell_k \vee a_k}{a_k}\left[1 - (1-\mu)^{1+(\ell_k \wedge a_k)}\right]. \quad (1)$$

The expected number of successfully transmitted packets for user k is then given by

$$\mathbb{E}\left[N_k(\ell_k)\right] = \sum_{a_k \in I} a_k\, \alpha_{a_k}\, \nu_k(a_k, \ell_k). \quad (2)$$

QoS criterion: for each user k, we define the packet loss rate at time slot t, $\rho_k(t)$, as

$$\rho_k(t) = \frac{1}{t} \sum_{i=0}^{t-1} \frac{n_k(i)}{a_k(i)}, \quad t \ge 1, \quad (3)$$

where $n_k(t)$ denotes the number of lost packets for user k at time slot t. Note that $\rho_k \in [0, 1]$, since $n_k(t) \le a_k(t)$. A packet is lost when either of the two following events occurs: (i) it is not transmitted because of insufficient available channels, or (ii) it is transmitted but the ACK feedback is not received. The parameter $\rho_k$ reflects the QoS perceived by user k: higher values of $\rho_k$ mean a higher number of lost packets and poor QoS, whereas lower values of $\rho_k$ mean good QoS. To ensure good QoS, the resource allocation scheme should take into account the QoS experienced by the users and keep this parameter within an acceptable range for all users. Finally, the decision variable is the number of channels assigned to each user k at each time slot, denoted by $\ell_k$, which satisfies

$$\sum_{k=1}^{K} \ell_k(t) = L, \quad (4)$$

where L denotes the number of available channels.

III. CONSTRAINED MDP FRAMEWORK

The stochastic nature of the wireless channel incites us to consider an MDP framework to solve the decision problem. In this section, we first introduce the constrained MDP formulation along with its components. We then derive the optimality equations.

A. Model Formulation

We define the following finite-horizon MDP, where the symbol × stands for the Cartesian product:
• State Space: the finite set T × S, where T = {0, .., T}, S = {ρ_1 × .. × ρ_K}, and ρ_k for k = 1, .., K is defined in (3).
• Action Space: the finite set L = {(ℓ_1, .., ℓ_K) satisfying (4)}, where ℓ_k denotes the number of channels assigned to user k.
• Reward: we define the reward r at time slot t, when the controller chooses action ℓ ∈ L in state $s_t$, as the expected total number of successfully transmitted packets over all users (see the sketch after this list), that is,

$$r(s_t, \ell) = \mathbb{E}\left[\sum_{k=1}^{K} N_k(\ell_k)\right]. \quad (5)$$

Note that the reward depends only on the number of channels allocated to each user (the action), and not on the current state $s_t$. Besides, the reward is a non-linear function of the action.
• Transition Probabilities: first, we define the probability that n packets are lost for user k, as a function of the number of waiting packets $a_k$ and the number of assigned channels $\ell_k$ at a given time slot:

$$\sigma_k(n, a_k, \ell_k) = \binom{a_k}{n} \left(1 - \nu_k(a_k, \ell_k)\right)^{n} \nu_k(a_k, \ell_k)^{a_k - n},$$

where $n \le a_k$ and $\binom{a_k}{n}$ denotes the binomial coefficient. The state transition probability for user k is given by

$$p(\rho'_k \mid \rho_k(t), \ell_k) = \alpha_{a_k}\, \sigma_k(n, a_k, \ell_k), \quad (6)$$

where

$$\rho'_k = \frac{t}{t+1}\, \rho_k + \frac{1}{t+1}\, \frac{n}{a_k}. \quad (7)$$

Finally, letting $s_{t+1} = \rho'_1 \times .. \times \rho'_K$ and $s_t = \rho_1 \times .. \times \rho_K$, the transition probability from state $s_t$ to state $s_{t+1}$ when action ℓ is taken is given by

$$p(s_{t+1} \mid s_t, \ell) = \prod_{k=1}^{K} p(\rho'_k \mid \rho_k(t), \ell_k). \quad (8)$$
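As a sanity check of Eqs. (1), (2), and (5), the following minimal Python sketch (ours; the arrival pmf is passed in as an array and is an assumption here, since the paper's truncated Poisson only appears later in Eq. (25)) computes the per-packet success probability and the expected reward of an allocation:

```python
import numpy as np

def success_prob(a_k: int, l_k: int, mu: float) -> float:
    """Eq. (1): success probability of a packet with a_k waiting packets, l_k channels."""
    if a_k == 0 or l_k == 0:
        return 0.0
    base = l_k // a_k        # (l_k ∧ a_k): channels that every packet receives
    rem = l_k % a_k          # (l_k ∨ a_k): packets that receive one extra channel
    p_base = 1.0 - (1.0 - mu) ** base
    p_extra = 1.0 - (1.0 - mu) ** (base + 1)
    return (1.0 - rem / a_k) * p_base + (rem / a_k) * p_extra

def expected_successes(l_k: int, mu: float, arrival_pmf: np.ndarray) -> float:
    """Eq. (2): expectation of N_k over the arrival distribution alpha_a."""
    return sum(a * arrival_pmf[a] * success_prob(a, l_k, mu)
               for a in range(len(arrival_pmf)))

def reward(allocation, mu, arrival_pmf) -> float:
    """Eq. (5): expected total successes over all users for an action (l_1,..,l_K)."""
    return sum(expected_successes(l, mu, arrival_pmf) for l in allocation)
```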
Given the strict requirements of URLLC packets described earlier, we introduce in the following the notion of a risk state.

Definition 1. We define a risk state as any state where $\rho_k > \rho_{max}$ for some $k \in \{1, .., K\}$, where $\rho_{max} > 0$ is a constant fixed by the controller. The set of risk states Φ is then

$$\Phi = \{\rho_1 \times .. \times \rho_K \text{ such that } \exists\, k \text{ with } \rho_k > \rho_{max}\}.$$

Besides, a risk state is an absorbing state, that is, the process ends when it reaches a risk state [12].

A deterministic policy π assigns an action to each state at each time step. Our goal is to find an optimal deterministic policy π* which maximizes the total expected reward

$$V^{\pi}_T(s) = \mathbb{E}^{\pi}\left[\sum_{t=0}^{T} r(s_t, \pi(s_t)) \,\Big|\, s_0 = s\right], \quad (9)$$

with the reward r defined in (5), while satisfying the QoS constraint

$$\eta^{\pi}(s) < w, \quad (10)$$

where $\eta^{\pi}(s)$ denotes the probability of visiting a risk state over the planning horizon, given that the initial state (at time slot 0) is s and policy π is followed, and w is a positive constant. Formally,

$$\eta^{\pi}(s) = P^{\pi}(\exists\, t \text{ such that } s_t \in \Phi \mid s_0 = s). \quad (11)$$

In order to explicitly characterize $\eta^{\pi}(s)$, we introduce in the following the risk signal $\bar{r}$.

Definition 2. We define the risk signal $\bar{r}$ as

$$\bar{r}(s_t, \ell_t, s_{t+1}) = \begin{cases} 1 & \text{if } s_{t+1} \in \Phi \\ 0 & \text{otherwise,} \end{cases} \quad (12)$$

where $s_t$ and $\ell_t$ denote the state and action at time slot t, respectively, and $s_{t+1}$ denotes the subsequent state.

Proposition 1. The probability of visiting a risk state, $\eta^{\pi}(s)$, is given by

$$\eta^{\pi}(s) = \bar{V}^{\pi}_T(s), \quad (13)$$

where we set

$$\bar{V}^{\pi}_T(s) = \mathbb{E}^{\pi}\left[\sum_{t=0}^{T} \bar{r}(s_t, \pi(s_t), s_{t+1}) \,\Big|\, s_0 = s\right]. \quad (14)$$

Proof. The random sequence $\bar{r}(t=0), \bar{r}(t=1), .., \bar{r}(t=T)$ contains at most one 1, which occurs if a risk state is visited; otherwise, all its components are equal to zero (recall that a risk state is absorbing). Therefore, $\sum_{t=0}^{T} \bar{r}(t)$ is a Bernoulli random variable with mean equal to the probability of reaching a risk state, that is, relation (13) holds.

B. Optimality Equations

By virtue of Proposition 1, we associate a state value function $\bar{V}^{\pi}_T$ with the probability of visiting a risk state. Now we define a new weighted value function $V^{\pi}_{\xi,T}$, which incorporates both the reward and the risk, as follows:

$$V^{\pi}_{\xi,T}(s) = \xi V^{\pi}_T(s) - \bar{V}^{\pi}_T(s), \quad (15)$$

where ξ > 0 is the weighting parameter, determined by the risk level the controller is willing to tolerate. The function $V^{\pi}_{\xi,T}$ can be seen as a standard value function associated with the reward $\xi r - \bar{r}$. The case ξ = 0 corresponds to a minimum-risk policy, whereas the case ξ → ∞ corresponds to a maximum-value policy. Let Π denote the set of deterministic policies, and define

$$V^{*}_T(s) = \max_{\pi \in \Pi} V^{\pi}_T(s), \quad \bar{V}^{*}_T(s) = \min_{\pi \in \Pi} \bar{V}^{\pi}_T(s), \quad V^{*}_{\xi,T}(s) = \max_{\pi \in \Pi} V^{\pi}_{\xi,T}(s).$$

Besides, we define $u^{\pi}_t$, $\bar{u}^{\pi}_t$, and $u^{\pi}_{\xi,t}$ for $0 \le t \le T$ respectively by

$$u^{\pi}_t(s) = \mathbb{E}^{\pi}\left[\sum_{i=t}^{T} r(s_i, \pi(s_i)) \,\Big|\, s_t = s\right], \quad (16)$$

$$\bar{u}^{\pi}_t(s) = \mathbb{E}^{\pi}\left[\sum_{i=t}^{T} \bar{r}(s_i, \pi(s_i), s_{i+1}) \,\Big|\, s_t = s\right], \quad (17)$$

$$u^{\pi}_{\xi,t}(s) = \xi u^{\pi}_t(s) - \bar{u}^{\pi}_t(s). \quad (18)$$

Note that $V^{\pi}_T$ incorporates the total expected reward over the entire planning horizon, whereas $u_t$ incorporates the rewards from decision epoch t to the end of the planning horizon only. Besides, $\bar{u}_t(s)$ is the probability of visiting a risk state given that at time t the system is in state $s \in S \setminus \Phi$, and is thus a measure of the risk.
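Proposition 1 also suggests a simple Monte-Carlo estimator of the risk: averaging the cumulative risk signal over simulated episodes estimates $\eta^{\pi}(s)$. A minimal sketch (ours; the `env` simulator and `policy` callable are assumed helpers, not part of the paper):

```python
def estimate_risk(env, policy, s0, T, n_episodes=10_000) -> float:
    """Monte-Carlo estimate of eta^pi(s0) via Eqs. (13)-(14)."""
    hits = 0
    for _ in range(n_episodes):
        s = env.reset_to(s0)
        for t in range(T):
            s, _, r_bar = env.step(policy(t, s))   # r_bar: risk signal of Eq. (12)
            if r_bar == 1:                         # absorbing risk state reached
                hits += 1
                break
    return hits / n_episodes
```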
The optimality equations are given by (the proof is similar to that in [18], Chap. 4, and is skipped here for brevity):

$$u^{*}_t(s) = \max_{\ell \in L}\left\{ r(s_t, \ell) + \sum_{j \in S} p(j \mid s_t, \ell)\, u^{*}_{t+1}(j) \right\}, \quad (19)$$

$$\bar{u}^{*}_t(s) = \min_{\ell \in L} \sum_{j \in S} p(j \mid s_t, \ell)\left[ \bar{r}(s_t, \ell, j) + \bar{u}^{*}_{t+1}(j) \right], \quad (20)$$

$$u^{*}_{\xi,t}(s) = \max_{\ell \in L} \sum_{j \in S} p(j \mid s_t, \ell)\left[ \xi r(s_t, \ell) - \bar{r}(s_t, \ell, j) + u^{*}_{\xi,t+1}(j) \right], \quad (21)$$

for t = 0, .., T − 1. For the boundary conditions, that is, at time slot T, $u^{*}_T(s)$, $\bar{u}^{*}_T(s)$, and $u^{*}_{\xi,T}(s)$ are set to zero for each s ∈ S. In a non-risk state, the reward r is given by (5) and the risk signal is equal to zero, whereas in a risk state the reward r is set to zero and the risk signal $\bar{r}$ is set to one.

IV. ALGORITHM DESIGN

In this section, we present two algorithms: (i) a finite-horizon value iteration algorithm, which assumes that all the model parameters are known to the controller, namely the channel statistics (a model-based algorithm), and (ii) a reinforcement learning algorithm, which does not require knowledge of the channel statistics (a model-free algorithm).

A. Value Iteration Algorithm

In order to find a policy that maximizes the weighted value function defined in (15), we use the value iteration algorithm [18] (Algorithm 1: Finite-Horizon Value Iteration). In this algorithm, we proceed backwards: we start by determining the optimal action at time slot T for each state, and successively consider the previous stages until reaching time slot 0.
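A minimal backward-induction sketch of the finite-horizon value iteration for the weighted objective (21) is given below (ours, not the paper's listing; the arrays `P`, `R`, and `risk` are assumed to be precomputed from Eqs. (8), (5), and (12)):

```python
import numpy as np

def value_iteration(P, R, risk, T, xi):
    """Backward induction for the weighted optimality equation (21).

    P[s][a][j]: transition probability p(j | s, a), from Eq. (8).
    R[s][a]:    expected reward r(s, a), from Eq. (5).
    risk[j]:    1 if state j belongs to the risk set Phi, else 0.
    Returns the weighted values u[t][s] and a greedy policy pi[t][s].
    """
    S, A = len(R), len(R[0])
    u = np.zeros((T + 1, S))            # boundary condition: u*_{xi,T} = 0
    pi = np.zeros((T, S), dtype=int)
    for t in range(T - 1, -1, -1):      # proceed backwards from T-1 down to 0
        for s in range(S):
            if risk[s]:                 # risk states are absorbing with zero value
                continue
            q = [sum(P[s][a][j] * (xi * R[s][a] - risk[j] + u[t + 1][j])
                     for j in range(S))
                 for a in range(A)]
            pi[t, s] = int(np.argmax(q))
            u[t, s] = max(q)
    return u, pi
```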
B. Risk-Sensitive Reinforcement Learning Algorithm

During the learning phase, the controller builds estimates of the value of each state-action pair. It updates its estimates through interaction with the environment: at each iteration, it performs an action and then observes the reward, the risk signal $\bar{r}$, and the next state (see Fig. 2). The learning controller chooses an action at each learning step following the ε-greedy policy, that is, it selects an action that maximizes its current estimate with probability 1 − ε, or a random action with probability ε. The parameter ε captures the exploration-exploitation trade-off: when ε → 0, the controller tends to choose an action that maximizes the estimated value of its current state, whereas when ε → 1, the controller tends to choose an action at random, favoring exploration.

The state-action value function is given by [19], [21]

$$Q^{\pi}(s_t, \ell) = r(s_t, \ell) + \sum_{j \in S} p(j \mid s_t, \ell)\, u^{\pi}_{t+1}(j),$$

where the first term is the immediate reward, that is, the expected number of successfully transmitted packets over all users when action ℓ is performed in state $s_t$, and the second term is the expected reward when policy π is followed in the subsequent decision stages. Similarly to the state-action value function associated with the reward, we define the state-action value function associated with the risk, $\bar{Q}^{\pi}$, as

$$\bar{Q}^{\pi}(s_t, \ell) = \sum_{j \in S} p(j \mid s_t, \ell)\left[ \bar{r}(s_t, \ell, j) + \bar{u}^{\pi}_{t+1}(j) \right].$$

Note that the introduction of the risk signal $\bar{r}$ enables us to define a state-action value function $\bar{Q}$ for the risk. Besides, the state-action value function associated with the weighted formulation, $Q^{\pi}_{\xi}$, is given by

$$Q^{\pi}_{\xi}(s_t, \ell) = \xi Q^{\pi}(s_t, \ell) - \bar{Q}^{\pi}(s_t, \ell).$$

Finally, the Q-function updates at learning step n (which should not be confused with the decision epoch t) are given by [21]

$$Q^{(n+1)}(s_t, \ell) \leftarrow \left(1 - \alpha_n(s_t, \ell)\right) Q^{(n)}(s_t, \ell) + \alpha_n(s_t, \ell)\left[ r + \max_{\ell' \in L} Q^{(n)}(s_{t+1}, \ell') \right], \quad (22)$$

$$\bar{Q}^{(n+1)}(s_t, \ell) \leftarrow \left(1 - \alpha_n(s_t, \ell)\right) \bar{Q}^{(n)}(s_t, \ell) + \alpha_n(s_t, \ell)\left[ \bar{r} + \min_{\ell' \in L} \bar{Q}^{(n)}(s_{t+1}, \ell') \right], \quad (23)$$

and

$$Q^{(n+1)}_{\xi}(s_t, \ell) \leftarrow \left(1 - \alpha_n(s_t, \ell)\right) Q^{(n)}_{\xi}(s_t, \ell) + \alpha_n(s_t, \ell)\left[ \xi r - \bar{r} + \max_{\ell' \in L} Q^{(n)}_{\xi}(s_{t+1}, \ell') \right], \quad (24)$$

where $\alpha_n(s_t, \ell)$ denotes the learning-rate parameter at step n when state $s_t$ and action ℓ are visited.

The learning algorithm converges to the optimal state-action value function when each state-action pair is performed infinitely often and when the learning rate satisfies, for each $(s_t, \ell)$ pair, $\sum_n \alpha_n(s_t, \ell) = \infty$ and $\sum_n \alpha^2_n(s_t, \ell) < \infty$ (the proof is given in [7], [21] and skipped here for brevity). In this case, the Q-functions are related to the value functions as follows:

$$\max_{\ell \in L} Q(s_t, \ell) = u^{*}_t(s_t), \quad \min_{\ell \in L} \bar{Q}(s_t, \ell) = \bar{u}^{*}_t(s_t), \quad \max_{\ell \in L} Q_{\xi}(s_t, \ell) = u^{*}_{\xi,t}(s_t).$$

When a risk state is reached during the learning phase, the system is restarted to a non-risk state chosen according to the uniform distribution. In addition, when t ≥ T, we consider that an artificial absorbing state is reached and we reinitialize t (see Algorithm 2).

Algorithm 2: Q-learning Algorithm
1: Initialization: t ← 0, s_0 ← s, n ← 1
2: for each ℓ ∈ L do
3:   Q(s_0, ℓ) ← 0, $\bar{Q}$(s_0, ℓ) ← 0, Q_ξ(s_0, ℓ) ← 0
4: end for
5: repeat
6:   observe the current state s_t
7:   select and perform action ℓ in state s_t
8:   observe the new state s_{t+1}, the reward r, and the risk signal $\bar{r}$
9:   update the Q-functions Q(s_t, ℓ), $\bar{Q}$(s_t, ℓ), Q_ξ(s_t, ℓ) according to (22), (23), (24), respectively
10:  t ← t + 1
11:  n ← n + 1
12:  update α_n
13:  if t = T, then t ← 0 (artificial absorbing state reached)
14:  if s_t ∈ Φ, then s_t ∼ Unif{S∖Φ} (absorbing risk state reached)
15: until convergence
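The weighted update (24) and the restart logic of Algorithm 2 can be sketched as follows (our illustration; the `env` object and its methods are assumptions standing in for a simulator of the multi-channel system):

```python
import numpy as np

def q_learning(env, n_states, n_actions, T, xi, eps=0.1, gamma=0.7, n_steps=100_000):
    """Tabular Q-learning for the weighted objective, following Algorithm 2.

    env.step(a) is assumed to return (next_state, reward, risk_signal),
    with the risk signal defined in Eq. (12).
    """
    Q_xi = np.zeros((n_states, n_actions))
    visits = np.zeros((n_states, n_actions))
    s, t = env.reset(), 0
    for _ in range(n_steps):
        # epsilon-greedy action selection
        if np.random.rand() < eps:
            a = np.random.randint(n_actions)
        else:
            a = int(np.argmax(Q_xi[s]))
        s_next, r, r_bar = env.step(a)
        visits[s, a] += 1
        alpha = 1.0 / (1.0 + visits[s, a]) ** gamma   # learning rate of Eq. (26)
        # weighted update of Eq. (24)
        target = xi * r - r_bar + np.max(Q_xi[s_next])
        Q_xi[s, a] += alpha * (target - Q_xi[s, a])
        s, t = s_next, t + 1
        if t >= T:                   # artificial absorbing state: restart the episode
            s, t = env.reset(), 0
        elif r_bar == 1:             # risk state reached: restart uniformly in S \ Phi
            s = env.reset_uniform_nonrisk()
    return Q_xi
```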
V. PERFORMANCE EVALUATION

In this section, we present the numerical results obtained with the value iteration and learning algorithms in a variety of scenarios. We consider a setting with two users and L = 5 channels. For the arrival traffic, we consider the following truncated Poisson distribution:

$$\mathrm{Prob}(a = m) = \begin{cases} \dfrac{\lambda^m / m!}{\sum_{i=0}^{A_{max}} \lambda^i / i!} & \text{if } m \le A_{max} \\ 0 & \text{otherwise,} \end{cases} \quad (25)$$

where λ = 3 and $A_{max}$ = 6. Throughout this section, the mean of the Bernoulli channel µ and the parameter $\rho_{max}$ are fixed to 0.6 and 0.55, respectively.

A. Minimum-risk vs. maximum-value policy

First, we compare the performance of the minimum-risk policy $\bar{\pi}^*$ (obtained when ξ = 0), the maximum-value policy $\pi^*$ (obtained when ξ → ∞), the weighted policy $\pi^*_\xi$ (when ξ > 0), and the fixed policy $\pi_f$, which consists in assigning the same number of channels to each user at each time slot (ℓ_1 = 2 and ℓ_2 = 3). We depict in Fig. 3 (top) the reward $u_t(s)$ given in (19) as a function of time, when s = 0.3 × 0 and the different policies are followed. We observe that the maximum-value policy clearly outperforms the fixed and minimum-risk policies. In Fig. 3 (bottom), which shows $\bar{u}_t(s)$ given in (20), we observe that the probability of visiting a risk state under the fixed policy is much higher than under the minimum-risk policy $\bar{\pi}^*$. For example, at time step t = 5, $\bar{u}_t(s)$ equals 0.42 under the policy $\pi_f$, whereas this value reduces to 0.02 under the policy $\bar{\pi}^*$. In fact, the fixed policy does not take into account the QoS experienced by the users, and it is therefore the policy with the highest risk-state visitation probability. Besides, this probability decreases over time for all the policies: as time goes on, the probability of entering a risk state over the remaining time steps decreases. The reward $u_t(s)$ increases at the lower values of t until reaching a maximum value and then decreases, for all the policies. In fact, for lower values of t, the probability of visiting a risk state is high, which lowers the expected reward (recall that in a risk state the reward equals zero). As time goes on, this probability decreases, and the expected reward therefore increases. However, at the later time steps, the number of remaining decision stages is low, and hence the expected reward (the total number of successfully transmitted packets over the remaining time slots) decreases.

Fig. 3: Performance of the minimum-risk policy $\bar{\pi}^*$, the maximum-value policy $\pi^*$, the weighted policy $\pi^*_\xi$ with ξ = 0.1, and the fixed policy $\pi_f$: $u_t(s)$ (top) and $\bar{u}_t(s)$ (bottom), where s = 0.3 × 0 and T = 9.

B. Learning

In the learning algorithm, we simulate the wireless channel with a Bernoulli random variable with a number of trials equal to the number of channels associated with each packet of each user. For the learning-rate parameter $\alpha_n$, we considered the following expression [11]:

$$\alpha_n = \frac{1}{\left(1 + n(s_t, \ell)\right)^{\gamma}}, \quad (26)$$

where $n(s_t, \ell)$ denotes the number of times the state-action pair $(s_t, \ell)$ has been visited up to iteration n, and γ is a positive parameter in [0.5, 1] [11].

We depict in Fig. 4 the optimal (minimum-risk) policy computed by the learning algorithm, that is, the number of channels $\ell_1 \in \{0, .., 5\}$ to assign to user 1, as a function of the time step (decision epoch) and of $\rho_1$, with $\rho_2$ fixed to 0. The figure shows a monotonicity property: the number of channels to assign to user 1 increases with time and with $\rho_1$. In fact, as the QoS of user 1 degrades ($\rho_1$ increases), more channels are assigned to it to compensate for this degradation; and as time goes on, the policy becomes more sensitive to this degradation, assigning more channels for the same values of $\rho_1$ at later time steps.

VI. CONCLUSION

In this work, we studied the problem of dynamic channel allocation for URLLC traffic in a multi-user multi-channel wireless network within a novel framework. Due to the stochastic nature of the problem, related to the time-varying, fading channels and the random arrival traffic, we considered a finite-horizon MDP framework. We explicitly determined the probability of visiting a risk state and wrote it as a cumulative return (risk signal). We then introduced a weighted global value function which incorporates two criteria: reward and risk. Using the value iteration algorithm, we determined the optimal policy. Furthermore, we used a Q-learning algorithm to enable the controller to learn the optimal policy in the absence of channel statistics. We illustrated the performance of our algorithms with numerical studies and showed that, by adapting the number of parallel transmissions in a smart way, the performance of the system can be substantially enhanced. In future work, we would like to take spatial diversity into account in the dynamic allocation scheme, where both the BS and the user terminals can be equipped with multiple antennas to enhance the system performance.
Most risk-sensitive approaches analyze higher-order statistics of the reward beyond its average, such as its variance @cite_19 @cite_2 @cite_0 @cite_17 . For instance, risk-sensitive reinforcement learning is studied in @cite_5 for millimeter-wave communications, to jointly optimize the beamwidth and the transmit power. The authors consider a utility (data rate) that incorporates both the average and the variance in order to capture the tail of the rate distribution, which is relevant to the reliability requirement of URLLC traffic. However, the authors do not exploit frequency diversity.
{ "abstract": [ "In the context of standard Markov decision processes (MDPs), the connection between Dynamic Program (DP) and Linear Program (LP) is well understood and is well established under sufficiently general conditions. LP based approach facilitates solving the constrained MDPs. Multiplicative or Risk sensitive MDPs, introduced to control the fluctuations variations around the expected value, are relatively less studied objects. DP equations are considerably well understood even in the context of Risk MDPs, however the LP connection is not known. We consider a finite horizon risk MDP problem and establish the connections between the DP and LP approaches. We augment the state space with a suitable component, to obtain the optimal policies for constrained risk MDPs. We apply this results to a server selection problem in Ber M K K queues, with a constraint on the utilization of the fast server. We discuss some interesting structural properties of the risk optimal policies.", "Ensuring ultra-reliable and low-latency communication (URLLC) for 5G wireless networks and beyond is of capital importance and is currently receiving tremendous attention in academia and industry. At its core, URLLC mandates a departure from expected utility-based network design approaches, in which relying on average quantities (e.g., average throughput, average delay and average response time) is no longer an option but a necessity. Instead, a principled and scalable framework which takes into account delay, reliability, packet size, network architecture, and topology (across access, edge, and core) and decision-making under uncertainty is sorely lacking. The overarching goal of this article is a first step to fill this void. Towards this vision, after providing definitions of latency and reliability, we closely examine various enablers of URLLC and their inherent tradeoffs. Subsequently, we focus our attention on a plethora of techniques and methodologies pertaining to the requirements of ultra-reliable and low-latency communication, as well as their applications through selected use cases. These results provide crisp insights for the design of low-latency and high-reliable wireless networks.", "Variance-penalized Markov decision processes (MDPs) for an infinite time horizon have been studied in the literature for asymptotic and one-step variance; in these models, the objective function is generally the expected long-run reward minus a constant times the variance, where variance is used as a measure of risk. For the finite time horizon, asymptotic variance has been considered in Collins [1], but this model accounts for only a terminal reward, i.e., reward is earned at the end of the time horizon. In this paper, we seek to develop a framework for one-step variance in the finite time horizon in which rewards can be non-zero in every state. We develop a solution algorithm based on the stochastic shortest path algorithm of Bertsekas and Tsitsiklis [2]. We also present a Q-Learning algorithm for a simulation-based scenario which applies in the absence of the transition probability model, along with some preliminary convergence results.", "In this letter, we investigate the problem of providing gigabit wireless access with reliable communication in 5G millimeter-wave (mmWave) massive multiple-input multiple-output networks. 
In contrast to the classical network design based on average metrics, we propose a distributed risk-sensitive reinforcement learning-based framework to jointly optimize the beamwidth and transmit power, while taking into account the sensitivity of mmWave links due to blockage. Numerical results show that our proposed algorithm achieves more than 9 Gbps of user throughput with a guaranteed probability of 90 , whereas the baselines guarantee less than 7.5 Gbps. More importantly, there exists a rate-reliability-network density tradeoff, in which as the user density increases from 16 to 96 per km2, the fraction of users that achieves 4 Gbps is reduced by 11.61 and 39.11 in the proposed and the baseline models, respectively.", "" ], "cite_N": [ "@cite_0", "@cite_19", "@cite_2", "@cite_5", "@cite_17" ], "mid": [ "2288730793", "2782068308", "2551230555", "2786608596", "" ] }
× ρ K , the transition probability from state s t to state s t+1 given when action l is taken, is then given by p(s t+1 | s t , ℓ) = K k=1 p(ρ ′ k | ρ k (t), ℓ k ).(8) Regarding the strict requirements of URLLC packets described earlier, we introduce in the following the notion of a risk-state. Definition 1. We define a risk state any state where ρ k > ρ max for any k ∈ {1, .., K} with ρ max > 0 is constant fixed by the controller. The set of risk states Φ is then, Φ = {ρ 1 × .. × ρ K where there ∃ k such that ρ k > ρ max }. Besides, a risk-state is an absorbing state, that is, the process ends when it reaches a risk state [12]. A deterministic policy π assigns at each time step and for each state an action. Our goal is to find an optimal deterministic policy π * which maximizes the total expected reward V π T (s) given by V π T (s) = E π T t=0 r(s t , π(s t ))| s 0 = s ,(9) with the reward r is defined in (5), while satisfying the QoS constraint given by η π (s) < w,(10) where η π (s) denotes the probability of visiting a risk state over the planning horizon, given that the initial state (at time slot 0) is s and policy π is followed, and w is a positive constant. Formally, η π (s) = P π (∃ t such that s t ∈ Φ|s 0 = s). In order to explicitly characterize η π (s), we introduce in the following the risk signal r. Definition 2. We define a risk signal r as follows r(s t , ℓ t , s t+1 ) = 1 if s t+1 ∈ Φ 0 otherwise,(12) where s t and ℓ t denote the state and action at time slot t, respectively, and s t+1 denotes the subsequent state. Proposition 1. The probability of visiting a risk-state, η π (s), is given by η π (s) = V π T (s),(13) where we set V π T (s) = E π T t=0 r (s t , π(s t ), s t+1 ) | s 0 = s .(14) Proof. The random sequence r(t = 0), r(t = 1),.., r(t = T ) may contain 1 if a risk state is visited, otherwise all its components are equal to zero (recall that a risk state is an absorbing state). Therefore, T t=0 r(t) is a Bernoulli random variable with a mean equal to the probability of reaching a risk state, that is, relation (13) holds. B. Optimality Equations By virtue of Proposition 1, we associate a state value function V π T to the probability of visiting a risk state. Now, we define a new weighted value function V π ξ,T , which incorporates both the reward and the risk, as follows V π ξ,T (s) = ξV π T (s) − V π T (s),(15) where ξ > 0 is the weighting parameter, determined by the risk level the controller is willing to tolerate. The function V π ξ,T can be seen as a standard value function associated to the reward ξr − r. The case ξ = 0 corresponds to a minimum-risk policy whereas the case ξ → ∞ corresponds to a maximum-value policy. Let Π denote the set of deterministic policies, and define V * T (s) = max π∈Π V π T (s), V * T (s) = min π∈Π V π T (s), V * ξ,T (s) = max π∈Π V π ξ,T (s). Besides, we define u π t , u π t , and u π ξ,t for 0 t T respectively by u π t (s) = E π T i=t r(s i , π(s i ))| s t = s ,(16)u π t (s) = E π T i=t r(s i , π(s i ), s i+1 )| s t = s ,(17) u π ξ,t (s) = ξu π t (s) − u π t (s). Note that V π T incorporates the total expected reward over the entire planning horizon whereas u t incorporates the rewards from decision epoch t to the end of the planning horizon only. Besides, u t (s) is the probability of visiting a risk state given that at time t the system is in state s ∈ {S/Φ}, and is thus a measure of the risk. The optimality equations are given by (the proof is similar to that in [18], chap. 
4 and skipped here for brevity) u * t (s) = max ℓ∈L r(s t , ℓ) + j∈S p(j|s t , ℓ)u * t+1 (j) (19) u * t (s) = min ℓ∈L j∈S p(j|s t , ℓ) r(s t , ℓ, j) + u * t+1 (j) (20) u * ξ,t (s) = max ℓ∈L j∈S p(j|s t , ℓ) ξr(s t , ℓ) − r(s t , ℓ, j) +u * ξ,t+1 (j) ,(21) for t = 0, .., T − 1. For the boundary conditions, that is at time slot T , u * T (s), u * T (s), and u * ξ,T (s) are set to zero for each s ∈ S. In a non-risk state, the reward r is given in (5) and the risk signal is equal to zero whereas in a risk state the reward r is set to zero and the risk signal r is set to one. IV. ALGORITHM DESIGN In this section, we present two algorithms: (i) finite-horizon value iteration algorithm which assumes that all the model parameters are known to the controller, namely the channel statistics (model-based algorithm), and (ii) reinforcement learning algorithm which does not require the controller knowledge of channel statistics (model-free algorithm). A. Value Iteration Algorithm In order to find a policy that maximizes the weighted value function defined in (15), we use the value iteration algorithm [18]. In this algorithm, we proceed backwards: we start by determining the optimal action at time slot T for each state, and successively consider the previous stages, until reaching time slot 0 (see Algorithm 1). Algorithm 1 Finite-Horizon Value Iteration B. Risk-Sensitive Reinforcement Learning Algorithm During the learning phase, the controller gets estimates of the value of each state-action pair. It updates its estimates through the interaction with the environment where at each iteration it performs an action and then observes the reward, risk signal r, and the next state (see Fig. 2). The learning controller chooses an action at each learning step following the ε-greedy policy, that is, it selects an action that maximizes its current estimate with probability 1 − ε, or a random action with probability ε. The parameter ε captures the exploration-and-exploitation trade-off: when ε → 0, the controller tends to choose an action that maximizes its current state's estimated value; whereas when ε → 1, the controller tends to choose randomly an action and to favor the exploration for optimality. The state-action value function is given by [19], [21] Q π (s t , ℓ) = r(s t , ℓ) + j∈S p(j|s t , ℓ) u π t+1 (j), where the first term denotes the immediate reward, that is the number of successfully transmitted packets over all the users, when the action l is performed in state s t ; and the second term denotes the expected reward when the policy π is followed in the subsequent decision stages. Similarly to the state-action value function associated to the reward, we define the stateaction value function associated to the risk Q π as Q π (s t , ℓ) = j∈S p(j|s t , ℓ) r(s t , ℓ, j) + u π t+1 (j) . Note that the introduction of the signal risk r enabled us to define a state-action value function, Q to the risk. Besides, the state-action value function associated to the weighted formulation, Q π ξ , is given by Q π ξ (s t , ℓ) = ξQ π (s t , ℓ) − Q π (s t , ℓ). 
Finally, the Q-function updates at the learning step n (which should not be confused with the decision epoch t) are given by [21] Q (n+1) (s t , ℓ) ← 1 − α n (s t , ℓ) Q (n) (s t , ℓ) + α n (s t , ℓ) r + max ℓ∈L {Q (n) (s t+1 , ℓ)} ,(22)Q (n+1) (s t , ℓ) ← 1 − α n (s t , ℓ) Q (n) (s t , ℓ) + α n (s t , ℓ) r + min ℓ∈L {Q (n) (s t+1 , ℓ)} ,(23) and, Q (n+1) ξ (s t , ℓ) ← 1 − α n (s t , ℓ) Q (n) ξ (s t , ℓ) + α n (s t , ℓ) ξr − r + max ℓ∈L {Q (n) ξ (s t+1 , ℓ)} ,(24) where α n (s t , ℓ) denotes the learning rate parameter at step n when the state s t and action ℓ are visited. The learning algorithm converges to the optimal stateaction value function when each state-action pair is performed infinitely often and when the learning rate parameter satisfies for each (s t , ℓ) pair (the proof is given in [7], [21] and skipped here for brevity), In this case, the Q-functions are related to the value functions as follows max ℓ∈L {Q(s t , ℓ)} = u * t (s t ), min ℓ∈L Q(s t , ℓ) = u * t (s t ), max ℓ∈L {Q ξ (s t , ℓ)} = u * ξ,t (s t ). When a risk state is reached during the learning phase, the system is restarted according to the uniform distribution to a non-risk state. In addition, when t T , we consider that an artificial absorbing state is reached and we reinitialize t (see Algorithm 2). Algorithm 2 Q-learning Algorithm 1: Initialization t ← 0, s 0 ← s, n ← 1, 2: for each ℓ ∈ L 3: Q(s 0 , ℓ) ← 0, Q(s 0 , ℓ) ← 0, Q ξ (s 0 , ℓ) ← 0 4: End for 5: Repeat 6: observe current state s t 7: select and perform action ℓ in state s t 8: observe the new state s t+1 , reward r and the risk r 9: update the Q-functions Q(s t , l), Q(s t , l), Q ξ (s t , ℓ) according to (22), (23), (24) respectively 10: t ← t + 1 11: n ← n + 1 12: update α n 13: if t = T , then t ← 0 artificial absorbing state reached 14: if s t ∈ Φ, then s t ∼ Unif{S/Φ} absorbing state reached 15: until convergence V. PERFORMANCE EVALUATION In this section, we present the numerical results obtained with the value iteration and the learning algorithms in a variety of scenarios. We consider the setting of two users along with a number of channels L = 5. For the arrival traffic, we consider the following truncated Poisson distribution Prob(a = m) = λ m /m! Amax i=0 λ i /i! if m A max zero otherwise,(25) where λ = 3 and A max = 6. The mean of the Bernoulli channel µ and the value of the parameter ρ max throughout this section are fixed to 0.6 and 0.55 respectively. A. Minimum-risk vs maximum-value policy First, we compare the performance of the minimum-risk policy (obtained when ξ = 0), maximum-value policy (obtained when ξ → ∞), weighted policy (when ξ > 0), and the fixed policy which consists is assigning the same number of channels for each user at each time slot (ℓ 1 = 2 and ℓ 2 = 3). We depict in Fig. 3-top the reward u t (s) given in (19) as a function of time when s = 0.3 × 0 and different policies are followed. We observe that the maximum-value policy clearly outperforms the fixed and the minimum-risk policy. In Fig. 3-bottom showing u t (s) given in (20), we observe that the probability of visiting a risk-state when the fixed policy is followed is much higher than that obtained when the minimum-risk policy π * is performed. For example, at the time step t = 5, u t (s) is equal to 0.42 when the policy π f is performed whereas this value reduces to 0.02 when the policy π * is followed. 
In fact, the fixed policy does not take account of the experienced QoS of the users, and therefore, it is the policy which results in the highest risk-state visitation probability. Besides, this probability decreases over time for all the policies. In fact, as time goes on, the probability of entering a risk-state over the remaining time steps decreases. The reward u t (s) increases for the lower values of t until reaching a maximum value and then it decreases, for all the policies. In fact, for the lower values of t, the probability of visiting a risk-state is high, and this affects the expected value of the reward (recall that in the risk state, the reward is equal to zero). As time goes on, this probability decreases, and thus the expected reward increases. However, at the further time steps, the number of remaining decision stages is low and hence the expected reward (total number of successfully transmitted packets over the remaining time slots) decreases. B. Learning In the learning algorithm, we simulate the wireless channel with a Bernoulli random variable with a number of trials equal to the number of channels associated to each packet for each user. For the learning rate parameter α n , we considered the following expression [11]: 3: Performance of the minimum-risk policy π * , the maximum-value policy π * , the weighted-policy π * ξ with ξ = 0.1, and the fixed policy π f . On the top, u t (s), on the bottom, u t (s) where s = 0.3 × 0 and T = 9. α n = 1 (1 + n(s t , ℓ)) γ ,(26) where n(s t , ℓ) denotes the number of times the state-action pair (s t , ℓ) was visited until iteration n, and γ is a positive parameter ∈ [0.5, 1] [11]. We depict in Fig. 4 the optimal (minimum-risk) policy (number of channels to assign to user 1 , ℓ 1 ∈ [0, .., 5]) computed by the learning algorithm, as a function of time steps (decision epochs) and ρ 1 , when ρ 2 is fixed to 0. The figure shows a monotony property: the number of channels to assign to user 1 increases with time and with ρ 1 . In fact, as the QoS of user 1 degrades (ρ 1 increases), more channels are assigned to it to compensate for this degradation; and as time goes on, this policy is more sensitive to this degradation as more channels are assigned for the same values of ρ 1 , but at further time steps. VI. CONCLUSION In this work, we studied the problem of dynamic channel allocation for URLLC traffic in a multi-user multi-channel wireless network within a novel framework. Due to the stochastic nature of the problem related to time-varying, fading channels and random arrival traffic, we considered a finite-horizon MDP framework. We determined explicitly the probability of visiting a risk state and we wrote it as a cumulative return (risk signal). We then introduced a weighted global value function which incorporates two criteria: reward and risk. By virtue of the value iteration algorithm, we determined the optimal policy. Furthermore, we used a Q-learning algorithm to enable the controller to learn the optimal policy in the absence of channel statistics. We illustrated the performance of our algorithms with numerical studies, and we showed that by adapting the number of parallel transmissions in a smart way, the performance of the system can be substantially enhanced. In the future work, we would like to take account of spatial diversity in the dynamic allocation scheme where both the BS and the user terminals can be equipped with multiple antennas to enhance the system performance.
4,653
1811.02341
2899689260
In this paper, we study the problem of dynamic channel allocation for URLLC traffic in a multi-user multi-channel wireless network where urgent packets have to be successfully received in a timely manner. We formulate the problem as a finite-horizon Markov Decision Process with a stochastic constraint related to the QoS requirement, defined as the packet loss rate for each user. We propose a novel weighted formulation that takes into account both the total expected reward (number of successfully decoded packets) and the risk, which we define as the QoS requirement violation. First, we use the value iteration algorithm to find the optimal policy, assuming that the controller has perfect knowledge of all the parameters, namely the channel statistics. We then propose a Q-learning algorithm in which the controller learns the optimal policy without knowledge of either the CSI or the channel statistics. We illustrate the performance of our algorithms with numerical studies.
In this work, we consider an alternative approach to risk, which consists in minimizing the risk-state visitation probability. In fact, due to the stochastic nature of the problem (time-varying channels and random arrival traffic in our context), assigning a low reward to an undesirable state or risk state may be insufficient to minimize the probability of visiting such a state @cite_7 . Therefore, in addition to maximizing the total expected reward, we propose to consider a second criterion, which consists in minimizing the probability of visiting risk states, where a risk state here is related to the violation of the QoS requirements.
{ "abstract": [ "In this paper, we consider Markov Decision Processes (MDPs) with error states. Error states are those states entering which is undesirable or dangerous. We define the risk with respect to a policy as the probability of entering such a state when the policy is pursued. We consider the problem of finding good policies whose risk is smaller than some user-specified threshold, and formalize it as a constrained MDP with two criteria. The first criterion corresponds to the value function originally given. We will show that the risk can be formulated as a second criterion function based on a cumulative return, whose definition is independent of the original value function. We present a model free, heuristic reinforcement learning algorithm that aims at finding good deterministic policies. It is based on weighting the original value function and the risk. The weight parameter is adapted in order to find a feasible solution for the constrained problem that has a good performance with respect to the value function. The algorithm was successfully applied to the control of a feed tank with stochastic inflows that lies upstream of a distillation column. This control task was originally formulated as an optimal control problem with chance constraints, and it was solved under certain assumptions on the model to obtain an optimal solution. The power of our learning algorithm is that it can be used even when some of these restrictive assumptions are relaxed." ], "cite_N": [ "@cite_7" ], "mid": [ "2101075098" ] }
Risk-Sensitive Reinforcement Learning for URLLC Traffic in Wireless Networks
In the fifth generation (5G) wireless networks, there are new service categories with heterogeneous and challenging requirements, among them the Ultra Reliable Low Latency (URLLC) traffic [6], designed for delay- and reliability-sensitive applications like real-time remote control, autonomous driving, and mission-critical traffic. In URLLC traffic, the End-to-End (E2E) latency defined by 3GPP is lower than 1 ms, along with a reliability requirement of $1-10^{-5}$ to $1-10^{-9}$ [6], [15]. A plausible solution to address the latency requirement is to transmit without Channel State Information (CSI) knowledge at the transmitter side. To increase reliability, exploiting frequency diversity is beneficial; this is done by making parallel transmissions of the same packet over different subcarriers in an Orthogonal Frequency Division Multiplexing (OFDM) system, where each subcarrier experiences different channel characteristics. However, this solution is costly in terms of system capacity. Therefore, the number of parallel transmissions should not be fixed in advance but should rather be variable, depending on many parameters such as the position of a user in the cell, or the statistics of his packet losses over the previous time slots. For example, if a user experienced a high number of packet losses in the previous time slots, it should be allocated a high number of subchannels to increase his success probability, whereas a user with a low number of dropped packets may be assigned a low number of subcarriers. Hence, it is crucial to design efficient dynamic schemes able to adapt the number of parallel transmissions for each user to his experienced QoS.

In this work, we study the problem of dynamic channel allocation for URLLC traffic in a multi-user multi-channel wireless network under QoS constraints. A channel here refers to a frequency band or a subcarrier in an OFDM system, and the QoS is related to the packet loss rate for each user, defined as the average number of dropped packets. Besides, we introduce the notion of risk related to the violation of the QoS requirements; more precisely, a risk occurs, or equivalently, a risk state is reached, when the QoS requirement is violated for a user. Furthermore, we consider that the transmitter has neither the CSI nor the channel statistics at the transmission moment. In fact, due to the urgency of URLLC packets mentioned previously, there is not enough time for the base station (BS) to perform channel estimation and probing techniques as in conventional wireless communications.

B. Addressed Issues and Contribution

In this work, we address the following issues:

• We formulate the dynamic channel allocation problem for URLLC traffic as a finite-horizon MDP wherein the state represents the QoS of the users, that is, the average number of dropped packets or packet loss rate of the users. The decision variable is the number of channels to assign to each user. We define a risk state as any state where the QoS requirement is violated for at least one user. Besides, we define a stochastic constraint related to the risk-state visitation probability.

• Assuming the channel statistics are known to the controller, we use the finite-horizon value iteration algorithm to find the optimal policy for the weighted formulation of the problem, which takes into account both the total expected reward over the planning horizon and the risk criterion (QoS requirement violation probability).
• When the channel statistics are unknown to the controller, we propose a reinforcement learning algorithm (Q-learning) for the weighted formulation of the problem, which enables the controller to learn the optimal policy. We illustrate the performance of our algorithms with numerical studies.

C. Paper Structure

In Section II, we present the system model for the multi-user multi-channel wireless network with URLLC packets and time-varying channels, along with the QoS definition. In Section III, we introduce the constrained MDP formulation with all its components. In Section IV, we present both the finite-horizon value iteration algorithm and the reinforcement learning algorithm. Section V is devoted to numerical results. Finally, we conclude the paper in Section VI.

II. SYSTEM MODEL

We consider a multi-user multi-channel wireless network where URLLC packets have to be transmitted over time-varying and fading channels. Due to the strict latency requirement of URLLC packets in 5G networks mentioned previously, there is not enough time for the BS to estimate the channel, and the packets are then immediately transmitted in the absence of CSI at the transmitter side. When a packet is successfully decoded, the receiver sends an acknowledgment feedback, which is assumed to be instantaneous and error-free. We consider a centralized controller which dynamically distributes the channels to the users based on their QoS (see Fig. 1, which depicts the controller observing the packet loss rates $\rho_1(t+1), \ldots, \rho_K(t+1)$ and assigning $\ell_k(t)$ channels to each user $k$ with arrivals $a_k(t)$).

Furthermore, we make the following assumptions:

Packet arrival process: the packet arrival process is considered as an independent and identically distributed (i.i.d.) random process over a finite set $I = \{0, 1, \ldots, A_{\max}\}$, where $A_{\max}$ is a positive constant, and is identical for all the users. Let $\alpha_a$ denote the probability that $a \in I$ packets arrive for a given user at the beginning of a time slot.

Deadline-constrained traffic: regarding the strict URLLC latency requirement specified by 3GPP (lower than 1 ms), each packet has a lifetime of one time slot and can either be served or dropped; if there are available channels, the packet will be transmitted, otherwise, it will be dropped because after one time slot it becomes outdated and useless. Furthermore, one packet is transmitted per channel.

Channel model: we consider i.i.d. Bernoulli channels with a mean $\mu \in [0, 1]$. In millimeter-wave communications, the links are characterized by their intermittence and high sensitivity, and this channel model reflects the existence of a line-of-sight (LOS) channel state [4], [8]. To increase reliability, a user can be assigned more channels than the number of waiting packets (depending on his experienced QoS). Some packets are then simultaneously sent over multiple parallel channels.

Channel split: for each user, all the packets are equally important: when the number of available channels is larger than that of waiting packets, we assume that some packets are picked uniformly at random to be replicated. A packet is obviously more likely to be successfully transmitted when sent over many channels simultaneously. However, assigning more channels to a user will affect the QoS experienced by the other users. Note that the channel split across the packets (which occurs in the same manner for all the users) should not be confused with the channel split across the users (which takes into account the QoS perceived by the users).
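The assumptions above fully specify the per-slot dynamics, so they are easy to simulate. Below is a minimal Python sketch (ours, with made-up variable names, not code from the paper) of one slot for a single user: sample the arrivals, split the assigned channels over the packets with uniformly random replication, and draw the Bernoulli channel outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_slot(alpha, ell, mu):
    """Simulate one time slot for a single user.

    alpha -- arrival pmf over {0, 1, ..., A_max} (alpha[a] = prob. of a arrivals)
    ell   -- number of channels assigned to this user in the slot
    mu    -- Bernoulli channel success probability
    Returns (a, n_lost).
    """
    a = int(rng.choice(len(alpha), p=alpha))   # packet arrivals
    if a == 0:
        return 0, 0
    base, rem = divmod(ell, a)                 # every packet gets `base` copies
    copies = np.full(a, base)
    extra = rng.choice(a, size=rem, replace=False)
    copies[extra] += 1                         # `rem` random packets get one more copy
    lost = 0
    for c in copies:
        # a packet is lost if it gets no channel, or if every copy fails
        if c == 0 or not (rng.random(c) < mu).any():
            lost += 1
    return a, lost
```

Averaging the per-slot ratio lost/a over many slots yields the empirical packet loss rate used as the QoS criterion below.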
For user $k$, the distribution of the available channels $\ell_k$ over the waiting packets $a_k$ occurs as follows: each packet is transmitted over $(\ell_k \wedge a_k)$ channels and may furthermore be replicated once with probability $\frac{\ell_k \vee a_k}{a_k}$, where the symbol $\ell_k \wedge a_k$ denotes the largest integer $m$ such that $m\,a_k \leq \ell_k$, and $\ell_k \vee a_k$ denotes the remainder of the division of $\ell_k$ by $a_k$. The probability that a packet is successfully transmitted, given that there are $a_k$ waiting packets at the transmitter and $\ell_k$ assigned channels, can then be expressed by

$$\nu_k(a_k, \ell_k) = \left(1 - \frac{\ell_k \vee a_k}{a_k}\right)\left(1 - (1-\mu)^{\ell_k \wedge a_k}\right) + \frac{\ell_k \vee a_k}{a_k}\left(1 - (1-\mu)^{1+(\ell_k \wedge a_k)}\right). \quad (1)$$

The expected number of successfully transmitted packets for user $k$ is then given by

$$\mathbb{E}[N_k(\ell_k)] = \sum_{a_k \in I} a_k\,\alpha_{a_k}\,\nu_k(a_k, \ell_k). \quad (2)$$

QoS criterion: for each user $k$, we define the packet loss rate at time slot $t$, $\rho_k(t)$, as follows

$$\rho_k(t) = \frac{1}{t}\sum_{i=0}^{t-1} \frac{n_k(i)}{a_k(i)}, \quad t \geq 1, \quad (3)$$

where $n_k(t)$ denotes the number of lost packets for user $k$ at time slot $t$. Note that $\rho_k \in [0, 1]$ since $n_k(t) \leq a_k(t)$. A packet is lost when either of the two following events occurs: (i) it is not transmitted because of insufficient available channels; (ii) it is transmitted but the ACK feedback is not received. The parameter $\rho_k$ reflects the QoS perceived by user $k$: higher values of $\rho_k$ mean a higher number of lost packets and poor QoS, whereas lower values of $\rho_k$ mean good QoS. To ensure good QoS for the users, the resource allocation scheme should take into account their experienced QoS and keep this parameter, for all users, within an acceptable range. Finally, the decision variable is the number of channels assigned to each user $k$ at each time slot, denoted by $\ell_k$, which satisfies

$$\sum_{k=1}^{K} \ell_k(t) = L, \quad (4)$$

where $L$ denotes the number of available channels.

III. CONSTRAINED MDP FRAMEWORK

The stochastic nature of the wireless channel incites us to consider an MDP framework to solve the decision problem. In this section, we first introduce the constrained MDP formulation along with its components. We then derive the optimality equations.

A. Model Formulation

We define the following finite-horizon MDP, where the symbol $\times$ stands for the Cartesian product.

• Action Space: the finite set $L = \{(\ell_1, \ldots, \ell_K) \text{ satisfying } (4)\}$, where $\ell_k$ denotes the number of channels assigned to user $k$.

• State Space: the finite set $T \times S$ where $T = \{0, \ldots, T\}$, $S = \{\rho_1 \times \cdots \times \rho_K\}$, and $\rho_k$ for $k = 1, \ldots, K$ is defined in (3).

• Reward: we define the reward $r$ at time slot $t$, when the controller chooses action $\ell \in L$ in state $s_t$, as the expected total number of successfully transmitted packets over all the users, that is,

$$r(s_t, \ell) = \mathbb{E}\left[\sum_{k=1}^{K} N_k(\ell_k)\right]. \quad (5)$$

Note that the reward depends only on the number of channels allocated to each user (the action), and not on the current state $s_t$. Besides, the reward is a non-linear function of the action.

• Transition Probabilities: first, we define the probability that $n$ packets are lost for user $k$, as a function of the number of waiting packets $a_k$ and the number of assigned channels $\ell_k$ at a given time slot, as follows

$$\sigma_k(n, a_k, \ell_k) = \binom{a_k}{n}\left(1 - \nu_k(a_k, \ell_k)\right)^{n}\nu_k(a_k, \ell_k)^{a_k - n},$$

where $n \leq a_k$ and $\binom{a_k}{n}$ denotes the binomial coefficient. The state transition probability for user $k$ is given by

$$p(\rho'_k \mid \rho_k(t), \ell_k) = \alpha_{a_k}\,\sigma_k(n, a_k, \ell_k), \quad (6)$$

where

$$\rho'_k = \frac{t}{t+1}\,\rho_k + \frac{1}{t+1}\,\frac{n}{a_k}. \quad (7)$$
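For reference, the closed forms above are cheap to evaluate. The following sketch (again ours, not the authors' code) computes the per-packet success probability (1), the expected successes (2), and the single-user transition law (6)-(7). How zero-arrival slots enter the $\rho$-update (7) is left implicit in the text, so the sketch simply skips them.

```python
from math import comb

def nu(a, ell, mu):
    """Per-packet success probability, eq. (1)."""
    if a == 0:
        return 1.0                       # vacuous: nothing to lose
    m, rem = divmod(ell, a)              # ell ∧ a and ell ∨ a
    f = rem / a
    return (1 - f) * (1 - (1 - mu) ** m) + f * (1 - (1 - mu) ** (m + 1))

def expected_successes(alpha, ell, mu):
    """Expected number of successfully transmitted packets, eq. (2)."""
    return sum(a * pa * nu(a, ell, mu) for a, pa in enumerate(alpha))

def next_rho_distribution(rho, t, ell, mu, alpha):
    """Single-user transition law, eqs. (6)-(7): returns {rho': probability}."""
    dist = {}
    for a, pa in enumerate(alpha):
        if pa == 0 or a == 0:
            continue                     # zero-arrival update is unspecified; skipped
        p_succ = nu(a, ell, mu)
        for n in range(a + 1):           # n lost packets, binomial sigma_k
            sigma = comb(a, n) * (1 - p_succ) ** n * p_succ ** (a - n)
            rho_next = round((t / (t + 1)) * rho + (1 / (t + 1)) * (n / a), 6)
            dist[rho_next] = dist.get(rho_next, 0.0) + pa * sigma
    return dist
```

Monte-Carlo runs of the slot simulation sketched earlier should match (1) and (2) in expectation.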
Finally, let $s_{t+1} = \rho'_1 \times \cdots \times \rho'_K$ and $s_t = \rho_1 \times \cdots \times \rho_K$; the transition probability from state $s_t$ to state $s_{t+1}$, given that action $\ell$ is taken, is then given by

$$p(s_{t+1} \mid s_t, \ell) = \prod_{k=1}^{K} p(\rho'_k \mid \rho_k(t), \ell_k). \quad (8)$$

Regarding the strict requirements of URLLC packets described earlier, we introduce in the following the notion of a risk state.

Definition 1. We define a risk state as any state where $\rho_k > \rho_{\max}$ for some $k \in \{1, \ldots, K\}$, where $\rho_{\max} > 0$ is a constant fixed by the controller. The set of risk states $\Phi$ is then

$$\Phi = \{\rho_1 \times \cdots \times \rho_K \text{ such that } \exists\, k \text{ with } \rho_k > \rho_{\max}\}.$$

Besides, a risk state is an absorbing state, that is, the process ends when it reaches a risk state [12].

A deterministic policy $\pi$ assigns, at each time step and for each state, an action. Our goal is to find an optimal deterministic policy $\pi^*$ which maximizes the total expected reward $V^\pi_T(s)$ given by

$$V^\pi_T(s) = \mathbb{E}^\pi\left[\sum_{t=0}^{T} r(s_t, \pi(s_t)) \,\middle|\, s_0 = s\right], \quad (9)$$

with the reward $r$ defined in (5), while satisfying the QoS constraint given by

$$\eta^\pi(s) < w, \quad (10)$$

where $\eta^\pi(s)$ denotes the probability of visiting a risk state over the planning horizon, given that the initial state (at time slot 0) is $s$ and policy $\pi$ is followed, and $w$ is a positive constant. Formally,

$$\eta^\pi(s) = P^\pi(\exists\, t \text{ such that } s_t \in \Phi \mid s_0 = s). \quad (11)$$

In order to explicitly characterize $\eta^\pi(s)$, we introduce in the following the risk signal $\bar{r}$, where the bar notation distinguishes risk quantities from their reward counterparts.

Definition 2. We define a risk signal $\bar{r}$ as follows

$$\bar{r}(s_t, \ell_t, s_{t+1}) = \begin{cases} 1 & \text{if } s_{t+1} \in \Phi \\ 0 & \text{otherwise,} \end{cases} \quad (12)$$

where $s_t$ and $\ell_t$ denote the state and action at time slot $t$, respectively, and $s_{t+1}$ denotes the subsequent state.

Proposition 1. The probability of visiting a risk state, $\eta^\pi(s)$, is given by

$$\eta^\pi(s) = \bar{V}^\pi_T(s), \quad (13)$$

where we set

$$\bar{V}^\pi_T(s) = \mathbb{E}^\pi\left[\sum_{t=0}^{T} \bar{r}(s_t, \pi(s_t), s_{t+1}) \,\middle|\, s_0 = s\right]. \quad (14)$$

Proof. The random sequence $\bar{r}(t=0), \bar{r}(t=1), \ldots, \bar{r}(t=T)$ contains at most a single 1, namely when a risk state is visited; otherwise all its components are equal to zero (recall that a risk state is an absorbing state). Therefore, $\sum_{t=0}^{T} \bar{r}(t)$ is a Bernoulli random variable with a mean equal to the probability of reaching a risk state, that is, relation (13) holds.

B. Optimality Equations

By virtue of Proposition 1, we associate a state value function $\bar{V}^\pi_T$ with the probability of visiting a risk state. Now, we define a new weighted value function $V^\pi_{\xi,T}$, which incorporates both the reward and the risk, as follows

$$V^\pi_{\xi,T}(s) = \xi V^\pi_T(s) - \bar{V}^\pi_T(s), \quad (15)$$

where $\xi > 0$ is the weighting parameter, determined by the risk level the controller is willing to tolerate. The function $V^\pi_{\xi,T}$ can be seen as a standard value function associated with the reward $\xi r - \bar{r}$. The case $\xi = 0$ corresponds to a minimum-risk policy, whereas the case $\xi \to \infty$ corresponds to a maximum-value policy. Let $\Pi$ denote the set of deterministic policies, and define

$$V^*_T(s) = \max_{\pi \in \Pi} V^\pi_T(s), \quad \bar{V}^*_T(s) = \min_{\pi \in \Pi} \bar{V}^\pi_T(s), \quad V^*_{\xi,T}(s) = \max_{\pi \in \Pi} V^\pi_{\xi,T}(s).$$

Besides, we define $u^\pi_t$, $\bar{u}^\pi_t$, and $u^\pi_{\xi,t}$ for $0 \leq t \leq T$ respectively by

$$u^\pi_t(s) = \mathbb{E}^\pi\left[\sum_{i=t}^{T} r(s_i, \pi(s_i)) \,\middle|\, s_t = s\right], \quad (16)$$

$$\bar{u}^\pi_t(s) = \mathbb{E}^\pi\left[\sum_{i=t}^{T} \bar{r}(s_i, \pi(s_i), s_{i+1}) \,\middle|\, s_t = s\right], \quad (17)$$

$$u^\pi_{\xi,t}(s) = \xi u^\pi_t(s) - \bar{u}^\pi_t(s). \quad (18)$$

Note that $V^\pi_T$ incorporates the total expected reward over the entire planning horizon, whereas $u_t$ incorporates the rewards from decision epoch $t$ to the end of the planning horizon only. Besides, $\bar{u}_t(s)$ is the probability of visiting a risk state given that at time $t$ the system is in state $s \in S \setminus \Phi$, and is thus a measure of the risk.
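Proposition 1 is what turns the chance constraint (10) into an ordinary expected cumulative return, amenable to dynamic programming and Q-learning. As an illustration, $\eta^\pi(s)$ can be estimated by Monte-Carlo rollouts of the risk signal (12); the simulator interface in this sketch is hypothetical (ours, not the paper's):

```python
def estimate_eta(step, policy, is_risk, s0, T, n_rollouts=10_000):
    """Monte-Carlo estimate of eta^pi(s0) via eqs. (13)-(14).

    step(s, a)   -- samples the next state from p(.|s, a)
    policy(t, s) -- deterministic policy pi
    is_risk(s)   -- membership test for the risk set Phi
    All three are placeholders to be supplied by a concrete simulator.
    """
    hits = 0
    for _ in range(n_rollouts):
        s = s0
        for t in range(T):
            s = step(s, policy(t, s))
            if is_risk(s):      # risk signal fires; Phi is absorbing, so stop
                hits += 1
                break
    # Each episode's sum of risk signals is in {0, 1}, so the average of the
    # episode sums is exactly the fraction of episodes that entered Phi.
    return hits / n_rollouts
```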
The optimality equations are given by (the proof is similar to that in [18], Chap. 4, and is skipped here for brevity)

$$u^*_t(s) = \max_{\ell \in L}\left\{ r(s_t, \ell) + \sum_{j \in S} p(j \mid s_t, \ell)\, u^*_{t+1}(j) \right\}, \quad (19)$$

$$\bar{u}^*_t(s) = \min_{\ell \in L} \sum_{j \in S} p(j \mid s_t, \ell)\left[\bar{r}(s_t, \ell, j) + \bar{u}^*_{t+1}(j)\right], \quad (20)$$

$$u^*_{\xi,t}(s) = \max_{\ell \in L} \sum_{j \in S} p(j \mid s_t, \ell)\left[\xi r(s_t, \ell) - \bar{r}(s_t, \ell, j) + u^*_{\xi,t+1}(j)\right], \quad (21)$$

for $t = 0, \ldots, T-1$. For the boundary conditions, that is, at time slot $T$, $u^*_T(s)$, $\bar{u}^*_T(s)$, and $u^*_{\xi,T}(s)$ are set to zero for each $s \in S$. In a non-risk state, the reward $r$ is given by (5) and the risk signal is equal to zero, whereas in a risk state the reward $r$ is set to zero and the risk signal $\bar{r}$ is set to one.

IV. ALGORITHM DESIGN

In this section, we present two algorithms: (i) the finite-horizon value iteration algorithm, which assumes that all the model parameters are known to the controller, namely the channel statistics (model-based algorithm), and (ii) a reinforcement learning algorithm, which does not require the controller's knowledge of the channel statistics (model-free algorithm).

A. Value Iteration Algorithm

In order to find a policy that maximizes the weighted value function defined in (15), we use the value iteration algorithm [18]. In this algorithm, we proceed backwards: we start by determining the optimal action at time slot $T$ for each state, and successively consider the previous stages, until reaching time slot 0 (see Algorithm 1: Finite-Horizon Value Iteration).

B. Risk-Sensitive Reinforcement Learning Algorithm

During the learning phase, the controller obtains estimates of the value of each state-action pair. It updates its estimates through interaction with the environment, where at each iteration it performs an action and then observes the reward, the risk signal $\bar{r}$, and the next state (see Fig. 2). The learning controller chooses an action at each learning step following the $\varepsilon$-greedy policy, that is, it selects an action that maximizes its current estimate with probability $1 - \varepsilon$, or a random action with probability $\varepsilon$. The parameter $\varepsilon$ captures the exploration-exploitation trade-off: when $\varepsilon \to 0$, the controller tends to choose an action that maximizes its current state's estimated value, whereas when $\varepsilon \to 1$, the controller tends to choose an action randomly and to favor exploration.

The state-action value function is given by [19], [21]

$$Q^\pi(s_t, \ell) = r(s_t, \ell) + \sum_{j \in S} p(j \mid s_t, \ell)\, u^\pi_{t+1}(j),$$

where the first term denotes the immediate reward, that is, the number of successfully transmitted packets over all the users when the action $\ell$ is performed in state $s_t$, and the second term denotes the expected reward when the policy $\pi$ is followed in the subsequent decision stages. Similarly to the state-action value function associated with the reward, we define the state-action value function associated with the risk, $\bar{Q}^\pi$, as

$$\bar{Q}^\pi(s_t, \ell) = \sum_{j \in S} p(j \mid s_t, \ell)\left[\bar{r}(s_t, \ell, j) + \bar{u}^\pi_{t+1}(j)\right].$$

Note that the introduction of the risk signal $\bar{r}$ enabled us to define a state-action value function $\bar{Q}$ associated with the risk. Besides, the state-action value function associated with the weighted formulation, $Q^\pi_\xi$, is given by

$$Q^\pi_\xi(s_t, \ell) = \xi Q^\pi(s_t, \ell) - \bar{Q}^\pi(s_t, \ell).$$
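Since the body of Algorithm 1 is not reproduced above, the following is a minimal tabular sketch (ours) of the backward induction solving (21); the arrays P, R, and risk are assumed to be precomputed, with risk states made self-absorbing and reward-free as the text prescribes.

```python
import numpy as np

def weighted_value_iteration(P, R, risk, T, xi):
    """Finite-horizon backward induction for the weighted criterion (21).

    P    : array (S, A, S), P[s, a, j] = p(j | s, a)
    R    : array (S, A),    R[s, a]    = expected reward r(s, a), eq. (5)
    risk : array (S,),      risk[j]    = 1 if state j is in Phi, else 0
    Returns the weighted values u of shape (T+1, S) and the policy (T, S).
    """
    S, A = R.shape
    u = np.zeros((T + 1, S))              # boundary condition: u*_{xi,T} = 0
    policy = np.zeros((T, S), dtype=int)
    for t in range(T - 1, -1, -1):
        # q[s, a] = xi*R[s, a] + sum_j P[s, a, j] * (u[t+1, j] - risk[j])
        q = xi * R + np.einsum('saj,j->sa', P, u[t + 1] - risk)
        policy[t] = np.argmax(q, axis=1)
        u[t] = np.max(q, axis=1)
    return u, policy
```

The same loop with $\xi$ replaced by 1 and the risk term dropped (or, respectively, a min over the risk term alone) recovers (19) and (20).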
Finally, the Q-function updates at learning step $n$ (which should not be confused with the decision epoch $t$) are given by [21]

$$Q^{(n+1)}(s_t, \ell) \leftarrow \left(1 - \alpha_n(s_t, \ell)\right) Q^{(n)}(s_t, \ell) + \alpha_n(s_t, \ell)\left[r + \max_{\ell' \in L} Q^{(n)}(s_{t+1}, \ell')\right], \quad (22)$$

$$\bar{Q}^{(n+1)}(s_t, \ell) \leftarrow \left(1 - \alpha_n(s_t, \ell)\right) \bar{Q}^{(n)}(s_t, \ell) + \alpha_n(s_t, \ell)\left[\bar{r} + \min_{\ell' \in L} \bar{Q}^{(n)}(s_{t+1}, \ell')\right], \quad (23)$$

and

$$Q^{(n+1)}_\xi(s_t, \ell) \leftarrow \left(1 - \alpha_n(s_t, \ell)\right) Q^{(n)}_\xi(s_t, \ell) + \alpha_n(s_t, \ell)\left[\xi r - \bar{r} + \max_{\ell' \in L} Q^{(n)}_\xi(s_{t+1}, \ell')\right], \quad (24)$$

where $\alpha_n(s_t, \ell)$ denotes the learning rate parameter at step $n$ when the state $s_t$ and action $\ell$ are visited. The learning algorithm converges to the optimal state-action value function when each state-action pair is performed infinitely often and when the learning rate parameter satisfies, for each $(s_t, \ell)$ pair, $\sum_n \alpha_n(s_t, \ell) = \infty$ and $\sum_n \alpha_n^2(s_t, \ell) < \infty$ (the proof is given in [7], [21] and skipped here for brevity). In this case, the Q-functions are related to the value functions as follows:

$$\max_{\ell \in L} Q(s_t, \ell) = u^*_t(s_t), \quad \min_{\ell \in L} \bar{Q}(s_t, \ell) = \bar{u}^*_t(s_t), \quad \max_{\ell \in L} Q_\xi(s_t, \ell) = u^*_{\xi,t}(s_t).$$

When a risk state is reached during the learning phase, the system is restarted to a non-risk state chosen according to the uniform distribution. In addition, when $t = T$, we consider that an artificial absorbing state is reached and we reinitialize $t$ (see Algorithm 2).

Algorithm 2 Q-learning Algorithm
1: Initialization: $t \leftarrow 0$, $s_0 \leftarrow s$, $n \leftarrow 1$
2: for each $\ell \in L$
3:   $Q(s_0, \ell) \leftarrow 0$, $\bar{Q}(s_0, \ell) \leftarrow 0$, $Q_\xi(s_0, \ell) \leftarrow 0$
4: end for
5: repeat
6:   observe current state $s_t$
7:   select and perform action $\ell$ in state $s_t$
8:   observe the new state $s_{t+1}$, the reward $r$, and the risk $\bar{r}$
9:   update the Q-functions $Q(s_t, \ell)$, $\bar{Q}(s_t, \ell)$, $Q_\xi(s_t, \ell)$ according to (22), (23), (24), respectively
10:  $t \leftarrow t + 1$
11:  $n \leftarrow n + 1$
12:  update $\alpha_n$
13:  if $t = T$, then $t \leftarrow 0$ (artificial absorbing state reached)
14:  if $s_t \in \Phi$, then $s_t \sim \mathrm{Unif}\{S \setminus \Phi\}$ (absorbing state reached)
15: until convergence
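A compact Python sketch of Algorithm 2 for the weighted update (24) follows (ours; the environment interface is hypothetical, and the learning rate follows the count-based schedule introduced in eq. (26) below):

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)

def q_learning(env, actions, T, xi, eps=0.1, gamma_lr=0.7, n_steps=200_000):
    """Tabular risk-sensitive Q-learning for the weighted update (24).

    env.reset() -> non-risk state; env.step(s, a) -> (s_next, r, r_bar, in_risk)
    This interface is assumed; states must be hashable (e.g., tuples).
    """
    Qxi = defaultdict(float)                 # Qxi[(t, s, a)], initialized to 0
    visits = defaultdict(int)
    s, t = env.reset(), 0
    for _ in range(n_steps):
        # epsilon-greedy selection on the weighted Q-function
        if rng.random() < eps:
            a = actions[rng.integers(len(actions))]
        else:
            a = max(actions, key=lambda x: Qxi[(t, s, x)])
        s2, r, r_bar, in_risk = env.step(s, a)
        visits[(t, s, a)] += 1
        alpha = 1.0 / (1 + visits[(t, s, a)]) ** gamma_lr   # eq. (26)
        best_next = max(Qxi[(t + 1, s2, x)] for x in actions)  # 0 at t+1 = T
        # weighted target of eq. (24): xi*r - r_bar + max Q at the next state
        Qxi[(t, s, a)] += alpha * (xi * r - r_bar + best_next - Qxi[(t, s, a)])
        t, s = t + 1, s2
        if t == T:                      # artificial absorbing state: reset epoch
            t, s = 0, env.reset()
        elif in_risk:                   # restart uniformly outside Phi
            s = env.reset()
    return Qxi
```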
V. PERFORMANCE EVALUATION

In this section, we present the numerical results obtained with the value iteration and the learning algorithms in a variety of scenarios. We consider the setting of two users along with a number of channels $L = 5$. For the arrival traffic, we consider the following truncated Poisson distribution

$$\mathrm{Prob}(a = m) = \begin{cases} \dfrac{\lambda^m / m!}{\sum_{i=0}^{A_{\max}} \lambda^i / i!} & \text{if } m \leq A_{\max} \\ 0 & \text{otherwise,} \end{cases} \quad (25)$$

where $\lambda = 3$ and $A_{\max} = 6$. The mean of the Bernoulli channel $\mu$ and the value of the parameter $\rho_{\max}$ throughout this section are fixed to 0.6 and 0.55, respectively.

A. Minimum-risk vs. maximum-value policy

First, we compare the performance of the minimum-risk policy $\bar{\pi}^*$ (obtained when $\xi = 0$), the maximum-value policy $\pi^*$ (obtained when $\xi \to \infty$), the weighted policy $\pi^*_\xi$ (when $\xi > 0$), and the fixed policy $\pi_f$, which consists in assigning the same number of channels to each user at each time slot ($\ell_1 = 2$ and $\ell_2 = 3$). We depict in Fig. 3 (top) the reward $u_t(s)$ given in (19) as a function of time when $s = 0.3 \times 0$ and the different policies are followed. We observe that the maximum-value policy clearly outperforms the fixed and the minimum-risk policies. In Fig. 3 (bottom), showing $\bar{u}_t(s)$ given in (20), we observe that the probability of visiting a risk state when the fixed policy is followed is much higher than that obtained when the minimum-risk policy $\bar{\pi}^*$ is performed. For example, at time step $t = 5$, $\bar{u}_t(s)$ is equal to 0.42 when the policy $\pi_f$ is performed, whereas this value reduces to 0.02 when the policy $\bar{\pi}^*$ is followed. In fact, the fixed policy does not take into account the experienced QoS of the users, and therefore it is the policy which results in the highest risk-state visitation probability. Besides, this probability decreases over time for all the policies: as time goes on, the probability of entering a risk state over the remaining time steps decreases. The reward $u_t(s)$ increases for the lower values of $t$ until reaching a maximum value and then decreases, for all the policies. In fact, for the lower values of $t$, the probability of visiting a risk state is high, and this affects the expected value of the reward (recall that in a risk state the reward is equal to zero). As time goes on, this probability decreases, and thus the expected reward increases. However, at the later time steps, the number of remaining decision stages is low, and hence the expected reward (the total number of successfully transmitted packets over the remaining time slots) decreases.

Fig. 3: Performance of the minimum-risk policy $\bar{\pi}^*$, the maximum-value policy $\pi^*$, the weighted policy $\pi^*_\xi$ with $\xi = 0.1$, and the fixed policy $\pi_f$. Top: $u_t(s)$; bottom: $\bar{u}_t(s)$, where $s = 0.3 \times 0$ and $T = 9$.

B. Learning

In the learning algorithm, we simulate the wireless channel with a Bernoulli random variable with a number of trials equal to the number of channels associated with each packet for each user. For the learning rate parameter $\alpha_n$, we considered the following expression [11]:

$$\alpha_n = \frac{1}{\left(1 + n(s_t, \ell)\right)^{\gamma}}, \quad (26)$$

where $n(s_t, \ell)$ denotes the number of times the state-action pair $(s_t, \ell)$ was visited until iteration $n$, and $\gamma$ is a positive parameter in $[0.5, 1]$ [11]. We depict in Fig. 4 the optimal (minimum-risk) policy (the number of channels to assign to user 1, $\ell_1 \in \{0, \ldots, 5\}$) computed by the learning algorithm, as a function of the time steps (decision epochs) and $\rho_1$, when $\rho_2$ is fixed to 0. The figure shows a monotonicity property: the number of channels to assign to user 1 increases with time and with $\rho_1$. In fact, as the QoS of user 1 degrades ($\rho_1$ increases), more channels are assigned to it to compensate for this degradation; and as time goes on, the policy is more sensitive to this degradation, as more channels are assigned for the same values of $\rho_1$ at later time steps.

VI. CONCLUSION

In this work, we studied the problem of dynamic channel allocation for URLLC traffic in a multi-user multi-channel wireless network within a novel framework. Due to the stochastic nature of the problem, related to time-varying, fading channels and random arrival traffic, we considered a finite-horizon MDP framework. We determined explicitly the probability of visiting a risk state and wrote it as a cumulative return (risk signal). We then introduced a weighted global value function which incorporates two criteria: reward and risk. By virtue of the value iteration algorithm, we determined the optimal policy. Furthermore, we used a Q-learning algorithm to enable the controller to learn the optimal policy in the absence of channel statistics. We illustrated the performance of our algorithms with numerical studies, and we showed that by adapting the number of parallel transmissions in a smart way, the performance of the system can be substantially enhanced. In future work, we would like to take into account spatial diversity in the dynamic allocation scheme, where both the BS and the user terminals can be equipped with multiple antennas to enhance the system performance.
4,653
1811.02318
2899877876
We consider the problem of learning knowledge graph (KG) embeddings for entity alignment (EA). Current methods use embedding models that mainly focus on triple-level learning, which lacks the ability to capture long-term dependencies existing in KGs. Consequently, the embedding-based EA methods heavily rely on the amount of prior (known) alignment, because the identity information in the prior alignment cannot be efficiently propagated from one KG to another. In this paper, we propose RSN4EA (recurrent skipping networks for EA), which leverages biased random walk sampling for generating long paths across KGs and models the paths with a novel recurrent skipping network (RSN). RSN integrates the conventional recurrent neural network (RNN) with residual learning and can largely improve the convergence speed and performance with only a few more parameters. We evaluated RSN4EA on a series of datasets constructed from real-world KGs. Our experimental results showed that it outperformed a number of state-of-the-art embedding-based EA methods and also achieved comparable performance for KG completion.
KG representation learning has been widely studied in recent years @cite_13 . One of the most famous translational methods is TransE @cite_2 , which models a triple $(s, l, o)$ as $s + l \approx o$. TransE works well for one-to-one relationships, but fails to model more complex relationships like one-to-many and many-to-many. TransR @cite_21 tries to solve this problem by involving a relation-specific matrix $W_l$ to project $s, o$ by $W_l$. PTransE @cite_5 leverages path information to learn inferences among relations. For example, if there exist two triples $(e_1, l_1, e_2), (e_2, l_2, e_3)$, which form a path in the KG, and another triple $(e_1, l_x, e_3)$ holds simultaneously, PTransE models the path information by learning $l_1 \oplus l_2 \approx l_x$, where $\oplus$ denotes the operator used to merge $l_1, l_2$. KG completion is the most prevalent task for KG representation learning, and there also exist some non-translational methods that are particularly tailored for KG completion @cite_17 @cite_1 .
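To make the translational intuition concrete, here is a tiny illustrative sketch (ours, not from the cited papers) of the TransE score and a PTransE-style path composition, assuming the additive variant of the composition operator:

```python
import numpy as np

def transe_score(s, l, o):
    """TransE plausibility: smaller ||s + l - o|| means a more likely triple."""
    return np.linalg.norm(s + l - o, ord=1)

def ptranse_path_score(l1, l2, lx):
    """PTransE with additive composition: the path relation l1 (+) l2
    should be close to the direct relation lx, i.e., l1 + l2 ~ lx."""
    return np.linalg.norm(l1 + l2 - lx, ord=1)

# Toy usage with random 50-dimensional embeddings:
rng = np.random.default_rng(0)
s, l, o = (rng.normal(size=50) for _ in range(3))
print(transe_score(s, l, o))
```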
{ "abstract": [ "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to state-of-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https: github.com mrlyk423 relation_extraction.", "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models -- which potentially limits performance. In this work, we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree -- which are common in highly-connected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set -- however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets -- deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across most datasets.", "We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. 
Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples.", "Representation learning of knowledge bases (KBs) aims to embed both entities and relations into a low-dimensional space. Most existing methods only consider direct relations in representation learning. We argue that multiple-step relation paths also contain rich inference patterns between entities, and propose a path-based representation learning model. This model considers relation paths as translations between entities for representation learning, and addresses two key challenges: (1) Since not all relation paths are reliable, we design a path-constraint resource allocation algorithm to measure the reliability of relation paths. (2) We represent relation paths via semantic composition of relation embeddings. Experimental results on real-world datasets show that, as compared with baselines, our model achieves significant and consistent improvements on knowledge base completion and relation extraction from text.", "Knowledge graph (KG) embedding is to embed components of a KG including entities and relations into continuous vector spaces, so as to simplify the manipulation while preserving the inherent structure of the KG. It can benefit a variety of downstream tasks such as KG completion and relation extraction, and hence has quickly gained massive attention. In this article, we provide a systematic review of existing techniques, including not only the state-of-the-arts but also those with latest trends. Particularly, we make the review based on the type of information used in the embedding task. Techniques that conduct embedding using only facts observed in the KG are first introduced. We describe the overall framework, specific model design, typical training procedures, as well as pros and cons of such techniques. After that, we discuss techniques that further incorporate additional information besides facts. We focus specifically on the use of entity types, relation paths, textual descriptions, and logical rules. Finally, we briefly introduce how KG embedding can be applied to and benefit a wide variety of downstream tasks such as KG completion, relation extraction, question answering, and so forth.", "" ], "cite_N": [ "@cite_21", "@cite_1", "@cite_2", "@cite_5", "@cite_13", "@cite_17" ], "mid": [ "2184957013", "2728059831", "2127795553", "2952854166", "2759136286", "" ] }
Recurrent Skipping Networks for Entity Alignment
Knowledge graphs (KGs) have become one of the most important resources for many areas, e.g., question answering and recommendation. Many KGs are created and maintained by different parties and in various languages, which makes them inevitably heterogeneous. Entity alignment (EA) aims to address this problem. It finds entities in two KGs referring to the same real-world object.

Recently, a number of methods have started to leverage representation learning techniques for EA (Chen et al. 2017; Sun, Hu, and Li 2017; Sun et al. 2018; Chen et al. 2018). Most of them are based on a classical KG embedding model called TransE (Bordes et al. 2013), which interprets each triple (s, l, o) in a KG as s + l ≈ o, where s and o denote the subject and object entities, respectively, and l denotes the relation label between them. However, these methods may suffer from the problem of modeling multi-relational triples (Lin et al. 2015a). Moreover, they only concern triple-level embeddings, i.e., they train a triple (s, l, o) only using the embeddings of s, l and o. Although the information of multi-hop neighbors can be passed along during several rounds of mini-batches using back propagation (Wang et al. 2017), the efficiency would be severely affected, especially in the case of crossing KGs. A path-based method, IPTransE (Zhu et al. 2017), tries to learn inferences among relations, but it still concentrates on triple-level embedding learning. The long-term dependencies of entities are ignored by the current methods.

For EA, triple-level embedding learning limits the identity information propagating across KGs, especially for entities which are not well connected with other entities or are far away from the entities in the prior alignment (i.e., the entity alignment known ahead of time). Also, since triple-level learning only uses triples involved in the prior alignment to deliver information across KGs, it makes the current methods heavily rely on the amount of prior alignment.

KGs can be regarded as multi-relational graphs, and triples are just paths of length 1. If a KG embedding model is capable of being aware of the associations among entities in long paths, the trained embeddings would contain much richer information and thus help EA. However, none of the current EA methods takes modeling KG paths into consideration. To model KG paths, there exist two challenges that need to be solved. The first one is how to obtain these paths. A KG may have millions (even billions) of triples, and the number of its paths is also huge. It is difficult, if not impossible, to use all of them for training. The second challenge is how to model these paths. The edges in the paths have labels and directions. We cannot simply ignore them when modeling the dependencies among entities.

In this paper, we propose a new method, called RSN4EA (recurrent skipping networks for EA), which employs random walk sampling to efficiently sample paths across KGs, and models the paths with a novel recurrent skipping network (RSN). According to the network representation learning literature (Perozzi, Al-Rfou, and Skiena 2014; Grover and Leskovec 2016), an appropriate sampling method reduces computational complexity and often brings good performance. So, sampling paths from KGs is also worth exploring. Compared with networks, which typically consider edges with no labels or directions, KGs have more complex graph structures. Furthermore, our problem requires propagating the identity information through the paths across KGs.
To deal with these issues, we design a biased random walk sampling method to flexibly control the depth and cross-KG biases of the generated paths.

To model paths or sentences, Skip-gram (Mikolov, Yih, and Zweig 2013) is widely used in the natural language processing area. It can efficiently encode neighboring information into embeddings, which is important for discovering clusters or communities of related nodes (words). However, Skip-gram does not consider the order of nodes, while relations in KGs have different directions and numerous labels. The recurrent neural network (RNN) is a popular sequential model. It assumes that the next element depends only on the current input and the previous hidden state, but this assumption falls short for KG path modeling. Take a path (s, l, o), (o, l′, o′) for example. An RNN uses the input l′ and the previous hidden state h_o to infer l′ → o′. However, all the context of l′ is mixed into h_o, which overlooks the importance of o. Note that this path is also constituted by two triples: to predict the object entity of (o, l′, ?), both o and l′ should carry more weight than the other elements. To achieve this, we combine the idea of residual learning (He et al. 2016) with RNN to let the output hidden state of l′ learn a residual between the subject o and the desired prediction o′, which leads to our recurrent skipping network (RSN).

To evaluate RSN4EA, we built a series of datasets from real-world KGs. Previous work did not carefully consider the density and degree distributions of its datasets, which makes the datasets used in those experiments much denser than the original KGs; moreover, the sampling methods are described only vaguely. In this paper, we created four pairs of datasets, sampled with a reliable method, that cover mono-/cross-lingual scenarios and normal/high density.

The main contributions of this paper are listed below:
• We propose RSN4EA, an end-to-end framework for EA, which is capable of capturing the long-term dependencies existing in KGs.
• We design a biased random walk sampling method specific to EA, which generates the desired paths with controllable depth and cross-KG biases.
• To remedy the limitations of RNN for KG path modeling, we present RSN, which leverages the idea of residual learning and can largely improve convergence speed and performance.
• To demonstrate the feasibility of our method, we carried out EA experiments on datasets with different densities and languages. The results showed that our method stably outperformed the existing methods. RSN4EA also achieved comparable performance for KG completion.

KG Representation Learning

KG representation learning has been widely studied in recent years (Wang et al. 2017). One of the most famous translational methods is TransE (Bordes et al. 2013), which models a triple (s, l, o) as s + l ≈ o. TransE works well for one-to-one relationships but fails to model more complex relationships like one-to-many and many-to-many. TransR (Lin et al. 2015a) tries to solve this problem by introducing a relation-specific matrix W_l to project s and o. PTransE (Lin et al. 2015b) leverages path information to learn inferences among relations. For example, if two triples (e_1, l_1, e_2) and (e_2, l_2, e_3) form a path in a KG, and another triple (e_1, l_x, e_3) holds simultaneously, PTransE models the path information by learning l_1 ⊕ l_2 ≈ l_x, where ⊕ denotes the operator used to merge l_1 and l_2.
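To make the translational reading concrete, here is a minimal sketch of TransE's scoring function, its margin-based training signal, and PTransE-style path composition with ⊕ taken as addition. It is an illustration of the cited models under simplifying assumptions, not the authors' code; the entity and relation names are toy examples.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64

# Toy embedding tables; in practice these are parameters learned by SGD.
entities = ["Tim_Berners-Lee", "United_Kingdom", "W3C"]
relations = ["country", "employer"]
entity_emb = {e: rng.normal(scale=0.1, size=DIM) for e in entities}
relation_emb = {r: rng.normal(scale=0.1, size=DIM) for r in relations}

def transe_score(s, l, o):
    """TransE reads a triple (s, l, o) as s + l ≈ o, so its plausibility
    score is the distance ||s + l - o|| (lower is better)."""
    return np.linalg.norm(entity_emb[s] + relation_emb[l] - entity_emb[o])

def margin_loss(pos, neg, margin=1.0):
    """Margin-based ranking loss: a correct triple should score at least
    `margin` lower (better) than a corrupted one."""
    return max(0.0, margin + transe_score(*pos) - transe_score(*neg))

# PTransE-style path composition with ⊕ = addition: for a 2-hop path
# l1 -> l2 that co-occurs with a direct relation lx, push l1 + l2 ≈ lx.
def path_residual(l1, l2, lx):
    return np.linalg.norm(relation_emb[l1] + relation_emb[l2] - relation_emb[lx])
```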
KG completion is the most prevalent task for KG representation learning, and there also exist some non-translational methods particularly tailored for KG completion (Trouillon et al. 2016; Dettmers et al. 2018).

Embedding-based Entity Alignment

Existing embedding-based EA methods are usually based on TransE. Specifically, MTransE (Chen et al. 2017) separately trains the entity embeddings of two KGs and learns various transformations to align the embeddings. JAPE (Sun, Hu, and Li 2017) is also based on TransE but learns the embeddings of the two KGs in a unified space; additionally, it leverages attributes to refine the entity embeddings. IPTransE (Zhu et al. 2017) employs an iterative process on top of the original PTransE (Lin et al. 2015b) for EA. Different from our method, it still concentrates on triple-level learning and does not consider the dependencies among entities in KG paths. BootEA (Sun et al. 2018) takes bootstrapping into consideration and uses a sophisticated strategy to update the alignment during iterations. KDCoE (Chen et al. 2018) leverages co-training to separately train on entity relations and entity descriptions. Like bootstrapping, propagating alignment back and forth may introduce errors; moreover, KDCoE requires extra resources such as pre-trained multi-lingual word embeddings and descriptions. Because all the aforementioned methods use TransE-like models as their basis, they are not capable of capturing long-term dependencies in KGs, and the identity information propagated between different KGs is also limited.

Network Representation Learning

DeepWalk (Perozzi, Al-Rfou, and Skiena 2014) is one of the most well-known models in the network representation learning area. It uses uniform random walks to sample paths in a network and applies Skip-Gram (Mikolov, Yih, and Zweig 2013) to model the generated paths. Skip-Gram learns the embedding of a node by maximizing the probabilities of its neighbors, which captures the information among the nodes. node2vec (Grover and Leskovec 2016) proposes biased random walks to refine the process of sampling paths from a network. It smoothly controls the node-selection strategy to make the random walks explore neighbors in a breadth-first-search as well as a depth-first-search fashion. The EA-specific random walk sampling proposed in this paper is inspired by node2vec but concentrates on generating long and cross-KG paths. The methods in the network representation learning area mainly focus on discovering clusters or communities of related nodes. However, they are inappropriate for EA, since EA requires identifying aligned entities across two KGs.

Method Overview

A KG is defined as a directed multi-relational graph whose nodes correspond to entities and whose edges are of the form (subject, label, object) (denoted by (s, l, o)), each indicating that a relation named label exists between the entities subject and object. EA is the task of finding entities in two KGs that refer to the same real-world object. In many cases (e.g., Linked Open Data), a subset of aligned entities, called the prior alignment, is known and serves as training data. Based on it, many existing methods, such as (Zhu et al. 2017; Sun, Hu, and Li 2017; Sun et al. 2018), merge the two KGs into a connected joint graph and learn entity embeddings on it. Figure 1 illustrates the architecture of our method, which accepts two KGs as input and adopts an end-to-end framework for aligning the entities between them.
The main modules in the framework are described as follows:
• Biased random walk sampling. To leverage graph sampling for EA, we first create a joint graph of the two KGs by copying the edges of one entity in the prior alignment to its counterpart. Additionally, since the relation directions between entities are often arbitrary, we add a virtual reverse relation, marked by "−", for each existing relation, so the object entity of a triple can follow the reverse relation to reach the subject entity. Figure 1 exemplifies the joint graph of KG_1 and KG_2 with reverse relations. Then, we conduct biased random walk sampling on the joint graph to explore longer and cross-KG paths; we describe the details in the next section. Finally, each path, e.g., (e_1, l_1, e_2), (e_2, l_2, e_3), ..., (e_{T−1}, l_T, e_T), is converted into a KG sequence e_1 → l_1 → e_2 → ··· → e_{T−1} → l_T → e_T and fed to the next module.
• Recurrent skipping network (RSN). RNN is a natural and flexible choice for processing sequential data. However, it is not aware of the different element types ("entity" vs. "relation") in KG sequences or of the basic KG structural units (i.e., triples). To cope with these issues, we propose RSN, which distinguishes entities from relations and leverages the idea of residual learning by letting a subject entity skip its connection to directly participate in predicting the object entity. We present RSN in detail shortly. Each output of RSN is passed to the type-based noise-contrastive estimation for learning to predict the next element.
• Type-based noise-contrastive estimation (NCE). NCE (Gutmann and Hyvärinen 2010) is a popular estimation method in natural language processing, which samples a small number of negative classes to approximate the full distribution. As aforementioned, entities and relations are of different types, so we design a type-based method that samples negative examples according to element type and uses different weight matrices and biases to calculate the logits for the two types of elements. Through back propagation, the embedding of each input element is not only learned from predicting its successor but also associated with the other elements along the KG sequence.
• Embedding-based EA. With the entity embeddings of the two KGs learned in a unified space, given a source entity, its aligned target entity can be discovered by searching for the nearest neighbors in this space using cosine similarity.

(Figure 1: Architecture of the proposed method.)

Biased Random Walk Sampling for EA

Random walks have long been used as sampling methods in network representation learning (Perozzi, Al-Rfou, and Skiena 2014). KGs share many features with networks, such as large scale and sparsity. In this section, we present a biased random walk sampling method specific to EA, which can efficiently explore long and cross-KG sequences.

Random Walk Sampling

Given a start entity u in the joint graph, an unbiased random walk method obtains the probability distribution over next entities by the following equation:

P(c_{i+1} = x | c_i = v) = π_{vx} / Z if an edge (v, l_?, x) exists, and 0 otherwise, (1)

where c_i denotes the i-th node in the walk and c_0 = u; l_? denotes an arbitrary relation from the current entity v to the next entity x; π_{vx} is the unnormalized transition probability between v and x; and Z is the normalizing constant.

Biased Random Walk Sampling

The above random walk method selects next entities according to a uniform distribution.
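As a minimal sketch (assuming the joint graph is stored as an adjacency list from each entity to its outgoing (relation, neighbor) edges, with the virtual reverse relations already added), one unbiased step per Eq. (1) is simply a uniform choice:

```python
import random

def unbiased_step(graph, v):
    """One step of the unbiased walk in Eq. (1): with all unnormalized
    weights pi_vx equal, dividing by Z makes the choice uniform over
    v's outgoing (relation, neighbor) edges."""
    label, x = random.choice(graph[v])
    return label, x
```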
To model KGs, the basic training unit is the triple, which means the information of nearby entities can be updated via back propagation across different mini-batches. However, delivering the information of farther entities through triples alone is hard and ineffective, so capturing longer paths in KGs becomes helpful. To achieve this, we employ the 2nd-order random walk sampling method of (Grover and Leskovec 2016) and propose a depth bias to smoothly control the depths of the sampled paths. Formally, given an entity v in the joint graph, the depth bias between v's previous entity t and next entity x, denoted by b_dpt(t, x), is defined as follows:

b_dpt(t, x) = α if dist(t, x) = 2, and 1 − α if dist(t, x) < 2, (2)

where dist(·, ·) computes the shortest-path distance, whose value must be one of {0, 1, 2}. The hyper-parameter α ∈ (0, 1) controls the depths of the random walks; to favor longer paths, we let α > 0.5. For multi-edges, we treat the biases as equal.

Consider Figure 1 as an example. Suppose a random walk has just traversed the edge (t, country−, v) and now resides at v. The walk needs to decide on its next step, so it evaluates the transition probabilities π_{vx} on the edges (v, l_?, x) leading from v. We set the unnormalized transition probability to π_{vx} = b_dpt(t, x) × w_{vx}, where w_{vx} is the static edge weight; in the case of unweighted graphs, w_{vx} = 1.

Furthermore, specific to EA, we propose a cross-KG bias to favor paths connecting the two KGs. Formally, given an entity v in the joint graph, the cross-KG bias between v's previous entity t and next entity x, denoted by b_crs(t, x), is defined as follows:

b_crs(t, x) = β if t and x belong to different KGs, and 1 − β otherwise, (3)

where β ∈ (0, 1) is a hyper-parameter controlling the preference of the random walks for crossing the two KGs; to favor cross-KG paths, we let β > 0.5. As with the depth bias, using the previous and next entities avoids walking back and forth between only two entities in different KGs.

Finally, we combine b_dpt(t, x) and b_crs(t, x) into the overall bias b(t, x) and perform random walk sampling based on it:

b(t, x) = b_dpt(t, x) × b_crs(t, x). (4)

Recall the above example. According to the overall bias, the walk at v prefers W3C and English in KG_2 to English in KG_1. A KG sequence converted from this walk would be United Kingdom → country− → Tim Berners-Lee → employer → W3C.

Recurrent Skipping Networks

In this section, we first describe the conventional RNN, and then propose our RSN and discuss its characteristics.

Recurrent Neural Networks

RNN is a popular class of artificial neural networks that performs well on sequential data. Given a KG sequence x_1 → x_2 → ... → x_T as input, an RNN processes it recurrently with the following equation:

h_t = tanh(W_h h_{t−1} + W_x x_t + b), (5)

where h_t is the output hidden state at time step t, W_h and W_x are weight matrices, and b is the bias. RNN is capable of using few parameters to cope with input of any length, and it has achieved state-of-the-art performance in many areas. However, a few limitations remain when RNN is used to process KG sequences. First, the elements in a KG sequence are of two different types, namely "entity" and "relation", which always appear in alternating order; the conventional RNN, however, treats them as elements of a single type, like words or nodes, which makes capturing the information in KG sequences less effective. Second, every KG sequence is constituted by triples, but these basic structural units are overlooked by RNN.
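Stepping back to the sampling procedure above, here is a minimal sketch of one biased step, plugging the depth bias (Eq. (2)) and the cross-KG bias (Eq. (3)) into the transition weights (Eqs. (1) and (4)). It assumes the adjacency-list graph from the earlier sketch, plus hypothetical helpers `kg_of` (which KG an entity belongs to) and `shortest_dist` (the 0/1/2-valued distance); this is our reading of the procedure, not the released code.

```python
import random

def depth_bias(dist_tx, alpha=0.9):
    # Eq. (2): favor next entities x that are 2 hops away from the previous
    # entity t (deeper walks); alpha > 0.5 prefers longer paths.
    return alpha if dist_tx == 2 else 1.0 - alpha

def cross_kg_bias(t, x, kg_of, beta=0.9):
    # Eq. (3): favor steps whose previous and next entities lie in
    # different KGs; beta > 0.5 prefers cross-KG paths.
    return beta if kg_of[t] != kg_of[x] else 1.0 - beta

def biased_step(graph, t, v, kg_of, shortest_dist):
    """One 2nd-order step: given previous entity t and current entity v,
    sample an edge (v, l, x) with probability proportional to
    b_dpt(t, x) * b_crs(t, x) * w_vx (w_vx = 1 for unweighted graphs)."""
    edges = graph[v]
    weights = [depth_bias(shortest_dist(t, x)) * cross_kg_bias(t, x, kg_of)
               for _, x in edges]
    return random.choices(edges, weights=weights, k=1)[0]
```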
To see the limitation concretely, let x_t denote a relation in a KG sequence and (x_{t−1}, x_t, x_{t+1}) a triple involving x_t. As shown in Eq. (5), to predict x_{t+1}, an RNN combines the hidden state h_{t−1} and the current input x_t, where h_{t−1} is a mix of the information of all the previous elements x_1, ..., x_{t−1}. However, the information of x_{t−1} and x_t within the triple should be weighted more heavily.

Improving RNN with the Skipping Mechanism

To better model KG sequences and remedy this limitation of the conventional RNN, we propose the recurrent skipping network (RSN), which refines RNN with a simple but effective skipping mechanism. The basic idea of RSN is to shortcut the current input entity so that it directly participates in predicting its object entity. In other words, an input element in a KG sequence whose type is "entity" not only contributes to predicting the next relation but also directly takes part in predicting its object entity. Figure 1 shows an RSN example. Formally, given a KG sequence x_1 → x_2 → ... → x_T as input, the skipping operation of an RSN is formulated as follows:

h′_t = h_t if x_t is an entity, and S_h h_t + x_{t−1} if x_t is a relation, (6)

where h′_t denotes the output hidden state of the RSN at time step t, h_t denotes the corresponding RNN output, and S_h is a weight matrix. In this paper, we select a weighted sum for the skipping operation, but other combination methods can be supported as well.

Explanation of RSN. Intuitively, RSN explicitly distinguishes entities and relations and allows subject entities to skip their connections to directly participate in object entity prediction. Behind this simple skipping operation lies a deeper explanation called residual learning. Let F(x) be an original mapping, where x denotes the input, and let H(x) be the expected mapping. Compared to directly optimizing F(x) to fit H(x), residual learning hypothesizes that it is easier to optimize F(x) to fit the residual part H(x) − x. In an extreme case, if an identity mapping is optimal (i.e., H(x) = x), pushing the residual to zero is much easier than fitting an identity mapping with a stack of nonlinear layers (He et al. 2016).

Specifically, given a KG sequence ··· → x_{t−1} → x_t → x_{t+1} → ···, where (x_{t−1}, x_t, x_{t+1}) forms a triple, RRN (the recurrent residual network) leverages residual learning by regarding the process at each time step as a mini residual network with the previous hidden state of the RNN as input. At time step t, for example, RRN regards h_{t−1} as the input and learns the residual h_t := H(h_{t−1}, x_t) − h_{t−1}, where H(h_{t−1}, x_t) denotes the expected mapping for (h_{t−1}, x_t). It still ignores the KG structure, in which x_{t−1} and x_t should be weighted more for predicting x_{t+1}. Differently, RSN leverages residual learning in a new way: instead of using an input hidden state (h_{t−1}) as the subtrahend, it directly chooses the subject entity x_{t−1}. Making the output hidden state h′_t fit x_{t+1} directly may be hard, but learning the residual between x_{t+1} and x_{t−1} may be easier, which is the key characteristic of RSN.

Experiments and Results

We evaluated RSN4EA for EA using a variety of real-world datasets. In this section, we report the results compared with several state-of-the-art embedding-based EA methods. Since RSN4EA is capable of learning KG embeddings, we also conducted experiments to assess its performance on KG completion (Bordes et al. 2013), a classical task for KG representation learning.
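Before turning to the experiments, here is a minimal PyTorch-style sketch of the skipping operation in Eq. (6) on top of an off-the-shelf LSTM. It reflects our reading of the formula (entities at even time steps, relations at odd ones), not the authors' released implementation.

```python
import torch
import torch.nn as nn

class RSN(nn.Module):
    """A recurrent network whose output at each relation position is mixed
    with the preceding subject entity, per Eq. (6)."""
    def __init__(self, dim):
        super().__init__()
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.S_h = nn.Linear(dim, dim, bias=False)  # the weight matrix S_h

    def forward(self, seq):
        # seq: (batch, T, dim); even positions hold entity embeddings,
        # odd positions hold relation embeddings.
        h, _ = self.rnn(seq)
        out = h.clone()
        rel_pos = torch.arange(1, seq.size(1), 2)  # steps where x_t is a relation
        # Skipping operation: h'_t = S_h h_t + x_{t-1} at relation positions;
        # h'_t = h_t (unchanged) at entity positions.
        out[:, rel_pos] = self.S_h(h[:, rel_pos]) + seq[:, rel_pos - 1]
        return out
```

In the full model, each output h′_t would then feed the type-based NCE to predict the next element, as described in the framework overview.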
Datasets

Although the datasets used by the existing methods (Chen et al. 2017; Sun, Hu, and Li 2017; Sun et al. 2018) are all sampled from real-world KGs, such as DBpedia and Wikidata, their density and degree distributions are quite different from the original ones. We argue that this may prevent a comprehensive and accurate understanding of embedding-based EA. In this paper, we propose a segment-based random PageRank (SRP) sampling method, which can flexibly control the density of the sampled datasets. Random PageRank sampling is an efficient algorithm for large-graph sampling (Leskovec and Faloutsos 2006). It samples nodes according to their PageRank weights and can assign higher biases to more valuable entities. However, due to the characteristics of PageRank, it also favors high-degree nodes. To fulfill our requirements on KG sampling, we divided the entities in a KG into segments according to their degrees and performed sampling on each segment separately. To guarantee that the distributions of the sampled datasets follow the original KGs, we used the Kolmogorov-Smirnov (K-S) test to measure the difference, setting the expected difference ε to 5% for all the datasets.

Based on the above sampling method, we obtained four pairs of datasets to evaluate the performance of the embedding-based EA methods. The detailed statistics are shown in Table 1. Each dataset contains nearly 15,000 entities. The normal datasets follow the density of the original KGs. For the dense datasets, we randomly deleted low-degree entities in the original KGs until the average degree doubled, and then conducted sampling; the dense datasets are therefore more similar to the datasets used by the existing methods (Chen et al. 2017; Sun, Hu, and Li 2017; Sun et al. 2018). Figure 2 shows the degree distributions of the source KGs and the datasets sampled by different methods. We can see that our normal datasets best represent the original KGs.

Implementation Details

We built RSN4EA using TensorFlow. The embeddings and weight matrices were initialized with the Xavier initializer, and the embedding size was set to 256. We used a two-layer LSTM (Hochreiter and Schmidhuber 1997) with Dropout (Srivastava et al. 2014) and applied batch normalization (Ioffe and Szegedy 2015) to both the input and output of an RSN. We used the Adam optimizer (Kingma and Ba 2015) with mini-batch size 512 and learning rate 0.003, and trained an RSN for up to 30 epochs. The random walk biases were set to α = 0.9 and β = 0.9, and the walk length was set to 15. The source code, datasets and results will be available online. For the comparative methods, we used the source code provided with their papers, except for KDCoE, which has not released its source code yet; we implemented KDCoE ourselves. We made our best effort to tune the hyper-parameters for optimal performance. Following previous work (Sun, Hu, and Li 2017; Sun et al. 2018), we used 30% of the reference alignment as the prior alignment and chose Hits@1, Hits@10 and mean reciprocal rank (MRR) as the evaluation metrics. The best results are marked in bold throughout.

Results on Entity Alignment

Tables 2 and 3 report the EA results on the monolingual and cross-lingual datasets, respectively. It is evident that capturing long-term dependencies through paths enables RSN4EA to outperform the existing EA methods. Generally, the heterogeneity between two different KGs is more severe than that between two language versions of the same KG.
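As a side note on the protocol, the following sketch shows how such results are typically computed: rank candidate targets by cosine similarity in the unified space (as in the embedding-based EA module above) and read off Hits@k and MRR. The array names are illustrative.

```python
import numpy as np

def evaluate_alignment(src_emb, tgt_emb, gold):
    """src_emb, tgt_emb: (n, d) arrays of entity embeddings in the unified
    space; gold[i] is the index in tgt_emb aligned with source entity i."""
    # Cosine similarity = dot product of L2-normalized vectors.
    s = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    t = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = s @ t.T
    # Rank of the gold target for each source entity (1 = nearest neighbor).
    order = np.argsort(-sim, axis=1)
    ranks = 1 + np.array([int(np.where(order[i] == g)[0][0])
                          for i, g in enumerate(gold)])
    return {"Hits@1": float(np.mean(ranks <= 1)),
            "Hits@10": float(np.mean(ranks <= 10)),
            "MRR": float(np.mean(1.0 / ranks))}
```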
A key module of embedding-based EA methods is embedding the information of entities in different KGs into a unified space; thus, aligning entities across heterogeneous KGs is the harder setting for such methods. With the help of the established long-term dependencies, RSN4EA captured richer information about the KGs and learned more accurate embeddings, leading to more significant improvements on the more heterogeneous datasets (DBP-WD and DBP-YG).

The two tables also demonstrate that the embedding-based EA methods are sensitive to density: the performance of all the methods on the normal datasets is significantly lower than on the dense datasets. Although the normal datasets are more difficult, RSN4EA still showed considerable advantages over the other methods, since it used long paths to capture implicit connections among entities and represented them in the embeddings. It is worth noting that RSN4EA showed a larger superiority in terms of Hits@1 and MRR. This is because Hits@1 considers only completely correct results and MRR also favors top-ranked results; as aforementioned, RSN4EA embedded the long-term dependencies into the learned embeddings, which contain richer information to help identify aligned entities in different KGs, and the better performance on these two metrics verifies this point.

Results on KG Completion

Since RSN4EA can train KG embeddings for EA, it is also interesting to apply it to KG completion (Bordes et al. 2013), one of the most prevalent tasks for KG representation learning. To do so, we removed the cross-KG bias during random walk sampling and conducted the KG completion experiment. Specifically, for a triple (s, l, o), KG completion aims to predict the object entity o given (s, l, ?) or the subject entity s given (?, l, o). FB15K and WN18 are the most widely used benchmark datasets for KG completion (Bordes et al. 2013). However, recent studies (Toutanova and Chen 2015; Dettmers et al. 2018) exposed that these two datasets leak test data, and a new dataset called FB15K-237 was recommended instead; we used FB15K-237 to assess RSN4EA in our experiments.

The experimental results are shown in Table 4. (In the tables, "†" denotes methods that we executed ourselves using the provided source code, because some metrics were not reported in the literature; "−" denotes unknown results for which we could not obtain the source code.) ConvE, a method tailored to KG completion, obtained the best results on FB15K-237, followed by our RSN4EA. It is worth noting that, while predicting the entities of a single triple is not the primary goal of RSN4EA, it still achieved comparable or better performance than many methods focusing on KG completion, which indicates the potential of leveraging KG paths for learning embeddings.

Further Analysis

Comparison with Alternative Networks

To assess the feasibility of RSN, we conducted experiments to compare it with RNN and RRN. Both RNN and RRN were implemented with the same multi-layer LSTM units, Dropout and batch normalization. The comparison results are shown in Figure 3. Since RNN and RRN do not consider the structure of KG paths, their embedding learning converged very slowly. Compared with RNN, RSN achieved better performance with only 1/30 of the time cost, which indicates that this particular residual structure is essential for RSN4EA. Furthermore, RRN is a generic network that incorporates residual learning into the conventional RNN.
But it achieved only a small improvement over RNN, which implies that simply combining residual learning with RNN cannot significantly help KG sequence modeling.

Sensitivity to the Proportion of Prior Alignment

The proportion of prior alignment may significantly influence the performance of embedding-based EA methods, and a large amount of prior alignment may not be obtainable in practice. We tested the performance of RSN4EA and BootEA (the second-best method in our previous experiments) while varying the proportion of prior alignment from 50% down to 10% in steps of 10%. Due to space limitations, Figure 4 depicts only the results on the DBP-WD dataset. The performance of both methods dropped continually with the decreasing proportion of prior alignment, but the curves of RSN4EA are gentler than those of BootEA. Specifically, on the normal dataset, over the four proportion intervals, RSN4EA lost 7.4%, 8.2%, 16.5% and 30.2% on Hits@1, respectively, while BootEA lost 11.8%, 12.0%, 22.3% and 49.8%, which demonstrates that RSN4EA is the more stable method. Additionally, when the proportion dropped to 10%, the Hits@1 result of RSN4EA on the normal dataset was almost twice that of BootEA, which indicates that modeling paths helps RSN4EA propagate the identity information across KGs more effectively and alleviates the dependence on the amount of prior alignment.

Sensitivity to Random Walk Length

We also observed how the random walk length affects the EA performance. As shown in Figure 5, on all eight datasets, the Hits@1 results increased sharply as the length grew from 5 to 15, which indicates that modeling longer paths helps learn KG embeddings and obtain better performance. Furthermore, the performance approached saturation for lengths 15 to 25. Therefore, in consideration of efficiency, the results reported in Tables 2 and 3 are based on length 15.

Conclusion and Future Work

In this paper, we proposed RSN4EA, which employs biased random walks to sample paths specific to EA and leverages RSN for learning KG embeddings. Our experimental results showed that RSN4EA not only outperformed the existing embedding-based EA methods but also achieved superior performance compared with RNN and RRN. It also worked well for KG completion. In future work, we plan to continue exploring KG sequence learning. First, KGs often contain rich textual information such as names and descriptions, which can be modeled with character-/word-level sequential models. RSN is capable of modeling KGs in a sequential manner; therefore, it is worth studying a unified sequential model that learns KG embeddings using all valuable information. Second, in addition to paths, the neighboring information provides another type of context that may also be helpful for learning KG embeddings. We look forward to integrating the neighboring context to further improve the performance.
5,225
1811.02318
2899877876
We consider the problem of learning knowledge graph (KG) embeddings for entity alignment (EA). Current methods use embedding models that mainly focus on triple-level learning, which lacks the ability to capture the long-term dependencies existing in KGs. Consequently, the embedding-based EA methods rely heavily on the amount of prior (known) alignment, because the identity information in the prior alignment cannot be efficiently propagated from one KG to another. In this paper, we propose RSN4EA (recurrent skipping networks for EA), which leverages biased random walk sampling to generate long paths across KGs and models the paths with a novel recurrent skipping network (RSN). RSN integrates the conventional recurrent neural network (RNN) with residual learning and can largely improve the convergence speed and performance with only a few more parameters. We evaluated RSN4EA on a series of datasets constructed from real-world KGs. Our experimental results showed that it outperformed a number of state-of-the-art embedding-based EA methods and also achieved comparable performance for KG completion.
DeepWalk @cite_4 is one of the most well-known models in the network representation learning area. It uses uniform random walks to sample paths in a network, and applies Skip-Gram @cite_11 to model the generated paths. Skip-Gram learns the embedding of a node by maximizing the probabilities of its neighbors, which captures the information among the nodes. node2vec @cite_7 proposes biased random walks to refine the process of sampling paths from a network. It smoothly controls the node selection strategy to make the random walks explore neighbors in a breadth-first-search as well as a depth-first-search fashion. In this paper, the proposed EA-specific random walk sampling is inspired by node2vec, but concentrates on generating long and cross-KG paths.
{ "abstract": [ "We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.", "Continuous space language models have recently demonstrated outstanding results across a variety of tasks. In this paper, we examine the vector-space word representations that are implicitly learned by the input-layer weights. We find that these representations are surprisingly good at capturing syntactic and semantic regularities in language, and that each relationship is characterized by a relation-specific vector offset. This allows vector-oriented reasoning based on the offsets between words. 
For example, the male female relationship is automatically learned, and with the induced vector representations, “King Man + Woman” results in a vector very close to “Queen.” We demonstrate that the word vectors capture syntactic regularities by means of syntactic analogy questions (provided with this paper), and are able to correctly answer almost 40 of the questions. We demonstrate that the word vectors capture semantic regularities by using the vector offset method to answer SemEval-2012 Task 2 questions. Remarkably, this method outperforms the best previous systems." ], "cite_N": [ "@cite_4", "@cite_7", "@cite_11" ], "mid": [ "2154851992", "2366141641", "2141599568" ] }
Recurrent Skipping Networks for Entity Alignment
Knowledge graphs (KGs) have become one of the most important resources for many areas, e.g., question answering and recommendation. Many KGs are created and maintained by different parties and in various languages, which makes them inevitably heterogeneous. Entity alignment (EA) aims to address this problem. It finds entities in two KGs referring to the same real-world object. Recently, a number of methods start to consider leveraging the representation learning techniques for EA (Chen et al. 2017;Sun, Hu, and Li 2017;Sun et al. 2018;Chen et al. 2018). Most of them are based on a classical KG embedding model called TransE (Bordes et al. 2013), which interprets each triple (s, l, o) in a KG as s + l ≈ o, where s and o denote the subject and object entities respectively, and l denotes the relation label between them. However, these methods may suffer from the problem of modeling multirelational triples (Lin et al. 2015a). Moreover, they only concern triple-level embeddings, i.e., they train a triple (s, l, o) only using the embeddings of s, l and o. Although the information of multi-hop neighbors can be passed during several rounds of mini-batches using back propagation (Wang et al. 2017), the efficiency would be severely affected, especially for the case of crossing KGs. A path-based method IPTransE (Zhu et al. 2017) tries to learn inferences among relations, but it still concentrates on the triple-level embedding learning. The long-term dependencies of entities are ignored by the current methods. For EA, the triple-level embedding learning limits the identity information propagating across KGs, especially for the entities which are not well connected with other entities or far away from the entities in prior alignment (i.e., entity alignment known ahead of time). Also, the triple-level learning only uses triples involved in prior alignment to deliver information across KGs, it also makes the current methods heavily rely on the amount of prior alignment. KGs can be regarded as multi-relational graphs and triples are just paths of length 1. If a KG embedding model is capable of being aware of the associations among entities in long paths, the trained embeddings would contain much richer information and thus help EA. However, none of the current EA methods takes modeling KG paths into consideration. To model KG paths, there exist two challenges that need to be solved. The first one is how to obtain these paths. A KG may have millions (even billions) of triples and the number of its paths is also huge. It is difficult, if not impossible, to use all of them for training. The second challenge is how to model these paths. The edges in the paths have labels and directions. We cannot simply ignore them when modeling the dependencies among entities. In this paper, we propose a new method, called RSN4EA (recurrent skipping networks for EA), which employs random walk sampling to efficiently sample paths across KGs, and models the paths with a novel recurrent skipping network (RSN). According to the network representation learning (Perozzi, Al-Rfou, and Skiena 2014;Grover and Leskovec 2016), an appropriate sampling method reduces computational complexity and often brings good performance. So, sampling paths from KGs is also worth exploring. Compared with networks, which typically consider edges with no labels or directions, KGs have more complex graph structures. Furthermore, our problem requires to propagate the identity information through the paths across KGs. 
To deal with these issues, we design a biased random walk sampling method to fluently control the depth and cross-KG biases of generated paths. To model paths or sentences, Skip-gram (Mikolov, Yih, and Zweig 2013) is widely used in the natural language pro-cessing area. It can efficiently encode the neighboring information into embeddings, which is important for discovering clusters or communities of related nodes (words). However, Skip-gram does not consider the order of nodes, while relations in KGs have different directions and enormous labels. The recurrent neural network (RNN) is a popular sequential model. It assumes that the next element only depends on the current input and the previous hidden state. But this assumption has inconsiderations for KG path modeling. Take a path (s, l, o), (o, l , o ) for example, RNN uses the input l and the previous hidden state h o to infer l → o . However, all the context of l is mixed in h o , which overlooks the importance of o. Note that this path is also constituted by two triples. To predict the object entity of (o, l , ?), both o and l should be more appreciated than others. To achieve this, we combine the idea of residual learning (He et al. 2016) with RNN to let the output hidden state of l learn a residual between the subject o and the desired prediction o , which leads to our recurrent skipping network (RSN). To evaluate RSN4EA, we built a series of datasets from real-world KGs. The previous work did not carefully consider the density and degree distributions of their datasets, which makes the datasets used in their experiments much denser than the original KGs. Also, their sampling methods are vague. In this paper, we created four couples of datasets, which were sampled with a reliable method and consider mono/cross-lingual scenarios and normal/high density. The main contributions of this paper are listed below: • We propose RSN4EA, an end-to-end framework for EA, which is capable of capturing long-term dependencies existing in KGs. • We design a biased random walk sampling method specific to EA, which generates desired paths with controllable depth and cross-KG biases. • To revise the inconsideration of RNN for KG path modeling, we present RSN, which leverages the idea of residual learning and can largely improve the convergence speed and performance. • To demonstrate the feasibility of our method, we carried out EA experiments on the datasets with different density and languages. The results showed that our method stably outperformed the existing methods. Also, RSN4EA achieved comparable performance for KG completion. KG Representation Learning KG representation learning has been widely studied in recent years (Wang et al. 2017). One of the most famous translational methods is TransE (Bordes et al. 2013), which models a triple (s, l, o) as s + l ≈ o. TransE works well for one-to-one relationships, but fails to model more complex relationships like one-to-many and many-to-many. TransR (Lin et al. 2015a) tries to solve this problem by involving a relation-specific matrix W l to project s, o by W l . PTransE (Lin et al. 2015b) leverages path information to learn inferences among relations. For example, if there exist two triples (e 1 , l 1 , e 2 ), (e 2 , l 2 , e 3 ), which form a path in KG, and another triple (e 1 , l x , e 3 ) holds simultaneously, PTransE models the path information by learning l 1 ⊕ l 2 ≈ l x , where ⊕ denotes the operator used to merge l 1 , l 2 . 
KG completion is the most prevalent task for KG representation learning, and there also exist some non-translation methods that are particularly tailored for KG completion (Trouillon et al. 2016;Dettmers et al. 2018). Embedding-based Entity Alignment Existing embedding-based EA methods are usually based on TransE. Specifically, MTransE (Chen et al. 2017) separately trains the entity embeddings of two KGs and learns various transformations to align the embeddings. JAPE (Sun, Hu, and Li 2017) is also based on TransE but learns the embeddings of two KGs in a unified space. Additionally, JAPE leverages attributes to refine entity embeddings. IPTransE (Zhu et al. 2017) employs an iterative process on the original PTransE (Lin et al. 2015b) for EA. Different from our method, it still concentrates on triple-level learning and does not consider the dependencies among entities in KG paths. BootEA (Sun et al. 2018) takes bootstrapping into consideration and uses a sophisticated strategy to update alignment during iterations. KDCoE (Chen et al. 2018) leverages cotraining for separately training entity relations and entity descriptions. Like bootstrapping, propagating alignment to each other may involve errors. Moreover, it requires extra resources like pre-trained multi-lingual word embeddings and descriptions. Because all the aforementioned methods use TransE-like models as the basic model, they are not capable of capturing long-term dependencies in KGs and the identity information propagating between different KGs is also limited. Network Representation Learning DeepWalk (Perozzi, Al-Rfou, and Skiena 2014) is one of the most well-known models in the network representation learning area. It uses uniform random walks to sample paths in a network, and applies Skip-Gram (Mikolov, Yih, and Zweig 2013) to model the generated paths. Skip-Gram learns the embedding of a node by maximizing the probabilities of its neighbors, which captures the information among the nodes. node2vec (Grover and Leskovec 2016) proposes biased random walks to refine the process of sampling paths from a network. It smoothly controls the node selection strategy to make the random walks explore neighbors in a breadth-first-search as well as a depth-first-search fashion. In this paper, the proposed EA-specific random walk sampling is inspired by node2vec, but concentrates on generating long and cross-KG paths. The methods in the network representation learning area mainly focus on discovering clusters or communities of related nodes. However, they are inappropriate to EA, since EA requires identifying entity alignment in two KGs. Method Overview A KG is defined as a directed multi-relational graph whose nodes correspond to entities and edges are of the form (subject, label, object) (denoted as (s, l, o)), each of which indicates that there exists a relation of name label between the entities subject and object. EA is the task of finding entities in two KGs that refer to the same real-world object. In many cases (e.g., Linked Open Data), a subset of aligned entities, called prior alignment, is known as training data. Based on it, many existing methods, such as (Zhu et al. 2017;Sun, Hu, and Li 2017;Sun et al. 2018), merge the two KGs into a connected joint graph and learn entity embeddings on it. Figure 1 illustrates the architecture of our method, which accepts two KGs as input and adopts an end-to-end framework for aligning the entities between them. 
The main modules in the framework are described as follows: • Biased random walk sampling. To leverage graph sampling for EA, we first create a joint graph between the two KGs by copying the edges of one entity in prior alignment to another. Additionally, since the relation directions between entities are often arbitrary, we add a virtual reverse relation, marked by " − ", for each existing relation. Thus, the object entity in a triple can follow the reverse relation to reach the subject entity. Figure 1 exemplifies the joint graph of KG 1 and KG 2 with reverse relations. Then, we conduct the biased random walk sampling on the joint graph to explore longer and cross-KG paths. We describe the details in the next section. Finally, each path, e.g., (e 1 , l 1 , e 2 ), (e 2 , l 2 , e 3 ), . . . , (e T −1 , l T , e T ), is converted into a KG sequence e 1 → l 1 → e 2 → · · · → e T −1 → l T → e T and fed to the next module. • Recurrent skipping network (RSN). RNN is natural and flexible to process sequential data types. However, it is not aware of different element types ("entity" vs. "relation") in KG sequences and basic KG structural units (i.e., triples). To cope with these issues, we propose RSN, which distinguishes entities from relations, and leverages the idea of residual learning by letting a subject entity skip its connection to directly participate in the object entity prediction. We present RSN in detail shortly. Each output of RSN is passed to the type-based noise contrastive estimation (NCE) for learning to predict the next element. • Type-based noise contrastive estimation. NCE (Gutmann and Hyvärinen 2010) is a very popular estimation method in natural language processing, which samples a small number of negative classes to approximate the integral distribution. As aforementioned, entities and relations are of different types. So, we design a type-based method to sample negative examples according to element types, and use different weight matrices and biases to respectively calculate the logits for the two types of elements. By back propagation, the embedding of each input element is not only learned from predicting its next, but associated with the elements along the KG sequence. • Embedding-based EA. With entity embeddings from the two KGs learned in a unified space, given a source entity, Figure 1: Architecture of the proposed method its aligned target entity can be discovered by searching the nearest neighbors in this space using the cosine similarity. Biased Random Walk Sampling for EA Random walks have been used as the sampling methods in network representation learning for a long time (Perozzi, Al-Rfou, and Skiena 2014). KGs share a lot of features with networks, such as large scale and sparsity. In this section, we present a biased random walk sampling method specific to EA, which can efficiently explore long and cross-KG sequences. Random Walk Sampling Given a start entity u in the joint graph, an unbiased random walk method obtains the probability distribution of next entities by the following equation: P (c i+1 = x | c i = v) = πvx Z if edge (v, l ? , x) exists 0 otherwise ,(1) where c i denotes the i th node in this walk and we have c 0 = u. l ? denotes an arbitrary relation from current entity v to next entity x. π vx is the unnormalized transition probability between v and x. Z is the normalizing constant. Biased Random Walk Sampling The above random walk method decides next entities in a uniform distribution. 
To model KGs, the basic training unit is triple, which means that the information of near entities can be updated via back propagation in different minibatches. However, delivering the information of farther entities only with triples is hard and low-effective. Capturing longer paths of KGs becomes helpful. To achieve this, we employ a 2 nd -order random walk sampling method in (Grover and Leskovec 2016) and propose a depth bias to smoothly control the depths of sampled paths. Formally, given an entity v in the joint graph, the depth bias between v's previous entity t and next entity x, denoted by b dpt (t, x), is defined as follows: b dpt (t, x) = α dist(t, x) = 2 1 − α dist(t, x) < 2 ,(2) where dist(·, ·) calculates the shortest path distance and its value must be one of {0, 1, 2}. Hyper-parameter α ∈ (0, 1) controls the depths of random walks. To favor longer paths, we let α > 0.5. For multi-edges, we treat their biases equal. Let us see Figure 1 for example. Consider a random walk that just traversed edge (t, country − , v) and now resides at v. The walk now needs to decide on the next step so it evaluates the transition probabilities π vx on edges (v, l ? , x) leading from v. We set the unnormalized transition probability to π vx = b dpt (t, x) × w vx , where w vx is the static edge weight. In the case of unweighted graphs, w vx = 1. Furthermore, specific to EA, we propose a cross-KG bias to favor paths connecting two KGs. Formally, given an entity v in the joint graph, the cross-KG bias between v's previous entity t and next entity x, denoted by b crs (t, x), is defined as follows: b crs (t, x) = β t, x belong to different KGs 1 − β otherwise ,(3) where β ∈ (0, 1) is a hyper-parameter controlling the preferences of random walks across two KGs. To favor cross-KG paths, we let β > 0.5. Similar to the depth bias, using previous and next entities avoids walking back and forth between only two entities in different KGs. Finally, we combine b dpt (t, x) and b crs (t, x) into overall bias b(t, x) and perform random walk sampling based on it: b(t, x) = b dpt (t, x) × b crs (t, x).(4) Recall the above example. According to the overall bias, the walk at v prefers W 3C and English in KG 2 to English in KG 1 . A KG sequence converted from this walk would be U nited Kingdom → country − → T im Berners-Lee → employer → W 3C. Recurrent Skipping Networks In this section, we first describe the conventional RNN. Then, we propose our RSN and discuss its characteristics. Recurrent Neural Networks RNN is a popular class of artificial neural network which performs well on sequential data types. Given a KG sequence x 1 → x 2 → . . . → x T as input, an RNN recurrently processes it with the following equation: h t = tanh(W h h t−1 + W x x t + b),(5) where h t is the output hidden state at time step t. W h , W x are the weight matrices. b is the bias. RNN is capable of using a few parameters to cope with input of any length. It has achieved state-of-the-art performance in many areas. However, there still exist a few limitations when RNN is used to process KG sequences. First, the elements in a KG sequence are of two different types, namely "entity" and "relation", which always appear in an alternant order. However, the conventional RNN regards them as the same type elements like words or nodes, which makes the procedure of capturing the information in the KG sequences less effective. Second, any KG sequences are constituted by triples, but these basic structural units are overlooked by RNN. 
Specifically, let x t denote a relation in a KG sequence and (x t−1 , x t , x t+1 ) denote a triple involving x t . As shown in Eq. (5), to predict x t+1 , RNN would combine the hidden state h t−1 and the current input x t , where h t−1 is a mix of the information of all the previous elements x 1 , . . . , x t−1 . However, it is expected that the information of x t−1 , x t in the triple can be more appreciated. Improving RNN with the Skipping Mechanism To better model KG sequences and remedy the semantic inconsideration of the conventional RNN, we propose the recurrent skipping network (RSN), which refines RNN with a simple but effective skipping mechanism. The basic idea of RSN is to shortcut current input entity to let it directly participate in predicting its object entity. In other words, an input element in a KG sequence whose type is "entity" can not only contribute to predicting its next relation, but also straightly take part in predicting its object entity. Figure 1 shows an RSN example. Formally, given a KG sequence x 1 → x 2 → . . . → x T as input, the skipping operation for an RSN is formulated as follows: h t = h t if x t is an entity S h h t + x t−1 if x t is a relation ,(6) where h t denotes the output hidden state of the RSN at time step t, and h t denotes the corresponding RNN output. S h is the weight matrix. In this paper, we select weighted sum for the skipping operation, but other combination methods can be supported as well. Explanation of RSN. Intuitively, RSN explicitly distinguishes entities and relations, and allows subject entities to skip their connections for directly participating in object entity predication. Behind this simple skipping operation, there exists a deeper explanation called residual learning. Let F (x) be an original mapping, where x denotes the input, and H(x) be the expected mapping. Compared to directly optimizing F (x) to fit H(x), residual learning hypothesizes that it is easier to optimize F (x) to fit the residual part H(x) − x. For an extreme case, if an identity mapping is optimal (i.e., H(x) = x), pushing the residual to zero would be much easier than fitting an identity mapping by a stack of nonlinear layers (He et al. 2016). Specifically, given a KG sequence · · · → x t−1 → x t → x t+1 → · · · , where (x t−1 , x t , x t+1 ) forms a triple, RRN leverages residual learning by regarding the process at each time step as a mini-residual network with the previous hidden state of RNN as input. Take time step t for example, RRN regards h t−1 as input, and learns the residual h t := H(h t−1 , x t ) − h t−1 , where H(h t−1 , x t ) denotes the expected mapping for (h t−1 , x t ). It still ignores the structure of KGs that x t−1 , x t should be more appreciated for predicting x t+1 . Differently, RSN leverages the residual learning in a new way. Instead of using an input as subtrahend (h t−1 ), it directly chooses the subject entity x t−1 as subtrahend. Making the output hidden state h t to fit x t+1 may be hard, but learning the residual of x t+1 and x t−1 may be easier, which is the key characteristic of RSN. Experiments and Results We evaluated RSN4EA for EA using a variety of real-world datasets. In this section, we report the results compared with several state-of-the-art embedding-based EA methods. Since RSN4EA is capable of learning KG embeddings, we also conducted experiments to assess its performance on KG completion (Bordes et al. 2013), which is a classical task for KG representation learning. 
Datasets Although the datasets used by existing methods (Chen et al. 2017;Sun, Hu, and Li 2017;Sun et al. 2018) are all sampled from real-world KGs, such as DBpedia and Wikidata, their density and degree distributions are quite different from the original ones. We argue that this status may prevent us from a comprehensive and accurate understanding of embeddingbased EA. In this paper, we propose a segment-based random PageRank (SRP) sampling method, which can fluently control the density of sampled datasets. Random PageRank sampling is an efficient algorithm for large graph sampling (Leskovec and Faloutsos 2006). It samples nodes according to the PageRank weights and can assign higher biases to more valuable entities. However, due to the characteristic of PageRank, it also favors high-degree nodes. To fulfill our requirements on KG sampling, we divided the entities in a KG into segments according to their degrees and performed sampling separately. To guarantee the distributions of sampled datasets following the original KGs, we used Kolmogorov-Smirnov (K-S) test to measure the difference. We set our expectation to = 5% for all the datasets. Based on the above sampling method, we obtained four couples of datasets to evaluate the performance of the embedding-based EA methods. The detailed statistics are shown in Table 1. Each dataset contains nearly 15,000 entities. For the normal datasets, they follow the density of the original KGs. For the dense datasets, we randomly deleted entities with low degrees in the original KGs to make the average degree doubled, and then conducted sampling. Therefore, the dense datasets are more similar to the datasets used by the existing methods (Chen et al. 2017;Sun, Hu, and Li 2017;Sun et al. 2018). Figure 2 shows the degree distributions of source KGs and the sampled datasets from different methods. We can see that our normal datasets best represent the original KGs. Implementation Details We built RSN4EA using TensorFlow. The embeddings and weight matrices were initialized with Xavier initializer, and the embedding size was set to 256. We used two-layer LSTM (Hochreiter and Schmidhuber 1997) with Dropout (Srivastava et al. 2014), and conducted batch normalization (Ioffe and Szegedy 2015) for both input and output of an RSN. We used Adam optimizer (Kingma and Ba 2015) with minibatch size 512 and learning rate 0.003. We trained an RSN for up to 30 epochs. The random walk biases were set to α = 0.9, β = 0.9, and the walk length was set to 15. The source code, datasets and results will be available online. For the comparative methods, we used the source code provided in their papers except KDCoE, since KDCoE has not released its source code yet. We implemented KDCoE by ourselves. We tried our best effort to adjust the hyperparameters to make the performance optimal. Following the previous work (Sun, Hu, and Li 2017;Sun et al. 2018), we used 30% of reference alignment as prior alignment and chose Hits@1, Hits@10 and mean reciprocal rank (MRR) as evaluation metrics. The best results are marked in bold. The same to the following. Results on Entity Alignment Tables 2 and 3 depict the EA results on monolingual and cross-lingual datasets, respectively. It is evident that capturing long-term dependencies by paths enables RSN4EA to outperform the existing EA methods. Generally, the heterogeneity of different KGs is more severe than a KG with different languages. 
Implementation Details

We built RSN4EA using TensorFlow. The embeddings and weight matrices were initialized with the Xavier initializer, and the embedding size was set to 256. We used a two-layer LSTM (Hochreiter and Schmidhuber 1997) with Dropout (Srivastava et al. 2014), and conducted batch normalization (Ioffe and Szegedy 2015) for both the input and output of an RSN. We used the Adam optimizer (Kingma and Ba 2015) with mini-batch size 512 and learning rate 0.003. We trained an RSN for up to 30 epochs. The random walk biases were set to α = 0.9 and β = 0.9, and the walk length was set to 15. The source code, datasets and results will be made available online.

For the comparative methods, we used the source code provided with their papers, except for KDCoE, which has not released its source code yet; we implemented KDCoE ourselves and made our best effort to tune the hyper-parameters for optimal performance. Following previous work (Sun, Hu, and Li 2017; Sun et al. 2018), we used 30% of the reference alignment as prior alignment and chose Hits@1, Hits@10 and mean reciprocal rank (MRR) as evaluation metrics. The best results are marked in bold, here and in the following tables.

Results on Entity Alignment

Tables 2 and 3 depict the EA results on the monolingual and cross-lingual datasets, respectively. It is evident that capturing long-term dependencies by paths enables RSN4EA to outperform the existing EA methods. Generally, the heterogeneity between different KGs is more severe than that between different language versions of one KG. A key module of embedding-based EA methods is to embed the entities of different KGs into a unified space, so aligning entities across heterogeneous KGs is more difficult for them. With the help of long-term dependencies, RSN4EA captured richer information about the KGs and learned more accurate embeddings, leading to a more significant improvement on the more heterogeneous datasets (DBP-WD and DBP-YG).

The two tables also demonstrate that the embedding-based EA methods are sensitive to density. The performance of all the methods on the normal datasets is significantly lower than that on the dense datasets. Although the normal datasets are more difficult, RSN4EA still showed considerable advantages over the other methods, since it used long paths to capture implicit connections among entities and represented them in the embeddings.

It is worth noting that RSN4EA showed a larger advantage in terms of Hits@1 and MRR. This is due to the fact that Hits@1 only considers completely correct results, and MRR also favors top-ranked results. As aforementioned, RSN4EA embedded the long-term dependencies into the learned embeddings, which contain richer information to help identify aligned entities in different KGs. The better performance on these two metrics verifies this point.

Results on KG Completion

Since RSN4EA can train KG embeddings for EA, it is also interesting to apply it to KG completion (Bordes et al. 2013), one of the most prevalent tasks for KG representation learning. (In Table 4, "†" denotes methods that we executed ourselves using the provided source code, because some metrics were not reported in the literature; "-" denotes unknown results for which we could not obtain the source code.) To achieve this, we removed the cross-KG bias during the random walk sampling and conducted the KG completion experiment. Specifically, for a triple (s, l, o), KG completion aims to predict the object entity o given (s, l, ?) or the subject entity s given (?, l, o).

FB15K and WN18 are the most widely used benchmark datasets for KG completion (Bordes et al. 2013). However, recent studies (Toutanova and Chen 2015; Dettmers et al. 2018) revealed that these two datasets suffer from test-data leakage. To address this issue, a new dataset called FB15K-237 was recommended, and we used it to assess RSN4EA in our experiments. The experimental results are shown in Table 4. ConvE, a method tailored to KG completion, obtained the best results on FB15K-237, followed by our RSN4EA. It is worth noting that, while predicting the entities of a single triple is not the primary goal of RSN4EA, it still achieved comparable or better performance than many methods focusing on KG completion, which indicates the potential of leveraging KG paths for learning embeddings.
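For reference, Hits@k and MRR as used in Tables 2-4 can be computed from the rank of each gold entity as in the sketch below (our own minimal implementation, not the authors' evaluation script).

```python
import numpy as np

def hits_and_mrr(ranks, ks=(1, 10)):
    """ranks: 1-based position of the correct entity in each
    candidate ranking, one entry per test query."""
    ranks = np.asarray(ranks, dtype=float)
    metrics = {f"Hits@{k}": float(np.mean(ranks <= k)) for k in ks}
    metrics["MRR"] = float(np.mean(1.0 / ranks))
    return metrics

# hits_and_mrr([1, 3, 12, 2])
# -> {'Hits@1': 0.25, 'Hits@10': 0.75, 'MRR': 0.479...}
```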
Further Analysis

Comparison with Alternative Networks To assess the feasibility of RSN, we conducted experiments to compare it with RNN and RRN. Both RNN and RRN were implemented with the same multi-layer LSTM units, Dropout and batch normalization. The comparison results are shown in Figure 3. Since RNN and RRN do not consider the structure of KG paths, their embedding learning converged very slowly. Compared with RNN, RSN achieved better performance with only 1/30 of the time cost, which indicates that this particular residual structure is essential for RSN4EA. Furthermore, RRN is a generic network that incorporates residual learning into the conventional RNN, but it achieved only a small improvement over RNN. This implies that simply combining residual learning with RNN does not significantly help KG sequence modeling.

Sensitivity to the Proportion of Prior Alignment The proportion of prior alignment may significantly influence the performance of embedding-based EA methods; however, a large amount of prior alignment may not be obtainable in practice. We tested the performance of RSN4EA and BootEA (the second best method in our previous experiments) with the proportion of prior alignment decreasing from 50% to 10% in steps of 10%. Due to space limitations, Figure 4 only depicts the results on the DBP-WD dataset. The performance of both methods continually dropped with the decreasing proportion of prior alignment, but the curves of RSN4EA are gentler than those of BootEA. Specifically, on the normal dataset, over the four proportion intervals, RSN4EA lost 7.4%, 8.2%, 16.5% and 30.2% on Hits@1, respectively, while BootEA lost 11.8%, 12.0%, 22.3% and 49.8%, which demonstrates that RSN4EA is the more stable method. Additionally, when the proportion was down to 10%, the Hits@1 result of RSN4EA on the normal dataset was almost twice as high as that of BootEA, which indicates that modeling paths helps RSN4EA propagate the identity information across KGs more effectively and alleviates its dependence on prior alignment.

Sensitivity to Random Walk Length We also observed how the random walk length affects the EA performance. As shown in Figure 5, on all eight datasets, the Hits@1 results increased sharply as the length grew from 5 to 15, which indicates that modeling longer paths helps learn KG embeddings and obtain better performance. Furthermore, the performance approached saturation for lengths from 15 to 25. Therefore, for efficiency, the results reported in Tables 2 and 3 are based on length 15.

Conclusion and Future Work

In this paper, we proposed RSN4EA, which employs biased random walks to sample paths specific to EA and leverages RSN for learning KG embeddings. Our experimental results showed that RSN4EA not only outperforms the existing embedding-based EA methods, but also achieves superior performance compared with RNN and RRN. It also works well for KG completion. In future work, we plan to continue exploring KG sequence learning. First, KGs often contain rich textual information such as names and descriptions. Such information can be modeled with character-/word-level sequential models. Since RSN is capable of modeling KGs in a sequential manner, it is worth studying a unified sequential model that learns KG embeddings from all valuable information. Second, in addition to paths, neighboring information provides another type of context and may also be helpful for learning KG embeddings. We look forward to integrating the neighboring context to further improve the performance.
1811.01850
2950099317
Can we perform end-to-end sound source separation (SSS) with a variable number of sources using a deep learning model? This paper presents an extension of the Wave-U-Net model that allows end-to-end monaural source separation with a non-fixed number of sources. Furthermore, we propose multiplicative conditioning with instrument labels at the bottleneck of the Wave-U-Net and show its effect on the separation results. This approach can be further extended to other types of conditioning such as audio-visual SSS and score-informed SSS.
Traditionally, people have attempted to solve audio source separation through matrix-factorization algorithms. Independent Component Analysis (ICA) @cite_16 and Non-negative Matrix Factorization (NMF) @cite_1 are two common techniques used for source separation.
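As a toy illustration of NMF-based separation in the spirit of @cite_1 (not the cited paper's exact algorithm, which adds temporal-continuity and sparseness costs), one can factorize a magnitude spectrogram and build soft masks per component:

```python
import numpy as np
from sklearn.decomposition import NMF

# V: magnitude spectrogram (freq_bins x frames); random placeholder here.
V = np.abs(np.random.rand(513, 200))
model = NMF(n_components=4, init="random", max_iter=300, random_state=0)
W = model.fit_transform(V)   # spectral templates, shape (513, 4)
H = model.components_        # time-varying activations, shape (4, 200)

# Crude estimate of component k via Wiener-style soft masking.
k = 0
mask = np.outer(W[:, k], H[k]) / (W @ H + 1e-9)
source_k = mask * V
```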
{ "abstract": [ "A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis, factor analysis, and projection pursuit. Independent component analysis (ICA) is a recently developed method in which the goal is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. In this paper, we present the basic theory and applications of ICA, and our recent work on the subject.", "An unsupervised learning algorithm for the separation of sound sources in one-channel music signals is presented. The algorithm is based on factorizing the magnitude spectrogram of an input signal into a sum of components, each of which has a fixed magnitude spectrum and a time-varying gain. Each sound source, in turn, is modeled as a sum of one or more components. The parameters of the components are estimated by minimizing the reconstruction error between the input spectrogram and the model, while restricting the component spectrograms to be nonnegative and favoring components whose gains are slowly varying and sparse. Temporal continuity is favored by using a cost term which is the sum of squared differences between the gains in adjacent frames, and sparseness is favored by penalizing nonzero gains. The proposed iterative estimation algorithm is initialized with random values, and the gains and the spectra are then alternatively updated using multiplicative update rules until the values converge. Simulation experiments were carried out using generated mixtures of pitched musical instrument samples and drum sounds. The performance of the proposed method was compared with independent subspace analysis and basic nonnegative matrix factorization, which are based on the same linear model. According to these simulations, the proposed method enables a better separation quality than the previous algorithms. Especially, the temporal continuity criterion improved the detection of pitched musical sounds. The sparseness criterion did not produce significant improvements" ], "cite_N": [ "@cite_16", "@cite_1" ], "mid": [ "2123649031", "2150415460" ] }
END-TO-END SOUND SOURCE SEPARATION CONDITIONED ON INSTRUMENT LABELS
The goal of music source separation is to separate a mixture of audio sources into individual source tracks. This is undoubtedly a challenging problem, and many attempts have been made to estimate the source signals as closely as possible from the observed mixture. The most common settings vary with respect to the target task (such as singing voice [2,3] or multi-instrument source separation [4,5]), the use of additional information (blind [5,3] or informed source separation [4,6]), and the number of channels used for reconstruction (monaural [5] or multichannel [4,6] source separation).

There are many challenging aspects of audio source separation. Most importantly, accurate separation with minimal distortion is desired. Supplementary information such as the number of sources present in the mix, or musical notes in the form of MIDI or sheet music, can be helpful but is not widely available in most cases. However, information such as the source instrument labels can easily be obtained from video recordings of musical performances readily available on the web. Therefore, it seems reasonable to learn to integrate the instrument label information into the source separation pipeline. At the same time, many sophisticated score- and timbre-informed methods have already been proposed in the literature [2]. We admire the idea of simplifying those frameworks, which became possible only recently with the advent of end-to-end deep neural networks.

In this paper, we study how to separate musical recordings of small ensembles (from duets to quintets) into individual audio tracks. We propose an extension of the Wave-U-Net [1], an end-to-end convolutional encoder-decoder model with skip connections, which supports a non-fixed number of sources and takes advantage of instrument labels in assisting source separation.

EXPANDED WAVE-U-NET

Multi-Source Extension

The limitation of the original Wave-U-Net model is that it only supports a predefined number of output sources (2 and 4 sources in the original settings), restricting its application to the specific group of instruments it was trained on. We aim to build a more flexible model that supports a dynamic number of sources and is therefore more suitable for separating classical music recordings. In classical music, the number of instruments playing in an ensemble may vary a lot, but the instruments themselves are often known in advance. Here we do not tackle the problem of separating different parts played by the same instrument (like violin1 vs. violin2) but rather separate the track played by one instrument type from the others (violin1+violin2 vs. viola). Therefore, we can fix the maximum number of output sources to the number of distinct instruments present in the dataset. This is still not a truly dynamic model, since the number of sources has to be specified in advance; thus, to obtain a more general model, we fix the number of sources to a reasonably large number. For the sources that are not present in the mix, the model is trained with silent audio as a substitute. The model therefore outputs all possible sources; it is forced to associate each output with a certain instrument and to output silence for the sources absent from the mix. Note that at training time we implicitly specify which output should be aligned with which instrument, but this is not needed at inference time; at inference we can instead use an energy threshold for extracting the sources of interest. We will refer to this model as Exp-Wave-U-Net.
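A minimal sketch of how fixed-size training targets with silent substitutes could be assembled; the instrument list, shapes, and function names are our illustrative assumptions, not the paper's released code.

```python
import numpy as np

ALL_INSTRUMENTS = ["violin", "viola", "cello", "flute"]  # illustrative subset

def make_targets(stems, n_samples):
    """Stack per-instrument stems into a fixed-size target tensor.

    `stems` maps instrument name -> waveform; instruments absent from
    the mix get a silent track, so the model always predicts one
    output per known instrument.
    """
    silent = np.zeros(n_samples, dtype=np.float32)
    return np.stack([stems.get(name, silent) for name in ALL_INSTRUMENTS])

targets = make_targets({"violin": np.random.randn(16384).astype(np.float32)}, 16384)
labels = np.array([1, 0, 0, 0], dtype=np.float32)  # binary conditioning vector
```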
Label Conditioning

To further improve the separation results, we propose a conditioned, label-informed Wave-U-Net model (CExp-Wave-U-Net). In particular, we use a binary vector whose size is the maximum number of sources considered. Each position of the vector is associated with a certain instrument: 1 indicates that the instrument is being played, and 0 indicates either a non-present instrument or a silent (non-playing) one.

Conditioning is a term used to describe the process of fusing information from one medium into the context of another. In the case of Wave-U-Net, there are three locations where conditioning is appropriate, corresponding to different fusion strategies:

• for early fusion, the conditioning can be applied to the top layer of the encoder, before downsampling;

• for middle fusion, we can integrate label information at the bottleneck of the Wave-U-Net;

• for late fusion, we can aggregate labels with the audio output of the last decoder layer (after upsampling).

Moreover, several conditioning mechanisms are possible (as described in [13]):

• concatenation-based conditioning;

• conditional biasing (additive bias);

• conditional scaling (multiplicative bias).

In this paper, we experiment with multiplicative conditioning using instrument labels at the bottleneck of the Wave-U-Net model. The overall idea is to cancel out the unwanted sources at the most compressed part of the Wave-U-Net while emphasizing the sources of interest. Even though early fusion can be richer, as it integrates label information from the very beginning, we use multiplicative middle fusion because it provides a reasonable trade-off between expressiveness and memory and computational costs. We leave additive bias and concatenation-based conditioning for further investigation.
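The multiplicative middle fusion could look like the following TensorFlow sketch. The learned projection from labels to per-channel gains is our assumption for illustration; the paper does not spell out this detail.

```python
import tensorflow as tf

def condition_bottleneck(bottleneck, labels):
    # bottleneck: (batch, time, channels) features from the encoder.
    # labels: (batch, n_instruments) binary conditioning vector.
    channels = bottleneck.shape[-1]
    # project labels to one multiplicative gain per feature channel
    gains = tf.keras.layers.Dense(channels, activation="sigmoid")(labels)
    # scale the bottleneck, broadcasting the gains over time
    return bottleneck * gains[:, tf.newaxis, :]
```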
IMPLEMENTATION DETAILS AND RESULTS

Dataset

As described in Sec. 3, the model takes as input a mix of the output sources, where each source is either an instrumental track or a silent track for instruments not present in the mix. Instrument labels can optionally be included. We took advantage of the University of Rochester Musical Performance Dataset (URMP) [14], which consists of 44 pieces (11 duets, 12 trios, 14 quartets and 7 quintets) played by 13 different instruments (see Figure 1). We used 33 pieces for training and validation, and 11 pieces for testing.

Baseline

For the evaluation, we compare the two proposed models with the Timbre-Informed NMF method from [6]. In this method, the authors first learn a timbre model for each note of each instrument, and apply these trained templates as the basis functions in the NMF factorization procedure. Note that the timbre templates are trained on RWC [15], a dataset consisting of recordings of individual notes for different instruments. Unlike our approach, Timbre-Informed NMF requires specifying the timbre models for each piece at inference time. We used learned timbre models for all instruments except the saxophone.

Implementation Details

Our implementation is available online 1 and is based on the original Wave-U-Net code 2. We improved both the input and training pipelines compared to the original work. The input pipeline is implemented as a TensorFlow Dataset and supports parallel distributed reading. The training pipeline is re-implemented via the high-level TensorFlow Estimator API and supports both local and distributed training. Our implementation also supports the half-precision floating-point format, which allows us to increase both training speed and batch size without loss of quality. We train the model on a single Google Cloud TPU instance for 200k steps, which takes approximately 23 hours. The best results are achieved using the Adam optimizer with an initial learning rate of 1e-4. These modifications, together with the use of a TPU, sped up the training process by a factor of 24.8 (35.3 in the half-precision case) compared to training on a single GPU.

Results

We perform a quantitative evaluation of model performance using standard metrics for blind source separation: Source to Distortion Ratio (SDR), Source to Interference Ratio (SIR), and Source to Artifacts Ratio (SAR) [16].
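These metrics can be computed, for instance, with the mir_eval implementation of BSS Eval; the snippet below is our illustration of the evaluation, not the authors' published script.

```python
import numpy as np
import mir_eval

# reference and estimate: (n_sources, n_samples) time-domain signals
reference = np.random.randn(2, 44100)
estimate = reference + 0.1 * np.random.randn(2, 44100)
sdr, sir, sar, perm = mir_eval.separation.bss_eval_sources(reference, estimate)
print(sdr.mean(), sir.mean(), sar.mean())
```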
Table 1 shows the average values of the metrics over all pieces and instruments in the dataset. There is no single winner; each method is better with respect to one of the metrics. For example, the InformedNMF baseline outperforms both deep models in terms of SDR, while it is inferior to Exp-Wave-U-Net in terms of SAR and to CExp-Wave-U-Net in terms of SIR. Note that we cannot directly compare our results with the original Wave-U-Net, because that would require training a separate model for each fixed combination of instruments.

Next, we analyze the separation performance in depth for each instrument. Figure 1 summarizes the results for each model and metric. The baseline approach (InformedNMF) performs reasonably well in terms of SDR and SIR for all instruments except trombone and tuba. Exp-Wave-U-Net performs worse in SDR and SIR for all instruments but consistently outperforms the baseline and CExp-Wave-U-Net in SAR, except for violin, trombone and saxophone. CExp-Wave-U-Net performs as well as the other two in SDR and SIR (and achieves the best results for tuba, double bass, saxophone and viola) but is consistently worse in SAR.

Finally, we report the separation results averaged with respect to the number of sources in the input mix in Figure 2. The performance of all methods decreases as the number of sources increases. More interestingly, the performance of CExp-Wave-U-Net does not drop as much as that of InformedNMF and Exp-Wave-U-Net. In absolute values (see Table 2), SDR for CExp-Wave-U-Net decreases from -0.16 dB to -2.56 dB, while for the model without conditioning it decreases from -0.42 dB to -5.90 dB, and for the NMF baseline from 3.08 dB to -3.84 dB. A similar behaviour persists for SIR. From these results, we anticipate that the conditioned model is more suitable for multi-instrument source separation.

We would also like to note that, despite their widespread use, the standard metrics are unable to estimate how well a model discards unwanted sources (they are undefined when the ground truth is silence). Nonetheless, we provide samples of separated sources which should be discarded 3. We notice that both the conditioned and unconditioned versions of Exp-Wave-U-Net systematically output quieter sources for the absent instruments than InformedNMF, which is initialized with all possible timbre templates. Some qualitative results for the original 4 and expanded 5 Wave-U-Net can also be found online.

CONCLUSION

In this paper we proposed and explored two extensions of the Wave-U-Net architecture in the context of source separation of ensemble recordings with an unknown number of input sources. We showed that both Exp-Wave-U-Net and CExp-Wave-U-Net perform fairly competitively with the InformedNMF model despite being trained on just 33 audio mixes. We observed that CExp-Wave-U-Net outperforms the baseline approach when the number of input sources is greater than 2. Moreover, we observed that Exp-Wave-U-Net produces quieter output for the non-present instruments. We have also shown that the results from different models are complementary, and we can therefore further investigate ensemble methods to mitigate their individual flaws.

We plan to further experiment with different fusion models for conditioning and to incorporate the visual information available within the URMP dataset. Visual guidance seems to be a promising direction of research: not only does it remove the need for manually annotated instrument labels, but it also gives additional information on the playing and non-playing state of each instrument by analyzing the corresponding video stream. This can be especially useful for resolving ambiguity and interference between two instruments of the same kind.
The Wave-U-Net model @cite_4 is an adaptation of the U-Net @cite_18, a convolutional encoder-decoder network developed for image segmentation. The U-Net approach had already been adapted for singing voice separation in @cite_8; however, that model applies 2D convolutions and works with spectrograms. Instead of 2D convolutions, the Wave-U-Net performs a series of 1D convolutions, downsampling and upsampling with skip connections on the raw waveform signal. The input to this network is a single-channel audio mix, and the desired output is the separated @math channels of individual audio sources, where @math is the number of sources present in the audio mix. An interesting aspect of the Wave-U-Net is that it avoids implicit zero padding in the downsampling layers and performs linear interpolation as opposed to de-convolution. This means that the temporal dimension is not preserved and the output becomes considerably shorter than the input; however, this better preserves temporal continuity and avoids audio artifacts in the results.
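Two of these design choices, 'valid' convolutions without implicit zero padding and linear-interpolation upsampling, can be sketched in NumPy as follows (our didactic single-channel versions, not the released Wave-U-Net code):

```python
import numpy as np

def valid_conv1d(x, kernel):
    """'Valid' 1D convolution (cross-correlation, as in deep learning
    frameworks) with no implicit zero padding: the output is shorter
    than the input, which Wave-U-Net accepts by design."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def linear_upsample(x, factor=2):
    """Upsample by linear interpolation instead of transposed
    convolution (de-convolution), preserving temporal continuity."""
    t = len(x)
    src = np.arange(t)
    dst = np.linspace(0, t - 1, factor * (t - 1) + 1)
    return np.interp(dst, src, x)
```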
{ "abstract": [ "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net .", "", "The decomposition of a music audio signal into its vocal and backing track components is analogous to image-to-image translation, where a mixed spectrogram is transformed into its constituent sources. We propose a novel application of the U-Net architecture — initially developed for medical imaging — for the task of source separation, given its proven capacity for recreating the fine, low-level detail required for high-quality audio reproduction. Through both quantitative evaluation and subjective assessment, experiments demonstrate that the proposed algorithm achieves state-of-the-art performance." ], "cite_N": [ "@cite_18", "@cite_4", "@cite_8" ], "mid": [ "1901129140", "2963452667", "2774707525" ] }
1811.01686
2899373513
Recently, word embedding algorithms have been applied to map the entities of recommender systems, such as users and items, to new feature spaces using textual element-context relations among them. Unlike in many other domains, this approach has not achieved the desired performance in collaborative filtering problems, probably due to the unavailability of appropriate textual data. In this paper we propose a new recommendation framework, called GEMRank, that can be applied when the user-item matrix is the sole available source of information. It uses the concept of profile co-occurrence for defining relations among entities and applies a factorization method for embedding the users and items. GEMRank then feeds the extracted representations to a neural network model to predict user-item like/dislike relations, on which the final recommendations are based. We evaluated GEMRank in an extensive set of experiments against state-of-the-art recommendation methods. The results show that GEMRank significantly outperforms the baseline algorithms on a variety of data sets with different degrees of density.
Traditional NCR algorithms usually suffer from dataset sparsity, as similarity calculation is hard when not enough information about users is available. Graph-based methods try to solve this problem by modeling the data as a graph in order to estimate distances more accurately when the data is sparse. These methods first construct a graph to represent the data and then make recommendations by analyzing the graph. In @cite_14, different types of nodes and a multi-layer structure are used to make context-aware recommendations through a random walk in the graph. SibRank @cite_16 uses a signed bipartite preference network to represent the data and analyzes it using a signed version of Personalized PageRank to capture users' similarities. Among more recent approaches, GRank @cite_3 is a state-of-the-art method that uses Personalized PageRank over a tripartite preference network to directly infer the total ranking of items. GRank may use unreliable paths that are inconsistent with the general idea of similarity in neighborhood collaborative ranking. ReGRank @cite_0 ranks items based on reliable recommendation paths that are in harmony with the semantics behind different approaches in neighborhood collaborative ranking.
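Since several of these methods build on Personalized PageRank, here is a generic power-iteration sketch (not the specific algorithms of SibRank or GRank, which operate on signed or tripartite networks):

```python
import numpy as np

def personalized_pagerank(A, seed, alpha=0.15, iters=100):
    """Personalized PageRank on adjacency matrix A with restart
    probability alpha to the seed node (dangling nodes left as-is
    for brevity)."""
    n = A.shape[0]
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic
    e = np.zeros(n)
    e[seed] = 1.0
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = alpha * e + (1.0 - alpha) * (P.T @ r)
    return r  # scores interpretable as similarity to the seed user
```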
{ "abstract": [ "Abstract GRank is a recent graph-based recommendation approach the uses a novel heterogeneous information network to model users’ priorities and analyze it to directly infer a recommendation list. Unfortunately, GRank neglects the semantics behind different types of paths in the network and during the process, it may use unreliable paths that are inconsistent with the general idea of similarity in neighborhood collaborative ranking. That negligence undermines the reliability of the recommendation list generated by GRank. This paper seeks to present a novel framework for reliable graph-based collaborative ranking, called ReGRank, that ranks items based on reliable recommendation paths that are in harmony with the semantics behind different approaches in neighborhood collaborative ranking. To our knowledge, ReGRank is the first unified framework for neighborhood collaborative ranking that in addition to traditional user-based collaborative ranking, can also be adapted for preference-based and representative-based collaborative ranking as well. Experimental results show that ReGRank significantly improves the state-of-the art neighborhood and graph-based collaborative ranking algorithms.", "Recommender systems have been successfully dealing with the problem of information overload. A considerable amount of research has been conducted on recommender systems, but most existing approaches only focus on user and item dimensions and neglect any additional contextual information, such as time and location. In this paper, we propose a Multi-Layer Context Graph (MLCG) model which incorporates a variety of contextual information into a recommendation process and models the interactions between users and items for better recommendation. Moreover, we provide a new ranking algorithm based on Personalized PageRank for recommendation in MLCG, which captures users’ preferences and current situations. The experiments on two real-world datasets demonstrate the effectiveness of our approach.", "", "Collaborative ranking is an emerging field of recommender systems that utilizes users’ preference data rather than rating values. Unfortunately, neighbor-based collaborative ranking has gained little attention despite its more flexibility and justifiability. This paper proposes a novel framework, called SibRank that seeks to improve the state of the art neighbor-based collaborative ranking methods. SibRank represents users’ preferences as a signed bipartite network, and finds similar users, through a novel personalized ranking algorithm in signed networks." ], "cite_N": [ "@cite_0", "@cite_14", "@cite_3", "@cite_16" ], "mid": [ "2963765725", "128104045", "", "2278701101" ] }
GEMRank: Global Entity Embedding For Collaborative Filtering
1811.00728
2898989428
Although neural machine translation (NMT) has achieved impressive progress recently, it is usually trained on clean parallel data and hence cannot work well when the input sentence is the output of an automatic speech recognition (ASR) system, due to the numerous errors in the source. To solve this problem, we propose a simple but effective method to improve the robustness of NMT in the case of speech translation. We simulate the noise existing in the realistic output of the ASR system and inject it into the clean parallel data so that NMT works under similar word distributions during training and testing. Besides, we also incorporate the Chinese Pinyin feature, which is easy to obtain in speech translation, to further improve the translation performance. Experimental results show that our method has a more stable performance and outperforms the baseline by an average of 3.12 BLEU on multiple noisy test sets, while also achieving a generalization improvement on the WMT'17 Chinese-English test set.
It is necessary to enhance the robustness of machine translation, since the ASR system carries misrecognized transcriptions over into the downstream MT system in the SLT scenario. Prior work attempted to induce noise by using realistic ASR outputs as the source corpora for training MT systems @cite_15 @cite_8. The problem of error propagation could also be alleviated by the promising end-to-end speech translation models @cite_5 @cite_11; unfortunately, there is little training data in the form of speech paired with text translations. In contrast, our approach utilizes large-scale written parallel corpora. Recently, Sperber et al. (2017) adapted the NMT model to noisy outputs from ASR by introducing artificially corrupted inputs during the training process, but achieved only minor improvements on noisy input while harming the translation quality on clean text. In contrast, our approach not only significantly enhances the robustness of NMT on noisy test sets, but also improves the generalization performance.
{ "abstract": [ "Spoken language understanding system is traditionally designed as a pipeline of a number of components. First, the audio signal is processed by an automatic speech recognizer for transcription or n-best hypotheses. With the recognition results, a natural language understanding system classifies the text to structured data as domain, intent and slots for down-streaming consumers, such as dialog system, hands-free applications. These components are usually developed and optimized independently. In this paper, we present our study on an end-to-end learning system for spoken language understanding. With this unified approach, we can infer the semantic meaning directly from audio features without the intermediate text representation. This study showed that the trained model can achieve reasonable good result and demonstrated that the model can capture the semantic attention directly from the audio features.", "In spoken language translation a machine translation system takes speech as input and translates it into another language. A standard machine translation system is trained on written language data and expects written language as input. In this paper we propose an approach to close the gap between the output of automatic speech recognition and the input of machine translation by training the translation system on automatically transcribed speech. In our experiments we show improvements of up to 0.9 BLEU points on the IWSLT 2012 English-to-French speech translation task.", "We investigate end-to-end speech-to-text translation on a corpus of audiobooks specifically augmented for this task. Previous works investigated the extreme case where source language transcription is not available during learning nor decoding, but we also study a midway case where source language transcription is available at training time only. In this case, a single model is trained to decode source speech into target text in a single pass. Experimental results show that it is possible to train compact and efficient end-to-end speech translation models in this setup. We also distribute the corpus and hope that our speech translation baseline on this corpus will be challenged in the future.", "We propose a novel technique for adapting text-based statistical machine translation to deal with input from automatic speech recognition in spoken language translation tasks. We simulate likely misrecognition errors using only a source language pronunciation dictionary and language model (i.e., without an acoustic model), and use these to augment the phrase table of a standard MT system. The augmented system can thus recover from recognition errors during decoding using synthesized phrases. Using the outputs of five different English ASR systems as input, we find consistent and significant improvements in translation quality. Our proposed technique can also be used in conjunction with lattices as ASR output, leading to further improvements." ], "cite_N": [ "@cite_5", "@cite_15", "@cite_11", "@cite_8" ], "mid": [ "2787948962", "2186089609", "2786891429", "399167303" ] }
Improving the Robustness of Speech Translation
In recent years, neural machine translation (NMT) has achieved impressive progress and has outperformed statistical machine translation (SMT) systems on multiple language pairs (Sennrich, Haddow, and Birch 2016). NMT models are usually built under the encoder-decoder architecture, where the encoder produces a representation of the source sentence and the decoder generates the target translation from this representation word by word (Sutskever, Vinyals, and Le 2014; Vaswani et al. 2017). Despite its success, NMT is sensitive to orthographic errors that human beings can comprehend as expected (Belinkov and Bisk 2017). This problem is aggravated in speech translation, where the output of the automatic speech recognition (ASR) system, which usually contains more noise, is used as the input of NMT. The example in Table 1 shows that a conventional NMT system fails to translate the misrecognized ASR output correctly. It is reported that an increase in the error rate of ASR brings a significant performance degradation in machine translation (Le, Lecouteux, and Besacier 2017). This indicates that even the best NMT systems observe a performance decline due to a high ASR error rate under noisy environmental conditions, such as simultaneous interpretation, even though ASR has matured to the point of commercial applications.

Table 1: An example of speech translation. The original word "深情" (shēn qíng, meaning affection) is misrecognized as its homophone "申请" (shēn qǐng, meaning application), leading to an inaccurate translation by the conventional NMT system.

Speech: 这份礼物饱含一份深情
ASR: 这份礼物饱含一份申请
Reference: This gift is full of affection
NMT: This gift contains an application

Conventional NMT systems are usually trained on high-quality written parallel data which hardly contain any ASR-specific errors, resulting in a mismatch between training data and test data. An ideal solution would be to train NMT systems on data in the form of erroneous speech transcriptions paired with their counterpart translations. Unfortunately, such corpora are scarce and expensive to collect. Therefore, in addition to reducing the error rate of ASR, it is necessary to improve the robustness of NMT to the inevitable ASR errors.

In this paper, our goal is to improve the robustness of NMT to erroneous ASR outputs in the speech translation scenario. We propose an effective and computationally inexpensive approach that crafts a large number of ASR-specific noise training examples by simulating realistic ASR errors, in order to alleviate the problem of insufficient speech translation training data. The basic idea is to randomly substitute some correct source tokens with noise tokens at each training iteration, and we propose four strategies for choosing ASR-specific noise symbols. Using our approach, it is easy to obtain a robust NMT model with the standard NMT training method, without modifying the training objective or incurring the extra computational load and training difficulty of generative adversarial networks (Arjovsky, Chintala, and Bottou 2017; Cheng et al. 2018).
To achieve a further improvement in translation quality, it is desirable to recover the inherent semantic relationships among source characters that are broken by the injected ad-hoc noise at training time. For this purpose, in addition to the standard character-level representation, we also propose to explicitly incorporate the syllable-level representation (also called Pinyin) of a Chinese character as an additional input feature, resulting in a novel Pinyin-aware embedding of Chinese characters.

We conduct experiments on the WMT'17 Chinese-English translation task. Experimental results show that our approaches not only significantly enhance the robustness of NMT on artificial noisy test sets, but also improve the generalization performance of NMT on the original test set. We finally illustrate the advantages and disadvantages of our robust NMT system via two real-world examples.

The Challenge of Speech Translation

The dominant speech translation systems employ a cascaded architecture which consists of an ASR component followed by an NMT component. An ASR system ingests user utterances as inputs and generates text transcriptions as outputs; an NMT system then consumes these transcriptions and produces translations in another language. Recently, there has been growing interest in building end-to-end ASR systems as a way of folding the separate acoustic, pronunciation, and language modeling components of a conventional ASR system into a single neural network (Chiu et al. 2017). We consider Listen, Attend and Spell (LAS) (Chan et al. 2016) as an example to formally describe the basic principle of end-to-end ASR.

The basic LAS model consists of three sub-modules: the listener, the attender and the speller. Let $D_{x,z} = \{(x^{(n)}, z^{(n)})\}_{n=1}^{N}$ be the training data of LAS. For the $n$-th instance, $x^{(n)} = (x_1, \ldots, x_t, \ldots, x_T)$ is the input sequence of filter bank spectra features, and $z^{(n)} = (z_1, \ldots, z_l, \ldots, z_L)$ is the output sequence of transcriptions. The listener maps $x^{(n)}$ to a high-level feature representation $H$. The attender takes $H$ and determines which listener features in $H$ should be attended to predict the next output symbol $z_l$. Finally, the speller accepts the output of the attender to produce a probability distribution $P(z_l \mid z_{<l}, x^{(n)})$. The standard training objective is to find a set of model parameters that minimizes the negative log-likelihood on the ASR training data:

$$\hat{\theta}_A = \arg\min_{\theta_A} - \sum_{n=1}^{N} \log P\big(z^{(n)} \mid x^{(n)}; \theta_A\big), \qquad (1)$$

where $\theta_A$ is the set of ASR model parameters, and $z_{<l} = (z_1, \ldots, z_{l-1})$ is the sequence of previous symbols.

Given bilingual written training data $D_{z,y} = \{(z^{(m)}, y^{(m)})\}_{m=1}^{M}$, for the $m$-th sentence pair let $z^{(m)} = (z_1, \ldots, z_i, \ldots, z_I)$ be the source-language sequence and $y^{(m)} = (y_1, \ldots, y_j, \ldots, y_J)$ be the target-language sequence. NMT usually models the translation probability as

$$P\big(y^{(m)} \mid z^{(m)}; \theta_N\big) = \prod_{j=1}^{J} P\big(y_j \mid z^{(m)}, y_{<j}; \theta_N\big), \qquad (2)$$

where $\theta_N$ represents the set of NMT model parameters, and $y_{<j} = (y_1, \ldots, y_{j-1})$ is a partial translation.

Table 2: Error rates of the three ASR error categories. For the speech input "语音翻译", which means speech translation, "译" is substituted by "一" in the first case, "音" is deleted in the second case, and "了" is inserted in the third case.
The probability of generating the $j$-th target token is usually calculated as

$$P\big(y_j \mid z^{(m)}, y_{<j}; \theta_N\big) = \mathrm{softmax}\big(g(y_{j-1}, h_j, c_j, \theta_N)\big), \qquad (3)$$

where $y_{j-1}$ is the word embedding of the $(j-1)$-th target token, $h_j$ is the hidden state at the $j$-th step, $c_j$ is a vector representing the source-side context for generating the $j$-th target word, and $g(\cdot)$ is a non-linear activation function. The standard training objective is to find a set of model parameters that minimizes the negative log-likelihood on the training data:

$$\hat{\theta}_N = \arg\min_{\theta_N} - \sum_{m=1}^{M} \log P\big(y^{(m)} \mid z^{(m)}; \theta_N\big). \qquad (4)$$

Since the source side of the NMT training data does not match the ASR outputs, the performance of the NMT system is adversely affected by the ASR system, which is prone to recognition errors due to the regional accents of speakers or environmental noise. Once the erroneous ASR system is deployed, a wise way to improve the quality of speech translation is to adapt the downstream NMT system to ASR errors, which are generally classified into three categories (substitution, deletion, and insertion) based on the Levenshtein alignments between a transcription and its reference. We provide three examples of these categories in Table 2. We also investigate the word error rate (WER) of our in-house Chinese ASR system on our in-house evaluation dataset, which consists of about 100 hours of Chinese speech across multiple domains. As shown in Table 2, the substitution error rate (6.4%) constitutes the majority of the WER (9.4%). Our observation is consistent with prior work (Mirzaei, Meshgi, and Kawahara 2016). It is also known that over 50% of machine translation errors are associated with substitution errors, which have a greater impact on translation quality than deletion or insertion errors (Vilar et al. 2006; Ruiz and Federico 2014). Hence, our goal is to improve the robustness of NMT to substitution errors.

Approach

In this section, we propose four strategies to craft ASR-specific noise training examples. To further improve the translation quality of Chinese-sourced NMT, we propose to incorporate Chinese Pinyin as an additional input feature.

Which characters to substitute

Determining which correct characters may be substituted is a prerequisite for crafting noise examples. Inspired by dropout (Srivastava et al. 2014; Gal and Ghahramani 2016), we propose to randomly substitute some source characters of the parallel data with minor noise that the conventional NMT system is not able to translate correctly with high confidence, in order to simulate the substitution errors of ASR and regularize the NMT model. For each source sentence $z$ of the NMT training data, we posit a vector $r$ of $|z|$ independent Bernoulli random variables, each of which takes value 1 with substitution probability $p$. The vector is sampled and applied element-wise to the character inputs of the input layer, creating the distorted training example $\tilde{z}$ by substituting the characters labeled with 1 according to Eq. (6); the distorted training examples are then used as input to the input layer:

$$r_c \sim \mathrm{Bernoulli}(p), \qquad (5)$$

$$\tilde{z} = \begin{cases} \tilde{c} & \text{if } r_c = 1 \\ c & \text{if } r_c = 0, \end{cases} \qquad (6)$$

where $\tilde{c}$ is a noise symbol. In this case, the original $D_{z,y}$ is perturbed into $D_{\tilde{z},y}$, whose source side shares a similar distribution with ASR outputs. Therefore, the robustness of NMT can be improved by observing a large number of variants of ASR-specific noise examples without changing the standard training method.
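A minimal sketch of the substitution step in Eqs. (5)-(6); the function names and the pluggable `sampler` interface are our illustrative assumptions, not the paper's code.

```python
import random

def inject_noise(chars, p, sampler):
    """Randomly substitute characters with ASR-style noise (Eqs. 5-6).

    `sampler(c)` returns a noise symbol for character c under one of
    the four schemes (placeholder / uniform / frequency / homophone).
    """
    return [sampler(c) if random.random() < p else c for c in chars]

# Placeholder scheme: every substituted character becomes "<SUB>".
placeholder = lambda c: "<SUB>"
noisy = inject_noise(list("语音翻译"), p=0.1, sampler=placeholder)
```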
How to choose noise We design four noising schemes of sampling noise symbols to substitute the determined source positions: • Placeholder-based Substitution We first propose a simple and general method to only consider the special placeholder "<SUB>" which hardly appears in the wild as the noise symbol. Our motivation is that forcing the model to reduce character dependencies. Using this approach, our NMT model theoretically should observe 2 |z| variants for each source sentence z. • Uniform Distribution-based Substitution Since the placeholder "<SUB>" hardly appears in the realistic ASR outputs, there is still a mismatch between the perturbed training data and ASR outputs. We propose to substitute the source positions with a sampled noise from the uniform distribution described in Equation 7. P (c) = 1 |V| ,(7) where V is the source vocabulary. • Frequency-based Substitution It is well known that it is difficult for ASR systems to recognize infrequent tokens in the training data (Goldwater, Jurafsky, and Manning 2010). In other words, the tokens with low frequency in the utterance tend to be misrecognized as frequent tokens. To simulate the real-world Table 3: Example of our noise sampling methods. For the original source sentence "语音翻译", the second character is randomly picked to be substituted by noise (highlighted by red color). For Placeholder method, "音" is substituted with the placeholder "<SUB>". For Uniform method, since each character has equal probability to be noise, "音" is substituted with "饕" which is a low-frequency character in Chinese. But for Frequency method, "音" is substituted by "好" which is a high-frequency Chinese character. Finally for Homophone, "音" is substituted by "因", both the characters share the same pronunciation. Methods Noise Example Placeholder yǔ 语 <SUB> fān 翻 yì 译 Uniform yǔ 语 tāo 饕 饕 饕 fān 翻 yì 译 Frequency yǔ 语 hǎo 好 好 好 fān 翻 yì 译 Homophone yǔ 语 yīn 因 因 因 fān 翻 yì 译 ASR scenario, we propose to substitute the source positions with a sample from the following unigram frequency distribution: P (c) = Count(c) c ∈V\{c} Count(c ) ,(8) where Count is a function used to calculate the character frequency in the training data. • Homophone-based Substitution For ASR outputs, another important fact is that there are significant possibilities that a character is substituted by its homophones which pronounce the same as the original one but differ in meaning (Li et al. 2008). Therefore, we propose to substitute the source positions with a sample of their homophone vocabulary according to the following distribution: P (c) = Count(c) c ∈V(c)\{c} Count(c ) ,(9) where V(c) is a vocabulary where each character shares the same pronunciation with c. Using the crafted noise training examples, the model is forced to learn the more general representation of that perturbed training data and to allocate output stability on the classes of simulated errors. Chinese Pinyin-aware Input Embeddings Using the proposed methods of crafting noise examples, the source part of training data is randomly corrupted with minor confusing characters constantly at training time. It indicates that the distorted characters are rare during the whole training process, leading to an issue of data sparsity. 
Because Chinese is famous for its numerous homophones, more than half of the Chinese Internet homophones retain the same pronunciation as their base words (Tang 2014 When a person types a word on a keyboard, he encounters more than one variants of characters of the word, so users choose a malapropism, which is an incorrect word in place of a word with a similar sound, to express their intense emotions. For example, a web-user picks the word " zhuān 砖 jiā 家" instead of " zhuān 专 jiā 家" which is the meaning of specialist, since both the words share the same pronunciation, but the former itself is another ironic name of "specialist" who specializes in talking nonsense in the Chinese Internet language. In this case, representing each Chinese character only by their surface symbols intuitively implies that any pair of characters is as distinct as any other pair. This ignores any common Pinyin sequences shared by characters. However, human beings generally have no obstacle to understanding this kind of informal or inaccurate Chinese text as long as the pronunciation is correct. Therefore, it gives us a hint that Pinyin information is helpful to generalize knowledge learned about a character to another via their shared Pinyin sequences since the Chinese syllable level constraints are not as restrictive as surface character sequences. However, many Chinese characters share the same Pinyin without tones yet not their meanings. For example, "砖" which means brick and "专" which means specific. By considering a surface character and its Pinyin as equivalent, the performance of NMT models can be harmed by this new source of ambiguity. Due to this concern, we propose to apply a factored input embeddings by combining both character and Pinyin representations motivated by Sennrich and Haddow (2016). Given a Pinyin sequence p = p 1 , . . . , p i , . . . , p I which has the same length as the character sequence z = z 1 , . . . , z i , . . . , z I , we look up separate embedding vectors for character and pinyin, and the final factored input embedding e i for each position i can be generated by concatenating the character embedding E c [z i ] and Pinyin embedding E p (p i ) as e i = [E c [z i ]; E p [p i ]],(10) Experiments Setup We conduct all experiments on the WMT'17 Chinese-English translation task. The training data consists of 9.3M bilingual sentence pairs obtained by combining the CWMT corpora and News Commentary v12. We use newsdev2017 and newstest2017 as our validation set and clean test set, respectively. Due to lack of public datasets for speech translation, we craft three noisy test sets with different amount of homophones errors in order to simulate the homophonic substitution errors of ASR. And we construct three noisy variants for each source sentence of newstest2017 to increase the diversity of noisy characters. Therefore, the size of each artificial noisy test set is three times larger than new-stest2017. We argue that the setup is very close to the realistic speech translation scenario. It is well known that NMT benefits from the increasing amount of training data (Koehn and Knowles 2017). In addition to WMT training data, we also evaluate the best performing system on our in-house large-scale Chinese-English training data with about 80M sentence pairs. All of the following experiments are carried out based on the Transformer (Vaswani et al. 
2017), which is similar to conventional NMT models except depending entirely on self-attention and position-wise, and uses fully connected layers for both the encoder and decoder instead of recurrent neural networks or convolutions. We set the size of all input and output layers to 512 and that of inner-FFN layer to 2048. Training is performed on a single server with 8 Nvidia M40 GPUs. We use a batch size of 4096 on each GPU containing a set of sentence pairs with approximately 4096 source tokens and 4096 target tokens. We train each model with the sentences of length up to 100 words in the training data. We train each model for a total of 600K steps and save the checkpoints with an interval of 1000 training steps. We use a single model obtained by averaging 20 checkpoints that perform best on the development set as the final model for testing. During decoding, we set the beam size to 4. Other training parameters are the same as the default configuration of the Transformer base model. We report case-sensitive NIST BLEU (Papineni et al. 2002) scores for all the systems. For evaluation, we first merge output tokens back to their untokenized representation using detokenizer.pl and then use mteval-v13a.pl to compute the scores as per WMT reference. In this work, we focus on crafting ASR-specific noise examples and incorporating the Chinese Pinyin feature to improve the robustness of NMT. Therefore, we consider the L noisy loss function proposed by Cheng et al. (2018) as our training objective. We will omit an exhaustive background description of the loss function and refer readers to Cheng et al. (2018). It is worth noting that our approach can be applied with other adversarial training methods proposed by Cheng et al. (2018). Robustness Performance Our character substitution has a hyper-parameter p ∈ [0, 1] which means the probability of substituting a character in the inputs. In this section, we explore the effect of tuning this hyper-parameter. The results from Table 5 shows that both the Placeholder and Uniform models work best at p = 0.2, p = 0.1 is optimal for the Frequency model, and the Homophone model achieves the best performance at p = 0.3. It indicates that different noise sampling methods have their own optimal System Table 5: Case-sensitive BLEU scores of our approaches on the clean test set (newstest2017) and three artificial noisy test sets (1 Sub, 2 Subs and 3 Subs) which are crafted by randomly substituting one, two and three original characters of each source sentence in the clean test set with their homophones, respectively. p is the substitution rate. "Placeholder" means the placeholder " SUB " is used as the noise token. "Uniform" indicates the uniform distribution based noise sampling. "Frequency" represents character frequency based noise sampling. "Homophone" denotes Chinese homophone based noise sampling. "Pinyin" means incorporating the Chinese Pinyin as an additional input feature. substitution rate. Hence, it is hard to set a universal substitution rate for all the models. It also can be seen that the Homophone model behaves stably on all noisy test sets even the substitution rate increases. However, in the case of more noise, other models suffer more performance degradation. We suspect that the homophone noise which still keeps the latent semantic information does not hinder the training process severely. Translation Performance Although dropout is used for full-connected layers in all models, the baseline model still fails to translate the noise inputs. 
The results in Table 5 show that the baseline model degrades significantly on the test set "1 SUB", and the performance becomes worse as noise increases in the other noisy test sets. It also demonstrates that the conventional NMT is indeed fragile to permuted inputs, which is consistent with prior work (Belinkov and Bisk 2017; Cheng et al. 2018). However, our methods make the NMT model more robust to noise inputs. First, the simple Placeholder model achieves an improvement of translation quality over the baseline model from +0.94 BLEU to +1.16 BLEU as the amount of homophone noise characters increases from 1 to 3, according to the results in Table 5. Therefore, it proves that randomly substituting some characters of inputs is a simple yet effective regularizer for conventional NMT. We also evaluate the performance of the Uniform model which uses Chinese characters as substitutions. The results in Table 5 suggest that the Uniform model achieves an improvement over the Placeholder model marginally. Then it can be seen that the Frequency model not only significantly enhances the robustness of NMT over the baseline system, but also improves further over the Uniform model up to an average of +1.18 BLEU on the noisy test sets. Compared with the Uniform model, the improvement of Frequency model is especially substantial for noise text with more than one incorrect character. Finally, we can find that the Homophone model performs best and achieved a significant improvement on noise text over the baseline model up to +2.72 BLEU. We observe that all our robustness-enhanced models outperform the baseline model on the clean test set up to +0.63 BLEU. And the translation performance of the Ho- mophone model on the "1 Sub" is also superior to that one of the baseline model on the clean test. Moreover, even on the "3 Subs" with three noise characters, the performance degradation of the Homophone model is only -0.83 BLEU, while the baseline model falls up to -3.95 BLEU. Pinyin Feature In this section, we evaluate the performance of our method incorporated with Chinese Pinyin feature. We use the Chine-seTone 2 tool to convert Chinese characters into their Pinyin counterpart without tones. For the sake of a fair comparison, we keep the total size of input embedding fixed to 512 by setting the embedding sizes of character and Pinyin to 64 and 448, respectively for each system with Pinyin. As shown in Table 5, Chinese Pinyin feature provides further robustness improvements for the baseline system on all the noisy test sets. It also can be seen that the Homophone model with Pinyin feature achieves a further improvement by an average of +0.71 BLEU on the noisy test sets and a slight generalization improvement on the clean test set. It demonstrates that Pinyin is an effective input feature for improving the robustness of Chinese-sourced NMT. It is worth noting that the Placeholder model with Pinyin feature achieves a significant improvement over the original Placeholder model on noisy test sets up to +1.59 BLEU. We suspect that Pinyin feature effectively compensates the model for lost semantic information at training time. Among all our models, the Homophone model with Pinyin feature achieves a comparable performance on the clean test set, but performs best on the noisy test sets. It suggests that the Homophone model achieves a tradeoff between robustness and generalization. Therefore, the Homophone model with substitution rate 0.1 is used as the best performing NMT model in the subsequent experiments. 
Training Cost We also investigate the training cost of our robust system and the baseline system. As shown in Figure 1, it is obvious that the training cost of baseline model is lower than 2 https://github.com/letiantian/ChineseTone that one of our robust system during the training process, but our robust system achieves a higher BLEU score. It indicates that our approach effectively improves the generalization performance of the conventional NMT model trained on clean training data. Effect of Source Sentence Length We also evaluate the performance of our robust system and the baseline on the noisy test sets with different source sentence lengths. As shown in Figure 2, the translation quality of both systems is improved as the length increases and then degrades as the length exceeds 50. Our observation is also consistent with prior work . It implies that more context is helpful to noise disambiguation. It also can be seen that our robust system outperforms the baseline model on all the noisy test sets. Effect of Training Data Size As shown in Table 6, increasing training data significantly improves the baseline system up to 3.68 BLEU on the clean test data, but only achieves a robustness improvement by an average of +2.01 BLEU on the noisy test sets. It demonstrates that the degradation of translation quality caused by noise is still unavoidable for the conventional NMT model even trained on a larger scale of training data. In contrast, our robust system achieves a comparable improvement on the noisy test sets to the performance on the clean data (2.72 BLEU vs. 2.9 BLEU). It shows that our method is stable and effective to NMT regardless of the amount of training data. Compared with the baseline system, it also can be seen that more training data brings more robustness improvement for our robust system on the noisy data (2.72 BLEU vs. 2.01 BLEU). It presents that our method can make better use of a larger amount of training data to enhance the robustness of NMT further. A Case Study In Table 7, we provide a realistic example to illustrate the advantage of our robust NMT system on erroneous ASR output. For this case, the syntactic structure and meaning of the original sentence are destroyed since the original character "数" which means digit is misrecognized as the char- Speech gāi 该 shù 数 数 数 zì 字 yǐ 已 jīng 经 dà 大 fú 幅 xià 下 huá 滑 jìn 近 90% ASR gāi 该 shū 书 书 书 zì 字 yǐ 已 jīng 经 dà 大 fú 幅 xià 下 huá 滑 jìn 近 90% Ref The figure has fallen sharply by almost 90% Baseline The book has fallen by nearly 90% Our approach The figure has fallen by nearly 90% Table 7: For the same erroneous ASR output, translations of the baseline NMT system and our robust NMT system. acter "书" which means book. "数" and "书" share the same pronunciation without tones. Human beings generally have no obstacle to understanding this flawed sentence with the aid of its correct pronunciation. The baseline NMT system can hardly avoid the translation of "书" which is a highfrequency character with explicit word sense. In contrast, our robust NMT system can translate this sentence correctly. We also observe that our system works well even if the original character "数" is substituted with other homophones, such as "舒" which means comfortable. It shows that our system has a powerful ability to recover the minor ASR error. We consider that the robustness improvement is mainly attributed to our proposed ASR-specific noise training and Chinese Pinyin feature. Conclusion Erroneous ASR is a challenge to speech translation. 
We propose a simple yet effective approach to improve the robustness of NMT to ASR noise by crafting ASR-specific noise training examples and incorporating the Chinese Pinyin feature as an additional input feature. Experiment results show that our method significantly outperforms the baseline and performs stably on three test sets with different amount of noise characters, while achieves a generalization improvement on a clean test set. In future work, we would like to investigate appropriate methods to construct noise training examples for other types of ASR errors. Moreover, it is necessary to evaluate our approach on a realistic speech translation system.
4,744
1811.00728
2898989428
Although neural machine translation (NMT) has achieved impressive progress recently, it is usually trained on clean parallel data and hence performs poorly when the input sentence is the output of an automatic speech recognition (ASR) system, which typically contains many errors. To solve this problem, we propose a simple but effective method to improve the robustness of NMT in the speech translation setting. We simulate the noise present in realistic ASR output and inject it into the clean parallel data, so that NMT sees similar word distributions during training and testing. Besides, we also incorporate the Chinese Pinyin feature, which is easy to obtain in speech translation, to further improve translation performance. Experimental results show that our method performs more stably and outperforms the baseline by an average of 3.12 BLEU on multiple noisy test sets, while also achieving a generalization improvement on the WMT'17 Chinese-English test set.
Our approach is motivated by work on NMT with linguistic input features @cite_16 . Chinese linguistic features, such as radicals and Pinyin, have been demonstrated to be effective for Chinese-sourced NMT @cite_13 @cite_17 and Chinese ASR @cite_19 . We likewise incorporate Pinyin as an additional input feature in the robust NMT model, aiming to improve the robustness of NMT further.
{ "abstract": [ "", "Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information. In this paper we show that the strong learning capability of neural MT models does not make linguistic features redundant; they can be easily incorporated to provide further improvements in performance. We generalize the embedding layer of the encoder in the attentional encoder--decoder architecture to support the inclusion of arbitrary features, in addition to the baseline word feature. We add morphological features, part-of-speech tags, and syntactic dependency labels as input features to English German, and English->Romanian neural machine translation systems. In experiments on WMT16 training and test sets, we find that linguistic input features improve model quality according to three metrics: perplexity, BLEU and CHRF3. An open-source implementation of our neural MT system is available, as are sample files and configurations.", "In recent years, Neural Machine Translation (NMT) has been proven to get impressive results. While some additional linguistic features of input words improve wordlevel NMT, any additional character features have not been used to improve character-level NMT so far. In this paper, we show that the radicals of Chinese characters (or kanji), as a character feature information, can be easily provide further improvements in the character-level NMT. In experiments on WAT2016 Japanese-Chinese scientific paper excerpt corpus (ASPEC-JP), we find that the proposed method improves the translation quality according to two aspects: perplexity and BLEU. The character-level NMT with the radical input feature's model got a state-of-the-art result of 40.61 BLEU points in the test set, which is an improvement of about 8.6 BLEU points over the best system on the WAT2016 Japanese-to-Chinese translation subtask with ASPEC-JP. The improvements over the character-level NMT with no additional input feature are up to about 1.5 and 1.4 BLEU points in the development-test set and the test set of the corpus, respectively.", "Unknown word (UNK) or open vocabulary is a challenging problem for neural machine translation (NMT). For alphabetic languages such as English, German and French, transforming a word into subwords is an effective way to alleviate the UNK problem, such as the Byte Pair encoding (BPE) algorithm. However, for the stroke-based languages, such as Chinese, aforementioned method is not effective enough for translation quality. In this paper, we propose to utilize Pinyin, a romanization system for Chinese characters, to convert Chinese characters to subword units to alleviate the UNK problem. We first investigate that how Pinyin and its four diacritics denoting tones affect translation performance of NMT systems, and then propose different strategies to utilise Pinyin and tones as input factors for Chinese–English NMT. Extensive experiments conducted on Chinese–English translation demonstrate that the proposed methods can remarkably improve the translation quality, and can effectively alleviate the UNK problem for Chinese-sourced translation." ], "cite_N": [ "@cite_19", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2514969556", "2410082850", "2963321862", "2947180194" ] }
Improving the Robustness of Speech Translation
In recent years, neural machine translation (NMT) has achieved impressive progress and has outperformed statistical machine translation (SMT) systems on multiple language pairs (Sennrich, Haddow, and Birch 2016). NMT models are usually built under the encoder-decoder architecture, where the encoder produces a representation of the source sentence and the decoder generates the target translation from this representation word by word (Sutskever, Vinyals, and Le 2014; Vaswani et al. 2017). Despite its success, NMT is sensitive to orthographic errors that human beings can comprehend as intended (Belinkov and Bisk 2017). This problem is aggravated in speech translation, where the input of NMT is the output of an automatic speech recognition (ASR) system and usually contains more noise. The example in Table 1 shows that a conventional NMT system fails to translate a misrecognized ASR output correctly. It is reported that an increase in the ASR error rate brings a significant performance degradation of machine translation (Le, Lecouteux, and Besacier 2017). This indicates that even the best NMT systems suffer a performance decline due to high ASR error rates under noisy environmental conditions, such as simultaneous interpretation, even though ASR has matured to the point of commercial application. (* Work done while at Sogou Inc. † Corresponding Author)

Table 1: An example of speech translation. The original word "深情" (shēn qíng, affection) is misrecognized as its homophone "申请" (shēn qǐng, application), leading to an inaccurate translation by the conventional NMT system.
Speech: 这份礼物饱含一份深情
ASR: 这份礼物饱含一份申请
Reference: This gift is full of affection
NMT: This gift contains an application

Conventional NMT systems are usually trained on high-quality written parallel data that hardly contain ASR-specific errors, resulting in a mismatch between training data and test data. An ideal solution is to train NMT systems on data in the form of erroneous speech transcriptions paired with their translations. Unfortunately, such corpora are scarce and expensive to collect. Therefore, in addition to reducing the ASR error rate, it is necessary to improve the robustness of NMT to the inevitable ASR errors. In this paper, our goal is to improve the robustness of NMT to erroneous ASR outputs in the speech translation scenario. We propose an effective and computationally inexpensive approach that crafts a large number of ASR-specific noisy training examples by simulating realistic ASR errors, in order to alleviate the shortage of speech translation training data. The basic idea is to randomly substitute some correct source tokens with noise tokens at each training iteration, and we propose four strategies for choosing ASR-specific noise symbols. Using our approach, it is easy to obtain a robust NMT model with the standard NMT training method, without modifying the training objective or incurring the extra computational load and training instability of generative adversarial networks (Arjovsky, Chintala, and Bottou 2017; Cheng et al. 2018).
To achieve a further improvement in translation quality, it is desirable to recover the inherent semantic relationship among source characters that is broken by the ad-hoc noise introduced at training time. For this purpose, in addition to the standard character-level representation, we also propose to explicitly incorporate the syllable-level representation (also called Pinyin) of a Chinese character as an additional input feature, resulting in a novel Pinyin-aware embedding of Chinese characters. We conduct experiments on the WMT'17 Chinese-English translation task. Experimental results show that our approaches not only significantly enhance the robustness of NMT on the artificial noisy test sets, but also improve the generalization performance of NMT on the original test set. We finally illustrate the advantages and limitations of our robust NMT system via two real-world examples.

The Challenge of Speech Translation

The dominant speech translation systems employ a cascaded architecture consisting of an ASR component followed by an NMT component. The ASR system ingests user utterances as input and generates text transcriptions as output; the NMT system then consumes these transcriptions and produces translations in another language. Recently, there has been growing interest in building end-to-end ASR systems that fold the separate acoustic, pronunciation, and language modeling components of a conventional ASR system into a single neural network (Chiu et al. 2017). We consider Listen, Attend and Spell (LAS) (Chan et al. 2016) as an example to formally describe the basic principle of end-to-end ASR. The basic LAS model consists of three sub-modules: the listener, the attender, and the speller. Let $D_{x,z} = \{(x^{(n)}, z^{(n)})\}_{n=1}^{N}$ be the training data of LAS. For the $n$-th instance, $x^{(n)} = (x_1, \ldots, x_t, \ldots, x_T)$ is the input sequence of filter bank spectra features, and $z^{(n)} = (z_1, \ldots, z_l, \ldots, z_L)$ is the output sequence of transcriptions. The listener maps $x^{(n)}$ to a high-level feature representation $H$. The attender takes $H$ and determines which listener features in $H$ should be attended to in order to predict the next output symbol $z_l$. Finally, the speller accepts the output of the attender to produce a probability distribution $P(z_l \mid z_{<l}, x^{(n)})$, where $z_{<l} = (z_1, \ldots, z_{l-1})$ is the sequence of previous symbols. The standard training objective is to find a set of model parameters that minimizes the negative log-likelihood on the ASR training data:

$\hat{\theta}_A = \arg\min_{\theta_A} -\sum_{n=1}^{N} \log P(z^{(n)} \mid x^{(n)}; \theta_A)$, (1)

where $\theta_A$ is the set of ASR model parameters.

Given bilingual written training data $D_{z,y} = \{(z^{(m)}, y^{(m)})\}_{m=1}^{M}$, for the $m$-th sentence pair let $z^{(m)} = (z_1, \ldots, z_i, \ldots, z_I)$ be the source-language sequence and $y^{(m)} = (y_1, \ldots, y_j, \ldots, y_J)$ be the target-language sequence. NMT usually models the translation probability as

$P(y^{(m)} \mid z^{(m)}; \theta_N) = \prod_{j=1}^{J} P(y_j \mid z^{(m)}, y_{<j}; \theta_N)$, (2)

where $\theta_N$ represents the set of NMT model parameters and $y_{<j} = (y_1, \ldots, y_{j-1})$ is a partial translation.

Table 2: Error rates of three error categories for ASR. For the speech input "语音翻译" (speech translation), "译" is substituted by "一" in the first case, "音" is deleted in the second case, and "了" is inserted in the third case.
The probability of generating the $j$-th target token is usually calculated as

$P(y_j \mid z^{(m)}, y_{<j}; \theta_N) = \mathrm{softmax}(g(y_{j-1}, h_j, c_j, \theta_N))$, (3)

where $y_{j-1}$ is the word embedding of the previous target word, $h_j$ is the hidden state at the $j$-th step, $c_j$ is a vector representing the source-side context for generating the $j$-th target word, and $g(\cdot)$ is a non-linear activation function. The standard training objective is to find a set of model parameters that minimizes the negative log-likelihood on the training data:

$\hat{\theta}_N = \arg\min_{\theta_N} -\sum_{m=1}^{M} \log P(y^{(m)} \mid z^{(m)}; \theta_N)$. (4)

Since the source side of the NMT training data does not match the ASR outputs, the performance of the NMT system is adversely affected by the ASR system, which is prone to recognition errors due to regional accents of speakers or environmental noise. Once an imperfect ASR system is deployed, a sensible way to improve the quality of speech translation is to adapt the downstream NMT system to ASR errors, which are generally classified into three categories (substitution, deletion, and insertion) based on the Levenshtein alignment between a transcription and its reference. We provide three examples illustrating these ASR error categories in Table 2. We also investigate the word error rate (WER) of our in-house Chinese ASR system on our in-house evaluation dataset, which consists of about 100 hours of Chinese speech across multiple domains. As shown in Table 2, the substitution error rate (6.4%) accounts for the majority of the WER (9.4%). Our observation is consistent with prior work (Mirzaei, Meshgi, and Kawahara 2016). It is also known that over 50% of machine translation errors are associated with substitution errors, which have a greater impact on translation quality than deletion or insertion errors (Vilar et al. 2006; Ruiz and Federico 2014). Hence, our goal is to improve the robustness of NMT to substitution errors.

Approach

In this section, we propose four strategies to craft ASR-specific noisy training examples. To further improve the translation quality of Chinese-sourced NMT, we propose to incorporate Chinese Pinyin as an additional input feature.

Which characters to substitute

Determining which correct characters may be substituted is a prerequisite for crafting noisy examples. Inspired by dropout (Srivastava et al. 2014; Gal and Ghahramani 2016), we propose to randomly substitute some source characters of the parallel data with minor noise that the conventional NMT system cannot translate correctly with high confidence, in order to simulate the substitution errors of ASR and regularize the NMT model. For each source sentence $z$ of the NMT training data, we posit a vector $r$ of $|z|$ independent Bernoulli random variables, each of which takes the value 1 with substitution probability $p$. The vector is sampled and applied element-wise to the character inputs of the NMT input layer, creating the distorted training example $\tilde{z}$ by substituting the characters labeled with 1 according to Equation (6):

$r_c \sim \mathrm{Bernoulli}(p)$, (5)

$\tilde{z} = \begin{cases} \tilde{c} & \text{if } r_c = 1 \\ c & \text{if } r_c = 0, \end{cases}$ (6)

where $\tilde{c}$ is a noise symbol. In this way, the original $D_{z,y}$ is perturbed into $D_{\tilde{z},y}$, whose source side shares a similar distribution with ASR outputs. Therefore, the robustness of NMT can be improved by observing a large number of variants of ASR-specific noisy examples without changing the standard training method.
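To make the sampling procedure of Equations (5)-(6) concrete, here is a minimal Python sketch. This is our own illustration, not the authors' released code; the function name `substitute` and the pluggable `noise_fn` callback are hypothetical choices, with the concrete noise samplers deferred to the next subsection.

```python
import random

def substitute(chars, p, noise_fn, rng=random):
    """Perturb a character sequence following Eqs. (5)-(6):
    each position is independently replaced with probability p,
    where noise_fn(c) returns the noise symbol for character c."""
    noisy = []
    for c in chars:
        r = 1 if rng.random() < p else 0   # r_c ~ Bernoulli(p)
        noisy.append(noise_fn(c) if r == 1 else c)
    return noisy

# Example: placeholder-based substitution with p = 0.2.
sentence = list("语音翻译")
noisy = substitute(sentence, p=0.2, noise_fn=lambda c: "<SUB>")
print("".join(sentence), "->", " ".join(noisy))
```

Because a fresh mask is drawn at every training iteration, the same sentence yields different perturbed variants across epochs, which is what lets the model see many noisy versions of each training pair.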
How to choose noise

We design four noising schemes for sampling the noise symbols that substitute the selected source positions (a code sketch of these samplers is given at the end of this subsection):

• Placeholder-based Substitution: We first propose a simple and general method that uses only the special placeholder "<SUB>", which hardly ever appears in the wild, as the noise symbol. The motivation is to force the model to reduce its dependence on individual characters. With this approach, our NMT model can theoretically observe $2^{|z|}$ variants of each source sentence $z$.

• Uniform Distribution-based Substitution: Since the placeholder "<SUB>" hardly appears in realistic ASR outputs, there is still a mismatch between the perturbed training data and ASR outputs. We therefore propose to substitute the selected positions with noise sampled from the uniform distribution

$P(\tilde{c}) = \frac{1}{|V|}$, (7)

where $V$ is the source vocabulary.

• Frequency-based Substitution: It is well known that ASR systems have difficulty recognizing tokens that are infrequent in the training data (Goldwater, Jurafsky, and Manning 2010). In other words, low-frequency tokens in an utterance tend to be misrecognized as frequent tokens. To simulate this real-world ASR scenario, we propose to substitute the selected positions with a sample from the unigram frequency distribution

$P(\tilde{c}) = \frac{\mathrm{Count}(\tilde{c})}{\sum_{c' \in V \setminus \{c\}} \mathrm{Count}(c')}$, (8)

where $\mathrm{Count}(\cdot)$ returns the character frequency in the training data.

• Homophone-based Substitution: Another important fact about ASR outputs is that a character is often substituted by one of its homophones, which is pronounced the same as the original character but differs in meaning (Li et al. 2008). Therefore, we propose to substitute the selected positions with a sample from the homophone vocabulary according to the distribution

$P(\tilde{c}) = \frac{\mathrm{Count}(\tilde{c})}{\sum_{c' \in V(c) \setminus \{c\}} \mathrm{Count}(c')}$, (9)

where $V(c)$ is the vocabulary of characters that share their pronunciation with $c$.

Table 3: Examples of our noise sampling methods. For the original source sentence "语音翻译", the second character "音" is picked to be substituted by noise. Under the Placeholder method, "音" is substituted with "<SUB>". Under the Uniform method, every character has equal probability of being the noise, so "音" may be substituted with a low-frequency character such as "饕". Under the Frequency method, "音" is substituted by a high-frequency character such as "好". Under the Homophone method, "音" is substituted by "因", which shares its pronunciation.
Method | Noise Example
Placeholder | 语 <SUB> 翻 译
Uniform | 语 饕 翻 译
Frequency | 语 好 翻 译
Homophone | 语 因 翻 译

Using the crafted noisy training examples, the model is forced to learn a more general representation of the perturbed training data and to remain stable under the classes of simulated errors.
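As a concrete companion to the four schemes above — again a hedged sketch under our own naming, not the paper's implementation — the samplers below plug into the `substitute` function from the earlier sketch. The frequency table `counts` and the homophone dictionary `homophones` are assumed to be precomputed from the training data.

```python
import random

def uniform_sampler(vocab):
    # Eq. (7): every vocabulary character is equally likely.
    return lambda c: random.choice(vocab)

def frequency_sampler(counts):
    # Eq. (8): sample proportionally to unigram frequency,
    # excluding the original character itself.
    def sample(c):
        cands = [x for x in counts if x != c]
        weights = [counts[x] for x in cands]
        return random.choices(cands, weights=weights, k=1)[0]
    return sample

def homophone_sampler(counts, homophones):
    # Eq. (9): sample among same-pronunciation characters,
    # weighted by frequency; keep the original if no homophone exists.
    def sample(c):
        cands = [x for x in homophones.get(c, []) if x != c]
        if not cands:
            return c
        weights = [counts.get(x, 1) for x in cands]  # default 1 avoids zero weights
        return random.choices(cands, weights=weights, k=1)[0]
    return sample
```

For example, `substitute(sentence, p=0.1, noise_fn=homophone_sampler(counts, homophones))` would produce the homophone-noised variant of a sentence.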
Chinese Pinyin-aware Input Embeddings

With the proposed methods of crafting noisy examples, the source part of the training data is constantly corrupted with minor confusing characters at training time. Each distorted character is therefore rare over the whole training process, leading to an issue of data sparsity. Chinese is famous for its numerous homophones: more than half of Chinese Internet homophones retain the same pronunciation as their base words (Tang 2014). When typing a word on a keyboard, a person encounters more than one candidate character for the word, and web users often choose a malapropism, an incorrect word with a similar sound, to express intense emotions. For example, a web user may pick "砖家" (zhuān jiā) instead of "专家" (zhuān jiā, specialist); the two words share the same pronunciation, but the former is an ironic name for a "specialist" who specializes in talking nonsense in Chinese Internet language. Representing each Chinese character only by its surface symbol implies that any pair of characters is as distinct as any other pair, ignoring the Pinyin sequences that characters share. Yet human beings generally have no difficulty understanding this kind of informal or inaccurate Chinese text as long as the pronunciation is correct. This suggests that Pinyin information can help generalize knowledge learned about one character to another via their shared Pinyin sequences, since Chinese syllable-level constraints are less restrictive than surface character sequences. However, many Chinese characters share the same toneless Pinyin but not their meanings, for example "砖" (brick) and "专" (specific). Treating a surface character and its Pinyin as equivalent would therefore introduce a new source of ambiguity that can harm NMT performance. Due to this concern, we propose factored input embeddings that combine both character and Pinyin representations, motivated by Sennrich and Haddow (2016). Given a Pinyin sequence $p = (p_1, \ldots, p_i, \ldots, p_I)$ of the same length as the character sequence $z = (z_1, \ldots, z_i, \ldots, z_I)$, we look up separate embedding vectors for the character and its Pinyin, and the final factored input embedding $e_i$ for each position $i$ is generated by concatenating the character embedding $E_c[z_i]$ and the Pinyin embedding $E_p[p_i]$:

$e_i = [E_c[z_i]; E_p[p_i]]$. (10)

Experiments

Setup

We conduct all experiments on the WMT'17 Chinese-English translation task. The training data consists of 9.3M bilingual sentence pairs obtained by combining the CWMT corpora and News Commentary v12. We use newsdev2017 and newstest2017 as our validation set and clean test set, respectively. Due to the lack of public datasets for speech translation, we craft three noisy test sets with different amounts of homophone errors in order to simulate the homophonic substitution errors of ASR. We construct three noisy variants of each source sentence of newstest2017 to increase the diversity of noisy characters, so the size of each artificial noisy test set is three times that of newstest2017. We argue that this setup is very close to a realistic speech translation scenario. It is well known that NMT benefits from an increasing amount of training data (Koehn and Knowles 2017). In addition to the WMT training data, we therefore also evaluate the best-performing system on our in-house large-scale Chinese-English training data of about 80M sentence pairs.
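Two constructions from this section lend themselves to short illustrations: the factored input embedding of Eq. (10) and the homophone-noised test variants just described. The sketch below is an assumption-laden illustration rather than released code: the embedding tables `E_c` and `E_p` are toy random matrices (their vocabulary sizes are arbitrary; the dimensions 64 and 448 are the ones reported later for the Pinyin systems), and `homophone_sampler` refers to the earlier hypothetical helper.

```python
import numpy as np

# Factored input embedding, Eq. (10): concatenate character and
# Pinyin embeddings so the total input size is d_c + d_p.
d_c, d_p = 64, 448                         # sizes used for the Pinyin systems
E_c = 0.01 * np.random.randn(6000, d_c)    # toy character embedding table
E_p = 0.01 * np.random.randn(500, d_p)     # toy Pinyin embedding table

def factored_embedding(char_ids, pinyin_ids):
    # e_i = [E_c[z_i]; E_p[p_i]] for every position i.
    return np.concatenate([E_c[char_ids], E_p[pinyin_ids]], axis=-1)

# Crafting a noisy test set: substitute k distinct characters per
# sentence with homophones, producing several variants per sentence.
def noisy_variants(chars, k, sampler, n_variants=3, rng=np.random):
    variants = []
    for _ in range(n_variants):
        noisy = list(chars)
        idx = rng.choice(len(chars), size=min(k, len(chars)), replace=False)
        for i in idx:
            noisy[i] = sampler(noisy[i])
        variants.append(noisy)
    return variants
```

Applying `noisy_variants` with k = 1, 2, 3 to every sentence of a clean test set would yield noisy test sets analogous to the "1 Sub", "2 Subs", and "3 Subs" sets used below.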
All of the following experiments are carried out with the Transformer (Vaswani et al. 2017), which differs from earlier NMT models in that both the encoder and decoder rely entirely on self-attention and position-wise fully connected layers instead of recurrent neural networks or convolutions. We set the size of all input and output layers to 512 and that of the inner FFN layer to 2048. Training is performed on a single server with 8 Nvidia M40 GPUs. On each GPU we use a batch containing a set of sentence pairs with approximately 4096 source tokens and 4096 target tokens. We train each model on sentences of up to 100 words for a total of 600K steps, saving checkpoints every 1000 training steps. As the final model for testing, we use a single model obtained by averaging the 20 checkpoints that perform best on the development set. During decoding, we set the beam size to 4. Other training parameters follow the default configuration of the Transformer base model. We report case-sensitive NIST BLEU (Papineni et al. 2002) scores for all systems. For evaluation, we first merge output tokens back to their untokenized representation using detokenizer.pl and then use mteval-v13a.pl to compute the scores against the WMT reference. In this work, we focus on crafting ASR-specific noisy examples and incorporating the Chinese Pinyin feature to improve the robustness of NMT. We therefore adopt the $L_{noisy}$ loss function proposed by Cheng et al. (2018) as our training objective; we omit an exhaustive description of this loss function and refer readers to Cheng et al. (2018). It is worth noting that our approach can also be combined with the other adversarial training methods proposed by Cheng et al. (2018).

Robustness Performance

Our character substitution has a hyper-parameter $p \in [0, 1]$, the probability of substituting a character in the inputs. In this section, we explore the effect of tuning this hyper-parameter. The results in Table 5 show that both the Placeholder and Uniform models work best at $p = 0.2$, $p = 0.1$ is optimal for the Frequency model, and the Homophone model achieves its best performance at $p = 0.3$. Different noise sampling methods thus have their own optimal substitution rate, and it is hard to set a universal substitution rate for all models. It can also be seen that the Homophone model behaves stably on all noisy test sets even as the substitution rate increases, whereas the other models suffer more performance degradation in the presence of more noise. We suspect that homophone noise, which still preserves the latent semantic information, does not hinder the training process severely.

Table 5: Case-sensitive BLEU scores of our approaches on the clean test set (newstest2017) and three artificial noisy test sets (1 Sub, 2 Subs, and 3 Subs), crafted by randomly substituting one, two, and three original characters of each source sentence in the clean test set with their homophones, respectively. p is the substitution rate. "Placeholder" uses the placeholder "<SUB>" as the noise token; "Uniform" denotes uniform-distribution-based noise sampling; "Frequency" denotes character-frequency-based noise sampling; "Homophone" denotes Chinese-homophone-based noise sampling; "Pinyin" denotes incorporating Chinese Pinyin as an additional input feature.

Translation Performance

Although dropout is applied to the fully connected layers of all models, the baseline model still fails to translate the noisy inputs.
The results in Table 5 show that the baseline model degrades significantly on the "1 Sub" test set, and its performance becomes worse as noise increases in the other noisy test sets. This demonstrates that conventional NMT is indeed fragile to perturbed inputs, consistent with prior work (Belinkov and Bisk 2017; Cheng et al. 2018). Our methods, however, make the NMT model more robust to noisy inputs. First, the simple Placeholder model improves translation quality over the baseline model from +0.94 BLEU to +1.16 BLEU as the number of homophone noise characters increases from 1 to 3 (Table 5). This shows that randomly substituting some input characters is a simple yet effective regularizer for conventional NMT. We also evaluate the Uniform model, which uses Chinese characters as substitutions; the results in Table 5 suggest that it achieves a marginal improvement over the Placeholder model. The Frequency model not only significantly enhances the robustness of NMT over the baseline system, but also improves over the Uniform model by up to an average of +1.18 BLEU on the noisy test sets; compared with the Uniform model, its improvement is especially substantial on noisy text with more than one incorrect character. Finally, the Homophone model performs best, achieving a significant improvement on noisy text over the baseline model of up to +2.72 BLEU. We observe that all our robustness-enhanced models outperform the baseline model on the clean test set by up to +0.63 BLEU, and the translation performance of the Homophone model on "1 Sub" is even superior to that of the baseline model on the clean test set. Moreover, even on "3 Subs" with three noise characters, the performance degradation of the Homophone model is only -0.83 BLEU, while the baseline model drops by as much as -3.95 BLEU.

Pinyin Feature

In this section, we evaluate the performance of our method combined with the Chinese Pinyin feature. We use the ChineseTone tool (https://github.com/letiantian/ChineseTone) to convert Chinese characters into their toneless Pinyin counterparts. For a fair comparison, we keep the total input embedding size fixed at 512 by setting the embedding sizes of character and Pinyin to 64 and 448, respectively, for each system with Pinyin. As shown in Table 5, the Chinese Pinyin feature provides further robustness improvements for the baseline system on all the noisy test sets. The Homophone model with the Pinyin feature achieves a further improvement of +0.71 BLEU on average on the noisy test sets and a slight generalization improvement on the clean test set, demonstrating that Pinyin is an effective input feature for improving the robustness of Chinese-sourced NMT. It is worth noting that the Placeholder model with the Pinyin feature improves over the original Placeholder model on the noisy test sets by up to +1.59 BLEU; we suspect that the Pinyin feature effectively compensates the model for the semantic information lost at training time. Among all our models, the Homophone model with the Pinyin feature achieves comparable performance on the clean test set but performs best on the noisy test sets, suggesting that it strikes a trade-off between robustness and generalization. Therefore, the Homophone model with substitution rate 0.1 is used as the best-performing NMT model in the subsequent experiments.
Training Cost

We also compare the training cost of our robust system and the baseline system. As shown in Figure 1, the training cost of the baseline model is lower than that of our robust system throughout training, yet our robust system achieves a higher BLEU score. This indicates that our approach effectively improves the generalization performance of the conventional NMT model trained on clean training data.

Effect of Source Sentence Length

We also evaluate our robust system and the baseline on the noisy test sets across different source sentence lengths. As shown in Figure 2, the translation quality of both systems improves as the length increases and then degrades once the length exceeds 50. This observation is also consistent with prior work, and implies that more context is helpful for noise disambiguation. Our robust system outperforms the baseline model on all the noisy test sets.

Effect of Training Data Size

As shown in Table 6, increasing the training data significantly improves the baseline system by up to 3.68 BLEU on the clean test data, but yields a robustness improvement of only +2.01 BLEU on average on the noisy test sets. This demonstrates that the degradation of translation quality caused by noise remains unavoidable for the conventional NMT model even when trained on a larger scale of training data. In contrast, our robust system achieves an improvement on the noisy test sets comparable to its improvement on the clean data (2.72 BLEU vs. 2.9 BLEU), showing that our method is stable and effective for NMT regardless of the amount of training data. Compared with the baseline system, more training data also brings a larger robustness improvement for our robust system on the noisy data (2.72 BLEU vs. 2.01 BLEU), indicating that our method can make better use of a larger amount of training data to further enhance the robustness of NMT.

A Case Study

In Table 7, we provide a realistic example illustrating the advantage of our robust NMT system on an erroneous ASR output. In this case, the syntactic structure and meaning of the original sentence are destroyed because the original character "数" (digit) is misrecognized as the character "书" (book); "数" and "书" share the same pronunciation without tones.

Table 7: Translations of the baseline NMT system and our robust NMT system for the same erroneous ASR output.
Speech: 该数字已经大幅下滑近90%
ASR: 该书字已经大幅下滑近90%
Ref: The figure has fallen sharply by almost 90%
Baseline: The book has fallen by nearly 90%
Our approach: The figure has fallen by nearly 90%

Human beings generally have no difficulty understanding this flawed sentence with the aid of its correct pronunciation. The baseline NMT system can hardly avoid translating "书", a high-frequency character with an explicit word sense; in contrast, our robust NMT system translates the sentence correctly. We also observe that our system works well even when the original character "数" is substituted with other homophones, such as "舒" (comfortable), showing that our system has a strong ability to recover from minor ASR errors. We attribute the robustness improvement mainly to our proposed ASR-specific noise training and the Chinese Pinyin feature.

Conclusion

Erroneous ASR is a challenge to speech translation.
We propose a simple yet effective approach to improve the robustness of NMT to ASR noise by crafting ASR-specific noisy training examples and incorporating Chinese Pinyin as an additional input feature. Experimental results show that our method significantly outperforms the baseline and performs stably on three test sets with different amounts of noise characters, while also achieving a generalization improvement on a clean test set. In future work, we would like to investigate appropriate methods for constructing noisy training examples for other types of ASR errors. It will also be necessary to evaluate our approach in a realistic speech translation system.
4,744
1811.00429
2898630544
Several applications of Reinforcement Learning suffer from instability due to high variance. This is especially prevalent in high-dimensional domains. Regularization is a commonly used technique in machine learning to reduce variance, at the cost of introducing some bias. Most existing regularization techniques focus on spatial (perceptual) regularization. Yet in reinforcement learning, due to the nature of the Bellman equation, there is an opportunity to also exploit temporal regularization based on smoothness in value estimates over trajectories. This paper explores a class of methods for temporal regularization. We formally characterize the bias induced by this technique using Markov chain concepts. We illustrate the various characteristics of temporal regularization via a sequence of simple discrete and continuous MDPs, and show that the technique provides improvement even in high-dimensional Atari games.
Regularization in RL has been considered from several different perspectives. One line of investigation focuses on regularizing the features learned on the state space. In particular, backward bootstrapping methods can be seen as regularizing in feature space based on temporal proximity @cite_2 @cite_1 @cite_7 . These approaches assume that nearby states in the state space have similar value. Other works focus on regularizing the changes in policy directly; those approaches are often based on entropy methods. Explicit regularization in the temporal space has received much less attention. Temporal regularization may in some sense be seen as a "backward" multi-step method. The closest work to ours is possibly that of Xu et al., who define a natural value approximator by projecting the previous states' estimates, adjusting for the reward and @math . Their formulation, while similar in motivation, leads to different theory and algorithms. Convergence properties and the bias induced by this class of methods were also not analyzed in that work.
{ "abstract": [ "Residual gradient (RG) was proposed as an alternative to TD(0) for policy evaluation when function approximation is used, but there exists little formal analysis comparing them except in very limited cases. This paper employs techniques from online learning of linear functions and provides a worst-case (non-probabilistic) analysis to compare these two types of algorithms when linear function approximation is used. No statistical assumptions are made on the sequence of observations, so the analysis applies to non-Markovian and even adversarial domains as well. In particular, our results suggest that RG may result in smaller temporal differences, while TD(0) is more likely to yield smaller prediction errors. These phenomena can be observed even in two simple Markov chain examples that are non-adversarial.", "ABSTRACT A number of reinforcement learning algorithms have been developed that are guaranteed to converge to the optimal solution when used with lookup tables. It is shown, however, that these algorithms can easily become unstable when implemented directly with a general function-approximation system, such as a sigmoidal multilayer perceptron, a radial-basis-function system, a memory-based learning system, or even a linear function-approximation system. A new class of algorithms, residual gradient algorithms, is proposed, which perform gradient descent on the mean squared Bellman residual, guaranteeing convergence. It is shown, however, that they may learn very slowly in some cases. A larger class of algorithms, residual algorithms, is proposed that has the guaranteed convergence of the residual gradient algorithms, yet can retain the fast learning speed of direct algorithms. In fact, both direct and residual gradient algorithms are shown to be special cases of residual algorithms, and it is shown that residual algorithms can combine the advantages of each approach. The direct, residual gradient, and residual forms of value iteration, Q-learning, and advantage learning are all presented. Theoretical analysis is given explaining the properties these algorithms have, and simulation results are given that demonstrate these properties.", "Sutton, Szepesvari and Maei (2009) recently introduced the first temporal-difference learning algorithm compatible with both linear function approximation and off-policy training, and whose complexity scales only linearly in the size of the function approximator. Although their gradient temporal difference (GTD) algorithm converges reliably, it can be very slow compared to conventional linear TD (on on-policy problems where TD is convergent), calling into question its practical utility. In this paper we introduce two new related algorithms with better convergence rates. The first algorithm, GTD2, is derived and proved convergent just as GTD was, but uses a different objective function and converges significantly faster (but still not as fast as conventional TD). The second new algorithm, linear TD with gradient correction, or TDC, uses the same update rule as conventional TD except for an additional term which is initially zero. In our experiments on small test problems and in a Computer Go application with a million features, the learning rate of this algorithm was comparable to that of conventional TD. This algorithm appears to extend linear TD to off-policy learning with no penalty in performance while only doubling computational requirements." 
], "cite_N": [ "@cite_1", "@cite_7", "@cite_2" ], "mid": [ "1969477885", "1646707810", "2075268401" ] }
Temporal Regularization in Markov Decision Process
There has been much progress in Reinforcement Learning (RL) techniques, with some impressive successes in games [30] and several interesting applications on the horizon [17,29,26,9]. However, RL methods are too often hampered by high variance, whether due to randomness in data collection, effects of initial conditions, complexity of the learner's function class, hyper-parameter configuration, or sparsity of the reward signal [15]. Regularization is a commonly used technique in machine learning to reduce variance, at the cost of introducing some (smaller) bias. Regularization typically takes the form of smoothing over the observation space to reduce the complexity of the learner's hypothesis class. In the RL setting, we have an interesting opportunity to consider an alternative form of regularization, namely temporal regularization. Effectively, temporal regularization considers smoothing over the trajectory, whereby the estimate of the value function at one state is assumed to be related to the value function at the state(s) that typically occur before it in the trajectory. This structure arises naturally out of the fact that the value at each state is estimated using the Bellman equation. The standard Bellman equation clearly defines the dependency between value estimates. In temporal regularization, we amplify this dependency by making each state depend more strongly on estimates of previous states, as opposed to multi-step methods that consider future states. This paper proposes a class of temporally regularized value function estimates. We discuss properties of these estimates, based on notions from Markov chains, in the policy evaluation setting, and extend the notion to the control case. Our experiments show that temporal regularization effectively reduces variance and estimation error in discrete and continuous MDPs. The experiments also highlight that regularizing in the time domain rather than in the spatial domain allows more robustness to cases where state features are misspecified or noisy, as is the case in some Atari games.

Related work

Regularization in RL has been considered via several different perspectives. One line of investigation focuses on regularizing the features learned on the state space [11,25,24,10,21,14]. In particular, backward bootstrapping methods can be seen as regularizing in feature space based on temporal proximity [34,20,1]. These approaches assume that nearby states in the state space have similar value. Other works focus on regularizing the changes in policy directly; those approaches are often based on entropy methods [23,28,2]. Explicit regularization in the temporal space has received much less attention. Temporal regularization may in some sense be seen as a "backward" multi-step method [32]. The closest work to ours is possibly [36], where they define a natural value approximator by projecting the previous states' estimates, adjusting for the reward and γ. Their formulation, while similar in motivation, leads to different theory and algorithms. Convergence properties and the bias induced by this class of methods were also not analyzed in Xu et al. [36].

Markov chains

We begin by introducing discrete Markov chain concepts that will be used to study the properties of temporally regularized MDPs.
A discrete-time Markov chain [19] is defined by a discrete set of states S and a transition function P : S × S → [0, 1], which can also be written in matrix form as P_{ij} = P(i|j). Throughout the paper, we make the following mild assumption on the Markov chain:
Assumption 1. The Markov chain P is ergodic: P has a unique stationary distribution µ.
In Markov chain theory, one of the main challenges is to study the mixing time of the chain [19]. Several results have been obtained for chains that are reversible, that is, chains that satisfy detailed balance.
Definition 1 (Detailed balance [16]). Let P be an irreducible Markov chain with invariant stationary distribution µ. The chain is said to satisfy detailed balance if and only if
µ_i P_{ij} = µ_j P_{ji}, \quad ∀ i, j ∈ S.   (1)
Intuitively, this means that if we start the chain in the stationary distribution, the amount of probability that flows from i to j is equal to the amount that flows from j to i. In other words, the system must be at equilibrium. An intuitive example of a physical system not satisfying detailed balance is a snowflake in a coffee.
Indeed, many chains do not satisfy this detailed balance property. In this case it is possible to use a different, but related, chain called the reversal Markov chain to infer mixing time bounds [7].
Definition 2 (Reversal Markov chain [16]). Let \tilde{P}, the reversal Markov chain of P, be defined as:
\tilde{P}_{ij} = \frac{µ_j P_{ji}}{µ_i}, \quad ∀ i, j ∈ S.   (2)
If P is irreducible with invariant distribution µ, then \tilde{P} is also irreducible with invariant distribution µ.
The reversal Markov chain \tilde{P} can be interpreted as the Markov chain P with time running backwards. If the chain is reversible, then \tilde{P} = P.
Markov Decision Process
A Markov Decision Process (MDP), as defined in [27], consists of a discrete set of states S, a transition function P : S × A × S → [0, 1], and a reward function r : S × A → R. On each round t, the learner observes the current state s_t ∈ S and selects an action a_t ∈ A, after which it receives reward r_t = r(s_t, a_t) and moves to a new state s_{t+1} ∼ P(·|s_t, a_t). We define a stationary policy π as a probability distribution over actions conditioned on states, π : S × A → [0, 1].
Discounted Markov Decision Process
When performing policy evaluation in the discounted case, the goal is to estimate the discounted expected return of policy π at a state s ∈ S, v^π(s) = E_π[\sum_{t=0}^{∞} γ^t r_{t+1} | s_0 = s], with discount factor γ ∈ [0, 1). This v^π can be obtained as the fixed point of the Bellman operator T^π such that:
T^π v^π = r^π + γ P^π v^π,   (3)
where P^π denotes the |S| × |S| transition matrix under policy π, v^π is the state-value column vector, and r^π is the reward column vector. The matrix P^π also defines a Markov chain.
In the control case, the goal is to find the optimal policy π* that maximizes the discounted expected return. Under the optimal policy, the optimal value function v* is the fixed point of the non-linear optimal Bellman operator:
T^* v^* = \max_{a ∈ A} [r(a) + γ P(a) v^*].   (4)
Temporal regularization
Regularization in the feature/state space, or spatial regularization as we call it, exploits the regularities that exist in the observation (or state). In contrast, temporal regularization considers the temporal structure of the value estimates through a trajectory. Practically this is done by smoothing the value estimate of a state using estimates of states that occurred earlier in the trajectory. In this section we first introduce the concept of temporal regularization and discuss its properties in the policy evaluation setting; we then show how this concept can be extended to exploit information from the entire trajectory by casting temporal regularization as a time series prediction problem.
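To make Definitions 1 and 2 concrete, here is a minimal NumPy sketch (an illustration, not code from the paper) that builds the reversal chain of Eq. (2) for a random ergodic chain and checks its basic properties; the chain size and random seed are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n = 5
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)      # random row-stochastic transition matrix

# Stationary distribution mu: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
mu = np.real(vecs[:, np.argmax(np.real(vals))])
mu /= mu.sum()

# Reversal chain (Definition 2): P~_ij = mu_j * P_ji / mu_i.
P_rev = (mu[None, :] * P.T) / mu[:, None]

assert np.allclose(P_rev.sum(axis=1), 1.0)   # P~ is row stochastic
assert np.allclose(mu @ P_rev, mu)           # P~ shares the stationary distribution mu
print("chain is reversible:", np.allclose(P_rev, P))  # True iff detailed balance holds

A chain sampled this way is almost never reversible, so the final check typically prints False.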
Let us focus on the simplest case, where the value estimate at the current state is regularized using only the value estimate at the previous state in the trajectory, yielding updates of the form:
v_β(s_t) = E_{s_{t+1}, s_{t-1} ∼ π}[r(s_t) + γ((1−β) v_β(s_{t+1}) + β v_β(s_{t-1}))] = r(s_t) + γ(1−β) \sum_{s_{t+1} ∈ S} p(s_{t+1}|s_t) v_β(s_{t+1}) + γβ \sum_{s_{t-1} ∈ S} \frac{p(s_t|s_{t-1}) p(s_{t-1})}{p(s_t)} v_β(s_{t-1}),   (5)
for a parameter β ∈ [0, 1] and p(s_{t+1}|s_t) the transition probability induced by the policy π. This can be rewritten in matrix form as v_β = r + γ((1−β) P^π + β \tilde{P}^π) v_β, where \tilde{P}^π corresponds to the reversal Markov chain of the MDP. We define a temporally regularized Bellman operator as:
T^π_β v_β = r + γ((1−β) P^π v_β + β \tilde{P}^π v_β).   (6)
To lighten the notation, we denote P^π as P and \tilde{P}^π as \tilde{P}.
Remark. For β = 0, Eq. 6 corresponds to the original Bellman operator.
We can prove that this operator has the following property.
Theorem 1. The operator T^π_β has a unique fixed point v^π_β, and T^π_β is a contraction mapping.
Proof. We first prove that T^π_β is a contraction mapping in the L_∞ norm. We have that
‖T^π_β u − T^π_β v‖_∞ = ‖r + γ((1−β) P u + β \tilde{P} u) − (r + γ((1−β) P v + β \tilde{P} v))‖_∞ = γ ‖((1−β) P + β \tilde{P})(u − v)‖_∞ ≤ γ ‖u − v‖_∞,   (7)
where the last inequality uses the fact that the convex combination of two row-stochastic matrices is also row stochastic (the proof can be found in the appendix). Then, using the Banach fixed point theorem, we obtain that v^π_β is the unique fixed point.
Furthermore, the newly induced Markov chain (1−β) P + β \tilde{P} has the same stationary distribution as the original P (the proof can be found in the appendix).
In the policy evaluation setting, the bias between the original value function v^π and the regularized one v^π_β can be characterized as a function of the difference between P and its Markov reversal \tilde{P}, weighted by β and the reward distribution.
Proposition 1. Let v^π = \sum_{i=0}^{∞} γ^i P^i r and v^π_β = \sum_{i=0}^{∞} γ^i ((1−β) P + β \tilde{P})^i r. We have that
‖v^π − v^π_β‖_∞ = ‖\sum_{i=0}^{∞} γ^i (P^i − ((1−β) P + β \tilde{P})^i) r‖_∞ ≤ \sum_{i=0}^{∞} γ^i ‖(P^i − ((1−β) P + β \tilde{P})^i) r‖_∞.   (8)
This quantity is naturally bounded for γ < 1.
Remark. Let P^∞ denote a matrix whose columns consist of the stationary distribution µ. By the property of reversal Markov chains and Lemma 1, we have that lim_{i→∞} ‖P^i r − P^∞ r‖ → 0 and lim_{i→∞} ‖((1−β) P + β \tilde{P})^i r − P^∞ r‖ → 0, such that the Markov chain P and the regularized chain (1−β) P + β \tilde{P} converge to the same limit. Therefore, the norm ‖(P^i − ((1−β) P + β \tilde{P})^i) r‖_p also converges to 0 in the limit.
Remark. It is interesting to note that if the chain is reversible, meaning \tilde{P} = P, then the fixed points of both operators coincide, that is, v^π = v^π_β.
Discounted average reward case: The temporally regularized MDP has the same discounted average reward as the original one, as it is possible to define the discounted average reward [35] as a function of the stationary distribution µ, the reward vector, and γ. This leads to the following property (the proof can be found in the appendix).
Proposition 2. For a reward vector r, the MDPs defined by the transition matrices P and (1−β) P + β \tilde{P} have the same average reward ρ.
Intuitively, this means that temporal regularization only reweighs the reward on each state based on the Markov reversal, while preserving the average reward.
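As a numerical sanity check of Theorem 1 and Proposition 1, the sketch below (illustrative only; the sizes, seed, and parameter values are arbitrary) iterates both the original and the temporally regularized Bellman operators to their fixed points and reports the resulting bias.

import numpy as np

def reversal(P):
    # Reversal chain of an ergodic row-stochastic matrix (Definition 2).
    vals, vecs = np.linalg.eig(P.T)
    mu = np.real(vecs[:, np.argmax(np.real(vals))])
    mu /= mu.sum()
    return (mu[None, :] * P.T) / mu[:, None]

rng = np.random.default_rng(1)
n, gamma, beta = 10, 0.9, 0.3
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)
r = rng.random(n)
M = (1 - beta) * P + beta * reversal(P)   # regularized chain, still row stochastic

v = np.zeros(n); v_beta = np.zeros(n)
for _ in range(2000):                     # both operators are gamma-contractions
    v = r + gamma * (P @ v)               # T^pi
    v_beta = r + gamma * (M @ v_beta)     # T^pi_beta (Eq. 6)

print("bias ||v - v_beta||_inf =", np.max(np.abs(v - v_beta)))  # bounded as in Prop. 1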
Temporal Regularization as a time series prediction problem: It is possible to cast the problem of temporal regularization as a time series prediction problem, and to use richer models of temporal dependencies, such as exponential smoothing [12], the ARMA model [5], etc. We can write the update in a general form using n different regularizers (\tilde{v}_0, \tilde{v}_1, ..., \tilde{v}_{n−1}):
v(s_t) = r(s_t) + γ \sum_{i=0}^{n−1} β(i) \tilde{v}_i(s_{t+1}),   (9)
where \tilde{v}_0(s_{t+1}) = v(s_{t+1}) and \sum_{i=0}^{n−1} β(i) = 1. For example, using exponential smoothing, where \tilde{v}(s_{t+1}) = (1−λ) v(s_{t−1}) + (1−λ)λ v(s_{t−2}) + ..., the update can be written in operator form as:
T^π_β v = r + γ((1−β) P v + β (1−λ) \sum_{i=1}^{∞} λ^{i−1} \tilde{P}^i v),   (10)
and an argument similar to Theorem 1 can be used to show the contraction property. The bias of exponential smoothing in policy evaluation can be characterized as:
‖v^π − v^π_β‖_∞ ≤ \sum_{i=0}^{∞} γ^i ‖(P^i − ((1−β) P + β(1−λ) \sum_{j=1}^{∞} λ^{j−1} \tilde{P}^j)^i) r‖_∞.   (11)
Using more powerful regularizers could be beneficial, for example to reduce variance by smoothing over more values (exponential smoothing) or to model the trend of the value function through the trajectory using a trend-adjusted model [13]. An example of policy evaluation with temporal regularization using exponential smoothing is provided in Algorithm 1.
Algorithm 1 Policy evaluation with temporal regularization via exponential smoothing
1: Input: policy π, step size α, parameters β, γ, λ
2: initialize v and the smoothed estimate p
3: for all steps do
4:   Choose a ∼ π(s)
5:   Take action a, observe reward r(s) and next state s'
6:   v(s) = v(s) + α(r(s) + γ((1 − β)v(s') + βp) − v(s))
7:   p = (1 − λ)v(s) + λp
8: end for
Control case: Temporal regularization can be extended to MDPs with actions by modifying the target of the value function (or the Q-values) using temporal regularization. The experiments (Sec. 5.6) present an example of how temporal regularization can be applied within an actor-critic framework. The theoretical analysis of the control case is outside the scope of this paper.
Temporal difference with function approximation: It is also possible to extend temporal regularization to function approximation, such as semi-gradient TD [33]. Assuming a function v^β_θ parameterized by θ, we can consider r(s) + γ((1−β) v^β_θ(s_{t+1}) + β v^β_θ(s_{t−1})) − v^β_θ(s_t) as the target and differentiate only with respect to v^β_θ(s_t). An example of a temporally regularized semi-gradient TD algorithm can be found in the appendix.
Experiments
We now present empirical results illustrating potential advantages of temporal regularization, and characterizing its bias and variance effects on value estimation and control.
Mixing time: This first experiment showcases that the underlying Markov chain of an MDP can have a smaller mixing time when temporally regularized. The mixing time can be seen as the number of time steps required for the Markov chain to get close enough to its stationary distribution. Therefore, the mixing time also determines the rate at which policy evaluation converges to the optimal value function [3]. We consider a synthetic MDP with 10 states where transition probabilities are sampled from the uniform distribution. Let P^∞ denote a matrix whose columns consist of the stationary distribution µ. To compare mixing times, we evaluate the error corresponding to the distance of P^i and ((1−β) P + β \tilde{P})^i to the convergence point P^∞ after i iterations. Figure 1 displays the error curve when varying the regularization parameter β. We observe a U-shaped error curve: intermediate values of β in this example yield a faster mixing time. One explanation is that transition matrices with extreme probabilities (low or high) are poorly conditioned; regularizing with the reversal Markov chain often leads to a better conditioned matrix, at the cost of injecting bias.
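A miniature version of this mixing-time comparison can be sketched as follows (again an illustration; the norm choice and the number of iterations are arbitrary).

import numpy as np

def reversal_and_mu(P):
    vals, vecs = np.linalg.eig(P.T)
    mu = np.real(vecs[:, np.argmax(np.real(vals))])
    mu /= mu.sum()
    return (mu[None, :] * P.T) / mu[:, None], mu

rng = np.random.default_rng(2)
n = 10
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)
P_rev, mu = reversal_and_mu(P)
P_inf = np.tile(mu, (n, 1))    # limit of P^i for a row-stochastic P: every row equals mu

for beta in (0.0, 0.25, 0.5, 0.75):
    M = (1 - beta) * P + beta * P_rev        # regularized chain
    Mi = np.linalg.matrix_power(M, 30)       # 30 steps of mixing
    print(f"beta={beta:.2f}: ||M^30 - P_inf|| = {np.linalg.norm(Mi - P_inf):.3e}")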
Bias: It is well known that reducing variance comes at the expense of inducing a (smaller) bias. This has been characterized previously (Sec. 4) in terms of the difference between the original Markov chain and its reversal, weighted by the reward. In this experiment, we attempt to give an intuitive idea of what this means. More specifically, we would expect the bias to be small if values along the trajectories are similar. To this end, we consider a synthetic MDP with 10 states where both transition functions and rewards are sampled randomly from a uniform distribution. In order to create temporal dependencies in the trajectory, we smooth the rewards of N states that are temporally close (in terms of trajectory) using the following formula: r(s_t) = (r(s_t) + r(s_{t+1})) / 2. Figure 2 shows the difference between the regularized and un-regularized MDPs as N changes, for different values of the regularization parameter β. We observe that increasing N, meaning that more states get rewards close to one another, results in less bias. This is due to the rewards putting emphasis on states where the original and reversal Markov chains are similar.
Variance: The primary motivation of this work is to reduce variance, therefore we now consider an experiment targeting this aspect. Figure 3 shows an example of a synthetic, 3-state MDP, where the variance of S_1 is (relatively) high. We consider an agent that is evolving in this world, changing states following the stochastic policy indicated. We are interested in the error when estimating the optimal state value of S_1, v*(S_1), with and without temporal regularization, denoted v^π_β(S_1) and v^π(S_1), respectively. Figure 4 shows these errors at each iteration, averaged over 100 runs. (Figure 4: the left plot shows the absolute difference between the original v^π(S_1) and regularized v^π_β(S_1) state value estimates and the optimal value v*(S_1); the right plot shows the variance of the estimates.) We observe that temporal regularization indeed reduces the variance and thus helps the learning process by making the value function easier to learn.
Propagation of the information: We now illustrate with a simple experiment how temporal regularization allows information to spread faster among the different states of the MDP. For this purpose, we consider a simple MDP where an agent walks randomly in two rooms (18 states) using four actions (up, down, left, right), with a discount factor γ = 0.9. The reward is r_t = 1 everywhere, and passing the door between the rooms (shown in red in Figure 5) only succeeds 50% of the time (per attempt). The episode starts at the top left and terminates when the agent reaches the bottom right corner. The sole goal is to learn the optimal value function by walking along this MDP (this is not a race toward the end). Figure 5 shows the proximity of the estimated state values to the optimal values with and without temporal regularization. The darker the state, the closer it is to its optimal value. The heatmap scale has been adjusted for each trajectory to show the difference between the two methods. We first notice that the overall propagation of information in the regularized MDP is faster than in the original one. We also observe that, when first entering the second room, bootstrapping on values coming from the first room allows the agent to learn the optimal values faster. This suggests that temporal regularization could help agents explore faster, by using their priors from previously visited states to learn the corresponding optimal values faster.
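The tabular updates behind these experiments follow Algorithm 1; a compact sketch (the environment interface and hyper-parameter values are stand-ins, not the authors' code) is:

import numpy as np

def evaluate_policy(reset, env_step, n_states, alpha=0.1, gamma=0.9,
                    beta=0.2, lam=0.5, episodes=100):
    # Tabular policy evaluation with temporal regularization (Algorithm 1).
    # env_step(s) -> (reward, next_state, done), sampled under the policy.
    v = np.zeros(n_states)
    for _ in range(episodes):
        s = reset()
        p = v[s]                 # exponentially smoothed trace of past value estimates
        done = False
        while not done:
            r, s2, done = env_step(s)
            # the target mixes the next-state estimate with the smoothed past estimate
            v[s] += alpha * (r + gamma * ((1 - beta) * v[s2] + beta * p) - v[s])
            p = (1 - lam) * v[s] + lam * p
            s = s2
    return v

Setting beta=0 recovers ordinary TD(0) policy evaluation.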
It is also possible to consider more complex and powerful regularizers. Let us study a different time series prediction model, namely exponential averaging, as defined in (10). The complexity of such models is usually controlled by hyper-parameters, allowing complex models to improve performance by better adapting to the problem. We illustrate this by comparing the performance of regularization using only the previous state with an exponential averaging of all previous states. Figure 6 shows the average error on the value estimate using past-state smoothing, exponential smoothing, and no smoothing. In this setting, exponential smoothing transfers information faster, thus enabling faster convergence to the true value. (Figure 6: Benefits of complex regularizers on the room domain.)
Noisy state representation: The next experiment illustrates a major strength of temporal regularization, namely its robustness to noise in the state representation. This situation arises naturally when the state sensors are noisy or insufficient to avoid aliasing. For this task, we consider a synthetic, one-dimensional, continuous setting. A learner evolving in this environment walks randomly along a line, with a discount factor γ = 0.95. Let x_t ∈ [0, 1] denote the position of the agent along the line at time t. The next position is x_{t+1} = x_t + a_t. The observed state is perturbed by a zero-centered Gaussian noise ε_t, such that s_t = x_t + ε_t, where ε_t ∼ N(0, σ²) are i.i.d. When the agent moves to a new position x_{t+1}, it receives a reward r_t = x_{t+1}. The episode ends after 1000 steps. In this experiment we model the value function using a linear model with a single parameter θ. We are interested in the error when estimating the optimal parameter θ* with and without temporal regularization, that is, θ^π_β and θ^π, respectively. In this case we use the TD version of temporal regularization presented at the end of Sec. 4. Figure 7 shows these errors, averaged over 1000 repetitions, for different values of the noise variance σ². (Figure 7: Absolute distance from the original (θ^π) and the regularized (θ^π_β) state value estimates to the optimal parameter θ*, as a function of the noise variance σ² in the state sensors.) We observe that as the noise variance increases, the un-regularized estimate becomes less accurate, while temporal regularization remains more robust. Using a more complex regularizer can improve performance, as shown in the previous section, but this potential gain comes at the price of a potential loss in case of model misfit. Figure 8 shows the absolute distance from the regularized state estimate (using exponential smoothing) to the optimal value while varying λ (higher λ = more smoothing). (Figure 8: Impact of the complex regularizer's parameterization (λ) on the noisy walk using exponential smoothing.) Increasing smoothing improves performance up to a point, but when λ is not well fit, the bias becomes too strong and performance declines. This is a classic bias-variance tradeoff. This experiment highlights a case where temporal regularization is effective even in the absence of smoothness in the state space (which other regularization methods would target). This is further highlighted in the next experiments.
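For the noisy-walk setting just described, the temporally regularized semi-gradient TD update (end of Sec. 4) with the one-parameter linear model v_theta(s) = theta * s can be sketched as follows; the step sizes and walk dynamics below are placeholders, not the values used in the paper.

import numpy as np

rng = np.random.default_rng(3)
gamma, beta, alpha, sigma = 0.95, 0.2, 0.01, 0.1
theta = 0.0                                   # linear value model v(s) = theta * s

x = 0.5                                       # true (hidden) position on [0, 1]
s_prev = x + rng.normal(0.0, sigma)           # noisy observation of the previous state
s = x + rng.normal(0.0, sigma)
for t in range(1000):
    x = float(np.clip(x + rng.choice([-0.05, 0.05]), 0.0, 1.0))  # random walk step
    r = x                                     # reward is the new position
    s_next = x + rng.normal(0.0, sigma)       # noisy state sensor
    # semi-gradient: the target is treated as a constant, only v(s_t) is differentiated
    target = r + gamma * ((1 - beta) * theta * s_next + beta * theta * s_prev)
    theta += alpha * (target - theta * s) * s  # d v_theta(s) / d theta = s
    s_prev, s = s, s_next

print("estimated theta:", theta)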
Deep reinforcement learning: To showcase the potential of temporal regularization in high-dimensional settings, we adapt an actor-critic method (PPO [28]) to use temporal regularization. More specifically, we incorporate temporal regularization as exponential smoothing in the target of the critic. PPO uses the generalized advantage estimator Â_t = δ_t + γλδ_{t+1} + ... + (γλ)^{T−t+1} δ_T, where δ_t = r_t + γv(s_{t+1}) − v(s_t). We regularize δ_t such that δ^β_t = r_t + γ((1 − β)v(s_{t+1}) + β \tilde{v}(s_{t−1})) − v(s_t), using exponential smoothing \tilde{v}(s_t) = (1 − λ)v(s_t) + λ\tilde{v}(s_{t−1}) as described in Eq. (10); \tilde{v} is an exponentially decaying sum over all previous state values encountered in the trajectory. We evaluate the performance in the Arcade Learning Environment [4], where we consider the following performance measure:
(regularized − baseline) / (baseline − random).   (12)
The hyper-parameters for the temporal regularization are β = λ = 0.2 with a decay of 1e−5. These are selected on 7 games and 3 training seeds. All other hyper-parameters correspond to the ones used in the PPO paper. Our implementation is based on the publicly available OpenAI codebase [8]. The previous four frames are used as the state representation [22]. For each game, 10 independent runs (10 random seeds) are performed. The results reported in Figure 9 show that adding temporal regularization improves the performance on multiple games. (Figure 9: Performance (Eq. 12) of a temporally regularized PPO on a suite of Atari games.) This suggests that the regularized optimal value function may be smoother and thus easier to learn, even when using function approximation with deep learning. Also, as shown in the previous experiments (Sec. 5.5), temporal regularization being independent of the spatial representation makes it more robust to misspecification of the state features, which is a challenge in some of these games (e.g., when assuming a full state representation based on a few previous frames).
Discussion
Noisy states: It is often assumed that the full state can be determined, while in practice the Markov property rarely holds. This is the case, for example, when using the four last frames to represent the state in Atari games [22]. A problem that arises when treating a partially observable MDP (POMDP) as fully observable is that it may no longer be possible to assume that the value function is smooth over the state space [31]. For example, the observed features may be similar for two states that are intrinsically different, leading to highly different values for states that are nearby in the observed state space. The previous experiments on noisy state representations (Sec. 5.5) and on the Atari games (Sec. 5.6) show that temporal regularization provides robustness in those cases. This makes it an appealing technique in real-world environments, where it is harder to provide the agent with the full state.
Choice of the regularization parameter: The bias induced by the regularization parameter β can be detrimental to learning in the long run. A first attempt to mitigate this bias is simply to decay the regularization as learning advances, as is done in the deep learning experiment (Sec. 5.6). Among the different avenues that could be explored, an interesting one is state-dependent regularization. For example, in the tabular case, one could consider β as a function of the number of visits to a particular state.
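The critic-target modification used in the PPO experiment can be sketched as follows (a schematic reading of the equations above, not the released implementation); the resulting deltas feed into the usual GAE recursion.

import numpy as np

def regularized_deltas(rewards, values, gamma=0.99, beta=0.2, lam_smooth=0.2):
    # TD residuals with temporally regularized targets for one trajectory.
    # values[t] is the critic estimate v(s_t); len(values) == len(rewards) + 1.
    deltas = np.empty(len(rewards))
    v_tilde = values[0]   # smoothed trace; initializing with v(s_0) is an assumption
    for t in range(len(rewards)):
        target = rewards[t] + gamma * ((1 - beta) * values[t + 1] + beta * v_tilde)
        deltas[t] = target - values[t]
        v_tilde = (1 - lam_smooth) * values[t] + lam_smooth * v_tilde
    return deltas

def gae(deltas, gamma=0.99, lam=0.95):
    # Generalized advantage estimates from the (regularized) TD residuals.
    adv = np.zeros_like(deltas)
    acc = 0.0
    for t in reversed(range(len(deltas))):
        acc = deltas[t] + gamma * lam * acc
        adv[t] = acc
    return adv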
Smoother objective: Previous work [18] looked at how the smoothness of the objective function relates to the convergence speed of RL algorithms. An analogy can be drawn with convex optimization, where the rate of convergence depends on the Lipschitz (smoothness) constant [6]. By smoothing the value temporally, we argue that the optimal value function can be made smoother. This would be beneficial in high-dimensional state spaces where the use of deep neural networks is required, and could explain the performance displayed by temporal regularization on Atari games (Sec. 5.6). The notion of temporal regularization is also behind multi-step methods [32]; it may be worthwhile to further explore how these methods are related.
Conclusion: This paper tackles the problem of regularization in RL from a new angle, namely the temporal perspective. In contrast with typical spatial regularization, where one assumes that rewards are close for nearby states in the state space, temporal regularization instead assumes that rewards are close for states visited closely in time. This approach allows information to propagate faster into states that are hard to reach, which could prove useful for exploration. The robustness of the proposed approach to noisy state representations and its interesting properties should motivate further work exploring novel ways of exploiting temporal information.
Appendix
Lemma 1. P and (1−β) P + β \tilde{P} have the same stationary distribution µ for all β ∈ [0, 1].
Proof. It is known that P^π and \tilde{P}^π have the same stationary distribution. Using this fact, we have that
µ((1−β) P^π + β \tilde{P}^π) = (1−β)µ P^π + β µ \tilde{P}^π = (1−β)µ + βµ = µ.
Property 2. For a reward vector r, the MDPs defined by the transition matrices P and (1−β) P + β \tilde{P} have the same discounted average reward ρ, where
ρ / (1−γ) = \sum_{i=0}^{∞} γ^i µ^⊤ r.   (14)
Proof. By Lemma 1, both P and (1−β) P + β \tilde{P} have the same stationary distribution, and hence the same discounted average reward.
Lemma 2. The convex combination of two row-stochastic matrices is also row stochastic.
Proof. Let e be a column vector of ones. Then
(β P^π + (1−β) \tilde{P}^π) e = β P^π e + (1−β) \tilde{P}^π e = β e + (1−β) e = e.
Algorithm 2 Temporally regularized semi-gradient TD
1: Input: policy π, β, γ
2: for all steps do
3:   Choose a ∼ π(s_t)
4:   Take action a, observe r(s_t), s_{t+1}
5:   θ = θ + α(r + γ((1 − β)v_θ(s_{t+1}) + βv_θ(s_{t−1})) − v_θ(s_t))∇v_θ(s_t)
6: end for
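Both appendix lemmas are easy to confirm numerically; a minimal check (illustration only, with arbitrary size and seed) is:

import numpy as np

rng = np.random.default_rng(4)
n, beta = 6, 0.4
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)

vals, vecs = np.linalg.eig(P.T)
mu = np.real(vecs[:, np.argmax(np.real(vals))]); mu /= mu.sum()
P_rev = (mu[None, :] * P.T) / mu[:, None]    # reversal chain (Definition 2)
M = (1 - beta) * P + beta * P_rev

assert np.allclose(M.sum(axis=1), 1.0)  # Lemma 2: convex combination is row stochastic
assert np.allclose(mu @ M, mu)          # Lemma 1: stationary distribution is preserved
print("Lemmas 1 and 2 hold for this random ergodic chain")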
4,840
1811.00740
2899178888
Traffic prediction is a fundamental and vital task in Intelligent Transportation Systems (ITS), but it is very challenging to achieve high accuracy while keeping computational complexity low, due to the spatiotemporal characteristics of traffic flow, especially under metropolitan circumstances. In this work, a new topological framework, called Linkage Network, is proposed to model road networks and represent the propagation patterns of traffic flow. Based on the Linkage Network model, a novel online predictor, named Graph Recurrent Neural Network (GRNN), is designed to learn the propagation patterns in the graph. It can simultaneously predict traffic flow for all road segments based on information gathered from the whole graph, which reduces the computational complexity significantly from O(nm) to O(n+m), while keeping high accuracy. Moreover, it can also predict the variations of traffic trends. Experiments based on real-world data demonstrate that the proposed method outperforms existing prediction methods.
There are many previous works @cite_21 that treat traffic conditions as time series and predict each segment separately through time series analysis, such as Auto-Regressive Moving Average (ARMA) based algorithms (ARIMA, SARIMA). Additionally, some research @cite_22 @cite_3 uses statistical learning methods such as Bayesian Networks (BN), SVR and GBDT, and adds extra information to assist training. @cite_7 compares these methods and shows that they perform similarly. In these approaches, the strong spatiotemporal couplings, which are particularly pronounced in metropolitan settings, lead to a dilemma between computational complexity and the sufficiency of input information.
{ "abstract": [ "Abstract Big data from floating cars supply a frequent, ubiquitous sampling of traffic conditions on the road network and provide great opportunities for enhanced short-term traffic predictions based on real-time information on the whole network. Two network-based machine learning models, a Bayesian network and a neural network, are formulated with a double star framework that reflects time and space correlation among traffic variables and because of its modular structure is suitable for an automatic implementation on large road networks. Among different mono-dimensional time-series models, a seasonal autoregressive moving average model (SARMA) is selected for comparison. The time-series model is also used in a hybrid modeling framework to provide the Bayesian network with an a priori estimation of the predicted speed, which is then corrected exploiting the information collected on other links. A large floating car data set on a sub-area of the road network of Rome is used for validation. To account for the variable accuracy of the speed estimated from floating car data, a new error indicator is introduced that relates accuracy of prediction to accuracy of measure. Validation results highlighted that the spatial architecture of the Bayesian network is advantageous in standard conditions, where a priori knowledge is more significant, while mono-dimensional time series revealed to be more valuable in the few cases of non-recurrent congestion conditions observed in the data set. The results obtained suggested introducing a supervisor framework that selects the most suitable prediction depending on the detected traffic regimes.", "", "Having access to the future traffic state information is crucial in maintaining successful intelligent transportation systems (ITS). However, predicting the future traffic state is a challenging research subject involving prediction reliability issues. Predictive performance measures, including the accuracy, efficiency, and stability, are generally considered as the most important priorities in the evaluation of prediction modules. Researchers have developed various K-nearest-neighbors-based searching algorithms that find the future state from the historical traffic patterns. Interestingly, there has not been sufficient effort made for improving the performance. For the emerging big data era, incorporating an efficient search strategy has become increasingly important since the applicability of the prediction module in ITS heavily relies on the efficiency of the searching method used. This paper develops a novel sequential search strategy for traffic state predictions. The proposed sequential strategy is found to be outperforming the conventional single-level search approach in terms of prediction measures, which are prediction accuracy, efficiency, and stability. Compared with the conventional approach, the proposed sequential method yields significantly more accurate results via internal hierarchical improvements across sublevels while maintaining excellent efficiency and stability.", "Real-time urban traffic speed estimation provides significant benefits in many real-world applications. However, existing traffic information acquisition systems only obtain coarse-grained traffic information on a small number of roads but cannot acquire fine-grained traffic information on every road. 
To address this problem, in this paper we study the traffic speed estimation problem, which, given a budget K, identifies K roads (called seeds) where the real traffic speeds on these seeds can be obtained using crowdsourcing, and infers the speeds of other roads (called non-seed roads) based on the speeds of these seeds. This problem includes two sub-problems: (1) Speed Inference - How to accurately infer the speeds of the non-seed roads; (2) Seed Selection - How to effectively select high-quality seeds. It is rather challenging to estimate the traffic speed accurately, because the traffic changes dynamically and the changes are hard to be predicted as many possible factors can affect the traffic. To address these challenges, we propose effective algorithms to judiciously select high-quality seeds and devise inference models to infer the speeds of the non-seed roads. On the one hand, we observe that roads have correlations and correlated roads have similar traffic trend: the speeds of correlated roads rise or fall compared with their historical average speed simultaneously. We utilize this property and propose a two-step model to estimate the traffic speed. The first step adopts a graphical model to infer the traffic trend and the second step devises a hierarchical linear model to estimate the traffic speed based on the traffic trend. On the other hand, we formulate the seed selection problem, prove that it is NP-hard, and propose several greedy algorithms with approximation guarantees. Experimental results on two large real datasets show that our method outperforms baselines by 2 orders of magnitude in efficiency and 40 in estimation accuracy." ], "cite_N": [ "@cite_7", "@cite_21", "@cite_22", "@cite_3" ], "mid": [ "2553942547", "", "2307933510", "2439965388" ] }
0
1811.00639
2963439073
In this work we investigate the reasons why Batch Normalization (BN) improves the generalization performance of deep networks. We argue that one major reason, distinguishing it from data-independent normalization methods, is randomness of batch statistics. This randomness appears in the parameters rather than in activations and admits an interpretation as a practical Bayesian learning. We apply this idea to other (deterministic) normalization techniques that are oblivious to the batch size. We show that their generalization performance can be improved significantly by Bayesian learning of the same form. We obtain test performance comparable to BN and, at the same time, better validation losses suitable for subsequent output uncertainty estimation through approximate Bayesian posterior.
The improved methods that we propose are also closely related to variational dropout @cite_0 , as discussed below. We give a new interpretation to variational dropout and apply it in combination with normalization techniques.
{ "abstract": [ "We investigate a local reparameterizaton technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the mini-batch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments." ], "cite_N": [ "@cite_0" ], "mid": [ "1826234144" ] }
0
1811.00692
2899108147
Acquiring a large vocabulary is an important aspect of human intelligence. One common approach for humans to populate vocabulary is to learn words during reading or listening, and then use them in writing or speaking. This ability to transfer from input to output is natural for humans, but it is difficult for machines. Humans spontaneously perform this knowledge transfer in complicated multimodal tasks, such as Visual Question Answering (VQA). In order to approach human-level Artificial Intelligence, we hope to equip machines with such an ability. Therefore, to accelerate this research, we propose a new zero-shot transfer VQA (ZST-VQA) dataset by reorganizing the existing VQA v1.0 dataset in such a way that during training, some words appear only in one module (i.e. questions) but not in the other (i.e. answers). In this setting, an intelligent model should understand and learn the concepts from one module (i.e. questions), and at test time, transfer them to the other (i.e. predict the concepts as answers). We conduct an evaluation on this new dataset using three existing state-of-the-art VQA neural models. Experimental results show a significant drop in performance on this dataset, indicating that existing methods do not address the zero-shot transfer problem. Besides, our analysis finds that this may be caused by the implicit bias learned during training.
VQA has been improving dramatically in recent years @cite_18 @cite_8 @cite_10 . We briefly introduce typical VQA methods, and refer to the surveys @cite_19 @cite_16 for more details. Depending on the use of attention, VQA methods can be roughly divided into three groups: (i) non-attention methods, (ii) visual attention methods, and (iii) visual-text co-attention methods. Non-attention methods include multimodal compact bilinear networks @cite_24 , relational networks @cite_13 , and Deeper LSTM Question+Image @cite_7 . They usually produce answers through a general network architecture with attention implicitly embedded in the model. Visual attention methods, on the contrary, utilize image-question pairs to attend to discriminative image regions to predict the answer. For example, stacked attention networks @cite_1 , ABC-CNN @cite_20 and dynamic memory networks @cite_25 all explicitly compute visual attention by combining top-down question contexts and bottom-up image cues. Visual-text co-attention methods such as hierarchical co-attention networks @cite_22 , dual attention networks @cite_29 and compositional attention networks @cite_27 build explicit attention over both images and questions. Though results are encouraging, these methods do not address the zero-shot transfer problem.
{ "abstract": [ "Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering using the bAbI suite of tasks; and complex reasoning about dynamic physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Our work shows how a deep learning architecture equipped with an RN module can implicitly discover and learn to reason about entities and their relations.", "This work aims to address the problem of image-based question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.", "", "", "Natural language questions are inherently compositional, and many are most easily answered by reasoning about their decomposition into modular sub-problems. For example, to answer “is there an equal number of balls and boxes?” we can look for balls, look for boxes, count them, and compare the results. The recently proposed Neural Module Network (NMN) architecture [3, 2] implements this approach to question answering by parsing questions into linguistic substructures and assembling question-specific deep networks from smaller modules that each solve one subtask. However, existing NMN implementations rely on brittle off-the-shelf parsers, and are restricted to the module configurations proposed by these parsers rather than learning them from data. In this paper, we propose End-to-End Module Networks (N2NMNs), which learn to reason by directly predicting instance-specific network layouts without the aid of a parser. Our model learns to generate network structures (by imitating expert demonstrations) while simultaneously learning network parameters (using the downstream task loss). Experimental results on the new CLEVR dataset targeted at compositional question answering show that N2NMNs achieve an error reduction of nearly 50 relative to state-of-theart attentional approaches, while discovering interpretable network architectures specialized for each question.", "We propose Dual Attention Networks (DANs) which jointly leverage visual and textual attention mechanisms to capture fine-grained interplay between vision and language. DANs attend to specific regions in images and words in text through multiple steps and gather essential information from both modalities. Based on this framework, we introduce two types of DANs for multimodal reasoning and matching, respectively. 
The reasoning model allows visual and textual attentions to steer each other during collaborative inference, which is useful for tasks such as Visual Question Answering (VQA). In addition, the matching model exploits the two attention mechanisms to estimate the similarity between images and sentences by focusing on their shared semantics. Our extensive experiments validate the effectiveness of DANs in combining vision and language, achieving the state-of-the-art performance on public benchmarks for VQA and image-text matching.", "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.", "Modeling textual or visual information with vector representations trained from large language or visual datasets has been successfully explored in recent years. However, tasks such as visual question answering require combining these vector representations with each other. Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations. We hypothesize that these methods are not as expressive as an outer product of the visual and textual vectors. As the outer product is typically infeasible due to its high dimensionality, we instead propose utilizing Multimodal Compact Bilinear pooling (MCB) to efficiently and expressively combine multimodal features. We extensively evaluate MCB on the visual question answering and grounding tasks. We consistently show the benefit of MCB over ablations without MCB. For visual question answering, we present an architecture which uses MCB twice, once for predicting attention over spatial features and again to combine the attended representation with the question representation. This model outperforms the state-of-the-art on the Visual7W dataset and the VQA challenge.", "Abstract Visual Question Answering (VQA) is a recent problem in computer vision and natural language processing that has garnered a large amount of interest from the deep learning, computer vision, and natural language processing communities. In VQA, an algorithm needs to answer text-based questions about images. Since the release of the first VQA dataset in 2014, additional datasets have been released and many algorithms have been proposed. In this review, we critically examine the current state of VQA in terms of problem formulation, existing datasets, evaluation metrics, and algorithms. In particular, we discuss the limitations of current datasets with regard to their ability to properly train and assess VQA algorithms. We then exhaustively review existing algorithms for VQA. Finally, we discuss possible future directions for VQA and image understanding research.", "We present the MAC network, a novel fully differentiable neural network architecture, designed to facilitate explicit and expressive reasoning. 
MAC moves away from monolithic black-box neural architectures towards a design that encourages both transparency and versatility. The model approaches problems by decomposing them into a series of attention-based reasoning steps, each performed by a novel recurrent Memory, Attention, and Composition (MAC) cell that maintains a separation between control and memory. By stringing the cells together and imposing structural constraints that regulate their interaction, MAC effectively learns to perform iterative reasoning processes that are directly inferred from the data in an end-to-end approach. We demonstrate the model's strength, robustness and interpretability on the challenging CLEVR dataset for visual reasoning, achieving a new state-of-the-art 98.9 accuracy, halving the error rate of the previous best model. More importantly, we show that the model is computationally-efficient and data-efficient, in particular requiring 5x less data than existing models to achieve strong results.", "Visual question answering (or VQA) is a new and exciting problem that combines natural language processing and computer vision techniques. We present a survey of the various datasets and models that have been used to tackle this task. The first part of the survey details the various datasets for VQA and compares them along some common factors. The second part of this survey details the different approaches for VQA, classified into four types: non-deep learning models, deep learning models without attention, deep learning models with attention, and other models which do not fit into the first three. Finally, we compare the performances of these approaches and provide some directions for future work.", "Existing methods for visual reasoning attempt to directly map inputs to outputs using black-box architectures without explicitly modeling the underlying reasoning processes. As a result, these black-box models often learn to exploit biases in the data rather than learning to perform visual reasoning. Inspired by module networks, this paper proposes a model for visual reasoning that consists of a program generator that constructs an explicit representation of the reasoning process to be performed, and an execution engine that executes the resulting program to produce an answer. Both the program generator and the execution engine are implemented by neural networks, and are trained using a combination of backpropagation and REINFORCE. Using the CLEVR benchmark for visual reasoning, we show that our model significantly outperforms strong baselines and generalizes better in a variety of settings.", "Neural network architectures with memory and attention mechanisms exhibit certain reasoning capabilities required for question answering. One such architecture, the dynamic memory network (DMN), obtained high accuracy on a variety of language tasks. However, it was not shown whether the architecture achieves strong results for question answering when supporting facts are not marked during training or whether it could be applied to other modalities such as images. Based on an analysis of the DMN, we propose several improvements to its memory and input modules. Together with these changes we introduce a novel input module for images in order to be able to answer visual questions. 
Our new DMN+ model improves the state of the art on both the Visual Question Answering dataset and the -10k text question-answering dataset without supporting fact supervision.", "We propose a novel attention based deep learning architecture for visual question answering task (VQA). Given an image and an image related natural language question, VQA generates the natural language answer for the question. Generating the correct answers requires the model's attention to focus on the regions corresponding to the question, because different questions inquire about the attributes of different image regions. We introduce an attention based configurable convolutional neural network (ABC-CNN) to learn such question-guided attention. ABC-CNN determines an attention map for an image-question pair by convolving the image feature map with configurable convolutional kernels derived from the question's semantics. We evaluate the ABC-CNN architecture on three benchmark VQA datasets: Toronto COCO-QA, DAQUAR, and VQA dataset. ABC-CNN model achieves significant improvements over state-of-the-art methods on these datasets. The question-guided attention generated by ABC-CNN is also shown to reflect the regions that are highly relevant to the questions." ], "cite_N": [ "@cite_13", "@cite_18", "@cite_22", "@cite_7", "@cite_8", "@cite_29", "@cite_1", "@cite_24", "@cite_19", "@cite_27", "@cite_16", "@cite_10", "@cite_25", "@cite_20" ], "mid": [ "2624614404", "2949218037", "", "", "2963224792", "2951690276", "2171810632", "2412400526", "2529436507", "2786209943", "2612257250", "2613404084", "2293453011", "2174492417" ] }
Zero-Shot Transfer VQA Dataset
With the fast development of deep learning [15,27,9], Artificial Intelligence has achieved human-level performance in many domains [26,21]. However, current AIs are only designed for specific tasks and are still far from human general intelligence. One way to build a human-like machine is to make it learn as humans do, so it is important to understand how humans learn. One characteristic of human learning is the ability to transfer from input to output. For example, to populate vocabulary, people learn words during reading and listening, and use them in writing and speaking. This transferability from input to output is natural for humans, but it is difficult for machines. Humans spontaneously perform this transfer with language compositionality [7] in complicated multimodal tasks, such as VQA. For example, in Fig. 1a, in order to learn from the question-answer pair ["What fruit is wearing sunglasses?", "bananas"], one needs to understand the concept of "sunglasses". Then for another question ["What is on the man's face?"], humans can provide the correct answer "sunglasses" even though they have never seen it as an answer during training. Similarly, humans can also transfer concepts learned from answers to questions. Although VQA has achieved high performance on standard benchmarks [2,6,12,14,8], little investigation has been made into abilities for zero-shot transfer learning.
Figure 1: Examples of ZSA and ZSQ tasks. Transferring learned words from questions to answers or from answers to questions is required in these tasks. (a) Example of the zero-shot answer task (ZSA): a zero-shot word "sunglasses" appears in questions but not in answers during training, and it appears in answers at test time. (b) Example of the zero-shot question task (ZSQ): a zero-shot word "chair" appears in answers but not in questions during training, and it appears in questions at test time.
To facilitate this study, we create a new dataset, named zero-shot transfer VQA (ZST-VQA), by rearranging the original VQA v1.0 dataset [2]. The dataset includes two tasks: (1) the zero-shot answer task (ZSA), as shown in Fig. 1a, and (2) the zero-shot question task (ZSQ), as shown in Fig. 1b. The dataset is also helpful for detecting whether a model is biased toward remembering superficial relations between input and output, because such relations cannot solve zero-shot transfer learning problems. For example, when a question asks what is on the ground, the answer is likely to be snow, because this is an interesting and natural situation. We evaluate three state-of-the-art VQA models [24,32,19] on these two newly proposed tasks. Experiments show that the testing accuracy decreases significantly on the ZSQ task. What is worse, the testing accuracy drops to zero on the ZSA task. Both suggest that current models do not have the zero-shot transfer ability. We make further analysis and find that the training data induce biases in the networks, causing a significant drop in performance. In summary, our contributions are threefold. (1) We propose the problem of zero-shot transfer learning, which is an important skill for human-level Artificial Intelligence. (2) We build the ZST-VQA dataset for this problem, and experiment with three existing methods. We show that these methods do not work well on this problem, and we analyze the reasons. (3) This dataset is also useful to detect whether a model learns to simply remember superficial input-output relations in VQA tasks.
Dataset Construction
We consider two scenarios: (1) Zero-shot answer (ZSA), where a set of selected words is contained in training questions and testing answers, but not in training answers. In this case, we expect a model to transfer the concept of words learned from questions to answers for correct outputs at test time. (2) Zero-shot question (ZSQ), where another set of words is contained in training answers and testing questions, but not in training questions. ZSA and ZSQ are similar tasks in opposite directions. To construct the dataset, we first find the list of words shared between questions and answers in the original VQA dataset (including both training and testing) and filter out stop words. We then uniformly sample two mutually exclusive sets of words from this list as zero-shot words for the ZSA and ZSQ tasks. Note that this is different from [29], where only words occurring fewer than 20 times are selected. This is because of two main concerns: (1) Zero-shot words may appear when there are not enough training data, so that some words are not observed. In this case, the frequency of the zero-shot words in the test set may be small. (2) Zero-shot words may also appear when there is concept drift (i.e., new words appear over time). In this case, the frequency of zero-shot words in the test set can be high. In order to cover both cases, we uniformly sample zero-shot words from the words shared by all questions and answers, with all possible frequencies. We then create the ZSA test set by extracting samples that contain the corresponding zero-shot words in answers. Similarly, we create the ZSQ test set with the corresponding zero-shot words in questions. The remaining training samples are used as the normal train set, and the remaining test samples as the normal test set. To avoid the normal training set sharing images with any test set, and to keep the normal test set generated by the same process as the normal training set, we discard training samples if they share images with the ZSA or ZSQ test sets. Algorithm 1 summarizes our dataset creation process and Table 1 shows the basic statistics. Due to the page limit, please see the appendix for more dataset details.
Experiments
Baselines: We evaluate three typical types of VQA algorithms on our dataset: a non-attention method (LSTM Q+I) [18], a visual attention method (SAN) [32], and a visual-text co-attention method (HieCoAtt) [19].
Table 2: Test accuracies. Left column: normal test set. Middle three columns: original models (ZSQ / ZSA / overall). Right three columns: variants with a joint vocabulary and shared embeddings (ZSQ / ZSA / overall).
LSTM Q+I [18]: 54.23% | 47.72%, 0.00%, 40.03% | 46.43%, 0.00%, 40.14%
SAN [32]: 55.86% | 48.70%, 0.00%, 41.63% | 46.35%, 0.00%, 40.54%
HieCoAtt [19]: 57.09% | 41.70%, 0.00%, 35.46% | 40.10%, 0.00%, 33.95%
Figure 2: (a) Average scores: implicit bias is learned and preserved in the zero-shot test task. (b) Implicit bias for the sample in Fig. 1a: "sunglasses" is predicted as "frisbee".
SAN attends to discriminative image regions guided by the question; it repeats this attention process twice, and a fully-connected layer is used for prediction. Hierarchical Question-Image Co-attention Networks (HieCoAtt) [19] applies attention to both images and questions. Both attentions are computed hierarchically, the features at different hierarchical levels are combined, and the result is passed to a fully-connected layer for prediction.
Results: We use the publicly available original implementation of each algorithm. The results in Table 2 (middle) show 0% accuracy on ZSA and significantly lower results than on the normal test set for ZSQ. There are some obvious reasons for the performance drop. In the baseline methods, question and answer vocabularies are constructed from the training dataset.
This means the answer set does not contain zero-shot answers and the question vocabulary does not contain zero-shot question words. At test time, zero-shot answers will not be predicted, because the models do not have the corresponding classes, and all zero-shot question words are treated as unknown words. For the ZSA problem, even if zero-shot classes were available during training, these methods use a fully-connected layer as the last layer, so the bias in this layer for the zero-shot classes would be very low, because it always receives negative gradients. This low bias reduces the scores of the zero-shot classes during testing and prevents zero-shot answer prediction. Another problem is that questions and answers do not share information, so a word is treated as two different tokens in questions and answers and therefore cannot be transferred. To avoid these problems, we define a joint vocabulary for both questions and answers, share the word embedding and the transposed weights of the last fully-connected layer, and remove the bias from the last layer. The results (Table 2, right) show that ZSA still has zero accuracy, and ZSQ is still significantly lower than the normal test, which is consistent with the previous observation.
Discussions
The above results indicate that there should be other reasons for the performance drop. In the ZSA task, one reason might be that the intermediate network learns an implicit bias and always maps test inputs to a manifold that appeared in training. We use SAN for this analysis. In Fig. 2a, we plot normalized histograms of binned average scores for answers. The scores are the last representation before the final Softmax layer, so they are monotonic in the output probabilities. We consider two test datasets, for the normal and ZSA tasks, and two answer sets, for normal and zero-shot answers, giving four histograms altogether. On the normal test, the curve of the normal answer set (green) lies to the right of that of the zero-shot answer set (purple). This indicates that the model has learned an implicit bias to assign higher scores to normal answers. On the ZSA test, the curve of the normal answer set (blue) also lies to the right of that of the zero-shot answer set (red), and the relation (blue vs. red) is very similar to that in the normal test (green vs. purple). This indicates that the implicit bias is preserved in the ZSA task, so that zero-shot answers have low scores and the model does not predict them as answers. We also plot histograms in Fig. 2b for the sample in Fig. 1a with the question ["What is on the man's face?"], where "sunglasses" (a zero-shot answer) is predicted as "frisbee" (a normal answer). The plot shows that the implicit bias also exists for a single sample. On ZSQ, the performance is also worse than on the normal test, meaning the words are not transferred from answers to questions very well. However, unlike the ZSA task, the accuracy is much higher than zero or random prediction. This might be because questions contain many words, so that zero-shot words do not influence the prediction too much. In both the ZSA and ZSQ tasks, one key problem might be compositionality: decoupling words from the way they are processed. This is because transfer alone does not solve zero-shot problems; the model also needs the ability to process transferred zero-shot words in the same manner as normal words. This may be achieved with special network architectures, such as attention and memory, appropriate losses and regularization, or other types of innovations.
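The joint-vocabulary extension described above (shared word embedding, transposed embedding as the output weights, no output bias) can be sketched in PyTorch as follows; the fusion encoder here is a crude stand-in for the actual VQA models, and all sizes are placeholders.

import torch
import torch.nn as nn

class TiedVQAHead(nn.Module):
    # Answer scores are computed against the shared word embedding, with no bias,
    # so question words and answer classes live in one joint vocabulary.
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)   # shared word embedding
        self.fuse = nn.Linear(2 * dim, dim)          # stand-in multimodal encoder

    def forward(self, q_word_ids, image_feat):
        q_feat = self.embed(q_word_ids).mean(dim=1)  # crude question encoding
        joint = torch.tanh(self.fuse(torch.cat([q_feat, image_feat], dim=-1)))
        # last layer = transposed embedding weights, bias removed
        return joint @ self.embed.weight.t()

model = TiedVQAHead(vocab_size=1000, dim=64)
scores = model(torch.randint(0, 1000, (2, 8)), torch.randn(2, 64))
print(scores.shape)   # (2, 1000): one score per vocabulary word

Even with this weight sharing, the experiments above show that zero-shot answers still receive systematically low scores, pointing to the implicit bias rather than the vocabulary split alone.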
We evaluated existing VQA algorithms on the new dataset and found that neither the algorithms nor their extensions address the zero-shot transfer problem. Our analysis suggests that implicit bias may be one reason for the low performance, and we discussed possible solutions based on compositionality. We hope this dataset will encourage research on zero-shot transfer learning in the VQA community. A Frequent Zero-Shot Words Zero-shot words are used to select samples for the ZSA and ZSQ datasets. Some of these words appear often and some do not. To understand their effect, we list the frequent zero-shot words that appear in answers in the ZSA dataset (Table 3) and in questions in the ZSQ dataset (Table 4). The lists of all zero-shot words are publicly available as part of the dataset. These zero-shot words have already been filtered against the stop-word list from this address.1 B Data Movement from Original to New Datasets To better understand the connection between the original VQA dataset and the new dataset, we summarize the movement of samples between the two (Table 5). It shows that, for both questions and images, there is no intersection between the original train set and the normal test set, or between the original test set and the normal train set. Also, the ZSA and ZSQ sets contain the same ratio of samples from the original train and test sets.
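The shared-vocabulary extension evaluated above (a joint vocabulary, a word embedding tied to the transposed output weights, and no output bias) could be sketched in PyTorch roughly as follows; the placeholder fusion step and the hidden size are assumptions, not the exact baseline architectures.

```python
import torch
import torch.nn as nn

class TiedVQAHead(nn.Module):
    """Joint question/answer vocabulary with a bias-free output layer whose
    weights are the transpose of the shared word-embedding matrix."""
    def __init__(self, vocab_size, dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)  # shared by questions and answers

    def encode_question(self, token_ids):
        # Placeholder fusion: average the word embeddings (the real baselines
        # use an LSTM or attention over question and image features).
        return self.embed(token_ids).mean(dim=1)

    def answer_scores(self, fused):
        # Tied, bias-free classifier: W_out = E^T, no bias term.
        return fused @ self.embed.weight.t()

head = TiedVQAHead(vocab_size=10000)
scores = head.answer_scores(head.encode_question(torch.randint(0, 10000, (4, 12))))
```

Even with this extension, the experiments above show that ZSA accuracy stays at zero, so weight tying alone does not solve the transfer problem.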
1,898
1811.00692
2899108147
Acquiring a large vocabulary is an important aspect of human intelligence. One common approach for humans to populate vocabulary is to learn words during reading or listening, and then use them in writing or speaking. This ability to transfer from input to output is natural for humans, but it is difficult for machines. Humans spontaneously perform this knowledge transfer in complicated multimodal tasks, such as Visual Question Answering (VQA). In order to approach human-level Artificial Intelligence, we hope to equip machines with such ability. Therefore, to accelerate this research, we propose a new zero-shot transfer VQA (ZST-VQA) dataset by reorganizing the existing VQA v1.0 dataset in the way that during training, some words appear only in one module (i.e. questions) but not in the other (i.e. answers). In this setting, an intelligent model should understand and learn the concepts from one module (i.e. questions), and at test time, transfer them to the other (i.e. predict the concepts as answers). We conduct evaluation on this new dataset using three existing state-of-the-art VQA neural models. Experimental results show a significant drop in performance on this dataset, indicating existing methods do not address the zero-shot transfer problem. Besides, our analysis finds that this may be caused by the implicit bias learned during training.
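The implicit-bias analysis mentioned at the end of this abstract amounts to comparing the distributions of pre-softmax scores assigned to normal versus zero-shot answers; a minimal matplotlib sketch, where `scores` and the two index sets are assumed to come from a trained model:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_score_histograms(scores, normal_idx, zeroshot_idx, bins=50):
    """scores: (num_samples, num_answers) pre-softmax scores, monotonic in the
    output probabilities; normal_idx / zeroshot_idx index the answer columns."""
    avg = scores.mean(axis=0)  # average score per candidate answer
    plt.hist(avg[normal_idx], bins=bins, density=True, alpha=0.5, label="normal answers")
    plt.hist(avg[zeroshot_idx], bins=bins, density=True, alpha=0.5, label="zero-shot answers")
    plt.xlabel("average pre-softmax score")
    plt.legend()
    plt.show()
```

If the normal-answer histogram sits to the right of the zero-shot one on both the normal and ZSA test sets, the implicit bias has been learned and carried over.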
Zero-shot learning was first proposed by @cite_28 and soon became an interesting research problem in the cross-modal domain spanning natural language processing and computer vision @cite_15 , where no finite set of samples can cover the diversity of the real world and all datasets naturally follow a heavy-tail distribution with new classes appearing frequently after training @cite_15 @cite_23 . Usually, zero-shot learning requires transferring knowledge from other sources, such as attributes @cite_9 , word embeddings @cite_5 , or the relationships to other categories @cite_21 , in order to predict the novel class labels. In our dataset, novel words are embedded inside one module (i.e. questions) and we test the model's zero-shot generalization ability to the other module (i.e. answers).
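The word-embedding route mentioned here (@cite_5) is commonly implemented by projecting an input feature into the embedding space and scoring it against class word embeddings, so classes without training examples can still be ranked; a minimal sketch with illustrative shapes:

```python
import numpy as np

def zero_shot_scores(image_feat, class_embeddings, W):
    """image_feat: (d,) visual feature; W: (d, e) learned projection;
    class_embeddings: (num_classes, e) word embeddings, which may include
    classes never seen during training."""
    projected = image_feat @ W            # map the image into embedding space
    return class_embeddings @ projected   # dot-product compatibility per class

# A novel class is predicted simply by an argmax over all class embeddings:
# pred = zero_shot_scores(f, E, W).argmax()
```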
{ "abstract": [ "We consider the problem of zero-shot learning, where the goal is to learn a classifier f : X → Y that must predict novel values of Y that were omitted from the training set. To achieve this, we define the notion of a semantic output code classifier (SOC) which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes. We provide a formalism for this type of classifier and study its theoretical properties in a PAC framework, showing conditions under which the classifier can accurately predict novel classes. As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words.", "", "We consider the problem of zero-shot recognition: learning a visual classifier for a category with zero training examples, just using the word embedding of the category and its relationship to other categories, which visual data are provided. The key to dealing with the unfamiliar or novel category is to transfer knowledge obtained from familiar classes to describe the unfamiliar class. In this paper, we build upon the recently introduced Graph Convolutional Network (GCN) and propose an approach that uses both semantic embeddings and the categorical relationships to predict the classifiers. Given a learned knowledge graph (KG), our approach takes as input semantic embeddings for each node (representing visual category). After a series of graph convolutions, we predict the visual classifier for each category. During training, the visual classifiers for a few categories are given to learn the GCN parameters. At test time, these filters are used to predict the visual classifiers of unseen categories. We show that our approach is robust to noise in the KG. More importantly, our approach provides significant improvement in performance compared to the current state-of-the-art results (from 2 3 on some metrics to whopping 20 on a few).", "Humans can understand and produce new utterances effortlessly, thanks to their compositional skills. Once a person learns the meaning of a new verb \"dax,\" he or she can immediately understand the meaning of \"dax twice\" or \"sing and dax.\" In this paper, we introduce the SCAN domain, consisting of a set of simple compositional navigation commands paired with the corresponding action sequences. We then test the zero-shot generalization capabilities of a variety of recurrent neural networks (RNNs) trained on SCAN with sequence-to-sequence methods. We find that RNNs can make successful zero-shot generalizations when the differences between training and test commands are small, so that they can apply \"mix-and-match\" strategies to solve the task. However, when generalization requires systematic compositional skills (as in the \"dax\" example above), RNNs fail spectacularly. We conclude with a proof-of-concept experiment in neural machine translation, suggesting that lack of systematicity might be partially responsible for neural networks' notorious training data thirst.", "Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. 
One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model.", "This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images." ], "cite_N": [ "@cite_28", "@cite_9", "@cite_21", "@cite_23", "@cite_5", "@cite_15" ], "mid": [ "2150295085", "", "2790040974", "2789352267", "2123024445", "2950276680" ] }
Zero-Shot Transfer VQA Dataset
With the fast development of deep learning [15,27,9], Artificial Intelligence has reached human-level performance in many domains [26,21]. However, current AI systems are designed only for specific tasks and remain far from human general intelligence. One way to build a human-like machine is to make it learn as humans do, so it is important to understand how humans learn. One characteristic of human learning is the ability to transfer from input to output. For example, to populate their vocabulary, people learn words while reading and listening, and use them in writing and speaking. This transferability from input to output is natural for humans, but it is difficult for machines. Humans spontaneously perform this transfer with language compositionality [7] in complicated multimodal tasks, such as VQA. For example, in Fig. 1a, in order to learn from the question-answer pair ["What fruit is wearing sunglasses?", "bananas"], one needs to understand the concept of "sunglasses". Then, for another question ["What is on the man's face?"], humans can provide the correct answer "sunglasses" even though they have never seen it as an answer during training. Similarly, humans can also transfer concepts learned from answers to questions. Although VQA has achieved high performance on standard benchmarks [2,6,12,14,8], little investigation has been made into zero-shot transfer learning. (a) Example of the zero-shot answer task (ZSA). A zero-shot word "sunglasses" appears in a question but not in any answer during training, and it appears in an answer at test time. (b) Example of the zero-shot question task (ZSQ). A zero-shot word "chair" appears in an answer but not in any question during training, and it appears in a question at test time. Figure 1: Examples of the ZSA and ZSQ tasks. Transferring learned words from questions to answers, or from answers to questions, is required in these tasks. To facilitate this study, we create a new dataset, named zero-shot transfer VQA (ZST-VQA), by rearranging the original VQA v1.0 dataset [2]. The dataset includes two tasks: (1) the zero-shot answer task (ZSA), as shown in Fig. 1a, and (2) the zero-shot question task (ZSQ), as shown in Fig. 1b. The dataset is also helpful for detecting whether a model is biased toward remembering superficial relations between input and output, because such relations cannot solve zero-shot transfer learning problems. For example, when a question asks what is on the ground, the answer is likely to be snow, because this is an interesting and natural situation. We evaluate three state-of-the-art VQA models [24,32,19] on these two newly proposed tasks. Experiments show that the testing accuracy decreases significantly on the ZSQ task. Even worse, the testing accuracy drops to zero on the ZSA task. Both results suggest that current models do not have the zero-shot transfer ability. Further analysis shows that the training data shape the biases in the networks, causing the significant drop in performance. In summary, our contributions are threefold. (1) We propose the problem of zero-shot transfer learning, which is an important skill for human-level Artificial Intelligence. (2) We build the ZST-VQA dataset for this problem and experiment with three existing methods. We show that these methods do not work well on this problem, and we analyze the reasons. (3) This dataset is also useful for detecting whether a model learns to simply remember superficial input-output relations in VQA tasks.
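The reorganization behind contribution (2) (detailed in the Dataset Construction section above) could be sketched as follows; the sample format, `n_words`, and the stop-word list are assumptions for illustration, not the released creation script.

```python
import random

def make_zst_splits(train, test, stop_words, n_words=100, seed=0):
    """Sketch of the ZST-VQA reorganization of VQA v1.0."""
    def words_of(text):
        return set(text.lower().split())

    all_samples = train + test
    shared = (set().union(*(words_of(s["question"]) for s in all_samples))
              & set().union(*(words_of(s["answer"]) for s in all_samples)))
    shared -= set(stop_words)

    rng = random.Random(seed)
    pool = rng.sample(sorted(shared), 2 * n_words)
    zsa_words, zsq_words = set(pool[:n_words]), set(pool[n_words:])  # mutually exclusive

    # Zero-shot test sets: samples whose answers (ZSA) or questions (ZSQ)
    # contain the corresponding zero-shot words.
    zsa_test = [s for s in all_samples if words_of(s["answer"]) & zsa_words]
    zsq_test = [s for s in all_samples if words_of(s["question"]) & zsq_words]
    held_images = {s["image_id"] for s in zsa_test + zsq_test}

    def is_normal(s):
        return (not words_of(s["answer"]) & zsa_words         # ZSA words may remain in questions
                and not words_of(s["question"]) & zsq_words)  # ZSQ words may remain in answers

    normal_train = [s for s in train if is_normal(s) and s["image_id"] not in held_images]
    normal_test = [s for s in test if is_normal(s)]
    return normal_train, normal_test, zsa_test, zsq_test
```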
There are many VQA datasets, such as @cite_26 @cite_17 for compositionality and zero-shot VQA datasets @cite_31 @cite_14 that use extra resources; see the surveys @cite_19 @cite_16 for more. Different from them, our dataset focuses on the transfer between input and output that underlies natural human learning.
{ "abstract": [ "We study the problem of answering questions about images in the harder setting, where the test questions and corresponding images contain novel objects, which were not queried about in the training data. Such setting is inevitable in real world-owing to the heavy tailed distribution of the visual categories, there would be some objects which would not be annotated in the train set. We show that the performance of two popular existing methods drop significantly (up to 28 ) when evaluated on novel objects cf. known objects. We propose methods which use large existing external corpora of (i) unlabeled text, i.e. books, and (ii) images tagged with classes, to achieve novel object based visual question answering. We do systematic empirical studies, for both an oracle case where the novel objects are known textually, as well as a fully automatic case without any explicit knowledge of the novel objects, but with the minimal assumption that the novel objects are semantically related to the existing objects in training. The proposed methods for novel object based visual question answering are modular and can potentially be used with many visual question answering architectures. We show consistent improvements with the two popular architectures and give qualitative analysis of the cases where the model does well and of those where it fails to bring improvements.", "When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover short-comings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.", "Abstract Visual Question Answering (VQA) is a recent problem in computer vision and natural language processing that has garnered a large amount of interest from the deep learning, computer vision, and natural language processing communities. In VQA, an algorithm needs to answer text-based questions about images. Since the release of the first VQA dataset in 2014, additional datasets have been released and many algorithms have been proposed. In this review, we critically examine the current state of VQA in terms of problem formulation, existing datasets, evaluation metrics, and algorithms. In particular, we discuss the limitations of current datasets with regard to their ability to properly train and assess VQA algorithms. We then exhaustively review existing algorithms for VQA. Finally, we discuss possible future directions for VQA and image understanding research.", "Part of the appeal of Visual Question Answering (VQA) is its promise to answer new questions about previously unseen images. Most current methods demand training questions that illustrate every possible concept, and will therefore never achieve this capability, since the volume of required training data would be prohibitive. Answering general questions about images requires methods capable of Zero-Shot VQA, that is, methods able to answer questions beyond the scope of the training questions. 
We propose a new evaluation protocol for VQA methods which measures their ability to perform Zero-Shot VQA, and in doing so highlights significant practical deficiencies of current approaches, some of which are masked by the biases in current datasets. We propose and evaluate several strategies for achieving Zero-Shot VQA, including methods based on pretrained word embeddings, object classifiers with semantic embeddings, and test-time retrieval of example images. Our extensive experiments are intended to serve as baselines for Zero-Shot VQA, and they also achieve state-of-the-art performance in the standard VQA evaluation setting.", "Visual question answering (or VQA) is a new and exciting problem that combines natural language processing and computer vision techniques. We present a survey of the various datasets and models that have been used to tackle this task. The first part of the survey details the various datasets for VQA and compares them along some common factors. The second part of this survey details the different approaches for VQA, classified into four types: non-deep learning models, deep learning models without attention, deep learning models with attention, and other models which do not fit into the first three. Finally, we compare the performances of these approaches and provide some directions for future work.", "Visual Question Answering (VQA) has received a lot of attention over the past couple of years. A number of deep learning models have been proposed for this task. However, it has been shown that these models are heavily driven by superficial correlations in the training data and lack compositionality -- the ability to answer questions about unseen compositions of seen concepts. This compositionality is desirable and central to intelligence. In this paper, we propose a new setting for Visual Question Answering where the test question-answer pairs are compositionally novel compared to training question-answer pairs. To facilitate developing models under this setting, we present a new compositional split of the VQA v1.0 dataset, which we call Compositional VQA (C-VQA). We analyze the distribution of questions and answers in the C-VQA splits. Finally, we evaluate several existing VQA models under this new setting and show that the performances of these models degrade by a significant amount compared to the original VQA setting." ], "cite_N": [ "@cite_14", "@cite_26", "@cite_19", "@cite_31", "@cite_16", "@cite_17" ], "mid": [ "2952859532", "2561715562", "2529436507", "2555661914", "2612257250", "2608109911" ] }
1811.00651
2899205526
Large-scale cloud networks consist of distributed networking and computing elements that process critical information, so security is a key requirement for any environment. Unfortunately, assessing the security state of such networks is a challenging task, and the tools used in the past by security experts, such as packet filtering, firewalls, Intrusion Detection Systems (IDS), etc., provide only reactive security mechanisms. In this paper, we introduce a Moving Target Defense (MTD) based proactive security framework for monitoring attacks which lets us identify and reason about multi-stage attacks that target software vulnerabilities present in a cloud network. We formulate the multi-stage attack scenario as a two-player zero-sum Markov Game (between the attacker and the network administrator) on attack graphs. The rewards and transition probabilities are obtained by leveraging the expert knowledge present in the Common Vulnerability Scoring System (CVSS). Our framework identifies an attacker's optimal policy and places countermeasures to ensure that this attack policy is always detected, thus forcing the attacker to use a sub-optimal policy with higher cost.
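The two-player zero-sum Markov game described here is typically solved with Shapley-style value iteration, where each state's stage game is a matrix game solved by linear programming. A self-contained sketch follows; the reward matrices `R` and transitions `P` (which the paper derives from CVSS scores) are assumed inputs.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value of a zero-sum matrix game for the row maximizer (the attacker):
    max v  s.t.  (A^T x)_j >= v for all columns j,  sum(x) = 1,  x >= 0."""
    n, m = A.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                   # linprog minimizes, so minimize -v
    A_ub = np.hstack([-A.T, np.ones((m, 1))])      # v - (A^T x)_j <= 0
    b_ub = np.zeros(m)
    A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)])
    return res.x[-1]

def shapley_iteration(R, P, gamma=0.9, iters=200):
    """R[s]: (n_att, n_def) attacker payoff matrix in state s (e.g. CVSS-based);
    P[s]: (n_att, n_def, n_states) transition probabilities."""
    S = len(R)
    V = np.zeros(S)
    for _ in range(iters):
        V = np.array([matrix_game_value(R[s] + gamma * P[s] @ V) for s in range(S)])
    return V
```

Because the game is zero-sum, the single value function V simultaneously bounds the attacker's best achievable payoff and the defender's worst-case loss.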
Sheyner et al. @cite_9 present a formal analysis of attacks on a network, along with a cost-benefit analysis and security measures to defend against them. In @cite_13 , Chowdhary et al. provide a polynomial-time method for attack graph construction and network reconfiguration using a parallel computing approach, making it possible to leverage this information for strategic reasoning about attacks in large-scale systems.
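An attack graph of the kind analyzed in @cite_9 can be viewed as a directed graph over attacker states with one edge per exploitable vulnerability; a toy reachability sketch, where the state and edge encodings are assumptions:

```python
from collections import deque

def reachable_states(edges, start):
    """edges: {state: [(next_state, vulnerability_id), ...]}. A BFS returns
    every state an attacker can reach from `start`, i.e. the attack graph's nodes."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for nxt, _vuln in edges.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# e.g. reachable_states({"web": [("db", "CVE-X")], "db": [("admin", "CVE-Y")]}, "web")
```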
{ "abstract": [ "An attack graph is a succinct representation of all paths through a system that end in a state where an intruder has successfully achieved his goal. Today Red Teams determine the vulnerability of networked systems by drawing gigantic attack graphs by hand. Constructing attack graphs by hand is tedious, error-prone, and impractical for large systems. By viewing an attack as a violation of a safety property, we can use off-the-shelf model checking technology to produce attack graphs automatically: a successful path from the intruder's viewpoint is a counterexample produced by the model checker In this paper we present an algorithm for generating attack graphs using model checking as a subroutine. Security analysts use attack graphs for detection, defense and forensics. In this paper we present a minimization analysis technique that allows analysts to decide which minimal set of security measures would guarantee the safety of the system. We provide a formal characterization of this problem: we prove that it is polynomially equivalent to the minimum hitting set problem and we present a greedy algorithm with provable bounds. We also present a reliability analysis technique that allows analysts to perform a simple cost-benefit trade-off depending on the likelihoods of attacks. By interpreting attack graphs as Markov Decision Processes we can use the value iteration algorithm to compute the probabilities of intruder success for each attack the graph.", "Software-Defined Networking (SDN) has emerged as a framework for centralized command and control in cloud data centric environments. SDN separates data and control plane, which provides network administrator better visibility and policy enforcement capability compared to traditional networks. The SDN controller can assess reachability information of all the hosts in a network. There are many critical assets in a network which can be compromised by a malicious attacker through a multistage attack. Thus we make use of centralized controller to assess the security state of the entire network and pro-actively perform attack analysis and countermeasure selection. This approach is also known as Moving Target Defense (MTD). We use the SDN controller to assess the attack scenarios through scalable Attack Graphs (AG) and select necessary countermeasures to perform network reconfiguration to counter network attacks. Moreover, our framework has a comprehensive conflict detection and resolution module that ensures that no two flow rules in a distributed SDN-based cloud environment have conflicts at any layer; thereby assuring consistent conflict-free policy implementation and preventing information leakage." ], "cite_N": [ "@cite_9", "@cite_13" ], "mid": [ "2157554212", "2533393212" ] }
Adaptive MTD Security using Markov Game Modeling
0
In the context of cloud systems, Peng et al. @cite_7 discuss a risk-aware MTD strategy in which they model the attack surface as a non-decreasing probability density function and then estimate the risk of migrating a VM to a replacement node using probabilistic inference. Kampanakis et al. @cite_8 highlight obfuscation as a possible MTD strategy for dealing with attacks such as OS fingerprinting and network reconnaissance in the SDN environment. Furthermore, they point out that such random mutations may disrupt active services, so this trade-off requires a cost-benefit analysis.
{ "abstract": [ "Due to deep automation, the configuration of many Cloud infrastructures is static and homogeneous, which, while easing administration, significantly decreases a potential attacker's uncertainty on a deployed Cloud-based service and hence increases the chance of the service being compromised. Moving-target defense (MTD) is a promising solution to the configuration staticity and homogeneity problem. This paper presents our findings on whether and to what extent MTD is effective in protecting a Cloud-based service with heterogeneous and dynamic attack surfaces - these attributes, which match the reality of current Cloud infrastructures, have not been investigated together in previous works on MTD in general network settings. We 1) formulate a Cloud-based service security model that incorporates Cloud-specific features such as VM migration snapshotting and the diversity compatibility of migration, 2) consider the accumulative effect of the attacker's intelligence on the target service's attack surface, 3) model the heterogeneity and dynamics of the service's attack surfaces, as defined by the (dynamic) probability of the service being compromised, as an S-shaped generalized logistic function, and 4) propose a probabilistic MTD service deployment strategy that exploits the dynamics and heterogeneity of attack surfaces for protecting the service against attackers. Through simulation, we identify the conditions and extent of the proposed MTD strategy's effectiveness in protecting Cloud-based services. Namely, 1) MTD is more effective when the service deployment is dense in the replacement pool and or when the attack is strong, and 2) attack-surface heterogeneity-and-dynamics awareness helps in improving MTD's effectiveness.", "Software-Defined Networking (SDN) allows network capabilities and services to be managed through a central control point. Moving Target Defense (MTD) on the other hand, introduces a constantly adapting environment in order to delay or prevent attacks on a system. MTD is a use case where SDN can be leveraged in order to provide attack surface obfuscation. In this paper, we investigate how SDN can be used in some network-based MTD techniques. We first describe the advantages and disadvantages of these techniques, the potential countermeasures attackers could take to circumvent them, and the overhead of implementing MTD using SDN. Subsequently, we study the performance of the SDN-based MTD methods using Cisco's One Platform Kit and we show that they significantly increase the attacker's overheads." ], "cite_N": [ "@cite_7", "@cite_8" ], "mid": [ "1995341513", "2090650780" ] }
1906.08628
2972729785
Transformation Equivariant Representations (TERs) aim to capture the intrinsic visual structures that equivary to various transformations by expanding the notion of translation equivariance underlying the success of Convolutional Neural Networks (CNNs). For this purpose, we present both deterministic AutoEncoding Transformations (AET) and probabilistic AutoEncoding Variational Transformations (AVT) models to learn visual representations from generic groups of transformations. While the AET is trained by directly decoding the transformations from the learned representations, the AVT is trained by maximizing the joint mutual information between the learned representation and transformations. This results in Generalized TERs (GTERs) equivariant against transformations in a more general fashion by capturing complex patterns of visual structures beyond the conventional linear equivariance under a transformation group. The presented approach can be extended to (semi-)supervised models by jointly maximizing the mutual information of the learned representation with both labels and transformations. Experiments demonstrate the proposed models outperform the state-of-the-art models in both unsupervised and (semi-)supervised tasks.
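One simple instantiation of the probabilistic AVT objective is to let a surrogate decoder output a Gaussian over the transformation parameters and maximize their log-likelihood given the representations of the original and transformed images, a variational lower bound on the representation-transformation mutual information. The sketch below assumes 2D affine transformations (six parameters) and a linear decoder; both are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TransformationDecoder(nn.Module):
    """Predicts mean and log-variance of the transformation parameters from
    the concatenated representations of the original and transformed images."""
    def __init__(self, rep_dim, t_dim=6):          # 6 = flattened 2x3 affine matrix
        super().__init__()
        self.net = nn.Linear(2 * rep_dim, 2 * t_dim)

    def nll(self, z_orig, z_trans, t_params):
        mu, logvar = self.net(torch.cat([z_orig, z_trans], dim=1)).chunk(2, dim=1)
        # Negative Gaussian log-likelihood of the true parameters; minimizing it
        # maximizes a variational lower bound on I(representation; transformation).
        return 0.5 * (logvar + (t_params - mu) ** 2 / logvar.exp()).sum(dim=1).mean()
```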
Learning transformation-equivariant representations can be traced back to the seminal work on training capsule nets @cite_19 @cite_3 @cite_32 . The transformation equivariance is characterized by the various directions of capsules, while the confidence of belonging to a particular class is captured by their lengths.
{ "abstract": [ "A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.", "A capsule is a group of neurons whose outputs represent different properties of the same entity. Each layer in a capsule network contains many capsules [a group of capsules forms a capsule layer and can be used in place of a traditional layer in a neural net]. We describe a version of capsules in which each capsule has a logistic unit to represent the presence of an entity and a 4x4 matrix which could learn to represent the relationship between that entity and the viewer (the pose). A capsule in one layer votes for the pose matrix of many different capsules in the layer above by multiplying its own pose matrix by trainable viewpoint-invariant transformation matrices that could learn to represent part-whole relationships. Each of these votes is weighted by an assignment coefficient. These coefficients are iteratively updated for each image using the Expectation-Maximization algorithm such that the output of each capsule is routed to a capsule in the layer above that receives a cluster of similar votes. The transformation matrices are trained discriminatively by backpropagating through the unrolled iterations of EM between each pair of adjacent capsule layers. On the smallNORB benchmark, capsules reduce the number of test errors by 45 compared to the state-of-the-art. Capsules also show far more resistance to white box adversarial attack than our baseline convolutional neural network.", "" ], "cite_N": [ "@cite_19", "@cite_32", "@cite_3" ], "mid": [ "2963703618", "2785994986", "" ] }
Learning Generalized Transformation Equivariant Representations via AutoEncoding Transformations
In this paper, we aspire to show that transformations play a fundamental role in learning powerful representations, by transforming images as a means to reveal the intrinsic patterns from transformed visual structures. Particularly, Transformation Equivariant Representation (TER) learning seeks to model representations that equivary to various transformations on images. In other words, the representation of an image ought to change in the same way as the image is transformed. This is motivated by the assumption that image representations should capture the intrinsic visual structures such that transformations can be decoded from the representations of the original and transformed images. Based on this assumption, we formally present a novel criterion of AutoEncoding Transformations (AET) to learn the TERs for various groups of transformations. Learning TERs was adopted in Hinton's seminal work on learning transformation equivariant capsules [1], and plays a critical role in the success of Convolutional Neural Networks (CNNs) [2]. Specifically, the representations learned by CNNs are translation equivariant, as their feature maps are shifted in the same way as input images are translated. On top of these feature maps that preserve the visual structures with translation equivariance, fully connected layers are built to output the predicted labels of input images. Obviously, the translation equivariant convolutional features play a pivotal role in delivering the state-of-the-art performance of deep networks. Thus, they have been extended beyond translations to learn more expressive representations equivariant to generic types of transformations, such as affine, projective and homographic transformations. Along this direction, group equivariant CNNs [3] have been developed to guarantee that a transformation of the input images results in the same transformation of the resultant feature maps. However, the group equivariant CNNs [3] and their variants [4], [5] are restricted to discrete transformations, and the resultant representations are also limited to a group representation of linear transformations. These limitations restrict their ability to model group representations of complex transformations that could be continuous and nonlinear in many learning tasks, ranging from unsupervised to semi-supervised and supervised learning. Unsupervised Learning of Transformation Equivariant Representations The focus of this paper is the principle of autoencoding transformations and its application to learning transformation equivariant representations. The core idea is to encode data with representations from which the transformations can be decoded as well as possible. We begin with unsupervised learning of such representations without involving any labeled data, and then proceed to a generalization to semi-supervised and supervised representations by encoding label information as well. Unlike group equivariant CNNs, which learn feature maps mathematically satisfying the transformation equivariance as a function of the group of transformations, the proposed AutoEncoding Transformations (AET) presents an autoencoding architecture that learns transformation equivariant representations by reconstructing the applied transformations. As long as a transformation of input images results in equivariant representations, it should be well decoded from the representations of the original and transformed images.
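A deterministic AET training step, following the description above: sample a transformation, encode the original and transformed images with a shared (Siamese) encoder, and regress the transformation parameters. The encoder/decoder bodies and the affine sampler below are placeholders for whatever networks one chooses, not the paper's exact models.

```python
import torch
import torch.nn.functional as F

def aet_step(encoder, decoder, images, sample_affine, optimizer):
    """encoder: image -> representation; decoder: (z, z_t) -> predicted params;
    sample_affine(n): returns (params, theta) with theta an (n, 2, 3) affine batch."""
    params, theta = sample_affine(images.size(0))
    grid = F.affine_grid(theta, images.size(), align_corners=False)
    transformed = F.grid_sample(images, grid, align_corners=False)

    z, z_t = encoder(images), encoder(transformed)   # shared encoder weights
    loss = F.mse_loss(decoder(z, z_t), params)       # decode the applied transformation

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```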
Compared with the group equivariant CNNs, the AET model is more flexible and tractable in tackling any transformations and their compositions, since it does not rely on a strict convolutional structure to guarantee the equivariance. The AET is also in contrast to the conventional AutoEncoding Data (AED) paradigm, which instead aims to reconstruct the data rather than the transformations. Figure 1(a) and (b) illustrate the comparison between the AET and AED. Since the space of transformations (e.g., the few parameters of transformations) is of much lower dimension than the data space (e.g., the pixel space of images), the decoder of the AET can be much shallower than that of the AED. This allows the backpropagated errors to more sufficiently train the encoder that models the representations of input data in the AET architecture. Moreover, an AET model can be trained from an information-theoretic perspective by maximizing the information in the learned representation about the applied transformation and the input data. This generalizes the group representations of linear transformations to more general forms that could equivary nonlinearly to input transformations. It results in Generalized Transformation Equivariant Representations (GTERs) that can capture more complex patterns of visual structure under transformations. Unfortunately, maximizing the mutual information between representations and transformations results in an intractable optimization problem. A variational lower bound of the mutual information can be derived by introducing a surrogate transformation decoder, yielding a novel model of AutoEncoding Variational Transformations (AVT) as an alternative to the deterministic AET. (Semi-)Supervised Learning of Transformation Equivariant Representations While both the AET and AVT are trained in an unsupervised fashion, they can serve as the basic representations for building (semi-)supervised classifiers. Along this direction, we can train a (Semi-)Supervised Autoencoding Transformation (SAT) model that jointly trains the transformation equivariant representations as well as the corresponding classifiers. Figure 1(c) illustrates the SAT model, where a classifier head is added upon the representation encoder of an AET network. The SAT can be based on either the deterministic AET or the probabilistic AVT architecture. Particularly, along the direction pointed to by the AVT, we seek to train the proposed (semi-)supervised transformation equivariant classifiers by maximizing the mutual information of the learned representations with the transformations and labels. In this way, the trained SAT model can not only handle the transformed data through their equivarying representations, but also encode the labeling information through the supervised classifier. The resultant SAT contains the deterministic model based on the AET as a special case, obtained by fixing deterministic models for the representation encoder and the transformation decoder. The transformation equivariance in the SAT model is contrary to data augmentation by transformations in the deep learning literature [2]. First, data augmentation is only applicable to augmenting the labeled examples for model training, and cannot be extended to unlabeled data. This limits its use in semi-supervised learning, which must explore the unlabeled data. Second, data augmentation aims to enforce transformation invariance, under which the labels of transformed data are supposed to be invariant.
This differs from our motivation to encode the inherent visual structures that equivary under various transformations. Actually, in the (semi-)supervised transformation equivariant classifiers, we aim to seamlessly integrate the principles of training transformation equivariant representations and transformation invariant classifiers. Indeed, both principles have played key roles in the compelling performances of the CNNs and their modern variants. This is witnessed by the translation equivariant convolutional feature maps and the classifiers atop, which are supposed to make transformation-invariant predictions with the spatial pooling and fully connected layers. We will show that the proposed SAT extends the translation equivariance in the CNNs to cover a generic class of transformation equivariance, and also encodes the labels to train the representations and the associated transformation invariant classifiers. We hope this can deepen our understanding of the interplay between transformation equivariance and invariance, both of which play fundamental roles in training robust classifiers with labeled and unlabeled data.

The remainder of this paper is organized as follows. We review the related works in Section 2. The unsupervised and (semi-)supervised learning of transformation equivariant representations are presented in the autoencoding transformation framework in Section 3 and Section 4, respectively. We present experiment results in Section 5 and Section 6 for unsupervised and semi-supervised tasks, and conclude the paper and discuss future works in Section 7.

Transformation-Equivariant Representations

Learning transformation-equivariant representations can be traced back to the seminal work on training capsule nets [1], [6], [7]. The transformation equivariance is characterized by the various directions of capsules, while the confidence of belonging to a particular class is captured by their lengths. Many efforts have been made in the literature [3], [4], [5] on extending the conventional translation-equivariant convolutions to cover more transformations. Among them are group equivariant convolutions (G-convolutions) [3], which have been developed to equivary to more types of transformations. The idea of group equivariance has also been introduced to the capsule nets [5] by ensuring the equivariance of output pose vectors to a group of transformations with a generic routing mechanism. However, the group equivariant convolution is restricted to discrete transformations, which limits its ability to learn representations equivariant to generic continuous transformations.

Unsupervised Representation Learning

Auto-Encoders and GANs. Unsupervised auto-encoders have been extensively studied in the literature [8], [9], [10]. Existing auto-encoders are trained by reconstructing input data from the outputs of encoders, and a large category of auto-encoder variants have been proposed. Among them is the Variational Auto-Encoder (VAE) [11], which maximizes a lower bound of the data likelihood to train a pair of probabilistic encoder and decoder, while beta-VAE seeks to disentangle representations by introducing an adjustable hyperparameter on the capacity of the latent channel to balance between the independence constraint and the reconstruction accuracy [12]. Denoising auto-encoders [10] attempt to reconstruct noise-corrupted data to learn robust representations, while contractive auto-encoders [13] encourage learning representations invariant to small perturbations on the data.
Along this direction, Hinton et al. [1] propose capsule networks to explore transformation equivariance by minimizing the discrepancy between the reconstructed and target data. On the other hand, Generative Adversarial Nets (GANs) have also been used to train unsupervised representations. Unlike the auto-encoders, the GANs [14] and their variants [15], [16], [17], [18] generate data from noises drawn from a simple distribution, with a discriminator trained adversarially to distinguish between real and fake data. The sampled noises can be viewed as the representations of generated data over a manifold, and one can train an encoder by inverting the generator to find the generating noise. This can be implemented by jointly training a pair of mutually inverse generator and encoder [15], [16]. There also exist GANs that generalize better in producing unseen data based on the Lipschitz assumption on the real data distribution [17], [18], which can give rise to more powerful representations of data beyond the training examples [15], [16], [19]. Compared with the auto-encoders, GANs do not rely on learning a one-to-one reconstruction of data; instead, they aim to generate the entire distribution of data.

Self-Supervisory Signals. There exist many other unsupervised learning methods that use different types of self-supervised signals to train deep networks. Noroozi and Favaro [20] propose to solve Jigsaw puzzles to train a convolutional neural network. Doersch et al. [21] train the network by inferring the relative positions between sampled patches from an image as self-supervised information. Instead, Noroozi et al. [22] count features that satisfy equivalence relations between downsampled and tiled images. Gidaris et al. [23] propose to train RotNets by predicting a discrete set of image rotations, but RotNets are unable to handle generic continuous transformations and their compositions. Dosovitskiy et al. [24] create a set of surrogate classes by applying various transformations to individual images. However, the resultant features could over-discriminate visually similar images, as they always belong to different surrogate classes. Unsupervised features have also been learned from videos by estimating the self-motion of moving objects between consecutive frames [25].

(Semi-)Supervised Representation Learning

In addition, there exist a large number of semi-supervised models in the literature. Here, we particularly mention three state-of-the-art methods that will be compared in the experiments. Temporal ensembling [26] and mean teachers [27] both use an ensemble of teachers to supervise the training of a student model. Temporal ensembling uses the exponential moving average of predictions made by past models on unlabeled data as targets to train the student model. Instead, mean teachers update the student model with the exponential moving average of the weights of past models. On the contrary, Virtual Adversarial Training (VAT) [28] seeks to minimize the change of predictions on unlabeled examples when their output values are adversarially altered. This results in a robust model that prefers smooth predictions over unlabeled data.

The SAT also differs from transformation-based data augmentation, in which the transformed samples and their labels are used directly as additional training examples [2]. First, in semi-supervised learning, unlabeled examples cannot be directly augmented to form training examples due to their missing labels.
Moreover, data augmentation needs to preserve the labels on augmented images, which prevents us from applying transformations that severely distort the images (e.g., shearing, rotations with arbitrary angles, and projective transformations) or invalidate the associated labels (e.g., vertically flipping "6" to "9"). In contrast, the SAT avoids using the labels of transformed images to directly supervise the training of the classifier; instead, it attempts to encode the visual structures of images equivariant to various transformations without access to their labels. This leads to a label-blind TER regularizer that explores the unlabeled examples for the semi-supervised problem.

UNSUPERVISED LEARNING OF TRANSFORMATION EQUIVARIANT REPRESENTATIONS

In this section, we first present the autoencoding transformation architecture to learn the transformation equivariant representations in a deterministic fashion. Then, a variational alternative is presented to handle the uncertainty in the representation learning by maximizing the mutual information between the learned representations and the applied transformations.

AET: A Deterministic Model

We begin by defining the notations used in the proposed AutoEncoding Transformation (AET) architecture. Consider a random transformation $t$ sampled from a transformation distribution $p(t)$ (e.g., warping, projective and homographic transformations), as well as an image $x$ drawn from a data distribution $p(x)$ in a sample space $\mathcal{X}$. Then the application of $t$ to $x$ results in a transformed image $t(x)$.

The goal of the AET is to learn a representation encoder $E_\theta : x \mapsto E_\theta(x)$ with parameters $\theta$, which maps a sample $x \sim p(x)$ to its representation $E_\theta(x)$ in a linear space $\mathcal{Z}$. For this purpose, one needs to learn a transformation decoder with parameters $\phi$,
$$D_\phi : [E_\theta(x), E_\theta(t(x))] \mapsto \hat{t},$$
which makes an estimate $\hat{t}$ of the input transformation $t$ from the representations of the original and transformed samples. Since the transformation decoder takes the encoder outputs rather than the original and transformed images, this pushes the encoder to capture the inherent visual structures of images to make a satisfactory estimate of the transformation.

Then the AET can be trained to jointly learn the representation encoder $E_\theta$ and the transformation decoder $D_\phi$. A loss function $\ell(t, \hat{t})$ measuring the deviation between a transformation $t$ and its estimate $\hat{t}$ is minimized to train the AET over $p(t)$ and $p(x)$:
$$\min_{\theta,\phi}\ \mathbb{E}_{t\sim p(t),\, x\sim p(x)}\ \ell(t, \hat{t}), \qquad (1)$$
where the estimated transformation $\hat{t}$ can be written as a function of the encoder $E_\theta$ and the decoder $D_\phi$ such that $\hat{t} = D_\phi\big[E_\theta(x), E_\theta(t(x))\big]$, and the expectation $\mathbb{E}$ is taken over the distributions of transformations and data. In this way, the encoder $E_\theta$ and the decoder $D_\phi$ can be jointly trained over mini-batches by back-propagating the gradient of the loss to update their parameters.

AVT: A Probabilistic Model

Alternatively, we can train transformation equivariant representations to contain as much information as possible about the applied transformations so as to recover them.

Notations. Formally, our goal is to learn an encoder that maps a transformed sample $t(x)$ to a probabilistic representation with mean $f_\theta$ and variance $\sigma_\theta$. This results in the following probabilistic representation $z \in \mathcal{Z}$ of $t(x)$:
$$z = f_\theta(t(x)) + \sigma_\theta(t(x)) \circ \epsilon, \qquad (2)$$
where $\epsilon$ is sampled from a normal distribution $p(\epsilon) \triangleq \mathcal{N}(\epsilon|0, I)$, with $\circ$ denoting the element-wise product. Thus, the resultant probabilistic representation $z$ follows a normal distribution $p_\theta(z|t, x) \triangleq \mathcal{N}\big(z \,\big|\, f_\theta(t(x)), \sigma^2_\theta(t(x))\big)$ conditioned on the randomly sampled transformation $t$ and the input data $x$.
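The sampling in Eq. (2) is the standard reparameterization trick, sketched below; `mean_net` and `logvar_net` are illustrative stand-ins for the encoder heads producing $f_\theta$ and $\log \sigma^2_\theta$.

```python
import torch

def sample_representation(mean_net, logvar_net, tx):
    # z = f_theta(t(x)) + sigma_theta(t(x)) * eps, with eps ~ N(0, I), as in Eq. (2).
    mu = mean_net(tx)                    # f_theta(t(x))
    std = (0.5 * logvar_net(tx)).exp()   # sigma_theta(t(x)), recovered from the log-variance
    eps = torch.randn_like(mu)
    return mu + std * eps                # a sample from N(mu, sigma^2)
```

Parameterizing the variance through its logarithm keeps the predicted value unconstrained while guaranteeing a positive standard deviation.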
On the other hand, the representation of the original sample $x$ is a special case where $t$ is the identity transformation:
$$\bar{z} = f_\theta(x) + \sigma_\theta(x) \circ \tilde{\epsilon}, \qquad (3)$$
whose mean and variance are computed by the deep network with the same weights $\theta$, and $\tilde{\epsilon} \sim p(\tilde{\epsilon}) \triangleq \mathcal{N}(\tilde{\epsilon}|0, I)$.

Generalized Transformation Equivariance

In the conventional definition of transformation equivariance, there should exist an automorphism $\rho(t) \in \mathrm{Aut}(\mathcal{Z}) : \mathcal{Z} \to \mathcal{Z}$ in the representation space, such that
$$z = [\rho(t)](\bar{z}).$$
Here the transformation $\rho(t)$ is independent of the input sample $x$. (Note that the transformation $t$ in the sample space $\mathcal{X}$ and the corresponding transformation $\rho(t)$ in the representation space $\mathcal{Z}$ need not be the same, but $\rho(t)$ should be a function of the sample transformation $t$.) In other words, the representation $z$ of a transformed sample is completely determined by the original representation $\bar{z}$ and the applied transformation $t$, with no need to access the sample $x$. This is called the steerability property in the literature [4], which enables us to compute $z$ by applying the sample-independent transformation $\rho(t)$ directly to the original representation $\bar{z}$.

This property can be generalized without relying on the linear group representations of transformations through automorphisms. Instead of sticking with a linear $\rho(t)$, one can seek a more general relation between $z$ and $\bar{z}$ that is independent of $x$. From an information-theoretic point of view, this requires that $(\bar{z}, t)$ jointly contain all necessary information about $z$, so that $z$ can be best estimated from them without direct access to $x$. This leads us to maximizing the mutual information $I_\theta(z; \bar{z}, t)$ to learn the generalized transformation equivariant representations. Indeed, by the chain rule and the nonnegativity of mutual information, we have
$$I_\theta(z; \bar{z}, t) = I_\theta(z; \bar{z}, t, x) - I_\theta(z; x|\bar{z}, t) \le I_\theta(z; \bar{z}, t, x),$$
which shows $I_\theta(z; \bar{z}, t)$ is upper bounded by the mutual information $I_\theta(z; \bar{z}, t, x)$ between $z$ and $(\bar{z}, t, x)$. Clearly, when $I_\theta(z; x|\bar{z}, t) = 0$, $I_\theta(z; \bar{z}, t)$ attains the maximum value of its upper bound $I_\theta(z; \bar{z}, t, x)$. In this case, $x$ provides no more information about $z$ than $(\bar{z}, t)$, which implies one can estimate $z$ directly from $(\bar{z}, t)$ without accessing $x$. Thus, we propose to solve
$$\theta^\star = \arg\max_\theta\ I_\theta(z; \bar{z}, t)$$
to learn the probabilistic encoder $\theta$ in pursuit of such a generalized TER. However, a direct maximization of the above mutual information needs to evaluate an intractable posterior $p_\theta(t|z, \bar{z})$ of the transformation. Thus, we instead lower bound the mutual information by introducing a surrogate decoder $q_\phi(t|z, \bar{z})$ with parameters $\phi$ to approximate the true posterior.

Variational Approach

Unlike the variational auto-encoder that lower bounds the data likelihood [11], we directly take a lower bound of the mutual information [29] between $z$ and $(\bar{z}, t)$:
$$
\begin{aligned}
I_\theta(z; \bar{z}, t) &= I_\theta(z; \bar{z}) + I_\theta(z; t|\bar{z}) \ge I_\theta(z; t|\bar{z}) \\
&= H(t|\bar{z}) - H(t|z, \bar{z}) = H(t|\bar{z}) + \mathbb{E}_{p_\theta(t,z,\bar{z})} \log p_\theta(t|z, \bar{z}) \\
&= H(t|\bar{z}) + \mathbb{E}_{p_\theta(t,z,\bar{z})} \log q_\phi(t|z, \bar{z}) + \mathbb{E}_{p(z,\bar{z})} D\big(p_\theta(t|z, \bar{z}) \,\|\, q_\phi(t|z, \bar{z})\big) \\
&\ge H(t|\bar{z}) + \mathbb{E}_{p_\theta(t,z,\bar{z})} \log q_\phi(t|z, \bar{z}) \triangleq \tilde{I}_{\theta,\phi}(z; \bar{z}, t),
\end{aligned}
$$
where $H(\cdot)$ denotes the (conditional) entropy, and $D\big(p_\theta(t|z,\bar{z}) \,\|\, q_\phi(t|z,\bar{z})\big)$ is the non-negative Kullback-Leibler divergence between $p_\theta$ and $q_\phi$. We choose to maximize the variational lower bound $\tilde{I}_{\theta,\phi}(z; \bar{z}, t)$.
Since $H(t|\bar{z})$ is nonnegative and independent of the model parameters $\theta$ and $\phi$, we choose to solve
$$\max_{\theta,\phi}\ L^{\mathrm{unsup}}_{\theta,\phi} \triangleq \mathbb{E}_{p_\theta(t,z,\bar{z})} \log q_\phi(t|z, \bar{z}) = \mathbb{E}_{p(x),p(t)}\, \mathbb{E}_{p(\epsilon),p(\tilde{\epsilon})} \log q_\phi(t|z, \bar{z}) \qquad (4)$$
to learn $\theta$ and $\phi$ under the expectation over $p(t, z, \bar{z})$, where the equality follows from the generative process for the representations in Eqs. (2)-(3).

Variational Transformation Decoder

To estimate a family of continuous transformations, we choose a normal distribution $\mathcal{N}\big(t \,\big|\, d_\phi(z,\bar{z}), \sigma^2_\phi(z,\bar{z})\big)$ as the posterior $q_\phi(t|z,\bar{z})$ of the transformation decoder, where the mean $d_\phi(z,\bar{z})$ and variance $\sigma^2_\phi(z,\bar{z})$ are each implemented by a deep network. For categorical transformations (e.g., horizontal vs. vertical flips, and rotations of different directions), a categorical distribution $\mathrm{Cat}\big(t \,\big|\, \pi_\phi(z,\bar{z})\big)$ can be adopted as the posterior $q_\phi(t|z,\bar{z})$, where each entry of $\pi_\phi(z,\bar{z})$ is the probability mass for a transformation type. A hybrid distribution can also be defined to combine multiple continuous and categorical transformations, making the variational transformation decoder more flexible and appealing in handling complex transformations.

The posterior $q_\phi(t|z,\bar{z})$ of the transformation is a function of the representations of the original and transformed images. Thus, a natural choice is to use a Siamese encoder network with shared weights to output the representations of the original and transformed samples, and to construct the transformation decoder atop the concatenated representations. Figure 2(a) illustrates the architecture of the AVT network. Finally, it is not hard to see that the deterministic AET model can be viewed as a special case of the AVT, if the probabilistic representation encoder $p_\theta(z|t, x)$ and transformation decoder $q_\phi(t|z,\bar{z})$ were set to deterministic forms as in the AET.
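To make the objective (4) concrete, here is a minimal sketch of its Gaussian-decoder instantiation. The `decoder` module is an illustrative assumption, taken to return the mean and log-variance of the transformation parameters; the negation turns the maximization into a loss to be minimized.

```python
import torch

def avt_loss(decoder, z, z_bar, t_params):
    # q_phi(t | z, z_bar) = N(t | d_phi(z, z_bar), sigma_phi^2(z, z_bar))
    mean, logvar = decoder(torch.cat([z, z_bar], dim=1))
    q = torch.distributions.Normal(mean, (0.5 * logvar).exp())
    # Maximizing E log q_phi(t | z, z_bar) in (4) == minimizing its negation.
    return -q.log_prob(t_params).sum(dim=1).mean()
```

For categorical transformations, the `Normal` log-likelihood would simply be replaced by a cross-entropy over the predicted probability masses.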
(SEMI-)SUPERVISED LEARNING OF TRANSFORMATION EQUIVARIANT REPRESENTATIONS

Autoencoding transformations can act as the basic representation block in many learning problems. In this section, we present its role in (semi-)supervised learning tasks to enable more accurate classification of samples by capturing their transformation equivariant representations.

SAT: (Semi-)Supervised Autoencoding Transformations

The unsupervised learning of autoencoding transformations can be generalized to (semi-)supervised cases with labeled samples. Accordingly, the goal is formulated as learning representations that contain as much (mutual) information as possible about not only the applied transformations but also the data labels. Given a labeled sample $(x, y)$, we can define the joint distribution over the representations, transformation and label as
$$p_\theta(y, t, z, \bar{z}|x) = p(t)\, p_\theta(\bar{z}|x)\, p_\theta(z|t, x)\, p(y|x),$$
where we have assumed that $y$ is independent of $t$ and $z$ once the sample $x$ is given.

In the presence of sample labels, the pursuit of transformation equivariant representations can be performed by maximizing the joint mutual information $I_\theta(y, z; t, \bar{z})$, such that the representation $\bar{z}$ of the original sample and the transformation $t$ contain sufficient information to classify the label $y$, while the representation $z$ stays equivariant to the transformed sample.

Like in (4) for the unsupervised case, the joint mutual information can be lower bounded in the following way:
$$
\begin{aligned}
I_\theta(y, z; \bar{z}, t) &= I_\theta(y, z; \bar{z}) + I_\theta(y, z; t|\bar{z}) \\
&= \big(I_\theta(z; \bar{z}) + I_\theta(y; \bar{z}|z)\big) + \big(I_\theta(z; t|\bar{z}) + I_\theta(y; t|z, \bar{z})\big) \\
&\ge I_\theta(y; \bar{z}|z) + I_\theta(z; t|\bar{z}) \\
&\ge H(y|z) + \mathbb{E}_{p_\theta(y,z,\bar{z})} \log q_\phi(y|z, \bar{z}) + H(t|\bar{z}) + \mathbb{E}_{p_\theta(t,z,\bar{z})} \log q_\phi(t|z, \bar{z}) \triangleq \tilde{I}_{\theta,\phi}(y, z; \bar{z}, t),
\end{aligned}
$$
where the first two equalities apply the chain rule of mutual information, and the first inequality uses the nonnegativity of mutual information. In particular, we usually have $I_\theta(y; t|z, \bar{z}) = 0$, which means the transformation should not change the label $y$ of a sample (i.e., transformation invariance of sample labels). The second inequality follows the variational bound we derived in the last section. One can also assume that the surrogate posterior $q_\phi(y|z, \bar{z})$ of labels simplifies to $q_\phi(y|\bar{z})$, since the representation of the original sample is supposed to provide sufficient information to predict the label.

Since $H(y|z) \ge 0$ and $H(t|\bar{z})$ is independent of the model parameters $\theta$ and $\phi$, we maximize the following variational lower bound:
$$
\max_{\theta,\phi}\ L^{\mathrm{sup}}_{\theta,\phi} \triangleq \mathbb{E}_{p_\theta(y,\bar{z})} \log q_\phi(y|\bar{z}) + \mathbb{E}_{p_\theta(t,z,\bar{z})} \log q_\phi(t|z, \bar{z}) = \mathbb{E}_{p(x)}\Big[\mathbb{E}_{p(y|x),p(\tilde{\epsilon})} \log q_\phi(y|\bar{z}) + \mathbb{E}_{p(t),p(\epsilon),p(\tilde{\epsilon})} \log q_\phi(t|z, \bar{z})\Big], \qquad (5)
$$
where $z$ and $\bar{z}$ are sampled by following Eqs. (2)-(3), and the ground-truth $y$ is sampled from the label distribution $p(y|x)$ directly.

In the deterministic case, it is not hard to show that the first term of (5) is related to the cross-entropy loss in training a supervised classifier, while the second term reduces to the loss (1) of the deterministic AET model. In this sense, the AET loss plays the role of a regularizer on the cross-entropy loss in training a supervised model.

In addition, a semi-supervised model can be trained by combining the unsupervised and supervised objectives (4) and (5):
$$\max_{\theta,\phi}\ L^{\mathrm{unsup}}_{\theta,\phi} + \lambda\, L^{\mathrm{sup}}_{\theta,\phi}, \qquad (6)$$
with a nonnegative balancing coefficient $\lambda$. This enables jointly exploring labeled and unlabeled examples and their representations equivariant to various transformations. We will demonstrate that the SAT achieves superior performances to the existing state-of-the-art (semi-)supervised models. Moreover, the competitive performances also show great potential of the model as the basic representation block in many machine learning and computer vision tasks. Figure 2(b) illustrates the architecture of the SAT model in comparison with its AVT counterpart. Particularly, in the SAT, the transformation and label decoders are jointly trained atop the representation encoder.
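A minimal sketch of the combined objective (6) follows, reusing `avt_loss` from the sketch above. For brevity the transformation term is shared between the unsupervised and supervised parts, and the `classifier` and `labeled_mask` names are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def sat_loss(decoder, classifier, z, z_bar, t_params, labels, labeled_mask, lam=1.0):
    # Transformation term, appearing in (4) and in the second term of (5).
    l_trans = avt_loss(decoder, z, z_bar, t_params)
    # Label term of (5): cross-entropy of q_phi(y | z_bar) on the labeled subset.
    logits = classifier(z_bar[labeled_mask])
    l_label = F.cross_entropy(logits, labels[labeled_mask])
    # Eq. (6): unsupervised objective plus lambda times the supervised one.
    return l_trans + lam * l_label
```

The boolean `labeled_mask` selects the labeled examples in a mixed minibatch, so the unlabeled images contribute only through the transformation term.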
EXPERIMENTS: UNSUPERVISED LEARNING

In this section, we compare the proposed deterministic AET and probabilistic AVT models against other unsupervised methods on the CIFAR-10, ImageNet and Places datasets. The evaluation follows the protocols widely adopted by many existing unsupervised methods, applying the learned representations to downstream tasks.

CIFAR-10 Experiments

First, we evaluate the AET and AVT models on the CIFAR-10 dataset.

Experiment Settings

Architecture. To make a fair and direct comparison with existing models, the Network-In-Network (NIN) is adopted on the CIFAR-10 dataset for the unsupervised learning task [23], [30]. The NIN consists of four convolutional blocks, each of which contains three convolutional layers. Both the AET and AVT have two NIN branches with shared weights, each taking the original and transformed images as input, respectively. The output features of the fourth blocks of the two branches are concatenated and average-pooled to form a 384-d feature vector. An output layer then follows to produce the predicted transformation for the AET, or the mean $d_\phi$ and the log-of-variance $\log \sigma^2_\phi$ of the predicted transformation for the AVT, with the logarithm scaling the variance to a real value. The first two blocks of each branch are used as the encoder network to output the deterministic representation for the AET, or the mean $f_\theta$ of the probabilistic representation for the AVT. An additional $1 \times 1$ convolution followed by a batch normalization layer is added upon the encoder to produce the log-of-variance $\log \sigma^2_\theta$.

Implementation Details. Both the AET and the AVT networks are trained by SGD with a batch size of 512 original images and their transformed versions. Momentum and weight decay are set to 0.9 and $5 \times 10^{-4}$, respectively. For the AET, the learning rate is initialized to 0.1 and scheduled to drop by a factor of 5 after 240, 480, 640, 800 and 1,000 epochs; the network is trained for a total of 1,500 epochs. The AVT network is trained for 4,500 epochs; its learning rate is initialized to $10^{-3}$, increased to $5 \times 10^{-3}$ at epoch 50, and then gradually decayed to $10^{-5}$ starting from epoch 3,000. In the AVT, a single representation is randomly sampled from the encoder $p_\theta(z|t, x)$ and fed into the decoder $q_\phi(t|z, \bar{z})$. To fully exploit the uncertainty of the representations, five samples are drawn and averaged as the representation of an image to train the downstream classifiers. We found that averaging randomly sampled representations outperforms using only the mean of the representation.

Results

Comparison with Other Methods. To evaluate the effectiveness of a learned unsupervised representation, a classifier is usually trained upon it. In our experiments, we follow the existing evaluation protocols [23], [24], [31], [32], [33] by building a classifier on top of the second convolutional block. First, we evaluate the classification results by using the AET and AVT representations with both model-based and model-free classifiers. For the model-based classifier, we follow [23] by training a non-linear classifier with three Fully-Connected (FC) layers: each of the two hidden layers has 200 neurons with batch normalization and ReLU activations, and the output layer is a soft-max layer with ten neurons, one per image class. We also test a convolutional classifier upon the unsupervised features by adding a third NIN block, whose output feature map is average-pooled and connected to a linear soft-max classifier.

Table 1 shows the results by different models, comparing both fully supervised and unsupervised methods on CIFAR-10. The unsupervised AET and AVT with the convolutional classifier almost achieve the same error rates as their fully supervised NIN counterpart with four convolutional blocks (7.82% and 7.75% vs. 7.2%). We also compare the models when trained with varying numbers of FC layers in Table 2. The results show that the AVT, followed by the AET, consistently achieves the smallest errors no matter which classifiers are used. We also note that the probabilistic AVT outperforms the deterministic AET in the experiments. This is likely due to the AVT's ability to model the uncertainty of representations in training the downstream classifiers.
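The representation averaging described above can be sketched as follows; five reparameterized samples are drawn per image and their mean is fed to the downstream classifier (the network names are illustrative placeholders).

```python
import torch

def averaged_representation(mean_net, logvar_net, x, n_samples=5):
    # Draw several samples from the probabilistic encoder and average them,
    # as done before training the downstream classifiers.
    mu = mean_net(x)
    std = (0.5 * logvar_net(x)).exp()
    samples = [mu + std * torch.randn_like(mu) for _ in range(n_samples)]
    return torch.stack(samples, dim=0).mean(dim=0)
```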
We also find that the projective transformation performs better than the affine transformation when used to train the AET, and thus we mainly use the projective transformation to train the AVT.

Comparison based on Model-free KNN Classifiers. We also test a model-free KNN classifier based on the average-pooled feature representations from the second convolutional block. The KNN classifier requires no classifier training from labeled examples, which enables a direct evaluation of the quality of the learned features. Table 3 reports the KNN results. We further compare the models trained with only a few labeled examples; Table 4 reports the results of different models on CIFAR-10. Both the AET and AVT outperform the fully supervised models as well as the other unsupervised models when only few labeled examples (≤ 1,000 samples per class) are available.

ImageNet Experiments

We further evaluate the performance of the AET and AVT on the ImageNet dataset.

Architectures and Training Details. For a fair comparison with the existing methods [20], [23], [34], two AlexNet branches with shared parameters are created, taking the original and transformed images as inputs to train the unsupervised models. The 4,096-d output features from the second-to-last fully connected layer in each branch are concatenated and fed into the transformation decoder. We still use SGD to train the network, with a batch size of 768 images and their transformed counterparts, a momentum of 0.9, and a weight decay of $5 \times 10^{-4}$. For the AET model, the initial learning rate is set to 0.01, and it is dropped by a factor of 10 at epochs 100 and 150; the model is trained for 200 epochs in total. For the AVT, the initial learning rate is set to $10^{-3}$, and it is dropped by a factor of 10 at epochs 300 and 350; the AVT is trained for 400 epochs in total. We still average five samples drawn from the encoder outputs to train the downstream classifiers when evaluating the AVT. Since the projective transformation has shown better performances, we adopt it for the experiments on ImageNet.

Results. Table 5 reports the Top-1 accuracies of the compared methods on ImageNet, following the evaluation protocol in [20]. Two settings are adopted for evaluation: Conv4 and Conv5 mean training the remaining part of AlexNet on top of Conv4 and Conv5, respectively, with the labeled data. All the bottom convolutional layers up to Conv4 or Conv5 are frozen after they are trained in an unsupervised fashion. From the results, in both settings, the AVT model consistently outperforms the other unsupervised models, including the AET.

We also compare with the fully supervised models that give the upper bound of the classification performance by training the AlexNet with all labeled data end-to-end. The classifiers of random models are trained on top of Conv4 and Conv5 with randomly sampled weights, which sets the lower-bound performance. By comparison, the AET models narrow the performance gap to the upper-bound supervised models from 9.7% and 15.7% by RotNet and DeepCluster on Conv4 and Conv5, to 6.5% and 12.7% by the AET, and further to 5.5% and 11.3% by the AVT.

Moreover, we also follow the testing protocol adopted in [40] to compare the models by training a 1,000-way linear classifier on top of different numbers of convolutional layers in Table 6.

TABLE 6: Top-1 accuracy with linear layers on ImageNet. AlexNet is used as the backbone to train the unsupervised models under comparison. A 1,000-way linear classifier is trained upon the various convolutional layers of feature maps, which are spatially resized to have about 9,000 elements. Fully supervised and random models are also reported to show the upper and lower bounds of unsupervised model performance. Only a single crop is used, and no dropout or local response normalization is applied during testing, except for the models denoted with *, where ten crops are applied.

Again, the AVT consistently outperforms all the compared unsupervised models in terms of the Top-1 accuracy.
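The linear-probe protocol behind Table 6 amounts to freezing the unsupervised encoder and training only a linear soft-max head on its features; a minimal sketch follows, with the feature dimension as a placeholder.

```python
import torch.nn as nn

def build_linear_probe(encoder: nn.Module, feat_dim: int, n_classes: int = 1000):
    # Freeze the unsupervised layers; only the linear head receives gradients.
    for p in encoder.parameters():
        p.requires_grad = False
    encoder.eval()
    # The head is then trained with cross-entropy on the labeled data,
    # on top of the (spatially resized) convolutional feature maps.
    return nn.Linear(feat_dim, n_classes)
```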
Places Experiments

We also compare different models on the Places dataset; Table 7 reports the results. Unsupervised models are pretrained on the ImageNet dataset, and a linear logistic regression classifier is trained on top of different layers of convolutional feature maps with the Places labels. This assesses the generalizability of unsupervised features from one dataset to another. The models are still based on AlexNet variants. We compare with the fully supervised models trained with the Places labels and the ImageNet labels, respectively, as well as with random networks. Both the AET and the AVT models outperform the other unsupervised models, except performing slightly worse than Counting [40] with shallow representations from Conv1 and Conv2.

EXPERIMENTS: (SEMI-)SUPERVISED LEARNING

We compare the proposed SAT model with the other state-of-the-art semi-supervised methods in this section. For the sake of fair comparison, we follow the test protocol used in the literature [26], [27] on both CIFAR-10 [42] and SVHN [43], which are widely used as benchmark datasets to evaluate semi-supervised models.

Network Architecture and Implementation Details

Network Architecture. For the sake of a fair comparison, a 13-layer convolutional neural network, which has been widely used in existing semi-supervised models [26], [27], [28], is adopted as the backbone to build the SAT. It consists of three convolutional blocks, each of which contains three convolutional layers. The SAT has two branches of such three blocks with shared weights, each taking the original and transformed images as input, respectively. The output feature maps from the third blocks of the two branches are concatenated and average-pooled, resulting in a 256-d feature vector. A fully connected layer follows to predict the mean $d_\phi$ and the log-of-variance $\log \sigma^2_\phi$ of the transformation. The first two blocks are used as the encoder to output the mean $f_\theta$ of the representation, upon which an additional $1 \times 1$ convolution layer with batch normalization is added to compute the log-of-variance $\log \sigma^2_\theta$.

In addition, a classifier head is built on the representation from the encoder. Specifically, we draw five random representations of an input image and feed their average to the classifier. The classifier head has the same structure as the third convolutional block, but its weights differ from those of the Siamese branches of the transformation decoder. The output feature map of this convolutional block is globally average-pooled to a 128-d feature vector, and a soft-max fully connected layer follows to predict the image label.

Implementation Details. The representation encoder, transformation decoder and classifier are trained in an end-to-end fashion. In particular, SGD is adopted to iteratively update their weights over minibatches of 500 images, their transformed counterparts, and 40 labeled examples. Momentum and weight decay are set to 0.9 and $5 \times 10^{-4}$, respectively. The model is trained for a total of 4,500 epochs. The learning rate is initialized to $10^{-3}$, increased to $5 \times 10^{-3}$ at epoch 50, and then linearly decayed to $10^{-5}$ starting from epoch 3,000. For a fair comparison, we adopt the entropy minimization used in the state-of-the-art Virtual Adversarial Training [28]. A standard set of data augmentations from the literature [26], [27], [28] is also adopted throughout the experiments, including both horizontal flips and random translations on CIFAR-10, and only random translations on SVHN. The projective transformation, which performs better than the affine transformation, is adopted to train the semi-supervised representations.
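Since random projective transformations are used throughout, the following sketch shows one way to sample and apply a homography with `grid_sample`; the perturbation scale is an illustrative assumption, as the exact sampling ranges are not restated here.

```python
import torch
import torch.nn.functional as F

def random_homography(scale=0.15):
    # Perturb the identity to obtain a random 3x3 projective matrix.
    H = torch.eye(3) + scale * torch.randn(3, 3)
    H[2, 2] = 1.0
    return H

def warp_projective(img, H):
    # img: (1, C, h, w). grid_sample performs an inverse warp, so H maps
    # output pixel coordinates (in [-1, 1]) to input sampling locations.
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    pts = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1)  # (h, w, 3)
    pts = pts @ H.T                                           # apply the homography
    # Perspective divide; for the mild perturbations above the denominator
    # stays close to one.
    grid = pts[..., :2] / pts[..., 2:]
    return F.grid_sample(img, grid.unsqueeze(0), align_corners=True)
```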
Results

We compare with the state-of-the-art semi-supervised methods in the literature [26], [27]. In particular, the proposed SAT reduces the average error rates of Mean Teacher (the second-best performing method) by 30.9%, 25.6%, and 22.2% relatively with 1,000, 2,000, and 4,000 labels on CIFAR-10, and by 1.1%, 11%, and 12.9% relatively with 250, 500, and 1,000 labels on SVHN. The compared semi-supervised methods, including the Π model [26], Temporal Ensembling [26], and Mean Teacher [27], attempt to maximize the consistency of model predictions on the transformed and original images to train semi-supervised classifiers. While they also apply transformations to explore unlabeled examples, the competitive performance of the SAT model shows that transformation-equivariant representations are more compelling for classifying images than the consistent-label predictions under transformations made by those methods. This justifies the proposed criterion of pursuing transformation equivariance as a regularizer to train a classifier. It is not hard to see that the SAT can be integrated into the other semi-supervised methods as their base representation, and we believe this could further boost their performances. We leave this to future work, as it is beyond the scope of this paper.

The Impact of Entropy Minimization

We also conduct an ablation study of the effect of Entropy Minimization (EntMin) on model performance. EntMin was used in VAT [28], which outperformed the other semi-supervised methods in the literature. Here, we compare the error rates between the SAT and the VAT, with and without EntMin. As shown in Table 10, no matter whether entropy minimization is adopted, the SAT always outperforms the corresponding VAT. We also note that, even without entropy minimization, the SAT still performs better than the other state-of-the-art semi-supervised classifiers, such as Mean Teacher, Temporal Ensembling, and the Π model, shown in Table 8. This demonstrates the compelling performance of the SAT model.

Comparison with Data Augmentation by Transformations

We also compare the performance of the SAT with that of a classification network trained on images augmented by the transformations. Specifically, in each minibatch, input images are augmented with the same set of random projective transformations used in the SAT. The transformation-augmented images and their labels are used to train a network with the same 13-layer architecture that has been adopted as the SAT backbone. Note that the transformation augmentations are applied on top of the standard augmentations mentioned in the implementation details, for a fair comparison with the SAT. Table 11 compares the results between the SAT and the Data Augmentation by Transformation (DAT) classifier on CIFAR-10. It shows that the SAT significantly outperforms the DAT classifier.
Moreover, the projective transformations used in the SAT could severely distort training images, which would incur undesired updates to the model weights if the distorted images were naively used to train the network. This is witnessed by the result that data augmentation by transformations performs even worse than the supervised-only method (see Table 8). In contrast, the SAT avoids a direct use of the transformed images, together with their labels, to supervise the model training. Instead, it trains the learned representations to contain as much information as possible about the transformations. The superior performance demonstrates its outstanding ability to classify images by exploring the variations of visual structures induced by transformations on both labeled and unlabeled images.

CONCLUSION AND FUTURE WORKS

In this paper, we present a novel approach of AutoEncoding Transformations (AET) to learn representations that equivary to transformations applied to images. Unlike the group equivariant convolutions that would become intractable with a composition of complex transformations, the AET model seeks to learn representations of arbitrary forms by reconstructing transformations from the encoded representations of original and transformed images. The idea is further extended to a probabilistic model by maximizing the mutual information between the learned representation and the applied transformation. The intractable maximization problem is handled by introducing a surrogate transformation decoder and maximizing a variational lower bound of the mutual information, resulting in the Autoencoding Variational Transformations (AVT). Along this direction, a (Semi-)Supervised Autoencoding Transformation (SAT) approach is derived by maximizing the joint mutual information of the learned representation with both the transformation and the label for a given sample. The proposed AET paradigm lays a solid foundation to explore transformation equivariant representations in many learning tasks. Particularly, we conduct experiments to show its superior performances on both unsupervised and (semi-)supervised learning tasks, following standard evaluation protocols. In the future, we will explore the great potential of applying the learned AET representation as the building block in more learning tasks, such as (instance) semantic segmentation, object detection, super-resolution reconstruction, few-shot learning, and fine-grained classification.

Guo-Jun Qi

Guo-Jun Qi (M'14-SM'18) is the Chief Scientist leading and overseeing an international R&D team for multiple artificial intelligence services on the Huawei Cloud since August 2018. He was a faculty member in the Department of Computer Science and the director of the MAchine Perception and LEarning (MAPLE) Lab at the University of Central Florida since August 2014. Prior to that, he was a Research Staff Member at the IBM T.J. Watson Research Center, Yorktown Heights, NY. His research interests include machine learning and knowledge discovery from multi-modal data sources to build smart and reliable information and decision-making systems. Dr. Qi has published more than 100 papers in a broad range of venues in pattern recognition, machine learning and computer vision. He also has served or will serve as a general co-chair for ICME 2021,
6,935
1906.08628
2972729785
Transformation Equivariant Representations (TERs) aim to capture the intrinsic visual structures that equivary to various transformations by expanding the notion of translation equivariance underlying the success of Convolutional Neural Networks (CNNs). For this purpose, we present both deterministic AutoEncoding Transformations (AET) and probabilistic AutoEncoding Variational Transformations (AVT) models to learn visual representations from generic groups of transformations. While the AET is trained by directly decoding the transformations from the learned representations, the AVT is trained by maximizing the joint mutual information between the learned representation and transformations. This results in Generalized TERs (GTERs) equivariant against transformations in a more general fashion by capturing complex patterns of visual structures beyond the conventional linear equivariance under a transformation group. The presented approach can be extended to (semi-)supervised models by jointly maximizing the mutual information of the learned representation with both labels and transformations. Experiments demonstrate the proposed models outperform the state-of-the-art models in both unsupervised and (semi-)supervised tasks.
Many efforts have been made in literature @cite_35 @cite_13 @cite_17 on extending the conventional translation-equivariant convolutions to cover more transformations. Among them are group equivariant convolutions (G-convolution) @cite_35 that have been developed to equivary to more types of transformations. The idea of group equivariance has also been introduced to the capsule nets @cite_17 by ensuring the equivariance of output pose vectors to a group of transformations with a generic routing mechanism. However, the group equivariant convolution is restricted to discrete transformations, which limits its ability to learn the representations equivariant to generic continuous transformations.
{ "abstract": [ "We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of layer that enjoys a substantially higher degree of weight sharing than regular convolution layers. G-convolutions increase the expressive capacity of the network without increasing the number of parameters. Group convolution layers are easy to use and can be implemented with negligible computational overhead for discrete groups generated by translations, reflections and rotations. G-CNNs achieve state of the art results on CI- FAR10 and rotated MNIST.", "", "We present group equivariant capsule networks, a framework to introduce guaranteed equivariance and invariance properties to the capsule network idea. Our work can be divided into two contributions. First, we present a generic routing by agreement algorithm defined on elements of a group and prove that equivariance of output pose vectors, as well as invariance of output activations, hold under certain conditions. Second, we connect the resulting equivariant capsule networks with work from the field of group convolutional networks. Through this connection, we provide intuitions of how both methods relate and are able to combine the strengths of both approaches in one deep neural network architecture. The resulting framework allows sparse evaluation of the group convolution operator, provides control over specific equivariance and invariance properties, and can use routing by agreement instead of pooling operations. In addition, it is able to provide interpretable and equivariant representation vectors as output capsules, which disentangle evidence of object existence from its pose." ], "cite_N": [ "@cite_35", "@cite_13", "@cite_17" ], "mid": [ "2279221249", "", "2807754035" ] }
Learning Generalized Transformation Equivariant Representations via AutoEncoding Transformations
I N this paper, we aspire to show that transformations play a fundamental role in learning powerful representations by transforming images as a means to reveal the intrinsic patterns from transformed visual structures. Particularly, Transformation Equivariant Representation (TER) learning seeks to model representations that equivary to various transformations on images. In other words, the representation of an image ought to change in the same way as it is transformed. This is motivated by the assumption that image representations should capture the intrinsic visual structures such that transformations can be decoded from the representations of original and transformed images. Based on this assumption, we formally present a novel criterion of AutoEncoding Transformations (AET) to learn the TERs for various groups of transformations. Learning the TERs has been adopted in Hiton's seminal work on learning transformation equivariant capsules [1], and plays a critical role for the success of Convolutional Neural Networks (CNNs) [2]. Specifically, the representations learned by the CNNs are translation equivariant as their feature maps are shifted in the same way as input images are translated. On top of these feature maps that preserve the visual structures of translation equivariance, fully connected layers are built to output the predicted labels of input images. Obviously, the translation equivariant convolutional features play the pivotal role in delivering the state-of-the-art performances in the deep networks. Thus, they are extended beyond translations to learn more expressive representations of equivariance to generic types of transformations, such as affine, projective and homographic transformations. Aline this direction, the group equivariant CNNs [3] are developed to guarantee the transformation of input images results in the same transformation of input images. However, the group equivariant CNNs [3] and their variants [4], [5] are restricted to discrete transformations, and the resultant representations are also limited to a group representation of linear transformations. These limitations restrict their abilities to model group representations of complex transformations that could be continuous and nonlinear in many learning tasks, ranging from unsupervised, to semi-supervised and supervised learning. Unsupervised Learning of Transformation Equivariant Representations The focus of this paper is on the principle of autoencoding transformations and its application to learn the transformation equivariant representations. The core idea is to encode data with the representations from which the transformations can be decoded as much as possible. We will begin with an unsupervised learning of such representations without involving any labeled data, and then proceed to a generalization to semi-supervised and supervised representations by encoding label information as well. Unlike group equivariant CNNs that learn the feature maps mathematically satisfying the transformation equivariance as a function of the group of transformations, the proposed AutoEncoding Transformations (AET) presents an autoencoding architecture to learn transformation equivariant representations by reconstructing applied transformations. As long as a transformation of input images results in equivariant representations, it should be well decoded from the representations of original and transformed images. 
Compared with the group equivariant CNNS, the AET model is more flexible and tractable to tackle with any transformations and their compositions, since it does not rely on a strict convolutional structure to The AET is also in contrast to the conventional AutoEncoding Data (AED) paradigm that instead aims to reconstruct data rather than the transformations. Figure 1(a) and (b) illustrate the comparison between the AET and AED. Since the space of transformations (e.g., the few parameters of transformations) is of quite lower dimension than that of data space (e.g., the pixel space of images), the decoder of the AET can be quite shallower than that of the AED. This allows the backpropagated errors to more sufficiently train the encoder that models the representations of input data in the AET architecture. Moreover, an AET model can be trained from an information-theoretic perspective by maximizing the information in the learned representation about the applied transformation and the input data. This will generalize the group representations of linear transformations to more general forms that could equivary nonlinearly to input transformations. It results in Generalized Transformation Equivariant Representations (GTERs) that can capture more complex patterns of visual structure under transformations. Unfortunately, this will result in an intractable optimization problem to maximize the mutual information between representations and transformations. A variational lower bound of the mutual information can be derive by introducing a surrogate transformation decoder, yielding a novel model of Autoencoding Variational Transformation (AVT) as an alterative to the deterministic AET. (Semi-)Supervised Learning of Transformation Equivariant Representations While both AET and AVT are trained in an unsupervised fashion, they can act as the basic representation for building the (semi-)supervised classifiers. Along this direction, we can train (Semi-)Supervised Autoencoding Transformation (SAT) that jointly trains the transformation equivariant representations as well as the corresponding classifiers. Figure 1(c) illustrates the SAT model, where a classifier head is added upon the representation encoder of an AET network. The SAT can be based on either the deterministic AET or the probabilistic AVT architecture. Particularly, along the direction pointed by the AVT, we seek to train the proposed (semi-)supervised transformation equivariant classifiers by maximizing the mutual information of the learned representations with the transformations and labels. In this way, the trained SAT model can not only handle the transformed data through their equivarying representations, but also encode the labeling information through the supervised classifier. The resultant SAT also contains the deterministic model based on the AET as a special case by fixing a deterministic model to representation encoder and the transformation decoder. The transformation equivariance in the SAT model is contrary to the data augmentation by transformations in deep learning literature [2]. First, the data augmentation is only applicable to augment the labeled examples for model training, which cannot be extended to unlabeled data. This limits it in semi-supervised learning by exploring the unlabeled data. Second, the data augmentation aims to enforce the transformation invariance, in which the labels of transformed data are supposed to be invariant. 
This differs from the motivation to encode the inherent visual structures that equivary under various transformations. Actually, in the (semi-)supervised transformation equivariant classifiers, we aim to integrate the principles of both training transformation equivariant representations and transformation invariant classifiers seamlessly. Indeed, both principles have played the key role in compelling performances of the CNNs and their modern variants. This is witnessed by the translation equivariant convolutional feature maps and the atop classifiers that are supposed to make transformation-invariant predictions with the spatial pooling and fully connected layers. We will show that the proposed SAT extends the translation equivariance in the CNNs to cover a generic class of transformation equivariance, as well as encode the labels to train the representations and the associated transformation invariant classifiers. We hope this can deepen our understanding of the interplay between the transformation equivariance and invariance both of which play the fundamental roles in training robust classifiers with labeled and unlabeled data. The remainder of this paper is organized as follows. We will review the related works in Section 2. The unsupervised and (semi-)supervised learning of transformation equivariant representations will be presented in the autoencoding transformation framework in Section 3 and Section 4, respectively. We will present experiment results in Section 5 and Section 6 for unsupervised and semi-supervised tasks. We will conclude the paper and discuss the future works in Section 7. Transformation-Equivariant Representations Learning transformation-equivariant representations can trace back to the seminal work on training capsule nets [1], [6], [7]. The transformation equivariance is characterized by the various directions of capsules, while the confidence of belonging to a particular class is captured by their lengths. Many efforts have been made in literature [3], [4], [5] on extending the conventional translation-equivariant convolutions to cover more transformations. Among them are group equivariant convolutions (G-convolution) [3] that have been developed to equivary to more types of transformations. The idea of group equivariance has also been introduced to the capsule nets [5] by ensuring the equivariance of output pose vectors to a group of transformations with a generic routing mechanism. However, the group equivariant convolution is restricted to discrete transformations, which limits its ability to learn the representations equivariant to generic continuous transformations. Unsupervised Representation Learning Auto-Encoders and GANs. Unsupervised auto-encoders have been extensively studied in literature [8], [9], [10]. Existing auto-encoders are trained by reconstructing input data from the outputs of encoders. A large category of auto-encoder variants have been proposed. Among them is the Variational Auto-Encoder (VAE) [11] that maximizes the lower-bound of the data likelihood to train a pair of probabilistic encoder and decoder, while beta-VAE seeks to disentangle representations by introducing an adjustable hyperparameter on the capacity of latent channel to balance between the independence constraint and the reconstruction accuracy [12]. Denoising auto-encoders [10] attempt to reconstruct noise-corrupted data to learn robust representations, while contrastive Auto-Encoders [13] encourage to learn representations invariant to small perturbations on data. 
Along this direction, Hinton et al. [1] propose capsule networks to explore transformation equivariance by minimizing the discrepancy between the reconstructed and target data. On the other hand, Generative Adversarial Nets (GANs) have also been used to train unsupervised representations. Unlike the auto-encoders, the GANs [14] and their variants [15], [16], [17], [18] generate data from the noises drawn from a simple distribution, with a discriminator trained adversarially to distinguish between real and fake data. The sampled noises can be viewed as the representation of generated data over a manifold, and one can train an encoder by inverting the generator to find the generating noise. This can be implemented by jointly training a pair of mutually inverse generator and encoder [15], [16]. There also exist better generalizable GANs in producing unseen data based on the Lipschitz assumption on the real data distribution [17], [18], which can give rise to more powerful representations of data out of training examples [15], [16], [19]. Compared with the Auto-Encoders, GANs do not rely on learning one-to-one reconstruction of data; instead, they aim to generate the entire distribution of data. Self-Supervisory Signals. There exist many other unsupervised learning methods using different types of selfsupervised signals to train deep networks. Mehdi and Favaro [20] propose to solve Jigsaw puzzles to train a convolutional neural network. Doersch et al. [21] train the network by inferring the relative positions between sampled patches from an image as self-supervised information. Instead, Noroozi et al. [22] count features that satisfy equivalence relations between downsampled and tiled images. Gidaris et al. [23] propose to train RotNets by predicting a discrete set of image rotations, but they are unable to handle generic continuous transformations and their compositions. Dosovitskiy et al. [24] create a set of surrogate classes by applying various transformations to individual images. However, the resultant features could over-discriminate visually similar images as they always belong to different surrogate classes. Unsupervised features have also been learned from videos by estimating the self-motion of moving objects between consecutive frames [25]. (Semi-)Supervised Representation Learning In addition, there exist a large number of semi-supervised models in literature. Here, we particularly mention three state-of-the-art methods that will be compared in experiments. Temporal ensembling [26] and mean teachers [27] both use an ensemble of teachers to supervise the training of a student model. Temporal ensembling uses the exponential moving average of predictions made by past models on unlabeled data as targets to train the student model. Instead, mean teachers update the student model with the exponential moving average of the weights of past models. On the contrary, the Virtual Adversarial Training (VAT) [28] seeks to minimizes the change of predictions on unlabeled examples when their output values are adversarially altered. This could result in a robust model that prefers smooth predictions over unlabeled data. The SAT also differs from transformation-based data augmentation in which the transformed samples and their labels are used directly as additional training examples [2]. First, in the semi-supervised learning, unlabeled examples cannot be directly augmented to form training examples due to their missing labels. 
Moreover, data augmentation needs to preserve the labels on augmented images, and this prevents us from applying the transformations that could severely distort the images (e.g., shearing, rotations with arbitrary angles, and projective transformations) or invalidate the associated labels (e.g., vertically flipping "6" to "9"). In contrast, the SAT avoids using the labels of transformed images to supervisedly train the classifier directly; instead it attempts to encode the visual structures of images equivariant to various transformations without access to their labels. This leads to a label-blind TER regularizer to explore the unlabeled examples for the semi-supervised problem. UNSUPERVISED LEARNING OF TRANSFORMA-TION EQUIVARIANT REPRESENTATIONS In this section, we will first present the autoencoding transformation architecture to learn the transformation equivariant representations in a deterministic fashion. Then, a variational alternative approach will be presented to handle the uncertainty in the representation learning by maximizing the mutual information between the learned representations and the applied transformations. AET: A Deterministic Model We begin by defining the notations used in the proposed AutoEncoding Transformation (AET) architecture. Consider a random transformation t sampled from a transformation distribution p(t) (e.g., warping, projective and homographic transformations), as well as an image x drawn from a data distribution p(x) in a sample space X . Then the application of t to x results in a transformed image t(x). The goal of AET focuses on learning a representation encoder E θ : x → E θ (x) with parameters θ, which maps a sample x ∼ p(x) to its representation E θ (x) in a linear space Z. For this purpose, one need to learn a transformation decoder with parameters φ D φ : [E θ (x), E θ (t(x))] →t that makes an estimatet of the input transformation t from the representations of original and transformed samples. Since the transformation decoder takes the encoder outputs rather than original and transformed images, this pushes the encoder to capture the inherent visual structures of images to make a satisfactory estimate of the transformation. Then the AET can be trained to jointly learn the representation encoder E θ and the transformation decoder D φ . A loss function (t,t) measuring the deviation between a transformation t and its estimatet is minimized to train the AET over p(t) and p(x): min θ,φ E t∼p(t),x∼p(x) (t,t)(1) where the estimated transformationt can be written as a function of the encoder E θ and the decoder D φ such that t = D φ [E θ (x), E θ (t(x))] , and the expectation E is taken over the distributions of transformations and data. In this way, the encoder E θ and the decoder D φ can be jointly trained over mini-batches by back-propagating the gradient of the loss to update their parameters. AVT: A Probabilistic Model Alternatively, we can train transformation equivariant representations to contain as much information as possible about applied transformations to recover them. Notations Formally, our goal is to learn an encoder that maps a transformed sample t(x) to a probabilistic representation with the mean f θ and variance σ θ . This results in the following probabilistic representation z ∈ Z of t(x): z = f θ (t(x)) + σ θ (t(x)) •(2) where is sampled from a normal distribution p( ) N ( |0, I) with • denoting the element-wise product. 
AVT: A Probabilistic Model

Alternatively, we can train transformation equivariant representations to contain as much information as possible about the applied transformations, so that they can be recovered.

Notations

Formally, our goal is to learn an encoder that maps a transformed sample $t(x)$ to a probabilistic representation with mean $f_\theta$ and variance $\sigma_\theta$. This results in the following probabilistic representation $z \in \mathcal{Z}$ of $t(x)$:
$$z = f_\theta(t(x)) + \sigma_\theta(t(x)) \circ \epsilon, \tag{2}$$
where $\epsilon$ is sampled from a normal distribution $p(\epsilon) \triangleq \mathcal{N}(\epsilon \mid 0, I)$, with $\circ$ denoting the element-wise product. Thus, the resultant probabilistic representation $z$ follows a normal distribution
$$p_\theta(z \mid t, x) \triangleq \mathcal{N}\big(z \mid f_\theta(t(x)), \sigma^2_\theta(t(x))\big)$$
conditioned on the randomly sampled transformation $t$ and the input data $x$.

On the other hand, the representation of the original sample $x$ is a special case when $t$ is the identity transformation:
$$\tilde{z} = f_\theta(x) + \sigma_\theta(x) \circ \tilde{\epsilon}, \tag{3}$$
whose mean and variance are computed by the deep network with the same weights $\theta$, and $\tilde{\epsilon} \sim p(\tilde{\epsilon}) \triangleq \mathcal{N}(\tilde{\epsilon} \mid 0, I)$.

Generalized Transformation Equivariance

In the conventional definition of transformation equivariance, there should exist an automorphism $\rho(t) \in \mathrm{Aut}(\mathcal{Z}): \mathcal{Z} \to \mathcal{Z}$ in the representation space, such that
$$z = [\rho(t)](\tilde{z}).$$
Here the transformation $\rho(t)$ is independent of the input sample $x$. (Note that the transformation $t$ in the sample space $\mathcal{X}$ and the corresponding transformation $\rho$ in the representation space $\mathcal{Z}$ need not be the same, but the representation transformation $\rho(t)$ should be a function of the sample transformation $t$.) In other words, the representation $z$ of a transformed sample is completely determined by the original representation $\tilde{z}$ and the applied transformation $t$, with no need to access the sample $x$. This is called the steerability property in the literature [4], which enables us to compute $z$ by applying the sample-independent transformation $\rho(t)$ directly to the original representation $\tilde{z}$.

This property can be generalized without relying on the linear group representations of transformations through automorphisms. Instead of sticking with a linear $\rho(t)$, one can seek a more general relation between $z$ and $\tilde{z}$, independently of $x$. From an information-theoretic point of view, this requires that $(\tilde{z}, t)$ jointly contain all necessary information about $z$, so that $z$ can be best estimated from them without a direct access to $x$. This leads us to maximizing the mutual information $I_\theta(z; \tilde{z}, t)$ to learn the generalized transformation equivariant representations. Indeed, by the chain rule and the nonnegativity of mutual information, we have
$$I_\theta(z; \tilde{z}, t) = I_\theta(z; \tilde{z}, t, x) - I_\theta(z; x \mid \tilde{z}, t) \le I_\theta(z; \tilde{z}, t, x),$$
which shows that $I_\theta(z; \tilde{z}, t)$ is upper bounded by the mutual information $I_\theta(z; \tilde{z}, t, x)$ between $z$ and $(\tilde{z}, t, x)$. Clearly, when $I_\theta(z; x \mid \tilde{z}, t) = 0$, $I_\theta(z; \tilde{z}, t)$ attains the maximum value of its upper bound $I_\theta(z; \tilde{z}, t, x)$. In this case, $x$ provides no more information about $z$ than $(\tilde{z}, t)$, which implies one can estimate $z$ directly from $(\tilde{z}, t)$ without accessing $x$. Thus, we propose to solve
$$\theta^\star = \arg\max_\theta I_\theta(z; \tilde{z}, t)$$
to learn the probabilistic encoder $\theta$ in pursuit of such a generalized TER. However, a direct maximization of the above mutual information needs to evaluate an intractable posterior $p_\theta(t \mid z, \tilde{z})$ of the transformation. Thus, we instead lower-bound the mutual information by introducing a surrogate decoder $q_\phi(t \mid z, \tilde{z})$ with parameters $\phi$ to approximate the true posterior.

Variational Approach

Unlike the variational auto-encoder that lower-bounds the data likelihood [11], we directly take a lower bound of the mutual information [29] between $z$ and $(\tilde{z}, t)$:
$$\begin{aligned}
I_\theta(z; \tilde{z}, t) &= I_\theta(z; \tilde{z}) + I_\theta(z; t \mid \tilde{z}) \ge I_\theta(z; t \mid \tilde{z}) \\
&= H(t \mid \tilde{z}) - H(t \mid z, \tilde{z}) \\
&= H(t \mid \tilde{z}) + \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log p_\theta(t \mid z, \tilde{z}) \\
&= H(t \mid \tilde{z}) + \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log q_\phi(t \mid z, \tilde{z}) + \mathbb{E}_{p(z, \tilde{z})} D\big(p_\theta(t \mid z, \tilde{z}) \,\|\, q_\phi(t \mid z, \tilde{z})\big) \\
&\ge H(t \mid \tilde{z}) + \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log q_\phi(t \mid z, \tilde{z}) \triangleq \tilde{I}_{\theta,\phi}(z; \tilde{z}, t),
\end{aligned}$$
where $H(\cdot)$ denotes the (conditional) entropy, and $D(p_\theta(t \mid z, \tilde{z}) \,\|\, q_\phi(t \mid z, \tilde{z}))$ is the non-negative Kullback-Leibler divergence between $p_\theta$ and $q_\phi$. We choose to maximize the variational lower bound $\tilde{I}_{\theta,\phi}(z; \tilde{z}, t)$.
Since $H(t \mid \tilde{z})$ is nonnegative and independent of the model parameters $\theta$ and $\phi$, we choose to solve
$$\max_{\theta,\phi}\; \mathcal{L}^{\mathrm{unsup}}_{\theta,\phi} \triangleq \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log q_\phi(t \mid z, \tilde{z}) = \mathbb{E}_{p(x),\, p(t)} \mathbb{E}_{p(\epsilon),\, p(\tilde{\epsilon})} \log q_\phi(t \mid z, \tilde{z}) \tag{4}$$
to learn $\theta$ and $\phi$ under the expectation over $p(t, z, \tilde{z})$, where the equality follows from the generative process for the representations in Eqs. (2)-(3).

Variational Transformation Decoder

To estimate a family of continuous transformations, we choose a normal distribution $\mathcal{N}(t \mid d_\phi(z, \tilde{z}), \sigma^2_\phi(z, \tilde{z}))$ as the posterior $q_\phi(t \mid z, \tilde{z})$ of the transformation decoder, where the mean $d_\phi(z, \tilde{z})$ and variance $\sigma^2_\phi(z, \tilde{z})$ are implemented by deep networks, respectively. For categorical transformations (e.g., horizontal vs. vertical flips, and rotations of different directions), a categorical distribution $\mathrm{Cat}(t \mid \pi_\phi(z, \tilde{z}))$ can be adopted as the posterior $q_\phi(t \mid z, \tilde{z})$, where each entry of $\pi_\phi(z, \tilde{z})$ is the probability mass for a transformation type. A hybrid distribution can also be defined to combine multiple continuous and categorical transformations, making the variational transformation decoder more flexible and appealing in handling complex transformations.

The posterior $q_\phi(t \mid z, \tilde{z})$ of the transformation is a function of the representations of the original and transformed images. Thus, a natural choice is to use a Siamese encoder network with shared weights to output the representations of the original and transformed samples, and to construct the transformation decoder atop the concatenated representations. Figure 2(a) illustrates the architecture of the AVT network.

Finally, it is not hard to see that the deterministic AET model can be viewed as a special case of the AVT, if the probabilistic representation encoder $p_\theta(z \mid t, x)$ and the transformation decoder $q_\phi(t \mid z, \tilde{z})$ were set to deterministic forms as in the AET.
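A minimal PyTorch sketch of the objective (4) under the Gaussian transformation decoder may help: representations are sampled by the reparameterization in Eqs. (2)-(3), and the decoder is trained by the Gaussian negative log-likelihood of the applied transformation. The module interfaces (an `encoder` returning a mean and log-variance, a `decoder` doing the same for the transformation) are our illustrative assumptions.

```python
import torch

def sample_representation(mean, logvar):
    """Reparameterization of Eqs. (2)-(3): z = f + sigma * eps, eps ~ N(0, I)."""
    eps = torch.randn_like(mean)
    return mean + torch.exp(0.5 * logvar) * eps

def avt_loss(encoder, decoder, x, tx, t_params):
    """Negative variational bound (4): -E log q_phi(t | z, z_tilde)."""
    z = sample_representation(*encoder(tx))        # representation of t(x)
    z_tilde = sample_representation(*encoder(x))   # representation of x (t = id)
    d_mean, d_logvar = decoder(z, z_tilde)         # Gaussian posterior over t
    # Gaussian negative log-likelihood per dimension, up to an additive constant
    nll = 0.5 * (d_logvar + (t_params - d_mean) ** 2 / torch.exp(d_logvar))
    return nll.sum(dim=1).mean()
```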
(SEMI-)SUPERVISED LEARNING OF TRANSFORMATION EQUIVARIANT REPRESENTATIONS

Autoencoding transformations can act as the basic representation block in many learning problems. In this section, we present its role in (semi-)supervised learning tasks to enable more accurate classification of samples by capturing their transformation equivariant representations.

SAT: (Semi-)Supervised Autoencoding Transformations

The unsupervised learning of autoencoding transformations can be generalized to (semi-)supervised cases with labeled samples. Accordingly, the goal is formulated as learning representations that contain as much (mutual) information as possible about not only the applied transformations but also the data labels. Given a labeled sample $(x, y)$, we can define the joint distribution over the representations, the transformation and the label,
$$p_\theta(y, t, z, \tilde{z} \mid x) = p(t)\, p_\theta(\tilde{z} \mid x)\, p_\theta(z \mid t, x)\, p(y \mid x),$$
where we have assumed that $y$ is independent of $t$ and $z$ once the sample $x$ is given.

In the presence of sample labels, the pursuit of transformation equivariant representations can be performed by maximizing the joint mutual information $I_\theta(y, z; t, \tilde{z})$, such that the representation $\tilde{z}$ of the original sample and the transformation $t$ contain sufficient information to classify the label $y$, as well as to learn the representation $z$ equivariant to the transformed sample. Like in (4) for the unsupervised case, the joint mutual information can be lower bounded in the following way:
$$\begin{aligned}
I_\theta(y, z; \tilde{z}, t) &= I_\theta(y, z; \tilde{z}) + I_\theta(y, z; t \mid \tilde{z}) \\
&= \big(I_\theta(z; \tilde{z}) + I_\theta(y; \tilde{z} \mid z)\big) + \big(I_\theta(z; t \mid \tilde{z}) + I_\theta(y; t \mid \tilde{z}, z)\big) \\
&\ge I_\theta(y; \tilde{z} \mid z) + I_\theta(z; t \mid \tilde{z}) \\
&\ge H(y \mid z) + \mathbb{E}_{p_\theta(y, z, \tilde{z})} \log q_\phi(y \mid z, \tilde{z}) + H(t \mid \tilde{z}) + \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log q_\phi(t \mid z, \tilde{z}) \\
&\triangleq \tilde{I}_{\theta,\phi}(y, z; \tilde{z}, t),
\end{aligned}$$
where the first two equalities apply the chain rule of mutual information, and the first inequality uses the nonnegativity of mutual information. In particular, we usually have $I_\theta(y; t \mid \tilde{z}, z) = 0$, which means the transformation should not change the label $y$ of a sample (i.e., transformation invariance of sample labels). The second inequality follows the variational bound derived in the last section. One can also assume that the surrogate posterior $q_\phi(y \mid z, \tilde{z})$ of labels can be simplified to $q_\phi(y \mid \tilde{z})$, since the representation of the original sample is supposed to provide sufficient information to predict the label.

Since $H(y \mid z) \ge 0$ and $H(y, t \mid x)$ is independent of the model parameters $\theta$ and $\phi$, we maximize the following variational lower bound:
$$\max_{\theta,\phi}\; \mathcal{L}^{\mathrm{sup}}_{\theta,\phi} \triangleq \mathbb{E}_{p_\theta(y, \tilde{z})} \log q_\phi(y \mid \tilde{z}) + \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log q_\phi(t \mid z, \tilde{z}) = \mathbb{E}_{p(x)} \Big[ \mathbb{E}_{p(y \mid x),\, p(\tilde{\epsilon})} \log q_\phi(y \mid \tilde{z}) + \mathbb{E}_{p(t),\, p(\epsilon),\, p(\tilde{\epsilon})} \log q_\phi(t \mid z, \tilde{z}) \Big], \tag{5}$$
where $z$ and $\tilde{z}$ are sampled by following Eqs. (2)-(3) in the equality, and the ground truth $y$ is sampled from the label distribution $p(y \mid x)$ directly.

In a deterministic case, it is not hard to show that the first term of (5) is related to the cross-entropy loss in training a supervised classifier, while the second term would reduce to the loss (1) in the deterministic AET model. Therefore, in this sense, the AET loss plays a role in regularizing the cross-entropy loss to train a supervised model.

In addition, a semi-supervised model can be trained by combining the unsupervised and supervised objectives (4) and (5):
$$\max_{\theta,\phi}\; \mathcal{L}^{\mathrm{unsup}}_{\theta,\phi} + \lambda\, \mathcal{L}^{\mathrm{sup}}_{\theta,\phi} \tag{6}$$
with a nonnegative balancing coefficient $\lambda$. This makes it possible to jointly explore labeled and unlabeled examples and their representations equivariant to various transformations. We will demonstrate that the SAT can achieve performances superior to the existing state-of-the-art (semi-)supervised models. Moreover, the competitive performances also show the great potential of the model as the basic representation block in many machine learning and computer vision tasks. Figure 2(b) illustrates the architecture of the SAT model in comparison with its AVT counterpart. In particular, in the SAT, the transformation and label decoders are jointly trained atop the representation encoder.
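The combined objective (6) can be sketched as follows, reusing `sample_representation` and `avt_loss` from the AVT sketch above. The batch layout, the classifier interface, and the default $\lambda$ are illustrative assumptions, not the paper's exact setup.

```python
import torch.nn.functional as F

def sat_loss(encoder, t_decoder, classifier, batch, lam=1.0):
    """Eq. (6): L_unsup + lambda * L_sup over one mini-batch (sketch only)."""
    x_u, tx_u, t_u = batch["unlabeled"]             # unlabeled images + transforms
    x_l, tx_l, t_l, y = batch["labeled"]            # labeled images + labels

    # Unsupervised term (4) on unlabeled data
    l_unsup = avt_loss(encoder, t_decoder, x_u, tx_u, t_u)

    # Supervised term (5): cross-entropy on q_phi(y | z_tilde) plus the
    # transformation term on the labeled data
    z_tilde = sample_representation(*encoder(x_l))  # representation of original x
    l_sup = F.cross_entropy(classifier(z_tilde), y) \
            + avt_loss(encoder, t_decoder, x_l, tx_l, t_l)

    return l_unsup + lam * l_sup
```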
EXPERIMENTS: UNSUPERVISED LEARNING

In this section, we compare the proposed deterministic AET and probabilistic AVT models against other unsupervised methods on the CIFAR-10, ImageNet and Places datasets. The evaluation follows the protocols widely adopted by many existing unsupervised methods by applying the learned representations to downstream tasks.

CIFAR-10 Experiments

First, we evaluate the AET and AVT models on the CIFAR-10 dataset.

Experiment Settings

Architecture. To make a fair and direct comparison with existing models, the Network-In-Network (NIN) is adopted on the CIFAR-10 dataset for the unsupervised learning task [23], [30]. The NIN consists of four convolutional blocks, each of which contains three convolutional layers. Both the AET and the AVT have two NIN branches with shared weights, each taking the original and the transformed images as its input, respectively. The output features of the fourth blocks of the two branches are concatenated and average-pooled to form a 384-d feature vector. An output layer follows to output the predicted transformation for the AET, and the mean $d_\phi$ and the log-of-variance $\log \sigma^2_\phi$ of the predicted transformation for the AVT, with the logarithm scaling the variance to a real value. The first two blocks of each branch are used as the encoder network to output the deterministic representation for the AET, and the mean $f_\theta$ of the probabilistic representation for the AVT. An additional $1 \times 1$ convolution followed by a batch normalization layer is added upon the encoder to produce the log-of-variance $\log \sigma^2_\theta$.

Implementation Details. Both the AET and the AVT networks are trained by SGD with a batch size of 512 original images and their transformed versions. Momentum and weight decay are set to 0.9 and $5 \times 10^{-4}$. For the AET, the learning rate is initialized to 0.1 and scheduled to drop by a factor of 5 after 240, 480, 640, 800 and 1,000 epochs; the network is trained for a total of 1,500 epochs. The AVT network is trained for 4,500 epochs; its learning rate is initialized to $10^{-3}$, increased to $5 \times 10^{-3}$ at epoch 50, and then gradually decayed to $10^{-5}$ starting from epoch 3,000. In the AVT, a single representation is randomly sampled from the encoder $p_\theta(z \mid t, x)$ and fed into the decoder $q_\phi(t \mid z, \tilde{z})$. To fully exploit the uncertainty of the representations, five samples are drawn and averaged as the representation of an image to train the downstream classifiers. We found that averaging randomly sampled representations could outperform using only the mean of the representation.

Results

Comparison with Other Methods. To evaluate the effectiveness of a learned unsupervised representation, a classifier is usually trained upon it. In our experiments, we follow the existing evaluation protocols [23], [24], [31], [32], [33] by building a classifier on top of the second convolutional block. First, we evaluate the classification results by using the AET and AVT representations with both model-based and model-free classifiers. For the model-based classifier, we follow [23] by training a non-linear classifier with three Fully-Connected (FC) layers: each of the two hidden layers has 200 neurons with batch normalization and ReLU activations, and the output layer is a soft-max layer with ten neurons, one for each image class. We also test a convolutional classifier upon the unsupervised features by adding a third NIN block whose output feature map is average-pooled and connected to a linear soft-max classifier.

Table 1 shows the results by different models. It compares both fully supervised and unsupervised methods on CIFAR-10. The unsupervised AET and AVT with the convolutional classifier achieve almost the same error rates as their fully supervised NIN counterpart with four convolutional blocks (7.82% and 7.75% vs. 7.2%). We also compare the models when trained with varying numbers of FC layers in Table 2. The results show that the AVT leads the AET, and it consistently achieves the smallest errors no matter which classifiers are used. We also note that the probabilistic AVT outperforms the deterministic AET in experiments. This is likely due to the ability of the AVT to model the uncertainty of representations in training the downstream classifiers.
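As an aside, the five-sample averaging used above when training the downstream classifiers can be sketched in a few lines; the encoder interface is the same illustrative one assumed in the AVT sketch.

```python
import torch

@torch.no_grad()
def averaged_representation(encoder, x, n_samples=5):
    """Average n random draws from p_theta(z|x) as the image's feature."""
    mean, logvar = encoder(x)
    draws = [mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)
             for _ in range(n_samples)]
    return torch.stack(draws, dim=0).mean(dim=0)
```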
We also find that the projective transformation performs better than the affine transformation when they are used to train the AET, and we thus mainly use the projective transformation to train the AVT.

Comparison based on Model-free KNN Classifiers. We also test the model-free KNN classifier based on the average-pooled feature representations from the second convolutional block. The KNN classifier is model-free, without training a classifier from labeled examples. This enables us to directly evaluate the quality of the learned features; Table 3 reports the results. Table 4 further reports the results of different models on CIFAR-10 when training with a limited number of labeled examples: both the AET and the AVT outperform the fully supervised models as well as the other unsupervised models when only a few labeled examples (at most 1,000 samples per class) are available.

ImageNet Experiments

We further evaluate the performance of the AET and AVT on the ImageNet dataset.

Architectures and Training Details. For a fair comparison with the existing methods [20], [23], [34], two AlexNet branches with shared parameters are created, taking the original and the transformed images as inputs, to train the unsupervised models. The 4,096-d output features from the second-to-last fully connected layer in each branch are concatenated and fed into the transformation decoder. We still use SGD to train the network, with a batch size of 768 images and their transformed counterparts, a momentum of 0.9, and a weight decay of $5 \times 10^{-4}$. For the AET model, the initial learning rate is set to 0.01, and it is dropped by a factor of 10 at epochs 100 and 150; the model is trained for 200 epochs in total. For the AVT, the initial learning rate is set to $10^{-3}$, and it is dropped by a factor of 10 at epochs 300 and 350; the AVT is trained for 400 epochs in total. We still use the average over five samples from the encoder outputs to train the downstream classifiers to evaluate the AVT. Since the projective transformation has shown better performances, we adopt it for the experiments on ImageNet.

Results

Table 5 reports the Top-1 accuracies of the compared methods on ImageNet by following the evaluation protocol in [20]. Two settings are adopted for evaluation, where Conv4 and Conv5 mean training the remaining part of AlexNet on top of Conv4 or Conv5 with the labeled data. All the bottom convolutional layers up to Conv4 or Conv5 are frozen after they are trained in an unsupervised fashion. From the results, in both settings, the AVT model consistently outperforms the other unsupervised models, including the AET.

We also compare with the fully supervised models that give the upper bound of the classification performance by training the AlexNet with all labeled data end-to-end. The classifiers of random models are trained on top of Conv4 and Conv5 whose weights are randomly sampled, which sets the lower bound of the performance. By comparison, the models narrow the performance gap to the upper-bound supervised models from 9.7% and 15.7% by RotNet and DeepCluster on Conv4 and Conv5, to 6.5% and 12.7% by the AET, and to 5.5% and 11.3% by the AVT.

Moreover, we also follow the testing protocol adopted in [40] to compare the models by training a 1,000-way linear classifier on top of different numbers of convolutional layers in Table 6. Again, the AVT consistently outperforms all the compared unsupervised models in terms of the Top-1 accuracy.

TABLE 6: Top-1 accuracy with linear layers on ImageNet. AlexNet is used as the backbone to train the unsupervised models under comparison. A 1,000-way linear classifier is trained upon various convolutional layers of feature maps that are spatially resized to have about 9,000 elements. Fully supervised and random models are also reported to show the upper and lower bounds of unsupervised model performances. Only a single crop is used, and no dropout or local response normalization is applied during testing, except for the models denoted with *, where ten crops are applied to compare results.
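The linear-probe protocol referenced in Table 6 can be sketched as follows: the unsupervised convolutional layers are frozen, their feature maps are spatially pooled, and only a linear classifier is trained. The pooling size (3x3, giving roughly 9,000 elements for typical AlexNet channel counts) and the module names are our assumptions.

```python
import torch
import torch.nn as nn

def linear_probe(frozen_convs, feats_dim, n_classes=1000, lr=0.01):
    """Train a linear classifier on frozen convolutional features."""
    for p in frozen_convs.parameters():
        p.requires_grad_(False)                     # freeze unsupervised layers
    head = nn.Linear(feats_dim, n_classes)
    opt = torch.optim.SGD(head.parameters(), lr=lr, momentum=0.9)

    def step(x, y):
        with torch.no_grad():
            f = frozen_convs(x)
            # pool each map to 3x3 so the flattened vector has ~9,000 elements
            f = nn.functional.adaptive_avg_pool2d(f, 3).flatten(1)
        loss = nn.functional.cross_entropy(head(f), y)
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

    return head, step
```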
Places Experiments

We also compare different models on the Places dataset; Table 7 reports the results. Unsupervised models are pretrained on the ImageNet dataset, and a linear logistic regression classifier is trained on top of different layers of convolutional feature maps with the Places labels. This assesses the generalizability of unsupervised features from one dataset to another. The models are still based on AlexNet variants. We compare with the fully supervised models trained with the Places labels and the ImageNet labels respectively, as well as with the random networks. Both the AET and the AVT models outperform the other unsupervised models, except that they perform slightly worse than Counting [40] with a shallow representation by Conv1 and Conv2.

EXPERIMENTS: (SEMI-)SUPERVISED LEARNING

We compare the proposed SAT model with the other state-of-the-art semi-supervised methods in this section. For the sake of a fair comparison, we follow the test protocol used in the literature [26], [27] on both CIFAR-10 [42] and SVHN [43], which are widely used as benchmark datasets to evaluate semi-supervised models.

Network Architecture and Implementation Details

Network Architecture. For the sake of a fair comparison, a 13-layer convolutional neural network, which has been widely used in existing semi-supervised models [26], [27], [28], is adopted as the backbone to build the SAT. It consists of three convolutional blocks, each of which contains three convolutional layers. The SAT has two branches of such three blocks with shared weights, each taking the original and the transformed images as input, respectively. The output feature maps from the third blocks of the two branches are concatenated and average-pooled, resulting in a 256-d feature vector. A fully connected layer follows to predict the mean $d_\phi$ and the log-of-variance $\log \sigma^2_\phi$ of the transformation. The first two blocks are used as the encoder to output the mean $f_\theta$ of the representation, upon which an additional $1 \times 1$ convolution layer with batch normalization is added to compute the log-of-variance $\log \sigma^2_\theta$.

In addition, a classifier head is built on the representation from the encoder. Specifically, we draw five random representations of an input image and feed their average to the classifier. The classifier head has the same structure as the third convolutional block, but its weights differ from those of the Siamese branches of the transformation decoder. The output feature map of this convolutional block is globally average-pooled to a 128-d feature vector, and a soft-max fully connected layer follows to predict the image label.

Implementation Details. The representation encoder, the transformation decoder and the classifier are trained in an end-to-end fashion. In particular, SGD is adopted to iteratively update their weights over a minibatch with 500 images, their transformed counterparts, and 40 labeled examples. Momentum and weight decay are set to 0.9 and $5 \times 10^{-4}$, respectively. The model is trained for a total of 4,500 epochs. The learning rate is initialized to $10^{-3}$; it is increased to $5 \times 10^{-3}$ at epoch 50, before it is linearly decayed to $10^{-5}$ starting from epoch 3,000. For a fair comparison, we adopt the entropy minimization used in the state-of-the-art Virtual Adversarial Training [28]. A standard set of data augmentations from the literature [26], [27], [28] is also adopted throughout the experiments, including both horizontal flips and random translations on CIFAR-10, and only random translations on SVHN. The projective transformation, which performs better than the affine transformation, is adopted to train the semi-supervised representations.
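Since random projective transformations are central to the training above, here is a plausible sketch of how one might sample them: jitter the four image corners, fit the homography with OpenCV, and use the eight normalized matrix entries as the regression target $t$. The jitter scale and this particular parameterization are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np
import cv2

def random_projective(img, max_shift=0.125, rng=np.random):
    """Sample a random homography by jittering the four image corners."""
    h, w = img.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = rng.uniform(-max_shift, max_shift, size=(4, 2)).astype(np.float32)
    dst = src + jitter * np.float32([w, h])
    M = cv2.getPerspectiveTransform(src, dst)   # 3x3 homography matrix
    t_params = (M / M[2, 2]).flatten()[:8]      # 8 free parameters as target
    return cv2.warpPerspective(img, M, (w, h)), t_params
```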
Results

We compare with the state-of-the-art semi-supervised methods in the literature [26], [27]. In particular, the proposed SAT reduces the average error rates of Mean Teacher (the second best performing method) by 30.9%, 25.6%, and 22.2% relatively with 1,000, 2,000, and 4,000 labels on CIFAR-10, while reducing them by 1.1%, 11%, and 12.9% relatively with 250, 500, and 1,000 labels on SVHN.

The compared semi-supervised methods, including the Π model [26], Temporal Ensembling [26], and Mean Teacher [27], attempt to maximize the consistency of model predictions on the transformed and original images to train semi-supervised classifiers. While they also apply transformations to explore unlabeled examples, the competitive performance of the SAT model shows that transformation-equivariant representations are more compelling for classifying images than predicting consistent labels under transformations, as the compared methods do. This justifies the proposed criterion of pursuing transformation equivariance as a regularizer to train a classifier. It is not hard to see that the SAT can be integrated into the other semi-supervised methods as their base representation, and we believe this could further boost their performances. This is left to future work, as it is beyond the scope of this paper.

The Impact of Entropy Minimization

We also conduct an ablation study of the effect of Entropy Minimization (EntMin) on the model performance. EntMin was used in VAT [28], which outperformed the other semi-supervised methods in the literature. Here, we compare the error rates between the SAT and the VAT, with and without the EntMin. As shown in Table 10, no matter whether the entropy minimization is adopted, the SAT always outperforms the corresponding VAT. We also note that, even without entropy minimization, the SAT still performs better than the other state-of-the-art semi-supervised classifiers, such as Mean Teacher, Temporal Ensembling, and the Π model, shown in Table 8. This demonstrates the compelling performance of the SAT model.
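For reference, the entropy-minimization term borrowed from VAT is simply the mean prediction entropy on unlabeled data; a minimal sketch (ours) is:

```python
import torch.nn.functional as F

def entropy_minimization(logits):
    """EntMin: mean prediction entropy over unlabeled examples."""
    p = F.softmax(logits, dim=1)
    return -(p * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```

Adding this term to the training objective encourages the classifier to make confident (low-entropy) predictions on unlabeled examples.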
Comparison with Data Augmentation by Transformations

We also compare the performances between the SAT and a classification network trained with images augmented by the transformations. Specifically, in each minibatch, input images are augmented with the same set of random projective transformations used in the SAT. The transformation-augmented images and their labels are used to train a network with the same 13-layer architecture that has been adopted as the SAT backbone. Note that the transformation augmentations are applied on top of the standard augmentations mentioned in the implementation details, for a fair comparison with the SAT. Table 11 compares the results between the SAT and the Data Augmentation by Transformation (DAT) classifier on CIFAR-10. It shows that the SAT significantly outperforms the DAT classifier.

Moreover, the projective transformations used in the SAT could severely distort training images, which could incur undesired updates to the model weights if the distorted images were naively used to train the network. This is witnessed by the result that data augmentation by transformations performs even worse than the supervised-only method (see Table 8). In contrast, the SAT avoids a direct use of the transformed images to supervise the model training with their labels. Instead, it trains the learned representations to contain as much information as possible about the transformations. The superior performance demonstrates its outstanding ability to classify images by exploring the variations of visual structures induced by transformations on both labeled and unlabeled images.

CONCLUSION AND FUTURE WORKS

In this paper, we present a novel approach of AutoEncoding Transformations (AET) to learn representations that equivary to transformations applied to images. Unlike the group equivariant convolutions that would become intractable with a composition of complex transformations, the AET model seeks to learn representations of arbitrary forms by reconstructing transformations from the encoded representations of the original and transformed images. The idea is further extended to a probabilistic model by maximizing the mutual information between the learned representation and the applied transformation. The intractable maximization problem is handled by introducing a surrogate transformation decoder and maximizing a variational lower bound of the mutual information, resulting in the AutoEncoding Variational Transformations (AVT). Along this direction, a (Semi-)Supervised Autoencoding Transformation (SAT) approach can be derived by maximizing the joint mutual information of the learned representation with both the transformation and the label for a given sample. The proposed AET paradigm lays a solid foundation to explore transformation equivariant representations in many learning tasks. In particular, we conduct experiments to show its superior performances on both unsupervised and (semi-)supervised learning tasks, following standard evaluation protocols. In the future, we will explore the great potential of applying the learned AET representation as the building block in more learning tasks, such as (instance) semantic segmentation, object detection, super-resolution reconstruction, few-shot learning, and fine-grained classification.

Guo-Jun Qi (M'14-SM'18) is the Chief Scientist leading and overseeing an international R&D team for multiple artificial intelligence services on the Huawei Cloud since August 2018. He was a faculty member in the Department of Computer Science and the director of the MAchine Perception and LEarning (MAPLE) Lab at the University of Central Florida since August 2014. Prior to that, he was also a Research Staff Member at IBM T.J. Watson Research Center, Yorktown Heights, NY. His research interests include machine learning and knowledge discovery from multi-modal data sources to build smart and reliable information and decision-making systems. Dr. Qi has published more than 100 papers in a broad range of venues in pattern recognition, machine learning and computer vision. He also has served or will serve as a general co-chair for ICME 2021,
6,935
1906.08628
2972729785
Transformation Equivariant Representations (TERs) aim to capture the intrinsic visual structures that equivary to various transformations, by expanding the notion of translation equivariance underlying the success of Convolutional Neural Networks (CNNs). For this purpose, we present both deterministic AutoEncoding Transformations (AET) and probabilistic AutoEncoding Variational Transformations (AVT) models to learn visual representations from generic groups of transformations. While the AET is trained by directly decoding the transformations from the learned representations, the AVT is trained by maximizing the joint mutual information between the learned representations and the transformations. This results in Generalized TERs (GTERs) that are equivariant to transformations in a more general fashion, capturing complex patterns of visual structure beyond the conventional linear equivariance under a transformation group. The presented approach can be extended to (semi-)supervised models by jointly maximizing the mutual information of the learned representation with both the labels and the transformations. Experiments demonstrate that the proposed models outperform the state-of-the-art models in both unsupervised and (semi-)supervised tasks.
Auto-Encoders and GANs. Unsupervised auto-encoders have been extensively studied in the literature @cite_2 @cite_1 @cite_11. Existing auto-encoders are trained by reconstructing input data from the outputs of encoders. A large category of auto-encoder variants has been proposed. Among them is the Variational Auto-Encoder (VAE) @cite_34 that maximizes the lower bound of the data likelihood to train a pair of probabilistic encoder and decoder, while beta-VAE seeks to disentangle representations by introducing an adjustable hyperparameter on the capacity of the latent channel to balance between the independence constraint and the reconstruction accuracy @cite_29. Denoising auto-encoders @cite_11 attempt to reconstruct noise-corrupted data to learn robust representations, while contractive auto-encoders @cite_12 encourage learning representations that are invariant to small perturbations on the data. Along this direction, @cite_3 propose capsule networks to explore transformation equivariance by minimizing the discrepancy between the reconstructed and target data.
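As a concrete reference point for the VAE-style objectives surveyed above, a minimal sketch (ours, not from the cited papers) of the beta-VAE loss is given below; setting `beta=1.0` recovers the standard VAE bound.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction + beta * KL(q(z|x) || N(0, I))."""
    recon = F.mse_loss(x_recon, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl
```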
{ "abstract": [ "Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce beta-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter beta that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that beta-VAE with appropriately tuned beta > 1 qualitatively outperforms VAE (beta = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, beta-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter, which can be directly optimised through a hyper parameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.", "A common misperception within the neural network community is that even with nonlinearities in their hidden layer, autoassociators trained with backpropagation are equivalent to linear methods such as principal component analysis (PCA). Our purpose is to demonstrate that nonlinear autoassociators actually behave differently from linear methods and that they can outperform these methods when used for latent extraction, projection, and classification. While linear autoassociators emulate PCA, and thus exhibit a flat or unimodal reconstruction error surface, autoassociators with nonlinearities in their hidden layer learn domains by building error reconstruction surfaces that, depending on the task, contain multiple local valleys. This interpolation bias allows nonlinear autoassociators to represent appropriate classifications of nonlinear multimodal domains, in contrast to linear autoassociators, which are inappropriate for such tasks. In fact, autoassociators with hidden unit nonlinearities can be shown to perform nonlinear classification and nonlinear recognition.", "", "An autoencoder network uses a set of recognition weights to convert an input vector into a code vector. It then uses a set of generative weights to convert the code vector into an approximate reconstruction of the input vector. We derive an objective function for training autoencoders based on the Minimum Description Length (MDL) principle. The aim is to minimize the information required to describe both the code vector and the reconstruction error. We show that this information is minimized by choosing code vectors stochastically according to a Boltzmann distribution, where the generative weights define the energy of each possible code vector given the input vector. Unfortunately, if the code vectors use distributed representations, it is exponentially expensive to compute this Boltzmann distribution because it involves all possible code vectors. 
We show that the recognition weights of an autoencoder can be used to compute an approximation to the Boltzmann distribution and that this approximation gives an upper bound on the description length. Even when this bound is poor, it can be used as a Lyapunov function for learning both the generative and the recognition weights. We demonstrate that this approach can be used to learn factorial codes.", "", "We present in this paper a novel approach for training deterministic auto-encoders. We show that by adding a well chosen penalty term to the classical reconstruction cost function, we can achieve results that equal or surpass those attained by other regularized auto-encoders as well as denoising auto-encoders on a range of datasets. This penalty term corresponds to the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input. We show that this penalty term results in a localized space contraction which in turn yields robust features on the activation layer. Furthermore, we show how this penalty term is related to both regularized auto-encoders and denoising auto-encoders and how it can be seen as a link between deterministic and non-deterministic auto-encoders. We find empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold. Finally, we show that by using the learned features to initialize a MLP, we achieve state of the art classification error on a range of datasets, surpassing other methods of pretraining.", "Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite." ], "cite_N": [ "@cite_29", "@cite_1", "@cite_3", "@cite_2", "@cite_34", "@cite_12", "@cite_11" ], "mid": [ "2753738274", "2164122462", "", "2102409316", "", "2218318129", "2025768430" ] }
Learning Generalized Transformation Equivariant Representations via AutoEncoding Transformations
In this paper, we aspire to show that transformations play a fundamental role in learning powerful representations, by transforming images as a means to reveal the intrinsic patterns from transformed visual structures. In particular, Transformation Equivariant Representation (TER) learning seeks to model representations that equivary to various transformations on images. In other words, the representation of an image ought to change in the same way as the image is transformed. This is motivated by the assumption that image representations should capture the intrinsic visual structures such that transformations can be decoded from the representations of the original and transformed images. Based on this assumption, we formally present a novel criterion of AutoEncoding Transformations (AET) to learn the TERs for various groups of transformations.

Learning the TERs was adopted in Hinton's seminal work on learning transformation equivariant capsules [1], and plays a critical role in the success of Convolutional Neural Networks (CNNs) [2]. Specifically, the representations learned by the CNNs are translation equivariant, as their feature maps are shifted in the same way as input images are translated. On top of these feature maps that preserve the visual structures of translation equivariance, fully connected layers are built to output the predicted labels of input images. Obviously, the translation equivariant convolutional features play a pivotal role in delivering the state-of-the-art performances of the deep networks. Thus, they are extended beyond translations to learn more expressive representations of equivariance to generic types of transformations, such as affine, projective and homographic transformations. Along this direction, the group equivariant CNNs [3] are developed to guarantee that the transformation of input images results in the same transformation of their representations. However, the group equivariant CNNs [3] and their variants [4], [5] are restricted to discrete transformations, and the resultant representations are also limited to a group representation of linear transformations. These limitations restrict their ability to model group representations of complex transformations that could be continuous and nonlinear in many learning tasks, ranging from unsupervised to semi-supervised and supervised learning.

Unsupervised Learning of Transformation Equivariant Representations

The focus of this paper is on the principle of autoencoding transformations and its application to learning transformation equivariant representations. The core idea is to encode data with representations from which the transformations can be decoded as much as possible. We will begin with the unsupervised learning of such representations without involving any labeled data, and then proceed to a generalization to semi-supervised and supervised representations by encoding the label information as well.

Unlike group equivariant CNNs that learn feature maps mathematically satisfying the transformation equivariance as a function of the group of transformations, the proposed AutoEncoding Transformations (AET) presents an autoencoding architecture to learn transformation equivariant representations by reconstructing applied transformations. As long as a transformation of input images results in equivariant representations, it should be well decoded from the representations of the original and transformed images.
Compared with the group equivariant CNNs, the AET model is more flexible and tractable in tackling any transformations and their compositions, since it does not rely on a strict convolutional structure to impose the equivariance.

The AET is also in contrast to the conventional AutoEncoding Data (AED) paradigm, which instead aims to reconstruct the data rather than the transformations. Figure 1(a) and (b) illustrate the comparison between the AET and the AED. Since the space of transformations (e.g., the few parameters of transformations) is of a much lower dimension than the data space (e.g., the pixel space of images), the decoder of the AET can be much shallower than that of the AED. This allows the backpropagated errors to more sufficiently train the encoder that models the representations of input data in the AET architecture.

Moreover, an AET model can be trained from an information-theoretic perspective by maximizing the information in the learned representation about the applied transformation and the input data. This generalizes the group representations of linear transformations to more general forms that could equivary nonlinearly to input transformations. It results in Generalized Transformation Equivariant Representations (GTERs) that can capture more complex patterns of visual structure under transformations. Unfortunately, this results in an intractable optimization problem to maximize the mutual information between representations and transformations. A variational lower bound of the mutual information can be derived by introducing a surrogate transformation decoder, yielding a novel model of AutoEncoding Variational Transformations (AVT) as an alternative to the deterministic AET.

(Semi-)Supervised Learning of Transformation Equivariant Representations

While both the AET and the AVT are trained in an unsupervised fashion, they can act as the basic representation for building (semi-)supervised classifiers. Along this direction, we can train a (Semi-)Supervised Autoencoding Transformation (SAT) model that jointly learns the transformation equivariant representations as well as the corresponding classifiers. Figure 1(c) illustrates the SAT model, where a classifier head is added upon the representation encoder of an AET network. The SAT can be based on either the deterministic AET or the probabilistic AVT architecture. In particular, along the direction pointed out by the AVT, we seek to train the proposed (semi-)supervised transformation equivariant classifiers by maximizing the mutual information of the learned representations with the transformations and labels. In this way, the trained SAT model can not only handle transformed data through their equivarying representations, but also encode the labeling information through the supervised classifier. The resultant SAT also contains the deterministic model based on the AET as a special case, obtained by fixing deterministic forms for the representation encoder and the transformation decoder.

The transformation equivariance in the SAT model is contrary to the data augmentation by transformations in the deep learning literature [2]. First, data augmentation is only applicable to augmenting the labeled examples for model training, and cannot be extended to unlabeled data. This limits it in semi-supervised learning by exploring the unlabeled data. Second, data augmentation aims to enforce the transformation invariance, in which the labels of transformed data are supposed to be invariant.
This differs from our motivation to encode the inherent visual structures that equivary under various transformations. Actually, in the (semi-)supervised transformation equivariant classifiers, we aim to seamlessly integrate the principles of both training transformation equivariant representations and training transformation invariant classifiers. Indeed, both principles have played key roles in the compelling performances of the CNNs and their modern variants. This is witnessed by the translation equivariant convolutional feature maps and the classifiers built atop them, which are supposed to make transformation-invariant predictions with the spatial pooling and fully connected layers. We will show that the proposed SAT extends the translation equivariance in the CNNs to cover a generic class of transformation equivariance, as well as encodes the labels to train the representations and the associated transformation invariant classifiers. We hope this can deepen our understanding of the interplay between transformation equivariance and invariance, both of which play fundamental roles in training robust classifiers with labeled and unlabeled data.

The remainder of this paper is organized as follows. We review the related works in Section 2. The unsupervised and (semi-)supervised learning of transformation equivariant representations are presented in the autoencoding transformation framework in Section 3 and Section 4, respectively. We present experiment results in Section 5 and Section 6 for the unsupervised and semi-supervised tasks. We conclude the paper and discuss future works in Section 7.

RELATED WORKS

Transformation-Equivariant Representations

Learning transformation-equivariant representations can be traced back to the seminal work on training capsule nets [1], [6], [7]. The transformation equivariance is characterized by the various directions of capsules, while the confidence of belonging to a particular class is captured by their lengths. Many efforts have been made in the literature [3], [4], [5] on extending the conventional translation-equivariant convolutions to cover more transformations. Among them are group equivariant convolutions (G-convolutions) [3] that have been developed to equivary to more types of transformations. The idea of group equivariance has also been introduced to the capsule nets [5] by ensuring the equivariance of output pose vectors to a group of transformations with a generic routing mechanism. However, the group equivariant convolution is restricted to discrete transformations, which limits its ability to learn representations equivariant to generic continuous transformations.

Unsupervised Representation Learning

Auto-Encoders and GANs. Unsupervised auto-encoders have been extensively studied in the literature [8], [9], [10]. Existing auto-encoders are trained by reconstructing input data from the outputs of encoders. A large category of auto-encoder variants has been proposed. Among them is the Variational Auto-Encoder (VAE) [11] that maximizes the lower bound of the data likelihood to train a pair of probabilistic encoder and decoder, while beta-VAE seeks to disentangle representations by introducing an adjustable hyperparameter on the capacity of the latent channel to balance between the independence constraint and the reconstruction accuracy [12]. Denoising auto-encoders [10] attempt to reconstruct noise-corrupted data to learn robust representations, while contractive auto-encoders [13] encourage learning representations that are invariant to small perturbations on the data.
We also find that the projective transformation also performs better than the affine transformation when they are used to train the AET, and thus we mainly use the projective transformation to train the AVT. Comparison based on Model-free KNN Classifiers. We also test the model-free KNN classifier based on the averaged-pooled feature representations from the second convolutional block. The KNN classifier is model-free without training a classifier from labeled examples. This enables us to make a direct evaluation on the quality of learned features. Table 3 Table 4 reports the results of different models on CIFAR-10. Both the AET and AVT outperform the fully supervised models as well as the other unsupervised models when only few labeled examples (≤ 1000 samples per class) are available. ImageNet Experiments We further evaluate the performance by AET and AVT on the ImageNet dataset. Architectures and Training Details For a fair comparison with the existing method [20], [23], [34], two AlexNet branches with shared parameters are created with original and transformed images as inputs to train unsupervised models, respectively. The 4, 096-d output features from the second last fully connected layer in each branch are concatenated and fed into the transformation decoder. We still use SGD to train the network, with a batch size of 768 images and the transformed counterparts, a momentum of 0.9, a weight decay of 5 × 10 −4 . For the AET model, the initial learning rate is set to 0.01, and it is dropped by a factor of 10 at epoch 100 and 150. The model is trained for 200 epochs in total. For the AVT, the initial learning rate is set to 10 −3 , and it is dropped by a factor of 10 at epoch 300 and 350. The AVT is trained for 400 epochs in total. We still use the average over five samples from the encoder outputs to train the downstream classifiers to evaluate the AVT. Since the projective transformation has shown better performances, we adopt it for the experiments on ImageNet. Table 5 reports the Top-1 accuracies of the compared methods on ImageNet by following the evaluation protocol in [20]. Two settings are adopted for evaluation, where Conv4 and Conv5 mean to train the remaining part of AlexNet on top of Conv4 and Conv5 with the labeled data. All the bottom convolutional layers up to Conv4 and Conv5 are frozen after they are trained in an unsupervised fashion. From the results, in both settings, the AVT model consistently outperforms the other unsupervised models, including the AET. Results We also compare with the fully supervised models that give the upper bound of the classification performance by training the AlexNet with all labeled data end-to-end. The classifiers of random models are trained on top of Conv4 and Conv5 whose weights are randomly sampled, which set the lower bounded performance. By comparison, the AET models narrow the performance gap to the upper bound supervised models from 9.7% and 15.7% by RotNet and DeepCluster on Conv4 and Conv5, to 6.5% and 12.7% by the AET, and to 5.5% and 11.3% by the AVT. Moreover, we also follow the testing protocol adopted in [40] to compare the models by training a 1, 000-way linear classifier on top of different numbers of convolutional layers in Table 6. Again, the AVT consistently outperforms all the compared unsupervised models in terms of the Top-1 accuracy. Places Experiments We also compare different models on the Places dataset. Table 7 reports the results. 
Unsupervised models are pretrained on the ImageNet dataset, and a linear logistic regression classifier is trained on top of different layers of convolutional feature maps with Places labels. It assesses the generalizability of unsupervised features from one dataset to another. The models are still based on AlexNet variants. We compare with the fully supervised models trained with the Places labels and ImageNet labels respectively, as well as with the random networks. Both the AET and the AVT models outperform the other unsupervised models, except performing slightly worse than Counting [40] with a shallow representation by Conv1 and Conv2. EXPERIMENTS: (SEMI-)SUPERVISED LEARNING We compare the proposed SAT model with the other stateof-the-art semi-supervised methods in this section. For the sake of fair comparison, we follow the test protocol used in literature [26], [27] on both CIFAR-10 [42] and SVHN [43], which are widely used as the benchmark datasets to evaluate the semi-supervised models. Network Architecture and Implementation Details Network Architecture For the sake of a fair comparison, a 13-layer convolutional neural network, which has been widely used in existing semi-supervised models [26], [27], [28], is adopted as the backbone to build the SAT. It consists of three convolutional blocks, each of which contains three convolution layers. The SAT has two branches of such three blocks with shared weights, each taking the original and transformed images as input, respectively. The output feature maps from the third blocks of two branches are concatenated and average-pooled, resulting in a 256-d feature vector. A fully-connected layer follows to predict the mean d φ and the log-of-variance log σ 2 φ of the transformation. The first two blocks are used as the encoder to output the mean f θ of the representation, upon which an additional 1 × 1 convolution layer with batch normalization is added to compute the log-of-variance log σ 2 θ . In addition, a classifier head is built on the representation from the encoder. Specifically, we draw five random representations of an input image, and feed their average to the classifier. The classifier head has the same structure as the third convolutional block but its weights differ from the Siamese branches of transformation decoder. The output feature map of this convolutional block is globally averagepooled to 128-d feature vector, and a softmax fully connected layer follows to predict the image label. Implementation Details The representation encoder, transformation decoder and the classifier are trained in an end-toend fashion. In particular, the SGD is adopted to iteratively update their weights over a minbatch with 500 images, their transformed counterparts, and 40 labeled examples. Momentum and weight decay are set to 0.9 and 5 × 10 −4 , respectively. The model is trained for a total of 4, 500 epochs. The learning rate is initialized to 10 −3 . It is increased to 5 × 10 −3 at epoch 50, before it is linearly decayed to 10 −5 starting from 3, 000 epochs. For a fair comparison, we adopt the entropy minimization used in the state-of-the-art virtual adversarial training [28]. A standard set of data augmentations in literature [26], [27], [28] are also adopted through experiments, which include both horizontal flips and random translations on CIFAR-10, and only random translations on SVHN. The projective transformation that performs the better than the affine transformation is adopted to train the semi-supervised representations. 
Results
We compare with the state-of-the-art semi-supervised methods in the literature [26], [27]. In particular, the proposed SAT reduces the average error rates of Mean Teacher (the second best performing method) by 30.9%, 25.6%, and 22.2% relatively with 1,000, 2,000, and 4,000 labels on CIFAR-10, while reducing them by 1.1%, 11%, and 12.9% relatively with 250, 500, and 1,000 labels on SVHN. The compared semi-supervised methods, including the Π model [26], Temporal Ensembling [26], and Mean Teacher [27], attempt to maximize the consistency of model predictions on the transformed and original images to train semi-supervised classifiers. While they also apply transformations to explore unlabeled examples, the competitive performance of the SAT model shows that transformation-equivariant representations are more compelling for classifying images than the compared methods that predict consistent labels under transformations. This justifies the proposed criterion of pursuing transformation equivariance as a regularizer to train a classifier. It is not hard to see that the SAT can be integrated into the other semi-supervised methods as their base representation, and we believe this could further boost their performances. We leave this to future work as it is beyond the scope of this paper.

The Impact of Entropy Minimization
We also conduct an ablation study of the effect of Entropy Minimization (EntMin) on the model performance. EntMin was used in VAT [28], which outperformed the other semi-supervised methods in the literature. Here, we compare the error rates between the SAT and the VAT with and without EntMin. As shown in Table 10, regardless of whether entropy minimization is adopted, the SAT always outperforms the corresponding VAT. We also note that, even without entropy minimization, the SAT still performs better than the other state-of-the-art semi-supervised classifiers such as Mean Teacher, Temporal Ensembling, and the Π model shown in Table 8. This demonstrates the compelling performance of the SAT model. (A sketch of the entropy-minimization regularizer follows below.)

Comparison with Data Augmentation by Transformations
We also compare the performances between the SAT and a classification network trained with images augmented by the transformations. Specifically, in each minibatch, input images are augmented with the same set of random projective transformations used in the SAT. The transformation-augmented images and their labels are used to train a network with the same 13-layer architecture that has been adopted as the SAT backbone. Note that the transformation augmentations are applied on top of the standard augmentations mentioned in the implementation details for a fair comparison with the SAT. Table 11 compares the results between the SAT and the Data Augmentation by Transformation (DAT) classifier on CIFAR-10. It shows the SAT significantly outperforms the DAT classifier.

Table 6: Top-1 accuracy with linear layers on ImageNet. AlexNet is used as the backbone to train the unsupervised models under comparison. A 1,000-way linear classifier is trained upon various convolutional layers of feature maps that are spatially resized to have about 9,000 elements. Fully supervised and random models are also reported to show the upper and lower bounds of unsupervised model performance. Only a single crop is used, and no dropout or local response normalization is applied during testing, except for the models denoted with *, where ten crops are applied to compare results.
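The entropy-minimization regularizer in the ablation above can be written down compactly. The following is a minimal PyTorch-style sketch under stated assumptions; the function name and the weighting coefficient in the usage line are illustrative, not the paper's code:

```python
import torch
import torch.nn.functional as F

def entropy_minimization_loss(logits: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy of the predicted class distribution.

    Adding this term to the training objective pushes the classifier
    toward confident (low-entropy) predictions on unlabeled images.
    """
    log_p = F.log_softmax(logits, dim=1)   # log-probabilities, shape (N, C)
    p = log_p.exp()                        # probabilities
    return -(p * log_p).sum(dim=1).mean()  # H(p) averaged over the batch

# Illustrative usage with a hypothetical weight `w_ent`:
# total_loss = supervised_loss + w_ent * entropy_minimization_loss(unlabeled_logits)
```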
Moreover, the projective transformations used in the SAT could severely distort training images, which could incur undesired updates to the model weights if the distorted images were naively used to train the network. This is witnessed by the result that data augmentation by transformations performs even worse than the supervised-only method (see Table 8). In contrast, the SAT avoids a direct use of the transformed images to supervise the model training with their labels. Instead, it trains the learned representations to contain as much information as possible about the transformations. The superior performance demonstrates its outstanding ability to classify images by exploring the variations of visual structures induced by transformations on both labeled and unlabeled images.

CONCLUSION AND FUTURE WORKS
In this paper, we present a novel approach of AutoEncoding Transformations (AET) to learn representations that equivary to transformations applied to images. Unlike the group equivariant convolutions, which would become intractable with a composition of complex transformations, the AET model seeks to learn representations of arbitrary forms by reconstructing transformations from the encoded representations of original and transformed images. The idea is further extended to a probabilistic model by maximizing the mutual information between the learned representation and the applied transformation. The intractable maximization problem is handled by introducing a surrogate transformation decoder and maximizing a variational lower bound of the mutual information, resulting in the AutoEncoding Variational Transformations (AVT). Along this direction, a (Semi-)Supervised Autoencoding Transformation (SAT) approach can be derived by maximizing the joint mutual information of the learned representation with both the transformation and the label for a given sample. The proposed AET paradigm lays a solid foundation for exploring transformation equivariant representations in many learning tasks. In particular, we conduct experiments to show its superior performances on both unsupervised and (semi-)supervised learning tasks following standard evaluation protocols. In the future, we will explore the great potential of applying the learned AET representation as the building block in more learning tasks, such as (instance) semantic segmentation, object detection, super-resolution reconstruction, few-shot learning, and fine-grained classification.

Guo-Jun Qi
Guo-Jun Qi (M'14-SM'18) is the Chief Scientist leading and overseeing an international R&D team for multiple artificial intelligence services on the Huawei Cloud since August 2018. He was previously a faculty member in the Department of Computer Science and the director of the MAchine Perception and LEarning (MAPLE) Lab at the University of Central Florida, which he joined in August 2014. Prior to that, he was a Research Staff Member at IBM T.J. Watson Research Center, Yorktown Heights, NY. His research interests include machine learning and knowledge discovery from multi-modal data sources to build smart and reliable information and decision-making systems. Dr. Qi has published more than 100 papers in a broad range of venues in pattern recognition, machine learning and computer vision. He also has served or will serve as a general co-chair for ICME 2021,
6,935
1906.08628
2972729785
Transformation Equivariant Representations (TERs) aim to capture the intrinsic visual structures that equivary to various transformations by expanding the notion of translation equivariance underlying the success of Convolutional Neural Networks (CNNs). For this purpose, we present both deterministic AutoEncoding Transformations (AET) and probabilistic AutoEncoding Variational Transformations (AVT) models to learn visual representations from generic groups of transformations. While the AET is trained by directly decoding the transformations from the learned representations, the AVT is trained by maximizing the joint mutual information between the learned representation and transformations. This results in Generalized TERs (GTERs) equivariant against transformations in a more general fashion by capturing complex patterns of visual structures beyond the conventional linear equivariance under a transformation group. The presented approach can be extended to (semi-)supervised models by jointly maximizing the mutual information of the learned representation with both labels and transformations. Experiments demonstrate the proposed models outperform the state-of-the-art models in both unsupervised and (semi-)supervised tasks.
On the other hand, Generative Adversarial Nets (GANs) have also been used to train unsupervised representations. Unlike the auto-encoders, the GANs @cite_15 and their variants @cite_24 @cite_5 @cite_21 @cite_31 generate data from noises drawn from a simple distribution, with a discriminator trained adversarially to distinguish between real and fake data. The sampled noises can be viewed as the representations of generated data over a manifold, and one can train an encoder by inverting the generator to find the generating noise. This can be implemented by jointly training a pair of mutually inverse generator and encoder @cite_24 @cite_5 . There also exist GANs that generalize better in producing unseen data, based on a Lipschitz assumption on the real data distribution @cite_21 @cite_31 , which can give rise to more powerful representations of data beyond the training examples @cite_24 @cite_5 @cite_25 . Compared with the auto-encoders, GANs do not rely on learning a one-to-one reconstruction of data; instead, they aim to generate the entire distribution of data.
{ "abstract": [ "", "In this paper, we present the Lipschitz regularization theory and algorithms for a novel Loss-Sensitive Generative Adversarial Network (LS-GAN). Specifically, it trains a loss function to distinguish between real and fake samples by designated margins, while learning a generator alternately to produce realistic samples by minimizing their losses. The LS-GAN further regularizes its loss function with a Lipschitz regularity condition on the density of real data, yielding a regularized model that can better generalize to produce new data from a reasonable number of training examples than the classic GAN. We will further present a Generalized LS-GAN (GLS-GAN) and show it contains a large family of regularized GAN models, including both LS-GAN and Wasserstein GAN, as its special cases. Compared with the other GAN models, we will conduct experiments to show both LS-GAN and GLS-GAN exhibit competitive ability in generating new images in terms of the Minimum Reconstruction Error (MRE) assessed on a separate test set. We further extend the LS-GAN to a conditional form for supervised and semi-supervised learning problems, and demonstrate its outstanding performance on image classification tasks.", "The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.", "We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks and a discriminative network is trained to distinguish between joint latent data-space samples from the generative network and joint samples from the inference network. We illustrate the ability of the model to learn mutually coherent inference and generation networks through the inspections of model samples and reconstructions and confirm the usefulness of the learned representations by obtaining a performance competitive with state-of-the-art on the semi-supervised SVHN and CIFAR10 tasks.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. 
This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "The classic Generative Adversarial Net and its variants can be roughly categorized into two large families: the unregularized versus regularized GANs. By relaxing the non-parametric assumption on the discriminator in the classic GAN, the regularized GANs have better generalization ability to produce new samples drawn from the real distribution. It is well known that the real data like natural images are not uniformly distributed over the whole data space. Instead, they are often restricted to a low-dimensional manifold of the ambient space. Such a manifold assumption suggests the distance over the manifold should be a better measure to characterize the distinct between real and fake samples. Thus, we define a pullback operator to map samples back to their data manifold, and a manifold margin is defined as the distance between the pullback representations to distinguish between real and fake samples and learn the optimal generators. We justify the effectiveness of the proposed model both theoretically and empirically." ], "cite_N": [ "@cite_31", "@cite_21", "@cite_24", "@cite_5", "@cite_15", "@cite_25" ], "mid": [ "", "2580360036", "2412320034", "2411541852", "2099471712", "2894573160" ] }
Learning Generalized Transformation Equivariant Representations via AutoEncoding Transformations
In this paper, we aspire to show that transformations play a fundamental role in learning powerful representations, by transforming images as a means to reveal the intrinsic patterns from transformed visual structures. Particularly, Transformation Equivariant Representation (TER) learning seeks to model representations that equivary to various transformations on images. In other words, the representation of an image ought to change in the same way as the image is transformed. This is motivated by the assumption that image representations should capture the intrinsic visual structures such that transformations can be decoded from the representations of the original and transformed images. Based on this assumption, we formally present a novel criterion of AutoEncoding Transformations (AET) to learn the TERs for various groups of transformations. Learning TERs was adopted in Hinton's seminal work on learning transformation equivariant capsules [1], and plays a critical role in the success of Convolutional Neural Networks (CNNs) [2]. Specifically, the representations learned by CNNs are translation equivariant, as their feature maps are shifted in the same way as input images are translated. On top of these feature maps that preserve the visual structures with translation equivariance, fully connected layers are built to output the predicted labels of input images. Obviously, the translation equivariant convolutional features play a pivotal role in delivering the state-of-the-art performances of deep networks. Thus, they have been extended beyond translations to learn more expressive representations equivariant to generic types of transformations, such as affine, projective and homographic transformations. Along this direction, the group equivariant CNNs [3] were developed to guarantee that a transformation of input images results in the same transformation of their feature maps. However, the group equivariant CNNs [3] and their variants [4], [5] are restricted to discrete transformations, and the resultant representations are also limited to a group representation of linear transformations. These limitations restrict their ability to model group representations of complex transformations that could be continuous and nonlinear in many learning tasks, ranging from unsupervised, to semi-supervised and supervised learning.

Unsupervised Learning of Transformation Equivariant Representations
The focus of this paper is on the principle of autoencoding transformations and its application to learn transformation equivariant representations. The core idea is to encode data with representations from which the transformations can be decoded as much as possible. We will begin with unsupervised learning of such representations without involving any labeled data, and then proceed to a generalization to semi-supervised and supervised representations by encoding label information as well. Unlike group equivariant CNNs, which learn feature maps mathematically satisfying the transformation equivariance as a function of the group of transformations, the proposed AutoEncoding Transformations (AET) presents an autoencoding architecture to learn transformation equivariant representations by reconstructing applied transformations. As long as a transformation of input images results in equivariant representations, it should be well decoded from the representations of the original and transformed images.
Compared with the group equivariant CNNs, the AET model is more flexible and tractable in tackling any transformations and their compositions, since it does not rely on a strict convolutional structure to enforce the equivariance. The AET is also in contrast to the conventional AutoEncoding Data (AED) paradigm, which instead aims to reconstruct data rather than transformations. Figure 1(a) and (b) illustrate the comparison between the AET and AED. Since the space of transformations (e.g., the few parameters of transformations) is of much lower dimension than the data space (e.g., the pixel space of images), the decoder of the AET can be much shallower than that of the AED. This allows the backpropagated errors to more thoroughly train the encoder that models the representations of input data in the AET architecture. Moreover, an AET model can be trained from an information-theoretic perspective by maximizing the information in the learned representation about the applied transformation and the input data. This generalizes the group representations of linear transformations to more general forms that could equivary nonlinearly to input transformations. It results in Generalized Transformation Equivariant Representations (GTERs) that can capture more complex patterns of visual structure under transformations. Unfortunately, this results in an intractable optimization problem to maximize the mutual information between representations and transformations. A variational lower bound of the mutual information can be derived by introducing a surrogate transformation decoder, yielding a novel model of Autoencoding Variational Transformations (AVT) as an alternative to the deterministic AET.

(Semi-)Supervised Learning of Transformation Equivariant Representations
While both AET and AVT are trained in an unsupervised fashion, they can act as the basic representation for building (semi-)supervised classifiers. Along this direction, we can train a (Semi-)Supervised Autoencoding Transformation (SAT) model that jointly learns the transformation equivariant representations as well as the corresponding classifiers. Figure 1(c) illustrates the SAT model, where a classifier head is added upon the representation encoder of an AET network. The SAT can be based on either the deterministic AET or the probabilistic AVT architecture. Particularly, along the direction pointed to by the AVT, we seek to train the proposed (semi-)supervised transformation equivariant classifiers by maximizing the mutual information of the learned representations with the transformations and labels. In this way, the trained SAT model can not only handle transformed data through their equivarying representations, but also encode the labeling information through the supervised classifier. The resultant SAT also contains the deterministic model based on the AET as a special case, obtained by fixing deterministic forms for the representation encoder and the transformation decoder. The transformation equivariance in the SAT model is contrary to the data augmentation by transformations in the deep learning literature [2]. First, data augmentation is only applicable to augmenting the labeled examples for model training, and cannot be extended to unlabeled data. This limits its use in semi-supervised learning, where unlabeled data must also be explored. Second, data augmentation aims to enforce transformation invariance, in which the labels of transformed data are supposed to be invariant.
This differs from the motivation to encode the inherent visual structures that equivary under various transformations. Actually, in the (semi-)supervised transformation equivariant classifiers, we aim to seamlessly integrate the principles of training transformation equivariant representations and transformation invariant classifiers. Indeed, both principles have played key roles in the compelling performances of CNNs and their modern variants. This is witnessed by the translation equivariant convolutional feature maps and the classifiers built atop them, which are supposed to make transformation-invariant predictions with the spatial pooling and fully connected layers. We will show that the proposed SAT extends the translation equivariance in CNNs to cover a generic class of transformation equivariance, as well as encodes the labels to train the representations and the associated transformation invariant classifiers. We hope this can deepen our understanding of the interplay between transformation equivariance and invariance, both of which play fundamental roles in training robust classifiers with labeled and unlabeled data. The remainder of this paper is organized as follows. We review the related works in Section 2. The unsupervised and (semi-)supervised learning of transformation equivariant representations are presented in the autoencoding transformation framework in Section 3 and Section 4, respectively. We present experiment results in Section 5 and Section 6 for unsupervised and semi-supervised tasks. We conclude the paper and discuss future works in Section 7.

RELATED WORK

Transformation-Equivariant Representations
Learning transformation-equivariant representations traces back to the seminal work on training capsule nets [1], [6], [7]. The transformation equivariance is characterized by the various directions of capsules, while the confidence of belonging to a particular class is captured by their lengths. Many efforts have been made in the literature [3], [4], [5] on extending the conventional translation-equivariant convolutions to cover more transformations. Among them are group equivariant convolutions (G-convolutions) [3], which have been developed to equivary to more types of transformations. The idea of group equivariance has also been introduced to the capsule nets [5] by ensuring the equivariance of output pose vectors to a group of transformations with a generic routing mechanism. However, the group equivariant convolution is restricted to discrete transformations, which limits its ability to learn representations equivariant to generic continuous transformations.

Unsupervised Representation Learning
Auto-Encoders and GANs. Unsupervised auto-encoders have been extensively studied in the literature [8], [9], [10]. Existing auto-encoders are trained by reconstructing input data from the outputs of encoders. A large category of auto-encoder variants have been proposed. Among them is the Variational Auto-Encoder (VAE) [11], which maximizes a lower bound of the data likelihood to train a pair of probabilistic encoder and decoder, while beta-VAE seeks to disentangle representations by introducing an adjustable hyperparameter on the capacity of the latent channel to balance between the independence constraint and the reconstruction accuracy [12]. Denoising auto-encoders [10] attempt to reconstruct noise-corrupted data to learn robust representations, while contractive auto-encoders [13] encourage learning representations invariant to small perturbations of the data.
Along this direction, Hinton et al. [1] propose capsule networks to explore transformation equivariance by minimizing the discrepancy between the reconstructed and target data. On the other hand, Generative Adversarial Nets (GANs) have also been used to train unsupervised representations. Unlike the auto-encoders, the GANs [14] and their variants [15], [16], [17], [18] generate data from noises drawn from a simple distribution, with a discriminator trained adversarially to distinguish between real and fake data. The sampled noises can be viewed as the representations of generated data over a manifold, and one can train an encoder by inverting the generator to find the generating noise. This can be implemented by jointly training a pair of mutually inverse generator and encoder [15], [16]. There also exist GANs that generalize better in producing unseen data, based on a Lipschitz assumption on the real data distribution [17], [18], which can give rise to more powerful representations of data beyond the training examples [15], [16], [19]. Compared with the auto-encoders, GANs do not rely on learning a one-to-one reconstruction of data; instead, they aim to generate the entire distribution of data.

Self-Supervisory Signals. There exist many other unsupervised learning methods using different types of self-supervised signals to train deep networks. Noroozi and Favaro [20] propose to solve Jigsaw puzzles to train a convolutional neural network. Doersch et al. [21] train the network by inferring the relative positions between sampled patches from an image as self-supervised information. Instead, Noroozi et al. [22] count features that satisfy equivalence relations between downsampled and tiled images. Gidaris et al. [23] propose to train RotNets by predicting a discrete set of image rotations, but they are unable to handle generic continuous transformations and their compositions. Dosovitskiy et al. [24] create a set of surrogate classes by applying various transformations to individual images. However, the resultant features could over-discriminate visually similar images, as they always belong to different surrogate classes. Unsupervised features have also been learned from videos by estimating the self-motion of moving objects between consecutive frames [25].

(Semi-)Supervised Representation Learning
In addition, there exist a large number of semi-supervised models in the literature. Here, we particularly mention three state-of-the-art methods that will be compared in experiments. Temporal ensembling [26] and mean teachers [27] both use an ensemble of teachers to supervise the training of a student model. Temporal ensembling uses the exponential moving average of predictions made by past models on unlabeled data as targets to train the student model. Instead, mean teachers update the student model with the exponential moving average of the weights of past models. On the contrary, the Virtual Adversarial Training (VAT) [28] seeks to minimize the change of predictions on unlabeled examples when their output values are adversarially altered. This could result in a robust model that prefers smooth predictions over unlabeled data. The SAT also differs from transformation-based data augmentation, in which the transformed samples and their labels are used directly as additional training examples [2]. First, in semi-supervised learning, unlabeled examples cannot be directly augmented to form training examples due to their missing labels.
Moreover, data augmentation needs to preserve the labels on augmented images, and this prevents us from applying transformations that could severely distort the images (e.g., shearing, rotations with arbitrary angles, and projective transformations) or invalidate the associated labels (e.g., vertically flipping "6" to "9"). In contrast, the SAT avoids using the labels of transformed images to directly supervise the training of the classifier; instead, it attempts to encode the visual structures of images equivariant to various transformations without access to their labels. This leads to a label-blind TER regularizer that explores the unlabeled examples for the semi-supervised problem.

UNSUPERVISED LEARNING OF TRANSFORMATION EQUIVARIANT REPRESENTATIONS
In this section, we will first present the autoencoding transformation architecture to learn transformation equivariant representations in a deterministic fashion. Then, a variational alternative will be presented to handle the uncertainty in the representation learning by maximizing the mutual information between the learned representations and the applied transformations.

AET: A Deterministic Model
We begin by defining the notations used in the proposed AutoEncoding Transformation (AET) architecture. Consider a random transformation $t$ sampled from a transformation distribution $p(t)$ (e.g., warping, projective and homographic transformations), as well as an image $x$ drawn from a data distribution $p(x)$ in a sample space $\mathcal{X}$. Then the application of $t$ to $x$ results in a transformed image $t(x)$. The goal of AET is to learn a representation encoder $E_\theta: x \mapsto E_\theta(x)$ with parameters $\theta$, which maps a sample $x \sim p(x)$ to its representation $E_\theta(x)$ in a linear space $\mathcal{Z}$. For this purpose, one needs to learn a transformation decoder with parameters $\phi$, $D_\phi: [E_\theta(x), E_\theta(t(x))] \mapsto \hat{t}$, which makes an estimate $\hat{t}$ of the input transformation $t$ from the representations of the original and transformed samples. Since the transformation decoder takes the encoder outputs rather than the original and transformed images, this pushes the encoder to capture the inherent visual structures of images to make a satisfactory estimate of the transformation. Then the AET can be trained to jointly learn the representation encoder $E_\theta$ and the transformation decoder $D_\phi$. A loss function $\ell(t, \hat{t})$ measuring the deviation between a transformation $t$ and its estimate $\hat{t}$ is minimized to train the AET over $p(t)$ and $p(x)$:
$$\min_{\theta, \phi} \; \mathbb{E}_{t \sim p(t),\, x \sim p(x)}\, \ell(t, \hat{t}) \quad (1)$$
where the estimated transformation $\hat{t}$ can be written as a function of the encoder $E_\theta$ and the decoder $D_\phi$ such that $\hat{t} = D_\phi[E_\theta(x), E_\theta(t(x))]$, and the expectation $\mathbb{E}$ is taken over the distributions of transformations and data. In this way, the encoder $E_\theta$ and the decoder $D_\phi$ can be jointly trained over mini-batches by back-propagating the gradient of the loss to update their parameters; a minimal code sketch of this objective is given below.

AVT: A Probabilistic Model
Alternatively, we can train transformation equivariant representations that contain as much information as possible about applied transformations to recover them.

Notations
Formally, our goal is to learn an encoder that maps a transformed sample $t(x)$ to a probabilistic representation with mean $f_\theta$ and variance $\sigma_\theta$. This results in the following probabilistic representation $z \in \mathcal{Z}$ of $t(x)$:
$$z = f_\theta(t(x)) + \sigma_\theta(t(x)) \circ \epsilon \quad (2)$$
where $\epsilon$ is sampled from a normal distribution $p(\epsilon) \triangleq \mathcal{N}(\epsilon\,|\,0, I)$, with $\circ$ denoting the element-wise product.
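Before continuing with the AVT derivation, here is the minimal PyTorch-style sketch of the deterministic AET objective in Eq. (1) referenced above. It is an illustration under stated assumptions rather than the paper's code: the encoder backbone, the feature dimension, the parameterization of $t$ (e.g., the eight parameters of a projective warp), and the mean-squared-error instance of $\ell(t, \hat{t})$ are all placeholders, since the paper leaves the loss and the architecture generic.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AET(nn.Module):
    """Deterministic AutoEncoding Transformations (sketch).

    A Siamese encoder (shared weights) embeds the original and the
    transformed image; a shallow decoder regresses the transformation
    parameters from the concatenated features, as in Eq. (1).
    """
    def __init__(self, encoder: nn.Module, feat_dim: int, t_dim: int):
        super().__init__()
        self.encoder = encoder                         # E_theta, shared weights
        self.decoder = nn.Linear(2 * feat_dim, t_dim)  # D_phi (shallow head)

    def forward(self, x: torch.Tensor, tx: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)        # E_theta(x)
        zt = self.encoder(tx)      # E_theta(t(x))
        return self.decoder(torch.cat([z, zt], dim=1))  # estimate t_hat

def aet_step(model: AET, x, tx, t_params, optimizer) -> float:
    """One training step of Eq. (1) with an MSE instance of ell(t, t_hat)."""
    loss = F.mse_loss(model(x, tx), t_params)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```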
Thus, the resultant probabilistic representation $z$ follows a normal distribution $p_\theta(z|t, x) \triangleq \mathcal{N}\big(z\,|\,f_\theta(t(x)), \sigma^2_\theta(t(x))\big)$ conditioned on the randomly sampled transformation $t$ and the input data $x$. On the other hand, the representation of the original sample $x$ is a special case where $t$ is the identity transformation:
$$\tilde{z} = f_\theta(x) + \sigma_\theta(x) \circ \tilde{\epsilon} \quad (3)$$
whose mean and variance are computed by the deep network with the same weights $\theta$, and $\tilde{\epsilon} \sim p(\tilde{\epsilon}) \triangleq \mathcal{N}(\tilde{\epsilon}\,|\,0, I)$.

Generalized Transformation Equivariance
In the conventional definition of transformation equivariance, there should exist an automorphism $\rho(t) \in \mathrm{Aut}(\mathcal{Z}): \mathcal{Z} \to \mathcal{Z}$ in the representation space, such that¹ $z = [\rho(t)](\tilde{z})$. Here the transformation $\rho(t)$ is independent of the input sample $x$. In other words, the representation $z$ of a transformed sample is completely determined by the original representation $\tilde{z}$ and the applied transformation $t$, with no need to access the sample $x$. This is called the steerability property in the literature [4], which enables us to compute $z$ by applying the sample-independent transformation directly to the original representation $\tilde{z}$. This property can be generalized without relying on the linear group representations of transformations through automorphisms. Instead of sticking with a linear $\rho(t)$, one can seek a more general relation between $z$ and $\tilde{z}$, independently of $x$. From an information-theoretic point of view, this requires that $(\tilde{z}, t)$ jointly contain all necessary information about $z$, so that $z$ can be best estimated from them without direct access to $x$. This leads us to maximizing the mutual information $I_\theta(z; \tilde{z}, t)$ to learn generalized transformation equivariant representations. Indeed, by the chain rule and the nonnegativity of mutual information, we have
$$I_\theta(z; \tilde{z}, t) = I_\theta(z; \tilde{z}, t, x) - I_\theta(z; x | \tilde{z}, t) \le I_\theta(z; \tilde{z}, t, x),$$
which shows that $I_\theta(z; \tilde{z}, t)$ is upper bounded by the mutual information $I_\theta(z; \tilde{z}, t, x)$ between $z$ and $(\tilde{z}, t, x)$. Clearly, when $I_\theta(z; x | \tilde{z}, t) = 0$, $I_\theta(z; \tilde{z}, t)$ attains the maximum value of its upper bound $I_\theta(z; \tilde{z}, t, x)$. In this case, $x$ provides no more information about $z$ than $(\tilde{z}, t)$, which implies that one can estimate $z$ directly from $(\tilde{z}, t)$ without accessing $x$. Thus, we propose to solve
$$\theta^{\star} = \arg\max_\theta \; I_\theta(z; \tilde{z}, t)$$
to learn the probabilistic encoder $\theta$ in pursuit of such a generalized TER. However, a direct maximization of the above mutual information needs to evaluate an intractable posterior $p_\theta(t | z, \tilde{z})$ of the transformation. Thus, we instead lower-bound the mutual information by introducing a surrogate decoder $q_\phi(t | z, \tilde{z})$ with parameters $\phi$ to approximate the true posterior.

¹ The transformation $t$ in the sample space $\mathcal{X}$ and the corresponding transformation $\rho$ in the representation space $\mathcal{Z}$ need not be the same, but the representation transformation $\rho(t)$ should be a function of the sample transformation $t$.

Variational Approach
Unlike the variational autoencoder that lower-bounds the data likelihood [11], we directly take a lower bound of the mutual information [29] between $z$ and $(\tilde{z}, t)$:
$$I_\theta(z; \tilde{z}, t) = I_\theta(z; \tilde{z}) + I_\theta(z; t | \tilde{z}) \ge I_\theta(z; t | \tilde{z}) = H(t | \tilde{z}) - H(t | z, \tilde{z})$$
$$= H(t | \tilde{z}) + \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log p_\theta(t | z, \tilde{z})$$
$$= H(t | \tilde{z}) + \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log q_\phi(t | z, \tilde{z}) + \mathbb{E}_{p(z, \tilde{z})} D\big(p_\theta(t | z, \tilde{z}) \,\|\, q_\phi(t | z, \tilde{z})\big)$$
$$\ge H(t | \tilde{z}) + \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log q_\phi(t | z, \tilde{z}) \triangleq \tilde{I}_{\theta,\phi}(z; \tilde{z}, t)$$
where $H(\cdot)$ denotes the (conditional) entropy, and $D(p_\theta(t|z,\tilde{z}) \,\|\, q_\phi(t|z,\tilde{z}))$ is the non-negative Kullback-Leibler divergence between $p_\theta$ and $q_\phi$. We choose to maximize the variational lower bound $\tilde{I}_{\theta,\phi}(z; \tilde{z}, t)$.
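To complement the derivation, here is a minimal PyTorch-style sketch of the reparameterized encoder of Eqs. (2)-(3). The backbone network, the feature dimension, and the linear log-variance head are illustrative assumptions (the CIFAR-10 implementation described later uses a 1 × 1 convolution with batch normalization on the encoder features instead):

```python
import torch
import torch.nn as nn

class ProbabilisticEncoder(nn.Module):
    """Reparameterized Gaussian encoder of Eqs. (2)-(3) (sketch).

    Feeding a transformed image t(x) yields a sample of z; feeding the
    original image x (identity transformation) yields a sample of z_tilde.
    """
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone                          # computes the mean f_theta
        self.logvar_head = nn.Linear(feat_dim, feat_dim)  # log sigma^2_theta

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        mean = self.backbone(images)                 # f_theta(.)
        logvar = self.logvar_head(mean)               # log-variance head
        eps = torch.randn_like(mean)                  # eps ~ N(0, I)
        return mean + torch.exp(0.5 * logvar) * eps   # z = f + sigma * eps
```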
Since $H(t | \tilde{z})$ is nonnegative and independent of the model parameters $\theta$ and $\phi$, we choose to solve
$$\max_{\theta, \phi} \; \mathcal{L}^{\mathrm{unsup}}_{\theta,\phi} \triangleq \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log q_\phi(t | z, \tilde{z}) = \mathbb{E}_{p(x), p(t)}\, \mathbb{E}_{p(\epsilon), p(\tilde{\epsilon})} \log q_\phi(t | z, \tilde{z}) \quad (4)$$
to learn $\theta$ and $\phi$ under the expectation over $p(t, z, \tilde{z})$, where the equality follows from the generative process for the representations in Eqs. (2)-(3).

Variational Transformation Decoder
To estimate a family of continuous transformations, we choose a normal distribution $\mathcal{N}\big(t\,|\,d_\phi(z, \tilde{z}), \sigma^2_\phi(z, \tilde{z})\big)$ as the posterior $q_\phi(t | z, \tilde{z})$ of the transformation decoder, where the mean $d_\phi(z, \tilde{z})$ and variance $\sigma^2_\phi(z, \tilde{z})$ are each implemented by a deep network. For categorical transformations (e.g., horizontal vs. vertical flips, and rotations in different directions), a categorical distribution $\mathrm{Cat}\big(t\,|\,\pi_\phi(z, \tilde{z})\big)$ can be adopted as the posterior $q_\phi(t | z, \tilde{z})$, where each entry of $\pi_\phi(z, \tilde{z})$ is the probability mass for a transformation type. A hybrid distribution can also be defined to combine multiple continuous and categorical transformations, making the variational transformation decoder more flexible and appealing for handling complex transformations. Since the posterior $q_\phi(t | z, \tilde{z})$ of the transformation is a function of the representations of the original and transformed images, a natural choice is to use a Siamese encoder network with shared weights to output the representations of the original and transformed samples, and to construct the transformation decoder atop the concatenated representations (a sketch of the Gaussian decoder's log-likelihood follows below). Figure 2(a) illustrates the architecture of the AVT network. Finally, it is not hard to see that the deterministic AET model can be viewed as a special case of the AVT if the probabilistic representation encoder $p_\theta(z | t, x)$ and the transformation decoder $q_\phi(t | z, \tilde{z})$ are set to deterministic forms as in the AET.

(SEMI-)SUPERVISED LEARNING OF TRANSFORMATION EQUIVARIANT REPRESENTATIONS
Autoencoding transformations can act as the basic representation block in many learning problems. In this section, we present its role in (semi-)supervised learning tasks, enabling more accurate classification of samples by capturing their transformation equivariant representations.

SAT: (Semi-)Supervised Autoencoding Transformations
The unsupervised learning of autoencoding transformations can be generalized to (semi-)supervised cases with labeled samples. Accordingly, the goal is formulated as learning representations that contain as much (mutual) information as possible about not only the applied transformations but also the data labels. Given a labeled sample $(x, y)$, we can define the joint distribution over the representations, transformation and label as
$$p_\theta(y, t, z, \tilde{z} | x) = p(t)\, p_\theta(\tilde{z} | x)\, p_\theta(z | t, x)\, p(y | x)$$
where we have assumed that $y$ is independent of $t$ and $z$ once the sample $x$ is given. In the presence of sample labels, the pursuit of transformation equivariant representations can be performed by maximizing the joint mutual information $I_\theta(y, z; t, \tilde{z})$, such that the representation $\tilde{z}$ of the original sample and the transformation $t$ contain sufficient information to classify the label $y$, while the representation $z$ is learned to be equivariant to the transformed sample.
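Before deriving the (semi-)supervised bound, the following sketch spells out the Gaussian log-likelihood $\log q_\phi(t|z,\tilde{z})$ maximized in objective (4), and how a label cross-entropy term can be added with a balancing weight in the spirit of objectives (5)-(6) below. The function names and the diagonal-Gaussian parameterization via a log-variance output are assumptions made for illustration:

```python
import math
import torch
import torch.nn.functional as F

def transformation_log_likelihood(t: torch.Tensor,
                                  d_mean: torch.Tensor,
                                  d_logvar: torch.Tensor) -> torch.Tensor:
    """log q_phi(t | z, z_tilde) under a diagonal Gaussian posterior.

    `d_mean` and `d_logvar` are decoder outputs computed from the
    concatenated representations [z, z_tilde]; maximizing the returned
    value over a mini-batch is the surrogate objective (4).
    """
    log_prob = -0.5 * (math.log(2 * math.pi) + d_logvar
                       + (t - d_mean) ** 2 / d_logvar.exp())
    return log_prob.sum(dim=1).mean()  # sum over t's parameters, mean over batch

def sat_loss(t, d_mean, d_logvar, label_logits, labels, lam=1.0):
    """Loss-to-minimize form of the combined objective (6) (sketch):
    negative transformation log-likelihood plus lambda times the
    cross-entropy of the label decoder on the labeled subset."""
    unsup = -transformation_log_likelihood(t, d_mean, d_logvar)
    sup = F.cross_entropy(label_logits, labels)  # -E log q_phi(y | z_tilde)
    return unsup + lam * sup
```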
Like in (4) for the unsupervised case, the joint mutual information can be lower bounded in the following way:
$$I_\theta(y, z; \tilde{z}, t) = I_\theta(y, z; \tilde{z}) + I_\theta(y, z; t | \tilde{z}) = \big(I_\theta(z; \tilde{z}) + I_\theta(y; \tilde{z} | z)\big) + \big(I_\theta(z; t | \tilde{z}) + I_\theta(y; t | z, \tilde{z})\big)$$
$$\ge I_\theta(y; \tilde{z} | z) + I_\theta(z; t | \tilde{z})$$
$$\ge H(y | z) + \mathbb{E}_{p_\theta(y, z, \tilde{z})} \log q_\phi(y | z, \tilde{z}) + H(t | \tilde{z}) + \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log q_\phi(t | z, \tilde{z}) \triangleq \tilde{I}_{\theta,\phi}(y, z; \tilde{z}, t)$$
where the first two equalities apply the chain rule of mutual information, and the first inequality uses the nonnegativity of mutual information. In particular, we usually have $I_\theta(y; t | z, \tilde{z}) = 0$, which means the transformation should not change the label $y$ of a sample (i.e., transformation invariance of sample labels). The second inequality follows the variational bound we derived in the last section. One can also assume that the surrogate posterior $q_\phi(y | z, \tilde{z})$ of labels simplifies to $q_\phi(y | \tilde{z})$, since the representation of the original sample is supposed to provide sufficient information to predict the label. Since $H(y | z) \ge 0$ and $H(y, t | x)$ is independent of the model parameters $\theta$ and $\phi$, we maximize the following variational lower bound:
$$\max_{\theta, \phi} \; \mathcal{L}^{\mathrm{sup}}_{\theta,\phi} \triangleq \mathbb{E}_{p_\theta(y, \tilde{z})} \log q_\phi(y | \tilde{z}) + \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log q_\phi(t | z, \tilde{z}) = \mathbb{E}_{p(x)} \big[ \mathbb{E}_{p(y|x), p(\tilde{\epsilon})} \log q_\phi(y | \tilde{z}) + \mathbb{E}_{p(t), p(\epsilon), p(\tilde{\epsilon})} \log q_\phi(t | z, \tilde{z}) \big] \quad (5)$$
where $z$ and $\tilde{z}$ are sampled by following Eqs. (2)-(3) in the equality, and the ground-truth $y$ is sampled from the label distribution $p(y|x)$ directly. In the deterministic case, it is not hard to show that the first term of (5) is related to the cross-entropy loss in training a supervised classifier, while the second term reduces to the loss (1) of the deterministic AET model. In this sense, the AET loss plays the role of regularizing the cross-entropy loss to train a supervised model. In addition, a semi-supervised model can be trained by combining the unsupervised and supervised objectives (4) and (5):
$$\max_{\theta, \phi} \; \mathcal{L}^{\mathrm{unsup}}_{\theta,\phi} + \lambda\, \mathcal{L}^{\mathrm{sup}}_{\theta,\phi} \quad (6)$$
with a nonnegative balancing coefficient $\lambda$, as sketched in the code above. This makes it possible to jointly explore labeled and unlabeled examples and their representations equivariant to various transformations. We will demonstrate that the SAT can achieve performances superior to the existing state-of-the-art (semi-)supervised models. Moreover, the competitive performances also show the great potential of the model as the basic representation block in many machine learning and computer vision tasks. Figure 2(b) illustrates the architecture of the SAT model in comparison with its AVT counterpart. Particularly, in the SAT, the transformation and label decoders are jointly trained atop the representation encoder.

EXPERIMENTS: UNSUPERVISED LEARNING
In this section, we compare the proposed deterministic AET and probabilistic AVT models against other unsupervised methods on the CIFAR-10, ImageNet and Places datasets. The evaluation follows the protocols widely adopted by many existing unsupervised methods, applying the learned representations to downstream tasks.

CIFAR-10 Experiments
First, we evaluate the AET and AVT models on the CIFAR-10 dataset.

Experiment Settings
Architecture. To make a fair and direct comparison with existing models, the Network-In-Network (NIN) is adopted on the CIFAR-10 dataset for the unsupervised learning task [23], [30]. The NIN consists of four convolutional blocks, each of which contains three convolutional layers. Both AET and AVT have two NIN branches with shared weights, taking the original and transformed images as their inputs, respectively.
The output features of the fourth block of the two branches are concatenated and average-pooled to form a 384-d feature vector. An output layer then follows to produce the predicted transformation for the AET, and the mean $d_\phi$ and the log-variance $\log \sigma^2_\phi$ of the predicted transformation for the AVT, with the logarithm scaling the variance to a real value. The first two blocks of each branch are used as the encoder network to output the deterministic representation for the AET, and the mean $f_\theta$ of the probabilistic representation for the AVT. An additional 1 × 1 convolution followed by a batch normalization layer is added upon the encoder to produce the log-variance $\log \sigma^2_\theta$.

Implementation Details. Both the AET and the AVT networks are trained by SGD with a batch size of 512 original images and their transformed versions. Momentum and weight decay are set to 0.9 and 5 × 10⁻⁴. For the AET, the learning rate is initialized to 0.1 and scheduled to drop by a factor of 5 after 240, 480, 640, 800 and 1,000 epochs. The network is trained for a total of 1,500 epochs. The AVT network is trained for 4,500 epochs; its learning rate is initialized to 10⁻³, increased to 5 × 10⁻³ at epoch 50, and then gradually decayed to 10⁻⁵ starting from epoch 3,000. In the AVT, a single representation is randomly sampled from the encoder $p_\theta(z|t, x)$ during training, which is fed into the decoder $q_\phi(t|z, \tilde{z})$. To fully exploit the uncertainty of the representations, five samples are drawn and averaged as the representation of an image to train the downstream classifiers. We found that averaging randomly sampled representations outperforms using only the mean of the representation.

Results
Comparison with Other Methods. To evaluate the effectiveness of a learned unsupervised representation, a classifier is usually trained upon it. In our experiments, we follow the existing evaluation protocols [23], [24], [31], [32], [33] by building a classifier on top of the second convolutional block. First, we evaluate the classification results by using the AET and AVT representations with both model-based and model-free classifiers. For the model-based classifier, we follow [23] by training a non-linear classifier with three Fully-Connected (FC) layers: each of the two hidden layers has 200 neurons with batch normalization and ReLU activations, and the output layer is a soft-max layer with ten neurons, one for each image class. We also test a convolutional classifier upon the unsupervised features by adding a third NIN block whose output feature map is average-pooled and connected to a linear soft-max classifier. Table 1 shows the results of different models. It compares both fully supervised and unsupervised methods on CIFAR-10. The unsupervised AET and AVT with the convolutional classifier achieve almost the same error rates as their fully supervised NIN counterpart with four convolutional blocks (7.82% and 7.75% vs. 7.2%). We also compare the models when trained with varying numbers of FC layers in Table 2. The results show that the AVT leads the AET and consistently achieves the smallest errors no matter which classifier is used. We also note that the probabilistic AVT outperforms the deterministic AET in experiments. This is likely due to the AVT's ability to model the uncertainty of representations when training the downstream classifiers.
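The representation-averaging procedure used above to feed the downstream classifiers admits a short sketch; here `encoder` stands for a probabilistic encoder like the one sketched earlier, and the sample count of five follows the text:

```python
import torch

@torch.no_grad()
def averaged_representation(encoder, images: torch.Tensor,
                            n_samples: int = 5) -> torch.Tensor:
    """Average several reparameterized samples of the representation.

    Each call to `encoder` draws a fresh eps ~ N(0, I), so averaging
    n_samples calls exploits the uncertainty of p_theta(z | t, x)
    instead of using the mean f_theta alone.
    """
    samples = [encoder(images) for _ in range(n_samples)]
    return torch.stack(samples, dim=0).mean(dim=0)
```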
We also find that the projective transformation performs better than the affine transformation when used to train the AET, and thus we mainly use the projective transformation to train the AVT.

Comparison Based on Model-free KNN Classifiers. We also test a model-free KNN classifier based on the average-pooled feature representations from the second convolutional block. Because the KNN classifier requires no training from labeled examples, it enables a direct evaluation of the quality of the learned features. Table 3 reports the results. Table 4 reports the results of different models on CIFAR-10 when trained with few labeled examples: both the AET and the AVT outperform the fully supervised models as well as the other unsupervised models when only a few labeled examples (≤ 1,000 samples per class) are available.

ImageNet Experiments

We further evaluate the performance of the AET and AVT on the ImageNet dataset.

Architectures and Training Details. For a fair comparison with the existing methods [20], [23], [34], two AlexNet branches with shared parameters are created, taking the original and the transformed images as inputs to train the unsupervised models. The 4,096-d output features from the second-to-last fully connected layer in each branch are concatenated and fed into the transformation decoder. We still use SGD to train the network, with a batch size of 768 images plus their transformed counterparts, a momentum of 0.9, and a weight decay of $5\times 10^{-4}$. For the AET model, the initial learning rate is set to 0.01 and dropped by a factor of 10 at epochs 100 and 150; the model is trained for 200 epochs in total. For the AVT, the initial learning rate is set to $10^{-3}$ and dropped by a factor of 10 at epochs 300 and 350; the AVT is trained for 400 epochs in total. We still average over five samples from the encoder outputs to train the downstream classifiers when evaluating the AVT. Since the projective transformation has shown better performance, we adopt it for the experiments on ImageNet.

Results. Table 5 reports the Top-1 accuracies of the compared methods on ImageNet, following the evaluation protocol in [20]. Two settings are adopted for evaluation, where Conv4 and Conv5 denote training the remaining part of AlexNet on top of Conv4 and Conv5, respectively, with the labeled data; all the bottom convolutional layers up to Conv4 or Conv5 are frozen after they are trained in an unsupervised fashion. In both settings, the AVT model consistently outperforms the other unsupervised models, including the AET. We also compare with the fully supervised models, which give the upper bound of the classification performance by training the AlexNet end-to-end with all labeled data, and with random models, whose classifiers are trained on top of Conv4 and Conv5 with randomly sampled weights, setting the lower bound of the performance. By comparison, the compared models narrow the performance gap to the upper-bound supervised models from 9.7% and 15.7% (by RotNet and DeepCluster) on Conv4 and Conv5, to 6.5% and 12.7% by the AET, and further to 5.5% and 11.3% by the AVT. Moreover, we also follow the testing protocol adopted in [40] and compare the models by training a 1,000-way linear classifier on top of different numbers of convolutional layers in Table 6. Again, the AVT consistently outperforms all the compared unsupervised models in terms of Top-1 accuracy.

Places Experiments

We also compare the different models on the Places dataset; Table 7 reports the results.
Unsupervised models are pretrained on the ImageNet dataset, and a linear logistic regression classifier is trained on top of different layers of convolutional feature maps with the Places labels. This assesses the generalizability of unsupervised features from one dataset to another. The models are still based on AlexNet variants. We compare with the fully supervised models trained with the Places labels and the ImageNet labels, respectively, as well as with random networks. Both the AET and the AVT models outperform the other unsupervised models, except that they perform slightly worse than Counting [40] on the shallow representations from Conv1 and Conv2.

EXPERIMENTS: (SEMI-)SUPERVISED LEARNING

In this section, we compare the proposed SAT model with the other state-of-the-art semi-supervised methods. For the sake of a fair comparison, we follow the test protocol used in the literature [26], [27] on both CIFAR-10 [42] and SVHN [43], which are widely used as benchmark datasets for evaluating semi-supervised models.

Network Architecture and Implementation Details

Network Architecture. For the sake of a fair comparison, a 13-layer convolutional neural network, which has been widely used in existing semi-supervised models [26], [27], [28], is adopted as the backbone of the SAT. It consists of three convolutional blocks, each of which contains three convolutional layers. The SAT has two branches of such three blocks with shared weights, taking the original and the transformed images as their respective inputs. The output feature maps from the third block of the two branches are concatenated and average-pooled, resulting in a 256-d feature vector. A fully-connected layer then follows to predict the mean $d_\phi$ and the log-variance $\log\sigma^2_\phi$ of the transformation. The first two blocks are used as the encoder to output the mean $f_\theta$ of the representation, upon which an additional $1\times 1$ convolution layer with batch normalization is added to compute the log-variance $\log\sigma^2_\theta$. In addition, a classifier head is built on the representation from the encoder. Specifically, we draw five random representations of an input image and feed their average to the classifier. The classifier head has the same structure as the third convolutional block, but its weights are not shared with the Siamese branches of the transformation decoder. The output feature map of this convolutional block is globally average-pooled to a 128-d feature vector, and a soft-max fully connected layer follows to predict the image label.

Implementation Details. The representation encoder, the transformation decoder, and the classifier are trained in an end-to-end fashion. In particular, SGD is adopted to iteratively update their weights over minibatches of 500 images, their transformed counterparts, and 40 labeled examples. Momentum and weight decay are set to 0.9 and $5\times 10^{-4}$, respectively. The model is trained for a total of 4,500 epochs. The learning rate is initialized to $10^{-3}$, increased to $5\times 10^{-3}$ at epoch 50, and then linearly decayed to $10^{-5}$ starting from epoch 3,000. For a fair comparison, we adopt the entropy minimization used in the state-of-the-art Virtual Adversarial Training [28]. A standard set of data augmentations from the literature [26], [27], [28] is also adopted throughout the experiments, which includes both horizontal flips and random translations on CIFAR-10, and only random translations on SVHN. The projective transformation, which performs better than the affine transformation, is adopted to train the semi-supervised representations; a sketch of how such a transformation can be sampled is given below.
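One common way to sample a random projective transformation is to jitter the corners of the unit square and solve for the induced homography. The NumPy sketch below illustrates this; the perturbation range is an illustrative assumption, not the paper's exact sampling setting.

```python
import numpy as np

def random_homography(scale=0.125, rng=None):
    """Sample a random projective transformation (3x3 homography) by
    jittering the four corners of the unit square."""
    rng = rng or np.random.default_rng()
    src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
    dst = src + rng.uniform(-scale, scale, size=(4, 2))
    # Direct linear transform: solve for the 8 free parameters of H.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A), np.array(b))
    return np.append(h, 1.0).reshape(3, 3)

H = random_homography()   # its 8 free parameters are what the decoder regresses
```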
Results

We compare with the state-of-the-art semi-supervised methods in the literature [26], [27]. In particular, the proposed SAT reduces the average error rates of Mean Teacher (the second-best performing method) by 30.9%, 25.6%, and 22.2% relatively with 1,000, 2,000, and 4,000 labels on CIFAR-10, and by 1.1%, 11%, and 12.9% relatively with 250, 500, and 1,000 labels on SVHN. The compared semi-supervised methods, including the Π model [26], Temporal Ensembling [26], and Mean Teacher [27], attempt to maximize the consistency of model predictions on the transformed and original images to train semi-supervised classifiers. While they also apply transformations to explore unlabeled examples, the competitive performance of the SAT model shows that transformation-equivariant representations are more compelling for classifying images than predicting consistent labels under transformations, as the compared methods do. This justifies the proposed criterion of pursuing transformation equivariance as a regularizer for training a classifier. It is not hard to see that the SAT can be integrated into the other semi-supervised methods as their base representation, and we believe this could further boost their performance; this is left to future work, as it is beyond the scope of this paper.

The Impact of Entropy Minimization. We also conduct an ablation study of the effect of Entropy Minimization (EntMin) on model performance. EntMin was used in VAT [28], which outperformed the other semi-supervised methods in the literature. Here, we compare the error rates of the SAT and the VAT with and without EntMin. As shown in Table 10, whether or not entropy minimization is adopted, the SAT always outperforms the corresponding VAT. We also note that, even without entropy minimization, the SAT still performs better than the other state-of-the-art semi-supervised classifiers, such as Mean Teacher, Temporal Ensembling, and the Π model shown in Table 8. This demonstrates the compelling performance of the SAT model.

Comparison with Data Augmentation by Transformations. We also compare the performance of the SAT with that of a classification network trained on images augmented by the same transformations. Specifically, in each minibatch, input images are augmented with the same set of random projective transformations used in the SAT, and the transformation-augmented images and their labels are used to train a network with the same 13-layer architecture adopted as the SAT backbone. Note that the transformation augmentations are applied on top of the standard augmentations mentioned in the implementation details, for a fair comparison with the SAT. Table 11 compares the results of the SAT and the Data Augmentation by Transformation (DAT) classifier on CIFAR-10. It shows that the SAT significantly outperforms the DAT classifier.

Table 6: Top-1 accuracy with linear layers on ImageNet. AlexNet is used as the backbone to train the unsupervised models under comparison. A 1,000-way linear classifier is trained upon the feature maps of various convolutional layers, spatially resized to have about 9,000 elements. Fully supervised and random models are also reported to show the upper and lower bounds of unsupervised model performance. Only a single crop is used, and neither dropout nor local response normalization is applied during testing, except for the models denoted with *, where ten crops are applied.
Moreover, the projective transformations used in the SAT could severely distort training images, which could incur undesired updates to the model weights if the distorted images were naively used to train the network with their labels. This is witnessed by the result that data augmentation by transformations performs even worse than the supervised-only method (see Table 8). In contrast, the SAT avoids a direct use of the transformed images and their labels to supervise the model training. Instead, it trains the learned representations to contain as much information as possible about the transformations. The superior performance demonstrates its outstanding ability to classify images by exploring the variations of visual structure induced by transformations of both labeled and unlabeled images.

CONCLUSION AND FUTURE WORK

In this paper, we present a novel approach, AutoEncoding Transformations (AET), to learn representations that equivary to transformations applied to images. Unlike the group equivariant convolutions, which would become intractable with compositions of complex transformations, the AET model seeks to learn representations of arbitrary forms by reconstructing transformations from the encoded representations of the original and transformed images. The idea is further extended to a probabilistic model by maximizing the mutual information between the learned representation and the applied transformation. The intractable maximization problem is handled by introducing a surrogate transformation decoder and maximizing a variational lower bound of the mutual information, resulting in the AutoEncoding Variational Transformations (AVT) model. Along this direction, a (Semi-)Supervised AutoEncoding Transformation (SAT) approach is derived by maximizing the joint mutual information of the learned representation with both the transformation and the label of a given sample. The proposed AET paradigm lays a solid foundation for exploring transformation-equivariant representations in many learning tasks. In particular, we conduct experiments to show its superior performance on both unsupervised and (semi-)supervised learning tasks, following standard evaluation protocols. In the future, we will explore the great potential of applying the learned AET representations as a building block in more learning tasks, such as (instance) semantic segmentation, object detection, super-resolution reconstruction, few-shot learning, and fine-grained classification.

Guo-Jun Qi (M'14-SM'18) is the Chief Scientist leading and overseeing an international R&D team for multiple artificial intelligence services on Huawei Cloud since August 2018. He was a faculty member in the Department of Computer Science and the director of the MAchine Perception and LEarning (MAPLE) Lab at the University of Central Florida from August 2014. Prior to that, he was a Research Staff Member at the IBM T.J. Watson Research Center, Yorktown Heights, NY. His research interests include machine learning and knowledge discovery from multi-modal data sources to build smart and reliable information and decision-making systems. Dr. Qi has published more than 100 papers in a broad range of venues in pattern recognition, machine learning, and computer vision. He has also served or will serve as a general co-chair for ICME 2021,
6,935
1906.08628
2972729785
Transformation Equivariant Representations (TERs) aim to capture the intrinsic visual structures that equivary to various transformations by expanding the notion of translation equivariance underlying the success of Convolutional Neural Networks (CNNs). For this purpose, we present both deterministic AutoEncoding Transformations (AET) and probabilistic AutoEncoding Variational Transformations (AVT) models to learn visual representations from generic groups of transformations. While the AET is trained by directly decoding the transformations from the learned representations, the AVT is trained by maximizing the joint mutual information between the learned representation and transformations. This results in Generalized TERs (GTERs) equivariant against transformations in a more general fashion by capturing complex patterns of visual structures beyond the conventional linear equivariance under a transformation group. The presented approach can be extended to (semi-)supervised models by jointly maximizing the mutual information of the learned representation with both labels and transformations. Experiments demonstrate the proposed models outperform the state-of-the-art models in both unsupervised and (semi-)supervised tasks.
Self-Supervisory Signals. There exist many other unsupervised learning methods using different types of self-supervised signals to train deep networks. Noroozi and Favaro @cite_9 propose to solve Jigsaw puzzles to train a convolutional neural network. @cite_20 train the network by inferring the relative positions between sampled patches from an image as self-supervised information. Instead, @cite_4 count features that satisfy equivalence relations between downsampled and tiled images. @cite_6 propose to train RotNets by predicting a discrete set of image rotations, but they are unable to handle generic continuous transformations and their compositions. @cite_38 create a set of surrogate classes by applying various transformations to individual images. However, the resultant features could over-discriminate visually similar images as they always belong to different surrogate classes. Unsupervised features have also been learned from videos by estimating the self-motion of moving objects between consecutive frames @cite_10 .
{ "abstract": [ "Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. In this paper we present an approach for training a convolutional neural network using only unlabeled data. We train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101).", "We introduce a novel method for representation learning that uses an artificial supervision signal based on counting visual primitives. This supervision signal is obtained from an equivariance relation, which does not require any manual annotation. We relate transformations of images to transformations of the representations. More specifically, we look for the representation that satisfies such relation rather than the transformations that match a given representation. In this paper, we use two image transformations in the context of counting: scaling and tiling. The first transformation exploits the fact that the number of visual primitives should be invariant to scale. The second transformation allows us to equate the total number of visual primitives in each tile to that in the whole image. These two transformations are combined in one constraint and used to train a neural network with a contrastive loss. The proposed task produces representations that perform on par or exceed the state of the art in transfer learning benchmarks.", "We propose a novel unsupervised learning approach to build features suitable for object detection and classification. The features are pre-trained on a large dataset without human annotation and later transferred via fine-tuning on a different, smaller and labeled dataset. The pre-training consists of solving jigsaw puzzles of natural images. To facilitate the transfer of features to other tasks, we introduce the context-free network (CFN), a siamese-ennead convolutional neural network. The features correspond to the columns of the CFN and they process image tiles independently (i.e., free of context). The later layers of the CFN then use the features to identify their geometric arrangement. Our experimental evaluations show that the learned features capture semantically relevant content. We pre-train the CFN on the training set of the ILSVRC2012 dataset and transfer the features on the combined training and validation set of Pascal VOC 2007 for object detection (via fast RCNN) and classification. These features outperform all current unsupervised features with (51.8 , ) for detection and (68.6 , ) for classification, and reduce the gap with supervised learning ( (56.5 , ) and (78.2 , ) respectively).", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. 
Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4% that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL .", "The current dominant paradigm for feature learning in computer vision relies on training neural networks for the task of object recognition using millions of hand labelled images. Is it also possible to learn features for a diverse set of visual tasks using any other form of supervision? In biology, living organisms developed the ability of visual perception for the purpose of moving and acting in the world. Drawing inspiration from this observation, in this work we investigated if the awareness of egomotion (i.e., self-motion) can be used as a supervisory signal for feature learning. As opposed to the knowledge of class labels, information about egomotion is freely available to mobile agents. We found that using the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class-label as supervision on the tasks of scene recognition, object recognition, visual odometry and keypoint matching.", "This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework [19] and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations." 
], "cite_N": [ "@cite_38", "@cite_4", "@cite_9", "@cite_6", "@cite_10", "@cite_20" ], "mid": [ "2148349024", "2750549109", "2321533354", "2785325870", "1520997877", "343636949" ] }
Learning Generalized Transformation Equivariant Representations via AutoEncoding Transformations
IN this paper, we aspire to show that transformations play a fundamental role in learning powerful representations, by transforming images as a means to reveal the intrinsic patterns of transformed visual structures. In particular, Transformation Equivariant Representation (TER) learning seeks to model representations that equivary to various transformations on images. In other words, the representation of an image ought to change in the same way as the image is transformed. This is motivated by the assumption that image representations should capture the intrinsic visual structures, such that transformations can be decoded from the representations of the original and transformed images. Based on this assumption, we formally present a novel criterion of AutoEncoding Transformations (AET) to learn the TERs for various groups of transformations.

Learning TERs was adopted in Hinton's seminal work on learning transformation equivariant capsules [1], and plays a critical role in the success of Convolutional Neural Networks (CNNs) [2]. Specifically, the representations learned by the CNNs are translation equivariant, as their feature maps are shifted in the same way as input images are translated. On top of these feature maps that preserve the visual structures with translation equivariance, fully connected layers are built to output the predicted labels of input images. Obviously, the translation-equivariant convolutional features play a pivotal role in delivering the state-of-the-art performance of deep networks. Thus, they have been extended beyond translations to learn more expressive representations equivariant to generic types of transformations, such as affine, projective, and homographic transformations. Along this direction, the group equivariant CNNs [3] were developed to guarantee that a transformation of the input images results in the same transformation of their feature maps. However, the group equivariant CNNs [3] and their variants [4], [5] are restricted to discrete transformations, and the resultant representations are also limited to a group representation of linear transformations. These limitations restrict their ability to model group representations of complex transformations that could be continuous and nonlinear in many learning tasks, ranging from unsupervised to semi-supervised and supervised learning.

Unsupervised Learning of Transformation Equivariant Representations

The focus of this paper is on the principle of autoencoding transformations and its application to learning transformation equivariant representations. The core idea is to encode data with representations from which the applied transformations can be decoded as much as possible. We will begin with unsupervised learning of such representations without involving any labeled data, and then proceed to generalize to semi-supervised and supervised representations by encoding label information as well. Unlike group equivariant CNNs, which learn feature maps mathematically satisfying transformation equivariance as a function of the group of transformations, the proposed AutoEncoding Transformations (AET) approach presents an autoencoding architecture that learns transformation equivariant representations by reconstructing applied transformations. As long as a transformation of input images results in equivariant representations, it should be well decoded from the representations of the original and transformed images.
Compared with the group equivariant CNNs, the AET model is more flexible and tractable in tackling any transformations and their compositions, since it does not rely on a strict convolutional structure to impose the equivariance. The AET is also in contrast to the conventional AutoEncoding Data (AED) paradigm, which instead aims to reconstruct the data rather than the transformations. Figures 1(a) and (b) illustrate the comparison between the AET and the AED. Since the space of transformations (e.g., the few parameters of a transformation) is of much lower dimension than the data space (e.g., the pixel space of images), the decoder of the AET can be much shallower than that of the AED. This allows the backpropagated errors to more sufficiently train the encoder, which models the representations of the input data in the AET architecture.

Moreover, an AET model can be trained from an information-theoretic perspective by maximizing the information that the learned representation contains about the applied transformation and the input data. This generalizes the group representations of linear transformations to more general forms that could equivary nonlinearly to input transformations, resulting in Generalized Transformation Equivariant Representations (GTERs) that can capture more complex patterns of visual structure under transformations. Unfortunately, this leads to an intractable optimization problem of maximizing the mutual information between representations and transformations. A variational lower bound of the mutual information can be derived by introducing a surrogate transformation decoder, yielding a novel model of AutoEncoding Variational Transformations (AVT) as an alternative to the deterministic AET.

(Semi-)Supervised Learning of Transformation Equivariant Representations

While both the AET and the AVT are trained in an unsupervised fashion, they can act as the basic representation for building (semi-)supervised classifiers. Along this direction, we can train the (Semi-)Supervised AutoEncoding Transformation (SAT) model, which jointly trains the transformation equivariant representations as well as the corresponding classifiers. Figure 1(c) illustrates the SAT model, where a classifier head is added upon the representation encoder of an AET network. The SAT can be based on either the deterministic AET or the probabilistic AVT architecture. In particular, along the direction pointed out by the AVT, we seek to train the proposed (semi-)supervised transformation equivariant classifiers by maximizing the mutual information of the learned representations with the transformations and labels. In this way, the trained SAT model can not only handle transformed data through their equivarying representations, but also encode the labeling information through the supervised classifier. The resultant SAT contains the deterministic AET-based model as a special case, obtained by restricting the representation encoder and the transformation decoder to deterministic forms.

The transformation equivariance in the SAT model stands in contrast to data augmentation by transformations in the deep learning literature [2]. First, data augmentation is only applicable to augmenting labeled examples for model training, and cannot be extended to unlabeled data; this limits its use in semi-supervised learning, where unlabeled data must be explored. Second, data augmentation aims to enforce transformation invariance, under which the labels of transformed data are supposed to be invariant.
This differs from our motivation to encode the inherent visual structures that equivary under various transformations. In fact, in the (semi-)supervised transformation equivariant classifiers, we aim to seamlessly integrate the principles of training both transformation equivariant representations and transformation invariant classifiers. Indeed, both principles have played key roles in the compelling performance of the CNNs and their modern variants. This is witnessed by the translation-equivariant convolutional feature maps and the classifiers built atop them, which are supposed to make transformation-invariant predictions via spatial pooling and fully connected layers. We will show that the proposed SAT extends the translation equivariance in the CNNs to cover a generic class of transformation equivariance, and encodes the labels to train the representations and the associated transformation invariant classifiers. We hope this can deepen our understanding of the interplay between transformation equivariance and invariance, both of which play fundamental roles in training robust classifiers with labeled and unlabeled data.

The remainder of this paper is organized as follows. We review the related works in Section 2. The unsupervised and (semi-)supervised learning of transformation equivariant representations are presented in the autoencoding transformation framework in Section 3 and Section 4, respectively. We present experimental results in Section 5 and Section 6 for unsupervised and semi-supervised tasks, and conclude the paper and discuss future works in Section 7.

Transformation-Equivariant Representations

Learning transformation-equivariant representations can be traced back to the seminal work on training capsule nets [1], [6], [7]. Transformation equivariance is characterized by the various directions of capsules, while the confidence of belonging to a particular class is captured by their lengths. Many efforts have been made in the literature [3], [4], [5] on extending the conventional translation-equivariant convolutions to cover more transformations. Among them are group equivariant convolutions (G-convolutions) [3], which have been developed to equivary to more types of transformations. The idea of group equivariance has also been introduced to capsule nets [5] by ensuring the equivariance of output pose vectors to a group of transformations with a generic routing mechanism. However, group equivariant convolutions are restricted to discrete transformations, which limits their ability to learn representations equivariant to generic continuous transformations.

Unsupervised Representation Learning

Auto-Encoders and GANs. Unsupervised auto-encoders have been extensively studied in the literature [8], [9], [10]. Existing auto-encoders are trained by reconstructing input data from the outputs of encoders, and a large category of auto-encoder variants have been proposed. Among them is the Variational Auto-Encoder (VAE) [11], which maximizes a lower bound of the data likelihood to train a pair of probabilistic encoder and decoder, while beta-VAE seeks to disentangle representations by introducing an adjustable hyperparameter on the capacity of the latent channel to balance the independence constraint against the reconstruction accuracy [12]. Denoising auto-encoders [10] attempt to reconstruct noise-corrupted data to learn robust representations, while contractive auto-encoders [13] encourage learning representations invariant to small perturbations of the data.
Along this direction, Hinton et al. [1] propose capsule networks to explore transformation equivariance by minimizing the discrepancy between the reconstructed and target data. On the other hand, Generative Adversarial Nets (GANs) have also been used to train unsupervised representations. Unlike the auto-encoders, the GANs [14] and their variants [15], [16], [17], [18] generate data from noise drawn from a simple distribution, with a discriminator trained adversarially to distinguish between real and fake data. The sampled noise can be viewed as the representation of the generated data over a manifold, and one can train an encoder by inverting the generator to find the generating noise; this can be implemented by jointly training a pair of mutually inverse generator and encoder [15], [16]. There also exist GANs that generalize better in producing unseen data, based on the Lipschitz assumption on the real data distribution [17], [18]; these can give rise to more powerful representations of data beyond the training examples [15], [16], [19]. Compared with the auto-encoders, GANs do not rely on learning a one-to-one reconstruction of the data; instead, they aim to generate the entire distribution of the data.

Self-Supervisory Signals. There exist many other unsupervised learning methods using different types of self-supervised signals to train deep networks. Noroozi and Favaro [20] propose to solve Jigsaw puzzles to train a convolutional neural network. Doersch et al. [21] train the network by inferring the relative positions of patches sampled from an image as self-supervised information. Instead, Noroozi et al. [22] count features that satisfy equivalence relations between downsampled and tiled images. Gidaris et al. [23] propose to train RotNets by predicting a discrete set of image rotations, but RotNets are unable to handle generic continuous transformations and their compositions. Dosovitskiy et al. [24] create a set of surrogate classes by applying various transformations to individual images; however, the resultant features could over-discriminate visually similar images, as they always belong to different surrogate classes. Unsupervised features have also been learned from videos by estimating the self-motion of moving objects between consecutive frames [25].

(Semi-)Supervised Representation Learning

In addition, there exist a large number of semi-supervised models in the literature. Here, we particularly mention three state-of-the-art methods that will be compared in the experiments. Temporal Ensembling [26] and Mean Teacher [27] both use an ensemble of teachers to supervise the training of a student model. Temporal Ensembling uses the exponential moving average of predictions made by past models on unlabeled data as targets to train the student model. Instead, Mean Teacher maintains a teacher model whose weights are the exponential moving average of the student model's weights. On the contrary, Virtual Adversarial Training (VAT) [28] seeks to minimize the change in predictions on unlabeled examples when the inputs are adversarially perturbed, which results in a robust model that prefers smooth predictions over the unlabeled data.

The SAT also differs from transformation-based data augmentation, in which the transformed samples and their labels are used directly as additional training examples [2]. First, in semi-supervised learning, unlabeled examples cannot be directly augmented to form training examples due to their missing labels.
Moreover, data augmentation needs to preserve the labels of augmented images, and this prevents us from applying transformations that could severely distort the images (e.g., shearing, rotations with arbitrary angles, and projective transformations) or invalidate the associated labels (e.g., vertically flipping "6" to "9"). In contrast, the SAT avoids directly using the labels of transformed images to train the classifier; instead, it attempts to encode the visual structures of images that equivary under various transformations, without access to their labels. This leads to a label-blind TER regularizer for exploring unlabeled examples in the semi-supervised problem.

UNSUPERVISED LEARNING OF TRANSFORMATION EQUIVARIANT REPRESENTATIONS

In this section, we first present the autoencoding transformation architecture for learning transformation equivariant representations in a deterministic fashion. Then, a variational alternative is presented to handle the uncertainty in the representation learning by maximizing the mutual information between the learned representations and the applied transformations.

AET: A Deterministic Model

We begin by defining the notations used in the proposed AutoEncoding Transformation (AET) architecture. Consider a random transformation $t$ sampled from a transformation distribution $p(t)$ (e.g., warping, projective, and homographic transformations), as well as an image $x$ drawn from a data distribution $p(x)$ in a sample space $\mathcal X$. The application of $t$ to $x$ results in a transformed image $t(x)$. The goal of the AET is to learn a representation encoder $E_\theta: x \mapsto E_\theta(x)$ with parameters $\theta$, which maps a sample $x \sim p(x)$ to its representation $E_\theta(x)$ in a linear space $\mathcal Z$. For this purpose, one needs to learn a transformation decoder with parameters $\phi$,
$$D_\phi: [E_\theta(x), E_\theta(t(x))] \mapsto \hat t,$$
which makes an estimate $\hat t$ of the input transformation $t$ from the representations of the original and transformed samples. Since the transformation decoder takes the encoder outputs rather than the original and transformed images, this pushes the encoder to capture the inherent visual structures of the images so that a satisfactory estimate of the transformation can be made.

The AET can then be trained to jointly learn the representation encoder $E_\theta$ and the transformation decoder $D_\phi$. A loss function $\ell(t, \hat t)$ measuring the deviation between a transformation $t$ and its estimate $\hat t$ is minimized to train the AET over $p(t)$ and $p(x)$:
$$\min_{\theta,\phi}\ \mathbb E_{t\sim p(t),\, x\sim p(x)}\ \ell(t, \hat t), \quad (1)$$
where the estimated transformation $\hat t$ can be written as a function of the encoder $E_\theta$ and the decoder $D_\phi$ such that $\hat t = D_\phi[E_\theta(x), E_\theta(t(x))]$, and the expectation $\mathbb E$ is taken over the distributions of transformations and data. In this way, the encoder $E_\theta$ and the decoder $D_\phi$ can be jointly trained over mini-batches by back-propagating the gradient of the loss to update their parameters; a minimal sketch of this objective is given below.
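The following PyTorch-style sketch instantiates objective (1), using MSE over transformation parameters as one natural choice of the loss $\ell(t,\hat t)$ for continuous transformations; the encoder and decoder here are stand-in modules, not the paper's networks.

```python
import torch
import torch.nn.functional as F

def aet_loss(encoder, decoder, x, tx, t_params):
    """Objective (1): estimate the transformation from the pair of
    representations and penalize its deviation from the true one."""
    t_hat = decoder(torch.cat([encoder(x), encoder(tx)], dim=1))
    return F.mse_loss(t_hat, t_params)   # MSE as an instance of l(t, t_hat)

# Toy usage with illustrative stand-in modules.
enc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 32))
dec = torch.nn.Linear(64, 8)   # e.g., 8 parameters of a projective transform
print(aet_loss(enc, dec, torch.randn(2, 3, 32, 32),
               torch.randn(2, 3, 32, 32), torch.randn(2, 8)))
```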
AVT: A Probabilistic Model

Alternatively, we can train transformation equivariant representations that contain as much information as possible about the applied transformations so as to recover them.

Notations. Formally, our goal is to learn an encoder that maps a transformed sample $t(x)$ to a probabilistic representation with mean $f_\theta$ and variance $\sigma^2_\theta$. This results in the following probabilistic representation $z \in \mathcal Z$ of $t(x)$:
$$z = f_\theta(t(x)) + \sigma_\theta(t(x)) \odot \varepsilon, \quad (2)$$
where $\varepsilon$ is sampled from a normal distribution $p(\varepsilon) \triangleq \mathcal N(\varepsilon\,|\,0, I)$, with $\odot$ denoting the element-wise product. Thus, the resultant probabilistic representation $z$ follows a normal distribution $p_\theta(z|t,x) \triangleq \mathcal N\big(z \,|\, f_\theta(t(x)), \sigma^2_\theta(t(x))\big)$ conditioned on the randomly sampled transformation $t$ and the input data $x$. On the other hand, the representation of the original sample $x$ is a special case when $t$ is the identity transformation:
$$\tilde z = f_\theta(x) + \sigma_\theta(x) \odot \tilde\varepsilon, \quad (3)$$
whose mean and variance are computed by the deep network with the same weights $\theta$, and $\tilde\varepsilon \sim p(\tilde\varepsilon) \triangleq \mathcal N(\tilde\varepsilon\,|\,0, I)$.

Generalized Transformation Equivariance

In the conventional definition of transformation equivariance, there should exist an automorphism $\rho(t) \in \mathrm{Aut}(\mathcal Z): \mathcal Z \to \mathcal Z$ in the representation space, such that
$$z = [\rho(t)](\tilde z).$$
(Note that the transformation $t$ in the sample space $\mathcal X$ and the corresponding transformation $\rho$ in the representation space $\mathcal Z$ need not be the same, but the representation transformation $\rho(t)$ should be a function of the sample transformation $t$.) Here the transformation $\rho(t)$ is independent of the input sample $x$. In other words, the representation $z$ of a transformed sample is completely determined by the original representation $\tilde z$ and the applied transformation $t$, with no need to access the sample $x$. This is called the steerability property in the literature [4], which enables us to compute $z$ by applying the sample-independent transformation $\rho(t)$ directly to the original representation $\tilde z$.

This property can be generalized without relying on the linear group representations of transformations through automorphisms. Instead of sticking with a linear $\rho(t)$, one can seek a more general relation between $z$ and $\tilde z$, independently of $x$. From an information-theoretic point of view, this requires that $(\tilde z, t)$ jointly contain all necessary information about $z$, so that $z$ can be best estimated from them without a direct access to $x$. This leads us to maximizing the mutual information $I_\theta(z; \tilde z, t)$ to learn the generalized transformation equivariant representations. Indeed, by the chain rule and the nonnegativity of mutual information, we have
$$I_\theta(z; \tilde z, t) = I_\theta(z; \tilde z, t, x) - I_\theta(z; x \,|\, \tilde z, t) \le I_\theta(z; \tilde z, t, x),$$
which shows that $I_\theta(z; \tilde z, t)$ is upper bounded by the mutual information $I_\theta(z; \tilde z, t, x)$ between $z$ and $(\tilde z, t, x)$. Clearly, when $I_\theta(z; x \,|\, \tilde z, t) = 0$, $I_\theta(z; \tilde z, t)$ attains the maximum value of its upper bound $I_\theta(z; \tilde z, t, x)$. In this case, $x$ provides no more information about $z$ than $(\tilde z, t)$, which implies that one can estimate $z$ directly from $(\tilde z, t)$ without accessing $x$. Thus, we propose to solve
$$\theta^\star = \arg\max_\theta\, I_\theta(z; \tilde z, t)$$
to learn the probabilistic encoder $\theta$ in pursuit of such a generalized TER. However, a direct maximization of the above mutual information needs to evaluate an intractable posterior $p_\theta(t|z,\tilde z)$ of the transformation. Thus, we instead lower bound the mutual information by introducing a surrogate decoder $q_\phi(t|z,\tilde z)$ with parameters $\phi$ to approximate the true posterior.

Variational Approach

Unlike the variational autoencoder, which lower-bounds the data likelihood [11], we directly take a lower bound of the mutual information [29] between $z$ and $(\tilde z, t)$ as follows:
$$
\begin{aligned}
I_\theta(z; \tilde z, t) &= I_\theta(z; \tilde z) + I_\theta(z; t \,|\, \tilde z) \ge I_\theta(z; t \,|\, \tilde z)\\
&= H(t\,|\,\tilde z) - H(t\,|\,z, \tilde z) = H(t\,|\,\tilde z) + \mathbb E_{p_\theta(t,z,\tilde z)} \log p_\theta(t\,|\,z,\tilde z)\\
&= H(t\,|\,\tilde z) + \mathbb E_{p_\theta(t,z,\tilde z)} \log q_\phi(t\,|\,z,\tilde z) + \mathbb E_{p(z,\tilde z)} D\big(p_\theta(t\,|\,z,\tilde z)\,\|\,q_\phi(t\,|\,z,\tilde z)\big)\\
&\ge H(t\,|\,\tilde z) + \mathbb E_{p_\theta(t,z,\tilde z)} \log q_\phi(t\,|\,z,\tilde z) \triangleq \tilde I_{\theta,\phi}(z; \tilde z, t),
\end{aligned}
$$
where $H(\cdot)$ denotes the (conditional) entropy, and $D(p_\theta\,\|\,q_\phi)$ is the nonnegative Kullback-Leibler divergence between $p_\theta$ and $q_\phi$. We choose to maximize the variational lower bound $\tilde I_{\theta,\phi}(z; \tilde z, t)$.
Since $H(t\,|\,\tilde z)$ is nonnegative and independent of the model parameters $\theta$ and $\phi$, we choose to solve
$$\max_{\theta,\phi}\ \mathcal L^{\mathrm{unsup}}_{\theta,\phi} \triangleq \mathbb E_{p_\theta(t,z,\tilde z)} \log q_\phi(t\,|\,z,\tilde z) = \mathbb E_{p(x),p(t)}\, \mathbb E_{p(\varepsilon),p(\tilde\varepsilon)} \log q_\phi(t\,|\,z,\tilde z) \quad (4)$$
to learn $\theta$ and $\phi$ under the expectation over $p(t,z,\tilde z)$, where the equality follows from the generative process for the representations in Eqs. (2)-(3).

Variational Transformation Decoder

To estimate a family of continuous transformations, we choose a normal distribution $\mathcal N\big(t\,|\,d_\phi(z,\tilde z), \sigma^2_\phi(z,\tilde z)\big)$ as the posterior $q_\phi(t|z,\tilde z)$ of the transformation decoder, where the mean $d_\phi(z,\tilde z)$ and the variance $\sigma^2_\phi(z,\tilde z)$ are each implemented by a deep network. For categorical transformations (e.g., horizontal vs. vertical flips, or rotations in different directions), a categorical distribution $\mathrm{Cat}\big(t\,|\,\pi_\phi(z,\tilde z)\big)$ can be adopted as the posterior $q_\phi(t|z,\tilde z)$, where each entry of $\pi_\phi(z,\tilde z)$ is the probability mass of a transformation type. A hybrid distribution can also be defined to combine multiple continuous and categorical transformations, making the variational transformation decoder more flexible and appealing for handling complex transformations.

The posterior $q_\phi(t|z,\tilde z)$ of the transformation is a function of the representations of the original and transformed images. Thus, a natural choice is to use a Siamese encoder network with shared weights to output the representations of the original and transformed samples, and to construct the transformation decoder atop the concatenated representations. Figure 2(a) illustrates the architecture of the AVT network. Finally, it is not hard to see that the deterministic AET model can be viewed as a special case of the AVT, obtained when the probabilistic representation encoder $p_\theta(z|t,x)$ and the transformation decoder $q_\phi(t|z,\tilde z)$ are set to deterministic forms as in the AET; a minimal sketch of the Gaussian decoder objective follows below.
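With the Gaussian transformation decoder, maximizing objective (4) amounts to minimizing a negative log-likelihood over the transformation parameters. Below is a minimal sketch with constants dropped; shapes and names are illustrative assumptions.

```python
import torch

def avt_nll(t_params, d_mean, d_logvar):
    """Negative log-likelihood -log q_phi(t | z, z~) of the true
    transformation under N(t | d_phi, sigma_phi^2), up to constants."""
    return 0.5 * ((t_params - d_mean) ** 2 / d_logvar.exp()
                  + d_logvar).sum(dim=1).mean()

# Toy check: an accurate, confident decoder yields a low (here negative) loss.
t = torch.randn(4, 8)
print(avt_nll(t, d_mean=t.clone(), d_logvar=torch.full((4, 8), -3.0)))
```

Predicting a log-variance alongside the mean lets the decoder express its own uncertainty about the transformation, which is what distinguishes this objective from the plain regression loss of the deterministic AET.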
(SEMI-)SUPERVISED LEARNING OF TRANSFORMATION EQUIVARIANT REPRESENTATIONS

Autoencoding transformations can act as a basic representation block in many learning problems. In this section, we present its role in (semi-)supervised learning tasks, enabling more accurate classification of samples by capturing their transformation equivariant representations.

SAT: (Semi-)Supervised Autoencoding Transformations

The unsupervised learning of autoencoding transformations can be generalized to (semi-)supervised cases with labeled samples. Accordingly, the goal is formulated as learning representations that contain as much (mutual) information as possible about not only the applied transformations but also the data labels. Given a labeled sample $(x, y)$, we can define the joint distribution over the representations, the transformation, and the label as
$$p_\theta(y, t, z, \tilde z \,|\, x) = p(t)\, p_\theta(\tilde z \,|\, x)\, p_\theta(z \,|\, t, x)\, p(y \,|\, x),$$
where we have assumed that $y$ is independent of $t$ and $z$ once the sample $x$ is given. In the presence of sample labels, the pursuit of transformation equivariant representations can be performed by maximizing the joint mutual information $I_\theta(y, z; t, \tilde z)$, such that the representation $\tilde z$ of the original sample and the transformation $t$ contain sufficient information to classify the label $y$ as well as to learn the representation $z$ equivariant to the transformed sample.

Like in (4) for the unsupervised case, the joint mutual information can be lower bounded in the following way:

$$
\begin{aligned}
I_\theta(y,z;\tilde z,t) &= I_\theta(y,z;\tilde z) + I_\theta(y,z;t\,|\,\tilde z)\\
&= \big(I_\theta(z;\tilde z) + I_\theta(y;\tilde z\,|\,z)\big) + \big(I_\theta(z;t\,|\,\tilde z) + I_\theta(y;t\,|\,z,\tilde z)\big)\\
&\ge I_\theta(y;\tilde z\,|\,z) + I_\theta(z;t\,|\,\tilde z)\\
&\ge H(y\,|\,z) + \mathbb{E}_{p_\theta(y,z,\tilde z)}\log q_\phi(y\,|\,z,\tilde z) + H(t\,|\,\tilde z) + \mathbb{E}_{p_\theta(t,z,\tilde z)}\log q_\phi(t\,|\,z,\tilde z) \triangleq \tilde I_{\theta,\phi}(y,z;\tilde z,t),
\end{aligned}
$$

where the first two equalities apply the chain rule of mutual information, and the first inequality uses the nonnegativity of mutual information. In particular, we usually have $I_\theta(y;t\,|\,z,\tilde z)=0$, which means a transformation should not change the label $y$ of a sample (i.e., transformation invariance of sample labels). The second inequality follows from the variational bound derived in the last section. One can further assume that the surrogate posterior $q_\phi(y\,|\,z,\tilde z)$ over labels simplifies to $q_\phi(y\,|\,\tilde z)$, since the representation of the original sample is supposed to provide sufficient information to predict the label.

Since $H(y\,|\,z)\ge 0$ and $H(t\,|\,\tilde z)$ is independent of the model parameters $\theta$ and $\phi$, we maximize the following variational lower bound:

$$
\max_{\theta,\phi}\ \mathcal{L}^{\mathrm{sup}}_{\theta,\phi} \triangleq \mathbb{E}_{p_\theta(y,\tilde z)}\log q_\phi(y\,|\,\tilde z) + \mathbb{E}_{p_\theta(t,z,\tilde z)}\log q_\phi(t\,|\,z,\tilde z) = \mathbb{E}_{p(x)}\Big[\mathbb{E}_{p(y|x),p(\tilde\varepsilon)}\log q_\phi(y\,|\,\tilde z) + \mathbb{E}_{p(t),p(\varepsilon),p(\tilde\varepsilon)}\log q_\phi(t\,|\,z,\tilde z)\Big], \quad (5)
$$

where $z$ and $\tilde z$ are sampled by following Eqs. (2)-(3), and the ground-truth label $y$ is drawn directly from the label distribution $p(y|x)$. In the deterministic case, it is not hard to show that the first term of (5) reduces to the cross-entropy loss used to train a supervised classifier, while the second term reduces to the loss (1) of the deterministic AET model. In this sense, the AET loss acts as a regularizer on the cross-entropy loss when training a supervised model.

In addition, a semi-supervised model can be trained by combining the unsupervised and supervised objectives (4) and (5):

$$
\max_{\theta,\phi}\ \mathcal{L}^{\mathrm{unsup}}_{\theta,\phi} + \lambda\,\mathcal{L}^{\mathrm{sup}}_{\theta,\phi} \quad (6)
$$

with a nonnegative balancing coefficient $\lambda$. This enables the model to jointly explore labeled and unlabeled examples and learn representations that equivary to various transformations. We will demonstrate that the SAT achieves superior performance to the existing state-of-the-art (semi-)supervised models. Moreover, the competitive performance also shows the great potential of the model as a basic representation block in many machine learning and computer vision tasks. Figure 2(b) illustrates the architecture of the SAT model in comparison with its AVT counterpart. In particular, in the SAT, the transformation and label decoders are jointly trained atop the representation encoder; a sketch of this two-branch design follows below.
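The weight-shared (Siamese) two-branch design underlying the AET, AVT, and SAT can be sketched as follows in PyTorch; the layer sizes are illustrative stand-ins for the NIN/AlexNet backbones used in the experiments, not the exact architectures.

```python
import torch
import torch.nn as nn

class SiameseAET(nn.Module):
    """Weight-shared (Siamese) encoder applied to the original and the
    transformed image; a shallow head regresses the transformation."""
    def __init__(self, feat_dim=384, param_dim=8):
        super().__init__()
        # Illustrative stand-in for the convolutional backbones in the paper.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 96, 3, padding=1), nn.BatchNorm2d(96), nn.ReLU(),
            nn.Conv2d(96, 192, 3, stride=2, padding=1),
            nn.BatchNorm2d(192), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(192, feat_dim // 2),
        )
        self.decoder = nn.Linear(feat_dim, param_dim)  # transformation head

    def forward(self, x, tx):
        z, zt = self.encoder(x), self.encoder(tx)   # same module: shared weights
        return self.decoder(torch.cat([z, zt], dim=1))

model = SiameseAET()
t_hat = model(torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32))
```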
EXPERIMENTS: UNSUPERVISED LEARNING

In this section, we compare the proposed deterministic AET and probabilistic AVT models against other unsupervised methods on the CIFAR-10, ImageNet, and Places datasets. The evaluation follows the protocols widely adopted by many existing unsupervised methods: the learned representations are applied to downstream tasks.

CIFAR-10 Experiments

First, we evaluate the AET and AVT models on the CIFAR-10 dataset.

Experiment Settings

Architecture. To make a fair and direct comparison with existing models, the Network-In-Network (NIN) architecture is adopted on the CIFAR-10 dataset for the unsupervised learning task [23], [30]. The NIN consists of four convolutional blocks, each of which contains three convolutional layers. Both the AET and the AVT have two NIN branches with shared weights, taking the original and the transformed images as their respective inputs. The output features of the fourth block of the two branches are concatenated and average-pooled to form a 384-d feature vector. An output layer then follows to predict the transformation for the AET, or the mean $d_\phi$ and the log-variance $\log\sigma^2_\phi$ of the predicted transformation for the AVT, where the logarithm maps the variance to an unconstrained real value. The first two blocks of each branch are used as the encoder network to output the deterministic representation for the AET, or the mean $f_\theta$ of the probabilistic representation for the AVT. An additional $1\times 1$ convolution followed by a batch normalization layer is added upon the encoder to produce the log-variance $\log\sigma^2_\theta$.

Implementation Details. Both the AET and the AVT networks are trained by SGD with a batch size of 512 original images plus their transformed versions. Momentum and weight decay are set to 0.9 and $5\times 10^{-4}$, respectively. For the AET, the learning rate is initialized to 0.1 and scheduled to drop by a factor of 5 after 240, 480, 640, 800, and 1,000 epochs; the network is trained for a total of 1,500 epochs. The AVT network is trained for 4,500 epochs; its learning rate is initialized to $10^{-3}$, increased to $5\times 10^{-3}$ at epoch 50, and then gradually decayed to $10^{-5}$ starting from epoch 3,000. In the AVT, a single representation is randomly sampled from the encoder $p_\theta(z|t,x)$ and fed into the decoder $q_\phi(t|z,\tilde z)$ during training. To fully exploit the uncertainty of the representations, five samples are drawn and averaged as the representation of an image to train the downstream classifiers; we found that averaging randomly sampled representations outperforms using only the mean of the representation.

Results

Comparison with Other Methods. To evaluate the effectiveness of a learned unsupervised representation, a classifier is usually trained upon it. In our experiments, we follow the existing evaluation protocols [23], [24], [31], [32], [33] by building a classifier on top of the second convolutional block. First, we evaluate the classification results obtained with the AET and AVT representations using both model-based and model-free classifiers. For the model-based classifier, we follow [23] and train a non-linear classifier with three fully-connected (FC) layers: each of the two hidden layers has 200 neurons with batch normalization and ReLU activations, and the output layer is a soft-max layer with ten neurons, one per image class. We also test a convolutional classifier on top of the unsupervised features by adding a third NIN block, whose output feature map is average-pooled and connected to a linear soft-max classifier.

Table 1 shows the results of the different models, comparing both fully supervised and unsupervised methods on CIFAR-10. The unsupervised AET and AVT with the convolutional classifier almost reach the same error rate as the fully supervised NIN counterpart with four convolutional blocks (7.82% and 7.75% vs. 7.2%). We also compare the models trained with varying numbers of FC layers in Table 2: the AVT leads the AET, and it consistently achieves the smallest errors no matter which classifier is used. We also note that the probabilistic AVT outperforms the deterministic AET in these experiments, which is likely due to the AVT's ability to model the uncertainty of representations when training the downstream classifiers.
We also find that the projective transformation performs better than the affine transformation when used to train the AET, and thus we mainly use the projective transformation to train the AVT.

Comparison Based on Model-free KNN Classifiers. We also test a model-free KNN classifier based on the average-pooled feature representations from the second convolutional block. Because the KNN classifier requires no training from labeled examples, it enables a direct evaluation of the quality of the learned features. Table 3 reports the results. Table 4 reports the results of different models on CIFAR-10 when trained with few labeled examples: both the AET and the AVT outperform the fully supervised models as well as the other unsupervised models when only a few labeled examples (≤ 1,000 samples per class) are available.

ImageNet Experiments

We further evaluate the performance of the AET and AVT on the ImageNet dataset.

Architectures and Training Details. For a fair comparison with the existing methods [20], [23], [34], two AlexNet branches with shared parameters are created, taking the original and the transformed images as inputs to train the unsupervised models. The 4,096-d output features from the second-to-last fully connected layer in each branch are concatenated and fed into the transformation decoder. We still use SGD to train the network, with a batch size of 768 images plus their transformed counterparts, a momentum of 0.9, and a weight decay of $5\times 10^{-4}$. For the AET model, the initial learning rate is set to 0.01 and dropped by a factor of 10 at epochs 100 and 150; the model is trained for 200 epochs in total. For the AVT, the initial learning rate is set to $10^{-3}$ and dropped by a factor of 10 at epochs 300 and 350; the AVT is trained for 400 epochs in total. We still average over five samples from the encoder outputs to train the downstream classifiers when evaluating the AVT. Since the projective transformation has shown better performance, we adopt it for the experiments on ImageNet.

Results. Table 5 reports the Top-1 accuracies of the compared methods on ImageNet, following the evaluation protocol in [20]. Two settings are adopted for evaluation, where Conv4 and Conv5 denote training the remaining part of AlexNet on top of Conv4 and Conv5, respectively, with the labeled data; all the bottom convolutional layers up to Conv4 or Conv5 are frozen after they are trained in an unsupervised fashion. In both settings, the AVT model consistently outperforms the other unsupervised models, including the AET. We also compare with the fully supervised models, which give the upper bound of the classification performance by training the AlexNet end-to-end with all labeled data, and with random models, whose classifiers are trained on top of Conv4 and Conv5 with randomly sampled weights, setting the lower bound of the performance. By comparison, the compared models narrow the performance gap to the upper-bound supervised models from 9.7% and 15.7% (by RotNet and DeepCluster) on Conv4 and Conv5, to 6.5% and 12.7% by the AET, and further to 5.5% and 11.3% by the AVT. Moreover, we also follow the testing protocol adopted in [40] and compare the models by training a 1,000-way linear classifier on top of different numbers of convolutional layers in Table 6. Again, the AVT consistently outperforms all the compared unsupervised models in terms of Top-1 accuracy.

Places Experiments

We also compare the different models on the Places dataset; Table 7 reports the results.
Unsupervised models are pretrained on the ImageNet dataset, and a linear logistic regression classifier is trained on top of different layers of convolutional feature maps with Places labels. It assesses the generalizability of unsupervised features from one dataset to another. The models are still based on AlexNet variants. We compare with the fully supervised models trained with the Places labels and ImageNet labels respectively, as well as with the random networks. Both the AET and the AVT models outperform the other unsupervised models, except performing slightly worse than Counting [40] with a shallow representation by Conv1 and Conv2. EXPERIMENTS: (SEMI-)SUPERVISED LEARNING We compare the proposed SAT model with the other stateof-the-art semi-supervised methods in this section. For the sake of fair comparison, we follow the test protocol used in literature [26], [27] on both CIFAR-10 [42] and SVHN [43], which are widely used as the benchmark datasets to evaluate the semi-supervised models. Network Architecture and Implementation Details Network Architecture For the sake of a fair comparison, a 13-layer convolutional neural network, which has been widely used in existing semi-supervised models [26], [27], [28], is adopted as the backbone to build the SAT. It consists of three convolutional blocks, each of which contains three convolution layers. The SAT has two branches of such three blocks with shared weights, each taking the original and transformed images as input, respectively. The output feature maps from the third blocks of two branches are concatenated and average-pooled, resulting in a 256-d feature vector. A fully-connected layer follows to predict the mean d φ and the log-of-variance log σ 2 φ of the transformation. The first two blocks are used as the encoder to output the mean f θ of the representation, upon which an additional 1 × 1 convolution layer with batch normalization is added to compute the log-of-variance log σ 2 θ . In addition, a classifier head is built on the representation from the encoder. Specifically, we draw five random representations of an input image, and feed their average to the classifier. The classifier head has the same structure as the third convolutional block but its weights differ from the Siamese branches of transformation decoder. The output feature map of this convolutional block is globally averagepooled to 128-d feature vector, and a softmax fully connected layer follows to predict the image label. Implementation Details The representation encoder, transformation decoder and the classifier are trained in an end-toend fashion. In particular, the SGD is adopted to iteratively update their weights over a minbatch with 500 images, their transformed counterparts, and 40 labeled examples. Momentum and weight decay are set to 0.9 and 5 × 10 −4 , respectively. The model is trained for a total of 4, 500 epochs. The learning rate is initialized to 10 −3 . It is increased to 5 × 10 −3 at epoch 50, before it is linearly decayed to 10 −5 starting from 3, 000 epochs. For a fair comparison, we adopt the entropy minimization used in the state-of-the-art virtual adversarial training [28]. A standard set of data augmentations in literature [26], [27], [28] are also adopted through experiments, which include both horizontal flips and random translations on CIFAR-10, and only random translations on SVHN. The projective transformation that performs the better than the affine transformation is adopted to train the semi-supervised representations. 
Results We compare with the state-of-the-art semi-supervised methods in literature [26], [27]. In particular, the proposed SAT reduces the average error rates of Mean Teacher (the second best performing method) by 30.9%, 25.6%, 22.2% relatively with 1, 000, 2, 000, and 4, 000 labels on CIFAR-10, while reducing them by 1.1%, 11%, 12.9% relatively with 250, 500, and 1, 000 labels on SVHN. The compared semi-supervised methods, including Π model [26], Temporal Ensembling [26], and Mean Teacher [27], attempt to maximize the consistency of model predictions on the transformed and original images to train semi-supervised classifiers. While they also apply the transformations to explore unlabeled examples, the competitive performance of the SAT model shows the transformationequivariant representations are more compelling for classifying images than those compared methods predicting consistent labels under transformations. It justifies the proposed criterion of pursuing the transformation equivariance as a regularizer to train a classifier. It is not hard to see that the SAT can be integrated into the other semi-supervised methods as their base representations, and we believe this could further boost their performances. This will be left to the future work as it is beyond the scope of this paper. The Impact of Entropy Minimization We also conduct an ablation study of the Entropy Minimization (EntMin) on the model performance. EntMin was used in VAT [28] that outperformed the other semi-supervised methods in literature. Here, we compare the error rates between the SAT and the VAT with or without the EntMin. As shown in Table 10, no matter if the entropy minimization is adopted, the SAT always outperforms the corresponding VAT. We also note that, even without entropy minimization, the SAT still performs better than the other state-of-the-art semi-supervised classifiers such as Mean Teacher, Temporal Ensembling, and Π model shown in Table 8. This demonstrates the compelling performance of the SAT model. Comparison with Data Augmentation by Transformations We also compare the performances between the SAT and a classification network trained with the augmented images by the transformations. Specifically, in each minibatch, input images are augmented with the same set of random projective transformations used in the SAT. The transformationaugmented images and their labels are used to train a network with the same 13-layer architecture that has been adopted as the SAT backbone. Note that the transformation augmentations are applied on top of the standard augmentations mentioned in the implementation details for a fair comparison with the SAT. Table 11 compares the results between the SAT and the Data Augmentation by Transformation (DAT) classifier on CIFAR-10. It shows the SAT significantly outperforms 6: Top-1 accuracy with linear layers on ImageNet. AlexNet is used as backbone to train the unsupervised models under comparison. A 1, 000-way linear classifier is trained upon various convolutional layers of feature maps that are spatially resized to have about 9, 000 elements. Fully supervised and random models are also reported to show the upper and the lower bounds of unsupervised model performances. Only a single crop is used and no dropout or local response normalization is used during testing, except the models denoted with * where ten crops are applied to compare results. 
Moreover, the projective transformations used in the SAT could severely distort training images that could incur undesired update to the model weights if the distorted images were used to naively train the network. This is witnessed by the results that the data augmentation by transformations performs even worse than the supervised-only method (see Table 8). In contrast, the SAT avoids a direct use of the transformed images to supervise the model training with their labels. Instead, it trains the learned representations to contain as much information as possible about the transformations. The superior performance demonstrates its outstanding ability of classifying images by exploring the variations of visual structures induced by transformations on both labeled and unlabeled images. CONCLUSION AND FUTURE WORKS In this paper, we present to use a novel approach of Au-toEncoding Transformations (AET) to learn representations that equivary to applied transformations on images. Unlike the group equivariant convolutions that would become intractable with a composition of complex transformations, the AET model seeks to learn representations of arbitrary forms by reconstructing transformations from the encoded representations of original and transformed images. The idea is further extended to a probabilistic model by maximizing the mutual information between the learned representation and the applied transformation. The intractable maximization problem is handled by introducing a surrogate transformation decoder and maximizing a variational lower bound of the mutual information, resulting in the Autoencoding Variational Transformations (AVT). Along this direction, a (Semi-)Supervised Autoencoding Transformation (SAT) approach can be derived by maximizing the joint mutual information of the learned representation with both the transformation and the label for a given sample. The proposed AET paradigm lies a solid foundation to explore transformation equivariant representations in many learning tasks. Particularly, we conduct experiments to show its superior performances on both unsupervised learning to semi-(supervised) learning tasks following standard evaluation protocols. In future, we will explore the great potential of applying the learned AET representation as the building block on more learning tasks, such as (instance) semantic segmentation, object detection, super-resolution reconstruction, few-shot learning, and fine-grained classification. Guo-Jun Qi Guo-Jun Qi (M14-SM18) is the Chief Scientist leading and overseeing an international R&D team for multiple artificial intelligent services on the Huawei Cloud since August 2018. He was a faculty member in the Department of Computer Science and the director of MAchine Perception and LEarning (MAPLE) Lab at the University of Central Florida since August 2014. Prior to that, he was also a Research Staff Member at IBM T.J. Watson Research Center, Yorktown Heights, NY. His research interests include machine learning and knowledge discovery from multi-modal data sources to build smart and reliable information and decision-making systems. Dr. Qi has published more than 100 papers in a broad range of venues in pattern recognition, machine learning and computer vision. He also has served or will serve as a general co-chair for ICME 2021,
6,935
1906.08628
2972729785
Transformation Equivariant Representations (TERs) aim to capture the intrinsic visual structures that equivary to various transformations by expanding the notion of translation equivariance underlying the success of Convolutional Neural Networks (CNNs). For this purpose, we present both deterministic AutoEncoding Transformations (AET) and probabilistic AutoEncoding Variational Transformations (AVT) models to learn visual representations from generic groups of transformations. While the AET is trained by directly decoding the transformations from the learned representations, the AVT is trained by maximizing the joint mutual information between the learned representation and transformations. This results in Generalized TERs (GTERs) equivariant against transformations in a more general fashion by capturing complex patterns of visual structures beyond the conventional linear equivariance under a transformation group. The presented approach can be extended to (semi-)supervised models by jointly maximizing the mutual information of the learned representation with both labels and transformations. Experiments demonstrate the proposed models outperform the state-of-the-art models in both unsupervised and (semi-)supervised tasks.
In addition, there exist a large number of semi-supervised models in the literature. Here, we particularly mention three state-of-the-art methods that will be compared in experiments. Temporal ensembling @cite_26 and mean teachers @cite_18 both use an ensemble of teachers to supervise the training of a student model. Temporal ensembling uses the exponential moving average of predictions made by past models on unlabeled data as targets to train the student model. Instead, mean teachers update the student model with the exponential moving average of the weights of past models. On the contrary, the Virtual Adversarial Training (VAT) @cite_7 seeks to minimize the change of predictions on unlabeled examples when their output values are adversarially altered. This could result in a robust model that prefers smooth predictions over unlabeled data.
{ "abstract": [ "The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35 on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55 to 6.28 , and on ImageNet 2012 with 10 of the labels from 35.24 to 9.11 .", "In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44 to 7.05 in SVHN with 500 labels and from 18.63 to 16.55 in CIFAR-10 with 4000 labels, and further to 5.12 and 12.16 by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.", "We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only “virtually” adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10." ], "cite_N": [ "@cite_18", "@cite_26", "@cite_7" ], "mid": [ "2592691248", "2951970475", "2964159205" ] }
Learning Generalized Transformation Equivariant Representations via AutoEncoding Transformations
IN this paper, we aspire to show that transformations play a fundamental role in learning powerful representations, by transforming images as a means to reveal the intrinsic patterns from transformed visual structures. Particularly, Transformation Equivariant Representation (TER) learning seeks to model representations that equivary to various transformations on images. In other words, the representation of an image ought to change in the same way as the image is transformed. This is motivated by the assumption that image representations should capture the intrinsic visual structures such that transformations can be decoded from the representations of original and transformed images. Based on this assumption, we formally present a novel criterion of AutoEncoding Transformations (AET) to learn the TERs for various groups of transformations.

Learning the TERs has been adopted in Hinton's seminal work on learning transformation equivariant capsules [1], and plays a critical role in the success of Convolutional Neural Networks (CNNs) [2]. Specifically, the representations learned by the CNNs are translation equivariant, as their feature maps are shifted in the same way as input images are translated. On top of these feature maps that preserve the visual structures of translation equivariance, fully connected layers are built to output the predicted labels of input images. Obviously, the translation equivariant convolutional features play the pivotal role in delivering the state-of-the-art performances of deep networks. Thus, they have been extended beyond translations to learn more expressive representations of equivariance to generic types of transformations, such as affine, projective and homographic transformations. Along this direction, the group equivariant CNNs [3] are developed to guarantee that a transformation of input images results in the same transformation of their feature maps. However, the group equivariant CNNs [3] and their variants [4], [5] are restricted to discrete transformations, and the resultant representations are also limited to a group representation of linear transformations. These limitations restrict their ability to model group representations of complex transformations that could be continuous and nonlinear in many learning tasks, ranging from unsupervised to semi-supervised and supervised learning.

Unsupervised Learning of Transformation Equivariant Representations

The focus of this paper is on the principle of autoencoding transformations and its application to learning transformation equivariant representations. The core idea is to encode data with representations from which the applied transformations can be decoded as much as possible. We will begin with an unsupervised learning of such representations without involving any labeled data, and then proceed to a generalization to semi-supervised and supervised representations by encoding label information as well.

Unlike group equivariant CNNs that learn feature maps mathematically satisfying the transformation equivariance as a function of the group of transformations, the proposed AutoEncoding Transformations (AET) presents an autoencoding architecture to learn transformation equivariant representations by reconstructing applied transformations. As long as a transformation of input images results in equivariant representations, it should be well decoded from the representations of original and transformed images.
Compared with the group equivariant CNNs, the AET model is more flexible and tractable in handling any transformations and their compositions, since it does not rely on a strict convolutional structure to impose the equivariance.

The AET is also in contrast to the conventional AutoEncoding Data (AED) paradigm, which instead aims to reconstruct the data rather than the transformations. Figure 1(a) and (b) illustrate the comparison between the AET and the AED. Since the space of transformations (e.g., the few parameters of transformations) is of much lower dimension than the data space (e.g., the pixel space of images), the decoder of the AET can be much shallower than that of the AED. This allows the backpropagated errors to more sufficiently train the encoder that models the representations of input data in the AET architecture.

Moreover, an AET model can be trained from an information-theoretic perspective by maximizing the information in the learned representation about the applied transformation and the input data. This generalizes the group representations of linear transformations to more general forms that could equivary nonlinearly to input transformations. It results in Generalized Transformation Equivariant Representations (GTERs) that can capture more complex patterns of visual structure under transformations. Unfortunately, this leads to an intractable optimization problem of maximizing the mutual information between representations and transformations. A variational lower bound of the mutual information can be derived by introducing a surrogate transformation decoder, yielding a novel model of Autoencoding Variational Transformations (AVT) as an alternative to the deterministic AET.

(Semi-)Supervised Learning of Transformation Equivariant Representations

While both the AET and the AVT are trained in an unsupervised fashion, they can act as the basic representation for building (semi-)supervised classifiers. Along this direction, we can train a (Semi-)Supervised Autoencoding Transformation (SAT) model that jointly learns the transformation equivariant representations and the corresponding classifiers. Figure 1(c) illustrates the SAT model, where a classifier head is added upon the representation encoder of an AET network. The SAT can be based on either the deterministic AET or the probabilistic AVT architecture. Particularly, along the direction pointed out by the AVT, we seek to train the proposed (semi-)supervised transformation equivariant classifiers by maximizing the mutual information of the learned representations with the transformations and labels. In this way, the trained SAT model can not only handle transformed data through their equivarying representations, but also encode the labeling information through the supervised classifier. The resultant SAT also contains the deterministic AET-based model as a special case, obtained by restricting the representation encoder and the transformation decoder to deterministic forms.

The transformation equivariance in the SAT model is contrary to the data augmentation by transformations in the deep learning literature [2]. First, data augmentation is only applicable to augmenting labeled examples for model training, and cannot be extended to unlabeled data. This limits its use in semi-supervised learning, where the unlabeled data must be explored. Second, data augmentation aims to enforce transformation invariance, under which the labels of transformed data are supposed to be invariant.
This differs from our motivation to encode the inherent visual structures that equivary under various transformations. Actually, in the (semi-)supervised transformation equivariant classifiers, we aim to seamlessly integrate the principles of training transformation equivariant representations and transformation invariant classifiers. Indeed, both principles have played key roles in the compelling performances of the CNNs and their modern variants. This is witnessed by the translation equivariant convolutional feature maps and the classifiers built atop them, which are supposed to make transformation-invariant predictions with spatial pooling and fully connected layers. We will show that the proposed SAT extends the translation equivariance in the CNNs to cover a generic class of transformation equivariance, as well as encodes the labels to train the representations and the associated transformation invariant classifiers. We hope this can deepen our understanding of the interplay between transformation equivariance and invariance, both of which play fundamental roles in training robust classifiers with labeled and unlabeled data.

The remainder of this paper is organized as follows. We review the related works in Section 2. The unsupervised and (semi-)supervised learning of transformation equivariant representations are presented in the autoencoding transformation framework in Section 3 and Section 4, respectively. We present experiment results for unsupervised and semi-supervised tasks in Section 5 and Section 6, and conclude the paper and discuss future works in Section 7.

Transformation-Equivariant Representations

Learning transformation-equivariant representations can be traced back to the seminal work on training capsule nets [1], [6], [7]. The transformation equivariance is characterized by the various directions of capsules, while the confidence of belonging to a particular class is captured by their lengths. Many efforts have been made in the literature [3], [4], [5] on extending the conventional translation-equivariant convolutions to cover more transformations. Among them are group equivariant convolutions (G-convolutions) [3], which have been developed to equivary to more types of transformations. The idea of group equivariance has also been introduced to the capsule nets [5] by ensuring the equivariance of output pose vectors to a group of transformations with a generic routing mechanism. However, the group equivariant convolution is restricted to discrete transformations, which limits its ability to learn representations equivariant to generic continuous transformations.

Unsupervised Representation Learning

Auto-Encoders and GANs. Unsupervised auto-encoders have been extensively studied in the literature [8], [9], [10]. Existing auto-encoders are trained by reconstructing input data from the outputs of encoders. A large category of auto-encoder variants have been proposed. Among them is the Variational Auto-Encoder (VAE) [11], which maximizes a lower bound of the data likelihood to train a pair of probabilistic encoder and decoder, while beta-VAE seeks to disentangle representations by introducing an adjustable hyperparameter on the capacity of the latent channel to balance between the independence constraint and the reconstruction accuracy [12]. Denoising auto-encoders [10] attempt to reconstruct noise-corrupted data to learn robust representations, while contractive auto-encoders [13] encourage learning representations invariant to small perturbations on data.
Along this direction, Hinton et al. [1] propose capsule networks to explore transformation equivariance by minimizing the discrepancy between the reconstructed and target data. On the other hand, Generative Adversarial Nets (GANs) have also been used to train unsupervised representations. Unlike the auto-encoders, the GANs [14] and their variants [15], [16], [17], [18] generate data from noises drawn from a simple distribution, with a discriminator trained adversarially to distinguish between real and fake data. The sampled noises can be viewed as the representation of generated data over a manifold, and one can train an encoder by inverting the generator to find the generating noise. This can be implemented by jointly training a pair of mutually inverse generator and encoder [15], [16]. There also exist GANs that generalize better in producing unseen data, based on the Lipschitz assumption on the real data distribution [17], [18], which can give rise to more powerful representations of data beyond the training examples [15], [16], [19]. Compared with the auto-encoders, GANs do not rely on learning a one-to-one reconstruction of data; instead, they aim to generate the entire distribution of data.

Self-Supervisory Signals. There exist many other unsupervised learning methods using different types of self-supervised signals to train deep networks. Noroozi and Favaro [20] propose to solve jigsaw puzzles to train a convolutional neural network. Doersch et al. [21] train the network by inferring the relative positions between patches sampled from an image as self-supervised information. Instead, Noroozi et al. [22] count features that satisfy equivalence relations between downsampled and tiled images. Gidaris et al. [23] propose to train RotNets by predicting a discrete set of image rotations, but RotNets are unable to handle generic continuous transformations and their compositions. Dosovitskiy et al. [24] create a set of surrogate classes by applying various transformations to individual images. However, the resultant features could over-discriminate visually similar images, as they always belong to different surrogate classes. Unsupervised features have also been learned from videos by estimating the self-motion of moving objects between consecutive frames [25].

(Semi-)Supervised Representation Learning

In addition, there exist a large number of semi-supervised models in the literature. Here, we particularly mention three state-of-the-art methods that will be compared in experiments. Temporal ensembling [26] and mean teachers [27] both use an ensemble of teachers to supervise the training of a student model. Temporal ensembling uses the exponential moving average of predictions made by past models on unlabeled data as targets to train the student model. Instead, mean teachers update the student model with the exponential moving average of the weights of past models. On the contrary, the Virtual Adversarial Training (VAT) [28] seeks to minimize the change of predictions on unlabeled examples when their output values are adversarially altered. This could result in a robust model that prefers smooth predictions over unlabeled data.

The SAT also differs from transformation-based data augmentation, in which the transformed samples and their labels are used directly as additional training examples [2]. First, in semi-supervised learning, unlabeled examples cannot be directly augmented to form training examples due to their missing labels.
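As a concrete illustration of the weight-averaging idea behind mean teachers described above, here is a minimal PyTorch sketch; the decay value and the toy module are illustrative assumptions, not taken from [27].

import copy
import torch

@torch.no_grad()
def update_mean_teacher(student, teacher, decay=0.99):
    """Mean-teacher style update: the teacher's weights track an exponential
    moving average of the student's weights over past training steps."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

student = torch.nn.Linear(10, 2)
teacher = copy.deepcopy(student)   # initialize the teacher from the student
# ... after each optimizer step on the student:
update_mean_teacher(student, teacher)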
Moreover, data augmentation needs to preserve the labels on augmented images, and this prevents us from applying transformations that could severely distort the images (e.g., shearing, rotations with arbitrary angles, and projective transformations) or invalidate the associated labels (e.g., vertically flipping "6" to "9"). In contrast, the SAT avoids using the labels of transformed images to directly supervise the training of the classifier; instead, it attempts to encode the visual structures of images equivariant to various transformations without access to their labels. This leads to a label-blind TER regularizer to explore the unlabeled examples for the semi-supervised problem.

UNSUPERVISED LEARNING OF TRANSFORMATION EQUIVARIANT REPRESENTATIONS

In this section, we will first present the autoencoding transformation architecture to learn the transformation equivariant representations in a deterministic fashion. Then, a variational alternative will be presented to handle the uncertainty in the representation learning by maximizing the mutual information between the learned representations and the applied transformations.

AET: A Deterministic Model

We begin by defining the notations used in the proposed AutoEncoding Transformation (AET) architecture. Consider a random transformation $t$ sampled from a transformation distribution $p(t)$ (e.g., warping, projective and homographic transformations), as well as an image $x$ drawn from a data distribution $p(x)$ in a sample space $\mathcal{X}$. Then the application of $t$ to $x$ results in a transformed image $t(x)$.

The goal of the AET is to learn a representation encoder $E_\theta: x \mapsto E_\theta(x)$ with parameters $\theta$, which maps a sample $x \sim p(x)$ to its representation $E_\theta(x)$ in a linear space $\mathcal{Z}$. For this purpose, one needs to learn a transformation decoder with parameters $\phi$,
$$D_\phi: [E_\theta(x), E_\theta(t(x))] \mapsto \hat{t},$$
which makes an estimate $\hat{t}$ of the input transformation $t$ from the representations of the original and transformed samples. Since the transformation decoder takes the encoder outputs rather than the original and transformed images, this pushes the encoder to capture the inherent visual structures of images to make a satisfactory estimate of the transformation.

Then the AET can be trained to jointly learn the representation encoder $E_\theta$ and the transformation decoder $D_\phi$. A loss function $\ell(t, \hat{t})$ measuring the deviation between a transformation $t$ and its estimate $\hat{t}$ is minimized to train the AET over $p(t)$ and $p(x)$:
$$\min_{\theta, \phi} \; \mathbb{E}_{t \sim p(t),\, x \sim p(x)} \, \ell(t, \hat{t}) \quad (1)$$
where the estimated transformation $\hat{t}$ can be written as a function of the encoder $E_\theta$ and the decoder $D_\phi$ such that $\hat{t} = D_\phi[E_\theta(x), E_\theta(t(x))]$, and the expectation $\mathbb{E}$ is taken over the distributions of transformations and data. In this way, the encoder $E_\theta$ and the decoder $D_\phi$ can be jointly trained over mini-batches by back-propagating the gradient of the loss to update their parameters.

AVT: A Probabilistic Model

Alternatively, we can train transformation equivariant representations that contain as much information as possible about the applied transformations to recover them.

Notations. Formally, our goal is to learn an encoder that maps a transformed sample $t(x)$ to a probabilistic representation with mean $f_\theta$ and variance $\sigma_\theta$. This results in the following probabilistic representation $z \in \mathcal{Z}$ of $t(x)$:
$$z = f_\theta(t(x)) + \sigma_\theta(t(x)) \bullet \epsilon \quad (2)$$
where $\epsilon$ is sampled from a normal distribution $p(\epsilon) \triangleq \mathcal{N}(\epsilon|0, I)$, with $\bullet$ denoting the element-wise product.
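To make the AET criterion of Eq. (1) concrete, here is a minimal PyTorch sketch of the deterministic model; the toy encoder, the 8-d parametrization of $t$, and the MSE choice for the loss $\ell$ are illustrative assumptions rather than the exact NIN/AlexNet setups used in our experiments.

import torch
import torch.nn as nn

class AET(nn.Module):
    """Minimal deterministic AET: a Siamese encoder E_theta shared between the
    original and transformed images, and a shallow decoder D_phi predicting the
    transformation parameters from the concatenated representations."""
    def __init__(self, rep_dim=128, t_dim=8):  # t_dim=8: e.g. a projective transformation
        super().__init__()
        self.encoder = nn.Sequential(           # E_theta (toy CNN for illustration)
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, rep_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.decoder = nn.Sequential(           # D_phi: shallow, since t is low-dimensional
            nn.Linear(2 * rep_dim, 128), nn.ReLU(),
            nn.Linear(128, t_dim))

    def forward(self, x, tx):
        z, zt = self.encoder(x), self.encoder(tx)       # representations of x and t(x)
        return self.decoder(torch.cat([z, zt], dim=1))  # estimate t-hat

# One training step of Eq. (1) with an MSE loss ell(t, t_hat):
model = AET()
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
x = torch.randn(4, 3, 32, 32)   # a batch of original images
t = torch.randn(4, 8)           # parameters of the sampled transformations t ~ p(t)
tx = torch.randn(4, 3, 32, 32)  # stand-in for t(x); in practice, warp x by t
loss = nn.functional.mse_loss(model(x, tx), t)
opt.zero_grad(); loss.backward(); opt.step()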
Thus, the resultant probabilistic representation $z$ follows a normal distribution $p_\theta(z|t, x) \triangleq \mathcal{N}\big(z \,|\, f_\theta(t(x)), \sigma^2_\theta(t(x))\big)$ conditioned on the randomly sampled transformation $t$ and the input data $x$.

On the other hand, the representation of the original sample $x$ is a special case when $t$ is the identity transformation, which is
$$\tilde{z} = f_\theta(x) + \sigma_\theta(x) \bullet \tilde{\epsilon} \quad (3)$$
whose mean and variance are computed by the deep network with the same weights $\theta$, and $\tilde{\epsilon} \sim p(\tilde{\epsilon}) \triangleq \mathcal{N}(\tilde{\epsilon}|0, I)$.

Generalized Transformation Equivariance

In the conventional definition of transformation equivariance, there should exist an automorphism $\rho(t) \in \mathrm{Aut}(\mathcal{Z}): \mathcal{Z} \to \mathcal{Z}$ in the representation space, such that¹
$$z = [\rho(t)](\tilde{z}).$$
Here the transformation $\rho(t)$ is independent of the input sample $x$. In other words, the representation $z$ of a transformed sample is completely determined by the original representation $\tilde{z}$ and the applied transformation $t$, with no need to access the sample $x$. This is called the steerability property in the literature [4], which enables us to compute $z$ by applying the sample-independent transformation $\rho(t)$ directly to the original representation $\tilde{z}$.

This property can be generalized without relying on the linear group representations of transformations through automorphisms. Instead of sticking with a linear $\rho(t)$, one can seek a more general relation between $z$ and $\tilde{z}$, independently of $x$. From an information-theoretic point of view, this requires that $(\tilde{z}, t)$ should jointly contain all necessary information about $z$, so that $z$ can be best estimated from them without a direct access to $x$.

This leads us to maximizing the mutual information $I_\theta(z; \tilde{z}, t)$ to learn the generalized transformation equivariant representations. Indeed, by the chain rule and the nonnegativity of mutual information, we have
$$I_\theta(z; \tilde{z}, t) = I_\theta(z; \tilde{z}, t, x) - I_\theta(z; x|\tilde{z}, t) \le I_\theta(z; \tilde{z}, t, x),$$
which shows $I_\theta(z; \tilde{z}, t)$ is upper bounded by the mutual information $I_\theta(z; \tilde{z}, t, x)$ between $z$ and $(\tilde{z}, t, x)$. Clearly, when $I_\theta(z; x|\tilde{z}, t) = 0$, $I_\theta(z; \tilde{z}, t)$ attains the maximum value of its upper bound $I_\theta(z; \tilde{z}, t, x)$. In this case, $x$ would provide no more information about $z$ than $(\tilde{z}, t)$, which implies one can estimate $z$ directly from $(\tilde{z}, t)$ without accessing $x$. Thus, we propose to solve
$$\max_\theta \; I_\theta(z; \tilde{z}, t)$$
to learn the probabilistic encoder $\theta$ in pursuit of such a generalized TER.

1. The transformation $t$ in the sample space $\mathcal{X}$ and the corresponding transformation $\rho$ in the representation space $\mathcal{Z}$ need not be the same. But the representation transformation $\rho(t)$ should be a function of the sample transformation $t$.

However, a direct maximization of the above mutual information needs to evaluate an intractable posterior $p_\theta(t|z, \tilde{z})$ of the transformation. Thus, we instead lower bound the mutual information by introducing a surrogate decoder $q_\phi(t|z, \tilde{z})$ with parameters $\phi$ to approximate the true posterior.

Variational Approach

Unlike the variational autoencoder that lower-bounds the data likelihood [11], we directly take a lower bound of the mutual information [29] between $z$ and $(\tilde{z}, t)$ as follows:
$$
\begin{aligned}
I_\theta(z; \tilde{z}, t) &= I_\theta(z; \tilde{z}) + I_\theta(z; t|\tilde{z}) \ge I_\theta(z; t|\tilde{z}) \\
&= H(t|\tilde{z}) - H(t|z, \tilde{z}) \\
&= H(t|\tilde{z}) + \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log p_\theta(t|z, \tilde{z}) \\
&= H(t|\tilde{z}) + \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log q_\phi(t|z, \tilde{z}) + \mathbb{E}_{p(z, \tilde{z})} D\big(p_\theta(t|z, \tilde{z}) \,\|\, q_\phi(t|z, \tilde{z})\big) \\
&\ge H(t|\tilde{z}) + \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log q_\phi(t|z, \tilde{z}) \triangleq \tilde{I}_{\theta, \phi}(z; \tilde{z}, t)
\end{aligned}
$$
where $H(\cdot)$ denotes the (conditional) entropy, and $D(p_\theta(t|z, \tilde{z}) \,\|\, q_\phi(t|z, \tilde{z}))$ is the non-negative Kullback-Leibler divergence between $p_\theta$ and $q_\phi$. We choose to maximize the variational lower bound $\tilde{I}_{\theta, \phi}(z; \tilde{z}, t)$.
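The reparameterized sampling in Eqs. (2)-(3) can be written in a few lines; a minimal sketch, assuming the encoder returns the mean and the log-of-variance.

import torch

def sample_representation(mean, log_var):
    """Reparameterized sample z = f_theta + sigma_theta * eps with eps ~ N(0, I),
    as in Eqs. (2)-(3); log_var is log(sigma^2) predicted by the encoder head."""
    eps = torch.randn_like(mean)
    return mean + torch.exp(0.5 * log_var) * eps

# z for the transformed image and z-tilde for the original one are drawn the
# same way, from the shared encoder applied to t(x) and x respectively:
mean_t, log_var_t = torch.zeros(4, 128), torch.zeros(4, 128)  # stand-ins for f_theta(t(x)), log sigma^2
z = sample_representation(mean_t, log_var_t)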
Since $H(t|\tilde{z})$ is nonnegative and independent of the model parameters $\theta$ and $\phi$, we choose to solve
$$\max_{\theta, \phi} \; L^{\text{unsup}}_{\theta, \phi} \triangleq \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log q_\phi(t|z, \tilde{z}) = \mathbb{E}_{p(x), p(t)} \, \mathbb{E}_{p(\epsilon), p(\tilde{\epsilon})} \log q_\phi(t|z, \tilde{z}) \quad (4)$$
to learn $\theta$ and $\phi$ under the expectation over $p(t, z, \tilde{z})$, where the equality follows from the generative process for the representations in Eqs. (2)-(3).

Variational Transformation Decoder

To estimate a family of continuous transformations, we choose a normal distribution $\mathcal{N}(t \,|\, d_\phi(z, \tilde{z}), \sigma^2_\phi(z, \tilde{z}))$ as the posterior $q_\phi(t|z, \tilde{z})$ of the transformation decoder, where the mean $d_\phi(z, \tilde{z})$ and variance $\sigma^2_\phi(z, \tilde{z})$ are implemented by deep networks. For categorical transformations (e.g., horizontal vs. vertical flips, and rotations of different directions), a categorical distribution $\mathrm{Cat}(t \,|\, \pi_\phi(z, \tilde{z}))$ can be adopted as the posterior $q_\phi(t|z, \tilde{z})$, where each entry of $\pi_\phi(z, \tilde{z})$ is the probability mass for a transformation type. A hybrid distribution can also be defined to combine multiple continuous and categorical transformations, making the variational transformation decoder more flexible and appealing in handling complex transformations.

The posterior $q_\phi(t|z, \tilde{z})$ of the transformation is a function of the representations of the original and transformed images. Thus, a natural choice is to use a Siamese encoder network with shared weights to output the representations of the original and transformed samples, and to construct the transformation decoder atop the concatenated representations. Figure 2(a) illustrates the architecture of the AVT network.

Finally, it is not hard to see that the deterministic AET model can be viewed as a special case of the AVT, with the probabilistic representation encoder $p_\theta(z|t, x)$ and the transformation decoder $q_\phi(t|z, \tilde{z})$ set to deterministic forms.

(SEMI-)SUPERVISED LEARNING OF TRANSFORMATION EQUIVARIANT REPRESENTATIONS

Autoencoding transformations can act as the basic representation block in many learning problems. In this section, we present its role in (semi-)supervised learning tasks to enable more accurate classification of samples by capturing their transformation equivariant representations.

SAT: (Semi-)Supervised Autoencoding Transformations

The unsupervised learning of autoencoding transformations can be generalized to (semi-)supervised cases with labeled samples. Accordingly, the goal is formulated as learning representations that contain as much (mutual) information as possible about not only the applied transformations but also the data labels. Given a labeled sample $(x, y)$, we can define the joint distribution over the representations, transformation and label,
$$p_\theta(y, t, z, \tilde{z}|x) = p(t)\, p_\theta(\tilde{z}|x)\, p_\theta(z|t, x)\, p(y|x)$$
where we have assumed that $y$ is independent of $t$ and $z$ once the sample $x$ is given.

In the presence of sample labels, the pursuit of transformation equivariant representations can be performed by maximizing the joint mutual information $I_\theta(y, z; t, \tilde{z})$, such that the representation $\tilde{z}$ of the original sample and the transformation $t$ contain sufficient information to classify the label $y$, as well as to learn the representation $z$ equivariant to the transformed sample.
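Under the Gaussian choice of $q_\phi(t|z, \tilde{z})$, maximizing the objective (4) amounts to minimizing the Gaussian negative log-likelihood of the sampled transformation parameters; a minimal sketch, with stand-in tensors for the decoder outputs.

import torch

def avt_loss(t, d_mean, d_log_var):
    """Negative log-likelihood of the true transformation parameters t under
    q_phi(t | z, z-tilde) = N(t | d_phi, sigma_phi^2); minimizing this maximizes
    the variational lower bound in Eq. (4) (up to additive constants)."""
    var = torch.exp(d_log_var)
    nll = 0.5 * (d_log_var + (t - d_mean) ** 2 / var)  # elementwise; constants dropped
    return nll.sum(dim=1).mean()

t = torch.randn(4, 8)                                     # sampled transformation parameters
d_mean, d_log_var = torch.zeros(4, 8), torch.zeros(4, 8)  # decoder outputs (stand-ins)
loss = avt_loss(t, d_mean, d_log_var)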
As in (4) for the unsupervised case, the joint mutual information can be lower bounded in the following way:
$$
\begin{aligned}
I_\theta(y, z; \tilde{z}, t) &= I_\theta(y, z; \tilde{z}) + I_\theta(y, z; t|\tilde{z}) \\
&= \big(I_\theta(z; \tilde{z}) + I_\theta(y; \tilde{z}|z)\big) + \big(I_\theta(z; t|\tilde{z}) + I_\theta(y; t|z, \tilde{z})\big) \\
&\ge I_\theta(y; \tilde{z}|z) + I_\theta(z; t|\tilde{z}) \\
&\ge H(y|z) + \mathbb{E}_{p_\theta(y, z, \tilde{z})} \log q_\phi(y|z, \tilde{z}) + H(t|\tilde{z}) + \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log q_\phi(t|z, \tilde{z}) \\
&\triangleq \tilde{I}_{\theta, \phi}(y, z; \tilde{z}, t)
\end{aligned}
$$
where the first two equalities apply the chain rule of mutual information, and the first inequality uses the nonnegativity of mutual information. In particular, we usually have $I_\theta(y; t|z, \tilde{z}) = 0$, which means the transformation should not change the label $y$ of a sample (i.e., transformation invariance of sample labels). The second inequality follows the variational bound derived in the last section. One can also assume that the surrogate posterior $q_\phi(y|z, \tilde{z})$ of labels can be simplified to $q_\phi(y|\tilde{z})$, since the representation of the original sample is supposed to provide sufficient information to predict the label.

Since $H(y|z) \ge 0$ and $H(t|\tilde{z})$ is independent of the model parameters $\theta$ and $\phi$, we maximize the following variational lower bound:
$$\max_{\theta, \phi} \; L^{\text{sup}}_{\theta, \phi} \triangleq \mathbb{E}_{p_\theta(y, \tilde{z})} \log q_\phi(y|\tilde{z}) + \mathbb{E}_{p_\theta(t, z, \tilde{z})} \log q_\phi(t|z, \tilde{z}) = \mathbb{E}_{p(x)} \Big[ \mathbb{E}_{p(y|x), p(\tilde{\epsilon})} \log q_\phi(y|\tilde{z}) + \mathbb{E}_{p(t), p(\epsilon), p(\tilde{\epsilon})} \log q_\phi(t|z, \tilde{z}) \Big] \quad (5)$$
where $z$ and $\tilde{z}$ are sampled by following Eqs. (2)-(3) in the equality, and the ground-truth $y$ is sampled from the label distribution $p(y|x)$ directly.

In the deterministic case, it is not hard to show that the first term of (5) is related to the cross-entropy loss in training a supervised classifier, while the second term reduces to the loss (1) in the deterministic AET model. Therefore, in this sense, the AET loss plays the role of a regularizer on the cross-entropy loss in training a supervised model.

In addition, a semi-supervised model can be trained by combining the unsupervised and supervised objectives (4) and (5):
$$\max_{\theta, \phi} \; L^{\text{unsup}}_{\theta, \phi} + \lambda\, L^{\text{sup}}_{\theta, \phi} \quad (6)$$
with a nonnegative balancing coefficient $\lambda$. This enables us to jointly explore labeled and unlabeled examples and their representations equivariant to various transformations. We will demonstrate that the SAT achieves superior performances to the existing state-of-the-art (semi-)supervised models. Moreover, the competitive performances also show the great potential of the model as the basic representation block in many machine learning and computer vision tasks. Figure 2(b) illustrates the architecture of the SAT model, in comparison with its AVT counterpart. Particularly, in the SAT, the transformation and label decoders are jointly trained atop the representation encoder.

EXPERIMENTS: UNSUPERVISED LEARNING

In this section, we compare the proposed deterministic AET and probabilistic AVT models against other unsupervised methods on the CIFAR-10, ImageNet and Places datasets. The evaluation follows the protocols widely adopted by many existing unsupervised methods, applying the learned representations to downstream tasks.

CIFAR-10 Experiments

First, we evaluate the AET and AVT models on the CIFAR-10 dataset.

Experiment Settings

Architecture. To make a fair and direct comparison with existing models, the Network-In-Network (NIN) is adopted on the CIFAR-10 dataset for the unsupervised learning task [23], [30]. The NIN consists of four convolutional blocks, each of which contains three convolutional layers. Both the AET and the AVT have two NIN branches with shared weights, each taking the original and transformed images as its input, respectively.
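Returning to the semi-supervised objective in Eq. (6) above, the training loss combines the transformation term over the whole minibatch with a cross-entropy term over the labeled subset; a minimal sketch, under the simplification $q_\phi(y|\tilde{z})$.

import torch
import torch.nn.functional as F

def sat_loss(trans_nll_all, logits_labeled, y_labeled, lam=1.0):
    """Eq. (6): the unsupervised transformation loss L^unsup over the whole
    minibatch plus lambda times the supervised term L^sup. The cross-entropy
    on labeled examples realizes -E[log q_phi(y | z-tilde)] for categorical y."""
    sup = F.cross_entropy(logits_labeled, y_labeled)
    return trans_nll_all + lam * sup

trans_nll_all = torch.tensor(1.3)   # e.g. the output of avt_loss(...) above
logits = torch.randn(40, 10)        # classifier head outputs for 40 labeled examples
y = torch.randint(0, 10, (40,))
loss = sat_loss(trans_nll_all, logits, y, lam=1.0)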
The output features of the fourth block of the two branches are concatenated and average-pooled to form a 384-d feature vector. An output layer then follows to output the predicted transformation for the AET, and the mean $d_\phi$ and log-of-variance $\log \sigma^2_\phi$ of the predicted transformation for the AVT, with the logarithm scaling the variance to a real value. The first two blocks of each branch are used as the encoder network to output the deterministic representation for the AET, and the mean $f_\theta$ of the probabilistic representation for the AVT. An additional $1 \times 1$ convolution followed by a batch normalization layer is added upon the encoder to produce the log-of-variance $\log \sigma^2_\theta$.

Implementation Details. Both the AET and the AVT networks are trained by SGD with a batch size of 512 original images and their transformed versions. Momentum and weight decay are set to 0.9 and $5 \times 10^{-4}$. For the AET, the learning rate is initialized to 0.1 and scheduled to drop by a factor of 5 after 240, 480, 640, 800 and 1,000 epochs. The network is trained for a total of 1,500 epochs. The AVT network is trained for 4,500 epochs, and its learning rate is initialized to $10^{-3}$; it is increased to $5 \times 10^{-3}$ at epoch 50, and then gradually decayed to $10^{-5}$ starting from epoch 3,000. In the AVT, a single representation is randomly sampled from the encoder $p_\theta(z|t, x)$ and fed into the decoder $q_\phi(t|z, \tilde{z})$ during training. To fully exploit the uncertainty of the representations, five samples are drawn and averaged as the representation of an image to train the downstream classifiers. We found that averaging randomly sampled representations outperforms using only the mean of the representation.

Results

Comparison with Other Methods. To evaluate the effectiveness of a learned unsupervised representation, a classifier is usually trained upon it. In our experiments, we follow the existing evaluation protocols [23], [24], [31], [32], [33] by building a classifier on top of the second convolutional block. First, we evaluate the classification results by using the AET and AVT representations with both model-based and model-free classifiers. For the model-based classifier, we follow [23] by training a non-linear classifier with three Fully-Connected (FC) layers: each of the two hidden layers has 200 neurons with batch normalization and ReLU activations, and the output layer is a soft-max layer with ten neurons, one for each image class. We also test a convolutional classifier upon the unsupervised features by adding a third NIN block, whose output feature map is average-pooled and connected to a linear soft-max classifier.

Table 1 shows the results by different models. It compares both fully supervised and unsupervised methods on CIFAR-10. The unsupervised AET and AVT with the convolutional classifier almost achieve the same error rates as the fully supervised NIN counterpart with four convolutional blocks (7.82% and 7.75% vs. 7.2%). We also compare the models when trained with varying numbers of FC layers in Table 2. The results show that the AVT, followed by the AET, consistently achieves the smallest errors no matter which classifier is used. We also note that the probabilistic AVT outperforms the deterministic AET in experiments. This is likely due to the AVT's ability to model the uncertainty of representations in training the downstream classifiers.
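The averaging of five sampled representations described above can be sketched as follows; the tensors are stand-ins for the encoder outputs.

import torch

def averaged_representation(mean, log_var, n_samples=5):
    """Draw n_samples reparameterized representations z = mean + sigma * eps
    from p_theta(z|t, x) and average them, as done before feeding the
    downstream classifier."""
    eps = torch.randn(n_samples, *mean.shape)
    samples = mean.unsqueeze(0) + torch.exp(0.5 * log_var).unsqueeze(0) * eps
    return samples.mean(dim=0)

mean, log_var = torch.zeros(4, 128), torch.zeros(4, 128)  # encoder outputs (stand-ins)
z_avg = averaged_representation(mean, log_var)            # shape (4, 128)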
We also find that the projective transformation performs better than the affine transformation when used to train the AET, and thus we mainly use the projective transformation to train the AVT.

Comparison based on Model-free KNN Classifiers. We also test the model-free KNN classifier based on the average-pooled feature representations from the second convolutional block. The KNN classifier is model-free, without training a classifier from labeled examples. This enables us to directly evaluate the quality of the learned features. Table 3 reports the KNN results. Table 4 reports the results of different models on CIFAR-10 with limited labeled data. Both the AET and the AVT outperform the fully supervised models as well as the other unsupervised models when only a few labeled examples (≤ 1,000 samples per class) are available.

ImageNet Experiments

We further evaluate the performances of the AET and AVT on the ImageNet dataset.

Architectures and Training Details. For a fair comparison with the existing methods [20], [23], [34], two AlexNet branches with shared parameters are created, taking the original and transformed images as inputs to train the unsupervised models. The 4,096-d output features from the second last fully connected layer in each branch are concatenated and fed into the transformation decoder. We still use SGD to train the network, with a batch size of 768 images and their transformed counterparts, a momentum of 0.9, and a weight decay of $5 \times 10^{-4}$. For the AET model, the initial learning rate is set to 0.01, and it is dropped by a factor of 10 at epochs 100 and 150. The model is trained for 200 epochs in total. For the AVT, the initial learning rate is set to $10^{-3}$, and it is dropped by a factor of 10 at epochs 300 and 350. The AVT is trained for 400 epochs in total. We still use the average over five samples from the encoder outputs to train the downstream classifiers when evaluating the AVT. Since the projective transformation has shown better performances, we adopt it for the experiments on ImageNet.

Results. Table 5 reports the Top-1 accuracies of the compared methods on ImageNet, following the evaluation protocol in [20]. Two settings are adopted for evaluation, where Conv4 and Conv5 mean training the remaining part of AlexNet on top of Conv4 and Conv5 with the labeled data. All the bottom convolutional layers up to Conv4 and Conv5 are frozen after they are trained in an unsupervised fashion. From the results, in both settings, the AVT model consistently outperforms the other unsupervised models, including the AET.

We also compare with the fully supervised models that give the upper bound of the classification performance by training the AlexNet with all labeled data end-to-end. The classifiers of random models are trained on top of Conv4 and Conv5 whose weights are randomly sampled, which sets the lower bound of the performance. By comparison, our models narrow the performance gap to the upper-bound supervised models from 9.7% and 15.7% by RotNet and DeepCluster on Conv4 and Conv5, to 6.5% and 12.7% by the AET, and to 5.5% and 11.3% by the AVT. Moreover, we also follow the testing protocol adopted in [40] to compare the models by training a 1,000-way linear classifier on top of different numbers of convolutional layers in Table 6. Again, the AVT consistently outperforms all the compared unsupervised models in terms of Top-1 accuracy.

Places Experiments

We also compare different models on the Places dataset. Table 7 reports the results.
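A random projective transformation of the kind adopted above can be sampled by jittering the four image corners; a minimal sketch assuming torchvision's perspective helpers, with the normalized corner displacements as one possible 8-d parametrization of $t$ (an illustrative choice, not necessarily the parametrization used in our experiments).

import torch
import torchvision.transforms as T
import torchvision.transforms.functional as TF

def random_projective(img, distortion_scale=0.5):
    """Sample a random projective (perspective) transformation by jittering the
    four image corners, warp the image, and return the warped image together
    with the normalized corner displacements used as the regression target t."""
    _, h, w = img.shape
    start, end = T.RandomPerspective.get_params(w, h, distortion_scale)
    warped = TF.perspective(img, start, end)
    start_pts = torch.tensor(start, dtype=torch.float32)
    end_pts = torch.tensor(end, dtype=torch.float32)
    scale = torch.tensor([w, h], dtype=torch.float32)
    t = ((end_pts - start_pts) / scale).flatten()  # 8-d parametrization of t
    return warped, t

x = torch.rand(3, 32, 32)      # a toy image
tx, t = random_projective(x)   # t(x) and the target for the transformation decoder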
Unsupervised models are pretrained on the ImageNet dataset, and a linear logistic regression classifier is trained on top of different layers of convolutional feature maps with Places labels. This assesses the generalizability of unsupervised features from one dataset to another. The models are still based on AlexNet variants. We compare with the fully supervised models trained with the Places labels and the ImageNet labels respectively, as well as with random networks. Both the AET and the AVT models outperform the other unsupervised models, except performing slightly worse than Counting [40] with shallow representations by Conv1 and Conv2.

EXPERIMENTS: (SEMI-)SUPERVISED LEARNING

We compare the proposed SAT model with the other state-of-the-art semi-supervised methods in this section. For the sake of a fair comparison, we follow the test protocol used in the literature [26], [27] on both CIFAR-10 [42] and SVHN [43], which are widely used as benchmark datasets to evaluate semi-supervised models.

Network Architecture and Implementation Details

Network Architecture. For the sake of a fair comparison, a 13-layer convolutional neural network, which has been widely used in existing semi-supervised models [26], [27], [28], is adopted as the backbone to build the SAT. It consists of three convolutional blocks, each of which contains three convolutional layers. The SAT has two branches of such three blocks with shared weights, each taking the original and transformed images as input, respectively. The output feature maps from the third blocks of the two branches are concatenated and average-pooled, resulting in a 256-d feature vector. A fully-connected layer follows to predict the mean $d_\phi$ and the log-of-variance $\log \sigma^2_\phi$ of the transformation. The first two blocks are used as the encoder to output the mean $f_\theta$ of the representation, upon which an additional $1 \times 1$ convolution layer with batch normalization is added to compute the log-of-variance $\log \sigma^2_\theta$.

In addition, a classifier head is built on the representation from the encoder. Specifically, we draw five random representations of an input image, and feed their average to the classifier. The classifier head has the same structure as the third convolutional block, but its weights differ from those of the Siamese branches of the transformation decoder. The output feature map of this convolutional block is globally average-pooled to a 128-d feature vector, and a soft-max fully connected layer follows to predict the image label.

Implementation Details. The representation encoder, the transformation decoder and the classifier are trained in an end-to-end fashion. In particular, SGD is adopted to iteratively update their weights over a minibatch with 500 images, their transformed counterparts, and 40 labeled examples. Momentum and weight decay are set to 0.9 and $5 \times 10^{-4}$, respectively. The model is trained for a total of 4,500 epochs. The learning rate is initialized to $10^{-3}$. It is increased to $5 \times 10^{-3}$ at epoch 50, before it is linearly decayed to $10^{-5}$ starting from epoch 3,000. For a fair comparison, we adopt the entropy minimization used in the state-of-the-art Virtual Adversarial Training [28]. A standard set of data augmentations from the literature [26], [27], [28] is also adopted throughout the experiments, which includes both horizontal flips and random translations on CIFAR-10, and only random translations on SVHN. The projective transformation, which performs better than the affine transformation, is adopted to train the semi-supervised representations.
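The SAT learning-rate schedule described above can be expressed as a simple function; a sketch of our reading of that schedule, where holding the rate constant between epochs 50 and 3,000 is an assumption.

def sat_learning_rate(epoch, total_epochs=4500):
    """Schedule as we read it from the text: 1e-3 for warm-up, raised to 5e-3
    at epoch 50, held, then linearly decayed to 1e-5 between epoch 3000 and
    the final epoch."""
    if epoch < 50:
        return 1e-3
    if epoch < 3000:
        return 5e-3
    frac = (epoch - 3000) / (total_epochs - 3000)  # fraction of the decay phase
    return 5e-3 + frac * (1e-5 - 5e-3)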
Results. We compare with the state-of-the-art semi-supervised methods in the literature [26], [27]. In particular, the proposed SAT reduces the average error rates of Mean Teacher (the second best performing method) by 30.9%, 25.6%, and 22.2% relatively with 1,000, 2,000, and 4,000 labels on CIFAR-10, while reducing them by 1.1%, 11%, and 12.9% relatively with 250, 500, and 1,000 labels on SVHN. The compared semi-supervised methods, including the Π model [26], Temporal Ensembling [26], and Mean Teacher [27], attempt to maximize the consistency of model predictions on the transformed and original images to train semi-supervised classifiers. While they also apply transformations to explore unlabeled examples, the competitive performance of the SAT model shows that transformation-equivariant representations are more compelling for classifying images than the compared methods that predict consistent labels under transformations. This justifies the proposed criterion of pursuing transformation equivariance as a regularizer to train a classifier. It is not hard to see that the SAT can be integrated into the other semi-supervised methods as their base representation, and we believe this could further boost their performances. This is left to future work as it is beyond the scope of this paper.

The Impact of Entropy Minimization. We also conduct an ablation study of the effect of Entropy Minimization (EntMin) on the model performance. EntMin was used in the VAT [28], which outperformed the other semi-supervised methods in the literature. Here, we compare the error rates between the SAT and the VAT, with and without the EntMin. As shown in Table 10, no matter whether entropy minimization is adopted, the SAT always outperforms the corresponding VAT. We also note that, even without entropy minimization, the SAT still performs better than the other state-of-the-art semi-supervised classifiers, such as Mean Teacher, Temporal Ensembling, and the Π model shown in Table 8. This demonstrates the compelling performance of the SAT model.

Comparison with Data Augmentation by Transformations. We also compare the performances between the SAT and a classification network trained with images augmented by the transformations. Specifically, in each minibatch, input images are augmented with the same set of random projective transformations used in the SAT. The transformation-augmented images and their labels are used to train a network with the same 13-layer architecture that has been adopted as the SAT backbone. Note that the transformation augmentations are applied on top of the standard augmentations mentioned in the implementation details, for a fair comparison with the SAT. Table 11 compares the results between the SAT and the Data Augmentation by Transformation (DAT) classifier on CIFAR-10. It shows that the SAT significantly outperforms the DAT classifier.

Table 6: Top-1 accuracy with linear layers on ImageNet. AlexNet is used as the backbone to train the unsupervised models under comparison. A 1,000-way linear classifier is trained upon the various convolutional layers of feature maps, which are spatially resized to have about 9,000 elements. Fully supervised and random models are also reported to show the upper and lower bounds of the unsupervised model performances. Only a single crop is used, and no dropout or local response normalization is applied during testing, except for the models denoted with *, where ten crops are applied to compare results.
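The entropy minimization term borrowed from the VAT simply penalizes the mean prediction entropy on unlabeled examples; a minimal sketch.

import torch
import torch.nn.functional as F

def entropy_minimization(logits):
    """Mean prediction entropy over a batch of unlabeled examples; adding this
    term to the loss pushes the classifier toward confident (low-entropy)
    outputs on unlabeled data."""
    p = F.softmax(logits, dim=1)
    log_p = F.log_softmax(logits, dim=1)
    return -(p * log_p).sum(dim=1).mean()

logits = torch.randn(8, 10)   # classifier outputs on unlabeled images
ent = entropy_minimization(logits)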
Moreover, the projective transformations used in the SAT could severely distort training images that could incur undesired update to the model weights if the distorted images were used to naively train the network. This is witnessed by the results that the data augmentation by transformations performs even worse than the supervised-only method (see Table 8). In contrast, the SAT avoids a direct use of the transformed images to supervise the model training with their labels. Instead, it trains the learned representations to contain as much information as possible about the transformations. The superior performance demonstrates its outstanding ability of classifying images by exploring the variations of visual structures induced by transformations on both labeled and unlabeled images. CONCLUSION AND FUTURE WORKS In this paper, we present to use a novel approach of Au-toEncoding Transformations (AET) to learn representations that equivary to applied transformations on images. Unlike the group equivariant convolutions that would become intractable with a composition of complex transformations, the AET model seeks to learn representations of arbitrary forms by reconstructing transformations from the encoded representations of original and transformed images. The idea is further extended to a probabilistic model by maximizing the mutual information between the learned representation and the applied transformation. The intractable maximization problem is handled by introducing a surrogate transformation decoder and maximizing a variational lower bound of the mutual information, resulting in the Autoencoding Variational Transformations (AVT). Along this direction, a (Semi-)Supervised Autoencoding Transformation (SAT) approach can be derived by maximizing the joint mutual information of the learned representation with both the transformation and the label for a given sample. The proposed AET paradigm lies a solid foundation to explore transformation equivariant representations in many learning tasks. Particularly, we conduct experiments to show its superior performances on both unsupervised learning to semi-(supervised) learning tasks following standard evaluation protocols. In future, we will explore the great potential of applying the learned AET representation as the building block on more learning tasks, such as (instance) semantic segmentation, object detection, super-resolution reconstruction, few-shot learning, and fine-grained classification. Guo-Jun Qi Guo-Jun Qi (M14-SM18) is the Chief Scientist leading and overseeing an international R&D team for multiple artificial intelligent services on the Huawei Cloud since August 2018. He was a faculty member in the Department of Computer Science and the director of MAchine Perception and LEarning (MAPLE) Lab at the University of Central Florida since August 2014. Prior to that, he was also a Research Staff Member at IBM T.J. Watson Research Center, Yorktown Heights, NY. His research interests include machine learning and knowledge discovery from multi-modal data sources to build smart and reliable information and decision-making systems. Dr. Qi has published more than 100 papers in a broad range of venues in pattern recognition, machine learning and computer vision. He also has served or will serve as a general co-chair for ICME 2021,
6,935
1906.08628
2972729785
Transformation Equivariant Representations (TERs) aim to capture the intrinsic visual structures that equivary to various transformations by expanding the notion of translation equivariance underlying the success of Convolutional Neural Networks (CNNs). For this purpose, we present both deterministic AutoEncoding Transformations (AET) and probabilistic AutoEncoding Variational Transformations (AVT) models to learn visual representations from generic groups of transformations. While the AET is trained by directly decoding the transformations from the learned representations, the AVT is trained by maximizing the joint mutual information between the learned representation and transformations. This results in Generalized TERs (GTERs) equivariant against transformations in a more general fashion by capturing complex patterns of visual structures beyond the conventional linear equivariance under a transformation group. The presented approach can be extended to (semi-)supervised models by jointly maximizing the mutual information of the learned representation with both labels and transformations. Experiments demonstrate the proposed models outperform the state-of-the-art models in both unsupervised and (semi-)supervised tasks.
The SAT also differs from transformation-based data augmentation in which the transformed samples and their labels are used directly as additional training examples @cite_0. First, in semi-supervised learning, unlabeled examples cannot be directly augmented to form training examples due to their missing labels. Moreover, data augmentation needs to preserve the labels on augmented images, and this prevents us from applying the transformations that could severely distort the images (e.g., shearing, rotations with arbitrary angles, and projective transformations) or invalidate the associated labels (e.g., vertically flipping "6" to "9"). In contrast, the SAT avoids using the labels of transformed images to directly train the classifier in a supervised fashion; instead it attempts to encode the visual structures of images equivariant to various transformations without access to their labels. This leads to a label-blind TER regularizer to explore the unlabeled examples for the semi-supervised problem.
{ "abstract": [ "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry." ], "cite_N": [ "@cite_0" ], "mid": [ "2163605009" ] }
Learning Generalized Transformation Equivariant Representations via AutoEncoding Transformations
In this paper, we aspire to show that transformations play a fundamental role in learning powerful representations by transforming images as a means to reveal the intrinsic patterns from transformed visual structures. Particularly, Transformation Equivariant Representation (TER) learning seeks to model representations that equivary to various transformations on images. In other words, the representation of an image ought to change in the same way as the image is transformed. This is motivated by the assumption that image representations should capture the intrinsic visual structures such that transformations can be decoded from the representations of original and transformed images. Based on this assumption, we formally present a novel criterion of AutoEncoding Transformations (AET) to learn the TERs for various groups of transformations. Learning the TERs has been adopted in Hinton's seminal work on learning transformation equivariant capsules [1], and plays a critical role in the success of Convolutional Neural Networks (CNNs) [2]. Specifically, the representations learned by the CNNs are translation equivariant as their feature maps are shifted in the same way as input images are translated. On top of these feature maps that preserve the visual structures of translation equivariance, fully connected layers are built to output the predicted labels of input images. Obviously, the translation equivariant convolutional features play the pivotal role in delivering the state-of-the-art performances in the deep networks. Thus, they are extended beyond translations to learn more expressive representations of equivariance to generic types of transformations, such as affine, projective and homographic transformations. Along this direction, the group equivariant CNNs [3] are developed to guarantee that a transformation of the input images results in the corresponding transformation of their feature maps. However, the group equivariant CNNs [3] and their variants [4], [5] are restricted to discrete transformations, and the resultant representations are also limited to a group representation of linear transformations. These limitations restrict their ability to model group representations of complex transformations that could be continuous and nonlinear in many learning tasks, ranging from unsupervised to semi-supervised and supervised learning. Unsupervised Learning of Transformation Equivariant Representations The focus of this paper is on the principle of autoencoding transformations and its application to learn the transformation equivariant representations. The core idea is to encode data with the representations from which the transformations can be decoded as much as possible. We will begin with an unsupervised learning of such representations without involving any labeled data, and then proceed to a generalization to semi-supervised and supervised representations by encoding label information as well. Unlike group equivariant CNNs that learn feature maps mathematically satisfying the transformation equivariance as a function of the group of transformations, the proposed AutoEncoding Transformations (AET) presents an autoencoding architecture to learn transformation equivariant representations by reconstructing applied transformations. As long as a transformation of input images results in equivariant representations, it should be well decoded from the representations of original and transformed images.
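To make the idea concrete, the following is a minimal PyTorch-style sketch (not the authors' implementation) of the AET forward pass: a shared encoder embeds the original and transformed images, and a shallow decoder regresses the transformation from the pair of representations. The layer sizes and the 8-parameter projective parameterization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AETSketch(nn.Module):
    """Minimal AET sketch: decode the applied transformation from a pair of representations."""
    def __init__(self, encoder: nn.Module, feat_dim: int, n_params: int = 8):
        super().__init__()
        self.encoder = encoder          # shared (Siamese) representation encoder
        self.decoder = nn.Sequential(   # shallow transformation decoder
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, n_params),   # e.g., 8 parameters of a projective transformation
        )

    def forward(self, x, tx):
        z_orig = self.encoder(x)        # representation of the original image
        z_trans = self.encoder(tx)      # representation of the transformed image
        # estimate of the transformation from the concatenated representations
        return self.decoder(torch.cat([z_orig, z_trans], dim=1))
```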
Compared with the group equivariant CNNs, the AET model is more flexible and tractable in tackling any transformations and their compositions, since it does not rely on a strict convolutional structure to enforce the equivariance. The AET is also in contrast to the conventional AutoEncoding Data (AED) paradigm that instead aims to reconstruct data rather than the transformations. Figure 1(a) and (b) illustrate the comparison between the AET and AED. Since the space of transformations (e.g., the few parameters of transformations) is of much lower dimension than the data space (e.g., the pixel space of images), the decoder of the AET can be much shallower than that of the AED. This allows the backpropagated errors to more sufficiently train the encoder that models the representations of input data in the AET architecture. Moreover, an AET model can be trained from an information-theoretic perspective by maximizing the information in the learned representation about the applied transformation and the input data. This will generalize the group representations of linear transformations to more general forms that could equivary nonlinearly to input transformations. It results in Generalized Transformation Equivariant Representations (GTERs) that can capture more complex patterns of visual structure under transformations. Unfortunately, this leads to an intractable optimization problem of maximizing the mutual information between representations and transformations. A variational lower bound of the mutual information can be derived by introducing a surrogate transformation decoder, yielding a novel model of Autoencoding Variational Transformations (AVT) as an alternative to the deterministic AET. (Semi-)Supervised Learning of Transformation Equivariant Representations While both AET and AVT are trained in an unsupervised fashion, they can act as the basic representation for building (semi-)supervised classifiers. Along this direction, we can train a (Semi-)Supervised Autoencoding Transformation (SAT) model that jointly learns the transformation equivariant representations as well as the corresponding classifiers. Figure 1(c) illustrates the SAT model, where a classifier head is added upon the representation encoder of an AET network. The SAT can be based on either the deterministic AET or the probabilistic AVT architecture. Particularly, along the direction pointed by the AVT, we seek to train the proposed (semi-)supervised transformation equivariant classifiers by maximizing the mutual information of the learned representations with the transformations and labels. In this way, the trained SAT model can not only handle the transformed data through their equivarying representations, but also encode the labeling information through the supervised classifier. The resultant SAT also contains the deterministic model based on the AET as a special case, obtained by restricting the representation encoder and the transformation decoder to deterministic forms. The transformation equivariance in the SAT model is contrary to the data augmentation by transformations in the deep learning literature [2]. First, the data augmentation is only applicable to augment the labeled examples for model training, which cannot be extended to unlabeled data. This limits its use in semi-supervised learning, which must explore the unlabeled data. Second, the data augmentation aims to enforce transformation invariance, under which the labels of transformed data are supposed to be invariant.
This differs from the motivation to encode the inherent visual structures that equivary under various transformations. Actually, in the (semi-)supervised transformation equivariant classifiers, we aim to seamlessly integrate the principles of training transformation equivariant representations and transformation invariant classifiers. Indeed, both principles have played key roles in the compelling performance of the CNNs and their modern variants. This is witnessed by the translation equivariant convolutional feature maps and the classifiers built atop them, which are supposed to make transformation-invariant predictions with the spatial pooling and fully connected layers. We will show that the proposed SAT extends the translation equivariance in the CNNs to cover a generic class of transformation equivariance, as well as encodes the labels to train the representations and the associated transformation invariant classifiers. We hope this can deepen our understanding of the interplay between transformation equivariance and invariance, both of which play fundamental roles in training robust classifiers with labeled and unlabeled data. The remainder of this paper is organized as follows. We will review the related works in Section 2. The unsupervised and (semi-)supervised learning of transformation equivariant representations will be presented in the autoencoding transformation framework in Section 3 and Section 4, respectively. We will present experimental results in Section 5 and Section 6 for unsupervised and semi-supervised tasks. We will conclude the paper and discuss future works in Section 7. Transformation-Equivariant Representations Learning transformation-equivariant representations can be traced back to the seminal work on training capsule nets [1], [6], [7]. The transformation equivariance is characterized by the various directions of capsules, while the confidence of belonging to a particular class is captured by their lengths. Many efforts have been made in the literature [3], [4], [5] on extending the conventional translation-equivariant convolutions to cover more transformations. Among them are group equivariant convolutions (G-convolution) [3] that have been developed to equivary to more types of transformations. The idea of group equivariance has also been introduced to the capsule nets [5] by ensuring the equivariance of output pose vectors to a group of transformations with a generic routing mechanism. However, the group equivariant convolution is restricted to discrete transformations, which limits its ability to learn representations equivariant to generic continuous transformations. Unsupervised Representation Learning Auto-Encoders and GANs. Unsupervised auto-encoders have been extensively studied in the literature [8], [9], [10]. Existing auto-encoders are trained by reconstructing input data from the outputs of encoders. A large category of auto-encoder variants have been proposed. Among them is the Variational Auto-Encoder (VAE) [11] that maximizes a lower bound of the data likelihood to train a pair of probabilistic encoder and decoder, while beta-VAE seeks to disentangle representations by introducing an adjustable hyperparameter on the capacity of the latent channel to balance between the independence constraint and the reconstruction accuracy [12]. Denoising auto-encoders [10] attempt to reconstruct noise-corrupted data to learn robust representations, while contractive auto-encoders [13] encourage learning representations invariant to small perturbations on the data.
Along this direction, Hinton et al. [1] propose capsule networks to explore transformation equivariance by minimizing the discrepancy between the reconstructed and target data. On the other hand, Generative Adversarial Nets (GANs) have also been used to train unsupervised representations. Unlike the auto-encoders, the GANs [14] and their variants [15], [16], [17], [18] generate data from noises drawn from a simple distribution, with a discriminator trained adversarially to distinguish between real and fake data. The sampled noises can be viewed as the representations of the generated data over a manifold, and one can train an encoder by inverting the generator to find the generating noise. This can be implemented by jointly training a pair of mutually inverse generator and encoder [15], [16]. There also exist GANs that generalize better in producing unseen data based on the Lipschitz assumption on the real data distribution [17], [18], which can give rise to more powerful representations of data beyond the training examples [15], [16], [19]. Compared with the auto-encoders, GANs do not rely on learning a one-to-one reconstruction of data; instead, they aim to generate the entire distribution of data. Self-Supervisory Signals. There exist many other unsupervised learning methods using different types of self-supervised signals to train deep networks. Noroozi and Favaro [20] propose to solve Jigsaw puzzles to train a convolutional neural network. Doersch et al. [21] train the network by inferring the relative positions between sampled patches from an image as self-supervised information. Instead, Noroozi et al. [22] count features that satisfy equivalence relations between downsampled and tiled images. Gidaris et al. [23] propose to train RotNets by predicting a discrete set of image rotations, but they are unable to handle generic continuous transformations and their compositions. Dosovitskiy et al. [24] create a set of surrogate classes by applying various transformations to individual images. However, the resultant features could over-discriminate visually similar images as they always belong to different surrogate classes. Unsupervised features have also been learned from videos by estimating the self-motion of moving objects between consecutive frames [25]. (Semi-)Supervised Representation Learning In addition, there exist a large number of semi-supervised models in the literature. Here, we particularly mention three state-of-the-art methods that will be compared in experiments. Temporal ensembling [26] and mean teachers [27] both use an ensemble of teachers to supervise the training of a student model. Temporal ensembling uses the exponential moving average of predictions made by past models on unlabeled data as targets to train the student model. Instead, mean teachers update the student model with the exponential moving average of the weights of past models. In contrast, the Virtual Adversarial Training (VAT) [28] seeks to minimize the change of predictions on unlabeled examples when their inputs are adversarially perturbed. This results in a robust model that prefers smooth predictions over unlabeled data. The SAT also differs from transformation-based data augmentation in which the transformed samples and their labels are used directly as additional training examples [2]. First, in semi-supervised learning, unlabeled examples cannot be directly augmented to form training examples due to their missing labels.
Moreover, data augmentation needs to preserve the labels on augmented images, and this prevents us from applying the transformations that could severely distort the images (e.g., shearing, rotations with arbitrary angles, and projective transformations) or invalidate the associated labels (e.g., vertically flipping "6" to "9"). In contrast, the SAT avoids using the labels of transformed images to directly train the classifier in a supervised fashion; instead it attempts to encode the visual structures of images equivariant to various transformations without access to their labels. This leads to a label-blind TER regularizer to explore the unlabeled examples for the semi-supervised problem. UNSUPERVISED LEARNING OF TRANSFORMATION EQUIVARIANT REPRESENTATIONS In this section, we will first present the autoencoding transformation architecture to learn the transformation equivariant representations in a deterministic fashion. Then, a variational alternative approach will be presented to handle the uncertainty in the representation learning by maximizing the mutual information between the learned representations and the applied transformations. AET: A Deterministic Model We begin by defining the notations used in the proposed AutoEncoding Transformation (AET) architecture. Consider a random transformation t sampled from a transformation distribution p(t) (e.g., warping, projective and homographic transformations), as well as an image x drawn from a data distribution p(x) in a sample space X. Then the application of t to x results in a transformed image t(x). The goal of AET focuses on learning a representation encoder E_θ: x → E_θ(x) with parameters θ, which maps a sample x ∼ p(x) to its representation E_θ(x) in a linear space Z. For this purpose, one needs to learn a transformation decoder with parameters φ,

D_φ: [E_θ(x), E_θ(t(x))] → t̂,

that makes an estimate t̂ of the input transformation t from the representations of the original and transformed samples. Since the transformation decoder takes the encoder outputs rather than the original and transformed images, this pushes the encoder to capture the inherent visual structures of images to make a satisfactory estimate of the transformation. Then the AET can be trained to jointly learn the representation encoder E_θ and the transformation decoder D_φ. A loss function ℓ(t, t̂) measuring the deviation between a transformation t and its estimate t̂ is minimized to train the AET over p(t) and p(x):

min_{θ,φ} E_{t∼p(t), x∼p(x)} ℓ(t, t̂)    (1)

where the estimated transformation t̂ can be written as a function of the encoder E_θ and the decoder D_φ such that t̂ = D_φ[E_θ(x), E_θ(t(x))], and the expectation E is taken over the distributions of transformations and data. In this way, the encoder E_θ and the decoder D_φ can be jointly trained over mini-batches by back-propagating the gradient of the loss to update their parameters. AVT: A Probabilistic Model Alternatively, we can train transformation equivariant representations to contain as much information as possible about applied transformations to recover them. Notations Formally, our goal is to learn an encoder that maps a transformed sample t(x) to a probabilistic representation with mean f_θ and variance σ_θ. This results in the following probabilistic representation z ∈ Z of t(x):

z = f_θ(t(x)) + σ_θ(t(x)) ∘ ε    (2)

where ε is sampled from a normal distribution p(ε) ≜ N(ε|0, I), with ∘ denoting the element-wise product.
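The sampling in Eq. (2) is the standard reparameterization trick. A minimal sketch with illustrative tensor shapes, where the networks producing f_θ and log σ²_θ are assumed to exist and only their outputs are shown:

```python
import torch

def sample_representation(mean, logvar):
    """Reparameterized sample z = f + sigma * eps as in Eq. (2), with eps ~ N(0, I).

    `mean` and `logvar` stand for the encoder outputs f_theta(t(x)) and
    log sigma^2_theta(t(x)) for a batch of transformed images.
    """
    eps = torch.randn_like(mean)        # eps ~ N(0, I)
    sigma = torch.exp(0.5 * logvar)     # sigma = exp(log sigma^2 / 2)
    return mean + sigma * eps           # element-wise scale and shift

# usage with dummy encoder outputs for a batch of 4 images, 128-d representations
mean = torch.zeros(4, 128)
logvar = torch.zeros(4, 128)
z = sample_representation(mean, logvar)  # one stochastic representation per image
```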
Thus, the resultant probabilistic representation z follows a normal distribution p_θ(z|t, x) ≜ N(z | f_θ(t(x)), σ²_θ(t(x))) conditioned on the randomly sampled transformation t and input data x. On the other hand, the representation of the original sample x is a special case when t is an identity transformation:

z̃ = f_θ(x) + σ_θ(x) ∘ ε̃    (3)

whose mean and variance are computed by using the deep network with the same weights θ, and ε̃ ∼ p(ε̃) ≜ N(ε̃|0, I). Generalized Transformation Equivariance In the conventional definition of transformation equivariance, there should exist an automorphism ρ(t) ∈ Aut(Z): Z → Z in the representation space, such that

z = [ρ(t)](z̃).

(Note that the transformation t in the sample space X and the corresponding transformation ρ in the representation space Z need not be the same, but the representation transformation ρ(t) should be a function of the sample transformation t.) Here the transformation ρ(t) is independent of the input sample x. In other words, the representation z of a transformed sample is completely determined by the original representation z̃ and the applied transformation t, with no need to access the sample x. This is called the steerability property in the literature [4], which enables us to compute z by applying the sample-independent transformation directly to the original representation z̃. This property can be generalized without relying on the linear group representations of transformations through automorphisms. Instead of sticking with a linear ρ(t), one can seek a more general relation between z and z̃, independently of x. From an information-theoretic point of view, this requires that (z̃, t) should jointly contain all necessary information about z so that z can be best estimated from them without a direct access to x. This leads us to maximizing the mutual information I_θ(z; z̃, t) to learn the generalized transformation equivariant representations. Indeed, by the chain rule and the nonnegativity of mutual information, we have

I_θ(z; z̃, t) = I_θ(z; z̃, t, x) − I_θ(z; x|z̃, t) ≤ I_θ(z; z̃, t, x),

which shows I_θ(z; z̃, t) is upper bounded by the mutual information I_θ(z; z̃, t, x) between z and (z̃, t, x). Clearly, when I_θ(z; x|z̃, t) = 0, I_θ(z; z̃, t) attains the maximum value of its upper bound I_θ(z; z̃, t, x). In this case, x would provide no more information about z than (z̃, t), which implies one can estimate z directly from (z̃, t) without accessing x. Thus, we propose to solve

θ* = arg max_θ I_θ(z; z̃, t)

to learn the probabilistic encoder θ in pursuit of such a generalized TER. However, a direct maximization of the above mutual information needs to evaluate an intractable posterior p_θ(t|z, z̃) of the transformation. Thus, we instead lower-bound the mutual information by introducing a surrogate decoder q_φ(t|z, z̃) with parameters φ to approximate the true posterior. Variational Approach Unlike the variational autoencoder that lower-bounds the data likelihood [11], we directly take a lower bound of the mutual information [29] between z and (z̃, t):

I_θ(z; z̃, t) = I_θ(z; z̃) + I_θ(z; t|z̃) ≥ I_θ(z; t|z̃)
= H(t|z̃) − H(t|z, z̃)
= H(t|z̃) + E_{p_θ(t,z,z̃)} log p_θ(t|z, z̃)
= H(t|z̃) + E_{p_θ(t,z,z̃)} log q_φ(t|z, z̃) + E_{p(z,z̃)} D(p_θ(t|z, z̃) ‖ q_φ(t|z, z̃))
≥ H(t|z̃) + E_{p_θ(t,z,z̃)} log q_φ(t|z, z̃) ≜ Ĩ_{θ,φ}(z; z̃, t)

where H(·) denotes the (conditional) entropy, and D(p_θ(t|z, z̃) ‖ q_φ(t|z, z̃)) is the nonnegative Kullback-Leibler divergence between p_θ and q_φ. We choose to maximize the variational lower bound Ĩ_{θ,φ}(z; z̃, t).
Since H(t|z̃) is nonnegative and independent of the model parameters θ and φ, we choose to solve

max_{θ,φ} L^unsup_{θ,φ} ≜ E_{p_θ(t,z,z̃)} log q_φ(t|z, z̃) = E_{p(x),p(t)} E_{p(ε),p(ε̃)} log q_φ(t|z, z̃)    (4)

to learn θ and φ under the expectation over p(t, z, z̃), where the equality follows from the generative process for the representations in Eqs. (2)-(3). Variational Transformation Decoder To estimate a family of continuous transformations, we choose a normal distribution N(t | d_φ(z, z̃), σ²_φ(z, z̃)) as the posterior q_φ(t|z, z̃) of the transformation decoder, where the mean d_φ(z, z̃) and variance σ²_φ(z, z̃) are implemented by deep networks, respectively. For categorical transformations (e.g., horizontal vs. vertical flips, and rotations of different directions), a categorical distribution Cat(t | π_φ(z, z̃)) can be adopted as the posterior q_φ(t|z, z̃), where each entry of π_φ(z, z̃) is the probability mass for a transformation type. A hybrid distribution can also be defined to combine multiple continuous and categorical transformations, making the variational transformation decoder more flexible and appealing in handling complex transformations. The posterior q_φ(t|z, z̃) of the transformation is a function of the representations of the original and transformed images. Thus, a natural choice is to use a Siamese encoder network with shared weights to output the representations of the original and transformed samples, and construct the transformation decoder atop the concatenated representations. Figure 2(a) illustrates the architecture of the AVT network. Finally, it is not hard to see that the deterministic AET model would be viewed as a special case of the AVT, if the probabilistic representation encoder p_θ(z|t, x) and the transformation decoder q_φ(t|z, z̃) were set to deterministic forms as in the AET. (SEMI-)SUPERVISED LEARNING OF TRANSFORMATION EQUIVARIANT REPRESENTATIONS Autoencoding transformations can act as the basic representation block in many learning problems. In this section, we present their role in (semi-)supervised learning tasks to enable more accurate classification of samples by capturing their transformation equivariant representations. SAT: (Semi-)Supervised Autoencoding Transformations The unsupervised learning of autoencoding transformations can be generalized to (semi-)supervised cases with labeled samples. Accordingly, the goal is formulated as learning representations that contain as much (mutual) information as possible about not only the applied transformations but also the data labels. Given a labeled sample (x, y), we can define the joint distribution over the representations, transformation and label as

p_θ(y, t, z, z̃ | x) = p(t) p_θ(z̃|x) p_θ(z|t, x) p(y|x)

where we have assumed that y is independent of t and z once the sample x is given. In the presence of sample labels, the pursuit of transformation equivariant representations can be performed by maximizing the joint mutual information I_θ(y, z; t, z̃), such that the representation z̃ of the original sample and the transformation t contain sufficient information to classify the label y as well as to learn the representation z equivariant to the transformed sample.
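For the continuous case, maximizing the expectation E log q_φ(t|z, z̃) in (4) with the normal posterior above amounts to minimizing a Gaussian negative log-likelihood of the transformation parameters. A minimal sketch of such a training loss, with additive constants dropped and illustrative names:

```python
import torch

def transformation_nll(t, d_mean, d_logvar):
    """Negative log-likelihood of t under N(t | d_phi(z, z~), sigma^2_phi(z, z~)).

    `t` holds the ground-truth transformation parameters for a batch;
    `d_mean` and `d_logvar` are the decoder outputs. Constants are dropped,
    so minimizing this is equivalent to maximizing E log q_phi(t | z, z~).
    """
    return 0.5 * ((t - d_mean) ** 2 / d_logvar.exp() + d_logvar).sum(dim=1).mean()
```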
Like in (4) for the unsupervised case, the joint mutual information can be lower bounded in the following way:

I_θ(y, z; z̃, t) = I_θ(y, z; z̃) + I_θ(y, z; t|z̃)
= (I_θ(z; z̃) + I_θ(y; z̃|z)) + (I_θ(z; t|z̃) + I_θ(y; t|z̃, z))
≥ I_θ(y; z̃|z) + I_θ(z; t|z̃)
≥ H(y|z) + E_{p_θ(y,z,z̃)} log q_φ(y|z̃, z) + H(t|z̃) + E_{p_θ(t,z,z̃)} log q_φ(t|z, z̃) ≜ Ĩ_{θ,φ}(y, z; z̃, t)

where the first two equalities apply the chain rule of mutual information, and the first inequality uses the nonnegativity of the mutual information. In particular, we usually have I_θ(y; t|z̃, z) = 0, which means the transformation should not change the label y of a sample (i.e., transformation invariance of sample labels). The second inequality follows the variational bound we derived earlier in the last section. One can also assume that the surrogate posterior q_φ(y|z̃, z) of labels can be simplified to q_φ(y|z̃), since the representation of the original sample is supposed to provide sufficient information to predict the label. Since H(y|z) ≥ 0 and H(t|z̃) is independent of the model parameters θ and φ, we maximize the following variational lower bound:

max_{θ,φ} L^sup_{θ,φ} ≜ E_{p_θ(y,z̃)} log q_φ(y|z̃) + E_{p_θ(t,z,z̃)} log q_φ(t|z, z̃)
= E_{p(x)} [ E_{p(y|x),p(ε̃)} log q_φ(y|z̃) + E_{p(t),p(ε),p(ε̃)} log q_φ(t|z, z̃) ]    (5)

where z and z̃ are sampled by following Eqs. (2)-(3) in the equality, and the ground truth y is sampled from the label distribution p(y|x) directly. In a deterministic case, it is not hard to show that the first term of (5) is related to the cross-entropy loss in training a supervised classifier, while the second term would reduce to the loss (1) in the deterministic AET model. Therefore, in this sense, the AET loss plays a role in regularizing the cross-entropy loss to train a supervised model. In addition, a semi-supervised model can be trained by combining the unsupervised and supervised objectives (4) and (5):

max_{θ,φ} L^unsup_{θ,φ} + λ L^sup_{θ,φ}    (6)

with a nonnegative balancing coefficient λ. This enables us to jointly explore labeled and unlabeled examples and their representations equivariant to various transformations. We will demonstrate that the SAT can achieve superior performances to the existing state-of-the-art (semi-)supervised models. Moreover, the competitive performances also show the great potential of the model as the basic representation block in many machine learning and computer vision tasks. Figure 2(b) illustrates the architecture of the SAT model, in comparison with its AVT counterpart. Particularly, in the SAT, the transformation and label decoders are jointly trained atop the representation encoder. EXPERIMENTS: UNSUPERVISED LEARNING In this section, we compare the proposed deterministic AET and probabilistic AVT models against the other unsupervised methods on the CIFAR-10, ImageNet and Places datasets. The evaluation follows the protocols widely adopted by many existing unsupervised methods by applying the learned representations to downstream tasks. CIFAR-10 Experiments First, we evaluate the AET and AVT models on the CIFAR-10 dataset. Experiment Settings Architecture To make a fair and direct comparison with existing models, the Network-In-Network (NIN) is adopted on the CIFAR-10 dataset for the unsupervised learning task [23], [30]. The NIN consists of four convolutional blocks, each of which contains three convolutional layers. Both AET and AVT have two NIN branches with shared weights, each taking the original and transformed images as its input, respectively.
The output features of the fourth block of the two branches are concatenated and average-pooled to form a 384-d feature vector. Then an output layer follows to output the predicted transformation for the AET, and the mean d_φ and the log-of-variance log σ²_φ of the predicted transformation for the AVT, with the logarithm scaling the variance to a real value. The first two blocks of each branch are used as the encoder network to output the deterministic representation for the AET, and the mean f_θ of the probabilistic representation for the AVT. An additional 1 × 1 convolution followed by a batch normalization layer is added upon the encoder to produce the log-of-variance log σ²_θ. Implementation Details Both the AET and the AVT networks are trained by SGD with a batch size of 512 original images and their transformed versions. Momentum and weight decay are set to 0.9 and 5 × 10⁻⁴. For the AET, the learning rate is initialized to 0.1 and scheduled to drop by a factor of 5 after 240, 480, 640, 800 and 1,000 epochs. The network is trained for a total of 1,500 epochs. The AVT network is trained for 4,500 epochs, and its learning rate is initialized to 10⁻³. Then it is gradually decayed to 10⁻⁵ from 3,000 epochs, after it is increased to 5 × 10⁻³ at epoch 50. In the AVT, a single representation is randomly sampled from the encoder p_θ(z|t, x), which is fed into the decoder q_φ(t|z, z̃). To fully exploit the uncertainty of the representations, five samples are drawn and averaged as the representation of an image to train the downstream classifiers. We found that averaging randomly sampled representations outperforms using only the mean of the representation. Results Comparison with Other Methods. To evaluate the effectiveness of a learned unsupervised representation, a classifier is usually trained upon it. In our experiments, we follow the existing evaluation protocols [23], [24], [31], [32], [33] by building a classifier on top of the second convolutional block. First, we evaluate the classification results by using the AET and AVT representations with both model-based and model-free classifiers. For the model-based classifier, we follow [23] by training a non-linear classifier with three Fully-Connected (FC) layers: each of the two hidden layers has 200 neurons with batch normalization and ReLU activations, and the output layer is a soft-max layer with ten neurons, one for each image class. We also test a convolutional classifier upon the unsupervised features by adding a third NIN block whose output feature map is average-pooled and connected to a linear soft-max classifier. Table 1 shows the results by different models. It compares both fully supervised and unsupervised methods on CIFAR-10. The unsupervised AET and AVT with the convolutional classifier almost achieve the same error rates as their fully supervised NIN counterpart with four convolutional blocks (7.82% and 7.75% vs. 7.2%). We also compare the models when trained with varying numbers of FC layers in Table 2. The results show that the AVT, followed by the AET, consistently achieves the smallest errors no matter which classifiers are used. We also note that the probabilistic AVT outperforms the deterministic AET in experiments. This is likely due to the ability of the AVT to model the uncertainty of representations in training the downstream classifiers.
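The representation averaging used for the downstream classifiers (five draws per image, as stated above) can be sketched as follows; the function name and shapes are illustrative:

```python
import torch

def averaged_representation(mean, logvar, n_samples=5):
    """Average several reparameterized draws of the probabilistic representation.

    Follows the protocol above: five random representations of an image are
    drawn from the encoder outputs and averaged before feeding a classifier.
    """
    draws = [mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)
             for _ in range(n_samples)]
    return torch.stack(draws, dim=0).mean(dim=0)
```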
We also find that the projective transformation performs better than the affine transformation when they are used to train the AET, and thus we mainly use the projective transformation to train the AVT. Comparison based on Model-free KNN Classifiers. We also test the model-free KNN classifier based on the average-pooled feature representations from the second convolutional block. The KNN classifier is model-free, requiring no classifier to be trained from labeled examples. This enables us to make a direct evaluation on the quality of the learned features. Table 3 reports the KNN results. Table 4 reports the results of different models on CIFAR-10 when only few labeled examples (≤ 1,000 samples per class) are available; in this setting, both the AET and AVT outperform the fully supervised models as well as the other unsupervised models. ImageNet Experiments We further evaluate the performance of AET and AVT on the ImageNet dataset. Architectures and Training Details For a fair comparison with the existing methods [20], [23], [34], two AlexNet branches with shared parameters are created with original and transformed images as inputs to train the unsupervised models, respectively. The 4,096-d output features from the second last fully connected layer in each branch are concatenated and fed into the transformation decoder. We still use SGD to train the network, with a batch size of 768 images and their transformed counterparts, a momentum of 0.9, and a weight decay of 5 × 10⁻⁴. For the AET model, the initial learning rate is set to 0.01, and it is dropped by a factor of 10 at epochs 100 and 150. The model is trained for 200 epochs in total. For the AVT, the initial learning rate is set to 10⁻³, and it is dropped by a factor of 10 at epochs 300 and 350. The AVT is trained for 400 epochs in total. We still use the average over five samples from the encoder outputs to train the downstream classifiers to evaluate the AVT. Since the projective transformation has shown better performances, we adopt it for the experiments on ImageNet. Results Table 5 reports the Top-1 accuracies of the compared methods on ImageNet by following the evaluation protocol in [20]. Two settings are adopted for evaluation, where Conv4 and Conv5 denote training the remaining part of AlexNet on top of Conv4 and Conv5, respectively, with the labeled data. All the bottom convolutional layers up to Conv4 and Conv5 are frozen after they are trained in an unsupervised fashion. From the results, in both settings, the AVT model consistently outperforms the other unsupervised models, including the AET. We also compare with the fully supervised models that give the upper bound of the classification performance by training the AlexNet with all labeled data end-to-end. The classifiers of the random models are trained on top of Conv4 and Conv5 whose weights are randomly sampled, which sets the lower bound on performance. By comparison, the AET models narrow the performance gap to the upper-bound supervised models from 9.7% and 15.7% by RotNet and DeepCluster on Conv4 and Conv5, to 6.5% and 12.7% by the AET, and to 5.5% and 11.3% by the AVT. Moreover, we also follow the testing protocol adopted in [40] to compare the models by training a 1,000-way linear classifier on top of different numbers of convolutional layers in Table 6. Again, the AVT consistently outperforms all the compared unsupervised models in terms of the Top-1 accuracy. Places Experiments We also compare different models on the Places dataset. Table 7 reports the results.
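Both the ImageNet probes above and the Places probes below freeze the unsupervised feature layers and train only a classifier on top. A minimal PyTorch sketch of such a frozen-feature evaluation (the truncation point and feature dimension are placeholders, not values from the paper):

```python
import torch.nn as nn

def build_linear_probe(frozen_features: nn.Module, feat_dim: int,
                       n_classes: int = 1000) -> nn.Module:
    """Freeze the unsupervised feature layers (e.g., up to Conv4 or Conv5)
    and train only a linear soft-max classifier on top."""
    for p in frozen_features.parameters():
        p.requires_grad = False        # keep the pretrained features fixed
    return nn.Sequential(frozen_features, nn.Flatten(),
                         nn.Linear(feat_dim, n_classes))
```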
Unsupervised models are pretrained on the ImageNet dataset, and a linear logistic regression classifier is trained on top of different layers of convolutional feature maps with Places labels. This assesses the generalizability of unsupervised features from one dataset to another. The models are still based on AlexNet variants. We compare with the fully supervised models trained with the Places labels and ImageNet labels respectively, as well as with the random networks. Both the AET and the AVT models outperform the other unsupervised models, except performing slightly worse than Counting [40] with a shallow representation by Conv1 and Conv2. EXPERIMENTS: (SEMI-)SUPERVISED LEARNING We compare the proposed SAT model with the other state-of-the-art semi-supervised methods in this section. For the sake of a fair comparison, we follow the test protocol used in the literature [26], [27] on both CIFAR-10 [42] and SVHN [43], which are widely used as the benchmark datasets to evaluate semi-supervised models. Network Architecture and Implementation Details Network Architecture For the sake of a fair comparison, a 13-layer convolutional neural network, which has been widely used in existing semi-supervised models [26], [27], [28], is adopted as the backbone to build the SAT. It consists of three convolutional blocks, each of which contains three convolution layers. The SAT has two branches of such three blocks with shared weights, each taking the original and transformed images as input, respectively. The output feature maps from the third blocks of the two branches are concatenated and average-pooled, resulting in a 256-d feature vector. A fully-connected layer follows to predict the mean d_φ and the log-of-variance log σ²_φ of the transformation. The first two blocks are used as the encoder to output the mean f_θ of the representation, upon which an additional 1 × 1 convolution layer with batch normalization is added to compute the log-of-variance log σ²_θ. In addition, a classifier head is built on the representation from the encoder. Specifically, we draw five random representations of an input image, and feed their average to the classifier. The classifier head has the same structure as the third convolutional block, but its weights differ from those of the Siamese branches of the transformation decoder. The output feature map of this convolutional block is globally average-pooled to a 128-d feature vector, and a soft-max fully connected layer follows to predict the image label. Implementation Details The representation encoder, transformation decoder and the classifier are trained in an end-to-end fashion. In particular, SGD is adopted to iteratively update their weights over a minibatch with 500 images, their transformed counterparts, and 40 labeled examples. Momentum and weight decay are set to 0.9 and 5 × 10⁻⁴, respectively. The model is trained for a total of 4,500 epochs. The learning rate is initialized to 10⁻³. It is increased to 5 × 10⁻³ at epoch 50, before it is linearly decayed to 10⁻⁵ starting from 3,000 epochs. For a fair comparison, we adopt the entropy minimization used in the state-of-the-art virtual adversarial training [28]. A standard set of data augmentations from the literature [26], [27], [28] is also adopted throughout the experiments, which includes both horizontal flips and random translations on CIFAR-10, and only random translations on SVHN. The projective transformation, which performs better than the affine transformation, is adopted to train the semi-supervised representations.
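The learning-rate schedule described above can be written as a simple function of the epoch. A sketch under the stated settings, assuming the linear decay ends at the final training epoch:

```python
def sat_learning_rate(epoch, total_epochs=4500):
    """Piecewise schedule from the implementation details above:
    1e-3 initially, 5e-3 from epoch 50, then linear decay to 1e-5
    between epoch 3000 and the end of training (assumed here)."""
    if epoch < 50:
        return 1e-3
    if epoch < 3000:
        return 5e-3
    frac = (epoch - 3000) / (total_epochs - 3000)  # 0 -> 1 over the decay phase
    return 5e-3 + frac * (1e-5 - 5e-3)             # linear interpolation
```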
Results We compare with the state-of-the-art semi-supervised methods in the literature [26], [27]. In particular, the proposed SAT reduces the average error rates of Mean Teacher (the second best performing method) by 30.9%, 25.6%, and 22.2% relatively with 1,000, 2,000, and 4,000 labels on CIFAR-10, while reducing them by 1.1%, 11%, and 12.9% relatively with 250, 500, and 1,000 labels on SVHN. The compared semi-supervised methods, including the Π model [26], Temporal Ensembling [26], and Mean Teacher [27], attempt to maximize the consistency of model predictions on the transformed and original images to train semi-supervised classifiers. While they also apply transformations to explore unlabeled examples, the competitive performance of the SAT model shows that transformation-equivariant representations are more compelling for classifying images than the consistent-label predictions under transformations made by the compared methods. It justifies the proposed criterion of pursuing the transformation equivariance as a regularizer to train a classifier. It is not hard to see that the SAT can be integrated into the other semi-supervised methods as their base representation, and we believe this could further boost their performances. This is left to future work as it is beyond the scope of this paper. The Impact of Entropy Minimization We also conduct an ablation study of the Entropy Minimization (EntMin) on the model performance. EntMin was used in VAT [28], which outperformed the other semi-supervised methods in the literature. Here, we compare the error rates between the SAT and the VAT with and without the EntMin. As shown in Table 10, regardless of whether entropy minimization is adopted, the SAT always outperforms the corresponding VAT. We also note that, even without entropy minimization, the SAT still performs better than the other state-of-the-art semi-supervised classifiers such as Mean Teacher, Temporal Ensembling, and the Π model shown in Table 8. This demonstrates the compelling performance of the SAT model. Comparison with Data Augmentation by Transformations We also compare the performances between the SAT and a classification network trained with images augmented by the transformations. Specifically, in each minibatch, input images are augmented with the same set of random projective transformations used in the SAT. The transformation-augmented images and their labels are used to train a network with the same 13-layer architecture that has been adopted as the SAT backbone. Note that the transformation augmentations are applied on top of the standard augmentations mentioned in the implementation details for a fair comparison with the SAT. Table 11 compares the results between the SAT and the Data Augmentation by Transformation (DAT) classifier on CIFAR-10. It shows the SAT significantly outperforms the DAT classifier.

Table 6: Top-1 accuracy with linear layers on ImageNet. AlexNet is used as the backbone to train the unsupervised models under comparison. A 1,000-way linear classifier is trained upon various convolutional layers of feature maps that are spatially resized to have about 9,000 elements. Fully supervised and random models are also reported to show the upper and the lower bounds of unsupervised model performance. Only a single crop is used and no dropout or local response normalization is applied during testing, except for the models denoted with *, where ten crops are applied to compare results.
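For reference, the relative reductions quoted in the Results paragraph above are computed as (e_base − e_new)/e_base. A tiny sketch with hypothetical error rates, not values from the tables:

```python
def relative_reduction(err_base, err_new):
    """Relative error-rate reduction of `err_new` over the baseline `err_base`."""
    return (err_base - err_new) / err_base

# e.g., a baseline error of 10.0% reduced to 6.91% is a ~30.9% relative reduction
print(f"{relative_reduction(10.0, 6.91):.1%}")  # -> 30.9%
```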
Moreover, the projective transformations used in the SAT could severely distort training images, which could incur undesired updates to the model weights if the distorted images were used to naively train the network. This is witnessed by the result that the data augmentation by transformations performs even worse than the supervised-only method (see Table 8). In contrast, the SAT avoids a direct use of the transformed images to supervise the model training with their labels. Instead, it trains the learned representations to contain as much information as possible about the transformations. The superior performance demonstrates its outstanding ability to classify images by exploring the variations of visual structures induced by transformations on both labeled and unlabeled images. CONCLUSION AND FUTURE WORKS In this paper, we present a novel approach, AutoEncoding Transformations (AET), to learn representations that equivary to the transformations applied to images. Unlike the group equivariant convolutions that would become intractable with a composition of complex transformations, the AET model seeks to learn representations of arbitrary forms by reconstructing transformations from the encoded representations of original and transformed images. The idea is further extended to a probabilistic model by maximizing the mutual information between the learned representation and the applied transformation. The intractable maximization problem is handled by introducing a surrogate transformation decoder and maximizing a variational lower bound of the mutual information, resulting in the Autoencoding Variational Transformations (AVT). Along this direction, a (Semi-)Supervised Autoencoding Transformation (SAT) approach can be derived by maximizing the joint mutual information of the learned representation with both the transformation and the label for a given sample. The proposed AET paradigm lays a solid foundation to explore transformation equivariant representations in many learning tasks. Particularly, we conduct experiments to show its superior performances on both unsupervised and (semi-)supervised learning tasks following standard evaluation protocols. In the future, we will explore the great potential of applying the learned AET representation as the building block on more learning tasks, such as (instance) semantic segmentation, object detection, super-resolution reconstruction, few-shot learning, and fine-grained classification. Guo-Jun Qi (M'14-SM'18) is the Chief Scientist leading and overseeing an international R&D team for multiple artificial intelligence services on the Huawei Cloud since August 2018. He was a faculty member in the Department of Computer Science and the director of the MAchine Perception and LEarning (MAPLE) Lab at the University of Central Florida from August 2014. Prior to that, he was also a Research Staff Member at IBM T.J. Watson Research Center, Yorktown Heights, NY. His research interests include machine learning and knowledge discovery from multi-modal data sources to build smart and reliable information and decision-making systems. Dr. Qi has published more than 100 papers in a broad range of venues in pattern recognition, machine learning and computer vision. He also has served or will serve as a general co-chair for ICME 2021,
6,935
1906.08206
2949948433
After the peace agreement of 2016 with FARC, the killings of social leaders have emerged as an important post-conflict challenge for Colombia. We present a data analysis based on official records obtained from the Colombian General Attorney's Office spanning the time period from 2012 to 2017. The results of the analysis show a drastic increase in the officially recorded number of killings of democratically elected leaders of community organizations, in particular those belonging to Juntas de Accion Comunal [Community Action Boards]. These are important entities that have been part of the Colombian democratic apparatus since 1958, and enable communities to advocate for their needs. We also describe how the data analysis guided a journalistic investigation that was motivated by the Colombian government's denial of the systematic nature of social leaders killings.
In a working paper, @cite_9 use data from the Colombian nonprofit organization to investigate the killings of social leaders. The authors hypothesize that social leaders were increasingly killed by armed groups excluded from the peace process that wanted to consolidate their power, especially in areas where they took over FARC's illegal activities. In the dataset, @cite_9 also find that the category of social leaders targeted the most since the beginning of the ceasefire is that of local community council leaders.
{ "abstract": [ "We study the unintended consequences of the recent peace process in Colombia, that ended over five decades of internal armed conflict with the FARC insurgency. Using a triple differences empirical strategy, we show that the permanent ceasefire that started in December 2014 in the context of the peace negotiations was followed by an increase in the killing of social leaders in previously FARC-dominated territories, perpetrated by other armed groups seeking control of these areas. Con- sistent with our interpretation that local social leaders are killed to thwart collective action and mobilization at the municipal level, we show that the targeting of social leaders is not explained by the behavior of the overall homicide rate and that it is exacerbated in municipalities with weaker state capacity and an inefficient local judi- ciary. Our results suggest that partial pacification processes can exacerbate violence by other existing armed groups, aimed at controlling pacified territories." ], "cite_N": [ "@cite_9" ], "mid": [ "2810653167" ] }
0
1906.08138
2949272455
Stencil algorithms have been receiving considerable interest in HPC research for decades. The techniques used to approach multi-core stencil performance modeling and engineering span basic runtime measurements, elaborate performance models, detailed hardware counter analysis, and thorough scaling behavior evaluation. Due to the plurality of approaches and stencil patterns, we set out to develop a generalizable methodology for reproducible measurements accompanied by state-of-the-art performance models. Our open-source toolchain and collected results are publicly available in the "Intranode Stencil Performance Evaluation Collection" (INSPECT). We present the underlying methodologies, models and tools involved in gathering and documenting the performance behavior of a collection of typical stencil patterns across multiple architectures and hardware configuration options. Our aim is to endow performance-aware application developers with reproducible baseline performance data and validated models to initiate a well-defined process of performance assessment and optimization.
The authors of @cite_27 published a study of a stencil kernel on multiple architectures in 2009. It is based on the same modeling principles but does not provide a unified process and presentation reusable for other kernels.
{ "abstract": [ "Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88 of algorithmic peak while the best competing cache-based processor achieves only 54 of algorithmic peak performance." ], "cite_N": [ "@cite_27" ], "mid": [ "1997147891" ] }
Collecting and Presenting Reproducible Intranode Stencil Performance: INSPECT
Stencils are relative data access and computation patterns that emerge from the discretization of differential operators on regular grids. Stencils appear in many fields, from image processing, fluid dynamics, and materials science to mechanical engineering, and are typically embedded in loop nests that are at least as deep as the number of dimensions in the original continuous problem. Despite their apparent simplicity, stencil algorithms show a rich set of performance patterns and allow for various optimizations in terms of data access and work reduction. For instance, the performance of most simple stencil algorithms (such as the 3D 7-point constant-coefficient variant encountered with a simple finite-difference discretization of the Laplace operator on a regular Cartesian grid) is limited by the memory bandwidth for in-memory working sets on multicore CPUs. Spatial blocking can reduce the code balance to a theoretical minimum but will not decouple from memory bandwidth. Temporal blocking can finally render the implementation cache or core bound with significant performance gains, but there are many different approaches and the number of parameters is significant [23]. Moreover, even recent publications often fail to assess performance baselines correctly, rendering all reported speedups meaningless. A Stencil Baseline Performance Collection We set out to compile an extensible collection of stencil runtime performance characteristics from a variety of architectures, backed up by analytic performance models and performance counter analysis. The stencil update iteration is embedded in a Jacobi algorithm without advanced optimizations. All resulting information is available in a public online collection, organized based on an abstract classification scheme, and can be easily reproduced with the presented information and the available open-source toolchain. The collection is also enriched with specific comments on the performance behavior and model predictions. It can be browsed at https://rrze-hpc.github.io/INSPECT. Stencil Classification In order to span a space of possible stencils we use a classification based on the following properties:
• dimensions: dimensionality of the stencil, typically 3D or 2D
• radius: longest relative offset from the center point in any dimension, usually r = 1, r = 2, or r = 3
• coefficient weighting: how coefficients are applied, e.g., homogeneous, heterogeneous, isotropic, or point-symmetric
• stencil type: general shape of the stencil, e.g., star or box
• coefficient type: constant throughout the grid or variable at each grid point
• data type: numeric type of grid elements, e.g., double or float
These properties may have a large performance impact depending on the details of the underlying CPU architecture and its features and configuration, making runtime predictions difficult. Much of the complexity lies in the cache hierarchy, where data transfers may be handled before they reach main memory and behavior depends on spatial and temporal locality. On the other hand, in-core bottlenecks such as pipeline latencies and throughput limits may also play a decisive role, especially with more complicated stencils. A visual overview of the stencil classification is given in Figure 1. Isotropic coefficient weighting deserves special attention: all nodes with the same distance to the origin share the same coefficient; different distances have distinct coefficients. With the given set of classification properties, this leads to at least 192 relevant combinations.
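The count of 192 follows directly from the product of the property domains; a quick sketch enumerating them, with the concrete value sets mirroring the list above:

```python
from itertools import product

properties = {
    "dimensions": ["2D", "3D"],
    "radius": [1, 2, 3],
    "coefficient weighting": ["homogeneous", "heterogeneous",
                              "isotropic", "point-symmetric"],
    "stencil type": ["star", "box"],
    "coefficient type": ["constant", "variable"],
    "data type": ["double", "float"],
}

# Cartesian product over all property domains
combinations = list(product(*properties.values()))
print(len(combinations))  # 2 * 3 * 4 * 2 * 2 * 2 = 192
```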
We have not yet gathered data for all possible combinations and architectures available to us, but a representative set is already available. This paper is organized as follows: In Section 2 we briefly describe the computer architectural features of the benchmark systems, the analysis tools, and the performance models we employ to set up the stencil performance collection. Section 3 details our automated workflow and includes a description of the structure and origin of the data presented on the INSPECT website. Section 4 uses a few distinct stencil examples to showcase the data presentation and the possible insights gained, and Section 5 explains how to put INSPECT to use. Finally, Section 6 gives related work and Section 7 concludes the paper.

Contributions

This work makes the following contributions:
• a simple classification scheme of stencil characteristics, based on the underlying numerical problem and defining the architecture-dependent performance behavior (see Sec. 1.2),
• support for automatic analytic performance model generation for the AMD Zen microarchitecture (see Sec. 4.2),
• a first-of-its-kind collection, presentation and method to match the measured performance data with automatically generated single- and multicore performance models, in order to gain insight into relevant performance bottlenecks, uncover compiler deficiencies, and guide performance optimization strategies (see Sec. 3, 4 and 5),
• an automatic extraction of the Phenomenological ECM model from hardware performance counters (see Sec. 2.5.3), based on ideas from [23],
• a public website on which the gathered data, performance models, and reproducibility information are presented in a clear and structured way (see https://rrze-hpc.github.io/INSPECT/),
• built-in reproducibility by transparently making all necessary runtime information and system configurations available to developers, including the exact commands to execute for reproduction (see Sec. 4).

STEMPEL

STEMPEL is used to generate stencil codes from the parameters mentioned in Section 1 (dimensions, radius, coefficient weighting, stencil type, coefficient type and data type). The resulting kernel code is used as input for Kerncraft, but STEMPEL also supports generation of benchmarkable code which can be compiled and executed stand-alone. The generated code may also include OpenMP multithreading support or spatial blocking; the latter is used to investigate blocking behavior for INSPECT. For accurate extraction of performance data, the code is additionally instrumented with LIKWID markers to be used with the likwid-perfctr tool.

ECM & Roofline Model

The Execution-Cache-Memory (ECM) [28] and Roofline [30] models are resource-centered performance models that assume certain hardware limitations (such as data transfer bandwidths, instruction throughput limits, instruction latencies, etc.) and map the application to a simplified version of the hardware in order to expose the relevant bottleneck(s). This analysis depends on the dataset size and dimensions, as well as the computational effort during each iteration. Both models generally neglect data access latencies, although these can be added as "penalties". Latency predictions would require other models and are usually not relevant for stencil code performance. In some cases, latency penalties need to be considered for "perfect" predictions [17], but this correction is usually small and is neglected in this work. The compute performance bottleneck is analyzed based on the loop body's maximum in-core performance.
Assuming that all load operations will hit the L1 cache, one can estimate the optimistic runtime of the loop body in cycles. The necessary information has been published by Intel, Agner Fog [6], uops.info [1], or through the Intel Architecture Code Analyzer (IACA) [14]. Although none of those sources are complete, they are good enough for a well-informed estimation. The resulting inverse throughput in cycles per cacheline (lower is faster) exposes the bottleneck, for both ECM and Roofline. Cachelines are considered the basic unit of work, since the cacheline is also the basic unit of the caches. E.g., for a double-precision code with 8 byte per element on a machine with 64 byte cachelines, there are eight iterations per cacheline. Most Roofline publications use performance (higher is faster) as the baseline metric, but both units can be converted into one another trivially via clock/performance × work = inverse throughput:

    (cycle/second) / (FLOP/second) × (FLOP/iteration) × (iterations/cacheline) = cycles/cacheline

Memory and cache bottlenecks require a prediction of which data access will be served by which memory hierarchy level. This is done either with a cache simulator (e.g., pycachesim [11]) or with the analytical layer-condition model [10,13]. The result of this prediction is the expected traffic volume between the levels of the memory hierarchy. The Roofline model then combines the data volume per cacheline (e.g., eight iterations with double precision) for each hierarchy level with previously measured bandwidths for the same level and core count, and selects the slowest as the bottleneck. The ECM model combines all inter-cache transfers with theoretical bandwidths from documentation, and volumes between memory and last-level cache with a measured full-socket bandwidth. These inverse throughputs are combined either by summation if no overlapping is assumed, by maximization for full overlapping, or by a more complicated function for intermediate situations. For Intel processors, assuming that no overlap between any load, store, inter-cache and memory transfers happens has proven to be the best-fitting model assumption. This might change in future microarchitectures and does not hold for other vendors. For the AMD Zen microarchitecture, in-core computations and all inter-cache and register transfers overlap down to the L2 cache, while transfers between L2, L3 and main memory serialize [15]. Another approach is to measure transfers using the hardware-provided performance counters, base the ECM model on these empirical volumes, and predict the runtime using the non-overlapping assumption. This is referred to as the Phenomenological ECM model, also discussed in Section 2.5.3. The model parameters used by the models are shown in Figure 2 and Table 1; the corresponding machine files can be found on the INSPECT webpage. Of the ECM model parameters, all throughputs are published in vendor documentation, except for the measured memory bandwidth highlighted by a preceding tilde (˜). The throughputs of execution ports, scheduler and decoder are not shown here, but most may be found in official documentation and public resources, or can be benchmarked [12]. The overall instruction-level parallelism capability is represented by the different ports.
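As a worked instance of this conversion (all numbers are illustrative assumptions, not measurements from the collection): a code running at 4 GFLOP/s on a 2.0 GHz core, with 8 FLOP per iteration and 8 iterations per cacheline, yields 2.0/4.0 × 8 × 8 = 32 cy/CL. A minimal helper in C:

    #include <stdio.h>

    /* Convert a performance number into the cycles-per-cacheline metric used
       by ECM and Roofline. All inputs below are illustrative assumptions. */
    static double inverse_throughput(double clock_ghz, double perf_gflops,
                                     double flop_per_it, double it_per_cl) {
        return clock_ghz / perf_gflops * flop_per_it * it_per_cl;
    }

    int main(void) {
        /* 2.0 GHz, 4 GFLOP/s, 8 FLOP/iteration, 8 iterations per 64 B cacheline */
        printf("%.1f cy/CL\n", inverse_throughput(2.0, 4.0, 8.0, 8.0)); /* 32.0 */
        return 0;
    }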
While in this work the Roofline model is always presented as a single (reciprocal) throughput (TP), the ECM model is produced from architecture-dependent combinations of the in-core computation TP T_comp, the load/store TP T_RegL1, the inter-L1/L2 transfer TP T_L1L2, the inter-L2/L3 transfer TP T_L2L3 and the main memory transfer TP T_L3MEM, all with the unit of cycles per cacheline.

Intel Microarchitectures

The Intel microarchitectures Haswell (HSW) and Broadwell (BDW) have no differences with regard to our modeling and performance analysis. Figure 2a shows their architectural diagram. Both architectures have seven execution ports; most important are the two AVX2 fused-multiply-add (FMA) ports, two load ports able to handle 256 bits per cycle, and one 256-bit store port. AVX and AVX2 instructions can make use of sixteen 256-bit YMM registers. On the memory side, there is a linear inclusive cache hierarchy. Due to the double ring interconnect between individual HSW and BDW cores and separate memory controllers residing on each ring, a cluster-on-die (CoD) mode can be enabled to allow NUMA separation of the two rings. With cluster-on-die mode, the last-level cache (i.e., L3) is split, only used by cores on the same ring, and a slightly higher memory bandwidth can be attained. Results for HSW and BDW are presented with cluster-on-die mode enabled.

Figure 2. Simplified block diagrams for (a) Intel Haswell/Broadwell and (b) Skylake X (SKX), including the execution ports and cache hierarchy; differences are highlighted. On Skylake X the in-socket/in-SNC memory bandwidth is about 120 GB/s, depending on runtime configuration and load/store ratio. The Skylake architecture (without X) has no AVX512 support.

The Skylake X (SKX) microarchitecture is shown in Figure 2b. It supports AVX512, which boosts the load, store and FMA ports from 256-bit to 512-bit width. To allow two AVX512 FMA instructions to be executed in parallel (i.e., a combined inverse throughput of 0.5 cycles), an AVX512 pipeline was added to Port 5, and the existing 256-bit pipelines at Ports 0 and 1 may be used in lockstep to reach 2 × 512-bit width. There are 32 512-bit ZMM registers, and the number of 256-bit registers was also doubled to 32. The SKX microarchitecture has a non-inclusive last-level victim cache, which may cache cachelines evicted from L2 and is used to write back to memory, but not to load from it. All data coming from memory is loaded directly into the L2 cache of the requesting core. Unmodified cachelines may also be dropped from L2. The criteria which decide whether a cacheline is evicted from L2 to the victim cache have not been disclosed. For our models we assume that all evicts are passed on to the last-level cache and, if changed, stored from there into memory. Sub-NUMA clustering (SNC) is similar to CoD and was also enabled during our measurements. The changed cache structure of the Skylake microarchitecture, with its undocumented decision heuristic for L3 cache usage and unavailable hardware counters for some of the inter-cache and memory data paths, still poses a problem. As we will see later, assuming the traditional linear inclusive cache hierarchy often yields reasonable results, but this is still under investigation. In the architecture diagrams, two-way arrows represent half-duplex capabilities and two individual arrows mean full-duplex capabilities. The factor along with the bandwidth emphasizes the half- and full-duplex behavior (e.g., between L2 and L3 on Skylake X, 128 bits per cycle may be transferred both ways concurrently).
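Written out as code, the two compositions used in this paper look as follows: a small C sketch combining the non-overlapping Intel composition from Section 2.2 with the partially overlapping AMD Zen composition quoted later in Section 4.2 (all inputs and results in cycles per cacheline):

    /* ECM composition, cycles per cacheline (cy/CL). */
    static double max2(double a, double b) { return a > b ? a : b; }

    /* Intel: data transfers do not overlap with each other -> sum them;
       the sum overlaps with in-core execution -> take the maximum. */
    double ecm_intel(double Tcomp, double TRegL1, double TL1L2,
                     double TL2L3, double TL3MEM) {
        return max2(Tcomp, TRegL1 + TL1L2 + TL2L3 + TL3MEM);
    }

    /* AMD Zen: in-core execution, register/L1 and L1/L2 transfers all overlap;
       only the L2/L3 and L3/memory transfers serialize. */
    double ecm_zen(double Tcomp, double TRegL1, double TL1L2,
                   double TL2L3, double TL3MEM) {
        return max2(max2(Tcomp, TRegL1), max2(TL1L2, TL2L3 + TL3MEM));
    }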
The architectural information above, and more, is used to construct a machine file for Kerncraft, as explained in Section 2.5.1. The specific systems used for INSPECT are documented at https://rrze-hpc.github.io/INSPECT/machinefiles. A summary of the relevant configuration details, in addition to the microarchitectural details in Figure 2, is given in Table 1.

AMD Zen Microarchitecture

The AMD Zen microarchitecture has ten ports, the first four of which (0, 1, 2 and 3) support 128-bit wide floating-point SSE instructions (see Figure 3). Each execution unit, except for divide, is present on two ports, e.g., FMA and MUL on 0 and 1, and ADD on 2 and 3. The decoder stage supports AVX instructions by utilizing two SSE ports simultaneously (similar to AVX512 on Ports 0 and 1 with Skylake X). Ports 4 through 7 handle integer and control flow instructions. Ports 8 and 9 each have their own address generation unit (AGU) and can utilize the two shared load ports and the single shared store port. The store and load ports each operate on up to 128 bits and can issue one load/store to the 32 kB first-level (L1) cache; thus at most either two loads or one load and one store can be executed per cycle. The 512 kB inclusive L2 cache is connected to the L1 cache with 256 bits per cycle full-duplex (i.e., a 64 B cacheline takes two cycles to transfer, and loads and stores proceed in parallel). Between L2 and the last-level cache (L3), 256 bits can be transferred per cycle, but only half-duplex (i.e., either load or store). The exact heuristics of the victim cache are not publicly available. In addition, support for hardware performance counters is much more limited, which does not allow us to inspect many of the transfers between memory levels. The maximum achievable main memory bandwidth is close to 160 GB/s, 30% higher than what we were able to measure on Skylake X. The specific AMD CPU used here has 24 cores, split over four NUMA domains of 6 cores each.

Kerncraft

Kerncraft brings together static analysis of the kernel code, microarchitecture data and execution models into a coherent performance prediction, based on the Roofline and ECM models. It also allows benchmarking of the kernel codes with single and multiple cores (using OpenMP) and collection of hardware performance counter data during execution.

Machine Model

The specific machine model for each microarchitecture is described in the machine files, which are either provided with Kerncraft or can be generated semi-automatically using likwid bench auto, a tool distributed with Kerncraft. All machine models mentioned in this paper are provided with Kerncraft in the examples directory. When using the semi-automatic generation, the tool must be executed on the target machine, and the resulting file needs to be completed manually from vendor documentation, model assumptions, or existing files of similar architectures. Machine model files contain detailed information on the architecture and memory hierarchy as well as benchmark results necessary to construct the Roofline and ECM models. In particular, STREAM [24] benchmark results, cache sizes and parameters, NUMA topology, base clock and architecture-specific compiler arguments make up the majority of the description. Some of this can be collected automatically; some needs to be provided manually. The INSPECT website presents a breakdown of the architecture information in the machine files: https://rrze-hpc.github.io/INSPECT/machinefiles.
There may be known issues with a machine model, which are also documented on INSPECT. E.g., Haswell's L1-L2 bandwidth is theoretically 64 B/cy, but benchmarks show that the achievable bandwidth may be as low as 32 B/cy; Kerncraft assumes the optimistic 64 B/cy.

Model Construction

The static analysis and model building is split into two parts, in-core execution and data analysis, both of which are done without executing the kernel code and can therefore be performed on any hardware. The in-core analysis is done via the Intel Architecture Code Analyzer (IACA) [14] for Intel architectures and via the Open Source Architecture Code Analyzer (OSACA) [20] for AMD Zen; it yields the number of cycles each execution port is occupied by the kernel's assembly instructions (T_comp and T_RegL1 for the ECM model). Kerncraft takes care of compilation, unrolling and vectorization in order to correctly interpret the IACA/OSACA result and relate it to the high-level loop iterations found in the kernel source code. The data analysis predicts inter-cache and memory transfer data volumes using either the analytical layer-condition model (LC) or the pycachesim [11] cache simulator; this yields T_L1L2, T_L2L3 and T_L3MEM for the ECM model. The LC analysis is very fast and gives a closed-form analytical model, but relies on an idealized fully associative, inclusive, least-recently-used cache hierarchy. The cache simulator can handle more realistic and complex cache configurations, such as associativity and non-inclusive cache hierarchies, at the cost of speed and without a closed-form solution. Certain aspects of real hardware cannot be simulated due to missing documentation, e.g., the cache placement algorithm of the last-level cache on current Intel microarchitectures. For multi-core scaling, we use the memory latency penalty estimation described by Hofmann [16].

Benchmark Mode and Phenomenological ECM

Benchmarking of any code can be tricky, and stencil codes are no exception. Kerncraft takes care of pinning and hardware performance counter monitoring with LIKWID [29], as well as ensuring a minimal runtime, checking the machine configuration, and deriving relevant metrics from the measurements. The underlying performance counters are defined in the machine model and based on validated metrics provided by LIKWID. In addition to metrics based on runtime measurement, such as memory bandwidths and lattice updates per second, data transfer volumes can be measured accurately. From these, Kerncraft can construct a Phenomenological ECM model. This phenomenological model is not based on the measured runtime or derived bandwidths; it uses the measured inter-cache and memory data volumes as well as counts of executed µops per port. The overall prediction is then compiled in the same way the analytical ECM prediction is compiled from vendor-documented transfer rates, measured memory bandwidth and instruction throughput information. The necessary counters have been compiled from Intel documentation, and their correctness has been validated with microbenchmarks where possible; this process is part of the ongoing LIKWID development. Kerncraft's machine models put the counters in relation to ECM model parameters, such as L1-L2 traffic or execution port utilization. To measure all necessary counters, multiple executions are unavoidable, because only a limited number of counter registers is available. On Intel's server microarchitectures, many performance counters are available and a complete model can be assembled, as presented in Figure 4f.
On AMD Zen, however, essential contributions such as main memory traffic cannot be examined, so a complete phenomenological model cannot be constructed.

Data Collection

In order to build a comprehensive single-node stencil performance database, the preexisting open-source tools STEMPEL [9], Kerncraft [13] and LIKWID [29] have been combined into the "Intranode Stencil Performance Evaluation Collection" (INSPECT). For given stencil parameters, all benchmark and automated performance modeling data for the present machine can be collected with a single (job) script. The data collection workflow is outlined in Algorithm 1 (sketch, reconstructed from the recoverable steps of the original listing): generate the stencil code and a 3D spatially blocked variant with STEMPEL; for n ← 10 to N′_{L3,3D}, perform single-core grid scaling, determine 'good' 3D blocking factors, and benchmark with likwid-perfctr; finally, postprocess everything into CSV data, graphs and the website.

For the stencil source code generation, to be supplied to Kerncraft, STEMPEL is used. Possible parameters are: dimension, radius, stencil type, coefficient weighting and type, as well as the data type. Examples of the stencil code produced by STEMPEL are shown in Listings 1, 2 and 3. If a custom stencil is to be used, this step can be omitted. The stencil code is then supplied to Kerncraft in order to do the layer condition analysis and determine sensible ranges for the grid sizes to be examined. Data ranges are chosen such that the last-level cache 3D layer condition is violated and a steady state will be reached, as long as the available main memory per NUMA domain is not exceeded (see 1.5 · LC_{L3,3D} in Algorithm 1).

The next step is data collection. For single-core grid scaling, Kerncraft is used to generate Roofline and ECM performance models with layer conditions and cache simulation, as well as benchmark and Phenomenological ECM data. Multi-core thread scaling is done for the largest previously calculated, memory-bound grid size, for all cores of one socket. Here Kerncraft is again used to generate Roofline, ECM and benchmark data. In a last step, spatial blocking is performed. Here STEMPEL is once more used to generate executable benchmark code with spatial blocking from the basic stencil code generated before. This spatially blocked code is then instrumented with LIKWID to obtain the required benchmark data. Finally, all data is collected, postprocessed and archived. The outputs are the data files needed for the visualization on the website. Those files can be pushed to the git repository to automatically include the inspected stencil on the INSPECT website.

For every stencil/machine configuration the website shows general stencil information, graphs of the measured and predicted performance, and step-by-step instructions for the replication of the shown data. The general stencil information contains: stencil parameters, kernel source code, kernel assembly and layer condition analysis, as well as the IACA throughput analysis and information about the state of the machine and operating system the data was collected on. Performance prediction and benchmark data are shown in five different graph types:
• stacked ECM (with layer condition, cache simulation and phenomenological variants)
• Roofline performance (with layer condition and cache simulation)
• data transfers for single-core grid scaling
• full-socket thread scaling (one grid size by default, but possibly more)
• spatial blocking performance plots (3D L3 cache blocking by default, but possibly more)

An example of the plots visible for each stencil configuration is shown in Figure 4.
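As an aside on the grid-size bound used in Algorithm 1 above: the break point of the L3 3D layer condition can be estimated directly. The sketch below assumes (our assumption, not stated in this form in the text) that roughly four grid layers of N × N doubles, three read layers plus one write-allocate layer, must fit into half the cache:

    #include <math.h>
    #include <stdio.h>

    /* Estimate the grid size N at which the 3D layer condition of a radius-1
       Jacobi stencil breaks, under the 4-layers-in-half-the-cache assumption. */
    static double lc3d_break(double cache_bytes) {
        const double layers = 4.0, bytes_per_elem = 8.0; /* doubles */
        return sqrt(cache_bytes / 2.0 / (layers * bytes_per_elem));
    }

    int main(void) {
        printf("N_break ~ %.0f\n", lc3d_break(35.0e6)); /* ~739 for a 35 MB L3 */
        return 0;
    }

With a 35 MB L3 this gives N ≈ 739, consistent with the L3-3D jump near 760³ reported for Haswell in the next section; Algorithm 1 then sweeps grid sizes up to roughly 1.5 times this break point.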
The reproducibility information contains detailed steps on how to generate the stencil code with STEMPEL and all necessary commands to retrieve the data shown on the site. Additionally, all shown data can be commented and validated with a traffic light system reflecting the quality of the shown plots. This allows highlighting problems or unintuitive results of a specific stencil or hardware configuration that could otherwise be mistaken for incorrect data.

Examples

Three exemplary stencil configurations, covering short- and long-ranged as well as star and box stencils, were selected from the INSPECT website, on three different machines. We will start with a very basic 7-point stencil on a Haswell Xeon E5-2695v3 machine, then continue with a long-ranged stencil on a Skylake Xeon Gold 6148 and compare a box stencil on Broadwell Xeon E5-2697v4 and Skylake Xeon Gold 6148; we will conclude with the basic 7-point stencil on an AMD EPYC 7451 machine. In addition to the presented architectures, the INSPECT website also contains analyses and measurements on the Intel Sandy Bridge and Ivy Bridge architectures.

A Simple Short-ranged Stencil on Haswell (Intel Xeon E5-2695v3)

The first stencil configuration presented here is the simple 3D 7-point stencil on the well-understood Haswell microarchitecture (Intel Xeon E5-2695v3 in CoD mode): 3D, radius 1, star stencil, constant and homogeneous coefficients with double-precision floating-point accuracy. The stencil source code is shown in Listing 1. Figure 4 displays all graphs presented on the INSPECT website, as well as the workflow from stencil generation with STEMPEL to data acquisition with Kerncraft, as already outlined by Algorithm 1. The model prediction graphs show a stacked ECM prediction, T_ECM = max(T_comp, T_RegL1 + T_L1L2 + T_L2L3 + T_L3MEM), together with the Roofline prediction and the benchmark measurement data. All the presented data and plots on this specific kernel, including commands to reproduce, the system configuration used, and very verbose information on the analysis (such as the IACA output), may be viewed at https://git.io/fjqzy.

Figure 4a shows the ECM and Roofline predictions generated by Kerncraft based on the layer condition prediction; Figure 4b is based on the cache simulation. The stacked colored regions represent the data transfers in the ECM model, where the upper boundary is equal to the ECM prediction including all data transfers (T_RegL1 + T_L1L2 + T_L2L3 + T_L3MEM). The red line represents the compute bottleneck (T_comp), as predicted with IACA. The Roofline prediction is the thin blue line with circles. The black line with x's shows the measurement results from running Kerncraft in benchmark mode. The Roofline prediction is accurate when all layer conditions are violated and all stencil data is coming from main memory. Before that, its prediction is about 25% too optimistic. In the transition zone there are a few points where the Roofline model is too pessimistic, because the mathematical layer condition is sharp while the measured performance shows a smooth transition due to the cache replacement policy. The ECM model yields a much more accurate prediction. All layer condition jumps (30³: L1-3D, 90³: L2-3D, 680³: L1-2D, 760³: L3-3D) are clearly visible and correspond to the measured performance.
The large deviation between models and measurement in the initial section (N < 100³) comes from loop overheads and the large impact of remainder loop iterations; this is expected, but it is modeled by neither ECM nor Roofline. Comparing the cache simulator (Fig. 4b) with the layer conditions (Fig. 4a) shows that some peaks and dips in the measurement can be explained by the more accurate cache model provided by the simulator. It also allows for a smoother transition between broken layer conditions, but nonetheless fails at accurately predicting the transition behavior. Perfect tracking of those transitions, as seen in the benchmark measurements, would only be possible with precise knowledge of the underlying caching algorithms implemented in the different cache levels. Due to a lack of information from the CPU vendors, a perfect LRU cache is assumed, along with other idealized implementation details.

In the Phenomenological ECM graph, cf. Figure 4f, the smooth transition between broken layer conditions is tracked very well. Apart from the transition zones, the individual contributions as modeled by the analytical ECM model (Fig. 4a and 4b) and as derived from measurements in the Phenomenological ECM model match up very well. It also shows why the measured performance differs immensely from the predictions below 100³: the short scalar loops show up as very high in-core execution time and load instruction counts. The difference between measurement and model towards the right side of the graph hints at saturation effects in the memory interface which we do not yet fully understand.

The data transfer volumes predicted by the layer conditions and their comparison with the data volumes measured through hardware performance counters can be seen in Figure 4e. The solid lines show the predicted data transfers between cache levels and main memory, and the dashed lines are the measured data transfers. Between 100³ and 500³ as well as beyond 850³, the predicted transfers at each level and the measured data fit perfectly and show the accuracy of this method. As layer conditions break, the measured cache transfers show a smooth transition until they realign with the predicted data volume.

Figure 4c shows the impact of cache blocking for specific layer conditions. In this case, blocking was performed for the L3-3D layer condition, where only the middle loop (e.g., the j-loop in Listing 1) is blocked to keep the relevant data at least in the L3 cache and reduce main memory traffic to a minimum. As intended, performance stays constant after the L3-3D layer condition is broken when spatial cache blocking is enabled (green line). This behavior can be predicted from Fig. 4a, where spatial blocking means preserving the throughput of an earlier plateau while increasing the dataset size. Reasonable blocking factors are given by the range of the plateau (e.g., here the block dimensions should be N_block^(1/3) < 700). Blocking for the next lower plateau (i.e., N_block^(1/3) < 100) may introduce too much overhead due to short loops. Another, more complicated option would be the use of temporal blocking, which is expected to yield about the same performance as N_block^(1/3) < 100, because stripping the top contribution from the stacked plot would bring the throughput to the same plateau.
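A sketch of the j-loop blocking just described, applied to the 7-point kernel from Section 1.2 (our illustration of the technique, not STEMPEL's generated blocking code); jb is the tunable block size, to be chosen such that the blocked working set stays on the desired plateau:

    /* Spatial blocking of the middle (j) loop only: the jj strip-mine loop is
       hoisted outside the k loop so that a block of j-layers stays in L3. */
    void jacobi7_jblock(int N, int jb, double c,
                        const double a[N][N][N], double b[N][N][N]) {
        for (int jj = 1; jj < N - 1; jj += jb)
            for (int k = 1; k < N - 1; ++k) {
                int jmax = (jj + jb < N - 1) ? jj + jb : N - 1;
                for (int j = jj; j < jmax; ++j)
                    for (int i = 1; i < N - 1; ++i)
                        b[k][j][i] = c * (a[k][j][i]
                                        + a[k][j][i - 1] + a[k][j][i + 1]
                                        + a[k][j - 1][i] + a[k][j + 1][i]
                                        + a[k - 1][j][i] + a[k + 1][j][i]);
            }
    }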
Moving on from single-core to multicore scaling, Figure 4d shows the in-socket thread scaling behavior at 1020³. Due to the Cluster-on-Die configuration of the machine, the performance flattens out at the end of the first NUMA domain (7 cores). With the addition of the second NUMA domain, a linear increase can be seen, due to the linear addition of bandwidth from the added cores under the compact scheduling scheme. The predictions of the ECM and Roofline models fit very well in the second, linear part of the graph, and the ECM model is also able to capture the phase before memory bandwidth saturation.

A Simple Short-ranged Stencil on AMD Zen

In Figure 5 we show an analysis on the AMD Zen microarchitecture, presenting results for the same kernel as in Listing 1. These and additional results may be found at https://git.io/fj4yq. As described in Section 2.2, the AMD Zen architecture shows strong overlap in data transfers. The port execution model is based on the OSACA implementation [20], and the Kerncraft version used for this is based on the latest feature/OSACA branch. For data volume prediction we use the layer condition model, as for SKX. The ECM prediction for the AMD Zen microarchitecture is based on the following model, which has fewer serializing terms:

    T_ECM = max(T_comp, T_RegL1, T_L1L2, T_L2L3 + T_L3MEM)

This difference is also visible in Figure 5, where the overlapping parallel terms (T_comp, T_RegL1 and T_L1L2) are simple lines and the serializing terms (T_L2L3 and T_L3MEM) are stacked onto one another. As with Skylake X, the benchmark follows the trend of the model qualitatively, but measurements yield better throughput with increased main memory traffic. This effect is seen in both the ECM and the Roofline model, and we believe it is linked to the undisclosed behavior of the L3 cache. The cache simulator apparently overestimates the number of L2 or L3 misses and predicts a higher main memory traffic volume. Unfortunately, AMD Zen does not have hardware performance counters for main memory traffic, so we are unable to validate this assumption. In light of the large main memory traffic contribution, we would suggest temporal blocking to bring the inverse throughput down to the T_RegL1 level. Note that T_L3MEM contains all memory accesses (i.e., transfers between main memory, L3 and L2).

A Long-ranged Stencil on Skylake X (Intel Xeon Gold 6148)

The second example showcases a long-ranged heterogeneous star stencil on the Skylake X architecture (Intel Xeon Gold 6148); the stencil source code is shown in Listing 2. It features more floating-point operations than the previous kernel because of the heterogeneous coefficients, but also higher memory traffic due to the long stencil range (i.e., range = 3, or r3 classification). For brevity, only the stacked ECM prediction with layer conditions as cache predictor is shown in Figure 6. All remaining information and graphs can be found on the INSPECT website: https://git.io/fjq2a. Qualitatively, both the Roofline and the ECM prediction represent the measured performance behavior well. The Roofline model is a bit too optimistic and the ECM model a bit too pessimistic. The reason is the new organization of the cache hierarchy in the Skylake microarchitecture, seen in Figure 2b and discussed in Sec. 2.3. At the moment it is not possible to correctly model the data transfers between the L2 and L3 caches in combination with main memory. For the ECM model a worst-case scenario is assumed, such that all data dropped or evicted from L2 is passed on to L3.
With better knowledge of the actual caching algorithms and heuristics taken into account, the ECM prediction would become faster and match the measured data more closely. The layer condition analysis correlates very well with the measured data, and all relevant breaks (i.e., plateaus) can be seen. The slow performance up to 120³ is again related to the high T_comp fraction and the scalar loads of the remainder loop. Data shown on the INSPECT website for full-socket thread scaling shows that the prediction fits the measured data perfectly. Cache blocking for the L2-3D layer condition also works very well, due to the larger L2 cache of this architecture. In contrast, L3-3D cache blocking works very poorly, in accordance with Intel's recommendation: "Using just the last level cache size per core may result in non-optimal use of available on-chip cache" [19] (p. 41). Overall it can be said that, except for the uncertainty in the L2-L3-memory caching behavior, the applied Skylake machine model works well and gives accurate predictions. Performance optimization potential is again indicated by the plateaus (for spatial blocking) and the contributions (for temporal blocking). Spatial blocking to N_block^(1/3) < 300 may increase performance by up to 30%. Temporal blocking would only make sense, in comparison to spatial blocking, if it is done in the L2 cache, stripping the two upper contributions off the non-overlapping ECM prediction and possibly hitting the instruction throughput bottleneck (T_comp) at 40 cy/CL.

Comparison of a Short-ranged Box Stencil on Broadwell and Skylake X

Finally, we present a comparison of Broadwell (Intel Xeon E5-2697v4) and Skylake X (Intel Xeon Gold 6148) with a short-ranged box stencil with heterogeneous constant coefficients, cf. Listing 3. Compared to star stencils, box stencils need more loads and registers, which may have a large performance impact. In Figures 7a and 7b, benchmark data and model predictions are shown. On the INSPECT website a complete list of graphs, data sets and modeling information may be viewed: https://git.io/fjqav for Broadwell and https://git.io/fjqaU for Skylake X.

On Broadwell, performance seems to be impacted neither by data traffic nor by in-core execution (T_comp). The reason for the poor and almost constant performance across all grid sizes is a register dependency chain that is visible in the assembly code (to be seen on the INSPECT website under "Kernel Source Code" by clicking the "Assembly Code" button) but goes undetected by IACA. This dependency chain slows down the execution so much that all other effects are suppressed and the performance becomes independent of the grid size. Since the number of available registers was doubled with the introduction of the Skylake X architecture, this disastrous effect is eliminated there. Instead, up to 300³ the in-core bottleneck T_comp dominates and limits the reciprocal throughput to about 60 cy/CL. This prediction, originating from an IACA analysis, is obviously too pessimistic, since measurements show better performance than both the ECM and Roofline models. Looking into the IACA analysis, it only explains 46 of the 60 cycles and adds 14 cycles based on an unknown heuristic. Considering this, 46 cycles per cacheline would explain the measured performance much better, which calls for a better in-core model, as aimed for by the OSACA project [20].
Beyond N ≈ 400³, data transfers become more dominant and slow down the execution, as qualitatively predicted by the ECM model. Roofline sticks with the 60 cy/CL, because no single memory level surpasses it. In light of the discrepancy between modeled and measured performance, the graph cannot be used to guide performance optimization, but it sheds light on the IACA misprediction at hand. For Skylake, simple spatial blocking with N_block^(1/3) < 400 is advisable. Temporal blocking would not yield better results because of the hard T_comp limit.

How to Make Use of INSPECT

When developing stencil-driven applications, and especially when publishing performance results based on stencil codes, authors have to compare to a suitable, well-understood baseline. In order to make use of INSPECT in this context, users must first classify their stencil according to our scheme and select a microarchitecture and CPU model from the INSPECT website that is similar or identical to their own. If that is not possible, INSPECT provides the toolchain and automation to compile a new baseline for future reference. Depending on the programming language and software architecture, stencil patterns in applications may be hidden under several abstraction layers but come to light during detailed performance analysis. It is also the user's task to isolate the stencil code in order to be able to measure its performance. This may be done either "in situ" via suitable instrumentation or by writing a proxy application that only performs stencil updates.

Once a stencil is classified and a comparison is established, optimization strategies may be guided by the INSPECT ECM model report: spatial blocking should bring the performance of large data sets to the level of smaller data set sizes by better use of caches (moving to a plateau further left in the plots), whereas temporal blocking strategies eliminate data transfers to lower memory hierarchy levels (peeling off layers in the stacked ECM plot contributions). If the measured stencil performance in the application code does not coincide with the INSPECT data and model at least qualitatively, as seen in Sec. 4.4, the culprit is usually the compiler not generating efficient code, but other scenarios are possible: specific hardware features in the user's benchmarking setup (e.g., different DIMM organization), unfavorable system settings (e.g., small OS pages, uncontrolled clock speed, Cluster-on-Die settings, disabled prefetchers), or simple benchmarking mistakes such as wrong or missing thread affinity. Whatever the reason, it will be worth investigating, which usually leads to better insight.

Related Work

Our work comprises three parts: stencil classification and generation with STEMPEL, benchmarking and modeling with Kerncraft, and presentation and publication of results on the INSPECT website. Collecting and presenting benchmark results is a common approach for a variety of reasons. To name a few examples:
• The TOP500 [25] ranks HPC systems world-wide based on their High Performance LINPACK [4] benchmark performance.
• The HPCG benchmark [5] takes the same approach as the TOP500, with a different benchmark.
• SPEC [27] has a spectrum of benchmark suites for different aspects and allows its members to publish the results on their website. Their suites come with real applications embedded as test cases. They produce detailed reports on the runtime environment, with the goal of comparing the performance of systems.
• The STREAM benchmark [24] is the de facto standard for measuring main memory bandwidth. The website has results for machines in tabulated form.
• The HPC Challenge Benchmark Suite [21] combines multiple benchmarks and allows users to publish results through HPCC's website.

All of these benchmark collections are focused on comparing machine performance with a set of predefined benchmarks, which is extremely valuable for purchasing decisions and as a reference for researchers and developers. In contrast, we try to explain the observed performance based on the characteristics of the stencil structure, which is usually defined by the underlying model and discretization. This makes it more informative and adaptable for a particular developer, who can compare and explain their own code's performance against similarly structured reference implementations provided by our framework.

The Ginkgo Performance Explorer [2] focuses on presenting performance data gathered by automatic benchmarks as part of a continuous integration (CI) pipeline. The project is generically applicable to other workflows, but it lacks the focus on a specific field that allows the fine-grained presentation of model predictions and measurements done by INSPECT, nor does it comprise any modeling component. Methodologies for performance analysis most often fall into the category of performance models, such as the already mentioned ECM and Roofline models. Their application to specific stencils or stencil-based algorithms has been the focus of intense research [3,7,18,22,26]. Our concept goes beyond these approaches in that it enables easy reproduction of performance numbers and encourages discussion via an open workflow. Datta et al. published a study of a stencil kernel on multiple architectures in 2009 [3]. It is based on the same modeling principles but does not provide a unified process and presentation reusable for other kernels.

Conclusion and Outlook

We have presented a comprehensive code generation, performance evaluation, modeling, and data presentation framework for stencil algorithms. It includes a classification scheme for stencil characteristics, a data collection methodology, automatic analytic performance modeling via advanced tools, and a publicly available website that allows browsing results and models across a variety of processor architectures in a convenient way. The presented baseline performance and model data provide valuable insight and points of comparison for developers who have to write performance-critical stencil code. The automatically generated spatially blocked version is given as an optimization example. INSPECT already contains a large range of different stencil parameters and will be continuously extended to eventually cover the full parameter space. To this end, we plan to optimize the tool chain to reduce the total runtime considerably. The choice of an analytic performance model over machine learning was deliberate, as not only prediction but also insight into bottlenecks is desired. Kerncraft support for non-Intel architectures is still rudimentary. Support for AMD's latest x86 implementations is already available and we have presented its preliminary use, while ARM will require more effort but is on our shortlist of upcoming features. These additions will be integrated into future updates of the INSPECT website.
An interesting spin-off of this work would be the integration of more web-enabled tools, such as the layer-condition analysis [10], into INSPECT to allow users to interactively analyze their own code. Compiler Explorer [8] would be one potential tool to inspect compiler behavior for different architectures. A generalization from stencils to dense linear algebra and streaming kernels is straightforward from Kerncraft's perspective, but the classification scheme would have to be extended.
7,101
1811.00250
2951153470
Previous works utilized the ''smaller-norm-less-important'' criterion to prune filters with smaller norm values in a convolutional neural network. In this paper, we analyze this norm-based criterion and point out that its effectiveness depends on two requirements that are not always met: (1) the norm deviation of the filters should be large; (2) the minimum norm of the filters should be small. To solve this problem, we propose a novel filter pruning method, namely Filter Pruning via Geometric Median (FPGM), to compress the model regardless of those two requirements. Unlike previous methods, FPGM compresses CNN models by pruning filters with redundancy, rather than those with ''relatively less'' importance. When applied to two image classification benchmarks, our method validates its usefulness and strength. Notably, on CIFAR-10, FPGM reduces more than 52% of the FLOPs on ResNet-110 with even a 2.69% relative accuracy improvement. Moreover, on ILSVRC-2012, FPGM reduces more than 42% of the FLOPs on ResNet-101 without any top-5 accuracy drop, which advances the state-of-the-art. Code is publicly available on GitHub: this https URL
Most previous works on accelerating CNNs can be roughly divided into three categories, namely matrix decomposition @cite_1 @cite_2 , low-precision weights @cite_20 @cite_32 , and pruning. Pruning-based approaches aim to remove the unnecessary connections of the neural network @cite_8 @cite_17 . Essentially, weight pruning always results in unstructured models, which makes it hard to deploy the existing efficient BLAS libraries, while filter pruning not only reduces the storage usage on devices but also decreases the computation cost to accelerate the inference. We can roughly divide the filter pruning methods into two categories by whether the training data is utilized to determine the pruned filters, that is, data dependent and data independent filter pruning. Data independent methods are more efficient than data dependent ones, as utilizing the training data is computationally expensive.
{ "abstract": [ "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "This paper aims to accelerate the test-time computation of convolutional neural networks (CNNs), especially very deep CNNs [1] that have substantially impacted the computer vision community. Unlike previous methods that are designed for approximating linear filters or linear responses, our method takes the nonlinear units into account. We develop an effective solution to the resulting nonlinear optimization problem without the need of stochastic gradient descent (SGD). More importantly, while previous methods mainly focus on optimizing one or two layers, our nonlinear method enables an asymmetric reconstruction that reduces the rapidly accumulated error when multiple (e.g., @math 10) layers are approximated. For the widely used very deep VGG-16 model [1] , our method achieves a whole-model speedup of 4 @math with merely a 0.3 percent increase of top-5 error in ImageNet classification. Our 4 @math accelerated VGG-16 model also shows a graceful accuracy degradation for object detection when plugged into the Fast R-CNN detector [2] .", "", "Abstract: Large CNNs have delivered impressive performance in various computer vision applications. But the storage and computation requirements make it problematic for deploying these models on mobile devices. Recently, tensor decompositions have been used for speeding up CNNs. In this paper, we further develop the tensor decomposition technique. We propose a new algorithm for computing the low-rank tensor decomposition for removing the redundancy in the convolution kernels. The algorithm finds the exact global optimizer of the decomposition and is more effective than iterative methods. Based on the decomposition, we further propose a new method for training low-rank constrained CNNs from scratch. Interestingly, while achieving a significant speedup, sometimes the low-rank constrained CNNs delivers significantly better performance than their non-constrained counterparts. On the CIFAR-10 dataset, the proposed low-rank NIN model achieves @math accuracy (without data augmentation), which also improves upon state-of-the-art result. We evaluated the proposed method on CIFAR-10 and ILSVRC12 datasets for a variety of modern CNNs, including AlexNet, NIN, VGG and GoogleNet with success. For example, the forward time of VGG-16 is reduced by half while the performance is still comparable. 
Empirical success suggests that low-rank tensor decompositions can be a very useful tool for speeding up large CNNs.", "Deep neural networks are widely used in machine learning applications. However, the deployment of large neural networks models can be difficult to deploy on mobile devices with limited power budgets. To solve this problem, we propose Trained Ternary Quantization (TTQ), a method that can reduce the precision of weights in neural networks to ternary values. This method has very little accuracy degradation and can even improve the accuracy of some models (32, 44, 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. And our AlexNet model is trained from scratch, which means it’s as easy as to train normal full precision model. We highlight our trained quantization method that can learn both ternary values and ternary assignment. During inference, only ternary values (2-bit weights) and scaling factors are needed, therefore our models are nearly 16× smaller than full- precision models. Our ternary models can also be viewed as sparse binary weight networks, which can potentially be accelerated with custom circuit. Experiments on CIFAR-10 show that the ternary models obtained by trained quantization method outperform full-precision models of ResNet-32,44,56 by 0.04 , 0.16 , 0.36 , respectively. On ImageNet, our model outperforms full-precision AlexNet model by 0.3 of Top-1 accuracy and outperforms previous ternary models by 3 .", "The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34 and ResNet-110 by up to 38 on CIFAR10 while regaining close to the original accuracy by retraining the networks." ], "cite_N": [ "@cite_8", "@cite_1", "@cite_32", "@cite_2", "@cite_20", "@cite_17" ], "mid": [ "2963674932", "2104636679", "2924515500", "2963225922", "2963424132", "2962965870" ] }
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration
The deeper and wider architectures of deep CNNs bring about superior performance in computer vision tasks [6,26,45]. However, they also cause prohibitively expensive computational cost and make the deployment of models on mobile devices hard, if not impossible. Even the latest architectures with high efficiency, such as the residual connection [12] or the inception module [34], have millions of parameters requiring billions of floating-point operations (FLOPs) [15]. Therefore, it is necessary to attain deep CNN models which have relatively low computational cost but high accuracy.

(* Corresponding Author. Part of this work was done when Yi Yang was visiting Baidu Research during his Professional Experience Program.)

Figure 1: An illustration of (a) the pruning criterion for the norm-based approach and the proposed method; (b) requirements for the norm-based filter pruning criterion. In (a), the green boxes denote the filters of the network, where a deeper color denotes a larger norm of the filter. For the norm-based criterion, only the filters with the largest norm are kept, based on the assumption that smaller-norm filters are less important. In contrast, the proposed method prunes the filters with redundant information in the network. In this way, filters with different norms, indicated by different intensities of green, may be retained. In (b), the blue curve represents the ideal norm distribution of the network, and v1 and v2 are the minimum and maximum values of the norm distribution, respectively. To choose an appropriate threshold T (the red shadow), two requirements should be met: the norm deviation should be large, and the minimum of the norm should be arbitrarily small.

Recent developments in pruning can be divided into two categories, i.e., weight pruning [11,1] and filter pruning [21,39]. Weight pruning directly deletes weight values in a filter, which may cause unstructured sparsity. This irregular structure makes it difficult to leverage the high-efficiency Basic Linear Algebra Subprograms (BLAS) libraries [25]. In contrast, filter pruning directly discards whole selected filters and leaves a model with regular structures. Therefore, filter pruning is preferable for accelerating networks and decreasing model size.

Current practice [21,38,15] performs filter pruning by following the "smaller-norm-less-important" criterion, which holds that filters with smaller norms can be pruned safely because they are less important. As shown in the top right of Figure 1(a), after calculating the norms of the filters in a model, a pre-specified threshold T is utilized to select the filters whose norms are smaller than it. However, as illustrated in Figure 1(b), there are two prerequisites for utilizing this "smaller-norm-less-important" criterion. First, the deviation of the filter norms should be significant; this requirement makes the search space for the threshold T wide enough that separating the filters to be pruned is an easy task. Second, the norms of the prunable filters should be arbitrarily small, i.e., close to zero; in other words, the filters with smaller norms are expected to make absolutely small contributions, rather than relatively less but positively large contributions, to the network. An ideal norm distribution satisfactorily meeting those two requirements is illustrated as the blue curve in Figure 1. Unfortunately, based on our analysis and experimental observations, this is not always true.
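For reference, the norm-based baseline analyzed here amounts to the following selection rule; a minimal C sketch of the criterion itself, not of any particular paper's implementation:

    #include <math.h>

    /* "Smaller-norm-less-important" baseline: mark filter j for pruning if its
       L2 norm falls below threshold T. filters is n_filters x filter_len,
       row-major, each row one flattened filter. */
    void prune_by_norm(const float *filters, int n_filters, int filter_len,
                       float T, int *prune_mask) {
        for (int j = 0; j < n_filters; ++j) {
            double sq = 0.0;
            for (int d = 0; d < filter_len; ++d) {
                double w = filters[(long)j * filter_len + d];
                sq += w * w;
            }
            prune_mask[j] = (sqrt(sq) < T); /* fails when norms cluster (req. 1)
                                               or the minimum norm is large (req. 2) */
        }
    }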
To address the problems mentioned above, we propose a novel filter pruning approach, named Filter Pruning via Geometric Median (FPGM). Different from previous methods, which prune filters making relatively less contribution, FPGM chooses the filters with the most replaceable contribution. Specifically, we calculate the Geometric Median (GM) [8] of the filters within the same layer. According to the characteristics of the GM, the filters near it can be represented by the remaining ones. Therefore, pruning those filters will not have a substantial negative influence on model performance. Note that FPGM does not utilize a norm-based criterion to select filters to prune, which means its performance will not deteriorate even when the requirements of the norm-based criterion are not met.

Contributions. We make three contributions: (1) We analyze the norm-based criterion utilized in previous works, which prunes the relatively less important filters, and elaborate on its two underlying requirements, which lead to its limitations. (2) We propose FPGM to prune the most replaceable filters, which contain redundant information, and which can still achieve good performance when the norm-based criterion fails. (3) Extensive experiments on two benchmarks demonstrate the effectiveness and efficiency of FPGM.

Methodology

Preliminaries

We formally introduce symbols and notations in this subsection. We assume that a neural network has L layers. We use N_i and N_{i+1} to represent the number of input channels and output channels of the i-th convolutional layer, respectively. F_{i,j} represents the j-th filter of the i-th layer; the dimension of filter F_{i,j} is R^{N_i × K × K}, where K is the kernel size of the network.¹ The i-th layer of the network, W^(i), can be represented by {F_{i,j}, 1 ≤ j ≤ N_{i+1}}. The tensor of connections of the deep CNN can be parameterized by {W^(i) ∈ R^{N_{i+1} × N_i × K × K}, 1 ≤ i ≤ L}.

¹ Fully-connected layers equal convolutional layers with k = 1.

Analysis of Norm-based Criterion

Figure 2: (a) Small Norm Deviation (Problem 1: the norm deviation of the actual distribution is much smaller than that of the ideal one); (b) Large Minimum Norm (Problem 2: the minimum norm v″1 is far from zero, v″1 ≫ v1 → 0). Both panels plot the number of filters against the value of the norm.

(1) Small Norm Deviation. The deviation of the filter norm distribution might be too small, which means the norm values are concentrated in a small interval, as shown in Figure 2(a). A small norm deviation leads to a small search space, which makes it difficult to find an appropriate threshold to select filters to prune.

(2) Large Minimum Norm. The filters with the minimum norm may not be arbitrarily small, as shown in Figure 2(b): v″1 ≫ v1 → 0. Under this condition, the filters considered least important still contribute significantly to the network, which means every filter is highly informative. Therefore, pruning the filters with minimum norm values will have a negative effect on the network.

Norm Statistics in Real Scenarios

In Figure 3, statistical information collected from a pre-trained ResNet-110 on CIFAR-10 and a pre-trained ResNet-18 on ILSVRC-2012 corroborates the above analysis. The small green vertical lines show each observation in the norm distribution, and the blue curves denote the Kernel Density Estimation (KDE) [30], which is a nonparametric way to estimate the probability density function of a random variable. The norm distributions of the first and last convolutional layers of both structures are drawn.
In addition, to clearly illustrate the relation between the norm values, two different x-scales, i.e., a linear x-scale and a log x-scale, are presented.

(1) Small Norm Deviation in Networks. For the first convolutional layer of ResNet-110, as shown in Figure 3(b), a large number of filters have norms concentrated around the magnitude of $10^{-6}$. For the last convolutional layer of ResNet-110, as shown in Figure 3(c), the interval span of the norm values is roughly 0.3, which is much smaller than the interval span of the first layer (1.7). For the last convolutional layer of ResNet-18, as shown in Figure 3(g), most filter norms fall within the interval [0.8, 1.0]. In all these cases, the filters are distributed too densely, which makes it difficult to select a proper threshold to distinguish the important filters from the others.

(2) Large Minimum Norm in Networks. For the last convolutional layer of ResNet-18, as shown in Figure 3(g), the minimum norm of the filters is around 0.8, which is large compared to the filters in the first convolutional layer (Figure 3(e)). For the last convolutional layer of ResNet-110, as shown in Figure 3(c), only one filter norm is arbitrarily small, while the others are not. Under those circumstances, the filters with minimum norms, although they are relatively less important according to the norm-based criterion, still make significant contributions to the network.

Filter Pruning via Geometric Median

To get rid of the constraints of the norm-based criterion, we propose a new filter pruning method inspired by the geometric median. The central idea of the geometric median [8] is as follows: given a set of $n$ points $a^{(1)}, \dots, a^{(n)}$ with each $a^{(i)} \in \mathbb{R}^d$, find a point $x^* \in \mathbb{R}^d$ that minimizes the sum of Euclidean distances to them:

$x^* \in \arg\min_{x \in \mathbb{R}^d} f(x), \quad \text{where } f(x) \stackrel{\text{def}}{=} \sum_{i \in [1,n]} \|x - a^{(i)}\|_2.$  (1)

As the geometric median is a classic robust estimator of centrality for data in Euclidean spaces [8], we use the geometric median $F_i^{GM}$ to capture the common information of all the filters within the single $i$th layer:

$F_i^{GM} \in \arg\min_{x \in \mathbb{R}^{N_i \times K \times K}} g(x),$  (2)

where

$g(x) \stackrel{\text{def}}{=} \sum_{j' \in [1, N_{i+1}]} \|x - F_{i,j'}\|_2.$  (3)

In the $i$th layer, if some filters have the same, or similar, values as the geometric median in that layer, that is,

$F_{i,j^*} \in \arg\min_{j' \in [1, N_{i+1}]} \|F_{i,j'} - F_i^{GM}\|_2,$  (4)

then those filters $F_{i,j^*}$ can be represented by the other filters in the same layer, and therefore pruning them has little negative impact on the network performance.

As computing the geometric median is a non-trivial problem in computational geometry, the previously fastest running times for computing a $(1+\epsilon)$-approximate geometric median were $O(d n^{4/3} \cdot \epsilon^{-8/3})$ by [2] and $O(nd \log^3(n/\epsilon))$ by [3]. In our case, since the final result $F_{i,j^*}$ is selected from a list of known points, namely the candidate filters in the layer, we can relax the above problem. We assume that

$\|F_{i,j^*} - F_i^{GM}\|_2 = 0,$  (5)

so that Equation (4) is achieved. Then Equation (2) becomes

$F_{i,j^*} \in \arg\min_{j^* \in [1, N_{i+1}]} \sum_{j' \in [1, N_{i+1}]} \|F_{i,j^*} - F_{i,j'}\|_2 = \arg\min_{j^* \in [1, N_{i+1}]} g(F_{i,j^*}).$  (6)

Note that even if the filter to be pruned, $F_{i,j^*}$, is not included in the calculation of the geometric median in Equation (6), we can still achieve the same result.
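Equation (6) admits a direct implementation: among the filters of a layer, pick the ones with the smallest summed Euclidean distance to all the others. A minimal sketch, assuming PyTorch; the function name is illustrative and not taken from the paper's released code:

```python
import torch

@torch.no_grad()
def fpgm_select(weight: torch.Tensor, num_to_prune: int) -> torch.Tensor:
    """Relaxed selection of Equation (6): restrict the geometric-median
    search to the filters themselves and prune those whose summed
    Euclidean distance to all other filters in the layer is smallest."""
    flat = weight.flatten(1)                  # (N_out, N_in * K * K)
    dists = torch.cdist(flat, flat, p=2)      # pairwise ||F_j* - F_j'||_2
    g = dists.sum(dim=1)                      # g(F_{i,j*}) for every filter
    return torch.argsort(g)[:num_to_prune]    # indices of filters to prune
```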
To see why, in this setting we want to find the filter

$F_{i,j^*} \in \arg\min_{j^* \in [1, N_{i+1}]} g'(x),$  (7)

where

$g'(x) = \sum_{j' \in [1, N_{i+1}],\, j' \ne j^*} \|x - F_{i,j'}\|_2.$  (8)

With the above Equation (6) and Equation (8), we get

$g'(x) = g(x) - \sum_{j' = j^*} \|x - F_{i,j'}\|_2 = g(x) - \|x - F_{i,j^*}\|_2,$  (9)

and then

$\min g'(x) = \min \{g(x) - \|x - F_{i,j^*}\|_2\} = \min g(x) - \min \|x - F_{i,j^*}\|_2 = g(F_{i,j^*}) - \min \|x - F_{i,j^*}\|_2.$  (10)

For the second component on the right side of Equation (10), when $x = F_{i,j^*}$ we have $\|x - F_{i,j^*}\|_2 = 0$, so we obtain

$F'_{i,j^*} = F_{i,j^*}.$  (11)

Since the geometric median is a classic robust estimator of centrality for data in Euclidean spaces [8], the selected filter(s) $F_{i,j^*}$ and the remaining ones share the most common information. This indicates that the information of the filter(s) $F_{i,j^*}$ can be replaced by the others. After fine-tuning, the network can easily recover its original performance, since the information of the pruned filters can be represented by the remaining ones. Therefore, the filter(s) $F_{i,j^*}$ can be pruned with negligible effect on the final result of the neural network. FPGM is summarized in Algorithm 1.

Algorithm 1 FPGM
1: for each training epoch do
2:   Update the model parameters $W$
3:   for each layer $i$ do
4:     Find the $N_{i+1} P_i$ filters that satisfy Equation (6)
5:     Zeroize the selected filters
6:   end for
7: end for
8: Obtain the compact model $W^*$ from $W$
Output: The compact model and its parameters $W^*$

Theoretical and Realistic Acceleration

Theoretical Acceleration. Suppose the shapes of the input and output tensors of the $i$th layer are $I \in \mathbb{R}^{N_i \times H_i \times W_i}$ and $O \in \mathbb{R}^{N_{i+1} \times H_{i+1} \times W_{i+1}}$, respectively. Set the filter pruning rate of the $i$th layer to $P_i$; then $N_{i+1} \times P_i$ filters are pruned. After filter pruning, the dimensions of the input and output feature maps of the $i$th layer change to $I' \in \mathbb{R}^{[N_i (1 - P_{i-1})] \times H_i \times W_i}$ and $O' \in \mathbb{R}^{[N_{i+1} (1 - P_i)] \times H_{i+1} \times W_{i+1}}$, respectively. If the pruning rate of the $(i+1)$th layer is $P_{i+1}$, then only $(1 - P_{i+1}) \times (1 - P_i)$ of the original computation is needed. Finally, a compact model $\{W^{*(i)} \in \mathbb{R}^{N_{i+1}(1-P_i) \times N_i(1-P_{i-1}) \times K \times K}\}$ is obtained.

Realistic Acceleration. In the above analysis, only the FLOPs of convolution operations are considered in the computational complexity comparison, which is common in previous works [21, 15]. This is because other operations, such as batch normalization (BN) and pooling, are insignificant compared to convolution operations. However, non-tensor layers (e.g., BN and pooling layers) also consume inference time on GPU [25] and influence the realistic acceleration. Besides, the wide gap between theoretical and realistic acceleration can also be caused by IO delay, buffer switching, and the efficiency of BLAS libraries. We compare the theoretical and practical acceleration in Table 5.
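A minimal sketch of one pruning pass in the spirit of Algorithm 1, assuming PyTorch and reusing the fpgm_select sketch above; the function name, the uniform pruning rate, and the restriction to nn.Conv2d modules are simplifying assumptions:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fpgm_prune_step(model: nn.Module, pruning_rate: float) -> None:
    """For every conv layer, zeroize the N_{i+1} * P_i filters selected
    by Equation (6).  Intended to run at the end of each training epoch
    (soft pruning); the compact model W* is extracted after training."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            k = int(module.out_channels * pruning_rate)
            if k == 0:
                continue
            idx = fpgm_select(module.weight.data, k)  # sketch from above
            module.weight.data[idx] = 0.0             # zeroize, do not remove yet
```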
Experiments

We evaluate FPGM on a single-branch network (VGGNet [31]) and a multiple-branch network (ResNet) on two benchmarks: CIFAR-10 [20] and ILSVRC-2012 [29]. The CIFAR-10 dataset [20] contains 60,000 32×32 color images in 10 classes, with 50,000 training images and 10,000 testing images. ILSVRC-2012 [29] is a large-scale dataset containing 1.28 million training images and 50k validation images of 1,000 classes.

Experimental Settings

Training setting. On CIFAR-10, the parameter setting is the same as [13] and the training schedule is the same as [40]. In the ILSVRC-2012 experiments, we use the default parameter settings, which are the same as [12, 13]. Data augmentation strategies for ILSVRC-2012 are the same as the PyTorch [28] official examples. We analyze the difference between starting from scratch and from the pre-trained model. For pruning the model from scratch, we use the normal training schedule without an additional fine-tuning process. For pruning the pre-trained model, we reduce the learning rate to one-tenth of the original learning rate. To conduct a fair comparison between pruning the scratch and pre-trained models, we use the same number of training epochs to train/fine-tune the network. The previous work [21] might use fewer epochs to fine-tune the pruned model, but it converges too early, and its accuracy cannot improve even with more epochs, as shown in Section 4.2.

Pruning setting. In the filter pruning step, we simply prune all the weighted layers with the same pruning rate at the same time, following [15]. Therefore, only one hyper-parameter $P_i = P$ is needed to balance acceleration and accuracy. The pruning operation is conducted at the end of every training epoch. Unlike previous work [21], sensitivity analysis is not essential for FPGM to achieve good performance, as demonstrated in later sections. Apart from the FPGM-only criterion, we also use a mixture of FPGM and the previous norm-based method [15] to show that FPGM can serve as a supplement to previous methods. The FPGM-only criterion is denoted "FPGM-only", and the criterion combining FPGM and the norm-based criterion is denoted "FPGM-mix". "FPGM-only 40%" means 40% of the filters of a layer are selected with FPGM only, while "FPGM-mix 40%" means 30% of the filters of a layer are selected with FPGM and the remaining 10% are selected with the norm-based criterion [15]. We compare FPGM with previous acceleration algorithms, e.g., MIL [5], PFEC [21], CP [16], ThiNet [25], SFP [15], NISP [39], and Rethinking [38]. Not surprisingly, our FPGM method achieves state-of-the-art results.

Table 1: Comparison of pruned ResNet on CIFAR-10. In the "Fine-tune?" column, "✓" and "✗" indicate whether the pre-trained model is used as initialization or not, respectively. "Acc. ↓" is the accuracy drop between the pruned model and the baseline model; the smaller, the better.

Single-Branch Network Pruning

VGGNet on CIFAR-10. As the training setup of [21] is not publicly available, we re-implement the pruning procedure and achieve results similar to the original paper. The results of pruning the pre-trained and scratch models are shown in Table 3 and Table 4, respectively. Not surprisingly, FPGM achieves better performance than [21] in both settings.

Multiple-Branch Network Pruning

ResNet on CIFAR-10. For the CIFAR-10 dataset, we test FPGM on ResNet-20, 32, 56, and 110 with two different pruning rates: 30% and 40%. As shown in Table 1, our FPGM achieves state-of-the-art performance. For example, MIL [5] without fine-tuning accelerates ResNet-32 by a 31.2% speedup ratio with a 1.59% accuracy drop, but our FPGM without fine-tuning achieves a 53.2% speedup ratio with even a 0.19% accuracy improvement. Compared to SFP [15], when pruning 52.6% of the FLOPs of ResNet-56, our FPGM has only a 0.66% accuracy drop, which is much less than that of SFP [15] (1.33%). For pruning the pre-trained ResNet-110, our method achieves a much higher acceleration ratio (52.3% vs. 38.6%) with a 0.16% performance increase, while PFEC [21] harms the performance with a lower acceleration ratio. These results demonstrate that FPGM can produce a more compressed model with comparable or even better performance.

Table 3: Pruning pre-trained VGGNet on CIFAR-10. "w.o." means "without" and "FT" means "fine-tuning" the pruned model.
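The "FPGM-mix" setting described in the pruning settings above can be sketched as follows, assuming PyTorch; the function name and the exclusion of already-selected filters from the norm-based pass are illustrative design choices, not specified by the paper:

```python
import torch

@torch.no_grad()
def fpgm_mix_select(weight: torch.Tensor, gm_rate: float,
                    norm_rate: float) -> torch.Tensor:
    """'FPGM-mix' selection: a gm_rate fraction of filters chosen by the
    geometric-median criterion plus a norm_rate fraction chosen by the
    smaller-norm criterion (e.g., 0.30 and 0.10 for 'FPGM-mix 40%')."""
    n = weight.size(0)
    flat = weight.flatten(1)
    # GM part: smallest summed distance to the other filters (Equation (6)).
    g = torch.cdist(flat, flat, p=2).sum(dim=1)
    gm_idx = torch.argsort(g)[: int(n * gm_rate)]
    # Norm part: smallest norms among the remaining filters.
    norms = flat.norm(p=2, dim=1).clone()
    norms[gm_idx] = float("inf")              # exclude already-selected filters
    norm_idx = torch.argsort(norms)[: int(n * norm_rate)]
    return torch.cat([gm_idx, norm_idx])
```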
ResNet on ILSVRC-2012. FPGM without fine-tuning achieves the same inference speedup as [15], but its accuracy exceeds that of [15] by 0.68%. FPGM-only with fine-tuning gains a further 0.60% improvement over FPGM-only without fine-tuning, thus exceeding [15] by 1.28%. For ResNet-50, FPGM with fine-tuning achieves more inference speedup than CP [16], and our pruned model exceeds their model by 0.85% in accuracy. Moreover, for pruning a pre-trained ResNet-101, FPGM reduces more than 40% of the FLOPs of the model without top-5 accuracy loss and with only a negligible (0.05%) top-1 accuracy loss. In contrast, the performance degradation is 2.10% for Rethinking [38]. Compared to the norm-based criterion, the Geometric Median (GM) explicitly utilizes the relationship between filters, which is the main cause of its superior performance.

To compare the theoretical and realistic acceleration, we measure the forward time of the pruned models on one GTX 1080 GPU with a batch size of 64. The results are shown in Table 5. (Note that the optimization of the addition of ResNet shortcuts and convolutional outputs also affects the measured results.) As discussed in the above section, the gap between the theoretical and realistic acceleration may come from the limitations of IO delay, buffer switching, and the efficiency of BLAS libraries.

Ablation Study

Influence of Pruning Interval. In our experimental setting, the pruning interval equals one, i.e., we conduct the pruning operation at the end of every training epoch. To explore the influence of the pruning interval, we vary it from one epoch to ten epochs, using ResNet-110 under a pruning rate of 40% as the baseline, as shown in Fig. 4(a). The accuracy fluctuation across the different pruning intervals is less than 0.3%, which means the performance of pruning is not sensitive to this parameter. Note that tuning this parameter could achieve even better performance.

Varying Pruned FLOPs. We change the ratio of pruned FLOPs for ResNet-110 to understand FPGM more comprehensively, as shown in Fig. 4(b). When the pruned FLOPs are 18% and 40%, the performance of the pruned model even exceeds that of the baseline model without pruning, which shows that FPGM may have a regularization effect on the neural network.

Influence of Distance Type. We use the $\ell_1$-norm and the cosine distance to replace the distance function in Equation (3). We use ResNet-110 under a pruning rate of 40% as the baseline; the accuracy of the pruned model is 93.73 ± 0.23%. The accuracies based on the $\ell_1$-norm and the cosine distance are 93.87 ± 0.22% and 93.56 ± 0.13%, respectively. Using the $\ell_1$-norm as the filter distance yields a slightly better result, while using the cosine distance slightly harms the performance of the network.

Combining FPGM with the Norm-based Criterion. We analyze the effect of combining FPGM and the previous norm-based criterion. For ResNet-110 on CIFAR-10, FPGM-mix is slightly better than FPGM-only. For ResNet-18 on ILSVRC-2012, the performance of FPGM-only and FPGM-mix is almost the same. It seems that the norm-based criterion and FPGM together can boost the performance on CIFAR-10, but not on ILSVRC-2012. We believe this is because the two requirements for the norm-based criterion are met on some layers of the CIFAR-10 pre-trained network, but not on those of the ILSVRC-2012 pre-trained network, as shown in Figure 3.
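The distance-type ablation above amounts to swapping the metric in Equation (3). A minimal sketch, assuming PyTorch; the function name and the cosine-distance formulation are illustrative:

```python
import torch
import torch.nn.functional as F

def pairwise_distance(flat: torch.Tensor, kind: str = "l2") -> torch.Tensor:
    """Pairwise distance between flattened filters: Euclidean (default),
    l1-norm, or cosine distance, as compared in the ablation study."""
    if kind == "l2":
        return torch.cdist(flat, flat, p=2)
    if kind == "l1":
        return torch.cdist(flat, flat, p=1)
    if kind == "cosine":
        unit = F.normalize(flat, p=2, dim=1)  # unit-length rows
        return 1.0 - unit @ unit.t()          # cosine distance in [0, 2]
    raise ValueError(f"unknown distance type: {kind}")
```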
Feature Map Visualization. We visualize the feature maps of the first layer of the first block of ResNet-50. The feature maps with red titles (7, 23, 27, 46, 56, 58) correspond to the activations of the selected filters when the pruning rate is set to 10%. These selected feature maps contain outlines of the bamboo and the panda's head and body, which can be replaced by the remaining feature maps (5, 12, 16, 18, 22, ...).

Conclusion and Future Work

In this paper, we elaborate on the underlying requirements of the norm-based filter pruning criterion and point out their limitations. To solve this, we propose a new filter pruning strategy based on the geometric median, named FPGM, to accelerate deep CNNs. Unlike the previous norm-based criterion, FPGM explicitly considers the mutual relations between filters. Thanks to this, FPGM achieves state-of-the-art performance on several benchmarks. In the future, we plan to work on how to combine FPGM with other acceleration algorithms, e.g., matrix decomposition and low-precision weights, to push the performance even higher.
Abstract. Previous works utilized the "smaller-norm-less-important" criterion to prune filters with smaller norm values in a convolutional neural network. In this paper, we analyze this norm-based criterion and point out that its effectiveness depends on two requirements that are not always met: (1) the norm deviation of the filters should be large; (2) the minimum norm of the filters should be small. To solve this problem, we propose a novel filter pruning method, namely Filter Pruning via Geometric Median (FPGM), to compress the model regardless of those two requirements. Unlike previous methods, FPGM compresses CNN models by pruning filters with redundancy, rather than those with "relatively less" importance. When applied to two image classification benchmarks, our method validates its usefulness and strengths. Notably, on CIFAR-10, FPGM reduces more than 52% of the FLOPs on ResNet-110 with even a 2.69% relative accuracy improvement. Moreover, on ILSVRC-2012, FPGM reduces more than 42% of the FLOPs on ResNet-101 without top-5 accuracy drop, which has advanced the state-of-the-art. Code is publicly available on GitHub: this https URL
Related Work. Many recent works @cite_8 @cite_34 @cite_31 @cite_27 @cite_25 @cite_12 @cite_10 focus on pruning the fine-grained weights of filters. For example, @cite_8 proposes an iterative method to discard the small weights whose values are below a predefined threshold. @cite_25 formulates pruning as an optimization problem of finding the weights that minimize the loss while satisfying a pruning cost condition.
{ "abstract": [ "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "Deep neural networks enable state-of-the-art accuracy on visual recognition tasks such as image classification and object detection. However, modern deep networks contain millions of learned weights; a more efficient utilization of computation resources would assist in a variety of deployment scenarios, from embedded platforms with resource constraints to computing clusters running ensembles of networks. In this paper, we combine network pruning and weight quantization in a single learning framework that performs pruning and quantization jointly, and in parallel with fine-tuning. This allows us to take advantage of the complementary nature of pruning and quantization and to recover from premature pruning errors, which is not possible with current two-stage approaches. Our proposed CLIP-Q method (Compression Learning by In-Parallel Pruning-Quantization) compresses AlexNet by 51-fold, GoogLeNet by 10-fold, and ResNet-50 by 15-fold, while preserving the uncompressed network accuracies on ImageNet.", "Deep learning has become a ubiquitous technology to improve machine intelligence. However, most of the existing deep models are structurally very complex, making them difficult to be deployed on the mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of 108x and 17.7x respectively, proving that it outperforms the recent pruning method by considerable margins. Code and some models are available at https: github.com yiwenguo Dynamic-Network-Surgery.", "Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. 
To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.", "Weight pruning methods for deep neural networks (DNNs) have been investigated recently, but prior work in this area is mainly heuristic, iterative pruning, thereby lacking guarantees on the weight reduction ratio and convergence time. To mitigate these limitations, we present a systematic weight pruning framework of DNNs using the alternating direction method of multipliers (ADMM). We first formulate the weight pruning problem of DNNs as a nonconvex optimization problem with combinatorial constraints specifying the sparsity requirements, and then adopt the ADMM framework for systematic weight pruning. By using ADMM, the original nonconvex optimization problem is decomposed into two subproblems that are solved iteratively. One of these subproblems can be solved using stochastic gradient descent, the other can be solved analytically. Besides, our method achieves a fast convergence rate.", "", "" ], "cite_N": [ "@cite_8", "@cite_27", "@cite_31", "@cite_34", "@cite_10", "@cite_25", "@cite_12" ], "mid": [ "2963674932", "2799197246", "2963981420", "2964299589", "2798170643", "2960010704", "2808168148" ] }
Some filter pruning approaches @cite_33 @cite_21 @cite_0 @cite_23 @cite_4 @cite_6 @cite_30 @cite_35 are data-dependent, which means the training data is utilized to determine the pruned filters. @cite_21 adopts statistical information from the next layer to guide filter selection. @cite_4 aims to obtain a decomposition by minimizing the reconstruction error of training-set sample activations. @cite_6 proposes an inherently data-driven method which uses Principal Component Analysis (PCA) to specify the proportion of the energy that should be preserved. @cite_35 applies subspace clustering to feature maps to eliminate the redundancy in convolutional filters.
{ "abstract": [ "To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering the statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that for a pruned network to retain its predictive power, it is essential to prune neurons in the entire neuron network jointly based on a unified goal: minimizing the reconstruction error of important responses in the \"final response layer\" (FRL), which is the second-to-last layer before classification. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, formulate network pruning as a binary integer optimization problem, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with least importance, and it is then fine-tuned to recover its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.", "While the research on convolutional neural networks (CNNs) is progressing quickly, the real-world deployment of these models is often limited by computing resources and memory constraints. In this paper, we address this issue by proposing a novel filter pruning method to compress and accelerate CNNs. Our work is based on the linear relationship identified in different feature map subspaces via visualization of feature maps. Such linear relationship implies that the information in CNNs is redundant. Our method eliminates the redundancy in convolutional filters by applying subspace clustering to feature maps. In this way, most of the representative information in the network can be retained in each cluster. Therefore, our method provides an effective solution to filter pruning for which most existing methods directly remove filters based on simple heuristics. The proposed method is independent of the network structure, thus it can be adopted by any off-the-shelf deep learning libraries. Experiments on different networks and tasks show that our method outperforms existing techniques before fine-tuning, and achieves the state-of-the-art results after fine-tuning.", "We propose a novel Convolutional Neural Network (CNN) compression algorithm based on coreset representations of filters. We exploit the redundancies extant in the space of CNN weights and neuronal activations (across samples) in order to obtain compression. Our method requires no retraining, is easy to implement, and obtains state-of-the-art compression performance across a wide variety of CNN architectures. Coupled with quantization and Huffman coding, we create networks that provide AlexNet-like accuracy, with a memory footprint that is 832 ( ) smaller than the original AlexNet, while also introducing significant reductions in inference time as well. Additionally these compressed networks when fine-tuned, successfully generalize to other domains as well.", "The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. 
In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations.", "We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31 x FLOPs reduction and 16.63× compression on VGG-16, with only 0.52 top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1 top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.", "", "In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhance the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5× speed-up along with only 0.3 increase of error. More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4 , 1.0 accuracy loss under 2× speedup respectively, which is significant.", "" ], "cite_N": [ "@cite_30", "@cite_35", "@cite_4", "@cite_33", "@cite_21", "@cite_6", "@cite_0", "@cite_23" ], "mid": [ "2963145730", "2793035069", "2883070812", "2962851801", "2964233199", "2894330827", "2963363373", "2553910756" ] }
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration
The deeper and wider architectures of deep CNNs bring about the superior performance of computer vision tasks [6,26,45]. However, they also cause the prohibitively expensive computational cost and make the model deployment on mobile devices hard if not impossible. Even the latest architecture with high efficiencies, such as residual connection [12] or inception module [34], has millions of parameters requiring billions of float point operations (FLOPs) [15]. Therefore, it is necessary to attain the deep CNN models which have relatively low computational cost * Corrsponding Author. Part of this work was done when Yi Yang was visiting Baidu Research during his Professional Experience Program. An illustration of (a) the pruning criterion for normbased approach and the proposed method; (b) requirements for norm-based filter pruning criterion. In (a), the green boxes denote the filters of the network, where deeper color denotes larger norm of the filter. For the norm-based criterion, only the filters with the largest norm are kept based on the assumption that smallernorm filters are less important. In contrast, the proposed method prunes the filters with redundant information in the network. In this way, filters with different norms indicated by different intensities of green may be retained. In (b), the blue curve represents the ideal norm distribution of the network, and the v1 and v2 is the minimal and maximum value of norm distribution, respectively. To choose the appropriate threshold T (the red shadow), two requirements should be achieved, that is, the norm deviation should be large, and the minimum of the norm should be arbitrarily small. but high accuracy. Recent developments on pruning can be divided into two categories, i.e., weight pruning [11,1] and filter pruning [21,39]. Weight pruning directly deletes weight values in a filter which may cause unstructured sparsities. This irregular structure makes it difficult to leverage the highefficiency Basic Linear Algebra Subprograms (BLAS) libraries [25]. In contrast, filter pruning directly discards the whole selected filters and leaves a model with regular structures. Therefore, filter pruning is more preferred for accelerating the networks and decreasing the model size. Current practice [21,38,15] performs filter pruning by following the "smaller-norm-less-important" criterion, which believes that filters with smaller norms can be pruned safely due to their less importance. As shown in the top right of Figure 1(a), after calculating norms of filters in a model, a pre-specified threshold T is utilized to select filters whose norms are smaller than it. However, as illustrated in Figure 1(b), there are two prerequisites to utilize this "smaller-norm-less-important" criterion. First, the deviation of filter norms should be significant. This requirement makes the searching space for threshold T wide enough so that separating those filters needed to be pruned would be an easy task. Second, the norms of those filters which can be pruned should be arbitrarily small, i.e., close to zero; in other words, the filters with smaller norms are expected to make absolutely small contributions, rather than relatively less but positively large contributions, to the network. An ideal norm distribution when satisfactorily meeting those two requirements is illustrated as the blue curve in Figure 1. Unfortunately, based on our analysis and experimental observations, this is not always true. 
To address the problems mentioned above, we propose a novel filter pruning approach, named Filter Pruning via Geometric Median (FPGM). Different from the previous methods which prune filters with relatively less contribution, FPGM chooses the filters with the most replaceable contribution. Specifically, we calculate the Geometric Median (GM) [8] of the filters within the same layer. According to the characteristics of GM, the filter(s) F near it can be represented by the remaining ones. Therefore, pruning those filters will not have substantial negative influences on model performance. Note that FPGM does not utilize norm based criterion to select filters to prune, which means its performance will not deteriorate even when failing to meet requirements for norm-based criterion. Contributions. We have three contributions: (1) We analyze the norm-based criterion utilized in previous works, which prunes the relatively less important filters. We elaborate on its two underlying requirements which lead to its limitations; (2) We propose FPGM to prune the most replaceable filters containing redundant information, which can still achieve good performances when norm-based criterion fails; (3) The extensive experiment on two benchmarks demonstrates the effectiveness and efficiency of FPGM. Methodology Preliminaries We formally introduce symbols and notations in this subsection. We assume that a neural network has L layers. We use N i and N i+1 , to represent the number of input channels and the output channels for the i th convolution layer, respectively. F i,j represents the j th filter of the i th layer, then the dimension of filter F i,j is R Ni×K×K , where K is the kernel size of the network 1 . The i th layer of the net- work W (i) could be represented by {F i,j , 1 ≤ j ≤ N i+1 }. The tensor of connection of the deep CNN network could be parameterized by {W (i) ∈ R Ni+1×Ni×K×K , 1 ≤ i ≤ L}. Analysis of Norm-based Criterion Number of filters Value of norm Number of filters Value of norm 0 0 (1) Small Norm Deviation. The deviation of filter norm distributions might be too small, which means the norm values are concentrated to a small interval, as shown in Figure 2(a). A small norm deviation leads to a small search space, which makes it difficult to find an appropriate threshold to select filters to prune. 2 1 2 2 ′ 1 ′′ 1 ′ 2 ′′ 1 Problem 1: σ ( ′) << σ ( ) (a) Small Norm Deviation Problem 2: 1 ′′ ≫ 1 → 0 (b) Large Minimum Norm ′ ′′ (2) Large Minimum Norm. The filters with the minimum norm may not be arbitrarily small, as shown in the 1 Fully-connected layers equal to convolutional layers with k = 1 Figure 2 (b), v 1 >> v 1 → 0. Under this condition, those filters considered as the least important still contribute significantly to the network, which means every filter is highly informative. Therefore, pruning those filters with minimum norm values will cast a negative effect on the network. Norm Statistics in Real Scenarios In Figure 3, statistical information collected from pretrained ResNet-110 on CIFAR-10 and pre-trained ResNet-18 on ILSVRC-2012 demonstrates previous analysis. The small green vertical lines show each observation in this norm distribution, and the blue curves denote the Kernel Distribution Estimate (KDE) [30], which is a nonparametric way to estimate the probability density function of a random variable. The norm distribution of first layer and last layer in both structures are drawn. 
Filter Pruning via Geometric Median

To get rid of the constraints of the norm-based criterion, we propose a new filter pruning method inspired by the geometric median. The central idea of the geometric median [8] is as follows: given a set of n points a^{(1)}, ..., a^{(n)} with each a^{(i)} ∈ \mathbb{R}^d, find a point x^* ∈ \mathbb{R}^d that minimizes the sum of Euclidean distances to them:

x^* \in \arg\min_{x \in \mathbb{R}^d} f(x), \quad \text{where } f(x) \stackrel{\text{def}}{=} \sum_{i' \in [1, n]} \| x - a^{(i')} \|_2 .   (1)

As the geometric median is a classic robust estimator of centrality for data in Euclidean spaces [8], we use the geometric median F_i^{GM} to capture the common information of all the filters within a single i-th layer:

F_i^{GM} \in \arg\min_{x \in \mathbb{R}^{N_i \times K \times K}} g(x),   (2)

where

g(x) \stackrel{\text{def}}{=} \sum_{j' \in [1, N_{i+1}]} \| x - F_{i,j'} \|_2 .   (3)

In the i-th layer, if some filters have the same, or similar, values as the geometric median in that layer, i.e.,

F_{i,j^*} \in \arg\min_{j' \in [1, N_{i+1}]} \| F_{i,j'} - F_i^{GM} \|_2 ,   (4)

then those filters, F_{i,j^*}, can be represented by the other filters in the same layer, and therefore pruning them has little negative impact on network performance.

As computing the geometric median is a non-trivial problem in computational geometry, the previously fastest running times for computing a (1 + ε)-approximate geometric median were O(d n^{4/3} ε^{-8/3}) by [2] and O(n d \log^3(n/ε)) by [3]. In our case, the final result F_{i,j^*} is one of a list of known points, namely the candidate filters in the layer, so we can relax the above problem. We assume that

\| F_{i,j^*} - F_i^{GM} \|_2 = 0 ,   (5)

so that Equation (4) is attained. Then Equation (2) becomes

F_{i,j^*} \in \arg\min_{j^* \in [1, N_{i+1}]} \sum_{j' \in [1, N_{i+1}]} \| F_{i,j^*} - F_{i,j'} \|_2 = \arg\min_{j^* \in [1, N_{i+1}]} g(F_{i,j^*}) .   (6)

Note that even if the filter to be pruned, F_{i,j^*}, is not included in the calculation of the geometric median in Equation (6), we can still achieve the same result, as shown below.
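Equation (6) reduces the geometric-median computation to an argmin over the filters themselves: select the filter(s) with the smallest summed Euclidean distance to all filters in the layer. A direct sketch of this selection rule (our illustration, not the authors' released implementation):

import torch

def fpgm_select(weight: torch.Tensor, num_pruned: int) -> torch.Tensor:
    """Indices of the `num_pruned` filters nearest the geometric median.

    Implements the relaxed criterion of Equation (6): `weight` has shape
    (out_channels, in_channels, K, K), and the score of filter j* is the
    sum of its Euclidean distances to all filters in the layer.
    """
    flat = weight.flatten(start_dim=1)        # one row per filter
    dist = torch.cdist(flat, flat, p=2)       # pairwise L2 distances
    scores = dist.sum(dim=1)                  # g(F_{i,j*}) for every j*
    # Smallest scores = most replaceable filters (closest to the GM).
    return torch.argsort(scores)[:num_pruned]

# Toy usage with pruning rate P = 0.4 on a random 64-filter layer.
w = torch.randn(64, 16, 3, 3)
to_prune = fpgm_select(w, num_pruned=int(0.4 * w.size(0)))

The self-distance term in the sum is zero, so including or excluding j* in the sum does not change the argmin, consistent with the derivation that follows.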
In this setting, we want to find the filter

F'_{i,j^*} \in \arg\min_{j^* \in [1, N_{i+1}]} g'(x),   (7)

where

g'(x) = \sum_{j' \in [1, N_{i+1}],\, j' \neq j^*} \| x - F_{i,j'} \|_2 .   (8)

With Equation (6) and Equation (8), we get

g'(x) = g(x) - \| x - F_{i,j^*} \|_2 ,   (9)

and therefore

\min g'(x) = \min \{ g(x) - \| x - F_{i,j^*} \|_2 \} = \min g(x) - \min \| x - F_{i,j^*} \|_2 = g(F_{i,j^*}) - \min \| x - F_{i,j^*} \|_2 .   (10)

For the second term on the right side of Equation (10), when x = F_{i,j^*} we have \| x - F_{i,j^*} \|_2 = 0, and thus

F'_{i,j^*} = F_{i,j^*} .   (11)

Since the geometric median is a classic robust estimator of centrality for data in Euclidean spaces [8], the selected filter(s) F_{i,j^*} and the remaining ones share the most common information. This indicates that the information of the filter(s) F_{i,j^*} can be replaced by the others. After fine-tuning, the network can easily recover its original performance, since the information of the pruned filters can be represented by the remaining ones. Therefore, the filter(s) F_{i,j^*} can be pruned with negligible effect on the final result of the neural network. FPGM is summarized in Algorithm 1.

Algorithm 1: Filter Pruning via Geometric Median (FPGM)
Input: training data X; pruning rate P_i; the model with parameters W = {W(i), 1 ≤ i ≤ L}
1: for epoch = 1; epoch ≤ epoch_max; epoch++ do
2:    Update the model parameters W based on X
3:    for i = 1; i ≤ L; i++ do
4:       Find N_{i+1} · P_i filters that satisfy Equation (6)
5:       Zeroize the selected filters
6:    end for
7: end for
8: Obtain the compact model W* from W
Output: The compact model and its parameters W*

Theoretical and Realistic Acceleration

Theoretical Acceleration

Suppose the shape of the input tensor is I ∈ \mathbb{R}^{N_i \times H_i \times W_i} and that of the output tensor is O ∈ \mathbb{R}^{N_{i+1} \times H_{i+1} \times W_{i+1}}. Set the filter pruning rate of the i-th layer to P_i; then N_{i+1} × P_i filters are pruned. After filter pruning, the dimensions of the input and output feature maps of the i-th layer change to I' ∈ \mathbb{R}^{[N_i (1 - P_i)] \times H_i \times W_i} and O' ∈ \mathbb{R}^{[N_{i+1} (1 - P_i)] \times H_{i+1} \times W_{i+1}}, respectively. If the pruning rate of the (i+1)-th layer is set to P_{i+1}, then only (1 - P_{i+1}) × (1 - P_i) of the original computation is needed. Finally, a compact model {W*(i) ∈ \mathbb{R}^{N_{i+1}(1 - P_i) \times N_i(1 - P_{i-1}) \times K \times K}} is obtained.
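A quick worked example of this theoretical speedup (our illustration, not a reported result): with a uniform rate P_i = P_{i+1} = 0.4 on two consecutive layers, the (i+1)-th layer needs only

(1 - P_i)(1 - P_{i+1}) = 0.6 × 0.6 = 0.36

of its original computation, i.e., a 64% layer-wise FLOPs reduction. Network-level reductions, such as the roughly 52% reported below for ResNet-110 at a 40% rate, are smaller because not every layer has both its input and output channels pruned.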
Realistic Acceleration

In the above analysis, only the FLOPs of convolution operations are considered in the computational complexity comparison, which is common in previous works [21,15]. This is because other operations, such as batch normalization (BN) and pooling, are insignificant compared with the convolution operations. However, non-tensor layers (e.g., BN and pooling layers) also take inference time on the GPU [25] and influence the realistic acceleration. Besides, the wide gap between theoretical and realistic acceleration can also be caused by IO delay, buffer switching, and the efficiency of BLAS libraries. We compare the theoretical and practical acceleration in Table 5.

Experiments

We evaluate FPGM on a single-branch network (VGGNet [31]) and a multiple-branch network (ResNet) on two benchmarks: CIFAR-10 [20] and ILSVRC-2012 [29]. The CIFAR-10 [20] dataset contains 60,000 32×32 color images in 10 different classes, comprising 50,000 training images and 10,000 testing images. ILSVRC-2012 [29] is a large-scale dataset containing 1.28 million training images and 50k validation images of 1,000 classes.

Experimental Settings

Training setting. On CIFAR-10, the parameter setting is the same as [13] and the training schedule is the same as [40]. In the ILSVRC-2012 experiments, we use the default parameter settings, the same as [12,13]. The data augmentation strategies for ILSVRC-2012 are the same as in the official PyTorch [28] examples. We analyze the difference between starting from scratch and from the pre-trained model. For pruning the model from scratch, we use the normal training schedule without an additional fine-tuning process. For pruning the pre-trained model, we reduce the learning rate to one-tenth of the original learning rate. To conduct a fair comparison between pruning the scratch and pre-trained models, we use the same number of training epochs to train/fine-tune the network. The previous work [21] uses fewer epochs to fine-tune the pruned model, but it converges too early and its accuracy cannot improve even with more epochs, as shown in Section 4.2.

Pruning setting. In the filter pruning step, we simply prune all the weighted layers with the same pruning rate at the same time, the same as [15]. Therefore, only one hyper-parameter P_i = P is needed to balance acceleration and accuracy. The pruning operation is conducted at the end of every training epoch. Unlike previous work [21], sensitivity analysis is not essential for FPGM to achieve good performance, as demonstrated in later sections. Apart from the FPGM-only criterion, we also use a mixture of FPGM and the previous norm-based method [15] to show that FPGM can serve as a supplement to previous methods. The FPGM-only criterion is denoted as "FPGM-only", and the criterion combining FPGM and the norm-based criterion is denoted as "FPGM-mix". "FPGM-only 40%" means 40% of the filters in a layer are selected with FPGM only, while "FPGM-mix 40%" means 30% of the filters in a layer are selected with FPGM and the remaining 10% with the norm-based criterion [15].
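Putting Algorithm 1 and the pruning setting together, the soft (zeroizing) pruning step can be interleaved with standard training as in the schematic below. This is a sketch under our assumptions: fpgm_select is the helper from the earlier sketch, and train_one_epoch, loader, and optimizer are hypothetical placeholders.

import torch
import torch.nn as nn

@torch.no_grad()
def fpgm_prune_epoch_end(model: nn.Module, rate: float) -> None:
    """Zeroize the FPGM-selected filters of every conv layer (soft pruning).

    Zeroized filters keep their positions and may be updated again in the
    next epoch; the compact model is extracted only after training ends.
    """
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            idx = fpgm_select(module.weight, int(rate * module.weight.size(0)))
            module.weight[idx] = 0.0
            if module.bias is not None:
                module.bias[idx] = 0.0

# Schematic training loop following Algorithm 1:
# for epoch in range(num_epochs):
#     train_one_epoch(model, loader, optimizer)  # update W based on X
#     fpgm_prune_epoch_end(model, rate=0.4)      # prune at the end of each epoch
# compact_model = ...                            # finally obtain W* from W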
We compare FPGM with previous acceleration algorithms, e.g., MIL [5], PFEC [21], CP [16], ThiNet [25], SFP [15], NISP [39], and Rethinking [38]. Our FPGM method achieves state-of-the-art results.

Single-Branch Network Pruning

VGGNet on CIFAR-10. As the training setup of [21] is not publicly available, we re-implement its pruning procedure and achieve results similar to the original paper. The results of pruning the pre-trained and scratch models are shown in Table 3 and Table 4, respectively. FPGM achieves better performance than [21] in both settings.

Table 3. Pruning pre-trained VGGNet on CIFAR-10. "w.o." means "without" and "FT" means "fine-tuning" the pruned model.

Multiple-Branch Network Pruning

ResNet on CIFAR-10. For the CIFAR-10 dataset, we test FPGM on ResNet-20, 32, 56, and 110 with two different pruning rates: 30% and 40%. As shown in Table 1, FPGM achieves state-of-the-art performance. For example, MIL [5] without fine-tuning accelerates ResNet-32 by a 31.2% speedup ratio with a 1.59% accuracy drop, while FPGM without fine-tuning achieves a 53.2% speedup ratio with even a 0.19% accuracy improvement. Compared with SFP [15], when pruning 52.6% of the FLOPs of ResNet-56, FPGM has only a 0.66% accuracy drop, much less than that of SFP [15] (1.33%). For pruning the pre-trained ResNet-110, our method achieves a much higher acceleration ratio (52.3% vs. 38.6%) with a 0.16% performance increase, whereas PFEC [21] harms the performance at a lower acceleration ratio. These results demonstrate that FPGM can produce a more compressed model with comparable or even better performance.

Table 1. Comparison of pruned ResNet on CIFAR-10. In the "Fine-tune?" column, "✓" and "✗" indicate whether or not the pre-trained model is used as initialization. "Acc. ↓" is the accuracy drop between the pruned model and the baseline model; the smaller, the better.

ResNet on ILSVRC-2012. FPGM without fine-tuning achieves the same inference speedup as [15], but its accuracy exceeds that of [15] by 0.68%. FPGM-only with fine-tuning gains a further 0.60% improvement over FPGM-only without fine-tuning, thus exceeding [15] by 1.28%. For ResNet-50, FPGM with fine-tuning achieves more inference speedup than CP [16], and our pruned model exceeds their model by 0.85% in accuracy. Moreover, for pruning a pre-trained ResNet-101, FPGM reduces more than 40% of the FLOPs of the model with no top-5 accuracy loss and only a negligible (0.05%) top-1 accuracy loss. In contrast, the performance degradation is 2.10% for Rethinking [38]. Compared with the norm-based criterion, the Geometric Median (GM) explicitly utilizes the relationship between filters, which is the main cause of its superior performance.

To compare the theoretical and realistic acceleration, we measure the forward time of the pruned models on one GTX 1080 GPU with a batch size of 64; the results are shown in Table 5. (How the addition of ResNet shortcuts and convolutional outputs is optimized also affects the results.) As discussed in the above section, the gap between the theoretical and realistic acceleration may come from the limitations of IO delay, buffer switching, and the efficiency of BLAS libraries.

Ablation Study

Influence of Pruning Interval. In our experimental setting, the pruning interval equals one, i.e., we conduct the pruning operation at the end of every training epoch. To explore the influence of the pruning interval, we vary it from one epoch to ten epochs, using ResNet-110 under a 40% pruning rate as the baseline, as shown in Fig. 4(a). The accuracy fluctuation across the different pruning intervals is less than 0.3%, which means the pruning performance is not sensitive to this parameter. Note that tuning this parameter could achieve even better performance.

Varying Pruned FLOPs. We vary the ratio of pruned FLOPs for ResNet-110 to understand FPGM more comprehensively, as shown in Fig. 4(b). When the pruned FLOPs are 18% and 40%, the performance of the pruned model even exceeds that of the baseline model without pruning, which shows that FPGM may have a regularization effect on the neural network.

Influence of Distance Type. We replace the distance function in Equation (3) with the ℓ1-norm and cosine distance, using ResNet-110 under a 40% pruning rate as the baseline, for which the accuracy of the pruned model is 93.73 ± 0.23%. The accuracies based on the ℓ1-norm and cosine distance are 93.87 ± 0.22% and 93.56 ± 0.13%, respectively. Using the ℓ1-norm as the filter distance brings a slightly better result, while using cosine distance slightly harms the performance of the network.
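Because the metric enters only through the distance function in Equation (3), trying these ablation variants amounts to swapping the pairwise-distance computation in the earlier selection sketch. An illustration under our assumptions (pairwise_distance is a hypothetical helper, not part of the paper):

import torch
import torch.nn.functional as F

def pairwise_distance(flat: torch.Tensor, kind: str = "l2") -> torch.Tensor:
    """Pairwise filter distances; `flat` holds one flattened filter per row."""
    if kind == "l2":
        return torch.cdist(flat, flat, p=2)
    if kind == "l1":
        return torch.cdist(flat, flat, p=1)
    if kind == "cosine":
        normed = F.normalize(flat, p=2, dim=1)  # unit-length rows
        return 1.0 - normed @ normed.t()        # cosine distance in [0, 2]
    raise ValueError(f"unknown distance type: {kind}")

# Drop-in replacement inside fpgm_select:
#     scores = pairwise_distance(weight.flatten(start_dim=1), kind="l1").sum(dim=1)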
Combining FPGM with the Norm-based Criterion. We analyze the effect of combining FPGM with the previous norm-based criterion. For ResNet-110 on CIFAR-10, FPGM-mix is slightly better than FPGM-only. For ResNet-18 on ILSVRC-2012, the performances of FPGM-only and FPGM-mix are almost the same. It seems that the norm-based criterion and FPGM together can boost performance on CIFAR-10, but not on ILSVRC-2012. We believe this is because the two requirements for the norm-based criterion are met on some layers of the CIFAR-10 pre-trained network, but not on those of the ILSVRC-2012 pre-trained network, as shown in Figure 3.

Feature Map Visualization. We visualize the feature maps of the first layer of the first block of ResNet-50. The feature maps with red titles (7, 23, 27, 46, 56, 58) correspond to the activations of the filters selected when the pruning rate is set to 10%. These selected feature maps contain outlines of the bamboo and of the panda's head and body, which can be replaced by the remaining feature maps (5, 12, 16, 18, 22, ...).

Conclusion and Future Work

In this paper, we elaborate on the underlying requirements of the norm-based filter pruning criterion and point out their limitations. To solve this, we propose a new filter pruning strategy based on the geometric median, named FPGM, to accelerate deep CNNs. Unlike the previous norm-based criterion, FPGM explicitly considers the mutual relations between filters. Thanks to this, FPGM achieves state-of-the-art performance on several benchmarks. In the future, we plan to work on combining FPGM with other acceleration algorithms, e.g., matrix decomposition and low-precision weights, to push the performance further.
3,623
1811.00250
2951153470
Previous works utilized the ''smaller-norm-less-important'' criterion to prune filters with smaller norm values in a convolutional neural network. In this paper, we analyze this norm-based criterion and point out that its effectiveness depends on two requirements that are not always met: (1) the norm deviation of the filters should be large; (2) the minimum norm of the filters should be small. To solve this problem, we propose a novel filter pruning method, namely Filter Pruning via Geometric Median (FPGM), to compress the model regardless of those two requirements. Unlike previous methods, FPGM compresses CNN models by pruning filters with redundancy, rather than those with ''relatively less'' importance. When applied to two image classification benchmarks, our method validates its usefulness and strengths. Notably, on CIFAR-10, FPGM reduces more than 52% FLOPs on ResNet-110 with even a 2.69% relative accuracy improvement. Moreover, on ILSVRC-2012, FPGM reduces more than 42% FLOPs on ResNet-101 without top-5 accuracy drop, which has advanced the state-of-the-art. Code is publicly available on GitHub: this https URL
Concurrently with our work, some data-independent filter pruning strategies @cite_17 @cite_12 @cite_3 @cite_11 have been explored. @cite_17 utilizes an @math -norm criterion to prune unimportant filters. @cite_12 proposes to select filters with an @math -norm criterion and prune the selected filters in a soft manner. @cite_3 proposes to prune models by enforcing sparsity on the scaling parameters of batch normalization layers. @cite_11 uses spectral clustering on filters to select unimportant ones.
{ "abstract": [ "Deep Convolutional Neural Networks (CNN) has achieved significant success in computer vision field. However, the high computational cost of the deep complex models prevents the deployment on edge devices with limited memory and computational resource. In this paper, we proposed a novel filter pruning for convolutional neural networks compression, namely spectral clustering filter pruning with soft self-adaption manners (SCSP). We first apply spectral clustering on filters layer by layer to explore their intrinsic connections and only count on efficient groups. By self-adaption manners, the pruning operations can be done in few epochs to let the network gradually choose meaningful groups. According to this strategy, we not only achieve model compression while keeping considerable performance, but also find a novel angle to interpret the model compression process.", "Model pruning has become a useful technique that improves the computational efficiency of deep learning, making it possible to deploy solutions on resource- limited scenarios. A widely-used practice in relevant work assumes that a smaller- norm parameter or feature plays a less informative role at the inference time. In this paper, we propose a channel pruning technique for accelerating the computations of deep convolutional neural networks (CNNs), which does not critically rely on this assumption. Instead, it focuses on direct simplification of the channel-to-channel computation graph of a CNN without the need of performing a computational difficult and not always useful task of making high-dimensional tensors of CNN structured sparse. Our approach takes two stages: the first being to adopt an end-to-end stochastic training method that eventually forces the outputs of some channels being constant, and the second being to prune those constant channels from the original neural network by adjusting the biases of their impacting layers such that the resulting compact model can be quickly fine-tuned. Our approach is mathematically appealing from an optimization perspective and easy to reproduce. We experimented our approach through several image learning benchmarks and demonstrate its interesting aspects and the competitive performance.", "", "The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34 and ResNet-110 by up to 38 on CIFAR10 while regaining close to the original accuracy by retraining the networks." 
], "cite_N": [ "@cite_11", "@cite_3", "@cite_12", "@cite_17" ], "mid": [ "2808166015", "2964001144", "2808168148", "2962965870" ] }
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration
To the best of our knowledge, only one previous work reconsiders the smaller-norm-less-important criterion @cite_3 . We would like to highlight our advantages over this approach as follows: (1) @cite_3 focuses on enforcing sparsity on the scaling parameter in the batch normalization operator, which is not friendly to architectures without batch normalization; in contrast, our approach is not limited by this constraint. (2) After pruning the selected channels, @cite_3 needs fine-tuning to reduce the performance degradation, whereas our method combines the pruning operation with the normal training procedure, so extra fine-tuning is not necessary. (3) @cite_3 needs to calculate the gradient of the scaling factor, which incurs considerable computation cost, whereas our approach accelerates the neural network without calculating it.
{ "abstract": [ "Model pruning has become a useful technique that improves the computational efficiency of deep learning, making it possible to deploy solutions on resource- limited scenarios. A widely-used practice in relevant work assumes that a smaller- norm parameter or feature plays a less informative role at the inference time. In this paper, we propose a channel pruning technique for accelerating the computations of deep convolutional neural networks (CNNs), which does not critically rely on this assumption. Instead, it focuses on direct simplification of the channel-to-channel computation graph of a CNN without the need of performing a computational difficult and not always useful task of making high-dimensional tensors of CNN structured sparse. Our approach takes two stages: the first being to adopt an end-to-end stochastic training method that eventually forces the outputs of some channels being constant, and the second being to prune those constant channels from the original neural network by adjusting the biases of their impacting layers such that the resulting compact model can be quickly fine-tuned. Our approach is mathematically appealing from an optimization perspective and easy to reproduce. We experimented our approach through several image learning benchmarks and demonstrate its interesting aspects and the competitive performance." ], "cite_N": [ "@cite_3" ], "mid": [ "2964001144" ] }
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration
The deeper and wider architectures of deep CNNs bring about the superior performance of computer vision tasks [6,26,45]. However, they also cause the prohibitively expensive computational cost and make the model deployment on mobile devices hard if not impossible. Even the latest architecture with high efficiencies, such as residual connection [12] or inception module [34], has millions of parameters requiring billions of float point operations (FLOPs) [15]. Therefore, it is necessary to attain the deep CNN models which have relatively low computational cost * Corrsponding Author. Part of this work was done when Yi Yang was visiting Baidu Research during his Professional Experience Program. An illustration of (a) the pruning criterion for normbased approach and the proposed method; (b) requirements for norm-based filter pruning criterion. In (a), the green boxes denote the filters of the network, where deeper color denotes larger norm of the filter. For the norm-based criterion, only the filters with the largest norm are kept based on the assumption that smallernorm filters are less important. In contrast, the proposed method prunes the filters with redundant information in the network. In this way, filters with different norms indicated by different intensities of green may be retained. In (b), the blue curve represents the ideal norm distribution of the network, and the v1 and v2 is the minimal and maximum value of norm distribution, respectively. To choose the appropriate threshold T (the red shadow), two requirements should be achieved, that is, the norm deviation should be large, and the minimum of the norm should be arbitrarily small. but high accuracy. Recent developments on pruning can be divided into two categories, i.e., weight pruning [11,1] and filter pruning [21,39]. Weight pruning directly deletes weight values in a filter which may cause unstructured sparsities. This irregular structure makes it difficult to leverage the highefficiency Basic Linear Algebra Subprograms (BLAS) libraries [25]. In contrast, filter pruning directly discards the whole selected filters and leaves a model with regular structures. Therefore, filter pruning is more preferred for accelerating the networks and decreasing the model size. Current practice [21,38,15] performs filter pruning by following the "smaller-norm-less-important" criterion, which believes that filters with smaller norms can be pruned safely due to their less importance. As shown in the top right of Figure 1(a), after calculating norms of filters in a model, a pre-specified threshold T is utilized to select filters whose norms are smaller than it. However, as illustrated in Figure 1(b), there are two prerequisites to utilize this "smaller-norm-less-important" criterion. First, the deviation of filter norms should be significant. This requirement makes the searching space for threshold T wide enough so that separating those filters needed to be pruned would be an easy task. Second, the norms of those filters which can be pruned should be arbitrarily small, i.e., close to zero; in other words, the filters with smaller norms are expected to make absolutely small contributions, rather than relatively less but positively large contributions, to the network. An ideal norm distribution when satisfactorily meeting those two requirements is illustrated as the blue curve in Figure 1. Unfortunately, based on our analysis and experimental observations, this is not always true. 
To address the problems mentioned above, we propose a novel filter pruning approach, named Filter Pruning via Geometric Median (FPGM). Different from the previous methods which prune filters with relatively less contribution, FPGM chooses the filters with the most replaceable contribution. Specifically, we calculate the Geometric Median (GM) [8] of the filters within the same layer. According to the characteristics of GM, the filter(s) F near it can be represented by the remaining ones. Therefore, pruning those filters will not have substantial negative influences on model performance. Note that FPGM does not utilize norm based criterion to select filters to prune, which means its performance will not deteriorate even when failing to meet requirements for norm-based criterion. Contributions. We have three contributions: (1) We analyze the norm-based criterion utilized in previous works, which prunes the relatively less important filters. We elaborate on its two underlying requirements which lead to its limitations; (2) We propose FPGM to prune the most replaceable filters containing redundant information, which can still achieve good performances when norm-based criterion fails; (3) The extensive experiment on two benchmarks demonstrates the effectiveness and efficiency of FPGM. Methodology Preliminaries We formally introduce symbols and notations in this subsection. We assume that a neural network has L layers. We use N i and N i+1 , to represent the number of input channels and the output channels for the i th convolution layer, respectively. F i,j represents the j th filter of the i th layer, then the dimension of filter F i,j is R Ni×K×K , where K is the kernel size of the network 1 . The i th layer of the net- work W (i) could be represented by {F i,j , 1 ≤ j ≤ N i+1 }. The tensor of connection of the deep CNN network could be parameterized by {W (i) ∈ R Ni+1×Ni×K×K , 1 ≤ i ≤ L}. Analysis of Norm-based Criterion Number of filters Value of norm Number of filters Value of norm 0 0 (1) Small Norm Deviation. The deviation of filter norm distributions might be too small, which means the norm values are concentrated to a small interval, as shown in Figure 2(a). A small norm deviation leads to a small search space, which makes it difficult to find an appropriate threshold to select filters to prune. 2 1 2 2 ′ 1 ′′ 1 ′ 2 ′′ 1 Problem 1: σ ( ′) << σ ( ) (a) Small Norm Deviation Problem 2: 1 ′′ ≫ 1 → 0 (b) Large Minimum Norm ′ ′′ (2) Large Minimum Norm. The filters with the minimum norm may not be arbitrarily small, as shown in the 1 Fully-connected layers equal to convolutional layers with k = 1 Figure 2 (b), v 1 >> v 1 → 0. Under this condition, those filters considered as the least important still contribute significantly to the network, which means every filter is highly informative. Therefore, pruning those filters with minimum norm values will cast a negative effect on the network. Norm Statistics in Real Scenarios In Figure 3, statistical information collected from pretrained ResNet-110 on CIFAR-10 and pre-trained ResNet-18 on ILSVRC-2012 demonstrates previous analysis. The small green vertical lines show each observation in this norm distribution, and the blue curves denote the Kernel Distribution Estimate (KDE) [30], which is a nonparametric way to estimate the probability density function of a random variable. The norm distribution of first layer and last layer in both structures are drawn. 
In addition, to clearly illustrate the relation between norm points, two different x-scale, i.e., linear x-scale and log x-scale, are presented. (1) Small Norm Deviation in Network. For the first convolutional layer of ResNet-110, as shown in Figure 3(b), there is a large quantity of filters whose norms are concentrated around the magnitude of 10 −6 . For the last convolutional layer of ResNet-110, as shown in Figure 3(c), the interval span of the value of norm is roughly 0.3, which is much smaller than the interval span of the norm of the first layer (1.7). For the last convolutional layer of ResNet-18, as shown in Figure 3(g), most filter norms are between the interval [0.8, 1.0]. In all these cases, filters are distributed too densely, which makes it difficult to select a proper threshold to distinguish the important filters from the others. (2) Large Minimum Norm in Network. For the last convolutional layer of ResNet-18, as shown in Figure 3(g), the minimum norm of these filters is around 0.8, which is large comparing to filters in the first convolutional layer (Figure 3(e)). For the last convolutional layer of ResNet-110, as shown in Figure 3(c), only one filter is arbitrarily small, while the others are not. Under those circumstances, the filters with minimum norms, although they are relatively less important according to the norm-based criterion, still make significant contributions in the network. Filter Pruning via Geometric Median To get rid of the constraints in the norm-based criterion, we propose a new filter pruning method inspired from geometric median. The central idea of geometric median [8] is as follows: given a set of n points a (1) , . . . , a (n) with each a (i) ∈ R d , find a point x * ∈ R d that minimizes the sum of Euclidean distances to them: [1,n] x − a As the geometric median is a classic robust estimator of centrality for data in Euclidean spaces [8], we use the geometric median F GM i to get the common information of all the filters within the single i th layer: x * ∈ arg min x∈R d f (x) where f (x) def = i∈F GM i ∈ arg min x∈R N i ×K×K g(x),(2) where g(x) def = j ∈[1,N i+1 ] x − F i,j 2.(3) In the i th layer, if some filters have the same, or similar values as the geometric median in that layer, which is: Fi,j * ∈ arg min j ∈[1,N i+1 ] F i,j − F GM i 2,(4) then those filters, F i,j * , can be represented by the other filters in the same layer, and therefore, pruning them has little negative impacts on the network performance. As geometric median is a non-trivial problem in computational geometry, the previous fastest running times for computing a (1 + )-approximate geometric median were O(dn 4/3 · −8/3 ) by [2], O(nd log 3 (n/ )) by [3]. In our case, as the final result F i,j * are a list of know points, that is, the candidate filters in the layer, we could relax the above problem. We assume that Fi,j * − F GM i 2 = 0,(5) so the Equation.4 is achieved. Then the above Equation.2 becomes to Fi,j * ∈ arg min j * ∈[1,N i+1 ] j ∈[1,N i+1 ] x − F i,j 2 = arg min j * ∈[1,N i+1 ] g(x)(6) Note that even if the filter need to be pruned, F i,j * , is not included in the calculation of the geometric median in Equation.6 2 , we could also achieve the same result. 
In this setting, we want to find the filter F i,j * ∈ arg min j * ∈[1,N i+1 ] g (x),(7) where g (x) = j ∈[1,N i+1 ],j =j * x − F i,j 2.(8) With the above Equation.6 and Equation.8, we could get that: Find N i+1 P i filters that satisfy Equation 6 7: g (x) = g(x) − j =j * x − F i,j 2 = g(x) − x − Fi,j * 2.(9 Zeroize selected filters 8: end for 9: end for 10: Obtain the compact model W * from W Output: The compact model and its parameters W * then we could get min g (x) = min{g(x) − x − Fi,j * 2} = min g(x) − min x − Fi,j * 2 = g(Fi,j * ) − min x − Fi,j * 2.(10) For the second component of the right side for Equation.10, when x = F i,j * , we can get: F i,j * = Fi,j * (11) since x − F i,j 2 = 0 Since the geometric median is a classic robust estimator of centrality for data in Euclidean spaces [8], the selected filter(s), F i,j * , and left ones share the most common information. This indicates the information of the filter(s) F i,j * could be replaced by others. After fine-tuning, the network could easily recover its original performance since the information of pruned filters can be represented by the remaining ones. Therefore, the filter(s) F i,j * could be pruned with negligible effect on the final result of the neural network. The FPGM is summarized in Algorithm 1. Theoretical and Realistic Acceleration Theoretical Acceleration Suppose the shapes of input tensor I ∈ N i × H i × W i and output tensor O ∈ N i+1 × H i+1 × W i+1 . Set the filter pruning rate of the i th layer to P i , then N i+1 × P i filters should be pruned. After filter pruning, the dimension of input and output feature map of the i th layer change to I ∈ [N i × (1 − P i )] × H i × W i and O ∈ [N i+1 × (1 − P i )] × H i+1 × W i+1 , respectively. If setting pruning rate for the (i + 1) th layer to P i+1 , then only (1 − P i+1 ) × (1 − P i ) of the original computation is needed. Finally, a compact model {W * (i) ∈ R Ni+1(1−Pi)×Ni(1−Pi−1)×K×K } is obtained. Realistic Acceleration In the above analysis, only the FLOPs of convolution operations for computation complexity comparison is considered, which is common in previous works [21,15]. This is because other operations such as batch normalization (BN) and pooling are insignificant comparing to convolution operations. However, non-tensor layers (e.g., BN and pooling layers) also need the inference time on GPU [25], and influence the realistic acceleration. Besides, the wide gap between the theoretical and realistic acceleration could also be restricted by the IO delay, buffer switch, and efficiency of BLAS libraries. We compare the theoretical and practical acceleration in Table 5. Experiments We evaluate FPGM for single-branch network (VGGNet [31]), and multiple-branch network (ResNet) on two benchmarks: CIFAR-10 [20] and ILSVRC-2012 [29] 3 . The CIFAR-10 [20] dataset contains 60, 000 32 × 32 color images in 10 different classes, in which 50, 000 training images and 10, 000 testing images are included. ILSVRC-2012 [29] is a large-scale dataset containing 1.28 million training images and 50k validation images of 1,000 classes. Experimental Settings Training setting. On CIFAR-10, the parameter setting is the same as [13] and the training schedule is the same as [40]. In the ILSVRC-2012 experiments, we use the default parameter settings which is same as [12,13]. Data argumentation strategies for ILSVRC-2012 is the same as Py-Torch [28] official examples. We analyze the difference between starting from scratch and the pre-trained model. 
For pruning the model from scratch, we use the normal training schedule without an additional fine-tuning process. For pruning the pre-trained model, we reduce the learning rate to one-tenth of the original learning rate. To conduct a fair comparison between pruning the scratch model and the pre-trained model, we use the same number of training epochs to train/fine-tune the network. The previous work [21] might use fewer epochs to fine-tune the pruned model, but it converges too early, and its accuracy cannot improve even with more epochs, as shown in Section 4.2.

Pruning setting. In the filter pruning step, we simply prune all the weighted layers with the same pruning rate at the same time, which is the same as [15]. Therefore, only one hyper-parameter P_i = P is needed to balance the acceleration and accuracy. The pruning operation is conducted at the end of every training epoch. Unlike previous work [21], sensitivity analysis is not essential for FPGM to achieve good performance, which will be demonstrated in later sections. Apart from the FPGM-only criterion, we also use a mixture of FPGM and the previous norm-based method [15] to show that FPGM can serve as a supplement to previous methods. The FPGM-only criterion is denoted as "FPGM-only"; the criterion combining FPGM and the norm-based criterion is denoted as "FPGM-mix". "FPGM-only 40%" means 40% of the filters of the layer are selected with FPGM only, while "FPGM-mix 40%" means 30% of the filters of the layer are selected with FPGM and the remaining 10% are selected with the norm-based criterion [15]. We compare FPGM with previous acceleration algorithms, e.g., MIL [5], PFEC [21], CP [16], ThiNet [25], SFP [15], NISP [39], Rethinking [38]. Not surprisingly, our FPGM method achieves the state-of-the-art result.

Table 1. Comparison of pruned ResNet on CIFAR-10. In the "Fine-tune?" column, "✓" and "✗" indicate whether the pre-trained model is used as initialization or not, respectively. "Acc. ↓" is the accuracy drop between the pruned model and the baseline model; the smaller, the better.

Single-Branch Network Pruning

VGGNet on CIFAR-10. As the training setup of [21] is not publicly available, we re-implement the pruning procedure and achieve results similar to the original paper. The results of pruning the pre-trained model and the scratch model are shown in Table 3 and Table 4, respectively. FPGM achieves better performance than [21] in both settings.

Multiple-Branch Network Pruning

ResNet on CIFAR-10. For the CIFAR-10 dataset, we test our FPGM on ResNet-20, 32, 56 and 110 with two different pruning rates: 30% and 40%. As shown in Table 1, our FPGM achieves the state-of-the-art performance. For example, MIL [5] without fine-tuning accelerates ResNet-32 by a 31.2% speedup ratio with a 1.59% accuracy drop, but our FPGM without fine-tuning achieves a 53.2% speedup ratio with even a 0.19% accuracy improvement. Compared to SFP [15], when pruning 52.6% of the FLOPs of ResNet-56, our FPGM has only a 0.66% accuracy drop, which is much less than that of SFP [15] (1.33%). For pruning the pre-trained ResNet-110, our method achieves a much higher acceleration ratio (52.3% vs. 38.6%) with a 0.16% performance increase, while PFEC [21] harms the performance with a lower acceleration ratio. These results demonstrate that FPGM can produce a more compressed model with comparable or even better performance.

Table 3. Pruning pre-trained VGGNet on CIFAR-10. "w.o." means "without" and "FT" means "fine-tuning" the pruned model.

ResNet on ILSVRC-2012.
FPGM without fine-tuning achieves the same inference speedup as [15], but its accuracy exceeds that of [15] by 0.68%. FPGM-only with fine-tuning can even gain a 0.60% improvement over FPGM-only without fine-tuning, thus exceeding [15] by 1.28%. For ResNet-50, FPGM with fine-tuning achieves more inference speedup than CP [16], and our pruned model exceeds their model by 0.85% in accuracy. Moreover, for pruning a pre-trained ResNet-101, FPGM reduces more than 40% of the FLOPs of the model without top-5 accuracy loss and with only negligible (0.05%) top-1 accuracy loss. In contrast, the performance degradation is 2.10% for Rethinking [38]. Compared to the norm-based criterion, Geometric Median (GM) explicitly utilizes the relationship between filters, which is the main cause of its superior performance.

To compare the theoretical and realistic acceleration, we measure the forward time of the pruned models on one GTX 1080 GPU with a batch size of 64. The results are shown in Table 5. (Optimization of the addition of ResNet shortcuts and convolutional outputs would also affect the results.) As discussed in the above section, the gap between the theoretical and realistic acceleration may come from the limitations of IO delay, buffer switching, and the efficiency of BLAS libraries.

Ablation Study

Influence of Pruning Interval. In our experimental setting, the pruning interval equals one, i.e., we conduct the pruning operation at the end of every training epoch. To explore the influence of the pruning interval, we change it from one epoch to ten epochs. We use ResNet-110 under a pruning rate of 40% as the baseline, as shown in Fig. 4(a). The accuracy fluctuation across the different pruning intervals is less than 0.3%, which means the performance of pruning is not sensitive to this parameter. Note that fine-tuning this parameter could even achieve better performance.

Varying Pruned FLOPs. We change the ratio of pruned FLOPs for ResNet-110 to comprehensively understand FPGM, as shown in Fig. 4(b). When the pruned FLOPs are 18% and 40%, the performance of the pruned model even exceeds the baseline model without pruning, which shows that FPGM may have a regularization effect on the neural network.

Influence of Distance Type. We use the ℓ1-norm and the cosine distance to replace the distance function in Equation 3. We use ResNet-110 under a pruning rate of 40% as the baseline; the accuracy of the pruned model is 93.73 ± 0.23%. The accuracy based on the ℓ1-norm and the cosine distance is 93.87 ± 0.22% and 93.56 ± 0.13%, respectively. Using the ℓ1-norm as the filter distance brings a slightly better result, while using the cosine distance slightly harms the performance of the network.

Combining FPGM with Norm-based Criterion. We analyze the effect of combining FPGM and the previous norm-based criterion. For ResNet-110 on CIFAR-10, FPGM-mix is slightly better than FPGM-only. For ResNet-18 on ILSVRC-2012, the performances of FPGM-only and FPGM-mix are almost the same. It seems that the norm-based criterion and FPGM together can boost the performance on CIFAR-10, but not on ILSVRC-2012. We believe that this is because the two requirements for the norm-based criterion are met on some layers of the CIFAR-10 pre-trained network, but not on those of the ILSVRC-2012 pre-trained network, as shown in Figure 3.

Feature Map Visualization. We visualize the feature maps of the first layer of the first block of ResNet-50. The feature maps with red titles (7, 23, 27, 46, 56, 58) correspond to the activations of the selected filters when the pruning rate is set to 10%.
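A visualization like the one described above could be reproduced along the following lines: hook the first convolution of ResNet-50's first block, run an input through, and flag the channels whose filters the relaxed GM criterion of Equation 6 would select at a 10% rate. This is a hedged sketch; the random input stands in for a real image, and the selected indices will not match the paper's (7, 23, 27, 46, 56, 58) since no pre-trained weights are loaded.

```python
import torch
import torchvision

model = torchvision.models.resnet50(weights=None).eval()
feats = {}
layer = model.layer1[0].conv1                       # first conv of the first block
layer.register_forward_hook(lambda m, i, o: feats.update(out=o.detach()))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))              # stand-in for a real image

flat = layer.weight.detach().reshape(layer.out_channels, -1)
scores = torch.cdist(flat, flat, p=2).sum(dim=1)    # Equation 6 ranking
flagged = torch.argsort(scores)[: int(layer.out_channels * 0.1)]
print("channels flagged for pruning:", sorted(flagged.tolist()))
print("feature maps to plot:", feats["out"].shape)  # (1, C, H, W)
```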
These selected feature maps contain outlines of the bamboo and the panda's head and body, which can be replaced by the remaining feature maps (5, 12, 16, 18, 22, ...).

Conclusion and Future Work

In this paper, we elaborate on the underlying requirements of the norm-based filter pruning criterion and point out their limitations. To solve this, we propose a new filter pruning strategy based on the geometric median, named FPGM, to accelerate deep CNNs. Unlike the previous norm-based criterion, FPGM explicitly considers the mutual relations between filters. Thanks to this, FPGM achieves state-of-the-art performance on several benchmarks. In the future, we plan to work on how to combine FPGM with other acceleration algorithms, e.g., matrix decomposition and low-precision weights, to push the performance to a higher stage.
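Putting the pieces of the paper together, the following compact sketch mirrors the per-epoch loop of Algorithm 1: at the end of every training epoch, the filters closest to the geometric median of each convolutional layer are zeroized (soft pruning), and the compact model is extracted after training. The training helper in the comments is a placeholder, not part of the paper.

```python
import torch

def fpgm_select(weight: torch.Tensor, rate: float) -> torch.Tensor:
    # rank filters by summed distance to all other filters (Equation 6)
    flat = weight.reshape(weight.shape[0], -1)
    scores = torch.cdist(flat, flat, p=2).sum(dim=1)
    return torch.argsort(scores)[: int(weight.shape[0] * rate)]

def fpgm_epoch_step(model: torch.nn.Module, rate: float = 0.3) -> None:
    # soft pruning: zeroize the GM-closest filters of every conv layer
    with torch.no_grad():
        for m in model.modules():
            if isinstance(m, torch.nn.Conv2d):
                m.weight[fpgm_select(m.weight, rate)] = 0.0

# toy model to show the step runs end to end
model = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3), torch.nn.ReLU(),
                            torch.nn.Conv2d(16, 32, 3))
fpgm_epoch_step(model, rate=0.25)

# sketch of the surrounding training loop (helpers are placeholders):
# for epoch in range(num_epochs):
#     train_one_epoch(model, loader, optimizer)
#     fpgm_epoch_step(model, rate=0.25)
# afterwards, physically remove the zeroized filters to obtain W*.
```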
3,623
1811.00250
2951153470
Previous works utilized the "smaller-norm-less-important" criterion to prune filters with smaller norm values in a convolutional neural network. In this paper, we analyze this norm-based criterion and point out that its effectiveness depends on two requirements that are not always met: (1) the norm deviation of the filters should be large; (2) the minimum norm of the filters should be small. To solve this problem, we propose a novel filter pruning method, namely Filter Pruning via Geometric Median (FPGM), to compress the model regardless of those two requirements. Unlike previous methods, FPGM compresses CNN models by pruning filters with redundancy, rather than those with "relatively less" importance. When applied to two image classification benchmarks, our method validates its usefulness and strengths. Notably, on CIFAR-10, FPGM reduces more than 52% of the FLOPs on ResNet-110 with even a 2.69% relative accuracy improvement. Moreover, on ILSVRC-2012, FPGM reduces more than 42% of the FLOPs on ResNet-101 without top-5 accuracy drop, which has advanced the state-of-the-art. Code is publicly available on GitHub: this https URL
Some other works @cite_29 @cite_6 @cite_4 @cite_35 @cite_11 share a common idea with ours, namely finding filters with similar functions so that those filters can be pruned. The differences are as follows: (1) we focus on accelerating the inference of neural networks, while @cite_29 concentrates on the emergence of duplicate filters over training iterations; (2) we use the geometric median to select the filters to prune, while @cite_29 applies cosine similarity to analyze the similarity of filters; (3) we have demonstrated our approach on large-scale datasets with sophisticated ResNets, while @cite_3 only conducted experiments on small-scale CIFAR-10 with AlexNet, and @cite_6 only on small-scale CIFAR-10 and CIFAR-100 with AlexNet
{ "abstract": [ "While the research on convolutional neural networks (CNNs) is progressing quickly, the real-world deployment of these models is often limited by computing resources and memory constraints. In this paper, we address this issue by proposing a novel filter pruning method to compress and accelerate CNNs. Our work is based on the linear relationship identified in different feature map subspaces via visualization of feature maps. Such linear relationship implies that the information in CNNs is redundant. Our method eliminates the redundancy in convolutional filters by applying subspace clustering to feature maps. In this way, most of the representative information in the network can be retained in each cluster. Therefore, our method provides an effective solution to filter pruning for which most existing methods directly remove filters based on simple heuristics. The proposed method is independent of the network structure, thus it can be adopted by any off-the-shelf deep learning libraries. Experiments on different networks and tasks show that our method outperforms existing techniques before fine-tuning, and achieves the state-of-the-art results after fine-tuning.", "We propose a novel Convolutional Neural Network (CNN) compression algorithm based on coreset representations of filters. We exploit the redundancies extant in the space of CNN weights and neuronal activations (across samples) in order to obtain compression. Our method requires no retraining, is easy to implement, and obtains state-of-the-art compression performance across a wide variety of CNN architectures. Coupled with quantization and Huffman coding, we create networks that provide AlexNet-like accuracy, with a memory footprint that is 832 ( ) smaller than the original AlexNet, while also introducing significant reductions in inference time as well. Additionally these compressed networks when fine-tuned, successfully generalize to other domains as well.", "", "", "Model pruning has become a useful technique that improves the computational efficiency of deep learning, making it possible to deploy solutions on resource- limited scenarios. A widely-used practice in relevant work assumes that a smaller- norm parameter or feature plays a less informative role at the inference time. In this paper, we propose a channel pruning technique for accelerating the computations of deep convolutional neural networks (CNNs), which does not critically rely on this assumption. Instead, it focuses on direct simplification of the channel-to-channel computation graph of a CNN without the need of performing a computational difficult and not always useful task of making high-dimensional tensors of CNN structured sparse. Our approach takes two stages: the first being to adopt an end-to-end stochastic training method that eventually forces the outputs of some channels being constant, and the second being to prune those constant channels from the original neural network by adjusting the biases of their impacting layers such that the resulting compact model can be quickly fine-tuned. Our approach is mathematically appealing from an optimization perspective and easy to reproduce. We experimented our approach through several image learning benchmarks and demonstrate its interesting aspects and the competitive performance.", "Deep Convolutional Neural Networks (CNN) has achieved significant success in computer vision field. 
However, the high computational cost of the deep complex models prevents the deployment on edge devices with limited memory and computational resource. In this paper, we proposed a novel filter pruning for convolutional neural networks compression, namely spectral clustering filter pruning with soft self-adaption manners (SCSP). We first apply spectral clustering on filters layer by layer to explore their intrinsic connections and only count on efficient groups. By self-adaption manners, the pruning operations can be done in few epochs to let the network gradually choose meaningful groups. According to this strategy, we not only achieve model compression while keeping considerable performance, but also find a novel angle to interpret the model compression process." ], "cite_N": [ "@cite_35", "@cite_4", "@cite_29", "@cite_6", "@cite_3", "@cite_11" ], "mid": [ "2793035069", "2883070812", "", "2894330827", "2964001144", "2808166015" ] }
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration
The deeper and wider architectures of deep CNNs bring about the superior performance of computer vision tasks [6, 26, 45]. However, they also cause prohibitively expensive computational cost and make model deployment on mobile devices hard, if not impossible. Even the latest architectures with high efficiency, such as the residual connection [12] or the inception module [34], have millions of parameters requiring billions of floating-point operations (FLOPs) [15]. Therefore, it is necessary to attain deep CNN models which have relatively low computational cost but high accuracy.

* Corresponding Author. Part of this work was done when Yi Yang was visiting Baidu Research during his Professional Experience Program.

Figure 1. An illustration of (a) the pruning criterion of the norm-based approach and the proposed method; (b) the requirements of the norm-based filter pruning criterion. In (a), the green boxes denote the filters of the network, where a deeper color denotes a larger norm of the filter. For the norm-based criterion, only the filters with the largest norms are kept, based on the assumption that smaller-norm filters are less important. In contrast, the proposed method prunes the filters with redundant information in the network. In this way, filters with different norms, indicated by different intensities of green, may be retained. In (b), the blue curve represents the ideal norm distribution of the network, and v1 and v2 are the minimum and maximum values of the norm distribution, respectively. To choose an appropriate threshold T (the red shadow), two requirements should be met: the norm deviation should be large, and the minimum of the norm should be arbitrarily small.

Recent developments on pruning can be divided into two categories, i.e., weight pruning [11, 1] and filter pruning [21, 39]. Weight pruning directly deletes weight values in a filter, which may cause unstructured sparsities. This irregular structure makes it difficult to leverage high-efficiency Basic Linear Algebra Subprograms (BLAS) libraries [25]. In contrast, filter pruning directly discards whole selected filters and leaves a model with a regular structure. Therefore, filter pruning is preferred for accelerating networks and decreasing model size.

Current practice [21, 38, 15] performs filter pruning by following the "smaller-norm-less-important" criterion, which believes that filters with smaller norms can be pruned safely due to their lesser importance. As shown in the top right of Figure 1(a), after calculating the norms of the filters in a model, a pre-specified threshold T is utilized to select the filters whose norms are smaller than it. However, as illustrated in Figure 1(b), there are two prerequisites to utilizing this "smaller-norm-less-important" criterion. First, the deviation of filter norms should be significant. This requirement makes the search space for the threshold T wide enough so that separating the filters to be pruned is an easy task. Second, the norms of the filters which can be pruned should be arbitrarily small, i.e., close to zero; in other words, the filters with smaller norms are expected to make absolutely small contributions, rather than relatively less but positively large contributions, to the network. An ideal norm distribution that satisfactorily meets these two requirements is illustrated as the blue curve in Figure 1. Unfortunately, based on our analysis and experimental observations, this is not always true.
To address the problems mentioned above, we propose a novel filter pruning approach, named Filter Pruning via Geometric Median (FPGM). Different from previous methods which prune filters with relatively less contribution, FPGM chooses the filters with the most replaceable contribution. Specifically, we calculate the Geometric Median (GM) [8] of the filters within the same layer. According to the characteristics of the GM, the filter(s) F near it can be represented by the remaining ones. Therefore, pruning those filters will not have a substantial negative influence on model performance. Note that FPGM does not utilize a norm-based criterion to select the filters to prune, which means its performance will not deteriorate even when the requirements for the norm-based criterion fail to be met.

Contributions. We have three contributions: (1) We analyze the norm-based criterion utilized in previous works, which prunes the relatively less important filters, and we elaborate on its two underlying requirements, which lead to its limitations. (2) We propose FPGM to prune the most replaceable filters containing redundant information, which can still achieve good performance when the norm-based criterion fails. (3) Extensive experiments on two benchmarks demonstrate the effectiveness and efficiency of FPGM.

Methodology

Preliminaries

We formally introduce symbols and notations in this subsection. We assume that a neural network has L layers. We use N_i and N_{i+1} to represent the number of input channels and output channels of the i-th convolutional layer, respectively. F_{i,j} represents the j-th filter of the i-th layer; the dimension of the filter F_{i,j} is \mathbb{R}^{N_i \times K \times K}, where K is the kernel size of the network.¹ The i-th layer of the network, W^{(i)}, can be represented by \{F_{i,j}, 1 \le j \le N_{i+1}\}. The tensor of connections of the deep CNN network can be parameterized by \{W^{(i)} \in \mathbb{R}^{N_{i+1} \times N_i \times K \times K}, 1 \le i \le L\}.

¹ Fully-connected layers equal convolutional layers with K = 1.

Analysis of Norm-based Criterion

Figure 2. Two problems of the norm-based criterion (x-axis: value of norm; y-axis: number of filters): (a) Small Norm Deviation, Problem 1: σ(v') ≪ σ(v); (b) Large Minimum Norm, Problem 2: v''_1 ≫ v_1 → 0.

(1) Small Norm Deviation. The deviation of the filter norm distribution might be too small, which means the norm values are concentrated in a small interval, as shown in Figure 2(a). A small norm deviation leads to a small search space, which makes it difficult to find an appropriate threshold to select the filters to prune.

(2) Large Minimum Norm. The filters with the minimum norm may not be arbitrarily small, as shown in Figure 2(b): v''_1 ≫ v_1 → 0. Under this condition, the filters considered the least important still contribute significantly to the network, which means every filter is highly informative. Therefore, pruning the filters with the minimum norm values will cast a negative effect on the network.

Norm Statistics in Real Scenarios

In Figure 3, statistical information collected from the pre-trained ResNet-110 on CIFAR-10 and the pre-trained ResNet-18 on ILSVRC-2012 demonstrates the above analysis. The small green vertical lines show each observation in the norm distribution, and the blue curves denote the Kernel Density Estimate (KDE) [30], which is a non-parametric way to estimate the probability density function of a random variable. The norm distributions of the first layer and the last layer of both structures are drawn.
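Both prerequisites are easy to inspect for any pre-trained model. The sketch below (our own illustration, not tooling from the paper) prints, for every convolutional layer, the standard deviation, minimum, and span of the filter ℓ2-norms, i.e., exactly the statistics that Figures 2 and 3 are about.

```python
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)   # any CNN can be inspected

for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        # one L2 norm per filter of this layer
        norms = module.weight.detach().reshape(module.out_channels, -1).norm(dim=1)
        print(f"{name}: std={norms.std().item():.4f}  min={norms.min().item():.4f}  "
              f"span={(norms.max() - norms.min()).item():.4f}")
```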
3,623
1810.13082
2899192577
Humanoid robots dynamically navigate an environment by interacting with it via contact wrenches exerted at intermittent contact poses. Therefore, it is important to consider dynamics when planning a contact sequence. Traditional contact planning approaches assume a quasi-static balance criterion to reduce the computational challenges of selecting a contact sequence over rough terrain. This, however, limits the applicability of the approach when dynamic motions are required, such as when walking down a steep slope or crossing a wide gap. Recent methods overcome this limitation with the help of efficient mixed-integer convex programming solvers capable of synthesizing dynamic contact sequences. Nevertheless, their exponential-time complexity limits their applicability to short-time-horizon contact sequences within small environments. In this paper, we go beyond current approaches by learning a prediction of the dynamic evolution of the robot centroidal momenta, which can then be used for quickly generating dynamically robust contact sequences for robots with arms and legs using a search-based contact planner. We demonstrate the efficiency and quality of the results of the proposed approach in a set of dynamically challenging scenarios.
Footstep planning for humanoid robots has been studied extensively @cite_15 @cite_35 @cite_12 @cite_30 @cite_24 @cite_7 @cite_21 . In these works, the planner plans a footstep sequence to avoid obstacles on the ground and remain inside the specified contact regions on flat or piecewise-flat ground. To increase the likelihood of success, they incorporate an approximation of robot balance and kinematic reachability into the contact transition model, and do not explicitly perform balance checks online. There are also works addressing contact planning in unstructured environments using both palm and foot contacts @cite_22 @cite_18 @cite_1 @cite_26 @cite_0 . However, these approaches assume quasi-static motions, and drop solutions involving dynamic motions.
{ "abstract": [ "The rapid development of the theory of robust estimation (Huber, 1973) has created a need for computational procedures to produce robust estimates. We will review a number of different computational approaches for robust linear regression but focus on one—iteratively reweighted least-squares (IRLS). The weight functions that we discuss are a part of a semi-portable subroutine library called ROSEPACK (RObust Statistical Estimation PACKage) that has been developed by the authors and Virginia Klema at the Computer Research Center of the National Bureau of Economic Research, Inc. in Cambridge, Mass. with the support of the National Science Foundation. This library (Klema, 1976) makes it relatively simple to implement an IRLS regression package.", "We present an algorithm for planning goal-directed footstep navigation strategies for biped robots through obstacle-filled environ- ments and uneven ground. Planning footsteps is more general than most existing navigation methods designed for wheeled robots, since the op- tions of stepping over or upon obstacles are available. Given a height map of the terrain and a discrete set of possible footstep motions, the planner uses an A* search to generate a sequence of footstep locations to reach a given goal state. The planner evaluates footstep locations for viability using a collection of heuristic metrics designed to encode the relative safety, effort required, and overall motion complexity. We show preliminary results of the planner over several simulated terrains, as well as a simplified, online version of the algorithm running on the H7 hu- manoid robot. In the latter case, a stereo vision system is used to sense obstacles in the immediate environment and identify a target goal loca- tion, which is used to update the current optimal footstep sequence to the goal from the robot's present location.", "This paper presents a method of computing efficient and natural-looking motions for humanoid robots walking on varied terrain. It uses a small set of high-quality motion primitives (such as a fixed gait on flat ground) that have been generated offline. But rather than restrict motion to these primitives, it uses them to derive a sampling strategy for a probabilistic, sample-based planner. Results in simulation on several different terrains demonstrate a reduction in planning time and a marked increase in motion quality.", "We propose a humanoid robot navigation planning framework that reuses previous experience to decrease planning time. The framework is intended for navigating complex unstructured environments using both palm and foot contacts. In a complex environment, discrete-search-based contact space planners trade-off between high branching factor and action flexibility. Although approaches such as weighted A∗, ARA∗ and ANA∗ could speed up the search by compromising on optimality, they can be very slow when the heuristic is inaccurate. In the proposed framework, an experience-retrieval module is added in parallel to ANA∗. This module collects previously-generated motion plans and clusters them based on contact pose similarity to form a motion plan library. To retrieve an appropriate plan from the library for a given environment, the framework uses a distance between the contact poses in the plan and environment surfaces. Candidate plans are then modified with local trajectory optimization until a plan fitting the query environment is found. 
Our experiments show that the proposed framework outperforms planning-from-scratch in success rate in unstructured environments by at least 28 and can navigate difficult environments such as rubble and narrow corridors.", "This paper presents improvements in contact-before-motion planning for humanoid robots that enables to find a path in very constrained environments. Starting from our previous work, the main novelties are to use a rough trajectory to drive the search and as a criterion to find new contacts and generate the best nodes first. This way only few nodes are effectively explored, speeding up the planning process. We experience the algorithm on the humanoid HRP-2 in a complex scenario.", "We present a novel method to solve the problem of planning footsteps for a humanoid robot according to an arbitrary set of tasks. In this method, we consider the sequence of footsteps required to solve a task as a virtual kinematic chain that augments the state of the humanoid robot. We introduce this representation to formulate the footsteps planning as an iterative constrainted optimization problem where the footsteps are accounted for as additional degrees of freedom helping the robot in achieving its tasks. We demonstrate the efficiency and the generality of the method through three task scenarios for the humanoid robot HRP-2.", "We present a new method for planning footstep placements for a robot walking on uneven terrain with obstacles, using a mixed-integer quadratically-constrained quadratic program (MIQCQP). Our approach is unique in that it handles obstacle avoidance, kinematic reachability, and rotation of footstep placements, which typically have required non-convex constraints, in a single mixed-integer optimization that can be efficiently solved to its global optimum. Reachability is enforced through a convex inner approximation of the reachable space for the robot's feet. Rotation of the footsteps is handled by a piecewise linear approximation of sine and cosine, designed to ensure that the approximation never overestimates the robot's reachability. Obstacle avoidance is ensured by decomposing the environment into convex regions of obstacle-free configuration space and assigning each footstep to one such safe region. We demonstrate this technique in simple 2D and 3D environments and with real environments sensed by a humanoid robot. We also discuss computational performance of the algorithm, which is currently capable of planning short sequences of a few steps in under one second or longer sequences of 10–30 footsteps in tens of seconds to minutes on common laptop computer hardware. Our implementation is available within the Drake MATLAB toolbox [1].", "We present a contact planner for complex legged locomotion tasks: standing up, climbing stairs using a handrail, crossing rubble, and getting out of a car. The need for such a planner was shown at the DARPA Robotics Challenge, where such behaviors could not be demonstrated (except for egress). Current planners suffer from their prohibitive algorithmic complexity because they deploy a tree of robot configurations projected in contact with the environment. We tackle this issue by introducing a reduction property: the reachability condition. This condition defines a geometric approximation of the contact manifold, which is of low dimension, presents a Cartesian topology, and can be efficiently sampled and explored. 
The hard contact planning problem can then be decomposed into two subproblems: first, we plan a path for the root without considering the whole-body configuration, using a sampling-based algorithm; then, we generate a discrete sequence of whole-body configurations in static equilibrium along this path, using a deterministic contact-selection algorithm. The reduction breaks the algorithm complexity encountered in previous works, resulting in the first interactive implementation of a contact planner (open source). While no contact planner has yet been proposed with theoretical completeness, we empirically show the interest of our framework: in a few seconds, with high success rates, we generate complex contact plans for various scenarios and two robots: HRP-2 and HyQ. These plans are validated in dynamic simulations or on the real HRP-2 robot.", "Efficient footstep planning for humanoid navigation through cluttered environments is still a challenging problem. Many obstacles create local minima in the search space, forcing heuristic planners such as A* to expand large areas. The goal of this work is to efficiently compute long, feasible footstep paths. For navigation, finding the optimal path initially is often not needed as it can be improved while walking. Thus, we propose anytime search-based planning using the anytime repairing A* (ARA*) and randomized A* (R*) planners. This allows to obtain efficient paths with provable suboptimality within short planning times. Opposed to completely randomized methods such as rapidly-exploring random trees (RRTs), these planners create paths that are goal-directed and guaranteed to be no more than a certain factor longer than the optimal solution. We thoroughly evaluated the planners in various scenarios using different heuristics. ARA* with the 2D Dijkstra heuristic yields fast and efficient solutions but its potential inadmissibility results in non-optimal paths for some scenarios. R*, on the other hand borrows ideas from RRTs, yields fast solutions, and is less dependent on a well-designed heuristic function. This allows it to avoid local minima and reduces the number of expanded states.", "This paper presents the contact-consistent elastic strips (CES) framework, a motion planning approach capable of producing complex multi-contact whole-body humanoid behaviors in dynamic environments. Planning multi-contact motions for humanoid robots is known to be non-trivial since it involves aspects of autonomous balancing, obstacle avoidance, ensuring global connectivity of the workspace and concurrent consideration of the kinematic and dynamic constraints. Previous works at motion planning for humanoid systems tend to focus on joint space planning and deal with obstacle avoidance and contact-point searching as the separated problems. CES framework, however, simultaneously considers all these requirements and constraints in task space while planning a valid sequence of contact-points and corresponding motions. This resulted in considerable improvements to efficiency and significantly reduced planning time. With the use of CES framework, complex multi-contact locomotion behaviors and real-time adjustments to the robot motions in 3D unstructured environments is possible. Several simulations are demonstrated to evaluate and verify the performance of CES framework.", "We present an algorithm for planning safe navigation strategies for biped robots moving in obstacle-cluttered environments. 
From a discrete set of plausible statically-stable, single-step motions, a forward dynamic programming approach is used to compute a sequence of feasible footstep locations. In contrast to existing navigation strategies for mobile robots, our method is a global method that takes into account the unique ability of legged robots such as bipedal humanoids to traverse obstacles by stepping over them. Heuristics designed to minimize the number and complexity of the step motions are used to encode cost functions used for searching a footstep transition graph. We show preliminary results of an experimental implementation of the algorithm using a model of the H6 humanoid navigating on an office floor littered with obstacles.", "Despite the stable walking capabilities of modern biped humanoid robots, their ability to autonomously and safely navigate obstacle-filled, unpredictable environments has so far been limited. We present an approach to autonomous humanoid walking that combines vision-based sensing with a footstep planner, allowing the robot to navigate toward a desired goal position while avoiding obstacles. An environment map including the robot, goal, and obstacle locations is built in real-time from vision. The footstep planner then computes an optimal sequence of footstep locations within a time-limited planning horizon. Footstep plans are reused and only partially recomputed as the environment changes during the walking sequence. In our experiments, combining real-time vision with plan reuse has allowed a Honda ASIMO humanoid robot to autonomously traverse dynamic environments containing unpredictably moving obstacles" ], "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_26", "@cite_22", "@cite_7", "@cite_21", "@cite_1", "@cite_24", "@cite_0", "@cite_15", "@cite_12" ], "mid": [ "2050551672", "2127895608", "1517802146", "2570393549", "1508300292", "2135645359", "2015149365", "2345626358", "2084959384", "1588834047", "1962169445", "2540579400" ] }
Efficient Humanoid Contact Planning using Learned Centroidal Dynamics Prediction
0
Approaches to synthesize dynamically feasible multi-contact motions have also been extensively studied @cite_8 @cite_14 @cite_31 @cite_40 @cite_36 . However, it is not trivial to include the planning of contact poses in these approaches, because contact planning in general involves discrete or non-convex constraints on the contact poses. @cite_21 addresses the non-convexity by decomposing the environment into a set of convex regions and approximating the rotation using piecewise affine functions. The problem is then formulated as a mixed-integer convex program and solved to global optimality. Although @cite_21 only uses foot contacts and does not consider dynamics, it points to a way to include contact planning in an optimization problem.
{ "abstract": [ "This paper presents a generic and efficient approach to generate dynamically consistent motions for under-actuated systems like humanoid or quadruped robots. The main contribution is a walking pattern generator, able to compute a stable trajectory of the center of mass of the robot along with the angular momentum, for any given configuration of contacts (e.g. on uneven, sloppy or slippery terrain, or with closed-gripper). Unlike existing methods, our solver is fast enough to be applied as a model-predictive controller. We then integrate this pattern generator in a complete framework: an acyclic contact planner is first used to automatically compute the contact sequence from a 3D model of the environment and a desired final posture; a stable walking pattern is then computed by the proposed solver; a dynamically-stable whole-body trajectory is finally obtained using a second-order hierarchical inverse kinematics. The implementation of the whole pipeline is fast enough to plan a step while the previous one is executed. The interest of the method is demonstrated by real experiments on the HRP-2 robot, by performing long-step walking and climbing a staircase with handrail support.", "Optimal control approaches in combination with trajectory optimization have recently proven to be a promising control strategy for legged robots. Computationally efficient and robust algorithms were derived using simplified models of the contact interaction between robot and environment such as the linear inverted pendulum model (LIPM). However, as humanoid robots enter more complex environments, less restrictive models become increasingly important. As we leave the regime of linear models, we need to build dedicated solvers that can compute interaction forces together with consistent kinematic plans for the whole-body. In this paper, we address the problem of planning robot motion and interaction forces for legged robots given predefined contact surfaces. The motion generation process is decomposed into two alternating parts computing force and motion plans in coherence. We focus on the properties of the momentum computation leading to sparse optimal control formulations to be exploited by a dedicated solver. In our experiments, we demonstrate that our motion generation algorithm computes consistent contact forces and joint trajectories for our humanoid robot. We also demonstrate the favorable time complexity due to our formulation and composition of the momentum equations.", "Our work builds largely on Nagasaka's stabilizer in multi-contact motion [1]. Using a sequence of contact stances from an offline multi-contact planner, we use first a Model Predictive Controller to generate a dynamic trajectory of the center of mass, then a whole-body closed-loop model-based controller to track it at best. Relatively to Nagasaka's work, we allow frame changes of the preferred force, provide a heuristic to compute the timing of the transition from purely geometrical features and investigate the synchronization problem between the reduced-model preview control and the whole-body controller. Using our framework, we generate a wide range of 3D motions, while accounting for predictable external forces, which includes transporting objects. Simulation scenarios are presented and obtained results are analyzed and discussed.", "We present a new method for planning footstep placements for a robot walking on uneven terrain with obstacles, using a mixed-integer quadratically-constrained quadratic program (MIQCQP). 
Our approach is unique in that it handles obstacle avoidance, kinematic reachability, and rotation of footstep placements, which typically have required non-convex constraints, in a single mixed-integer optimization that can be efficiently solved to its global optimum. Reachability is enforced through a convex inner approximation of the reachable space for the robot's feet. Rotation of the footsteps is handled by a piecewise linear approximation of sine and cosine, designed to ensure that the approximation never overestimates the robot's reachability. Obstacle avoidance is ensured by decomposing the environment into convex regions of obstacle-free configuration space and assigning each footstep to one such safe region. We demonstrate this technique in simple 2D and 3D environments and with real environments sensed by a humanoid robot. We also discuss computational performance of the algorithm, which is currently capable of planning short sequences of a few steps in under one second or longer sequences of 10–30 footsteps in tens of seconds to minutes on common laptop computer hardware. Our implementation is available within the Drake MATLAB toolbox [1].", "We present a multi-contact walking pattern generator based on preview-control of the 3D acceleration of the center of mass (COM). A key point in the design of our algorithm is the calculation of contact-stability constraints. Thanks to a mathematical observation on the algebraic nature of the frictional wrench cone, we show that the 3D volume of feasible COM accelerations is always an upward-pointing cone. We reduce its computation to a convex hull of (dual) 2D points, for which optimal C(nlog n) algorithms are readily available. This reformulation brings a significant speedup compared to previous methods, which allows us to compute time-varying contact-stability criteria fast enough for the control loop. Next, we propose a conservative trajectory-wide contact-stability criterion, which can be derived from COM-acceleration volumes at marginal cost and directly applied in a model-predictive controller. We finally implement this pipeline and exemplify it with the HRP-4 humanoid model in multi-contact dynamically walking scenarios.", "In this paper, we present a convex optimization problem to generate Center of Mass (CoM) and momentum trajectories of a walking robot, such that the motion robustly satisfies the friction cone constraints on uneven terrain. We adopt the Contact Wrench Cone (CWC) criterion to measure a robot's dynamical stability, which generalizes the venerable Zero Moment Point (ZMP) criterion. Unlike the ZMP criterion, which is ideal for walking on flat ground with unbounded tangential friction forces, the CWC criterion incorporates non-coplanar contacts with friction cone constraints. We measure the robustness of the motion using the margin in the Contact Wrench Cone at each time instance, which quantifies the capability of the robot to instantaneously resist external force torque disturbance, without causing the foot to tip over or slide. For pre-specified footstep location and time, we formulate a convex optimization problem to search for robot linear and angular momenta that satisfy the CWC criterion. We aim to maximize the CWC margin to improve the robustness of the motion, and minimize the centroidal angular momentum (angular momentum about CoM) to make the motion natural. Instead of directly minimizing the non-convex centroidal angular momentum, we resort to minimizing a convex upper bound. 
We show that our CWC planner can generate motion similar to the result of the ZMP planner on flat ground with sufficient friction. Moreover, on an uneven terrain course with friction cone constraints, our CWC planner can still find feasible motion, while the outcome of the ZMP planner violates the friction limit." ], "cite_N": [ "@cite_14", "@cite_8", "@cite_36", "@cite_21", "@cite_40", "@cite_31" ], "mid": [ "2415458238", "2399015533", "2033480951", "2015149365", "2477962321", "2569324215" ] }
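The multi-contact pattern generator abstract above reduces the contact-stability computation to a 2D convex hull of dual points with an O(n log n) algorithm. A toy illustration of just that hull step, using SciPy; the random points here are stand-ins, since a real implementation would derive the dual points from the frictional wrench cones of the active contacts:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Stand-ins for the (dual) 2D points derived from the frictional
# wrench cone; a real pipeline computes these from the contact set.
dual_points = np.random.rand(40, 2)

# 2D convex hull: O(n log n), cheap enough to run inside a control loop.
hull = ConvexHull(dual_points)

# Hull vertices, in counterclockwise order for 2D inputs.
print(dual_points[hull.vertices])
```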
Efficient Humanoid Contact Planning using Learned Centroidal Dynamics Prediction
0
1810.13082
2899192577
Humanoid robots dynamically navigate an environment by interacting with it via contact wrenches exerted at intermittent contact poses. Therefore, it is important to consider dynamics when planning a contact sequence. Traditional contact planning approaches assume a quasi-static balance criterion to reduce the computational challenges of selecting a contact sequence over rough terrain. This, however, limits the applicability of the approach when dynamic motions are required, such as when walking down a steep slope or crossing a wide gap. Recent methods overcome this limitation with the help of efficient mixed-integer convex programming solvers capable of synthesizing dynamic contact sequences. Nevertheless, their exponential-time complexity limits their applicability to short-time-horizon contact sequences within small environments. In this paper, we go beyond current approaches by learning a prediction of the dynamic evolution of the robot's centroidal momenta, which can then be used for quickly generating dynamically robust contact sequences for robots with arms and legs using a search-based contact planner. We demonstrate the efficiency and quality of the results of the proposed approach in a set of dynamically challenging scenarios.
Extensions of @cite_21 for dynamic planning of contact sequences are proposed in @cite_17 @cite_16 , which extend @cite_21 with the selection of contact timings or hand contacts, respectively. More recent works @cite_3 @cite_9 use the same concept to plan gait sequences for quadruped robots and produce dynamically robust motions. However, mixed-integer approaches scale poorly with the number of integer decision variables: their applicability to online contact generation is limited to environments with few convex terrain regions and to short planning horizons (a toy illustration of this combinatorial growth follows the reference abstracts below).
{ "abstract": [ "Traditional motion planning approaches for multilegged locomotion divide the problem into several stages, such as contact search and trajectory generation. However, reasoning about contacts and motions simultaneously is crucial for the generation of complex whole-body behaviors. Currently, coupling theses problems has required either the assumption of a fixed gait sequence and flat terrain condition, or nonconvex optimization with intractable computation time. In this letter, we propose a mixed-integer convex formulation to plan simultaneously contact locations, gait transitions, and motion, in a computationally efficient fashion. In contrast to previous works, our approach is not limited to flat terrain nor to a prespecified gait sequence. Instead, we incorporate the friction cone stability margin, approximate the robot's torque limits, and plan the gait using mixed-integer convex constraints. We experimentally validated our approach on the HyQ robot by traversing different challenging terrains, where nonconvexity and flat terrain assumptions might lead to suboptimal or unstable plans. Our method increases the motion robustness while keeping a low computation time.", "We present a new method for planning footstep placements for a robot walking on uneven terrain with obstacles, using a mixed-integer quadratically-constrained quadratic program (MIQCQP). Our approach is unique in that it handles obstacle avoidance, kinematic reachability, and rotation of footstep placements, which typically have required non-convex constraints, in a single mixed-integer optimization that can be efficiently solved to its global optimum. Reachability is enforced through a convex inner approximation of the reachable space for the robot's feet. Rotation of the footsteps is handled by a piecewise linear approximation of sine and cosine, designed to ensure that the approximation never overestimates the robot's reachability. Obstacle avoidance is ensured by decomposing the environment into convex regions of obstacle-free configuration space and assigning each footstep to one such safe region. We demonstrate this technique in simple 2D and 3D environments and with real environments sensed by a humanoid robot. We also discuss computational performance of the algorithm, which is currently capable of planning short sequences of a few steps in under one second or longer sequences of 10–30 footsteps in tens of seconds to minutes on common laptop computer hardware. Our implementation is available within the Drake MATLAB toolbox [1].", "This paper introduces an optimization-based framework for robust multilegged walking motion planning. Previous approaches use fixed gait sequences, and rely on Zero Moment Point (ZMP) to guarantee dynamic stability. While this combination works well on flat ground, it does not generalize to uneven terrain requiring aggressive gait or gait transition. To overcome such difficulties, in this paper, we present an optimization framework, that can plan both the contact location and gait sequence simultaneously in a mixed-integer convex optimization program. Moreover, we rely on the Contact Wrench Cone (CWC) stability criterion, which generalizes the ZMP criterion to uneven terrain with friction cone constraints, and we plan the walking motion together with the angular momentum through a convex optimization program. Our approach is successfully tested on a LittleDog quadruped over simulated scenarios. 
We show that on flat ground, our planner generates a periodic gait, the same as a Central Pattern Generator + ZMP planner, while on uneven terrain, our planner can successfully generate a motion containing different gaits, with a center-of-mass motion that respects the friction cone constraints, which are violated by ZMP planners. This improvement clearly demonstrates the advantage of our approach over traditional planning strategies.", "", "Balance strategies range from continuous postural adjustments to discrete changes in contacts: their simultaneous execution is required to maintain postural stability while considering the engaged walking activity. In order to compute the optimal timing, duration, and position of footsteps along with the center of mass trajectory of a humanoid, a novel mixed-integer model of the system is presented. The introduction of this model in a predictive control problem leads to the definition of a Mixed-Integer Quadratic Program, subject to linear constraints. Simulation results demonstrate the simultaneous adaptation of the gait pattern and posture of the humanoid, in a walking activity under large disturbances, to efficiently compromise between task performance and balance. In addition, a push recovery scenario displays how, using a single balance-performance ratio, distinct behaviors of the humanoid can be specified." ], "cite_N": [ "@cite_9", "@cite_21", "@cite_3", "@cite_16", "@cite_17" ], "mid": [ "2774366155", "2015149365", "2772959914", "", "2070930497" ] }
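To make the scaling limitation described in the related-work paragraph above concrete: the integer part of a footstep plan assigns each footstep to one convex safe region, so the discrete search space a mixed-integer solver must branch over grows exponentially with the horizon. A tiny, purely illustrative enumeration (ours, not code from the cited solvers):

```python
from itertools import product

def footstep_region_assignments(num_footsteps, num_regions):
    """Enumerate every assignment of footsteps to convex safe regions.

    Branch-and-bound never lists these explicitly, but in the worst
    case it must reason over num_regions ** num_footsteps options.
    """
    return product(range(num_regions), repeat=num_footsteps)

# 6 footsteps over 4 regions: 4**6 = 4096 discrete assignments.
print(sum(1 for _ in footstep_region_assignments(6, 4)))
# 30 footsteps over 10 regions would already give 1e30 assignments,
# which is why long horizons and cluttered terrains are intractable.
```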
Efficient Humanoid Contact Planning using Learned Centroidal Dynamics Prediction
0
1810.13082
2899192577
Humanoid robots dynamically navigate an environment by interacting with it via contact wrenches exerted at intermittent contact poses. Therefore, it is important to consider dynamics when planning a contact sequence. Traditional contact planning approaches assume a quasi-static balance criterion to reduce the computational challenges of selecting a contact sequence over rough terrain. This, however, limits the applicability of the approach when dynamic motions are required, such as when walking down a steep slope or crossing a wide gap. Recent methods overcome this limitation with the help of efficient mixed-integer convex programming solvers capable of synthesizing dynamic contact sequences. Nevertheless, their exponential-time complexity limits their applicability to short-time-horizon contact sequences within small environments. In this paper, we go beyond current approaches by learning a prediction of the dynamic evolution of the robot's centroidal momenta, which can then be used for quickly generating dynamically robust contact sequences for robots with arms and legs using a search-based contact planner. We demonstrate the efficiency and quality of the results of the proposed approach in a set of dynamically challenging scenarios.
@cite_32 proposes a sampling-based contact planner that generates kinodynamically feasible contact sequences. It uses a simplified robot model to plan smooth center of mass (CoM) trajectories via convex optimization and then searches for kinematically feasible contact poses around them. This yields a unified planning framework that accounts for both dynamics and kinematics constraints, but it suffers from long planning times. @cite_13 proposes an efficient dynamic-feasibility check that conservatively reformulates the problem as a linear program. While the check is guaranteed to reject dynamically infeasible motions, it does not address dynamical robustness in the stability check. @cite_2 learns a quadratic objective encoding the dynamics cost of humanoid walking motion and applies this learned model to select steps in a search-based footstep planner (a sketch of such a quadratic cost fit follows the reference abstracts below). However, their dynamics model assumes flat contact and does not consider palm contacts, which limits the applicability of the approach.
{ "abstract": [ "We tackle the transition feasibility problem, that is the issue of determining whether there exists a feasible motion connecting two configurations of a legged robot. To achieve this we introduce CROC, a novel method for computing centroidal dynamics trajectories in multi-contact planning contexts. Our approach is based on a conservative and convex reformulation of the problem, where we represent the center of mass trajectory as a Bezier curve comprising a single free control point as a variable. Under this formulation, the transition problem is solved efficiently with a Linear Program (LP)of low dimension. We use this LP as a feasibility criterion, incorporated in a sampling-based contact planner, to discard efficiently unfeasible contact plans. We are thus able to produce robust contact sequences, likely to define feasible motion synthesis problems. We illustrate this application on various multi-contact scenarios featuring HRP2 and HyQ. We also show that we can use CROC to compute valuable initial guesses, used to warm-start non-linear solvers for motion generation methods. This method could also be used for the 0 and 1-Step capturability problem. The source code of CROC is available under an open source BSD-2 License.", "We present a novel method for synthesizing collision-free, dynamic locomotion behaviors for legged robots, including jumping, going down a very steep slope, or recovering from a push using the arms of the robot. The approach is automatic and generic: non-gaited motions, comprising arbitrary contact postures can be generated along any environment. At the core of our framework is a new steering method that generates trajectories connecting two states of the robot. These trajectories account for the state-dependent, centroidal dynamic constraints inherent to legged robots. The method, of low dimension, formulated as a Linear Program, is really efficient to compute, and can find an application in various problems related to legged locomotion. By incorporating this steering method into an existing sampling-based contact planner, we propose the first kinodynamic contact planner for legged robots.", "In this paper we show that optimal stepping trajectories and trajectory cost for a walking biped robot on rough terrain can be encoded as simple quadratic functions of initial state and footstep sequence. In order to find this encoding, we build a database of optimal walking trajectories for a 3D humanoid model by sampling the input space (initial state and footstep sequence) and solving a physically-based trajectory optimization problem for each sample. Then, the function coefficients are obtained by fitting the data using least squares. The performance of the proposed method is evaluated by comparing the function values with other optimal walking motion data generated with different footstep samples. As an application, we use a quadratic function to calculate the effort cost used in finding an optimal footstep sequence with an A* algorithm. Our study shows that a simple function can encode optimal walking effectively, which provides a fast alternative to online optimization of walking with full body dynamics." ], "cite_N": [ "@cite_13", "@cite_32", "@cite_2" ], "mid": [ "2789286310", "2597700484", "2029068927" ] }
Efficient Humanoid Contact Planning using Learned Centroidal Dynamics Prediction
0
1810.12890
2952634764
Deep neural networks often work well when they are over-parameterized and trained with a massive amount of noise and regularization, such as weight decay and dropout. Although dropout is widely used as a regularization technique for fully connected layers, it is often less effective for convolutional layers. This lack of success of dropout for convolutional layers is perhaps due to the fact that activation units in convolutional layers are spatially correlated, so information can still flow through convolutional networks despite dropout. Thus a structured form of dropout is needed to regularize convolutional networks. In this paper, we introduce DropBlock, a form of structured dropout, where units in a contiguous region of a feature map are dropped together. We found that applying DropBlock in skip connections in addition to the convolution layers increases the accuracy. Also, gradually increasing the number of dropped units during training leads to better accuracy and more robustness to hyperparameter choices. Extensive experiments show that DropBlock works better than dropout in regularizing convolutional networks. On ImageNet classification, the ResNet-50 architecture with DropBlock achieves @math accuracy, which is more than a @math improvement over the baseline. On COCO detection, DropBlock improves the Average Precision of RetinaNet from @math to @math .
The development of architecture-specific noise injection techniques is not unique to convolutional networks. Much like convolutional networks, recurrent networks require their own noise injection methods. Currently, Variational Dropout @cite_34 and ZoneOut @cite_23 are two of the most commonly used methods for injecting noise into recurrent connections; a sketch of the ZoneOut update follows the reference abstracts below.
{ "abstract": [ "Recurrent neural networks (RNNs) stand at the forefront of many recent developments in deep learning. Yet a major difficulty with these models is their tendency to overfit, with dropout shown to fail when applied to recurrent layers. Recent results at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout. This grounding of dropout in approximate Bayesian inference suggests an extension of the theoretical results, offering insights into the use of dropout with RNN models. We apply this new variational inference based dropout technique in LSTM and GRU models, assessing it on language modelling and sentiment analysis tasks. The new approach outperforms existing techniques, and to the best of our knowledge improves on the single model state-of-the-art in language modelling with the Penn Treebank (73.4 test perplexity). This extends our arsenal of variational tools in deep learning.", "We propose zoneout, a novel method for regularizing RNNs. At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout uses random noise to train a pseudo-ensemble, improving generalization. But by preserving instead of dropping hidden units, gradient information and state information are more readily propagated through time, as in feedforward stochastic depth networks. We perform an empirical investigation of various RNN regularizers, and find that zoneout gives significant performance improvements across tasks. We achieve competitive results with relatively simple models in character- and word-level language modelling on the Penn Treebank and Text8 datasets, and combining with recurrent batch normalization yields state-of-the-art results on permuted sequential MNIST." ], "cite_N": [ "@cite_34", "@cite_23" ], "mid": [ "2212703438", "2409027918" ] }
DropBlock: A regularization method for convolutional networks
Deep neural nets work well when they have a large number of parameters and are trained with a massive amount of regularization and noise, such as weight decay and dropout [1]. Though the first big success of dropout was associated with convolutional networks [2], recent convolutional architectures rarely use dropout [3][4][5][6][7][8][9][10]. In most cases, dropout was mainly used at the fully connected layers of convolutional networks [11][12][13]. We argue that the main drawback of dropout is that it drops out features randomly. While this can be effective for fully connected layers, it is less effective for convolutional layers, where features are correlated spatially. When the features are correlated, even with dropout, information about the input can still be sent to the next layer, which causes the networks to overfit. This intuition suggests that a more structured form of dropout is needed to better regularize convolutional networks.

In this paper, we introduce DropBlock, a structured form of dropout that is particularly effective at regularizing convolutional networks. In DropBlock, features in a block, i.e., a contiguous region of a feature map, are dropped together. As DropBlock discards features in a correlated area, the networks must look elsewhere for evidence to fit the data (see Figure 1). In our experiments, DropBlock is much better than dropout in a range of models and datasets. Adding DropBlock to the ResNet-50 architecture improves image classification accuracy on ImageNet from 76.51% to 78.13%. On COCO detection, DropBlock improves the AP of RetinaNet from 36.8% to 38.4%. Dropping out activations at random is not effective in removing semantic information, because nearby activations contain closely related information. Instead, dropping continuous regions can remove certain semantic information (e.g., head or feet) and consequently enforce the remaining units to learn features for classifying the input image.

DropBlock. DropBlock is a simple method similar to dropout. Its main difference from dropout is that it drops contiguous regions from a feature map of a layer instead of dropping out independent random units. Pseudocode of DropBlock is shown in Algorithm 1. DropBlock has two main parameters, block_size and γ: block_size is the size of the block to be dropped, and γ controls how many activation units to drop. We experimented with a shared DropBlock mask across different feature channels or a separate DropBlock mask for each feature channel. Algorithm 1 corresponds to the latter, which tends to work better in our experiments. Similar to dropout, we do not apply DropBlock during inference. This is interpreted as evaluating an averaged prediction across the exponentially-sized ensemble of sub-networks. These sub-networks include a special subset of the sub-networks covered by dropout, where each network does not see contiguous parts of feature maps.

Setting the value of block_size. In our implementation, we set a constant block_size for all feature maps, regardless of the resolution of the feature map. DropBlock resembles dropout [1] when block_size = 1 and resembles SpatialDropout [20] when block_size covers the full feature map.

Setting the value of γ. In practice, we do not explicitly set γ. As stated earlier, γ controls the number of features to drop. Suppose that we want to keep every activation unit with probability keep_prob; in dropout [1] the binary mask would be sampled from the Bernoulli distribution with mean 1 − keep_prob.
However, to account for the fact that every zero entry in the mask will be expanded by block_size^2, and that the blocks must be fully contained in the feature map, we need to adjust γ accordingly when we sample the initial binary mask. In our implementation, γ can be computed as

γ = ((1 − keep_prob) / block_size^2) × (feat_size^2 / (feat_size − block_size + 1)^2),    (1)

where keep_prob can be interpreted as the probability of keeping a unit in traditional dropout and feat_size is the size of the feature map. The size of the valid seed region is (feat_size − block_size + 1)^2. The main nuance of DropBlock is that there will be some overlap among the dropped blocks, so the above equation is only an approximation. In our experiments, we first estimate the keep_prob to use (between 0.75 and 0.95), and then compute γ according to the above equation.

Scheduled DropBlock. We found that DropBlock with a fixed keep_prob during training does not work well. Applying a small value of keep_prob hurts learning at the beginning. Instead, gradually decreasing keep_prob over time from 1 to the target value is more robust and adds improvement for most values of keep_prob. In our experiments, we use a linear scheme for decreasing the value of keep_prob, which tends to work well across many hyperparameter settings. This linear scheme is similar to ScheduledDropPath [8].
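To make Algorithm 1 and Eq. (1) concrete, here is a minimal NumPy sketch of DropBlock on a single square feature map, together with the linear keep_prob schedule. This is illustrative code, not the authors' released implementation; it places blocks by their top-left corner inside the valid seed region, which matches the paper's formulation up to an index shift:

```python
import numpy as np

def dropblock(x, keep_prob=0.9, block_size=7, rng=np.random):
    """Apply DropBlock to a square feature map x of shape (f, f)."""
    feat_size = x.shape[0]
    # Eq. (1): seed rate gamma, chosen so that roughly (1 - keep_prob)
    # of all units end up inside a dropped block.
    gamma = ((1.0 - keep_prob) / block_size**2
             * feat_size**2 / (feat_size - block_size + 1)**2)
    # Seeds may only fall where a full block fits inside the map.
    valid = feat_size - block_size + 1
    seeds = rng.binomial(1, gamma, size=(valid, valid))
    mask = np.ones_like(x)
    for i, j in zip(*np.nonzero(seeds)):
        mask[i:i + block_size, j:j + block_size] = 0.0  # drop whole block
    # Normalize so the expected activation magnitude is unchanged.
    return x * mask * (mask.size / max(mask.sum(), 1.0))

def scheduled_keep_prob(step, total_steps, target=0.9):
    """Linearly decrease keep_prob from 1.0 to the target value."""
    return 1.0 - (1.0 - target) * min(step / total_steps, 1.0)
```

Setting block_size = 1 recovers ordinary dropout, and letting block_size cover the full map recovers SpatialDropout, matching the limiting cases discussed above.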
Experiments. In the following sections, we empirically investigate the effectiveness of DropBlock for image classification, object detection, and semantic segmentation. We apply DropBlock to ResNet-50 [4] with extensive experiments for image classification. To verify that the results transfer to a different architecture, we apply DropBlock to a state-of-the-art model architecture, AmoebaNet [10], and show improvements. In addition to image classification, we show that DropBlock is helpful in training RetinaNet [24] for object detection and semantic segmentation.

ImageNet Classification. The ILSVRC 2012 classification dataset [25] contains 1.2 million training images, 50,000 validation images, and 150,000 testing images. Images are labeled with 1,000 categories. We used horizontal flip, scale, and aspect ratio augmentation for training images as in [12,26]. During evaluation, we applied a single crop rather than averaging results over multiple crops. Following common practice, we report classification accuracy on the validation set.

[Table 1: top-1 / top-5 validation accuracy (%) on ImageNet. The recoverable rows are: ResNet-50 + DropPath [17]: 77.10 ± 0.08 / 93.50 ± 0.05; ResNet-50 + SpatialDropout (kp=0.9) [20]: 77.41 ± 0.04 / 93.74 ± 0.02; ResNet-50 + Cutout [23]: 76.52 ± 0.07 / 93.21 ± 0.04; ResNet-50 + AutoAugment [27]: 77.63 / 93.82; ResNet-50 + label smoothing (0.1) [28]: 77 (remainder truncated).]

Where to apply DropBlock. In residual networks, a building block consists of a few convolution layers and a separate skip connection that performs identity mapping. Every convolution layer is followed by a batch normalization layer and ReLU activation. The output of a building block is the sum of the outputs from the convolution branch and the skip connection. A residual network can be represented by building groups based on the spatial resolution of the feature activations. A building group consists of multiple building blocks. We use group 4 to represent the last group in the residual network (i.e., all layers in conv5_x), and so on. In the following experiments, we study where to apply DropBlock in residual networks. We experimented with applying DropBlock only after convolution layers or after both convolution layers and skip connections. To study the performance of DropBlock applied to different feature groups, we experimented with applying DropBlock to group 4 or to both groups 3 and 4.

DropBlock vs. dropout. The original ResNet architecture does not apply any dropout in the model. For ease of discussion, we define the dropout baseline for ResNet as applying dropout on convolution branches only. We applied DropBlock to both groups 3 and 4 with block_size = 7 by default. We decreased γ by a factor of 4 for group 3 in all experiments. In Figure 3-(a), we show that DropBlock outperforms dropout by 1.3% in top-1 accuracy. The scheduled keep_prob makes DropBlock more robust to changes of keep_prob and adds improvement for most values of keep_prob (Figure 3-(b)). With the best keep_prob found in Figure 3, we swept block_size from 1 to the size covering the full feature map. Figure 4 shows that applying a larger block_size is generally better than applying a block_size of 1. The best DropBlock configuration is to apply block_size = 7 to both groups 3 and 4. In all configurations, DropBlock and dropout share a similar trend, and DropBlock has a large gain compared to the best dropout result. This is evidence that DropBlock is a more effective regularizer than dropout.

DropBlock vs. SpatialDropout. As with the dropout baseline, we define the SpatialDropout [20] baseline as applying it on convolution branches only. SpatialDropout is better than dropout but inferior to DropBlock. In Figure 4, we found that SpatialDropout can be too harsh when applied to the high-resolution feature maps in group 3. DropBlock achieves the best result by dropping blocks of constant size on both groups 3 and 4.

Comparison with DropPath. Following ScheduledDropPath [8], we applied scheduled DropPath on all connections except the skip connections. We trained models with different values of the keep_prob parameter. We also trained models where we applied DropPath in all groups and, similar to our other experiments, only at group 4 or at groups 3 and 4. We achieved the best validation accuracy of 77.10% when applying it only to group 4 with keep_prob = 0.9.

Comparison with Cutout. We also compared with Cutout [23], a data augmentation method that randomly drops a fixed-size block from the input images. Although Cutout improves accuracy on the CIFAR-10 dataset as suggested by [23], it does not improve accuracy on the ImageNet dataset in our experiments.

Comparison with other regularization techniques. We compare DropBlock to data augmentation and label smoothing, which are two commonly used regularization techniques. In Table 1, DropBlock has better performance compared to strong data augmentation [27] and label smoothing [28]. The performance improves when combining DropBlock with label smoothing and training for 290 epochs, showing that the regularization techniques can be complementary when we train for longer.

DropBlock in AmoebaNet. We also show the effectiveness of DropBlock on the recent AmoebaNet-B architecture, a state-of-the-art architecture found using evolutionary architecture search [10]. This model has dropout with a keep probability of 0.5, but only on the final softmax layer. We apply DropBlock after all batch normalization layers and also in the skip connections of the last 50% of the cells. The resolution of the feature maps in these cells is 21x21 or 11x11 for an input image of size 331x331. Based on the experiments in the last section, we used a keep_prob of 0.9 and set block_size = 11, which is the width of the last feature map.
DropBlock improves the top-1 accuracy of AmoebaNet-B from 82.25% to 82.52% (Table 2).

[Table 2: top-1 and top-5 validation accuracy of the AmoebaNet-B architecture trained on ImageNet; only a single top-5 entry, 96.07, is recoverable here.]

Experimental Analysis. DropBlock demonstrates strong empirical results in improving ImageNet classification accuracy compared to dropout. We hypothesize that dropout is insufficient because the contiguous regions in convolution layers are strongly correlated. Randomly dropping a unit still allows information to flow through neighboring units. In this section, we conduct an analysis to show that DropBlock is more effective at dropping semantic information. Consequently, a model regularized by DropBlock is more robust than a model regularized by dropout. We study the problem by applying DropBlock with a block_size of 1 and of 7 during inference and observing the differences in performance.

DropBlock drops more semantic information. We first took the model trained without any regularization and tested it with DropBlock with block_size = 1 and block_size = 7. The green curves in Figure 5 show that the validation accuracy drops quickly with decreasing keep_prob during inference. This suggests DropBlock removes semantic information and makes classification more difficult. The accuracy drops more quickly with decreasing keep_prob for block_size = 7 than for block_size = 1, which suggests DropBlock is more effective at removing semantic information than dropout.

Model trained with DropBlock is more robust. Next, we show that a model trained with a large block size, which removes more semantic information, results in stronger regularization. We demonstrate this by taking a model trained with block_size = 7, applying block_size = 1 during inference, and vice versa. In Figure 5, models trained with block_size = 1 and block_size = 7 are both robust when block_size = 1 is applied during inference. However, the performance of the model trained with block_size = 1 drops more quickly with decreasing keep_prob when block_size = 7 is applied during inference. The results suggest that block_size = 7 is more robust and has the benefits of block_size = 1, but not vice versa.

DropBlock learns spatially distributed representations. We hypothesize that a model trained with DropBlock needs to learn spatially distributed representations, because DropBlock is effective at removing semantic information in a contiguous region. A model regularized by DropBlock should learn multiple discriminative regions instead of focusing on only one. We use class activation maps (CAM), introduced in [29], to visualize conv5_3 class activations of ResNet-50 on the ImageNet validation set. Figure 6 shows the class activations of the original model and of models trained with DropBlock with block_size = 1 and block_size = 7. In general, models trained with DropBlock learn spatially distributed representations that induce high class activations in multiple regions, whereas the model without regularization tends to focus on one or a few regions.

Object Detection in COCO. DropBlock is a generic regularization module for CNNs. In this section, we show that DropBlock can also be applied to training an object detector on the COCO dataset [30]. We use the RetinaNet [24] framework for the experiments. Unlike an image classifier that predicts a single label for an image, RetinaNet runs convolutionally on multiscale Feature Pyramid Networks (FPNs) [31] to localize and classify objects at different scales and locations.
We followed the model architecture and anchor definitions in [24] to build the FPNs and the classifier/regressor branches.

Where to apply DropBlock in the RetinaNet model. The RetinaNet model uses ResNet-FPN as its backbone. For simplicity, we apply DropBlock to the ResNet in ResNet-FPN and use the best keep_prob we found for ImageNet classification training. DropBlock differs from recent work [32], which learns to drop a structured pattern on the features of region proposals.

Training an object detector from random initialization. Training an object detector from random initialization has been considered a challenging task. Recently, a few papers tried to address the issue using a novel model architecture [33], large minibatch sizes [34], and a better normalization layer [35]. In our experiment, we look at the problem from the model regularization perspective. We tried DropBlock with keep_prob = 0.9, the same hyperparameter as for training the image classification model, and experimented with different block_size values. In Table 3, we show that the model trained from random initialization surpasses the ImageNet pre-trained model. Adding DropBlock gives an additional 1.6% AP. The results suggest that model regularization is an important ingredient for training an object detector from scratch and that DropBlock is an effective regularization approach for object detection.

Semantic Segmentation in PASCAL VOC. We show that DropBlock also improves semantic segmentation models. We use the PASCAL VOC 2012 dataset for experiments and follow the common practice of training with the augmented 10,582 training images [36] and reporting mIOU on the 1,449 test set images. We adopt an open-source RetinaNet implementation for semantic segmentation. The implementation uses the ResNet-FPN backbone model to extract multiscale features and attaches fully convolutional networks on top to predict segmentation. We use the default hyperparameters in the open-source code for training. Following the experiments for object detection, we study the effect of DropBlock when training the model from random initialization. We trained the model starting from a pre-trained ImageNet model for 45 epochs and the model with random initialization for 500 epochs. We experimented with applying DropBlock to the ResNet-FPN backbone model and to the fully convolutional networks, and found that applying DropBlock to the fully convolutional networks is more effective. Applying DropBlock greatly improves mIOU when training the model from scratch and shrinks the performance gap between training from an ImageNet pre-trained model and from a randomly initialized model.

Discussion. In this work, we introduce DropBlock to regularize the training of CNNs. DropBlock is a form of structured dropout that drops spatially correlated information. We demonstrate that DropBlock is a more effective regularizer than dropout on ImageNet classification and COCO detection. DropBlock consistently outperforms dropout in an extensive experimental setup. We conduct an analysis to show that a model trained with DropBlock is more robust and has the benefits of a model trained with dropout. The class activation mapping suggests that the model can learn more spatially distributed representations when regularized by DropBlock. Our experiments show that applying DropBlock in skip connections in addition to the convolution layers increases the accuracy. Also, gradually increasing the number of dropped units during training leads to better accuracy and more robustness to hyperparameter choices.
2,657
1810.12890
2952634764
Deep neural networks often work well when they are over-parameterized and trained with a massive amount of noise and regularization, such as weight decay and dropout. Although dropout is widely used as a regularization technique for fully connected layers, it is often less effective for convolutional layers. This lack of success of dropout for convolutional layers is perhaps due to the fact that activation units in convolutional layers are spatially correlated, so information can still flow through convolutional networks despite dropout. Thus a structured form of dropout is needed to regularize convolutional networks. In this paper, we introduce DropBlock, a form of structured dropout, where units in a contiguous region of a feature map are dropped together. We found that applying DropBlock in skip connections in addition to the convolution layers increases the accuracy. Also, gradually increasing the number of dropped units during training leads to better accuracy and more robustness to hyperparameter choices. Extensive experiments show that DropBlock works better than dropout in regularizing convolutional networks. On ImageNet classification, the ResNet-50 architecture with DropBlock achieves @math accuracy, which is more than a @math improvement over the baseline. On COCO detection, DropBlock improves the Average Precision of RetinaNet from @math to @math .
Our method is inspired by Cutout @cite_24 , a data augmentation method where parts of the input examples are zeroed out; a sketch of Cutout follows the reference abstracts below. DropBlock generalizes Cutout by applying Cutout to every feature map in a convolutional network. In our experiments, having a fixed zero-out ratio for DropBlock during training is not as robust as having an increasing schedule for the ratio during training. In other words, it is better to set the DropBlock ratio to be small initially and then linearly increase it over the course of training. This scheduling scheme is related to ScheduledDropPath @cite_6 .
{ "abstract": [ "Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR-10, CIFAR-100, and SVHN datasets, yielding new state-of-the-art results of 2.56 , 15.20 , and 1.30 test error respectively. Code is available at this https URL", "Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4 error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7 top-1 and 96.2 top-5 on ImageNet. Our model is 1.2 better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28 in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74 top-1 accuracy, which is 3.1 better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0 achieving 43.1 mAP on the COCO dataset." ], "cite_N": [ "@cite_24", "@cite_6" ], "mid": [ "2746314669", "2964081807" ] }
DropBlock: A regularization method for convolutional networks
2,657
1810.12522
2899310320
Current state-of-the-art approaches to video understanding adopt temporal jittering to simulate analyzing the video at varying frame rates. However, this does not work well for multirate videos, in which actions or subactions occur at different speeds. The frame sampling rate should vary in accordance with the different motion speeds. In this work, we propose a simple yet effective strategy, termed random temporal skipping, to address this situation. This strategy effectively handles multirate videos by randomizing the sampling rate during training. It is an exhaustive approach, which can potentially cover all motion speed variations. Furthermore, due to the large temporal skipping, our network can see video clips that originally cover over 100 frames. Such a time range is enough to analyze most action events. We also introduce an occlusion-aware optical flow learning method that generates improved motion maps for human action recognition. Our framework is end-to-end trainable, runs in real time, and achieves state-of-the-art performance on six widely adopted video benchmarks.
However, compared to the CNN, optical flow calculation is computationally expensive. It is thus the major speed bottleneck of current two-stream approaches. There have been recent attempts to better model the temporal information. @cite_0 pre-trained a deep 3D CNN on a large-scale dataset and used it as a general spatiotemporal feature extractor. The features generalize well to several tasks but are inferior to two-stream approaches. @cite_23 reduced the dimension of each frame/clip using a CNN and aggregated frame-level information using Long Short-Term Memory (LSTM) networks. @cite_10 proposed to reduce the size of each frame and use longer clips (e.g., 60 vs. 16 frames) as inputs. They managed to gain significant accuracy improvements compared to shorter clips with the same spatial size. @cite_18 experimented with sparse sampling and jointly trained on the sparsely sampled frame clips. In this way, they incorporate more temporal information while preserving the spatial resolution. Recent approaches @cite_4 @cite_26 have evolved to end-to-end learning and are currently the best at incorporating global temporal information. However, none of them handle multirate video analysis effectively.
{ "abstract": [ "Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition. which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-the-of-art performance on the datasets of HMDB51 ( ( 69.4 , )) and UCF101 ( ( 94.2 , )). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices (Models and code at https: github.com yjxiong temporal-segment-networks).", "We investigate the problem of representing an entire video using CNN features for human action recognition. End-to-end learning of CNN RNNs is currently not possible for whole videos due to GPU memory limitations and so a common practice is to use sampled frames as inputs along with the video labels as supervision. However, the global video labels might not be suitable for all of the temporally local samples as the videos often contain content besides the action of interest. We therefore propose to instead treat the deep networks trained on local inputs as local feature extractors. The local features are then aggregated to form global features which are used to assign video-level labels through a second classification stage. We investigate a number of design choices for this local feature approach. Experimental results on the HMDB51 and UCF101 datasets show that a simple maximum pooling on the sparsely sampled local features leads to significant performance improvement.", "", "We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.", "", "Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. Recent methods attempt to capture this structure and learn action representations with convolutional neural networks. Such representations, however, are typically learned at the level of a few video frames failing to model actions at their full temporal extent. 
In this work we learn video representations using neural networks with long-term temporal convolutions (LTC). We demonstrate that LTC-CNN models with increased temporal extents improve the accuracy of action recognition. We also study the impact of different low-level representations, such as raw values of video pixels and optical flow vector fields and demonstrate the importance of high-quality optical flow estimation for learning accurate action models. We report state-of-the-art results on two challenging benchmarks for human action recognition UCF101 (92.7 ) and HMDB51 (67.2 )." ], "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_0", "@cite_23", "@cite_10" ], "mid": [ "2507009361", "2580899942", "", "2952633803", "", "2235034809" ] }
Random Temporal Skipping for Multirate Video Analysis
Significant progress has been made in video analysis during the last five years, including content-based video search, anomaly detection, human action recognition, object tracking and autonomous driving. Take human action recognition as an example. The performance on the challenging UCF101 dataset [18] was only 43.9% as reported in the original paper; it is now 98.0%. Such great improvement is attributed to several factors, such as more complicated models (e.g., deep learning [1]), larger datasets (e.g., Kinetics [10]), better temporal analysis (e.g., two-stream networks [17,29]), etc. However, there has been little work on varying frame-rate video analysis. For simplicity, we denote varying frame-rate as multirate throughout the paper. For real-world video applications, multirate handling is crucial. In surveillance video monitoring, communication packet drops occur frequently due to bad internet connections: we may miss a chunk of frames, or miss part of the frames' content. For activity/event analysis, the videos are multirate in nature; people may perform the same action at different speeds. For video generation, we may manually interpolate frames or sample frames depending on the application. In the scenarios mentioned above, models pre-trained on fixed frame-rate videos may not generalize well to multirate ones. As shown in Figure 1, for the action diving, there is no apparent motion in the first four frames, but fast motion exists in the last four frames. Dense sampling of every frame is redundant and results in large computational cost, while sparse sampling will lose information when fast motion occurs. There are many ways to model the temporal information in a video, including trajectories [21], optical flow [17], temporal convolution [20], 3D CNNs [19] and recurrent neural networks (RNNs) [14]. However, none of these methods can directly handle multirate videos. Usually these methods need a fixed-length input (a video clip) with a fixed sampling rate. A straightforward extension therefore is to train multiple such models, each corresponding to a different fixed frame-rate. This is similar to using image pyramids to handle the multi-scale problem in image analysis. But it is computationally infeasible to train models for all frame-rates, and, once the frame-rate differs, the system's performance may drop dramatically. Hence, it would be more desirable to use one model to handle multiple frame-rates. In this work, we focus on human action recognition because action is closely related to frame-rate. Specifically, our contributions include the following. First, we propose a random temporal skipping strategy for effective multirate video analysis. It can simulate various motion speeds for better action modeling, and makes the training more robust. Second, we introduce an occlusion-aware optical flow learning method to generate better motion maps for human action recognition. Third, we adopt the "segment" idea [3,24] to reason about the temporal information of the entire video. By combining the local random skipping and global segments, our framework achieves state-of-the-art results on six large-scale video benchmarks. In addition, our model is robust under dramatic frame-rate changes, a scenario in which the previous best performing methods [1,3,24] fail. Approach There are two limitations to existing temporal modeling approaches: they require a fixed-length input and a fixed sampling rate.
For example, we usually adopt 16 frames to compute IDT and C3D features, 10 frames to compute optical flow for two-stream networks, and 30 frames for LSTMs. These short durations do not allow reasoning over the entire video. In addition, a fixed sampling rate will either result in redundant information during slow movement or the loss of information during fast movement. The frame sampling rate should vary in accordance with different motion speeds. Hence, we propose random temporal skipping. Random Temporal Skipping In this section, we introduce random temporal skipping and illustrate its difference from traditional sliding window (fixed frame-rate) approaches. For easier understanding, we do not use temporal segments here. Consider a video V with a total of T frames $[v_1, v_2, \dots, v_T]$. In the situation of single-rate analysis, we randomly sample fixed-length video clips from an entire video for training. Suppose the fixed length is N; then the input to our model will be a sequence of frames $[v_t, v_{t+1}, \dots, v_{t+N}]$. (1) In order to learn a frame-rate invariant model, a straightforward way is to use a sliding window. The process can be done either offline or online. The idea is to generate fixed-length video clips with different temporal strides, thus covering more video frames. Much of the literature adopts such a strategy as data augmentation. Suppose we have a temporal stride of $\tau$. The input now will be $[v_t, v_{t+\tau}, \dots, v_{t+N\tau}]$. (2) As shown in Figure 1, a fixed sampling strategy does not work well for multirate videos. A single $\tau$ cannot cover all temporal variations. The frame sampling rate should vary in accordance with different motion speeds. Motivated by this observation, we propose random temporal skipping. Instead of using a fixed temporal stride $\tau$, we allow it to vary randomly. The input now will be $[v_t, v_{t+\tau_1}, \dots, v_{t+\tau_1+\tau_2+\cdots+\tau_N}]$. (3) Here, $\tau_n$, $n = 1, 2, \dots, N$ are randomly sampled within the range [0, maxStride], where maxStride is a threshold indicating the maximum distance we can skip in the temporal domain. Our proposed random temporal skipping represents an exhaustive solution: given unlimited training iterations, we can model all possible combinations of motion speed, thus leading to the learning of frame-rate invariant features. In addition, this strategy can be easily integrated into existing frameworks with any model, and can be done on-the-fly during training (a minimal sampling sketch follows the TSN recap below). Two-Stream Network Details Since two-stream networks are the state-of-the-art [1,24] on several video benchmarks, we also build a two-stream model, but with significant modifications. In this section, we first briefly recall the temporal segment network (TSN) to illustrate the idea of segments. Then we describe our newly designed spatial and temporal streams, respectively. Temporal segment network With the goal of capturing long-range temporal structure for improved action recognition, Wang et al. proposed TSN [24] with a sparse sampling strategy. This allows an entire video to be analyzed with reasonable computational cost. TSN first divides a video evenly into three segments and one short snippet is randomly selected from each segment. Two-stream networks are then applied to the short snippets to obtain the initial action class prediction scores. The original TSN finally uses a segmental consensus function to combine the outputs from multiple short snippets to predict the action class probabilities for the video as a whole.
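To make the sampling of Eq. (3) concrete, here is a minimal sketch; it is not the authors' code. We interpret each $\tau_n$ as the number of frames skipped (so a step of $1 + \tau_n$, with $\tau_n = 0$ recovering dense sampling), which matches how maxStride is used in the experiments later; the function and argument names are ours.

```python
import random

def random_temporal_skipping(num_frames, clip_len, max_stride, rng=random):
    """Sample clip_len + 1 frame indices [t, t+tau_1, t+tau_1+tau_2, ...] (Eq. 3).

    Each skip tau_n is drawn uniformly from {0, ..., max_stride}, so a single
    model sees clips at many effective frame-rates during training.
    """
    # Worst case, the clip spans clip_len * (max_stride + 1) frames; choose
    # the start index t so that the clip always fits inside the video.
    max_span = clip_len * (max_stride + 1)
    if num_frames <= max_span:
        raise ValueError("video too short for this clip_len / max_stride")
    t = rng.randrange(num_frames - max_span)
    indices = [t]
    for _ in range(clip_len):
        indices.append(indices[-1] + 1 + rng.randrange(max_stride + 1))
    return indices

# Example: a 16-step training clip from a 300-frame video, skipping up to 5.
clip = random_temporal_skipping(num_frames=300, clip_len=16, max_stride=5)
```

Setting max_stride = 0 degenerates to the consecutive-frame sampling of Eq. (1), while replacing the random draw with a constant recovers the fixed-stride sliding window of Eq. (2).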
Here, motivated by [3], we encode the features from different segments through compact bilinear models [4], as shown in Figure 2. Spatial stream A standard spatial stream takes a single video frame as input. Here, we extend this to multiple frames; hence, our random temporal skipping also works for the spatial stream. Temporal stream A standard temporal stream takes a stack of 10 optical flow images as input. However, the pre-computation of optical flow is time consuming, storage demanding, and sub-optimal for action recognition. Motivated by [29], we propose to use a CNN to learn optical flow from video frames and directly feed the predictions to the temporal stream. We name this optical flow CNN MotionNet, as shown in Figure 2. For the MotionNet, we treat optical flow estimation as an image reconstruction problem [32,33]. The intuition is that if we can use the predicted flow and the next frame to reconstruct the previous frame, our model has learned a useful representation of the underlying motion. Suppose we have two consecutive frames $I_1$ and $I_2$, and let us denote the reconstructed previous frame as $I_1'$. The goal then is to minimize the photometric error between the true previous frame $I_1$ and the reconstructed previous frame $I_1'$: $L_{\text{reconst}} = \frac{1}{N} \sum_{i,j} \rho(I_1(i,j) - I_1'(i,j))$. (4) N is the number of pixels. The reconstructed previous frame is computed from the true next frame using inverse warping, $I_1'(i,j) = I_2(i + U_{i,j}, j + V_{i,j})$, accomplished through spatial transformer modules [7] inside the CNN. U and V are the horizontal and vertical components of the predicted optical flow. We use a robust convex error function, the generalized Charbonnier penalty $\rho(x) = (x^2 + \epsilon^2)^{\alpha}$, to reduce the influence of outliers. $\alpha$ is set to 0.45. However, [29] is based on a simple brightness constancy assumption and does not incorporate reasoning about occlusion. This leads to noisier motion in the background and inconsistent flow around human boundaries. As we know, motion boundaries are important for human action recognition. Hence, we extend [29] by incorporating occlusion reasoning, hoping to learn better flow maps for action recognition. In particular, our unsupervised learning framework should not employ the brightness constancy assumption to compute the loss when there is occlusion. Pixels that become occluded in the second frame should not contribute to the photometric error between the true and reconstructed first frames in Equation 4. We therefore mask occluded pixels when computing the image reconstruction loss in order to avoid learning incorrect deformations to fill the occluded locations. Our occlusion detection is based on a forward-backward consistency assumption: for non-occluded pixels, the forward flow should be the inverse of the backward flow at the corresponding pixel in the second frame. We mark pixels as being occluded whenever the mismatch between these two flows is too large. Thus, for occlusion in the forward direction, we define the occlusion flag $o^f$ to be 1 whenever the constraint $|M_f + M_b(M_f)|^2 < \alpha_1 \cdot (|M_f|^2 + |M_b(M_f)|^2) + \alpha_2$ (5) is violated, and 0 otherwise. $o^b$ is defined in the same way. $M_f$ and $M_b$ represent the forward and backward flow, and $M_b(M_f)$ denotes the backward flow sampled at the forward-displaced location. We set $\alpha_1 = 0.01$, $\alpha_2 = 0.5$ in all our experiments.
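The pieces above fit together in a few lines. The following NumPy sketch illustrates the inverse warping of Eq. (4), the generalized Charbonnier penalty, and the forward-backward occlusion check of Eq. (5); it is illustrative rather than the paper's code. The nearest-neighbor sampling, the $\epsilon$ value in the penalty, and all function names are our assumptions (the paper uses bilinear sampling via spatial transformer modules and does not report $\epsilon$).

```python
import numpy as np

def warp(image, flow):
    """Inverse-warp `image` (H, W) by `flow` (H, W, 2) with (u, v) in pixels.

    Nearest-neighbor sampling keeps the sketch short; the paper uses
    bilinear sampling via spatial transformer modules.
    """
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    xw = np.clip(np.rint(xs + flow[..., 0]), 0, W - 1).astype(int)
    yw = np.clip(np.rint(ys + flow[..., 1]), 0, H - 1).astype(int)
    return image[yw, xw]

def charbonnier(x, eps=1e-3, alpha=0.45):
    """Generalized Charbonnier penalty rho(x) = (x^2 + eps^2)^alpha of Eq. (4)."""
    return (x ** 2 + eps ** 2) ** alpha

def forward_occlusion(flow_f, flow_b, a1=0.01, a2=0.5):
    """Occlusion flag o_f of Eq. (5): True where the consistency check fails."""
    # M_b(M_f): backward flow sampled at the forward-displaced location.
    flow_b_warped = np.stack([warp(flow_b[..., k], flow_f) for k in range(2)], -1)
    mismatch = np.sum((flow_f + flow_b_warped) ** 2, axis=-1)
    bound = a1 * (np.sum(flow_f ** 2, -1) + np.sum(flow_b_warped ** 2, -1)) + a2
    return mismatch >= bound

def occlusion_aware_photometric_loss(I1, I2, flow_f, flow_b):
    """Masked reconstruction loss of Eqs. (4)-(5), forward direction only."""
    I1_rec = warp(I2, flow_f)        # I1'(i,j) = I2(i + U, j + V)
    per_pixel = charbonnier(I1 - I1_rec)
    valid = ~forward_occlusion(flow_f, flow_b)   # drop occluded pixels
    return per_pixel[valid].mean() if valid.any() else 0.0
```

The per-pixel loss is simply excluded wherever the occlusion flag is set, which is exactly the masking that Eq. (6) below expresses with the $(1 - o^f)$ and $(1 - o^b)$ factors.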
Finally, the resulting occlusion-aware loss is: $L = (1 - o^f) \cdot L_{\text{reconst}}^f + (1 - o^b) \cdot L_{\text{reconst}}^b$. (6) Once we have learned a geometry-aware MotionNet to predict motions between consecutive frames, we can directly stack it onto the original temporal CNN for action prediction. Hence, our whole temporal stream is now end-to-end optimized without the computational burden of calculating optical flow. Compact Bilinear Encoding In order to learn a compact feature for an entire video, we need to aggregate information from different segments. There are many ways to accomplish this goal, such as taking the maximum or average, bilinear pooling, Fisher Vector (FV) encoding [16], etc. Here, we choose compact bilinear pooling [4,5,13] due to its simplicity and good performance. The classic bilinear model computes a global descriptor by calculating $B = \phi(F \otimes F)$. (7) Here, F are the feature maps from all channels in a specific layer, $\otimes$ denotes the outer product, $\phi$ denotes the model parameters we are going to learn, and B is the bilinear feature. However, due to the many channels of the feature maps and their large spatial resolution, the outer product results in a prohibitively high-dimensional feature representation. For this reason, we use the Tensor Sketch algorithm as in [4] to avoid the computationally intensive outer product via an approximate projection. Such an approximation requires almost no parameter memory. We refer the reader to [4] for a detailed description of the algorithm. After the approximate projection, we have compact bilinear features of very low dimension. Compact bilinear pooling can also significantly reduce the number of CNN model parameters since it can replace fully-connected layers, thus leading to less over-fitting. We compare compact bilinear pooling to other feature encoding methods in later sections. Spatio-Temporal Fusion Following the testing scheme of [17,23,27], we evenly sample 25 frames/clips for each video. For each frame/clip, we perform 10x data augmentation by cropping the 4 corners and 1 center, flipping them horizontally, and averaging the prediction scores (before the softmax operation) over all crops of the samples. In the end, we obtain two predictions, one from each stream, which we late fuse by weighted averaging. The overview of our framework is shown in Figure 2. Table 1: Necessity of multirate analysis. RTS indicates random temporal skipping. Fixed sampling means we sample the video frames a fixed number of frames apart (the numbers in brackets, e.g., 1, 3, 5). Random sampling indicates we sample the video frames a random number of frames apart. Experiments Implementation Details For the CNNs, we use the Caffe toolbox [8]. Our MotionNet is first pre-trained using Adam optimization with the default parameter values. It is a 25-layer CNN with an encoder-decoder architecture [29]. The initial learning rate is set to $3.2 \times 10^{-5}$ and is halved every 100k iterations. We end training at 400k iterations. Once MotionNet can estimate decent optical flow, we stack it onto a temporal CNN for action prediction. Both the spatial CNN and the temporal CNN are BN-Inception networks pre-trained on the ImageNet challenge [2]. We use stochastic gradient descent to train the networks, with a batch size of 128 and momentum of 0.9. We also use horizontal flipping, corner cropping and multi-scale cropping as data augmentation. Take UCF101 as an example.
For the spatial stream CNN, the initial learning rate is set to 0.001 and divided by 10 every 4K iterations. We stop training at 10K iterations. For the stacked temporal stream CNN, we set different initial learning rates for MotionNet and the temporal CNN, namely $10^{-6}$ and $10^{-3}$, respectively. We then divide both learning rates by 10 after 5K and 10K iterations. The maximum number of iterations is set to 16K. Other datasets follow the same learning process, except that the number of training iterations differs depending on dataset size. Trimmed Video Datasets In this section, we adopt three trimmed video datasets to evaluate our proposed method: UCF101 [18], HMDB51 [11] and Kinetics [10]. UCF101 is composed of realistic action videos from YouTube. It contains 13,320 video clips distributed among 101 action classes. HMDB51 includes 6,766 video clips of 51 actions extracted from a wide range of sources, such as online videos and movies. Both UCF101 and HMDB51 have a standard three-split evaluation protocol, and we report the average recognition accuracy over the three splits. Kinetics is similar to UCF101, but substantially larger. It consists of approximately 400,000 video clips and covers 400 human action classes. Necessity of Multirate Analysis First, we demonstrate the importance of multirate video analysis. We use UCF101 as the evaluation dataset. We show that a well-trained model with a fixed frame-rate does not work well when the frame-rate differs during testing. As shown in Table 1, no sampling means the dataset does not change. Fixed sampling means we manually sample the video frames a fixed number of frames apart (the numbers in brackets, e.g., 1, 3, 5). Random sampling indicates we manually sample the video frames a random number of frames apart. We set the maximum temporal stride to 5. "with RTS" and "without RTS" indicate whether our proposed random temporal skipping strategy is used during model training. Here, all sampling is performed on test videos, not training videos. This is used to simulate frame-rate changes between the source and target domains. We make several observations. First, if we compare the left and right columns in Table 1, we can clearly see the advantage of using random temporal skipping and the importance of multirate analysis. Without RTS, the test accuracies drop dramatically when the frame-rate differs between the training and test videos. When RTS is adopted, the performance decrease becomes much less significant. Models with RTS perform 5% better than those without RTS on random sampling (last row). Second, when no sampling is performed (first row in Table 1), models with RTS perform better than those without RTS. This is because RTS helps to capture more temporal variation; it helps to regularize the model during training, acting like additional data augmentation. Third, if we change fixed sampling to random sampling (last two rows in Table 1), we can see that the recognition accuracy without RTS drops again, but the accuracy with RTS remains the same. This demonstrates that our proposed random temporal skipping captures frame-rate invariant features for human action recognition. One interesting thing to note is that, as the sampling rate increases, the performance of both approaches decreases. This may be counter-intuitive because RTS should be able to handle the varying frame-rate. The reason for the lower accuracy even when RTS is turned on is that videos in UCF101 are usually short.
Hence, we do not have as many training samples with large sampling rates as with small sampling rates. We will show in the next section that when the videos are longer, models with RTS can be trained better. Per-Class Breakdown Here, we perform a per-class accuracy breakdown to gain insight into why random temporal skipping works and how it helps. We choose the results from the last row in Table 1 for comparison. We list, in Figure 3 below, the 10 classes in UCF101 that benefit the most from RTS and the 10 that benefit the least. The actions that benefit the most tend to exhibit varying motion speeds. The actions that benefit the least can either be considered still, and can thus be recognized from individual frames regardless of how they are sampled, or considered repetitive, so that a constant sampling rate is sufficient. Hence, our proposed random temporal skipping effectively handles different motion speeds. Encoding Methods Comparison In this section, we compare different feature encoding methods and show the effectiveness of compact bilinear encoding. In particular, we choose four widely adopted encoding approaches: Bag of Visual Words (BoVW), Vector of Locally Aggregated Descriptors (VLAD), Fisher Vector (FV) and Fully-Connected pooling (FC). FC is the most widely adopted feature aggregation method in the deep learning era and thus serves as our baseline. We put it between the last convolutional layer and the classification layer, and set its dimension to 4096. FC is learned end-to-end during training. BoVW, VLAD and FV are clustering-based methods. Although there are recent attempts to integrate them into the CNN framework [12], for simplicity, we do not use them in an end-to-end network. We first extract features from a pre-trained model, and then encode the local features into global features with one of the above methods. Finally, we use support vector machines (SVMs) for classification. To be specific, suppose we have N local features. BoVW quantizes each of the N local features as one of k codewords using a codebook generated through k-means clustering. VLAD is similar to BoVW but encodes the distance between each of the N local features and the assigned codewords. FV models the distribution of the local features using a Gaussian mixture model (GMM) with k components and computes the mean and standard deviation of the weighted difference between the N local features and these k components. In our experiments, we project each local feature into 256 dimensions using PCA and set the number of clusters (k) to 256. This is similar to what is suggested in [26], except we do not break the local features into multiple sub-features. For the bilinear models, we retain the convolutional layers of each network without the fully-connected layers. The convolutional feature maps extracted from the last convolutional layers (after the rectified activation) are fed as input into the bilinear models. Here, the feature maps from the last layer of BN-Inception have size 14 × 14 × 1024, leading to bilinear features of size 1024 × 1024, and to 8,192-dimensional features for the compact bilinear models. As can be seen in Table 2, our compact bilinear encoding achieves the best overall performance (two-stream network results). This observation is consistent with [3]. It is interesting that the more complicated encoding methods, BoVW, FV and VLAD, all perform much worse than the baseline FC and compact bilinear pooling. We conjecture that this is because they are not end-to-end optimized.
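Since the Tensor Sketch projection is only referenced to [4] above, a minimal sketch may help. This is our NumPy illustration of the count-sketch formulation behind compact bilinear pooling, not the paper's implementation; the function name is ours and the d = 8192 default merely mirrors the dimensionality quoted above.

```python
import numpy as np

def tensor_sketch_pooling(features, d=8192, seed=0):
    """Compact bilinear pooling of local features via Tensor Sketch (Eq. 7).

    features: (n, c) array of local descriptors (e.g., the spatial positions
    of a conv feature map). Returns a d-dimensional approximation of the
    sum-pooled outer product sum_i f_i (x) f_i, never forming c x c features.
    """
    n, c = features.shape
    rng = np.random.default_rng(seed)
    # Two independent count-sketch hash functions (fixed, not learned).
    h = [rng.integers(0, d, size=c) for _ in range(2)]
    s = [rng.choice([-1.0, 1.0], size=c) for _ in range(2)]

    def count_sketch(x, k):
        out = np.zeros((x.shape[0], d))
        np.add.at(out, (np.arange(x.shape[0])[:, None], h[k][None, :]), x * s[k])
        return out

    # Convolution theorem: the sketch of an outer product is the circular
    # convolution of the two sketches, computed here via FFT.
    fft1 = np.fft.rfft(count_sketch(features, 0), axis=1)
    fft2 = np.fft.rfft(count_sketch(features, 1), axis=1)
    per_location = np.fft.irfft(fft1 * fft2, n=d, axis=1)
    return per_location.sum(axis=0)  # sum-pool over spatial locations
```

Applied to the 14 × 14 × 1024 BN-Inception maps above, `features` would be the 196 × 1024 matrix of spatial descriptors, and the result approximates the 1024 × 1024 bilinear feature in only 8,192 dimensions.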
Importance of Occlusion Awareness One of our contributions in this work is introducing occlusion reasoning into the MotionNet [29] framework. Here, we show sample visualizations to demonstrate its effectiveness. As can be seen in Figure 4, optical flow estimates with occlusion reasoning are much better than those without. Occlusion reasoning can remove the background noise caused by invalid brightness constancy assumptions, reduce checkerboard artifacts, and generate flows with sharper boundaries due to awareness of disocclusion. Quantitatively, we use these two flow estimates as input to the temporal stream. Our network with occlusion reasoning performs 0.9% better than the baseline [29] on UCF101 (95.5 → 96.4). This makes sense because a clean optical flow background should make it easier for the model to recognize the action itself rather than the context. We show that we can obtain both better optical flow and higher action recognition accuracy by incorporating occlusion reasoning in an end-to-end network. Untrimmed Video Datasets In this section, we adopt three untrimmed video datasets to evaluate our proposed method: ActivityNet [6], VIRAT 1.0 [15] and VIRAT 2.0 [15]. For ActivityNet, we use version 1.2, which has 100 action classes. Following the standard evaluation split, 4,819 training and 2,383 validation videos are used for training and 2,480 videos for testing. VIRAT 1.0 is a surveillance video dataset recorded in different scenes. Each video clip contains 1 to 20 instances of activities from 6 categories of person-vehicle interaction events, including: loading an object into a vehicle, unloading an object from a vehicle, opening a vehicle trunk, closing a vehicle trunk, getting into a vehicle, and getting out of a vehicle. VIRAT 2.0 is an extended version of VIRAT 1.0. It includes 5 more events captured in more scenes: gesturing, carrying an object, running, entering a facility and exiting a facility. We follow the standard train/test split to report performance. Investigating Longer Temporal Context In the previous section, we demonstrated that a well-trained model with a fixed frame-rate does not work well when the frame-rate differs during testing. Here, we show that using a longer temporal context through random temporal skipping is useful for action recognition. We use ActivityNet as the evaluation dataset because most videos in ActivityNet are long (5 to 10 minutes), so we can explore more speed variations. Recall from Equation 3 that maxStride is a threshold indicating the maximum distance we can skip in the temporal domain. We vary it from 0 to 9 frames apart, ranging from no skipping to the longest temporal coverage. As shown in Figure 5, the longer the temporal context we utilize, the higher the action recognition accuracy we obtain. One interesting observation is that the performance starts to saturate when maxStride is equal to 6. After that, a longer temporal context does not help much. We think this may be because CNNs cannot capture the transitions between frames that are so far apart. In addition, we investigate the impact of the number of sampled frames. We choose 5, 10, 15 and 20 frames as the length of the input video clip. As we can see in Figure 5, more sampled frames always improve action recognition accuracy. This demonstrates that longer temporal information benefits video understanding. With 20 input frames and a maxStride of 6, our method can have a temporal coverage of over 120 frames, which is about 5 seconds.
Such a time duration is enough for analyzing most actions or events. For the UCF101 and HMDB51 datasets, 5 seconds can cover the entire video. Fig. 5: Action recognition accuracy on ActivityNet. We observe that the longer the temporal context we utilize, the better the performance we obtain. Comparison to State-of-the-Art We compare our method to the recent state-of-the-art on the six video benchmarks. As shown in Table 3, our proposed random temporal skipping is an effective data augmentation technique, which leads to the top performance on all evaluation datasets. For the trimmed video datasets, we obtain performance improvements of 0.8% on UCF101, 1.4% on HMDB51 and 1.4% on Kinetics. Because the videos are trimmed and short, we do not benefit much from learning longer temporal information. The improvement for UCF101 is smaller because accuracy is already saturated on this dataset; yet, our simple random temporal skipping strategy improves it further. For the three untrimmed video datasets, we obtain significant improvements: 1.8% on ActivityNet, 4.5% on VIRAT 1.0 and 3.0% on VIRAT 2.0. This demonstrates the importance of multirate video analysis in complex real-world applications, and the effectiveness of our method. We could adapt our approach to real-time action localization owing to its precise temporal boundary modeling. A recent work, I3D [1], reports higher accuracy on UCF101 (98.0%) and HMDB51 (80.7%). However, it uses additional training data ([10]) and a substantially deeper network, which is not a fair comparison to the above approaches. In addition, we would like to note that our approach runs in real time because no pre-computation of optical flow is needed. We are only about 1% worse than I3D, but 14 times faster. Conclusion In this work, we propose a simple yet effective strategy, termed random temporal skipping, to handle multirate videos. It can benefit the analysis of long untrimmed videos by capturing longer temporal contexts, and of short trimmed videos by providing extra temporal augmentation. A model trained with random temporal skipping is robust at inference time: we can use just one model to handle multiple frame-rates without further fine-tuning. We also introduce an occlusion-aware CNN to estimate better optical flow for action recognition on-the-fly. Our network runs in real time and obtains state-of-the-art performance on six large-scale video benchmarks. In the future, we would like to improve our framework in several directions. First, due to the inability of CNNs to learn large motions between distant frames, we will incorporate recurrent neural networks into our framework to handle even longer temporal contexts. Second, we will apply our method to online event detection, since our model has a good trade-off between efficiency and accuracy. Third, we will study the fusion of the two streams and compare to recent spatiotemporal feature learning work [25,30].
4,416
1810.12522
2899310320
To handle multirate videos, there are two widely adopted approaches. One is to train multiple models, each corresponding to a different fixed frame-rate. This is similar to using image pyramids to handle the multi-scale problem in image analysis. The other is to generate sliding windows of different lengths for each video (a.k.a. temporal jittering), with the hope of capturing temporal invariance. However, neither of these approaches is exhaustive, and both are computationally intensive. @cite_27 is the most similar work to ours since it deals with motion speed variance. However, our work differs in several aspects. First, we aim to explicitly learn the transitions between frames, while @cite_27 uses past and future neighboring video clips as the temporal context and reconstructs the two temporal transitions. Their objective is considerably harder to optimize, which may lead to sub-optimal solutions. Second, our random skipping strategy is easy to implement without computational overhead, whereas the image reconstruction of @cite_27 leads to a significant computational burden. Third, their proposed multirate gated recurrent unit only works in RNNs, while our strategy is generally applicable.
{ "abstract": [ "Despite the recent success of neural networks in image feature learning, a major problem in the video domain is the lack of sufficient labeled data for learning to model temporal information. In this paper, we propose an unsupervised temporal modeling method that learns from untrimmed videos. The speed of motion varies constantly, e.g., a man may run quickly or slowly. We therefore train a Multirate Visual Recurrent Model (MVRM) by encoding frames of a clip with different intervals. This learning process makes the learned model more capable of dealing with motion speed variance. Given a clip sampled from a video, we use its past and future neighboring clips as the temporal context, and reconstruct the two temporal transitions, i.e., present @math past transition and present @math future transition, reflecting the temporal information in different views. The proposed method exploits the two transitions simultaneously by incorporating a bidirectional reconstruction which consists of a backward reconstruction and a forward reconstruction. We apply the proposed method to two challenging video tasks, i.e., complex event detection and video captioning, in which it achieves state-of-the-art performance. Notably, our method generates the best single feature for event detection with a relative improvement of 10.4 on the MEDTest-13 dataset and achieves the best performance in video captioning across all evaluation metrics on the YouTube2Text dataset." ], "cite_N": [ "@cite_27" ], "mid": [ "2952550003" ] }
Significant progress has been made in video analysis during the last five years, including content-based video search, anomaly detection, human action recognition, object tracking and autonomous driving. Take human action recognition as an example. The performance on the challenging UCF101 dataset [18] was only 43.9% reported in the original; it now is 98.0%. Such great improvement is attributed to several factors, such as more complicated models (e.g., deep learning [1]), larger datasets (e.g., Kinetics [10]), better temporal analysis (e.g., two-stream networks [17,29]), etc. However, there has been little work on varying frame-rate video analysis. For simplicity, we denote varying frame-rate as multirate throughout the paper. For real-world video applications, multirate handling is crucial. For surveillance video monitoring, communication package drops occur frequently due to bad internet connections. We may miss a chunk of frames, or miss the partial content of the frames. For activity/event analysis, the videos are multirate in nature. People may perform the same action at different speeds. For video generation, we may manually interpolate frames or sample frames depending on the application. For the scenarios mentioned above, models pre-trained on fixed frame-rate videos may not generalize well to multirate ones. As shown in Figure 1, for the action diving, there is no apparent motion in the first four frames, but fast motion exists in the last four frames. Dense sampling of every frame is redundant and results in large computational cost, while sparse sampling will lose information when fast motion occurs. There are many ways to model the temporal information in a video, including trajectories [21], optical flow [17], temporal convolution [20], 3D CNNs [19] and recurrent neural networks (RNNs) [14]. However, none of these methods can directly handle multirate videos. Usually these methods need a fixed length input (a video clip) with a fixed sampling rate. A straightforward extension therefore is to train multiple such models, each corresponding to a different fixed framerate. This is similar to using image pyramids to handle the multi-scale problem in image analysis. But it is computational infeasible to train models for all the frame-rates. And, once the frame-rate differs, the system's performance may drop dramatically. Hence, it would be more desirable to use one model to handle multiple frame-rates. In this work, we focus on human action recognition because action is closely related to frame-rate. Specifically, our contributions include the following. First, we propose a random temporal skipping strategy for effective multirate video analysis. It can simulate various motion speeds for better action modeling, and makes the training more robust. Second, we introduce an occlusion-aware optical flow learning method to generate better motion maps for human action recognition. Third, we adopt the "segment" idea [3,24] to reason about the temporal information of the entire video. By combining the local random skipping and global segments, our framework achieves state-of-the-art results on six large-scale video benchmarks. In addition, our model is robust under dramatic frame-rate changes, a scenario in which the previous best performing methods [1,3,24] fail. Approach There are two limitations to existing temporal modeling approaches: they require a fixed length input and a fixed sampling rate. 
For example, we usually adopt 16 frames to compute IDT and C3D features, 10 frames to compute optical flow for two-stream networks, and 30 frames for LSTM. These short durations do not allow reasoning on the entire video. In addition, a fixed sampling rate will either result in redundant information during slow movement or the loss of information during fast movement. The frame sampling rate should vary in accordance with different motion speeds. Hence, we propose random temporal skipping. Random Temporal Skipping In this section, we introduce random temporal skipping and illustrate its difference to traditional sliding window (fixed frame-rate) approaches. For easier understanding, we do not use temporal segments here. Consider a video V with a total of T frames [v 1 , v 2 , . . . , v T ]. In the situation of single-rate analysis, we randomly sample fixed length video clips from an entire video for training. Suppose the fixed length is N , then the input to our model will be a sequence of frames as [v t , v t+1 , · · · , v t+N ].(1) In order to learn a frame-rate invariant model, a straightforward way is using a sliding window. The process can be done either offline or online. The idea is to generate fixed length video clips with different temporal strides, thus covering more video frames. Much literature adopts such a strategy as data augmentation. Suppose we have a temporal stride of τ . The input now will be [v t , v t+τ , · · · , v t+N τ ].(2) As shown in Figure 1, a fixed sampling strategy does not work well for multirate videos. A single τ can not cover all temporal variations. The frame sampling rate should vary in accordance with different motion speeds. Motivated by this observation, we propose random temporal skipping. Instead of using a fixed temporal stride τ , we allow it vary randomly. The input now will be Here, τ n , n = 1, 2, · · · , N are randomly sampled within the range of [0, maxStride]. maxStride is a threshold value indicating the maximum distance we can skip in the temporal domain. Our proposed random temporal skipping represents an exhaustive solution. Given unlimited training iterations, we can model all possible combinations of motion speed, thus leading to the learning of frame-rate invariant features. In addition, this strategy can be easily integrated into existing frameworks with any model, and can be done on-the-fly during training. [v t , v t+τ1 , · · · , v t+τ1+τ2+···+τ N ].(3) Random Temporal Skipping Two-Stream Network Details Since two-stream networks are the state-of-the-art [1,24] for several video benchmarks, we also build a two-stream model but with significant modifications. In this section, we first briefly recall temporal segment network (TSN) to illustrate the idea of segments. Then we describe our newly designed spatial and temporal streams, respectively. Temporal segment network With the goal of capturing long-range temporal structure for improved action recognition, Wang et al. proposed TSN [24] with a sparse sampling strategy. This allows an entire video to be analyzed with reasonable computational costs. TSN first divides a video evenly into three segments and one short snippet is randomly selected from each segment. Two-stream networks are then applied to the short snippets to obtain the initial action class prediction scores. The original TSN finally uses a segmental consensus function to combine the outputs from multiple short snippets to predict the action class probabilities for the video as a whole. 
Here, motivated by [3], we encode the features from different segments through compact bilinear models [4] as shown in Figure 2. Spatial stream A standard spatial stream takes a single video frame as input. Here, we extend this to multiple frames. Hence, our random temporal skipping also works for the spatial stream. Temporal stream A standard temporal stream takes a stack of 10 optical flow images as input. However, the pre-computation of optical flow is time consuming, storage demanding and sub-optimal for action recognition. Motivated by [29], we propose to use a CNN to learn optical flow from video frames and directly feed the predictions to the temporal stream. We name this optical flow CNN MotionNet as shown in Figure 2. For the MotionNet, we treat optical flow estimation as an image reconstruction problem [32,33]. The intuition is that if we can use the predicted flow and the next frame to reconstruct the previous frame, our model has learned a useful representation of the underlying motion. Suppose we have two consecutive frames I 1 and I 2 . Let us denote the reconstructed previous frame as I 1 . The goal then is to minimize the photometric error between the true previous frame I 1 and the reconstructed previous frame I 1 : L reconst = 1 N N i,j ρ(I 1 (i, j) − I 1 (i, j)).(4) N is the number of pixels. The reconstructed previous frame is computed from the true next frame using inverse warping, I 1 (i, j) = I 2 (i + U i,j , j + V i,j ), accomplished through spatial transformer modules [7] inside the CNN. U and V are the horizontal and vertical components of predicted optical flow. We use a robust convex error function, the generalized Charbonnier penalty ρ(x) = (x 2 + 2 ) α , to reduce the influence of outliers. α is set to 0.45. However, [29] is based on a simple brightness constancy assumption and does not incorporate reasoning about occlusion. This leads to noisier motion in the background and inconsistent flow around human boundaries. As we know, motion boundaries are important for human action recognition. Hence, we extend [29] by incorporating occlusion reasoning, hoping to learn better flow maps for action recognition. In particular, our unsupervised learning framework should not employ the brightness constancy assumption to compute the loss when there is occlusion. Pixels that become occluded in the second frame should not contribute to the photometric error between the true and reconstructed first frames in Equation 4. We therefore mask occluded pixels when computing the image reconstruction loss in order to avoid learning incorrect deformations to fill the occluded locations. Our occlusion detection is based on a forward-backward consistency assumption. That is, for non-occluded pixels, the forward flow should be the inverse of the backward flow at the corresponding pixel in the second frame. We mark pixels as being occluded whenever the mismatch between these two flows is too large. Thus, for occlusion in the forward direction, we define the occlusion flag o f be 1 whenever the constraint |M f + M b M f | 2 < α 1 · (|M f | 2 + |M b M f | 2 ) + α 2(5) is violated, and 0 otherwise. o b is defined in the same way, and M f and M b represent forward and backward flow. We set α 1 =0.01, α 2 =0.5 in all our experiments. 
Finally, the resulting occlusion-aware loss is represented as: L = (1 − o f ) · L f reconst + (1 − o b ) · L b reconst(6) Once we learn a geometry-aware MotionNet to predict motions between consecutive frames, we can directly stack it to the original temporal CNN for action mapping. Hence, our whole temporal stream is now end-to-end optimized without the computational burden of calculating optical flow. Compact Bilinear Encoding In order to learn a compact feature for an entire video, we need to aggregate information from different segments. There are many ways to accomplish this goal, such as taking the maximum or average, bilinear pooling, Fisher Vector (FV) encoding [16], etc. Here, we choose compact bilinear pooling [4,5,13] due to its simplicity and good performance. The classic bilinear model computes a global descriptor by calculating: B = φ(F ⊗ F ).(7) Here, F are the feature maps from all channels in a specific layer, ⊗ denotes the outer product, φ is the model parameters we are going to learn and B is the bilinear feature. However, due to the many channels of feature maps and their large spatial resolution, the outer product will result in a prohibitively high dimensional feature representation. For this reason, we use the Tensor Sketch algorithm as in [4] to avoid the computational intensive outer product by an approximate projection. Such approximation requires almost no parameter memory. We refer the readers to [4] for a detailed algorithm description. After the approximate projection, we have compact bilinear features with very low feature dimension. Compact bilinear pooling can also significantly reduce the number of CNN model parameters since it can replace fully-connected layers, thus leading to less over-fitting. We will compare compact bilinear pooling to other feature encoding methods in later sections. Spatio-Temporal Fusion Following the testing scheme of [17,23,27], we evenly sample 25 frames/clips for each video. For each frame/clip, we perform 10x data augmentation by cropping the 4 corners and 1 center, flipping them horizontally and averaging the prediction scores (before softmax operation) over all crops of the samples. In the end, we obtain two predictions, one from each stream. We simply late fuse them by weighted averaging. The overview of our framework is shown in Figure 2. Table 1: Necessity of multirate analysis. RTS indicates random temporal skipping. Fixed sampling means we sample the video frames by a fixed length (numbers in the brackets, e.g., 1, 3, 5 frames apart). Random sampling indicates we sample the video frames by a random length of frames apart. Experiments Implementation Details For the CNNs, we use the Caffe toolbox [8]. Our MotionNet is first pre-trained using Adam optimization with the default parameter values. It is a 25 layer CNN with an encoder-decoder architecture [29]. The initial learning rate is set to 3.2 × 10 −5 and is divided in half every 100k iterations. We end our training at 400k iterations. Once MotionNet can estimate decent optical flow, we stack it to a temporal CNN for action prediction. Both the spatial CNN and the temporal CNN are BN-Inception networks pre-trained on ImageNet challenges [2]. We use stochastic gradient descent to train the networks, with a batch size of 128 and momentum of 0.9. We also use horizontal flipping, corner cropping and multi-scale cropping as data augmentation. Take UCF101 as an example. 
For the spatial stream CNN, the initial learning rate is set to 0.001, and divided by 10 every 4K iterations. We stop the training at 10K iterations. For the stacked temporal stream CNN, we set different initial learning rates for MotionNet and the temporal CNN, which are 10 −6 and 10 −3 , respectively. Then we divide the learning rates by 10 after 5K and 10K. The maximum iteration is set to 16K. Other datasets have the same learning process except the training iterations are different depending on the dataset size. Trimmed Video Dataset In this section, we adopt three trimmed video datasets to evaluate our proposed method, UCF101 [18], HMDB51 [11] and Kinetics [10]. UCF101 is composed of realistic action videos from YouTube. It contains 13, 320 video clips distributed among 101 action classes. HMDB51 includes 6, 766 video clips of 51 actions extracted from a wide range of sources, such as online videos and movies. Both UCF101 and HMDB51 have a standard three-split evaluation protocol and we report the average recognition accuracies over the three splits. Kinetics is similar to UCF101, but substantially larger. It consists of approximately 400, 000 video clips, and covers 400 human action classes. Necessity of Multirate Analysis First, we demonstrate the importance of multirate video analysis. We use UCF101 as the evaluation dataset. We show that a well-trained model with a fixed frame-rate does not work well when the frame-rate differs during testing. As shown in Table 1, no sampling means the dataset does not change. Fixed sampling means we manually sample the video frames by a fixed length (numbers in the brackets, e.g., 1, 3, 5 frames apart). Random sampling indicates we manually sample the video frames by a random length of frames apart. We set the maximum temporal stride to 5. "with RTS" and "without RTS" indicates the use of our proposed random temporal skipping strategy during model training or not. Here, all the samplings are performed for test videos, not training videos. This is used to simulate frame-rate changes between the source and target domains. We make several observations. First, if we compare the left and right columns in Table 1, we can clearly see the advantage of using random temporal skipping and the importance of multirate analysis. Without RTS, the test accuracies are reduced dramatically when the frame-rate differs between the training and test videos. When RTS is adopted, the performance decrease becomes much less significant. Models with RTS perform 5% better than those without RTS on random sampling (last row). Second, in the situation that no sampling is performed (first row in Table 1), models with RTS perform better than those without RTS. This is because RTS helps to capture more temporal variation. It helps to regularize the model during training, acting like additional data augmentation. Third, if we change fixed sampling to random sampling (last two rows in Table 1), we can see that the recognition accuracy without RTS drops again, but the accuracy with RTS remains the same. This demonstrates that our proposed random temporal skipping captures frame-rate invariant features for human action recognition. One interesting thing to note is that, with the increase of sampling rate, the performance of both approaches decrease. This maybe counter-intuitive because RTS should be able to handle the varying frame-rate. The reason for lower accuracy even when RTS is turned on is because videos in UCF101 are usually short. 
Hence, we do not have as many training samples with large sampling rates as those with small sampling rates. We will show in the next section that when the videos are longer, models with RTS can be trained better. Per-Class Breakdown Here, we perform a per-class accuracy breakdown to obtain insights into why random temporal skipping works and how it helps. We choose the results from the last row in Table 1 to compare. We list, in Figure 3 below, the 10 classes in UCF101 that benefit the most from RTS and the 10 that benefit the least. The actions that benefit the most tend to exhibit varying motion speeds. The actions that benefit the least can either be considered still, and can thus be recognized by individual frames regardless of how they are sampled, or considered repetitive, and so a constant sampling rate is sufficient. Hence, our proposed random temporal skipping effectively handles different motion speeds. Encoding Methods Comparison In this section, we compare different feature encoding methods and show the effectiveness of compact bilinear encoding. In particular, we choose four widely adopted encoding approaches: Bag of Visual Words (BoVW), Vector of Locally Aggregated Descriptors (VLAD), Fisher Vector (FV) and Fully-Connected pooling (FC). FC is the most widely adopted feature aggregation method in deep learning era, thus will be served as baseline. We put it between the last convolutional layer and the classification layer, and set its dimension to 4096. FC will be learned end-to-end during training. BoVW, VLAD and FV are clustering based methods. Although there are recent attempts to integrate them into CNN framework [12], for simplicity, we do not use them in an end-to-end network. We first extract features from a pre-trained model, and then encode the local features into global features by one of the above methods. Finally, we use support vector machines (SVM) to do the classification. To be specific, suppose we have N local features, BoVW quantizes each of the N local features as one of k codewords using a codebook generated through k-means clustering. VLAD is similar to BoVW but encodes the distance between each of the N local features and the assigned codewords. FV models the distribution of the local features using a Gaussian mixture model (GMM) with k components and computes the mean and standard deviation of the weighted difference between the N local features and these k components. In our experiments, we project each local feature into 256 dimensions using PCA and set the number of clusters (k) as 256. This is similar to what is suggested in [26] except we do not break the local features into multiple sub-features. For the bilinear models, we retain the convolutional layers of each network without the fully-connected layers. The convolutional feature maps extracted from the last convolutional layers (after the rectified activation) are fed as input into the bilinear models. Here, the convolutional feature maps for the last layer of BN-Inception produces an output of size 14 × 14 × 1024, leading to bilinear features of size 1024 × 1024, and 8,196 features for compact bilinear models. As can be seen in Table 2, our compact bilinear encoding achieves the best overall performance (two-stream network results). This observation is consistent with [3]. It is interesting that the more complicated encoding methods, BoVW, FV and VLAD, all perform much worse than baseline FC and compact bilinear pooling. We conjecture that this is because they are not end-to-end optimized. 
Importance of Occlusion-Aware One of our contributions in this work is introducing occlusion reasoning into the MotionNet [29] framework. Here, we show sample visualizations to demonstrate its effectiveness. As can be seen in Figure 4, optical flow estimates with occlusion reasoning are much better than those without. Occlusion reasoning can remove the background noise brought by invalid brightness constancy assumptions, reduce checkerboard artifacts, and generate flows with sharper boundaries due to awareness of disocclusion. Quantitatively, we use these two flow estimates as input to the temporal stream. Our network with occlusion reasoning performs 0.9% better than the baseline [29] on UCF101 (95.5 → 96.4). This makes sense because a clean optical-flow background should make it easier for the model to recognize the action itself rather than the context. We show that we can obtain both better optical flow and higher accuracy in action recognition by incorporating occlusion reasoning in an end-to-end network. Untrimmed Video Dataset In this section, we adopt three untrimmed video datasets to evaluate our proposed method, ActivityNet [6], VIRAT 1.0 [15] and VIRAT 2.0 [15]. For ActivityNet, we use version 1.2, which has 100 action classes. Following the standard evaluation split, 4,819 training and 2,383 validation videos are used for training and 2,480 videos for testing. VIRAT 1.0 is a surveillance video dataset recorded in different scenes. Each video clip contains 1 to 20 instances of activities from 6 categories of person-vehicle interaction events, including: loading an object to a vehicle, unloading an object from a vehicle, opening a vehicle trunk, closing a vehicle trunk, getting into a vehicle, and getting out of a vehicle. VIRAT 2.0 is an extended version of VIRAT 1.0. It includes 5 more events captured in more scenes: gesturing, carrying an object, running, entering a facility and exiting a facility. We follow the standard train/test split to report the performance. Investigate Longer Temporal Context In the previous section, we demonstrated that a well-trained model with a fixed frame-rate does not work well when the frame-rate differs during testing. Here, we show that using a longer temporal context through random temporal skipping is useful for action recognition. We use ActivityNet as the evaluation dataset because most videos in ActivityNet are long (5 to 10 minutes), so we can explore more speed variations. Recall from Equation 3 that maxStride is a threshold value indicating the maximum distance we can skip in the temporal domain. We set it from 0 frames to 9 frames apart, ranging from no sampling to the longest temporal coverage. As shown in Figure 5, the longer the temporal context we utilize, the higher the action recognition accuracy we obtain. One interesting observation is that the performance starts to saturate when maxStride is equal to 6. After that, a longer temporal context does not help much. We think this may be due to the fact that the CNNs cannot capture the transitions between frames that are so far away. In addition, we investigate the impact of the number of sampled frames. We choose 5, 10, 15 and 20 frames as the length of the input video clip. As we can see in Figure 5, more sampled frames always improve the action recognition accuracy. This demonstrates that longer temporal information benefits video understanding. With 20 input frames and a maxStride of 6, our method can have a temporal coverage of over 120 frames, which is about 5 seconds. Such a time duration is enough for analyzing most actions or events. For the UCF101 and HMDB51 datasets, 5 seconds can cover the entire video. Fig. 5: Action recognition accuracy on ActivityNet. We observe that the longer temporal context we utilize, the better performance we obtain.
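Since Equation 3 itself appears earlier in the paper, we give only one plausible reading of the training-time sampler here; the signature, the start-frame handling and the clamping for short videos are our assumptions.

```python
import random

def random_temporal_skipping(num_frames, clip_len=20, max_stride=6):
    # Draw clip_len frame indices with an independent random stride of
    # 0..max_stride skipped frames between consecutive samples; a
    # 20-frame clip can thus span up to 20 * (6 + 1) = 140 raw frames.
    span = clip_len * (max_stride + 1)
    start = random.randrange(max(1, num_frames - span + 1))
    indices, t = [], start
    for _ in range(clip_len):
        indices.append(min(t, num_frames - 1))  # clamp for short videos
        t += 1 + random.randrange(max_stride + 1)
    return indices
```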
Comparison to State-of-the-Art We compare our method to the recent state-of-the-art on the six video benchmarks. As shown in Table 3, our proposed random temporal skipping is an effective data augmentation technique, which leads to the top performance on all evaluation datasets. For the trimmed video datasets, we obtain performance improvements of 0.8% on UCF101, 1.4% on HMDB51 and 1.4% on Kinetics. Because the videos are trimmed and short, we do not benefit much from learning longer temporal information. The improvement on UCF101 is smaller as the accuracy is already saturated on this dataset; yet, our simple random temporal skipping strategy can improve it further. For the three untrimmed video datasets, we obtain significant improvements: 1.8% on ActivityNet, 4.5% on VIRAT 1.0 and 3.0% on VIRAT 2.0. This demonstrates the importance of multirate video analysis in complex real-world applications, and the effectiveness of our method. We could adapt our approach to real-time action localization due to the precise temporal boundary modeling. A recent work, I3D [1], reports higher accuracy on UCF101 (98.0%) and HMDB51 (80.7%). However, it uses additional training data ([10]) and the network is substantially deeper, so this is not a fair comparison to the above approaches. In addition, we would like to note that our approach is real-time because no pre-computation of optical flow is needed. We are only about 1% worse than I3D, but 14 times faster. Conclusion In this work, we propose a simple yet effective strategy, termed random temporal skipping, to handle multirate videos. It benefits the analysis of long untrimmed videos by capturing longer temporal contexts, and of short trimmed videos by providing extra temporal augmentation. A model trained with random temporal skipping is robust at inference time: we can use just one model to handle multiple frame-rates without further fine-tuning. We also introduce an occlusion-aware CNN to estimate better optical flow for action recognition on-the-fly. Our network can run in real-time and obtains state-of-the-art performance on six large-scale video benchmarks. In the future, we would like to improve our framework in several directions. First, due to the inability of CNNs to learn large motions between distant frames, we will incorporate recurrent neural networks into our framework to handle even longer temporal contexts. Second, we will apply our method to online event detection since our model has a good trade-off between efficiency and accuracy. Third, we will study the fusion of the two streams and compare to recent spatiotemporal feature learning work [25,30].
4,416
1810.12483
2964301261
As automatic optimization techniques find their way into industrial applications, the behavior of many complex systems is determined by some form of planner picking the right actions to optimize a given objective function. In many cases, the mapping of plans to objective reward may change due to unforeseen events or circumstances in the real world. In those cases, the planner usually needs some additional effort to adjust to the changed situation and reach its previous level of performance. Whenever we still need to continue polling the planner even during re-planning, it oftentimes exhibits severely lacking performance. In order to improve the planner's resilience to unforeseen change, we argue that maintaining a certain level of diversity amongst the considered plans at all times should be added to the planner's objective. Effectively, we encourage the planner to keep alternative plans to its currently best solution. As an example case, we implement a diversity-aware genetic algorithm using two different metrics for diversity (differing in their generality) and show that the blow in performance due to unexpected change can be severely lessened in the average case. We also analyze the parameter settings necessary for these techniques in order to gain an intuition of how they can be incorporated into larger frameworks or process models for software and systems engineering.
The author of @cite_27 describes a problem setting not unlike the one presented in this paper, i.e., the combination of maintaining diversity and searching in a changing environment. The issue of premature convergence is tackled by integrating a certain amount of random search into the genetic algorithm by performing hyper-mutation. This has since become standard procedure and is included in all genetic algorithms presented in this paper, which aims to further improve the resilience of the search process.
{ "abstract": [ "Genetic algorithms perform an adaptive search by maintaining a population of candidate solutions that are allocated dynamically to promising regions of the search space. The distributed nature of the genetic search provides a natural source of power for searching in changing environments. As long as sufficient diversity remains in the population the genetic algorithm can respond to a changing response surface by reallocating future trials. However, the tendency of genetic algorithms to converge rapidly reduces their ability to identify regions of the search space that might suddenly become more attractive as the environment changes. This paper presents a modification of the standard generational genetic algorithm that is designed to maintain the diversity required to track a changing response surface. An experimental study shows some promise for the new technique." ], "cite_N": [ "@cite_27" ], "mid": [ "1555154718" ] }
Preparing for the Unexpected: Diversity Improves Planning Resilience in Evolutionary Algorithms
Abstract-As automatic optimization techniques find their way into industrial applications, the behavior of many complex systems is determined by some form of planner picking the right actions to optimize a given objective function. In many cases, the mapping of plans to objective reward may change due to unforeseen events or circumstances in the real world. In those cases, the planner usually needs some additional effort to adjust to the changed situation and reach its previous level of performance. Whenever we still need to continue polling the planner even during re-planning, it oftentimes exhibits severely lacking performance. In order to improve the planner's resilience to unforeseen change, we argue that maintaining a certain level of diversity amongst the considered plans at all times should be added to the planner's objective. Effectively, we encourage the planner to keep alternative plans to its currently best solution. As an example case, we implement a diversity-aware genetic algorithm using two different metrics for diversity (differing in their generality) and show that the blow in performance due to unexpected change can be severely lessened in the average case. We also analyze the parameter settings necessary for these techniques in order to gain an intuition of how they can be incorporated into larger frameworks or process models for software and systems engineering. Index Terms-planning, unexpected events, dynamic fitness, resilience, robustness, self-protection, self-healing, diversity, optimization, evolutionary algorithms I. INTRODUCTION As automatic optimization in various forms makes its way into industrial systems, there is a wide range of expectations about the upcoming capabilities of future "smart systems" [1]-[5]. For most of the current applications, the optimization part of the system takes place offline, i.e., not while the application is actually performing its main purpose: The product shipped to the customer is fixed after initial training and does not self-adapt (anymore). Instead, it may only gather data that is then used at the vendor's side to either improve the product's performance via software updates later on or assist in building the product's successor. This, of course, misses out on interesting applications that may highly benefit from further optimization even while they are running. In this paper, we focus on the exemplary case of a layout configuration for the positioning of work stations inside a (smart) factory: Depending on the products that need to be built and depending on the current status of the machines involved, we may desire different workflows for the same product at different times during the factory's life. For most current factories, however, the arrangement of workstations is planned far in advance and then fixed until human intervention. One of the reasons for opting for offline adaptation is that the vendor usually has access to more computational power and that the employed adaptation process can benefit from connecting data input from a variety of customers. However, increasing computational resources and online connectivity mitigate these issues. A possibly more important aspect is the issue of consistent performance: An online planner, while theoretically able to react to sudden changes in its environment and/or objective, may take some time to reach good plans and during that time the solutions provided by the planner may be unsuitable.
a) Expected Change: The usefulness and importance of self-optimization at the customer's side has already been claimed in the original vision of autonomic computing [6] and has been shown on many occasions since [3], [7], [8]. In these cases, self-optimization usually refers to a process of specialization, i.e., the system is built with a large variety of possible use cases in mind and learns to work best for the few of these it actually faces on site. Intuitively, we may want to build a planner that works on factory layouts in general and that can then specialize on the specific needs of a single factory or a single situation (machine failure, e.g.) if necessary. We expect this approach to work iff every possible situation and every pair of follow-up situations is considered when evaluating a factory layout. As long as we know that machines might fail with a certain probability, we can take this into account and plan redundantly with respect to machine usage. This is what we call expected change of the evaluation function. b) Unexpected Change: Still, we may not want our self-optimizing planner to completely break on any deviation from the specified scenarios. We imagine that intelligent planners should invest a certain amount of effort to think about and prepare for "what ifs", even when the respective scenarios have not been expected to happen during system design or training. This is further motivated by the fact that many industry applications require the adaptive component to produce a solution better than a certain quality threshold but do not benefit as much from the system finding configurations that are just slightly better beyond that threshold. Instead, that computational effort might be better put into finding alternative solutions that might not be quite as good as the primary solution that was just found, but then again might be feasible even when the primary solution fails for some unexpected reason. This argument falls in line with the claim of self-protection for autonomic systems [6]: Our system should not only be able to react to and recover from negative external influences but also spend a reasonable effort on actively preparing for negative events. Via this self-protection property we aim to increase the overall resilience of the planning process and, by extension, the robustness of the system using our planner. c) Scope of This Work: As the original contribution of this paper, we identify that diversity in evolutionary algorithms, which we consider a primary example of heuristic optimization algorithms in this paper, is of central importance for the algorithm's reaction to change, and that explicitly optimizing for diversity helps to prepare for changes, even when they cannot be foreseen by the optimization process in any way. We introduce means to formally define the phenomenon of unexpected change in relation to an online planner. To this end, we first formally define the notions of change and unexpectedness that we used intuitively until now (Section II). We then immediately turn to an example of a smart factory domain in which unexpected change might occur and specify our experimental setup (Section III). We introduce our approach to maintaining diversity using two different diversity metrics (Section IV) and sum up the results of applying this approach in the previously defined experiment (Section V) before we discuss related work (Section VI) and conclude this paper (Section VII). II.
FOUNDATIONS We assume that to realize modern challenges in industry, software products need to feature a certain degree of autonomy, i.e., they feature at least one component called a planner that is capable of making decisions by providing a plan of actions which the system is supposed to perform to best fulfill its intended goal [8], [9]. This goal is encoded by providing the system with a fitness function that can be used to evaluate plans. A planner respecting a fitness function performs self-optimization. We claim that for many real-world applications it is often not only important to eventually adapt to new circumstances but also to avoid causing any major damage to overall success while adapting. It follows that the planner needs to offer a suitable solution at all times, even directly after a change in the environment. This property can be compared to the robustness of classical systems, i.e., the ability to withstand external changes without being steered away too far from good behavior [10]. Robustness can often be tested against a variety of well-defined external influences. However, not every influence a system will be exposed to can be foreseen. The notion of resilience captures the system's ability to withstand unanticipated changes [11]. One approach to prepare a system for unexpected circumstances is to make it adapt faster, so that its adaptive component finds a new plan of actions faster once the old one is invalidated. However, this approach is still purely reactive and we thus cannot prevent the immediate impact of change. To increase system resilience, we thus might want the planner to become proactive towards possible changes that may occur to the environment and by extension the planner's objective. In order to lessen the blow of unexpected changes, the planner thus needs to prepare for them before they actually occur. Note that for the changes we are talking about in this section, we still assume that they are unexpected at design time. The planner therefore has no means of predicting when or what is going to happen. Still, we desire for a planner to be caught off-guard as seldom as possible. A planner that needs to re-plan less often would then be considered more resilient with respect to unexpected change. We claim that explicitly increasing planning resilience aids a system's ability to self-protect and is thus a useful handle to explicitly expose to the developers of such a system. a) Planning: Planners perform (usually stochastic) optimization of the system's behavior by finding plans that (when executed) yield increasingly better results with respect to a specified objective. That objective is given via a fitness function f : P × E → R, where P is the domain of all possible plans and E is the domain of environments said plans are to be executed in. For the purpose of this paper, we assume that we want to minimize the real-valued output of the fitness function. We can then describe a planner formally as a function plan : E → P from an environment e ∈ E to a plan p ∈ P with the following semantics: plan(e) ≈ arg min_{p ∈ P} E(f(p, e)). Note that due to the possibly stochastic nature of the environment, and by extension the evaluation of the fitness function f, we compute the expected value E of the application of f. Further note that due to the stochastic nature of the planning methods considered in this paper, we may not actually return the single best result over the domain of all plans, but when the stochastic optimization process works, we expect to yield a result somewhat close to it (described by ≈). To compute a reasonable value for f(p, e), a given plan will usually be executed in a simulated version of e. We call the process of repeatedly calling plan to execute the currently best solution online planning, which implies that we may call it for changing e.
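As one concrete reading of this contract, the sketch below estimates the expected fitness of each candidate plan by sampling a (possibly stochastic) fitness function. All names are illustrative, and a real planner would search the plan space rather than enumerate it.

```python
def plan(environment, candidate_plans, fitness, n_samples=30):
    # Approximate arg min over plans of E(f(p, e)) by a Monte-Carlo
    # estimate; `fitness` may be stochastic (simulated plan execution).
    def expected_fitness(p):
        total = sum(fitness(p, environment) for _ in range(n_samples))
        return total / n_samples
    return min(candidate_plans, key=expected_fitness)
```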
b) Changing Environments: We can write any occurrence of change in the environment as a function c : E → E. Obviously, if we allow any arbitrary change to happen to the environment, we can construct arbitrarily "evil" environments and cause the planner to perform arbitrarily badly. But frankly, we do not care for a planner managing a smart grid's power production to perform well when a meteor destroys Earth. What is much more realistic, and thus much more desirable to prepare for, is change that applies only to parts of the environment. Without looking into the data structure of the environment, we assume that these kinds of changes then only affect the fitness of some possible plans, but do not change the fitness landscape of the domain completely. We thus call a given change function c within a given environment e ∈ E reasonable iff it fulfills the formula: |{p ∈ P : |f(p, e) − f(p, c(e))| > ε}| ≪ |P|. Here, ε describes a small value used as a minimally discernible distance between fitness values. Likewise, the exact meaning of ≪ is to be defined by the use case. From this definition, it follows that a planner can prepare for a reasonable change by finding a good plan among the plans that are not affected by the reasonable change. When the change occurs, it can then provide a "quite good" plan immediately, before it even begins to search for good plans among the changed parts of the domain. Thus, to increase planning resilience, we want our planner not to converge on and around the best optimum it has found so far, but to always keep an eye out for other local optima, even when they do not look as promising at the moment. Note that this behavior can be likened to strategies developed to prevent premature convergence, a problem with metaheuristic search methods that occurs even in static domains [12], [13]. c) Unexpectedness: Even if a planner can prepare for a reasonable change by diversifying, there are often more efficient ways to prepare for expected change: Usually, we would include instances of expected change in the fitness function by simply evaluating the system in the changed environments as well. In that case, the planner can still fully converge on the predicted path of the environment and not spend computational resources on diversification. However, we claim that in most practical applications the future is not completely predictable and changes may happen that the planner cannot anticipate. We define a change function c to be called unexpected iff the planner is not prepared for the change induced, i.e., if the actions it would take in the unchanged environment e differ from the actions it now has to take in the changed environment c(e). Formally, this can be expressed as follows: |{e ∈ E : plan(c(e)) ≉ plan(e)}| ≫ 0. Again, an exact definition of ≫ would need to be derived from specific system requirements. Note that this is a purely extrinsic view of unexpectedness. We want to provide a black-box definition of unexpectedness that does not depend on the internal workings of the planner and is thus as general as possible. The intuition behind it is that if there was a way for the planner to know that and how the change c is going to happen when looking at the environment e, the plan generated via plan(e) would already consider the consequences of said change and thus (to some extent) match the plan for c(e). An empirical check of the reasonableness condition is sketched below.
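A minimal empirical check of the reasonableness condition on a sample of plans might look as follows; the tolerance eps and the fraction standing in for "≪" are free parameters that, as noted above, must be fixed per use case, and all names are hypothetical.

```python
def is_reasonable(change, env, plans, fitness, eps=1e-3, frac=0.05):
    # A change is (empirically) reasonable if only a small fraction of
    # the sampled plans shift in fitness by more than eps.
    changed_env = change(env)
    affected = sum(
        abs(fitness(p, env) - fitness(p, changed_env)) > eps
        for p in plans
    )
    return affected <= frac * len(plans)
```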
III. EXPERIMENT To test the validity of our claims about the importance of diversity for planning resilience, we build a model example in which we try to observe the effects of environmental changes as clearly as possible. a) Scenario: We imagine a smart factory scenario where a work piece carried by a mobile (robotic) agent needs to be processed by a setup of work stations. More specifically, we need to perform the 5 tasks A, B, C, D, E in order on a given work piece as quickly as possible. In order to do so, our factory contains 25 work stations placed randomly on a 500×500 grid structure. Each work station can only perform one of the tasks, so that our factory has 5 identical work stations to use for any specific task. Given a work piece starting at the top left corner of the grid, we need to determine the shortest route the work piece can travel for it to reach exactly one station of each task in the right order. See Figure 1 for a simplified illustration of this setup. For each run of our experiment, we randomly generate an n×m matrix F of work station coordinates, where each row in F corresponds to a task and each column to an identification number for each of the available work stations per task. Thus, in our experimental setup we fix n = 5 and m = 5. b) Genetic Algorithm: In order to find a short path that fulfills our requirements, we employ a genetic algorithm [12]. Closely modeling our problem domain, we define the genome as a 5-dimensional vector v ∈ {0, ..., m − 1}^n so that v_i denotes which of the 5 identical work stations should be visited next in order to fulfill the i-th task, where i = 0 denotes task A, i = 1 denotes task B, and so on. The environment provides a mapping from these v_i to their respective positions on the grid, which is used by a distance function L^E for the environment E to compute the traveling distance between two work stations. We then define a function waycost to compute the overall length of a given path, summing the Manhattan distance L_1^E between all its vertices (with S denoting the start position): waycost(v, E) = L_1^E(S, v_0) + Σ_{i=0}^{n−2} L_1^E(v_i, v_{i+1}). For the standard genetic algorithm, this waycost function is already sufficient as a fitness function f(v, E) = waycost(v, E) to evolve a shorter navigation path. It is important to note that while we closely restrict the search space to paths that cross each type of station exactly once (and in the right order), we do not aid the genetic algorithm by providing any notion of position in space or the closeness of different stations beyond what is encoded in the waycost function above. For the genome defined above, we use the following evolutionary operators: Mutation chooses a value i, 0 ≤ i < n, uniformly at random, then generates a new random value x ∈ {0, ..., m − 1}, assigning v_i := x. Recombination happens through uniform crossover on the vectors of two individuals. Furthermore, for all experiments performed in this paper, we use a mutation rate of 0.1 per individual to provide strong random input and a crossover rate of 0.3. That means that with a chance of 30% per individual, that individual is selected as a first mate for recombination. Two potential mates are then randomly selected from the population; the fitter one is used as the partner for crossover. We further augment the search by randomly generating some new individuals from scratch each generation. This process (also called hyper-mutation [14]) happens with a chance of 0.1 per individual in the population.
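A compact sketch of this setup, under the assumption that stations are stored as (x, y) tuples in a 5×5 nested list F; the helper names are ours, not the paper's.

```python
import random

N_TASKS, N_STATIONS = 5, 5   # tasks A..E, 5 interchangeable stations each
START = (0, 0)               # the work piece enters at the top-left corner

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def waycost(v, F):
    # F[i][j] is the grid position of station j for task i; v[i] picks
    # which of the identical stations handles task i.
    path = [START] + [F[i][v[i]] for i in range(N_TASKS)]
    return sum(manhattan(p, q) for p, q in zip(path, path[1:]))

def mutate(v):
    # Re-draw the station choice of one randomly selected task.
    w = list(v)
    w[random.randrange(N_TASKS)] = random.randrange(N_STATIONS)
    return w

def uniform_crossover(a, b):
    # Each position is inherited from either parent with equal chance.
    return [random.choice(pair) for pair in zip(a, b)]
```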
c) Random Change: The crucial point of this experimental setup is the occurrence of a random change of environmental circumstances. The present experimental setup is fixed to an evaluation time of 100 generations, as earlier experiments have shown that our setup of an evolutionary algorithm can easily converge in under 50 generations. We then define a function for unexpected change c_A, which chooses A factory stations at random and effectively disables them. This is implemented by repositioning them to an area far off the usual factory area by adding (2500, 2500) to their respective coordinates (a sketch of this change function is given below). This means that while the plans containing the removed stations are still theoretically feasible and can be assigned a valid waycost, the increase in waycost is so high that no plan containing any of the removed stations should be able to compete with plans contained within the actual factory area when it comes to evolutionary selection. From a random initial factory layout F we generate two changed factory layouts F_1 = c_A(F) and F_2 = c_A(F) by applying the randomized change function c_A. Because we want to be able to compare the scale of fitness values before and after the unexpected change more easily, we start the evolutionary algorithm on the factory configuration F_1 that is already "missing" a few stations. After 50 generations, we switch to factory configuration F_2, which has A stations disabled as well, but probably different ones. (It is important to note that this setup means that in many cases none of the stations that go bad during the switch are even included in the best path found by the genetic algorithm. In these cases, the evolutionary process does not have to adapt in any way. In order to analyze the cases when the removal of stations actually does make a huge difference, we need to execute the experiment multiple times. We chose this approach because it allows us to use an unbiased change function as opposed to a change function that specifically targets the workstations actually used throughout the experiment. The realm of biased, even directly adversarial change functions is an interesting topic of future research.) Note that this change is reasonable for small A (according to the definition above) because it only affects the fitness of a maximum of 2·A possible plans, i.e., those plans which include at least one of the "wrong" machines in F_1 or F_2. Furthermore, the change is unexpected, as the shake-up of the stations' positioning is communicated to the evolutionary algorithm only via the change of the waycost function's values in its fitness evaluation step and thus leaves the adaptation process without any chance of anticipating that event. Nonetheless, the individuals of the evolutionary process are constantly evaluated according to their fitness in the current state of affairs, thus forcing them to adapt to the new situation in order to maintain previously reached levels of fitness.
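The change function c_A referenced above can be sketched as follows; the layout representation matches the earlier sketch and is an assumption on our part.

```python
import random

def c_A(F, A):
    # Disable A randomly chosen stations by shifting them far outside
    # the 500x500 factory area (offset (2500, 2500), as described above).
    G = [row[:] for row in F]                     # leave the input intact
    cells = [(i, j) for i in range(len(G)) for j in range(len(G[0]))]
    for i, j in random.sample(cells, A):
        x, y = G[i][j]
        G[i][j] = (x + 2500, y + 2500)
    return G
```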
IV. APPROACH We attempt to solve the problem described above using evolutionary algorithms. Evolutionary algorithms have already been applied successfully to many instances of online adaptation, i.e., problems with a changing fitness function [15]-[17]. They are an instance of metaheuristic search algorithms and work by emulating natural evolution. a) Diversity in Genetic Algorithms: In the standard scenario, once the fitness function changes, previously good solutions can possibly be evaluated to have very bad fitness and are thus removed from the evolutionary process. However, if the genetic search has already converged to a local optimum, it can be very hard for the search process to break out of it, because when all known solutions lie very closely together in the solution space, there is no clear path along which the population must travel in order to improve. The problem of a genetic search getting stuck in a local optimum with little chance of reaching the global optimum (or at least much better local ones) is called premature convergence [12]. It is known that the diversity among the members of the population has a strong impact on the evolutionary process's likelihood to converge too early. The Diversity-Guided Evolutionary Algorithm (DGEA) observes a population's diversity throughout the evolutionary process and takes action when it falls below a given threshold [18]. For online genetic algorithms, we show that maintaining a certain level of diversity throughout the population helps to react better to the change occurring in the environment. To this end, we apply two possible measurements for diversity, which we will both test for the above scenario. In either case, we transform the genetic algorithm's fitness function into a multi-objective optimization problem [13], [19], [20] with a weighting parameter λ, yielding a fitness function f depending on the individual to be evaluated v, the environment E, and the population P as a whole: f(v, E, P) = waycost(v, E) + λ · similaritycost(v, P). It is important to note that in order to meaningfully define the diversity of one individual, we need to compare it to the rest of the population, causing us to introduce the population P as an additional parameter to the fitness function. (In general, we might want to approximate this comparison by using a sample drawn from the population or another estimate instead. Likewise, we could consider computing diversity not only against the current generation of individuals but also against a selection of individuals from the past, using for example a "hall of fame" approach [21]. The evaluation of such techniques is left for future research.) The fitness function thus becomes a relative measure with respect to other individuals in the population. This makes it necessary to re-evaluate fitness in each generation even for unchanged individuals. However, since we assume that changes in the environment and thus the fitness function may occur during the online execution of the genetic algorithm anyway, this model seems to fit our situation. We can now define two different diversity measures by providing a definition for the similaritycost function, which penalizes low diversity. b) Domain-Distance Diversity: This can be thought of as the more standard approach to diversity in search and optimization problems. In fact, the authors of [22] show that many common diversity measurements are quite similar to this basic method: We define a simple distance measure between the individuals in the solution space. For a discrete, categorical problem like the one presented here, there is little alternative to just counting the specific differences in some way.
similaritycost_dom(v, P) = −n + Σ_{i=0}^{n−1} Σ_{j=0}^{|P|} C(v_i, P(j)_i), where C(x, y) = 1 if x = y and C(x, y) = 0 otherwise. Note that we write P(j) to access the j-th individual of the population and |P| to represent the number of individuals in a population. We subtract n from the sum because the given individual v ∈ P is still part of the population and thus adds a cost of n by matching itself perfectly. We thus maintain the (cosmetic) property that in a population of completely different individuals, the average similarity is 0. While the implementation of this diversity measure looks pretty straightforward, it requires complete prior knowledge of the search space and thus introduces further dependencies. For example, the above definition is unfit for continuous search spaces, and while a continuous similaritycost function may easily be thought up, optimization problems consisting of a mix of discrete and continuous variables then require more weighting parameters to adequately combine the scales over which the respective similaritycost functions operate.
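Continuing the earlier sketch (same genome representation and waycost), the domain-distance penalty and the combined fitness might look like this; again, the names are ours.

```python
def similaritycost_dom(v, population):
    # Count exact per-position matches against every individual;
    # subtracting N_TASKS cancels v's perfect match with itself.
    matches = sum(
        int(v[i] == w[i]) for w in population for i in range(N_TASKS)
    )
    return matches - N_TASKS

def fitness(v, F, population, lam):
    # Multi-objective fitness from Section IV: primary waycost plus a
    # lambda-weighted penalty for being similar to the population.
    return waycost(v, F) + lam * similaritycost_dom(v, population)
```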
However, the trash bits are still subjected to mutation and recombination, i.e., whenever a specific individual is chosen for mutation, a random mutation is performed on the trash bitstring as well and whenever a recombination operation is executed for two individuals, their trash bitstrings are likewise recombined. In our implementation at hand, we use one-bit flip for mutation and uniform crossover for recombination. Using the definition of a comparison function C as provided above, we can thus define the similaritycost function for genealogical diversity as follows: similaritycost gen (v, P ) = −t + t−1 i=0 |P | j=0 C(v n+i , P (j) n+i ) Again, we subtract t to ignore self-similarity when iterating over the population. It should be noted that when accessing the (n + i)-th component of an individual inside the sum, we are protruding into the dimensions solely populated by trash bits, retrieving the i-th trash bit of said individual. In order to compute the similarity between two individuals, we now only consider the trash bits, for which we always have the same distance metric regardless of the actual problem domain of the original genetic algorithm. Domain logic is only used indirectly, as the measure we estimate can be regarded as the edit distance between two individuals using the genetic operators the evolutionary process is equipped with. However, since the trash bits are inherited by individuals from their parents and without direct selection pressure, they are not biased toward values resulting in higher fitness; yet, they are still a sufficient representation of the genealogy of an individual, as we show in the following section. V. RESULTS In order to evaluate the benefit of the presented approaches, we simulate the different behavior of genetic algorithms when using the presented diversity measures or no diversity measure at all. In order to achieve a meaningful result considering the highly probabilistic nature of the applied method to generate scenarios, we perform the evaluation on 1000 different scenarios. Figure 2 shows the top fitness achieved at a specific point in time by a single run averaged over all 1000 runs. By taking a look at the optimization process as a whole, it can be seen that a great deal of improvement compared to the random initialization is done during the first steps of evolution, giving an estimate of how good the achieved solutions are in relation to "just guessing". In Figure 3 we show the respective diversity measurements from these runs. We can observe that the diversity-aware algorithms show a slower learning rate in the beginning, since they do not only optimize the plotted primary fitness function, but also the diversity function and thus cannot focus as well on better primary results. However, once the environmental change occurs, they are likewise better prepared for a change in fitness and react with a much smaller increase in waycost than the standard genetic algorithm. In a scenario like ours, where a smart factory needs to be able to efficiently dispatch new workpieces at all times, this can be a huge advantage. We observe that following the unexpected change, average diversity first increases as well-established "families" of similar individuals die out. Due to a new convergence process, diversity then drops until good solutions are found. Finally, diversity seems to reach a similar level as before the unexpected change. The "right" amount of diversity is naturally controlled by the parameter λ of the combined fitness function. 
V. RESULTS In order to evaluate the benefit of the presented approaches, we simulate the different behavior of genetic algorithms when using the presented diversity measures or no diversity measure at all. In order to achieve a meaningful result considering the highly probabilistic nature of the applied method to generate scenarios, we perform the evaluation on 1000 different scenarios. Figure 2 shows the top fitness achieved at a specific point in time by a single run, averaged over all 1000 runs. By taking a look at the optimization process as a whole, it can be seen that a great deal of improvement compared to the random initialization is made during the first steps of evolution, giving an estimate of how good the achieved solutions are in relation to "just guessing". In Figure 3 we show the respective diversity measurements from these runs. We can observe that the diversity-aware algorithms show a slower learning rate in the beginning, since they do not only optimize the plotted primary fitness function but also the diversity function, and thus cannot focus as well on better primary results. However, once the environmental change occurs, they are likewise better prepared for a change in fitness and react with a much smaller increase in waycost than the standard genetic algorithm. In a scenario like ours, where a smart factory needs to be able to efficiently dispatch new workpieces at all times, this can be a huge advantage. We observe that following the unexpected change, average diversity first increases as well-established "families" of similar individuals die out. Due to a new convergence process, diversity then drops until good solutions are found. Finally, diversity seems to return to a similar level as before the unexpected change. The "right" amount of diversity is naturally controlled by the parameter λ of the combined fitness function. For these experiments we found the parameters λ = 1500 for domain-dependent diversity and λ = 2500 for genealogical diversity via systematic search. The definition of "right", however, depends on the problem domain. In most practical cases, we expect some (non-functional) requirements to be present which specify the robustness properties we want to uphold. For now, these properties must then be verified via statistical testing. Deriving (statistical or hard) guarantees from a stochastic search process like an evolutionary algorithm is still an interesting topic of future work. Given no further requirements for consistent quality of service, a reasonable setting for λ might achieve that the online planner does not perform worse than a random planner at any point in time, even at the moment of unexpected change. Figures 4 and 5 show the results of that systematic search, including the random population's fitness value before the evolutionary process starts: the fitness achieved by the domain-dependent and the genealogical genetic algorithm, respectively, strongly depends on the choice of the parameter λ, i.e., on how the focus is distributed between the primary objective (small waycost) and the secondary objective (high diversity). Experiments have shown that diversity-aware genetic algorithms can show a variety of behaviors for different λ. Providing an intuition about the effects various settings for λ have on the algorithm's performance, higher values of λ generally cause the evolutionary search to produce less optimal results but to perform more stably when facing unexpected change. For the domain-dependent diversity, this phenomenon shows more strongly, with higher λ values showing almost no impact of the unexpected change but relatively bad results in general. The genealogical diversity approach seems to be a bit more robust to the setting of λ in that it still clearly shows a tendency to optimize over time. We chose to showcase genealogical diversity specifically because it works on a rather domain-independent level and introduces only a few parameters. Furthermore, it is rather robust with respect to the choice of said parameters. For the length of the used bitstring t, Figure 6 shows that for all but the smallest values of t the genetic algorithm performs very similarly. Especially rather large values for t (which still take up very little memory) do not show any deterioration in the planner's behavior, which means that the choice of this parameter can be made rather comfortably. We also analyze how much change a diversity-aware planner can handle. Figure 7 shows the behavior of the three exemplary planners just around the moment of unexpected change for various amounts of change they are subjected to. Naturally, bigger (and thus un-reasonable) change can impact even a diverse system. The increase in costs for the large alterations in the generation-49 line (dashed) shows that at the upper end of the scale we started generating problem instances that generally have fewer good solutions. For more reasonable change (A ≤ 8, which still means that up to 16 out of 25 machine positions may be changed), both diversity-aware algorithms perform comparably and clearly better than the non-diverse planner. Fig. 4. Top fitness for the current generation averaged over only 100 runs each, plotted for λ = 500 · z, z ∈ N, 0 ≤ z < 20, using domain-dependent diversity. The darker the color of the line, the higher the depicted λ value.
Fig. 5. Top fitness for the current generation averaged over only 100 runs each, plotted for λ = 500 · z, z ∈ N, 0 ≤ z < 20, using genealogical diversity. The darker the color of the line, the higher the depicted λ value. Fig. 6. Top fitness for the current generation averaged over 100 runs each, plotted for t = 2^z, z ∈ N, 0 ≤ z < 10, using genealogical diversity. The darker the color of the line, the higher the depicted t value. Most remarkably, the domain-dependent variant manages to cope with changes A ≤ 4 with almost no consequence for its performance. VII. CONCLUSION Since we expect future software systems to be increasingly self-adaptive and self-managing, we can also expect them to feature one or multiple components tasked with online planning. Online planning allows systems to learn to optimize their behavior in the face of a moving target fitness. However, it comes with a few pitfalls, one of which is the fact that even small changes in the target fitness can have detrimental effects on the current plans' performance. It is thus imperative to keep an eye on a healthy level of diversity in our pool of alternative plans. As we have shown, this can severely soften the blow to overall performance, should only a few plans become impractical due to external circumstances. (It still holds that if we allow arbitrary changes in the environment, it is always possible to design a completely new fitness function so that any given instance of an evolutionary process becomes arbitrarily bad with respect to the new, altered fitness function. This is due to the No-Free-Lunch theorem [45]. For realistic scenarios, however, there usually is a limit to how quickly and how drastically the fitness function is expected to change. A thorough analysis of those limits for some practical domains may present an interesting point for further research.) The diversity of a planner functions as a non-functional requirement for classic applications. Certain levels of desired diversity may be specified in order to augment system architectures that revolve around the optimization process of the system in order to provide flexibility on the component level [46]. This should be expected to strongly influence other properties commonly applied to complex self-adaptive systems, like robustness or flexibility. On an application level, the introduced concept of diversity-aware optimization may prove especially useful when the reduction in the amplitude of fitness causes the system behavior to fall below a predefined quality threshold (or to do so more often, at least). A diversity-aware planner might then be able to continue working as usual, as its back-up plans fulfill the required quality agreement just as well, while a non-diverse planner might more often feel the need to stop the execution of its plans (and thus halt the system in general) until it reaches a new plan of acceptable quality. In this case, we may formulate a non-functional requirement such as planning resilience, measuring how frequent and how big unexpected changes need to be in order to push the planner out of its quality requirements. Using the parameter λ, engineers can adjust the focus of the planning component between performance and resilience optimization. How well statistical judgements can be made about said resilience property still needs to be evaluated, though.
It is up to future research to determine how the concept of diversity (especially genealogical diversity) generalizes to other optimization techniques like the cross-entropy method or simulated annealing. One way to integrate these techniques into the framework defined in this paper may be to set up a pool of solution candidates via ensemble learning [47]. Embracing diversity seems especially promising in search-based software testing (SBST), as test suites need to adapt faster to new possible exploits. In DevOps, developers push relatively small updates that need testing more frequently. Nonetheless, the changes applied to the code by the developer usually fall into the category of unexpected change as we defined it in this paper. That means that diverse test generators could possibly adapt more quickly to the new software system under test. The mutual influence between diversity-aware evolutionary algorithms and co-evolutionary approaches may be an interesting point of further research [21]. A similar connection has been found in biological systems [48]. Many of the theoretical foundations explaining the ideal structure of a population for various optimization purposes are still unexplored. For instance, we assumed an unpredictable but neither explicitly hostile nor cooperative environment. Any scenario where the change occurs not only unexpectedly but intentionally is likely to have fundamentally different properties. We focused our study on the implications of using diversity within a planner and how the resilience to environmental change may be indicated in a quantifiable way. We have shown that diversity during planning can aid planning resilience in the face of change. Furthermore, we can employ such methods in a domain-independent way using genealogical diversity and still achieve valuable results. Software engineering frameworks and processes are now needed to expose desired NFRs like planning resilience to the software and system design and to test them adequately.
6,546
1810.12483
2964301261
As automatic optimization techniques find their way into industrial applications, the behavior of many complex systems is determined by some form of planner picking the right actions to optimize a given objective function. In many cases, the mapping of plans to objective reward may change due to unforeseen events or circumstances in the real world. In those cases, the planner usually needs some additional effort to adjust to the changed situation and reach its previous level of performance. Whenever we still need to continue polling the planner even during re-planning, it oftentimes exhibits severely lacking performance. In order to improve the planner's resilience to unforeseen change, we argue that maintaining a certain level of diversity amongst the considered plans at all times should be added to the planner's objective. Effectively, we encourage the planner to keep alternative plans to its currently best solution. As an example case, we implement a diversity-aware genetic algorithm using two different metrics for diversity (differing in their generality) and show that the blow in performance due to unexpected change can be severely lessened in the average case. We also analyze the parameter settings necessary for these techniques in order to gain an intuition of how they can be incorporated into larger frameworks or process models for software and systems engineering.
The preparation for unexpected or previously wrongly modeled change is an important issue for the practical application of machine learning in industry @cite_22 . From an engineer's point of view, the diversity of the population of plans can be regarded as a typical non-functional requirement, with the cost of the plan representing the functional requirement. Applying NFR engineering processes to self-adaptive systems is still a new idea and a clear canon of relevant NFRs for these new challenges has not yet been found @cite_29 @cite_43 .
{ "abstract": [ "We discuss key challenges of software engineering for distributed autonomous real-time systems and introduce a taxonomy for areas of interest with respect to the development of such systems.", "The goal of this roadmap paper is to summarize the state-of-the-art and identify research challenges when developing, deploying and managing self-adaptive software systems. Instead of dealing with a wide range of topics associated with the field, we focus on four essential topics of self-adaptation: design space for self-adaptive solutions, software engineering processes for self-adaptive systems, from centralized to decentralized control, and practical run-time verification & validation for self-adaptive systems. For each topic, we present an overview, suggest future directions, and focus on selected challenges. This paper complements and extends a previous roadmap on software engineering for self-adaptive systems published in 2009 covering a different set of topics, and reflecting in part on the previous paper. This roadmap is one of the many results of the Dagstuhl Seminar 10431 on Software Engineering for Self-Adaptive Systems, which took place in October 2010.", "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (\"avoiding side effects\" and \"avoiding reward hacking\"), an objective function that is too expensive to evaluate frequently (\"scalable supervision\"), or undesirable behavior during the learning process (\"safe exploration\" and \"distributional shift\"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI." ], "cite_N": [ "@cite_43", "@cite_29", "@cite_22" ], "mid": [ "2405769265", "2142486130", "2462906003" ] }
Preparing for the Unexpected: Diversity Improves Planning Resilience in Evolutionary Algorithms
Abstract-As automatic optimization techniques find their way into industrial applications, the behavior of many complex systems is determined by some form of planner picking the right actions to optimize a given objective function. In many cases, the mapping of plans to objective reward may change due to unforeseen events or circumstances in the real world. In those cases, the planner usually needs some additional effort to adjust to the changed situation and reach its previous level of performance. Whenever we still need to continue polling the planner even during re-planning, it oftentimes exhibits severely lacking performance. In order to improve the planner's resilience to unforeseen change, we argue that maintaining a certain level of diversity amongst the considered plans at all times should be added to the planner's objective. Effectively, we encourage the planner to keep alternative plans to its currently best solution. As an example case, we implement a diversity-aware genetic algorithm using two different metrics for diversity (differing in their generality) and show that the blow in performance due to unexpected change can be severely lessened in the average case. We also analyze the parameter settings necessary for these techniques in order to gain an intuition of how they can be incorporated into larger frameworks or process models for software and systems engineering. Index Terms-planning, unexpected events, dynamic fitness, resilience, robustness, self-protection, self-healing, diversity, optimization, evolutionary algorithms I. INTRODUCTION As automatic optimization in various forms makes its way into industrial systems, there is a wide range of expectations about the upcoming capabilities of future "smart systems" [1]-[5]. For most of the current applications, the optimization part of the system takes place offline, i.e., not while the application is actually performing its main purpose: The product shipped to the customer is fixed after initial training and does not self-adapt (anymore). Instead, it may only gather data that is then used at the vendor's side to either improve the product's performance via software updates later on or assist in building the product's successor. This, of course, misses out on interesting applications that may highly benefit from further optimization even while they are running. In this paper, we focus on the exemplary case of a layout configuration for the positioning of work stations inside a (smart) factory: Depending on the products that need to be built and depending on the current status of the machines involved, we may desire different workflows for the same product at different times during the factory's life. For most current factories, however, the arrangement of workstations is planned far in advance and then fixed until human intervention. One of the reasons for opting for offline adaptation is that the vendor usually has access to more computational power and that the employed adaptation process can benefit from connecting data input from a variety of customers. However, increasing computational resources and online connectivity mitigate these issues. A possibly more important aspect is the issue of consistent performance: An online planner, while theoretically able to react to sudden changes in its environment and/or objective, may take some time to reach good plans and during that time the solutions provided by the planner may be unsuitable.
a) Expected Change: The usefulness and importance of self-optimization at the customer's side have already been claimed in the original vision of autonomic computing [6] and have been shown on many occasions since [3], [7], [8]. In these cases, self-optimization usually refers to a process of specialization, i.e., the system is built with a large variety of possible use cases in mind and learns to work best for the few of these it actually faces on site. Intuitively, we may want to build a planner that works on factory layouts in general and that can then specialize on the specific needs of a single factory or a single situation (machine failure, e.g.) if necessary. We expect this approach to work iff every possible situation and every pair of situation and follow-up situation is considered when evaluating a factory layout. As long as we know that machines might fail with a certain probability, we can take this into account and plan redundantly with respect to machine usage. This is what we call expected change of the evaluation function.

b) Unexpected Change: Still, we may not want our self-optimizing planner to completely break on any deviation from the specified scenarios. We imagine that intelligent planners should invest a certain amount of effort to think about and prepare for "what ifs", even when the respective scenarios have not been expected to happen during system design or training. This is further motivated by the fact that many industry applications require the adaptive component to produce a solution better than a certain quality threshold but do not benefit as much from the system finding configurations that are just slightly better beyond that threshold. Instead, that computational effort might be better put into finding alternative solutions that might not be just as good as the primary solution that was just found, but then again might be feasible even when the primary solution fails for some unexpected reason. This argument falls in line with the claim of self-protection for autonomic systems [6]: Our system should not only be able to react and recover from negative external influences but also spend a reasonable effort on actively preparing for negative events. Via this self-protection property we aim to increase the overall resilience of the planning process and by extent the robustness of the system using our planner.

c) Scope of This Work: As the original contribution of this paper we identify that diversity in evolutionary algorithms, which we consider a primary example of heuristic optimization algorithms in this paper, is of central importance for the algorithm's reaction to change and that explicitly optimizing for diversity helps to prepare for changes, even when they cannot be foreseen by the optimization process in any way. We introduce means to formally define the phenomenon of unexpected change in relation to an online planner. To this end, we first formally define the notions of change and unexpectedness that we used intuitively until now (Section II). We then immediately turn to an example of a smart factory domain in which unexpected change might occur and specify our experimental setup (Section III). We introduce our approach at maintaining diversity using two different diversity metrics (Section IV) and sum up the results of applying this approach in the previously defined experiment (Section V) before we discuss related work (Section VI) and conclude this paper (Section VII).
II. FOUNDATIONS

We assume that to realize modern challenges in industry, software products need to feature a certain degree of autonomy, i.e., they feature at least one component called planner capable of making decisions by providing a plan of actions which the system is supposed to perform to best fulfill its intended goal [8], [9]. This goal is encoded by providing the system with a fitness function that can be used to evaluate plans. A planner respecting a fitness function performs self-optimization. We claim that for many real-world applications it is often not only important to eventually adapt to new circumstances but also to avoid causing any major damage to overall success while adapting. It follows that the planner needs to offer a suitable solution at all times, even directly after a change in the environment. This property can be compared to the robustness of classical systems, i.e., the ability to withstand external changes without being steered away too far from good behavior [10]. Robustness can often be tested against a variety of well-defined external influences. However, not every influence a system will be exposed to can be foreseen. 1 The notion of resilience captures the system's ability to withstand unanticipated changes [11]. 2 One approach to prepare a system for unexpected circumstances is to make it adapt faster, so that its adaptive component finds a new plan of actions faster once the old one is invalidated. However, this approach is still purely reactive and we thus cannot prevent the immediate impact of change. To increase system resilience, we thus might want the planner to become proactive towards possible changes that may occur to the environment and by extension the planner's objective. In order to lessen the blow of unexpected changes, the planner thus needs to prepare for them before they actually occur. Note that for the changes we are talking about in this section, we still assume that they are unexpected at design time. The planner therefore has no means of predicting when or what is going to happen. Still, we desire for a planner to be caught off-guard as seldom as possible. A planner that needs to re-plan less often would then be considered more resilient with respect to unexpected change. We claim that explicitly increasing planning resilience aids a system's ability to self-protect and is thus a useful handle to explicitly expose to the developers of such a system.

a) Planning: Planners perform (usually stochastic) optimization on the system's behavior by finding plans that (when executed) yield increasingly better results with respect to a specified objective. That objective is given via a fitness function f : P × E → R, where P is the domain of all possible plans and E is the domain of environments said plans are to be executed in. For the purpose of this paper, we assume that we want to minimize the real-valued output of the fitness function. We can then describe a planner formally as a function plan : E → P from an environment e ∈ E to a plan p ∈ P with the following semantics: plan(e) ≈ arg min_{p ∈ P} E(f(p, e)). Note that due to the possibly stochastic nature of the environment and by extension the evaluation of the fitness function f, we compute the expected value E of the application of f.
Further note that due to the stochastic nature of the planning methods considered in this paper, we may not actually return the single best result over the domain of all plans, but when the stochastic optimization process works, we expect to yield a result somewhat close (described by ≈). To compute a reasonable value for f(p, e), a given plan will usually be executed in a simulated version of e. We call the process of repeatedly calling plan to execute the currently best solution online planning, which implies that we may call it for changing e.

b) Changing Environments: We can write any occurrence of change in the environment as a function c : E → E. Obviously, if we allow any arbitrary change to happen to the environment, we can construct arbitrarily "evil" environments and cause the planner to perform arbitrarily badly. But frankly, we do not care for a planner managing a smart grid's power production to perform well when a meteor destroys Earth. What is much more realistic and thus much more desirable to prepare for, however, is changes that apply only to parts of the environment. Without looking into the data structure of the environment, we assume that these kinds of changes then only affect the fitness of some possible plans, but do not change the fitness landscape of the domain completely. We thus call a given change function c within a given environment e ∈ E reasonable iff it fulfills the formula: |{p ∈ P : |f(p, e) − f(p, c(e))| > ε}| ≪ |P|. Here, ε describes a small value used as a minimally discernible distance between fitness values. Likewise, the exact meaning of ≪ is to be defined by the use case. From this definition, it follows that a planner can prepare for a reasonable change by finding a good plan among the plans that are not affected by the reasonable change. When the change occurs, it can then provide a "quite good" plan immediately before it even begins to search for good plans among the changed parts of the domain. Thus, to increase planning resilience, we want our planner to not converge on and around the best optimum it has found so far, but to always keep an eye out for other local optima, even when they do not look as promising at the moment. Note that this behavior can be likened to strategies developed to prevent premature convergence, a problem with metaheuristic search methods that occurs even in static domains [12], [13].

c) Unexpectedness: Even if a planner can prepare for a reasonable change by diversifying, there are often more efficient ways to prepare for expected change: Usually, we would include instances of expected change into the fitness function by simply evaluating the system in the changed environments as well. In that case, the planner can still fully converge on the predicted path of the environment and not spend computational resources on diversification. However, we claim that in most practical applications the future is not completely predictable and changes may happen that the planner cannot anticipate. We define a change function c to be called unexpected iff the planner is not prepared for the change induced, i.e., if the actions it would take in the unchanged environment e differ from the actions it now has to take in the changed environment c(e). Formally, this can be expressed as follows: |{e ∈ E : plan(c(e)) ≉ plan(e)}| ≫ 0. Again, an exact definition of ≫ would need to be derived from specific system requirements. Note that this is a purely extrinsic view on unexpectedness.
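To make the two conditions tangible, the following sketch casts them as executable checks under strong simplifying assumptions: a finite list of plans, a small sample of environments, exhaustive search standing in for the stochastic planner, fixed thresholds as our reading of ≪ and ≫, and plain equality as a reading of ≈. All names here (expected_fitness, is_reasonable, is_unexpected) are our own illustration, not part of the paper's formalism.

```python
def expected_fitness(f, p, e, samples=30):
    """Estimate E(f(p, e)) for a stochastic fitness function by sampling."""
    return sum(f(p, e) for _ in range(samples)) / samples

def plan(f, plans, e):
    """Approximates argmin over all plans of the expected fitness; a real
    online planner would use stochastic optimization, not exhaustive search."""
    return min(plans, key=lambda p: expected_fitness(f, p, e))

def is_reasonable(f, plans, e, c, eps=1e-3, fraction=0.05):
    """Change c is reasonable if few plans change fitness by more than eps;
    '<<' is read here as 'less than a fixed fraction of the plan space'."""
    changed = sum(1 for p in plans
                  if abs(expected_fitness(f, p, e) -
                         expected_fitness(f, p, c(e))) > eps)
    return changed < fraction * len(plans)

def is_unexpected(f, plans, envs, c):
    """Extrinsic check: c is unexpected if the planner's choice differs
    before and after the change for at least one sampled environment."""
    return any(plan(f, plans, c(e)) != plan(f, plans, e) for e in envs)
```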
We want to provide a blackbox definition of unexpectedness that does not depend on the internal workings of the planner and is thus as general as possible. The intuition behind it is that if there was a way for the planner to know that and how the change c is going to happen when looking at the environment e, the plan generated via plan(e) would already consider the consequences of said change and thus (to some extent) match the plan for c(e). 3

III. EXPERIMENT

To test the validity of our claims about the importance of diversity for planning resilience, we build a model example in which we try to observe the effects of environmental changes as clearly as possible.

a) Scenario: We imagine a smart factory scenario where a work piece carried by a mobile (robotic) agent needs to be processed by a setup of work stations. More specifically, we need to perform the 5 tasks A, B, C, D, E in order on a given work piece as quickly as possible. In order to do so, our factory contains 25 work stations placed randomly on a 500×500 grid structure. Each work station can only perform one of the tasks, so that our factory has 5 identical work stations to use for any specific task. Given a work piece starting at the top left corner of the grid, we need to determine the shortest route the work piece can travel for it to reach exactly one station of each task in the right order. See Figure 1 for a simplified illustration of this setup. For each run of our experiment, we randomly generate an n×m matrix F of work station coordinates where each row in F corresponds to a task and each column to an identification number for each of the available work stations for each task. Thus, in our experimental setup we fix n = 5 and m = 5.

b) Genetic Algorithm: In order to find a short path that fulfills our requirements, we employ a genetic algorithm [12]. Closely modeling our problem domain, we define the genome as a 5-dimensional vector v ∈ {0, ..., m − 1}^n so that v_i denotes which of the 5 identical work stations should be visited next in order to fulfill the i-th task, where i = 0 denotes task A, i = 1 denotes task B, and so on. The environment provides a mapping from these v_i to their respective positions on the grid, which is used by a distance function L^E for the environment E to compute the traveling distance between two work stations. We then define a function waycost to compute the overall length of a given path, summing the Manhattan distance 4 L^E_1 between all its vertices: waycost(v, E) = L^E_1(S, v_0) + Σ_{i=0}^{n−2} L^E_1(v_i, v_{i+1}), where S denotes the work piece's starting position. For the standard genetic algorithm, this waycost function is already sufficient as a fitness function f(v, E) = waycost(v, E) to evolve a shorter navigation path. It is important to note that while we closely restrict the search space to paths that cross each type of station exactly once (and in the right order), we do not aid the genetic algorithm by providing any notion of position in space or the closeness of different stations beyond what is encoded in the waycost function above. For the genome defined above, we use the following evolutionary operators: Mutation chooses a value i, 0 ≤ i < n, uniformly at random, then generates a new random value x ∈ {0, ..., m − 1}, assigning v_i := x. Recombination happens through uniform crossover on the vectors of two individuals. Furthermore, for all experiments performed in this paper, we use a mutation rate of 0.1 per individual to provide strong random input and a crossover rate of 0.3.
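The scenario and the operators described above translate directly into a few lines of Python; the following is a minimal sketch under the stated parameters (n = m = 5, a 500×500 grid), with variable names of our own choosing rather than the authors' implementation.

```python
import random

N_TASKS, N_STATIONS, GRID = 5, 5, 500

# F[i][j] = (x, y) position of the j-th station capable of task i
F = [[(random.randrange(GRID), random.randrange(GRID))
      for _ in range(N_STATIONS)] for _ in range(N_TASKS)]
START = (0, 0)  # the work piece enters at the top left corner of the grid

def manhattan(a, b):
    """The L1 distance used by the waycost function."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def waycost(v, F):
    """Path length START -> F[0][v[0]] -> F[1][v[1]] -> ... -> F[4][v[4]]."""
    cost = manhattan(START, F[0][v[0]])
    for i in range(N_TASKS - 1):
        cost += manhattan(F[i][v[i]], F[i + 1][v[i + 1]])
    return cost

def mutate(v):
    """Choose one gene uniformly at random and assign a new station index."""
    w = list(v)
    w[random.randrange(N_TASKS)] = random.randrange(N_STATIONS)
    return w

def uniform_crossover(a, b):
    """Each gene is taken from either parent with equal probability."""
    return [random.choice(pair) for pair in zip(a, b)]
```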
A crossover rate of 0.3 means that, with a chance of 30% per individual, that individual is selected as a first mate for recombination. Two potential mates are then randomly selected from the population: the fitter one is used as the partner for crossover. We further augment the search by randomly generating some new individuals from scratch each generation. This process (also called hyper-mutation [14]) happens with a chance of 0.1 per individual in the population.

c) Random Change: The crucial point of this experimental setup is the occurrence of a random change of environmental circumstances. The present experimental setup is fixed to an evaluation time of 100 generations, as earlier experiments have shown that our evolutionary algorithm setup can easily converge in under 50 generations. We then define a function for unexpected change c_A, which chooses A factory stations at random and effectively disables them. This is implemented by repositioning them to an area far off the usual factory area by adding (2500, 2500) to their respective coordinates. This means that while the plans containing the removed stations are still theoretically feasible and can be assigned a valid waycost, the increase in waycost is so high that no plan containing any of the removed stations should be able to compete with plans contained within the actual factory area when it comes to evolutionary selection. From a random initial factory layout F we generate two changed factory layouts F_1 = c_A(F), F_2 = c_A(F) by applying the randomized change function c_A. Because we want to be able to compare the scale of fitness values before and after the unexpected change more easily, we start the evolutionary algorithm on the factory configuration F_1 that is already "missing" a few stations. After 50 generations, we switch to factory configuration F_2, which has A stations disabled as well, but probably different ones. 5 [Footnote 5: It is important to note that this setup means that in many cases none of the stations that go bad during the switch are even included in the best path found by the genetic algorithm. In these cases, the evolutionary process does not have to adapt in any way. In order to analyze the cases when the removal of stations actually does make a huge difference, we need to execute the experiment multiple times. We chose this approach because it allows us to use an unbiased change function as opposed to a change function that specifically targets the workstations actually used throughout the experiment. The realm of biased, even directly adversarial change functions is an interesting topic of future research.] Note that this change is reasonable for small A (according to the definition above) because it only affects the fitness of a maximum of 2 * A possible plans, i.e., those plans which include at least one of the "wrong" machines in F_1 or F_2. Furthermore, the change is unexpected as the shakeup of the stations' positioning is communicated to the evolutionary algorithm only via the change of the waycost function's values in its fitness evaluation step and thus leaves the adaptation process without any chance of anticipating that event. Nonetheless, the individuals of the evolutionary process are constantly evaluated according to their fitness in the current state of affairs, thus forcing them to adapt to the new situation in order to maintain previously reached fitness levels.
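A possible implementation of the change function c_A, reusing the layout matrix F from the previous sketch (again our own illustration, not the authors' code):

```python
import random

def c_A(F, A, offset=(2500, 2500)):
    """Disable A randomly chosen stations by moving them far outside the
    factory area; paths through them stay valid but become uncompetitive."""
    F2 = [row[:] for row in F]  # leave the original layout untouched
    slots = [(i, j) for i in range(len(F)) for j in range(len(F[0]))]
    for i, j in random.sample(slots, A):
        x, y = F2[i][j]
        F2[i][j] = (x + offset[0], y + offset[1])
    return F2

# As in the experiment: start on F1, switch to F2 after 50 generations.
# F1, F2 = c_A(F, A), c_A(F, A)
```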
IV. APPROACH

We attempt to solve the problem described above using evolutionary algorithms. Evolutionary algorithms have already been applied successfully to many instances of online adaptation, i.e., problems with a changing fitness function [15]-[17]. They are an instance of metaheuristic search algorithms and work by emulating natural evolution.

a) Diversity in Genetic Algorithms: In the standard scenario, once the fitness function changes, previously good solutions can possibly be evaluated to have very bad fitness and are thus removed from the evolutionary process. However, if the genetic search has already converged to a local optimum, it can be very hard for the search process to break out of it, because when all known solutions lie very closely together in the solution space, there is no clear path along which the population must travel in order to improve. The problem of a genetic search getting stuck in a local optimum with little chance to reach the global optimum (or at least much better local ones) is called premature convergence [12]. It is known that the diversity among the members in the population has a strong impact on the evolutionary process's likelihood to converge too early. The Diversity-Guided Evolutionary Algorithm (DGEA) observes a population's diversity throughout the evolutionary process and takes action when it falls below a given threshold [18]. For online genetic algorithms, we show that maintaining a certain level of diversity throughout the population helps to react better to the change occurring in the environment. To this end, we apply two possible measurements for diversity, which we will both test for the above scenario. In either case, we transform the genetic algorithm's fitness function into a multi-objective optimization problem [13], [19], [20] with a weighting parameter λ, yielding a fitness function f depending on the individual to be evaluated v, the environment E, and the population P as a whole: f(v, E, P) = waycost(v, E) + λ * similaritycost(v, P). It is important to note that in order to meaningfully define the diversity of one individual, we need to compare it to the rest of the population, causing us to introduce the population P as an additional parameter to the fitness function. 6 [Footnote 6: In general, we might want to approximate this comparison by using a sample drawn from the population or another estimate instead. Likewise, we could consider computing diversity not only against the current generation of individuals but also against a selection of individuals from the past, using for example a "hall of fame" approach [21]. The evaluation of such techniques is left for future research.] The fitness function thus becomes a relative measure with respect to other individuals in the population. This makes it necessary to re-evaluate fitness in each generation even for unchanged individuals. However, since we assume changes in the environment and thus the fitness function may occur during the online execution of the genetic algorithm anyway, this model seems to fit our situation. We can now define two different diversity measures by providing a definition for the similaritycost function, which penalizes low diversity.

b) Domain-Distance Diversity: This can be thought of as the more standard approach to diversity in search and optimization problems. In fact, the authors of [22] show that many common diversity measurements are quite similar to this basic method: We define a simple distance measure between the individuals in the solution space. For a discrete, categorical problem like the one presented here, there is little alternative to just counting the specific differences in some way.
similaritycost_dom(v, P) = −n + Σ_{i=0}^{n−1} Σ_{j=0}^{|P|−1} C(v_i, P(j)_i), where C(x, y) = 1 if x = y and 0 otherwise. Note that we write P(j) to access the j-th individual of the population and |P| to represent the number of individuals in a population. We subtract n from the sum because the given individual v ∈ P is still part of the population and thus adds a cost of n by matching itself perfectly. We thus maintain the (cosmetic) property that in a population of completely different individuals, the average similarity is 0. While the implementation of this diversity measure looks pretty straightforward, it requires complete prior knowledge of the search space and thus introduces further dependencies. For example, the above definition is unfit for continuous search spaces, and while a continuous similaritycost function may easily be thought up, optimization problems consisting of a mix of discrete and continuous variables then require more weighting parameters to adequately combine the scales over which the respective similaritycost functions operate.

c) Genealogical Diversity: As a rather different comparison, we implemented an inheritance-based diversity estimate introduced in [13]. The aim of genealogical diversity is to utilize those parts of the domain knowledge that are already encoded in the setup of the genetic algorithm, i.e., the mutation and recombination functions the human developer is required to code for the specific genome anyway. We can thus try to quantify the difference between two individuals by estimating the number of evolution steps it took to develop these different instances of solution candidates. This yields a measure of "relatedness" between individuals not unlike genealogical trees in biology or human ancestry. If all individuals in a population are closely related (siblings or cousins, e.g.), we know that there can only be limited genetic difference between them and thus estimate a low diversity for the respective individuals with respect to that population. However, instead of building and traversing a genealogical tree, the implementation of genealogical diversity used in [13] employs a technique inspired by the way genetic genealogical trees are constructed from the analysis of genomes in biological applications: For this approach, we first need to augment the individuals' genome by a series of t trash bits b_k ∈ {0, 1}, k ∈ N, 0 ≤ k < t. For our experiment, t = 16 has proven to be reasonable. However, we do not change the waycost fitness function, so that it does not recognize the additional data added to the genome. This leads to the trash bits not being subjected to selection pressure from the primary objective of the genetic algorithm. As the trash bits are randomly initialized like the other variables in the genome, every individual of the first generation should most probably start out with a very different trash bitstring from anyone else's, given that we choose the length of the trash bitstring sufficiently large. Without direct selection pressure, there is no incentive for individuals to adapt their trash bitstring in any specific way.
However, the trash bits are still subjected to mutation and recombination, i.e., whenever a specific individual is chosen for mutation, a random mutation is performed on the trash bitstring as well, and whenever a recombination operation is executed for two individuals, their trash bitstrings are likewise recombined. In our implementation at hand, we use one-bit flip for mutation and uniform crossover for recombination. Using the definition of a comparison function C as provided above, we can thus define the similaritycost function for genealogical diversity as follows: similaritycost_gen(v, P) = −t + Σ_{i=0}^{t−1} Σ_{j=0}^{|P|−1} C(v_{n+i}, P(j)_{n+i}). Again, we subtract t to ignore self-similarity when iterating over the population. It should be noted that when accessing the (n + i)-th component of an individual inside the sum, we are protruding into the dimensions solely populated by trash bits, retrieving the i-th trash bit of said individual. In order to compute the similarity between two individuals, we now only consider the trash bits, for which we always have the same distance metric regardless of the actual problem domain of the original genetic algorithm. Domain logic is only used indirectly, as the measure we estimate can be regarded as the edit distance between two individuals using the genetic operators the evolutionary process is equipped with. However, since the trash bits are inherited by individuals from their parents and without direct selection pressure, they are not biased toward values resulting in higher fitness; yet, they are still a sufficient representation of the genealogy of an individual, as we show in the following section.

V. RESULTS

In order to evaluate the benefit of the presented approaches, we simulate the different behavior of genetic algorithms when using the presented diversity measures or no diversity measure at all. In order to achieve a meaningful result considering the highly probabilistic nature of the applied method to generate scenarios, we perform the evaluation on 1000 different scenarios. Figure 2 shows the top fitness achieved at a specific point in time by a single run, averaged over all 1000 runs. By taking a look at the optimization process as a whole, it can be seen that a great deal of improvement compared to the random initialization is done during the first steps of evolution, giving an estimate of how good the achieved solutions are in relation to "just guessing". In Figure 3 we show the respective diversity measurements from these runs. We can observe that the diversity-aware algorithms show a slower learning rate in the beginning, since they do not only optimize the plotted primary fitness function, but also the diversity function, and thus cannot focus as well on better primary results. However, once the environmental change occurs, they are likewise better prepared for a change in fitness and react with a much smaller increase in waycost than the standard genetic algorithm. In a scenario like ours, where a smart factory needs to be able to efficiently dispatch new workpieces at all times, this can be a huge advantage. We observe that following the unexpected change, average diversity first increases as well-established "families" of similar individuals die out. Due to a new convergence process, diversity then drops until good solutions are found. Finally, diversity seems to reach a similar level as before the unexpected change. The "right" amount of diversity is naturally controlled by the parameter λ of the combined fitness function.
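Both similarity measures and the combined fitness can be sketched as follows, reusing waycost from the earlier sketch and assuming genomes that carry the n station genes first, followed by the t trash bits (again an illustration, not the authors' code):

```python
N, T = 5, 16  # n station genes, t trash bits per genome

def similaritycost_dom(v, population):
    """Count per-gene matches against the whole population; subtracting N
    cancels the individual's perfect match with itself."""
    return -N + sum(sum(1 for w in population if w[i] == v[i])
                    for i in range(N))

def similaritycost_gen(v, population):
    """The same counting, restricted to the trash bits at positions N..N+T-1."""
    return -T + sum(sum(1 for w in population if w[N + i] == v[N + i])
                    for i in range(T))

def combined_fitness(v, F, population, lam, similaritycost):
    """f(v, E, P) = waycost(v, E) + lambda * similaritycost(v, P). The
    dependence on the population forces re-evaluation every generation."""
    return waycost(v[:N], F) + lam * similaritycost(v, population)
```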
For these experiments we found the parameters λ = 1500 for domain-dependent diversity and λ = 2500 for genealogical diversity via systematic search. The definition of "right", however, depends on the problem domain. In most practical cases, we expect some (non-functional) requirements to be present, which specify the robustness properties we want to uphold. For now, these properties must then be verified via statistical testing. Deriving (statistical or hard) guarantees from a stochastic search process like an evolutionary algorithm is still an interesting topic of future work. Given no further requirements for consistent quality of service, a reasonable setting for λ might achieve that the online planner does not perform worse than a random planner at any point in time, even at the moment of unexpected change. Figures 4 and 5 show the results of that systematic search, including the random population's value before the evolutionary process starts: the fitness achieved by the domain-dependent and the genealogical genetic algorithm, respectively, strongly depends on the choice of parameter λ, i.e., how to distribute focus between the primary objective (small waycost) and the secondary objective (high diversity). Experiments have shown that diversity-aware genetic algorithms can show a variety of behaviors for different λ. To provide an intuition about the effects various settings of λ have on the algorithm's performance: higher values of λ generally cause the evolutionary search to produce less optimal results but to perform more stably when facing unexpected change. For the domain-dependent diversity, this phenomenon shows more strongly, with higher λ values showing almost no impact of the unexpected change but relatively bad results in general. The approach of genealogical diversity seems to be a bit more robust to the setting of λ in that it still clearly shows a tendency to optimize over time. We chose to showcase genealogical diversity specifically because it works on a rather domain-independent level and introduces only few parameters. Furthermore, it is rather robust with respect to the choice of said parameters. For the length of the used bitstring t, Figure 6 shows that for all but the smallest values of t the genetic algorithm performs most similarly. Especially rather large values for t (that still take up very little memory) do not show any deterioration in the planner's behavior, which means that the choice for that parameter can be made rather comfortably. We also analyze how much change a diversity-aware planner can handle. Figure 7 shows the behavior of the three exemplary planners just around the moment of unexpected change for various amounts of change they are subjected to. Naturally, bigger (and thus un-reasonable) change can impact even a diverse system. The increase in costs for the large alterations in the generation-49 line (dashed) shows that on the upper end of the scale we started generating problem instances that generally have fewer good solutions. For more reasonable change (A ≤ 8, which still means that up to 16 out of 25 machine positions may be changed), both diversity-aware algorithms perform comparably and clearly better than the non-diverse planner.

Fig. 4. Top fitness for the current generation averaged over only 100 runs each, plotted for λ = 500 * z, z ∈ N, 0 ≤ z < 20, using domain-dependent diversity. The darker the color of the line, the higher the depicted λ value.
Fig. 5. Top fitness for the current generation averaged over only 100 runs each, plotted for λ = 500 * z, z ∈ N, 0 ≤ z < 20, using genealogical diversity. The darker the color of the line, the higher the depicted λ value.

Fig. 6. Top fitness for the current generation averaged over 100 runs each, plotted for t = 2^z, z ∈ N, 0 ≤ z < 10, using genealogical diversity. The darker the color of the line, the higher the depicted t value.

Most remarkably, the domain-dependent variant manages to cope with changes A ≤ 4 with almost no consequence for its performance.

VII. CONCLUSION

Since we expect future software systems to be increasingly self-adaptive and self-managing, we can also expect them to feature one or multiple components tasked with online planning. Online planning allows systems to learn to optimize their behavior in the face of a moving target fitness. However, it comes with a few pitfalls, one of which is the fact that even small changes in the target fitness can have detrimental effects on the current plans' performance. It is thus imperative to keep an eye on a healthy level of diversity in our pool of alternative plans. As we have shown, this can severely soften the blow to overall performance, should only a few plans become impractical due to external circumstances. 7 [Footnote 7: It still holds that if we allow arbitrary changes in the environment, it is always possible to design a completely new fitness function so that any given instance of an evolutionary process becomes arbitrarily bad with respect to the new altered fitness function. This is due to the No-Free-Lunch theorem [45]. For realistic scenarios, however, there usually is a limit to how quickly and how drastically the fitness function is expected to change. A thorough analysis of those limits for some practical domains may present an interesting point for further research.] The diversity of a planner functions as a non-functional requirement for classic applications. Certain levels of desired diversity may be specified in order to augment system architectures that revolve around the optimization process of the system so as to provide flexibility on the component level [46]. This should be expected to strongly influence other properties commonly applied to complex self-adaptive systems like robustness or flexibility. On an application level, the introduced concept of diversity-aware optimization may prove especially useful when the reduction in amplitude of fitness causes the system behavior to fall below a predefined quality threshold (or to do so more often at least). A diversity-aware planner might then be able to continue working as usual as its back-up plans fulfill the required quality agreement just as well, while a non-diverse planner might more often feel the need to stop the execution of its plans (and thus halt the system in general) until it reaches a new plan of acceptable quality. In this case, we may formulate a non-functional requirement such as planning resilience, measuring how frequent and how big unexpected changes need to be in order to push the planner out of its quality requirements. Using the parameter λ, engineers can adjust the focus point of the planning component between performance and resilience optimization. How well statistical judgements can be made about said resilience property still needs to be evaluated, though.
It is up to future research to determine how the concept of diversity (especially genealogical diversity) generalizes to other optimization techniques like the cross-entropy method or simulated annealing. One way to integrate these techniques into the framework defined in this paper may be to set up a pool of solution candidates via ensemble learning [47]. Embracing diversity seems especially promising in search-based software testing (SBST), as test suites need to adapt faster to new possible exploits. In DevOps, developers push relatively small updates that need testing more frequently. Nonetheless, the changes applied to the code by the developer usually fall into the category of unexpected change as we defined it in this paper. That means that diverse test generators could possibly adapt more quickly to the new software system under test. The mutual influence between diversity-aware evolutionary algorithms and co-evolutionary approaches may be an interesting point of further research [21]. A similar connection has been found in biological systems [48]. Many of the theoretical foundations explaining the ideal structure of a population for various optimization purposes are still unexplored. For instance, we assumed an unpredictable but neither explicitly hostile nor cooperative environment. Any scenario where the change occurs not only unexpectedly but intentionally is likely to have fundamentally different properties. We focused our study on the implications of using diversity within a planner and how the resilience to environmental change may be indicated in a quantifiable way. We have shown that diversity during planning can aid planning resilience in the face of change. Furthermore, we can employ such methods in a domain-independent way using genealogical diversity and still achieve valuable results. Software engineering frameworks and processes are now needed to expose desired non-functional requirements (NFRs) like planning resilience to the software and system design and to test them adequately.
6,546
1906.07841
2913622889
In cloud computing, data processing is delegated to a remote party for efficiency and flexibility reasons. A practical user requirement usually is that the confidentiality and integrity of data processing need to be protected. In the common scenarios of cloud computing today, this can only be achieved by assuming that the remote party does not in any form act maliciously. In this paper, we propose an approach that avoids having to trust a single entity. Our approach is based on two concepts: (1) the technical abstraction of sealed computation, i.e., a technical mechanism to confine the processing of data within a tamper-proof hardware container, and (2) the additional role of an auditing party that itself cannot add functionality to the system but is able to check whether the system (including the mechanism for sealed computation) works as expected. We discuss the abstract technical and procedural requirements of these concepts and explain how they can be applied in practice.
A trustworthy and privacy-preserving cloud may be addressed by the use of cryptographic techniques such as fully homomorphic encryption (FHE) @cite_9 . However, FHE is still inefficient for most computations @cite_16 . Similarly, verifiable computing @cite_8 was designed to enable verification of result correctness but has not yet been shown to support general-purpose cloud computing.
{ "abstract": [ "We propose a fully homomorphic encryption scheme -- i.e., a scheme that allows one to evaluate circuits over encrypted data without being able to decrypt. Our solution comes in three steps. First, we provide a general result -- that, to construct an encryption scheme that permits evaluation of arbitrary circuits, it suffices to construct an encryption scheme that can evaluate (slightly augmented versions of) its own decryption circuit; we call a scheme that can evaluate its (augmented) decryption circuit bootstrappable. Next, we describe a public key encryption scheme using ideal lattices that is almost bootstrappable. Lattice-based cryptosystems typically have decryption algorithms with low circuit complexity, often dominated by an inner product computation that is in NC1. Also, ideal lattices provide both additive and multiplicative homomorphisms (modulo a public-key ideal in a polynomial ring that is represented as a lattice), as needed to evaluate general circuits. Unfortunately, our initial scheme is not quite bootstrappable -- i.e., the depth that the scheme can correctly evaluate can be logarithmic in the lattice dimension, just like the depth of the decryption circuit, but the latter is greater than the former. In the final step, we show how to modify the scheme to reduce the depth of the decryption circuit, and thereby obtain a bootstrappable encryption scheme, without reducing the depth that the scheme can evaluate. Abstractly, we accomplish this by enabling the encrypter to start the decryption process, leaving less work for the decrypter, much like the server leaves less work for the decrypter in a server-aided cryptosystem.", "We present VC3, the first system that allows users to run distributed MapReduce computations in the cloud while keeping their code and data secret, and ensuring the correctness and completeness of their results. VC3 runs on unmodified Hadoop, but crucially keeps Hadoop, the operating system and the hyper visor out of the TCB, thus, confidentiality and integrity are preserved even if these large components are compromised. VC3 relies on SGX processors to isolate memory regions on individual computers, and to deploy new protocols that secure distributed MapReduce computations. VC3 optionally enforces region self-integrity invariants for all MapReduce code running within isolated regions, to prevent attacks due to unsafe memory reads and writes. Experimental results on common benchmarks show that VC3 performs well compared with unprotected Hadoop: VC3's average runtime overhead is negligible for its base security guarantees, 4.5 with write integrity and 8 with read write integrity.", "" ], "cite_N": [ "@cite_9", "@cite_16", "@cite_8" ], "mid": [ "2031533839", "1569778844", "" ] }
Sealed Computation: Abstract Requirements for Mechanisms to Support Trustworthy Cloud Computing
Cloud computing has become widespread as it allows for supplying and utilizing computation resources in an on-demand fashion. This reduces cost, increases flexibility and improves infrastructure scalability [19]. Cloud computing is increasingly being adopted for services provided by networks of small devices, commonly referred to as the Internet of Things (IoT). IoT Cloud [2] or "Cloud of Things" (CoT) [1] provides resources such as storage, analytics tools and shared configurable computing resources to reduce the cost and complexity associated with IoT systems. When data processing and storage are delegated to a cloud provider, users of cloud services usually have to trust the cloud provider to act as expected. However, in common cloud deployments, there is no technical guarantee that a single malicious insider like a system administrator or a person with physical access to the cloud infrastructure does not tamper with code and data. Hence cloud clients should be provided with technical guarantees and indications that the cloud service is trustworthy. As an example, consider the scenario of an IoT Cloud implementation for usage-based insurance (UBI) [16], a novel car insurance business model where the insurance company calculates premiums based on drivers' behavior using actual driving data. In UBI, participating cars are equipped with telematics devices to collect driving data such as location, speed, acceleration, cornering, and other details. Driving data are processed to get a ranking based on personal driving behavior. Using the driver ranking, the insurance company calculates a customized premium for the policyholder employing a more accurate risk estimate, reducing incurred losses [9,25] and offering a bonus in the case of good driving behavior. UBI promises many benefits such as, for the insurance companies, reducing incurred losses through accurate risk estimates [25,9] and, for the policyholders (drivers), improving their driving style through feedback and decreasing their premiums. But obviously, UBI also raises concerns, such as user discrimination [16] and consumer privacy [25,9].

Figure 1. High-level view of the usage-based insurance scenario: The data is processed by the service provider on behalf of the insurance company. Processing is performed by a cloud provider running the service provider's software. The policyholders receive feedback on their driving habits.

Figure 1 depicts an abstract view of UBI: The service provider may actually be the same entity as the insurance company, but in many business implementations (BonusDrive by Allianz [4], SmartDriver by HUK-Coburg [15]) it is a different company. One reason for separation is that insurance companies do not have the corresponding know-how to compute the driving ranking. Another reason is that the insurance companies want to mitigate consumers' privacy concerns by stating that they have no access to the behavioral data, as it is processed by a third party [5]. Users who process sensitive data in the cloud have the following general security requirements: -Confidentiality of data: Policyholders agree that their ranking is computed, but they want their individual usage data to remain confidential towards the insurance company and the cloud provider. -Confidentiality of code: Service providers want to protect their intellectual property from other parties, in particular the insurance company and the cloud provider. So the software which is deployed in the cloud should be protected.
-Integrity of data and code: The insurance company, the service provider and the policyholders should have a guarantee that the cloud provider does not change data or code in any unauthorized way. On the one hand, users establish a sense of trust in the cloud provider in practice via contracts over Service Level Agreements (SLAs), auditing certificates and reputation. Unfortunately, even with the most refined SLAs the necessity to place trust in the cloud provider remains. On the other hand, numerous technical approaches [18] have been proposed to achieve security requirements such as those above using trusted hardware. For example, hardware security modules (HSMs) [10,26], i.e., tamper-resistant physical computing devices, can perform secure and confidential computation of data. Using HSMs, it is possible to deploy specific software modules, create cryptographic keys and process data purely within the hardware device. Returning to our UBI scenario, the HSM can be used to effectively protect the service provider's data and code from the cloud provider. However, in this case the necessity to trust a single entity is not avoided, it is merely shifted from the cloud provider to the trusted hardware provider. This observation is not specific to HSMs but holds also for other such technologies such as Intel SGX [6,24].

Contributions In this paper, we propose a general approach that ensures generic confidentiality and integrity of a cloud service and that avoids the necessity of having to trust a single entity. Our approach is based on the combination of two concepts: 1. Sealed computation, an abstract technical mechanism to confine the processing of data within a tamper-proof hardware container (like an HSM), and 2. a procedural mechanism of mutual checking applying the additional role of an auditing party, which is necessary to check whether the system works as expected, but cannot modify it. We describe the abstract technical and procedural requirements of both concepts and argue that they are sufficient to achieve the generic security properties described above. In the spirit of work by Morris Jr. [20], our work is conceptual, avoiding over-formalization but still providing clear definitions and evaluating statements. The main insight is to show how an abstract hardware mechanism (sealed computation, solely defined by its requirements) must be utilized in the cloud service such that the necessity to trust in a single entity is avoided. Similar to other work [24,6], this paper focuses on integrity and confidentiality properties and does not consider availability. We use the UBI scenario above repeatedly as an example to illustrate our exposition; it generalizes to many other scenarios.

Outlook We first define the concept of sealed computation in Section 2. Then the system and attacker model is presented in Section 3. Section 4 describes the procedural mechanism applying the role of an auditor. In Section 5 we provide a security analysis and argue that general security requirements are satisfied unless two parties act maliciously. Related work is discussed in Section 6. Finally, Section 7 concludes the paper.

Sealed Computation While data at rest can typically be protected by encryption, protecting data during processing is still an interesting problem to solve. We introduce a definition of sealed computation using abstract roles to keep it general; later, these are mapped to the parties introduced in Section 3.
The term sealed computation is an abstraction that describes a well-defined level of protection against such attackers. Intuitively, this is done by encapsulating the software execution within a physical piece of hardware. We utilize the notion of sealed computation to maintain the integrity and confidentiality requirements of the system.

Definition In sealed computation, a party A provides a physical execution container C into which a party B may "seal" its software. The container C ensures that the software is running in an unmodified fashion. Furthermore, C also guarantees that only a restricted set of interactions with the software are possible through a well-defined interface. Apart from that, no information is leaked from within C to the outside, not even to A, the provider of the container, nor to the software provider B. More formally, let a party A provide a physical execution container C and party B provide software M which implements some input/output specification via a well-defined interface. The interface can be thought of as a description of input/output signals over wires or the format of incoming or outgoing protocol messages.

Definition 1 (Sealed Computation). We say that B seals M within C provided by A if the following technical requirements are met: -(Sealing) A and B cannot access the code and data of M after it has been sealed within C, apart from changes allowed by the interface. -(Attestation) As long as M has not terminated and as long as A acts honestly, C can provide evidence which proves that C is running software provided by B in a manner which is unique to the sealing instance, i.e., any change of M, C or any subsequent sealing using the same combination will result in different evidence. -(Black-box) No information about the code and data of M is leaked from within C other than what is exposed through the well-defined interface. -(Tamper-resistance) Any physical or logical modification of C results in termination of M and the destruction of C such that neither code nor data from within C can be retrieved.

Intuitively, the Sealing requirement of sealed computation binds the execution of a program to a particular hardware environment. The requirements of Black-box and Tamper-resistance limit access to data and code only to interactions given in the functional specification of M: Black-box restricts information flow for expected interactions, while Tamper-resistance does this for unexpected interactions. The Attestation requirement enables external parties to validate the fact that M has been sealed. It implies that C contains some known unique characteristic that can be validated by checking the provided evidence. This validation, however, depends on the correctness of A. A common realization of this is for A to embed a secret key within C and allow external parties to validate its existence by providing the corresponding public key. The existence of such a unique characteristic implies that it is possible to establish an authentic and confidential communication channel to M once sealing has started. Similarly, note that B or any user of M still has to rely on A to act honestly because it is not verifiable whether C actually implements sealed computation. However, if B correctly seals M within C provided by an honest A, even A cannot change M afterwards and the tamper-resistance requirement of C protects all secrets within M that are not accessible via its interface or before sealing.
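Purely as an illustration, the four requirements can be mirrored in a toy object model. A real container C is a physical device such as an HSM; true tamper-resistance cannot be emulated in software, and the HMAC-based evidence below merely stands in for the asymmetric-key realization described above. All names are hypothetical.

```python
import hashlib, hmac

class SealedContainer:
    """Toy model of C from Definition 1; not a real security mechanism."""

    def __init__(self, provider_key: bytes, software: bytes, interface):
        self._key = provider_key     # unique characteristic embedded by A
        self._software = software    # M, sealed by B (Sealing)
        self._interface = interface  # the only allowed interactions (Black-box)
        self._destroyed = False

    def call(self, request):
        """Interactions happen only through the well-defined interface."""
        if self._destroyed:
            raise RuntimeError("container destroyed")
        return self._interface(self._software, request)

    def attest(self, nonce: bytes) -> bytes:
        """Evidence unique to this sealing instance (Attestation)."""
        measurement = hashlib.sha256(self._software).digest()
        return hmac.new(self._key, measurement + nonce, hashlib.sha256).digest()

    def tamper(self):
        """A modification attempt terminates M and destroys C
        (Tamper-resistance)."""
        self._software = None
        self._destroyed = True
```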
Confidential Software Deployment The notion of sealed computation is a powerful abstraction that can be used to describe techniques that protect software also during deployment. We now argue that the technical requirements of sealed computation make it possible to ensure the confidentiality of the code which is sealed. Intuitively, the idea of confidential software deployment is for B to initially install within the sealed computation a loader stub which is able to load the final user program specified by B into C. Within the sealed computation, this software is decrypted, installed and then takes over the final interface operations expected by the users. This loader stub can be part of the sealed computation mechanism from the start. Since it can be easily added to any mechanism that satisfies Definition 1, we did not include it as an additional requirement in that definition. Observe that M cannot be assumed to remain confidential if A is untrustworthy. However, if A is trustworthy, sealed computation can be used to run code that remains confidential even towards A.

System and Attacker Model

Participants For a general cloud-based application system model, our approach assumes the following main participants - referred to as entities or parties interchangeably: 1. Data Prosumer (DP): The DP is a producer and/or consumer of data at the same time, i.e., it produces input data and/or has an interest to consume the computed results. The way in which data is processed by the application is described by the DP in the form of a functional specification. 2. Application Software Provider (ASP): The ASP develops and maintains the analytics software which processes the data in the cloud and computes desired results according to the functional specification. 3. Cloud Provider (CP): The CP provides the cloud service which includes the hardware infrastructure, the software, and all associated configuration, administration and deployment tasks. The CP is also responsible for the security of the system as well as its availability towards the DP. 4. Auditing Party (AP): The AP is an independent party that helps to ensure the integrity of the hardware and software before the system becomes operational. We simply refer to the AP as the auditor. 5. Sealed Computation Provider (SCP): An additional entity to be considered is the SCP, which provides the sealed computation technology. To map the sealed computation definition in Section 2 to the UBI scenario, it may help to think of the execution container C as a specific HSM provided by party A (the SCP), while party B is the service provider (SP) who wrote software M on behalf of the insurance company (DP).

User Security Requirements The desired security requirements of the parties are described in more detail here. Every requirement has a name that is prefixed by the corresponding participant role. Definition 2 (User Security Requirements). The participants have the following security requirements: -(DP-Privacy) The DP requires that data remains confidential to any other party, i.e., neither CP, nor ASP, nor AP, nor SCP can learn anything about the data. 3 -(DP-Integrity) Results which are obtained from the system by the DP are correctly computed on data as provided according to the functional specification. DP-Integrity covers data storage and processing integrity. -(ASP-Integrity) The analytics software provided by the ASP is executed in an unmodified form within the system. Note that ASP-Integrity does not imply DP-Integrity since the latter refers also to data. -(ASP-Confidentiality) No other party except the AP is able to learn about the analytics software developed by the ASP apart from what is described in the functional specification.
Attacker Model In this section, we formulate the attacker model. First, the ways in which individual participants may maliciously misbehave are described (the local attacker assumption). Then we define a condition that restricts the number of parties that may act maliciously (the global attacker assumption). The participants may act as follows: -Application Software Provider (ASP): The ASP could provide an analytics software that leaks information about the processed data, thus violating DP-Privacy. Also, the ASP could violate DP-Integrity by providing software that incorrectly computes the results, i.e., computes the results not according to the functional specification provided by the DP. -Sealed Computation Provider (SCP): The SCP could provide an incorrect sealed computation mechanism, i.e., a mechanism that has back-doors or vulnerabilities that enable changing code and data, thus violating ASP-Integrity or DP-Integrity, or a system that leaks code or data, which violates ASP-Confidentiality or DP-Privacy. -Cloud Provider (CP): The CP could leak to a malicious party any software that it has access to, thereby violating ASP-Confidentiality. The CP has physical access to the mechanism provided by the SCP, so it may attempt to access and/or modify data that is stored/processed, thus violating ASP-Integrity, DP-Integrity or DP-Privacy. We assume, however, that the CP protects its systems from interference and misuse by external attackers that are not specific to our scenario. Therefore these attacks are excluded from consideration in this work. -Auditing Party (AP): During checking, the AP could try to add functionality to the system to leak information about the processed data and/or the software, thereby violating DP-Privacy or ASP-Confidentiality directly. If any party acts in the ways described above, we say that this party acts maliciously. A party that does not act maliciously is considered honest. For reasons of simplicity, the DP is excluded from our attacker model. Typical misbehavior of the DP can be giving a wrong functional specification, providing false data or revealing the received results to any other party. Correct behavior in this respect cannot be enforced using a trustworthy cloud service as we envision here. Therefore, the DP is assumed to always be honest. The global attacker assumption, i.e., a restriction on the number of parties that may act maliciously, is formulated as follows: either the AP or both SCP and ASP are honest. More precisely, if the identifiers are taken as Boolean predicates of whether they are acting honestly or not, then the global attacker assumption is satisfied if the following condition holds: AP ∨ (SCP ∧ ASP). Note that the condition is independent of the actions of the CP, and that it does not state which party exactly acts maliciously (AP, SCP or ASP). Availability of Remote Attestation To establish trust, it is often necessary to use mechanisms for remote attestation. Following the terminology of Coker et al. [8], attestation is the activity of making a claim to an appraiser about the properties of a target by supplying evidence which supports that claim. An attester is a party performing this activity. The result of an attestation depends on a mixture of facts that the appraiser can check directly on the evidence provided by the attester (e.g., cryptographic signatures) and trust in the attester itself (the mechanism by which the evidence was generated).
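The appraiser-side counterpart can be sketched as follows: the cryptographic part of the evidence is checked directly, while trust in the generating mechanism must come from elsewhere (in our setting, from the auditing procedure). Key distribution is out of scope here, and the shared HMAC key again stands in for certified asymmetric keys; the names are our own.

```python
import hashlib, hmac, os

def appraise(evidence: bytes, expected_software_hash: bytes,
             nonce: bytes, verification_key: bytes) -> bool:
    """Directly checkable part of attestation: recompute and compare."""
    expected = hmac.new(verification_key, expected_software_hash + nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(evidence, expected)

# Usage against the SealedContainer sketch above:
# key = os.urandom(32); nonce = os.urandom(16)
# container = SealedContainer(key, b"analytics software M", lambda m, r: r)
# assert appraise(container.attest(nonce),
#                 hashlib.sha256(b"analytics software M").digest(), nonce, key)
```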
Any party taking part in a remote attestation requires that the directly checkable part of the attestation works as expected. In practice, this means that the used cryptography (e.g., digital signatures) is secure and that honest parties protect their cryptographic secrets. Combining Sealed Computation with an Auditor One application of sealed computation in cloud computing would be for the CP to offer a mechanism to its "customers" DP and ASP to perform a sealed computation on the provided cloud hardware. In this case SCP and CP would be the same party. However, note that utilizing sealed computation alone is not sufficient to ensure the participants' security requirements because (1) sealed computation does not guarantee anything before sealing takes place, and (2) the mechanism of sealed computation cannot be trusted without means to verify its function. We will therefore treat CP and SCP as independent parties. The Role of the Auditor The sealed computation is combined with the role of an auditing party AP to establish the security requirements described in Definition 2. In general, auditors perform independent checks and assess other entities in terms of service, performance, security, data privacy, and system operations [14]. We use the AP both to guarantee the functionality of the sealing mechanism provided by the SCP and to verify the functionality of the analytics software provided by the ASP. Once sealing has taken place, the mechanism of sealed computation ensures continued trust in the system without having to interact with the AP anymore. The auditor is not allowed to add or modify functionality in the system. This is ensured by a mutual checking procedure described below. The AP, however, has to enable a possibility of attestation which is independent of the SCP. This can be realized either by providing an independent mechanism or (better) by adequately configuring an attestation technique that is already present in the sealed computation technology (e.g., by embedding a secret within the physical container of sealed computation). Figure 2 illustrates the structural model with the roles and responsibilities of each participant. The idea is to base the well-functioning of the system on the assumption that either the auditor or all parties checked by the auditor are honest during critical phases of system operation. Whereas commonly the DP had to trust the CP exclusively, it must now place its trust either in the SCP and ASP or in the AP (a condition expressed in our global attacker assumption above). The ASP provides software run within a sealed computation, a mechanism provided by the SCP and hosted by the CP. The AP performs an independent verification of the analytics software and the sealed computation container and enables mechanisms for the DP to remotely check its integrity. To illustrate the different roles using our introductory UBI scenario, the policyholders and the insurance company share the role of the DP. The insurance company defines the functional specification of the driver ranking, based on which the ASP develops the analytics software. The SCP could be a provider of the sealed computation container (like an HSM), and the AP would be a company, like a certified public accountant, that is able to perform code and security audits on hardware and software. The SCP is assumed to have appropriate security mechanisms in place against attacks by parties not considered above (e.g., hackers and cybercriminals). 
Regarding remote attestation, the HSM provides certificates with which attestation evidence generated by the HSM can be verified [27]. Trust Establishment Procedure For simplicity and comprehension of the discussion, we divide the execution lifetime of the system model into two mutually exclusive phases: the Checking phase and the Running phase. During the Checking phase, the trust establishment procedure takes place, while the Running phase begins with the service start-up. During the Running phase, the DP can upload data and get results, and the CP operates the cloud system. The exact actions and obligations of the participants and their interplay are described as the trust establishment procedure below. This procedure can be regarded as a form of procedural requirement which, in combination with the technical requirements of sealed computation, makes it possible to fulfill the user requirements. Definition 3 (Trust Establishment Procedure with Mutual Checking). The participants undergo the following procedure: 1. Trust establishment in the analytics software: (a) The ASP prepares the analytics software ready to be deployed. (b) The AP verifies whether the analytics software satisfies the functional specification and does not leak any information about the processed data. (c) At the same time, the ASP ensures that the AP does not change any functionality of the analytics software. (d) As a result of this procedure, the ASP and the AP generate public evidence, to be produced by an attestation mechanism, by which it can be verified that the checked version of the software is running in the sealed computation (e.g., a hash of the binary code that can be attested). 2. Trust establishment in the sealed computation mechanism: (a) Before the sealed computation system is shipped and deployed, regardless of the deployment model, the SCP prepares the sealed computation mechanism (hardware and software, including the possibility for confidential software deployment). (b) The AP verifies (off-line) the integrity of the sealed computation mechanism, i.e., the entire hardware and software system. This includes a physical check of the security measures, policy compliance, data security and data privacy, and a functional check, also of the confidential software deployment mechanism. (c) At the same time, the SCP ensures that the AP is not adding new functionality during these checks, i.e., that the AP is behaving according to the auditing procedure specifications. (d) The AP and the SCP generate public evidence that enables attestation of the sealed computation mechanism, e.g., by embedding independent private keys within the sealed computation container to which they possess the corresponding public keys. 3. The sealing mechanism is started in the presence of the AP and the SCP. At this time the auditing procedure ends, and both the SCP and the AP can leave the deployment site, which is run by the CP. 4. Using the confidential deployment procedure, the ASP loads the code that was checked by the AP in Step 1 above. 5. The AP and the SCP must be present any time the system and/or the sealed computation mechanism is reset/restarted, is under maintenance, or is to be changed. In such cases the AP and the SCP must re-check the system, and both must re-enable the attestation mechanism as described in the above procedure. 
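To make Steps 1.d and 2.d more concrete, the following Python sketch shows one way the public evidence could look: a hash of the checked binary, endorsed independently by two parties. This is our own illustration, not the paper's mechanism; HMAC over demo keys merely stands in for real digital signatures (e.g., keys embedded in the sealed container), and all key values are hypothetical.

```python
import hashlib
import hmac

def software_evidence(binary: bytes) -> str:
    # Step 1.d: public evidence identifying the checked software version.
    return hashlib.sha256(binary).hexdigest()

def endorse(evidence: str, secret: bytes) -> str:
    # Stand-in for a real signature by an independent party (AP or ASP).
    return hmac.new(secret, evidence.encode(), hashlib.sha256).hexdigest()

def verify(evidence: str, tag: str, secret: bytes) -> bool:
    return hmac.compare_digest(endorse(evidence, secret), tag)

binary = b"analytics software binary checked by the AP in Step 1"
ev = software_evidence(binary)
tag_ap = endorse(ev, b"ap-demo-key")    # hypothetical AP key
tag_asp = endorse(ev, b"asp-demo-key")  # hypothetical ASP key
# A verifier accepts the attestation only if both independent endorsements hold.
assert verify(ev, tag_ap, b"ap-demo-key") and verify(ev, tag_asp, b"asp-demo-key")
```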
The result of this procedure is two pieces of public evidence that all parties can use to verify their security requirements: -Public evidence provided by the AP and the SCP that the DP, CP and ASP can use to verify that an instance of sealed computation is running. -Public evidence provided by the AP and the ASP that can be used to verify that a particular software is running within the sealed computation. Security Analysis and Discussion Security Analysis To argue that the security requirements from Def. 2 are met, we make the following introductory observation: the sealed computation mechanism defined in Def. 1 will not be in the Running phase if the ASP software or the sealed computation mechanism is not correct. To see this, we make a case distinction based on the global attacker assumption, which allows any party to act maliciously as long as either the AP or both the ASP and the SCP behave honestly. There are three possible cases of parties acting maliciously during the Checking phase, when the trust establishment procedure (Definition 3) takes place: -The ASP is malicious: If the ASP is malicious, then the AP must be honest. So if the ASP acts maliciously and implements incorrect software, then the checking procedure (Step 1.b) mandates that the AP checks the software correctness. Since the AP is honest, it will detect the incorrectness of the software, the check will fail, and the Running phase will not take place. -The SCP is malicious: If the SCP is malicious, then the AP must be honest. So if the SCP is not honest, the sealing container may not be implemented correctly. However, the checking procedure (Step 2.b) requires the AP to check whether the sealed computation requirements are met. Since the AP is honest, it will detect the incorrectness, and the Running phase will not be entered. -The AP is malicious: If the AP is malicious, then the ASP and the SCP are both honest. In this case, the analytics software and the sealed computation mechanism are correct from the beginning. Furthermore, the mutual checking procedure (Steps 1.c and 2.c) requires that both the ASP and the SCP ensure that the AP does not manipulate the functionality of the analytics software or the sealed computation mechanism. So if the Running phase is entered, the sealed computation mechanism and the analytics software are both correct. Therefore, under the attacker assumption, the establishment procedure guarantees that the system will not enter the Running phase unless it is working properly as defined in the specification. Subsequently, during the Running phase, the sealed computation mechanism (Definition 1) takes over to guarantee the desired requirements. To argue for the fulfillment of ASP-Integrity and DP-Integrity, the Sealing and Tamper-resistance requirements of the sealed computation ensure that content (data and code) in the sealed container cannot be improperly modified. Furthermore, the Black-box requirement restricts information flow such that DP-Privacy and (assuming confidential deployment) ASP-Confidentiality are maintained. Discussion While our results are conceptual, they provide a preliminary guideline for building a trustworthy cloud computing service in which cloud customers can trust that cloud providers and operators cannot access their data and code. In essence, sealed computation may not be a brand-new concept, as sealed storage was already defined by Morris [20]. 
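The case distinction above can also be checked exhaustively. The sketch below encodes our own worst-case reading of the procedure, which is an assumption and not part of the paper: a dishonest ASP/SCP ships an incorrect artifact, an honest AP rejects incorrect artifacts (Steps 1.b, 2.b), mutual checking prevents a dishonest AP from tampering with honest parties' artifacts (Steps 1.c, 2.c), and a dishonest AP may wave anything through. Under the global attacker assumption, entering the Running phase then implies correctness:

```python
from itertools import product

for ap, scp, asp in product([False, True], repeat=3):  # True = party is honest
    assumption = ap or (scp and asp)
    software_correct = asp    # worst case: a dishonest ASP ships incorrect software
    container_correct = scp   # worst case: a dishonest SCP ships an incorrect container
    # An honest AP lets only correct artifacts pass; a dishonest AP may pass anything.
    running = (software_correct and container_correct) if ap else True
    if assumption and running:
        # The claimed guarantee: Running phase implies both artifacts are correct.
        assert software_correct and container_correct
print("Running phase implies correctness in all admissible cases.")
```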
However, to the best of our knowledge, sealed computation had not been comprehensively and formally defined before. Any computational implementation that satisfies the requirements of Definition 1 can be considered a sealed computation mechanism. In practice, however, one may argue that any assumption like the security of cryptography, or any requirement like the Black-box property of a hardware device, only holds with a certain probability, so the guarantees in practice never hold with 100% certainty. One may also argue that many parts of the procedures described in Definition 3 are rather hypothetical and cannot be fully realized in practice. For example, the AP is assumed to perfectly verify the correctness of the software of the ASP (in Step 1.b) against the functional specification. While software verification has come a long way, it is still restricted by the size and complexity of the software system. Another example that appears far from practice is the statement that the AP can verify the correctness of the sealed computation container (hardware and software) provided by the SCP (in Step 2.b). It is well known that the production of hardware is a very complex process involving many different technologies. The resulting chips are rather non-transparent and need complex validation equipment to be checked. Nevertheless, useful insights can be drawn from the proposed approach. While the AP is one party in our model, in practice it can consist of multiple independent auditing actors, e.g., different companies that each check independent parts of the system and mutually certify the results towards each other. The collection of auditors in its entirety then forms the AP, meaning also that all "sub-auditors" must behave correctly for the AP to be regarded as honest. In practice, these sub-auditors are often even part of the same company, albeit in different parts that are independent of each other (like software development and testing departments). Another highlight is that it is possible to delegate security enforcement to trusted hardware without having to trust a single entity. However, during the Checking phase, the AP must be continuously present until the sealed computation container runs, and it must be possible to establish attestation evidence which is independently supported by the AP and the SCP (for the sealed computation container) and by the AP and the ASP (for the analytics software). These points result from the requirement of mutual checking, i.e., not only does the AP verify the actions of the ASP/SCP, but the ASP/SCP also need to prevent the AP from slipping new functionality into software and hardware, a detail which is often overlooked or (unconvincingly) excluded by the assumption that the AP is always honest. Being able to embed shared attestation credentials of mutually untrusted parties in a single trusted hardware container is a feature which is, at least to our knowledge, not supported by any currently available trusted computing mechanism [18]. So overall, the proposed approach presents an idealized version of system construction and deployment processes which can serve as an orientation for practice towards achieving a trustworthy service. Conclusions and Future Work We introduced the sealed computation concept and proposed a mutual checking procedure with an auditor role during setup time to provide an increased level of security and trust in cloud scenarios. 
The sealed computation concept abstracts from trusted hardware technologies like HSMs; the auditor is an abstraction of policies and procedures that increase trust in a single party. We believe that the abstract system model using the auditor as an additional role is a good approach for medium-sized and large cloud deployments, as an alternative to running a private cloud. While the existence of the auditor role may be intuitive, it is, on the one hand, not clear whether the concept is really necessary, i.e., whether any technique that distributes trust can simulate the auditor as described above. On the other hand, practical methods for auditing could be investigated. Furthermore, we wish to attempt a more rigorous formalization of attestation verification.
4,997
1906.07781
2950623005
The directed Physarum dynamics is known to solve positive linear programs: minimize $c^T x$ subject to $Ax = b$ and $x \ge 0$ for a positive cost vector $c$. The directed Physarum dynamics evolves a positive vector $x$ according to the dynamics $\dot{x} = q - x$. Here $q$ is the solution to $Aq = b$ that minimizes the "energy" $\sum_i c_i q_i^2 / x_i$. In this paper, we study the non-uniform directed dynamics $\dot{x} = D(q - x)$, where $D$ is a positive diagonal matrix. The non-uniform dynamics is more complex than the uniform dynamics (with $D$ being the identity matrix), as it allows each component of $x$ to react with different speed to the differences between $q$ and $x$. Our contribution is to show that the non-uniform directed dynamics solves positive linear programs.
Physarum polycephalum is a slime mold that apparently is able to solve shortest path problems. Nakagaki, Yamada, and Tóth @cite_0 report on the following experiment; see Figure . They built a maze, covered it with pieces of Physarum (the slime can be cut into pieces, which will reunite if brought into each other's vicinity), and then fed the slime with oatmeal at two locations. After a few hours, the slime retracted to a path following the shortest path in the maze connecting the food sources. The authors report that they repeated the experiment with different mazes; in all experiments, Physarum retracted to the shortest path.
{ "abstract": [ "The plasmodium of the slime mould Physarum polycephalum is a large amoeba-like cell consisting of a dendritic network of tube-like structures (pseudopodia). It changes its shape as it crawls over a plain agar gel and, if food is placed at two different points, it will put out pseudopodia that connect the two food sources. Here we show that this simple organism has the ability to find the minimum-length solution between two points in a labyrinth." ], "cite_N": [ "@cite_0" ], "mid": [ "1661179413" ] }
0
1906.07781
2950623005
The directed Physarum dynamics is known to solve positive linear programs: minimize $c^T x$ subject to $Ax = b$ and $x \ge 0$ for a positive cost vector $c$. The directed Physarum dynamics evolves a positive vector $x$ according to the dynamics $\dot{x} = q - x$. Here $q$ is the solution to $Aq = b$ that minimizes the "energy" $\sum_i c_i q_i^2 / x_i$. In this paper, we study the non-uniform directed dynamics $\dot{x} = D(q - x)$, where $D$ is a positive diagonal matrix. The non-uniform dynamics is more complex than the uniform dynamics (with $D$ being the identity matrix), as it allows each component of $x$ to react with different speed to the differences between $q$ and $x$. Our contribution is to show that the non-uniform directed dynamics solves positive linear programs.
The paper @cite_11 proposes a mathematical model for the behavior of the slime and argues extensively that the model is adequate. Physarum is modeled as an electrical network with time-varying resistors. We have a simple graph $G$ with two distinguished nodes modeling the food sources. Each edge $e$ has a positive length $L_e$ and a positive capacity $x_e$; $L_e$ is fixed, but $x_e$ is a function of time. The resistance $r_e$ of $e$ is $r_e = L_e / x_e$. In the electrical network defined by these resistances, a current of value 1 is forced from one of the distinguished nodes to the other. For an (arbitrarily oriented) edge $e$, let $Q_e$ be the resulting current over $e$. Then, the capacity of $e$ evolves according to the differential equation $\dot{x}_e = |Q_e| - x_e$, where $\dot{x}_e$ is the derivative of $x_e$ with respect to time.
{ "abstract": [ "We describe here a mathematical model of the adaptive dynamics of a transport network of the true slime mold Physarum polycephalum, an amoeboid organism that exhibits path-finding behavior in a maze. This organism possesses a network of tubular elements, by means of which nutrients and signals circulate through the plasmodium. When the organism is put in a maze, the network changes its shape to connect two exits by the shortest path. This process of path-finding is attributed to an underlying physiological mechanism: a tube thickens as the flux through it increases. The experimental evidence for this is, however, only qualitative. We constructed a mathematical model of the general form of the tube dynamics. Our model contains a key parameter corresponding to the extent of the feedback regulation between the thickness of a tube and the flux through it. We demonstrate the dependence of the behavior of the model on this parameter." ], "cite_N": [ "@cite_11" ], "mid": [ "2003810228" ] }
0
1906.07781
2950623005
The directed Physarum dynamics is known to solve positive linear programs: minimize $c^T x$ subject to $Ax = b$ and $x \ge 0$ for a positive cost vector $c$. The directed Physarum dynamics evolves a positive vector $x$ according to the dynamics $\dot{x} = q - x$. Here $q$ is the solution to $Aq = b$ that minimizes the "energy" $\sum_i c_i q_i^2 / x_i$. In this paper, we study the non-uniform directed dynamics $\dot{x} = D(q - x)$, where $D$ is a positive diagonal matrix. The non-uniform dynamics is more complex than the uniform dynamics (with $D$ being the identity matrix), as it allows each component of $x$ to react with different speed to the differences between $q$ and $x$. Our contribution is to show that the non-uniform directed dynamics solves positive linear programs.
Nakagaki et al. @cite_3 pointed out that different edges may react with different speeds to the differences between flow and capacity. For example, Physarum prefers darkness over bright light, and hence edges in a bright environment react differently than edges in darkness. This led to the non-uniform dynamics $\dot{x}_e = d_e (|Q_e| - x_e)$, where $d_e$ is an indicator for the reactivity of an edge.
{ "abstract": [ "When two food sources are presented to the slime mold Physarum in the dark, a thick tube for absorbing nutrients is formed that connects the food sources through the shortest route. When the light-avoiding organism is partially illuminated, however, the tube connecting the food sources follows a different route. Defining risk as the experimentally measurable rate of light-avoiding movement, the minimum-risk path is exhibited by the organism, determined by integrating along the path. A model for an adaptive-tube network is presented that is in good agreement with the experimental observations." ], "cite_N": [ "@cite_3" ], "mid": [ "2031275178" ] }
0
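The dynamics described in the abstract and related work above are easy to simulate numerically. The following Python sketch Euler-integrates the non-uniform directed dynamics $\dot{x} = D(q - x)$ on a toy two-edge shortest-path instance of our own choosing (not from the cited papers), computing $q$ as the energy-minimizing solution of $Aq = b$, which gives $q = W A^T (A W A^T)^+ b$ with $W = \mathrm{diag}(x_i / c_i)$:

```python
import numpy as np

def energy_minimizer(A, b, x, c):
    # q minimizes sum_i c_i q_i^2 / x_i subject to Aq = b:
    # q = W A^T (A W A^T)^+ b with W = diag(x_i / c_i).
    W = np.diag(x / c)
    return W @ A.T @ np.linalg.pinv(A @ W @ A.T) @ b

def simulate(A, b, c, D, steps=20000, h=0.001):
    x = np.ones(A.shape[1])          # positive start vector
    for _ in range(steps):
        q = energy_minimizer(A, b, x, c)
        x = x + h * D @ (q - x)      # Euler step of x' = D(q - x)
    return x

# Toy instance: two parallel edges from s to t with costs 1 and 2;
# one unit of flow must be routed, so Ax = b reads x1 + x2 = 1.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
D = np.diag([1.0, 0.5])              # non-uniform reactivities
print(simulate(A, b, c, D))          # approaches [1, 0]: all flow on the cheap edge
```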
1906.07900
2949728259
Comprehensive quality-aware automated semantic web service composition is an NP-hard problem, where service composition workflows are unknown, and comprehensive quality, i.e., Quality of Service (QoS) and Quality of Semantic Matchmaking (QoSM), is simultaneously optimized. The objective of this problem is to find a solution with optimized or near-optimized overall QoS and QoSM within polynomial time over a service request. In this paper, we propose novel memetic EDA-based approaches to tackle this problem. The proposed method investigates the effectiveness of several neighborhood structures of composite services by proposing domain-dependent local search operators. Apart from that, a joint strategy of the local search procedure is proposed to integrate with a modified EDA to reduce the overall computation time of our memetic approach. To better demonstrate the effectiveness and scalability of our approach, we create a more challenging, augmented version of the service composition benchmark based on WSC-08 and WSC-09. Experimental results on this benchmark show that one of our proposed memetic EDA-based approaches (i.e., MEEDA-LOP) significantly outperforms existing state-of-the-art algorithms.
Automated web service composition aims to loosely couple web services to fulfill a service request, without strictly obeying a pre-given abstract workflow. Instead, composition workflows are gradually built up while their component services are selected. Existing works in fully automated web service composition can be categorized into two approaches: direct approaches and indirect approaches @cite_37 . The direct approaches represent composition solutions explicitly in a representation that displays the actual execution flows of composite services, while the indirect approaches often represent composite services implicitly as permutations, which require a decoding process to build up actual execution workflows.
{ "abstract": [ "Web services have become increasingly popular in recent years, and they are especially suitable to the process of Web service composition, which is when several services are combined to create an application that accomplishes a more complex task. In recent years, significant research efforts have been made on developing approaches for performing Quality of Service -aware Web service composition. Evolutionary computing (EC) techniques have been widely used for solving this problem, since they allow for the quality of compositions to be optimised, meanwhile also ensuring that the solutions produced have the required functionality. Existing EC-based composition approaches perform constrained optimisation to produce solutions that meet those requirements, however these constraints may hinder the effectiveness of the search. To address this issue, a novel framework based on an indirect representation is proposed in this work. The core idea is to first generate candidate service compositions encoded as sequences of services. Then, a decoding scheme is developed to transform any sequence of services into a corresponding feasible service composition. Given a service sequence, the decoding scheme builds the workflow from scratch by iteratively adding the services to proper positions of the workflow in the order of the sequence. This is beneficial because it allows the optimisation to be carried out in an unconstrained way, later enforcing functionality constraints during the decoding process. A number of encoding methods and corresponding search operators, including the PSO, GA, and GP-based methods, are proposed and tested, with results showing that the quality of the solutions produced by the proposed indirect approach is higher than that of a baseline direct representation-based approach for twelve out of the thirteen datasets considered. In particular, the method using the variable-length sequence representation has the most efficient execution time, while the fixed-length sequence produces the highest quality solutions." ], "cite_N": [ "@cite_37" ], "mid": [ "2605603844" ] }
Memetic EDA-Based Approaches to Comprehensive Quality-Aware Automated Semantic Web Service Composition
Service Oriented Architecture (SOA) has been contributing to the reuse of software components [3]. Web services are one of the most successful implementations of SOA, providing services as "modular, self-describing, self-contained applications that are available on the Internet" [4]. Often, users' requirements cannot be satisfied by one existing web service. Web service composition aims to loosely couple a set of web services to provide a value-added composite service (i.e., a solution of service composition) that accommodates users' complex requirements. These requirements are related to functional (i.e., quality of semantic matchmaking, QoSM) and non-functional (i.e., quality of service, QoS) requirements, which give rise to semantic web service composition and QoS-aware web service composition, with the aim of optimizing QoSM and QoS of service composition solutions, respectively. Many researchers have been working on solving these optimization problems in web service composition [5], [6], [7], [8], [9], [10], [11], [12], [13]. Existing works that study the above problems are classified as semi-automated and fully automated web service composition [14], with two different assumptions. One assumes that users know an abstract service composition workflow, and all the composite services produced by the composition system must strictly obey the given workflow. However, this assumption is not always valid, since the workflow may not be provided or may not even be known by users. The second group of research works does not rely on any existing workflows. Instead, a composite service is constructed from scratch by selecting and connecting multiple atomic services obtained from the service repository [14]. Therefore, this construction process can end up with different workflows. Clearly, compared to semi-automated web service composition, fully automated web service composition opens new opportunities to further improve QoS and QoSM due to the different workflows automatically constructed. Nevertheless, the difficulty of the composition task is also increased. AI planning and Evolutionary Computation (EC) are two of the most widely used techniques for semi-automated and fully automated web service composition [5], [7], [10], [13], [15], [16], [17]. AI planning techniques focus on creating valid composite services, where functional correctness is always ensured by gradually constructed workflows. However, these approaches do not optimize the QoS or QoSM of the solutions produced [18]. EC techniques have been widely used to solve service composition problems that aim to optimize either one or both of QoSM and QoS, and they are potentially more useful in practice as they can efficiently find "good enough" composite solutions. Important approaches [5], [6], [7], [8], [9], [10], [11], [12], [13] based on Genetic Algorithms (GA) [19], Genetic Programming (GP) [20], Particle Swarm Optimization (PSO) [21] and Estimation of Distribution Algorithms (EDA) [22] have been widely investigated in the literature. To effectively search for good solutions, EC techniques often employ useful information distilled from promising solutions to produce new offspring. The information can be used either implicitly or explicitly. Conventional EC techniques, such as GA and GP, fall into the implicit camp by producing new solutions through recombining solutions evolved previously [5], [7], [13]. 
In contrast, one EC technique that has achieved prominent success through the explicit use of information is the Estimation of Distribution Algorithm (EDA) [23]. In EDA, information about promising solutions evolved previously is captured compactly in the form of probability models. EDA has been successfully utilized for semi-automated service composition [6], [24], but these methods cannot support fully automated service composition. We recently proposed a new EDA-based approach for fully automated web service composition through reliable and accurate learning of a probability model that encodes the distribution of promising solutions [12], i.e., a distribution model. EDA stresses global exploration rather than local exploitation [25]. This is because the distribution model has the objective of exploring more promising regions in the entire solution space, without attempting to improve the quality of any specific solutions evolved previously. However, the optimization performance can often be improved directly through local modifications to promising solutions. By restricting the target region for local search and avoiding most of the randomness involved in sampling directly from the distribution model, this can potentially expedite the search for optimal solutions. Therefore, to improve its competency in finding more effective solutions, one idea is to enhance EDA with local search, yielding a memetic EDA. Memetic EDA has been successfully applied to many optimization problems with local search operators [26], [25], such as arc routing and assembly flow-shop scheduling problems. On the one hand, although memetic EDA has been successfully applied to many applications, those memetic approaches are inappropriate for web service composition, as their local search operators are only applicable to domain-specific or problem-specific solution representations [25], [27]. On the other hand, despite the recent success in EDA-based service composition, the effectiveness of this approach can be enhanced by introducing memetic EDA. Several challenges remain to be addressed in developing a memetic EDA approach to service composition, as follows: First, a composite service is commonly represented as a DAG, and exploring the neighborhood of a DAG, especially a large DAG, is computationally infeasible [28]. Note that the discussed neighborhood is structured by local search operators on the search space, where neighbor solutions can be generated iteratively from a given candidate solution. Therefore, researchers [9], [29] often indirectly define the neighborhood of a composite service represented in the form of a permutation, which can be converted to a DAG through a separate decoding process. Often, so-called "swap" operators produce neighbors by swapping two random elements in a permutation. Consequently, a neighborhood is defined by the collection of permutations obtainable through a "swap" applied to any given permutation. However, such a neighborhood often contains a large proportion of neighboring permutations of inferior quality. For effective local search, the neighborhood must be refined to exclude most of the clearly unwise swapping choices by exploiting domain-specific knowledge. Second, as we know, it is very challenging to determine which candidate solutions are to be selected for local search in memetic algorithms, as the selection method has a significant impact on the effectiveness and efficiency of memetic EDA. 
Should an equal chance be given to all the candidate solutions, or should only elite solutions be considered for local search? Moreover, what are elite solutions, and how many of them should be modified locally? Answers to these challenging questions often depend on many factors, such as the EC algorithm, the domain problem, etc. Therefore, it is challenging to determine one effective selection strategy for a memetic EDA-based approach to service composition. Third, a traditional strategy that exhaustively explores the whole neighboring space of composite services can incur a high computation cost without any guarantee of improving solution quality. For example, for a permutation-based representation, if a simple swap operator is utilized for exploring the neighborhood, then the dimension of the permutation determines the computational complexity. In the context of service composition, the dimension of such a permutation is usually equivalent to the size of the service repository. As the neighborhood size is extremely large when many services are to be considered during the service composition process, this strategy is infeasible for practical use. Fourth, in EDA, although a probability distribution model is adjusted to trace promising search areas throughout the generations, a proportion of promising solutions (i.e., permutations) is increasingly likely to be sampled repeatedly as the distribution model converges over the generations. Furthermore, these repeatedly sampled solutions are often favorable to users, since they are candidate solutions of high quality. In the EDA-based approach to service composition, repeatedly sampled permutation-based solutions are very costly, as they require repetitive computation time for decoding and evaluation. To address the challenges above, we propose a memetic EDA-based approach that achieves substantially higher performance in effectiveness and efficiency. This outstanding performance is observed by comparing it with some recently proposed web service composition approaches, such as an EDA-based approach [12], a PSO-based approach [10], and GA- and memetic GA-based approaches [9]. In particular, an empirical, experimental study on the effectiveness of different neighborhoods structured by different local search operators is conducted. The contributions of this paper are listed below; the first contribution addresses the first challenge discussed previously, and the second contribution addresses the remaining challenges. 1) To perform an effective local search on composite services, we first propose several neighborhood structures for candidate solutions. These neighborhoods are created by developing several novel domain-dependent local search operators, based on constructing and swapping effective building blocks of composite services for local improvements. Subsequently, we develop an effective memetic EDA-based approach based on our previous work [12], with a natural integration of those local search operators. 2) To significantly reduce the computation time of our proposed memetic EDA-based approach, an integrated local search procedure is proposed together with a modified EDA based on the standard EDA. To decrease computation losses from repetitive sampling and evaluations, we utilize an archiving technique to avoid sampling solutions repetitively. This technique is prevalent and straightforward to use. Besides that, the local search procedure employs an effective joint strategy for efficiently finding better solutions. 
This strategy jointly considers a fitness uniform distribution scheme and stochastic local search together with our proposed local search operators. 3) To demonstrate the performance of our memetic EDA-based approach, we create a more challenging, augmented version of the service composition benchmark based on WSC-08 [1] and WSC-09 [2]. In particular, the new benchmark inherits the functionalities provided by services in the benchmark datasets WSC-08 and WSC-09 and the QoS attributes of web services in the benchmark dataset QWS [30]. Moreover, the number of web services in the service repository is doubled in the new benchmark (with a much bigger search space) to demonstrate that memetic EDA can maintain high performance on problems of significantly larger size. This benchmark has been made freely available online, as has the code of our memetic EDA-based approach (the augmented benchmarks are available from https://github.com/chenwangnida/Dataset, and the code of our memetic EDA-based approach is available from https://github.com/chenwangnida/MENHBSA4SWSC). We experimentally compare our memetic EDA-based approach with some state-of-the-art methods that have been recently proposed to solve the same or a similar service composition problem using the new benchmark. Our experimental results illustrate that our method can achieve cutting-edge performance. Literature on EC-Based fully automated web service composition Automated web service composition aims to loosely couple web services to fulfill a service request, without strictly obeying a pre-given abstract workflow. Instead, composition workflows are gradually built up while their component services are selected. Existing works in fully automated web service composition can be categorized into two approaches: direct approaches and indirect approaches [31]. The direct approaches represent composition solutions explicitly in a representation that displays the actual execution flows of composite services, while the indirect approaches often represent composite services implicitly as permutations, which require a decoding process to build up actual execution workflows. In the first category, tree- and graph-based representations are widely used to represent service composition solutions directly. A graph-based evolutionary process is introduced in [32] to directly evolve DAG-based service composition solutions, applying domain-dependent crossover and mutation operators with repairing methods. GP is utilized for searching for optimal solutions represented as trees. [7] proposes a context-free grammar for randomly initializing tree-based service composition solutions with correct structures of composite services. In contrast, [13] initializes tree-based service composition solutions completely at random, but develops adaptive crossover and mutation rates according to the diversity of the population for accelerating convergence. Both approaches [7], [13] utilize a penalization method for filtering incorrect solutions while evaluating the QoS of candidate solutions. To achieve higher performance, [5], [8] utilize a greedy search algorithm for creating correct DAG-based composition workflows, which are mapped to tree-based ones with different methods. During the evolutionary process, the correctness of the solutions is ensured by domain-dependent crossover and mutation. 
However, the mapped tree-based representations suffer from a scalability issue, since many replicas of subtrees are produced by the mapping methods. To overcome this issue, [11] proposes a tree-like representation, in which the replicas of subtrees are handled by removing them and inserting edges from the roots of the replicas to the roots of the copies. In the second category, service composition solutions are represented as permutations, which are then decoded into solutions represented as DAGs [10], [31], [33]. PSO is utilized to find an optimized queue of services (i.e., a permutation), which can be decoded into a corresponding DAG-based composite service [33]. [10] extends [33] to jointly optimize QoSM and QoS, where a weighted DAG is decoded, with edge weights corresponding to the matchmaking quality between services. These two PSO-based approaches rely on PSO to determine the weights of a particle's position (each corresponding to a service) to form an ordered service queue. Optimizing QoSM and QoS simultaneously is more challenging than optimizing QoS only, because the search space is significantly larger, and it demands more effective and efficient searching techniques. Apart from that, it has been suggested that utilizing an indirect representation often contributes to higher performance compared to a direct representation [31]. This is because the search space is not unwittingly restricted by unconstrained random initialization of solutions and operators. In summary, EC techniques have shown their promise in fully automated web service composition. Moreover, the indirect approaches have been shown to be more effective. Therefore, EC techniques with indirect representations are the focus for solving the service composition problem in this paper. Literature on memetic EC-based approaches and EDA Memetic algorithms have drawn growing attention from researchers in recent years and achieved significant successes in many applications [34]. By introducing local search, the performance of EC techniques can be improved. In the domain of service composition, to overcome the premature convergence of GP, Tabu search is combined with GP to solve QoS-aware data-intensive web service composition [35]. [9] proposed an indirect memetic approach for QoS-aware web service composition, where a domain-dependent crossover operator is proposed to produce candidate solutions. Besides that, an exhaustive local search is applied to composite solutions represented as permutations. However, the produced neighbors are likely to be decoded into the same composite solution. Therefore, the effectiveness of this local search operator demands further improvement. Recently, EDA has been used as a technique to tackle permutation-based optimization problems [23]. In particular, a distribution model is learned iteratively for each population. Subsequently, new offspring are generated based on the learned model. Moreover, domain-dependent local search operators are often introduced to enhance the performance of EDA. For example, a probability matrix related to the job priority permutation of a solution is learned in the EDA-based flow-shop scheduling problem, and different job-based local search operators were proposed to enhance the exploitation ability of EDA [25]. An Edge Histogram Matrix is applied to uncertain capacitated arc routing problems and is learned from solutions represented by a set of routes [27]. 
To make local improvements, different move operators, such as single insertion and swap, are also proposed. The use of EDA has only been investigated for semi-automated web service composition [6], [24], [36]. However, we recently proposed an EDA-based approach for fully automated web service composition, where candidate solutions are represented as permutations over a given service repository. The success of the proposed method strongly depends on the distribution model and the way of learning it. We employ a Node Histogram Matrix (NHM) to learn the distribution of promising solutions in one population, and the Node Histogram-Based Sampling Algorithm (NHBSA) [22] is employed to produce candidate solutions. Although we have conducted an initial study of fully automated service composition, there remains an opportunity to improve its performance further. EDA is good at global exploration, and local search operators are introduced into EDA to enhance its capability in exploitation. In summary, on the one hand, memetic EDA-based approaches have been investigated for many problems other than fully automated service composition, achieving promising results. On the other hand, notwithstanding the success achieved in our initial investigation of EDA-based fully automated service composition, the performance of this EDA-based approach can be further improved by combining it with local search. SEMANTIC WEB SERVICE COMPOSITION PROBLEM A semantic web service (service, for short) is considered as a tuple S = (I_S, O_S, QoS_S), where I_S is a set of service inputs that are consumed by S, O_S is a set of service outputs that are produced by S, and QoS_S = {t_S, c_S, r_S, a_S} is a set of non-functional attributes of S. The inputs in I_S and outputs in O_S are parameters modeled through concepts in a domain-specific ontology O. The attributes t_S, c_S, r_S, a_S refer to the response time, cost, reliability, and availability of service S, respectively, which are four commonly used QoS attributes [37]. A service repository SR is a finite collection of services supported by a common ontology O. A composition task (also called a service request) over a given SR is a tuple T = (I_T, O_T), where I_T is a set of task inputs and O_T is a set of task outputs. The inputs in I_T and outputs in O_T are parameters that are semantically described by concepts in the ontology O. Two special atomic services Start = (∅, I_T, ∅) and End = (O_T, ∅, ∅) are always included in SR to account for the input and output of a given composition task T. We use matchmaking types to describe the level of a match between outputs and inputs [38]. For concepts a, b in O, the matchmaking returns exact if a and b are equivalent (a ≡ b), plugin if a is a sub-concept of b (a ⊑ b), subsume if a is a super-concept of b (a ⊒ b), and fail if none of the previous matchmaking types is returned. In this paper we are only interested in exact and plugin matches for robust compositions, see [39]. As argued in [39], plugin matches are less preferable than exact matches due to the overheads associated with data processing. For plugin matches, the semantic similarity of concepts is suggested to be considered when comparing different plugin matches. A robust causal link [40] is a link between two matched services S and S′, denoted as S → S′, if an output a (a ∈ O_S) of S serves as an input b (b ∈ I_{S′}) of S′, satisfying either a ≡ b or a ⊑ b. 
For concepts a, b in O, the semantic similarity sim(a, b) is calculated based on the edge counting method in a taxonomy like WordNet [41]. Advantages of this method are simple calculation and good semantic measurement [41]. Therefore, the matchmaking type and semantic similarity of a robust causal link are defined as follows:

type_link = 1 if a ≡ b (exact match), p if a ⊑ b (plugin match)   (1)

sim_link = sim(a, b) = 2 N_c / (N_a + N_b)   (2)

with a suitable parameter p, 0 < p < 1, and with N_a, N_b and N_c, which measure the distances from concept a, concept b, and the closest common ancestor c of a and b to the top concept of the ontology O, respectively. However, if more than one pair of matched output and input exists from service S to service S′, type_e and sim_e take on their average values. The QoSM of a composite service is obtained by aggregating over all m robust causal links as follows:

MT = ∏_{j=1}^{m} type_{link_j}   (3)

SIM = (1/m) ∑_{j=1}^{m} sim_{link_j}   (4)

Formal expressions as in [42] are used to represent service compositions. The constructors •, ∥, + and ∗ are used to denote sequential composition, parallel composition, choice, and iteration, respectively. The set of composite service expressions is the smallest collection SC that contains all atomic services and that is closed under sequential composition, parallel composition, choice, and iteration. That is, whenever C_0, C_1, …, C_d are in SC, then •(C_1, …, C_d), ∥(C_1, …, C_d), +(C_1, …, C_d), and ∗C_0 are in SC, too. Let C be a composite service expression. If C denotes an atomic service S, then its QoS is given by QoS_S. Otherwise the QoS of C can be obtained inductively as summarized in Table 1:

C | r_C | a_C | ct_C | t_C
•(C_1, …, C_d) | ∏_{k=1}^{d} r_{C_k} | ∏_{k=1}^{d} a_{C_k} | ∑_{k=1}^{d} ct_{C_k} | ∑_{k=1}^{d} t_{C_k}
∥(C_1, …, C_d) | ∏_{k=1}^{d} r_{C_k} | ∏_{k=1}^{d} a_{C_k} | ∑_{k=1}^{d} ct_{C_k} | MAX{t_{C_k} | k ∈ {1, …, d}}
+(C_1, …, C_d) | ∑_{k=1}^{d} p_k · r_{C_k} | ∑_{k=1}^{d} p_k · a_{C_k} | ∑_{k=1}^{d} p_k · ct_{C_k} | ∑_{k=1}^{d} p_k · t_{C_k}
∗C_0 | r_{C_0}^ℓ | a_{C_0}^ℓ | ℓ · ct_{C_0} | ℓ · t_{C_0}

Herein, p_1, …, p_d with ∑_{k=1}^{d} p_k = 1 denote the probabilities of the different options of the choice +, while ℓ denotes the average number of iterations. Therefore, the QoS of a service composition solution, i.e., availability (A), reliability (R), execution time (T), and cost (CT), can be obtained by aggregating a_C, r_C, t_C and ct_C as in Table 1. In the presentation of this paper, we mainly focus on two constructors, sequence • and parallel ∥, similar to most automated service composition works [5], [8], [10], [11], [32], [33], where service composition solutions are represented as a Directed Acyclic Graph (DAG). We can easily calculate the QoS of a composite service that is represented as a DAG [10] according to Table 1. When multiple quality criteria are involved in decision making, the fitness of a solution is defined as a weighted sum of all individual criteria in Eq. (5), assuming the preference of each quality criterion based on its relative importance is provided by the user [43]:

Fitness(C) = w_1 M̂T + w_2 ŜIM + w_3 Â + w_4 R̂ + w_5 (1 − T̂) + w_6 (1 − ĈT)   (5)

with ∑_{k=1}^{6} w_k = 1. This objective function defines a comprehensive quality model for service composition. We can adjust the weights according to the user's preferences. M̂T, ŜIM, Â, R̂, T̂, and ĈT are normalized values calculated within the range from 0 to 1 using Eq. (6). To simplify the presentation we also use the notation (Q_1, Q_2, Q_3, Q_4, Q_5, Q_6) = (MT, SIM, A, R, T, CT). Q_1 and Q_2 have minimum value 0 and maximum value 1. 
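To illustrate Eqs. (1)-(4), the Python sketch below computes the matchmaking type and the edge-counting similarity over a toy ontology, then aggregates them over the links of a composite service. The ontology, the value p = 0.75, and all helper names are our own assumptions, not the paper's:

```python
# Toy ontology as a parent map; "Thing" is the top concept.
PARENT = {"Car": "Vehicle", "Bike": "Vehicle", "Vehicle": "Thing", "Thing": None}

def depth(concept):  # distance to the top concept
    d = 0
    while PARENT[concept] is not None:
        concept, d = PARENT[concept], d + 1
    return d

def ancestors(concept):
    out = []
    while concept is not None:
        out.append(concept)
        concept = PARENT[concept]
    return out

def match(a, b, p=0.75):
    """Return (type_link, sim_link) of a robust causal link, Eqs. (1)-(2)."""
    if a == b:
        t = 1.0                      # exact match: a == b
    elif b in ancestors(a):
        t = p                        # plugin match: a is a sub-concept of b
    else:
        raise ValueError("fail: no robust causal link")
    c = next(x for x in ancestors(a) if x in ancestors(b))  # closest common ancestor
    na, nb = depth(a), depth(b)
    sim = 2 * depth(c) / (na + nb) if na + nb else 1.0      # Eq. (2)
    return t, sim

links = [match("Car", "Car"), match("Car", "Vehicle")]
MT = 1.0
for t, _ in links:
    MT *= t                                   # Eq. (3): product over all links
SIM = sum(s for _, s in links) / len(links)   # Eq. (4): average over all links
print(MT, SIM)
```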
The minimum and maximum values of Q_3, Q_4, Q_5, and Q_6 are calculated across all the relevant services (that are determined in Sect. 4.2) in the service repository SR using the greedy search in [5], [8]:

Q̂_k = (Q_k − Q_{k,min}) / (Q_{k,max} − Q_{k,min})   if k = 1, …, 4 and Q_{k,max} − Q_{k,min} ≠ 0,
Q̂_k = (Q_{k,max} − Q_k) / (Q_{k,max} − Q_{k,min})   if k = 5, 6 and Q_{k,max} − Q_{k,min} ≠ 0,
Q̂_k = 1   otherwise.   (6)

The goal of comprehensive quality-aware service composition is to find a composite service expression C that maximizes the objective function in Eq. (5). C is hence considered the best possible solution for a given composition task T. MEMETIC EDA-BASED APPROACH FOR SEMANTIC WEB SERVICE COMPOSITION In this section, we present our memetic EDA-based approach to fully automated semantic web service composition. We start by giving an overview of our memetic EDA-based approach. Subsequently, we discuss some essential steps in the approach: the first one is to discover relevant services and service layers, see details in Sect. 4.2. The second one is to introduce a permutation-based representation proposed in our previous work, see details in Sect. 4.3 and 4.4. The third one is to introduce an effective joint strategy for a local search procedure, see details in Sect. 4.5. We propose several key ideas that are jointly employed to build our memetic EDA-based approach: 1) A composite service is commonly represented as a DAG, since a DAG can intuitively represent an execution flow of web services and allows efficient computation of QoS. The success of the EDA strategy strongly relies on a proper distribution model for learning the knowledge of promising solutions. Our initial study [12] represents a composite service as a unique queue of services, i.e., a permutation of atomic services, which is mapped from a DAG-based solution. Composite services in this permutation form contribute to a distribution model to be learned and to new permutation-based promising solutions to be sampled. Therefore, a bi-directional map is ensured between permutations and DAGs for learning and evaluation purposes. 2) To significantly decrease the computation time of the local search procedure, it is crucial to select a restricted number of suitable candidate solutions for local search. We assume that candidate solutions with close fitness values are similar in their corresponding DAG forms, so neighbors produced from these candidate solutions can be the same. Therefore, we group candidate solutions based on their fitness values according to a uniform distribution scheme, which allows candidate solutions with the largest differences measured by single-objective fitness values to be effectively chosen for local search. 3) It is not efficient to exhaustively explore all neighbors as in conventional local search [9]. Instead, stochastically searching the neighboring solutions can significantly reduce the computation cost [26]. Therefore, we introduce a stochastic local search with EDA to balance local improvement and computation time. 4) Exploring the whole neighborhood of a DAG-based composite service is usually computationally infeasible [28]. However, it is straightforward to define the neighborhood on a permutation-based representation by so-called swap operators. To develop effective swap operators, we utilize domain knowledge of service composition to create effective building blocks for these swap operators on permutation-based candidate solutions. These swap operators aim to exploit fitter neighbors effectively. That is, they are likely to make local improvements in the produced neighbors. 
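Under our reading of Eqs. (5) and (6), the normalization of Eq. (6) already inverts T and CT so that larger normalized values are always better; Eq. (5)'s (1 − T̂) and (1 − ĈT) terms express the same inversion when T̂ and ĈT denote the plain normalization. The sketch below (illustrative weights and quality bounds, not values from the paper) computes the resulting weighted-sum fitness:

```python
def normalize(q, qmin, qmax, k):
    """Eq. (6): map each quality value into [0, 1], larger = better."""
    if qmax - qmin == 0:
        return 1.0
    if k in (1, 2, 3, 4):                  # MT, SIM, A, R: larger raw value is better
        return (q - qmin) / (qmax - qmin)
    return (qmax - q) / (qmax - qmin)      # k in (5, 6): T, CT: smaller raw value is better

def fitness(Q, bounds, w):
    """Eq. (5) as a weighted sum; Q = (MT, SIM, A, R, T, CT), sum(w) == 1."""
    qhat = [normalize(Q[k - 1], *bounds[k - 1], k) for k in range(1, 7)]
    return sum(wk * qk for wk, qk in zip(w, qhat))

w = [1 / 6] * 6                            # illustrative equal weights
bounds = [(0, 1), (0, 1), (0, 1), (0, 1), (0.2, 5.0), (0.1, 9.0)]  # hypothetical bounds
print(fitness((0.75, 0.9, 0.99, 0.95, 1.3, 2.4), bounds, w))
```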
An overview of memetic EDA-based algorithm for automatic service composition An overview of the memetic EDA-based approach is represented in Figure 1, consisting of the following steps: initialize population, evaluate population, select superior subpopulation, learn probability model, sample individuals and return optimal solutions. We start with discovering all the relevant services that are related to a given composition request T in Step 1. Meanwhile, several service layers are identified (see details in Subsection 4.2). These relevant services are used to randomly generate m composite services represented as permutations, Π g k , where g = 0 and k = 1, . . . , m. In Step 2, these permutation-based individuals are decoded into DAG-based solutions using a forward graph building technique [10], based on which, the fitness in Eq. 5 of each individual can be calculated. In Step 3, we merge the current population P g with an archive. The archive is an empty individual set initially and will saved with elite composite services in the future. By adopting Breath-First Search (BFS) on each corresponding DAG-based solutions in the merged population, we produce another encoded permutation-based solutions Π g k . Then, the local search procedure is applied to a very small set of these permutations. This small permutation set is selected based on a fitness uniform selection scheme over the current population (see details in 4.5.1). For each permutation in the small set, a stochastic local search is employed to create new permutations as its neighbors, where the best neighbor is identified based on the fitness value. This permutation in the small set is replaced with its best neighbor (see details in Subsection 4.5). The top half of the best-performing solutions are reserved in P g according to their fitness values and put them into the archive as elite solutions. In Step 4, we use these elite solutions in the archive to learn a N HM g of generation g, which produces offsprings for generation g + 1 using NHBSA, see details in Subsection 4.4. Consequently, we go back to Step 2 to evaluate the fitness of new offsprings. The steps 2 to 4 will be repeated until the maximum number of generations is reached. Eventually, the best solutions found throughout the evolutionary process is returned. In a nutshell, we introduce a permutation-based representation derived from the common DAG-based one. In our proposed algorithm, we always switch between these two representations back and forth for better searching or evaluation purposes. Furthermore, an effective and efficient local search procedure is developed through the use of the selection scheme and the stochastic local search. Relevant Services and Service Layers Discovering relevant services and service layers is an initial, but crucial step for our memetic EDA-based approach. We achieve two goals at this initial stage: the first goal is to reduce the size of the service repository SR to keep only those that are relevant to the service composition task T ; the second goal is to identify service layers of these relevant services. In particular, a group of layers is identified, and each layer contains a set of services that have the same longest distance to Start. We adopt a layer discovering method in [44] to find relevant services and service layers as illustrated in the following example. Fig. 
Fig. 3 shows an example of discovering relevant services and service layers given a service request T, where five related services (i.e., S_0, S_1, S_2, S_3, and S_4) and two layers (i.e., L_1 and L_2) are found. In L_1, S_0, S_1, S_2, and S_4 can be satisfied by {a, b} of T, and they have the same distance to Start (note that the distance is measured by the number of predecessors), while S_3 in L_2 requires additional inputs from other services and is associated with a longer distance to Start. A Novel Permutation-Based Representation Service composition solutions are commonly represented as Directed Acyclic Graphs (DAGs) [5], [8], [10], [11], [32], [33]. Let G = (V, E) be a DAG-based composite solution from Start to End, where nodes correspond to the services and edges correspond to the robust causal links. Often, V does not contain all services in SR. Many combinatorial optimization problems naturally represent solutions as permutations, which can differ between problems [23]. Here we represent composite services as permutations, and we ensure a bi-directional map between permutations and DAGs. The bi-directional map is crucial for learning the distribution of promising composite solutions, because it is less reliable to learn a distribution based on permutations if different permutations are mapped to the same DAG-based composite service. Let Π = (Π_0, …, Π_t, Π_{t+1}, …, Π_{n−1}) be a permutation, elements of which are {0, …, t, t + 1, …, n − 1} such that Π_i ≠ Π_j for all i ≠ j. Particularly, {Π_0, …, Π_t} are service indexes (i.e., id numbers) of the component services in the corresponding G, sorted by the longest distance from Start to each component service of G, while {Π_{t+1}, …, Π_{n−1}} are indexes of the remaining services in SR not utilized by G. We use Π^g_k to represent the k-th (out of m, where m is the population size) service composition solution, and P^g = [Π^g_0, …, Π^g_k, …, Π^g_{m−1}] to represent a population of solutions of generation g. An example of producing a permutation-based composite solution is shown as follows. Fig. 3 illustrates the process of producing a permutation-based solution. As an example, take the permutation [4, 1, 2, 3, 0]. This service index queue is decoded into a DAG G^0_0 representing a service composition that satisfies the composition task T. Afterwards, G^0_0 is mapped to a permutation Π^0_0 = [1, 2, 3 | 4, 0]. Herein, each position on the left side of | corresponds to a service discovered by a BFS on G^0_0 from Start; this BFS additionally takes ascending order of service indexes during the search. The right side corresponds to the remaining atomic services in SR, but not in G^0_0. Note that | is just displayed for the courtesy of the reader, rather than being part of the permutation-based representation. Furthermore, we do not permit the encoding [1, 2, 3 | 0, 4], as no information can be extracted from G^0_0 to determine the positions of 0 and 4 in the permutation. A permutation-based population P^g can be created with m permutation-based solutions. Consider m = 6; then P^g can be represented as P^g = [sol^g_0, sol^g_1, sol^g_2, sol^g_3, sol^g_4, sol^g_5]^T. Application of node histogram-based sampling [22] proposed node histogram-based sampling (NHBSA) as a tool for sampling new candidate solutions, commonly represented in the form of permutations. By employing the discussed representation of composite services in Sect. 4.3, we are now capable of applying NHBSA to sample new permutations as candidate composite services.
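To illustrate the encoding just described, the sketch below reproduces the [1, 2, 3 | 4, 0] example: a BFS from Start (visiting same-layer services in ascending index order) yields the left part, and, on our reading of why [1, 2, 3 | 0, 4] is not permitted, the unused services keep their relative order from the original permutation [4, 1, 2, 3, 0]. The toy DAG is our own stand-in for Fig. 3:

```python
from collections import deque

def encode(dag, start, perm):
    """Map a DAG-based solution back to a permutation: BFS order | unused services."""
    order, seen = [], {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node != start:
            order.append(node)
        for nxt in sorted(dag.get(node, [])):   # ascending service indexes per layer
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    used = set(order)
    # Unused services keep their relative order from the original permutation.
    return order + [s for s in perm if s not in used]

dag = {"start": [1], 1: [2], 2: [3], 3: []}     # Start -> S1 -> S2 -> S3 (-> End)
print(encode(dag, "start", [4, 1, 2, 3, 0]))    # -> [1, 2, 3, 4, 0]
```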
Application of node histogram-based sampling
[22] proposed node histogram-based sampling (NHBSA) as a tool for sampling new candidate solutions, which are commonly represented in the form of permutations. By employing the representation of composite services discussed in Sect. 4.3, we are now capable of applying NHBSA to sample new permutations as candidate composite services. The node histogram matrix (NHM) at generation g, denoted by N HM g , is an n × n matrix with entries e g i,j defined as follows:

$$e^g_{i,j} = \sum_{k=0}^{n-1} \delta_{i,j}(sol^g_k) + \varepsilon \tag{7}$$

$$\delta_{i,j}(sol^g_k) = \begin{cases} 1 & \text{if } I^g_k(S_i) = j \\ 0 & \text{otherwise} \end{cases} \tag{8}$$

$$\varepsilon = \frac{m}{n-1}\, b_{ratio} \tag{9}$$

where i, j = 0, 1, . . . , n − 1, and b_ratio is a predetermined bias. Roughly speaking, entry e g i,j counts the number of times that service S i appears in position j of the service queue over all solutions in population P g . We pick one element of N HM g as an example to demonstrate the meaning of each element in the NHM. For example, e g 0,0 (which equals 2.6) consists of an integer part and a decimal part: 2 and 0.6. The integer 2 means that service S 0 appears at the first position 2 times, while the decimal 0.6 is the ε bias. Once we have computed N HM g , we use node histogram-based sampling [22] to sample new permutations for the next generation.
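The sketch below implements Eqs. (7)-(9) together with a simplified, NHBSA-style sampler that fills positions from left to right without replacement. The real NHBSA of [22] differs in details (for instance, it scans positions in a random order), so this is an assumption-laden illustration rather than the exact algorithm.

```python
import random

def build_nhm(population, n, b_ratio=0.0002):
    # population: permutations of {0, ..., n-1}; returns the n x n NHM.
    m = len(population)
    eps = (m / (n - 1)) * b_ratio            # Eq. (9)
    e = [[eps] * n for _ in range(n)]        # every entry starts at the bias
    for sol in population:                   # Eqs. (7) and (8): count how
        for pos, service in enumerate(sol):  # often each service occupies
            e[service][pos] += 1             # each position across P^g
    return e

def sample_permutation(nhm):
    n = len(nhm)
    remaining, perm = list(range(n)), []
    for pos in range(n):                     # left-to-right, no replacement
        weights = [nhm[s][pos] for s in remaining]
        s = random.choices(remaining, weights)[0]
        perm.append(s)
        remaining.remove(s)
    return perm

# Toy usage: learn from six permutations of five services, sample offspring.
pop = [random.sample(range(5), 5) for _ in range(6)]
child = sample_permutation(build_nhm(pop, n=5))
print(child)
```

The ε bias keeps every (service, position) pair at a small non-zero probability, so sampling never becomes fully deterministic even when the population converges.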
Effective Local Search Procedure Through a Joint Strategy
In this section, we introduce the joint strategy of our local search procedure. We begin with the selection of suitable individuals for local search; this selection chooses individuals based on global and local population information using the fitness uniform selection scheme in ALGORITHM 2. Subsequently, we present several local search operators built on the representation discussed in Sect. 4.3. These operators are specially designed to work seamlessly with the different neighborhoods investigated in this paper. The joint strategy for local search is summarized in ALGORITHM 1.

ALGORITHM 1. Joint strategy for local search (Step 3.3 in Fig. 1)
Input : P g , n nb and n set
Output: updated P g
1 Select a small number n set of individuals to form a subset SelectedIndiSet of P g using ALGORITHM 2;
2 foreach Π in SelectedIndiSet do
3 Generate a set of n nb neighbors of Π by local search;
4 Identify the best neighbor Π best with the highest fitness;
5 Replace Π with Π best ;
6 return P g ;

ALGORITHM 1 takes three inputs: the g th population P g , the number n set of individuals selected for local search, and the number n nb of neighbors. In this algorithm, we start by selecting a fixed and small number n set of candidate solutions to form a subset SelectedIndiSet of the current population P g using ALGORITHM 2 (see details in Section 4.5.1). These selected solutions are used for local search. For each solution Π in SelectedIndiSet, we produce n nb neighbors of Π by local search (see details in Section 4.5.2), and then identify the best neighbor Π best among the produced neighbors. We replace the selected Π in SelectedIndiSet with its best neighbor Π best . Eventually, we return an updated P g .

Application of uniform distribution schema
Two types of selection schemes for choosing suitable individuals for local search have been studied [34]: the random selection scheme and the statistics scheme. The random selection scheme is a primary selection method, where local search is potentially applied to all individuals with a predefined rate. However, it can be less effective, as it does not assign local search to the most suitable candidate solutions, and it is more time-consuming when the population size is large. The statistics scheme, in contrast, chooses more suitable individuals based on statistical information about the current population. For example, this method can assign local search to a set of candidate solutions with the highest differences measured by their fitness values. Our selection scheme, inspired by [45], is based on such statistical information and aims to select a small number of suitable individuals for local search, striking a good balance between local improvement and execution time. This selection scheme is presented in ALGORITHM 2. The algorithm applies local search to a set of selected individuals SelectedIndiSet. The size of SelectedIndiSet, n set , is a pre-defined parameter. SelectedIndiSet consists of one elite individual and n set − 1 individuals drawn from n set − 1 groups of individuals in each generation. Particularly, we calculate a uniform fitness interval based on the maximal fitness value maxfitness and the minimal fitness value minfitness of the current population P g .

ALGORITHM 2. Fitness uniform selection scheme
Input : P g and n set
Output: selected solutions SelectedIndiSet
1 SelectedIndiSet ← {} ;
2 Sort P g in descending order based on the fitness ;
3 Put the first individual in P g into SelectedIndiSet ;
4 Calculate the fitness range of the n set − 1 groups based on a uniform interval between maxfitness and minfitness ;
5 Assign each permutation in P g to one of the n set − 1 groups based on its fitness value ;
6 Randomly select one permutation from each group and put it in SelectedIndiSet;
7 return SelectedIndiSet;

Therefore, the population is divided into n set − 1 groups based on the calculated fitness interval. Consequently, these groups represent different sets of individuals, where the individuals within each group have similar fitness values. Note that, in any given generation, the actual number of individuals selected for local search can be less than n set , because some groups may contain no individuals at all.
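A compact sketch of ALGORITHM 2 follows; `scored` is assumed to be a list of (permutation, fitness) pairs, and the interval arithmetic is one plausible reading of the uniform fitness ranges described above.

```python
import random

def fitness_uniform_select(scored, n_set=6):
    ranked = sorted(scored, key=lambda t: t[1], reverse=True)
    selected = [ranked[0]]                      # line 3: the elite individual
    hi, lo = ranked[0][1], ranked[-1][1]
    if hi == lo:
        return selected                         # degenerate population
    width = (hi - lo) / (n_set - 1)             # line 4: uniform interval
    groups = [[] for _ in range(n_set - 1)]
    for item in ranked[1:]:                     # line 5: assign to groups
        idx = min(int((item[1] - lo) / width), n_set - 2)
        groups[idx].append(item)
    for group in groups:                        # line 6: one pick per group;
        if group:                               # empty groups yield fewer
            selected.append(random.choice(group))   # than n_set selections
    return selected

population = [([i], random.random()) for i in range(30)]   # toy individuals
print([round(f, 2) for _, f in fitness_uniform_select(population)])
```

Because empty groups are simply skipped, the returned set can be smaller than n set , matching the remark above.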
Stochastic Local Search Operators
To investigate appropriate neighborhood structures for composite services, suitable local search operators must be proposed that effectively utilize domain knowledge. We then repeatedly apply these local search operators to SelectedIndiSet to explore their neighboring solutions. Apart from that, to balance the quality of local improvement against computation time, only a random subset of the entire large neighborhood is explored through a stochastic strategy. Based on the permutation-based representation discussed in Sect. 4.3, local search operators can be proposed straightforwardly as "swap" operators. In this paper, we investigate four different swap operators (illustrative sketches of these operators are given below):
1) Constrained One-Point Swap: For a permutation Π = (Π 0 , . . . , Π t , Π t+1 , . . . , Π n−1 ), two service indexes Π a , where 0 ≤ a ≤ t, and Π b , where t + 1 ≤ b ≤ n − 1, are selected and exchanged. The one-point swap local search operator is inspired by [9], which swaps a pair of service indexes in a permutation. In [9], local search exclusively explores the neighborhood based on one selected index of the permutation, so the size of the neighborhood associated with that index is n − 1. However, this can be very computationally expensive, because the number of swaps becomes significant for large n. Besides that, it can be less flexible, as the explored neighborhood is restricted to the one selected index. Herein we propose a more efficient and flexible local search with one-point swap: first, we pre-determine a fixed, relatively small number of neighbors n nb to be produced, which bounds the computation time assigned to local search; second, we randomly produce n nb neighbors by swapping two randomly selected indexes, rather than by swapping n − 1 indexes with one fixed index. We expect that swapping two randomly selected indexes is more effective within a budgeted computation time for making local improvements. Meanwhile, we constrain the two randomly selected indexes so that, in every swap, one must be before | and the other after |, because this excludes swaps that have lower opportunities for local improvement. For example, a neighbor created by swapping a pair of used service indexes has a high chance of decoding into the same DAG-based solution. Figure 4 shows an example of a one-point swap for a selected individual.
2) Constrained Two-Point Swap: For a permutation Π = (Π 0 , . . . , Π t , Π t+1 , . . . , Π n−1 ), four service indexes Π a1 , Π a2 , Π b1 , and Π b2 are selected, where 0 ≤ a 1 ≤ t, 0 ≤ a 2 ≤ t, t + 1 ≤ b 1 ≤ n − 1, t + 1 ≤ b 2 ≤ n − 1, a 1 ≠ a 2 , and b 1 ≠ b 2 . Π a1 and Π b1 are exchanged; likewise, Π a2 and Π b2 are exchanged. Motivated by the one-point swap proposed above, we create the two-point swap operator by combining two constrained one-point swaps into a single operator. Our hypothesis is that the two-point swap can efficiently produce a higher-quality neighbor through one local change, rather than producing two neighbors through a sequence of two constrained one-point local changes. Primarily, given a budgeted number of candidate solutions for local search, a two-point swap operator can perform a more efficient local search for finding high-quality solutions. Figure 5 shows an example of a two-point swap for a selected individual and a produced neighbor.
3) Constrained One-Block Swap: This operator is based on the concept of a block, i.e., consecutive points (service indexes) in a permutation. In this swap, two blocks are built up from two randomly generated starting points, Π a before | and Π b after |, respectively. After the swap, produced neighbors inherit two parts of the original permutation. Figure 5 shows an example of a constrained one-block swap for a permutation, where one block runs from the start position StartPos1 to the last position of the used services, and the other block runs from the start position StartPos2 to the last index.
4) Constrained Layer-Based One-Point Swap: This operator restricts the constrained one-point swap so that any produced neighbor preserves the discovered layer order of the used services (see Sect. 5.4 for an illustration).
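Minimal sketches of these operators follow, assuming a permutation stored as a Python list with `t` marking the last used-service position (the | separator). The block arrangement in `one_block_swap` is one plausible reading of the description above, and the `layer_of` lookup used in the admissibility test for the fourth operator is hypothetical.

```python
import random

def one_point_swap(perm, t):
    # Exchange one used index (position <= t) with one unused index (> t).
    a = random.randrange(0, t + 1)
    b = random.randrange(t + 1, len(perm))
    nb = perm[:]
    nb[a], nb[b] = nb[b], nb[a]
    return nb

def two_point_swap(perm, t):
    # Two constrained one-point swaps combined into a single move.
    a1, a2 = random.sample(range(0, t + 1), 2)
    b1, b2 = random.sample(range(t + 1, len(perm)), 2)
    nb = perm[:]
    nb[a1], nb[b1] = nb[b1], nb[a1]
    nb[a2], nb[b2] = nb[b2], nb[a2]
    return nb

def one_block_swap(perm, t):
    # Swap the block [p1..t] of used indexes with the block [p2..end] of
    # unused indexes; both remaining prefixes of the original are inherited.
    p1 = random.randrange(0, t + 1)
    p2 = random.randrange(t + 1, len(perm))
    return perm[:p1] + perm[p2:] + perm[t + 1:p2] + perm[p1:t + 1]

def keeps_layer_order(used, layer_of):
    # Admissibility test for the layer-based one-point swap: the used part
    # must stay in non-decreasing layer order (layer_of is hypothetical).
    return all(layer_of[used[i]] <= layer_of[used[i + 1]]
               for i in range(len(used) - 1))

perm, t = [1, 2, 3, 4, 0], 2     # the running example [1, 2, 3 | 4, 0]
print(one_point_swap(perm, t), two_point_swap(perm, t), one_block_swap(perm, t))
```

All three moves rearrange the same n elements, so every produced neighbor is again a valid permutation and can be decoded by the forward graph building technique.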
EXPERIMENTS
We conduct experiments to evaluate the performance of our memetic EDA-based approaches, i.e., memetic EDA with constrained one-point swap (henceforth referred to as MEEDA-OP), memetic EDA with constrained two-point swap (henceforth referred to as MEEDA-TP), memetic EDA with constrained layer-based one-point swap (henceforth referred to as MEEDA-LOP), and memetic EDA with constrained one-block swap (henceforth referred to as MEEDA-OB). These memetic EDA-based approaches are compared to state-of-the-art EC-based methods that were recently proposed to solve the same or similar problems: a PSO-based approach [10] (henceforth referred to as PSO), a GA-based approach (henceforth referred to as GA), a memetic GA-based approach [9] (henceforth referred to as MEGA), and an EDA-based approach [12] (henceforth referred to as NHM-EDA). Two benchmarks are created by extending WSC-08 [1] and WSC-09 [2] with QoS attributes generated from the QoS distribution of QWS [30]. These two benchmarks have already been broadly employed in service composition [5], [10], [13] for experimental evaluations. Moreover, the number of web services in the service repository is doubled to form a new benchmark (with a much bigger search space) to demonstrate that memetic EDA can maintain high performance on our problem with significantly larger sizes. We also make this benchmark available to the public. Particularly, WSC08 contains 8 composition tasks with increasing sizes of the service repository, i.e., 316, 1116, 1216, 2082, 2180, 4396, 8226, and 16238 services, and WSC09 contains 5 composition tasks with increasing sizes of the service repository, i.e., 1144, 8258, 16276, 16602, and 30422 services, respectively. The population size is set to 200, the number of generations equals 100, and b ratio is 0.0002. The size of SelectedIndiSet is 6, and the number of neighbors of each individual in SelectedIndiSet explored by the local search operators, n nb , is 20. For all the competing methods, we strictly follow the settings in their respective papers. In GA, the crossover rate is set to 0.95 and the mutation rate to 0.05. In MEGA, the crossover rate is set to 0.95 and the local search rate to 0.05. We run the experiment with 30 independent repetitions. Following existing works [10], [11], [12], the weights of the fitness function Eq. (5) are simply configured to balance QoSM and QoS. In particular, we set both w 1 and w 2 to 0.25, and w 3 , w 4 , w 5 and w 6 all to 0.125. Additional experiments have been conducted and show that all our methods work consistently well under different weight settings. The parameter p of type link is determined by the preference of users and is recommended to be 0.75 for the plugin match according to [39].
Comparison of the Fitness
We employ the independent-samples T-test with a significance level of 5% to verify the observed differences in performance concerning fitness value and execution time. In particular, we use pairwise comparisons to compare all competing approaches; the top performers are then identified, and their related values are highlighted in green in Table 2. Note that those methods that consistently find the best-known solutions over 30 runs with 0 standard deviation are also marked as top performers. The pairwise comparison results for fitness are summarized in Table 3, where win/draw/loss shows the scores of one method compared to all the others, i.e., the frequency with which this method outperforms, equals, or is outperformed by the competing method. This testing and comparison methodology is also used in Sect. 5.2. One objective of the experiments is to evaluate the effectiveness of the proposed memetic EDA-based approaches compared to NHM-EDA [12], PSO [10], GA and MEGA [9]. Table 2 shows the mean fitness values and the standard deviations over 30 repetitions. The pairwise comparison results for the fitness values are summarized in Table 3. From Table 2 and Table 3, we observe some interesting behaviors of these approaches in finding high-quality solutions. Based on these observations, we make some analyses and draw possible conclusions below: Firstly, regarding the two baseline methods, PSO and GA: all EDA-based approaches (with and without local search) consistently outperform PSO, but only the memetic EDA-based approaches outperform GA. MEGA [9] achieved results very comparable to all our memetic EDA-based methods; however, MEEDA-LOP achieves the best performance.
As shown in Table 3, MEEDA-LOP only loses 1 out of 13 composition tasks over WSC-08 and WSC-09. Furthermore, MEEDA-LOP has achieved extremely stable performance, with 0 standard deviation in most runs. In addition, MEEDA-OP, MEEDA-TP, MEEDA-OB, and MEEDA-LOP significantly outperform NHM-EDA [12]. This observation corresponds well with our expectation that the exploitation ability of EDA can be enhanced by hybridizing it with local search. We can see that all memetic EDA-based approaches reach a better balance of exploration and exploitation. Furthermore, among the four memetic EDA-based approaches, MEEDA-OB is the worst, while MEEDA-OP and MEEDA-TP are very comparable to each other. This observation demonstrates that neighborhoods based on blocks are less suitable for service composition problems, because swapping blocks can potentially ruin the learned distribution of promising solutions. Lastly, MEEDA-LOP is the best performer. This observation corresponds well with our assumption that using layer-based information can further improve the effectiveness of the one-point swap. MEEDA-LOP applies the local search operator to a much smaller, but more useful, set of services than that considered in MEEDA-OP. In summary, we sort all the competing approaches by effectiveness in descending order: MEEDA-LOP > MEGA > MEEDA-TP = MEEDA-OP > MEEDA-OB > GA > EDA > PSO.
Comparison of the Execution Time
The second objective of our experiment is to study the efficiency of all the proposed EDA-based approaches compared to EDA [12], PSO [10], GA and MEGA [9]. Table 4 shows the mean execution times and the standard deviations over 30 repetitions. The pairwise comparison results for the execution time are summarized in Table 5. From the two tables above, we make some analyses and draw possible conclusions about the execution time of these approaches as follows: First, MEEDA-LOP consistently requires less execution time than the other approaches, which can be observed from the highlighted execution times in Table 4. It is a remarkable observation that the local search in MEEDA-LOP, based on layers and the constrained one-point swap, requires less computation time than that of MEEDA-OP. This significant improvement is mainly due to two techniques in MEEDA-LOP. The first is the archive technique, which reserves half a population of elite individuals for the next generation and thus significantly reduces the overall computation time for decoding and evaluating the reserved individuals in later generations. The second is the layer-based information, which improves the effectiveness of the one-point swap, resulting in a more accurate and reliable learned NHM. Therefore, useful services are more likely to be put at the front of the permutation, which accelerates the decoding process. Second, in contrast, MEGA requires the highest execution time, because every candidate solution in MEGA has an opportunity for local search under the random selection scheme, and MEGA also exhaustively searches the whole neighborhood of one selected position. These results confirm that the combination of the random selection scheme and the exhaustive local search strategy in MEGA is less effective and more time-consuming than our statistics scheme and stochastic local search operators. Lastly, MEEDA-OB is also very computation-intensive among the memetic EDA-based approaches.
This is because the one-block swap hinders the learning of an accurate distribution: its local improvements are less effective, so the services required for the composition are less likely to be placed at the front of a service queue. Building the blocks also consumes extra time in MEEDA-OB. In summary, we sort all the competing approaches by execution time, from shortest to longest: MEEDA-LOP, MEEDA-OP, MEEDA-TP, PSO, GA, MEEDA-OB, MEGA.
Comparison of the Convergence Rate
The third objective of our experiment is to study the convergence rate of all the approaches over 30 independent runs. We use WSC08-3 and WSC09-2 as two examples to illustrate the performance of all the compared methods. Because MEGA requires much more execution time, we set different execution time scales for the two tasks WSC08-3 and WSC09-2 to more easily observe the differences. First, we observe a significant increase in the fitness value towards the optimum for all the approaches excluding MEGA. These approaches eventually reach different levels of plateaus. Given the same budget of execution time, all memetic EDA-based methods converge significantly faster and require much less time than the baseline PSO over all the composition tasks. Second, MEGA suffers from a scalability issue when the size of the service repository is doubled in our new benchmark. The complexity of its local search strongly depends on n, i.e., the dimension of each permutation. Therefore, MEGA does not converge at all when it is assigned the same amount of execution time required by the other approaches. Lastly, MEEDA-LOP is consistently ranked as a top performer among all the competing methods. The convergence rates of MEEDA-OP and MEEDA-TP present very similar patterns. MEEDA-OB converges more slowly than the others, but it eventually reaches results comparable to MEEDA-OP and MEEDA-TP.
Comparison of local search operators
We investigate how often the mean fitness of the neighbors is better than the fitness of their original permutation in MEEDA-OP, MEEDA-TP, MEEDA-LOP, and MEEDA-OB, to demonstrate which swap-based local search operator is more likely to produce better solutions. Herein we use the composition task WSC08-03 as an example: Fig. 9 shows the percentage of better neighbors produced by our four memetic EDA-based approaches along the generations over 30 runs for WSC08-03. The results show that MEEDA-OB and MEEDA-TP are less likely to produce better solutions, while MEEDA-OP and MEEDA-LOP are very comparable to each other, although slightly higher percentages of better mean fitness are achieved by MEEDA-LOP. We further analyze the differences between the layer-based constrained one-point swap and the constrained one-point swap operator using a permutation in Figure 10. Figure 10 exhibits an example of two neighbors produced from a permutation using constrained one-point swaps without considering layer information. In the example, one identical solution can be decoded from both the given permutation and the two produced neighbors, resulting in no local exploitation. In contrast, the discussed swaps are not admissible for the layer-based constrained one-point swap, where any produced neighbor must strictly follow the layer order on the left-hand side of the permutation. In the example, the given permutation is highlighted with two layers (i.e., L 1 and L 2 ) in ascending order. Particularly, S 1 , S 2 ∈ L 1 and S 3 ∈ L 2 .
When the constrained one-point swap is performed, S 3 in the given permutation is exchanged with S 4 or S 0 in the produced neighbor 1 and neighbor 2, respectively. However, L 2 is destroyed in the produced neighbors, because S 4 ∈ L 1 and S 0 ∈ L 1 . In contrast, if the layer-based one-point swap is applied to the given permutation, these two neighbors are prevented from being produced. In general, all produced neighbors must keep all the ordered layers of the given permutation.
CONCLUSION
In this paper, we propose effective and efficient memetic EDA-based approaches to fully automated service composition. The success of this memetic approach principally relies on the local search, where several ideas are jointly employed. In particular, we proposed several neighborhood structures defined by different local search operators, which integrate naturally with our permutation-based representation. Besides that, a uniform distribution scheme and a stochastic strategy are jointly utilized for selecting individuals and applying local search. The experiments show that one of our proposed approaches, MEEDA-LOP, achieves significantly better effectiveness and efficiency compared to state-of-the-art EC-based approaches and the other memetic EDA-based approaches proposed in this paper. Future work can investigate variable neighborhoods that combine more than one local search operator in one evolutionary process, and investigate memetic EDA for handling multi-objective service composition problems.
9,373
1906.07900
2949728259
Comprehensive quality-aware automated semantic web service composition is an NP-hard problem, where service composition workflows are unknown, and comprehensive quality, i.e., Quality of Service (QoS) and Quality of Semantic Matchmaking (QoSM), is simultaneously optimized. The objective of this problem is to find a solution with optimized or near-optimized overall QoS and QoSM within polynomial time over a service request. In this paper, we propose novel memetic EDA-based approaches to tackle this problem. The proposed method investigates the effectiveness of several neighborhood structures of composite services by proposing domain-dependent local search operators. Apart from that, a joint strategy of the local search procedure is proposed to integrate with a modified EDA to reduce the overall computation time of our memetic approach. To better demonstrate the effectiveness and scalability of our approach, we create a more challenging, augmented version of the service composition benchmark based on WSC-08 and WSC-09. Experimental results on this benchmark show that one of our proposed memetic EDA-based approaches (i.e., MEEDA-LOP) significantly outperforms existing state-of-the-art algorithms.
In the second category, service composition solutions are represented as permutations, which are then decoded into solutions represented as DAGs @cite_20 @cite_37 @cite_24 . PSO is utilized to find an optimized queue of services (i.e., a permutation), which can be decoded into a corresponding DAG-based composite service @cite_24 . @cite_20 extends @cite_24 to jointly optimize QoSM and QoS, where a weighted DAG is decoded, with edge weights corresponding to the matchmaking quality between services. These two PSO-based approaches rely on PSO to determine the weights of each particle's position (each corresponding to a service) to form an ordered service queue. Optimizing QoSM and QoS simultaneously is more challenging than optimizing QoS only, because the search space is significantly larger, and it demands more effective and efficient searching techniques. Apart from that, it has been suggested that utilizing an indirect representation often contributes to higher performance compared to a direct representation @cite_37 . This is because the search space is not unwittingly restricted by unconstrained random initialization of solutions and operators.
{ "abstract": [ "Automated Web service composition, which refers to the creation of a complex application from pre-existing building blocks (Web services), has been an active research topic in the past years. The advantage of having an automated composition system is that it allows users to create new applications simply by providing the required parameters, instead of having to manually assemble the services. Existing approaches to automated composition rely on planning techniques or evolutionary computing (EC) to modify and optimise composition solutions directly in their tree graph form, a complex process that requires several constraints to be considered before each alteration. To improve the search efficiency and simplify the checking of constraints, this work proposes an indirect Particle Swarm Optimisation (PSO)-based approach. The key idea of the indirect approach is to optimise a service queue which is then decoded into a composition solution by using a planning algorithm. This approach is compared to a previously proposed graph-based direct representation method, and experiment results show that the indirect representation can lead to a greater (or equivalent) quality while requiring a lower execution time. The analysis conducted shows that this is due to the design of the algorithms used for building and evaluating the fitness of solutions.", "Web services have become increasingly popular in recent years, and they are especially suitable to the process of Web service composition, which is when several services are combined to create an application that accomplishes a more complex task. In recent years, significant research efforts have been made on developing approaches for performing Quality of Service -aware Web service composition. Evolutionary computing (EC) techniques have been widely used for solving this problem, since they allow for the quality of compositions to be optimised, meanwhile also ensuring that the solutions produced have the required functionality. Existing EC-based composition approaches perform constrained optimisation to produce solutions that meet those requirements, however these constraints may hinder the effectiveness of the search. To address this issue, a novel framework based on an indirect representation is proposed in this work. The core idea is to first generate candidate service compositions encoded as sequences of services. Then, a decoding scheme is developed to transform any sequence of services into a corresponding feasible service composition. Given a service sequence, the decoding scheme builds the workflow from scratch by iteratively adding the services to proper positions of the workflow in the order of the sequence. This is beneficial because it allows the optimisation to be carried out in an unconstrained way, later enforcing functionality constraints during the decoding process. A number of encoding methods and corresponding search operators, including the PSO, GA, and GP-based methods, are proposed and tested, with results showing that the quality of the solutions produced by the proposed indirect approach is higher than that of a baseline direct representation-based approach for twelve out of the thirteen datasets considered. In particular, the method using the variable-length sequence representation has the most efficient execution time, while the fixed-length sequence produces the highest quality solutions.", "Web service composition has been a prevailing research direction in recent years. 
There are two major challenges faced by researchers, semantic matchmaking and Quality of Service (QoS) optimisation. Semantic matchmaking aims to discover interoperable web services that can interact with each other by their resources described semantically. QoS optimisation aims to optimise the non-functional requirements of service users, such as minimum cost and maximum reliability. To meet the requirements of service users, both semantic matchmaking quality and QoS should be considered simultaneously. Most existing works on web service composition, however, focus only on one of these two aspects. Therefore, we propose a comprehensive quality model that takes both semantic matchmaking quality and QoS into account with the aim of achieving a more desirable balance of both sides. Further, we develop a PSO-based service composition approach with explicit support for the proposed comprehensive quality model. We also conduct experiments to explore the effectiveness of our PSO-based approach and the desirable balance achieved by using our comprehensive quality model." ], "cite_N": [ "@cite_24", "@cite_37", "@cite_20" ], "mid": [ "2346194969", "2605603844", "2724655564" ] }
Memetic EDA-Based Approaches to Comprehensive Quality-Aware Automated Semantic Web Service Composition
S ERVICE Oriented Architecture (SOA) has been contributing to the reuse of software components [3]. Web services are one of the most successful implementations of SOA, providing services as "modular, self-describing, self-contained applications that are available on the Internet" [4]. Often, users' requirements cannot be satisfied by a single existing web service. Web service composition aims to loosely couple a set of web services to provide a value-added composite service (i.e., a solution of service composition) that accommodates users' complex requirements. These requirements cover functional (i.e., quality of semantic matchmaking, QoSM) and non-functional (i.e., quality of service, QoS) aspects, which give rise to semantic web service composition and QoS-aware web service composition, aiming to optimize the QoSM and QoS of service composition solutions respectively. Many researchers have been working on these optimization problems in web service composition [5], [6], [7], [8], [9], [10], [11], [12], [13]. Existing works that study the above problems are classified as semi-automated and fully-automated web service composition [14], under two different assumptions. The first assumes that users know an abstract service composition workflow, and all the composite services produced by the composition system must strictly obey the given workflow. However, this assumption is not always valid, since the workflow may not be provided or may not even be known by users. The second group of research works does not rely on any existing workflow. Instead, a composite service is constructed from scratch by selecting and connecting multiple atomic services obtained from the service repository [14]. Therefore, this construction process can end up with different workflows. Compared to semi-automated web service composition, fully-automated web service composition clearly opens new opportunities to further improve QoS and QoSM, owing to the different workflows that can be automatically constructed. Nevertheless, the difficulty of the composition task is also increased. AI planning and Evolutionary Computation (EC) are two of the most widely used techniques for semi-automated and fully-automated web service composition [5], [7], [10], [13], [15], [16], [17]. AI planning techniques focus on creating valid composite services, where functional correctness is always ensured by gradually constructed workflows. However, these approaches do not optimize the QoS or QoSM of the produced solutions [18]. EC techniques have been widely used to solve service composition problems that aim to optimize either one or both of QoSM and QoS, and are potentially more useful in practice as they can efficiently find "good enough" composite solutions. Important approaches [5], [6], [7], [8], [9], [10], [11], [12], [13] based on Genetic Algorithms (GA) [19], Genetic Programming (GP) [20], Particle Swarm Optimization (PSO) [21], and the Estimation of Distribution Algorithm (EDA) [22] have been widely investigated in the literature. To effectively search for good solutions, EC techniques often employ useful information distilled from promising solutions to produce new offspring. This information can be used either implicitly or explicitly. Conventional EC techniques, such as GA and GP, fall into the implicit camp by producing new solutions through recombining solutions evolved previously [5], [7], [13].
In contrast, one EC technique that has achieved prominent success through the explicit use of information is the Estimation of Distribution Algorithm (EDA) [23]. In EDA, information about promising solutions evolved previously is captured compactly in the form of probability models. EDA has been successfully utilized for semi-automated service composition [6], [24], but those approaches cannot support fully automated service composition. We recently proposed a new EDA-based approach for fully automated web service composition through reliable and accurate learning of a probability model that encodes the distribution of promising solutions [12], i.e., a distribution model. EDA stresses global exploration rather than local exploitation [25]. This is because the distribution model aims to explore more promising regions in the entire solution space, without attempting to improve the quality of any specific solutions evolved previously. However, the optimization performance can often be improved directly through local modifications to promising solutions. By restricting the target region for local search and avoiding most of the randomness involved in sampling directly from the distribution model, this can potentially expedite the search for optimal solutions. Therefore, to improve its competency in finding more effective solutions, an idea is to enhance EDA with local search, namely, memetic EDA. Memetic EDA has been successfully applied to many optimization problems with local search operators [26], [25], such as arc routing and assembly flow-shop scheduling problems. On the one hand, although memetic EDA has been successfully applied to many applications, those memetic approaches are inappropriate for web service composition, as their local search operators are only applicable to domain-specific or problem-specific solution representations [25], [27]. On the other hand, despite the recent success in EDA-based service composition, the effectiveness of this approach can still be enhanced by introducing memetic EDA. Several challenges remain to be addressed in developing a memetic EDA approach to service composition, as follows: First, a composite service is commonly represented as a DAG, and exploring the neighborhood of a DAG, especially a large DAG, is computationally infeasible [28]. Note that the discussed neighborhood is structured by local search operators on the search space, where neighbor solutions can be generated iteratively from a given candidate solution. Therefore, researchers [9], [29] often indirectly define the neighborhood of a composite service represented in the form of a permutation, which can be converted to a DAG through a separate decoding process. Often, so-called "swap" operators produce neighbors by swapping two random elements in a permutation. Consequently, a neighborhood is defined by the collection of permutations obtainable through a "swap" applied to any given permutation. However, such a neighborhood often contains a large proportion of neighboring permutations of inferior quality. For effective local search, the neighborhood must be refined to exclude most of the clearly unwise swapping choices by exploiting domain-specific knowledge. Second, it is very challenging to determine which candidate solutions should be selected for local search in memetic algorithms, as the selection method has a significant impact on the effectiveness and efficiency of memetic EDA.
Should an equal chance be given to all the candidate solutions, or should only elite solutions be considered for local search? Moreover, what are elite solutions, and how many of them should be modified locally? The answers to these questions often vary with many factors, such as the EC algorithm and the domain problem. Therefore, it is challenging to determine one effective selection strategy for the memetic EDA-based approach to service composition. Third, a traditional strategy that exhaustively explores the whole neighboring space of composite services can incur high computation cost without any guarantee of improving solution quality. For example, for the permutation-based representation, if a simple swap operator is utilized for exploring the neighborhood, then the dimension of the permutation determines the computational complexity. In the context of service composition, the dimension of such a permutation is usually equivalent to the size of the service repository. As the neighborhood size becomes extremely large when many services are to be considered during the service composition process, this strategy is infeasible for practical use. Fourth, in EDA, although a probability distribution model is adjusted to trace promising searching areas throughout generations, a proportion of promising solutions (i.e., permutations) is likely to be sampled repeatedly while the distribution model converges along the generations. Furthermore, these repeatedly sampled solutions are often favored by users, since they are candidate solutions of high quality. In the EDA-based approach to service composition, sampled permutation-based solutions are very costly, as they require repeated computation time for decoding and evaluation. To address the challenges above, we propose a memetic EDA-based approach that achieves substantially higher performance in effectiveness and efficiency. This outstanding performance is observed by comparing our approach with recently proposed web service composition approaches, such as an EDA-based approach [12], a PSO-based approach [10], and GA- and memetic GA-based approaches [9]. In particular, an empirical, experimental study on the effectiveness of different neighborhoods structured by different local search operators is conducted. The contributions of this paper are listed below, where the first contribution addresses the first challenge discussed previously, and the second contribution addresses the remaining challenges. 1) To perform an effective local search on composite services, we first propose several neighborhood structures for candidate solutions. These neighborhoods are created by developing several novel domain-dependent local search operators, based on constructing and swapping effective building blocks of composite services for local improvements. Subsequently, we develop an effective memetic EDA-based approach based on our previous work [12], with natural integration of those local search operators. 2) To significantly reduce the computation time of our proposed memetic EDA-based approach, an integrated local search procedure is proposed together with a modified EDA based on the standard EDA. To decrease the computation wasted on repetitive sampling and evaluations, we utilize an archiving technique to avoid sampling solutions repetitively. This technique is prevalent and straightforward to use. Besides that, the local search procedure employs an effective joint strategy for efficiently finding better solutions.
This strategy jointly considers a fitness uniform distribution scheme and stochastic local search with our proposed local search operators. 3) To demonstrate the performance of our memetic EDA-based approach, we create a more challenging, augmented version of the service composition benchmark based on WSC-08 [1] and WSC-09 [2]. In particular, the new benchmark inherits the functionalities provided by the services in the benchmark datasets WSC-08 and WSC-09 and the QoS attributes of the web services in the benchmark dataset QWS [30]. Moreover, the number of web services in the service repository is doubled in the new benchmark (with a much bigger search space) to demonstrate that memetic EDA can maintain high performance on problems of significantly larger sizes. This benchmark has been made freely available online, as have the codes of our memetic EDA-based approach 1 . We experimentally compare our memetic EDA-based approach with some state-of-the-art methods that have been recently proposed to solve the same or a similar service composition problem using the new benchmark. Our experimental results illustrate that our method achieves cutting-edge performance.
Literature on EC-based fully automated web service composition
Automated web service composition aims to loosely couple web services to fulfill a service request, without strictly obeying a pre-given abstract workflow. Instead, composition workflows are gradually built up while their component services are selected. Existing works in fully automated web service composition can be categorized into two approaches: direct approaches and indirect approaches [31]. The direct approaches represent composition solutions explicitly in a representation that displays the actual execution flows of composite services, while the indirect approaches often represent composite services implicitly as permutations, which require a decoding process to build up the actual execution workflows. 1. The two augmented benchmarks for automated web service composition are available from https://github.com/chenwangnida/Dataset, and the codes of our memetic EDA-based approach are available from https://github.com/chenwangnida/MENHBSA4SWSC. In the first category, tree- and graph-based representations are widely used to represent service composition solutions directly. A graph-based evolutionary process is introduced in [32] to directly evolve DAG-based service composition solutions, applying domain-dependent crossover and mutation operators with repairing methods. GP is utilized for searching optimal solutions represented as trees. [7] proposes a context-free grammar for randomly initializing tree-based service composition solutions with correct structures of composite services. In contrast, [13] randomly initializes tree-based service composition solutions completely, but develops adaptive crossover and mutation rates according to the diversity of the population to accelerate convergence. Both approaches [7], [13] utilize a penalization method for filtering out incorrect solutions while evaluating the QoS of candidate solutions. To achieve higher performance, [5], [8] utilize a greedy search algorithm for creating correct DAG-based composition workflows, which are mapped to tree-based ones with different methods. During the evolutionary process, the correctness of the solutions is ensured by domain-dependent crossover and mutation.
However, the mapped tree-based representations suffer from a scalability issue, since many replicas of subtrees are produced by the mapping methods. To overcome this issue, [11] proposes a tree-like representation, in which the replicas of subtrees are handled by removing them and inserting edges from the root of each replica to the root of the retained copy. In the second category, service composition solutions are represented as permutations, which are then decoded into solutions represented as DAGs [10], [31], [33]. PSO is utilized to find an optimized queue of services (i.e., a permutation), which can be decoded into a corresponding DAG-based composite service [33]. [10] extends [33] to jointly optimize QoSM and QoS, where a weighted DAG is decoded, with edge weights corresponding to the matchmaking quality between services. These two PSO-based approaches rely on PSO to determine the weights of each particle's position (each corresponding to a service) to form an ordered service queue. Optimizing QoSM and QoS simultaneously is more challenging than optimizing QoS only, because the search space is significantly larger, and it demands more effective and efficient searching techniques. Apart from that, it has been suggested that utilizing an indirect representation often contributes to higher performance compared to a direct representation [31]. This is because the search space is not unwittingly restricted by unconstrained random initialization of solutions and operators. In summary, EC techniques have been showing their promise in fully automated web service composition. Moreover, the indirect approaches have been indicated to be more effective. Therefore, EC techniques with indirect representations are the techniques this paper focuses on for solving the service composition problem.
Literature on memetic EC-based approaches and EDA
Memetic algorithms have drawn growing attention from researchers in recent years and achieved significant success in many applications [34]. By introducing local search, the performance of EC techniques can be improved. In the domain of service composition, to overcome the premature convergence of GP, Tabu search is combined with GP to solve QoS-aware data-intensive web service composition [35]. [9] proposed an indirect memetic approach for QoS-aware web service composition, where a domain-dependent crossover operator is proposed to produce candidate solutions. Besides that, an exhaustive local search is applied to composite solutions represented as permutations. However, the produced neighbors are likely to be decoded into the same composite solution. Therefore, the effectiveness of this local search operator demands further improvement. Recently, EDA has been used as a technique to tackle permutation-based optimization problems [23]. In particular, a distribution model is learned iteratively from each population. Subsequently, new offspring are generated based on the learned model. Moreover, domain-dependent local search operators are often introduced to enhance the performance of EDA. For example, a probability matrix related to the job priority permutation of a solution is learned in the EDA-based flow-shop scheduling problem, and different job-based local search operators were proposed to enhance the exploitation ability of EDA [25]. An Edge Histogram Matrix is applied to uncertain capacitated arc routing problems and is learned from solutions represented by a set of routes [27].
To make local improvements, different move operators, such as single insertion and swap, are also proposed. The use of EDA has only been investigated for semi-automated web service composition [6], [24], [36]. However, we recently proposed an EDA-based approach for fully automated web service composition, where candidate solutions are represented as permutations over a given service repository. The success of the proposed method strongly depends on the distribution model and the way this model is learned. We employ a Node Histogram Matrix (NHM) to learn the distribution of promising solutions in one population, and the Node Histogram-Based Sampling Algorithm (NHBSA) [22] is employed to produce candidate solutions. Although we have carried out an initial study of EDA-based fully automated service composition, there remains an opportunity to improve its performance further. EDA is good at global exploration, which motivates introducing local search operators into EDA to enhance its capability for exploitation. In summary, on the one hand, memetic EDA-based approaches have been investigated for many problems other than fully automated service composition, achieving promising results. On the other hand, notwithstanding the success achieved in our initial investigation of EDA-based fully automated service composition, the performance of this EDA-based approach can be further improved by combining it with local search.
SEMANTIC WEB SERVICE COMPOSITION PROBLEM
A semantic web service (service, for short) is considered as a tuple S = (I S , O S , QoS S ), where I S is a set of service inputs that are consumed by S, O S is a set of service outputs that are produced by S, and QoS S = {t S , c S , r S , a S } is a set of non-functional attributes of S. The inputs in I S and outputs in O S are parameters modeled through concepts in a domain-specific ontology O. The attributes t S , c S , r S , a S refer to the response time, cost, reliability, and availability of service S, respectively, which are four commonly used QoS attributes [37]. A service repository SR is a finite collection of services supported by a common ontology O. A composition task (also called a service request) over a given SR is a tuple T = (I T , O T ), where I T is a set of task inputs and O T is a set of task outputs. The inputs in I T and outputs in O T are parameters that are semantically described by concepts in the ontology O. Two special atomic services Start = (∅, I T , ∅) and End = (O T , ∅, ∅) are always included in SR to account for the input and output of a given composition task T . We use matchmaking types to describe the level of a match between outputs and inputs [38]. For concepts a, b in O the matchmaking returns exact if a and b are equivalent (a ≡ b), plugin if a is a sub-concept of b (a ⊑ b), subsume if a is a super-concept of b (a ⊒ b), and fail if none of the previous matchmaking types is returned. In this paper we are only interested in exact and plugin matches for robust compositions, see [39]. As argued in [39], plugin matches are less preferable than exact matches due to the overheads associated with data processing. For plugin matches, the semantic similarity of concepts is suggested to be considered when comparing different plugin matches. A robust causal link [40] is a link between two matched services S and S′, denoted as S → S′, if an output a (a ∈ O S ) of S serves as the input b (b ∈ I S′ ) of S′, satisfying either a ≡ b or a ⊑ b.
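As a toy illustration of these definitions, the snippet below models services as input/output concept sets and implements the four matchmaking types over a hand-made subsumption table standing in for the ontology O; all concept names are invented for this sketch.

```python
from dataclasses import dataclass

# Hand-made sub-concept table standing in for the ontology O:
# (a, b) means a is a sub-concept of b.
SUB = {('Cat', 'Animal'), ('Dog', 'Animal')}

@dataclass(frozen=True)
class Service:
    inputs: frozenset
    outputs: frozenset
    qos: tuple = (0.0, 0.0, 1.0, 1.0)   # (t, c, r, a)

def matchmaking(a, b):
    # Level of match between an output concept a and an input concept b.
    if a == b:
        return 'exact'
    if (a, b) in SUB:
        return 'plugin'
    if (b, a) in SUB:
        return 'subsume'
    return 'fail'

s1 = Service(frozenset({'Query'}), frozenset({'Cat'}))
s2 = Service(frozenset({'Animal'}), frozenset({'Report'}))
# 'Cat' plugs into 'Animal', so s1 -> s2 forms a robust causal link.
print(matchmaking('Cat', 'Animal'))   # plugin
```

Only the 'exact' and 'plugin' outcomes contribute to robust causal links; 'subsume' and 'fail' matches are rejected during composition.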
For concepts a, b in O, the semantic similarity sim(a, b) is calculated based on the edge counting method in a taxonomy like WordNet [41]. Advantages of this method are simple calculation and good semantic measurement [41]. Therefore, the matchmaking type and semantic similarity of a robust causal link are defined as follows:

$$type_{link} = \begin{cases} 1 & \text{if } a \equiv b \text{ (exact match)} \\ p & \text{if } a \sqsubseteq b \text{ (plugin match)} \end{cases} \tag{1}$$

$$sim_{link} = sim(a, b) = \frac{2N_c}{N_a + N_b} \tag{2}$$

with a suitable parameter p, 0 < p < 1, and with N_a, N_b and N_c, which measure the distances from concept a, concept b, and the closest common ancestor c of a and b to the top concept of the ontology O, respectively. However, if more than one pair of matched output and input exists from service S to service S′, type_link and sim_link take on their average values. The QoSM of a composite service is obtained by aggregating over all m robust causal links as follows:

$$MT = \prod_{j=1}^{m} type_{link_j} \tag{3}$$

$$SIM = \frac{1}{m} \sum_{j=1}^{m} sim_{link_j} \tag{4}$$

Formal expressions as in [42] are used to represent service compositions. The constructors •, ∥, + and * are used to denote sequential composition, parallel composition, choice, and iteration, respectively. The set of composite service expressions is the smallest collection SC that contains all atomic services and that is closed under sequential composition, parallel composition, choice, and iteration. That is, whenever C 0 , C 1 , . . . , C d are in SC then •(C 1 , . . . , C d ), ∥(C 1 , . . . , C d ), +(C 1 , . . . , C d ), and *C 0 are in SC, too. Let C be a composite service expression. If C denotes an atomic service S then its QoS is given by QoS S . Otherwise the QoS of C can be obtained inductively as summarized in Table 1. Herein, p 1 , . . . , p d with \sum_{k=1}^{d} p_k = 1 denote the probabilities of the different options of the choice +, while ℓ denotes the average number of iterations.

Table 1: QoS aggregation rules for a composite service expression C.

C | r_C | a_C | ct_C | t_C
•(C_1, ..., C_d) | \prod_{k=1}^{d} r_{C_k} | \prod_{k=1}^{d} a_{C_k} | \sum_{k=1}^{d} ct_{C_k} | \sum_{k=1}^{d} t_{C_k}
∥(C_1, ..., C_d) | \prod_{k=1}^{d} r_{C_k} | \prod_{k=1}^{d} a_{C_k} | \sum_{k=1}^{d} ct_{C_k} | MAX\{t_{C_k} \mid k \in \{1, ..., d\}\}
+(C_1, ..., C_d) | \sum_{k=1}^{d} p_k \cdot r_{C_k} | \sum_{k=1}^{d} p_k \cdot a_{C_k} | \sum_{k=1}^{d} p_k \cdot ct_{C_k} | \sum_{k=1}^{d} p_k \cdot t_{C_k}
*C_0 | r_{C_0}^{\ell} | a_{C_0}^{\ell} | \ell \cdot ct_{C_0} | \ell \cdot t_{C_0}

Therefore, the QoS of a service composition solution, i.e., availability (A), reliability (R), execution time (T), and cost (CT), can be obtained by aggregating a_C, r_C, t_C and ct_C as in Table 1. In the presentation of this paper, we mainly focus on two constructors, sequence • and parallel ∥, as in most automated service composition works [5], [8], [10], [11], [32], [33], where service composition solutions are represented as a Directed Acyclic Graph (DAG). We can easily calculate the QoS of a composite service that is represented as a DAG [10] according to Table 1. When multiple quality criteria are involved in decision making, the fitness of a solution is defined as a weighted sum of all individual criteria in Eq. (5), assuming the preference of each quality criterion based on its relative importance is provided by the user [43]:

$$Fitness(C) = w_1 \widehat{MT} + w_2 \widehat{SIM} + w_3 \widehat{A} + w_4 \widehat{R} + w_5 (1 - \widehat{T}) + w_6 (1 - \widehat{CT}) \tag{5}$$

with \sum_{k=1}^{6} w_k = 1. This objective function is defined as a comprehensive quality model for service composition. We can adjust the weights according to the user's preferences.
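Eqs. (1)-(5) can be read as the short computation below. The per-link matchmaking results and the normalized quality values are assumed to be given, and the weights follow the experimental settings reported later in this paper (w1 = w2 = 0.25, the rest 0.125); the multiplicative aggregation of type_link follows Eq. (3) as reconstructed above.

```python
def qosm(links, p=0.75):
    # links: one (match_type, similarity) pair per robust causal link.
    mt = 1.0
    for kind, _ in links:
        mt *= 1.0 if kind == 'exact' else p        # Eqs. (1) and (3)
    sim = sum(s for _, s in links) / len(links)    # Eqs. (2) and (4)
    return mt, sim

def fitness(mt, sim, a, r, t, ct,
            w=(0.25, 0.25, 0.125, 0.125, 0.125, 0.125)):
    # Eq. (5); all arguments are assumed already normalized into [0, 1]
    # by Eq. (6), with time and cost inverted through the (1 - x) terms.
    return (w[0] * mt + w[1] * sim + w[2] * a + w[3] * r
            + w[4] * (1 - t) + w[5] * (1 - ct))

mt, sim = qosm([('exact', 1.0), ('plugin', 0.8)])
print(round(fitness(mt, sim, a=0.9, r=0.95, t=0.2, ct=0.3), 4))
```

Since every term lies in [0, 1] and the weights sum to 1, the fitness itself is bounded in [0, 1], with larger values indicating better composite services.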
\widehat{MT}, \widehat{SIM}, \widehat{A}, \widehat{R}, \widehat{T}, and \widehat{CT} are normalized values calculated within the range from 0 to 1 using Eq. (6). To simplify the presentation we also use the notation (Q_1, Q_2, Q_3, Q_4, Q_5, Q_6) = (MT, SIM, A, R, T, CT). Q_1 and Q_2 have minimum value 0 and maximum value 1. The minimum and maximum values of Q_3, Q_4, Q_5, and Q_6 are calculated across all the relevant services (as determined in Sect. 4.2) in the service repository SR using the greedy search in [5], [8].

$$\widehat{Q}_k = \begin{cases} \frac{Q_k - Q_{k,min}}{Q_{k,max} - Q_{k,min}} & \text{if } k = 1, \ldots, 4 \text{ and } Q_{k,max} - Q_{k,min} \neq 0 \\ \frac{Q_{k,max} - Q_k}{Q_{k,max} - Q_{k,min}} & \text{if } k = 5, 6 \text{ and } Q_{k,max} - Q_{k,min} \neq 0 \\ 1 & \text{otherwise} \end{cases} \tag{6}$$

The goal of comprehensive quality-aware service composition is to find a composite service expression C that maximizes the objective function in Eq. (5). C is hence considered the best possible solution for a given composition task T .
MEMETIC EDA-BASED APPROACH FOR SEMANTIC WEB SERVICE COMPOSITION
In this section, we present our memetic EDA-based approach to fully automated semantic web service composition. We start by giving an overview of our memetic EDA-based approach. Subsequently, we discuss some essential steps of the approach: the first is to discover relevant services and service layers, see details in Sect. 4.2; the second is to introduce the permutation-based representation proposed in our previous work, see details in Sect. 4.3 and 4.4; the third is to introduce an effective joint strategy for the local search procedure, see details in Sect. 4.5. We propose several key ideas that are jointly employed to build our memetic EDA-based approach: 1) A composite service is commonly represented as a DAG, since a DAG can intuitively represent an execution flow of web services and allows efficient computation of QoS. The success of the EDA strategy strongly relies on a proper distribution model for learning the knowledge of promising solutions. Our initial study [12] represents a composite service as a unique queue of services, i.e., a permutation of atomic services, which is mapped from a DAG-based solution. Composite services in this permutation form contribute to a distribution model to be learned and to new permutation-based promising solutions to be sampled. Therefore, a bi-directional map between permutations and DAGs is ensured for learning and evaluation purposes. 2) To significantly decrease the computation time of the local search procedure, it is crucial to select a restricted number of suitable candidate solutions for local search. We assume that candidate solutions with close fitness values are similar in their corresponding DAG forms, so the neighbors produced from these candidate solutions can be the same. Therefore, we group candidate solutions based on their fitness values according to a uniform distribution scheme, which allows candidate solutions with the largest differences, measured by single-objective fitness values, to be effectively chosen for local search. 3) It is not efficient to exhaustively explore all the neighbors as in conventional local search [9]. Instead, stochastically searching the neighboring solutions can significantly reduce the computation cost [26]. Therefore, we introduce a stochastic local search into EDA to make local improvements at a reasonable computation cost. 4) Exploring the neighborhood of a DAG-based composite service is usually computationally infeasible [28]. However, it is straightforward to define the neighborhood on a permutation-based representation by so-called swap operators. To develop effective swap operators, we utilize domain knowledge of service composition to create effective building blocks for these swap operators on permutation-based candidate solutions. These swap operators aim to exploit fitter neighbors effectively; that is, they are likely to make local improvements in the produced neighbors.
An overview of memetic EDA-based algorithm for automatic service composition

An overview of the memetic EDA-based approach is presented in Figure 1, consisting of the following steps: initialize population, evaluate population, select superior subpopulation, learn probability model, sample individuals, and return optimal solutions. We start with discovering all the relevant services that are related to a given composition request T in Step 1. Meanwhile, several service layers are identified (see details in Subsection 4.2). These relevant services are used to randomly generate m composite services represented as permutations Π^g_k, where g = 0 and k = 1, ..., m. In Step 2, these permutation-based individuals are decoded into DAG-based solutions using a forward graph building technique [10], based on which the fitness in Eq. (5) of each individual can be calculated. In Step 3, we merge the current population P^g with an archive. The archive is initially an empty individual set and will be filled with elite composite services later. By applying Breadth-First Search (BFS) to each corresponding DAG-based solution in the merged population, we produce re-encoded permutation-based solutions Π^g_k. Then, the local search procedure is applied to a very small set of these permutations. This small permutation set is selected based on a fitness uniform selection scheme over the current population (see details in Sect. 4.5.1). For each permutation in the small set, a stochastic local search is employed to create new permutations as its neighbors, and the best neighbor is identified based on the fitness value. The permutation in the small set is then replaced with its best neighbor (see details in Subsection 4.5). The top half of the best-performing solutions in P^g are reserved according to their fitness values and put into the archive as elite solutions. In Step 4, we use these elite solutions in the archive to learn an NHM^g of generation g, which produces offspring for generation g + 1 using NHBSA (see details in Subsection 4.4). Consequently, we go back to Step 2 to evaluate the fitness of the new offspring. Steps 2 to 4 are repeated until the maximum number of generations is reached. Eventually, the best solution found throughout the evolutionary process is returned. In a nutshell, we introduce a permutation-based representation derived from the common DAG-based one. In our proposed algorithm, we always switch back and forth between these two representations for better searching or evaluation purposes. Furthermore, an effective and efficient local search procedure is developed through the use of the selection scheme and the stochastic local search.

Relevant Services and Service Layers

Discovering relevant services and service layers is an initial, but crucial step for our memetic EDA-based approach. We achieve two goals at this initial stage: the first goal is to reduce the size of the service repository SR to keep only those services that are relevant to the service composition task T; the second goal is to identify the service layers of these relevant services. In particular, a group of layers is identified, and each layer contains a set of services that have the same longest distance to Start. We adopt the layer discovering method in [44] to find relevant services and service layers, as illustrated in the following example. Fig.
3 shows an example of discovering relevant services and service layers given a service request T, where five related services (i.e., S_0, S_1, S_2, S_3, and S_4) and two layers (i.e., L_1 and L_2) are found. In L_1, the services S_0, S_1, S_2, and S_4 can be satisfied by the inputs {a, b} of T, and they have the same distance to Start (note that the distance is measured by the number of predecessors), while S_3 in L_2 requires additional inputs from other services and is associated with a longer distance to Start.

A Novel Permutation-Based Representation

Service composition solutions are commonly represented as Directed Acyclic Graphs (DAGs) [5], [8], [10], [11], [32], [33]. Let G = (V, E) be a DAG-based composite solution from Start to End, where nodes correspond to services and edges correspond to robust causal links. Often, V does not contain all services in SR. Many combinatorial optimization problems naturally represent solutions as permutations, which can differ from problem to problem [23]. Here we represent composite services as permutations, and we ensure a bi-directional map between permutations and DAGs. The bi-directional map is crucial for learning the distribution of promising composite solutions, because it is less reliable to learn a distribution from permutations if different permutations are mapped to the same DAG-based composite service. Let Π = (Π_0, ..., Π_t, Π_{t+1}, ..., Π_{n−1}) be a permutation of the elements {0, ..., t, t+1, ..., n−1} such that Π_i ≠ Π_j for all i ≠ j. Particularly, Π_0, ..., Π_t are the service indexes (i.e., id numbers) of the component services in the corresponding G, sorted by the longest distance from Start to each component service of G, while Π_{t+1}, ..., Π_{n−1} are the indexes of the remaining services in SR not utilized by G. We use Π^g_k to denote the k-th (out of m, where m is the population size) service composition solution, and P^g = [Π^g_0, ..., Π^g_k, ..., Π^g_{m−1}] to represent a population of solutions of generation g. An example of producing a permutation-based composite solution is shown as follows. Fig. 3 illustrates the process of producing a permutation-based solution. As an example, take the permutation [4, 1, 2, 3, 0]. This service index queue is decoded into a DAG G^0_0 representing a service composition that satisfies the composition task T. Afterwards, G^0_0 is mapped to a permutation Π^0_0 = [1, 2, 3 | 4, 0]. Herein, each position on the left side of | corresponds to a service discovered by a BFS on G^0_0 from Start; this BFS additionally takes service indexes in ascending order during the search. The right side corresponds to the remaining atomic services in SR that are not in G^0_0. Note that | is displayed only for the courtesy of the reader, rather than being part of the permutation-based representation. Furthermore, we do not permit the encoding [1, 2, 3 | 0, 4], as no information can be extracted from G^0_0 to determine the positions of 0 and 4 in the permutation. A permutation-based population P^g can be created from m permutation-based solutions. Consider m = 6; P^g could be represented as follows:
$$P^g = \begin{pmatrix} sol^g_0 \\ sol^g_1 \\ sol^g_2 \\ sol^g_3 \\ sol^g_4 \\ sol^g_5 \end{pmatrix}$$

Application of node histogram-based sampling

[22] proposed node histogram-based sampling (NHBSA) as a tool for sampling new candidate solutions, commonly represented in the form of permutations. By employing the representation of composite services discussed in Sect. 4.3, we are now capable of applying NHBSA to sample new permutations as candidate composite services. The NHM at generation g, denoted by NHM^g, is an n × n matrix with entries e^g_{i,j} defined as follows:

$$e^g_{i,j} = \sum_{k=0}^{m-1} \delta_{i,j}(sol^g_k) + \varepsilon \qquad (7)$$

$$\delta_{i,j}(sol^g_k) = \begin{cases} 1 & \text{if } I^g_k(S_i) = j \\ 0 & \text{otherwise} \end{cases} \qquad (8)$$

$$\varepsilon = \frac{m}{n-1} \, b_{ratio} \qquad (9)$$

where i, j = 0, 1, ..., n − 1, and b_ratio is a predetermined bias. Roughly speaking, entry e^g_{i,j} counts the number of times that service S_i appears in position j of the service queue over all solutions in population P^g. We pick one element of NHM^g as an example to demonstrate the meaning of each entry. For example, e^g_{0,0} (which equals 2.6) consists of an integer part and a decimal part: 2 and 0.6. The integer 2 means that service S_0 appears at the first position 2 times, while the decimal 0.6 is the ε bias. Once we have computed NHM^g, we use node histogram-based sampling [22] to sample new permutations for the next generation.

Effective Local Search Procedure Through a Joint Strategy

In this section, we introduce the joint strategy of our local search procedure. We begin with the selection of suitable individuals for local search; this selection chooses individuals based on global and local population information using a fitness uniform selection scheme, shown in ALGORITHM 2. Subsequently, we present several local search operators based on the representation discussed in Sect. 4.3. These operators are specially designed to work seamlessly with the different neighborhoods investigated in this paper. The joint strategy for local search is summarized in ALGORITHM 1.

ALGORITHM 1. Joint strategy for local search (Step 3.3 in Fig. 1)
Input: P^g, n_nb and n_set
Output: updated P^g
1 Select a small number n_set of individuals to form a subset SelectedIndiSet of P^g using ALGORITHM 2;
2 foreach Π in SelectedIndiSet do
3     Generate n_nb neighbors of Π by local search;
4     Identify the best neighbor Π_best with the highest fitness;
5     Replace Π with Π_best;
6 return P^g;

ALGORITHM 1 takes three inputs: the g-th population P^g, the number n_set of individuals selected for local search, and the number n_nb of neighbors. In this algorithm, we start by selecting a fixed and small number n_set of candidate solutions to form a subset SelectedIndiSet of the current population P^g using ALGORITHM 2 (see details in Section 4.5.1). These selected solutions are used for local search. For each solution Π in SelectedIndiSet, we produce n_nb neighbors of Π by local search (see details in Section 4.5.2) and identify the best neighbor Π_best among them. We then replace the selected Π in SelectedIndiSet with Π_best. Eventually, we return an updated P^g.

Application of uniform distribution scheme

Two types of selection schemes for selecting suitable individuals for local search have been studied [34]: the random selection scheme and the statistics scheme. The random selection scheme is a primary selection method, where local search is potentially applied to all individuals at a predefined rate. However, it can be less effective, as it does not assign local search to the most suitable candidate solutions, and it is more time-consuming when the population size is large. The statistics scheme often chooses more suitable individuals based on statistical information about the current population.
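As a concrete illustration of Eqs. (7)-(9), the sketch below builds an NHM from a population of permutations and samples a new permutation from it. It is a simplified variant that fills positions from left to right, whereas Tsutsui's NHBSA [22] scans positions in a random order with a template; all names are illustrative assumptions.

```python
import numpy as np

def build_nhm(population, b_ratio=0.0002):
    """Eqs. (7)-(9): nhm[i][j] counts how often service i occupies position j."""
    m, n = len(population), len(population[0])
    eps = m / (n - 1) * b_ratio              # Eq. (9): small bias on every entry
    nhm = np.full((n, n), eps)
    for sol in population:                   # Eq. (8): delta is 1 exactly when
        for pos, service in enumerate(sol):  # `service` sits at position `pos`
            nhm[service][pos] += 1.0         # Eq. (7): accumulate over P^g
    return nhm

def sample_permutation(nhm, rng=None):
    """Draw one new permutation position by position, without replacement."""
    rng = rng or np.random.default_rng()
    n = nhm.shape[0]
    remaining = list(range(n))
    perm = []
    for pos in range(n):
        weights = np.array([nhm[s][pos] for s in remaining])
        idx = rng.choice(len(remaining), p=weights / weights.sum())
        perm.append(remaining.pop(idx))
    return perm
```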
For example, this method can assign local search to a set of candidate solutions with the largest differences measured by their fitness values. Our selection scheme, inspired by [45], is based on such statistical information and aims to select a small number of suitable individuals for local search, striking a good balance between local improvement and execution time. This selection scheme is presented in ALGORITHM 2, which applies local search to a set of selected individuals SelectedIndiSet. The size of SelectedIndiSet, n_set, is a pre-defined parameter. SelectedIndiSet consists of one elite individual and n_set − 1 individuals from n_set − 1 groups of individuals in each generation. Particularly, we calculate a uniform fitness interval based on the maximal fitness value maxfitness and the minimal fitness value minfitness of the current population P^g.

ALGORITHM 2. Fitness uniform selection scheme
Input: P^g and n_set
Output: selected solutions SelectedIndiSet
1 SelectedIndiSet ← {};
2 Sort P^g in descending order based on fitness;
3 Put the first individual of P^g into SelectedIndiSet;
4 Calculate the fitness range of the n_set − 1 groups based on a uniform interval between maxfitness and minfitness;
5 Assign each permutation in P^g to one of the n_set − 1 groups based on its fitness value;
6 Randomly select one permutation from each group and put it into SelectedIndiSet;
7 return SelectedIndiSet;

Therefore, the population is divided into n_set − 1 groups based on the calculated fitness interval. Consequently, these groups represent different sets of individuals, where the individuals within each group have similar fitness values. Note that, in any given generation, the actual number of individuals selected for local search can be less than n_set, because some groups may contain no individuals at all.

Stochastic Local Search Operators

To investigate appropriate neighborhood structures for composite services, suitable local search operators must be proposed that effectively utilize domain knowledge. We repeatedly apply these local search operators to SelectedIndiSet to explore neighboring solutions. Apart from that, to balance the quality of local improvement against computation time, only a random subset of the entire large neighborhood is explored through a stochastic strategy. Based on the permutation-based representation discussed in Sect. 4.3, local search operators can be proposed in a straightforward way as "swap" operators. In this paper, we investigate four different swap operators:

1) Constrained One-Point Swap: For a permutation Π = (Π_0, ..., Π_t, Π_{t+1}, ..., Π_{n−1}), two service indexes Π_a, where 0 ≤ a ≤ t, and Π_b, where t + 1 ≤ b ≤ n − 1, are selected and exchanged. The one-point swap local search operator is inspired by [9], which swaps a pair of service indexes in a permutation. In [9], local search exclusively explores the neighborhood based on one selected index of the permutation, so the size of the neighborhood associated with that index is n − 1. However, this can be very computationally expensive, because the number of swaps becomes significant for large n. Besides that, it can be less flexible, as the explored neighborhood is restricted to the one selected index.
Herein we propose a more efficient and flexible local search based on the one-point swap: first, we pre-determine a fixed, relatively small number n_nb of neighbors to be produced, so that the computation time assigned to local search remains reasonable; second, we randomly produce n_nb neighbors by swapping two randomly selected indexes, rather than swapping n − 1 indexes with one fixed index. We expect that swapping two randomly selected indexes is more effective for making local improvements within a budgeted computation time. Meanwhile, we constrain the two randomly selected indexes so that, in every swap, one lies before | and the other after |, because this excludes swaps that have lower opportunities for local improvement. For example, a neighbor created by swapping a pair of used service indexes (both before |) has a high chance of being decoded into the same DAG-based solution. Figure 4 shows an example of a one-point swap for a selected individual.

2) Constrained Two-Point Swap: For a permutation Π = (Π_0, ..., Π_t, Π_{t+1}, ..., Π_{n−1}), four service indexes Π_{a_1}, Π_{a_2}, Π_{b_1}, and Π_{b_2} are selected, where 0 ≤ a_1 ≤ t, 0 ≤ a_2 ≤ t, t + 1 ≤ b_1 ≤ n − 1, t + 1 ≤ b_2 ≤ n − 1, a_1 ≠ a_2, and b_1 ≠ b_2. Π_{a_1} and Π_{b_1} are exchanged; likewise, Π_{a_2} and Π_{b_2} are exchanged. Motivated by the one-point swap proposed above, we create the two-point swap operator by combining two constrained one-point swaps into a single operator. Our hypothesis is that the two-point swap can efficiently produce a higher-quality neighbor through one local change, rather than producing two neighbors through a sequence of two constrained one-point local changes. In particular, given a budgeted number of candidate solutions for local search, the two-point swap operator can perform a more efficient local search for finding high-quality solutions. Figure 5 shows an example of a two-point swap for a selected individual and a produced neighbor.

3) Constrained One-Block Swap: This operator is based on the concept of a block, i.e., consecutive points (service indexes) in a permutation. In this swap, two blocks are built up from two randomly generated starting points Π_a and Π_b, located before | and after | in the permutation, respectively. After the swap, produced neighbors inherit two parts of the original permutation. Figure 6 shows an example of a constrained one-block swap for a permutation, where one block runs from the start position StartPos1 to the last position of the used services, and the other block runs from the start position StartPos2 to the last index.

4) Constrained Layer-Based One-Point Swap: This operator is a constrained one-point swap that additionally preserves the layer structure identified in Sect. 4.2: any produced neighbor must keep the ordered layers on the left-hand side of the permutation (see the discussion of this constraint in the comparison of local search operators below).

EXPERIMENTS

We conduct experiments to evaluate the performance of our memetic EDA-based approaches, i.e., memetic EDA with constrained one-point swap (henceforth referred to as MEEDA-OP), memetic EDA with constrained two-point swap (henceforth referred to as MEEDA-TP), memetic EDA with constrained layer-based one-point swap (henceforth referred to as MEEDA-LOP), and memetic EDA with constrained one-block swap (henceforth referred to as MEEDA-OB). These memetic EDA-based approaches are compared to some state-of-the-art EC-based methods that were recently proposed to solve the same or similar problems: a PSO-based approach [10] (henceforth referred to as PSO), a GA-based approach (henceforth referred to as GA), a memetic GA-based approach [9] (henceforth referred to as MEGA), and an EDA-based approach [12] (henceforth referred to as NHM-EDA). Two benchmarks, WSC-08 [1] and WSC-09 [2], extended with QoS attributes generated from the QoS distribution of QWS [30], are created.
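Before turning to the results, the sketch below makes Sect. 4.5 concrete: the fitness uniform selection scheme (ALGORITHM 2), the four constrained swap operators, and the joint strategy of ALGORITHM 1. The helper names (fit, t_of, layer_of) are illustrative assumptions, and the layer-based variant is a reconstruction from the constraint described above, not the paper's exact implementation.

```python
import random

def fitness_uniform_select(population, fit, n_set, rng=random):
    """ALGORITHM 2: one elite plus one random pick per uniform fitness band."""
    ranked = sorted(population, key=fit, reverse=True)
    selected = [ranked[0]]                      # the elite individual
    f_max, f_min = fit(ranked[0]), fit(ranked[-1])
    if f_max == f_min:
        return selected
    width = (f_max - f_min) / (n_set - 1)       # uniform fitness interval
    groups = {}
    for ind in ranked[1:]:
        band = min(int((fit(ind) - f_min) / width), n_set - 2)
        groups.setdefault(band, []).append(ind)
    for members in groups.values():             # empty bands contribute nothing,
        selected.append(rng.choice(members))    # so fewer than n_set may be picked
    return selected

def one_point_swap(perm, t, rng=random):
    """Swap one used index (position <= t) with one unused index (position > t)."""
    a, b = rng.randrange(0, t + 1), rng.randrange(t + 1, len(perm))
    nb = perm[:]
    nb[a], nb[b] = nb[b], nb[a]
    return nb

def two_point_swap(perm, t, rng=random):
    """Two constrained one-point swaps applied as a single local change."""
    a1, a2 = rng.sample(range(t + 1), 2)
    b1, b2 = rng.sample(range(t + 1, len(perm)), 2)
    nb = perm[:]
    nb[a1], nb[b1] = nb[b1], nb[a1]
    nb[a2], nb[b2] = nb[b2], nb[a2]
    return nb

def one_block_swap(perm, t, rng=random):
    """Swap the block [start1..t] (used side) with the block [start2..n-1]."""
    start1, start2 = rng.randrange(0, t + 1), rng.randrange(t + 1, len(perm))
    return perm[:start1] + perm[start2:] + perm[t + 1:start2] + perm[start1:t + 1]

def layer_one_point_swap(perm, t, layer_of, rng=random, max_tries=50):
    """One-point swap that keeps the used side's layers in ascending order
    (assumed reconstruction; `layer_of` maps a service index to its layer)."""
    for _ in range(max_tries):
        nb = one_point_swap(perm, t, rng)
        layers = [layer_of[s] for s in nb[:t + 1]]
        if layers == sorted(layers):
            return nb
    return perm[:]

def joint_local_search(population, fit, t_of, n_set, n_nb, swap=one_point_swap):
    """ALGORITHM 1: replace each selected individual with its best neighbor.
    `t_of` maps a permutation to its used/unused boundary t."""
    for ind in fitness_uniform_select(population, fit, n_set):
        neighbors = [swap(ind, t_of(ind)) for _ in range(n_nb)]
        best = max(neighbors, key=fit)
        population[population.index(ind)] = best
    return population
```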
These two benchmarks have already been broadly employed in service composition [5], [10], [13] for experimental evaluations. Moreover, the number of web services in the service repository is doubled to form a new benchmark (with a much bigger search space) to demonstrate that memetic EDA can maintain high performance on problems of significantly larger size. We also make this benchmark available to the public. In particular, WSC08 contains 8 composition tasks with increasing sizes of the service repository, i.e., 316, 1116, 1216, 2082, 2180, 4396, 8226, and 16238 services, and WSC09 contains 5 composition tasks with increasing sizes of the service repository, i.e., 1144, 8258, 16276, 16602, and 30422 services, respectively. The population size is set to 200, the number of generations equals 100, and b_ratio is 0.0002. The size of SelectedIndiSet is 6, and the number n_nb of neighbors of each individual in SelectedIndiSet explored by the local search operators is 20. For all the competing methods, we strictly follow the settings in their respective papers. In GA, the crossover rate is set to 0.95 and the mutation rate to 0.05. In MEGA, the crossover rate is set to 0.95 and the local search rate to 0.05. We run each experiment with 30 independent repetitions. Following existing works [10], [11], [12], the weights of the fitness function in Eq. (5) are simply configured to balance QoSM and QoS. In particular, we set both w_1 and w_2 to 0.25, and w_3, w_4, w_5 and w_6 all to 0.125. Additional experiments have been conducted and show that all our methods work consistently well under different weight settings. The parameter p of type_link is determined by the preference of users and is recommended to be 0.75 for the plugin match according to [39].

Comparison of the Fitness

We employ the independent-samples T-test with a significance level of 5% to verify the observed differences in performance concerning fitness value and execution time. In particular, we use a pairwise comparison to compare all competing approaches; the top performers are then identified, and their related values are highlighted in green color in Table 2. Note that those methods that consistently find the best-known solutions over 30 runs with 0 standard deviation are also marked as top performers. The pairwise comparison results for fitness are summarized in Table 3, where win/draw/loss shows the score of one method compared to all the others, i.e., the frequency with which this method outperforms, equals, or is outperformed by the competing method. These testing and comparison methods are also used in Sect. 5.2. One objective of the experiments is to evaluate the effectiveness of the proposed memetic EDA-based approaches compared to NHM-EDA [12], PSO [10], GA and MEGA [9]. Table 2 shows the mean fitness values and the standard deviations over 30 repetitions. The pairwise comparison results of the fitness values are summarized in Table 3. From Table 2 and Table 3, we observe some interesting behaviors of these approaches in finding high-quality solutions. Based on these observations, we make the following analysis and draw some possible conclusions: Firstly, for the two baseline methods, PSO and GA, all EDA-based approaches (with and without local search) consistently outperform PSO, but only the memetic EDA-based approaches outperform GA. MEGA [9] achieves results that are very comparable to all our memetic EDA-based methods; however, MEEDA-LOP achieves the best performance.
As shown in Table 3, MEEDA-LOP only loses 1 out of 13 composition tasks over WSC-08 and WSC-09. Furthermore, MEEDA-LOP achieves extremely stable performance in most runs, with 0 standard deviation. In addition, MEEDA-OP, MEEDA-TP, MEEDA-OB, and MEEDA-LOP significantly outperform NHM-EDA [12]. This observation corresponds well with our expectation that the exploitation ability of EDA can be enhanced by hybridizing it with local search. We can see that all memetic EDA-based approaches reach a better balance of exploration and exploitation. Furthermore, among the four memetic EDA-based approaches, MEEDA-OB is the worst, while MEEDA-OP and MEEDA-TP are very comparable to each other. This observation demonstrates that neighborhoods based on blocks are less suitable for service composition problems, because swapping blocks can potentially ruin the learned distribution of promising solutions. Lastly, MEEDA-LOP is the best performer. This observation corresponds well with our assumption that using layer-based information can further improve the effectiveness of the one-point swap: MEEDA-LOP applies the local search operator to a much smaller, but more useful, set of services than that considered in MEEDA-OP. In summary, we sort all the competing approaches by effectiveness in descending order: MEEDA-LOP > MEGA > MEEDA-TP = MEEDA-OP > MEEDA-OB > GA > EDA > PSO.

Comparison of the Execution Time

The second objective of our experiment is to study the efficiency of all the proposed EDA-based approaches compared to EDA [12], PSO [10], GA and MEGA [9]. Table 4 shows the mean execution times and the standard deviations over 30 repetitions. The pairwise comparison results for the execution time are summarized in Table 5. From the two tables above, we make the following analysis and draw some possible conclusions about the execution time of these approaches: First, MEEDA-LOP consistently requires less execution time than the other approaches, as can be observed from the highlighted execution times in Table 4. It is a remarkable observation that the local search in MEEDA-LOP, based on layers and the constrained one-point swap, requires less computation time than that of MEEDA-OP. This significant improvement is mainly due to two techniques in MEEDA-LOP. The first is the archive technique, which carries half a population of elite individuals over to the next generation and thereby significantly reduces the overall computation time for decoding and evaluating the reserved individuals. The second is the layer-based information, which improves the effectiveness of the one-point swap, resulting in a more accurate and reliable NHM being learned. Therefore, useful services are more likely to be placed at the front of the permutation, which accelerates the decoding process. Second, in contrast, MEGA requires the highest execution time, because all candidate solutions in MEGA have an opportunity for local search under the random selection scheme, and MEGA also exhaustively searches the whole neighborhood based on one position. These results confirm that the combination of the random selection scheme and the exhaustive local search strategy in MEGA is less effective and more time-consuming than our statistics scheme and stochastic local search operators. Lastly, MEEDA-OB is also very computation-intensive among the memetic EDA-based approaches.
This is because the one-block swap retards the learning of accurate distributions, as its local improvements are less effective, so the services required for service composition are less likely to be placed at the front of a service queue. Also, building the blocks consumes extra time in MEEDA-OB. In summary, we sort all the competing approaches by execution time in ascending order: MEEDA-LOP > MEEDA-OP > MEEDA-TP > PSO > GA > MEEDA-OB > MEGA.

Comparison of the Convergence Rate

The third objective of our experiment is to study the convergence rate of all the approaches over 30 independent runs. We use WSC08-3 and WSC09-2 as two examples to illustrate the performance of all the compared methods. Because MEGA requires much more execution time, we set different execution time scales for the two tasks WSC08-3 and WSC09-2 so that their differences can easily be observed. First, we observe a significant increase in the fitness value towards the optimum for all the approaches excluding MEGA. These approaches eventually reach different levels of plateaus. Given the same budget of execution time, all memetic EDA-based methods converge significantly faster and require much less time than the baseline PSO over all the composition tasks. Second, MEGA suffers from a scalability issue when the size of the service repository is doubled in our new benchmark. The complexity of its local search strongly depends on n, i.e., the dimension of each permutation. Therefore, MEGA does not converge at all within the amount of execution time required by the other approaches. Lastly, MEEDA-LOP is consistently ranked as a top performer among all the competing methods. The convergence rates of MEEDA-OP and MEEDA-TP exhibit a very similar pattern. MEEDA-OB converges more slowly than the others, but it eventually reaches results comparable to those of MEEDA-OP and MEEDA-TP.

Comparison of local search operators

We investigate how often the mean fitness of the neighbors is better than the fitness of their original permutation in MEEDA-OP, MEEDA-TP, MEEDA-LOP, and MEEDA-OB, to determine which swap-based local search operator is more likely to produce better solutions. Herein we use the composition task WSC08-3 as an example: Fig. 9 shows the percentage of better neighbors produced by our four memetic EDA-based approaches along the generations over 30 runs for WSC08-3. The results show that MEEDA-OB and MEEDA-TP are less likely to produce better solutions, while MEEDA-OP and MEEDA-LOP are very comparable to each other, although slightly higher percentages of better mean fitness are achieved by MEEDA-LOP. We further analyze the differences between the layer-based constrained one-point swap and the constrained one-point swap using a permutation in Figure 10. Figure 10 exhibits an example of two neighbors produced from a permutation using constrained one-point swaps without considering layer information. In this example, one identical solution can be decoded from both the given permutation and the two produced neighbors, resulting in no local exploitation. In contrast, the discussed swaps are not permitted by the layer-based constrained one-point swap, where any produced neighbor must strictly follow the layer order on the left-hand side of the permutation. In the example, the given permutation is highlighted with two layers (i.e., L_1 and L_2) in ascending order; in particular, S_1, S_2 ∈ L_1 and S_3 ∈ L_2.
When the constrained one-point swap is performed, S_3 in the given permutation is replaced with S_4 or S_0 in the produced neighbor 1 and neighbor 2, respectively. However, L_2 is destroyed in the produced neighbors because S_4 ∈ L_1 and S_0 ∈ L_1. In contrast, if the layer-based one-point swap is applied to the given permutation, it prevents these two neighbors from being exploited: in general, all produced neighbors must keep all the ordered layers of the given permutation.

CONCLUSION

In this paper, we propose effective and efficient memetic EDA-based approaches to fully automated service composition. The success of this memetic approach relies principally on the local search, where several ideas are jointly employed. In particular, we proposed several neighborhood structures defined by different local search operators, which integrate naturally with our permutation-based representation. Besides that, a uniform distribution scheme and a stochastic strategy are jointly utilized for selecting individuals and applying local search. The experiments show that one of our proposed approaches, MEEDA-LOP, achieves significantly better effectiveness and efficiency compared to state-of-the-art EC-based approaches and the other memetic EDA-based approaches proposed in this paper. Future work can investigate variable neighborhoods that combine more than one local search operator in one evolutionary process, and investigate memetic EDA for handling multi-objective service composition problems.
9,373
1906.07900
2949728259
Comprehensive quality-aware automated semantic web service composition is an NP-hard problem, where service composition workflows are unknown, and comprehensive quality, i.e., Quality of Service (QoS) and Quality of Semantic Matchmaking (QoSM), is simultaneously optimized. The objective of this problem is to find a solution with optimized or near-optimized overall QoS and QoSM within polynomial time over a service request. In this paper, we propose novel memetic EDA-based approaches to tackle this problem. The proposed method investigates the effectiveness of several neighborhood structures of composite services by proposing domain-dependent local search operators. Apart from that, a joint strategy of the local search procedure is proposed to integrate with a modified EDA to reduce the overall computation time of our memetic approach. To better demonstrate the effectiveness and scalability of our approach, we create a more challenging, augmented version of the service composition benchmark based on WSC-08 and WSC-09. Experimental results on this benchmark show that one of our proposed memetic EDA-based approaches (i.e., MEEDA-LOP) significantly outperforms existing state-of-the-art algorithms.
Memetic algorithms have drawn growing attention from researchers in recent years and have achieved significant successes in many applications @cite_35 . By introducing local search, the performance of EC techniques can be improved. In the domain of service composition, to overcome the premature convergence of GP, Tabu search is combined with GP to solve QoS-aware data-intensive web service composition @cite_22 . @cite_7 proposed an indirect memetic approach for QoS-aware web service composition, where a domain-dependent crossover operator is proposed to produce candidate solutions. Besides that, an exhaustive local search is applied to composite solutions represented as permutations. However, the produced neighbors are likely to be decoded into the same composite solution. Therefore, the effectiveness of this local search operator demands further improvement.
{ "abstract": [ "Memetic computation is a paradigm that uses the notion of meme(s) as units of information encoded in computational representations for the purpose of problem-solving. It covers a plethora of potentially rich meme-inspired computing methodologies, frameworks and operational algorithms including simple hybrids, adaptive hybrids and memetic automaton. In this paper, a comprehensive multi-facet survey of recent research in memetic computation is presented.", "Web service composition has become a promising technique to build powerful business applications by making use of distributed services with different functions. Due to the explosion in the volume of data, providing efficient approaches to composing data intensive services will become more and more important in the field of service-oriented computing. Meanwhile, as numerous web services have been emerging to offer identical or similar functionality, web service composition is usually performed with end-to-end Quality of Service QoS properties which are adopted to describe the non-functional properties e.g., response time, execution cost, reliability, etc. of a web service. In this paper, a hybrid approach that integrates the use of genetic programming and tabu search to QoS-aware data intensive service composition is proposed. The performance of the proposed approach is evaluated using the publicly available benchmark datasets. A full set of experimental results show that a significant improvement of our approach over that obtained by the simple genetic programming method and several traditional optimization methods, has been achieved.", "" ], "cite_N": [ "@cite_35", "@cite_22", "@cite_7" ], "mid": [ "2104274529", "54571564", "" ] }
Memetic EDA-Based Approaches to Comprehensive Quality-Aware Automated Semantic Web Service Composition
Service Oriented Architecture (SOA) has been contributing to the reuse of software components [3]. Web services are one of the most successful implementations of SOA, providing services as "modular, self-describing, self-contained applications that are available on the Internet" [4]. Often, users' requirements cannot be satisfied by a single existing web service. Web service composition aims to loosely couple a set of web services to provide a value-added composite service (i.e., a solution of service composition) that accommodates users' complex requirements. These requirements are related to functional (i.e., Quality of Semantic Matchmaking, QoSM) and non-functional (i.e., Quality of Service, QoS) requirements, which give rise to semantic web service composition and QoS-aware web service composition, with the aim of optimizing the QoSM and QoS of service composition solutions, respectively. Many researchers have been working on solving these optimization problems in web service composition [5], [6], [7], [8], [9], [10], [11], [12], [13]. Existing works that study the above problems are classified as semi-automated or fully automated web service composition [14], under two different assumptions. One assumes that users know an abstract service composition workflow, and that all the composite services produced by the composition system must strictly obey the given workflow. However, this assumption is not always valid, since the workflow may not be provided or may not even be known by users. The second group of research works does not rely on any existing workflows. Instead, a composite service is constructed from scratch by selecting and connecting multiple atomic services obtained from the service repository [14]. Therefore, this construction process can end up with different workflows. Clearly, compared to semi-automated web service composition, fully automated web service composition opens new opportunities to further improve QoS and QoSM due to the different workflows constructed automatically. Nevertheless, the difficulty of the composition task is also increased. AI planning and Evolutionary Computation (EC) are two of the most widely used techniques for semi-automated and fully automated web service composition [5], [7], [10], [13], [15], [16], [17]. AI planning techniques focus on creating valid composite services, where functional correctness is always ensured by gradually constructed workflows. However, these approaches do not optimize the QoS or QoSM of the solutions produced [18]. EC techniques have been widely used to solve service composition problems that aim to optimize either one or both of QoSM and QoS, and they are potentially more useful in practice as they can efficiently find "good enough" composite solutions. Important approaches [5], [6], [7], [8], [9], [10], [11], [12], [13] based on Genetic Algorithms (GA) [19], Genetic Programming (GP) [20], Particle Swarm Optimization (PSO) [21] and the Estimation of Distribution Algorithm (EDA) [22] have been widely investigated in the literature. To effectively search for good solutions, EC techniques often employ useful information distilled from promising solutions to produce new offspring. This information can be used either implicitly or explicitly. Conventional EC techniques, such as GA and GP, fall into the implicit camp by producing new solutions through recombining solutions evolved previously [5], [7], [13].
In contrast, one EC technique that has achieved prominent success through the explicit use of information is the Estimation of Distribution Algorithm (EDA) [23]. In EDA, information about promising solutions evolved previously is captured compactly in the form of probability models. EDA has been successfully utilized for semi-automated service composition [6], [24], but it cannot support fully automated service composition. We recently proposed a new EDA-based approach for fully automated web service composition through reliable and accurate learning of a probability model that encodes the distribution of promising solutions [12], i.e., a distribution model. EDA stresses global exploration rather than local exploitation [25]. This is because the distribution model has the objective of exploring more promising regions of the entire solution space, without attempting to improve the quality of any specific solutions evolved previously. However, the optimization performance can often be improved directly through local modifications to promising solutions. By restricting the target region for local search and avoiding most of the randomness involved in sampling directly from the distribution model, this can potentially expedite the search for optimal solutions. Therefore, to improve its competency in finding more effective solutions, an idea is to enhance EDA with local search, yielding memetic EDA. Memetic EDA has been successfully applied to many optimization problems with local search operators [26], [25], such as arc routing and assembly flow-shop scheduling problems. On the one hand, although memetic EDA has been successfully applied in many applications, those memetic approaches are inappropriate for web service composition, as their local search operators are only applicable to domain-specific or problem-specific solution representations [25], [27]. On the other hand, despite the recent success of EDA-based service composition, the effectiveness of this approach can be enhanced by introducing memetic EDA. Several challenges remain to be addressed in developing a memetic EDA approach to service composition, as follows: First, a composite service is commonly represented as a DAG, and exploring the neighborhood of a DAG, especially a large DAG, is computationally infeasible [28]. Note that the discussed neighborhood is structured by local search operators on the search space, where neighbor solutions can be generated iteratively from a given candidate solution. Therefore, researchers [9], [29] often indirectly define the neighborhood of a composite service represented in the form of a permutation, which can be converted to a DAG through a separate decoding process. Often, so-called "swap" operators produce neighbors by swapping two random elements in a permutation. Consequently, a neighborhood is defined by the collection of permutations obtainable through a "swap" applied to any given permutation. However, such a neighborhood often contains a large proportion of neighboring permutations of inferior quality. For effective local search, the neighborhood must be refined to exclude most of the clearly unwise swapping choices by exploiting domain-specific knowledge. Second, it is very challenging to determine which candidate solutions should be selected for local search in memetic algorithms, as the selection method has a significant impact on the effectiveness and efficiency of memetic EDA.
Should an equal chance be given to all the candidate solutions, or should only elite solutions be considered for local search? Moreover, what are elite solutions, and how many of them should be modified locally? The answers to these challenging questions often depend on many factors, such as the EC algorithm and the domain problem. Therefore, it is challenging to determine one effective selection strategy for a memetic EDA-based approach to service composition. Third, a traditional strategy that exhaustively explores the whole neighboring space of composite services can incur a high computation cost without any guarantee of improving solution quality. For example, for a permutation-based representation, if a simple swap operator is utilized for exploring the neighborhood, then the dimension of the permutation determines the computational complexity. In the context of service composition, the dimension of such a permutation is usually equivalent to the size of the service repository. As the neighborhood size is extremely large when many services are to be considered during the service composition process, this strategy is infeasible for practical use. Fourth, in EDA, although a probability distribution model is adjusted to trace promising search areas throughout the generations, a proportion of promising solutions (i.e., permutations) is likely to be sampled repeatedly as the distribution model converges over the generations. Furthermore, these repeatedly sampled solutions are often favored by users, since they are candidate solutions of high quality. In the EDA-based approach to service composition, sampled permutation-based solutions are very costly, as they require repeated computation time for decoding and evaluation. To address the challenges above, we propose a memetic EDA-based approach that achieves substantially higher performance in effectiveness and efficiency. These outstanding performances are observed by comparing it with some recently proposed web service composition approaches, such as an EDA-based approach [12], a PSO-based approach [10], and GA- and memetic GA-based approaches [9]. In particular, an empirical, experimental study on the effectiveness of different neighborhoods structured by different local search operators is conducted. The contributions of this paper are listed below, where the first contribution addresses the first challenge discussed previously, and the second contribution addresses the remaining challenges. 1) To perform an effective local search on composite services, we first propose several neighborhood structures for candidate solutions. These neighborhoods are created by developing several novel domain-dependent local search operators, based on constructing and swapping effective building blocks of composite services for local improvements. Subsequently, we develop an effective memetic EDA-based approach based on our previous work [12], with natural integration of those local search operators. 2) To significantly reduce the computation time of our proposed memetic EDA-based approach, an integrated local search procedure is proposed together with a modification of the standard EDA. To decrease computation losses due to repetitive sampling and evaluations, we utilize an archiving technique to avoid sampling solutions repetitively. This technique is prevalent and straightforward to use. Besides that, the local search procedure employs an effective joint strategy for efficiently finding better solutions.
This strategy jointly considers a fitness uniform distribution scheme and stochastic local search together with our proposed local search operators. 3) To demonstrate the performance of our memetic EDA-based approach, we create a more challenging, augmented version of the service composition benchmark based on WSC-08 [1] and WSC-09 [2]. In particular, the new benchmark inherits the functionalities provided by the services in the benchmark datasets WSC-08 and WSC-09 and the QoS attributes of web services in the benchmark dataset QWS [30]. Moreover, the number of web services in the service repository is doubled to form a new benchmark (with a much bigger search space) to demonstrate that memetic EDA can maintain high performance on problems of significantly larger size. This benchmark has been made freely available online, as have the codes of our memetic EDA-based approach 1. We experimentally compare our memetic EDA-based approach with state-of-the-art methods that have recently been proposed to solve the same or a similar service composition problem, using the new benchmark. Our experimental results illustrate that our method achieves cutting-edge performance.

Literature on EC-based fully automated web service composition

Automated web service composition aims to loosely couple web services to fulfill a service request, without strictly obeying a pre-given abstract workflow. Instead, composition workflows are gradually built up while their component services are selected. Existing works on fully automated web service composition can be categorized into two approaches: direct approaches and indirect approaches [31]. The direct approaches represent composition solutions explicitly in a representation that displays the actual execution flows of composite services, while the indirect approaches often represent composite services implicitly as permutations, which require a decoding process to build up the actual execution workflows.

1. Two augmented benchmarks for automated web service composition are available from https://github.com/chenwangnida/Dataset, and the codes of our memetic EDA-based approach are available from https://github.com/chenwangnida/MENHBSA4SWSC.

In the first category, tree- and graph-based representations are widely used to represent service composition solutions directly. A graph-based evolutionary process is introduced in [32] to directly evolve DAG-based service composition solutions, applying domain-dependent crossover and mutation operators with repairing methods. GP is utilized to search for optimal solutions represented as trees. [7] proposes a context-free grammar for randomly initializing tree-based service composition solutions with correct structures of composite services. In contrast, [13] randomly initializes tree-based service composition solutions completely, but develops adaptive crossover and mutation rates according to the diversity of the population to accelerate convergence. Both approaches [7], [13] utilize a penalization method for filtering out incorrect solutions when evaluating the QoS of candidate solutions. To achieve higher performance, [5], [8] utilize a greedy search algorithm for creating correct DAG-based composition workflows, which are mapped to tree-based ones in different ways. During the evolutionary process, the correctness of the solutions is ensured by domain-dependent crossover and mutation.
However, the mapped tree-based representations suffer from a scalability issue, since many replicas of subtrees are produced by the mapping methods. To overcome this issue, [11] proposes a tree-like representation, in which the replicas of subtrees are handled by removing them and inserting edges from the roots of the replicas to the roots of the copies. In the second category, service composition solutions are represented as permutations, which are then decoded into solutions represented as DAGs [10], [31], [33]. PSO is utilized to find an optimized queue of services (i.e., a permutation), which can be decoded into a corresponding DAG-based composite service [33]. [10] extends [33] to jointly optimize QoSM and QoS, where a weighted DAG is decoded, in which the edge weights correspond to the matchmaking quality between services. These two PSO-based approaches rely on PSO to determine the weights of a particle's position (each corresponding to a service) to form an ordered service queue. Optimizing QoSM and QoS simultaneously is more challenging than optimizing QoS alone, because the search space increases significantly, which demands more effective and efficient search techniques. Apart from that, it has been suggested that utilizing an indirect representation often contributes to higher performance than a direct representation [31]. This is because the search space is not unwittingly restricted by unconstrained random initialization of solutions and operators. In summary, EC techniques have been showing their promise in fully automated web service composition, and the indirect approaches have been indicated to be more effective. Therefore, EC techniques with indirect representations are the techniques we focus on for solving the service composition problem in this paper.

Literature on memetic EC-based approaches and EDA

Memetic algorithms have drawn growing attention from researchers in recent years and have achieved significant successes in many applications [34]. By introducing local search, the performance of EC techniques can be improved. In the domain of service composition, to overcome the premature convergence of GP, Tabu search is combined with GP to solve QoS-aware data-intensive web service composition [35]. [9] proposed an indirect memetic approach for QoS-aware web service composition, where a domain-dependent crossover operator is proposed to produce candidate solutions. Besides that, an exhaustive local search is applied to composite solutions represented as permutations. However, the produced neighbors are likely to be decoded into the same composite solution. Therefore, the effectiveness of this local search operator demands further improvement. Recently, EDA has been used as a technique to tackle permutation-based optimization problems [23]. In particular, a distribution model is learned iteratively from each population, and new offspring are generated based on the learned model. Moreover, domain-dependent local search operators are often introduced to enhance the performance of EDA. For example, a probability matrix related to the job priority permutation of a solution is learned in EDA-based flow-shop scheduling, and different job-based local search operators were proposed to enhance the exploitation ability of EDA [25]. An Edge Histogram Matrix is applied to uncertain capacitated arc routing problems and is learned from solutions represented as sets of routes [27].
To make local improvements, different move operators, such as single insertion and swap, are also proposed. The use of EDA has previously been investigated only for semi-automated web service composition [6], [24], [36]. We recently proposed an EDA-based approach for fully automated web service composition, where candidate solutions are represented as permutations over a given service repository. The success of the proposed method strongly depends on the distribution model and the way the distribution model is learned. We employ a Node Histogram Matrix (NHM) to learn the distribution of promising solutions in a population, and the Node Histogram-Based Sampling Algorithm (NHBSA) [22] is employed to produce candidate solutions. Although we have started an initial study of EDA-based fully automated service composition, there remains an opportunity to improve its performance further. EDA is good at global exploration, which motivates the introduction of local search operators into EDA to enhance its capability in exploitation. In summary, on the one hand, memetic EDA-based approaches have been investigated for many problems other than fully automated service composition, achieving promising results. On the other hand, notwithstanding the success achieved in our initial investigation of EDA-based fully automated service composition, the performance of this EDA-based approach can be further improved by combining it with local search.

SEMANTIC WEB SERVICE COMPOSITION PROBLEM

A semantic web service (service, for short) is considered as a tuple S = (I_S, O_S, QoS_S), where I_S is a set of service inputs that are consumed by S, O_S is a set of service outputs that are produced by S, and QoS_S = {t_S, c_S, r_S, a_S} is a set of non-functional attributes of S. The inputs in I_S and outputs in O_S are parameters modeled through concepts in a domain-specific ontology O. The attributes t_S, c_S, r_S, a_S refer to the response time, cost, reliability, and availability of service S, respectively, which are four commonly used QoS attributes [37]. A service repository SR is a finite collection of services supported by a common ontology O. A composition task (also called a service request) over a given SR is a tuple T = (I_T, O_T), where I_T is a set of task inputs and O_T is a set of task outputs. The inputs in I_T and outputs in O_T are parameters that are semantically described by concepts in the ontology O. Two special atomic services Start = (∅, I_T, ∅) and End = (O_T, ∅, ∅) are always included in SR to account for the input and output of a given composition task T. We use matchmaking types to describe the level of a match between outputs and inputs [38]. For concepts a, b in O, the matchmaking returns exact if a and b are equivalent (a ≡ b), plugin if a is a sub-concept of b (a ⊑ b), subsume if a is a super-concept of b (a ⊒ b), and fail if none of the previous matchmaking types is returned. In this paper we are only interested in exact and plugin matches for robust compositions, see [39]. As argued in [39], plugin matches are less preferable than exact matches due to the overheads associated with data processing. For plugin matches, the semantic similarity of concepts is suggested to be considered when comparing different plugin matches. A robust causal link [40] is a link between two matched services S and S′, denoted as S → S′, if an output a (a ∈ O_S) of S serves as an input b (b ∈ O_S′) of S′ satisfying either a ≡ b or a ⊑ b.
For concepts a, b in O, the semantic similarity sim(a, b) is calculated based on the edge counting method in a taxonomy like WordNet [41]. Advantages of this method are simple calculation and good semantic measurement [41]. Therefore, the matchmaking type and semantic similarity of a robust causal link are defined as follows:

$$type_{link} = \begin{cases} 1 & \text{if } a \equiv b \text{ (exact match)} \\ p & \text{if } a \sqsubseteq b \text{ (plugin match)} \end{cases} \qquad (1)$$

$$sim_{link} = sim(a, b) = \frac{2N_c}{N_a + N_b} \qquad (2)$$

with a suitable parameter p, 0 < p < 1, and with N_a, N_b and N_c, which measure the distances from concept a, concept b, and the closest common ancestor c of a and b to the top concept of the ontology O, respectively. However, if more than one pair of matched output and input exists from service S to service S′, type_e and sim_e take on their average values. The QoSM of a composite service is obtained by aggregating over all robust causal links as follows:

$$MT = \prod_{j=1}^{m} type_{link_j} \qquad (3)$$

$$SIM = \frac{1}{m} \sum_{j=1}^{m} sim_{link_j} \qquad (4)$$

Formal expressions as in [42] are used to represent service compositions. The constructors •, ∥, + and * are used to denote sequential composition, parallel composition, choice, and iteration, respectively. The set of composite service expressions is the smallest collection SC that contains all atomic services and that is closed under sequential composition, parallel composition, choice, and iteration. That is, whenever C_0, C_1, ..., C_d are in SC then •(C_1, ..., C_d), ∥(C_1, ..., C_d), +(C_1, ..., C_d), and *C_0 are in SC, too. Let C be a composite service expression. If C denotes an atomic service S then its QoS is given by QoS_S. Otherwise the QoS of C can be obtained inductively as summarized in Table 1:

TABLE 1: QoS calculation for a composite service expression C

C                 | r_C                          | a_C                          | ct_C                          | t_C
•(C_1, ..., C_d)  | \prod_{k=1}^{d} r_{C_k}      | \prod_{k=1}^{d} a_{C_k}      | \sum_{k=1}^{d} ct_{C_k}       | \sum_{k=1}^{d} t_{C_k}
∥(C_1, ..., C_d)  | \prod_{k=1}^{d} r_{C_k}      | \prod_{k=1}^{d} a_{C_k}      | \sum_{k=1}^{d} ct_{C_k}       | MAX{ t_{C_k} | k ∈ {1, ..., d} }
+(C_1, ..., C_d)  | \sum_{k=1}^{d} p_k r_{C_k}   | \sum_{k=1}^{d} p_k a_{C_k}   | \sum_{k=1}^{d} p_k ct_{C_k}   | \sum_{k=1}^{d} p_k t_{C_k}
*C_0              | r_{C_0}^{\ell}               | a_{C_0}^{\ell}               | \ell \cdot ct_{C_0}           | \ell \cdot t_{C_0}

Herein, p_1, ..., p_d with \sum_{k=1}^{d} p_k = 1 denote the probabilities of the different options of the choice +, while ℓ denotes the average number of iterations. Therefore, the QoS of a service composition solution, i.e., availability (A), reliability (R), execution time (T), and cost (CT), can be obtained by aggregating a_C, r_C, t_C and ct_C as in Table 1. In the presentation of this paper, we mainly focus on two constructors, sequence • and parallel ∥, as in most automated service composition works [5], [8], [10], [11], [32], [33], where service composition solutions are represented as a Directed Acyclic Graph (DAG). We can easily calculate the QoS of a composite service that is represented as a DAG [10] according to Table 1. When multiple quality criteria are involved in decision making, the fitness of a solution is defined as a weighted sum of all individual criteria in Eq. (5), assuming the preference of each quality criterion based on its relative importance is provided by the user [43]:

$$Fitness(C) = w_1 \widehat{MT} + w_2 \widehat{SIM} + w_3 \widehat{A} + w_4 \widehat{R} + w_5 (1 - \widehat{T}) + w_6 (1 - \widehat{CT}) \qquad (5)$$

with \sum_{k=1}^{6} w_k = 1. This objective function defines a comprehensive quality model for service composition; the weights can be adjusted according to the user's preferences. \widehat{MT}, \widehat{SIM}, \widehat{A}, \widehat{R}, \widehat{T}, and \widehat{CT} are normalized values calculated within the range from 0 to 1 using Eq. (6). To simplify the presentation we also use the notation (Q_1, Q_2, Q_3, Q_4, Q_5, Q_6) = (MT, SIM, A, R, T, CT). Q_1 and Q_2 have minimum value 0 and maximum value 1.
The goal of comprehensive quality-aware service composition is to find a composite service expression C that maximizes the objective function in Eq. (5). C is hence considered as the best possible solution for a given composition task T.

MEMETIC EDA-BASED APPROACH FOR SEMANTIC WEB SERVICE COMPOSITION

In this section, we present our memetic EDA-based approach to fully automated semantic web service composition. We start by giving an overview of the approach. Subsequently, we discuss its essential steps: the first is to discover relevant services and service layers (see Sect. 4.2); the second is to introduce the permutation-based representation proposed in our previous work (see Sects. 4.3 and 4.4); the third is to introduce an effective joint strategy for the local search procedure (see Sect. 4.5). We propose several key ideas that are jointly employed to build our memetic EDA-based approach:

1) A composite service is commonly represented as a DAG, since a DAG can intuitively represent an execution flow of web services and allows efficient computation of QoS. The success of the EDA strategy strongly relies on a proper distribution model for learning the knowledge of promising solutions. Our initial study [12] represents a composite service as a unique queue of services, i.e., a permutation of atomic services, which is mapped from a DAG-based solution. Composite services in this permutation form contribute to a distribution model to be learned and to new permutation-based promising solutions to be sampled. Therefore, a bi-directional map is ensured between permutations and DAGs for learning and evaluation purposes.

2) To significantly decrease the computation time of the local search procedure, it is crucial to select a restricted number of suitable candidate solutions for local search. We assume that candidate solutions with close fitness values are similar in their corresponding DAG forms, so the neighbors produced from these candidate solutions can be the same. Therefore, we group candidate solutions based on their fitness values according to a uniform distribution scheme, which allows candidate solutions with the most considerable differences, measured by single-objective fitness values, to be effectively chosen for applying local search.

3) It is not efficient to exhaustively explore all the neighbors as in conventional local search [9]. Instead, stochastically searching the neighboring solutions can significantly reduce computation cost [26]. Therefore, we introduce a stochastic local search with EDA to sample neighbors efficiently.

4) Exhaustively exploring the whole neighborhood of a composite service is usually computationally infeasible [28]. However, it is straightforward to define the neighborhood on a permutation-based representation by so-called swap operators. To develop effective swap operators, we utilize domain knowledge of service composition to create effective building blocks for these swap operators on permutation-based candidate solutions. These swap operators aim to exploit fitter neighbors effectively; that is, they are likely to make local improvements in the produced neighbors.
An overview of the memetic EDA-based algorithm for automated service composition

An overview of the memetic EDA-based approach is presented in Figure 1, consisting of the following steps: initialize the population, evaluate the population, select the superior subpopulation, learn the probability model, sample individuals, and return the optimal solutions. We start with discovering all the relevant services related to a given composition request T in Step 1. Meanwhile, several service layers are identified (see details in Subsection 4.2). These relevant services are used to randomly generate m composite services represented as permutations, Π^g_k, where g = 0 and k = 1, ..., m. In Step 2, these permutation-based individuals are decoded into DAG-based solutions using a forward graph building technique [10], based on which the fitness in Eq. (5) of each individual can be calculated. In Step 3, we merge the current population P^g with an archive. The archive is initially an empty individual set and will later be filled with elite composite services. By applying Breadth-First Search (BFS) to each corresponding DAG-based solution in the merged population, we produce re-encoded permutation-based solutions Π^g_k. Then, the local search procedure is applied to a very small set of these permutations. This small permutation set is selected based on a fitness uniform selection scheme over the current population (see details in Sect. 4.5.1). For each permutation in the small set, a stochastic local search is employed to create new permutations as its neighbors, and the best neighbor is identified based on the fitness value. The permutation in the small set is then replaced with its best neighbor (see details in Subsection 4.5). The top half of the best-performing solutions are retained in P^g according to their fitness values and put into the archive as elite solutions. In Step 4, we use these elite solutions in the archive to learn an NHM^g of generation g, which produces offspring for generation g+1 using NHBSA (see details in Subsection 4.4). Consequently, we go back to Step 2 to evaluate the fitness of the new offspring. Steps 2 to 4 are repeated until the maximum number of generations is reached. Eventually, the best solution found throughout the evolutionary process is returned.

In a nutshell, we introduce a permutation-based representation derived from the common DAG-based one. In our proposed algorithm, we repeatedly switch between these two representations for better searching or evaluation purposes. Furthermore, an effective and efficient local search procedure is developed through the use of the selection scheme and the stochastic local search.
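The steps above can be condensed into a compact, runnable Python sketch. Everything here is an illustrative stand-in: the toy fitness, the placeholder model learning and sampling (the real algorithm decodes permutations into DAGs, learns an NHM, and samples offspring with NHBSA, Sect. 4.4), and the top-k selection (the real algorithm uses the fitness uniform scheme of Sect. 4.5.1).

```python
# Condensed sketch of the memetic EDA main loop (Fig. 1). The fitness,
# model learning, and sampling below are toy placeholders only.
import random

def memetic_eda(init, evaluate, select_for_ls, local_search, learn, sample,
                m=20, max_gen=30):
    population, archive = init(m), []
    for g in range(max_gen):
        scored = sorted(((evaluate(p), p) for p in population + archive),
                        reverse=True)                    # Step 2: evaluate
        for i in select_for_ls(scored):                  # Step 3: local search
            scored[i] = local_search(scored[i][1], evaluate)
        scored.sort(reverse=True)
        archive = [p for _, p in scored[: m // 2]]       # keep elites
        model = learn(archive)                           # Step 4: learn model
        population = [sample(model) for _ in range(m)]   # sample offspring
    return max((evaluate(p), p) for p in population + archive)

n = 8
evaluate = lambda p: -sum(abs(i - v) for i, v in enumerate(p))  # toy fitness
init = lambda m: [random.sample(range(n), n) for _ in range(m)]
select_for_ls = lambda scored: range(3)            # stand-in for Sect. 4.5.1

def local_search(p, evaluate, n_nb=10):
    best = (evaluate(p), p)
    for _ in range(n_nb):                          # stochastic neighborhood
        q = p[:]
        i, j = random.sample(range(n), 2)
        q[i], q[j] = q[j], q[i]
        best = max(best, (evaluate(q), q))
    return best

learn = lambda elites: elites                      # placeholder "model"
sample = lambda elites: random.choice(elites)[:]   # placeholder sampling

print(memetic_eda(init, evaluate, select_for_ls, local_search, learn, sample))
```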
Relevant Services and Service Layers

Discovering relevant services and service layers is an initial but crucial step for our memetic EDA-based approach. We achieve two goals at this initial stage: the first goal is to reduce the size of the service repository SR to keep only those services that are relevant to the composition task T; the second goal is to identify the service layers of these relevant services. In particular, a group of layers is identified, and each layer contains a set of services that have the same longest distance to Start. We adopt the layer discovering method in [44] to find relevant services and service layers, as illustrated in the following example. Fig. 3 shows an example of discovering relevant services and service layers given a service request T, where five relevant services (i.e., S_0, S_1, S_2, S_3, and S_4) and two layers (i.e., L_1 and L_2) are found. In L_1, the services S_0, S_1, S_2, and S_4 can be satisfied by the inputs {a, b} of T, and they have the same distance to Start (note that the distance is measured by the number of predecessors), while S_3 in L_2 requires additional inputs from other services and is associated with a longer distance to Start.

A Novel Permutation-Based Representation

Service composition solutions are commonly represented as Directed Acyclic Graphs (DAGs) [5], [8], [10], [11], [32], [33]. Let G = (V, E) be a DAG-based composite solution from Start to End, where nodes correspond to services and edges correspond to robust causal links. Often, V does not contain all services in SR. Many combinatorial optimization problems naturally represent solutions as permutations, which can differ from problem to problem [23]. Here we represent composite services as permutations, and we ensure a bi-directional map between permutations and DAGs. This bi-directional map is crucial for learning the distribution of promising composite solutions, because it is less reliable to learn a distribution from permutations if different permutations are mapped to the same DAG-based composite service.

Let Π = (Π_0, ..., Π_t, Π_{t+1}, ..., Π_{n−1}) be a permutation of the elements {0, ..., t, t+1, ..., n−1} such that Π_i ≠ Π_j for all i ≠ j. In particular, {0, ..., t} are the service indexes (i.e., id numbers) of the component services of the corresponding G, sorted by the longest distance from Start to each component service of G, while {t+1, ..., n−1} are the indexes of the remaining services in SR that are not utilized by G. We use Π^g_k to denote the k-th (out of m, where m is the population size) service composition solution, and P^g = [Π^g_0, ..., Π^g_k, ..., Π^g_{m−1}] to represent the population of solutions of generation g.

An example of producing a permutation-based composite solution is illustrated in Fig. 3. Take the permutation [4, 1, 2, 3, 0] as an example. This service index queue is decoded into a DAG G^0_0 representing a service composition that satisfies the composition task T. Afterwards, G^0_0 is encoded into the permutation Π^0_0 = [1, 2, 3 | 4, 0]. Herein, each position on the left side of | corresponds to a service discovered by a BFS on G^0_0 from Start; this BFS additionally follows the ascending order of service indexes during the search. The right side consists of the remaining atomic services in SR that are not part of G^0_0. Note that | is displayed only for the courtesy of the reader, rather than being part of the permutation-based representation. Furthermore, we do not permit the encoding [1, 2, 3 | 0, 4], as no information can be extracted from G^0_0 to determine the positions of 0 and 4 in the permutation. A permutation-based population P^g can be created from m permutation-based solutions; for example, with m = 6, P^g = [sol^g_0, sol^g_1, sol^g_2, sol^g_3, sol^g_4, sol^g_5].
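The decoding half of this bi-directional map can be illustrated with a small, self-contained sketch: a permutation is scanned repeatedly, and a service is appended whenever its inputs are satisfied by the task inputs plus the outputs produced so far, until the task outputs are covered. This is a simplified stand-in for the forward graph building technique of [10], using exact matching only and returning a service order rather than a full DAG.

```python
# Simplified sketch of decoding a permutation into an executable service
# order (stand-in for forward graph building [10]; exact matching only).

def decode(permutation, services, task_in, task_out):
    available = set(task_in)   # outputs known so far (Start provides I_T)
    used = []
    progress = True
    while progress and not set(task_out) <= available:
        progress = False
        for sid in permutation:            # earlier services get priority
            ins, outs = services[sid]
            if sid not in used and set(ins) <= available:
                used.append(sid)
                available |= set(outs)
                progress = True
                break                      # restart scan from the head
    return used if set(task_out) <= available else None

# Toy repository: sid -> (inputs, outputs)
services = {0: ({"a"}, {"c"}), 1: ({"c"}, {"d"}),
            2: ({"b"}, {"e"}), 3: ({"z"}, {"q"})}
print(decode([3, 0, 1, 2], services, task_in={"a", "b"}, task_out={"d"}))
# -> [0, 1]; services 3 and 2 are unused and would sit after '|'
```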
Application of node histogram-based sampling

[22] proposed the Node Histogram-Based Sampling Algorithm (NHBSA) as a tool for sampling new candidate solutions, which are commonly represented in the form of permutations. By employing the representation of composite services discussed in Sect. 4.3, we are now capable of applying NHBSA to sample new permutations as candidate composite services. The NHM at generation g, denoted by NHM^g, is an n × n matrix with entries e^g_{i,j} as follows:

$$e^g_{i,j} = \sum_{k=0}^{m-1} \delta_{i,j}(sol^g_k) + \varepsilon \quad (7)$$

$$\delta_{i,j}(sol^g_k) = \begin{cases} 1 & \text{if } I^g_k(S_i) = j \\ 0 & \text{otherwise} \end{cases} \quad (8)$$

$$\varepsilon = \frac{m}{n-1}\, b_{ratio} \quad (9)$$

where i, j = 0, 1, ..., n−1, I^g_k(S_i) denotes the position of service S_i in solution sol^g_k, and b_ratio is a predetermined bias. Roughly speaking, entry e^g_{i,j} counts the number of times that service S_i appears in position j of the service queue over all solutions in population P^g. We pick one element of NHM^g as an example to demonstrate the meaning of each element. For example, e^g_{0,0} (which equals 2.6) is made of an integer part and a decimal part: 2 and 0.6. The integer 2 means that service S_0 appears at the first position twice, while the decimal 0.6 is the ε bias. Once we have computed NHM^g, we use node histogram-based sampling [22] to sample new permutations for the next generation.
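A compact, self-contained sketch of Eqs. (7)-(9), together with position-by-position roulette-wheel sampling in the style of NHBSA, follows. This is a simplified illustration, not the exact sampling procedure of [22].

```python
# Sketch of learning a Node Histogram Matrix (Eqs. 7-9) and sampling a
# new permutation from it, position by position (NHBSA-style [22]).
import random

def learn_nhm(population, b_ratio=0.0002):
    m, n = len(population), len(population[0])
    eps = (m / (n - 1)) * b_ratio                 # Eq. (9): bias term
    nhm = [[eps] * n for _ in range(n)]
    for sol in population:                        # Eqs. (7)-(8)
        for pos, service in enumerate(sol):
            nhm[service][pos] += 1                # service seen at this position
    return nhm

def sample_permutation(nhm):
    n = len(nhm)
    remaining = set(range(n))
    perm = []
    for pos in range(n):                          # fill positions left to right
        candidates = list(remaining)
        weights = [nhm[s][pos] for s in candidates]
        s = random.choices(candidates, weights=weights, k=1)[0]
        perm.append(s)
        remaining.remove(s)
    return perm

pop = [[0, 1, 2, 3], [0, 2, 1, 3], [0, 1, 3, 2]]
nhm = learn_nhm(pop)
print(sample_permutation(nhm))   # very likely starts with service 0
```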
Effective Local Search Procedure Through a Joint Strategy

In this section, we introduce the joint strategy of our local search procedure. We begin with the selection of suitable individuals for local search; this selection aims to choose individuals based on global and local population information using the fitness uniform selection scheme in ALGORITHM 2. Subsequently, we present several local search operators built on the representation discussed in Sect. 4.3. These operators are specially designed to work seamlessly with the different neighborhoods investigated in this paper. The joint strategy for local search is summarized in ALGORITHM 1.

ALGORITHM 1. Joint strategy for local search (Step 3.3 in Fig. 1)
Input: P^g, n_nb and n_set
Output: updated P^g
1 Select a small number n_set of individuals to form a subset SelectedIndiSet of P^g using ALGORITHM 2;
2 foreach Π in SelectedIndiSet do
3   Generate a set of n_nb neighbors of Π by local search;
4   Identify the best neighbor Π_best with the highest fitness;
5   Replace Π with Π_best;
6 return P^g;

ALGORITHM 1 takes three inputs: the g-th population P^g, the number n_set of individuals selected for local search, and the number n_nb of neighbors. We start by selecting a fixed, small number n_set of candidate solutions to form a subset SelectedIndiSet of the current population P^g using ALGORITHM 2 (see details in Section 4.5.1). These selected solutions are used for local search. For each solution Π in SelectedIndiSet, we produce n_nb neighbors of Π by local search (see details in Section 4.5.2) and identify the best neighbor Π_best among them. We then replace the selected Π in SelectedIndiSet with its best neighbor Π_best. Eventually, we return an updated P^g.

Application of the uniform distribution scheme

Two types of selection schemes for choosing suitable individuals for local search have been studied [34]: the random selection scheme and the statistics scheme. The random selection scheme is a primary selection method, where local search is potentially applied to any individual at a predefined rate. However, it can be less effective, as it does not assign local search to the most suitable candidate solutions, and it is more time-consuming when the population size is large. The statistics scheme chooses more suitable individuals based on statistical information about the current population. For example, it can assign local search to a set of candidate solutions with the largest differences measured by their fitness values. Our selection scheme, inspired by [45], uses such statistical information to select a small number of suitable individuals for local search, striking a good balance between local improvement and execution time. This selection scheme is presented in ALGORITHM 2, which applies local search to a set of selected individuals SelectedIndiSet. The size of SelectedIndiSet, n_set, is a pre-defined parameter. SelectedIndiSet consists of one elite individual and n_set − 1 individuals from n_set − 1 groups of individuals in each generation. In particular, we calculate a uniform fitness interval based on the maximal fitness value maxfitness and the minimal fitness value minfitness of the current population P^g.

ALGORITHM 2. Fitness uniform selection scheme
Input: P^g and n_set
Output: selected solutions SelectedIndiSet
1 SelectedIndiSet ← {};
2 Sort P^g in descending order based on the fitness;
3 Put the first individual in P^g into SelectedIndiSet;
4 Calculate the fitness range of each of the n_set − 1 groups based on a uniform interval between maxfitness and minfitness;
5 Assign each permutation in P^g to one of the n_set − 1 groups based on its fitness value;
6 Randomly select one permutation from each group and put it into SelectedIndiSet;
7 return SelectedIndiSet;

The population is thus divided into n_set − 1 groups based on the calculated fitness interval, so that each group gathers individuals of similar fitness. Note that, in a given generation, the actual number of individuals selected for local search can be less than n_set, because some groups may contain no individuals at all.
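A self-contained sketch of this scheme follows: one elite plus one random pick from each of n_set − 1 fitness bands. The toy fitness values are made up for demonstration; this is an illustration of ALGORITHM 2, not the exact implementation.

```python
# Sketch of the fitness uniform selection scheme (ALGORITHM 2):
# one elite plus one random pick from each of n_set - 1 fitness bands.
import random

def fitness_uniform_select(scored, n_set):
    """scored: list of (fitness, individual); returns selected individuals."""
    scored = sorted(scored, key=lambda s: s[0], reverse=True)
    selected = [scored[0][1]]                    # the elite individual
    fmax, fmin = scored[0][0], scored[-1][0]
    if fmax == fmin:
        return selected
    width = (fmax - fmin) / (n_set - 1)          # uniform fitness interval
    groups = [[] for _ in range(n_set - 1)]
    for fit, ind in scored:
        g = min(int((fmax - fit) / width), n_set - 2)
        groups[g].append(ind)
    for g in groups:                             # a group may be empty, so the
        if g:                                    # actual selection can be < n_set
            selected.append(random.choice(g))
    return selected

scored = [(0.91, "A"), (0.85, "B"), (0.64, "C"), (0.52, "D"), (0.33, "E")]
print(fitness_uniform_select(scored, n_set=3))
```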
Stochastic Local Search Operators

To investigate an appropriate neighborhood structure for composite services, suitable local search operators must be proposed that effectively utilize domain knowledge. We then repeatedly apply these local search operators to SelectedIndiSet to explore neighboring solutions. Apart from that, to balance the quality of local improvement against computation time, only a random subset of the entire, large neighborhood is explored, following a stochastic strategy. Based on the permutation-based representation discussed in Sect. 4.3, local search operators can be defined straightforwardly as "swaps". In this paper, we investigate four different swap operators:

1) Constrained One-Point Swap: For a permutation Π = (Π_0, ..., Π_t, Π_{t+1}, ..., Π_{n−1}), two service indexes Π_a, where 0 ≤ a ≤ t, and Π_b, where t+1 ≤ b ≤ n−1, are selected and exchanged. This operator is inspired by [9], which swaps a pair of service indexes in a permutation. In [9], local search exhaustively explores the neighborhood based on one selected index of the permutation, so the size of the neighborhood associated with that index is n−1. However, this can be computationally expensive, because the number of swaps becomes significant for large n. Besides, it is less flexible, as the explored neighborhood is restricted to the one selected index. Herein we propose a more efficient and flexible local search with one-point swap: first, we pre-determine a fixed, relatively small number of neighbors n_nb to be produced, bounding the computation time assigned to local search; second, we randomly produce n_nb neighbors by swapping two randomly selected indexes, rather than swapping n−1 indexes against one fixed index. We expect that swapping two randomly selected indexes is more effective for making local improvements within a budgeted computation time. Meanwhile, we constrain the two randomly selected indexes so that, in every swap, one lies before | and the other after |, because this excludes swaps with lower chances of local improvement. For example, a neighbor created by swapping a pair of used service indexes has a high chance of decoding into the same DAG-based solution. Figure 4 shows an example of a one-point swap for a selected individual.

2) Constrained Two-Point Swap: For a permutation Π = (Π_0, ..., Π_t, Π_{t+1}, ..., Π_{n−1}), four service indexes Π_{a_1}, Π_{a_2}, Π_{b_1}, and Π_{b_2} are selected, where 0 ≤ a_1 ≤ t, 0 ≤ a_2 ≤ t, t+1 ≤ b_1 ≤ n−1, t+1 ≤ b_2 ≤ n−1, a_1 ≠ a_2, and b_1 ≠ b_2. Π_{a_1} and Π_{b_1} are exchanged; likewise, Π_{a_2} and Π_{b_2} are exchanged. Motivated by the one-point swap above, we create the two-point swap operator by combining two constrained one-point swaps into a single operator. We hypothesize that a two-point swap can produce a higher-quality neighbor with one local change more efficiently than producing two neighbors by a sequence of two constrained one-point changes. In particular, given a budgeted number of candidate solutions for local search, the two-point swap operator can perform a more efficient local search for finding high-quality solutions. Figure 5 shows an example of a two-point swap for a selected individual and a produced neighbor.

3) Constrained One-Block Swap: This operator is based on the concept of a block, i.e., consecutive points (service indexes) in a permutation. In this swap, two blocks are built from two randomly generated starting points, Π_a before | and Π_b after |, respectively. After the swap, the produced neighbor inherits two parts of the original permutation. Figure 6 shows an example of a constrained one-block swap for a permutation, where one block runs from the starting position StartPos1 to the last position of the used services, and the other block runs from the starting position StartPos2 to the last index.

4) Constrained Layer-Based One-Point Swap: a variant of the constrained one-point swap in which a produced neighbor must additionally preserve the layer order of the used part of the permutation (see the analysis in Sect. 5.4).
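The first three operators can be sketched compactly in Python; the boundary argument marks the position of | (i.e., the number of used services), and the one-block variant below is the tail-block form just described. This is an illustrative sketch, not the authors' code.

```python
# Sketch of the constrained swap operators on a permutation
# (Pi_0..Pi_t | Pi_{t+1}..Pi_{n-1}); 'boundary' = number of used services.
import random

def one_point_swap(perm, boundary):
    """Swap one used index (before |) with one unused index (after |)."""
    p = perm[:]
    a = random.randrange(0, boundary)           # 0 <= a <= t
    b = random.randrange(boundary, len(p))      # t+1 <= b <= n-1
    p[a], p[b] = p[b], p[a]
    return p

def two_point_swap(perm, boundary):
    """Two constrained one-point swaps combined into a single move."""
    p = perm[:]
    a1, a2 = random.sample(range(0, boundary), 2)
    b1, b2 = random.sample(range(boundary, len(p)), 2)
    p[a1], p[b1] = p[b1], p[a1]
    p[a2], p[b2] = p[b2], p[a2]
    return p

def one_block_swap(perm, boundary):
    """Swap the tail block of the used part with the tail block of the
    unused part; the neighbor inherits two parts of the original."""
    start1 = random.randrange(0, boundary)          # StartPos1 (used part)
    start2 = random.randrange(boundary, len(perm))  # StartPos2 (unused part)
    return (perm[:start1] + perm[start2:] +
            perm[boundary:start2] + perm[start1:boundary])

perm = [1, 2, 3, 4, 0]   # services 1, 2, 3 are used; '|' sits at boundary=3
print(one_point_swap(perm, 3), two_point_swap(perm, 3), one_block_swap(perm, 3))
```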
EXPERIMENTS

We conduct experiments to evaluate the performance of our memetic EDA-based approaches, i.e., memetic EDA with constrained one-point swap (henceforth referred to as MEEDA-OP), memetic EDA with constrained two-point swap (MEEDA-TP), memetic EDA with constrained layer-based one-point swap (MEEDA-LOP), and memetic EDA with constrained one-block swap (MEEDA-OB). These memetic EDA-based approaches are compared to state-of-the-art EC-based methods recently proposed to solve the same or similar problems: a PSO-based approach [10] (henceforth referred to as PSO), a GA-based approach (GA), a memetic GA-based approach [9] (MEGA), and an EDA-based approach [12] (NHM-EDA). Two benchmarks are used, WSC-08 [1] and WSC-09 [2], extended with QoS attributes generated from the QoS distribution of QWS [30]. These two benchmarks have already been broadly employed in service composition [5], [10], [13] for experimental evaluations. Moreover, the number of web services in the service repository is doubled to create a new benchmark (with a much bigger search space) to demonstrate that memetic EDA can maintain high performance on problems of significantly larger size. We also make this benchmark available to the public. In particular, WSC-08 contains 8 composition tasks with increasing sizes of the service repository, i.e., 316, 1116, 1216, 2082, 2180, 4396, 8226, and 16238 services, and WSC-09 contains 5 composition tasks with increasing sizes of the service repository, i.e., 1144, 8258, 16276, 16602, and 30422 services, respectively.

The population size is set to 200, the number of generations to 100, and b_ratio to 0.0002. The size of SelectedIndiSet is 6, and the number of neighbors n_nb explored by the local search operators for each individual in SelectedIndiSet is 20. For all the competing methods, we strictly follow the settings in their respective papers. In GA, the crossover rate is set to 0.95 and the mutation rate to 0.05. In MEGA, the crossover rate is set to 0.95 and the local search rate to 0.05. We run each experiment with 30 independent repetitions. Following existing works [10], [11], [12], the weights of the fitness function in Eq. (5) are simply configured to balance QoSM and QoS. In particular, we set both w_1 and w_2 to 0.25, and w_3, w_4, w_5 and w_6 all to 0.125. Additional experiments show that all our methods work consistently well under different weight settings. The parameter p of type_link is determined by the preference of users and is recommended to be 0.75 for the plugin match according to [39].

Comparison of the Fitness

We employ the independent-samples T-test with a significance level of 5% to verify the observed differences in performance with respect to fitness value and execution time. In particular, we use pairwise comparisons to compare all competing approaches; the top performers are then identified, and their values are highlighted in green in Table 2. Note that methods that consistently find the best-known solutions over 30 runs with 0 standard deviation are also marked as top performers. The pairwise comparison results for fitness are summarized in Table 3, where win/draw/loss shows the score of one method compared to all the others, i.e., the frequency with which this method outperforms, equals, or is outperformed by the competing method. The same testing and comparison methods are also used in Sect. 5.2.
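For reproducibility, the significance testing described above can be sketched as follows; the fitness arrays are made-up placeholders (a real comparison would use the 30 recorded fitness values per method), and we assume SciPy's ttest_ind, which performs the independent-samples t-test.

```python
# Sketch of the pairwise significance test used in Sect. 5: an
# independent-samples t-test at the 5% level over 30 repetitions.
# The fitness lists below are made-up placeholders.
from scipy import stats

meeda_lop = [0.912, 0.908, 0.915, 0.910]   # ... 30 fitness values per method
mega      = [0.901, 0.899, 0.905, 0.902]

t, p = stats.ttest_ind(meeda_lop, mega)
if p < 0.05:
    print("difference is significant at the 5% level")
```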
One objective of the experiments is to evaluate the effectiveness of the proposed memetic EDA-based approaches compared to NHM-EDA [12], PSO [10], GA and MEGA [9]. Table 2 shows the mean fitness values and standard deviations over 30 repetitions, and the pairwise comparison results for fitness are summarized in Table 3. From Table 2 and Table 3, we observe some interesting behaviors of these approaches in finding high-quality solutions. Based on these observations, we make the following analysis and draw some conclusions:

Firstly, regarding the two baseline methods, PSO and GA: all EDA-based approaches (with and without local search) consistently outperform PSO, but only the memetic EDA-based approaches outperform GA. MEGA [9] achieves results very comparable to all our memetic EDA-based methods; however, MEEDA-LOP achieves the best performance. As shown in Table 3, MEEDA-LOP only loses 1 out of 13 composition tasks over WSC-08 and WSC-09. Furthermore, MEEDA-LOP achieves extremely stable performance, with 0 standard deviation in most runs.

In addition, MEEDA-OP, MEEDA-TP, MEEDA-OB, and MEEDA-LOP significantly outperform NHM-EDA [12]. This observation corresponds well with our expectation that the exploitation ability of EDA can be enhanced by hybridizing it with local search. We can see that all memetic EDA-based approaches reach a better balance of exploration and exploitation.

Furthermore, among the four memetic EDA-based approaches, MEEDA-OB is the worst, while MEEDA-OP and MEEDA-TP are very comparable to each other. This demonstrates that block-based neighborhoods are less suitable for service composition problems, because swapping building blocks can easily ruin the learned distribution of promising solutions.

Lastly, MEEDA-LOP is the best performer. This observation corresponds well with our assumption that using layer-based information can further improve the effectiveness of the one-point swap. MEEDA-LOP applies the local search operator to a much smaller, but more useful, set of services than that considered in MEEDA-OP. In summary, we rank all the competing approaches by effectiveness in descending order: MEEDA-LOP > MEGA > MEEDA-TP = MEEDA-OP > MEEDA-OB > GA > EDA > PSO.

Comparison of the Execution Time

The second objective of our experiment is to study the efficiency of all the proposed EDA-based approaches compared to EDA [12], PSO [10], GA and MEGA [9]. Table 4 shows the mean execution times and standard deviations over 30 repetitions, and the pairwise comparison results for the execution time are summarized in Table 5. From these two tables, we make the following analysis and conclusions about the execution time of these approaches:

First, MEEDA-LOP consistently requires less execution time than the other approaches, as can be observed from the highlighted execution times in Table 4. Remarkably, the local search in MEEDA-LOP, based on layers and the constrained one-point swap, requires less computation time than that of MEEDA-OP. This significant improvement is mainly due to two techniques in MEEDA-LOP. The first is the archive technique, which carries half a population of elite individuals over to the next generation and thereby significantly reduces the overall computation time for decoding and evaluating the reserved individuals. The second is the layer-based information, which improves the effectiveness of the one-point swap, resulting in a more accurate and reliable NHM being learned. Consequently, useful services are more likely to be placed at the front of the permutation, which accelerates the decoding process.

Second, in contrast, MEGA requires the highest execution time, because under the random selection scheme every candidate solution in MEGA has an opportunity to undergo local search, and MEGA also exhaustively searches the whole neighborhood based on one position. These results confirm that the combination of the random selection scheme and the exhaustive local search strategy in MEGA is less effective and more time-consuming than our statistics scheme and stochastic local search operators. Lastly, MEEDA-OB is also very computation-intensive among the memetic EDA-based approaches.
This is because the one-block swap hinders accurate distributions from being learned: its local improvements are less effective, so the services required for a composition are less likely to be placed at the front of the service queue. Building the blocks also consumes extra time in MEEDA-OB. In summary, we rank all the competing approaches by execution time in ascending order: MEEDA-LOP > MEEDA-OP > MEEDA-TP > PSO > GA > MEEDA-OB > MEGA.

Comparison of the Convergence Rate

The third objective of our experiment is to study the convergence rate of all the approaches over 30 independent runs. We use WSC08-3 and WSC09-2 as two examples to illustrate the performance of all the compared methods. Because MEGA requires much more execution time, we set different execution-time scales for the two tasks WSC08-3 and WSC09-2 to make the differences easy to observe. First, we observe a significant increase in the fitness value towards the optimum for all the approaches excluding MEGA. These approaches eventually reach different levels of plateaus. Given the same budget of execution time, all memetic EDA-based methods converge significantly faster and require much less time than the baseline PSO over all the composition tasks. Second, MEGA suffers from a scalability issue when the size of the service repository is doubled in our new benchmark. The complexity of its local search strongly depends on n, i.e., the dimension of each permutation. Therefore, MEGA does not converge at all within the amount of execution time required by the other approaches. Lastly, MEEDA-LOP is consistently ranked as a top performer among all the competing methods. The convergence rates of MEEDA-OP and MEEDA-TP present very similar patterns, while MEEDA-OB converges more slowly than the others but eventually reaches results comparable to MEEDA-OP and MEEDA-TP.

Comparison of the Local Search Operators

We investigate how often the mean fitness of the neighbors exceeds the fitness of their original permutation in MEEDA-OP, MEEDA-TP, MEEDA-LOP, and MEEDA-OB, to demonstrate which swap-based local search operator is more likely to produce better solutions. We use the composition task WSC08-03 as an example: Fig. 9 shows the percentage of better neighbors produced by our four memetic EDA-based approaches along the generations over 30 runs. The results show that MEEDA-OB and MEEDA-TP are less likely to produce better solutions, while MEEDA-OP and MEEDA-LOP are very comparable to each other, with MEEDA-LOP achieving slightly higher percentages of better mean fitness.

We further analyze the differences between the layer-based constrained one-point swap and the constrained one-point swap using the permutation in Figure 10. Figure 10 exhibits an example of two neighbors produced from a permutation using constrained one-point swaps without considering layer information. In the example, one identical solution can be decoded from both the given permutation and the two produced neighbors, resulting in no local exploitation. In contrast, the illustrated swaps do not qualify as layer-based constrained one-point swaps, where any produced neighbor must strictly follow the layer order on the left-hand side of the permutation. In the example, the given permutation is highlighted with two layers (i.e., L_1 and L_2) in ascending order; in particular, S_1, S_2 ∈ L_1 and S_3 ∈ L_2. When the constrained one-point swap is performed, S_3 in the given permutation is replaced with S_4 or S_0 in the produced neighbor 1 and neighbor 2, respectively. However, the layer order is destroyed in the produced neighbors, because S_4 ∈ L_1 and S_0 ∈ L_1. If the layer-based one-point swap were applied to the given permutation instead, it would prevent these two neighbors from being produced: in general, all produced neighbors must keep all the ordered layers of the given permutation.
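The layer-order constraint behind this operator can be sketched as a simple validity check: a candidate neighbor is accepted only if the used part of its permutation keeps the per-position layer pattern of the original. The toy layer assignment below mirrors the Fig. 10 example; this is an illustrative sketch, not the authors' implementation.

```python
# Sketch of the layer-order constraint behind the layer-based
# constrained one-point swap: a candidate neighbor is rejected if it
# breaks the per-position layer pattern of the used part (before '|').

def keeps_layer_pattern(original_used, new_used, layer_of):
    """True if the neighbor preserves the per-position layer order of
    the original used part of the permutation."""
    return ([layer_of[s] for s in original_used] ==
            [layer_of[s] for s in new_used])

layer_of = {0: 1, 1: 1, 2: 1, 3: 2, 4: 1}    # toy layers: only S3 lies in L2
original = [1, 2, 3]                          # layer pattern [1, 1, 2]
print(keeps_layer_pattern(original, [1, 2, 4], layer_of))  # False: S4 is in L1
print(keeps_layer_pattern(original, [2, 1, 3], layer_of))  # True: pattern kept
```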
CONCLUSION

In this paper, we propose effective and efficient memetic EDA-based approaches to fully automated service composition. The success of this memetic approach principally relies on the local search, where several ideas are jointly employed. In particular, we proposed several neighborhood structures through different local search operators, which integrate naturally with our permutation-based representation. Besides that, a uniform distribution scheme and a stochastic strategy are jointly utilized for selecting individuals and applying local search. The experiments show that one of our proposed approaches, MEEDA-LOP, achieves significantly better effectiveness and efficiency than state-of-the-art EC-based approaches and the other memetic EDA-based approaches proposed in this paper. Future work can investigate variable neighborhoods that combine more than one local search operator in a single evolutionary process, and memetic EDA for handling multi-objective service composition problems.
9,373
1906.07900
2949728259
Comprehensive quality-aware automated semantic web service composition is an NP-hard problem, where service composition workflows are unknown, and comprehensive quality, i.e., Quality of Service (QoS) and Quality of Semantic Matchmaking (QoSM), is simultaneously optimized. The objective of this problem is to find a solution with optimized or near-optimized overall QoS and QoSM within polynomial time over a service request. In this paper, we propose novel memetic EDA-based approaches to tackle this problem. The proposed method investigates the effectiveness of several neighborhood structures of composite services by proposing domain-dependent local search operators. Apart from that, a joint strategy for the local search procedure is proposed, integrated with a modified EDA, to reduce the overall computation time of our memetic approach. To better demonstrate the effectiveness and scalability of our approach, we create a more challenging, augmented version of the service composition benchmark based on WSC-08 [1] and WSC-09 [2]. Experimental results on this benchmark show that one of our proposed memetic EDA-based approaches (i.e., MEEDA-LOP) significantly outperforms existing state-of-the-art algorithms.
Recently, EDA has been used as a technique to tackle permutation-based optimization problems @cite_9 . In particular, a distribution model is learned iteratively for each population, and new offspring are then generated based on the learned model. Moreover, domain-dependent local search operators are often introduced to enhance the performance of EDA. For example, in an EDA-based flow-shop scheduling problem, a probability matrix related to the job-priority permutation of a solution is learned, and different job-based local search operators were proposed to enhance the exploitation ability of EDA @cite_4 . An Edge Histogram Matrix is applied to uncertain capacitated arc routing problems and is learned from solutions represented by a set of routes @cite_30 . To make local improvements, different move operators, such as single insertion and swap, are also proposed.
{ "abstract": [ "", "Estimation of distribution algorithms (EDAs) are a set of algorithms that belong to the field of Evolutionary Computation. Characterized by the use of probabilistic models to represent the solutions and the dependencies between the variables of the problem, these algorithms have been applied to a wide set of academic and real-world optimization problems, achieving competitive results in most scenarios. Nevertheless, there are some optimization problems, whose solutions can be naturally represented as permutations, for which EDAs have not been extensively developed. Although some work has been carried out in this direction, most of the approaches are adaptations of EDAs designed for problems based on integer or real domains, and only a few algorithms have been specifically designed to deal with permutation-based problems. In order to set the basis for a development of EDAs in permutation-based problems similar to that which occurred in other optimization fields (integer and real-value problems), in this paper we carry out a thorough review of state-of-the-art EDAs applied to permutation-based problems. Furthermore, we provide some ideas on probabilistic modeling over permutation spaces that could inspire the researchers of EDAs to design new approaches for these kinds of problems.", "In this paper, an estimation of distribution algorithm (EDA)-based memetic algorithm (MA) is proposed for solving the distributed assembly permutation flow-shop scheduling problem (DAPFSP) with the objective to minimize the maximum completion time. A novel bi-vector-based method is proposed to represent a solution for the DAPFSP. In the searching phase of the EDA-based MA (EDAMA), the EDA-based exploration and the local-search-based exploitation are incorporated within the MA framework. For the EDA-based exploration phase, a probability model is built to describe the probability distribution of superior solutions. Besides, a novel selective-enhancing sampling mechanism is proposed for generating new solutions by sampling the probability model. For the local-search-based exploitation phase, the critical path of the DAPFSP is analyzed to avoid invalid searching operators. Based on the analysis, a critical-path-based local search strategy is proposed to further improve the potential solutions obtained in the EDA-based searching phase. Moreover, the effect of parameter setting is investigated based on the Taguchi method of design-of-experiment. Suitable parameter values are suggested for instances with different scales. Finally, numerical simulations based on 1710 benchmark instances are carried out. The experimental results and comparisons with existing algorithms show the effectiveness of the EDAMA in solving the DAPFSP. In addition, the best-known solutions of 181 instances are updated by the EDAMA." ], "cite_N": [ "@cite_30", "@cite_9", "@cite_4" ], "mid": [ "", "2037166173", "2088303535" ] }
Memetic EDA-Based Approaches to Comprehensive Quality-Aware Automated Semantic Web Service Composition
SERVICE Oriented Architecture (SOA) has been contributing to the reuse of software components [3]. Web services are one of the most successful implementations of SOA, providing services as "modular, self-describing, self-contained applications that are available on the Internet" [4]. Often, users' requirements cannot be satisfied by a single existing web service. Web service composition aims to loosely couple a set of web services into a value-added composite service (i.e., a solution of service composition) that accommodates users' complex requirements. These requirements concern functional quality (i.e., quality of semantic matchmaking, QoSM) and non-functional quality (i.e., Quality of Service, QoS), giving rise to semantic web service composition and QoS-aware web service composition, which aim to optimize the QoSM and QoS of service composition solutions, respectively. Many researchers have been working on these optimization problems in web service composition [5], [6], [7], [8], [9], [10], [11], [12], [13].

Existing works that study the above problems are classified as semi-automated and fully automated web service composition [14], under two different assumptions. The first assumes that users know an abstract service composition workflow and that all the composite services produced by the composition system must strictly obey the given workflow. However, this assumption is not always valid, since the workflow may not be provided, or may not even be known, by users. The second group of works does not rely on any existing workflow. Instead, a composite service is constructed from scratch by selecting and connecting multiple atomic services obtained from the service repository [14]; this construction process can therefore end up with different workflows. Clearly, compared to semi-automated web service composition, fully automated web service composition opens new opportunities to further improve QoS and QoSM, owing to the different workflows that can be constructed automatically. Nevertheless, the difficulty of the composition task also increases.

AI planning and Evolutionary Computation (EC) are two of the most widely used techniques for semi-automated and fully automated web service composition [5], [7], [10], [13], [15], [16], [17]. AI planning techniques focus on creating valid composite services, where functional correctness is always ensured by gradually constructed workflows. However, these approaches do not optimize the QoS or QoSM of the produced solutions [18]. EC techniques have been widely used to solve service composition problems that aim to optimize either one or both of QoSM and QoS, and they are potentially more useful in practice, as they can efficiently find "good enough" composite solutions. Important approaches [5], [6], [7], [8], [9], [10], [11], [12], [13] based on Genetic Algorithms (GA) [19], Genetic Programming (GP) [20], Particle Swarm Optimization (PSO) [21], and Estimation of Distribution Algorithms (EDA) [22] have been widely investigated in the literature. To effectively search for good solutions, EC techniques often employ useful information distilled from promising solutions to produce new offspring. This information can be used either implicitly or explicitly. Conventional EC techniques, such as GA and GP, fall into the implicit camp, producing new solutions by recombining solutions evolved previously [5], [7], [13].
In contrast, one EC technique that has achieved prominent success through the explicit use of information is the Estimation of Distribution Algorithm (EDA) [23]. In EDA, information about promising solutions evolved previously is captured compactly in the form of probability models. EDA has been successfully utilized for semi-automated service composition [6], [24], but these methods cannot support fully automated service composition. We recently proposed a new EDA-based approach for fully automated web service composition through reliable and accurate learning of a probability model that encodes the distribution of promising solutions [12], i.e., a distribution model.

EDA stresses global exploration rather than local exploitation [25]. This is because the distribution model aims to explore more promising regions of the entire solution space, without attempting to improve the quality of any specific solutions evolved previously. However, the optimization performance can often be improved directly through local modifications to promising solutions. By restricting the target region for local search and avoiding most of the randomness involved in sampling directly from the distribution model, this can potentially expedite the search for optimal solutions. Therefore, to improve its competence in finding more effective solutions, a natural idea is to enhance EDA with local search, yielding a memetic EDA. Memetic EDA has been successfully applied to many optimization problems with local search operators [26], [25], such as arc routing and assembly flow-shop scheduling problems.

On the one hand, although memetic EDA has been successfully applied to many applications, those memetic approaches are inappropriate for web service composition, as their local search operators are only applicable to domain-specific or problem-specific solution representations [25], [27]. On the other hand, despite the recent success of EDA-based service composition, the effectiveness of this approach can be enhanced by introducing memetic EDA. Several challenges remain to be addressed in developing a memetic EDA approach to service composition, as follows:

First, a composite service is commonly represented as a DAG, and exploring the neighborhood of a DAG, especially a large one, is computationally infeasible [28]. Note that the neighborhood in question is structured by local search operators on the search space, where neighbor solutions are generated iteratively from a given candidate solution. Therefore, researchers [9], [29] often indirectly define the neighborhood of a composite service represented in the form of a permutation, which can be converted to a DAG through a separate decoding process. Often, so-called "swap" operators produce neighbors by swapping two random elements in a permutation. Consequently, a neighborhood is defined as the collection of permutations obtainable through a "swap" applied to a given permutation. However, such a neighborhood often contains a large proportion of neighboring permutations of inferior quality. For effective local search, the neighborhood must be refined to exclude most of the clearly unwise swapping choices by exploiting domain-specific knowledge.

Second, it is very challenging to determine which candidate solutions should be selected for local search in memetic algorithms, as the selection method has a significant impact on the effectiveness and efficiency of memetic EDA.
Should an equal chance be given to all the candidate solutions, or should only elite solutions be considered for local search? Moreover, what counts as an elite solution, and how many of them should be modified locally? The answers to these questions often depend on many factors, such as the EC algorithm and the domain problem. Therefore, it is challenging to determine one effective selection strategy for a memetic EDA-based approach to service composition.

Third, a traditional strategy that exhaustively explores the whole neighboring space of composite services can incur high computation cost without any guarantee of improving solution quality. For example, for a permutation-based representation, if a simple swap operator is utilized for exploring the neighborhood, then the dimension of the permutation determines the computational complexity. In the context of service composition, the dimension of such a permutation is usually equal to the size of the service repository. As the neighborhood size becomes extremely large when many services are to be considered during the service composition process, this strategy is infeasible for practical use.

Fourth, in EDA, although the probability distribution model is adjusted to trace promising searching areas throughout the generations, a proportion of promising solutions (i.e., permutations) are increasingly likely to be sampled repeatedly as the distribution model converges over the generations. These repeatedly sampled solutions are often favored by users, since they are candidate solutions of high quality. In the EDA-based approach to service composition, sampled permutation-based solutions are very costly, as they require repeated computation time for decoding and evaluation.

To address the challenges above, we propose a memetic EDA-based approach that achieves substantially better effectiveness and efficiency. These outstanding performances are observed by comparing it with some recently proposed web service composition approaches, such as an EDA-based approach [12], a PSO-based approach [10], and GA- and memetic GA-based approaches [9]. In particular, we conduct an empirical, experimental study on the effectiveness of the different neighborhoods structured by different local search operators. The contributions of this paper are listed below; the first contribution addresses the first challenge discussed above, and the second contribution addresses the remaining challenges.

1) To perform an effective local search on composite services, we first propose several neighborhood structures for candidate solutions. These neighborhoods are created by developing several novel domain-dependent local search operators, based on constructing and swapping effective building blocks of composite services for local improvements. Subsequently, we develop an effective memetic EDA-based approach based on our previous work [12], with natural integration of those local search operators.

2) To significantly reduce the computation time of our proposed memetic EDA-based approach, an integrated local search procedure is proposed along with a modification of the standard EDA. To reduce the computation wasted on repetitive sampling and evaluation, we utilize an archiving technique to avoid sampling solutions repeatedly; this technique is prevalent and straightforward to use. Besides that, the local search procedure employs an effective joint strategy for efficiently finding better solutions.
This strategy jointly considers a fitness uniform distribution scheme and stochastic local search with our proposed local search operators.

3) To demonstrate the performance of our memetic EDA-based approach, we create a more challenging, augmented version of the service composition benchmark based on WSC-08 [1] and WSC-09 [2]. In particular, the new benchmark inherits the functionalities provided by the services in the benchmark datasets WSC-08 and WSC-09 and the QoS attributes of the web services in the benchmark dataset QWS [30]. Moreover, the number of web services in the service repository is doubled, producing a benchmark with a much bigger search space, to demonstrate that memetic EDA can maintain high performance on problems of significantly larger size. This benchmark, as well as the code of our memetic EDA-based approach, has been made freely available online 1. We experimentally compare our memetic EDA-based approach on the new benchmark with some state-of-the-art methods that have been recently proposed to solve the same or a similar service composition problem. Our experimental results illustrate that our method achieves cutting-edge performance.

1. The two augmented benchmarks for automated web service composition are available from https://github.com/chenwangnida/Dataset, and the code of our memetic EDA-based approach is available from https://github.com/chenwangnida/MENHBSA4SWSC.

Literature on EC-based fully automated web service composition

Automated web service composition aims to loosely couple web services to fulfill a service request without strictly obeying a pre-given abstract workflow. Instead, composition workflows are gradually built up while their component services are selected. Existing works in fully automated web service composition can be categorized into two approaches: direct approaches and indirect approaches [31]. Direct approaches represent composition solutions explicitly, in a representation that displays the actual execution flow of the composite service; indirect approaches often represent composite services implicitly as permutations, which require a decoding process to build the actual execution workflow.

In the first category, tree- and graph-based representations are widely used to represent service composition solutions directly. A graph-based evolutionary process is introduced in [32] to directly evolve DAG-based service composition solutions, applying domain-dependent crossover and mutation operators with repairing methods. GP is utilized for searching for optimal solutions represented as trees: [7] proposes a context-free grammar for randomly initializing tree-based service composition solutions with correct structures of composite services. In contrast, [13] initializes tree-based service composition solutions entirely at random, but develops adaptive crossover and mutation rates according to the diversity of the population to accelerate convergence. Both approaches [7], [13] utilize a penalization method to filter out incorrect solutions when evaluating the QoS of candidate solutions. To achieve higher performance, [5], [8] utilize a greedy search algorithm for creating correct DAG-based composition workflows, which are mapped to tree-based ones with different methods. During the evolutionary process, the correctness of the solutions is ensured by domain-dependent crossover and mutation.
However, the mapped tree-based representations suffer from a scalability issue, since the mapping methods produce many replicas of subtrees. To overcome this issue, [11] proposes a tree-like representation in which replicas of subtrees are handled by removing them and inserting edges from the root of each replica to the roots of the copies.

In the second category, service composition solutions are represented as permutations, which are then decoded into solutions represented as DAGs [10], [31], [33]. PSO is utilized to find an optimized queue of services (i.e., a permutation), which can be decoded into a corresponding DAG-based composite service [33]. [10] extends [33] to jointly optimize QoSM and QoS, decoding a weighted DAG whose edge weights correspond to the matchmaking quality between services. These two PSO-based approaches rely on PSO to determine the weights of each particle's position (corresponding to a service) to form an ordered service queue. Optimizing QoSM and QoS simultaneously is more challenging than optimizing QoS alone, because the search space is significantly larger, demanding more effective and efficient searching techniques. Apart from that, it has been suggested that utilizing an indirect representation often contributes to higher performance than a direct representation [31], because the search space is not unwittingly restricted by unconstrained random initialization of solutions and operators. In summary, EC techniques have shown promise for fully automated web service composition, and indirect approaches have been indicated to be more effective. Therefore, EC techniques with indirect representations are the focus of this paper for solving the service composition problem.

Literature on memetic EC-based approaches and EDA

Memetic algorithms have drawn growing attention from researchers in recent years and have achieved significant success in many applications [34]. By introducing local search, the performance of EC techniques can be improved. In the domain of service composition, to overcome the prematurity and proneness to local optima of GP, Tabu search is combined with GP to solve QoS-aware data-intensive web service composition [35]. [9] proposed an indirect memetic approach for QoS-aware web service composition, where a domain-dependent crossover operator is proposed to produce candidate solutions, and an exhaustive local search is applied to composite solutions represented as permutations. However, the produced neighbors are likely to be decoded into the same composite solution, so the effectiveness of this local search operator demands further improvement.

Recently, EDA has been used as a technique to tackle permutation-based optimization problems [23]. In particular, a distribution model is learned iteratively for each population, and new offspring are then generated based on the learned model. Moreover, domain-dependent local search operators are often introduced to enhance the performance of EDA. For example, in an EDA-based flow-shop scheduling problem, a probability matrix related to the job-priority permutation of a solution is learned, and different job-based local search operators were proposed to enhance the exploitation ability of EDA [25]. An Edge Histogram Matrix is applied to uncertain capacitated arc routing problems and is learned from solutions represented by a set of routes [27].
To make local improvements, different move operators, such as single insertion and swap, are also proposed. The use of EDA has only been investigated for semiautomated web service composition [6], [24], [36]. However, we recently proposed an EDA-based approach for fully automated web service composition, where candidate solutions are represented as permutations over a given service repository. The success of the proposed method strongly depends on the distribution model and the way of learning the distribution model. We employ Node Histogram Matrix (NHM) to learn the distribution of promising solutions in one population, Node Histogram-Based Sampling Algorithm (NHBSA) [22] is empoloyed to produce candidate solutions. Although we started an initial study for fully automated service composition, it remains an opportunity to improve its performance further. EDA is good at global exploration, and local search operators are motivated to be introduced in EDA to enhance its capability in exploitation. In summary, on the one hand, memetic EDA-based approaches have been investigated in many problems, other than fully automated service composition, achieving promising results. On the other hand, notwithstanding success achieved in our initial investigation in EDA-based fully automated service composition, the performance of this EDA-based approach can be further improved by combining it with local search. SEMANTIC WEB SERVICE COMPOSITION PROB-LEM A semantic web service (service, for short) is considered as a tuple S = (I S , O S , QoS S ) where I S is a set of service inputs that are consumed by S, O S is a set of service outputs that are produced by S, and QoS S = {t S , c S , r S , a S } is a set of non-functional attributes of S. The inputs in I S and outputs in O S are parameters modeled through concepts in a domain-specific ontology O. The attributes t S , c S , r S , a S refer to the response time, cost, reliability, and availability of service S, respectively, which are four commonly used QoS attributes [37]. A service repository SR is a finite collection of services supported by a common ontology O. A composition task (also called service request) over a given SR is a tuple T = (I T , O T ) where I T is a set of task inputs, and O T is a set of task outputs. The inputs in I T and outputs in O T are parameters that are semantically described by concepts in the ontology O. Two special atomic services Start = (∅, I T , ∅) and End = (O T , ∅, ∅) are always included in SR to account for the input and output of a given composition task T . We use matchmaking types to describe the level of a match between outputs and inputs [38]. For concepts a, b in O the matchmaking returns exact if a and b are equivalent (a ≡ b), plugin if a is a sub-concept of b (a b), subsume if a is a super-concept of b (a b) , and f ail if none of previous matchmaking types is returned. In this paper we are only interested in exact and plugin matches for robust compositions, see [39]. As argued in [39] plugin matches are less preferable than exact matches due to the overheads associated with data processing. For plugin matches, the semantic similarity of concepts is suggested to be considered when comparing different plugin matches. A robust causal link [40] is a link between two matched services S and S , denoted as S → S , if an output a (a ∈ O S ) of S serves as the input b (b ∈ O S ) of S satisfying either a ≡ b or a b. 
For concepts a, b in O, the semantic similarity sim(a, b) is calculated based on the edge counting method in a taxonomy like WordNet [41]. The advantages of this method are its simple calculation and good semantic measurement [41]. The matchmaking type and semantic similarity of a robust causal link are therefore defined as follows:

$$type_{link} = \begin{cases} 1 & \text{if } a \equiv b \text{ (exact match)} \\ p & \text{if } a \sqsubseteq b \text{ (plugin match)} \end{cases} \qquad (1)$$

$$sim_{link} = sim(a, b) = \frac{2 N_c}{N_a + N_b} \qquad (2)$$

with a suitable parameter p, 0 < p < 1, and with N_a, N_b, and N_c measuring the distances from concept a, concept b, and the closest common ancestor c of a and b to the top concept of the ontology O, respectively. If more than one pair of matched output and input exists from service S to service S′, type_{link} and sim_{link} take on their average values. The QoSM of a composite service is obtained by aggregating over all m robust causal links as follows:

$$MT = \prod_{j=1}^{m} type_{link_j} \qquad (3)$$

$$SIM = \frac{1}{m} \sum_{j=1}^{m} sim_{link_j} \qquad (4)$$

Formal expressions as in [42] are used to represent service compositions. The constructors •, ∥, + and * are used to denote sequential composition, parallel composition, choice, and iteration, respectively. The set of composite service expressions is the smallest collection SC that contains all atomic services and is closed under sequential composition, parallel composition, choice, and iteration. That is, whenever C_0, C_1, ..., C_d are in SC, then •(C_1, ..., C_d), ∥(C_1, ..., C_d), +(C_1, ..., C_d), and *C_0 are in SC, too. Let C be a composite service expression. If C denotes an atomic service S, then its QoS is given by QoS_S. Otherwise, the QoS of C can be obtained inductively as summarized in Table 1:

Table 1. QoS aggregation rules for composite service expressions
C                   r_C                      a_C                      ct_C                       t_C
•(C_1, ..., C_d)    ∏_{k=1}^d r_{C_k}        ∏_{k=1}^d a_{C_k}        ∑_{k=1}^d ct_{C_k}         ∑_{k=1}^d t_{C_k}
∥(C_1, ..., C_d)    ∏_{k=1}^d r_{C_k}        ∏_{k=1}^d a_{C_k}        ∑_{k=1}^d ct_{C_k}         MAX{ t_{C_k} | k ∈ {1, ..., d} }
+(C_1, ..., C_d)    ∑_{k=1}^d p_k r_{C_k}    ∑_{k=1}^d p_k a_{C_k}    ∑_{k=1}^d p_k ct_{C_k}     ∑_{k=1}^d p_k t_{C_k}
*C_0                r_{C_0}^ℓ                a_{C_0}^ℓ                ℓ · ct_{C_0}               ℓ · t_{C_0}

Herein, p_1, ..., p_d with ∑_{k=1}^d p_k = 1 denote the probabilities of the different options of the choice +, while ℓ denotes the average number of iterations. Therefore, the QoS of a service composition solution, i.e., availability (A), reliability (R), execution time (T), and cost (CT), can be obtained by aggregating a_C, r_C, t_C, and ct_C as in Table 1. In this paper, we mainly focus on two constructors, sequence • and parallel ∥, as in most automated service composition works [5], [8], [10], [11], [32], [33], where service composition solutions are represented as a Directed Acyclic Graph (DAG). We can easily calculate the QoS of a composite service represented as a DAG [10] according to Table 1. When multiple quality criteria are involved in decision making, the fitness of a solution is defined as a weighted sum of all individual criteria in Eq. (5), assuming the preference of each quality criterion, based on its relative importance, is provided by the user [43]:

$$Fitness(C) = w_1 \hat{MT} + w_2 \hat{SIM} + w_3 \hat{A} + w_4 \hat{R} + w_5 (1 - \hat{T}) + w_6 (1 - \hat{CT}) \qquad (5)$$

with ∑_{k=1}^{6} w_k = 1. This objective function defines a comprehensive quality model for service composition, and the weights can be adjusted according to the user's preferences. The hatted values are normalized into the range from 0 to 1 using Eq. (6). To simplify the presentation, we also use the notation (Q_1, Q_2, Q_3, Q_4, Q_5, Q_6) = (MT, SIM, A, R, T, CT). Q_1 and Q_2 have minimum value 0 and maximum value 1.
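To make Eqs. (1)–(4) and the aggregation rules of Table 1 concrete, the following sketch computes the quality of a robust causal link and aggregates QoS over the sequence and parallel constructors used in this paper. All numeric values are invented for the example, and p = 0.75 is the value recommended for plugin matches in [39].

from math import prod

P_PLUGIN = 0.75  # parameter p for plugin matches, as recommended in [39]

def link_quality(a_equiv_b: bool, n_a: int, n_b: int, n_c: int):
    """Eqs. (1)-(2): matchmaking type and semantic similarity of a link.
    n_a, n_b, n_c are the ontology depths of a, b, and their closest
    common ancestor c."""
    type_link = 1.0 if a_equiv_b else P_PLUGIN
    sim_link = 2 * n_c / (n_a + n_b)
    return type_link, sim_link

def qosm(links):
    """Eqs. (3)-(4): MT is the product of the types over all m links,
    SIM is the mean of the similarities."""
    types, sims = zip(*links)
    return prod(types), sum(sims) / len(sims)

def seq_qos(components):
    """Table 1, row for the sequence constructor; each component is a
    (r, a, ct, t) tuple."""
    rs, avails, cts, ts = zip(*components)
    return prod(rs), prod(avails), sum(cts), sum(ts)

def par_qos(components):
    """Table 1, row for the parallel constructor; as sequence, except the
    response time is the maximum over the branches."""
    rs, avails, cts, ts = zip(*components)
    return prod(rs), prod(avails), sum(cts), max(ts)

# Two services running in parallel, followed by a third one in sequence:
s1 = (0.99, 0.95, 2.0, 0.3)
s2 = (0.98, 0.97, 1.0, 0.5)
s3 = (0.97, 0.99, 3.0, 0.2)
r, a, ct, t = seq_qos([par_qos([s1, s2]), s3])

mt, sim = qosm([link_quality(True, 3, 3, 3), link_quality(False, 4, 3, 2)])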
The minimum and maximum values of Q_3, Q_4, Q_5, and Q_6 are calculated across all the relevant services (determined in Sect. 4.2) in the service repository SR using the greedy search in [5], [8]:

$$\hat{Q}_k = \begin{cases} \frac{Q_k - Q_{k,min}}{Q_{k,max} - Q_{k,min}} & \text{if } k = 1, \ldots, 4 \text{ and } Q_{k,max} - Q_{k,min} \neq 0, \\ \frac{Q_{k,max} - Q_k}{Q_{k,max} - Q_{k,min}} & \text{if } k = 5, 6 \text{ and } Q_{k,max} - Q_{k,min} \neq 0, \\ 1 & \text{otherwise.} \end{cases} \qquad (6)$$

Note that for k = 5, 6 the numerator Q_{k,max} − Q_k equals (Q_{k,max} − Q_{k,min}) − (Q_k − Q_{k,min}), so the k = 5, 6 branch of Eq. (6) coincides with the (1 − T̂) and (1 − ĈT) terms of Eq. (5) when T̂ and ĈT there denote values normalized in the same way as Q_1, ..., Q_4. The goal of comprehensive quality-aware service composition is to find a composite service expression C that maximizes the objective function in Eq. (5); C is then considered the best possible solution for a given composition task T. MEMETIC EDA-BASED APPROACH FOR SEMANTIC WEB SERVICE COMPOSITION In this section, we present our memetic EDA-based approach to fully automated semantic web service composition. We start by giving an overview of the approach. Subsequently, we discuss its essential steps: the first is to discover relevant services and service layers, see details in Sect. 4.2. The second is the permutation-based representation proposed in our previous work, see details in Sects. 4.3 and 4.4. The third is an effective joint strategy for the local search procedure, see details in Sect. 4.5. We propose several key ideas that are jointly employed to build our memetic EDA-based approach: 1) A composite service is commonly represented as a DAG, since a DAG can intuitively represent an execution flow of web services and allows efficient computation of QoS. The success of the EDA strategy strongly relies on a proper distribution model for learning the knowledge of promising solutions. Our initial study [12] represents a composite service as a unique queue of services, i.e., a permutation of atomic services, mapped from a DAG-based solution. Composite services in this permutation form contribute to a distribution model to be learned and to new permutation-based promising solutions to be sampled. Therefore, a bi-directional map between permutations and DAGs is ensured for learning and evaluation purposes. 2) To significantly decrease the computation time of the local search procedure, it is crucial to select a restricted number of suitable candidate solutions for local search. We assume that candidate solutions with close fitness values are similar in their corresponding DAG forms, so neighbors produced from these candidate solutions can be the same. Therefore, we group candidate solutions based on their fitness values according to a uniform distribution scheme, which allows candidate solutions with the largest differences, measured by single-objective fitness values, to be effectively chosen for local search. 3) It is not efficient to exhaustively explore all the neighbors in a conventional local search [9]. Instead, stochastically searching the neighboring solutions can significantly reduce the computation cost [26]. Therefore, we introduce a stochastic local search into EDA. 4) Exploring the whole neighborhood of a DAG-based composite service is usually computationally infeasible [28]. However, it is straightforward to define the neighborhood on a permutation-based representation by so-called swap operators. To develop effective swap operators, we utilize domain knowledge of service composition to create effective building blocks for these swap operators on permutation-based candidate solutions. These swap operators aim to exploit fitter neighbors effectively; that is, they are likely to make local improvements in the produced neighbors.
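A minimal sketch of the normalization in Eq. (6) and the weighted sum of Eq. (5) may help to see how the two equations fit together. The weight setting (w_1 = w_2 = 0.25, w_3 = ... = w_6 = 0.125) is the one used later in the experiments; the bounds below are invented for the example.

WEIGHTS = (0.25, 0.25, 0.125, 0.125, 0.125, 0.125)  # w1..w6, summing to 1

def normalize(q, q_min, q_max, larger_is_better):
    """Eq. (6): map a raw criterion into [0, 1], flipping T and CT so that
    1.0 is always best. Note (q_max - q)/(q_max - q_min) equals
    1 - (q - q_min)/(q_max - q_min), i.e., the (1 - T_hat) / (1 - CT_hat)
    terms of Eq. (5)."""
    if q_max == q_min:
        return 1.0
    if larger_is_better:
        return (q - q_min) / (q_max - q_min)
    return (q_max - q) / (q_max - q_min)

def fitness(mt, sim, a, r, t, ct, bounds):
    """Eq. (5) as a plain weighted sum of the normalized criteria;
    bounds[k] = (min_k, max_k, larger_is_better) for k = 0..5."""
    raw = (mt, sim, a, r, t, ct)
    normed = [normalize(q, *bounds[k]) for k, q in enumerate(raw)]
    return sum(w * q for w, q in zip(WEIGHTS, normed))

# MT and SIM are already in [0, 1]; T and CT are 'smaller is better'.
bounds = [(0, 1, True), (0, 1, True), (0.2, 1, True), (0.2, 1, True),
          (0.1, 9.0, False), (0.5, 20.0, False)]
f = fitness(0.9, 0.8, 0.95, 0.97, 1.2, 4.0, bounds)
assert 0.0 <= f <= 1.0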
An overview of memetic EDA-based algorithm for automatic service composition An overview of the memetic EDA-based approach is presented in Figure 1, consisting of the following steps: initialize the population, evaluate the population, select the superior subpopulation, learn the probability model, sample individuals, and return optimal solutions. We start by discovering all the services relevant to a given composition request T in Step 1. Meanwhile, several service layers are identified (see details in Sect. 4.2). These relevant services are used to randomly generate m composite services represented as permutations, Π_k^g, where g = 0 and k = 1, ..., m. In Step 2, these permutation-based individuals are decoded into DAG-based solutions using a forward graph-building technique [10], based on which the fitness in Eq. (5) of each individual can be calculated. In Step 3, we merge the current population P^g with an archive. The archive is initially an empty set of individuals and will later hold elite composite services. By applying Breadth-First Search (BFS) to each corresponding DAG-based solution in the merged population, we produce re-encoded permutation-based solutions Π_k^g. Then, the local search procedure is applied to a very small set of these permutations. This small permutation set is selected based on a fitness uniform selection scheme over the current population (see details in Sect. 4.5.1). For each permutation in the small set, a stochastic local search is employed to create new permutations as its neighbors, and the best neighbor is identified based on the fitness value. The permutation in the small set is then replaced with its best neighbor (see details in Sect. 4.5). The top half of the best-performing solutions are retained in P^g according to their fitness values and put into the archive as elite solutions. In Step 4, we use these elite solutions in the archive to learn an NHM^g of generation g, which produces offspring for generation g + 1 using NHBSA (see details in Sect. 4.4). Consequently, we go back to Step 2 to evaluate the fitness of the new offspring. Steps 2 to 4 are repeated until the maximum number of generations is reached. Eventually, the best solution found throughout the evolutionary process is returned. In a nutshell, we introduce a permutation-based representation derived from the common DAG-based one. In our proposed algorithm, we always switch between these two representations for better searching or evaluation purposes. Furthermore, an effective and efficient local search procedure is developed through the use of the selection scheme and the stochastic local search.
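The steps above can be condensed into the following skeleton. It is only an illustration of the control flow: the toy fitness is a stand-in for decoding a permutation and evaluating Eq. (5), and a uniform shuffle stands in for NHBSA sampling; the actual operators are described in Sects. 4.3–4.5.

import random

def evolve(n_services=8, pop_size=20, generations=10, seed=0):
    """Skeleton of Steps 1-4 of the memetic EDA-based approach."""
    rng = random.Random(seed)
    toy_fitness = lambda perm: -sum(i * s for i, s in enumerate(perm))  # placeholder for Eq. (5)
    # Step 1: random permutations over the relevant services.
    pop = [rng.sample(range(n_services), n_services) for _ in range(pop_size)]
    archive = []  # initially empty; later holds elite composite services
    for g in range(generations):
        pop = pop + archive                      # Step 3: merge with the archive
        # ... fitness-uniform selection and stochastic local search go here ...
        pop.sort(key=toy_fitness, reverse=True)  # Step 2: evaluate and rank
        archive = [p[:] for p in pop[: pop_size // 2]]  # keep the elite half
        # Step 4: learn NHM^g from the elites and sample offspring via NHBSA;
        # fresh random permutations stand in for sampling in this sketch.
        pop = [rng.sample(range(n_services), n_services) for _ in range(pop_size)]
    return max(archive, key=toy_fitness)

best = evolve()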
Relevant Services and Service Layers Discovering relevant services and service layers is an initial but crucial step of our memetic EDA-based approach. We achieve two goals at this stage: the first is to reduce the size of the service repository SR, keeping only the services relevant to the composition task T; the second is to identify the service layers of these relevant services. In particular, a group of layers is identified, and each layer contains a set of services that have the same longest distance to Start. We adopt the layer-discovering method in [44] to find relevant services and service layers, as illustrated in the following example. Fig. 3 shows an example of discovering relevant services and service layers given a service request T, where five relevant services (i.e., S_0, S_1, S_2, S_3, and S_4) and two layers (i.e., L_1 and L_2) are found. In L_1, S_0, S_1, S_2, and S_4 can be satisfied by {a, b} of T, and they have the same distance to Start (note that the distance is measured by the number of predecessors), while S_3 in L_2 requires additional inputs from other services and is associated with a longer distance to Start. A Novel Permutation-Based Representation Service composition solutions are commonly represented as Directed Acyclic Graphs (DAGs) [5], [8], [10], [11], [32], [33]. Let G = (V, E) be a DAG-based composite solution from Start to End, where nodes correspond to services and edges correspond to robust causal links. Often, V does not contain all services in SR. Many combinatorial optimization problems naturally represent solutions as permutations, which can differ from problem to problem [23]. Here we represent composite services as permutations, and we ensure a bi-directional map between permutations and DAGs. The bi-directional map is crucial for learning the distribution of promising composite solutions, because it is less reliable to learn a distribution based on permutations if different permutations are mapped to the same DAG-based composite service. Let Π = (Π_0, ..., Π_t, Π_{t+1}, ..., Π_{n−1}) be a permutation of the elements {0, ..., t, t+1, ..., n−1} such that Π_i ≠ Π_j for all i ≠ j. In particular, {0, ..., t} are the service indexes (i.e., id numbers) of the component services in the corresponding G, sorted by the longest distance from Start to each component service of G, while {t+1, ..., n−1} are the indexes of the remaining services in SR not utilized by G. We use Π_k^g to denote the k-th (out of m, where m is the population size) service composition solution, and P^g = [Π_0^g, ..., Π_k^g, ..., Π_{m−1}^g] to represent the population of solutions of generation g. An example of producing a permutation-based composite solution is as follows; Fig. 3 illustrates the process. Take the permutation [4, 1, 2, 3, 0]. This service index queue is decoded into a DAG G_0^0 representing a service composition that satisfies the composition task T. Afterwards, G_0^0 is mapped back to the permutation Π_0^0 = [1, 2, 3 | 4, 0]. Herein, each position on the left side of | corresponds to a service discovered by a BFS on G_0^0 from Start; this BFS additionally visits services in ascending order of service indexes. The right side corresponds to the remaining atomic services in SR that are not in G_0^0. Note that | is displayed only for the courtesy of the reader, rather than being part of the permutation-based representation. Furthermore, we do not permit the encoding [1, 2, 3 | 0, 4], as no information can be extracted from G_0^0 to determine the positions of 0 and 4 in the permutation. A permutation-based population P^g can be created from m permutation-based solutions; for example, with m = 6, P^g consists of six permutations sol_0^g, ..., sol_5^g. Application of node histogram-based sampling [22] proposed the Node Histogram-Based Sampling Algorithm (NHBSA) as a tool for sampling new candidate solutions, commonly represented in the form of permutations. By employing the representation of composite services discussed in Sect. 4.3, we are now capable of applying NHBSA to sample new permutations as candidate composite services.
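The worked example ([4, 1, 2, 3, 0] decoded to G_0^0 and re-encoded as [1, 2, 3 | 4, 0]) can be reproduced with a short sketch. The adjacency-list encoding of the DAG below, and the rule that unused services keep their relative order from the original permutation (which is one consistent reading of why [1, 2, 3 | 0, 4] is not permitted), are our own assumptions for illustration.

from collections import deque

def encode(dag_succ, perm):
    """Re-encode a decoded DAG-based solution as a permutation: a BFS from
    'Start' that visits successors in ascending index order yields the used
    services (left of '|'); the unused services follow. We assume unused
    services keep their relative order from the original permutation perm,
    which reproduces [1,2,3 | 4,0] rather than [1,2,3 | 0,4]."""
    used, seen = [], set()
    queue = deque(["Start"])
    while queue:
        node = queue.popleft()
        for nxt in sorted(s for s in dag_succ.get(node, []) if s != "End"):
            if nxt not in seen:
                seen.add(nxt)
                used.append(nxt)
                queue.append(nxt)
    return used + [i for i in perm if i not in seen]

# An assumed topology consistent with Fig. 3: Start -> S1 -> S2 -> S3 -> End.
dag = {"Start": [1], 1: [2], 2: [3], 3: ["End"]}
assert encode(dag, [4, 1, 2, 3, 0]) == [1, 2, 3, 4, 0]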
The NHM at generation g, denoted by NHM^g, is an n × n matrix with entries e_{i,j}^g as follows:

$$e_{i,j}^g = \sum_{k=0}^{m-1} \delta_{i,j}(sol_k^g) + \varepsilon \qquad (7)$$

$$\delta_{i,j}(sol_k^g) = \begin{cases} 1 & \text{if } I_k^g(S_i) = j \\ 0 & \text{otherwise} \end{cases} \qquad (8)$$

$$\varepsilon = \frac{m}{n-1}\, b_{ratio} \qquad (9)$$

where i, j = 0, 1, ..., n − 1, and b_{ratio} is a predetermined bias. Roughly speaking, entry e_{i,j}^g counts the number of times that service S_i appears in position j of the service queue over all solutions in population P^g. We pick an element of NHM^g as an example to demonstrate the meaning of each entry. For example, e_{0,0}^g (say, equal to 2.6) consists of an integer part and a decimal part: 2 and 0.6. The integer 2 means that service S_0 appears in the first position twice, while the decimal 0.6 is the ε bias. Once we have computed NHM^g, we use node histogram-based sampling [22] to sample new permutations for the next generation.
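Eqs. (7)–(9) translate directly into code. The sketch below builds the NHM from a small population of permutations and samples one offspring position by position; this captures the essence of NHBSA [22], although the exact sampling variant in [22] may differ in detail, and the toy population is invented for the example.

import random

def build_nhm(pop, n, b_ratio=0.0002):
    """Eqs. (7)-(9): e[i][j] counts how often service S_i occupies position j
    over all solutions in the population, plus a small bias epsilon."""
    m = len(pop)
    eps = (m / (n - 1)) * b_ratio
    e = [[eps] * n for _ in range(n)]
    for sol in pop:
        for j, service in enumerate(sol):
            e[service][j] += 1.0
    return e

def sample(nhm, n, rng):
    """Fill positions 0..n-1 left to right, drawing an unused service at each
    position with probability proportional to its column weight in the NHM."""
    remaining = set(range(n))
    perm = []
    for j in range(n):
        services = list(remaining)
        weights = [nhm[s][j] for s in services]
        pick = rng.choices(services, weights=weights, k=1)[0]
        remaining.remove(pick)
        perm.append(pick)
    return perm

rng = random.Random(1)
pop = [[1, 2, 3, 4, 0], [1, 2, 4, 3, 0], [2, 1, 3, 4, 0]]
nhm = build_nhm(pop, n=5)
offspring = sample(nhm, n=5, rng=rng)
assert sorted(offspring) == [0, 1, 2, 3, 4]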
Effective Local Search Procedure Through a Joint Strategy In this section, we introduce the joint strategy of our local search procedure. We begin with the selection of suitable individuals for local search; this selection chooses individuals based on global and local population information using the fitness uniform selection scheme in ALGORITHM 2. Subsequently, we present several local search operators based on the representation discussed in Sect. 4.3. These operators are specially designed to work seamlessly with the different neighborhoods investigated in this paper. The joint strategy for local search is summarized in ALGORITHM 1.

ALGORITHM 1. Joint strategy for local search (Step 3.3 in Fig. 1)
Input: P^g, n_nb and n_set
Output: updated P^g
1 Select a small number n_set of individuals to form a subset SelectedIndiSet of P^g using ALGORITHM 2;
2 foreach Π in SelectedIndiSet do
3   Generate n_nb neighbors from Π by local search;
4   Identify the best neighbor Π_best with the highest fitness;
5   Replace Π with Π_best;
6 return P^g;

ALGORITHM 1 takes three inputs: the g-th population P^g, the number n_set of individuals selected for local search, and the number n_nb of neighbors. We start by selecting a fixed, small number n_set of candidate solutions to form a subset SelectedIndiSet of the current population P^g using ALGORITHM 2 (see details in Sect. 4.5.1). These selected solutions are used for local search. For each solution Π in SelectedIndiSet, we produce n_nb neighbors of Π by local search (see details in Sect. 4.5.2), identify the best neighbor Π_best among them, and replace Π with Π_best in SelectedIndiSet. Eventually, we return an updated P^g. Application of uniform distribution schema Two types of selection schemes for choosing suitable individuals for local search have been studied [34]: the random selection scheme and the statistics scheme. The random selection scheme is a primary selection method, where local search is potentially applied to all individuals at a predefined rate. However, it can be less effective, as it does not assign local search to the most suitable candidate solutions, and it is more time-consuming when the population size is large. The statistics scheme, by contrast, chooses more suitable individuals based on statistical information about the current population. For example, this method can assign local search to a set of candidate solutions with the largest differences measured by their fitness values. Our selection scheme, inspired by [45], uses such statistical information to select a small number of suitable individuals for local search, striking a good balance between local improvement and execution time. This selection scheme is presented in ALGORITHM 2, which applies local search to a set of selected individuals SelectedIndiSet. The size of SelectedIndiSet, n_set, is a predefined parameter. SelectedIndiSet consists of one elite individual and n_set − 1 individuals from n_set − 1 groups of individuals in each generation. In particular, we calculate a uniform fitness interval based on the maximal fitness value, maxfitness, and the minimal fitness value, minfitness, of the current population P^g.

ALGORITHM 2. Fitness uniform selection scheme
Input: P^g and n_set
Output: selected solutions SelectedIndiSet
1 SelectedIndiSet ← {};
2 Sort P^g in descending order of fitness;
3 Put the first individual of P^g into SelectedIndiSet;
4 Calculate the fitness range of the n_set − 1 groups based on a uniform interval between maxfitness and minfitness;
5 Assign each permutation in P^g to one of the n_set − 1 groups based on its fitness value;
6 Randomly select one permutation from each group and put it into SelectedIndiSet;
7 return SelectedIndiSet;

The population is thus divided into n_set − 1 groups based on the calculated fitness interval; each group gathers individuals of similar fitness. Note that, in a given generation, the actual number of individuals selected for local search can be less than n_set, because some groups may contain no individuals. Stochastic Local Search Operators To investigate appropriate neighborhood structures for composite services, suitable local search operators must be proposed that effectively utilize domain knowledge. We then repeatedly apply these local search operators to SelectedIndiSet to explore their neighboring solutions. Apart from that, to balance the quality of local improvement against computation time, only a random subset of the entire, large neighborhood is explored, following a stochastic strategy. Based on the permutation-based representation discussed in Sect. 4.3, local search operators are proposed straightforwardly as "swaps". In this paper, we investigate four different swap operators: 1) Constrained One-Point Swap: For a permutation Π = (Π_0, ..., Π_t, Π_{t+1}, ..., Π_{n−1}), two service indexes Π_a, where 0 ≤ a ≤ t, and Π_b, where t + 1 ≤ b ≤ n − 1, are selected and exchanged. The one-point swap local search operator is inspired by [9], which swaps a pair of service indexes in a permutation.
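On this representation, the constrained one-point swap is a one-line exchange across the | boundary, as the following sketch shows. The layer-based variant used by MEEDA-LOP would additionally require the produced neighbor to preserve the layer order of the used part (see the comparison of local search operators in the experiments); that restriction is omitted here.

import random

def constrained_one_point_swap(perm, t, rng):
    """Swap a used service (position 0..t) with an unused one (position
    t+1..n-1), returning a new neighbor. Swapping two used (or two unused)
    services often decodes to the same DAG, so such swaps are excluded."""
    a = rng.randrange(0, t + 1)
    b = rng.randrange(t + 1, len(perm))
    neighbor = perm[:]
    neighbor[a], neighbor[b] = neighbor[b], neighbor[a]
    return neighbor

rng = random.Random(2)
perm = [1, 2, 3, 4, 0]  # services 1, 2, 3 are used (t = 2); 4 and 0 are unused
print(constrained_one_point_swap(perm, t=2, rng=rng))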
In [9], local search exhaustively explores the neighborhood of one selected index of the permutation, so the size of the neighborhood associated with that index is n − 1. However, this can be very computationally expensive, because the number of swaps becomes significant for large n. Besides that, it can be less flexible, as the neighborhood is focused on only one selected index. Herein we propose a more efficient and flexible local search with one-point swap: first, we predetermine a fixed, relatively small number of neighbors n_nb to be produced, bounding the computation time assigned to local search; second, we randomly produce the n_nb neighbors by swapping two randomly selected indexes, rather than swapping n − 1 indexes against one fixed index. We expect that swapping two randomly selected indexes is more effective for making local improvements within a budgeted computation time. Meanwhile, we constrain the two randomly selected indexes to lie before | and after | respectively in every swap, because this excludes swaps with lower opportunities for local improvement. For example, a neighbor created by swapping a pair of used service indexes has a high chance of decoding to the same DAG-based solution. Figure 4 shows an example of a one-point swap for a selected individual. 2) Constrained Two-Point Swap: For a permutation Π = (Π_0, ..., Π_t, Π_{t+1}, ..., Π_{n−1}), four service indexes Π_{a1}, Π_{a2}, Π_{b1}, and Π_{b2} are selected, where 0 ≤ a_1 ≤ t, 0 ≤ a_2 ≤ t, t + 1 ≤ b_1 ≤ n − 1, t + 1 ≤ b_2 ≤ n − 1, a_1 ≠ a_2, and b_1 ≠ b_2. Π_{a1} and Π_{b1} are exchanged; likewise, Π_{a2} and Π_{b2} are exchanged. Motivated by the one-point swap proposed above, we create the two-point swap operator by combining two constrained one-point swaps into a single operator. We hypothesize that the two-point swap can efficiently produce a higher-quality neighbor in one local change, rather than producing two neighbors by a sequence of two constrained one-point changes. In particular, given a budgeted number of candidate solutions for local search, a two-point swap operator can perform a more efficient local search for finding high-quality solutions. Figure 5 shows an example of a two-point swap for a selected individual and a produced neighbor. 3) Constrained One-Block Swap: this operator is based on the concept of a block, i.e., consecutive points (service indexes) in a permutation. In this swap, two blocks are built from two randomly generated starting points, Π_a before | and Π_b after |, in a permutation. After the swap, the produced neighbors inherit two parts of the original permutation. Figure 6 shows an example of a constrained one-block swap for a permutation, where one block spans from the start position StartPos1 to the last position of the used services, and the other block spans from the start position StartPos2 to the last index. 4) Constrained Layer-Based One-Point Swap: a variant of the constrained one-point swap in which any produced neighbor must additionally preserve the order of the discovered layers on the left-hand side of the permutation (see the comparison of local search operators in Sect. 5). EXPERIMENTS We conduct experiments to evaluate the performance of our memetic EDA-based approaches, i.e., memetic EDA with constrained one-point swap (henceforth referred to as MEEDA-OP), memetic EDA with constrained two-point swap (henceforth referred to as MEEDA-TP), memetic EDA with constrained layer-based one-point swap (henceforth referred to as MEEDA-LOP), and memetic EDA with constrained one-block swap (henceforth referred to as MEEDA-OB). These memetic EDA-based approaches are compared to state-of-the-art EC-based methods recently proposed to solve the same or similar problems: a PSO-based approach [10] (henceforth referred to as PSO), a GA-based approach (henceforth referred to as GA), a memetic GA-based approach [9] (henceforth referred to as MEGA), and an EDA-based approach [12] (henceforth referred to as NHM-EDA). Two benchmarks are created by extending WSC-08 [1] and WSC-09 [2] with QoS attributes generated from the QoS distribution of QWS [30].
These two benchmarks have already been broadly employed in service composition [5], [10], [13] for experimental evaluations. Moreover, the number of web services in the service repository is doubled in a new benchmark (with a much bigger search space) to demonstrate that memetic EDA can maintain high performance on problems of significantly larger size. We also make this benchmark available to the public. In particular, WSC08 contains 8 composition tasks with increasing sizes of the service repository, i.e., 316, 1116, 1216, 2082, 2180, 4396, 8226, and 16238 services, and WSC09 contains 5 composition tasks with increasing sizes of the service repository, i.e., 1144, 8258, 16276, 16602, and 30422 services, respectively. The population size is set to 200, the number of generations to 100, and b_ratio to 0.0002. The size of SelectedIndiSet is 6, and the number of neighbors n_nb explored by the local search operators for each individual in SelectedIndiSet is 20. For all the competing methods, we strictly follow the settings reported in their respective papers. In GA, the crossover rate is set to 0.95 and the mutation rate to 0.05. In MEGA, the crossover rate is set to 0.95 and the local search rate to 0.05. We run the experiment with 30 independent repetitions. Following existing works [10], [11], [12], the weights of the fitness function in Eq. (5) are simply configured to balance QoSM and QoS. In particular, we set both w_1 and w_2 to 0.25, and w_3, w_4, w_5, and w_6 all to 0.125. Additional experiments have been conducted and show that all our methods work consistently well under different weight settings. The parameter p of type_link is determined by the preference of users and is recommended to be 0.75 for the plugin match according to [39]. Comparison of the Fitness We employ the independent-sample T-test with a significance level of 5% to verify the observed differences in performance concerning fitness value and execution time. In particular, we use a pairwise comparison of all competing approaches; the top performances are then identified, and their values are highlighted in green in Table 2. Note that methods that consistently find the best-known solutions over 30 runs with 0 standard deviation are also marked as top performances. The pairwise comparison results for fitness are summarized in Table 3, where win/draw/loss shows the score of one method compared to all the others, i.e., the frequency with which this method outperforms, equals, or is outperformed by the competing method. The same testing and comparison methods are also used in Sect. 5.2. One objective of the experiments is to evaluate the effectiveness of the proposed memetic EDA-based approaches compared to NHM-EDA [12], PSO [10], GA, and MEGA [9]. Table 2 shows the mean fitness values and standard deviations over 30 repetitions. The pairwise comparison results for the fitness values are summarized in Table 3. From Table 2 and Table 3, we observe some interesting behaviors of these approaches in finding high-quality solutions. Based on these observations, we make the following analysis and draw possible conclusions: Firstly, regarding the two baseline methods, PSO and GA, all EDA-based approaches (with and without local search) consistently outperform PSO, but only the memetic EDA-based approaches outperform GA. MEGA [9] achieves results very comparable to all our memetic EDA-based methods; however, MEEDA-LOP achieves the best performance.
As shown in Table 3, MEEDA-LOP loses only 1 out of 13 composition tasks over WSC-08 and WSC-09. Furthermore, MEEDA-LOP achieves extremely stable performance, with 0 standard deviation in most runs. In addition, MEEDA-OP, MEEDA-TP, MEEDA-OB, and MEEDA-LOP significantly outperform NHM-EDA [12]. This observation corresponds well with our expectation that the exploitation ability of EDA can be enhanced by hybridizing it with local search; all memetic EDA-based approaches reach a better balance of exploration and exploitation. Furthermore, among the four memetic EDA-based approaches, MEEDA-OB is the worst, while MEEDA-OP and MEEDA-TP are very comparable to each other. This observation suggests that block-based neighborhoods are less suitable for service composition problems, because swapping whole building blocks can potentially ruin the learned distribution of promising solutions. Lastly, MEEDA-LOP is the best performer. This corresponds well with our assumption that using layer-based information can further improve the effectiveness of the one-point swap: MEEDA-LOP applies the local search operator to a much smaller, but more useful, set of services than that considered in MEEDA-OP. In summary, we sort all the competing approaches by effectiveness in descending order: MEEDA-LOP > MEGA > MEEDA-TP = MEEDA-OP > MEEDA-OB > GA > EDA > PSO. Comparison of the Execution Time The second objective of our experiment is to study the efficiency of all the proposed EDA-based approaches compared to EDA [12], PSO [10], GA, and MEGA [9]. Table 4 shows the mean execution times and standard deviations over 30 repetitions. The pairwise comparison results for the execution time are summarized in Table 5. From these two tables, we make the following analysis and draw possible conclusions about the execution times of these approaches: First, MEEDA-LOP consistently requires less execution time than the other approaches, as can be observed from the highlighted execution times in Table 4. Remarkably, the local search in MEEDA-LOP, based on layers and the constrained one-point swap, requires less computation time than that of MEEDA-OP. This significant improvement is mainly due to two techniques in MEEDA-LOP. The first is the archive technique, which carries half a population of elite individuals over to the next generation and thereby significantly reduces the overall computation time spent on decoding and evaluating the reserved individuals. The second is the layer-based information, which improves the effectiveness of the one-point swap, resulting in a more accurate and reliable NHM being learned. Therefore, useful services are more likely to be placed at the front of the permutation, which accelerates the decoding process. Second, in contrast, MEGA requires the highest execution time, because every candidate solution in MEGA has an opportunity for local search under the random selection scheme, and MEGA also exhaustively searches the whole neighborhood of one position. These results confirm that the combination of the random selection scheme and the exhaustive local search strategy in MEGA is less effective and more time-consuming than our statistics scheme and stochastic local search operators. Lastly, MEEDA-OB is also very computation-intensive among the memetic EDA-based approaches.
This is because the one-block swap hinders accurate distributions from being learned, as its local improvements are less effective, so the services required for a composition are less likely to be placed at the front of the service queue. Building the blocks also consumes extra time in MEEDA-OB. In summary, we sort all the competing approaches by execution time, from fastest to slowest: MEEDA-LOP > MEEDA-OP > MEEDA-TP > PSO > GA > MEEDA-OB > MEGA. Comparison of the Convergence Rate The third objective of our experiment is to study the convergence rate of all the approaches over 30 independent runs. We use WSC08-3 and WSC09-2 as two examples to illustrate the performance of all the compared methods. Because MEGA requires much more execution time, we set different execution-time scales for the two tasks WSC08-3 and WSC09-2 so that their differences can be observed easily. First, we observe a significant increase in fitness value towards the optimum for all the approaches excluding MEGA; these approaches eventually reach different levels of plateaus. Given the same budget of execution time, all memetic EDA-based methods converge significantly faster and require much less time than the baseline PSO over all the composition tasks. Second, MEGA suffers from a scalability issue when the size of the service repository is doubled in our new benchmark. The complexity of its local search strongly depends on n, i.e., the dimension of each permutation. Therefore, MEGA does not converge at all when it is assigned the same amount of execution time required by the other approaches. Lastly, MEEDA-LOP is consistently ranked as a top performer among all the competing methods. The convergence rates of MEEDA-OP and MEEDA-TP show very similar patterns. MEEDA-OB converges more slowly than the others, but it eventually reaches results comparable to MEEDA-OP and MEEDA-TP. Comparison of local search operators We investigate how often the mean fitness of the neighbors is better than the fitness of their original permutation in MEEDA-OP, MEEDA-TP, MEEDA-LOP, and MEEDA-OB, to determine which swap-based local search operator is more likely to produce better solutions. Herein we use the composition task WSC08-03 as an example; Fig. 9 shows the percentage of better neighbors produced by our four memetic EDA-based approaches along the generations over 30 runs for WSC08-03. The results show that MEEDA-OB and MEEDA-TP are less likely to produce better solutions, while MEEDA-OP and MEEDA-LOP are very comparable to each other, although slightly higher percentages of better mean fitness are achieved by MEEDA-LOP. We further analyze the differences between the layer-based constrained one-point swap and the constrained one-point swap operator using a permutation in Figure 10. Figure 10 exhibits an example of two neighbors produced from a permutation using constrained one-point swaps without considering layer information. In the example, one identical solution can be decoded from both the given permutation and the two produced neighbors, resulting in no local exploitation. In contrast, these swaps are not permitted by the layer-based constrained one-point swap, where any produced neighbor must strictly follow the layer order on the left-hand side of the permutation. In the example, the given permutation is highlighted with two layers (i.e., L_1 and L_2) in ascending order; in particular, S_1, S_2 ∈ L_1 and S_3 ∈ L_2.
When the constrained one-point swap is performed, S_3 in the given permutation is replaced with S_4 or S_0 in the produced neighbor 1 and neighbor 2, respectively. However, L_2 is destroyed in the produced neighbors, because S_4 ∈ L_1 and S_0 ∈ L_1. If the layer-based one-point swap is applied to the given permutation instead, these two neighbors are prevented from being produced: in general, all produced neighbors must preserve all the ordered layers of the given permutation. CONCLUSION In this paper, we propose effective and efficient memetic EDA-based approaches to fully automated service composition. The success of this memetic approach principally relies on the local search, where several ideas are jointly employed. In particular, we proposed several neighborhood structures defined by different local search operators, which integrate naturally with our permutation-based representation. Besides that, a uniform distribution scheme and a stochastic strategy are jointly utilized for selecting individuals and applying local search. The experiments show that one of our proposed approaches, MEEDA-LOP, achieves significantly better effectiveness and efficiency compared to state-of-the-art EC-based approaches and the other memetic EDA-based approaches proposed in this paper. Future work can investigate variable neighborhoods that combine more than one local search operator in a single evolutionary process, and extend memetic EDA to handle multi-objective service composition problems.
9,373
1906.07900
2949728259
Comprehensive quality-aware automated semantic web service composition is an NP-hard problem, where service composition workflows are unknown, and comprehensive quality, i.e., Quality of Service (QoS) and Quality of Semantic Matchmaking (QoSM), is simultaneously optimized. The objective of this problem is to find a solution with optimized or near-optimized overall QoS and QoSM within polynomial time over a service request. In this paper, we propose novel memetic EDA-based approaches to tackle this problem. The proposed method investigates the effectiveness of several neighborhood structures of composite services by proposing domain-dependent local search operators. Apart from that, a joint strategy for the local search procedure is proposed and integrated with a modified EDA to reduce the overall computation time of our memetic approach. To better demonstrate the effectiveness and scalability of our approach, we create a more challenging, augmented version of the service composition benchmark based on WSC-08 and WSC-09. Experimental results on this benchmark show that one of our proposed memetic EDA-based approaches (i.e., MEEDA-LOP) significantly outperforms existing state-of-the-art algorithms.
The use of EDA has only been investigated for semi-automated web service composition @cite_28 @cite_34 @cite_6 . However, we recently proposed an EDA-based approach for fully automated web service composition, where candidate solutions are represented as permutations over a given service repository. The success of the proposed method strongly depends on the distribution model and the way it is learned. We employ a Node Histogram Matrix (NHM) to learn the distribution of promising solutions in a population, and the Node Histogram-Based Sampling Algorithm (NHBSA) @cite_12 is employed to produce candidate solutions. Although we have started an initial study of fully automated service composition, opportunities remain to improve its performance further. EDA is good at global exploration, which motivates introducing local search operators into EDA to enhance its capability in exploitation.
{ "abstract": [ "Many enterprises have a growing interest in service composition to construct their business applications. With the increase of alternative services, Quality of Service (QoS) becomes an important indicator of obtaining optimal composite services. Due to the dynamic nature of the service environment, a composite service may not guarantee to deliver an overall optimal QoS. Re-optimization approaches have been developed to handle a dynamic environment. However, these approaches do not consider the diversity of alternative solutions, which may lead to better solutions. In this work, we introduce an adaptive approach, called estimation of distribution algorithm based on Restricted Boltzmann Machine (rEDA). rEDA effectively maintains the diversity of alternative solutions, by leveraging the inference ability of Restricted Boltzmann Machine to capture the potential solutions. It also provides a predictive guidance for the exploration of solution space, by considering the degree of how well a service contributes to the global QoS. The experimental evaluation shows that rEDA has a significant improvement on effectiveness and efficiency over existing approaches.", "In service composition, quality of service is a major criterion for selecting services to collaborate in a process flow to satisfy a certain quality goal. This paper presents an approach for service composition which considers QoS-based service provision schemes and variability of the QoS when planning. The QoS of a service can be stated in terms of complex service provision schemes, e.g. its service cost is offered at different rates for different classes of processing time, or its partnership with another service gives a special class of QoS when they operate in the same plan. We also address that it is desirable for service planning to result in a plan that is durable and reusable since change in the plan, e.g. by deviation of the actual QoS, would incur overheads. Our planning approach takes into account these dynamic situations and is demonstrated by using the Estimation of Distribution Algorithm.", "In a previous paper we proposed a node histogram based sampling algorithm (NHBSA) and compared it with edge histogram based sampling algorithm (EHBSA). The results showed NHBSA outperforms EHBSA on the permutation problems where absolute position of each node in a string is related to its performance. However, we used only a limited variation of sampling methods for NHBSA. In this paper, we propose several variations of sampling methods for NHBSA and explore conditions for them to work well with NHBSA.", "In recent years, quite a few search algorithms have been used to solve Web service composition problem. However, it is still lack of systematic analysis about these methods. In the paper, we attempt to analyze the effect of three typical meta-heuristic search algorithms. The experimental analysis is performed according to the search-based Web service composition framework. In the experiments, we mainly carry out the analysis as follows: different abstract service number, different candidate service number and different QoS constraint strength. Based on the above analysis, some guidelines of realizing Web service composition via search algorithms are yielded. For example, EDA is suitable for the composition problem with large abstract service number. PSO is more effective for the case of large candidate service number. GA is a good choose for the little-scale Web service composition problem. 
In addition, EDA can effectively settle the composition problem with strong QoS constraint. GA is suitable for the problem with medium constraint strength, and PSO has the poor ability to tackle the QoS constraint with high strength." ], "cite_N": [ "@cite_28", "@cite_34", "@cite_12", "@cite_6" ], "mid": [ "2755766203", "1589199859", "2072404156", "2079363226" ] }
Memetic EDA-Based Approaches to Comprehensive Quality-Aware Automated Semantic Web Service Composition
Service-Oriented Architecture (SOA) has been contributing to the reuse of software components [3]. Web services are one of the most successful implementations of SOA, providing services as "modular, self-describing, self-contained applications that are available on the Internet" [4]. Often, users' requirements cannot be satisfied by a single existing web service. Web service composition aims to loosely couple a set of web services to provide a value-added composite service (i.e., a solution of service composition) that accommodates users' complex requirements. These requirements concern functional (i.e., quality of semantic matchmaking, QoSM) and non-functional (i.e., quality of service, QoS) aspects, giving rise to semantic web service composition and QoS-aware web service composition, which aim to optimize the QoSM and QoS of service composition solutions, respectively. Many researchers have been working on these optimization problems in web service composition [5], [6], [7], [8], [9], [10], [11], [12], [13]. Existing works that study the above problems are classified as semi-automated and fully automated web service composition [14], under two different assumptions. The first assumes that users know an abstract service composition workflow and that all the composite services produced by the composition system must strictly obey the given workflow. However, this assumption is not always valid, since the workflow may not be provided or may not even be known by users. The second group of research works does not rely on any existing workflows. Instead, a composite service is constructed from scratch by selecting and connecting multiple atomic services obtained from the service repository [14]. Therefore, this construction process can end up with different workflows. Compared to semi-automated web service composition, fully automated web service composition opens new opportunities to further improve QoS and QoSM, owing to the different workflows constructed automatically. Nevertheless, the difficulty of the composition task also increases. AI planning and Evolutionary Computation (EC) are two of the most widely used techniques for semi-automated and fully automated web service composition [5], [7], [10], [13], [15], [16], [17]. AI planning techniques focus on creating valid composite services, where functional correctness is always ensured by gradually constructed workflows. However, these approaches do not optimize the QoS or QoSM of the solutions produced [18]. EC techniques have been widely used to solve service composition problems that aim to optimize either one or both of QoSM and QoS, and they are potentially more useful in practice, as they can efficiently find "good enough" composite solutions. Important approaches [5], [6], [7], [8], [9], [10], [11], [12], [13] based on Genetic Algorithms (GA) [19], Genetic Programming (GP) [20], Particle Swarm Optimization (PSO) [21], and Estimation of Distribution Algorithms (EDA) [22] have been widely investigated in the literature. To search effectively for good solutions, EC techniques often employ useful information distilled from promising solutions to produce new offspring. This information can be used either implicitly or explicitly. Conventional EC techniques, such as GA and GP, fall in the implicit camp, producing new solutions by recombining solutions evolved previously [5], [7], [13].
In contrast, one EC technique that has achieved prominent success through the explicit use of information is the Estimation of Distribution Algorithm (EDA) [23]. In EDA, information about promising solutions evolved previously is captured compactly in the form of probability models. EDA has been successfully utilized for semi-automated service composition [6], [24], but these methods cannot support fully automated service composition. We recently proposed a new EDA-based approach for fully automated web service composition through reliable and accurate learning of a probability model that encodes the distribution of promising solutions [12], i.e., a distribution model. EDA stresses global exploration rather than local exploitation [25]. This is because the distribution model has the objective of exploring more promising regions of the entire solution space, without attempting to improve the quality of any specific solutions evolved previously. However, optimization performance can often be improved directly through local modifications to promising solutions. By restricting the target region for local search and avoiding most of the randomness involved in sampling directly from the distribution model, this can potentially expedite the search for optimal solutions. Therefore, to improve its competence in finding more effective solutions, one idea is to enhance EDA with local search, namely, memetic EDA. Memetic EDA has been successfully applied to many optimization problems with local search operators [26], [25], such as arc routing and assembly flow-shop scheduling problems. On the one hand, although memetic EDA has been successfully applied in many applications, those memetic approaches are inappropriate for web service composition, as their local search operators are only applicable to domain-specific or problem-specific solution representations [25], [27]. On the other hand, despite the recent success in EDA-based service composition, the effectiveness of this approach can be enhanced by introducing memetic EDA. Several challenges remain to be addressed in developing a memetic EDA approach to service composition, as follows: First, a composite service is commonly represented as a DAG, and exploring the neighborhood of a DAG, especially a large DAG, is computationally infeasible [28]. Note that the discussed neighborhood is structured by local search operators over the search space, where neighbor solutions can be generated iteratively from a given candidate solution. Therefore, researchers [9], [29] often indirectly define the neighborhood of a composite service represented in the form of a permutation, which can be converted to a DAG through a separate decoding process. Often, so-called "swap" operators produce neighbors by swapping two random elements in a permutation. Consequently, a neighborhood is defined by the collection of permutations obtainable through a "swap" applied to any given permutation. However, such a neighborhood often contains a large proportion of neighboring permutations of inferior quality. For effective local search, the neighborhood must be refined to exclude most of the clearly unwise swapping choices by exploiting domain-specific knowledge. Second, it is very challenging to determine which candidate solutions should be selected for local search in memetic algorithms, as the selection method has a significant impact on the effectiveness and efficiency of memetic EDA.
Should an equal chance be given to all the candidate solutions, or should only elite solutions be considered for local search? Moreover, what are elite solutions, and how many of them should be modified locally? The answers to these questions often depend on many factors, such as the EC algorithm and the domain problem. Therefore, it is challenging to determine one effective selection strategy for a memetic EDA-based approach to service composition. Third, a traditional strategy that exhaustively explores the whole neighboring space of composite services can incur high computation cost without any guarantee of improving solution quality. For example, for a permutation-based representation with a simple swap operator, the dimension of the permutation determines the computational complexity. In the context of service composition, the dimension of such a permutation is usually equivalent to the size of the service repository. As the neighborhood size is extremely large when many services must be considered during the composition process, this strategy is infeasible for practical use. Fourth, in EDA, although the probability distribution model is adjusted to trace promising search areas throughout the generations, a proportion of promising solutions (i.e., permutations) are likely to be sampled repeatedly as the distribution model converges along the generations. Furthermore, these repeatedly sampled solutions are often favored by users, since they are candidate solutions of high quality. In the EDA-based approach to service composition, sampled permutation-based solutions are very costly, as they require repeated computation time for decoding and evaluation. To address the challenges above, we propose a memetic EDA-based approach that achieves substantially higher performance in effectiveness and efficiency. These performance gains are observed by comparing it with recently proposed web service composition approaches, such as an EDA-based approach [12], a PSO-based approach [10], and GA- and memetic GA-based approaches [9]. In particular, an empirical, experimental study on the effectiveness of different neighborhoods, structured by different local search operators, is conducted. The contributions of this paper are listed below; the first contribution addresses the first challenge discussed previously, and the second contribution addresses the remaining challenges. 1) To perform an effective local search on composite services, we first propose several neighborhood structures for candidate solutions. These neighborhoods are created by developing several novel domain-dependent local search operators, based on constructing and swapping effective building blocks of composite services for local improvements. Subsequently, we develop an effective memetic EDA-based approach based on our previous work [12], with natural integration of those local search operators. 2) To significantly reduce the computation time of our proposed memetic EDA-based approach, an integrated local search procedure is proposed together with a modified EDA based on the standard EDA. To reduce the computation wasted on repetitive sampling and evaluation, we utilize an archiving technique to avoid sampling solutions repeatedly; this technique is prevalent and straightforward to use. Besides that, the local search procedure employs an effective joint strategy for efficiently finding better solutions.
This strategy jointly considers a fitness uniform distribution scheme and stochastic local search with our proposed local search operators. 3) To demonstrate the performance of our memetic EDA-based approach, we create a more challenging, augmented version of the service composition benchmark based on WSC-08 [1] and WSC-09 [2]. In particular, the new benchmark inherits the functionalities provided by the services in the benchmark datasets WSC-08 and WSC-09 and the QoS attributes of web services in the benchmark dataset QWS [30]. Moreover, the number of web services in the service repository is doubled in a new benchmark (with a much bigger search space) to demonstrate that memetic EDA can maintain high performance on problems of significantly larger size. This benchmark has been made freely available online, as have the codes of our memetic EDA-based approach 1 . We experimentally compare our memetic EDA-based approach with state-of-the-art methods that have recently been proposed to solve the same or a similar service composition problem, using the new benchmark. Our experimental results illustrate that our method achieves cutting-edge performance. Literature on EC-Based fully automated web service composition Automated web service composition aims to loosely couple web services to fulfill a service request, without strictly obeying a pre-given abstract workflow. Instead, composition workflows are gradually built up while their component services are selected. Existing works in fully automated web service composition can be categorized into two approaches -- direct approaches and indirect approaches [31]. The direct approaches represent composition solutions explicitly in a representation that displays the actual execution flows of composite services, while the indirect approaches often represent composite services implicitly as permutations, which require a decoding process to build the actual execution workflows. 1. The two augmented benchmarks for automated web service composition are available from https://github.com/chenwangnida/Dataset, and the codes of our memetic EDA-based approach are available from https://github.com/chenwangnida/MENHBSA4SWSC. In the first category, tree- and graph-based representations are widely used to represent service composition solutions directly. A graph-based evolutionary process is introduced in [32] to directly evolve DAG-based service composition solutions, applying domain-dependent crossover and mutation operators with repairing methods. GP is utilized to search for optimal solutions represented as trees. [7] proposes a context-free grammar for randomly initializing tree-based service composition solutions with correct structures of composite services. In contrast, [13] initializes tree-based service composition solutions completely at random, but develops adaptive crossover and mutation rates according to the diversity of the population to accelerate convergence. Both approaches [7], [13] utilize a penalization method for filtering incorrect solutions while evaluating the QoS of candidate solutions. To achieve higher performance, [5], [8] utilize a greedy search algorithm for creating correct DAG-based composition workflows, which are mapped to tree-based ones with different methods. During the evolutionary process, the correctness of the solutions is ensured by domain-dependent crossover and mutation.
However, the mapped tree-based representations suffer a scalability issue, since many replicas of subtrees are produced from the mapping methods. To overcome this issue, [11] proposes a tree-like representation, on which the replicas of subtrees are handled by removing them, and inserting edges from the root of the replicas to the roots of the copies. In the second category, service composition solutions are represented as permutations, which are then decoded into solutions represented as DAGs [10], [31], [33]. PSO is utilized to find an optimized queue of services (i.e., a permutation), which can be decoded into a corresponding DAG-based composite service [33]. [10] extends [33] to jointly optimize QoSM and QoS, where a weighted DAG is decoded, where edge weights correspond to matchmaking quality between services. These two PSO-based approaches rely on PSO to determine the weights of particle's position (that corresponding with a service) to form an ordered service queue. Optimizing QoSM and QoS simultaneously is more challenging than optimizing QoS only because the searching space has significantly increased, and it demands more effective and efficient searching techniques. Apart from that, it has been suggested that utilizing the indirect representation often contributes to a higher performance, compared to direct representation [31]. It is due to that the search space is not unwittingly restricted by unconstrained random initialization of solutions and operators. In summary, EC techniques have been showing their promises in fully automated web service composition. Moreover, the indirect approaches have been indicated to be more effective. Therefore, EC techniques with indirect representations are exciting techniques to be focused on for solving service composition problem in this paper. Literature on memetic EC-based approaches and EDA Memetic algorithms have drawn growing attention from researchers in recent years and achieved significant successes in many applications [34]. By introducing local search, the performance of EC techniques can be improved. In the domain of service composition, to overcome the prematurity and proneness of GP, Tabu search is combined with GP to solve QoS-aware data-intensive web service composition [35]. [9] proposed an indirect memetic approach for QoSaware web service composition, where a domain-dependent crossover operator is proposed to produce candidate solutions. Besides that, an exhaustive local search is applied to composite solutions represented as permutations. However, the produced neighbors are likely to be decoded into the same composite solution. Therefore, the effectiveness of this local search operator demands further improvement. Recently, EDA has been used as a technique to tackle permutation-based optimization problems [23]. In particular, a distribution model is learned iteratively for each population. Subsequently, new offsprings are generated based on the learned model. Moreover, domain-dependent local search operators are often introduced to enhance the performances of EDA. For example, a probability matrix that is related to the job priority permutation of a solution is learned in EDA-based flow-shop scheduling problem, and different job-based local search operators were proposed to enhance the exploitation ability of EDA [25]. An Edge Histogram Matrix is applied to uncertain capacitated arc routing problems and is leaned from solutions represented by a set of routes [27]. 
To make local improvements, different move operators, such as single insertion and swap, are also proposed. The use of EDA has so far only been investigated for semi-automated web service composition [6], [24], [36]. However, we recently proposed an EDA-based approach for fully automated web service composition, where candidate solutions are represented as permutations over a given service repository. The success of the proposed method strongly depends on the distribution model and the way the distribution model is learned. We employ a Node Histogram Matrix (NHM) to learn the distribution of promising solutions in one population, and the Node Histogram-Based Sampling Algorithm (NHBSA) [22] is employed to produce candidate solutions. Although we have conducted an initial study of fully automated service composition, there remains an opportunity to improve its performance further. EDA is good at global exploration, which motivates introducing local search operators into EDA to enhance its capability in exploitation. In summary, on the one hand, memetic EDA-based approaches have been investigated for many problems other than fully automated service composition, achieving promising results. On the other hand, notwithstanding the success achieved in our initial investigation of EDA-based fully automated service composition, the performance of this EDA-based approach can be further improved by combining it with local search.

SEMANTIC WEB SERVICE COMPOSITION PROBLEM

A semantic web service (service, for short) is considered as a tuple S = (I_S, O_S, QoS_S), where I_S is a set of service inputs that are consumed by S, O_S is a set of service outputs that are produced by S, and QoS_S = {t_S, c_S, r_S, a_S} is a set of non-functional attributes of S. The inputs in I_S and outputs in O_S are parameters modeled through concepts in a domain-specific ontology O. The attributes t_S, c_S, r_S, a_S refer to the response time, cost, reliability, and availability of service S, respectively, which are four commonly used QoS attributes [37]. A service repository SR is a finite collection of services supported by a common ontology O. A composition task (also called a service request) over a given SR is a tuple T = (I_T, O_T), where I_T is a set of task inputs and O_T is a set of task outputs. The inputs in I_T and outputs in O_T are parameters that are semantically described by concepts in the ontology O. Two special atomic services Start = (∅, I_T, ∅) and End = (O_T, ∅, ∅) are always included in SR to account for the input and output of a given composition task T.

We use matchmaking types to describe the level of a match between outputs and inputs [38]. For concepts a, b in O the matchmaking returns exact if a and b are equivalent (a ≡ b), plugin if a is a sub-concept of b (a ⊑ b), subsume if a is a super-concept of b (a ⊒ b), and fail if none of the previous matchmaking types is returned. In this paper we are only interested in exact and plugin matches for robust compositions, see [39]. As argued in [39], plugin matches are less preferable than exact matches due to the overheads associated with data processing. For plugin matches, the semantic similarity of concepts is suggested to be considered when comparing different plugin matches. A robust causal link [40] is a link between two matched services S and S′, denoted as S → S′, if an output a (a ∈ O_S) of S serves as an input b (b ∈ I_S′) of S′ satisfying either a ≡ b or a ⊑ b.
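To make the formal model above concrete, below is a minimal Python sketch of the service tuple and the matchmaking types; the `Service` dataclass and the `subconcepts` lookup are hypothetical simplifications introduced here for illustration only, not part of the original formalism. The semantic similarity used to compare plugin matches is defined next.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of the tuple S = (I_S, O_S, QoS_S).
@dataclass
class Service:
    name: str
    inputs: set     # I_S: concepts consumed by S
    outputs: set    # O_S: concepts produced by S
    qos: dict = field(default_factory=dict)  # {'t': ..., 'c': ..., 'r': ..., 'a': ...}

def match_type(a, b, subconcepts):
    """Return the matchmaking type between concepts a and b.

    `subconcepts` is an assumed lookup: subconcepts[x] is the set of
    sub-concepts of x in the ontology O (including x itself).
    """
    if a == b:
        return "exact"
    if a in subconcepts.get(b, set()):   # a is a sub-concept of b
        return "plugin"
    if b in subconcepts.get(a, set()):   # a is a super-concept of b
        return "subsume"
    return "fail"

# Tiny illustration: Car ⊑ Vehicle yields a plugin match.
subconcepts = {"Vehicle": {"Vehicle", "Car"}}
print(match_type("Car", "Vehicle", subconcepts))  # -> 'plugin'
```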
For concepts a, b in O, the semantic similarity sim(a, b) is calculated based on the edge counting method in a taxonomy like WordNet [41]. The advantages of this method are its simple calculation and good semantic measurement [41]. Therefore, the matchmaking type and semantic similarity of a robust causal link are defined as follows:

$$type_{link} = \begin{cases} 1 & \text{if } a \equiv b \text{ (exact match)} \\ p & \text{if } a \sqsubseteq b \text{ (plugin match)} \end{cases} \quad (1)$$

$$sim_{link} = sim(a, b) = \frac{2N_c}{N_a + N_b} \quad (2)$$

with a suitable parameter p, 0 < p < 1, and with N_a, N_b and N_c, which measure the distances from concept a, concept b, and the closest common ancestor c of a and b to the top concept of the ontology O, respectively. However, if more than one pair of matched output and input exists from service S to service S′, type_link and sim_link take on their average values.

The QoSM of a composite service is obtained by aggregating over all m robust causal links as follows:

$$MT = \prod_{j=1}^{m} type_{link_j} \quad (3)$$

$$SIM = \frac{1}{m} \sum_{j=1}^{m} sim_{link_j} \quad (4)$$

Formal expressions as in [42] are used to represent service compositions. The constructors •, ∥, + and * are used to denote sequential composition, parallel composition, choice, and iteration, respectively. The set of composite service expressions is the smallest collection SC that contains all atomic services and that is closed under sequential composition, parallel composition, choice, and iteration. That is, whenever C_0, C_1, ..., C_d are in SC then •(C_1, ..., C_d), ∥(C_1, ..., C_d), +(C_1, ..., C_d), and *C_0 are in SC, too. Let C be a composite service expression. If C denotes an atomic service S then its QoS is given by QoS_S. Otherwise the QoS of C can be obtained inductively as summarized in Table 1:

C | r_C | a_C | ct_C | t_C
•(C_1, ..., C_d) | ∏_{k=1}^{d} r_{C_k} | ∏_{k=1}^{d} a_{C_k} | Σ_{k=1}^{d} ct_{C_k} | Σ_{k=1}^{d} t_{C_k}
∥(C_1, ..., C_d) | ∏_{k=1}^{d} r_{C_k} | ∏_{k=1}^{d} a_{C_k} | Σ_{k=1}^{d} ct_{C_k} | MAX{t_{C_k} | k ∈ {1, ..., d}}
+(C_1, ..., C_d) | Σ_{k=1}^{d} p_k · r_{C_k} | Σ_{k=1}^{d} p_k · a_{C_k} | Σ_{k=1}^{d} p_k · ct_{C_k} | Σ_{k=1}^{d} p_k · t_{C_k}
*C_0 | r_{C_0}^ℓ | a_{C_0}^ℓ | ℓ · ct_{C_0} | ℓ · t_{C_0}

Herein, p_1, ..., p_d with Σ_{k=1}^{d} p_k = 1 denote the probabilities of the different options of the choice +, while ℓ denotes the average number of iterations. Therefore, the QoS of a service composition solution, i.e., availability (A), reliability (R), execution time (T), and cost (CT), can be obtained by aggregating a_C, r_C, t_C and ct_C as in Table 1. In the presentation of this paper, we mainly focus on two constructors, sequence • and parallel ∥, similar to most automated service composition works [5], [8], [10], [11], [32], [33], where service composition solutions are represented as a Directed Acyclic Graph (DAG). We can easily calculate the QoS of a composite service that is represented as a DAG [10] according to Table 1.

When multiple quality criteria are involved in decision making, the fitness of a solution is defined as a weighted sum of all individual criteria in Eq. (5), assuming the preference of each quality criterion based on its relative importance is provided by the user [43]:

$$Fitness(C) = w_1 \widehat{MT} + w_2 \widehat{SIM} + w_3 \hat{A} + w_4 \hat{R} + w_5 (1 - \hat{T}) + w_6 (1 - \widehat{CT}) \quad (5)$$

with $\sum_{k=1}^{6} w_k = 1$. This objective function is defined as a comprehensive quality model for service composition. We can adjust the weights according to the user's preferences. $\widehat{MT}$, $\widehat{SIM}$, $\hat{A}$, $\hat{R}$, $\hat{T}$, and $\widehat{CT}$ are normalized values calculated within the range from 0 to 1 using Eq. (6). To simplify the presentation we also use the notation (Q_1, Q_2, Q_3, Q_4, Q_5, Q_6) = (MT, SIM, A, R, T, CT). Q_1 and Q_2 have minimum value 0 and maximum value 1.
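As a hedged illustration of Eqs. (1)-(5), the following Python sketch computes the per-link matchmaking type and semantic similarity and aggregates them into MT, SIM, and the weighted fitness. The function names are ours, and the normalized inputs to `fitness` are assumed to have been produced by the normalization of Eq. (6), which is presented next.

```python
import math

P = 0.75  # parameter p for plugin matches (0 < p < 1), as recommended in [39]

def type_link(a, b, subconcepts):
    """Eq. (1): 1 for an exact match, p for a plugin match."""
    if a == b:
        return 1.0
    if a in subconcepts.get(b, set()):
        return P
    raise ValueError("not a robust causal link")

def sim_link(n_a, n_b, n_c):
    """Eq. (2): sim(a, b) = 2*N_c / (N_a + N_b), where the N's are the
    distances of a, b, and their closest common ancestor c from the top
    concept of the ontology O."""
    return 2.0 * n_c / (n_a + n_b)

def qosm(links):
    """Eqs. (3)-(4): MT is the product and SIM the average over all
    robust causal links; `links` is a list of (type, sim) pairs."""
    mt = math.prod(t for t, _ in links)          # requires Python 3.8+
    sim = sum(s for _, s in links) / len(links)
    return mt, sim

def fitness(norm_q, weights):
    """Eq. (5), taken literally: a weighted sum over the normalized
    criteria (MT, SIM, A, R, T, CT), with T and CT entering as (1 - value)."""
    mt, sim, a, r, t, ct = norm_q
    w1, w2, w3, w4, w5, w6 = weights
    return w1*mt + w2*sim + w3*a + w4*r + w5*(1 - t) + w6*(1 - ct)

# e.g. two links: one exact, one plugin with sim = 0.8
print(qosm([(1.0, 1.0), (P, 0.8)]))  # -> (0.75, 0.9)
```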
The minimum and maximum values of Q_3, Q_4, Q_5, and Q_6 are calculated across all the relevant services (that are determined in Sect. 4.2) in the service repository SR using the greedy search in [5], [8]. The normalization in Eq. (6) is then given by:

$$\hat{Q}_k = \begin{cases} \dfrac{Q_k - Q_{k,min}}{Q_{k,max} - Q_{k,min}} & \text{if } k = 1, \ldots, 4 \text{ and } Q_{k,max} - Q_{k,min} \neq 0, \\[2mm] \dfrac{Q_{k,max} - Q_k}{Q_{k,max} - Q_{k,min}} & \text{if } k = 5, 6 \text{ and } Q_{k,max} - Q_{k,min} \neq 0, \\[2mm] 1 & \text{otherwise.} \end{cases} \quad (6)$$

The goal of comprehensive quality-aware service composition is to find a composite service expression C that maximizes the objective function in Eq. (5). C is hence considered as the best possible solution for a given composition task T.

MEMETIC EDA-BASED APPROACH FOR SEMANTIC WEB SERVICE COMPOSITION

In this section, we present our memetic EDA-based approach to fully automated semantic web service composition. We start by giving an overview of our memetic EDA-based approach. Subsequently, we discuss some essential steps in the approach: the first one is to discover relevant services and service layers, see details in Sect. 4.2. The second one is to introduce a permutation-based representation proposed in our previous work, see details in Sects. 4.3 and 4.4. The third one is to introduce an effective joint strategy for a local search procedure, see details in Sect. 4.5. We propose several key ideas that are jointly employed to build our memetic EDA-based approach:

1) A composite service is commonly represented as a DAG, since a DAG can intuitively represent an execution flow of web services and allows efficient computation of QoS. The success of the EDA strategy strongly relies on a proper distribution model for learning the knowledge of promising solutions. Our initial study [12] represents a composite service as a unique queue of services, i.e., a permutation of atomic services, which is mapped from a DAG-based solution. Composite services in this permutation form contribute to a distribution model to be learned and to new permutation-based promising solutions to be sampled. Therefore, a bi-directional map is ensured between permutations and DAGs for learning and evaluation purposes.

2) To significantly decrease the computation time of the local search procedure, it is crucial to select a restricted number of suitable candidate solutions for local search. We assume that candidate solutions with close fitness values are similar in their corresponding DAG forms, so neighbors produced from these candidate solutions can be the same. Therefore, we group candidate solutions based on their fitness values according to a uniform distribution scheme, which allows candidate solutions with the largest differences measured by single-objective fitness values to be effectively chosen for applying local search.

3) It is not efficient to exhaustively explore the whole neighborhood as in conventional local search [9]. Instead, stochastically searching the neighboring solutions can significantly reduce the computation cost [26]. Therefore, we introduce a stochastic local search with EDA.

4) Defining a neighborhood directly on a DAG-based composite service is usually computationally infeasible [28]. However, it is straightforward to define the neighborhood on a permutation-based representation by so-called swap operators. To develop effective swap operators, we utilize domain knowledge of service composition to create effective building blocks for these swap operators on permutation-based candidate solutions. These swap operators aim to exploit fitter neighbors effectively. That is, they are likely to make local improvements in the produced neighbors.
An overview of the memetic EDA-based algorithm for automatic service composition

An overview of the memetic EDA-based approach is presented in Figure 1, consisting of the following steps: initialize population, evaluate population, select superior subpopulation, learn probability model, sample individuals, and return optimal solutions. We start with discovering all the relevant services that are related to a given composition request T in Step 1. Meanwhile, several service layers are identified (see details in Subsection 4.2). These relevant services are used to randomly generate m composite services represented as permutations, Π^g_k, where g = 0 and k = 1, ..., m. In Step 2, these permutation-based individuals are decoded into DAG-based solutions using a forward graph building technique [10], based on which the fitness in Eq. (5) of each individual can be calculated. In Step 3, we merge the current population P^g with an archive. The archive is initially an empty set of individuals and will later hold elite composite services. By applying Breadth-First Search (BFS) to each corresponding DAG-based solution in the merged population, we produce re-encoded permutation-based solutions Π^g_k. Then, the local search procedure is applied to a very small set of these permutations. This small permutation set is selected based on a fitness uniform selection scheme over the current population (see details in Sect. 4.5.1). For each permutation in the small set, a stochastic local search is employed to create new permutations as its neighbors, where the best neighbor is identified based on the fitness value. Each permutation in the small set is replaced with its best neighbor (see details in Subsection 4.5). The top half of the best-performing solutions are retained in P^g according to their fitness values and put into the archive as elite solutions. In Step 4, we use these elite solutions in the archive to learn an NHM^g of generation g, which produces offspring for generation g + 1 using NHBSA, see details in Subsection 4.4. Consequently, we go back to Step 2 to evaluate the fitness of the new offspring. Steps 2 to 4 are repeated until the maximum number of generations is reached. Eventually, the best solution found throughout the evolutionary process is returned.

In a nutshell, we introduce a permutation-based representation derived from the common DAG-based one. In our proposed algorithm, we switch back and forth between these two representations for better searching or evaluation purposes. Furthermore, an effective and efficient local search procedure is developed through the use of the selection scheme and the stochastic local search.

Relevant Services and Service Layers

Discovering relevant services and service layers is an initial but crucial step for our memetic EDA-based approach. We achieve two goals at this initial stage: the first goal is to reduce the size of the service repository SR to keep only those services that are relevant to the service composition task T; the second goal is to identify the service layers of these relevant services. In particular, a group of layers is identified, and each layer contains a set of services that have the same longest distance to Start. We adopt the layer discovering method in [44] to find relevant services and service layers, as illustrated in the following example.
Fig. 3 shows an example of discovering relevant services and service layers given a service request T, where five related services (i.e., S_0, S_1, S_2, S_3, and S_4) and two layers (i.e., L_1 and L_2) are found. In L_1, S_0, S_1, S_2, and S_4 can be satisfied by {a, b} of T, and they have the same distance to Start (note that the distance is measured by the number of predecessors), while S_3 in L_2 requires additional inputs from other services and is associated with a longer distance to Start.

A Novel Permutation-Based Representation

Service composition solutions are commonly represented as Directed Acyclic Graphs (DAGs) [5], [8], [10], [11], [32], [33]. Let G = (V, E) be a DAG-based composite solution from Start to End, where nodes correspond to the services and edges correspond to the robust causal links. Often, V does not contain all services in SR. Many combinatorial optimization problems naturally represent solutions as permutations, which can differ between problems [23]. Here we represent composite services as permutations, and we ensure a bi-directional map between permutations and DAGs. The bi-directional map is crucial for learning the distribution of promising composite solutions, because it is less reliable to learn a distribution based on permutations if different permutations are mapped to the same DAG-based composite service.

Let Π = (Π_0, ..., Π_t, Π_{t+1}, ..., Π_{n−1}) be a permutation, elements of which are {0, ..., t, t + 1, ..., n − 1} such that Π_i ≠ Π_j for all i ≠ j. Particularly, (Π_0, ..., Π_t) are the service indexes (i.e., id numbers) of the component services in the corresponding G, sorted based on the longest distance from Start to each component service of G, while (Π_{t+1}, ..., Π_{n−1}) are the indexes of the remaining services in SR not utilized by G. We use Π^g_k to denote the kth (out of m, where m is the population size) service composition solution, and P^g = [Π^g_0, ..., Π^g_k, ..., Π^g_{m−1}] to represent a population of solutions of generation g.

Fig. 3 also illustrates the process of producing a permutation-based solution. As an example, take the permutation [4, 1, 2, 3, 0]. This service index queue is decoded into a DAG G^0_0 representing a service composition that satisfies the composition task T. Afterwards, G^0_0 is mapped to a permutation Π^0_0 = [1, 2, 3 | 4, 0]. Herein, each position on the left side of | corresponds to a service discovered by a BFS on G^0_0 from Start. This BFS additionally follows ascending order of service indexes during the search, while the right side corresponds to the remaining atomic services in SR that are not in G^0_0. Note that | is displayed only as a courtesy to the reader, rather than being part of the permutation-based representation. Furthermore, we do not permit the encoding [1, 2, 3 | 0, 4], as no information can be extracted from G^0_0 to determine the positions of 0 and 4 in the permutation. A permutation-based population P^g can be created from m permutation-based solutions; for m = 6, P^g could be represented as P^g = [sol^g_0, sol^g_1, sol^g_2, sol^g_3, sol^g_4, sol^g_5]^T.

Application of node histogram-based sampling

[22] proposed Node Histogram-Based Sampling (NHBSA) as a tool for sampling new candidate solutions, which are commonly represented in the form of permutations. By employing the discussed representation of composite services in Sect. 4.3, we are now capable of applying NHBSA to sample new permutations as candidate composite services.
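Before detailing the node histogram model, the re-encoding step of Sect. 4.3 (mapping a decoded DAG back to a permutation by a BFS that breaks ties in ascending index order and keeps unused services in their pre-decoding order) can be sketched as follows. The adjacency-dict DAG format and the helper names are assumptions for illustration only.

```python
from collections import deque

def encode(dag, start, end, original_perm):
    """Re-encode a decoded DAG as a permutation: the left part is a BFS
    order over the DAG's component services from Start (ties broken by
    ascending service index); the right part keeps the unused services
    in the order of the permutation the DAG was decoded from.

    `dag` is an assumed adjacency dict {node: list of successors};
    Start and End are not service indexes and are excluded.
    """
    visited, used = set(), []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        # Visit successors in ascending index order, skipping End.
        for succ in sorted(s for s in dag.get(node, []) if s != end):
            if succ not in visited:
                visited.add(succ)
                used.append(succ)
                queue.append(succ)
    unused = [s for s in original_perm if s not in visited]
    return used + unused

# The example of Fig. 3: decoding [4, 1, 2, 3, 0] yields a DAG using
# services 1, 2 and 3; re-encoding gives [1, 2, 3 | 4, 0].
dag = {"start": [1, 2], 1: [3], 2: [3], 3: ["end"]}
print(encode(dag, "start", "end", [4, 1, 2, 3, 0]))  # -> [1, 2, 3, 4, 0]
```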
The NHM at generation g, denoted by NHM^g, is an n × n matrix with entries e^g_{i,j} defined as follows:

$$e^g_{i,j} = \sum_{k=0}^{m-1} \delta_{i,j}(sol^g_k) + \varepsilon \quad (7)$$

$$\delta_{i,j}(sol^g_k) = \begin{cases} 1 & \text{if } I^g_k(S_i) = j \\ 0 & \text{otherwise} \end{cases} \quad (8)$$

$$\varepsilon = \frac{m}{n-1}\, b_{ratio} \quad (9)$$

where i, j = 0, 1, ..., n − 1, and b_ratio is a predetermined bias. Roughly speaking, entry e^g_{i,j} counts the number of times that service S_i appears in position j of the service queue over all solutions in population P^g. As an example of the meaning of each element of the NHM, consider e^g_{0,0} = 2.6, which consists of an integer and a decimal part: 2 and 0.6. The integer part 2 means that service S_0 appears at the first position 2 times, while the decimal part 0.6 is the ε bias. Once we have computed NHM^g, we use node histogram-based sampling [22] to sample new permutations for the next generation.

Effective Local Search Procedure Through a Joint Strategy

In this section, we introduce the joint strategy of our local search procedure: we begin with the selection of suitable individuals for local search. This selection aims to choose individuals based on global and local population information using a fitness uniform selection scheme, given in ALGORITHM 2. Subsequently, we present several local search operators based on the representation discussed in Sect. 4.3. These operators are specially designed to work seamlessly with the different neighborhoods that are investigated in this paper. The joint strategy for local search is summarized in ALGORITHM 1.

ALGORITHM 1. Joint strategy for local search (Step 3.3 in Fig. 1)
Input: P^g, n_nb and n_set
Output: updated P^g
1 Select a small number n_set of individuals to form a subset SelectedIndiSet of P^g using ALGORITHM 2;
2 foreach Π in SelectedIndiSet do
3   Generate a set of n_nb neighbors from Π by local search;
4   Identify the best neighbor Π_best with the highest fitness;
5   Replace Π with Π_best;
6 return P^g;

ALGORITHM 1 takes three inputs: the gth population P^g, the number n_set of selected individuals for local search, and the number n_nb of neighbors. In this algorithm, we start by selecting a fixed and small number n_set of candidate solutions to form a subset SelectedIndiSet of the current population P^g using ALGORITHM 2, see details in Section 4.5.1. These selected solutions are used for local search. For each solution Π in SelectedIndiSet, we produce n_nb neighbors of Π by local search, see details in Section 4.5.2, and then identify the best neighbor Π_best among the produced neighbors. We replace the selected Π in SelectedIndiSet with its best neighbor Π_best. Eventually, we return an updated P^g.

Application of the uniform distribution scheme

Two types of selection schemes for selecting suitable individuals for local search have been studied [34]: the random selection scheme and the statistics scheme. The random selection scheme is a primary selection method, where local search is potentially applied to all individuals with a predefined rate. However, it can be less effective as it does not assign local search to the most suitable candidate solutions, and it is more time-consuming when the population size is huge. The statistics scheme, in contrast, chooses more suitable individuals based on statistical information about the current population.
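Before continuing with the selection schemes, a minimal sketch of Eqs. (7)-(9) and of an NHBSA-style sampling pass is given below. The real NHBSA of [22] differs in details (for example, in the order in which positions are filled), so this is only an approximation under stated assumptions.

```python
import random

def build_nhm(population, n, b_ratio=0.0002):
    """Eqs. (7)-(9): e[i][j] counts how often service i appears at
    position j across the population, plus a small bias eps."""
    m = len(population)
    eps = (m / (n - 1)) * b_ratio
    e = [[eps] * n for _ in range(n)]
    for perm in population:
        for j, service in enumerate(perm):
            e[service][j] += 1
    return e

def sample_nhbsa(nhm, n, rng=random):
    """Simplified node-histogram-based sampling: fill positions left to
    right, drawing the service for each position in proportion to its
    histogram column, without replacement."""
    remaining = set(range(n))
    perm = []
    for j in range(n):
        services = list(remaining)
        weights = [nhm[s][j] for s in services]
        pick = rng.choices(services, weights=weights, k=1)[0]
        remaining.remove(pick)
        perm.append(pick)
    return perm

# e.g. learn from two permutations of 5 services, then sample a new one
pop = [[1, 2, 3, 4, 0], [1, 3, 2, 4, 0]]
nhm = build_nhm(pop, n=5)
print(sample_nhbsa(nhm, n=5))
```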
For example, the statistics scheme can apply local search to a set of candidate solutions with the largest differences measured by their fitness values. Our selection scheme, inspired by [45], is based on such statistical information and aims to select a small number of suitable individuals for local search, striking a good balance between local improvement and execution time. This selection scheme is presented in ALGORITHM 2, which applies local search to a set of selected individuals SelectedIndiSet. The size of SelectedIndiSet, n_set, is a predefined parameter. SelectedIndiSet consists of one elite individual and n_set − 1 individuals from n_set − 1 groups of individuals in each generation. In particular, we calculate a uniform fitness interval based on the maximal fitness value maxfitness and the minimal fitness value minfitness of the current population P^g.

ALGORITHM 2. Fitness uniform selection scheme
Input: P^g and n_set
Output: selected solutions SelectedIndiSet
1 SelectedIndiSet ← {};
2 Sort P^g in descending order based on the fitness;
3 Put the first individual in P^g into SelectedIndiSet;
4 Calculate the fitness range for n_set − 1 groups based on a uniform interval between maxfitness and minfitness;
5 Assign each permutation in P^g to one of the n_set − 1 groups based on its fitness value;
6 Randomly select one permutation from each group and put it in SelectedIndiSet;
7 return SelectedIndiSet;

Therefore, the population is divided into n_set − 1 groups based on the calculated fitness interval, where each group contains individuals with close fitness values. Note that, in a given generation, the actual number of individuals selected for local search can be less than n_set, because no individuals may fall into some group based on their fitness values.

Stochastic Local Search Operators

To investigate an appropriate neighborhood structure for composite services, suitable local search operators must be proposed that effectively utilize domain knowledge. We then repeatedly apply these local search operators to SelectedIndiSet to explore the neighboring solutions. Apart from that, to balance the quality of local improvement against computation time, only a random subset of the entire large neighborhood is explored by a stochastic strategy. Based on the permutation-based representation discussed in Sect. 4.3, local search operators are proposed in a straightforward way as "swaps". In this paper, we investigate four different swap operators:

1) Constrained One-Point Swap: For a permutation Π = (Π_0, ..., Π_t, Π_{t+1}, ..., Π_{n−1}), two service indexes Π_a, where 0 ≤ a ≤ t, and Π_b, where t + 1 ≤ b ≤ n − 1, are selected and exchanged. The one-point swap local search operator is inspired by [9], which swaps a pair of service indexes in a permutation. In [9], local search exclusively explores the neighborhood based on one selected index of the permutation, so the size of the neighborhood associated with the index is n − 1. However, this can be very computationally expensive, because the number of swaps becomes significant for large n. Besides that, it can be less flexible, as the neighborhood is restricted to the one selected index.
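A minimal sketch of ALGORITHM 2 and of the basic constrained one-point swap just described is given below; grouping and tie-breaking details are simplified assumptions (e.g., the elite is excluded from the groups), and the refinements to the one-point swap continue after the sketch.

```python
import random

def fitness_uniform_select(population, fitnesses, n_set, rng=random):
    """ALGORITHM 2 (sketch, n_set >= 2): keep the best individual, split
    the fitness range [min, max] into n_set - 1 uniform intervals, and
    pick one random individual from each non-empty interval."""
    ranked = sorted(zip(population, fitnesses), key=lambda x: -x[1])
    selected = [ranked[0][0]]                       # the elite individual
    lo, hi = min(fitnesses), max(fitnesses)
    width = (hi - lo) / (n_set - 1) or 1.0          # guard: all fitness equal
    groups = {}
    for ind, f in ranked[1:]:
        g = min(int((f - lo) / width), n_set - 2)   # group index 0..n_set-2
        groups.setdefault(g, []).append(ind)
    selected += [rng.choice(g) for g in groups.values()]
    return selected

def constrained_one_point_swap(perm, t, rng=random):
    """One neighbor: swap a random used index (position <= t, i.e. before
    '|') with a random unused one (position > t, i.e. after '|')."""
    a = rng.randrange(0, t + 1)
    b = rng.randrange(t + 1, len(perm))
    neighbor = list(perm)
    neighbor[a], neighbor[b] = neighbor[b], neighbor[a]
    return neighbor

# e.g. select 3 individuals, then generate one neighbor of the first
pop = [[0, 1, 2, 3], [1, 0, 2, 3], [2, 0, 1, 3], [0, 2, 1, 3]]
chosen = fitness_uniform_select(pop, [0.9, 0.7, 0.4, 0.2], n_set=3)
print(constrained_one_point_swap(chosen[0], t=1))
```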
To address these drawbacks, we propose a more efficient and flexible local search with one-point swap: first, we pre-determine a fixed, relatively small number of neighbors n_nb to be produced within a budgeted computation time assigned to local search; second, we randomly produce the n_nb neighbors by swapping two randomly selected indexes, rather than by swapping n − 1 indexes with one fixed index. We expect that swapping two randomly selected indexes is more effective for making local improvements within a budgeted computation time. Meanwhile, we constrain the two randomly selected indexes so that in every swap they lie before | and after |, respectively, because this excludes swaps that have lower opportunities for local improvement. For example, a neighbor created by swapping a pair of used service indexes has a higher chance of producing the same DAG-based solution. Figure 4 shows an example of a one-point swap for a selected individual.

2) Constrained Two-Point Swap: For a permutation Π = (Π_0, ..., Π_t, Π_{t+1}, ..., Π_{n−1}), four service indexes Π_{a_1}, Π_{a_2}, Π_{b_1}, and Π_{b_2} are selected, where 0 ≤ a_1 ≤ t, 0 ≤ a_2 ≤ t, t + 1 ≤ b_1 ≤ n − 1, t + 1 ≤ b_2 ≤ n − 1, a_1 ≠ a_2, and b_1 ≠ b_2. Π_{a_1} and Π_{b_1} are exchanged; likewise, Π_{a_2} and Π_{b_2} are exchanged. Motivated by the one-point swap proposed above, we created the two-point swap operator by combining two constrained one-point swaps into a single operator. We hypothesize that the two-point swap could efficiently produce a higher-quality neighbor by one local change, rather than producing two neighbors by a sequence of two constrained one-point local changes. Primarily, given a budgeted number of candidate solutions for local search, a two-point swap operator can perform a more efficient local search for finding high-quality solutions. Figure 5 shows an example of a two-point swap for a selected individual and a produced neighbor.

3) Constrained One-Block Swap: This operator is based on the concept of a block, i.e., consecutive points (service indexes) in a permutation. In this swap, two blocks are built up based on two randomly generated starting points Π_a and Π_b, before | and after | of a permutation, respectively. After the swap, produced neighbors inherit two parts of the original permutation. Figure 5 shows an example of a constrained one-block swap for a permutation, where one block is built up from the start position StartPos1 to the last position of the used services, and another block is built up from the start position StartPos2 to the last index.

EXPERIMENTS

We conduct experiments to evaluate the performance of our memetic EDA-based approaches, i.e., memetic EDA with constrained one-point swap (henceforth referred to as MEEDA-OP), memetic EDA with constrained two-point swap (henceforth referred to as MEEDA-TP), memetic EDA with constrained layer-based one-point swap (henceforth referred to as MEEDA-LOP), and memetic EDA with constrained one-block swap (henceforth referred to as MEEDA-OB). These memetic EDA-based approaches are compared to some state-of-the-art EC-based methods that were recently proposed to solve the same or similar problems: a PSO-based approach [10] (henceforth referred to as PSO), a GA-based approach (henceforth referred to as GA), a memetic GA-based approach [9] (henceforth referred to as MEGA), and an EDA-based approach [12] (henceforth referred to as NHM-EDA). Two benchmarks are created by extending WSC-08 [1] and WSC-09 [2] with QoS attributes generated from the QoS distribution of QWS [30].
These two benchmarks have already been broadly employed in service composition [5], [10], [13] for experimental evaluations. Moreover, the number of web services in the service repository is doubled in the new benchmark (with a much bigger search space) to demonstrate that memetic EDA can maintain high performance on our problem with significantly larger sizes. We also make this benchmark available to the public. Particularly, WSC-08 contains 8 composition tasks with increasing sizes of the service repository, i.e., 316, 1116, 1216, 2082, 2180, 4396, 8226, and 16238 services, and WSC-09 contains 5 composition tasks with increasing sizes of the service repository, i.e., 1144, 8258, 16276, 16602, and 30422 services, respectively.

The population size is set to 200, the number of generations equals 100, and b_ratio is 0.0002. The size of SelectedIndiSet is 6, and the number of neighbors n_nb of each individual in SelectedIndiSet explored by the local search operators is 20. For all the competing methods, we strictly follow the settings in their respective papers. In GA, the crossover rate is set to 0.95 and the mutation rate to 0.05. In MEGA, the crossover rate is set to 0.95 and the local search rate to 0.05. We run the experiment with 30 independent repetitions. Following existing works [10], [11], [12], the weights of the fitness function in Eq. (5) are simply configured to balance QoSM and QoS. In particular, we set both w_1 and w_2 to 0.25, and w_3, w_4, w_5, and w_6 all to 0.125. Further experiments have been conducted and show that all our methods work consistently well under different weight settings. The parameter p of type_link is determined by the preference of users and is recommended to be 0.75 for the plugin match according to [39].

Comparison of the Fitness

We employ the independent-samples T-test with a significance level of 5% to verify the observed differences in performance concerning fitness value and execution time. In particular, we use a pairwise comparison to compare all competing approaches; the top performances are then identified, and their related values are highlighted in green color in Table 2. Note that those methods that consistently find the best-known solutions over 30 runs with 0 standard deviation are also marked as top performances. The pairwise comparison results for fitness are summarized in Table 3, where win/draw/loss shows the scores of one method compared to all the others, i.e., the frequency with which this method outperforms, equals, or is outperformed by the competing method. These testing and comparison methods are also used in Sect. 5.2.

One of the objectives of the experiments is to evaluate the effectiveness of the proposed memetic EDA-based approaches compared to NHM-EDA [12], PSO [10], GA, and MEGA [9]. Table 2 shows the mean fitness values and the standard deviations over 30 repetitions. The pairwise comparison results for the fitness values are summarized in Table 3. From Table 2 and Table 3, we observe some interesting behaviors of these approaches in finding high-quality solutions, and based on these observations we make some analyses and draw possible conclusions below. Firstly, regarding the two baseline methods PSO and GA: all EDA-based approaches (with and without local search) consistently outperform PSO, but only the memetic EDA-based approaches outperform GA. Then, MEGA [9] achieved results very comparable to all our memetic EDA-based methods. However, MEEDA-LOP achieves the best performance.
As shown in Table 3, MEEDA-LOP only loses 1 out of 13 composition tasks over WSC-08 and WSC-09. Furthermore, MEEDA-LOP achieves extremely stable performance in most runs with 0 standard deviation. In addition, MEEDA-OP, MEEDA-TP, MEEDA-OB, and MEEDA-LOP significantly outperform NHM-EDA [12]. This observation corresponds well with our expectation that the exploitation ability of EDA can be enhanced by hybridizing it with local search. We can see that all memetic EDA-based approaches reach a better balance of exploration and exploitation. Furthermore, among the four memetic EDA-based approaches, MEEDA-OB is the worst, while MEEDA-OP and MEEDA-TP are very comparable to each other. This observation demonstrates that the block-based neighborhood is less suitable for service composition problems, because swapping building blocks can potentially ruin the learned distribution of promising solutions. Lastly, MEEDA-LOP is the best performer. This observation corresponds well with our assumption that using the layer-based information can further improve the effectiveness of the one-point swap. MEEDA-LOP applies the local search operator to a much smaller but more useful set of services than that considered in MEEDA-OP. In summary, we sort all the competing approaches by effectiveness in descending order: MEEDA-LOP > MEGA > MEEDA-TP = MEEDA-OP > MEEDA-OB > GA > EDA > PSO.

Comparison of the Execution Time

The second objective of our experiment is to study the efficiency of all the proposed EDA-based approaches compared to EDA [12], PSO [10], GA, and MEGA [9]. Table 4 shows the mean values of the execution time and the standard deviations over 30 repetitions. The pairwise comparison results for the execution time are summarized in Table 5. From the two tables above, we make some analyses and draw possible conclusions about the execution time of these approaches as follows.

First, MEEDA-LOP consistently requires less execution time than the other approaches, which can be observed from the highlighted execution times in Table 4. It is a remarkable observation that the local search in MEEDA-LOP, based on layers and constrained one-point swap, requires less computation time than that of MEEDA-OP. This significant improvement is mainly due to two techniques in MEEDA-LOP. The first one is the archive technique, which carries half a population of elite individuals over to the next generation and significantly reduces the overall computation time for decoding and evaluating the reserved individuals in the future. The second one is the layer-based information, which improves the effectiveness of the one-point swap, resulting in a more accurate and reliable learned NHM. Therefore, useful services are more likely to be put at the front of the permutation, which accelerates the decoding process.

Second, in contrast, MEGA requires the highest execution time, because all the candidate solutions in MEGA have an opportunity for local search using the random selection scheme, and MEGA also exhaustively searches the whole neighborhood based on one position. These results confirm that the combination of the random selection scheme and the exhaustive local search strategy in MEGA is less effective and more time-consuming than our statistics scheme and stochastic local search operators. Lastly, MEEDA-OB is also very computation-intensive among the memetic EDA-based approaches.
This is because the one-block swap hinders accurate distributions from being learned, as its local improvements are less effective, so the services required for service composition are less likely to be put at the front of a service queue. Also, building the blocks consumes extra time in MEEDA-OB. In summary, we sort all the competing approaches by execution time in ascending order: MEEDA-LOP > MEEDA-OP > MEEDA-TP > PSO > GA > MEEDA-OB > MEGA.

Comparison of the Convergence Rate

The third objective of our experiment is to study the convergence rate of all the approaches over 30 independent runs. We have used WSC08-3 and WSC09-2 as two examples to illustrate the performance of all the compared methods. Since MEGA requires a much longer execution time, we set different execution time scales for the two tasks WSC08-3 and WSC09-2 to easily observe the differences.

First, we observe a significant increase in the fitness value towards the optimum for all the approaches excluding MEGA. These approaches eventually reach different levels of plateaus. Given the same budget of execution time, all memetic EDA-based methods converge significantly faster and require much less time than the baseline PSO over all the composition tasks. Second, MEGA suffers from a scalability issue when the size of the service repository is doubled in our new benchmark. The complexity of its local search strongly depends on n, i.e., the dimension of each permutation. Therefore, MEGA does not converge at all when it is assigned the same amount of execution time required by the other approaches. Lastly, MEEDA-LOP is consistently ranked as a top performer among all the competing methods. The convergence rates of MEEDA-OP and MEEDA-TP present a very similar pattern. MEEDA-OB converges more slowly than the others, but it eventually reaches results comparable to MEEDA-OP and MEEDA-TP.

Comparison of local search operators

We investigate how often the mean fitness of the neighbors is better than the fitness of their original permutation in MEEDA-OP, MEEDA-TP, MEEDA-LOP, and MEEDA-OB, to demonstrate which swap-based local search operator is more likely to produce better solutions. Herein we use the composition task WSC08-03 as an example: Fig. 9 shows the percentage of better neighbors produced by our four memetic EDA-based approaches along the generations over 30 runs for WSC08-03. The results show that MEEDA-OB and MEEDA-TP are less likely to produce better solutions, while MEEDA-OP and MEEDA-LOP are very comparable to each other, although slightly higher percentages of better mean fitness are achieved by MEEDA-LOP.

We further analyze the differences between the layer-based constrained one-point swap and the constrained one-point swap operator using a permutation in Figure 10. Figure 10 exhibits an example of two neighbors produced from a permutation using constrained one-point swaps without considering layer information. In the example, one identical solution can be decoded from both the given permutation and the two produced neighbors, resulting in no local exploitation. In contrast, the discussed swapping cases are not permitted by the layer-based constrained one-point swap, where any produced neighbor must strictly follow the layer order on the left-hand side of the permutation. In the example, the given permutation is highlighted with two layers (i.e., L_1 and L_2) in ascending order; particularly, S_1, S_2 ∈ L_1 and S_3 ∈ L_2.
When the constrained one-point swap is performed, S_3 in the given permutation is replaced with S_4 or S_0 in the produced neighbor 1 and neighbor 2, respectively. However, L_2 is destroyed in the produced neighbors because S_4 ∈ L_1 and S_0 ∈ L_1. If the layer-based one-point swap is applied to the given permutation instead, it prevents these two neighbors from being produced. In general, all produced neighbors must keep all the ordered layers of the given permutation.

CONCLUSION

In this paper, we propose effective and efficient memetic EDA-based approaches to fully automated service composition. The success of this memetic approach principally relies on the local search, where several ideas are jointly employed. In particular, we proposed several neighborhood structures via different local search operators, which integrate naturally with our permutation-based representation. Besides that, a uniform distribution scheme and a stochastic strategy are jointly utilized for selecting and applying local search. The experiments show that one of our proposed approaches, MEEDA-LOP, achieves significantly better effectiveness and efficiency compared to some state-of-the-art EC-based approaches and the other memetic EDA-based approaches proposed in this paper. Future work can investigate variable neighborhoods that combine more than one local search operator in one evolutionary process, and investigate memetic EDA for handling multi-objective service composition problems.
9,373
1810.12118
2950784788
Question answering (QA) has significantly benefitted from deep learning techniques in recent years. However, domain-specific QA remains a challenge due to the significant amount of data required to train a neural network. This paper studies the answer sentence selection task in the Bible domain and answers questions by selecting relevant verses from the Bible. For this purpose, we create a new dataset BibleQA based on Bible trivia questions and propose three neural network models for our task. We pre-train our models on a large-scale QA dataset, SQuAD, and investigate the effect of transferring weights on model accuracy. Furthermore, we also measure the model accuracies with different answer context lengths and different Bible translations. We affirm that transfer learning yields a noticeable improvement in model accuracy. We achieve relatively good results with shorter context lengths, whereas longer context lengths decrease model accuracy. We also find that using a more modern Bible translation in the dataset has a positive effect on the task.
From a text retrieval perspective, question answering embodies the task of finding the relevant piece of text containing the answer and subsequently extracting the answer @cite_20 . This view led to open-domain QA, which encompasses the majority of today's QA systems. In recent years, QA began incorporating machine learning, with IBM Watson being one of the most famous systems @cite_16 . The primary approach behind Watson combines extensive data with statistical and machine learning analysis. Several other neural network approaches have also been explored. Neural networks have been used to answer quiz-bowl-type questions, where, given a description, the task is to identify the subject being discussed @cite_1 . Simple RNN models have been extended with an attention mechanism to enable transitive reasoning, taking steps towards reasoning-based QA @cite_30 . Malinowski et al. proposed a model using both a CNN and an LSTM to incorporate image recognition into QA @cite_5 .
{ "abstract": [ "Most tasks in natural language processing can be cast into question answering (QA) problems over language input. We introduce the dynamic memory network (DMN), a neural network architecture which processes input sequences and questions, forms episodic memories, and generates relevant answers. Questions trigger an iterative attention process which allows the model to condition its attention on the inputs and the result of previous iterations. These results are then reasoned over in a hierarchical recurrent sequence model to generate answers. The DMN can be trained end-to-end and obtains state-of-the-art results on several types of tasks and datasets: question answering (Facebook's bAbI dataset), text classification for sentiment analysis (Stanford Sentiment Treebank) and sequence modeling for part-of-speech tagging (WSJ-PTB). The training for these different tasks relies exclusively on trained word vector representations and input-question-answer triplets.", "", "We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Neural-Image-QA, an end-to-end formulation to this problem for which all parts are trained jointly. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language input (image and question). Our approach Neural-Image-QA doubles the performance of the previous best approach on this problem. We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extends the original DAQUAR dataset to DAQUAR-Consensus.", "IBM Research undertook a challenge to build a computer system that could compete at the human champion level in real time on the American TV Quiz show, Jeopardy! The extent of the challenge includes fielding a real-time automatic contestant on the show, not merely a laboratory exercise. The Jeopardy! Challenge helped us address requirements that led to the design of the DeepQA architecture and the implementation of Watson. After 3 years of intense research and development by a core team of about 20 researches, Watson is performing at human expert-levels in terms of precision, confidence and speed at the Jeopardy! Quiz show. Our results strongly suggest that DeepQA is an effective and extensible architecture that may be used as a foundation for combining, deploying, evaluating and advancing a wide range of algorithmic techniques to rapidly advance the field of QA.", "A method for preparing silica-containing olefin polymerization catalysts, and the process performable therewith, the preparation involving adding an alkali met al silicate to an acid under defined conditions of addition to produce a hydrogel, recovering the gel in the substantially dry condition by employment of an oxygenated organic compound and impregnating the gel with a chromium compound." ], "cite_N": [ "@cite_30", "@cite_1", "@cite_5", "@cite_16", "@cite_20" ], "mid": [ "2131494463", "", "2952246170", "2171278097", "2086511124" ] }
Finding Answers from the Word of God: Domain Adaptation for Neural Networks in Biblical Question Answering
The ability of a computer system to answer natural language questions has long been seen as a signifier of artificial intelligence. In recent years, neural networks and machine learning have become popular approaches for question answering (QA) tasks within the natural language processing (NLP) community. One problem with this approach is the expense of creating a suitable dataset for specific domains. Machine learning works well under the assumption that the training and test data come from the same distribution; therefore, the tasks that machine learning can solve are highly dependent on the dataset. As a result, most neural QA research predominantly uses existing datasets, and not as much work has been done for domain-specific QA.

In this paper, we focus on the task of answer sentence selection, specifically to answer questions by selecting verses from the Bible as answers. This task takes as input a question and a context paragraph and asks for a sentence from the context that contains the answer to the given question. In our case, the context consists of passages from the Bible. The Bible is not only an influential literary work but is also the most important religious document amongst Christians. So far, not much work has been done using this corpus within QA or other NLP tasks. Even today, it is still widely read and studied amongst both the religious and the secular community. The Bible is often seen as a source of wisdom where people turn to seek answers to the big questions in life. A QA system that can answer a question using passages from the Bible has the potential to be very beneficial for its users.

A biblical QA system could be useful for non-Christians seeking to learn more about the Bible. They might ask questions such as: "Who is Jesus?", "What will happen when I die?". The system could then output a series of relevant verses as answers. The same system could also be useful for scholars seeking to use the Bible as a historical or archaeological document. They could be interested in questions such as "When did Babylon destroy the Jerusalem temple?" or "Where was the city of Jericho located?". Such questions can be answered from the relevant passages that describe the historical aspects of the Bible. Finally, one of the widest use cases could come from the Christian community, who uphold the Bible as the ultimate authority for their faith and life. They could be interested in a wide variety of questions, ranging from the theological to the practical. For example: "Is salvation by faith or by works?", "How should I treat people that have wronged me?", "How should I pray?". While many of these questions can also be answered through a search engine, the quality of results from search engines can often be questionable, and answering with passages directly from the Bible is particularly valuable from a Christian perspective.

Contribution. The goal of this research is to investigate neural-based methods for answering biblical questions through verse selection. (1) Since large-scale datasets are needed for efficient learning using neural network methods, our first contribution involves the creation of a new dataset, BibleQA. The dataset consists of biblical questions and the corresponding verses, derived from an existing set of questions that is available on the Internet. (2) Then, for biblical sentence selection, we design three answer selection models based on different neural network architectures.
Each of these models takes as input a question and an answer verse and outputs a predicted probability of the verse containing the answer to the question. (3) Thirdly, we leverage transfer learning techniques by pre-training the models on a larger QA dataset and provide insight into the effect of domain adaptation in QA tasks. Our experiments also reveal how changing context lengths affects the performance of answer selection, and reveal new insights regarding the various Bible translations.

Methodologies

For our task, we employ three main neural network models for comparison purposes: one using a recurrent neural network (RNN), one using a convolutional neural network (CNN), and another using an adapted Bi-Directional Attention Flow (BiDAF) model first suggested by Seo et al. [23]. The three models all follow the same general architecture, which the subsequent sections describe in more detail:

1. Embedding: The input question and answers are first pre-processed and converted to word vectors.
2. Encoding: The embedded sentences are then processed and encoded to obtain one single vector representation that captures the sentence.
3. Answer Selection: Based on the encoded question and answer, select an answer as the predicted output.

Word Embedding

Word embedding captures word context using distributed word vectors. The underlying intuition is that words in similar environments tend to have similar meanings [17]. Here we make use of both GloVe vectors and word2vec. word2vec is modelled as a shallow, two-layered neural network which uses stochastic gradient descent and backpropagation to iteratively make a word embedding more similar to those of its neighboring words. The model successfully reduces the complexity of the non-linear hidden layer and makes it possible to learn high-dimensional word vectors on a significant amount of data. GloVe is an alternative unsupervised learning algorithm to word2vec, which is also used to obtain vector representations for words [20]. GloVe has pre-trained vectors available online that were trained on 6 billion tokens from Wikipedia and various news outlets, making it very suitable for training on SQuAD.

As the Bible contains many words and names that are unique to its context, we also trained our own word vectors for the Bible. We used a combination of all four aforementioned English translations for the word vector training, which includes around 3 million words altogether. In the vector training, we used a context window size of 5 and the Continuous Bag-of-Words algorithm to train vectors of dimension 200. The resulting word vectors were able to capture the general semantics of Bible-specific vocabulary. Below are some of the most similar words for a few selected words, in descending order of similarity, using the derived word vectors (the lists follow the sketch below).
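A sketch of this training setup, assuming the gensim library (version 4.x) and a placeholder corpus in place of the real tokenized translations, could look as follows.

```python
from gensim.models import Word2Vec  # assumes gensim 4.x is installed

# Placeholder corpus: in the real setting, `verses` holds the tokenized
# text of all four translations (KJV, ASV, YLT, WEB), ~3M words in total.
verses = [
    ['in', 'the', 'beginning', 'god', 'created',
     'the', 'heaven', 'and', 'the', 'earth'],
]

model = Word2Vec(
    sentences=verses,
    vector_size=200,  # dimension 200 ('size=' in gensim 3.x)
    window=5,         # context window size of 5
    sg=0,             # sg=0 selects the Continuous Bag-of-Words algorithm
    min_count=1,      # raise this threshold for a real corpus
)

# Querying the nearest neighbors of a word, as done for the lists below:
print(model.wv.most_similar('god', topn=10))
```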
God: lord, saviour, holiness, mercy, lovingkindness, sworn, redeemer, salvation, jehovah, endureth
sin: trespass, sins, guilt, guilty, transgression, forgiven, sinned, iniquity, forgive, ignorance
david: saul, absalom, joab, abimelech, solomon, abner, jonathan, abraham, achish, samuel

From these lists, we see that the most similar words for 'God' capture many qualities and roles God is seen to have throughout the Bible. The most similar words for 'David' are the names of others closely related to him: Saul, his primary adversary; Absalom and Solomon, his sons; Joab, his army commander; and Jonathan, his best friend. The trained word vectors were concatenated with the GloVe vectors in the transfer learning process so that the training on BibleQA would be more meaningful.

Models for the QA System

The baseline model. This model acts as the basis of comparison for all other results. The model uses a random function to generate an output uniformly at random in the range [0, 1] for each data point. The baseline gives us a model that performs at a level that involves no learning and simply assigns a random prediction to each question-answer pair. The baseline is then compared with our models to evaluate the improvement made by the more sophisticated models.

The RNN model. Recurrent networks are designed to model sequences, allowing users to work with sequences while preserving structural information. They are particularly useful in NLP tasks due to the sequential nature of languages. A recurrent network contains loops, where the output of a particular layer is passed back to the same layer as input. This allows information to persist and captures long-term dependencies such as those that appear in sequences. One of the most popular implementations of the RNN is the Long Short-Term Memory (LSTM), which was introduced to mitigate the vanishing gradients problem [12]. As the sequence grows longer, the distance between the current word and the dependent context grows longer. This means that the error gradients at later steps of the sequence diminish quickly during back-propagation and do not reach earlier input signals; hence the gradients "vanish". This makes it very difficult to capture relevant information. LSTM introduces a vector that acts as a memory cell, which preserves gradients over time. Access to the memory cell is controlled by gating components that can be thought of as logical gates.

Our RNN model makes use of LSTM layers to produce vector representations of the question and answer phrases. The output is obtained as a probability in [0, 1] that indicates the similarity between the question vector and the answer vector. This is based on the intuition that sentences with closer vectors should be more similar, and therefore the answer should be more relevant to the question. The word embedding layer transforms each word into a word vector. A question is then a sequence of word vectors x = (x_1, x_2, ..., x_t) and an answer is another sequence of word vectors y = (y_1, y_2, ..., y_t). The encoding procedure applies two LSTMs, one for the question and the other for the answer: for each example (x, y), it sets for each t

$$s_t = (C_t, h_t) = R_{LSTM}(s_{t-1}, x_t) \quad \text{and} \quad s'_t = (C'_t, h'_t) = R'_{LSTM}(s'_{t-1}, y_t)$$

where s_t (s'_t) is the tth state, C_t (C'_t) is the memory-cell state, h_t (h'_t) is the output state, and R_LSTM (R'_LSTM) is the LSTM network for the question (answer). The output is m = (Q_e, A_e), where Q_e = (h_1, h_2, ..., h_t) and A_e = (h'_1, h'_2, ..., h'_t). Finally, we concatenate the question and answer vectors and pass them through a final layer with the sigmoid activation function σ(x) = 1/(1 + e^{−x}) to obtain the predicted likelihood of the answer being the correct one for the question. Fig. 1 shows an overview of the layers.
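A minimal sketch of this RNN model in Keras is shown below; the vocabulary size, sequence length, and LSTM width are assumptions not specified in the text, while the AdaGrad optimizer and the learning rate of 0.001 follow the experiment setup described later.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, DIM, MAXLEN = 20000, 200, 50  # assumed sizes, not from the paper

q_in = layers.Input(shape=(MAXLEN,), name='question')
a_in = layers.Input(shape=(MAXLEN,), name='answer')

embed = layers.Embedding(VOCAB, DIM)           # word embedding layer
q_vec = layers.LSTM(128)(embed(q_in))          # R_LSTM over the question
a_vec = layers.LSTM(128)(embed(a_in))          # R'_LSTM over the answer

merged = layers.Concatenate()([q_vec, a_vec])  # concatenate Q_e and A_e
p = layers.Dense(1, activation='sigmoid')(merged)  # predicted likelihood

model = Model([q_in, a_in], p)
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.001),
              loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
```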
The CNN model. CNNs are special feed-forward neural networks consisting of convolutional and pooling layers alongside fully connected layers. These specialized layers are useful for finding strong local information present within the input regardless of the position of the signals; e.g., in a QA task, a sentence may contain a key phrase that strongly indicates it as the answer to a question. Our CNN model uses the convolutional and pooling layers to represent the question and answer phrases. We also use a dropout layer to regularize the weights and avoid overfitting. For each question and answer sequence, the convolution layer applies a kernel across the sequence and transforms it using a filter; the result passes through a max-pooling layer to obtain the element-wise maximum, and finally through an output layer which returns a prediction. The convolution and max-pooling layers for each question x and answer y are:

$$Q^c_i = f\left(W^T x_{i:i+k-1} + b\right) \quad \text{and} \quad A^c_i = f\left(W^T y_{i:i+k-1} + b\right)$$

$$Q^v_k = \max_{1 \le i \le m} Q^c_i[k] \quad \text{and} \quad A^v_k = \max_{1 \le i \le m} A^c_i[k]$$

where k is the window size, f is the ReLU activation function f(x) = max{0, x}, W is the filter vector that performs the linear transformation, and b is the bias parameter of the network. Finally, the output layers are the same as for the RNN model. Fig. 2 shows an overview of the layers.

The BiDAF model. The BiDAF model was proposed for the SQuAD dataset for span-level QA tasks [23]. The original model was used to find the start and end indices of the answer to a question within a context paragraph: the question and context paragraphs are first converted to vectors using both word and character embeddings, and then combined to form a phrase embedding using an LSTM. The question and context paragraphs are then combined to produce a set of query-aware vectors for every word in the context. Finally, another LSTM is used to scan the context paragraph, and the output layer produces the start and end indices for the question. For our sentence-level task, we modify the original BiDAF model slightly: we use the candidate sentences as the context in the original model, eliminate the use of character embeddings, and output only the probability of the answer sentence being the correct answer to the question. Fig. 3 shows an overview of the layers.

The word and phrasal embedding layers are similar to the RNN model in using LSTMs; their result contains the matrix representations of a question q and a candidate answer a. Following that, Q2C and C2Q are two layers that compute the "attention" for the question and answer, which is essentially the interaction between the question and the answer. More formally, we use U_{:i} to indicate the ith column vector of any matrix U. The bidirectional attention is determined by a similarity matrix S_{j,k} between the jth answer word and the kth question word, defined as S_{j,k} = α(a_{:j}, q_{:k}), where α(a, q) is a trainable scalar function that represents the similarity between the vectors a and q. The Q2C, or query-to-context, layer signifies which answer words have the closest similarity to one of the question words. The C2Q, or context-to-query, layer determines which question words are the most relevant to each answer word. The exact definitions of S_{j,k}, Q2C, and C2Q can be found in [23]. The Q2C and C2Q outputs are then concatenated with the answer embedding to output query-aware vector representations of the context words; this takes the form of a matrix G, whose jth column is G_{:j} = β(A_{:j}, Q̃_{:j}, Ã_{:j}), where β is a trainable function that combines the three representations.
The result is then passed on to one more LSTM layer to output representation matrix m, and finally to the output layer which predicts the probability of a being the answer of q. Experiments Experiment Setup We performed experiments for each of the baseline, RNN, CNN and BiDAF model. For any dataset, we use 70% of the samples for training and 30% for testing. Out of the 70% training data points, 10% will be used as the validation set for monitoring overfitting. At each epoch, the model trains on the training sets and calculates the loss and accuracy concerning both the training and validation set. If the validation loss is much higher than the training loss and the validation accuracy is much lower than training accuracy, then we can conclude that the model is overfitting to the training set and will need to modify our model accordingly. We used the GloVe word vectors (dimension d = 100) for training the SQuAD models, and both GloVe and word2vec vectors for the BibleQA models where the word2vec vectors (d = 200) are trained on 4 versions of the Bible. The loss function that our models will learn to minimize is the binary cross entropy, defined as: L(Θ) = − 1 N N i=1 (y i · log(p i ) + (1 − y i ) · log(1 − p i )) where Θ is the set of parameters, N represents the number of training instances, p i is the probability of class 1, 1 − p i is the probability of class 0, and y i ∈ {0, 1} is the true label of the ith observation. Here, the value of p i is found by the probability of the activation layer using a sigmoid activation: For weight vector Θ and input vector x, p i = 1 1+e −Θ· x . We used the adaptive gradient (AdaGrad) optimizer to train the neural networks, which is a modified stochastic gradient with per-parameter learning rate [8]. The learning rate of a model determines the rate of update between each iteration of backpropagation. AdaGrad allows the learning rate to adapt based on the parameters. It performs larger updates for infrequent parameters and smaller updates for frequent parameters, and often improves convergence in tasks where the data is sparse -such as NLP and image recognition. Let g τ = ∇L(Θ) be the gradient at iteration τ . The per-parameter update for AdaGrad uses the following formula: Θ τ +1 = Θ τ − η G τ,τ g τ where η is the learning rate, and G τ,τ = τ j=1 g 2 j,τ produces a scaling factor for the parameter Θ τ . This leads to a different learning rate update for each parameter based on the scaling factor and the learning process is adaptive. For our experiments, we will use two metrics to evaluate their performance: (1) F1 score, a widely-used accuracy indicator, is defined as F 1 = 2P R/(P + R) where P and R are precision and recall, resp. (2) Mean reciprocal rank (MRR) which commonly measures accuracy of ranked outputs, and is applicable here as we are essentially ranking the candidate sentences for each question. MRR is defined as 1 n n i=1 rank −1 i where n is the number of questions and rank i is the rank of the correct answer of the ith question. Experiment 1: Transfer Learning Parameter Tuning Goal and method. The first experiment investigates the effect of transfer learning on the model accuracy. We pre-train the model on the SQuAD dataset and compare that with training only on BibleQA. For each model, 1) we first run the model on BibleQA to obtain a set results. 2) Then, we train the same model on SQuAD to obtain the trained weights. 
3) Finally, we run the model on BibleQA once again using the trained weights from SQuAD and perform weight fine-tuning once again. We then compare the results of each model to see whether there are improvements from before using the transferred weights. We tune parameters such as learning rate and epoch to find the best performing model. We find that a learning rate of η = 0.001 worked the best for the RNN and BiDAF model, and η = 0.0001 for the CNN model. We use an early stopping mechanism for determining the optimal number of epochs trained, which monitors the validation loss at each epoch and stops the training once the model stops improving. We set the patience to 10, which means that the model will wait for 10 epochs before terminating the training. Optimal results are achieved with 20 to 30 epochs. Results and analysis. Table 1 contains the results we obtained from before and after the weight transfer. We can see that using the transferred weight improves the F1 results by 0.08 to 0.09 (which is around 20%-30% improvement). This shows that just as we hypothesized, pre-training had a positive effect on the training accuracy. However, while the F1 score increases with transferring weights, the MRR of the model decreased by 0.05 and 0.06 for the CNN and BiDAF model, with only the RNN model increasing by 0.05. This was surprising as it is often assumed that there is a correlation between different evaluation measures -that a higher F1 would also result in a higher MRR. By considering what the MRR is measuring, it seems that these models had a higher average ranking for the correct output. However since these models also had lower F1, they are less likely to choose the correct answer as the top ranking answer. So while the models improved the F1 score, it is choosing the correct output more often, but it also ranks the correct answer lower in the cases where the model incorrectly predicts the results. This is an interesting phenomenon and will be worth looking into in the future. Out of the three models, the RNN model performs the best overall with the highest F1 score both before and after weight transfer, and the highest MRR after weight transfer. Experiment 2: Answer Context Length Goal and method. The second experiment aims to find the variation of prediction results by changing the length of the answer context, or in other words, the number of candidate sentences for each question. During the dataset construction phase, after we identify the verse that corresponds to the correct answer to a question, we include a different number of context verses surrounding the correct answer. We created three types of datasets this way: BibleQA-3, BibleQA-10, and BibleQA-chapter. For BibleQA-3 and BibleQA-10, we included 3 and 10 verses surrounding the true verse respectively as candidates. For the chapter version, we included all verses from the same chapter that usually ranges from 10 to 60 verses. Each RNN, CNN, and BiDAF model was tuned on BibleQA-3 for the maximal accuracy result, and subsequently, the same model was used for the prediction for BibleQA-10 and BibleQA-chapter. For all datasets, we included all four Bible translations: KJV, ASV, YLT and WEB. Results and analysis. We used the tuned models from the last experiment for training each dataset, which compares the effect of changing the length of the context has on the model accuracy. Table 2 describes the results among the three datasets with different answer context lengths. 
Across all three datasets, the CNN model performs the best using the F1 measure, while BiDAF generally has the best MRR score. This echoes the interesting phenomenon as mentioned above as to why certain models would have a higher F1 score but lower MRR than others. Once again, more investigation is needed. For the shortest context with three verses, all the models significantly improve on the baseline by 0.13 to 0.19 F1, with the best results from the RNN model on both F1 and MRR. As the context length increases to 10 verses, the model accuracy significantly decreases, improving only 0.03 to 0.05 of the F1 score from the baseline model. The RNN drops its performance compared to others, and CNN rises as the model with the highest F1, but BiDAF becomes the model with the highest MRR. Finally, in the longest context length of using the entire chapter, the models perform at around the same level as the baseline model, if not worse. This shows that the models are not yet able to be used for longer contexts. A larger dataset and longer training time could be used to train a more robust model that can deal with longer context lengths in the future. Experiment 3: Translation Version Goal and method. The third experiment focuses on the differences among various English translations of the Bible. As mentioned above, we used 4 English translations in training the word vectors creating the dataset. Each of these translations varies in their translation philosophy, as well as the modernity of their language. YLT is the most literal English translation of the original languages. The other three translations are roughly the same in their translation philosophy in that they are not as literal as YLT, but also strive to truthfully capture the original meaning. Ordering them by modernity, KJV was the oldest translation being published 400 years ago, while WEB is the most recent at 2000. The literality of the translations could affect the sentence representations, as they could choose to use certain words for translation that are a more direct Table 2: Comparison between BibleQA-3, BibleQA-10 and BibleQA-Chapter for different models translation of the intended meaning. The modernity of the language could also affect the word vector, since older translations are likely to contain obsolete words that are no longer used, and therefore the word vectors may not necessarily capture an accurate representation. We want to compare whether the model prediction changes based on the level of literal translation, or by the language modernity. For this experiment, we create four further datasets, each only using one particular translation. We use the format of BibleQA-10, and select 10 candidate verses for each question. We use the same three models, and compare the result for each translation individually. Results and analysis. Table 3 compares the results for each translation within each model. Looking at both F1 and MRR scores, WEB achieves the best performance, having the top MRR result for RNN and the top F1 score for CNN/BiDAF. This suggests that using a translation with more modern language can be beneficial for the QA process, and it could be the case that the word vectors used were able to capture more accurate meanings. The KJV translation follows the WEB translation and has the highest performance for the RNN model using the F1 measure, as well as for the CNN model under MRR. 
The high performance was surprising, as we expected that for a translation such as KJV, some of the archaic language used in the translation could have been a deterrent for learning useful word vectors. It turns out that despite the choice of English words, the KJV still may perform relatively well in NLP tasks. The YLT has the highest MRR result for the BiDAF model. However, the variance is not large enough for the result to be significant, and YLT does not perform particularly well in any other models. From this, we conclude that the translation philosophy and the level of literalness do not necessarily play a dominant role in training QA system. In this paper, we leverage transfer learning techniques to study domain adaptation in QA tasks using the BibleQA dataset. Transferring the weights from the much larger SQuAD dataset has a noticeable improvement in the model accuracy. This showed the potential of using transferred weights for this particular task. We also find that RNN was the best performing model, while BiDAF did not perform as well as expected despite being the most complicated model. This suggests that simpler architectures can still sometimes achieve relatively good results. When increasing the number of candidate sentences to choose from as answers to questions, unsurprisingly, the model performances deteriorates. Comparing different Bible translations that vary in degrees of literalness in translation as well as the modernity of language, we find that the World English Bible gives the best results, followed by the King James Version. The modernity of the language may be attributed to the good performance of WEB. At the same time, although KJV was written centuries ago and uses different words than we do now, it is still able to produce useful results. Furthermore, Young's Literal Translation being the most literal translation does not perform particularly well. We conclude that the translation philosophy, and how literal a translation is, does not necessarily improve the results. Our system has certain limitations, and here we will suggest some potential improvements and direction for future research. The implementation of the BiDAF model was entirely dependent on the DeepQA library, which has very recently been deprecated. The researchers behind DeepQA has ported the library to PyTorch 4 which they have found to be better for NLP research. In the future, it could be worthwhile to consider implementation using PyTorch instead of Keras, and to use reliable and stable software frameworks. The sentence encoding methods used in our models are still relatively simple, in particular, the RNN and the CNN models. The main issue with our current method of encoding is that it mostly only takes into consideration the semantics of the sentences, and not as much the syntax. While it is a simple method that has shown to have worked relatively well, to achieve better accuracy we could consider incorporating an encoding scheme which considers both the syntax and the semantics, such as the treeLSTM [24] or an ensemble of different encoding schemes. The domain adaptation methods used in our systems was also a simple approach which only involves pre-training the weights and transferring the weights. More exploration into improving the transfer learning method could be beneficial. For example, transferring only the weights of certain layers or tuning them at different learning rates. 
As transfer learning becomes more widely used in NLP research, we expect that more effective methods would emerge that can improve the system. The accuracy of the model is highly dependent on the quality of the dataset. The BibleQA dataset we created was had only 886 distinct question. Extending the size of the dataset is also a worthwhile task in the future, that can be done by manually adding more questions, combining with other sources of Bible questions, or potentially leveraging techniques that could automatically generate questions based on a text.
4,795
1810.12118
2950784788
Question answering (QA) has significantly benefited from deep learning techniques in recent years. However, domain-specific QA remains a challenge due to the significant amount of data required to train a neural network. This paper studies the answer sentence selection task in the Bible domain and answers questions by selecting relevant verses from the Bible. For this purpose, we create a new dataset, BibleQA, based on Bible trivia questions, and propose three neural network models for our task. We pre-train our models on a large-scale QA dataset, SQuAD, and investigate the effect of transferring weights on model accuracy. Furthermore, we also measure the model accuracies with different answer context lengths and different Bible translations. We affirm that transfer learning yields a noticeable improvement in model accuracy. We achieve relatively good results with shorter context lengths, whereas longer context lengths decrease model accuracy. We also find that using a more modern Bible translation in the dataset has a positive effect on the task.
Answer sentence selection is a QA task which involves selecting the sentence that is the most likely to contain the answer. Early approaches were predominantly syntactical, using the idea that the question and the answer sentence should relate to each other loosely through syntactical transformations. The authors of @cite_0 proposed a generative model that transforms the answers to the questions. Wang and Manning introduced a probabilistic model that models tree-edit operations on dependency parse trees, making use of sophisticated linguistic features @cite_28 . Other similar models include using dynamic programming to find the optimal tree-edit sequences @cite_27 . The main drawback of these approaches is that they required too much feature engineering and were difficult to adapt to new domains. Only recently have researchers started applying neural network models. CNN models were used for answer sentence selection on the TREC benchmark @cite_33 , and several further CNN models have been proposed for this task. Wang and Nyberg constructed a joint vector based on both the question and the answer using an LSTM model @cite_18 .
{ "abstract": [ "In this paper, we present an approach that address the answer sentence selection problem for question answering. The proposed method uses a stacked bidirectional Long-Short Term Memory (BLSTM) network to sequentially read words from question and answer sentences, and then outputs their relevance scores. Unlike prior work, this approach does not require any syntactic parsing or external knowledge resources such as WordNet which may not be available in some domains or languages. The full system is based on a combination of the stacked BLSTM relevance model and keywords matching. The results of our experiments on a public benchmark dataset from TREC show that our system outperforms previous work which requires syntactic features and external knowledge resources.", "Answer sentence selection is the task of identifying sentences that contain the answer to a given question. This is an important problem in its own right as well as in the larger context of open domain question answering. We propose a novel approach to solving this task via means of distributed representations, and learn to match questions with answers by considering their semantic encoding. This contrasts prior work on this task, which typically relies on classifiers with large numbers of hand-crafted syntactic and semantic features and various external resources. Our approach does not require any feature engineering nor does it involve specialist linguistic data, making this model easily applicable to a wide range of domains and languages. Experimental results on a standard benchmark dataset from TREC demonstrate that---despite its simplicity---our model matches state of the art performance on the answer sentence selection task.", "A range of Natural Language Processing tasks involve making judgments about the semantic relatedness of a pair of sentences, such as Recognizing Textual Entailment (RTE) and answer selection for Question Answering (QA). A key challenge that these tasks face in common is the lack of explicit alignment annotation between a sentence pair. We capture the alignment by using a novel probabilistic model that models tree-edit operations on dependency parse trees. Unlike previous tree-edit models which require a separate alignment-finding phase and resort to ad-hoc distance metrics, our method treats alignments as structured latent variables, and offers a principled framework for incorporating complex linguistic features. We demonstrate the robustness of our model by conducting experiments for RTE and QA, and show that our model performs competitively on both tasks with the same set of general features.", "", "Our goal is to extract answers from preretrieved sentences for Question Answering (QA). We construct a linear-chain Conditional Random Field based on pairs of questions and their possible answer sentences, learning the association between questions and answer types. This casts answer extraction as an answer sequence tagging problem for the first time, where knowledge of shared structure between question and source sentence is incorporated through features based on Tree Edit Distance (TED). Our model is free of manually created question and answer templates, fast to run (processing 200 QA pairs per second excluding parsing time), and yields an F1 of 63.3 on a new public dataset based on prior TREC QA evaluations. The developed system is open-source, and includes an implementation of the TED model that is state of the art in the task of ranking QA pairs." 
], "cite_N": [ "@cite_18", "@cite_33", "@cite_28", "@cite_0", "@cite_27" ], "mid": [ "2251202616", "1591825359", "2112729630", "", "2125313055" ] }
Finding Answers from the Word of God: Domain Adaptation for Neural Networks in Biblical Question Answering
A computer system that can answer natural language questions has long been regarded as a signifier of artificial intelligence. In recent years, neural networks and machine learning have become popular approaches for question answering (QA) tasks within the natural language processing (NLP) community. One problem with this approach is the expense of creating a suitable dataset for specific domains. Machine learning works well under the assumption that the training and test data are drawn from the same distribution. Therefore, the tasks that machine learning can solve are highly dependent on the dataset. As a result, most neural QA research uses existing datasets, and not as much work has been done on domain-specific QA. In this paper, we focus on the task of answer sentence selection, specifically answering questions by selecting verses from the Bible as answers. This task takes as input a question and a context paragraph, and asks for a sentence from the context that contains the answer to the given question. In our case, the context consists of passages from the Bible. The Bible is not only an influential literary work but also the most important religious document amongst Christians. So far, not much work has been done using this corpus within QA or other NLP tasks. Even today, it is still widely read and studied amongst both the religious and the secular community. The Bible is often seen as a source of wisdom where people turn to seek answers to the big questions in life. A QA system that can answer a question using passages from the Bible has the potential to be very beneficial for its users. A biblical QA system could be useful for non-Christians seeking to learn more about the Bible. They might ask questions such as "Who is Jesus?" or "What will happen when I die?". The system could then output a series of relevant verses as answers. The same system could also be useful for scholars seeking to use the Bible as a historical or archaeological document. They could be interested in questions such as "When did Babylon destroy the Jerusalem temple?" or "Where was the city of Jericho located?". Such questions can be answered from the relevant passages that describe the historical aspects of the Bible. Finally, one of the widest use cases could come from the Christian community, who uphold the Bible as the ultimate authority for their faith and life. They could be interested in a wide variety of questions, ranging from the theological to the practical, for example, "Is salvation by faith or by works?", "How should I treat people that have wronged me?", or "How should I pray?". While many of these questions can also be answered through a search engine, the quality of search engine results can often be questionable, and answering with passages taken directly from the Bible is valuable from a Christian perspective.

Contribution. The goal of this research is to investigate neural-based methods for answering biblical questions through verse selection. (1) Since large-scale datasets are needed for efficient learning using neural network methods, our first contribution is the creation of a new dataset, BibleQA. The dataset consists of biblical questions and the corresponding verses, derived from an existing set of questions available on the Internet. (2) Then, for biblical sentence selection, we design three answer selection models based on different neural network architectures.
Each of these models takes as input a question and an answer verse, and outputs a predicted probability of the verse containing the answer to the question. (3) Thirdly, we leverage transfer learning techniques by pre-training the models on a larger QA dataset and provide insight into the effect of domain adaptation in QA tasks. Our experiments also reveal how changing context lengths affects the performance of answer selection, and reveal new insights regarding the various Bible translations.

Methodologies

For our task, we employ three main neural network models for comparison purposes: one using a recurrent neural network (RNN), one using a convolutional neural network (CNN), and another using an adapted Bi-Directional Attention Flow model (BiDAF) first suggested by Seo et al. [23]. The three models all follow the same general architecture, and the subsequent sections will describe the architecture in more detail:
1. Embedding: The input question and answers are first pre-processed and converted to word vectors.
2. Encoding: The embedded sentences are then processed and encoded to obtain a single vector representation that captures the sentence.
3. Answer Selection: Based on the encoded question and answer, select an answer as the predicted output.

Word Embedding

Word embedding captures word context using distributed word vectors. The underlying intuition is that words in similar environments tend to have similar meanings [17]. Here we make use of both GloVe vectors as well as word2vec. word2vec is modelled as a shallow, two-layered neural network which uses stochastic gradient descent and backpropagation to iteratively make a word embedding more similar to those of its neighbouring words. The model successfully reduces the complexity of the non-linear hidden layer and makes it possible to learn high-dimensional word vectors on a significant amount of data. GloVe is an alternative unsupervised learning algorithm to word2vec, which is also used to obtain vector representations for words [20]. GloVe has pre-trained vectors available online, trained on 6 billion tokens from Wikipedia and various news outlets, making it very suitable for training on SQuAD. As the Bible consists of many words and names that are unique to its context, we also trained our own word vectors for the Bible. We used a combination of all four aforementioned English translations for the word vector training, which includes around 3 million words altogether. In the vector training, we used a context window size of 5 and the Continuous Bag-of-Words algorithm to train vectors of dimension 200. The resulting word vectors were able to capture the general semantics of the Bible-specific vocabulary. Below are some of the most similar words for a few selected words, in descending order of similarity, using the derived word vectors:

God: lord, saviour, holiness, mercy, lovingkindness, sworn, redeemer, salvation, jehovah, endureth
sin: trespass, sins, guilt, guilty, transgression, forgiven, sinned, iniquity, forgive, ignorance
david: saul, absalom, joab, abimelech, solomon, abner, jonathan, abraham, achish, samuel

From these lists, we see that the most similar words for 'God' capture many qualities and roles God is seen to have throughout the Bible. The most similar words for 'David' are other names closely related to him: Saul, his primary adversary; Absalom and Solomon, his sons; Joab, his army commander; and Jonathan, his best friend.
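To make the embedding step concrete, here is a minimal sketch of how such vectors could be trained with the gensim library; the corpus file name and the naive whitespace tokenisation are assumptions made for illustration, while the window size, CBOW setting, and dimensionality follow the values stated above.

# Minimal sketch: train CBOW word2vec vectors on the combined Bible text.
# The file name is hypothetical; one verse per line is assumed.
from gensim.models import Word2Vec

with open("bible_4_translations.txt", encoding="utf-8") as f:
    sentences = [line.lower().split() for line in f]

model = Word2Vec(
    sentences,
    vector_size=200,  # vectors of dimension 200
    window=5,         # context window size of 5
    sg=0,             # sg=0 selects the Continuous Bag-of-Words algorithm
    min_count=1,
)

# Nearest neighbours in the embedding space, as in the lists above
# (the corpus was lowercased, so the query word is lowercase too).
print(model.wv.most_similar("god", topn=10))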
The trained word vectors were concatenated with the GloVe vectors in the transfer learning process so that the training on BibleQA would be more meaningful.

Models for the QA System

The baseline model. This model acts as the basis of comparison for all other results. It uses a random function to generate, uniformly at random, an output in the range [0, 1] for each data point. The baseline therefore gives us a model that performs at a level that does not involve any learning, simply assigning a random prediction to each question-answer pair. The baseline is then compared with our models to evaluate the improvement made by the more sophisticated models.

The RNN model. Recurrent networks are designed to model sequences, allowing the user to work with sequences while preserving structural information. They are particularly useful in NLP tasks due to the sequential nature of language. A recurrent network contains loops, where the output of a particular layer is passed back to the same layer as input. This allows information to persist and captures long-term dependencies such as those that appear in sequences. One of the most popular implementations of the RNN is the Long Short-Term Memory (LSTM), which was introduced to mitigate the vanishing gradients problem [12]. As the sequence grows longer, the distance between the current word and the dependent context grows as well. This means that the error gradients at later steps in the sequence diminish quickly during back-propagation and do not reach earlier input signals, hence the gradients "vanish". This makes it very difficult to capture relevant information. LSTM introduces a vector that acts as a memory cell, which preserves gradients over time. Access to the memory cell is controlled by gating components that can be thought of as logical gates. Our RNN model makes use of LSTM layers to produce vector representations of the question and answer phrases. The output is obtained as a probability in [0, 1] that indicates the similarity between the question vector and the answer vector. This is based on the intuition that sentences with closer vectors should be more similar, and therefore the answer should be more relevant to the question. The word embedding layer transforms each word into a word vector. A question is then a sequence of word vectors $x = (x_1, x_2, \ldots, x_t)$ and an answer is another sequence of word vectors $y = (y_1, y_2, \ldots, y_t)$. The encoding procedure applies two LSTMs, one for questions $Q$ and the other for answers $A$: for each example $(x, y)$, it sets for each $t$
$$s_t = (C_t, h_t) = R_{LSTM}(s_{t-1}, x_{t-1}) \quad \text{and} \quad s'_t = (C'_t, h'_t) = R'_{LSTM}(s'_{t-1}, y_{t-1}),$$
where $s_t$ ($s'_t$) is the $t$th state, $C_t$ ($C'_t$) is the memory-cell state, $h_t$ ($h'_t$) is the output state, and $R_{LSTM}$ ($R'_{LSTM}$) is the LSTM network for the question (answer). The output is $m = (Q_e, A_e)$, where $Q_e = (h_1, h_2, \ldots, h_t)$ and $A_e = (h'_1, h'_2, \ldots, h'_t)$. Finally, we concatenate the question and answer vectors, pass them through a final layer which uses the sigmoid activation function $\sigma(x) = \frac{1}{1+e^{-x}}$, and obtain the predicted likelihood of the answer being the correct one for the question. Fig. 1 shows an overview of the layers.
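A minimal Keras sketch of this question/answer matching architecture follows; the vocabulary size, sequence length, and hidden size are placeholder assumptions, and a single shared embedding layer is used for brevity, so this mirrors the structure described above rather than the paper's exact configuration.

# Minimal sketch of the RNN (LSTM) question/answer matching model.
# Sizes are hypothetical; only the overall structure follows the text above.
from tensorflow.keras import layers, Model

VOCAB, SEQ_LEN, EMB_DIM = 20000, 50, 200  # placeholder assumptions

q_in = layers.Input(shape=(SEQ_LEN,), name="question")
a_in = layers.Input(shape=(SEQ_LEN,), name="answer")

embed = layers.Embedding(VOCAB, EMB_DIM)       # word embedding layer
q_enc = layers.LSTM(128)(embed(q_in))          # R_LSTM for the question
a_enc = layers.LSTM(128)(embed(a_in))          # R'_LSTM for the answer

merged = layers.Concatenate()([q_enc, a_enc])  # concatenate Q_e and A_e
prob = layers.Dense(1, activation="sigmoid")(merged)  # sigmoid output in [0, 1]

model = Model(inputs=[q_in, a_in], outputs=prob)
model.compile(optimizer="adagrad", loss="binary_crossentropy")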
The CNN model. CNNs are special feed-forward neural networks that, in addition to fully connected layers, contain convolutional and pooling layers. These specialised layers are useful for finding strong local information present within the input regardless of the position of the signals; e.g., in a QA task, a sentence may contain a key phrase that strongly indicates it as the answer to a question. In NLP, a CNN typically operates over the sequence of word vectors. Our CNN model uses the convolutional and pooling layers to represent the question and answer phrases. We also use a dropout layer to regularise the weights and avoid overfitting. For each question and answer sequence, the convolution layer applies a kernel across the sequence, transforms it using a filter, passes the result through a max-pooling layer to obtain the element-wise maximum, and finally passes it through an output layer which returns a prediction. The convolution and max-pooling layers for each question $x$ and answer $y$ are
$$Q^c_i = f(W^T x_{i:i+k-1} + b) \quad \text{and} \quad A^c_i = f(W^T y_{i:i+k-1} + b),$$
$$Q^v[k] = \max_{1<i<m} Q^c_i[k] \quad \text{and} \quad A^v[k] = \max_{1<i<m} A^c_i[k],$$
where $k$ is the window size, $f$ is the ReLU activation function $f(x) = \max\{0, x\}$, $W$ is the filter vector that performs the linear transformation, and $b$ is the bias parameter of the network. Finally, the output layers are the same as for the RNN model. Fig. 2 shows an overview of the layers.
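The convolution-and-pooling encoder can be sketched in Keras as follows; the filter count, kernel size, and dropout rate are illustrative assumptions, with Conv1D and GlobalMaxPooling1D playing the roles of the convolution and max-over-time steps in the formulas above.

# Minimal sketch of the CNN question/answer matching model.
# Filter count, kernel size, and dropout rate are placeholder assumptions.
from tensorflow.keras import layers, Model

VOCAB, SEQ_LEN, EMB_DIM = 20000, 50, 200  # placeholder assumptions

q_in = layers.Input(shape=(SEQ_LEN,), name="question")
a_in = layers.Input(shape=(SEQ_LEN,), name="answer")

embed = layers.Embedding(VOCAB, EMB_DIM)

def encode(x):
    # Convolution with ReLU (the f(W^T x_{i:i+k-1} + b) step) ...
    h = layers.Conv1D(filters=100, kernel_size=3, activation="relu")(x)
    # ... followed by max-over-time pooling (the max_i Q^c_i[k] step).
    return layers.GlobalMaxPooling1D()(h)

merged = layers.Concatenate()([encode(embed(q_in)), encode(embed(a_in))])
merged = layers.Dropout(0.5)(merged)  # dropout to avoid overfitting
prob = layers.Dense(1, activation="sigmoid")(merged)

model = Model(inputs=[q_in, a_in], outputs=prob)
model.compile(optimizer="adagrad", loss="binary_crossentropy")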
BiDAF Model. The BiDAF model was proposed for the SQuAD dataset for span-level QA tasks [23]. The original model was used to find the start and end indices of the answer to a question within a context paragraph: the question and context paragraphs are first converted to vectors using both word and character embeddings, and then combined to form a phrase embedding using an LSTM. The question and context paragraphs are then combined to produce a set of query-aware vectors for every word in the context. Finally, another LSTM is used to scan the context paragraph, and the output layer produces the start and end indices for the question. For our sentence-level task, we modify the original BiDAF model slightly. We use the candidate sentences as the context in the original model, eliminate the use of character embeddings, and output only the probability of the answer sentence being the correct answer to the question. Fig. 3 shows an overview of the layers. The word and phrasal embedding layers are similar to the RNN model in using LSTMs, and their result contains the matrix representations of a question $q$ and a candidate answer $a$. Following that, Q2C and C2Q are two layers that compute the "attention" for the question and answer, which is essentially the interaction between question and answer. More formally, we use $U_{:i}$ to indicate the $i$th column vector of any matrix $U$. The bidirectional attention is determined by a similarity matrix $S_{j,k}$ between the $j$th answer word and the $k$th question word, defined as $S_{j,k} = \alpha(a_{:j}, q_{:k})$, where $\alpha(a, q)$ is a trainable scalar function that represents the similarity between vectors $a$ and $q$. The Q2C, or query-to-context layer, signifies which answer words have the closest similarity to one of the question words. The C2Q, or context-to-query layer, determines which question words are the most relevant to each answer word. The exact definitions of $S_{j,k}$, Q2C, and C2Q can be found in [23]. The Q2C and C2Q are then concatenated with the answer embedding to output query-aware vector representations of the context words; these take the form of a matrix $G$, whose $j$th column is $G_j = \beta(A_{:j}, \tilde{Q}_{:j}, \tilde{A}_{:j})$, where $\beta$ is a trainable function that combines the three representations. The result is then passed on to one more LSTM layer to output a representation matrix $m$, and finally to the output layer, which predicts the probability of $a$ being the answer to $q$.

Experiments

Experiment Setup

We performed experiments for each of the baseline, RNN, CNN and BiDAF models. For any dataset, we use 70% of the samples for training and 30% for testing. Out of the 70% training data points, 10% are used as the validation set for monitoring overfitting. At each epoch, the model trains on the training set and calculates the loss and accuracy with respect to both the training and validation sets. If the validation loss is much higher than the training loss and the validation accuracy is much lower than the training accuracy, then we can conclude that the model is overfitting to the training set and we will need to modify our model accordingly. We used the GloVe word vectors (dimension $d = 100$) for training the SQuAD models, and both GloVe and word2vec vectors for the BibleQA models, where the word2vec vectors ($d = 200$) are trained on 4 versions of the Bible. The loss function that our models learn to minimise is the binary cross-entropy, defined as
$$L(\Theta) = -\frac{1}{N}\sum_{i=1}^{N} \left( y_i \cdot \log(p_i) + (1 - y_i) \cdot \log(1 - p_i) \right),$$
where $\Theta$ is the set of parameters, $N$ represents the number of training instances, $p_i$ is the probability of class 1, $1 - p_i$ is the probability of class 0, and $y_i \in \{0, 1\}$ is the true label of the $i$th observation. Here, the value of $p_i$ is given by the activation layer using a sigmoid activation: for weight vector $\Theta$ and input vector $x$, $p_i = \frac{1}{1+e^{-\Theta \cdot x}}$. We used the adaptive gradient (AdaGrad) optimizer to train the neural networks, which is a modified stochastic gradient descent with a per-parameter learning rate [8]. The learning rate of a model determines the rate of update between each iteration of backpropagation. AdaGrad allows the learning rate to adapt based on the parameters. It performs larger updates for infrequent parameters and smaller updates for frequent parameters, and often improves convergence in tasks where the data is sparse, such as NLP and image recognition. Let $g_\tau = \nabla L(\Theta)$ be the gradient at iteration $\tau$. The per-parameter update for AdaGrad uses the following formula:
$$\Theta_{\tau+1} = \Theta_\tau - \frac{\eta}{\sqrt{G_{\tau,\tau}}} g_\tau,$$
where $\eta$ is the learning rate and $G_{\tau,\tau} = \sum_{j=1}^{\tau} g_{j,\tau}^2$ produces a scaling factor for the parameter $\Theta_\tau$. This leads to a different learning-rate update for each parameter based on the scaling factor, and the learning process is adaptive. For our experiments, we use two metrics to evaluate performance: (1) the F1 score, a widely used accuracy indicator, defined as $F_1 = 2PR/(P + R)$, where $P$ and $R$ are precision and recall, respectively; and (2) the mean reciprocal rank (MRR), which commonly measures the accuracy of ranked outputs and is applicable here as we are essentially ranking the candidate sentences for each question. MRR is defined as $\frac{1}{n}\sum_{i=1}^{n} \mathrm{rank}_i^{-1}$, where $n$ is the number of questions and $\mathrm{rank}_i$ is the rank of the correct answer to the $i$th question.
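To make the optimiser update and the ranking metric concrete, the following is a small NumPy sketch of a single AdaGrad step and of MRR computed from candidate scores; the epsilon term is a standard numerical-stability addition that is not written out in the formula above, and the function names are ours.

import numpy as np

def adagrad_step(theta, grad, accum, eta=0.001, eps=1e-8):
    """One per-parameter AdaGrad update: theta <- theta - eta/sqrt(G) * g."""
    accum += grad ** 2                            # G accumulates squared gradients
    theta -= eta / (np.sqrt(accum) + eps) * grad  # per-parameter learning rate
    return theta, accum

def mean_reciprocal_rank(scores, correct_idx):
    """MRR = (1/n) * sum(1/rank_i) over questions.

    scores: list of 1-D arrays, one per question (candidate sentence scores).
    correct_idx: index of the correct candidate for each question.
    """
    rr = []
    for s, c in zip(scores, correct_idx):
        # Rank of the correct candidate when sorted by descending score.
        rank = 1 + np.sum(s > s[c])
        rr.append(1.0 / rank)
    return float(np.mean(rr))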
Experiment 1: Transfer Learning Parameter Tuning

Goal and method. The first experiment investigates the effect of transfer learning on model accuracy. We pre-train the model on the SQuAD dataset and compare that with training only on BibleQA. For each model, 1) we first run the model on BibleQA to obtain a set of results; 2) then, we train the same model on SQuAD to obtain the trained weights; 3) finally, we run the model on BibleQA once again using the trained weights from SQuAD and fine-tune the weights once more. We then compare the results of each model to see whether the transferred weights yield any improvement. We tune parameters such as the learning rate and the number of epochs to find the best-performing model. We find that a learning rate of $\eta = 0.001$ worked best for the RNN and BiDAF models, and $\eta = 0.0001$ for the CNN model. We use an early-stopping mechanism for determining the optimal number of epochs, which monitors the validation loss at each epoch and stops the training once the model stops improving. We set the patience to 10, which means that the model will wait for 10 epochs before terminating the training. Optimal results are achieved with 20 to 30 epochs.

Results and analysis. Table 1 contains the results we obtained before and after the weight transfer. We can see that using the transferred weights improves the F1 results by 0.08 to 0.09 (around a 20%-30% improvement). This shows that, just as we hypothesised, pre-training had a positive effect on the training accuracy. However, while the F1 score increases with transferred weights, the MRR decreased by 0.05 and 0.06 for the CNN and BiDAF models respectively, with only the RNN model increasing, by 0.05. This was surprising, as it is often assumed that different evaluation measures are correlated, so that a higher F1 would also result in a higher MRR. Considering what the MRR measures, it seems that with the transferred weights these models choose the correct answer as the top-ranking answer more often (higher F1), but in the cases where the top prediction is wrong, the correct answer is ranked lower on average (lower MRR). This is an interesting phenomenon and will be worth looking into in the future. Out of the three models, the RNN model performs the best overall, with the highest F1 score both before and after weight transfer, and the highest MRR after weight transfer.

Experiment 2: Answer Context Length

Goal and method. The second experiment examines how the prediction results vary with the length of the answer context, that is, the number of candidate sentences per question. During the dataset construction phase, after we identify the verse that corresponds to the correct answer to a question, we include a varying number of context verses surrounding the correct answer. We created three types of dataset this way: BibleQA-3, BibleQA-10, and BibleQA-chapter. For BibleQA-3 and BibleQA-10, we included 3 and 10 verses surrounding the true verse, respectively, as candidates. For the chapter version, we included all verses from the same chapter, which usually ranges from 10 to 60 verses. Each RNN, CNN, and BiDAF model was tuned on BibleQA-3 for the best accuracy, and subsequently the same model was used for prediction on BibleQA-10 and BibleQA-chapter. For all datasets, we included all four Bible translations: KJV, ASV, YLT and WEB.
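The construction of the candidate sets just described might look like the following sketch; the verse-list representation and the exact window boundaries (how the k surrounding verses are split around the true verse) are assumptions made for illustration.

# Minimal sketch of building BibleQA-k candidate sets around the true verse.
# The (chapter_verses, true_idx) representation is a hypothetical layout.
def candidate_verses(chapter_verses, true_idx, k=None):
    """Return the candidate answer verses for one question.

    chapter_verses: list of verse strings for the chapter.
    true_idx: index of the verse containing the correct answer.
    k: number of surrounding verses to include (None = whole chapter).
    """
    if k is None:                       # BibleQA-chapter: use every verse
        return list(chapter_verses)
    half = k // 2                       # split the window around the true verse
    lo = max(0, true_idx - half)
    hi = min(len(chapter_verses), true_idx + half + 1)
    return chapter_verses[lo:hi]

# BibleQA-3, BibleQA-10, and BibleQA-chapter variants:
# candidate_verses(verses, i, k=3)
# candidate_verses(verses, i, k=10)
# candidate_verses(verses, i, k=None)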
Results and analysis. We used the tuned models from the last experiment for training on each dataset, which allows us to compare the effect that changing the context length has on model accuracy. Table 2 describes the results for the three datasets with different answer context lengths. Across the three datasets, the CNN model generally performs the best on the F1 measure, while BiDAF generally has the best MRR score. This echoes the interesting phenomenon mentioned above of certain models having a higher F1 score but a lower MRR than others. Once again, more investigation is needed. For the shortest context with three verses, all the models improve significantly on the baseline, by 0.13 to 0.19 F1, with the best results from the RNN model on both F1 and MRR. As the context length increases to 10 verses, model accuracy decreases significantly, improving only 0.03 to 0.05 on the F1 score over the baseline model. The RNN drops in performance compared to the others, CNN rises as the model with the highest F1, and BiDAF becomes the model with the highest MRR. Finally, for the longest context length, using the entire chapter, the models perform at around the same level as the baseline model, if not worse. This shows that the models cannot yet be used for longer contexts. A larger dataset and longer training time could be used to train a more robust model that can deal with longer context lengths in the future.

Experiment 3: Translation Version

Goal and method. The third experiment focuses on the differences among various English translations of the Bible. As mentioned above, we used 4 English translations in training the word vectors and creating the dataset. Each of these translations varies in its translation philosophy, as well as in the modernity of its language. YLT is the most literal English translation of the original languages. The other three translations are roughly the same in their translation philosophy: they are not as literal as YLT, but also strive to truthfully capture the original meaning. Ordering them by modernity, KJV is the oldest translation, published 400 years ago, while WEB is the most recent, published in 2000. The literalness of a translation could affect the sentence representations, as a literal translation may choose words that are a more direct translation of the intended meaning. The modernity of the language could also affect the word vectors, since older translations are likely to contain obsolete words that are no longer used, and therefore the word vectors may not capture an accurate representation. We want to compare whether the model predictions change based on the level of literalness of the translation, or on the modernity of the language. For this experiment, we create four further datasets, each using only one particular translation. We use the format of BibleQA-10 and select 10 candidate verses for each question. We use the same three models and compare the results for each translation individually.

Results and analysis. Table 3 compares the results for each translation within each model. Looking at both F1 and MRR scores, WEB achieves the best performance, having the top MRR result for the RNN and the top F1 score for CNN/BiDAF. This suggests that using a translation with more modern language can be beneficial for the QA process; it could be the case that the word vectors were able to capture more accurate meanings. The KJV translation follows the WEB translation, with the highest performance for the RNN model on the F1 measure, as well as for the CNN model under MRR.
The high performance was surprising, as we expected that for a translation such as the KJV, the archaic language used could be a deterrent to learning useful word vectors. It turns out that, despite its choice of English words, the KJV may still perform relatively well in NLP tasks. The YLT has the highest MRR result for the BiDAF model. However, the variance is not large enough for the result to be significant, and YLT does not perform particularly well in any of the other models. From this, we conclude that the translation philosophy and the level of literalness do not necessarily play a dominant role in training a QA system.

In this paper, we leverage transfer learning techniques to study domain adaptation in QA tasks using the BibleQA dataset. Transferring the weights from the much larger SQuAD dataset gives a noticeable improvement in model accuracy. This shows the potential of using transferred weights for this particular task. We also find that the RNN was the best-performing model, while BiDAF did not perform as well as expected despite being the most complicated model. This suggests that simpler architectures can still sometimes achieve relatively good results. When increasing the number of candidate sentences to choose from as answers to questions, unsurprisingly, model performance deteriorates. Comparing different Bible translations that vary in their degree of literalness as well as the modernity of their language, we find that the World English Bible gives the best results, followed by the King James Version. The good performance of the WEB may be attributed to the modernity of its language. At the same time, although the KJV was written centuries ago and uses different words than we do now, it is still able to produce useful results. Furthermore, Young's Literal Translation, the most literal translation, does not perform particularly well. We conclude that the translation philosophy, and how literal a translation is, does not necessarily improve the results. Our system has certain limitations, and here we suggest some potential improvements and directions for future research. The implementation of the BiDAF model was entirely dependent on the DeepQA library, which has very recently been deprecated. The researchers behind DeepQA have ported the library to PyTorch, which they have found to be better for NLP research. In the future, it could be worthwhile to consider an implementation using PyTorch instead of Keras, and to use reliable and stable software frameworks. The sentence encoding methods used in our models are still relatively simple, in particular in the RNN and the CNN models. The main issue with our current method of encoding is that it mostly takes into consideration the semantics of the sentences, and not so much the syntax. While this simple method has been shown to work relatively well, to achieve better accuracy we could consider incorporating an encoding scheme which considers both the syntax and the semantics, such as the treeLSTM [24], or an ensemble of different encoding schemes. The domain adaptation method used in our system was also a simple approach which only involves pre-training and transferring the weights. More exploration into improving the transfer learning method could be beneficial, for example, transferring only the weights of certain layers or tuning them at different learning rates.
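As one concrete illustration of that direction, the sketch below shows how selective layer transfer could be done in Keras; the model and layer names are hypothetical, and this is only a sketch of the idea, not the paper's implementation.

# Hypothetical sketch: transfer only some layers' weights and freeze them,
# fine-tuning the remaining layers on BibleQA.
def transfer_selected_layers(source_model, target_model, layer_names):
    """Copy weights for the named layers and freeze them in the target,
    so fine-tuning only updates the remaining layers."""
    for name in layer_names:
        weights = source_model.get_layer(name).get_weights()
        target_layer = target_model.get_layer(name)
        target_layer.set_weights(weights)
        target_layer.trainable = False  # freeze the transferred layer

# Hypothetical usage: reuse only the embedding and first LSTM weights
# from the SQuAD-trained model, then recompile before fine-tuning:
# transfer_selected_layers(squad_model, bible_model, ["embedding", "lstm"])
# bible_model.compile(optimizer="adagrad", loss="binary_crossentropy")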
As transfer learning becomes more widely used in NLP research, we expect more effective methods to emerge that can improve the system. The accuracy of the model is highly dependent on the quality of the dataset. The BibleQA dataset we created had only 886 distinct questions. Extending the size of the dataset is also a worthwhile future task, which can be done by manually adding more questions, combining with other sources of Bible questions, or potentially leveraging techniques that automatically generate questions from a text.
4,795
1810.11624
2898490764
Time series clustering is the process of grouping time series with respect to their similarity or characteristics. Previous approaches usually combine a specific distance measure for time series and a standard clustering method. However, these approaches do not take the similarity of the different subsequences of each time series into account, which can be used to better compare the time series objects of the dataset. In this paper, we propose a novel technique of time series clustering based on two clustering stages. In a first step, a least squares polynomial segmentation procedure is applied to each time series, which is based on a growing window technique that returns different-length segments. Then, all the segments are projected into the same dimensional space, based on the coefficients of the model that approximates the segment and a set of statistical features. After mapping, a first hierarchical clustering phase is applied to all mapped segments, returning groups of segments for each time series. These clusters are used to represent all time series in the same dimensional space, after defining another specific mapping process. In a second and final clustering stage, all the time series objects are grouped. We consider internal clustering quality to automatically adjust the main parameter of the algorithm, which is an error threshold for the segmentation. The results obtained on 84 datasets from the UCR Time Series Classification Archive have been compared against two state-of-the-art methods, showing that the performance of this methodology is very promising.
Many of the proposals for time series clustering are based on the combination of a distance measure and a clustering algorithm. First, we will analyse the most important distance measures proposed for time series comparison, and then we will introduce the clustering methods that can be applied based on them. Further information about time series clustering can be found in @cite_37 or @cite_56 .
{ "abstract": [ "Time series clustering has been shown effective in providing useful information in various domains. There seems to be an increased interest in time series clustering as part of the effort in temporal data mining research. To provide an overview, this paper surveys and summarizes previous works that investigated the clustering of time series data in various application domains. The basics of time series clustering are presented, including general-purpose clustering algorithms commonly used in time series clustering studies, the criteria for evaluating the performance of the clustering results, and the measures to determine the similarity dissimilarity between two time series being compared, either in the forms of raw data, extracted features, or some model parameters. The past researchs are organized into three groups depending upon whether they work directly with the raw data either in the time or frequency domain, indirectly with features extracted from the raw data, or indirectly with models built from the raw data. The uniqueness and limitation of previous research are discussed and several possible topics for future research are identified. Moreover, the areas that time series clustering have been applied to are also summarized, including the sources of data used. It is hoped that this review will serve as the steppingstone for those interested in advancing this area of research.", "Clustering is a solution for classifying enormous data when there is not any early knowledge about classes. With emerging new concepts like cloud computing and big data and their vast applications in recent years, research works have been increased on unsupervised solutions like clustering algorithms to extract knowledge from this avalanche of data. Clustering time-series data has been used in diverse scientific areas to discover patterns which empower data analysts to extract valuable information from complex and massive datasets. In case of huge datasets, using supervised classification solutions is almost impossible, while clustering can solve this problem using un-supervised approaches. In this research work, the focus is on time-series data, which is one of the popular data types in clustering problems and is broadly used from gene expression data in biology to stock market analysis in finance. This review will expose four main components of time-series clustering and is aimed to represent an updated investigation on the trend of improvements in efficiency, quality and complexity of clustering time-series approaches during the last decade and enlighten new paths for future works. Anatomy of time-series clustering is revealed by introducing its 4 main component.Research works in each of the four main components are reviewed in detail and compared.Analysis of research works published in the last decade.Enlighten new paths for future works for time-series clustering and its components." ], "cite_N": [ "@cite_37", "@cite_56" ], "mid": [ "2097747115", "1894414046" ] }
Time series clustering based on the characterisation of segment typologies
Index Terms-Time series clustering, data mining, segmentation, feature extraction

Time series are an important class of temporal data objects collected chronologically [1]. Given that they tend to be high dimensional, directly dealing with them in their raw format is very expensive in terms of processing and storage cost, which makes them difficult to analyse. However, time series have applications in many different fields of science, engineering, economics, finance, etc. In recent years, there has been an explosion of interest in mining time series databases. Clustering is one of these data mining techniques, where similar data are organized into related or homogeneous groups without specific knowledge of the group definitions [2]. Usually, clustering is used as a pre-processing step for other data mining tasks. Time series clustering consists in grouping time series. There are several recent review papers dealing with time series clustering [3], [4], [5]. It can be used as a preprocessing step for anomaly detection [6], for recognizing dynamic changes in the time series [7], for prediction [8] and for classification [9]. For example, the application of these techniques can be used to discover common patterns preceding important paleoclimate events [10] or for mining gene expression patterns [11]. Time series clustering can be approached by considering specific distance measures for time series combined with standard clustering techniques [12], [4].
Some of these metrics are designed for equal-length time series, such as the standard Euclidean distance, which is applied to time series in [13], while others, such as Dynamic Time Warping (DTW) [14], [15], can be used for time series of different sizes. There have been many attempts to obtain better time series distance metrics as extensions of DTW [16], [17], [18], [19]. Moreover, apart from adapting distance measures, some authors propose specific adaptations of the clustering algorithm to deal with the special characteristics of time series [19]. On the other hand, time series segmentation consists in cutting the series at some specific points, trying to achieve two different objectives: (1) dividing the time series into segments as a procedure for discovering useful patterns (homogeneous segments) [20], [21], [22], [10], or (2) approximating the time series with a set of simple models for each segment without losing too much information [23], [24], [25], [26]. These works on time series segmentation open a new perspective for time series clustering, given that previous time series clustering proposals only search for similarities between the different time series but do not exploit the similarities which can be found in the subsequences of each time series. In this paper, we propose a novel clustering methodology, which firstly applies time series segmentation via a very fast online polynomial approximation method. Then, unequal-length segments are projected into feature vectors of equal length, in order to reduce the dimensionality of the original data and obtain the same length for each mapped segment. A first clustering procedure is applied to group the segments of each time series, in order to recognise segments with similar behaviour. Using the results of these clustering processes, and applying a new mapping stage, a second and final clustering process groups the different time series of the dataset. The proposed method is referred to as the two-stage statistical segmentation-clustering time series procedure (TS3C). In this way, the method is able to summarise the types of segments that can appear in each individual time series and exploit them to increase the quality of the clustering process. For adjusting the value of the main parameter of TS3C (which is an error threshold for the segmentation), internal clustering criteria are used, where two different strategies are proposed (considering one single criterion or using a majority voting procedure with a variety of different criteria). The following advantages can be attributed to TS3C:
• It is able to exploit the similarities found in the segments of each individual time series to improve the final clustering quality.
• It is based on the lowest error approximation of these segments for a particular dataset, allowing the extraction of robust coefficients representing the trend of the segments and their statistical features.
• It is domain-independent, not relying on any special characteristic of the datasets considered.
• The formulation is based on two different mapping processes, where the final clustering computational cost does not depend on the size (original number of points) of the time series, but on the number of clusters used to represent it.
• The parameter of TS3C is automatically adjusted based on internal criteria.
The remainder of the paper is organized as follows. Section I summarises the background of time series clustering and the motivation for our clustering approach. Section II describes the algorithm in detail.
Section III presents the experimental results using benchmark time series datasets to show the suitability of the proposed method. Finally, Section IV concludes the paper and outlines some directions for future research.

A. Time series clustering

There are many works proposed for time series clustering, although their objectives can be very different. Indeed, time series clustering can be classified into three categories [4]:
• Whole time series clustering defines each time series as a discrete object and clusters a set of time series by measuring their similarity and applying a conventional clustering algorithm.
• Subsequence clustering is considered as the clustering of segments obtained from a time series segmentation algorithm. One of its main advantages is that it can discover patterns within each time series.
• Time point clustering combines the temporal proximity of time points with the similarity between their corresponding values.
We focus on whole time series clustering, which can be applied in three different ways [4]:
• Shape-based approach: This method works with the raw time series data, matching, as well as possible, the shapes of the different time series. An appropriate distance measure has to be used, specifically adapted for time series. Then, a conventional clustering algorithm is applied. An example of this approach is that proposed by Paparrizos et al. [27], which uses a normalized version of the cross-correlation measure (in order to consider the time series shapes) and a method to compute cluster centroids based on the properties of this distance. Policker et al. [28] presented a model and a set of algorithms for estimating the parameters of a non-stationary time series; this model uses a time-varying mixture of stationary sources, similar to hidden Markov models (HMMs). Also, Asadi et al. [29] proposed a new method based on HMM ensembles, addressing the problem HMM-based methods have in separating models of distinct classes.
• Feature-based approach: In this case, time series are transformed into a set of statistical characteristics, where the length of this vector is smaller than that of the original time series. Each time series is converted into a feature vector of the same length, a standard distance measure is calculated and a clustering algorithm is applied. An example of this approach was presented by Räsänen et al. [30], based on an efficient computational method for statistical feature-based clustering. Möller-Levet et al. [31] developed a fuzzy clustering algorithm based on the short time series distance (STS), although this method is highly sensitive to scale. Hautamaki et al. [32] proposed clustering raw time series using the dynamic time warping (DTW) distance for hierarchical and partitional clustering algorithms; the problem of DTW is that it can be sensitive to noise.
• Model-based approach: Raw time series are converted into a set of model parameters, followed by a model distance measurement and a classic clustering algorithm. McDowell et al. [33] presented a model-based method, the Dirichlet process Gaussian process mixture model (DPGP), which jointly models the cluster number with a Dirichlet process and temporal dependencies with Gaussian processes, demonstrating its accuracy on simulated gene expression datasets. Xiong et al. [34] used a model consisting of mixtures of autoregressive moving average (ARMA) models. This method involves a difficult parameter initialization for the expectation maximization (EM) algorithm.
In general, model-based approaches suffer from scalability issues [35]. Yang et al. [36] presented an unsupervised ensemble learning approach to time series clustering using a combination of RPCL (rival penalized competitive learning) with other representations. Many of the proposals for time series clustering are based on the combination of a distance measure and a clustering algorithm. First, we will analyse the most important distance measures proposed for time series comparison, and then we will introduce the clustering methods that can be applied based on them.

1) Distance measures for time series: Two of the most important distance metrics for time series comparison are the Euclidean distance (ED) [13] and dynamic time warping (DTW) [14], [15]. The first one, ED, compares two time series, $X = \{x_t\}_{t=1}^{N}$ and $Y = \{y_t\}_{t=1}^{N}$, of length $N$ as follows:

$$ED(X, Y) = \sqrt{\sum_{t=1}^{N} (x_t - y_t)^2}. \quad (1)$$

As can be seen, ED forces both series to have the same length. In contrast, DTW follows the main idea of ED, but applies a local non-linear alignment. This alignment is achieved by deriving a matrix $M$ with the ED between any two points of $X$ and $Y$. Then, a warping path, $w = \{w_1, w_2, \ldots, w_r\}$, is calculated from the matrix of elements $M$. By using dynamic programming [37], the warping path $w$ can be computed on matrix $M$ such that the following condition is satisfied [14]:

$$DTW(X, Y) = \min \sum_{i=1}^{r} w_i. \quad (2)$$

A popular alternative, widely applied, is to constrain the warping path so that it only visits a low number of cells of matrix $M$ [16]. Recently, Wang et al. [12] evaluated 9 distance measures and demonstrated that DTW is the most accurate distance measure among those considered, while ED is the most efficient one. Moreover, new distance measures have arisen in recent years. Łuczak et al. [18] constructed a new parametric distance function, combining DTW and the derivative DTW distance ($D_{DTW}$) [38] (which is computed as the DTW distance considering the derivatives of the time series), where a single real number parameter, $\alpha$, controls the contribution of each of the two measures to the total value of the combined distance. This distance between time series $X$ and $Y$ is defined as follows:

$$DD_{DTW}(X, Y) = (1 - \alpha)\,DTW(X, Y) + \alpha\,D_{DTW}(X, Y), \quad (3)$$

where $\alpha \in [0, 1]$ is a parameter selected by considering the best value for an internal evaluation measure known as inter-group variance ($-V$). This novel metric is shown to outperform the results obtained by $DTW$ and $D_{DTW}$, because it has the advantages of both. Another state-of-the-art distance measure is based on invariance to the scale and translation of the time series and was proposed by Yang et al. [19]. This distance between time series $X$ and $Y$ is defined as follows:

$$ISDist(X, Y) = \min_{\alpha, q} \frac{\lVert X - \alpha Y_{(q)} \rVert}{\lVert X \rVert}, \quad (4)$$

where $Y_{(q)}$ is the time series shifted $q$ time units, $\lVert \cdot \rVert$ is the $l_2$ norm, and $\alpha$ is the scaling coefficient, which can be adjusted to its optimal value by setting the gradient to zero. A sketch of the two basic distances, ED and DTW, is given below.
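The following is a minimal illustrative sketch (our own code, not the implementations evaluated in [12]) of ED (Eq. (1)) and of DTW (Eq. (2)), the latter computed by dynamic programming over the matrix of pointwise squared distances:

```python
import numpy as np

def euclidean_distance(x, y):
    # Eq. (1): requires both series to have the same length N
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sqrt(np.sum((x - y) ** 2))

def dtw_distance(x, y):
    # Eq. (2): dynamic programming over the pointwise distance matrix M
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            # extend the best warping path by a match, insertion or deletion
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])
```

The quadratic cost of filling the full matrix is what motivates the constrained warping paths mentioned above [16].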
2) Clustering algorithms: Clustering is a field of data mining based on discovering groups of objects without any form of supervision. Among the most used methodologies, hierarchical clustering [39] is based on an agglomerative or a divisive algorithm. The agglomerative approach starts by considering each element as a single cluster, and, at each iteration, the most similar pair of clusters is merged. Conversely, the divisive algorithm starts by including all elements in a single cluster, and, at each iteration, clusters are divided into smaller subgroups. On the other hand, partitional clustering [39] divides the data into $k$ clusters, where each cluster contains at least one element of the dataset. The idea behind this type of clustering is to minimize the average distance of the elements to the cluster centre (also called prototype). Depending on the prototype, there are different algorithms: (1) $k$-means [40] uses centroids, i.e. the average of the cluster's objects, which does not have to be an object belonging to the dataset, and (2) $k$-medoids [32], [41] uses an object of the cluster as the prototype. There are also some specific proposals for time series clustering. For example, Wang et al. [42] proposed a method for clustering time series based on their structural characteristics, introducing the following set of features: trend, seasonality, serial correlation, chaos, non-linearity and self-similarity.

B. Clustering evaluation measures

Evaluating the extracted clusters is not a trivial task and has been extensively researched [43]. In this paper, we focus on numerical measures, which are applied to judge various aspects of cluster validity [44]. Different clustering algorithms obtain different clusters and different clustering structures, so evaluating clustering results is quite important, in order to assess clustering structures objectively and quantitatively. There are two types of testing criteria [45]: external criteria and internal criteria. External criteria use class labels (also known as ground truth) for evaluating the assigned labels. Note that the ground truth is not used during the clustering algorithm. On the other hand, internal criteria evaluate the goodness of a clustering structure without respect to external information.

1) Internal metrics: Among the different internal criteria, the most important ones are [46]:

• Sum of squared error (SSE): This measure evaluates cluster cohesion as the averaged squared distance of the elements to their cluster centroids:

$$SSE = \frac{1}{T} \sum_{i=1}^{k} \sum_{x \in C_i} ED(x, \overline{C}_i)^2. \quad (5)$$

• Normalised sum of squared error (NSSE): This measure looks for compact and well-separated groups, relating the intra-cluster dispersion to the inter-cluster (centroid) distances. This can be done by considering the following expression:

$$NSSE = \frac{\frac{1}{T} \sum_{i=1}^{k} \sum_{x \in C_i} ED(x, \overline{C}_i)^2}{\frac{1}{T-1} \sum_{i=1}^{k} \sum_{j=i+1}^{k} ED(\overline{C}_i, \overline{C}_j)}. \quad (6)$$

• Caliński and Harabasz index (CH) [47]: This index is defined as the ratio between the dispersion between clusters and the dispersion within clusters:

$$CH = \frac{\mathrm{Tr}(S_B) \cdot (T - k)}{\mathrm{Tr}(S_W) \cdot (k - 1)}, \quad (7)$$

where $T$ is the number of time series and $k$ is the number of clusters. Moreover, $\mathrm{Tr}(S_B)$ and $\mathrm{Tr}(S_W)$ are given by:

$$\mathrm{Tr}(S_B) = \sum_{i=1}^{k} T_{C_i} \lVert \overline{C}_i - \overline{Y} \rVert^2, \quad (8)$$

$$\mathrm{Tr}(S_W) = \sum_{i=1}^{k} \sum_{y \in C_i} \lVert y - \overline{C}_i \rVert^2, \quad (9)$$

where $T_{C_i}$ is the number of time series that belong to cluster $C_i$, $\overline{C}_i$ is the centroid of cluster $C_i$, and $\overline{Y}$ is the mean of all the time series.

• Silhouette index (SI) [48]: This measure combines both cohesion and separation, being based on the intra-cluster ($a(x, C_i)$) and inter-cluster ($b(x, C_i)$) distances, respectively. These distances are given as follows:

$$a(x, C_i) = \frac{1}{T_{C_i}} \sum_{y \in C_i} ED(x, y), \quad (10)$$

$$b(x, C_i) = \min_{C_l,\, l \neq i} \left\{ \frac{1}{T_{C_l}} \sum_{y \in C_l} ED(x, y) \right\}, \quad (11)$$

where $ED(x, y)$ is the Euclidean distance between time series $x$ and $y$, as defined before. Finally, the SI index is defined as:

$$SI = \frac{1}{T} \sum_{i=1}^{k} \sum_{x \in C_i} \frac{b(x, C_i) - a(x, C_i)}{\max(a(x, C_i), b(x, C_i))}. \quad (12)$$

• Davies-Bouldin index (DB) [49]: This measure searches for compact clusters whose centroids are far away from each other.
This index is defined as:

$$DB = \frac{1}{k} \sum_{i=1}^{k} \max_{j \neq i} \frac{\alpha_i + \alpha_j}{ED(\overline{C}_i, \overline{C}_j)}, \quad (13)$$

where $\alpha_i$ is the average distance of all the elements in cluster $C_i$ to its centroid $\overline{C}_i$, and $ED(\overline{C}_i, \overline{C}_j)$ is the Euclidean distance between the centroids $\overline{C}_i$ and $\overline{C}_j$.

• Dunn index (DU) [50]: The Dunn index rewards compact and well-separated clusters. The Dunn index for $k$ clusters $C_i$, with $i = 1, \ldots, k$, is defined as:

$$DU = \min_{i \in \{1, \ldots, k\}} \left\{ \min_{j \in \{i+1, \ldots, k\}} \frac{\delta(C_i, C_j)}{M} \right\}, \quad (14)$$

$$M = \max_{m \in \{1, \ldots, k\}} \mathrm{diam}(C_m), \quad (15)$$

where $\delta(C_i, C_j)$ is the dissimilarity between clusters $C_i$ and $C_j$, and $\mathrm{diam}(C_m)$ is the diameter of cluster $C_m$, which are given as follows:

$$\delta(C_i, C_j) = \min_{x \in C_i,\, y \in C_j} \lVert x - y \rVert, \quad (16)$$

$$\mathrm{diam}(C_m) = \max_{x, y \in C_m} \lVert x - y \rVert. \quad (17)$$

The Dunn index is very sensitive to noise, and different variants have been considered. We chose the three variants that obtained the best results in [46], where they are referred to as GD33, GD43 and GD53. These variants use the following definitions of $\delta(C_i, C_j)$, respectively:

$$\delta(C_i, C_j) = \frac{1}{N_{C_i} N_{C_j}} \sum_{x \in C_i} \sum_{y \in C_j} ED(x, y), \quad (18)$$

$$\delta(C_i, C_j) = ED(\overline{C}_i, \overline{C}_j), \quad (19)$$

$$\delta(C_i, C_j) = \frac{1}{N_{C_i} + N_{C_j}} \left( \sum_{x \in C_i} ED(x, \overline{C}_i) + \sum_{y \in C_j} ED(y, \overline{C}_j) \right). \quad (20)$$

For the last variant (GD53), a new definition of $\mathrm{diam}(C_m)$ is included:

$$\mathrm{diam}(C_m) = \frac{2}{N_{C_m}} \sum_{x \in C_m} d^*_{ps}(x, \overline{C}_m), \quad (21)$$

where $d^*_{ps}(x, \overline{C}_m)$ is the point symmetry-distance between the object $x$ and the cluster $C_m$.

• COP index (COP): This index uses the distance from the points to their cluster centroids and the furthest neighbour distance. The equation is the following:

$$COP = \frac{1}{T} \sum_{i=1}^{k} \frac{\sum_{y \in C_i} ED(y, \overline{C}_i)}{N_{C_i} \cdot \min_{x \notin C_i} \max_{y \in C_i} ED(x, y)}. \quad (22)$$

2) External metrics: On the other hand, external indices measure the similarity between the cluster assignment and the ground truth, which has to be given as a form of evaluation but should not be used during the clustering. There are many such metrics in the literature [51], although the most widely used is the Rand index (RI) [52]. This measure penalizes false positive and false negative decisions during clustering. RI is given as:

$$RI = \frac{a + b}{a + b + c + d}, \quad (23)$$

where $a$ is the number of pairs of time series that are assigned to the same cluster and belong to the same class (according to the ground truth), $b$ is the number of pairs that are assigned to different clusters and belong to different classes, $c$ is the number of pairs that are assigned to different clusters but belong to the same class, and $d$ is the number of pairs that are assigned to the same cluster but belong to different classes. A sketch of this index is given below.
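The following minimal sketch (illustrative code under the pair-counting definition above, not tied to any particular library) computes the Rand index of Eq. (23) by iterating over all pairs of series:

```python
from itertools import combinations

def rand_index(labels_pred, labels_true):
    # a and b: pairs on which the clustering agrees with the ground truth;
    # c and d: pairs on which they disagree (the denominator counts all pairs)
    agree = total = 0
    for i, j in combinations(range(len(labels_true)), 2):
        same_cluster = labels_pred[i] == labels_pred[j]
        same_class = labels_true[i] == labels_true[j]
        agree += (same_cluster == same_class)
        total += 1
    return agree / total

# e.g. rand_index([0, 0, 1, 1], [0, 0, 1, 2]) -> 5/6
```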
C. Time series segmentation

One of the steps of our proposal is based on dividing each time series into a sequence of segments. This is known as time series segmentation, which consists in cutting the time series at some specific points, trying to achieve different objectives, where, as mentioned before, the two main points of view are:
• Discovering similar patterns: The main objective is the discovery and characterization of important events in the time series, by obtaining similar segments. The methods of Chung et al. [20], Tseng et al. [21] and Nikolaou et al. [10] are all based on evolutionary algorithms, given the large size of the search space when deciding the cut points.
• Approximating the time series by a set of simple models, e.g. linear interpolation or polynomial regression: These methods could also be considered as representation methods. The main goal of these methods is to summarize a single time series, in order to reduce the difficulty of processing, analysing or exploring large time series, approximating the segments obtained by linear models. Keogh et al. [26] proposed some methods which use linear interpolations between the cut points. Oliver et al. [23], [24] developed a method that detects points with high variation and then replaces each segment with the corresponding approximation. Finally, the method proposed by Fuchs et al. [25] is a growing window procedure (known as SwiftSeg), which returns unequal-length segments based on an online method. SwiftSeg is very fast, simultaneously obtaining a segmentation of the time series and the coefficients of the polynomial least squares approximation, with a computational cost that depends only on the degree of the polynomial instead of the window length. When compared to many other segmentation methods, SwiftSeg is shown to be very accurate while involving a low computational cost [25].

II. A TWO-STAGE STATISTICAL SEGMENTATION-CLUSTERING TIME SERIES PROCEDURE (TS3C)

Given a time series clustering dataset, $D = \{Y_i\}_{i=1}^{T}$, where $Y_i = \{y_t\}_{t=1}^{N_i}$ is a time series of length $N_i$, the objective of the proposed algorithm is to organize the time series into $L$ groups, $G = \{G_1, G_2, \ldots, G_L\}$, optimizing the clustering quality, where $\forall G_i \neq G_j$, $G_i \cap G_j = \emptyset$, and $\bigcup_{l=1}^{L} G_l = G$. The algorithm is based on two well-identified stages. The first stage is applied individually to each time series and acts as a dimensionality reduction. It consists of a segmentation procedure and a clustering of segments, discovering common patterns of each time series. The second clustering stage is applied to the mapped time series to discover the groups. The main steps of the algorithm are summarized in Fig. 1.

A. First stage

The first stage of TS3C consists of a time series segmentation, the extraction of statistical features of each segment, and the clustering of the segments of each time series. The steps of the first stage can be checked in Figure 2.

1) Time series segmentation: In general, segmentation procedures are used for discovering cut points in the time series to achieve different objectives. For a given time series of length $N_i$, the segmentation consists in finding $m$ segments defined by $t = \{t_s\}_{s=1}^{m-1}$ cut points. In this way, the set of segments $S = \{s_1, s_2, \ldots, s_m\}$ is formed by: $s_1 = \{y_1, \ldots, y_{t_1}\}$, $s_2 = \{y_{t_1}, \ldots, y_{t_2}\}$, ..., $s_m = \{y_{t_{m-1}}, \ldots, y_{N_i}\}$. Specifically, in this paper, we apply SwiftSeg, a growing window procedure proposed in [25]. The algorithm iteratively introduces points of the time series into a growing window and simultaneously updates the corresponding least-squares polynomial approximation of the segment and its error. The window grows until an error threshold is exceeded. When this happens, a cut point ($t_s$) is included and the segment is finished. The process is repeated until reaching the end of the time series. We consider the following error function (standard error of prediction, SEP):

$$SEP_s = \frac{\sqrt{SSE_s}}{|\overline{Y}_s|}, \quad (24)$$

where $SSE_s$ stands for the sum of squared errors of segment $s$, and $|\overline{Y}_s|$ is the absolute average value of segment $s$. $SSE_s$ and $\overline{Y}_s$ are defined as:

$$SSE_s = \sum_{i=t_{s-1}}^{t_s} (\hat{y}_i - y_i)^2, \quad (25)$$

$$\overline{Y}_s = \frac{1}{t_s - t_{s-1} + 1} \sum_{i=t_{s-1}}^{t_s} y_i, \quad (26)$$

where $y_i$ is the time series value at time $i$, and $\hat{y}_i$ is its corresponding least-squares polynomial approximation. A sketch of this growing-window procedure is given below.
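The following is a minimal sketch of the growing-window idea driven by the SEP threshold of Eqs. (24)-(26); for simplicity it re-fits the polynomial at every step with numpy, whereas SwiftSeg [25] updates the least-squares solution incrementally:

```python
import numpy as np

def segment_series(y, sep_max, degree=1, min_len=3):
    # returns the cut points t_s; assumes segments with non-zero mean (Eq. 24)
    y = np.asarray(y, float)
    cuts, start = [], 0
    end = start + min_len
    while end <= len(y):
        t = np.arange(start, end)
        coeffs = np.polyfit(t, y[start:end], degree)          # least-squares fit
        sse = np.sum((np.polyval(coeffs, t) - y[start:end]) ** 2)
        sep = np.sqrt(sse) / abs(np.mean(y[start:end]))       # Eq. (24)
        if sep > sep_max and end - start > min_len:           # threshold exceeded
            cuts.append(end - 1)
            start = end - 1                # consecutive segments share the cut point
            end = start + min_len
        else:
            end += 1                       # grow the window by one point
    return cuts
```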
1: for each time series do
2: Apply time series segmentation
3: for each segment do
4: Extract the coefficients of the segment
5: Compute the statistical features
6: Combine the coefficients and the statistical features into a single array
7: end for
8: Cluster all the mapped segments
9: Based on the previous clustering, map each time series
10: end for
11: Cluster mapped time series
12: Evaluate the goodness of the clustering
13: return Best quality clustering

The error from which the window is not further grown is denoted as $SEP_{max}$ and has to be defined by the user.

2) Segment mapping: After the segmentation process, each segment is mapped to an array, including the polynomial coefficients of the least squares approximation of the segment and a set of statistical features. Thus, each segment is projected into an $l$-dimensional space, where $l$ is the length of the mapped segment. The coefficients are directly obtained from the update procedure of the time series segmentation growing window specified in [25]. We discard the intercept, given that we are interested in the shape of the segment, not in its relative value. Moreover, we compute the following statistical features:

1) The variance ($S^2_s$) measures the variability of the segment:

$$S^2_s = \frac{1}{t_s - t_{s-1} + 1} \sum_{i=t_{s-1}}^{t_s} (y_i - \overline{y}_s)^2, \quad (27)$$

where $y_i$ are the time series values of the segment, and $\overline{y}_s$ is the average of the values of segment $s$.

2) The skewness ($\gamma_{1s}$) represents the asymmetry of the distribution of the time series values in the segment with respect to the arithmetic mean:

$$\gamma_{1s} = \frac{\frac{1}{t_s - t_{s-1} + 1} \sum_{i=t_{s-1}}^{t_s} (y_i - \overline{y}_s)^3}{\hat{\sigma}^3_s}, \quad (28)$$

where $\hat{\sigma}_s$ is the standard deviation of the $s$-th segment.

3) The autocorrelation coefficient ($AC_s$) is a measure of the correlation between the current values of the time series and the previous ones:

$$AC_s = \frac{\sum_{i=t_{s-1}}^{t_s} (y_i - \overline{y}_s)(y_{i+1} - \overline{y}_s)}{S^2_s}. \quad (29)$$

Using these statistical features and the coefficients previously extracted, each segment is mapped into an $l$-dimensional array ($l = c + f$), which is used as the segment representation, where $c$ is the degree of the polynomial and $f$ is the number of statistical features ($f = 3$, in our case). The mapping is then defined by:

$$v_s = (p_s, S^2_s, \gamma_{1s}, AC_s), \quad (30)$$

where $p_s$ are the parameters of the polynomial approximation of segment $s$. This procedure reduces the length of the segment from $(t_s - t_{s-1} + 1)$ to $(c + f)$. A sketch of this mapping is given below.
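A minimal sketch of this segment mapping (an assumed helper, not the authors' implementation) could be written as follows; note that the autocorrelation is normalised here, a common variant of Eq. (29):

```python
import numpy as np

def map_segment(seg, degree=1):
    # builds v_s of Eq. (30): trend coefficients plus three statistical features
    seg = np.asarray(seg, float)
    n, mean = len(seg), np.mean(seg)
    coeffs = np.polyfit(np.arange(n), seg, degree)[:-1]    # intercept discarded
    var = np.mean((seg - mean) ** 2)                       # Eq. (27)
    skew = np.mean((seg - mean) ** 3) / np.std(seg) ** 3   # Eq. (28)
    # normalised variant of Eq. (29); assumes a non-constant segment
    ac = np.sum((seg[:-1] - mean) * (seg[1:] - mean)) / (n * var)
    return np.concatenate([coeffs, [var, skew, ac]])       # length c + f, f = 3
```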
3) Segment clustering: A hierarchical clustering is subsequently applied to group all the segments of a time series, represented by the set of arrays $\{v_s,\, s \in \{1, \ldots, m\}\}$. The main goal is to represent all the time series with arrays of the same length, significantly reducing the size of the representation. The hierarchical clustering used is agglomerative, using the Ward distance defined in [53] as the similarity measure. The number of clusters considered for segment mapping is $k = 2$, for all the datasets and time series. This value is found to be robust enough for extracting a minimum amount of information about the internal characteristics of the series.

B. Second stage

The second stage of the proposed method consists of mapping the time series to a common representation, clustering them and evaluating the quality of the clustering. The steps of the second stage are summarised in Figure 3.

1) Time series mapping: The first stage transforms each time series into a set of clustered segments. Now, a specific mapping process is used to represent all time series in the same dimensional space. For each time series, $Y_i$, we extract the corresponding clusters, $C_{ij}$, from the process described in Section II-A3, where $i \in \{1, \ldots, T\}$, $j \in \{1, \ldots, k\}$, $k$ being the number of clusters and $T$ being the number of time series. For each cluster, $C_{ij}$, we extract:
• Its centroid $\overline{C}_{ij}$, i.e. the average of all the cluster points.
• The mapping of the segment with the highest variance, denoted as $X_{C_{ij}}$ (in order to represent the extreme segments, i.e. the most characteristic segment of the cluster $C_{ij}$).
In this way, the length of the mapped cluster is $w = l \times 2$, given that both the centroid and the extreme segment have length $l$. This process is applied to each cluster of each time series. The mapping of a cluster can be formally specified as $(\overline{C}_{ij}, X_{C_{ij}})$, $\forall i \in \{1, \ldots, T\}$, $\forall j \in \{1, \ldots, k\}$.

[Fig. 3. The second stage consists of four steps: firstly, each cluster is represented by a set of statistical features, which, in conjunction, represent the mapped time series, $Y_t$. Then, a clustering process is applied to the mapped time series, clusters being denoted as $G_l$. After that, the clustering quality is measured, using different strategies based on internal indices to choose the best configuration of $SEP_{max}$. Finally, an external index compares our approach to the ground truth.]

Apart from the representation of each cluster, two more characteristics of the time series are also considered:
• The error difference ($MD_{C_i}$) between the segment least similar to its centroid (farthest segment) and the segment most similar to its centroid (closest segment). We evaluate the error of a segment by using the mean squared error (MSE) of the corresponding polynomial approximation.
• The number of segments of the time series, $N_{C_i}$.
The order in which the clusters are arranged in the mapping is important and has to be consistent across all the time series. This is achieved by a simple matching procedure, where the centroids of one time series are used as reference, and, for the rest of the time series, the closest centroids with respect to the reference ones are matched together. Once the matching is defined, each time series is transformed into a mapped time series $Y_i$, composed of the characteristics of the extracted clusters. Thus, the length of a mapped time series is $(w \times k) + v$, $k$ being the number of clusters, and $v$ being the number of extra characteristics of the time series, which is 2 in our case.

2) Time series clustering: In this step, the algorithm receives the mapped time series and the clustering is performed, choosing again an agglomerative hierarchical methodology. The idea is to group similar time series in the same cluster. In our experiments, the number of clusters to be found, $L$, is defined as the number of classes of the dataset (given that we consider time series classification datasets for the evaluation). In a real scenario, $L$ should be given by the user. This advantage is given to all the methods compared.

C. Parameter adjustment

The TS3C algorithm previously defined involves only one important parameter that has to be adjusted by the user: the error threshold for the segmentation procedure, $SEP_{max}$ (see Section II-A1). We propose to adjust it considering internal clustering evaluation metrics (see Section I-B), which can be used without knowing the ground truth labels. In this way, the algorithm is run using a set of values for this parameter, and all these configurations are evaluated in terms of the internal measures. Two different strategies are proposed to select the best parameter value:
• Selecting the $SEP_{max}$ leading to the best Caliński and Harabasz index (CH), given that this index has been shown to be very robust [46].
• Selecting the $SEP_{max}$ which obtains the best value for the highest number of internal measures. All the internal metrics defined in Section I-B are used in this case. We refer to this option as majority voting.
A sketch of both strategies follows.
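In the sketch below, ts3c_cluster and internal_indices are hypothetical helpers standing for the TS3C clustering itself and for the computation of the internal metrics of Section I-B; for simplicity, all indices are assumed to be oriented so that higher values are better:

```python
from collections import Counter

def select_sep_max(dataset, grid=range(10, 101, 10), majority_voting=False):
    # evaluate each candidate threshold with all internal indices
    scores = {s: internal_indices(ts3c_cluster(dataset, sep_max=s))
              for s in grid}
    if not majority_voting:
        # strategy 1: keep the threshold with the best CH index
        return max(grid, key=lambda s: scores[s]['CH'])
    # strategy 2: each internal index votes for its preferred threshold,
    # and the threshold with the highest number of votes wins
    votes = Counter(max(grid, key=lambda s: scores[s][idx])
                    for idx in next(iter(scores.values())))
    return votes.most_common(1)[0][0]
```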
III. EXPERIMENTAL RESULTS AND DISCUSSION

In this section, the experimental results are presented and discussed. Firstly, we detail the characteristics of the datasets used in the experiments. Secondly, we explain the experimental setting. Then, we show the results and discuss them. Finally, a statistical analysis of the results is performed.

A. Datasets

84 datasets from the UCR Time Series Classification Archive [54] have been considered. This benchmark repository (last updated in Summer 2015) is made of synthetic and real datasets from different domains. The repository was originally proposed for time series classification, so each dataset was split into training and test subsets. However, for time series clustering, where the class label will only be considered for evaluating the clustering quality, we can safely merge these subsets. The details of the datasets are included in Table I. Also, we have computed the imbalance ratio (IR) for each dataset, as the ratio of the number of instances in the majority class with respect to the number of examples in the minority class [55]. Although the length of the time series is the same for all the elements of each dataset of the repository, the TS3C algorithm could be applied to datasets with different-length time series.

B. Experimental setting

The experimental design for the datasets under study is presented in this subsection. The degree of the polynomial of the least-squares approximation is set to 1, given that higher order polynomials led to worse results. The number of groups for the segment clustering is $k = 2$, given that the nature of the different time series datasets seems to be very similar. The other parameter of the algorithm, $SEP_{max}$, has been adjusted using the two options described in Section II-C: (1) directly selecting the clustering leading to the best Caliński and Harabasz (CH) measure (TS3C$_{CH}$), and (2) considering all the internal measures in Section I-B and applying a majority voting procedure to select the best one (TS3C$_{MV}$). The range considered for the parameter $SEP_{max}$ is $\{10, 20, 30, \ldots, 100\}$. The Rand index (RI) is used as the external measure for evaluating the results. The number of clusters (for the time series clustering stage) is set to the number of real labels in each dataset. We compare our method against two state-of-the-art algorithms:
• The $DD_{DTW}$ distance metric together with a hierarchical clustering algorithm ($DD_{DTW}$-HC) [18]. This method considers the negative inter-group variance ($-V$) as the internal cluster validation measure to set the $\alpha$ value (see Section I-A1). This is the best technique among those proposed in [18].
• K-Spectral Centroid (KSC). This algorithm, proposed by Yang et al. [19], is able to find clusters of time series that share a distinct temporal pattern. See more details in Sections I-A1 and I-A2.
Because the KSC algorithm is stochastic, it was run 30 times, while the rest of the methods (TS3C$_{CH}$, TS3C$_{MV}$ and $DD_{DTW}$-HC) are deterministic (and they have been run once). The computational time needed by all the algorithms will also be analysed in this section.

C. Results

The results of TS3C$_{CH}$ and TS3C$_{MV}$ are shown in Table II, including both the RI performance and the computational time needed by the algorithms (average computational time in the case of KSC). Note that for some datasets, the running time of $DD_{DTW}$-HC was higher than 763587 (the maximum time of the rest of the methods), so they have been marked with "> 763587" and the results have been taken from [18]. As can be seen, we have included, as a subscript, the error threshold of the segmentation algorithm ($SEP_{max}$) of the best clustering configuration for the TS3C$_{CH}$ and TS3C$_{MV}$ methods (obtained using internal criteria). From the results in Table II, the following facts can be highlighted:
• Compared with $DD_{DTW}$-HC, TS3C$_{CH}$ obtains better solutions for 48 datasets, slightly worse results for 34, and the same solution for the remaining 2 datasets. If $DD_{DTW}$-HC is compared with TS3C$_{MV}$, our approach obtains better solutions for 50 datasets, worse results for 32, and similar results for the remaining 2 datasets.
• Compared with KSC, TS3C$_{CH}$ leads to better solutions for 42 datasets, while for 41 the results are slightly worse. Finally, for the remaining dataset, the result is the same. When this method is compared with TS3C$_{MV}$, better solutions are obtained in 45 cases, slightly worse solutions are found for 37 datasets, and there are no differences for 2 datasets.
Analysing the average performance, the mean RI values are 0.661, 0.657, 0.606 and 0.601 for TS3C$_{CH}$, TS3C$_{MV}$, $DD_{DTW}$-HC and KSC, respectively.

D. Statistical analysis

Based on the previous results, we consider all the datasets to apply a set of non-parametric statistical tests in order to determine whether the differences found are obtained by chance. Given that the mean values across all datasets do not follow a normal distribution, we run the Wilcoxon signed-rank test, which is a non-parametric test that can be used to determine whether two dependent samples were selected from populations having the same distribution [56], [57]. This design for the statistical tests makes possible the comparison of the deterministic methods (TS3C$_{CH}$, TS3C$_{MV}$ and $DD_{DTW}$-HC) with the stochastic method (KSC, for which the average RI over the 30 runs is used). The results of the tests made using the average RI are shown in Table III. As can be observed, the differences are statistically significant for $\alpha = 0.05$ between TS3C$_{CH}$ and $DD_{DTW}$-HC, and between TS3C$_{MV}$ and $DD_{DTW}$-HC. Also, if we consider $\alpha = 0.10$, the TS3C$_{MV}$ methodology is statistically better than KSC. Consequently, these results show that the proposed methodology obtains more robust results than these state-of-the-art alternatives. On the other hand, the results of the tests made using the average computational time are shown in Table IV. In this case, considering $\alpha = 0.05$, there are statistically significant differences between: TS3C$_{MV}$ and TS3C$_{CH}$, TS3C$_{CH}$ and $DD_{DTW}$-HC, TS3C$_{MV}$ and $DD_{DTW}$-HC, and KSC and $DD_{DTW}$-HC. This means that both TS3C methods are more efficient than $DD_{DTW}$-HC, and that there are no significant differences when comparing them to KSC.
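A minimal sketch of this comparison (using scipy on assumed vectors of per-dataset average RI values of two methods, aligned by dataset) is:

```python
from scipy.stats import wilcoxon

def significantly_different(ri_a, ri_b, alpha=0.05):
    # paired non-parametric test on the per-dataset differences
    statistic, p_value = wilcoxon(ri_a, ri_b)
    return p_value < alpha, p_value
```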
IV. CONCLUSIONS

In this paper, we have presented and tested a novel time series clustering approach, with the purpose of exploiting the similarities that can be found in subsequences of the analysed time series. The method is a two-stage statistical segmentation-clustering time series procedure, TS3C, which is based on: (1) a least squares polynomial segmentation procedure, using the growing window method, (2) the extraction of features from each segment (polynomial trend coefficients, variance, skewness and autocorrelation coefficient), (3) a clustering of these features using a hierarchical clustering, (4) a representation of each cluster by its centroid, the segment with the highest variance, the difference in MSE, and the number of segments, (5) a mapping of the time series using the information of its clusters, and (6) a final clustering stage using the mapped dataset as input. Internal performance measures are used to adjust the only parameter value. The proposed TS3C method is compared against two state-of-the-art methods: hierarchical clustering using the $DD_{DTW}$ distance measure ($DD_{DTW}$-HC) and the K-Spectral Centroid clustering algorithm (KSC). Our method outperforms both methods using two different approaches for deciding the values of the parameters. Although the segmentation process and the first hierarchical clustering involve a considerable computational load, the global cost is acceptable, given that the final clustering does not depend on the size of the original time series. In addition, a Wilcoxon signed-rank statistical test is used to evaluate whether the methodology is statistically more accurate and/or more efficient than the state-of-the-art algorithms. A future line of research corresponds to the use of different approximation methods and segmentation techniques, with the purpose of reducing the computational cost of the first stage. Another direction can be the application of this methodology as a previous step for prediction tasks (ordinal or nominal classification). The best result is highlighted in bold face, while the second one is shown in italics.
6,992
1810.10989
2898290428
Speech synthesis is widely used in many practical applications. In recent years, speech synthesis technology has developed rapidly. However, one of the reasons why synthetic speech sounds unnatural is that it often suffers from over-smoothing. In order to improve the naturalness of synthetic speech, we first extract the mel-spectrogram of the speech and convert it into a real image, then take the over-smoothed mel-spectrogram image as input, and use an image-to-image translation Generative Adversarial Network (GAN) framework to generate a more realistic mel-spectrogram. Finally, the results show that this method greatly reduces the over-smoothing of the synthesized speech, bringing it closer to the mel-spectrogram of real speech.
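As a rough illustration of the first step described above, a mel-spectrogram can be extracted and mapped to an image-like array as in the following sketch (using librosa; all parameter values here are assumptions, not the paper's settings):

```python
import numpy as np
import librosa

def mel_spectrogram_image(wav_path, n_mels=80):
    y, sr = librosa.load(wav_path, sr=None)       # keep the native sample rate
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                       hop_length=256, n_mels=n_mels)
    S_db = librosa.power_to_db(S, ref=np.max)     # log-compress to decibels
    # normalise to [0, 255] so the spectrogram can be treated as a grayscale image
    img = 255 * (S_db - S_db.min()) / (S_db.max() - S_db.min())
    return img.astype(np.uint8)
```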
So far, GANs have been widely used in the fields of computer vision and image processing, and have achieved many impressive results @cite_4 @cite_6 @cite_7 @cite_8 . Pix2pixHD @cite_9 is the state of the art for image-to-image translation.
{ "abstract": [ "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.", "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters.", "Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the \"deconvolution approach\" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.", "We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). 
Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic. In this work, we generate 2048x1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing/adding objects and changing the object category. Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.", "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations." ], "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_9", "@cite_6" ], "mid": [ "2552465644", "2949117887", "2123045220", "2774625825", "2173520492" ] }
0
1810.10656
2898446106
An image-related question defines a specific visual task that is required in order to produce an appropriate answer. The answer may depend on a minor detail in the image and require complex reasoning and use of prior knowledge. When humans perform this task, they are able to do it in a flexible and robust manner, integrating modularly any novel visual capability with diverse options for various elaborations of the task. In contrast, current approaches to solving this problem by a machine are based on casting the problem as an end-to-end learning problem, which lacks such abilities. We present a different approach, inspired by the aforementioned human capabilities. The approach is based on the compositional structure of the question. The underlying idea is that a question has an abstract representation based on its structure, which is compositional in nature. The question can consequently be answered by a composition of procedures corresponding to its substructures. The basic elements of the representation are logical patterns, which are put together to represent the question. These patterns include a parametric representation for object classes, properties and relations. Each basic pattern is mapped into a basic procedure that includes meaningful visual tasks, and the patterns are composed to produce the overall answering procedure. The UnCoRd (Understand Compose and Respond) system, based on this approach, integrates existing detection and classification schemes for a set of object classes, properties and relations. These schemes are incorporated in a modular manner, providing elaborated answers and corrections for negative answers. In addition, an external knowledge base is queried for required common knowledge. We performed a qualitative analysis of the system, which demonstrates its representation capabilities and provides suggestions for future developments.
Visual question answering has developed dramatically in the last few years @cite_31 @cite_27 @cite_35 . Practically all current works are based on casting the problem as a multi-class classification problem, where image features, retrieved by a Convolutional Neural Network, are fused with question features (mostly extracted by a Recurrent Neural Network) and used to predict one of the common training-set answers, which are mostly short and succinct. These methods have the advantage of not requiring a complicated parsing and understanding process, and they may present decent results when trained and tested on currently existing datasets, yet they lack some important human characteristics, such as using a compositional process that utilizes existing and meaningful sub-processes. Using meaningful sub-processes allows humans to focus on different aspects and scopes according to the specific task, utilize existing abilities and modularly integrate novel ones, understand limitations, and provide elaborations, including suggestions of alternatives.
{ "abstract": [ "Visual Question Answering (VQA) is a challenging task that has received increasing attention from both the computer vision and the natural language processing communities. Given an image and a question in natural language, it requires reasoning over visual elements of the image and general knowledge to infer the correct answer. In the first part of this survey, we examine the state of the art by comparing modern approaches to the problem. We classify methods by their mechanism to connect the visual and textual modalities. In particular, we examine the common approach of combining convolutional and recurrent neural networks to map images and questions to a common feature space. We also discuss memory-augmented and modular architectures that interface with structured knowledge bases. In the second part of this survey, we review the datasets available for training and evaluating VQA systems. The various datatsets contain questions at different levels of complexity, which require different capabilities and types of reasoning. We examine in depth the question answer pairs from the Visual Genome project, and evaluate the relevance of the structured annotations of images with scene graphs for VQA. Finally, we discuss promising future directions for the field, in particular the connection to structured knowledge bases and the use of natural language processing models.", "Visual Question Answering (VQA) presents a unique challenge as it requires the ability to understand and encode the multi-modal inputs - in terms of image processing and natural language processing. The algorithm further needs to learn how to perform reasoning over this multi-modal representation so it can answer the questions correctly. This paper presents a survey of different approaches proposed to solve the problem of Visual Question Answering. We also describe the current state of the art model in later part of paper. In particular, the paper describes the approaches taken by various algorithms to extract image features, text features and the way these are employed to predict answers. We also briefly discuss the experiments performed to evaluate the VQA models and report their performances on diverse datasets including newly released VQA2.0[8].", "Abstract Visual Question Answering (VQA) is a recent problem in computer vision and natural language processing that has garnered a large amount of interest from the deep learning, computer vision, and natural language processing communities. In VQA, an algorithm needs to answer text-based questions about images. Since the release of the first VQA dataset in 2014, additional datasets have been released and many algorithms have been proposed. In this review, we critically examine the current state of VQA in terms of problem formulation, existing datasets, evaluation metrics, and algorithms. In particular, we discuss the limitations of current datasets with regard to their ability to properly train and assess VQA algorithms. We then exhaustively review existing algorithms for VQA. Finally, we discuss possible future directions for VQA and image understanding research." ], "cite_N": [ "@cite_27", "@cite_31", "@cite_35" ], "mid": [ "2496096353", "2756766706", "2529436507" ] }
Understand, Compose and Respond
Understand, Compose and Respond - Answering Visual Questions by a Composition of Abstract Procedures
Human ability to answer a question related to an image is remarkable in several ways. Given a single image, a large number of different questions can be answered about it. Answering these questions may require the detection and analysis of subtle, non-salient cues. Prior information and data obtained through experience are also incorporated into the process, to enable answering the question, which may be highly complex. The answering process itself is open to reasoning, allowing for example elaborations on the answer, or explaining how it was reached. In the last few years, the problem of image question-answering by a machine was addressed by many studies (Teney, Anderson, He, & Hengel, 2017a; Pandhre & Sodhani, 2017). The system we propose and describe in this work handles a wide range of questions about images, without training on any questions (zero-shot learning). We concentrate on designing a general process for this task and not on fitting results to the statistics of a specific dataset, as current end-to-end approaches do. Our system uses many existing methods for different visual tasks, such as detection, classification, segmentation, or extracting objects' properties and relations. In some cases novel detection methods were developed; however, this is not a main focus of the work, as our system is modular, enabling 'plugging in' new detectors to enhance its capabilities.

The structure of questions

A central aspect of our scheme is that different questions share a similar structure or subcomponents with similar structure. For instance, the following questions have components with a common structure:

What kind of pants is the person on the bed wearing? → person on bed
Is the giraffe behind a fence? → giraffe behind fence

The part with the common structure can be represented as:

There exist $X$ of class $c_x$ and $Y$ of class $c_y$, such that $r(X, Y)$

Such structures may serve as building blocks for a compositional question representation. All components with similar structures can be handled by the same procedure, performing part of the answering task. In our analysis, questions could be represented by a combination of a few types of structures which we refer to as "basic patterns". These patterns are short parametric logical phrases that represent an atomic segment of the question structure. Each basic pattern dictates a particular implementation scheme utilizing a pool of implemented building blocks. The combination of basic patterns determines the entire procedure for answering the question. One advantage of such a scheme is that it is modular, allowing the addition of building blocks to increase the scope of the scheme, with no dependency on the statistics of a specific visual questions dataset. A second advantage is that the coverage of queries grows exponentially with the number of building blocks, without the need to encounter such queries as training examples. An additional advantage is its "understanding" capabilities: the basic meaningful components break up the process and allow a separate analysis of each component, including reasons for failure and explanations.

The aspect of question coverage is also addressed in other directions. Such a direction is increasing the recognizable vocabulary of the question using commonsense knowledge.

Utilizing commonsense knowledge

In many cases answering a question requires the integration of prior commonsense knowledge, especially about semantic relations between concepts. For example, when answering the question 'What animal is this?', detection capabilities of specific animals (e.g. horse, dog, cat) will not suffice, since the answer requires the general notion of 'animal' and which particular instances belong to it. However, a query to an external knowledge database (e.g. ConceptNet (Speer & Havasi, 2013)) may provide subcategories of 'animal'. Consequently, specific detectors can be activated to seek these specific recognizable animal types. These knowledge databases are mostly based on information extracted from the internet and include commonsense information about the world. Querying such a database allows the completion of missing information such as semantic connections between object classes (e.g. synonym, superordinate, subordinate), as in the example above, the typical usage of different objects, and more. Integrating this type of information is important when answering questions asked by humans, as it is common knowledge and treated as universally available.
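As an illustration, such subcategories can be retrieved from the public ConceptNet 5 REST API, as in the following sketch (the endpoint, parameters and response fields are those of the public API as assumed here, not necessarily what the system uses):

```python
import requests

def subcategories(concept, limit=50):
    # ask ConceptNet for edges of the form '<start> IsA <concept>'
    resp = requests.get('http://api.conceptnet.io/query',
                        params={'rel': '/r/IsA', 'end': f'/c/en/{concept}',
                                'limit': limit}).json()
    return {edge['start']['label'] for edge in resp.get('edges', [])}

# e.g. subcategories('animal') may include 'horse', 'dog', 'cat', ...
```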
UnCoRd Answering System

Approach Overview

Our Understand, Compose and Respond (UnCoRd) approach is based on the following observations:

• There is a representation of the question in terms of objects, their classes, properties and relations, including quantifiers and logical connectives as well as non-logical symbols: predicates and functions. The representation has an 'abstract' structure, i.e. one independent of the particular objects, classes, properties and relations, which are represented as parameters. A single abstract representation can represent many different concrete questions. Our main thesis is that the procedure to be applied for obtaining the answer depends on the abstract structure of the question and not on its particular elements. Hence, it is important to use the right kind of abstract representation, which will allow this mapping to procedures (where all questions with the same abstract structure require the same procedure). A proper parsing and mapping of the language question to its abstract representation should be obtained to use this method.

• The question has a compositional structure: there are basic components put together in particular ways. The abstract representations are composed from 'basic patterns' and methods for putting them together into more complex compound structures. This compound structure determines how the procedures are constructed. There are basic procedures for the basic patterns, and methods for composing from them a more complex procedure that deals with the compound abstract structures. In other words, we get a procedure for the entire question by having procedures for the basic components and a procedure to put them together.

We would like our system to meet the following criteria:
- Answer correctly and efficiently.
- "Understand" the question, in the sense of:
• Breaking the answering procedure into a set of simple visual tasks.
• Identifying which tasks it can perform and what its limitations are, indicating if something is missing or unknown.
- Ability to explain and reason: elaboration of the answering process using the image and intermediate results, including error correction and alternative suggestions.
- Modularity and robustness: handling questions and image categories of various types, not limited by a training set.
- Though not using a human psychology model, the ability to handle questions that people answer easily (and that may be "hard" for computers) is desired, e.g. 'odd man out'.

A question can be seen as a statement about the image that the answering system tries to make true or refute.
Making the statement true requires an assignment of the particular classes, properties and relations to the image. Their identification in the image is based on pre-trained classifiers and detectors. The recognizable set is modular and can be increased by adding new detectors or switching to stronger ones. Logical operations are used to generate logic sentences with a formulation that fits first-order logic (including functions) with some extensions. The answering procedure is generated according to the input question in the following manner:

Question → Question representation → Procedure

A proper representation is fundamental to allow a successful mapping of the question into the answering routine. This representation should be concise and support generating the same procedure when applied to similarly structured questions with different choices of classes, properties and relations. To obtain that, the visual elements (object classes, object properties and object relations) are parameters, integrated using logic operations (e.g. ∧, ∨) and quantifiers (e.g. ∀, ∃, ∃5) into basic logic patterns corresponding to specific structures. These patterns are combined and merged to compose the more complicated structures that create the representation of the question and can be mapped to the answering procedure. We use a directed graph to describe the question, which is a natural choice in our case and allows diverse compositions of substructures. In this graph each node represents an object entity and its description (e.g. a list of required properties). These nodes are linked by the graph edges, which represent relations between objects. The graph is divided into small segments that relate either to one node, corresponding to part of its information (e.g. object class and one property), or to an edge and the two classes of the nodes it connects. Each of these graph segments matches a basic pattern that is handled by a corresponding procedure, using the specific visual elements of this substructure. The graph representation allows us to decompose the answering procedure into a set of elementary procedures and put them together to generate a modular answering procedure. The elementary procedures invoke visual analyzers, which are the basic modules of the process. Each class, property and relation has a visual analyzer to establish it. More general visual operations that serve more than one particular visual element (e.g. depth estimation) are activated according to need, and their results are available to all basic procedures. The overall routine is obtained by applying these procedures and operations in an appropriate order, to appropriate objects, where the number of required assignments per object is set by the quantifier of the corresponding node. The visual elements may have 'types', such as classes that can be basic or subordinate (i.e. basic with additional properties), properties that may be comparative (e.g. 'older than') and relations that can be symmetric (e.g. 'beside') or not. The entire process of answering a visual question is described in Figure 1. It starts by receiving the input language question and mapping it to a graph representation. The next stage is running a recursive procedure that follows the graph and invokes the procedures associated with the basic structures, using the specific visual elements as inputs. After the results are obtained, the answer is returned. Questions with a simple structure (e.g.
"Is there a red car?") can be represented by matching one specific pattern to a question. This covers a wide range of questions, however by allowing a composition of simple patterns, into a more complicated structures, the quantity of supported questions is raised substantially (from ∼60% to ∼90%, according to an analysis of 542 questions on images asked freely by people and using a set of 12 patterns). This composition is done using a graph. For example in the question "Is there a red car to the right of the yellow bus? " there are two parts with a simple structure "Is there an object of class c with a property p?" connected by the relation "to the right of", which corresponds to another simple structure: "Is there an object of class c 1 and an object of class c 2 that have the relation r between them?". The graph representing the question is: Map into a graph representation question Run a recursive procedure following the graph image Answer When a specific question is given, the question is parsed and mapped to a directed graph, where the visual elements are its parameters. This graph corresponds to a logic expression that is composed of simple expressions, that may share the object variables. Some of the parametric visual elements are variables that require estimation based on the image. Once the variables are estimated, the logic expression is evaluated (as true or false) and the query is answered accordingly. The formulation of the logic expression fit first order logic (including functions) with some extensions (e.g. a variable-sized set of arguments or outputs for some functions). Each simple logic expression is related to a basic pattern, which corresponds to a basic procedure. The basic procedure obtains an answer to the expression by activating visual analyzers according to the types of object classes, properties and relations (which are inputs to the basic procedure). Such a system will have the ability of constant improvement by adding detectors for new classes, properties and relations according to requirements. Similar characteristics are also evident in human learning, where new learned details are integrated into the existing mechanism of world perception. The UnCoRd system is implemented following the approach described above. It answers visual questions using a composed process that follows the graph representation of the question, activating real world visual analyzers. This system is described in the following section. System Description Mapping to a Directed Graph One of the system's main tasks is to translate the query, given in natural language, into an abstract representation which will then be mapped into a procedure (the first step, described in Figure 1). We first use the START parser (Katz, 1988(Katz, , 1997 The generated set of ternary expressions is used for the generation of a graph representation, where nodes represent objects and edges represent relations between objects. The node include all of the object's requirements according to the question, mainly its class, properties that may be required (e.g. 'red') or queried (e.g. 'what color') and quantifiers that are not the default existence quantifier (e.g. 'all', 'two'). The directed edges correspond to relations between objects where the edge direction implies the direction of relation. Each edge is also assigned a direction of progress for the answering procedure. 
The direction of progress is initialized as the relation direction, but may be modified according to the initial object detection, to enhance detection abilities (see Section 3.2.2 for details). An example of mapping a question to a directed graph can be seen in Figure 2.

The graph representation is used to fit an answering procedure to each particular question. Fragments of information are extracted from subgraphs that include up to two connected nodes. A graph fragment includes a subset of elements (classes, properties, property functions and relations) that has a mapping to one of a few basic logic patterns. This mapping, combined with the particular accompanying visual elements, defines a logic expression that selects and guides a component of the answering procedure. For example, a fragment consisting of a node's class and a required property is mapped to the pattern ∃X (c_X(X) ∧ p_X(X)). The specific class c_X and property p_X define the particular logic expression that should be checked. Such mappings are done for the entire graph, where each fragment is mapped into a basic logic pattern and specific visual elements. These simple logic expressions, joined using logic operations, constitute one logic expression that represents the entire question. Each basic logic pattern has a dedicated procedure that performs the evaluation required to confirm or refute it, using visual analysis of the image. The procedure provides an answer according to an accompanying query.

We use the following notation for describing the basic logic patterns:

X, Y - objects.
c(X) - a class, evaluated for object X (as True/False), e.g. 'person', 'boy', 'bird', 'train'.
p(X) - a predicate property (predicate of arity 1), evaluated for object X (as True/False), e.g. 'blue', 'male', 'big'.
f(X) - a property function; returns properties of a specific type, e.g. 'color', 'age', 'size'.
g(S_t) - a global property function for a subset of objects of the same class, S_t ⊂ {X_t : c_t(X_t)}; returns properties of a specific type, e.g. 'quantity', 'difference', 'similarity'.
p_f - a predicate property, constrained to the possible return values of f(X) (e.g. blue = color(X), male = gender(X), big = size(X)).
a_g - one of the possible values returned by g(S_t) (e.g. 3 = quantity(S_t), where S_t = {X_t : c_t(X_t)}).
r(X, Y) - a relation between objects X and Y (predicate of arity 2), e.g. X below Y → below(X, Y), and in the same manner looking_at(X, Y), near(X, Y).
?- - a query, the requested answer.

Objects (and other elements) starting with a capital letter (e.g. X, Y) are unknown elements (variables) that should be estimated according to the image.

The particular patterns used were selected since they provide a small, simple and basic set that can naturally compose the logic representation of the question. This small set provides high flexibility in composing a wide variety of logic expressions using the different visual elements. A survey we conducted, along with other checks, indicated that this set is empirically sufficient to represent the analyzed queries. Following are the basic logic patterns that are mapped to basic procedures in the question answering process (each followed by its corresponding graph fragment). The ∃ quantifier may be replaced by other quantifiers (e.g. ∀, ∃2).
• Property Existence: ∃X (c_X(X) ∧ p_X(X)); ?- ∃/c_X
  Graph fragment: [c: c_X, p: p_X]
  Examples: 'Is there a brown bear?' (query for validity with a specific object class); 'What is the purple object?' (unknown and queried object class).
  An example of a modification due to a quantifier parameter: ∀X (c_X(X) ∧ p_X(X)); ?- ∃, e.g. 'Are all bears brown?'

• Function Property: ∃X (c_X(X)), f(X) = P_f; ?- P_f
  Graph fragment: [c: c_X, f: P_f]
  Example: 'What color is the chair?'

• Property of a Set: ∀X_t ∃S_t (S_t = {X_t : c_t(X_t)}), g(S_t) = A_g; ?- A_g
  Graph fragment: [c: c_{X_t}, g: A_g]
  Example: 'How many planes are in the photo?'

• Object Existence: ∃X (c_X(X)); ?- ∃/c
  Graph fragment: [c: c_X]
  Examples: 'Is this a dog?'; 'What is it?'

• Relation Existence: ∃X ∃Y (c_X(X) ∧ c_Y(Y) ∧ r(X, Y)); ?- ∃/c_X/c_Y
  Graph fragment: [c: c_X] --r--> [c: c_Y]
  Examples: 'Is the man looking at the children?' (validity query); 'What is on top of the television?' (query for one of the classes).

The combination and composition of these patterns has powerful representation capabilities and provides a mapping to a set of basic procedures that constitute the full answering procedure. Composing the procedure from "real-world" visual tasks allows the use of existing detectors (including separate improvement of each task), as well as explanation, elaboration and correction.

As mentioned above, modified quantifiers may be added to nodes according to the number of objects required in the question (see Figure 2). These quantifiers may be either numbers (e.g. 'Are there three guys?') or 'all' for an entire group of objects. Setting the group may depend on subtle phrase differences, which affect the answering procedure's flow and results, as can be seen in Figure 3.

Figure 3: The question in (a) requires all 'dog' objects to be both black and small; hence the first dog that is not black renders the logic phrase false and the answer is "no" (the failed object and the reason are marked in the image). The question in (b) requires only that the black dogs be small; hence all dogs are checked for color, and the size of the black ones is verified to be small. Since this holds, the answer is "yes".

The graph naturally represents objects, their properties and binary connections between them. Though this covers a wide variety of questions, using global image information and some extensions to the basic graph increases the support for additional attributes. A property of a group is an example of such an extension. Properties that use global information are 'closest' and 'size' (which is relative to other objects). Specific implementations for complicated attributes may be added as dedicated tasks or by a preprocessing step that breaks them into graph-plausible segments. An example of such an implementation in our system is 'odd man out' (e.g. "How is one cow not like the others?"), where the relations 'diff<f>' and 'sim<f>' (for different and similar values of property f, respectively) are used to check and compare the properties of objects. An example is given in Figure 4. The 'similarity' attribute (which queries for a property that is similar for all objects in the group) is handled in the same manner.
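To make the mapping from the basic patterns above to basic procedures concrete, here is a schematic sketch (our illustration, not the system's code) of procedures for the Property Existence pattern and its ∀ variant, over a list of detected objects:

```python
# Detected objects are assumed to be dicts such as
# {'cls': 'bear', 'props': {'color': 'brown', 'size': 'big'}}.

def property_existence(objs, cls, prop):
    """∃X (c_X(X) ∧ p_X(X)); ?- ∃   e.g. 'Is there a brown bear?'"""
    valid = [o for o in objs
             if o['cls'] == cls and prop in o['props'].values()]
    return ('yes' if valid else 'no'), valid   # valid objects feed later checks

def property_forall(objs, cls, prop):
    """∀X (c_X(X) ∧ p_X(X)); ?- ∃   e.g. 'Are all bears brown?'"""
    members = [o for o in objs if o['cls'] == cls]
    ok = bool(members) and all(prop in o['props'].values() for o in members)
    return ('yes' if ok else 'no'), members

objs = [{'cls': 'bear', 'props': {'color': 'brown'}},
        {'cls': 'bear', 'props': {'color': 'black'}}]
print(property_existence(objs, 'bear', 'brown')[0])  # -> yes
print(property_forall(objs, 'bear', 'brown')[0])     # -> no
```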
The main building blocks of the question representation are the visual elements: object classes, object properties and object relations.

• Object Classes
An object class is the category of object required by the question. It does not necessarily match the used object detector. To enlarge the coverage of supported object classes, we define a few categories of object classes and handle them accordingly.

- Basic Classes
These are the classes specifically covered by the main multi-class object detector. We currently use instance segmentation by mask R-CNN (He, Gkioxari, Dollár, & Girshick, 2017) for the 80 classes of the COCO dataset (Lin, Maire, Belongie, Hays, Perona, Ramanan, Dollár, & Zitnick, 2014). Having the segmented object is very useful, as this accuracy is required in many cases (e.g. for the relation 'touch'). Other detection methods are also integrated and may be used instead.

Figure 4: An 'odd man out' example. This is a complicated attribute that requires special treatment and mapping to the graph representation. Bounding boxes for the birds with the common property and for the 'odd man out' bird are marked (in red and yellow, respectively). [Object detection is based on faster R-CNN + DeepLab.]

- Subordinate Classes
Subordinate classes are basic classes with additional properties (e.g. person subcategories). These are handled by applying face detection (Mathias, Benenson, Pedersoli, & Van Gool, 2014) to the detected 'person' objects, followed by an age and gender classifier (Levi & Hassner, 2015) on the results (an example is demonstrated in Figure 5).

- Superordinate Classes
Each category of a superordinate class includes a few basic classes (for example, furniture or animal). To identify these, we use ConceptNet (Speer & Havasi, 2013), a commonsense knowledge database based on data extracted from the internet (see also Section 3.2.2). It includes concepts and predefined relations between them. We use the relations 'InstanceOf', 'IsA', 'MadeOf' and 'PartOf' with the requested class, and keep the results that fit our basic classes list. The detected objects of these classes are retrieved and used for the rest of the procedure. Also, if the query is for the type of the requested superordinate class, the name of the detected basic class is given as the answer (see Figure 5 for an example).

- Similar Classes
A class that has a synonym or a very similar class in the basic classes set may also be searched for as this corresponding class. These correspondences are extracted using the 'Synonym' and 'SimilarTo' relations in ConceptNet.

- A Group of Objects
To identify a class that represents a group of objects (possibly of several optional basic classes), the ConceptNet relation 'MemberOf' is used (e.g. flock → bird, sheep; fleet → bus, ship, ...). A quantity requirement of at least two objects is added (demonstrated in Figure 5).

- Sub Objects
Some objects are parts of 'known' objects and can be extracted according to the detection of the host object and additional processing. We apply human pose estimation (Chen & Yuille, 2014) to obtain the different body parts when requested (e.g. 'left/right hand', 'left/right foot'). Relative areas of objects (e.g. 'the middle of the bus') are also treated as sub objects. In these cases, left and right differ from other uses of left/right as a location property (e.g. 'the left box'). A 'shirt' is also treated as a sub object, corresponding to the torso area provided by the human pose estimation results (an example is given in Figure 5).

• Object Properties
Objects have various visual properties. We differentiate between binary properties (e.g. 'red') and function properties that return the property of the object from a specific category (e.g. 'color'). Table 1 describes the used set of properties, divided (mostly) into groups of function properties.

Table 1: The set of supported object properties.

  Properties' Group             Predicate Properties
  color/colors                  11 colors (e.g. 'black', 'blue', ...)
  age^a                         ages and age inequalities (based on 8 age groups)
  gender^a                      female/male
  location^b (e.g. where)       spatial image location (e.g. 'bottom (of the image)')
  relative location^bc          location relative to other objects (e.g. 'the left dog')
  type                          subclass (when available)
  size                          'small', 'big', 'average'
  quantity^d                    number of objects
  difference^d (odd man out)    no direct binary property
  similarity^d                  no direct binary property

• Object Relations
Relations between two objects are represented by the directed graph edges. Relation detection varies, requiring "simple" information for some relations (e.g. 'to the right of') and complicated visual features for others (e.g. 'wearing'). We combine specific rule-based detection for some relations and a deep neural network for others.
- Rule-based relation classification: Based on spatial checks, using (when needed) morphological methods, depth estimation (Liu, Shen, Lin, & Reid, 2016), face detection (Mathias et al., 2014), face key-point detection (Zhu & Ramanan, 2012) and gaze estimation (Recasens, Khosla, Vondrick, & Torralba, 2015). Simplifications and compositions of relations are used, as well as commonsense knowledge (obtained by querying ConceptNet (Speer & Havasi, 2013)). A special type of relations are the comparison relations, sim<f> and diff<f>, which check similarity or difference of the function property f, respectively.

- Deep neural network classifier: Based on the DR-Net method (Dai, Zhang, & Lin, 2017) for relation predicate classification. This method, like other visual relation detectors, utilizes object detection. To avoid coupling relation detection with object detection, which would reduce the robustness of our system, and yet exploit object detection when possible, we added a layer trained to project a closeness measure based on the GloVe word embedding (Pennington, Socher, & Manning, 2014) and generate a representation for any object class. This way, object classes that were not trained for relation classification still have a representation projected onto the DR-Net object classes vector. We use the version trained for the 70 relations of the VRD dataset (Lu, Krishna, Bernstein, & Fei-Fei, 2016a). Since relations are also used as attention for object detection (Section 3.2.2), an inverse relation is matched to each relation when possible. This way, attention can be used for both directions of the relation.

Recursive Procedure

The final stage of answering the question is activating a recursive procedure that follows the graph nodes and edges, invokes the relevant basic procedures and integrates all the information to provide the answer. A basic scheme of the procedure is given in Figure 6 and in Algorithm 1.

Figure 6: A scheme of the recursive answering procedure (interacting with external knowledge and a working memory). At each step the current node (cur_node) is set and the objects are examined according to the node's requirements. On success, a new cur_node is set (according to a relation or the next global parent node) and the function is called again to handle the subgraph starting from it. The required visual elements are c: object class, p_i: an object property, f: a function property, g: a property of a set, r_i: a relation. The daughter object detection is activated only when none was detected in previous stages. Note that the estimated maps of depth and color names are calculated by the procedure according to need.

The first step is a preliminary object detection, carried out by applying instance segmentation to the image. Then, a recursive function (getGraphAnswer) is invoked for node handling (starting at a global parent node). It runs specific procedures that activate visual analyzers to check the requirements (properties, relations) and fetch the required information (function properties).
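The following self-contained sketch conveys our reading of this recursive procedure; the names, data layout and simplified quantifier handling are our assumptions, not the system's code:

```python
def satisfied(quant, n):
    # 'exists' needs at least one object; numeric quantifiers need at least n.
    return n >= 1 if quant == 'exists' else n >= int(quant)

def get_graph_answer(node, objs):
    # Keep detected objects matching the node's class and required properties.
    valid = [o for o in objs if o['cls'] == node['cls']
             and all(p in o['props'].values() for p in node['props'])]
    if not satisfied(node.get('quant', 'exists'), len(valid)):
        return 'no'
    # Recurse over relation edges: the daughter node is evaluated on objects
    # standing in the required relation with a validated object.
    for rel, daughter in node.get('edges', []):
        related = [o2 for o in valid for o2 in objs if rel(o, o2)]
        if get_graph_answer(daughter, related) == 'no':
            return 'no'
    # Answer a queried function property if present; otherwise confirm existence.
    q = node.get('query')
    return valid[0]['props'][q] if q else 'yes'

# "What color is the dog near the tree?" (toy detections and a toy relation)
near = lambda a, b: abs(a['x'] - b['x']) < 50
dets = [{'cls': 'dog', 'props': {'color': 'brown'}, 'x': 10},
        {'cls': 'tree', 'props': {}, 'x': 40}]
root = {'cls': 'dog', 'props': [], 'query': 'color',
        'edges': [(near, {'cls': 'tree', 'props': []})]}
print(get_graph_answer(root, dets))   # -> brown
```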
The retrieved objects that fulfill the requirements are coupled to the corresponding question objects, so that subsequent checks are carried out on the same objects. The number of required objects is set mainly according to the quantifiers. Once a node's checks are completed, the same function (getGraphAnswer) is invoked for the next node, which is determined according to a relation (graph edge) or the next global parent node. Once all nodes have been queried, the checks for the entire set are activated (if needed). Answers are provided by all basic procedures, and the final answer is set according to precedence (e.g. a queried property type has priority over binary answers).

Algorithm 1 (fragment): if success ∧ ¬empty(g) then answer = g(valid_objs); return answer. Its footnotes read: (a) according to object detection and previous checks; (b) according to quantifiers and other requirements; (c) either the daughter node or the next global parent node.

Working Memory

The global information gathered through the answering process is stored in a "Working Memory" component. It stores the calculations that may be required at several stages of the process. This information is calculated only if needed, and includes objects and their retrieved data, the depth map, the current node, the currently used objects and more.

Common Knowledge

When a person answers a visual question, prior common knowledge plays an important role. This includes connections between classes, famous brands and logos, knowing the roles and characteristics of objects and actions, anticipation of the future, knowing which details to ignore, and more. Some of the issues related to prior commonsense knowledge are addressed by our system. The main uses of prior knowledge are common relations in images (using the Visual Genome dataset (Krishna, Zhu, Groth, Johnson, Hata, Kravitz, Chen, Kalantidis, Li, Shamma, et al., 2017)) and commonsense knowledge about categories of objects, as well as connections between them (using ConceptNet (Speer & Havasi, 2013)).

• Visual Genome Dataset
The Visual Genome dataset (Krishna et al., 2017) contains (among many other annotations) objects and binary relations between them for a set of 108,077 images. Common relations involving specific objects are extracted from this dataset (on demand) and used as prior knowledge to assist detection. This allows refining the search area when an object is not found in the initial detection, as described below and demonstrated in Figure 7.

• ConceptNet
To obtain general commonsense knowledge we use the ConceptNet database (version 5) (Speer & Havasi, 2013). The source of information for this database is the internet (results from additional databases are also incorporated). It allows querying for concepts and relations between them of the form:

concept1 - relation → concept2 (e.g. horse - IsA → animal)

The query is performed by providing two members of the triplet [relation, concept1, concept2] and querying for the third. These common-knowledge relations provide complementary capabilities for answering 'real world' questions in which such common knowledge is assumed. We currently use ConceptNet mainly to extend the understanding of object classes (e.g. superordinate classes, similar classes), as described for example in Section 3.2.1; example questions involving connections between classes are given in Figure 5.
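Such a triplet query can be sketched as follows; this is our illustration, assuming the public ConceptNet 5 REST endpoint (api.conceptnet.io), and the helper name and weight threshold are ours:

```python
import requests

def conceptnet_query(rel, start=None, end=None, min_weight=1.0):
    """Provide two of [relation, concept1, concept2]; retrieve the third."""
    params = {'rel': f'/r/{rel}'}
    if start:
        params['start'] = f'/c/en/{start}'
    if end:
        params['end'] = f'/c/en/{end}'
    edges = requests.get('http://api.conceptnet.io/query',
                         params=params, timeout=10).json().get('edges', [])
    side = 'end' if start else 'start'      # the missing member of the triplet
    # Filtering by the edge weights helps discard weak, noisy assertions
    # (see the results analysis below).
    return {e[side]['label'] for e in edges if e.get('weight', 0) >= min_weight}

# e.g. candidate subcategories of 'animal', to be intersected with the
# basic classes list: conceptnet_query('IsA', end='animal')
```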
Guided Object Detection

A question may refer to specific objects in the image that are hard to detect (e.g. due to size, occlusion or clutter). When a requested object is not detected on the first attempt (searching the entire image), additional attempts are made. These attempts focus on regions where the object has a higher probability of being found. We use relations with detected objects as an attention source; two sources of such attention are used.

• Attention by common relations: The source of this attention is the Visual Genome dataset (Krishna et al., 2017), where objects and relations between them are annotated in images (see also Section 3.2.2). We seek the most common relation of the requested object (with an object from our known classes' set) and a corresponding relative location. Then, if the other object is found, we activate the object detector on the relevant area. An additional search area is obtained from the relation's spatial constraints. An example of using common relations as attention is given in Figure 7.

Figure 7: (a) Detection results on the entire image; (b) the bottle detected using common relations.

• Attention by question relations: The question itself may include relations that can assist detection by focusing on relevant areas. Since the processing follows the question's graph representation, relation edge directions are modified to point from detected to undetected objects. This allows using relations with a verified detected object as detection guidance for undetected objects, in the same manner described above. The use of this type of attention is demonstrated in Figure 8.

Figure 8: (a) Detection results on the entire image; (b) the clock detected using a question relation.

"Understanding" Capabilities

Having a system that breaks the visual answering task into real-world subtasks has many advantages. Beyond allowing modular modifications and improvements, the meaningful, compositional process is leveraged to provide information derived from the internal processing: failure reasons and verified alternatives are provided, as well as elaborations on detected objects.

Provide Alternatives/Corrections

When the logic expression representing the question is not valid for the given image, alternatives for the failed part are searched, such that a close expression may be validated and provided as a supplement to the answer. The checks include alternative objects, relations and properties, according to the following:

• For failed object classes, alternative classes are checked.
• Real properties are specified for objects with failed properties.
• For failed relations, alternative relations are checked.
• Additional attempts are made with close person-subordinate classes (e.g. when failing to classify a person as a woman, other sub-person classes are checked).

Examples are given in Figure 9 (note that some include multiple rounds of attempts).

Answer Elaboration

During the answering process, related information may be accumulated while verifying the logical expression representing the question. This information is provided as part of the answer, explaining and elaborating on it. The following supplements are included:

• If object detection was by a related class (e.g. a synonym, parts of a group, subordinate classes), this is specified in the answer (including the numbers of each subclass).
• The hint relation used as attention for object detection is indicated (if used).
• If queried function properties (e.g. color) differ between the relevant objects, the property of each object is specified.

Some examples can be seen in Figure 10.

Integration in Related Applications

As the answering process accumulates real "knowledge" related to the image, this knowledge may be saved and used for extended applications. One of them may be a discourse on the image, where follow-up questions are answered. An additional application may be the correction of an image caption (Bernardi, Cakici, Elliott, Erdem, Erdem, Ikizler-Cinbis, Keller, Muscat, Plank, et al., 2016), where the caption is transformed into a question and the answer may verify or correct it (as described in Section 3.3.1). An example of image caption correction is given in Figure 11.

Figure 11: Example of image caption correction. The image caption is the result of the NeuralTalk model (Karpathy & Fei-Fei, 2015). Caption: "a man sitting on a bench with a large umbrella". Q: Is there a man on a bench with a large umbrella? A: "There is no bench. There is no man. There is no umbrella. There is nothing on a object." Existing alternative relation: 'boat in front of a bird'.

Results Analysis

Our system is currently limited by the visual elements it is able to recognize. It is not trained or optimized for any visual question answering dataset. Since our goals include question "understanding" and modularity, we first focus on basic capabilities, to be developed into more comprehensive ones with time. We checked our system on various aspects and specific examples and provide an analysis below. We examined the graph representation of a random set of questions, to assess its current status as well as its potential, followed by the performance of question answering.

Question Representation

First, we check the representation capabilities of our system. To do that, we randomly sampled 100 questions from the VQA dataset (Antol et al., 2015) and checked their graph representation. The results are given in Table 2.

Table 2: Representation results on a random set of 100 questions from the VQA dataset (Antol et al., 2015).

                         Current   Potential
  Fit                      72        100
  No fit - Vocabulary      12         -
  No fit - Other           14         -
  Unparsed                  2         -

The vocabulary 'no fit' cases are misrepresentations due to failures in recognizing phrases. 'Unparsed' refers to questions that START could not parse. The 'Potential' column represents questions that may be represented by the graph.

It is not always clear whether a representation is accurate, as in some cases a representation may fit the language structure but be less accurate with respect to the actual meaning. For example, a simple representation of the question "Is this picture in focus?" may be:

[c: picture] --in--> [c: focus]

However, 'in focus' represents a single element and should be recognized as such. This demonstrates the importance of vocabulary knowledge. In another example, the following questions have a similar structure:

Are they all wearing the same color?
Are they all wearing the same pants?

However, 'color' and 'pants' belong to two different types of visual elements, and hence the questions should have different representations.

Sometimes minor phrasing changes have a substantial effect on parsing and representation. The variation in phrasing may also include grammar inaccuracies and typos. This sensitivity reduces the consistency of the representation and adds noise and inaccuracies to the system. For the two "Unparsed" questions in our representation test, simple corrections led to successes. The corrections are (original → corrected):

What season do these toy's represent? → What season do these toys represent?
Where are these items at? → Where are these items?

There are other cases where a minor phrasing change corrects the representation, as can be seen in Figure 12. An additional parsing limitation is that there is no indication of the coordinating conjunction ('or', 'and') between phrases; hence both are treated as 'and'. As mentioned before, since the questions are free-form, they may involve slang, typos or wrong grammar. The question's meaning may even be unclear. For example, the question 'How is the table design?' may be the correct intended question; however, it may be that the intended question is "How is the table designed?".

All the questions sampled in this analysis can potentially be represented using the suggested graph representation. This demonstrates that, in general, our scheme has very high representation capability. However, some questions require the identification of complicated properties and related terms, e.g. "Is the refrigerator capacity greater than 22 cubic feet?" (similar comparisons of a property's quantity already exist for age). The issue of adding description levels arises for complicated properties that may have a natural representation using properties of properties, e.g.:

Is this the normal use for the object holding the flowers?
How is the table designed?
Where do these animals originate?

In some cases it may be reasonable to alter the exact meaning into a more manageable one, e.g.:

Does this truck have all of its original parts? → Are all the parts of this truck original?

In other checks performed, there were (very few) cases where relations between multiple objects of different types were required (e.g. 'Does this image contain more mountain, sky or grass?'). Support for such cases may be added in the future.

Question Answering

Our current implementation is obviously limited by the number of recognizable visual elements, queried both explicitly and implicitly. It does not include any training on, or adaptation to, any visual question answering dataset. Also, some implementations may be incomplete or arbitrary, e.g. 'location', whose implementation is relative to the image. Answers are, however, mostly self-aware: when running on the VQA dataset (Antol et al., 2015), most answers indicate the unfamiliar visual element that prevents answering (e.g. "Unknown class: linoleum"). Examples with proper answers are shown in Figure 13; they include the use of ConceptNet (Speer & Havasi, 2013) in some cases, to obtain prior knowledge regarding related classes (e.g. subclasses) and other commonsense knowledge. Examples with wrong answers are shown in Figure 14. The reasons for failure include detection failures, unknown visual elements, missing prior knowledge and other assumptions.

Further examination of the results provides some insights regarding additional sources of failure. One element that adds "noise" to the system is the use of an internet-based external knowledge database. While providing essential information, the retrieved data is also prone to errors and yields detection attempts for wrong objects. This is demonstrated by the results of querying 'carpet' with the relation IsA, which imply that the following may be a carpet: 'Barack Obama', 'book', 'monitor', 'a plastic bag', 'a glass of water', etc. Another example of such an error is the retrieved relation 'chair IsA door'. A partial solution is using the associated weights that indicate the strength of each result. Some results may be misleading as they may refer to different meanings of the queried words.
Following are examples of such results:

'train IsA control'
'monitor IsA track'
'screen door IsA door'

In some cases, the intersection of the retrieved classes with the recognizable objects is so small that it may cause a wrong conclusion based on a very superficial check. An example of this is the question "Are these toys?", where the recognizable retrieved classes are 'bicycle', 'skateboard', 'frisbee', 'kite' and 'motorcycle'; hence 'no' is answered if none of them is detected.

An interesting observation regarding the estimation of some visual elements concerns the generation of color-name maps (Van De Weijer, Schmid, & Verbeek, 2007), which is based on supervised learning (11 optional color names per pixel). When object colors are required, the map is generated for the object's area in the image, and the answer is provided based on the dominant colors. Retrieving an object's color may appear to be a trivial task, as the intensities of the original RGB image channels should provide the exact color of each pixel. However, such methods fail to obtain the perceived color, as it is only weakly related to the levels of the actual RGB channels. Hence, learning methods are incorporated to address this problem, and still there are many inaccuracies. In addition to these inaccuracies, the required process for obtaining the perceived color of an object is not consistent, as can be seen in the examples of Figure 15.

Figure 15: Demonstration of perceived-color challenges. Each column corresponds to one example: (1) Q: What color is the horse? A: grey; (2) Q: What color is the bus? A: black; (3) Q: Is the man white? A: yes. For each example, the top image is the input image with markings of the relevant results, and the bottom image is a map of color names corresponding to the required object. The first column demonstrates classification errors in the generated map of color names due to shading. The second column requires ignoring the window and wheel areas for an accurate answer. For the example in the third column, only a specific area should be checked, and the colors should correspond to different names. [Object detection is based on faster R-CNN + DeepLab.]

As previously mentioned, the parser's sensitivity to phrasing, and other issues such as its indifference to the type of phrase coordinator ('and', 'or'), cause representation failures or misrepresentations, which result in an inability to provide a correct answer. For example, when 'or' is used (e.g. "Are the flowers yellow or white?"), the answer will always be 'no', as both options are required to be true; hence, we get an answer that is irrelevant to the question. Questions may also be misinterpreted due to multiple meanings of words and phrases, or due to subtle differences. As previously discussed, this mainly affects the use of the external knowledge database, where a wide range of concepts may be used, which may lead to an unclear meaning of a concept (e.g. 'train': vehicle vs. learn; 'monitor': screen vs. supervise). Such confusions happen for the question itself as well. An example of a misinterpreted question is "What is the table/bus number?", which is interpreted as "What is the number of tables/buses?"
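Returning to the color-naming step discussed above: the dominant-color computation itself is straightforward, as the toy sketch below (our illustration, using the 11 color names of Van De Weijer et al. (2007)) shows. The hard part, as Figure 15 demonstrates, is deciding which pixels to include and which names a perceived color should map to.

```python
import numpy as np

COLOR_NAMES = ['black', 'blue', 'brown', 'grey', 'green', 'orange',
               'pink', 'purple', 'red', 'white', 'yellow']

def object_color(color_map: np.ndarray, mask: np.ndarray) -> str:
    """color_map: HxW array of color-name indices; mask: HxW boolean object area."""
    counts = np.bincount(color_map[mask], minlength=len(COLOR_NAMES))
    return COLOR_NAMES[int(np.argmax(counts))]

# Toy example: a 2x2 "image" whose masked pixels are mostly grey.
cmap = np.array([[3, 3], [9, 3]])          # indices into COLOR_NAMES
mask = np.ones((2, 2), dtype=bool)
print(object_color(cmap, mask))            # -> grey
```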
Currently, other than enhancing object detection through attention from question relations, details from the question are not used as hints for the correctness of expressions. A case where such information could be further utilized is when the query is for a property of an object: in this case there may be a prior assumption, or an increased probability, that such an object exists. Of course, an automatic assumption of existence is not desirable; however, reduced classification thresholds, additional attempts using hints and other measures may be utilized to reflect the higher probability of the existence of such an object. For example, given the question "What is the age of the man?", the probability that a man indeed exists in the image should rise, and refuting this assumption should be done only when the evidence is substantial.

Discussion and Conclusions

We have presented an approach to visual question answering that composes an answering procedure based on the 'abstract' structure of the query. We exploit the compositional nature of the question and represent it as a directed graph, with objects represented as nodes and relations as edges. Each basic component of this graph representation is mapped to a dedicated basic procedure. The collection of these basic procedures is put together, along with additional required processes, into a complex procedure for the entire query. This procedure incorporates query details and intermediate results and stores them in the graph nodes and in a working memory module. The stored information completes the guidance of the procedure and allows handling different types of visual elements. Question relations are used as an attention source to enhance object detection. Querying for external common information is also handled by the procedure, in order to supply the prior knowledge needed to answer the question.

Breaking the answering process into basic meaningful components, corresponding to basic logic patterns, enables awareness at each step of the accomplished and unaccomplished parts of the task. This includes recognizing and reporting failures and limitations, which in many cases are corrected and provided with valid alternatives. Elaborations on the answers are provided according to the stored information. Since the building blocks include simple real-world detectors, the system is modular and its improvement is not limited.

Human abilities motivate us to examine and handle some complicated attributes that are addressed naturally by humans, even though they may hardly appear in real queries. These attributes, such as 'odd man out', pose representation challenges that require extending the natural graph representation. Currently, a specific configuration is created to represent these attributes; future upgrades may allow handling them more smoothly.

Evaluation of the representation capabilities demonstrated that, even though our scheme can potentially represent practically all queries, the current state of the system is limited. The observed problems include limitations in vocabulary identification, sensitivity to phrasing and cases of grammatical similarity between different elements (e.g. 'wearing the same color' vs. 'wearing the same pants'). Additionally, some rare representation limitations exist, such as relations between more than two objects of different classes.
Even though the recognition abilities are currently limited by the scope of the existing detectors, the system is self-aware and mostly replies by specifying its limitation (which may trigger the addition of the desired detectors to the system). The representation limitations discussed in Section 4.1 are a fundamental source of failures, on top of the accumulated chances of errors in the detectors used. Our system does not exploit any language bias of the question; the answer is provided exclusively by the procedure evaluating the logic representation of the question. However, improvement is ongoing, as detectors keep improving and their scope keeps growing.

Current approaches to visual question answering mostly use end-to-end schemes that are very different from our approach. Although some methods include adaptive aspects, the optimization process is more likely to exploit language bias than to acquire the complex mechanisms required for proper answering. These methods maximize statistical results but are likely to fail on subtle, yet meaningful, cases. This fits the analysis of current models, which demonstrates the tendency to utilize only part of the question, provide the same answers for different images and fail on novel forms. A combination of the UnCoRd system and an end-to-end model may be beneficial in some cases, for example enhancing UnCoRd's elaborations with an "intuitive" answer (such as for unknown visual elements).

We have integrated and examined various aspects of answering questions on images using our answering system. Much more research and investigation is required for all these aspects, as well as for others. Future research will include learning the representation mapping and making it more robust, further investigating and improving the visual-element analyzers (e.g. combining in the object type, when possible, for property detection), and more.
detection capabilities of specific animals (e.g. horse, dog, cat) will not suffice, since the answer requires the general notion of 'animal' and which particular instances belong to it. However, a query to an external knowledge database (e.g. ConceptNet (Speer & Havasi, 2013)), may provide subcategories of 'animal'. Consequently, specific detectors can be activated to seek these specific recognizable animal types. These knowledge databases are mostly based on information extracted from the internet and include commonsense information about the world. Querying such a database allows the completion of missing information such as semantic connections between object's classes (e.g. synonym, superordinate, subordinate) as in the example above, the typical usage of different objects, and more. Integrating this type of information is important when answering questions asked by humans, as it is common knowledge and treated as universally available. UnCoRd Answering System Approach Overview Our Understand, Compose and Respond (UnCoRd) approach is based on the following observations: • There is a representation of the question in terms of objects, their classes, properties and relations, including quantifiers and logical connectives as well as non logical symbols: predicates and functions. The representation has an 'abstract' structure, i.e. independent of the particular objects, classes, properties and relations that are represented as parameters. A single abstract representation can represent many different concrete questions. Our main thesis is that the procedure to be applied for obtaining the answer depends on the abstract structure of the question and not the particular elements. Hence, it is important to use the right kind of abstract representation, which will allow this mapping to procedures (where all questions with the same abstract structure require the same procedure). A proper parsing and mapping of the language question to its abstract representation should be obtained to use this method. • The question has a compositional structure: there are basic components put together in particular ways. The abstract representations are composed from 'basic patterns' and methods for putting them together into more complex compound structures. This compound structure determines how the procedures are constructed. There are basic procedures for the basic patterns, and methods of composing from them a more complex procedure to deal with the compound abstract structures. In other words, we get a procedure for the entire question by having procedures for the basic components and a procedure to put them together. We would like our system to meet the following criteria: -Answer correctly and efficiently. -"Understanding" the question, in the sense of: • Breaking the answering procedure into a set of simple visual tasks. • Identify which tasks it can perform and what are its limitations. Indicate if something is missing or unknown. • Ability to explain and reason -elaboration of the answering process using the image and intermediate results, including error correction and alternative suggestion. -Modularity and robustness: handling questions and image categories of various types, not limited by a training set. -Though not using a human psychology model, the ability to handle questions that people answer easily (and may be "hard" for computers) is desired, e.g. 'odd man out'. A question can be seen as a statement about the image that the answering system tries to make true or refute. 
Making the statement true requires an assignment of the particular classes, properties and relations to the image. Their identification in the image is based on pre-trained classifiers and detectors. The recognizable set is modular and can be increased by adding new detectors or switching to stronger ones. Logical operations will be used to generate logic sentences with a formulation that fits first order logic (including functions) with some extensions. The answering procedure is generated according to the input question in the following manner: Question → Question representation → procedure A proper representation is fundamental to allow a successful mapping of the question into the answering routine. This representation should be concise and support generating the same procedure when applied to similar structured questions with different choices of classes, properties and relations. To obtain that, the visual elements (object classes, object properties and object relations) would be parameters, integrated using logic operations (e.g. ∧, ∨) and quantifiers (e.g. ∀, ∃, ∃5 ) into basic logic patterns corresponding to specific structures. These patterns are combined and merged to compose a more complicated structures that create the representation of the question and can be mapped to the answering procedure. We use a directed graph to describe the question which is a natural choice in our case and allows diverse compositions of substructures. In this graph each node represents an object entity and its description (e.g. a list of required properties). These nodes are linked by the graph edges which represents relation between objects. The graph is divided into small segments that relate either to one node and correspond to part of its information (e.g. object class and one property) or to an edge and the two classes of the nodes it connects. Each of these graph segments matches a basic pattern that is handled by a corresponding procedure, using the specific visual elements of this substructure. The graph representation allows to decompose the answering procedure into a set of elementary procedures and put them together to generate a modular answering procedure. The elementary procedures invoke visual analyzers, which are the basic modules of the process. Each class, property and relation, has a visual analyzer to establish it. More general visual operations that serve more than one particular visual element (e.g. depth estimation) are activated according to need and their results are available to all basic procedures. The overall routine is obtained by applying these procedures and operations at an appropriate order, to appropriate objects, where the amount of required assignments per object are set by the quantifier of the corresponding node. The visual elements may have 'types', such as classes that can be basic or subordinate (i.e. basic with additional properties), properties that may be comparative (e.g. 'older than') and relations which can be symmetric (e.g. 'beside') or not. The entire process of answering a visual question is described in Figure 1. It starts by receiving the input language question and mapping it to a graph representation. The next stage is running a recursive procedure that follows the graph and invokes the procedures associated with the basic structures, using the specific visual elements as inputs. After the results are obtained, the answer is returned. Questions with a simple structure (e.g. 
"Is there a red car?") can be represented by matching one specific pattern to a question. This covers a wide range of questions, however by allowing a composition of simple patterns, into a more complicated structures, the quantity of supported questions is raised substantially (from ∼60% to ∼90%, according to an analysis of 542 questions on images asked freely by people and using a set of 12 patterns). This composition is done using a graph. For example in the question "Is there a red car to the right of the yellow bus? " there are two parts with a simple structure "Is there an object of class c with a property p?" connected by the relation "to the right of", which corresponds to another simple structure: "Is there an object of class c 1 and an object of class c 2 that have the relation r between them?". The graph representing the question is: Map into a graph representation question Run a recursive procedure following the graph image Answer When a specific question is given, the question is parsed and mapped to a directed graph, where the visual elements are its parameters. This graph corresponds to a logic expression that is composed of simple expressions, that may share the object variables. Some of the parametric visual elements are variables that require estimation based on the image. Once the variables are estimated, the logic expression is evaluated (as true or false) and the query is answered accordingly. The formulation of the logic expression fit first order logic (including functions) with some extensions (e.g. a variable-sized set of arguments or outputs for some functions). Each simple logic expression is related to a basic pattern, which corresponds to a basic procedure. The basic procedure obtains an answer to the expression by activating visual analyzers according to the types of object classes, properties and relations (which are inputs to the basic procedure). Such a system will have the ability of constant improvement by adding detectors for new classes, properties and relations according to requirements. Similar characteristics are also evident in human learning, where new learned details are integrated into the existing mechanism of world perception. The UnCoRd system is implemented following the approach described above. It answers visual questions using a composed process that follows the graph representation of the question, activating real world visual analyzers. This system is described in the following section. System Description Mapping to a Directed Graph One of the system's main tasks is to translate the query, given in natural language, into an abstract representation which will then be mapped into a procedure (the first step, described in Figure 1). We first use the START parser (Katz, 1988(Katz, , 1997 The generated set of ternary expressions is used for the generation of a graph representation, where nodes represent objects and edges represent relations between objects. The node include all of the object's requirements according to the question, mainly its class, properties that may be required (e.g. 'red') or queried (e.g. 'what color') and quantifiers that are not the default existence quantifier (e.g. 'all', 'two'). The directed edges correspond to relations between objects where the edge direction implies the direction of relation. Each edge is also assigned a direction of progress for the answering procedure. 
This direction of progress is instantiated as the relation direction, but may be modified according to initial object detection to enhance detection abilities (see Section 3.2.2 for details). An example of mapping a question to a directed graph can be seen in Figure 2.

The graph representation is used to fit an answering procedure to each particular question. Fragments of information are extracted from subgraphs that include up to two connected nodes. A graph fragment includes a subset of elements (classes, properties, property functions and relations) that has a mapping to one of a few basic logic patterns. This mapping, combined with the particular accompanying visual elements, defines a logic expression that selects and guides a component of the answering procedure. For example, a fragment consisting of a node's class and a required property is mapped to the pattern ∃X (c_X(X) ∧ p_X(X)). The specific class c_X and property p_X define the particular logic expression that should be checked. Such mappings are done for the entire graph, where each fragment of it is mapped into a basic logic pattern and specific visual elements. These simple logic expressions, joined using logic operations, constitute one logic expression that represents the entire question. Each basic logic pattern has a dedicated procedure that performs the evaluation required to confirm or refute it, using visual analysis of the image. The procedure provides an answer according to an accompanying query.

We use the following notations for describing the basic logic patterns:

X, Y - Objects.
c(X) - A class, evaluated for object X (as True/False), e.g. 'person', 'boy', 'bird', 'train'.
p(X) - A predicate property (predicate of arity 1), evaluated for object X (as True/False), e.g. 'blue', 'male', 'big'.
f(X) - A property function. Returns properties of a specific type, e.g. 'color', 'age', 'size'.
g(S_t) - A global property function for a subset of objects of the same class: S_t ⊂ {X_t : c_t(X_t)}. Returns properties of a specific type, e.g. 'quantity', 'difference', 'similarity'.
p_f - A predicate property, constrained to the possible return values of f(X) (e.g. blue = color(X), male = gender(X), big = size(X)).
a_g - One of the possible values returned by g(X) (e.g. 3 = quantity(S_t), where S_t = {X_t : c_t(X_t)}).
r(X, Y) - A relation between objects X and Y (predicate of arity 2), e.g. X below Y → below(X, Y), and in the same manner looking_at(X, Y), near(X, Y).
?- - A query, the requested answer.

Objects (or other elements) starting with a capital letter (e.g. X, Y) are unknown elements (variables) that should be estimated according to the image. The particular patterns used were selected since they provide a small, simple and basic set that can naturally compose the logic representation of the question. This small set provides high flexibility in composing a wide variety of logic expressions using the different visual elements. A conducted survey and other checks showed that this set is empirically sufficient to represent the set of analyzed queries. Following are the basic logic patterns that are mapped to basic procedures in the question answering process (followed by their corresponding graph fragment). The ∃ quantifier may be replaced by other quantifiers (e.g. ∀, ∃2).

• Property Existence: ∃X (c_X(X) ∧ p_X(X)); ?-∃/c_X [graph fragment — c: c_X, p: p_X]
Examples: 'Is there a brown bear?' (query for validity with a specific object class) 'What is the purple object?'
(unknown and queried object class). An example of a modification due to a quantifier parameter: ∀X (c_X(X) ∧ p_X(X)); ?-∃, e.g. 'Are all bears brown?'

• Function Property: ∃X (c_X(X)), f(X) = P_f; ?-P_f [graph fragment — c: c_X, f: P_f]
Example: 'what color is the chair?'

• Property of a Set: ∀X_t ∃S_t (S_t = {X_t : c_t(X_t)}), g(S_t) = A_g; ?-A_g [graph fragment — c: c_Xt, g: A_g]
Example: 'How many planes are in the photo?'

• Object Existence: ∃X (c_X(X)); ?-∃/c [graph fragment — c: c_X]
Examples: 'Is this a dog?' 'What is it?'

• Relation Existence: ∃X ∃Y (c_X(X) ∧ c_Y(Y) ∧ r(X, Y)); ?-∃/c_X/c_Y [graph fragment — c: c_X, r, c: c_Y]
Examples: 'Is the man looking at the children?' (validity query) 'What is on top of the television?' (query for one of the classes)

The combination and composition of these patterns have powerful representation capabilities and provide a mapping to a set of basic procedures that constitute the full answering procedure. Composing the procedure from "real-world" visual tasks allows both the use of existing detectors (including separate improvement of each task) and explaining, elaborating and correcting answers.

As mentioned above, modified quantifiers may be added to nodes according to the number of objects required in the question (see Figure 2). These quantifiers may be either numbers (e.g. 'Are there three guys?') or 'all' for an entire group of objects. The group setting may depend on subtle phrasing differences, which affect the answering procedure's flow and results, as can be seen in Figure 3.

[Figure 3: The question in (a) requires all 'dog' objects to be both black and small; hence the first dog that is not black renders the logic phrase false and the answer is "no" (the failed object and reason are marked in the image). The question in (b) requires only that the black dogs be small; hence all dogs are checked for color, and the size of the black ones is verified to be small. Since it is true, the answer is "yes".]

The graph naturally represents objects, their properties and binary connections between them. Though this covers a wide variety of questions, using global image information and some extensions to the basic graph increases the support for additional attributes. A property of a group is an example of such an extension. Properties that use global information are 'closest' and 'size' (which is relative to other objects). Specific implementations for complicated attributes may be added as dedicated tasks or by preprocessing, breaking them into graph-plausible segments. An example of such an implementation in our system is 'odd man out' (e.g. "How is one cow not like the others?"), where the relations 'diff <f>' and 'sim <f>' (for different and similar values of property f, respectively) are used to check and compare the properties of objects. An example is given in Figure 4. The 'similarity' attribute (which queries for a property that is similar for all objects in the group) is handled in the same manner.

The main building blocks of the question representation are the visual elements: object classes, object properties and object relations.

• Object Classes Object class is the category of object required by the question. It does not necessarily match the used object detector. To enlarge the coverage of supported object classes, we define a few categories of object classes and handle them accordingly.

-Basic Classes These are the classes specifically covered by the main multi-class object detector.
We currently use instance segmentation by Mask R-CNN (He, Gkioxari, Dollár, & Girshick, 2017) for the 80 classes of the COCO dataset (Lin, Maire, Belongie, Hays, Perona, Ramanan, Dollár, & Zitnick, 2014). Having the segmented object is very useful, as this accuracy is required in many cases (e.g. for the relation 'touch'). Other detection methods are also integrated and may be used instead.

[Figure 4: 'Odd man out' is a complicated attribute that requires special treatment and mapping to the graph representation. Bounding boxes for the birds with the common property and for the 'odd man out' bird are marked (in red and yellow, respectively). Object detection is based on faster R-CNN + DeepLab.]

-Subordinate Classes Person subcategories are handled by applying face detection (Mathias, Benenson, Pedersoli, & Van Gool, 2014) to the detected 'person' objects, followed by an age and gender classifier (Levi & Hassner, 2015) on the results (an example is demonstrated in Figure 5).

-Superordinate Classes Each category of a superordinate class includes a few basic classes (for example furniture, animal). To check this, we use ConceptNet (Speer & Havasi, 2013), a commonsense knowledge database based on data extracted from the internet (see also Section 3.2.2). It includes concepts and predefined relations between them. We use the relations 'InstanceOf', 'IsA', 'MadeOf' and 'PartOf' with the requested class, and keep the results that fit our basic classes list. The detected objects of these classes are retrieved and used for the rest of the procedure. Also, if the query is for the type of the requested superordinate class, the name of the detected basic class is given as the answer (see Figure 5 for an example).

-Similar Classes A class that has a synonym or a very similar class in the basic classes set may also be searched for as this corresponding class. These correspondences are extracted using the 'Synonym' and 'SimilarTo' relations in ConceptNet.

-A Group of Objects To identify a class that represents a group of objects (possibly of different optional basic classes), the ConceptNet relation 'MemberOf' is used (e.g. flock → bird, sheep; fleet → bus, ship...). A quantity requirement of at least two objects is added (demonstrated in Figure 5).

-Sub Objects Some objects are parts of 'known' objects and can be extracted according to the detection of the host object and additional processing. We apply human pose estimation (Chen & Yuille, 2014) to obtain the different body parts when requested (e.g. 'left/right hand', 'left/right foot'). Relative areas of objects (e.g. 'the middle of the bus') are also treated as sub objects. In these cases left and right are different from other uses of left/right as a location property (e.g. 'the left box'). A 'shirt' is also treated as a sub object, corresponding to the torso area provided by the human pose estimation results (an example is given in Figure 5).

• Object Properties Objects have various visual properties. We differentiate between binary properties (e.g. 'red') and function properties that return the property of the object from a specific category (e.g. 'color'). Table 1 describes the used set of properties, divided (most of them) into groups of function properties.

• Object Relations Relations between two objects are represented by the directed graph edges. Detection of relations varies and requires "simple" information for some (e.g. 'to the right of') and complicated visual features for others (e.g. 'wearing'). We combine specific rule-based detection for some relations and a deep neural network for others.
-Rule-based relation classification: Based on spatial checks, using (when needed) morphological methods, depth estimation (Liu, Shen, Lin, & Reid, 2016), face detection (Mathias et al., 2014), face key-point detection (Zhu & Ramanan, 2012) and gaze estimation (Recasens, Khosla, Vondrick, & Torralba, 2015).

Table 1: The set of object properties, divided (mostly) into groups of function properties.
Properties' Group            | Predicate Properties
color/colors                 | 11 colors (e.g. 'black', 'blue', ...)
age^a                        | ages and age inequalities (based on 8 age groups)
gender^a                     | female/male
location^b (e.g. where)      | spatial image location (e.g. 'bottom (of the image)')
relative location^bc         | location relative to other objects (e.g. 'the left dog')
type                         | subclass (when available)
size                         | 'small', 'big', 'average'
quantity^d                   | number of objects
difference^d (odd man out)   | no direct binary property
similarity^d                 | no direct binary property

Simplifications and compositions of relations are used, as well as commonsense knowledge (obtained by querying ConceptNet (Speer & Havasi, 2013)). A special type of relations are the comparison relations, sim <f> and diff <f>, which check similarity or difference of the function property f, respectively.

-Deep neural network classifier: Based on the DR-Net method (Dai, Zhang, & Lin, 2017) for relation predicate classification. This method, like other visual relation detectors, utilizes object detection. To avoid coupling relation detection with object detection, which would reduce the robustness of our system, and yet exploit object detection when possible, we've added a layer that was trained to project a closeness measure based on the GloVe word embedding (Pennington, Socher, & Manning, 2014) and generate a representation for any object class. This way, object classes that were not trained for the relation classification still have a representation projected on the DR-Net object classes vector. We use the version trained for the 70 relations of the VRD dataset (Lu, Krishna, Bernstein, & Fei-Fei, 2016a). Since relations are also used as attention for object detection (Section 3.2.2), inverse relations are matched to each relation when possible. This way, attention can be used for both directions of the relation.

Recursive Procedure

The final stage of answering the question is activating a recursive procedure that follows the graph nodes and edges, invokes the relevant basic procedures and integrates all the information to provide the answer. A basic scheme of the procedure is given in Figure 6 and in Algorithm 1.

[Figure 6: A scheme of the recursive answering procedure, with access to external knowledge and a working memory. At each step the current node (cur_node) is set and the objects are examined according to the node's requirements. On success, a new cur_node is set (according to a relation or the next global parent node) and the function is called again to handle the subgraph starting from it. The required visual elements: c: object class, p_i: an object property, f: function property, g: property of a set, r_i: a relation. Daughter object detection is activated only when none was detected in previous stages. The estimated maps of depth and color names are calculated by the procedure according to need.]

The first step is a preliminary object detection, carried out by applying instance segmentation to the image. Then, a recursive function (getGraphAnswer) is invoked for node handling (starting at a global parent node). It runs specific procedures that activate visual analyzers to check the requirements (properties, relations) and fetch required information (function property).
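As an illustration of how such a traversal can be organized, the following self-contained Python sketch (our simplification for exposition — not the system's actual getGraphAnswer, whose full logic is given in Algorithm 1) follows nodes and edges recursively; the three stub analyzers stand in for the real detectors and classifiers:

```python
# Minimal sketch of a recursive answering procedure over the question graph.
# An "image" here is a toy dict listing objects and relation triples.

def detect_objects(image, cls):          # stub: multi-class object detector
    return [o for o in image["objects"] if o["class"] == cls]

def has_property(image, obj, prop):      # stub: property analyzer
    return prop in obj.get("properties", [])

def check_relation(image, o1, o2, rel):  # stub: relation analyzer
    return (o1["id"], rel, o2["id"]) in image.get("relations", [])

def get_graph_answer(node, image, memory, candidates=None):
    """Validate the subgraph rooted at `node`; return (success, valid_objects)."""
    # Working memory: cache detections so each class is detected only once.
    if node["class"] not in memory:
        memory[node["class"]] = detect_objects(image, node["class"])
    pool = memory[node["class"]] if candidates is None else candidates

    # Keep only the objects that satisfy all of the node's properties.
    valid = [o for o in pool
             if all(has_property(image, o, p)
                    for p in node.get("properties", []))]

    # Quantifier check: default is existence; 'all' and numbers differ.
    q = node.get("quantifier", "exists")
    if q == "all":
        success = len(pool) > 0 and len(valid) == len(pool)
    elif isinstance(q, int):
        success = len(valid) >= q
    else:
        success = len(valid) > 0
    if not success:
        return False, []

    # Recurse along each relation edge, binding the daughter node's
    # candidate pool to objects related to some valid object here.
    for edge in node.get("edges", []):
        related = [o2 for o2 in detect_objects(image, edge["target"]["class"])
                   if any(check_relation(image, o1, o2, edge["relation"])
                          for o1 in valid)]
        ok, _ = get_graph_answer(edge["target"], image, memory, related)
        if not ok:
            return False, []
    return True, valid
```

Applied to a graph like the one sketched in the previous section and a toy image description, get_graph_answer succeeds exactly when an assignment of image objects satisfies the node requirements, quantifiers and relations.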
The retrieved objects that fulfill the requirements are coupled to the corresponding question objects, so that subsequent checks are held on the same objects. The number of required objects is set mainly according to the quantifiers. Once a node's checks are completed, the same function (getGraphAnswer) is invoked for the next node, which is determined according to a relation (graph edge) or the next global parent node. Once all nodes are queried, the checks for an entire set are activated (if needed). Answers are provided by all basic procedures, and the final answer is set according to precedence (e.g. a queried property type has priority over binary answers).

[Algorithm 1 (fragment): if success ∧ ¬empty(g) then answer = g(valid_objs); return answer. Notes: (a) according to object detection and previous checks; (b) according to quantifiers and other requirements; (c) either to a daughter node or the next global parent node.]

Working Memory

The global information gathered through the answering process is stored in a "Working Memory" component. It stores the calculations that may be required at several stages of the process. This information is calculated only if needed and includes objects and their retrieved data, the depth map, the current node, the currently used objects and more.

Common Knowledge

When a person answers a visual question, prior common knowledge plays an important role. This includes connections between classes, famous brands and logos, knowing the role and characteristics of objects and actions, anticipation of the future, knowing which details to ignore, and more. Some of the issues related to prior commonsense knowledge are addressed by our system. The main uses of prior knowledge are common relations in images (using the Visual Genome dataset (Krishna, Zhu, Groth, Johnson, Hata, Kravitz, Chen, Kalantidis, Li, Shamma, et al., 2017)) and commonsense knowledge on categories of objects, as well as connections between them (using ConceptNet (Speer & Havasi, 2013)).

• Visual Genome Dataset The Visual Genome dataset (Krishna et al., 2017) contains (among many other annotations) objects and binary relations between them for a set of 108,077 images. Common relations involving specific objects are extracted from this dataset (on demand) and used as prior knowledge to assist detection. This allows refining the search area when an object is not found in the initial detection, as described below and demonstrated in Figure 7.

• ConceptNet To obtain general commonsense knowledge we use the ConceptNet database (version 5) (Speer & Havasi, 2013). The source of information for this database is the internet (results from additional databases are also incorporated). It allows querying for concepts and relations between them of the form: concept1 -relation→ concept2 (e.g. horse -IsA→ animal). The query is performed by providing two of the triplet [relation, concept1, concept2] and querying for the third. These common knowledge relations provide complementary capabilities for answering 'real world' questions in which such common knowledge is assumed. We currently use ConceptNet mainly to extend the understanding of objects' classes (e.g. superordinate classes, similar classes), as described for example in Section 3.2.1. Examples of questions involving connections between classes are given in Figure 5.
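For illustration, such a query can be issued against the public ConceptNet REST endpoint (api.conceptnet.io). The sketch below is our own illustration, not the system's code: the weight threshold and the BASIC_CLASSES set are assumptions for the example, and the response fields follow the publicly documented API:

```python
import requests

# Example subset of the detector's basic classes (an assumption here).
BASIC_CLASSES = {"dog", "cat", "horse", "bird", "sheep", "cow"}

def recognizable_subclasses(superordinate, min_weight=1.0):
    """Resolve e.g. 'animal' to recognizable basic classes via IsA edges."""
    resp = requests.get("http://api.conceptnet.io/query",
                        params={"end": f"/c/en/{superordinate}",
                                "rel": "/r/IsA", "limit": 200})
    edges = resp.json().get("edges", [])
    # Keep sufficiently weighted results; weights partially filter out
    # the noisy retrievals discussed in the results analysis below.
    found = {e["start"]["label"].lower() for e in edges
             if e.get("weight", 0) >= min_weight}
    return found & BASIC_CLASSES  # keep only classes we can actually detect

# e.g. recognizable_subclasses("animal") -> a subset of BASIC_CLASSES
```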
Guided Object Detection

A question may refer to specific objects in the image that are hard to detect (e.g. due to size, occlusion or clutter). When a requested object is not detected on the first attempt (searching the entire image), additional attempts are made, focusing on regions where the object has a higher probability of being found. We use relations with detected objects as an attention source. Two sources for such attention are used.

• Attention by common relations: The source for this attention is the Visual Genome dataset (Krishna et al., 2017), where objects and relations between them are annotated in images (see also Section 3.2.2). We seek the most common relation of the requested object (with an object from our known classes' set) and a corresponding relative location. Then, if the other object is found, we activate the object detector on the relevant area. An additional search area is obtained from the relation's spatial constraints. An example of using common relations as attention is given in Figure 7. [Figure 7: (a) detection results on the entire image; (b) the bottle detected using common relations.]

• Attention by question relations: The question itself may include relations that can assist detection by focusing on relevant areas. Since the processing follows the question graph representation, relation edge directions are modified from detected to undetected objects. This allows using relations with a verified detected object as detection guidance for undetected objects, in the same manner described above. The usage of this type of attention is demonstrated in Figure 8. [Figure 8: (a) detection results on the entire image; (b) the clock detected using a question relation.]

"Understanding" Capabilities

Having a system that breaks the visual answering task into real-world subtasks has many advantages. Beyond allowing modular modifications and improvements, the meaningful, compositional process is leveraged to provide information derived from the internal processing. Failure reasons and verified alternatives are provided, as well as elaborations on detected objects.

Provide Alternatives/Corrections

When the logic expression representing the question is not valid for the given image, alternatives for the failed part are searched, such that a close expression may be validated and provided as a supplement to the answer. The checks include alternative objects, relations and also properties, according to the following:

• For failed object classes, alternative classes are checked.
• Real properties are specified for objects with failed properties.
• For failed relations, alternative relations are checked.
• Additional attempts are made with close person-subordinate classes (e.g. when failing to classify a person as a woman, other sub-person classes are checked).

Examples are given in Figure 9 (note that some include multiple rounds of attempts).

Answer Elaboration

During the answering process, related information may be accumulated for verifying the logical expression representing the question. This information is provided as part of the answer, explaining and elaborating on it. The following supplements are included:

• If object detection was by a related class (e.g. a synonym, parts of a group, subordinate classes), it is specified in the answer (including the numbers of each subclass).
• The hint relation used as attention for object detection is indicated (if used).
• If queried function properties (e.g. color) differ between relevant objects, the property of each object is specified.

Some examples can be seen in Figure 10.

Integration in Related Applications

As the answering process accumulates real "knowledge" related to the image, it may be saved and used for extended applications. One of them may be a discourse on the image, where follow-up questions are answered. An additional application may be the correction of image captions (Bernardi, Cakici, Elliott, Erdem, Erdem, Ikizler-Cinbis, Keller, Muscat, Plank, et al., 2016), where the caption is transformed into a question and the answer may verify or correct it (as described in Section 3.3.1). An example of image caption correction is given in Figure 11.

Results Analysis

Our system is currently limited by the visual elements it is able to recognize. It is not trained or optimized for any visual question answering dataset. Since our goals include question "understanding" and modularity, we first focus on basic capabilities that will be developed over time to be more comprehensive. We've checked our system for various aspects and specific examples and provide an analysis, examining the graph representation for a random set of questions to see the current status as well as the potential.

Question Representation

First we check the representation capabilities of our system. To do that, we've randomly sampled 100 questions from the VQA dataset (Antol et al., 2015) and checked their graph representation. Results are given in Table 2.

Table 2: Representation results on a random set of 100 questions from the VQA dataset (Antol et al., 2015).
                      Current | Potential
Fit                   72      | 100
No fit (vocabulary)   12      |
No fit (other)        14      |
Unparsed              2       |

The vocabulary no-fit cases are misrepresentations due to failures in phrase recognition. 'Unparsed' are questions that START could not parse. The 'Potential' column represents questions that may be represented by the graph.

[Figure 11: Example of image caption correction. The caption, 'a man sitting on a bench with a large umbrella', is the result of the NeuralTalk model (Karpathy & Fei-Fei, 2015). Q: 'Is there a man on a bench with a large umbrella?' A: 'There is no bench. There is no man. There is no umbrella. There is nothing on an object.' Existing alternative relations: 'boat in front of a bird'.]

It is not always clear whether a representation is accurate, as in some cases a representation may fit the language structure but be less accurate with respect to the actual meaning. For example, a simple representation of the question "Is this picture in focus?" may be: [graph fragment — a 'picture' node linked by an 'in' edge to a 'focus' node]. However, 'in focus' represents a single element and should be recognized as such. This demonstrates the importance of vocabulary knowledge. In another example, the following questions have a similar structure:

Are they all wearing the same color?
Are they all wearing the same pants?

However, 'color' and 'pants' belong to two different types of visual elements, and hence the questions should have different representations.

Sometimes minor phrasing changes have a substantial effect on parsing and representation. The variation in phrasing may also include grammar inaccuracies and typos. This sensitivity reduces the consistency of the representation and adds noise and inaccuracies to the system. For the two "Unparsed" questions in our representation test, simple corrections led to successes. The corrections are (original → corrected):

What season do these toy's represent? → What season do these toys represent?
Where are these items at? → Where are these items?

There are other cases where a minor phrasing change corrects the representation, as can be seen in Figure 12. An additional parsing limitation is that there is no indication of the coordinating conjunction ('or', 'and') between phrases; hence both are treated as 'and'. As mentioned before, since the questions are free-form, they may involve slang, typos or wrong grammar. The question's meaning may even be unclear. For example, the question 'How is the table design?' may be the correct intended question; however, it may be that the intended question is "How is the table designed?".

All the questions sampled in this analysis can potentially be represented using the suggested graph representation. This demonstrates that, in general, our scheme has very high representation capabilities. However, some require the identification of complicated properties and related terms, e.g. "Is the refrigerator capacity greater than 22 cubic feet?" (similar comparisons of a property's quantity already exist for age). The issue of adding description levels arises for complicated properties that may have a natural representation using properties of properties, e.g.:

Is this the normal use for the object holding the flowers?
How is the table designed?
Where do these animals originate?

In some cases it may be reasonable to alter the exact meaning into a more reasonable one to handle, e.g.:

Does this truck have all of its original parts? → Are all the parts of this truck original?

In other checks performed, there were (very few) cases where relations between multiple objects of different types were required (e.g. 'Does this image contain more mountain, sky or grass?'). Support for such cases may be added in the future.

Question Answering

Our current implementation is obviously limited by the number of recognizable visual elements, queried both explicitly and implicitly. It does not include any training or adaptation to any visual question answering dataset. Also, some implementations may be incomplete or arbitrary, e.g. 'location', whose implementation is relative to the image. Answers are, however, mostly self-aware. When running on the VQA dataset (Antol et al., 2015), most answers indicate the unfamiliar visual element that prevents answering (e.g. "Unknown class: linoleum"). Examples with proper answers are shown in Figure 13. They include the use of ConceptNet (Speer & Havasi, 2013) in some cases to obtain prior knowledge regarding related classes (e.g. subclasses) and other commonsense knowledge. Examples with wrong answers are shown in Figure 14. The reasons for failure include detection failures, unknown visual elements, missing prior knowledge and other assumptions.

Further examination of the results provides some insights regarding additional sources of failure. One element that adds "noise" to the system is the use of an internet-based external knowledge database. While providing essential information, the retrieved data is also prone to errors and yields detection attempts of wrong objects. This is demonstrated by the results of queries of 'carpet' and the relation IsA, which imply that the following may be a carpet: 'Barack Obama', 'book', 'monitor', 'a plastic bag', 'a glass of water', etc. Another example of such an error is the retrieved relation 'chair IsA door'. A partial solution is using the associated weights that indicate the strength of each result. Some results may be misleading as they may refer to different meanings of the queried words.
Following are examples of such results: 'train IsA control', 'monitor IsA track', 'screen door IsA door'. In some cases the intersection of the retrieved classes with the recognizable objects is so small that it may cause a wrong conclusion based on a very superficial check. An example of this is the question "Are these toys?", where the recognizable retrieved classes are 'bicycle', 'skateboard', 'frisbee', 'kite' and 'motorcycle'; hence the answer is 'no' if none of them is detected.

An interesting observation regarding the estimation of some visual elements concerns the generation of color-name maps (Van De Weijer, Schmid, & Verbeek, 2007), which is based on supervised learning (11 optional colors per pixel). When object colors are required, the map is generated for the object's area in the image, and the answer is provided based on the dominant colors. Retrieving object color may appear to be a trivial task, as the intensity of the original RGB image channels should provide the exact color of each pixel. However, such direct methods fail to obtain the perceived color, as it is hardly related to the levels of the actual RGB channels. Hence, learning methods are incorporated to address this problem, and still there are many inaccuracies. In addition to these inaccuracies, the process required for obtaining the perceived color of an object is not consistent. This can be seen in the examples of Figure 15, where inquiring for the color of a person requires different color naming and a focus on specific regions. The bus example also requires specific behavior, where the window and wheel areas of the bus should be ignored.

[Figure 15: Demonstration of perceived-color challenges; each column corresponds to one example. Q: 'What color is the horse?' A: 'grey'; Q: 'What color is the bus?' A: 'black'; Q: 'Is the man white?' A: 'yes'. For each example, the top image is the input image with markings of the relevant results, and the bottom image is a map of color names corresponding to the required object. The first column demonstrates classification errors in the generated map of color names due to shading. The second column requires ignoring the window and wheel areas for an accurate answer. For the third column, only a specific area should be checked and colors should correspond to different names. Object detection is based on faster R-CNN + DeepLab.]
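As an illustration of the dominant-color step, the following minimal sketch (ours; it assumes the 11 color terms used by color-naming models such as Van De Weijer et al., 2007, and sidesteps the region-exclusion issues shown in Figure 15) picks the most frequent color name inside an object's segmentation mask:

```python
from collections import Counter

# The 11 basic color terms used by color-naming models
# such as Van De Weijer et al. (2007).
COLOR_NAMES = ["black", "blue", "brown", "grey", "green", "orange",
               "pink", "purple", "red", "white", "yellow"]

def dominant_color(color_name_map, mask):
    """color_name_map: HxW list of color-name indices; mask: HxW booleans."""
    counts = Counter(color_name_map[y][x]
                     for y in range(len(mask))
                     for x in range(len(mask[0]))
                     if mask[y][x])
    index, _ = counts.most_common(1)[0]
    return COLOR_NAMES[index]
```

A production version would also need to exclude misleading sub-regions (e.g. a bus's windows and wheels) and compensate for shading, as the examples above demonstrate.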
As previously mentioned, the parser's sensitivity to phrasing, and other issues such as its indifference to the type of phrase coordinator ('and', 'or'), cause representation failures or misrepresentations, which result in an inability to provide a correct answer. For example, when 'or' is used (e.g. "Are the flowers yellow or white?"), the answer will always be 'no', as both options are required to be true; hence we get an answer that is irrelevant to the question. Questions may also be misinterpreted due to multiple meanings of words and phrases, or due to subtle differences. As previously discussed, this mainly affects the use of the external knowledge database, where a wide range of concepts may be used, which may lead to an unclear meaning of a concept (e.g. 'train': vehicle vs. learn; 'monitor': screen vs. supervise). Such confusions happen also for the question itself. An example of a misinterpreted question is "What is the table/bus number?", which is interpreted as "What is the number of tables/buses?".

Currently, other than enhancing object detection by attention from question relations, details from the question are not used as hints for the correctness of expressions. A case where such information may be further utilized is when the query is for a property of an object. In this case there may be a prior assumption, or an increase in probability, that such an object exists. Of course, an automatic assumption of existence is not desirable; however, reductions of classification thresholds, additional attempts using hints and other measures may be utilized to reflect the higher probability of the existence of such an object. For example, given the question "What is the age of the man?", the probability that a man indeed exists in the image should rise, and refuting this assumption should be done only when the evidence is substantial.

Discussion and Conclusions

We have presented an approach to visual question answering that seeks to compose an answering procedure based on the 'abstract' structure of the query. We exploit the compositional nature of the question and represent it as a directed graph, with objects represented as nodes and relations as edges. Each basic component of this graph representation is mapped to a dedicated basic procedure. The collection of these basic procedures is put together, along with additional required processes, into a complex procedure for the entire query. This procedure incorporates query details and intermediate results and stores them in the graph nodes and a working memory module. The stored information completes the guidance of the procedure and allows handling different types of visual elements. Question relations are used as an attention source to enhance object detection. Querying for external common information is also handled by the procedure, in order to complete the prior knowledge required to answer the question.

Breaking the answering process into basic meaningful components, corresponding to basic logic patterns, enables awareness at each step of the accomplished and unaccomplished parts of the task. This includes recognizing and reporting failures and limitations, which in many cases are corrected and provided with valid alternatives. Elaborations on the answers are provided according to the stored information. Since the building blocks include simple real-world detectors, the system is modular and its improvement is not limited.

Human abilities motivate us to examine and handle some complicated attributes that are addressed naturally by humans, even though they may hardly appear in real queries. These attributes, such as 'odd man out', demonstrate representation challenges that require extending the natural graph representation. Currently a specific configuration is created to represent these attributes; future upgrades may allow handling them more smoothly.

Evaluation of the representation capabilities demonstrated that, even though our scheme can potentially represent practically all queries, the current state of the system is limited. The observed problems include limitations in vocabulary identification, sensitivity to phrasing and cases of grammatical similarity between different elements (e.g. 'wearing the same color' vs. 'wearing the same pants'). Additionally, some rare representation limitations exist, such as relations between more than two objects of different classes.
Even though the recognition abilities are currently limited by the scope of the existing detectors, the system is self-aware and mostly replies by specifying its limitation (which may trigger the addition of the desired detectors to the system). The representation limitations discussed in Section 4.1 are a fundamental source of failures, on top of the accumulated chances of errors of the detectors used. Our system does not exploit any language bias of the question: the answer is provided exclusively by the procedure evaluating the logic representation of the question. However, improvement is ongoing, as detectors keep improving and their scope keeps growing.

Current approaches to visual question answering mostly use end-to-end schemes that are very different from our approach. Although some methods include adaptive aspects, the optimization process is more likely to exploit language bias than to acquire the complex mechanisms required for proper answering. These methods maximize statistical results, but are likely to fail in addressing subtle yet meaningful cases. This fits the analysis of current models, demonstrating the tendency to utilize only part of the question, to provide the same answers for different images and to fail on novel forms. A combination of the UnCoRd system and an end-to-end model may be beneficial, for example enhancing UnCoRd's elaborations with an "intuitive" answer in some cases (such as unknown visual elements).

We've integrated and examined various aspects of answering questions on images using our answering system. Much more research and investigation is required for all these aspects, as well as others. Future research will include learning the representation mapping and making it more robust, further investigating and improving the visual element analyzers (e.g. combining the object type, when possible, for property detection), and more.
8,012
1810.10656
2898446106
An image-related question defines a specific visual task that is required in order to produce an appropriate answer. The answer may depend on a minor detail in the image and require complex reasoning and use of prior knowledge. When humans perform this task, they are able to do it in a flexible and robust manner, integrating modularly any novel visual capability with diverse options for various elaborations of the task. In contrast, current approaches to solve this problem by a machine are based on casting the problem as an end-to-end learning problem, which lacks such abilities. We present a different approach, inspired by the aforementioned human capabilities. The approach is based on the compositional structure of the question. The underlying idea is that a question has an abstract representation based on its structure, which is compositional in nature. The question can consequently be answered by a composition of procedures corresponding to its substructures. The basic elements of the representation are logical patterns, which are put together to represent the question. These patterns include a parametric representation for object classes, properties and relations. Each basic pattern is mapped into a basic procedure that includes meaningful visual tasks, and the patterns are composed to produce the overall answering procedure. The UnCoRd (Understand, Compose and Respond) system, based on this approach, integrates existing detection and classification schemes for a set of object classes, properties and relations. These schemes are incorporated in a modular manner, providing elaborated answers and corrections for negative answers. In addition, an external knowledge base is queried for required common knowledge. We performed a qualitative analysis of the system, which demonstrates its representation capabilities and provides suggestions for future developments.
In order to extract image information that is more informative for the question and to avoid the noise of irrelevant image areas, many works incorporated attention mechanisms. During the attention stage, image areas that are considered more relevant are multiplied by higher weights and contribute more to answering the question. Attention may be stacked for multiple stages @cite_59 , with the motivation of refining it for complicated questions. Extracting relevant areas was also performed by integrating regions of detected objects related to question words @cite_15 . The attention concept was also extended to include both image features and the question representation @cite_48 , where both attention types affect each other. Additional attention mechanisms utilize CRF @cite_20 , consider all word-region interactions @cite_4 , incorporate correlations between image, question and candidate answer @cite_16 and combine grid-based and object-detection-based regions @cite_54 @cite_51 .
{ "abstract": [ "A key solution to visual question answering (VQA) exists in how to fuse visual and language features extracted from an input image and question. We show that an attention mechanism that enables dense, bi-directional interactions between the two modalities contributes to boost accuracy of prediction of answers. Specifically, we present a simple architecture that is fully symmetric between visual and language representations, in which each question word attends on image regions and each image region attends on question words. It can be stacked to form a hierarchy for multi-step interactions between an image-question pair. We show through experiments that the proposed architecture achieves a new state-of-the-art on VQA and VQA 2.0 despite its small size. We also present qualitative evaluation, demonstrating how the proposed attention mechanism can generate reasonable attention maps on images and questions, which leads to the correct answer prediction.", "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling \"where to look\" or visual attention, it is equally important to model \"what words to listen to\" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3 to 60.5 , and from 61.6 to 63.3 on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1 for VQA and 65.4 for COCO-QA.", "Recently, the Visual Question Answering (VQA) task has gained increasing attention in artificial intelligence. Existing VQA methods mainly adopt the visual attention mechanism to associate the input question with corresponding image regions for effective question answering. The free-form region based and the detection-based visual attention mechanisms are mostly investigated, with the former ones attending free-form image regions and the latter ones attending pre-specified detection-box regions. We argue that the two attention mechanisms are able to provide complementary information and should be effectively integrated to better solve the VQA problem. In this paper, we propose a novel deep neural network for VQA that integrates both attention mechanisms. Our proposed framework effectively fuses features from free-form image regions, detection boxes, and question representations via a multi-modal multiplicative feature embedding scheme to jointly attend question-related free-form image regions and detection boxes for more accurate question answering. The proposed method is extensively evaluated on two publicly available datasets, COCO-QA and VQA, and outperforms state-of-the-art approaches. Source code is available at this https URL", "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. 
Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.", "Visual Question and Answering (VQA) problems are attracting increasing interest from multiple research disciplines. Solving VQA problems requires techniques from both computer vision for understanding the visual contents of a presented image or video, as well as the ones from natural language processing for understanding semantics of the question and generating the answers. Regarding visual content modeling, most of existing VQA methods adopt the strategy of extracting global features from the image or video, which inevitably fails in capturing fine-grained information such as spatial configuration of multiple objects. Extracting features from auto-generated regions -- as some region-based image recognition methods do -- cannot essentially address this problem and may introduce some overwhelming irrelevant features with the question. In this work, we propose a novel Focused Dynamic Attention (FDA) model to provide better aligned image content representation with proposed questions. Being aware of the key words in the question, FDA employs off-the-shelf object detector to identify important regions and fuse the information from the regions and global features via an LSTM unit. Such question-driven representations are then combined with question representation and fed into a reasoning unit for generating the answers. Extensive evaluation on a large-scale benchmark dataset, VQA, clearly demonstrate the superior performance of FDA over well-established baselines.", "The quest for algorithms that enable cognitive abilities is an important part of machine learning. A common trait in many recently investigated cognitive-like tasks is that they take into account different data modalities, such as visual and textual input. In this paper we propose a novel and generally applicable form of attention mechanism that learns high-order correlations between various data modalities. We show that high-order correlations effectively direct the appropriate attention to the relevant elements in the different data modalities that are required to solve the joint task. We demonstrate the effectiveness of our high-order attention mechanism on the task of visual question answering (VQA), where we achieve state-of-the-art performance on the standard VQA dataset.", "", "Visual attention, which assigns weights to image regions according to their relevance to a question, is considered as an indispensable part by most Visual Question Answering models. Although the questions may involve complex relations among multiple regions, few attention models can effectively encode such cross-region relations. In this paper, we demonstrate the importance of encoding such relations by showing the limited effective receptive field of ResNet on two datasets, and propose to model the visual attention as a multivariate distribution over a grid-structured Conditional Random Field on image regions. We demonstrate how to convert the iterative inference algorithms, Mean Field and Loopy Belief Propagation, as recurrent layers of an end-to-end neural network. 
We empirically evaluated our model on 3 datasets, in which it surpasses the best baseline model of the newly released CLEVR dataset by 9.5 , and the best published model on the VQA dataset by 1.25 . Source code is available at https: github.com zhuchen03 vqa-sva." ], "cite_N": [ "@cite_4", "@cite_48", "@cite_54", "@cite_59", "@cite_15", "@cite_16", "@cite_51", "@cite_20" ], "mid": [ "2796001012", "2963668159", "2770883544", "2171810632", "2340874616", "2751804245", "2737766105", "2743640935" ] }
Understand, Compose and Respond - Answering Visual Questions by a Composition of Abstract Procedures
Human ability to answer a question related to an image is remarkable in several ways. Given a single image, a large number of different questions can be answered about it. Answering these questions may require the detection and analysis of subtle, non-salient cues. Prior information and data obtained through experience are also incorporated into the process, enabling answers to questions that may be highly complex. The answering process itself is open to reasoning, allowing for example elaborations on the answer, or explanations of how it was reached. In the last few years, the problem of image question-answering by a machine was addressed by many studies (Teney, Anderson, He, & Hengel, 2017a; Pandhre, ...).

The system we propose and describe in this work handles a wide range of questions about images, without training on any questions (zero-shot learning). We concentrate on designing a general process for this task, and not on fitting results to the statistics of a specific dataset, as current end-to-end approaches do. Our system uses many existing methods for different visual tasks, such as detection, classification, segmentation, or extracting objects' properties and relations. In some cases novel detection methods were developed; however, this is not a main focus of the work, as our system is modular, enabling 'plugging in' new detectors to enhance its capabilities.

The structure of questions

A central aspect of our scheme is that different questions share a similar structure or subcomponents with a similar structure. For instance, the following questions have components with a common structure:

What kind of pants is the person on the bed wearing? → person on bed
Is the giraffe behind a fence? → giraffe behind fence

The part with the common structure can be represented as: there exist X of class c_x and Y of class c_y, such that r(X, Y). Such structures may serve as building blocks for a compositional question representation. All components with similar structures can be handled by the same procedure, performing part of the answering task. In our analysis, questions could be represented by a combination of a few types of structures, which we refer to as "basic patterns". These patterns are short parametric logical phrases that represent an atomic segment of the question structure. Each basic pattern dictates a particular implementation scheme utilizing a pool of implemented building blocks. The combination of basic patterns determines the entire procedure of answering the question.

One advantage of such a scheme is that it is modular, allowing the addition of building blocks to increase the scope of the scheme, with no dependency on the statistics of a specific visual questions dataset. A second advantage is that the coverage of queries grows exponentially with the number of building blocks, without the need to encounter such queries as training examples. An additional advantage is "understanding" capabilities: the basic meaningful components break up the process and allow a separate analysis of each component, including reasons for failure and explanations.

The aspect of question coverage is also addressed in other directions. One such direction is increasing the recognizable vocabulary of the question using commonsense knowledge.

Utilizing commonsense knowledge

In many cases answering a question requires the integration of prior commonsense knowledge, especially about semantic relations between concepts. For example, when answering the question 'What animal is this?'
detection capabilities of specific animals (e.g. horse, dog, cat) will not suffice, since the answer requires the general notion of 'animal' and which particular instances belong to it. However, a query to an external knowledge database (e.g. ConceptNet (Speer & Havasi, 2013)) may provide subcategories of 'animal'. Consequently, specific detectors can be activated to seek these specific recognizable animal types. These knowledge databases are mostly based on information extracted from the internet and include commonsense information about the world. Querying such a database allows the completion of missing information, such as semantic connections between object classes (e.g. synonym, superordinate, subordinate), as in the example above, the typical usage of different objects, and more. Integrating this type of information is important when answering questions asked by humans, as it is common knowledge and treated as universally available.

UnCoRd Answering System

Approach Overview

Our Understand, Compose and Respond (UnCoRd) approach is based on the following observations:

• There is a representation of the question in terms of objects, their classes, properties and relations, including quantifiers and logical connectives as well as non-logical symbols: predicates and functions. The representation has an 'abstract' structure, i.e. it is independent of the particular objects, classes, properties and relations, which are represented as parameters. A single abstract representation can represent many different concrete questions. Our main thesis is that the procedure to be applied for obtaining the answer depends on the abstract structure of the question and not on the particular elements. Hence, it is important to use the right kind of abstract representation, which will allow this mapping to procedures (where all questions with the same abstract structure require the same procedure). A proper parsing and mapping of the language question to its abstract representation should be obtained in order to use this method.

• The question has a compositional structure: there are basic components put together in particular ways. The abstract representations are composed from 'basic patterns' and methods for putting them together into more complex compound structures. This compound structure determines how the procedures are constructed. There are basic procedures for the basic patterns, and methods of composing from them a more complex procedure to deal with the compound abstract structures. In other words, we get a procedure for the entire question by having procedures for the basic components and a procedure to put them together.

We would like our system to meet the following criteria:

- Answer correctly and efficiently.
- "Understand" the question, in the sense of: breaking the answering procedure into a set of simple visual tasks; identifying which tasks it can perform and what its limitations are, indicating if something is missing or unknown; and the ability to explain and reason - elaboration of the answering process using the image and intermediate results, including error correction and alternative suggestions.
- Modularity and robustness: handling questions and image categories of various types, not limited by a training set.
- Though not using a human psychology model, the ability to handle questions that people answer easily (and that may be "hard" for computers) is desired, e.g. 'odd man out'.

A question can be seen as a statement about the image that the answering system tries to make true or refute.
Making the statement true requires an assignment of the particular classes, properties and relations to the image. Their identification in the image is based on pre-trained classifiers and detectors. The recognizable set is modular and can be extended by adding new detectors or switching to stronger ones. Logical operations are used to generate logic sentences, with a formulation that fits first-order logic (including functions) with some extensions. The answering procedure is generated according to the input question in the following manner:

Question → Question representation → Procedure

A proper representation is fundamental to a successful mapping of the question into the answering routine. This representation should be concise and should support generating the same procedure when applied to questions of similar structure with different choices of classes, properties and relations. To achieve this, the visual elements (object classes, object properties and object relations) are treated as parameters, integrated using logic operations (e.g. ∧, ∨) and quantifiers (e.g. ∀, ∃, ∃5) into basic logic patterns corresponding to specific structures. These patterns are combined and merged to compose more complicated structures that create the representation of the question and can be mapped to the answering procedure.

We use a directed graph to describe the question, which is a natural choice in our case and allows diverse compositions of substructures. In this graph each node represents an object entity and its description (e.g. a list of required properties). These nodes are linked by the graph edges, which represent relations between objects. The graph is divided into small segments that relate either to one node and correspond to part of its information (e.g. object class and one property) or to an edge and the two classes of the nodes it connects. Each of these graph segments matches a basic pattern that is handled by a corresponding procedure, using the specific visual elements of this substructure. The graph representation allows decomposing the answering procedure into a set of elementary procedures that are put together to generate a modular answering procedure.

The elementary procedures invoke visual analyzers, which are the basic modules of the process. Each class, property and relation has a visual analyzer to establish it. More general visual operations that serve more than one particular visual element (e.g. depth estimation) are activated according to need, and their results are available to all basic procedures. The overall routine is obtained by applying these procedures and operations in an appropriate order, to appropriate objects, where the number of required assignments per object is set by the quantifier of the corresponding node. The visual elements may have 'types', such as classes that can be basic or subordinate (i.e. basic with additional properties), properties that may be comparative (e.g. 'older than') and relations that can be symmetric (e.g. 'beside') or not.

The entire process of answering a visual question is described in Figure 1. It starts by receiving the input language question and mapping it to a graph representation. The next stage is running a recursive procedure that follows the graph and invokes the procedures associated with the basic structures, using the specific visual elements as inputs. After the results are obtained, the answer is returned. Questions with a simple structure (e.g.
"Is there a red car?") can be represented by matching one specific pattern to a question. This covers a wide range of questions, however by allowing a composition of simple patterns, into a more complicated structures, the quantity of supported questions is raised substantially (from ∼60% to ∼90%, according to an analysis of 542 questions on images asked freely by people and using a set of 12 patterns). This composition is done using a graph. For example in the question "Is there a red car to the right of the yellow bus? " there are two parts with a simple structure "Is there an object of class c with a property p?" connected by the relation "to the right of", which corresponds to another simple structure: "Is there an object of class c 1 and an object of class c 2 that have the relation r between them?". The graph representing the question is: Map into a graph representation question Run a recursive procedure following the graph image Answer When a specific question is given, the question is parsed and mapped to a directed graph, where the visual elements are its parameters. This graph corresponds to a logic expression that is composed of simple expressions, that may share the object variables. Some of the parametric visual elements are variables that require estimation based on the image. Once the variables are estimated, the logic expression is evaluated (as true or false) and the query is answered accordingly. The formulation of the logic expression fit first order logic (including functions) with some extensions (e.g. a variable-sized set of arguments or outputs for some functions). Each simple logic expression is related to a basic pattern, which corresponds to a basic procedure. The basic procedure obtains an answer to the expression by activating visual analyzers according to the types of object classes, properties and relations (which are inputs to the basic procedure). Such a system will have the ability of constant improvement by adding detectors for new classes, properties and relations according to requirements. Similar characteristics are also evident in human learning, where new learned details are integrated into the existing mechanism of world perception. The UnCoRd system is implemented following the approach described above. It answers visual questions using a composed process that follows the graph representation of the question, activating real world visual analyzers. This system is described in the following section. System Description Mapping to a Directed Graph One of the system's main tasks is to translate the query, given in natural language, into an abstract representation which will then be mapped into a procedure (the first step, described in Figure 1). We first use the START parser (Katz, 1988(Katz, , 1997 The generated set of ternary expressions is used for the generation of a graph representation, where nodes represent objects and edges represent relations between objects. The node include all of the object's requirements according to the question, mainly its class, properties that may be required (e.g. 'red') or queried (e.g. 'what color') and quantifiers that are not the default existence quantifier (e.g. 'all', 'two'). The directed edges correspond to relations between objects where the edge direction implies the direction of relation. Each edge is also assigned a direction of progress for the answering procedure. 
The graph representation is used to fit an answering procedure to each particular question. Fragments of information are extracted from subgraphs that include up to two connected nodes. A graph fragment includes a subset of elements (classes, properties, property functions and relations) that maps to one of a few basic logic patterns. This mapping, combined with the particular accompanying visual elements, defines a logic expression that selects and guides a component of the answering procedure. For example, a fragment consisting of a node's class and a required property is mapped to the pattern ∃X (c_X(X) ∧ p_X(X)); the specific class c_X and property p_X define the particular logic expression that should be checked. Such mappings are performed over the entire graph, where each fragment is mapped into a basic logic pattern and specific visual elements. These simple logic expressions, joined using logic operations, constitute one logic expression that represents the entire question. Each basic logic pattern has a dedicated procedure that performs the evaluation required to confirm or refute it, using visual analysis of the image. The procedure provides an answer according to an accompanying query.

We use the following notation for describing the basic logic patterns:

X, Y - objects.
c(X) - a class, evaluated for object X (as True/False), e.g. 'person', 'boy', 'bird', 'train'.
p(X) - a predicate property (predicate of arity 1), evaluated for object X (as True/False), e.g. 'blue', 'male', 'big'.
f(X) - a property function; returns properties of a specific type, e.g. 'color', 'age', 'size'.
g(S_t) - a global property function for a subset of objects of the same class, S_t ⊂ {X_t : c_t(X_t)}; returns properties of a specific type, e.g. 'quantity', 'difference', 'similarity'.
p_f - a predicate property, constrained to the possible return values of f(X) (e.g. blue = color(X), male = gender(X), big = size(X)).
a_g - one of the possible values returned by g (e.g. 3 = quantity(S_t), where S_t = {X_t : c_t(X_t)}).
r(X, Y) - a relation between objects X and Y (predicate of arity 2), e.g. X below Y → below(X, Y), and in the same manner looking_at(X, Y), near(X, Y).
? - a query, the requested answer.

Objects (or other elements) starting with a capital letter (e.g. X, Y) are unknown elements (variables) that should be estimated according to the image. The particular patterns were selected because they provide a small, simple and basic set that can naturally compose the logic representation of the question. This small set provides high flexibility in composing a wide variety of logic expressions using the different visual elements. A conducted survey and other checks showed that this set is empirically sufficient to represent the set of analyzed queries.

Following are the basic logic patterns that are mapped to basic procedures in the question answering process (each followed by its corresponding graph fragment). The ∃ quantifier may be replaced by other quantifiers (e.g. ∀, ∃2).

• Property Existence: ∃X (c_X(X) ∧ p_X(X)); ? - ∃/c_X
  Graph fragment: node [c: c_X, p: p_X]
  Examples: 'Is there a brown bear?' (query for validity with a specific object class); 'What is the purple object?' (unknown and queried object class).
  An example of a modification due to a quantifier parameter: ∀X (c_X(X) ∧ p_X(X)); ? - ∃, e.g. 'Are all bears brown?'

• Function Property: ∃X (c_X(X)), f(X) = P_f; ? - P_f
  Graph fragment: node [c: c_X, f: P_f]
  Example: 'What color is the chair?'

• Property of a Set: ∀X_t ∃S_t (S_t = {X_t : c_t(X_t)}), g(S_t) = A_g; ? - A_g
  Graph fragment: node [c: c_Xt, g: A_g]
  Example: 'How many planes are in the photo?'

• Object Existence: ∃X (c_X(X)); ? - ∃/c
  Graph fragment: node [c: c_X]
  Examples: 'Is this a dog?'; 'What is it?'

• Relation Existence: ∃X ∃Y (c_X(X) ∧ c_Y(Y) ∧ r(X, Y)); ? - ∃/c_X/c_Y
  Graph fragment: node [c: c_X] -r→ node [c: c_Y]
  Examples: 'Is the man looking at the children?' (validity query); 'What is on top of the television?' (query for one of the classes).

The combination and composition of these patterns has powerful representation capabilities and provides a mapping to a set of basic procedures that constitute the full answering procedure. Composing the procedure out of "real-world" visual tasks allows both the use of existing detectors (including separate improvement of each task) and the ability to explain, elaborate on and correct questions.

As mentioned above, modified quantifiers may be added to nodes according to the number of objects required by the question (see Figure 2). These quantifiers may be either numbers (e.g. 'Are there three guys?') or 'all' for an entire group of objects. Setting the group may depend on subtle phrase differences, which affect the answering procedure's flow and results, as can be seen in Figure 3. [Figure 3: the question in (a) requires all 'dog' objects to be both black and small, hence the first dog that is not black renders the logic phrase false and the answer is "no" (the failed object and reason are marked in the image); the question in (b) requires only that the black dogs be small, hence all dogs are checked for color and the size of the black ones is verified to be small; since this holds, the answer is "yes".]

The graph naturally represents objects, their properties and binary connections between them. Though this covers a wide variety of questions, using global image information and some extensions to the basic graph increases the support for additional attributes. A property of a group is an example of such an extension. Properties that use global information are 'closest' and 'size' (which is relative to other objects). Specific implementations for complicated attributes may be added as dedicated tasks or by preprocessing that breaks them into graph-plausible segments. An example of such an implementation in our system is 'odd man out' (e.g. "How is one cow not like the others?"), where the relations 'diff <f>' and 'sim <f>' (for different and similar values of property f, correspondingly) are used to check and compare the properties of objects. An example is given in Figure 4. [Figure 4: an 'odd man out' example, a complicated attribute that requires special treatment and mapping to the graph representation; bounding boxes for birds with the common property and for the 'odd man out' bird are marked in red and yellow, correspondingly. Object detection is based on faster R-CNN + DeepLab.] The 'similarity' attribute (a query for a property that is similar for all objects in the group) is handled in the same manner.
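As an illustration of the diff<f>/sim<f> comparison just described, here is a minimal Python sketch of an 'odd man out' check over a group of detected objects. The helper get_property stands in for a visual analyzer returning the value of function property f (e.g. 'color') for a detected object; it, and the commented detector call, are assumptions for illustration, not parts of the actual UnCoRd code.

```python
# Illustrative sketch of an 'odd man out' check using property comparison.
from collections import Counter

def odd_man_out(objects, f, get_property):
    """Return the single object whose property f differs from all others,
    or None if no unique outlier exists."""
    values = [get_property(obj, f) for obj in objects]
    counts = Counter(values)
    # An outlier exists when exactly one object has a unique value and
    # all remaining objects share one common value.
    uniques = [v for v, n in counts.items() if n == 1]
    if len(uniques) == 1 and len(counts) == 2:
        return objects[values.index(uniques[0])]
    return None

# Example: among detected birds, find the one with a different color.
# birds = detect('bird', image)                      # hypothetical detector call
# odd = odd_man_out(birds, 'color', get_property)
```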
The main building blocks of the question representation are the visual elements: object classes, object properties and object relations.

• Object Classes
An object class is the category of object required by the question. It does not necessarily match the classes of the object detector in use. To enlarge the coverage of supported object classes, we define a few categories of object classes and handle them accordingly.

- Basic Classes: These are the classes specifically covered by the main multi-class object detector. We currently use instance segmentation by mask R-CNN (He, Gkioxari, Dollár, & Girshick, 2017) for the 80 classes of the COCO dataset (Lin, Maire, Belongie, Hays, Perona, Ramanan, Dollár, & Zitnick, 2014). Having the segmented object is very useful, as this accuracy is required in many cases (e.g. for the relation 'touch'). Other detection methods are also integrated and may be used instead.

- Subordinate Classes: These are basic classes with additional properties. For example, for subclasses of 'person' we apply face detection (Mathias, Benenson, Pedersoli, & Van Gool, 2014) to the detected 'person' objects, followed by an age and gender classifier (Levi & Hassner, 2015) on the results (an example is demonstrated in Figure 5).

- Superordinate Classes: Each superordinate class includes a few basic classes (for example 'furniture', 'animal'). To handle these, we use ConceptNet (Speer & Havasi, 2013), a commonsense knowledge database based on data extracted from the internet (see also Section 3.2.2), which includes concepts and predefined relations between them. We use the relations 'InstanceOf', 'IsA', 'MadeOf' and 'PartOf' with the requested class, and keep the results that fit our list of basic classes. The detected objects of these classes are retrieved and used for the rest of the procedure. If the query is for the type of the requested superordinate class, the name of the detected basic class is given as the answer (see Figure 5 for an example).

- Similar Classes: A class that has a synonym or a very similar class in the basic-classes set may also be searched as this corresponding class. These correspondences are extracted using the 'Synonym' and 'SimilarTo' relations in ConceptNet.

- A Group of Objects: To identify a class that represents a group of objects (possibly of several optional basic classes), the ConceptNet relation 'MemberOf' is used (e.g. flock → bird, sheep; fleet → bus, ship, ...). A quantity requirement of at least two objects is added (demonstrated in Figure 5).

- Sub Objects: Some objects are parts of 'known' objects and can be extracted from the detection of the host object with additional processing. We apply human pose estimation (Chen & Yuille, 2014) to obtain the different body parts when requested (e.g. 'left/right hand', 'left/right foot'). Relative areas of objects (e.g. 'the middle of the bus') are also treated as sub objects. In these cases, left and right differ from other uses of left/right as a location property (e.g. 'the left box'). A 'shirt' is also treated as a sub object, corresponding to the torso area provided by the human pose estimation results (an example is given in Figure 5).

• Object Properties
Objects have various visual properties. We differentiate between binary properties (e.g. 'red') and function properties, which return the property of an object from a specific category (e.g. 'color'). Table 1 describes the used set of properties, divided (most of them) into groups of function properties.

Table 1: The set of object properties.
Properties' Group            | Predicate Properties
color/colors                 | 11 colors (e.g. 'black', 'blue', ...)
age^a                        | ages and age inequalities (based on 8 age groups)
gender^a                     | female/male
location^b (e.g. where)      | spatial image location (e.g. 'bottom (of the image)')
relative location^bc         | location relative to other objects (e.g. 'the left dog')
type                         | subclass (when available)
size                         | 'small', 'big', 'average'
quantity^d                   | number of objects
difference^d (odd man out)   | no direct binary property
similarity^d                 | no direct binary property

• Object Relations
Relations between two objects are represented by the directed graph edges. The detection of relations varies: some require "simple" information (e.g. 'to the right of') while others require complicated visual features (e.g. 'wearing'). We combine rule-based detection for some relations with a deep neural network for others.

- Rule-based relation classification: Based on spatial checks, using (when needed) morphological methods, depth estimation (Liu, Shen, Lin, & Reid, 2016), face detection (Mathias et al., 2014), face key-points detection (Zhu & Ramanan, 2012) and gaze estimation (Recasens*, Khosla*, Vondrick, & Torralba, 2015). Simplifications and compositions of relations are used, as well as commonsense knowledge (obtained by querying ConceptNet (Speer & Havasi, 2013)). A special type of relations are the comparison relations, sim <f> and diff <f>, which check similarity or difference of the function property f, correspondingly.

- Deep neural network classifier: Based on the DR-Net method (Dai, Zhang, & Lin, 2017) for relation predicate classification. This method, like other visual relation detectors, utilizes object detection. To avoid coupling relation detection with object detection, which would reduce the robustness of our system, and yet exploit object detection when possible, we added a layer trained to project a closeness measure based on the GloVe word embedding (Pennington, Socher, & Manning, 2014) and generate a representation for any object class. This way, object classes that were not trained for relation classification still have a representation, projected onto the DR-Net object-class vectors. We use the version trained for the 70 relations of the VRD dataset (Lu, Krishna, Bernstein, & Fei-Fei, 2016a). Since relations are also used as an attention for object detection (Section 3.2.2), an inverse relation is matched to each relation when possible, so that attention can be used for both directions of the relation.
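The projection idea can be illustrated with a small sketch: an unseen class name is mapped to a distribution over the relation classifier's trained classes via a softmax over GloVe cosine similarities. This is our reading of the described layer, not the authors' code; the GloVe file name, helper names and temperature are assumptions.

```python
# A sketch of embedding-based class projection: an object class unseen by
# the relation classifier is represented as a similarity-weighted
# combination over the classifier's trained classes.
import numpy as np

def load_glove(path='glove.6B.300d.txt'):
    vecs = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            word, *nums = line.rstrip().split(' ')
            vecs[word] = np.asarray(nums, dtype=np.float32)
    return vecs

def project_class(name, trained_classes, vecs, temperature=0.1):
    """Project an arbitrary class name onto the trained class set by a
    softmax over cosine similarities of GloVe embeddings."""
    v = vecs[name]  # assumes the class name is in the GloVe vocabulary
    sims = []
    for c in trained_classes:
        u = vecs[c]
        sims.append(v @ u / (np.linalg.norm(v) * np.linalg.norm(u) + 1e-8))
    sims = np.asarray(sims) / temperature
    w = np.exp(sims - sims.max())
    return w / w.sum()          # weights over the trained classes

# weights = project_class('pony', trained_classes=['horse', 'dog', 'car'],
#                         vecs=load_glove())
```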
Recursive Procedure

The final stage of answering the question is activating a recursive procedure that follows the graph nodes and edges, invokes the relevant basic procedures and integrates all the information to provide the answer. A basic scheme of the procedure is given in Figure 6 and in Algorithm 1. [Figure 6: a scheme of the recursive answering procedure, including the external knowledge and working memory components. At each step the current node (cur_node) is set and the objects are examined according to the node's requirements. If the check succeeds, a new cur_node is set (according to a relation or the next global parent node) and the function is called again to handle the subgraph starting from it. The required visual elements: c: object class, p_i: an object property, f: a function property, g: a property of a set, r_i: a relation. Daughter-object detection is activated only when none was detected in previous stages. The estimated maps of depth and color names are calculated by the procedure according to need.]

The first step is a preliminary object detection, carried out by applying instance segmentation to the image. Then, a recursive function (getGraphAnswer) is invoked for node handling (starting at a global parent node). It runs specific procedures that activate visual analyzers to check the requirements (properties, relations) and fetch the required information (function properties). The retrieved objects that fulfill the requirements are coupled to the corresponding question objects, so that subsequent checks are performed on the same objects. The number of required objects is set mainly according to the quantifiers. Once a node's checks are completed, the same function (getGraphAnswer) is invoked for the next node, which is determined according to a relation (graph edge) or the next global parent node. Once all nodes have been queried, the checks for the entire set are activated (if needed). Answers are provided by all basic procedures and the final answer is set according to precedence (e.g. a queried property type has priority over binary answers). [Algorithm 1 (fragment):

    if success ∧ ¬empty(g) then
        answer = g(valid_objs)
    end
    return answer

Footnotes: a. according to object detection and previous checks; b. according to quantifiers and other requirements; c. either to a daughter node or the next global parent node.]

Working Memory

The global information gathered throughout the answering process is stored in a "Working Memory" component. It stores calculations that may be required at several stages of the process. This information is calculated only if needed, and includes the objects and their retrieved data, the depth map, the current node, the currently used objects and more.

Common Knowledge

When a person answers a visual question, prior common knowledge plays an important role. This includes connections between classes, famous brands and logos, knowing the roles and characteristics of objects and actions, anticipating the future, knowing which details to ignore, and more. Some of the issues related to prior commonsense knowledge are addressed by our system. The main uses of prior knowledge are common relations in images (using the Visual Genome dataset (Krishna, Zhu, Groth, Johnson, Hata, Kravitz, Chen, Kalantidis, Li, Shamma, et al., 2017)) and commonsense knowledge about object categories and the connections between them (using ConceptNet (Speer & Havasi, 2013)).

• Visual Genome Dataset: The Visual Genome dataset (Krishna et al., 2017) contains (among many other annotations) objects and binary relations between them for a set of 108,077 images. Common relations involving specific objects are extracted from this dataset (on demand) and used as prior knowledge to assist detection. This allows refining the search area when an object is not detected in the initial detection, as described below and demonstrated in Figure 7.

• ConceptNet: To obtain general commonsense knowledge we use the ConceptNet database (version 5) (Speer & Havasi, 2013). The source of information for this database is the internet (results from additional databases are also incorporated). It allows querying for concepts and relations between them of the form:

    concept1 - relation → concept2 (e.g. horse - IsA → animal)

The query is performed by providing two members of the triplet [relation, concept1, concept2] and querying for the third. These common-knowledge relations complement the system's capabilities for answering 'real world' questions in which such common knowledge is assumed. We currently use ConceptNet mainly to extend the understanding of object classes (e.g. superordinate classes, similar classes), as described for example in Section 3.2.1. Example questions using connections between classes are given in Figure 5.
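Such triplet queries can be illustrated against the public ConceptNet 5 REST API (api.conceptnet.io). The endpoint shape and response fields below follow that public API as we understand it, and the weight filter reflects the strength scores discussed in the results analysis; the paper does not specify how UnCoRd issues its queries, so treat this as a sketch.

```python
# A sketch of the triplet-style ConceptNet query described above,
# using the public ConceptNet 5 REST API.
import requests

def conceptnet_query(rel, start=None, end=None, min_weight=1.0):
    """Provide two members of the triplet [relation, concept1, concept2]
    and retrieve candidates for the third, filtered by edge weight."""
    params = {'rel': f'/r/{rel}', 'limit': 100}
    if start:
        params['start'] = f'/c/en/{start}'
    if end:
        params['end'] = f'/c/en/{end}'
    edges = requests.get('http://api.conceptnet.io/query',
                         params=params).json().get('edges', [])
    side = 'end' if start else 'start'   # return the unspecified member
    return {e[side]['label'] for e in edges if e.get('weight', 0) >= min_weight}

# Subcategories of 'animal' (concept1 unknown): ? - IsA -> animal
# candidates = conceptnet_query('IsA', end='animal')
# recognizable = candidates & set(BASIC_CLASSES)  # BASIC_CLASSES: detector classes
```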
Guided Object Detection

A question may refer to specific objects in the image that are hard to detect (e.g. due to size, occlusion or clutter). When a requested object is not detected on the first attempt (searching the entire image), additional attempts are made, focused on regions where the object has a higher probability of being found. We use relations with already detected objects as an attention source, of two kinds.

• Attention by common relations: The source for this attention is the Visual Genome dataset (Krishna et al., 2017), where objects and relations between them are annotated in images (see also Section 3.2.2). We seek the most common relation of the requested object (with an object from our set of known classes) and a corresponding relative location. If the other object is found, we activate the object detector on the relevant area. An additional search area is obtained from the relation's spatial constraints. An example of using common relations as an attention is given in Figure 7. [Figure 7: (a) detection results on the entire image; (b) the bottle detected using common relations.]

• Attention by question relations: The question itself may include relations that can assist detection by focusing on relevant areas. Since the processing follows the question's graph representation, relation edge directions are modified to point from detected to undetected objects. This allows using relations with a verified, detected object as detection guidance for undetected objects, in the same manner described above. The use of this type of attention is demonstrated in Figure 8. [Figure 8: (a) detection results on the entire image; (b) the clock detected using a question relation.]

"Understanding" Capabilities

Having a system that breaks the visual answering task into real-world subtasks has many advantages. Beyond enabling modular modifications and improvements, the meaningful, compositional process is leveraged to provide information derived from the internal processing: failure reasons and verified alternatives are provided, as well as elaborations on detected objects.

Provide Alternatives/Corrections

When the logic expression representing the question is not valid for the given image, alternatives for the failed part are searched, such that a close expression may be validated and provided as a supplement to the answer. The checks include alternative objects, relations and properties, as follows (a sketch of this correction step is given after the list):

• For failed object classes, alternative classes are checked.
• For objects with failed properties, the actual properties are specified.
• For failed relations, alternative relations are checked.
• For failed person subordinate classes, additional attempts are made with close subclasses (e.g. when classifying a person as a woman fails, other sub-person classes are checked).

Examples are given in Figure 9 (note that some include multiple rounds of attempts).
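The sketch referred to above follows: an illustrative correction loop over the failed part of the logic expression. All helper names (alternative_classes, detect, check_relation) and the failure-record format are placeholders standing in for the system's ConceptNet queries and visual analyzers; this is our illustration, not the actual UnCoRd code.

```python
# Illustrative sketch of the correction step: when a check fails, nearby
# alternatives are tried so that a close, valid statement can accompany
# the negative answer.

def suggest_alternative(failed, image, alternative_classes, detect, check_relation):
    """failed: a dict describing the failed check, e.g.
    {'kind': 'class', 'value': 'woman'} or
    {'kind': 'relation', 'value': 'on', 'args': (obj_a, obj_b),
     'alternatives': ['in front of', 'near', ...]}."""
    if failed['kind'] == 'class':
        # e.g. 'woman' failed -> try related classes ('man', 'girl', ...)
        for alt in alternative_classes(failed['value']):
            if detect(alt, image):
                return f"There is no {failed['value']}, but there is a {alt}."
    elif failed['kind'] == 'relation':
        a, b = failed['args']
        for alt in failed['alternatives']:
            if check_relation(alt, a, b, image):
                return f"Existing alternative relation: '{a.cls} {alt} {b.cls}'."
    return None  # no close alternative found
```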
Answer Elaboration

During the answering process, related information may be accumulated while verifying the logical expression that represents the question. This information is provided as part of the answer, explaining and elaborating on it. The following supplementary details are included:

• If object detection was by a related class (e.g. a synonym, parts of a group, subordinate classes), it is specified in the answer (including the number of objects of each subclass).
• The hint relation used as an attention for object detection is indicated (if used).
• If a queried function property (e.g. color) differs between the relevant objects, the property of each object is specified.

Some examples can be seen in Figure 10.

Integration in Related Applications

As the answering process accumulates real "knowledge" related to the image, this knowledge may be saved and used for extended applications. One such application is a discourse on the image, where follow-up questions may be answered. Another is the correction of image captions (Bernardi, Cakici, Elliott, Erdem, Erdem, Ikizler-Cinbis, Keller, Muscat, Plank, et al., 2016), where the caption is transformed into a question and the answer verifies or corrects it (as described in Section 3.3.1). An example of image caption correction is given in Figure 11. [Figure 11: Example of image caption correction; the caption is the result of the NeuralTalk model (Karpathy & Fei-Fei, 2015). Caption: "a man sitting on a bench with a large umbrella". Q: "Is there a man on a bench with a large umbrella?" A: "There is no bench. There is no man. There is no umbrella. There is nothing on a object." Existing alternative relations: 'boat in front of a bird'.]

Results Analysis

Our system is currently limited by the visual elements it is able to recognize. It is not trained or optimized for any visual question answering dataset. Since our goals include question "understanding" and modularity, we first focus on basic capabilities that will be developed over time to become more comprehensive. We checked our system on various aspects and specific examples and provide an analysis, examining the graph representation for a random set of questions to assess its current status as well as its potential.

Question Representation

First we check the representation capabilities of our system. To do that, we randomly sampled 100 questions from the VQA dataset (Antol et al., 2015) and checked their graph representation. Results are given in Table 2.

Table 2: Representation results on a random set of 100 questions from the VQA dataset (Antol et al., 2015). The 'Vocabulary' no-fit cases are misrepresentations due to failures in phrase recognition; 'Unparsed' are questions that START could not parse; the 'Potential' column counts questions that may in principle be represented by the graph.

                      Current | Potential
Fit                   72      | 100
No fit - Vocabulary   12      | 0
No fit - Other        14      | 0
Unparsed              2       | 0

It is not always clear whether a representation is accurate, as in some cases a representation may fit the language structure but be less accurate for the actual meaning. For example, a simple representation of the question "Is this picture in focus?" may be a node [c: picture] connected by the relation 'in' to a node [c: focus]. However, 'in focus' represents a single element and should be recognized as such. This demonstrates the importance of vocabulary knowledge. As another example, the following questions have a similar structure:

Are they all wearing the same color?
Are they all wearing the same pants?

However, 'color' and 'pants' belong to two different types of visual elements, and hence the questions should have different representations. Sometimes minor phrasing changes have a substantial effect on parsing and representation. The variation in phrasing may also include grammatical inaccuracies and typos. This sensitivity reduces the consistency of the representation and adds noise and inaccuracies to the system. For the two "Unparsed" questions in our representation test, simple corrections led to successful parses. The corrections are (original → corrected):

What season do these toy's represent? → What season do these toys represent?
Where are these items at? → Where are these items?

There are other cases where a minor phrasing change corrects the representation, as can be seen in Figure 12. An additional parsing limitation is that there is no indication of the coordinating conjunction ('or', 'and') between phrases; hence both are treated as 'and'. As mentioned before, since the questions are free form, they may involve slang, typos or wrong grammar, and the question meaning may not even be clear. For example, the question "How is the table design?" may be the correct intended question, but it may also be that the intended question is "How is the table designed?".

All the questions sampled in this analysis can potentially be represented using the suggested graph representation. This demonstrates that, in general, our scheme has very high representation capabilities. However, some questions require the identification of complicated properties and related terms, e.g. "Is the refrigerator capacity greater than 22 cubic feet?" (similar comparisons of a property's quantity already exist for age). The issue of adding description levels arises for complicated properties that may have a natural representation using properties of properties, e.g.:

Is this the normal use for the object holding the flowers?
How is the table designed?
Where do these animals originate?

In some cases it may be reasonable to alter the exact meaning into a more manageable one, e.g.:

Does this truck have all of its original parts? → Are all the parts of this truck original?

In other checks performed, there were (very few) cases where relations between multiple objects of different types were required (e.g. 'Does this image contain more mountain, sky or grass?'). Support for such cases may be added in the future.

Question Answering

Our current implementation is obviously limited by the number of recognizable visual elements, queried both explicitly and implicitly. It does not include any training on, or adaptation to, any visual question answering dataset. Also, some implementations may be incomplete or arbitrary, e.g. 'location', whose implementation is relative to the image. Answers are, however, mostly self-aware: when running on the VQA dataset (Antol et al., 2015), most answers indicate the unfamiliar visual element that prevents answering (e.g. "Unknown class: linoleum"). Examples with proper answers are shown in Figure 13; they include the use of ConceptNet (Speer & Havasi, 2013) in some cases to obtain prior knowledge regarding related classes (e.g. subclasses) and other commonsense knowledge. Examples with wrong answers are shown in Figure 14. The reasons for failure include detection failures, unknown visual elements, missing prior knowledge and other assumptions.

Further examination of the results provides some insights regarding additional sources of failure. One element that adds "noise" to the system is the use of an internet-based external knowledge database. While providing essential information, the retrieved data is also prone to errors and yields detection attempts of wrong objects. This is demonstrated by the results of querying 'carpet' with the relation IsA, which imply that the following may be a carpet: 'Barack Obama', 'book', 'monitor', 'a plastic bag', 'a glass of water', etc. Another example of such an error is the retrieved relation 'chair IsA door'. A partial solution is using the associated weights, which indicate the strength of each result. Some results may be misleading as they refer to different meanings of the queried words.
Examples of such results are:

'train IsA control'
'monitor IsA track'
'screen door IsA door'

In some cases the intersection of the retrieved classes with the recognizable objects is so small that it may cause a wrong conclusion based on a very superficial check. An example is the question "Are these toys?", where the recognizable retrieved classes are 'bicycle', 'skateboard', 'frisbee', 'kite' and 'motorcycle', leading to the answer 'no' if none of them is detected.

An interesting observation regarding the estimation of some visual elements concerns the generation of color-name maps (Van De Weijer, Schmid, & Verbeek, 2007), which is based on supervised learning (11 optional color names per pixel). When object colors are required, the map is generated for the object's area in the image, and the answer is provided based on the dominant colors. Retrieving object color may appear to be a trivial task, as the intensities of the original RGB image channels should provide the exact color of each pixel. However, such methods fail to obtain the perceived color, which is only weakly related to the levels of the actual RGB channels. Hence, learning methods are incorporated to address this problem, and still there are many inaccuracies. In addition, the required process for obtaining the perceived color of an object is not consistent. This can be seen in the examples of Figure 15, where inquiring for the color of a person requires different color naming and a focus on specific regions; the bus example also requires specific behavior, where the windows and wheels areas of the bus should be ignored. [Figure 15: Demonstration of perceived-color challenges. Each column corresponds to one example: the top image is the input image with markings of relevant results, the bottom image is a map of color names corresponding to the required object, and below them the question and answer are given ("What color is the horse?" - "grey"; "What color is the bus?" - "black"; "Is the man white?" - "yes"). The first column demonstrates classification errors in the generated map of color names due to shading; the second column requires ignoring the windows and wheels areas for an accurate answer; for the third column, only a specific area should be checked and colors should correspond to different names. Object detection is based on faster R-CNN + DeepLab.]
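For illustration, here is a much-simplified stand-in for this color-naming step. The actual system uses the learned 11-color-name model of Van De Weijer et al. (2007); this sketch instead assigns each pixel the nearest of 11 rough reference RGB values and takes the dominant name within the object's segmentation mask. The reference values are assumptions for illustration only, which is precisely why the learned model is needed in practice.

```python
# Simplified dominant-color naming over a segmentation mask.
import numpy as np

COLOR_NAMES = {
    'black': (0, 0, 0), 'blue': (0, 0, 255), 'brown': (139, 69, 19),
    'grey': (128, 128, 128), 'green': (0, 128, 0), 'orange': (255, 165, 0),
    'pink': (255, 192, 203), 'purple': (128, 0, 128), 'red': (255, 0, 0),
    'white': (255, 255, 255), 'yellow': (255, 255, 0),
}

def dominant_color(image, mask):
    """image: HxWx3 uint8 array; mask: HxW boolean segmentation mask."""
    names = list(COLOR_NAMES)
    refs = np.array([COLOR_NAMES[n] for n in names], dtype=np.float32)  # 11x3
    pixels = image[mask].astype(np.float32)                             # Nx3
    # Nearest reference color per pixel (Euclidean distance in RGB).
    dists = ((pixels[:, None, :] - refs[None, :, :]) ** 2).sum(-1)      # Nx11
    counts = np.bincount(dists.argmin(1), minlength=len(names))
    return names[int(counts.argmax())]
```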
As previously mentioned, the parser's sensitivity to phrasing, together with other issues such as its indifference to the type of phrase coordinator ('and', 'or'), causes representation failures or misrepresentations, which result in an inability to provide a correct answer. For example, when 'or' is used (e.g. "Are the flowers yellow or white?"), the answer will always be 'no', as both options are required to be true; hence we get an answer that is irrelevant to the question. Questions may also be misinterpreted due to multiple meanings of words and phrases or subtle differences. As previously discussed, this mainly affects the use of the external knowledge database, where a wide range of concepts may be used, which may lead to an unclear meaning of a concept (e.g. 'train': vehicle vs. learn; 'monitor': screen vs. supervise). Such confusions also happen for the question itself. An example of a misinterpreted question is "What is the table/bus number?", which is interpreted as "What is the number of tables/buses?".

Currently, other than enhancing object detection by attention from question relations, details from the question are not used as hints for the correctness of expressions. A case where such information may be further utilized is when the query is for a property of an object: there may then be a prior assumption, or an increased probability, that such an object exists. Of course, an automatic assumption of existence is not desirable. However, reducing classification thresholds, making additional attempts using hints, and other measures may be utilized to reflect the higher probability of the existence of such an object. For example, given the question "What is the age of the man?", the probability that a man indeed exists in the image should rise, and refuting this assumption should be done only when the evidence is substantial.

Discussion and Conclusions

We have presented an approach to visual question answering that seeks to compose an answering procedure based on the 'abstract' structure of the query. We exploit the compositional nature of the question and represent it as a directed graph, with objects represented as nodes and relations as edges. Each basic component of this graph representation is mapped to a dedicated basic procedure, and the collection of these basic procedures is put together, along with additional required processes, into a complex procedure for the entire query. This procedure incorporates query details and intermediate results and stores them in the graph nodes and in a working memory module. Question relations are used as an attention source to enhance object detection. Queries to an external source of common information are also handled by the procedure, in order to complete the prior knowledge required to answer the question.

Breaking the answering process into basic meaningful components, corresponding to basic logic patterns, enables awareness at each step of the accomplished and unaccomplished parts of the task. This includes recognizing and reporting failures and limitations, which in many cases are corrected and supplied with valid alternatives. Elaborations on the answers are provided according to the stored information. Since the building blocks are simple real-world detectors, the system is modular and its improvement is not bounded.

Human abilities motivate us to examine and handle some complicated attributes that are addressed naturally by humans, even though they may hardly appear in real queries. These attributes, such as 'odd man out', pose representation challenges that require extending the natural graph representation. Currently, a specific configuration is created to represent these attributes; future upgrades may allow handling them more smoothly.

Evaluation of the representation capabilities demonstrated that, even though our scheme can potentially represent practically all queries, the current state of the system is limited. The observed problems include limitations in vocabulary identification, sensitivity to phrasing, and cases of grammatical similarity between different elements (e.g. 'wearing the same color' vs. 'wearing the same pants'). Additionally, some rare representation limitations exist, such as relations between more than two objects of different classes.
Even though the recognition abilities are currently limited by the scope of existing detectors, the system is self-aware and mostly replies by specifying its limitation (which may trigger the addition of the desired detectors to the system). The representation limitations discussed in Section 4.1 are a fundamental source of failures, on top of the accumulated chances of errors of the detectors in use. Our system does not exploit any language bias of the question: the answer is provided exclusively by the procedure evaluating the logic representation of the question. Improvement is ongoing, however, as detectors keep improving and their scope keeps growing.

Current approaches to visual question answering mostly use end-to-end schemes that are very different from our approach. Although some methods include adaptive aspects, the optimization process is more likely to exploit language bias than to learn the complex mechanisms required for proper answering. These methods maximize statistical results, but are likely to fail on subtle yet meaningful cases. This fits the analysis of current models, which demonstrates their tendency to utilize only part of the question, provide the same answers for different images, and fail on novel forms. A combination of the UnCoRd system and an end-to-end model may be beneficial in some cases, for example enhancing UnCoRd's elaborations with an "intuitive" answer (such as when visual elements are unknown).

We have integrated and examined various aspects of answering questions on images using our answering system. Much more research and investigation is required for all these aspects, as well as others. Future research will include learning the representation mapping and making it more robust, further investigating and improving the visual-element analyzers (e.g. combining the object type, when possible, for property detection), and more.
8,012
1810.10656
2898446106
An image-related question defines a specific visual task that is required in order to produce an appropriate answer. The answer may depend on a minor detail in the image and require complex reasoning and the use of prior knowledge. When humans perform this task, they are able to do it in a flexible and robust manner, integrating modularly any novel visual capability with diverse options for various elaborations of the task. In contrast, current approaches to solving this problem by a machine cast it as an end-to-end learning problem, which lacks such abilities. We present a different approach, inspired by the aforementioned human capabilities and based on the compositional structure of the question. The underlying idea is that a question has an abstract representation based on its structure, which is compositional in nature, and can consequently be answered by a composition of procedures corresponding to its substructures. The basic elements of the representation are logical patterns, which are put together to represent the question. These patterns include a parametric representation for object classes, properties and relations. Each basic pattern is mapped into a basic procedure that includes meaningful visual tasks, and the patterns are composed to produce the overall answering procedure. The UnCoRd (Understand, Compose and Respond) system, based on this approach, integrates existing detection and classification schemes for a set of object classes, properties and relations. These schemes are incorporated in a modular manner, providing elaborated answers and corrections for negative answers. In addition, an external knowledge base is queried for the required common knowledge. We performed a qualitative analysis of the system, which demonstrates its representation capabilities and provides suggestions for future developments.
Combining the results of meaningful tasks (rather than using pre-trained networks as visual features), such as object detection, was the focus of several additional works. One such work uses object and attribute recognition for proposed regions and combines them with corresponding representations of the question and a candidate answer @cite_37 . The use of visual concepts (object class and attributes) of attended regions, compared with concepts extracted from the question, was proposed as well @cite_21 . In another work, pairs of vectors representing two detected objects and their properties were concatenated with the encoded question to allow relation reasoning @cite_1 . Objects and the relations between them were utilized in a work that used graph representations for both the image (synthetic images) and the question @cite_49 . For the image graph, objects were the nodes and the spatial relations between them were the edges; for the question graph, words were the nodes and their dependencies the edges. The representations were merged in an attention-like mechanism to fuse the features and predict the answer.
{ "abstract": [ "An important goal of computer vision is to build systems that learn visual representations over time that can be applied to many tasks. In this paper, we investigate a vision-language embedding as a core representation and show that it leads to better cross-task transfer than standard multitask learning. In particular, the task of visual recognition is aligned to the task of visual question answering by forcing each to use the same word-region embeddings. We show this leads to greater inductive transfer from recognition to VQA than standard multitask learning. Visual recognition also improves, especially for categories that have relatively few recognition training labels but appear often in the VQA setting. Thus, our paper takes a small step towards creating more general vision systems by showing the benefit of interpretable, flexible, and trainable core representations.", "A number of studies have found that today's Visual Question Answering (VQA) models are heavily driven by superficial correlations in the training data and lack sufficient image grounding. To encourage development of models geared towards the latter, we propose a new setting for VQA where for every question type, train and test sets have different prior distributions of answers. Specifically, we present new splits of the VQA v1 and VQA v2 datasets, which we call Visual Question Answering under Changing Priors (VQA-CP v1 and VQA-CP v2 respectively). First, we evaluate several existing VQA models under this new setting and show that their performance degrades significantly compared to the original VQA setting. Second, we propose a novel Grounded Visual Question Answering model (GVQA) that contains inductive biases and restrictions in the architecture specifically designed to prevent the model from 'cheating' by primarily relying on priors in the training data. Specifically, GVQA explicitly disentangles the recognition of visual concepts present in the image from the identification of plausible answer space for a given question, enabling the model to more robustly generalize across different distributions of answers. GVQA is built off an existing VQA model -- Stacked Attention Networks (SAN). Our experiments demonstrate that GVQA significantly outperforms SAN on both VQA-CP v1 and VQA-CP v2 datasets. Interestingly, it also outperforms more powerful VQA models such as Multimodal Compact Bilinear Pooling (MCB) in several cases. GVQA offers strengths complementary to SAN when trained and evaluated on the original VQA v1 and VQA v2 datasets. Finally, GVQA is more transparent and interpretable than existing VQA models.", "Visual Question Answering (VQA) is a novel problem domain where multi-modal inputs must be processed in order to solve the task given in the form of a natural language. As the solutions inherently require to combine visual and natural language processing with abstract reasoning, the problem is considered as AI-complete. Recent advances indicate that using high-level, abstract facts extracted from the inputs might facilitate reasoning. Following that direction we decided to develop a solution combining state-of-the-art object detection and reasoning modules. The results, achieved on the well-balanced CLEVR dataset, confirm the promises and show significant, few percent improvements of accuracy on the complex \"counting\" task.", "This paper proposes to improve visual question answering (VQA) with structured representations of both scene contents and questions. 
A key challenge in VQA is to require joint reasoning over the visual and text domains. The predominant CNN LSTM-based approach to VQA is limited by monolithic vector representations that largely ignore structure in the scene and in the form of the question. CNN feature vectors cannot effectively capture situations as simple as multiple object instances, and LSTMs process questions as series of words, which does not reflect the true complexity of language structure. We instead propose to build graphs over the scene objects and over the question words, and we describe a deep neural network that exploits the structure in these representations. This shows significant benefit over the sequential processing of LSTMs. The overall efficacy of our approach is demonstrated by significant improvements over the state-of-the-art, from 71.2 to 74.4 in accuracy on the \"abstract scenes\" multiple-choice benchmark, and from 34.7 to 39.1 in accuracy over pairs of \"balanced\" scenes, i.e. images with fine-grained differences and opposite yes no answers to a same question." ], "cite_N": [ "@cite_37", "@cite_21", "@cite_1", "@cite_49" ], "mid": [ "2604608429", "2771951981", "2786686366", "2522258376" ] }
Understand, Compose and Respond - Answering Visual Questions by a Composition of Abstract Procedures
Human ability to answer a question related to an image is remarkable in several ways. Given a single image, a large number of different questions can be answered about it. Answering these questions may require the detection and analysis of subtle, non-salient cues. Prior information and data obtained through experience are also incorporated into the process, enabling the answering of questions that may be highly complex. The answering process itself is open to reasoning, allowing for example elaborations on the answer, or explanations of how it was reached. In the last few years, the problem of image question answering by a machine was addressed by many studies (Teney, Anderson, He, & Hengel, 2017a; Pandhre ...).

The system we propose and describe in this work handles a wide range of questions about images, without training on any questions (zero-shot learning). We concentrate on designing a general process for this task and not on fitting results to the statistics of a specific dataset, as current end-to-end approaches do. Our system uses many existing methods for different visual tasks, such as detection, classification, segmentation, or extracting objects' properties and relations. In some cases novel detection methods were developed; however, this is not a main focus of the work, as our system is modular, enabling the 'plugging in' of new detectors to enhance its capabilities.

The structure of questions

A central aspect of our scheme is that different questions share a similar structure or subcomponents with similar structure. For instance, the following questions have components with a common structure:

What kind of pants is the person on the bed wearing? → person on bed
Is the giraffe behind a fence? → giraffe behind fence

The part with the common structure can be represented as: there exist X of class c_x and Y of class c_y, such that r(X, Y). Such structures may serve as building blocks for a compositional question representation: all components with similar structures can be handled by the same procedure, performing part of the answering task. In our analysis, questions could be represented by a combination of a few types of structures, which we refer to as "basic patterns". These patterns are short parametric logical phrases that represent an atomic segment of the question structure. Each basic pattern dictates a particular implementation scheme utilizing a pool of implemented building blocks, and the combination of basic patterns determines the entire procedure for answering the question.

One advantage of such a scheme is that it is modular, allowing the addition of building blocks to increase the scope of the scheme, with no dependency on the statistics of a specific visual-questions dataset. A second advantage is that the coverage of queries grows exponentially with the number of building blocks, without the need to encounter such queries as training examples. An additional advantage is "understanding" capabilities: the basic meaningful components break the process apart and allow a separate analysis of each component, including reasons for failure and explanations.

The aspect of question coverage is also addressed in other directions. One such direction is increasing the recognizable vocabulary of the question using commonsense knowledge.

Utilizing commonsense knowledge

In many cases answering a question requires the integration of prior commonsense knowledge, especially about semantic relations between concepts. For example, when answering the question 'What animal is this?', detection capabilities for specific animals (e.g. horse, dog, cat) will not suffice, since the answer requires the general notion of 'animal' and which particular instances belong to it; a query to an external knowledge database (e.g. ConceptNet) may provide these subcategories, and the corresponding specific detectors can then be activated.
detection capabilities of specific animals (e.g. horse, dog, cat) will not suffice, since the answer requires the general notion of 'animal' and which particular instances belong to it. However, a query to an external knowledge database (e.g. ConceptNet (Speer & Havasi, 2013)), may provide subcategories of 'animal'. Consequently, specific detectors can be activated to seek these specific recognizable animal types. These knowledge databases are mostly based on information extracted from the internet and include commonsense information about the world. Querying such a database allows the completion of missing information such as semantic connections between object's classes (e.g. synonym, superordinate, subordinate) as in the example above, the typical usage of different objects, and more. Integrating this type of information is important when answering questions asked by humans, as it is common knowledge and treated as universally available. UnCoRd Answering System Approach Overview Our Understand, Compose and Respond (UnCoRd) approach is based on the following observations: • There is a representation of the question in terms of objects, their classes, properties and relations, including quantifiers and logical connectives as well as non logical symbols: predicates and functions. The representation has an 'abstract' structure, i.e. independent of the particular objects, classes, properties and relations that are represented as parameters. A single abstract representation can represent many different concrete questions. Our main thesis is that the procedure to be applied for obtaining the answer depends on the abstract structure of the question and not the particular elements. Hence, it is important to use the right kind of abstract representation, which will allow this mapping to procedures (where all questions with the same abstract structure require the same procedure). A proper parsing and mapping of the language question to its abstract representation should be obtained to use this method. • The question has a compositional structure: there are basic components put together in particular ways. The abstract representations are composed from 'basic patterns' and methods for putting them together into more complex compound structures. This compound structure determines how the procedures are constructed. There are basic procedures for the basic patterns, and methods of composing from them a more complex procedure to deal with the compound abstract structures. In other words, we get a procedure for the entire question by having procedures for the basic components and a procedure to put them together. We would like our system to meet the following criteria: -Answer correctly and efficiently. -"Understanding" the question, in the sense of: • Breaking the answering procedure into a set of simple visual tasks. • Identify which tasks it can perform and what are its limitations. Indicate if something is missing or unknown. • Ability to explain and reason -elaboration of the answering process using the image and intermediate results, including error correction and alternative suggestion. -Modularity and robustness: handling questions and image categories of various types, not limited by a training set. -Though not using a human psychology model, the ability to handle questions that people answer easily (and may be "hard" for computers) is desired, e.g. 'odd man out'. A question can be seen as a statement about the image that the answering system tries to make true or refute. 
Making the statement true requires an assignment of the particular classes, properties and relations to the image. Their identification in the image is based on pre-trained classifiers and detectors. The recognizable set is modular and can be increased by adding new detectors or switching to stronger ones. Logical operations will be used to generate logic sentences with a formulation that fits first order logic (including functions) with some extensions. The answering procedure is generated according to the input question in the following manner: Question → Question representation → procedure A proper representation is fundamental to allow a successful mapping of the question into the answering routine. This representation should be concise and support generating the same procedure when applied to similar structured questions with different choices of classes, properties and relations. To obtain that, the visual elements (object classes, object properties and object relations) would be parameters, integrated using logic operations (e.g. ∧, ∨) and quantifiers (e.g. ∀, ∃, ∃5 ) into basic logic patterns corresponding to specific structures. These patterns are combined and merged to compose a more complicated structures that create the representation of the question and can be mapped to the answering procedure. We use a directed graph to describe the question which is a natural choice in our case and allows diverse compositions of substructures. In this graph each node represents an object entity and its description (e.g. a list of required properties). These nodes are linked by the graph edges which represents relation between objects. The graph is divided into small segments that relate either to one node and correspond to part of its information (e.g. object class and one property) or to an edge and the two classes of the nodes it connects. Each of these graph segments matches a basic pattern that is handled by a corresponding procedure, using the specific visual elements of this substructure. The graph representation allows to decompose the answering procedure into a set of elementary procedures and put them together to generate a modular answering procedure. The elementary procedures invoke visual analyzers, which are the basic modules of the process. Each class, property and relation, has a visual analyzer to establish it. More general visual operations that serve more than one particular visual element (e.g. depth estimation) are activated according to need and their results are available to all basic procedures. The overall routine is obtained by applying these procedures and operations at an appropriate order, to appropriate objects, where the amount of required assignments per object are set by the quantifier of the corresponding node. The visual elements may have 'types', such as classes that can be basic or subordinate (i.e. basic with additional properties), properties that may be comparative (e.g. 'older than') and relations which can be symmetric (e.g. 'beside') or not. The entire process of answering a visual question is described in Figure 1. It starts by receiving the input language question and mapping it to a graph representation. The next stage is running a recursive procedure that follows the graph and invokes the procedures associated with the basic structures, using the specific visual elements as inputs. After the results are obtained, the answer is returned. Questions with a simple structure (e.g. 
"Is there a red car?") can be represented by matching one specific pattern to a question. This covers a wide range of questions, however by allowing a composition of simple patterns, into a more complicated structures, the quantity of supported questions is raised substantially (from ∼60% to ∼90%, according to an analysis of 542 questions on images asked freely by people and using a set of 12 patterns). This composition is done using a graph. For example in the question "Is there a red car to the right of the yellow bus? " there are two parts with a simple structure "Is there an object of class c with a property p?" connected by the relation "to the right of", which corresponds to another simple structure: "Is there an object of class c 1 and an object of class c 2 that have the relation r between them?". The graph representing the question is: Map into a graph representation question Run a recursive procedure following the graph image Answer When a specific question is given, the question is parsed and mapped to a directed graph, where the visual elements are its parameters. This graph corresponds to a logic expression that is composed of simple expressions, that may share the object variables. Some of the parametric visual elements are variables that require estimation based on the image. Once the variables are estimated, the logic expression is evaluated (as true or false) and the query is answered accordingly. The formulation of the logic expression fit first order logic (including functions) with some extensions (e.g. a variable-sized set of arguments or outputs for some functions). Each simple logic expression is related to a basic pattern, which corresponds to a basic procedure. The basic procedure obtains an answer to the expression by activating visual analyzers according to the types of object classes, properties and relations (which are inputs to the basic procedure). Such a system will have the ability of constant improvement by adding detectors for new classes, properties and relations according to requirements. Similar characteristics are also evident in human learning, where new learned details are integrated into the existing mechanism of world perception. The UnCoRd system is implemented following the approach described above. It answers visual questions using a composed process that follows the graph representation of the question, activating real world visual analyzers. This system is described in the following section. System Description Mapping to a Directed Graph One of the system's main tasks is to translate the query, given in natural language, into an abstract representation which will then be mapped into a procedure (the first step, described in Figure 1). We first use the START parser (Katz, 1988(Katz, , 1997 The generated set of ternary expressions is used for the generation of a graph representation, where nodes represent objects and edges represent relations between objects. The node include all of the object's requirements according to the question, mainly its class, properties that may be required (e.g. 'red') or queried (e.g. 'what color') and quantifiers that are not the default existence quantifier (e.g. 'all', 'two'). The directed edges correspond to relations between objects where the edge direction implies the direction of relation. Each edge is also assigned a direction of progress for the answering procedure. 
An example of mapping a question to a directed graph can be seen in Figure 2. The graph representation is used to fit an answering procedure to each particular question. Fragments of information are extracted from subgraphs that include up to two connected nodes. A graph fragment includes a subset of elements (classes, properties, property functions and relations) that has a mapping to one of a few basic logic patterns. This mapping, combined with the particular accompanying visual elements, defines a logic expression that selects and guides a component of the answering procedure. For example, a fragment consisting of a node's class and a required property is mapped to the pattern ∃X (c_X(X) ∧ p_X(X)). The specific class c_X and property p_X define the particular logic expression that should be checked. Such mappings are done for the entire graph, where each fragment is mapped into a basic logic pattern and specific visual elements. These simple logic expressions, joined using logic operations, constitute one logic expression that represents the entire question. Each basic logic pattern has a dedicated procedure that performs the evaluation required to confirm or refute it, using visual analysis of the image. The procedure provides an answer according to an accompanying query.

We use the following notation for describing the basic logic patterns:
X, Y denote objects.
c(X) is a class, evaluated for object X (as True/False), e.g. 'person', 'boy', 'bird', 'train'.
p(X) is a predicate property (predicate of arity 1), evaluated for object X (as True/False), e.g. 'blue', 'male', 'big'.
f(X) is a property function; it returns properties of a specific type, e.g. 'color', 'age', 'size'.
g(S_t) is a global property function for a subset of objects of the same class, S_t ⊂ {X_t : c_t(X_t)}; it returns properties of a specific type, e.g. 'quantity', 'difference', 'similarity'.
p_f is a predicate property, constrained to the possible return values of f(X) (e.g. blue = color(X), male = gender(X), big = size(X)).
a_g is one of the possible values returned by g(S_t) (e.g. 3 = quantity(S_t), where S_t = {X_t : c_t(X_t)}).
r(X, Y) is a relation between objects X and Y (predicate of arity 2), e.g. X below Y → below(X, Y), and in the same manner looking_at(X, Y), near(X, Y).
?- denotes a query, the requested answer.
Objects (or other elements) starting with a capital letter (e.g. X, Y) are unknown elements (variables) that should be estimated according to the image.

The particular patterns used were selected since they provide a small, simple and basic set that can naturally compose the logic representation of the question. This small set provides high flexibility in composing a wide variety of logic expressions using the different visual elements. From a conducted survey and other checks, it was evident that this set is empirically sufficient to represent the set of analyzed queries. Following are the basic logic patterns that are mapped to basic procedures in the question answering process (each followed by its corresponding graph fragment). The ∃ quantifier may be replaced by other quantifiers (e.g. ∀, ∃2).

• Property Existence: ∃X (c_X(X) ∧ p_X(X)); ?-∃/c_X
  [graph fragment: a node with c: c_X, p: p_X]
  Examples: 'Is there a brown bear?' (query for validity with a specific object class); 'What is the purple object?' (unknown and queried object class).
An example of a modification due to a quantifier parameter: ∀X (c_X(X) ∧ p_X(X)); ?-∃, e.g. 'Are all bears brown?'

• Function Property: ∃X (c_X(X)), f(X) = P_f; ?-P_f
  [graph fragment: a node with c: c_X, f: P_f]
  Example: 'What color is the chair?'

• Property of a Set: ∀X_t ∃S_t (S_t = {X_t : c_t(X_t)}), g(S_t) = A_g; ?-A_g
  [graph fragment: a node with c: c_X_t, g: A_g]
  Example: 'How many planes are in the photo?'

• Object Existence: ∃X (c_X(X)); ?-∃/c
  [graph fragment: a node with c: c_X]
  Examples: 'Is this a dog?'; 'What is it?'

• Relation Existence: ∃X ∃Y (c_X(X) ∧ c_Y(Y) ∧ r(X, Y)); ?-∃/c_X/c_Y
  [graph fragment: nodes with c: c_X and c: c_Y, connected by an edge r]
  Examples: 'Is the man looking at the children?' (validity query); 'What is on top of the television?' (query for one of the classes).

The combination and composition of these patterns has powerful representation capabilities and provides a mapping to a set of basic procedures that constitute the full answering procedure. Composing the procedure from "real-world" visual tasks allows both the use of existing detectors (including separate improvement of each task) and the ability to explain, elaborate and correct. As mentioned above, modified quantifiers may be added to nodes according to the number of objects required in the question (see Figure 2). These quantifiers may be either numbers (e.g. 'Are there three guys?') or 'all' for an entire group of objects. The group may be set according to subtle phrasing differences, which affect the answering procedure's flow and results, as can be seen in Figure 3. The graph naturally represents objects, their properties and binary connections between them. Though this covers a wide variety of questions, using global image information and some extensions to the basic graph increases the support for additional attributes. Property of a group is an example of such an extension. Properties that use global information are 'closest' and 'size' (which is relative to other objects). Specific implementations for complicated attributes may be added as dedicated tasks or by preprocessing that breaks the attribute into segments the graph can accommodate. An example of such an implementation in our system is 'odd man out' (e.g. "How is one cow not like the others?"), where the relations 'diff<f>' and 'sim<f>' (for different and similar values of property f, respectively) are used to check and compare the properties of objects. An example is given in Figure 4. The 'similarity' attribute (a query for a property that is similar for all objects in the group) is handled in the same manner. The main building blocks of the question representation are the visual elements: object classes, object properties and object relations.

[Figure 3: The question in (a) requires all 'dog' objects to be both black and small, hence the first dog that is not black renders the logic phrase false and the answer is "no" (the failed object and reason are marked in the image). The question in (b) requires only that the black dogs be small, hence all dogs are checked for color, and the size of the black ones is verified to be small. Since this holds, the answer is "yes".]

A minimal sketch of how such a quantified pattern can be evaluated is given below.
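To illustrate how a basic procedure might evaluate the Property Existence pattern under different quantifiers (the behavior contrasted in Figure 3), here is a minimal sketch. It is our own illustration, not the paper's code: detected objects are simplified to dictionaries, and the property check stands in for a real visual analyzer.

```python
from typing import Dict, List

def property_existence(objects: List[Dict], obj_class: str, prop: str,
                       quantifier: str = "exists") -> bool:
    """Evaluate ∃X (c_X(X) ∧ p_X(X)) and its quantifier variants."""
    instances = [o for o in objects if o["class"] == obj_class]
    checks = [prop in o["properties"] for o in instances]
    if quantifier == "exists":        # 'Is there a brown bear?'
        return any(checks)
    if quantifier == "all":           # 'Are all bears brown?' (false if no bear is found)
        return bool(instances) and all(checks)
    if quantifier.isdigit():          # e.g. ∃3: 'Are there three brown bears?'
        return sum(checks) >= int(quantifier)
    raise ValueError(f"unsupported quantifier: {quantifier}")

# usage
objs = [{"class": "bear", "properties": ["brown"]},
        {"class": "bear", "properties": ["white"]}]
property_existence(objs, "bear", "brown")           # True  (some bear is brown)
property_existence(objs, "bear", "brown", "all")    # False (one bear is white)
```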
• Object Classes
The object class is the category of object required by the question. It does not necessarily match the object detector used. To enlarge the coverage of supported object classes, we define a few categories of object classes and handle them accordingly.

-Basic Classes: These are the classes specifically covered by the main multi-class object detector. We currently use instance segmentation by mask R-CNN (He, Gkioxari, Dollár, & Girshick, 2017) for the 80 classes of the COCO dataset (Lin, Maire, Belongie, Hays, Perona, Ramanan, Dollár, & Zitnick, 2014). Having the segmented object is very useful, as this accuracy is required in many cases (e.g. for the relation 'touch'). Other detection methods are also integrated and may be used instead.

[Figure 4: 'odd man out' is a complicated attribute that requires special treatment and mapping to the graph representation. Bounding boxes are marked for the birds with the common property (red) and for the 'odd man out' bird (yellow). Object detection is based on faster R-CNN + DeepLab.]

-Subordinate Classes: Subordinate classes are basic classes with additional property requirements. For subordinate classes of 'person', face detection (Mathias, Benenson, Pedersoli, & Van Gool, 2014) is applied to the detected 'person' objects, followed by an age and gender classifier (Levi & Hassner, 2015) on the results (an example is demonstrated in Figure 5).

-Superordinate Classes: Each category of a superordinate class includes a few basic classes (for example, furniture or animal). To check this, we use ConceptNet (Speer & Havasi, 2013), a commonsense knowledge database based on data extracted from the internet (see also section 3.2.2). It includes concepts and predefined relations between them. We use the relations 'InstanceOf', 'IsA', 'MadeOf' and 'PartOf' with the requested class, and keep the results that fit our basic classes list (a query sketch is given at the end of this subsection). The detected objects of these classes are retrieved and used for the rest of the procedure. Also, if the query is for the type of the requested superordinate class, the name of the detected basic class is given as an answer (see Figure 5 for an example).

-Similar Classes: A class that has a synonym or a very similar class in the basic classes set may also be searched as this corresponding class. These correspondences are extracted using the 'Synonym' and 'SimilarTo' relations in ConceptNet.

-A Group of Objects: To identify a class that represents a group of objects (possibly of different optional basic classes), the ConceptNet relation 'MemberOf' is used (e.g. flock → bird, sheep; fleet → bus, ship...). A quantity requirement of at least two objects is added (demonstrated in Figure 5).

-Sub Objects: Some objects are parts of 'known' objects and can be extracted according to the detection of the host object and additional processing. We apply human pose estimation (Chen & Yuille, 2014) to obtain the different body parts when requested (e.g. 'left/right hand', 'left/right foot'). Relative areas of objects (e.g. 'the middle of the bus') are also treated as sub objects. In these cases, left and right differ from other uses of left/right as a location property (e.g. 'the left box'). A 'shirt' is also treated as a sub object, corresponding to the torso area provided by the human pose estimation results (an example is given in Figure 5).

• Object Properties
Objects have various visual properties. We differentiate between binary properties (e.g. 'red') and function properties that return the property of the object from a specific category (e.g. 'color'). Table 1 describes the used set of properties, most of them divided into groups of function properties.

Table 1: The set of properties used.
Properties' group              Predicate properties
color/colors                   11 colors (e.g. 'black', 'blue', ...)
age^a                          ages and age inequalities (based on 8 age groups)
gender^a                       female/male
location^b (e.g. where)        spatial image location (e.g. 'bottom (of the image)')
relative location^bc           location relative to other objects (e.g. 'the left dog')
type                           subclass (when available)
size                           'small', 'big', 'average'
quantity^d                     number of objects
difference^d (odd man out)     no direct binary property
similarity^d                   no direct binary property
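As a concrete illustration of the ConceptNet queries used above for superordinate, similar and group classes, here is a minimal sketch against the public ConceptNet 5 web API. The endpoint usage and the filtering against the basic classes list are our assumptions about how such a query could be implemented; the paper does not specify its exact interface.

```python
import requests

BASIC_CLASSES = {"dog", "cat", "horse", "bird", "sheep", "cow"}  # a subset of the COCO classes

def subcategories(superordinate: str, relation: str = "IsA") -> set:
    """Return the basic classes that ConceptNet links to `superordinate`,
    e.g. subcategories('animal') should contain 'dog', 'cat', ..."""
    resp = requests.get("http://api.conceptnet.io/query",
                        params={"end": f"/c/en/{superordinate}",
                                "rel": f"/r/{relation}", "limit": 1000})
    edges = resp.json().get("edges", [])
    found = {e["start"]["label"].lower() for e in edges}
    return found & BASIC_CLASSES     # keep only classes our detectors can find

# usage: which recognizable classes are animals?
print(subcategories("animal"))
```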
• Object Relations
Relations between two objects are represented by the directed graph edges. The detection of relations varies: some require "simple" information (e.g. 'to the right of') while others require complicated visual features (e.g. 'wearing'). We combine specific rule-based detection for some relations and a deep neural network for others.

-Rule-based relation classification: Based on spatial checks, using (when needed) morphological methods, depth estimation (Liu, Shen, Lin, & Reid, 2016), face detection (Mathias et al., 2014), face key-point detection (Zhu & Ramanan, 2012) and gaze estimation (Recasens*, Khosla*, Vondrick, & Torralba, 2015). Simplifications and compositions of relations are used, as well as commonsense knowledge (by querying ConceptNet (Speer & Havasi, 2013)). A special type of relations are the comparison relations, sim<f> and diff<f>, which check similarity or difference of the function property f, respectively.

-Deep neural network classifier: Based on the DR-Net method (Dai, Zhang, & Lin, 2017) for relation predicate classification. This method, like other visual relation detectors, utilizes object detection. To avoid coupling relation detection with object detection, which would reduce the robustness of our system, and yet exploit object detection when possible, we added a layer that was trained to project a closeness measure based on the GloVe word embedding (Pennington, Socher, & Manning, 2014) and generate a representation for any object class. This way, object classes that were not trained for the relation classification still have a representation projected onto the DR-Net object classes vector. We use the version trained for the 70 relations of the VRD dataset (Lu, Krishna, Bernstein, & Fei-Fei, 2016a). Since relations are also used as an attention source for object detection (see 3.2.2), inverse relations are matched to each relation when possible. This way, attention can be used for both directions of the relation.

Recursive Procedure

The final stage of answering the question is activating a recursive procedure that follows the graph nodes and edges, invokes the relevant basic procedures and integrates all the information to provide the answer. A basic scheme of the procedure is given in Figure 6 and in Algorithm 1.

[Figure 6: A scheme of the recursive answering procedure, including its 'External Knowledge' and 'Working Memory' components. At each step the current node (cur_node) is set and the objects are examined according to the node's requirements. If this succeeds, a new cur_node is set (according to a relation or the next global parent node) and the function is called again to handle the subgraph starting from it. The required visual elements: c: object class, p_i: an object property, f: function property, g: property of a set, r_i: a relation. The daughter object detection is activated only when none was detected in previous stages. Note that the estimated maps of depth and color names are calculated by the procedure according to need.]

The first step is a preliminary object detection, carried out by applying instance segmentation on the image. Then, a recursive function (getGraphAnswer) is invoked for node handling (starting at a global parent node). It runs specific procedures that activate visual analyzers to check the requirements (properties, relations) and fetch the required information (function property).
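To make the recursion concrete, here is a minimal sketch of such a traversal. It is a simplification under our own assumptions (existence quantifiers only, relations stored as adjacency between detected objects); the actual Algorithm 1 includes more bookkeeping, such as quantifier handling and guided re-detection.

```python
def get_graph_answer(node, objects, memory):
    """Recursively verify a question node on detected objects.
    A detected object looks like: {"class": "car", "properties": ["red"],
    "color": "red", "relations": {"to the right of": [<other object>]}}"""
    valid = [o for o in objects
             if o["class"] == node["class"]
             and all(p in o["properties"] for p in node.get("properties", []))]
    if not valid:
        return False, "There is no " + node["class"]      # failure report, UnCoRd-style
    memory[node["class"]] = valid                          # working-memory store
    for edge in node.get("edges", []):                     # recurse along relation edges
        linked = [b for a in valid
                  for b in a["relations"].get(edge["relation"], [])]
        ok, answer = get_graph_answer(edge["target"], linked, memory)
        if not ok:
            return False, answer
    if "query" in node:                                    # a queried function property
        return True, valid[0][node["query"]]
    return True, "yes"

# usage: "Is there a red car to the right of the yellow bus?"
bus = {"class": "bus", "properties": ["yellow"], "relations": {}}
car = {"class": "car", "properties": ["red"],
       "relations": {"to the right of": [bus]}}
q = {"class": "car", "properties": ["red"],
     "edges": [{"relation": "to the right of",
                "target": {"class": "bus", "properties": ["yellow"]}}]}
print(get_graph_answer(q, [car, bus], {}))                 # (True, 'yes')
```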
The retrieved objects that fulfill the requirements are coupled to the corresponding question objects, so that subsequent checks are held on the same objects. The number of required objects is set mainly according to the quantifiers. Once a node's checks are completed, the same function (getGraphAnswer) is invoked for the next node. The next node is determined according to a relation (graph edge) or the next global parent node. Once all nodes are queried, the checks for the entire set are activated (if needed). Answers are provided by all basic procedures, and the final answer is set according to precedence (e.g. a queried property type has priority over binary answers).

[Algorithm 1 (closing fragment): if success ∧ ¬empty(g) then answer = g(valid_objs); return answer. Notes: (a) according to object detection and previous checks; (b) according to quantifiers and other requirements; (c) either to a daughter node or to the next global parent node.]

Working Memory

The global information gathered through the answering process is stored in a "Working Memory" component. It stores the calculations that may be required at several stages of the process. This information is calculated only if needed, and includes objects and their retrieved data, the depth map, the current node, the currently used objects and more.

Common Knowledge

When a person answers a visual question, prior common knowledge plays an important role. This includes connections between classes, famous brands and logos, knowing the role and characteristics of objects and actions, anticipating the future, knowing which details to ignore, and more. Some of the issues related to prior commonsense knowledge are addressed by our system. The main uses of prior knowledge are common relations in images (using the Visual Genome dataset (Krishna, Zhu, Groth, Johnson, Hata, Kravitz, Chen, Kalantidis, Li, Shamma, et al., 2017)) and commonsense knowledge on categories of objects, as well as connections between them (using ConceptNet (Speer & Havasi, 2013)).

• Visual Genome Dataset: The Visual Genome dataset (Krishna et al., 2017) contains (among many other annotations) objects and binary relations between them for a set of 108,077 images. Common relations involving specific objects are extracted from this dataset (on demand) and used as prior knowledge to assist detection. It allows refining the search area when an object is not detected by the initial detection, as described below and demonstrated in Figure 7.

• ConceptNet: To obtain general commonsense knowledge, we use the ConceptNet database (version 5) (Speer & Havasi, 2013). The source of information for this database is the internet (results from additional databases are also incorporated). It allows querying for concepts and relations between them of the form concept1 -relation→ concept2 (e.g. horse -IsA→ animal). The query is performed by providing two of the triplet [relation, concept1, concept2] and querying for the third. These common knowledge relations provide complementary capabilities for answering 'real world' questions in which such common knowledge is assumed. We currently use ConceptNet mainly to extend the understanding of object classes (e.g. superordinate classes, similar classes), as described for example in section 3.2.1. Examples of questions involving connections between classes are given in Figure 5.

Guided Object Detection

A question may refer to specific objects in the image that may be hard to detect (e.g. due to size, occlusion or clutter). When a requested object is not detected on the first attempt (searching the entire image), additional attempts are made. These attempts focus on regions where the object has a higher probability of being found, using relations with detected objects as an attention source. Two sources of such attention are used.

• Attention by common relations: The source of this attention is the Visual Genome dataset (Krishna et al., 2017), where objects and relations between them are annotated in images (see also section 3.2.2). We seek the most common relation of the requested object (with an object from our known classes' set) and a corresponding relative location. Then, if the other object is found, we activate the object detector on the relevant area. An additional search area is obtained from the relation's spatial constraints. An example of using common relations as attention is given in Figure 7. [Figure 7: (a) detection results on the entire image; (b) the bottle detected using common relations.]

• Attention by question relations: The question itself may include relations that can assist detection by focusing on relevant areas. Since the processing follows the question graph representation, relation edge directions are modified to go from detected to undetected objects. This allows using relations with a verified detected object as detection guidance for undetected objects, in the same manner described above. The usage of this type of attention is demonstrated in Figure 8. [Figure 8: (a) detection results on the entire image; (b) the clock detected using the question relation.]

A minimal sketch of the common-relations attention is given below.
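The following sketch illustrates attention by common relations under our own simplifying assumptions: the (relation, partner) statistics are assumed to be precomputed from Visual Genome into a dictionary, and the spatial constraint is reduced to an enlarged box around the detected partner.

```python
def attention_region(missing_class, detections, common_relations, margin=0.5):
    """Return a region (x1, y1, x2, y2) in which to re-run the detector for
    `missing_class`, based on its most common relation with a detected object.
    Returns None when no hint applies; in practice, clip the box to the image."""
    for relation, partner in common_relations.get(missing_class, []):
        for det in detections:
            if det["class"] == partner:
                x1, y1, x2, y2 = det["box"]
                w, h = x2 - x1, y2 - y1
                if relation == "on":          # e.g. a bottle on a table: search above it
                    return (x1 - margin * w, y1 - (1 + margin) * h,
                            x2 + margin * w, y2)
                return (x1 - margin * w, y1 - margin * h,   # generic: enlarged partner box
                        x2 + margin * w, y2 + margin * h)
    return None

# usage, with an illustrative statistic: 'bottle' is most often 'on' a 'table'
common = {"bottle": [("on", "table")]}
dets = [{"class": "table", "box": (100, 200, 300, 300)}]
print(attention_region("bottle", dets, common))
```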
"Understanding" Capabilities

Having a system that breaks the visual answering task into real-world sub-tasks has many advantages. Beyond the ability to modify and improve modules separately, the meaningful, compositional process is leveraged to provide information derived from the internal processing. Failure reasons and verified alternatives are provided, as well as elaborations on detected objects.

Provide Alternatives/Corrections

When the logic expression representing the question is not valid for the given image, alternatives for the failed part are searched, such that a close expression may be validated and provided as a supplement to the answer. The checks include alternative objects, relations and also properties, according to the following:
• For failed object classes, alternative classes are checked.
• Real properties are specified for objects with failed properties.
• For failed relations, alternative relations are checked.
• Additional attempts are made with close 'person' subordinate classes (e.g. when failing to classify a person as a woman, other sub-person classes are checked).
Examples are given in Figure 9 (note that some include multiple rounds of attempts); a minimal sketch of this search is given below.
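A minimal sketch of such an alternatives search, assuming a hypothetical table of 'close' substitutions and reusing the simplified object dictionaries from the earlier sketches (the real system verifies each alternative with its visual analyzers):

```python
def suggest_alternative(failed_check, objects, alternatives):
    """Probe close substitutions for a failed class or property check and
    report a verified near-miss, in the spirit of UnCoRd's corrections."""
    kind, value = failed_check            # e.g. ("class", "woman") or ("property", "red")
    for candidate in alternatives.get(value, []):
        if kind == "class" and any(o["class"] == candidate for o in objects):
            return f"There is no {value}, but there is a {candidate}"
        if kind == "property" and any(candidate in o["properties"] for o in objects):
            return f"It is not {value}, it is {candidate}"
    return None

# usage: failing to find a 'woman', probe other 'person' subclasses
alts = {"woman": ["man", "girl", "boy"]}
objs = [{"class": "man", "properties": []}]
print(suggest_alternative(("class", "woman"), objs, alts))
```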
Answer Elaboration

During the answering process, related information may be accumulated for verifying the logical expression representing the question. This information is provided as part of the answer, explaining and elaborating it. The following supplements are included:
• If object detection was achieved via a related class (e.g. a synonym, parts of a group, subordinate classes), it is specified in the answer (including the numbers of each subclass).
• The hint relation used as attention for object detection is indicated (if used).
• If queried function properties (e.g. color) differ among the relevant objects, the property of each object is specified.
Some examples can be seen in Figure 10.

Integration in Related Applications

As the answering process accumulates real "knowledge" related to the image, it may be saved and used for extended applications. One of them may be a discourse on the image, where follow-up questions may be answered. An additional application may be the correction of image captions (Bernardi, Cakici, Elliott, Erdem, Erdem, Ikizler-Cinbis, Keller, Muscat, Plank, et al., 2016), where a caption may be transformed into a question whose answer may verify or correct it (as described in Section 3.3.1). An example of image caption correction is given in Figure 11.

Results Analysis

Our system is currently limited by the visual elements it is able to recognize. It is not trained or optimized for any visual question answering dataset. Since our goals include question "understanding" and modularity, we first focus on basic capabilities that will be developed over time to be more comprehensive. We have checked our system for various aspects and specific examples, and provide an analysis. We have examined the graph representation for a random set of questions to see the current status as well as the potential.

Question Representation

First we check the representation capabilities of our system. To do that, we randomly sampled 100 questions from the VQA dataset (Antol et al., 2015) and checked their graph representation. The results are given in Table 2.

Table 2: Representation results on a random set of 100 questions from the VQA dataset (Antol et al., 2015).
                      Current   Potential
Fit                   72        100
No fit: Vocabulary    12        -
No fit: Other         14        -
Unparsed              2         -

The vocabulary 'no fit' cases are misrepresentations due to failures in phrase recognition. 'Unparsed' are questions that START couldn't parse. The 'Potential' column represents questions that may be represented by the graph.

[Figure 11: An example of image caption correction. The image caption is the result of the NeuralTalk model (Karpathy & Fei-Fei, 2015). Caption: 'a man sitting on a bench with a large umbrella'. Q: 'Is there a man on a bench with a large umbrella?' A: 'There is no bench. There is no man. There is no umbrella. There is nothing on an object.' Existing alternative relations: 'boat in front of a bird'.]

It is not always clear whether a representation is accurate, as in some cases a representation may fit the language structure but be less accurate for the actual meaning. For example, a simple representation of the question "Is this picture in focus?" may be a node (c: focus) connected by the relation 'in' to a node (c: picture). However, 'in focus' represents a single element and should be recognized as such. This demonstrates the importance of vocabulary knowledge. In another example, the following questions have a similar structure: 'Are they all wearing the same color?' and 'Are they all wearing the same pants?'. However, 'color' and 'pants' belong to two different types of visual elements, and hence the questions should have different representations. Sometimes minor phrasing changes have a substantial effect on parsing and representation. The variation in phrasing may also include grammar inaccuracies and typos. This sensitivity reduces the consistency of the representation and adds noise and inaccuracies to the system. For the two "Unparsed" questions in our representation test, simple corrections led to successes. The corrections are (original → corrected):
What season do these toy's represent? → What season do these toys represent?
Where are these items at? → Where are these items?
There are other cases where a minor phrasing change corrects the representation, as can be seen in Figure 12. An additional parsing limitation is that there is no indication of the coordinating conjunction ('or', 'and') between phrases; hence both are treated as 'and'. As mentioned before, since the questions are free-form, they may involve slang, typos or wrong grammar. The question's meaning may even be unclear. For example, the question 'How is the table design?' may be the question actually intended; however, the intended question may also have been 'How is the table designed?'. All the questions sampled in this analysis can potentially be represented using the suggested graph representation. This demonstrates that, in general, our scheme has very high representation capabilities. However, some questions require the identification of complicated properties and related terms, e.g. "Is the refrigerator capacity greater than 22 cubic feet?" (similar comparisons of a property's quantity already exist for age). The issue of adding description levels arises for complicated properties that may have a natural representation using properties of properties, e.g. 'Is this the normal use for the object holding the flowers?', 'How is the table designed?', 'Where do these animals originate?'. In some cases it may be reasonable to alter the exact meaning into a more reasonable one to handle, e.g. 'Does this truck have all of its original parts?' → 'Are all the parts of this truck original?'. In other checks performed, there were (very few) cases where relations between multiple objects of different types were required (e.g. 'Does this image contain more mountain, sky or grass?'). Support for such cases may be added in the future.

Question Answering

Our current implementation is obviously limited by the number of recognizable visual elements, queried both explicitly and implicitly. It does not include any training or adaptation to any visual question answering dataset. Also, some implementations may be incomplete or arbitrary, e.g. 'location', whose implementation is relative to the image. Answers are, however, mostly self-aware. When running on the VQA dataset (Antol et al., 2015), most answers indicate the unfamiliar visual element that prevents answering (e.g. "Unknown class: linoleum"). Examples with proper answers are shown in Figure 13. They include the use of ConceptNet (Speer & Havasi, 2013) in some cases to obtain prior knowledge regarding related classes (e.g. subclasses) and other commonsense knowledge. Examples with wrong answers are shown in Figure 14. The reasons for failures include detection failures, unknown visual elements, missing prior knowledge and other assumptions. Further examination of the results provides some insights regarding additional sources of failure. One element that adds "noise" to the system is the use of an internet-based external knowledge database. While providing essential information, the retrieved data is also prone to errors and yields detection attempts for wrong objects. This is demonstrated by the results of queries for 'carpet' with the relation IsA, which imply that the following may be a carpet: 'Barack Obama', 'book', 'monitor', 'a plastic bag', 'a glass of water', etc. Another example of such an error is the retrieved relation 'chair IsA door'. A partial solution is using the associated weights that indicate the strength of each result. Some results may be misleading as they may refer to different meanings of the queried words.
Following are examples of such results: 'train IsA control', 'monitor IsA track', 'screen door IsA door'. In some cases, the intersection of the retrieved classes with the recognizable objects is so small that it may cause a wrong conclusion based on a very superficial check. An example of this is the question "Are these toys?", where the recognizable retrieved classes are 'bicycle', 'skateboard', 'frisbee', 'kite' and 'motorcycle', hence the answer is 'no' if none of them was detected. An interesting observation regarding the estimation of some visual elements concerns the generation of color name maps (Van De Weijer, Schmid, & Verbeek, 2007), which is based on supervised learning (11 optional colors per pixel). When object colors are required, the map is generated for the object area in the image, and the answer is provided based on the dominant colors. Retrieving an object's color may appear to be a trivial task, as the intensities of the original RGB image channels should provide the exact color of each pixel. However, such methods fail to obtain the perceived color, as it is hardly related to the levels of the actual RGB channels. Hence, learning methods are incorporated to address this problem, and still there are many inaccuracies. In addition to these inaccuracies, the process required for obtaining the perceived color of an object is not consistent. This can be seen in the examples of Figure 15, where inquiring for the color of a person requires different color naming and a focus on specific regions. The bus example also requires specific behavior, where the window and wheel areas of the bus should be ignored.

[Figure 15: Demonstration of perceived color challenges. Each column corresponds to one example: the top image is the input image with markings of relevant results; the bottom image is a map of color names corresponding to the required object; the question and answer appear below (Q: 'What color is the horse?' A: grey; Q: 'What color is the bus?' A: black; Q: 'Is the man white?' A: yes). The first column demonstrates classification errors in the generated map of color names due to shading. The second column requires ignoring the window and wheel areas for an accurate answer. In the third column, only a specific area should be checked and colors should correspond to different names. Object detection is based on faster R-CNN + DeepLab.]

As previously mentioned, the parser's sensitivity to phrasing, and other issues such as its indifference to the type of phrase coordinator ('and', 'or'), cause representation failures or misrepresentations, which result in the inability to provide a correct answer. For example, when 'or' is used (e.g. "Are the flowers yellow or white?"), the answer will always be 'no', as both options are required to be true. Hence, we get an answer that is irrelevant to the question. Questions may also be misinterpreted due to multiple meanings of words and phrases, or due to subtle differences. As previously discussed, this mainly affects the use of the external knowledge database, where a wide range of concepts may be used, which may lead to an unclear meaning of a concept (e.g. 'train': vehicle vs. learn; 'monitor': screen vs. supervise). Such confusions happen for the question itself as well. An example of a misinterpreted question is "What is the table/bus number?", which is interpreted as "What is the number of tables/buses?"
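Returning to the color-naming step discussed above, here is a crude stand-in that assigns each masked pixel the nearest of 11 reference RGB anchors and answers with the most frequent name. The anchors and nearest-RGB matching are our simplification; the actual system uses the learned color-name model of Van De Weijer et al. (2007), precisely because raw RGB matching mishandles shading.

```python
import numpy as np

COLOR_ANCHORS = {               # rough RGB anchors for the 11 basic color names
    "black": (0, 0, 0), "blue": (0, 0, 255), "brown": (139, 69, 19),
    "grey": (128, 128, 128), "green": (0, 128, 0), "orange": (255, 165, 0),
    "pink": (255, 192, 203), "purple": (128, 0, 128), "red": (255, 0, 0),
    "white": (255, 255, 255), "yellow": (255, 255, 0),
}

def dominant_color_name(image: np.ndarray, mask: np.ndarray) -> str:
    """image: H x W x 3 RGB array; mask: H x W boolean array for the object."""
    pixels = image[mask].astype(float)                        # N x 3 masked pixels
    anchors = np.array(list(COLOR_ANCHORS.values()), float)   # 11 x 3
    nearest = ((pixels[:, None, :] - anchors[None]) ** 2).sum(-1).argmin(1)
    names = list(COLOR_ANCHORS)
    return names[np.bincount(nearest, minlength=len(names)).argmax()]

# usage: a mostly-red 4x4 toy "object"
img = np.full((4, 4, 3), (200, 30, 30), dtype=np.uint8)
print(dominant_color_name(img, np.ones((4, 4), dtype=bool)))   # red
```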
Currently, other than enhancing object detection by attention from question relations, details from the question are not used as hints for the correctness of expressions. A case where such information may be further utilized is when the query is for a property of an object. In this case, there may be a prior assumption, or an increase in probability, that such an object exists. Of course, an automatic assumption of existence is not desirable. However, reduced classification thresholds, additional attempts using hints and other measures may be utilized to reflect the higher probability of the existence of such an object. For example, given the question "What is the age of the man?", the probability that a man indeed exists in the image should rise, and refuting this assumption should be done only when the evidence is substantial.

Discussion and Conclusions

We have presented an approach to visual question answering that seeks to compose an answering procedure based on the 'abstract' structure of the query. We exploit the compositional nature of the question and represent it as a directed graph, with objects represented as nodes and relations as edges. Each basic component of this graph representation is mapped to a dedicated basic procedure. The collection of these basic procedures is put together, along with additional required processes, into a complex procedure for the entire query. This procedure incorporates query details and intermediate results, and stores them in the graph nodes and in a working memory module. The stored information completes the guidance of the procedure and allows handling different types of visual elements. Question relations are used as an attention source to enhance object detection. Querying for external common information is also handled by the procedure, in order to complete the prior knowledge required to answer the question. Breaking the answering process into basic meaningful components, corresponding to basic logic patterns, enables awareness at each step of the accomplished and unaccomplished parts of the task. This includes recognizing and reporting failures and limitations, which in many cases are corrected and supplied with valid alternatives. Elaborations of the answers are provided according to the stored information. Since the building blocks include simple real-world detectors, the system is modular and its improvement is not limited. Human abilities motivate us to examine and handle some complicated attributes that are addressed naturally by humans, even though they may hardly appear in real queries. These attributes, such as 'odd man out', demonstrate representation challenges that require extending the natural graph representation. Currently, a specific configuration is created to represent these attributes; future upgrades may allow handling them more smoothly. The evaluation of representation capabilities demonstrated that, even though our scheme can potentially represent practically all queries, the current state of the system is limited. The observed problems include limitations in vocabulary identification, sensitivity to phrasing, and cases of grammatical similarity between different elements (e.g. 'wearing the same color' vs. 'wearing the same pants'). Additionally, some rare representation limitations exist, such as relations between more than two objects of different classes.
Even though the recognition abilities are currently limited by the scope of the existing detectors, the system is self-aware and mostly replies by specifying its limitation (which may trigger the addition of the desired detectors to the system). The representation limitations discussed in 4.1 are a fundamental source of failures, on top of the accumulated chances of errors of the detectors used. Our system does not exploit any language bias of the question; the answer is provided exclusively by the procedure evaluating the logic representation of the question. However, improvement is ongoing, as detectors keep improving and their scope keeps growing. Current approaches to visual question answering mostly use end-to-end schemes that are very different from our approach. Although some methods include adaptive aspects, the optimization process is more likely to exploit language bias than to acquire the complex mechanisms required for proper answering. These methods maximize statistical results, but are likely to fail on subtle yet meaningful cases. This fits the analyses of current models, which demonstrate a tendency to utilize only part of the question, to provide the same answers for different images, and to fail on novel forms. A combination of the UnCoRd system and an end-to-end model may be beneficial in some cases, for example enhancing UnCoRd elaborations with an "intuitive" answer (such as for unknown visual elements). We have integrated and examined various aspects of answering questions on images using our answering system. Much more research and investigation is required for all these aspects, as well as for others. Future research will include learning the representation mapping and making it more robust, further investigating and improving the visual element analyzers (e.g. combining the object type, when possible, for property detection) and more.
8,012
1810.10637
2898389697
We introduce efficient algorithms which achieve nearly optimal regret for the problem of stochastic online shortest path routing with end-to-end feedback. The setting is a natural application of the combinatorial stochastic bandits problem, a special case of the linear stochastic bandits problem. We show how the difficulties posed by the large-scale action set can be overcome by the networked structure of the action set. Our approach presents a novel connection between bandit learning and shortest path algorithms. Our main contribution is an adaptive exploration algorithm with nearly optimal instance-dependent regret for any directed acyclic network. We then modify it so that nearly optimal worst-case regret is achieved simultaneously. Driven by the carefully designed Top-Two Comparison (TTC) technique, the algorithms are efficiently implementable. We further conduct extensive numerical experiments to show that our proposed algorithms not only achieve superior regret performance, but also reduce the runtime drastically.
Stochastic multi-armed bandits (MAB) form a prevalent framework for sequential decision-making. Early work on stochastic MAB problems @cite_30 @cite_17 @cite_6 tended to be more focused on asymptotic guarantees, whereas more recent work @cite_33 @cite_7 has been directed towards a non-asymptotic analysis in which regret can be bounded over a fixed time horizon @math . Two of the best-known and well-studied techniques are the UCB algorithm, which follows the OFU principle @cite_33 , and the explore-then-exploit algorithm @cite_15 @cite_2 . Recently, the Bayesian setting accompanied by the Thompson Sampling (TS) technique has also been thoroughly analyzed, due to its ease of implementation and favorable empirical results @cite_11 .
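To make the UCB idea referenced here concrete, the following is a minimal UCB1 sketch in its standard textbook form; it illustrates the OFU principle and is not code from the cited papers.

```python
import math, random

def ucb1(pull, n_arms: int, horizon: int) -> list:
    """UCB1: after trying each arm once, play the arm maximizing
    empirical mean + sqrt(2 ln t / n_i), i.e. optimism in the face of uncertainty."""
    counts = [0] * n_arms
    means = [0.0] * n_arms
    rewards = []
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                          # initialization round
        else:
            arm = max(range(n_arms),
                      key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = pull(arm)                            # stochastic reward in [0, 1]
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]
        rewards.append(r)
    return rewards

# usage: two Bernoulli arms with means 0.4 and 0.6
random.seed(0)
history = ucb1(lambda a: float(random.random() < (0.4, 0.6)[a]), n_arms=2, horizon=1000)
```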
{ "abstract": [ "Until recently, statistical theory has been restricted to the design and analysis of sampling experiments in which the size and composition of the samples are completely determined before the experimentation begins. The reasons for this are partly historical, dating back to the time when the statistician was consulted, if at all, only after the experiment was over, and partly intrinsic in the mathematical difficulty of working with anything but a fixed number of independent random variables. A major advance now appears to be in the making with the creation of a theory of the sequential design of experiments, in which the size and composition of the samples are not fixed in advance but are functions of the observations themselves.", "This paper considers the use of a simple posterior sampling algorithm to balance between exploration and exploitation when learning to optimize actions such as in multi-armed bandit problems. The algorithm, also known as Thompson Sampling, offers significant advantages over the popular upper confidence bound (UCB) approach, and can be applied to problems with finite or infinite action spaces and complicated relationships among action rewards. We make two theoretical contributions. The first establishes a connection between posterior sampling and UCB algorithms. This result lets us convert regret bounds developed for UCB algorithms into Bayesian regret bounds for posterior sampling. Our second theoretical contribution is a Bayesian regret bound for posterior sampling that applies broadly and can be specialized to many model classes. This bound depends on a new notion we refer to as the eluder dimension, which measures the degree of dependence among action rewards. Compared to UCB algorithm Bayesian regret bounds for specific model classes, our general bound matches the best available for linear models and is stronger than the best available for generalized linear models. Further, our analysis provides insight into performance advantages of posterior sampling, which are highlighted through simulation results that demonstrate performance surpassing recently proposed UCB algorithms.", "Reinforcement learning policies face the exploration versus exploitation dilemma, i.e. the search for a balance between exploring the environment to find profitable actions while taking the empirically best action as often as possible. A popular measure of a policy's success in addressing this dilemma is the regret, that is the loss due to the fact that the globally optimal policy is not followed all the times. One of the simplest examples of the exploration exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first ones to show that the regret for this problem has to grow at least logarithmically in the number of plays. Since then, policies which asymptotically achieve this regret have been devised by Lai and Robbins and many others. In this work we show that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support.", "", "3. Multi‐armed Bandit Allocation Indices. By J. C. Gittins. ISBN 0 471 92059 2. Wiley, Chichester, 1989. xii + 252pp. £29.95.", "Multi-armed bandits a simple but very powerful framework for algorithms that make decisions over time under uncertainty. An enormous body of work has accumulated over the years, covered in several books and surveys. 
This book provides a more introductory, textbook-like treatment of the subject. Each chapter tackles a particular line of work, providing a self-contained, teachable technical introduction and a review of the more advanced results. The chapters are as follows: Stochastic bandits; Lower bounds; Bayesian Bandits and Thompson Sampling; Lipschitz Bandits; Full Feedback and Adversarial Costs; Adversarial Bandits; Linear Costs and Semi-bandits; Contextual Bandits; Bandits and Zero-Sum Games; Bandits with Knapsacks; Incentivized Exploration and Connections to Mechanism Design.", "In the stochastic multi-armed bandit problem we consider a modification of the UCB algorithm of [4]. For this modified algorithm we give an improved bound on the regret with respect to the optimal reward. While for the original UCB algorithm the regret in K-armed bandits after T trials is bounded by const · (K log(T) / Δ), where Δ measures the distance between a suboptimal arm and the optimal arm, for the modified UCB algorithm we show an upper bound on the regret of const · (K log(T Δ^2) / Δ).", "" ], "cite_N": [ "@cite_30", "@cite_11", "@cite_33", "@cite_7", "@cite_6", "@cite_2", "@cite_15", "@cite_17" ], "mid": [ "1998498767", "2949366694", "2168405694", "", "2317700292", "2939627158", "1975779216", "2009551863" ] }
0
1810.10637
2898389697
We introduce efficient algorithms which achieve nearly optimal regret for the problem of stochastic online shortest path routing with end-to-end feedback. The setting is a natural application of the combinatorial stochastic bandits problem, a special case of the linear stochastic bandits problem. We show how the difficulties posed by the large-scale action set can be overcome by the networked structure of the action set. Our approach presents a novel connection between bandit learning and shortest path algorithms. Our main contribution is an adaptive exploration algorithm with nearly optimal instance-dependent regret for any directed acyclic network. We then modify it so that nearly optimal worst-case regret is achieved simultaneously. Driven by the carefully designed Top-Two Comparison (TTC) technique, the algorithms are efficiently implementable. We further conduct extensive numerical experiments to show that our proposed algorithms not only achieve superior regret performance, but also reduce the runtime drastically.
A special case of linear bandits is combinatorial bandits, where the action set is constrained to a subset of @math . In combinatorial stochastic bandits, it is often assumed that the reward/loss vector is observed at all the coordinates sampled by the action taken; this is the so-called semi-bandit feedback setting @cite_32 . The authors of @cite_9 initiated the study of combinatorial stochastic bandits under semi-bandit feedback and a network-structured action set, while the authors of @cite_14 studied the general action set case. The authors of @cite_21 further characterized tight upper and lower bounds for this problem. Assuming the noise is independent across different coordinates, the authors of @cite_1 improved upon the results obtained in @cite_21 . For the bandit feedback case, the authors of @cite_8 give algorithms that require brute-force search over the action space, with instance-dependent regret @math . For adversarial combinatorial bandits, the authors of @cite_24 presented an efficient and optimal algorithm for the semi-bandit feedback case, while the authors of @cite_10 described an optimal algorithm for the bandit feedback case, whose computational complexity, however, scales linearly with the number of actions.
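The semi-bandit feedback described here can be illustrated with a small CUCB-style sketch for the simplest combinatorial action set, choosing m of K items; this is our toy example, not an algorithm from the cited works.

```python
import math, random

def comb_ucb_top_m(pull, K: int, m: int, horizon: int) -> list:
    """Each round, play the m items with the largest UCB indices (a trivial
    combinatorial action set) and observe the reward of every chosen
    coordinate, i.e. semi-bandit feedback."""
    counts = [0] * K
    means = [0.0] * K
    for t in range(1, horizon + 1):
        def index(i):
            if counts[i] == 0:
                return float("inf")                 # force initial exploration
            return means[i] + math.sqrt(1.5 * math.log(t) / counts[i])
        action = sorted(range(K), key=index, reverse=True)[:m]
        for i in action:                            # feedback at all chosen coordinates
            r = pull(i)
            counts[i] += 1
            means[i] += (r - means[i]) / counts[i]
    return means

# usage: pick 2 of 4 Bernoulli coordinates each round
est = comb_ucb_top_m(lambda i: float(random.random() < (0.2, 0.4, 0.6, 0.8)[i]),
                     K=4, m=2, horizon=2000)
```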
{ "abstract": [ "We define a general framework for a large class of combinatorial multi-armed bandit (CMAB) problems, where simple arms with unknown distributions form super arms. In each round, a super arm is played and the outcomes of its related simple arms are observed, which helps the selection of super arms in future rounds. The reward of the super arm depends on the outcomes of played arms, and it only needs to satisfy two mild assumptions, which allow a large class of nonlinear reward instances. We assume the availability of an (α, β)-approximation oracle that takes the means of the distributions of arms and outputs a super arm that with probability β generates an β fraction of the optimal expected reward. The objective of a CMAB algorithm is to minimize (α, β)- approximation regret, which is the difference in total expected reward between the αbeta; fraction of expected reward when always playing the optimal super arm, and the expected reward of playing super arms according to the algorithm. We provide CUCB algorithm that achieves O(log n) regret, where n is the number of rounds played, and we further provide distribution-independent bounds for a large class of reward functions. Our regret analysis is tight in that it matches the bound for classical MAB problem up to a constant factor, and it significantly improves the regret bound in a recent paper on combinatorial bandits with linear rewards. We apply our CMAB framework to two new applications, probabilistic maximum coverage (PMC) for online advertising and social influence maximization for viral marketing, both having nonlinear reward structures.", "We consider the adaptive shortest-path routing problem in wireless networks under unknown and stochastically varying link states. In this problem, we aim to optimize the quality of communication between a source and a destination through adaptive path selection. Due to the randomness and uncertainties in the network dynamics, the quality of each link varies over time according to a stochastic process with unknown distributions. After a path is selected for communication, the aggregated quality of all links on this path (e.g., total path delay) is observed. The quality of each individual link is not observable. We formulate this problem as a multi-armed bandit with dependent arms. We show that by exploiting arm dependencies, a regret polynomial with network size can be achieved while maintaining the optimal logarithmic order with time. This is in sharp contrast with the exponential regret order with network size offered by a direct application of the classic MAB policies that ignore arm dependencies. Furthermore, our results are obtained under a general model of link-quality distributions (including heavy-tailed distributions) and find applications in cognitive radio and ad hoc networks with unknown and dynamic communication environments.", "We formulate the following combinatorial multi-armed bandit (MAB) problem: There are N random variables with unknown mean that are each instantiated in an i.i.d. fashion over time. At each time multiple random variables can be selected, subject to an arbitrary constraint on weights associated with the selected variables. All of the selected individual random variables are observed at that time, and a linearly weighted combination of these selected variables is yielded as the reward. 
The goal is to find a policy that minimizes regret, defined as the difference between the reward obtained by a genie that knows the mean of each random variable, and that obtained by the given policy. This formulation is broadly applicable and useful for stochastic online versions of many interesting tasks in networks that can be formulated as tractable combinatorial optimization problems with linear objective functions, such as maximum weighted matching, shortest path, and minimum spanning tree computations. Prior work on multi-armed bandits with multiple plays cannot be applied to this formulation because of the general nature of the constraint. On the other hand, the mapping of all feasible combinations to arms allows for the use of prior work on MAB with single-play, but results in regret, storage, and computation growing exponentially in the number of unknown variables. We present new efficient policies for this problem that are shown to achieve regret that grows logarithmically with time, and polynomially in the number of unknown variables. Furthermore, these policies only require storage that grows linearly in the number of unknown parameters. For problems where the underlying deterministic problem is tractable, these policies further require only polynomial computation. For computationally intractable problems, we also present results on a different notion of regret that is suitable when a polynomial-time approximation algorithm is used.", "A stochastic combinatorial semi-bandit is an online learning problem where at each step a learning agent chooses a subset of ground items subject to constraints, and then observes stochastic weights of these items and receives their sum as a payoff. In this paper, we close the problem of computationally and sample efficient learning in stochastic combinatorial semi-bandits. In particular, we analyze a UCB-like algorithm for solving the problem, which is known to be computationally efficient; and prove @math and @math upper bounds on its @math -step regret, where @math is the number of ground items, @math is the maximum number of chosen items, and @math is the gap between the expected returns of the optimal and best suboptimal solutions. The gap-dependent bound is tight up to a constant factor and the gap-free bound is tight up to a polylogarithmic factor.", "This paper studies online shortest path routing over multi-hop networks. Link costs or delays are time-varying and modeled by independent and identically distributed random processes, whose parameters are initially unknown. The parameters, and hence the optimal path, can only be estimated by routing packets through the network and observing the realized delays. Our aim is to find a routing policy that minimizes the regret (the cumulative difference of expected delay) between the path chosen by the policy and the unknown optimal path. We formulate the problem as a combinatorial bandit optimization problem and consider several scenarios that differ in where routing decisions are made and in the information available when making the decisions. For each scenario, we derive a tight asymptotic lower bound on the regret that has to be satisfied by any online routing policy. These bounds help us to understand the performance improvements we can expect when (i) taking routing decisions at each hop rather than at the source only, and (ii) observing per-link delays rather than end-to-end path delays. In particular, we show that (i) is of no use while (ii) can have a spectacular impact. 
Three algorithms, with a trade-off between computational complexity and performance, are proposed. The regret upper bounds of these algorithms improve over those of the existing algorithms, and they significantly outperform state-of-the-art algorithms in numerical experiments.", "We address the online linear optimization problem when the actions of the forecaster are represented by binary vectors. Our goal is to understand the magnitude of the minimax regret for the worst possible set of actions. We study the problem under three different assumptions for the feedback: full information, and the partial information models of the so-called \"semi-bandit\", and \"bandit\" problems. We consider both @math -, and @math -type of restrictions for the losses assigned by the adversary. We formulate a general strategy using Bregman projections on top of a potential-based gradient descent, which generalizes the ones studied in the series of papers (2007), (2008), (2008), Cesa-Bianchi and Lugosi (2009), Helmbold and Warmuth (2009), (2010), (2010), (2010) and Audibert and Bubeck (2010). We provide simple proofs that recover most of the previous results. We propose new upper bounds for the semi-bandit game. Moreover we derive lower bounds for all three feedback assumptions. With the only exception of the bandit game, the upper and lower bounds are tight, up to a constant factor. Finally, we answer a question asked by (2010) by showing that the exponentially weighted average forecaster is suboptimal against @math adversaries.", "", "Numerous machine learning problems require an exploration basis - a mechanism to explore the action space. We define a novel geometric notion of exploration basis with low variance, called volumetric spanners, and give efficient algorithms to construct such a basis. We show how efficient volumetric spanners give rise to the first efficient and optimal regret algorithm for bandit linear optimization over general convex sets. Previously such results were known only for specific convex sets, or under special conditions such as the existence of an efficient self-concordant barrier for the underlying set." ], "cite_N": [ "@cite_14", "@cite_8", "@cite_9", "@cite_21", "@cite_1", "@cite_32", "@cite_24", "@cite_10" ], "mid": [ "2185823609", "2950317007", "2093562354", "1578264931", "2204757301", "2951667255", "", "1840913622" ] }
0
1810.10317
2896243688
Previous studies show that incorporating external information could improve the translation quality of Neural Machine Translation (NMT) systems. However, there are inevitably noises in the external information, severely reducing the benefit that the existing methods could receive from the incorporation. To tackle the problem, this study pays special attention to the discrimination of the noises during the incorporation. We argue that there exist two kinds of noise in this external information, i.e. global noise and local noise, which affect the translations for the whole sentence and for some specific words, respectively. Accordingly, we propose a general framework that learns to jointly discriminate both the global and local noises, so that the external information could be better leveraged. Our model is trained on the dataset derived from the original parallel corpus without any external labeled data or annotation. Experimental results in various real-world scenarios, language pairs, and neural architectures indicate that discriminating noises contributes to significant improvements in translation quality by being able to better incorporate the external information, even in very noisy conditions.
Besides, most of the previous methods require the presence of specific resources for training, e.g., translations of the parallel data generated by existing MT system(s) @cite_5 @cite_9 . Wang et al. (2017c,d) propose approaches that use an SMT model to provide word and phrase recommendations for an attention-based NMT, where the two systems are deeply coupled. Wang et al. (2017b), Tu et al. (2018) and Voita et al. (2018) propose to train context-aware translation models with the aid of large document/discourse-level data. In contrast, our training procedure is more general and simpler: it only uses word sampling from the original parallel data and requires no external resources. To leverage outside information, such as words, for a generation task, Hokamp and Liu (2017), Post and Vilar (2018) and Hasler et al. (2018) propose to impose lexical constraints on the decoding process so that correct external word translations are used. Gu et al. (2016) propose to use a copying mechanism in the single-turn dialogue task, which inspires our basic framework. Compared to these attempts, our approach provides more robust solutions for discriminating noises.
{ "abstract": [ "Neural machine translation (NMT) becomes a new approach to machine translation and generates much more fluent results compared to statistical machine translation (SMT). However, SMT is usually better than NMT in translation adequacy. It is therefore a promising direction to combine the advantages of both NMT and SMT. In this paper, we propose a neural system combination framework leveraging multi-source NMT, which takes as input the outputs of NMT and SMT systems and produces the final translation. Extensive experiments on the Chinese-to-English translation task show that our model archives significant improvement by 5.3 BLEU points over the best single system output and 3.4 BLEU points over the state-of-the-art traditional system combination methods.", "Recently, the development of neural machine translation (NMT) has significantly improved the translation quality of automatic machine translation. While most sentences are more accurate and fluent than translations by statistical machine translation (SMT)-based systems, in some cases, the NMT system produces translations that have a completely different meaning. This is especially the case when rare words occur. When using statistical machine translation, it has already been shown that significant gains can be achieved by simplifying the input in a preprocessing step. A commonly used example is the pre-reordering approach. In this work, we used phrase-based machine translation to pre-translate the input into the target language. Then a neural machine translation system generates the final hypothesis using the pre-translation. Thereby, we use either only the output of the phrase-based machine translation (PBMT) system or a combination of the PBMT output and the source sentence. We evaluate the technique on the English to German translation task. Using this approach we are able to outperform the PBMT system as well as the baseline neural MT system by up to 2 BLEU points. We analyzed the influence of the quality of the initial system on the final result." ], "cite_N": [ "@cite_5", "@cite_9" ], "mid": [ "2608870981", "2532807140" ] }
Learning to Discriminate Noises for Incorporating External Information in Neural Machine Translation
Recently, Neural Machine Translation (NMT) systems have achieved state of the art performance in large-scale machine translation tasks (Bahdanau et al., 2015;Cho et al., 2014;Vaswani et al., 2017;Gehring et al., 2017). While previous researches mainly aim at designing more sophisticated models to enhance NMT models themselves (Tu et al., 2016;Mi et al., 2016;Weng et al., 2017), another way to improve the translation performance is to provide outside assistance to the NMT systems Wang et al., 2017a,c Table 1: The motivating example. The first three lines are the source sentence, the reference translation and the translation from a current NMT system with a translation error ("搬运(moving)" to "working"). The next three lines give examples of three different external information: Human interactive suggestions (HUMAN), word translations generated by a bilingual dictionary (DICT) and the translation from a Statistic Machine Translation system (SMT). Each case contains correct translation that may help the NMT system. The last line shows the expected improved translation result of original NMT, in which the wrong translation "working" is corrected to "moving". Knowles and Koehn, 2016;Zhou et al., 2017). Here we refer to this outside assistance as the external information in general. The form and content of the external information could be of various kinds, depending on diverse real-world scenarios. In Table 1, we show examples of three different kinds of external information. Because the external information could be either long or short, either a whole sentence or several phrases or even just individual words, we propose to use a set of externally given words, called external words, as a general form to cover all these kinds of external information. Here external words could be any of the cases in Table 1. While previous approaches generally focus on how to integrate external information (Gu et al., 2017;Wang et al., 2017c), less attention is paid to noises in the given information. We argue that neglecting the noises will have adverse impacts on improving the translation quality. Furthermore, we divide the noises into the following two categories: • Global Noise: Words in the external words that are generally irrelevant to the translations of the words of the whole sentence. • Local Noise: At a given translation time step, words in the external words that are irrelevant to the translation of a specific word. Typically, the global noises will bring adverse effects for the whole translation, leading to incorporating noisy words, e.g. words "just" and "in" in Table 1 While the global noises are usually easy to notice, the local noises are tricky and receive less attention. We notice that, even when there is no global noise, some external words may still affect negatively at a certain time step, resulting in wrong translations. E.g. in Table 1, when generating the word "moving" in the example sentence, the external word "workers" (correct translation of "工人") is the local noise. As a result, handling of local noises is also essential. In this paper, we propose a general framework to tackle the noise problem for diverse scenarios of external information. Our framework employs two separate word discriminators for the two kinds of noises, respectively, i.e. a global word discriminator and a local word discriminator. The global discriminator decides whether the provided words are useful or not, and the local discriminator decides whether the words should be applied at the current translation step. 
Our framework is trained with synthetic training data generated by directly sampling words from parallel sentences, which requires no additional data or manual annotation. Experiments are conducted on two language pairs, two neural architectures, and four real-world scenarios where the external information could be machine translation results, lexical table of an SMT system, word-based translation from a bilingual dictionary or simply bag-ofwords of the target sentence. We get the following conclusions: • The noises indeed prevents NMT models from benefiting more from external information. • Discriminating the noises leads that our model significantly outperforms the one without discrimination in translation quality as a consequence of better incorporating the external information, especially in very noisy conditions. • Once the model is trained on the synthetic dataset, it can be directly used to improve different real-world scenarios without any taskspecific tuning. It also indicates that the form of external words generalizes well to cover various types of external information. Related Work Previous work focuses on integrating a certain kind of external information. For example, interactive machine translation systems could now employ assistance from humans, which could be as simple as one single correction of the translation (Knowles and Koehn, 2016;Peris et al., 2017;Hokamp and Liu, 2017). Some studies try to integrate external dictionaries into the NMT models (Luong et al., 2014;Arthur et al., 2016; or improve the translation based on extra parallel data or the output of other translation systems (Zhou et al., 2017;Niehues et al., 2016;Gu et al., 2017). Unlike these work, we study a more general form of external information, which is applicable to different scenarios. Besides, most of the previous methods require the presence of specific resources for training, e.g. translation of the parallel data generated by existing MT system(s) (Zhou et al., 2017;Niehues et al., 2016). Wang et al. (2017c,d) propose approaches to use an SMT model to provide word and phrase recommendations for an attention-based NMT, where the two systems are deeply coupled. Wang et al. (2017b); Tu et al. (2018); Voita et al. (2018) propose to train contextaware translation models by the aids of large document/discourse-level data. In contrast, our training procedure is more general and simpler, which only uses word sampling from the original parallel data and requires no external resources. To leverage outside information, such as words, for a generation task, Hokamp and Liu (2017); Post and Vilar (2018); Hasler et al. (2018) propose to use lexical constraints on decoding process to utilize correct external word translations. Gu et al. (2016) propose to use copying mechanism in the single-turn dialogue task, inspiring us for the basic framework. Compared to their attempts, our approach provides more robust solutions to discriminate noises. Notation We use the following notations throughout this paper. We denote a source sentence as X = x 1 , . . . , x I , and a target sentence as Y = y 1 , . . . , y T . The external words is denoted as Y E = {y E 1 , . . . , y E J }. Because we focus on the case where Y E is a set of words, no sequential relation between words in Y E is considered, which reduce the requirement for external information. This assumption makes the proposed methods applicable to wider applications, where the external words may be arbitrary. 
Neural Machine Translation

Traditional NMT systems use an encoder-decoder architecture with an attention mechanism for translation (Bahdanau et al., 2015; Luong et al., 2015). The specific neural structure could be a Recurrent Neural Network (RNN) (Bahdanau et al., 2017), a Convolutional Neural Network (CNN) (Gehring et al., 2017) or the self-attention network (Transformer) (Vaswani et al., 2017). NMT models the translation probabilities from the source sentence to the target sentence in a word-by-word manner:

$P(Y|X) = \prod_{t=1}^{T} P(y_t \mid y_{<t}, X)$ (1)

First, the encoder maps the source sentence X into distributed representations $H = h_1, \ldots, h_I$. In the t-th step of the decoding process, the word translation is generated by the decoder according to the following probability:

$P(y_t \mid y_{<t}, X) = \mathrm{softmax}(g(y_{t-1}, s_t, c_t))$ (2)

where $g(\cdot)$ is a non-linear activation function; $y_{t-1}$ is the output word of time step $t-1$; $s_t$ is the current decoder hidden state, which is modeled as:

$s_t = f(y_{t-1}, s_{<t}, c_t)$ (3)

where $f(\cdot)$ is a transformation function depending on the specific architecture; $c_t$ is the source context vector from the attention mechanism:

$c_t = \sum_{i=1}^{I} \alpha_{t,i} \cdot h_i$ (4)

$\alpha_{t,i} = \mathrm{softmax}(a(s_t, h_i))$ (5)

where $a(\cdot)$ is the attention model for the relation between $s_t$ and the i-th source representation $h_i$.

A Basic Reading-Fusion Framework

The encoder-decoder architecture generates the translation in a word-by-word manner. As a result, the incorporation of external words also affects the translation word-by-word, i.e., at each time step. Here we first present a basic and simple reading-fusion framework, inspired by the structure of Pointer Networks (Vinyals et al., 2015; Gulcehre et al., 2016) and the Copying Mechanism (Gu et al., 2016).

Reading stage: At each decoding time step t, an attention is performed between the concatenation of the current decoding state $s_t$ and source context $c_t$, and the embedding of each external word:

$q^E_{t,j} = \mathrm{softmax}(a([s_t; c_t], E(y^E_j)))$ (6)

The resulting attention weight $q^E_{t,\cdot}$ is treated as a probability distribution over all external words at time step t, that is, $P^E(y_j|X) = q^E_{t,j}$. The higher $P^E(y_j|X)$ is, the more related $y^E_j$ is to the current translation.

Fusion stage: Similar to Gu et al. (2017), the probability distribution $P^E(y_j|X)$ is then interpolated with the original word generation probability from the decoder to perform the integrated word generation:

$P(y_t|X, Y^E) = (1 - \beta_t) P(y_t \mid y_{<t}, X) + \beta_t P^E(y_t|X)$ (7)

The scalar fusion gate $\beta_t$ is used to determine the relevance between the external content $c^E_t$ and the translation at the current time step. $\beta_t$ is computed from the representation of the external content (the external context vector $c^E_t$) and the representations of the current step (the decoding state $s_t$ and the source context vector $c_t$):

$\beta_t = f_\beta(s_t; c_t; c^E_t)$ (8)

where $f_\beta$ is a feed-forward neural network with sigmoid activation. The external context vector $c^E_t = \sum_{j=1}^{J} q^E_{t,j} \cdot E(y^E_j)$ is a weighted sum of the embeddings of the external words.

Discriminating the Noise in External Information

The reading-fusion framework can be interpreted as a way to copy the external word $y^E_j$ as the next generated word, with the probability determined by $q^E_{t,\cdot}$ and $\beta_t$. However, when the external words are noisy, the integration process could be affected. In this section, we describe specific approaches to discriminate two different kinds of noise, i.e., global noise and local noise.
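As a concrete illustration of the reading and fusion stages in Eqs. (6)-(8), the following hedged PyTorch sketch computes one decoding step. The attention scorer `attn`, the gate network `gate_ffn`, and all tensor shapes are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def read_and_fuse(s_t, c_t, ext_emb, ext_ids, p_nmt, attn, gate_ffn):
    """One decoding step of the basic reading-fusion framework (Eqs. 6-8).

    s_t: (B, H) decoder state; c_t: (B, H) source context
    ext_emb: (B, J, E) embeddings of the J external words
    ext_ids: (B, J) long tensor of vocabulary ids of the external words
    p_nmt: (B, V) decoder distribution P(y_t | y_<t, X)
    attn(query, keys) -> (B, J) scores; gate_ffn -> (B, 1) pre-sigmoid
    (both are assumed small networks).
    """
    query = torch.cat([s_t, c_t], dim=-1)               # [s_t ; c_t]
    q_ext = F.softmax(attn(query, ext_emb), dim=-1)     # Eq. (6)

    # External context vector: weighted sum of external embeddings.
    c_ext = torch.bmm(q_ext.unsqueeze(1), ext_emb).squeeze(1)

    # Fusion gate beta_t, Eq. (8); broadcasts over the vocabulary axis.
    beta = torch.sigmoid(gate_ffn(torch.cat([s_t, c_t, c_ext], dim=-1)))

    # Scatter the external distribution into vocabulary space.
    p_ext = torch.zeros_like(p_nmt).scatter_add(1, ext_ids, q_ext)

    return (1.0 - beta) * p_nmt + beta * p_ext          # Eq. (7)
```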
Figure 1 illustrates the proposed architecture. Before decoding, a supervised global word discriminator is designed to determine whether each given external word is relevant to the current translation (Section 5.1). During decoding, in the reading stage, an attention mechanism is performed to select the correct external words or an extra <null> token; in the fusion stage, a supervised local word discriminator decides whether to use the obtained information for the translation of the current word (Section 5.2).

[Figure 1: overview of the proposed architecture. The encoder produces the source context $c_t$; the external words $Y^E$ (plus the <null> token) are read into the external distribution $P^E(y_t|X)$ and external context $c^E_t$; at each time step t, the local discriminator gate $\beta_t$ fuses $P^E(y_t|X)$ with the original decoder distribution $P(y_t \mid y_{<t}, X)$ into the final distribution $P(y_t|X, Y^E)$.]

Discriminating the Global Noise

Because the source sentence and the external words are given before translation, a natural way of preventing the influence of the global noise is to identify it before translation. Here we propose a global word discriminator for filtering noisy input at the sentence level.

Global Word Discriminator: Given a source sentence X, for each external word $y^E_j$, the global word discriminator makes its decision based on the embedding of the current external word $E(y^E_j)$, the source summarization $z = \sum_{h_i \in H} h_i / |H|$ from the encoder (Sennrich et al., 2017b), and the attentive context vector $c^D_j$, computed by an attention between the external word $y^E_j$ and the source hidden states H:

$D(y^E_j) = f_{global}(E(y^E_j); z; c^D_j)$ (9)

where $f_{global}$ is a feed-forward neural network with sigmoid activation.

Integration: We integrate the results of the global word discriminator into the NMT translation by discounting the word generation probability $q^E_{t,j}$ of the external word $y^E_j$ by $D(y^E_j)$. We revise Equation 6 as follows:

$q^E_{t,j} = \mathrm{softmax}(a([s_t; c_t], E(y^E_j))) \cdot D(y^E_j)$ (10)

As a result, the external words determined to be global noise by the discriminator will have lower probabilities.

Learning: Instead of training the decision of the global word discriminator as a hidden variable together with the whole model, we provide direct supervision to ensure its effectiveness. With the training instances, the global word discriminator is trained by minimizing the following cross-entropy loss:

$loss_g = \sum_{j=1}^{J} -b(y^E_j) \cdot \log(D(y^E_j)) - (1 - b(y^E_j)) \cdot \log(1 - D(y^E_j))$ (11)

where $b(y^E_j) = 1$ if $y^E_j \in Y$ and $0$ otherwise, i.e., $b(y^E_j)$ indicates whether the given external word $y^E_j$ appears in the reference Y.

Discriminating the Local Noise

Even without global noise, external words irrelevant to the current decoding time step may still attract attention mistakenly. To prevent the decoder from this unexpected influence, we propose a supervised local word discriminator to discriminate these noises. Additionally, consider an extreme case where there is no relevant external word at the current time step. All external words should be considered local noise in this case, for which we propose an extra <null> token to distract the attention.

Local Word Discriminator: In the basic framework, the fusion gate $\beta_t$ automatically learns to distinguish the relevant information from the irrelevant at a certain time step. To encourage better decisions, we add discriminative supervision for $\beta_t$. We can now refer to the fusion gate as the local word discriminator.
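The global word discriminator of Eqs. (9)-(11) can be sketched as follows. This is a hedged illustration: the dot-product attention used for $c^D_j$, the `ffn` network, and the assumption that the embedding and hidden sizes match (or are projected to match) are ours, not details confirmed by the paper.

```python
import torch
import torch.nn.functional as F

def global_discriminator(ext_emb, enc_states, ffn):
    """Sketch of Eqs. (9)-(10). `ffn` is an assumed feed-forward net whose
    last layer has output size 1; shapes are illustrative.

    ext_emb: (B, J, E) external word embeddings (E assumed equal to H,
             or projected to H, so the dot-product attention is defined)
    enc_states: (B, I, H) source hidden states H
    returns D: (B, J) relevance probabilities in (0, 1)
    """
    z = enc_states.mean(dim=1)                               # summarization
    # Attentive context c^D_j via dot-product attention (an assumption;
    # the paper only states that "an attention" is used here).
    scores = torch.bmm(ext_emb, enc_states.transpose(1, 2))  # (B, J, I)
    c_d = torch.bmm(F.softmax(scores, dim=-1), enc_states)   # (B, J, H)
    z_exp = z.unsqueeze(1).expand(-1, ext_emb.size(1), -1)
    feats = torch.cat([ext_emb, z_exp, c_d], dim=-1)
    return torch.sigmoid(ffn(feats)).squeeze(-1)             # Eq. (9)

def global_loss(d_probs, in_reference):
    """Eq. (11): binary cross entropy against b(y^E_j) = 1 iff the external
    word occurs in the reference Y. in_reference: (B, J) float tensor."""
    return F.binary_cross_entropy(d_probs, in_reference, reduction="sum")
```

At decoding time, the returned probabilities simply multiply the pre-softmax-normalized attention weights of Eq. (10), so globally noisy words are down-weighted everywhere in the sentence.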
At each time step t during training, the word choice $y_t$ is determined by the reference, which we use as the supervision signal. In cases where the external words $Y^E$ contain $y_t$, the external context is considered relevant, so the local word discriminator is trained to predict positive. Otherwise, the external context is surely irrelevant, and the local word discriminator is trained to predict negative.

Learning: The local word discriminator is trained via the following cross-entropy loss:

$loss_l = \sum_{t=1}^{T} -b(y_t) \cdot \log(\beta_t) - (1 - b(y_t)) \cdot \log(1 - \beta_t)$ (12)

where $b(y_t) = 1$ if $y_t \in Y^E$ and $0$ otherwise, i.e., $b(y_t)$ indicates whether $y_t$ is among the external words $Y^E$.

Handling the extreme case: When the set of external words does not contain a correct translation for the current target word, a natural idea is that no word is helpful at all. Therefore, we add a special <null> token, which is expected to draw higher attention when there is no proper word available, into the set of external words. We do not use the probability $p^E(\mathrm{<null>}|X)$ to generate target words. The token decreases the weights of the irrelevant words, easing the burden of the local word discriminator.

Model Training: Given the dataset of training triples $\{\langle X_m, Y_m, Y^E_m \rangle\}_{m=1}^{M}$, the model parameters are trained by minimizing the loss $L(\theta, \theta_g, \theta_l)$, where $\theta$, $\theta_g$, $\theta_l$ are the parameters of the NMT model, the global discriminator and the local discriminator, respectively:

$L(\theta, \theta_g, \theta_l) = \frac{1}{M} \sum_{m=1}^{M} -\log P_\theta(Y_m|X_m, Y^E_m) + \lambda_1 \cdot loss_g^{\theta_g} + \lambda_2 \cdot loss_l^{\theta_l}$ (13)

where $\lambda_1$ and $\lambda_2$ are hyper-parameters.

[Algorithm 1: construction of the synthetic dataset. Input: parallel dataset $D_1 = \{\langle X_m, Y_m \rangle\}_{m=1}^{M}$ and the target-language vocabulary V. For each sentence pair $\langle X, Y \rangle \sim D_1$: initialize $Y^E = \emptyset$ and randomly sample a p-ratio $\zeta \sim U(0, 1)$; the remaining pseudo-code lines are truncated in the source.]

In order to train our model with the proposed discriminating components, we propose a self-generated approach to construct a synthetic dataset of external words. Given the parallel corpus $D_1 = \{\langle X_m, Y_m \rangle\}_{m=1}^{M}$, synthetic external words are constructed for each sentence, by sampling words from the reference as positive words and words from the rest of the vocabulary as negative words. Therefore, no additional data or annotation is required for training. We measure the volume of the external words by the ratio of the number of provided words to the length of the sentence, denoted v-ratio:

$\text{v-ratio} = \frac{|Y^E|}{|Y|} = \frac{\#\text{posWord} + \#\text{negWord}}{|Y|}$

We measure the quality of the external words by the ratio of positive words to the total provided words, denoted p-ratio:

$\text{p-ratio} = \frac{\#\text{posWord}}{|Y^E|} = \frac{\#\text{posWord}}{\#\text{posWord} + \#\text{negWord}}$

All models are trained using synthetic external words with v-ratio 1.0 and a uniformly sampled p-ratio. See Alg. 1 for more details.

Experiment

We conduct experiments on Chinese-to-English (Zh-En) and English-to-German (En-De) translation tasks. For Zh-En, the training data consists of 1.6 million sentence pairs extracted from LDC. We use the NIST MT03 dataset as our development set. For En-De, we use the WMT17 (Bojar et al., 2017) corpus, which consists of 5.6M sentence pairs; we use newstest2016 as our development set and newstest2017 as our test set. We follow Sennrich et al. (2017a) to segment both German and English words into subwords using byte-pair encoding (Sennrich et al., 2016, BPE), and use the merged vocabulary after BPE for both languages.
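A minimal Python sketch of the synthetic-data construction described above (Algorithm 1) follows; the function and variable names are ours, and the sketch assumes the vocabulary is large enough to supply the requested negatives.

```python
import random

def make_synthetic_external_words(ref_words, vocab, v_ratio=1.0):
    """Build synthetic external words for one sentence pair: mix positives
    (words from the reference Y) with negatives (other vocabulary words),
    with p-ratio ~ U(0, 1) and a fixed v-ratio of 1.0 as described above.
    """
    total = max(1, int(v_ratio * len(ref_words)))   # |Y^E| = v-ratio * |Y|
    p_ratio = random.random()                       # zeta ~ U(0, 1)
    n_pos = int(round(p_ratio * total))

    positives = random.sample(ref_words, min(n_pos, len(ref_words)))
    ref_set = set(ref_words)
    candidates = [w for w in vocab if w not in ref_set]
    negatives = random.sample(candidates, total - len(positives))

    external = positives + negatives
    random.shuffle(external)                        # Y^E carries no order
    return external

# Example: one training triple (X, Y, Y^E) derived from a parallel pair.
# ref = "we do not want to pay the price".split()
# y_ext = make_synthetic_external_words(ref, vocab=target_vocab_list)
```

Because positives are, by construction, exactly the words appearing in the reference, the supervision bits $b(y^E_j)$ and $b(y_t)$ used in Eqs. (11) and (12) come for free with each synthetic triple.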
The translation evaluation metric is caseinsensitive BLEU (Papineni et al., 2002) for Zh-En 3 , and case-sensitive BLEU for En-De 4 , which are consistent with previous work. To evaluate and analyze the proposed approaches, we perform two categories of experiments on both synthetic and real-world settings. We first conduct experiments on the synthetic testsets similar to the training set. The aim of synthetic experiments is for analysis and ablation studies, where the experimental conditions could be easily manipulated. Then, we perform four real-world settings to evaluate the robustness and generality of our approach in practice. Note that in real-world settings, we use the same trained models from our main experiments without task-specific tuning for the different given datasets. Training details For Zh-En, we limit the vocabulary size to 30K words, while we keep full vocabularies for En-De. All the out-of-vocabulary words are mapped to a special token <UNK>. The value of λ 1 and λ 2 in Equation 13 are empirically set to 0.1, respectively. We train each model with sentences no longer than 50 words. The word embedding dimension is 512 and the size of all hidden layers is 1024. The training batch size is 80. The beam size is set to 5 for testing. Training are performed via Adadelta (Zeiler, 2012) on a single GTX1080. Model Comparison and Analysis Ablation Study on Zh-En Translation For the ablation study, we compare different components of our model and list the results in Table 2. Table 2 shows that when external words are available, all the models can make use of the assistance with different abilities. Discriminating noises are essential We can see that ignoring noises inside the external words only leads to a moderate improvement (line 1). Discriminating either global or local noises improves the baseline by 3.31 (line 3) and 3.49 (line 6) BLEU scores, respectively. Particularly, with the learned global discriminator, our model achieves close performance compared to the oracle model (line 2 v.s. line 3), which can be regarded as the up-bound of discriminating the global noise with the ground-truth decisions. Our final model (line 7) combines all the proposed components to handle both global and local noises simultaneously. It obtains the highest 4.03 BLEU improvement. The final model gains stronger performance than those with single component, which indicates that handling global and local noises are both essential, and complement to each other. Different Language Pair and Architecture We further validate the generality of the proposed model that we apply additional experiments on En-De translation for cross-language generality, and on neural Transformer-based NMT (Vaswani et al., 2017) for cross-architecture generality. Table 3 shows the same trend as Zh-En translation task. This result indicates the effectiveness and generality of our method across various language pairs and translation granularities (words and BPE units). WMT17 En-De translation Transformer-based architecture We also extend our approach to the recent emerging sequence-to-sequence architecture, TRANS-FORMER model, on Zh-En task. Table 4 shows the consistent improvement with experiment on RNNSEARCH, which demonstrates the proposed approach is transparent to neural architectures, leading to feasible extension to other sequenceto-sequence models, such as and CNN-based model (Gehring et al., 2017). 
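For reference, the training details above can be collected into a single hypothetical configuration; the key names below are ours and do not come from a released codebase.

```python
# Hypothetical configuration mirroring the training details above.
CONFIG = {
    "zh_en_vocab_size": 30_000,  # En-De keeps full (BPE-merged) vocabularies
    "max_sentence_len": 50,
    "embedding_dim": 512,
    "hidden_size": 1024,
    "batch_size": 80,
    "beam_size": 5,
    "lambda_global": 0.1,        # lambda_1 in Eq. (13)
    "lambda_local": 0.1,         # lambda_2 in Eq. (13)
    "optimizer": "adadelta",
}
```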
Study on Varied Ratio of External Words To test the performance of our trained model under different conditions, we evaluate the translation quality improvement with different volume (v-ratio) and quality (p-ratio) of external words, respectively. For each experiment, we fix one of the ratio parameters and varies the other one. As shown in Figure 2a and 2b, when the provided information is very short (v-ratio 0.2) or noisy (pratio 0.1), the BASIC methods, only using simple attention and gating which is not able to process the noisy information, may not be able to improve the baseline model, showing the double-sided effect of noisy external words. However, our FINAL model could successfully discriminate the useful words from the noises and bring stable improvement in all the experimented conditions. Performance of the Global Word Discriminator To evaluate the prediction performance of the global word discriminator, we keep a held-out set of instances during training. Table 5 shows the precision, recall and F-1 score of the global word discriminator on this held-out set. The results show that on all test sets, the global discriminator achieves an F-1 score higher than 89%, showing that the discriminator is indeed able to distinguish noises from useful external words. Experiments with Real-World Scenarios We also test our system in four real-world scenarios, where four different sources of information are used as external inputs. complained to be less fluent and contain more errors, where de-noising is quite important. Lexical table (LEX) Besides directly using the translation outputs from Moses, the intermediate products such as the lexical table could also be leveraged to conduct word-level translations where each source word is mapped separately to its translation. Bilingual dictionary (DICT) Bilingual dictionary is relatively easy to obtain, compared with other resources such as other translation systems, human interaction or parallel corpora. We investigate the case where word-level translations are directly used as external information. The most severe type of noises of both LEX and DICT is that they are literally word-level translations without morphological changes in the context of the current sentence. Bag-of-Words prediction (BOW) Weng et al. (2017) propose a multi-tasking scheme to boost NMT by predicting the bag-of-words of target sentence using the proposed Word Predictions approach. We are curious if the predicted bag-ofwords have the potential to help the NMT. Here we follow the WPE configuration in their paper to train a word predictor, where the target bagof-words are predicted by the encoder's summarization z (see Weng et al. (2017) for details). The word predictor is trained on the top of our models, whose parameters are all frozen. We collect the top-K predicted words from its prediction where K = 1.0 × |X| for each source sentence. Note that, as same as Section 7.1, we directly use the trained model without any specific training on the given datasets. Interestingly and surprisingly, Table 6 shows the importance of de-noising in all the four scenarios, in which our model derives greater benefits from the noisy real-world external datasets. Furthermore, we present two practical examples in Table 7. These evaluations give (a) Human correction with one word as external information. SOURCE 我们不愿为以往达成的协定再度付出代 价。 REFER. we are not willing to pay again for the agreements that have been reached already. NMT we do not want to pay the price for a further price. 
SMT: we do not want to pay the price for the agreements reached in the past.
FINAL: we do not want to pay the price for the agreements we have reached.
(b) SMT output as external information.
[Table 7: Case study on de-noising and incorporating two different sources of external information. Surprisingly, a correct revision can further improve the translation of the words that follow a previous error.]
These evaluations provide a further demonstration of the effectiveness and generality of our discriminating approaches.

Conclusion and Future Work

In this paper, we focus on the noise problem when NMT models are able to access and incorporate external information. There are two kinds of noises in external information, i.e., the global and local noises. We propose a general framework that learns to discriminate both noises, trained directly on a self-generated synthetic dataset that requires nothing external but the original parallel data. We find that the noises indeed prevent NMT models from benefiting more from external information. In experiments, our noise discrimination shows its superiority in the incorporation of external information, especially in very noisy conditions. Further analysis indicates that our model can be used directly, without any task-specific tuning, in various scenarios where the patterns of noise differ. It also indicates that the form of external words generalizes well to cover various types of external information. For future work, it may be interesting to adapt our noise discrimination to sequentially encoded external information.
4,400
1810.10437
2922512839
Aspect-term sentiment analysis (ATSA) is a long-standing challenge in natural language understanding. It requires fine-grained semantic reasoning about a target entity that appears in the text. As manual annotation over the aspects is laborious and time-consuming, the amount of labeled data is limited for supervised learning. This paper proposes a semi-supervised method for the ATSA problem by using the Variational Autoencoder based on Transformer (VAET), which models the latent distribution via variational inference. By disentangling the latent representation into the aspect-specific sentiment and the lexical context, our method induces the underlying sentiment prediction for the unlabeled data, which then benefits the ATSA classifier. Our method is classifier-agnostic, i.e., the classifier is an independent module and various advanced supervised models can be integrated. Experimental results are obtained on the SemEval 2014 task 4 and show that our method is effective with four classical classifiers. The proposed method outperforms two general semi-supervised methods and achieves state-of-the-art performance.
Another related topic is semi-supervised learning for text classification. A simple but efficient method is to use pre-trained modules, e.g., initializing the word embedding or the bottom layers with pre-trained parameters. Although the word embedding technique has been widely used in NLP models, e.g., GloVe @cite_9 and ELMo @cite_7 , other pretraining-based methods are model-dependent. ELMo @cite_7 replaces the embedding layer with a pre-trained BiLSTM to capture contextual representations. This method is complementary to the proposed method; their combination may yield better performance than either of them alone, but that investigation is beyond the scope of this paper. Other methods, however, e.g., the Transformer language model @cite_16 , propose a unified semi-supervised framework to handle various tasks, which ties the task model to a fixed architecture. This constraint prevents advanced supervised models from benefiting from semi-supervised learning.
{ "abstract": [ "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.", "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7 (4.6 absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).", "We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pretrained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pretrained network is crucial, allowing downstream models to mix different types of semi-supervision signals." ], "cite_N": [ "@cite_9", "@cite_16", "@cite_7" ], "mid": [ "2250539671", "2896457183", "2962739339" ] }
Variational Semi-supervised Aspect-term Sentiment Analysis via Transformer
Aspect based sentiment analysis (ABSA) has two sub-tasks, namely aspect-term sentiment analysis (ATSA) and aspect-category sentiment analysis (ACSA). ACSA aims to infer the sentiment polarity with regard to predefined categories, e.g., the aspects food, price, ambience. On the other hand, ATSA aims at classifying the sentiment polarity of a given aspect word or phrase in the text. For example, given a review about a restaurant "the [pizza] aspect is the best if you like thin crusted pizza, however, the [service] aspect is awful.", the sentiment implications with regard to "pizza" and "service" are contrary: for the aspect "pizza", the sentiment polarity is "positive" while it is "negative" for the aspect "service". In contrast to document-level sentiment analysis, ATSA requires more fine-grained reasoning about the textual context. The task is worthy of investigation as it can obtain the attitude with regard to a specific entity of interest, and it is widely applicable in comment analysis, e.g., opinion generation. Recently, many attempts (Tang et al., 2016b; Pan and Wang, 2018; Liu et al., 2018; Li et al., 2018a) focus on supervised learning and pay much attention to the interaction between the aspect and the context. However, the amount of labeled data is quite limited as the annotation of aspects is laborious. Currently available data sets, e.g., SemEval, only have around 2K unique sentences and 3K sentence-aspect pairs, which is insufficient to fully exploit the power of deep models. Fortunately, a large amount of unlabeled data is available for free and can be accessed easily from the web. It would be of great significance if numerous unlabeled samples could be utilized to further facilitate the supervised ATSA classifier. Therefore, semi-supervised ATSA is a promising research topic. In ATSA, obtaining the sentiment of the aspect-term is semantically complicated, and it is non-trivial for a model to capture the sentimental similarity of aspects, which creates difficulties for semi-supervised learning. In this paper, we propose a classifier-agnostic framework named Aspect-term Semi-supervised Variational AutoEncoder based on Transformer (ASVAET), building on the variational autoencoder (Kingma and Welling, 2014). The variational autoencoder offers the flexibility to customize the model structure; in other words, the proposed framework is compatible with other supervised neural networks and boosts their performance. Our proposed model learns the latent representation of the input data and disentangles it into two independent parts, i.e., the aspect-term sentiment and the representation of the lexical context. By regarding the aspect sentiment polarity of the unlabeled data as a discrete latent variable, the model implicitly induces the sentiment polarity via variational inference. Specifically, the representation of the lexical context is extracted by the encoder and the aspect-term sentiment polarity is inferred by the specific ATSA classifier. The decoder takes these two representations as inputs and reconstructs the original sentence with two unidirectional language models. In contrast to conventional auto-regressive models, the latent representations have their specific meanings and are obtained by applying the encoder and the classifier to the input examples. Therefore, it is also possible to condition the sentence generation on the sentiment and lexical information w.r.t. a certain target entity.
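To make the task definition concrete, the following is a minimal illustration of one ATSA data point built from the review above; the dictionary layout is ours, not a fixed dataset schema.

```python
# One labeled ATSA example from the restaurant review discussed above;
# the field names are illustrative only.
labeled_example = {
    "sentence": "the pizza is the best if you like thin crusted pizza, "
                "however, the service is awful.",
    "aspect": "service",     # a target span occurring in the sentence
    "polarity": "negative",  # one of {"positive", "neutral", "negative"}
}

# The same sentence yields a second example with the opposite label:
# {"aspect": "pizza", "polarity": "positive"}. Unlabeled data keeps only
# the sentence and aspect fields.
```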
In addition, by separating the representation of the input sentence, the classifier becomes an independent module in our framework, which endows the method with the ability to integrate different classifiers. The method is presented in detail in Sec. 3. Experimental results are obtained on the two classical datasets from SemEval 2014 task 4 (Pontiki et al., 2014). Five recent available models are implemented as the classifier in ASVAET. Our method is able to utilize the unlabeled data and consistently improves the performance over the supervised models. Compared with other semi-supervised methods, i.e., in-domain word embedding pre-training and self-training, the proposed method also demonstrates better performance. We also evaluate the effect of the amount of labeled data and of sharing embeddings, and show that the structure can provide the separation between lexical context and sentiment polarity in the latent space.

Method Description

In this section, the problem definition is provided and then the model framework is presented in detail. The ATSA task aims to classify a data sample with input sentence $x = \{x_1, \ldots, x_n\}$ and corresponding aspect $a = \{a_1, \ldots, a_m\}$, where a is a subsequence of x, into a sentiment polarity y, where $y \in \{P, O, N\}$; P, O, N denote "positive", "neutral", "negative". For semi-supervised ATSA, we consider the following scenario. Given a dataset consisting of labeled samples $S_l$ and unlabeled samples $S_u$, where $S_l = \{(x^{(i)}_l, a^{(i)}_l, y^{(i)}_l)\}_{i=1}^{N_l}$ and $S_u = \{(x^{(i)}_u, a^{(i)}_u)\}_{i=1}^{N_u}$, the goal is to utilize $S_u$ to improve the classification performance over the supervised model that uses $S_l$ only. The architecture is depicted in Fig. 1. The method consists of three main components, i.e., the classifier, the encoder, and the decoder. The classifier can be any differentiable supervised ATSA model, which takes x and a as input and outputs the prediction of y. The encoder transforms the data into a latent space that is independent of the label y, and the decoder combines the outputs from the classifier and the encoder to reconstruct the input sentence. For the labeled data, the classifier and the autoencoder are trained with the given label y. For the unlabeled data, y is regarded as a latent discrete variable and is induced by maximizing the generative probability. As the classifier can be implemented by various models, the description of the classifier is omitted. We present an autoencoder structure based on the Transformer (Vaswani et al., 2017). In the following, the objective functions are clarified, followed by the model description.
The ELBO of $\log p(x, y|a)$ can be given as follows:

$\log p_\theta(x, y|a) \geq \mathbb{E}_{q_\phi(z|x,a,y)}[\log p_\theta(x|y, a, z)] - D_{KL}(q_\phi(z|x, a, y) \,\|\, p_\theta(z)) + \log p_\theta(y) = \mathcal{L}(x, a, y)$ (1)

where z is the latent variable that represents the lexical information of the sentence and $D_{KL}$ is the Kullback-Leibler divergence. In terms of the unlabeled data, the ELBO of $\log p(x|a)$ can be extended from Eq. 1:

$\log p_\theta(x|a) \geq \sum_y q_\phi(y|x, a) \, \mathcal{L}(x, a, y) + \mathcal{H}(q_\phi(y|x, a)) = \mathcal{U}(x, a)$ (2)

where $\mathcal{H}$ is the entropy function and $q_\phi(y|x, a)$ is the classification function. $q_\phi(y|x, a)$ can also be trained in the supervised manner using the labeled data. Combining the above objectives, the overall objective for the entire data set is:

$\mathcal{J} = \sum_{(x,a,y) \in S_l} -\mathcal{L}(x, a, y) + \sum_{x \in S_u} -\mathcal{U}(x, a) + \gamma \sum_{(x,a,y) \in S_l} -\log q_\phi(y|x, a)$ (3)

where $\gamma$ is a hyper-parameter that controls the weight of the additional classification loss. To implement this objective, three components are required, modeling $q_\phi(y|x, a)$, $q_\phi(z|x, a, y)$ and $p_\theta(x|y, a, z)$, respectively.

Classifier: Various currently available models can be used as the classifier. For the unlabeled data, the classifier is used to predict the distribution of the label y for the decoder, i.e., $y \sim q_\phi(y|x, a)$. The distribution $q_\phi(y|x, a)$ is tuned while maximizing the objective in Eq. 2. In this work, five classifiers are implemented in ASVAET, and they are also used as the supervised baselines for comparison.

Transformer Encoder: The encoder plays the role of $q_\phi(z|x, a, y)$. This module attempts to extract the lexical feature that is independent of the label y for a given data sample (x, a). In this way, z and y jointly form the representative vector of the input data. In our implementation, we use a bidirectional encoder to construct sentence embeddings. It is referred to as the Transformer encoder, which is actually a sub-graph of the Transformer architecture (Vaswani et al., 2017); the architecture is shown in the left part of Fig. 2. The encoder employs residual connections around each of the multi-head attention sub-layers, followed by layer normalization. To capture the aspect-term, we treat the aspect-term and its context differently via segment embeddings. To further emphasize the position of the conditional aspect, a position tag is also included for each token; the position tag indicates the distance from the token to the aspect. The position tag is transformed into a vector as defined in (Vaswani et al., 2017), which is added to the word embedding and segment embedding as the input of the Transformer encoder.

[Figure 1: sketch of the model with bidirectional encoder and decoder, assuming the aspect-term starts at the k-th position in x. Bottom: when using unlabeled data, the distribution $y \sim q_\phi(y|x, a)$ is provided by the classifier. Left: the sequence is encoded by a Transformer block, which receives the summation of three embeddings, i.e., segment (used to distinguish aspect words) $s_{x_n}$, position $p_{x_n}$ and word $e_{x_n}$; the encoding and the label y are used to parameterize the posterior $q_\phi(z|x, a, y)$. Right: a sample z from the posterior $q_\phi(z|x, a, y)$ and the label y are passed to the generative network, which estimates the probability $p_\theta(x|y, a, z)$ with two unidirectional Transformer decoders. The number of aspect tokens is $l_a$.]
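Before turning to the encoder details below, the objectives in Eqs. (1)-(3) can be sketched in PyTorch as follows. The `encoder`/`decoder`/`classifier` interfaces are assumptions, and the sketch assumes a standard-normal prior $p_\theta(z)$ and a strictly positive `sigma` (e.g., produced by a softplus head), which the paper's tanh parameterization does not by itself guarantee.

```python
import torch

def labeled_elbo(x, a, y, encoder, decoder, log_prior_y):
    """Sketch of Eq. (1): L(x, a, y) for a labeled batch. encoder(x, a, y)
    is assumed to return (mu, sigma) of q(z|x,a,y) with sigma > 0, and
    decoder(x, a, y, z) to return log p(x|y,a,z) per example, shape (B,)."""
    mu, sigma = encoder(x, a, y)
    z = mu + sigma * torch.randn_like(sigma)        # reparameterization
    rec = decoder(x, a, y, z)                       # E_q[log p(x|y,a,z)]
    # Closed-form KL(q(z|x,a,y) || N(0, I)), summed over latent dimensions.
    kl = 0.5 * (mu.pow(2) + sigma.pow(2)
                - 2.0 * torch.log(sigma) - 1.0).sum(-1)
    return rec - kl + log_prior_y                   # Eq. (1)

def unlabeled_elbo(x, a, classifier, encoder, decoder, log_prior_y):
    """Sketch of Eq. (2): U(x, a) marginalizes L(x, a, y) over the three
    polarities weighted by q(y|x,a) and adds the classifier entropy."""
    q_y = classifier(x, a)                          # (B, 3) probabilities
    total = torch.zeros(q_y.size(0))
    for k in range(q_y.size(-1)):
        y_onehot = torch.zeros_like(q_y)
        y_onehot[:, k] = 1.0
        total = total + q_y[:, k] * labeled_elbo(
            x, a, y_onehot, encoder, decoder, log_prior_y)
    entropy = -(q_y * torch.log(q_y + 1e-9)).sum(-1)
    return total + entropy                          # Eq. (2)

# Overall objective, Eq. (3): minimize
#   sum_labeled -L + sum_unlabeled -U + gamma * sum_labeled -log q(y|x,a)
```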
Let g denote the output of the Transformer encoder after pooling, which simply averages the last-layer hidden states of the aspect-term tokens (the number of tokens is equal to or greater than one), and let y be the indicator vector of the polarity. Then the distribution of z is given as:

$z \sim \mathcal{N}(\mu(x, y), \mathrm{diag}(\sigma^2(x, y)))$, where $\mu(x, y) = \tanh(W_\mu [g : y] + b_\mu)$ and $\sigma(x, y) = \tanh(W_\sigma [g : y] + b_\sigma)$.

Because the sequence is divided into two parts by the segment embedding, the encoder is aware of the position and content of the aspect-term a through the multi-head attention operations in the Transformer encoder. The information from both sides is aggregated into the aspect-term a, and therefore the resulting z gathers the information related to the aspect.

Transformer Decoders: The decoder is also a sub-graph of the Transformer architecture (Vaswani et al., 2017) and focuses on reconstructing the original text. The main difference from the Transformer encoder is that the Transformer decoder is unidirectional: the self-attention sub-layer is modified to prevent positions from attending to subsequent positions. The textual sequence is well known to be semantically complex, and it is non-trivial for a Transformer decoder to capture the high-level semantics. Here we investigate two questions: how to implement $p_\theta(x|y, a, z)$ without losing the information of a, and how to capture the semantic polarity with a sequential model. For the first question, denoting that x is composed of three parts $(x_l, a, x_r)$, we use two Transformer decoders to model the left and right content. For the second question, each token is generated conditioned on the summation of the variable z and the embedding of y. One way to achieve $p_\theta(x|y, a, z)$ is to separate the sequence into two parts and reverse the process in the two unidirectional decoders. For each decoder, the input state is the summation of four inputs, i.e., the polarity indicator vector y (from the classifier or the labeled dataset), the context vector z from the encoder, the input token embedding $e_{x_t}$, and the position embedding $p_{x_t}$:

$\overleftarrow{h}^{trm}_t = \overleftarrow{f}_{trm}(e_{[x_t : a]}, p_{x_t}, y, z), \quad x_t \in [x_l : a]$
$p(x_{t-1} \mid \cdot) = \mathrm{softmax}(W_p \overleftarrow{h}^{trm}_t + b_p)$
$\log p_\theta(x_l \mid a, y, z) = \sum_{x_t \in x_l} \log p(x_t \mid \cdot)$

$\overrightarrow{h}^{trm}_t = \overrightarrow{f}_{trm}(e_{[a : x_t]}, p_{x_t}, y, z), \quad x_t \in [a : x_r]$
$p(x_{t+1} \mid \cdot) = \mathrm{softmax}(W_p \overrightarrow{h}^{trm}_t + b_p)$
$\log p_\theta(x_r \mid a, y, z) = \sum_{x_t \in x_r} \log p(x_t \mid \cdot)$

This is equivalent to generating two sequences with two decoders. When decoding the left part (or the right part), the aspect is processed by the decoder first, and hence the decoder is aware of the aspect-terms. The position tag is also used in the decoder.

Experiments

Datasets and Preparation: The models are evaluated on two benchmarks: the Restaurant (REST) and Laptop (LAPTOP) datasets from the SemEval ATSA challenge (Pontiki et al., 2014). The REST dataset contains reviews in the restaurant domain, while the LAPTOP dataset contains reviews of laptop products. The statistics of these two datasets are listed in Table 1. When processing these two datasets, we follow the same procedures as in another work (Lam et al., 2018). The datasets have a few samples labeled as "conflict", and these samples are removed. All tokens in the samples are lowercased without other preprocessing, e.g., removing stop words, symbols or digits. In terms of the unlabeled data, we obtained samples in the same domain for the REST and LAPTOP datasets.
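A hedged sketch of the posterior parameterization just described: mean-pool the last-layer states over the aspect positions to obtain g, concatenate the polarity indicator y, and apply tanh-activated affine maps. The module layout and shape handling are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AspectPosterior(nn.Module):
    """Sketch of q(z|x,a,y) as parameterized above."""
    def __init__(self, hidden, n_labels=3, latent=50):
        super().__init__()
        self.mu = nn.Linear(hidden + n_labels, latent)
        self.sigma = nn.Linear(hidden + n_labels, latent)

    def forward(self, enc_states, aspect_mask, y):
        # g: average of the last-layer states at aspect positions.
        # enc_states: (B, n, hidden); aspect_mask: (B, n) with 1 on aspect
        # tokens; y: (B, n_labels) polarity indicator.
        mask = aspect_mask.unsqueeze(-1).float()
        g = (enc_states * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        gy = torch.cat([g, y], dim=-1)
        mu = torch.tanh(self.mu(gy))
        sigma = torch.tanh(self.sigma(gy))  # as in the text; note a softplus
                                            # head would guarantee sigma > 0
        return mu, sigma

# Sampling with the reparameterization trick:
# mu, sigma = posterior(enc_states, aspect_mask, y)
# z = mu + sigma * torch.randn_like(sigma)
```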
For the REST, the unlabeled The NLTK sentence tokenizer is utilized to extract the sentences from the raw comments. And each sentence is regarded as a sample in our model for both REST and LAPTOP. To obtain the aspects in the unlabeled samples, an open-sourced aspect extractor 4 is pre-trained using labeled data. The resulting test F1 score is 88.42 for the REST and 80.12 for the LAPTOP. Then the unlabeled data is processed by the pre-trained aspect extractor to obtain the aspects. The sentences that have no aspect are removed. And the sentences are filtered with maximal sentence length 80. The statistic of the resulting sentences is given in Table. 2. Model Configuration & Classifiers In the experiments, the model is fixed with a set of universal hyper-parameters. The number of units in the encoder and the decoder is 100 and the latent variable is of size 50 and the number of layers of both Transformer blocks is 2, the number of selfattention heads is 8. The KL weight klw should be carefully tuned to prevent the model from trapping in a local optimum, where z carries no useful information. In this work, the KL weight is set to be 1e-4. In term of word embedding, the pre-trained GloVe (Pennington et al., 2014) is used as the in- Table 3: Experimental results (%). For each classifier, we performed five experiments, i.e., the supervised classifier, the supervised classifier with pre-trained embedding using unlabeled data and our model with the classifier. The results are obtained after 5 runs, and we report the mean and the standard deviation of the test accuracy, and the Macro-averaged F1 score. Better results are in bold. denotes that the results are extracted from the original paper. put of the encoder and the decoder 5 and the outof-vocabulary words are excluded. And it is fixed during the training. The γ is set to be 10 across the experiments. We implemented and verified four kinds of mainstream ATSA classifiers integrated into our model, i.e., TC-LSTM (Tang et al., 2016a), Mem-Net (Tang et al., 2016b), BILSTM-ATT-G (Zhang and Liu, 2017), IAN (Ma et al., 2017) and TNet (Li et al., 2018b). • TC-LSTM: Two LSTMs are used to model the left and right context of the target separately, then the concatenation of two representations is used to predict the label. • MemNet: It uses the attention mechanism over the word embedding over multiple rounds to aggregate the information in the sentence, the vector of the final round is used for the prediction. • IAN: IAN adopts two LSTMs to derive the representations of the context and the target phrase interactively and the concatenation is fed to the softmax layer. • BILSTM-ATT-G: It models left and right contexts using two attention-based LSTMs 5 http://nlp.stanford.edu/data/glove.8B.300d.zip and makes use of a special gate layer to combine these two representations. The resulting vector is used for the prediction. • TNet-AS: Without using an attention module, TNet adopts a convolutional layer to get salient features from the transformed word representations originated from a bidirectional LSTM layer. Among current supervised models, TNet is currently one of the in-domain state-of-the-art methods and the TNet-AS is one of the two variants of TNet. The configuration of hyper-parameters and the training settings are the same as in the original papers. Various classifiers are tested here to demonstrate the robustness of our method and show that the performance can be consistently improved for different classifiers. 
Table 3 shows the experimental results on the REST and LAPTOP datasets. Two evaluation metrics are used here, i.e., classification accuracy and Macro-averaged F1 score. The latter is more sensitive when the dataset is class-imbalance. In this table, the semi-supervised results are obtained with 10K unlabeled data. We didn't observe further improvement with more unlabeled data. The mean and the standard deviation are reported over 5 runs. For each classifier clf, we conducted the following experiments: Main Results • clf : The classifier is trained using labeled data only. • clf (EMB): We use CBOW (Mikolov et al., 2013) to train the word embedding vectors using both labeled and unlabeled data. And the resulting vectors, instead of pre-trained GloVe vectors, are used to initialize the embedding matrix of the classifier. This is the embedding-level semi-supervised learning as the embedding layer is trained using in-domain data. • clf (ST): The self-training (ST) method is a typical semi-supervised learning method. We performed the self-training method over each classifier. At each epoch, we select the 1K samples with the best confidence and give them pseudo labels using the prediction. Then the classifier is re-trained with the new labeled data. The procedure loops until all the unlabeled samples are labeled. • clf (ASVAET): The proposed method that uses clf as the classifier. Note again that the classifier is an independent module in the proposed model, and the same configuration is used as in the supervised learning. Besides, we also include the results of several supervised models in the first block, i.e., CNN-ASP (Lam et al., 2018), AE-LSTM, ATAE-LSTM (Wang et al., 2016), GCAE , from the original paper. From the Table 3, the ASVAET is able to improve supervised performance consistently for all classifiers. For the MemNet, the test accuracy can be improved by about 2% by the TSSVAE, and so as the Macro-averaged F1. The TNet-AS outperforms the other three models. Compared with the other two semi-supervised methods, the ASVAET also shows better results. The ASVAET outperforms the compared semisupervised methods evidently. The adoption of indomain pre-trained word vectors is beneficial for the performance compared with the Glove vectors. Ablation Studies Effect of Labeled Data Here we investigated whether the ASVAET works with less labeled data. Without loss of general- ity, the MemNet is used as the basic classifier. We sampled different amount of labeled data to verify the improvement by using ASVAET. The test accuracy curve w.r.t. the amount of labeled data used is shown in Fig. 3. With fewer labeled samples, the test accuracy decreases, however, the improvement becomes more evident. When using 500 labeled samples, the improvement is about 3.2%. With full 3591 labeled samples, 1.5% gain can be obtained. This illustrates that our method can improve the accuracy with limited data. Effect of Sharing Embeddings In previous works, the word embedding is shared among all the components. In other words, the word embedding is also tuned in learning to reconstruct the data. It is questionable whether the improvement is obtained by using VAE or multi- Table 5: Nice sentences that are generated by controlling the sentiment polarity y using the decoder. task learning (text generation and classification). In the aforementioned experiments, the embedding layer is not shared between the classifier and autoencoder. This implementation guarantees that the improvement does not come from learning to generate. 
To verify whether sharing the embedding is beneficial, we also conducted experiments with a shared embedding, as illustrated in Table 4. The results indicate that jointly training the embedding layer harms the performance in this task: the gradients from the autoencoder may collide with the gradients from the classifier and therefore interfere with the optimization direction.

Analysis of the Latent Space

The model encodes the data into two representations, i.e., y and z. These two latent variables represent the sentiment polarity and the lexical context of the input text, respectively. We expect y and z to be fully disentangled and to carry different meanings. The scatter plots of the latent variable z (cf. Fig. 4) help us gain a better understanding. As shown in the figure, the distributions of the three different polarities are very similar, which indicates that the lexical context representation z is independent of the polarity y. The generation ability of the decoder is also investigated. Several generated sentences are selected and shown in Table 5. By controlling the sentiment polarity y with the same z, the decoder can generate sentences with different sentiments in a similar format. This indicates that the decoder is trained successfully to perceive y and to model the relationship between y and x.

Conclusion

A VAE-based framework has been proposed for the ATSA task. In this work, the encoder and decoder are constructed from Transformers. Both analytical and experimental work has been carried out to show the effectiveness of the ASVAET. The method is verified with various kinds of classifiers, and for all tested classifiers an improvement is obtained when they are equipped with ASVAET, which demonstrates its universality. In this paper, the aspect-term is assumed to be known, and there is an error accumulation problem when using the pre-trained aspect extractor. Accordingly, in future work, it would be interesting to investigate whether the aspect and the sentiment polarity can be learned jointly for the unlabeled data. It would be of great importance if such detailed knowledge could be extracted from the unlabeled data, which would shed light on other related tasks.
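The controlled generation discussed above (same z, different y, as in Table 5) can be sketched as a greedy decoding loop; for brevity the sketch ignores the two-direction decoder split around the aspect, and the decoder's step interface is an assumption.

```python
import torch

@torch.no_grad()
def generate_with_polarity(z, polarity_id, decoder, bos_id, eos_id,
                           max_len=40):
    """Fix the lexical code z, set the polarity indicator y, and greedily
    decode one sentence. decoder(tokens, y=..., z=...) is assumed to return
    logits of shape (1, seq_len, vocab)."""
    y = torch.zeros(1, 3)
    y[0, polarity_id] = 1.0            # positive / neutral / negative
    tokens = [bos_id]
    for _ in range(max_len):
        logits = decoder(torch.tensor([tokens]), y=y, z=z)[:, -1]
        next_id = int(logits.argmax(-1))
        if next_id == eos_id:
            break
        tokens.append(next_id)
    return tokens[1:]

# Decoding the same z with different polarity_id values should yield
# sentences that share the lexical context but flip the sentiment,
# as in the examples of Table 5.
```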
3,826
1810.10279
2896757367
Cloud platforms offer different types of virtual machines which ensure different guarantees in terms of availability and volatility, provisioning the same resource through multiple pricing models. For instance, in the Amazon EC2 cloud, the user pays per hour for on-demand instances, while spot instances are unused resources available for a lower price. Despite the monetary advantages, a spot instance can be terminated or hibernated by EC2 at any moment. Using both hibernation-prone spot instances (for cost sake) and on-demand instances, we propose in this paper a static scheduling for applications which are composed of independent tasks (bag-of-tasks) with deadline constraints. However, if a spot instance hibernates and does not resume within a time which guarantees the application's deadline, a temporal failure takes place. Our scheduling, thus, aims at minimizing the monetary costs of bag-of-tasks applications in the EC2 cloud, respecting their deadlines and avoiding temporal failures. Performance results with task execution traces, the configuration of Amazon EC2 virtual machines, and the EC2 market history confirm the effectiveness of our scheduling and that it tolerates temporal failures.
Bag-of-tasks applications on clouds are widely used not only for scientific applications but also for many commercial ones. In @cite_13 , Facebook reports that the jobs running on their internal data centers are mostly composed of independent tasks. Many works then propose scheduling the execution of independent tasks on both homogeneous and heterogeneous cloud environments @cite_3 . In the former, the performance and pricing of all available VMs are the same; in this case, authors usually consider either reserved VMs @cite_10 or on-demand VMs @cite_15 . For instance, @cite_15 study the scheduling of applications on on-demand VMs distributed across different datacenters, focusing on the trade-offs between performance and cost, while @cite_10 provides a solution that satisfies job deadlines while minimizing monetary cost; the proposed heuristics use both on-demand and reserved VMs. Works on heterogeneous clouds consider different types of VMs. For instance, in @cite_22 the authors present a heuristic algorithm for executing a bag-of-tasks application taking into account either budget or deadline constraints. In @cite_3 , the authors present an extensive survey and taxonomy of existing research on scheduling bag-of-tasks applications on clouds.
{ "abstract": [ "", "Scheduling Bag-of-Tasks (BoT) applications on the cloud can be more challenging than grid and cluster environments. This is because a user may have a budgetary constraint or a deadline for executing the BoT application in order to keep the overall execution costs low. The research in this paper is motivated to investigate task scheduling on the cloud, given two hard constraints based on a user-defined budget and a deadline. A heuristic algorithm is proposed and implemented to satisfy the hard constraints for executing the BoT application in a cost effective manner. The proposed algorithm is evaluated using four scenarios that are based on the trade-off between performance and the cost of using different cloud resource types. The experimental evaluation confirms the feasibility of the algorithm in satisfying the constraints. The key observation is that multiple resource types can be a better alternative to using a single type of resource.", "Abstract Cloud computing has been widely adopted due to the flexibility in resource provisioning and on-demand pricing models. Entire clusters of Virtual Machines (VMs) can be dynamically provisioned to meet the computational demands of users. However, from a user’s perspective, it is still challenging to utilise cloud resources efficiently. This is because an overwhelmingly wide variety of resource types with different prices and significant performance variations are available. This paper presents a survey and taxonomy of existing research in optimising the execution of Bag-of-Task applications on cloud resources. A BoT application consists of multiple independent tasks, each of which can be executed by a VM in any order; these applications are widely used by both the scientific communities and commercial organisations. The objectives of this survey are as follows: (i) to provide the reader with a concise understanding of existing research on optimising the execution of BoT applications on the cloud, (ii) to define a taxonomy that categorises current frameworks to compare and contrast them, and (iii) to present current trends and future research directions in the area.", "Bag of Distributed Tasks (BoDT) can benefit from decentralised execution on the Cloud. However, there is a trade-off between the performance that can be achieved by employing a large number of Cloud VMs for the tasks and the monetary constraints that are often placed by a user. The research reported in this paper is motivated towards investigating this trade-off so that an optimal plan for deploying BoDT applications on the cloud can be generated. A heuristic algorithm, which considers the user's preference of performance and cost is proposed and implemented. The feasibility of the algorithm is demonstrated by generating execution plans for a sample application. The key result is that the algorithm generates optimal execution plans for the application over 91 of the time.", "Many web service providers use commercial cloud computing infrastructures like Amazon for flexible and reliable service deployment. For these web service providers, the cost of cloud computing usage becomes a big part of their IT department cost. Facing the diverse pricing models including on-demand, reserved, and spot instance, it is difficult for web service providers to optimize their cost. 
This paper introduces a new cloud brokerage service to help web service providers to minimize their cloud computing cost for deadline-constrained batch jobs, which have been a significant workload in web services. Our cloud brokerage service associates each batch job with deadline, and always tries to use cheaper reserved instances for computation to maintain a minimum cost. We achieve this with the following two steps: (1) given a set of jobs' specifications, determine the scheduling of jobs, (2) given the scheduling and pricing options, find an optimal instance renting strategy. We prove that both problems in two steps are computation intractable, and propose approximation algorithms for them. Trace-based evaluation shows that our cloud brokerage service can reduce up to 57 of the cloud computing cost." ], "cite_N": [ "@cite_13", "@cite_22", "@cite_3", "@cite_15", "@cite_10" ], "mid": [ "", "1542558562", "2962790327", "2951636684", "2051390526" ] }
A Bag-of-Tasks Scheduler Tolerant to Temporal Failures in Clouds
Abstract: Cloud platforms have emerged as a prominent environment to execute high performance computing (HPC) applications, providing on-demand resources as well as scalability. They usually offer different classes of Virtual Machines (VMs) which ensure different guarantees in terms of availability and volatility, provisioning the same resource through multiple pricing models. For instance, in the Amazon EC2 cloud, the user pays per hour for on-demand VMs, while spot VMs are unused instances available for a lower price. Despite the monetary advantages, a spot VM can be terminated, stopped, or hibernated by EC2 at any moment. Using both hibernation-prone spot VMs (for cost sake) and on-demand VMs, we propose in this paper a static scheduling for HPC applications which are composed of independent tasks (bag-of-tasks) with deadline constraints. However, if a spot VM hibernates and does not resume within a time which guarantees the application's deadline, a temporal failure takes place. Our scheduling, thus, aims at minimizing the monetary costs of bag-of-tasks applications in the EC2 cloud, respecting the application's deadline and avoiding temporal failures. To this end, our algorithm statically creates two scheduling maps: (i) the first one contains, for each task, its starting time and on which VM (i.e., an available spot or on-demand VM with the current lowest price) the task should execute; (ii) the second one contains, for each task allocated on a spot VM in the first map, its starting time and on which on-demand VM it should be executed to meet the application's deadline and so avoid temporal failures. The latter is used whenever the hibernation period of a spot VM exceeds a time limit. Performance results from simulation with task execution traces, the configuration of Amazon EC2 VM classes, and the VM market history confirm the effectiveness of our scheduling and that it tolerates temporal failures.

Index Terms: Clouds, Temporal failures, Scheduling

I. INTRODUCTION

High Performance Computing (HPC) applications are typically executed in dedicated data centers. However, in the past few years, cloud computing has emerged as an attractive option to run these applications due to several advantages it brings when compared with a dedicated infrastructure. Clouds provide a significant reduction in operational costs, besides offering rapid elastic provisioning of computing resources such as virtual machines and storage. In cloud environments, however, besides the usual goal of minimizing the execution time of an HPC application, it is also important to minimize the monetary cost of using cloud resources, i.e., there is a trade-off between performance and monetary cost. In this paper, we are interested in HPC bag-of-tasks (BoT) applications with time constraints (deadlines) within which they must finish. BoT applications are composed of independent tasks which can be executed in any order and in parallel. Although simple, the BoT approach is used by several HPC applications such as parameter sweep, chromosome mapping, Monte Carlo simulation, and computer imaging applications [1], [2], [3], [4]. Furthermore, they may be deadline-bound, i.e., the correctness of the computation also depends on the time at which the computation of all tasks ends. Existing Infrastructure-as-a-Service (IaaS) cloud platforms (e.g., Amazon EC2, Microsoft Azure, Google Cloud, etc.)
enable users to dynamically acquire resources, usually as virtual machines (VMs), according to their application requirements (CPU, memory, I/O, etc.) in a pay-as-you-use price model. They usually offer different classes of VMs which ensure different guarantees in terms of availability and volatility, provisioning the same resource through multiple pricing models. For instance, in Amazon EC2, there are basically three classes: (i) reserved VM instances, where the user pays an upfront price, guaranteeing long-term availability; (ii) on-demand VM instances, which are allocated for specific time periods and incur a fixed cost per unit time of use, ensuring availability of the instance during this period; (iii) spot VM instances, which are unused instances available at a lower price than the on-demand one. The availability of spot VM instances fluctuates based on the spot market's current demand. The allocation of a spot instance involves defining the VM type and the maximum price the user is willing to pay. However, if there are not enough instances to meet clients' demands, the VM in question can be interrupted by the cloud provider (temporarily or definitively). Despite the risk of unavailability, the main advantage of spot VMs is that their cost is much lower than that of on-demand VMs, since the user requests unused instances at steep discounts. With Amazon's more recent announcement, an interrupted spot VM can be terminated, stopped, or hibernated. Hence, when requesting a spot instance, the user specifies the required type as well as the action that Amazon EC2 should take in case the VM instance is interrupted. Whenever a spot instance is hibernated by EC2, its memory and context are saved to the root Elastic Block Store (EBS) volume and, during the VM's pause, the user is only charged for EBS storage. EC2 resumes the hibernated instance, reloading the saved memory and context, only when there is enough capacity for that instance type at a spot price lower than the user's maximum price. Contrary to stopped or terminated instances, whose users are warned two minutes before the interruption, hibernated instances are paused immediately after the user is notified. Our proposal in this work is to provide a static cloud scheduler for bag-of-tasks applications that uses, for cost sake, hibernation-prone spot instances as much as possible, respecting the application's deadline constraints while minimizing monetary costs. However, if a spot instance hibernates, it might not resume within a time that guarantees the deadline constraints of the application. In this case, a temporal failure would take place, i.e., correct computation is performed but too late to be useful (inability to meet deadlines). Thus, in order to avoid temporal failures in case of spot instance hibernation, our scheduler statically computes the time interval during which a hibernated instance can stay in this state without violating the application's deadline. If the instance does not resume by the end of this interval, our scheduler moves the current tasks of the spot instance, as well as those not yet executed, to on-demand instances in order to guarantee the application's deadline. Note that even after migrating the remaining task executions to on-demand VMs, the scheduler continues to seek to minimize monetary costs. The rest of the paper is organized as follows.
Section II discusses related work. Section III describes our proposed static scheduling, including its algorithms. Evaluation results from simulations conducted with real traces are presented in Section IV. Finally, Section V concludes the paper and presents some future directions.

III. A STATIC SCHEDULER OF BAG-OF-TASKS APPLICATIONS IN CLOUDS

Aiming at reducing monetary costs, our proposed scheduling uses hibernation-prone spot instances. However, due to the possibility of hibernation and the need to meet the application's deadline, the scheduler might migrate tasks that run on spot instances to on-demand ones whenever the duration of an instance's hibernation would induce a temporal failure. We denote primary tasks those allocated on VMs (spot or on-demand) that guarantee the application's deadline with minimum monetary cost, and backup tasks those allocated on on-demand VMs that were originally primary tasks allocated on spot VMs. Backup tasks are executed only if the hibernation state lasts so long that the application's deadline could no longer be met, thus avoiding temporal failures. Therefore, a task might have two versions (primary and backup) which are statically scheduled on two different cores with time exclusion. The scheduling outputs two allocation mappings: one with the primary tasks and the other with the backup tasks. Concerning the primary mapping, the proposed strategy aims at minimizing monetary costs by adopting the hibernation-prone spot instances with the highest processing power. Regarding the backup mapping, our strategy aims at minimizing monetary costs by using the minimum number of the cheapest on-demand VMs, without violating the application's deadline. We assume that each task of the BoT application executes on one core and requires a given amount of main memory, and that cloud providers offer a set of different VM types with varying numbers of virtual cores (VCPUs) and memory sizes. Therefore, a VM running on a multi-core machine can execute more than one task simultaneously (one VCPU per task), provided there is enough main memory to allocate them. We also consider that VMs are offered in two different markets, spot and on-demand, where, contrary to the former, the latter cannot hibernate. Note that our solution only allocates spot VMs of the types that support hibernation. Figure 1 shows an example where hibernation does not require the execution of backup tasks: a spot instance starts hibernating at time p and resumes at y, before the time limit $start\_bkp$ at which the backups would be triggered; the deadline D can then be met without executing the backups. On the other hand, Figure 2 presents a case where it is necessary to execute the backup tasks on an on-demand virtual machine to meet the deadline, since the hibernation exceeded the time limit $start\_bkp$. Let M be the set of virtual machines, B the set of tasks that compose a bag-of-tasks application, and $T = \{1, \dots, D\}$ the set of feasible periods, where D is the deadline defined by the user. For each VM, M keeps its storage capacity and the number of cores with the corresponding computational power. Set B keeps, for each task, (i) its execution time on a machine with known computational power (base time duration) and (ii) the amount of main memory that the task needs. Let $Queue_{vm_j} \subset B$ be the set of all tasks scheduled on $vm_j$.
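To make these definitions concrete, the sketch below shows one possible encoding of tasks, VMs, and the hibernation-tolerance check around $start\_bkp$. It is an illustration only: the field names, and the assumption that $start\_bkp$ equals the deadline minus the runtime of the backup map, are ours, not the paper's code.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:                  # one element of B
    base_time: float         # execution time on the reference machine (h)
    memory: float            # main memory required (GB)

@dataclass
class VM:                    # one element of M
    vcpus: int               # number of cores (one task per VCPU)
    memory: float            # total main memory (GB)
    power: float             # processing capacity P_vm
    spot: bool               # hibernation-prone spot or on-demand
    queue: List[Task] = field(default_factory=list)  # Queue_vm

def start_bkp(deadline: float, backup_runtime: float) -> float:
    """Latest instant at which the backup map can still be started
    without violating the deadline D (assumed form: D minus the time
    needed to run the backup tasks on the chosen on-demand VMs)."""
    return deadline - backup_runtime

def must_trigger_backups(now: float, deadline: float,
                         backup_runtime: float) -> bool:
    """True when a still-hibernated spot VM can no longer be waited for."""
    return now >= start_bkp(deadline, backup_runtime)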
It is worth mentioning that the execution time of a task is re-calculated as the product of its original execution time and the slowdown of the VM where it will be executed. The slowdown of a VM is defined as $P_B / P_{vm_j}$, where $P_B$ is the processing capacity of the machine used to calculate the base times and $P_{vm_j}$ is the processing capacity of the VM. Thus, the slowdown represents the processing capacity of a VM compared with the machine used to compute the base time durations. When a VM is allocated to a user, he/she pays for a full time interval called a slot, usually one hour. Thus, if a VM is used for 61 minutes, for example, the user will be charged for two slots (120 minutes). Note that one slot can correspond to several periods: if each period corresponds to one minute, a slot of one hour corresponds to 60 periods. It is, thus, in the user's best interest to maximize the use of slots already allocated. Let $start\_slot_{vm_j}$ and $end\_slot_{vm_j}$ be, respectively, the time when the first slot was allocated to $vm_j$ and the end time of the last slot allocated to this same VM, such that $start\_slot_{vm_j} < end\_slot_{vm_j}$. Whenever the execution time of a task allocated to $vm_j$ exceeds $end\_slot_{vm_j}$, the user has to pay for another full interval; if part of that interval is not used by any task, we have a waste of time. To compute that waste, we define $waste_{vm_j}$ in Equation 1 as the time interval inside the last contracted slot during which $vm_j$ remains idle after executing all tasks allocated to it:

$waste_{vm_j} = end\_slot_{vm_j} - end_{t_{max}}$   (1)

where $end_{t_{max}} = \max_{t_l \in Queue_{vm_j}} end_{t_l}$ and $end_{t_l}$ is the end time of task $t_l$.
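A toy computation may help fix these definitions; the numbers below are illustrative only and not taken from the paper:

import math

SLOT = 1.0               # one billing slot = 1 hour

def runtime_on(base_time, p_base, p_vm):
    """Task runtime on a VM: base time scaled by the slowdown P_B / P_vm."""
    return base_time * (p_base / p_vm)

def charged_slots(busy_time):
    """Number of full slots billed for `busy_time` hours of use."""
    return math.ceil(busy_time / SLOT)

def waste(end_slot, end_tmax):
    """Equation 1: idle tail of the last contracted slot."""
    return end_slot - end_tmax

# A task taking 0.8 h on the reference machine runs on a VM with half
# its processing capacity, so it takes 0.8 * (1.0 / 0.5) = 1.6 h there.
t = runtime_on(0.8, p_base=1.0, p_vm=0.5)          # 1.6 h
slots = charged_slots(t)                           # 2 slots billed
print(waste(end_slot=slots * SLOT, end_tmax=t))    # 0.4 h of waste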
A. Primary Task Scheduling

Algorithm 1 shows the primary scheduling heuristic, a greedy algorithm that allocates the set of tasks $t_i \in B$ to a set of VMs (spot and on-demand). Tables I and II present the variables and functions used. The algorithm receives B, M, D, and $VM_{time\_limit}$ as input parameters. $VM_{time\_limit}$ defines the maximum occupation period of a VM: for example, if D = 100 h and $VM_{time\_limit} = 0.5$, the tasks must be scheduled so as not to exceed the period $D \times VM_{time\_limit} = 50$ h. Since the objective is to respect the application's deadline (even in the presence of hibernation) while minimizing monetary costs, all choices made by the heuristic are guided by the VMs' prices and by the deadline D and $VM_{time\_limit}$ defined by the user. Initially, tasks are sorted in descending order of the memory they require (line 1). Then, for each task, the algorithm applies a best-fit heuristic that tries to include it in an already allocated slot of the virtual machine with the highest waste of time (lines 7 to 13), provided the VM has enough memory and the insertion respects $D \times VM_{time\_limit}$. If such a VM does not exist, the heuristic tries to allocate new slots on an already allocated VM with enough memory to execute the task, but now with the smallest waste (lines 16 to 23). As in the previous case, the slot allocation must not violate $D \times VM_{time\_limit}$ (line 19). Allocating slots on an already allocated VM reduces the boot-time overhead compared with allocating a new VM. However, if such an allocation is not possible, the algorithm must allocate a new VM. In this case, the heuristic selects the best type of VM in terms of execution time (line 25) and then chooses the market where this VM shall be acquired, on-demand or spot, considering the offered prices (lines 26 to 30). Finally, it updates the primary scheduling map (line 35). Figure 3 shows an example of the scheduling of nine tasks on a virtual machine with two cores. In the example, there are two gaps (one per core) caused by the lack of memory to allocate a task within the current slot; the waste of time and the deadline D are also shown.

B. Backup Task Scheduling

Let $Succ_{t_k}^{vm_j} \subset Queue_{vm_j}$ be the set containing task $t_k$ and all its successors, i.e., all tasks allocated to the same core as $t_k$ that execute after the end of $t_k$. Let $Parallel_{t_i}^{vm_j} \subset Queue_{vm_j}$ be the set of all tasks that execute in parallel with $t_i$ on $vm_j$. These sets determine which tasks are affected when a VM hibernates for too long. The proposed backup scheduling algorithm is presented in Algorithm 2, where Table III shows the variables used and Table IV describes the procedures and functions. As can be seen in line 4, a recovery group $Rec\_Group_{t_i}^{vm_j}$ is created for each task $t_i \in Queue_{vm_j}$. This algorithm employs a scheduling strategy similar to that of Algorithm 1, in which tasks are scheduled on different VMs using a best-fit heuristic. Unlike Algorithm 1, however, the VM selection in Algorithm 2 prioritizes the on-demand VM with the cheapest monetary cost, given by the product of its price and the execution time of a backup task on it. Note that the backup scheduling has to ensure that, if a migration event occurs, the number of periods required to execute the backup tasks respects the deadline. Thus, the VMs chosen by the function get_best_VM (lines 8 and 10) guarantee that $end_{t_i} + runtime(Rec\_Group_{t_i}^{vm_j})$ does not exceed the deadline D.
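Before turning to the experiments, the greedy best-fit strategy shared by Algorithms 1 and 2 can be sketched as follows, here for the primary map. This is a simplified reading of the text above, not the paper's pseudocode: VMs are reduced to a single core, the spot/on-demand market choice is omitted, and all names are ours.

import math
from collections import namedtuple

SLOT = 1.0                            # billing slot length (hours)
VMType = namedtuple("VMType", "name power memory")

class SimpleVM:
    """Single-core VM sketch: tasks run back to back on one VCPU."""
    def __init__(self, kind):
        self.kind = kind
        self.end_tmax = 0.0           # end time of the last scheduled task

    @property
    def end_slot(self):               # end of the last paid slot
        return math.ceil(self.end_tmax / SLOT) * SLOT

    def waste(self):                  # Equation 1
        return self.end_slot - self.end_tmax

def schedule_primary(tasks, vm_types, D, vm_time_limit, p_base=1.0):
    """Greedy best-fit sketch of Algorithm 1; tasks are (base_time,
    memory) pairs, processed in descending order of memory."""
    horizon = D * vm_time_limit
    vms, plan = [], {}
    order = sorted(range(len(tasks)), key=lambda i: tasks[i][1], reverse=True)
    for i in order:
        base_time, mem = tasks[i]
        run = lambda v: base_time * (p_base / v.kind.power)  # slowdown-scaled
        # 1) reuse already paid slots, preferring the highest waste
        fits = [v for v in vms if v.kind.memory >= mem
                and v.end_tmax + run(v) <= min(v.end_slot, horizon)]
        # 2) otherwise extend an existing VM, smallest waste first
        ext = [v for v in vms if v.kind.memory >= mem
               and v.end_tmax + run(v) <= horizon]
        if fits:
            vm = max(fits, key=SimpleVM.waste)
        elif ext:
            vm = min(ext, key=SimpleVM.waste)
        else:
            # 3) allocate a new VM of the fastest type with enough memory
            kind = max((k for k in vm_types if k.memory >= mem),
                       key=lambda k: k.power)
            vm = SimpleVM(kind)
            vms.append(vm)
        plan[i] = (vm.kind.name, vm.end_tmax)   # task i starts here
        vm.end_tmax += run(vm)
    return plan

The backup map would follow the same skeleton, but, as described above, ranking the candidate on-demand VMs by the product of price and backup runtime instead of by processing power.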
IV. EXPERIMENTAL RESULTS

This section presents execution times and monetary costs of simulations performed with real BoT applications, using the configuration of Amazon EC2 virtual machines and considering a real VM market history. According to the information from Amazon Web Services (AWS), only the VMs of families C3, C4, C5, M4, M5, R3, and R4 with memory below 100 GB, running in the spot market, are able to hibernate when an interruption occurs. Therefore, for the purposes of this work, the fourth-generation general-purpose VMs (M4) and the third- and fourth-generation compute-optimized VMs (C3 and C4) were used. By choosing third- and fourth-generation VMs, it was possible to compute the slowdown using the data from [22]. The workload used in the evaluation was obtained from [23], a database that contains the execution traces of jobs submitted to Google's servers throughout the month of March 2011. Based on these traces, we defined: (i) the number of tasks of a job; (ii) the execution time of each task of the job; and (iii) the average memory footprint. For the experiments, four BoT-type jobs were chosen from the first 10 days of the traces. Table V summarizes the main characteristics of these jobs, together with the corresponding deadlines for the virtual machines used in our tests. We adopted, for each job, the shortest deadline that enables the generation of valid primary and backup schedulings. These values were computed iteratively, using a $VM_{time\_limit}$ value of 0.5, starting with D = 1 h in increments of 1 hour, and stopping at the first valid scheduling given by Algorithms 1 and 2. The execution times were obtained from Google machines used in 2011. As the hardware information and computational capacity of these machines are not provided, we assumed that these times were obtained on the VM with the lowest computational power whose memory capacity was sufficient to meet the requirements of the tasks. As we can observe in Table VI, the VMs whose VCPUs have the lowest computational power are c3.large and m4.large; they are therefore considered our baseline regarding processing capacity. Spot and on-demand VM prices were obtained on September 10, 2018, considering the us-east-1 region and the us-east-1a availability zone. Table VI shows the characteristics of these VMs, along with the corresponding slowdown values of their VCPUs. Based on the latter, and considering, as mentioned above, that the task durations extracted from the Google traces correspond to executions on the slowest VMs (base time durations), the duration of each task on the other VMs was obtained as the product of the respective slowdown value and its base duration.

A. Experimental results in different hibernation scenarios

In order to evaluate the effectiveness of our scheduling solution in terms of makespan and monetary cost, we compared it with a strategy (On-demand) that uses only on-demand virtual machines, while, to evaluate the impact of hibernation, we compared it with a strategy that migrates tasks as soon as the VM where they are allocated hibernates (Immediate Migration), i.e., one that does not consider the possibility that the VM might resume. Furthermore, we consider two possible execution scenarios for our scheduling: (1) no spot VM hibernates (No Hibernation); and (2) a spot VM hibernates and either the tasks need to be migrated (Hibernation with Migration) or the VM resumes in time not to violate the deadline (Hibernation). In case of hibernation, the hibernation starts two hours after the job starts. For the Hibernation with Migration execution, the duration of hibernation is set to 1000 hours, thus forcing task migration; for Hibernation, the hibernation duration is just 3 hours and, therefore, task migration is not carried out. Aiming at a more accurate analysis of the results, only one spot VM can hibernate in the Hibernation and Hibernation with Migration executions, and only one backup migration takes place in the Hibernation with Migration execution. Finally, the experiments randomly select the spot VM that hibernates.

1) Hibernation without Migration: Figure 5 presents the monetary costs in the Hibernation scenario, i.e., when our scheduling does not migrate tasks because the hibernated spot VM resumes in time to meet the application's deadline. As we can observe in the figure, its monetary cost is similar to the one without hibernation (the Hibernation and No Hibernation bars, respectively). Such a result is expected since, according to the new pricing policy defined by AWS in December 2017, the user only pays for the time the spots are running; during hibernation, the user is charged only for storage, whose price on September 10, 2018 was $0.10 per GB per month. As the maximum hibernation time in the experiments is shorter than 30 hours, this cost is negligible. When our solution is compared with the one that migrates tasks as soon as hibernation occurs (Immediate Migration), we observe that the latter is more expensive than ours by 59.97%, 26.74%, 55.15%, and 40.51% for J207, J402, J819, and J595, respectively.
Such a difference in price can be explained since, in the Immediate Migration strategy, the user was charged for the two hours of execution of the spot VMs as well as for the on-demand VMs used for migration. On the other hand, it is worth mentioning that the Immediate Migration monetary cost is, for the four jobs, on average 57.31% lower than the On-demand one. This happens because part of the tasks were executed as primary tasks on spot VMs with high computational power and, therefore, fewer slots were needed to complete the execution on the on-demand VMs. Figure 6 shows that our solution, Hibernation, has a longer makespan than the Immediate Migration strategy. This occurs because the former spends an additional 3 hours in hibernation, while the latter migrates immediately and keeps running the job's tasks during these 3 hours. In contrast, since the VMs usually chosen by the Immediate Migration strategy are low-cost ones with poor performance, its makespan can be longer than the On-demand and No Hibernation ones, which allocate VMs with higher computational power. Moreover, when a virtual machine hibernates, the executing task is re-started from the beginning on another VM, so its execution time can be counted almost twice in the makespan in the worst case.

2) Hibernation with Migration: Table VII presents the number and types of VMs used before and after migration for each job, where the hibernated VM is indicated by (H). Note that we consider that only one VM hibernates in these tests, i.e., even if several instances of the same VM type are allocated, only one of them may hibernate. The monetary cost of the Hibernation with Migration strategy is equal to that of Immediate Migration, since the backup maps used by both are similar. These costs are higher (on average, 30.74% in our experiments) than the one required by the primary scheduling alone (No Hibernation), since the former use spot VMs within the first two hours of execution as well as on-demand VMs for the backup migration. On the other hand, the On-demand strategy costs are 136.00% higher than theirs. In terms of makespan, the Hibernation with Migration makespans are close to the deadlines defined in Table V. Such a behaviour is expected, since our approach waits until $start\_bkp$, the latest time at which hibernation can be tolerated without exceeding the deadline. Note that, in the case of the Immediate Migration strategy, the makespan is shorter than the Hibernation with Migration one; in our experiments, this difference was, on average, 74.26%. Note also that, although in some cases the tasks can migrate to VMs of equivalent processing power (see the case of J207), the makespan increases even with the Immediate Migration strategy. As pointed out in the previous section, this happens because the execution time of a task started on a VM that hibernates during its execution can be counted almost twice when it migrates, in the worst case. When comparing Hibernation with Hibernation with Migration, the duration of hibernation has an impact on both makespans due to the duration of the execution itself. However, in the case of Hibernation, where the hibernated spot VM resumes in time to respect the deadline, the monetary cost is lower than in Hibernation with Migration, as we can confirm in Figures 5 and 7. In a second set of experiments, hibernation traces were generated from the market price variation history of the VMs of Table VI. That history of price variation predates the changes in AWS pricing policies of December 2017, which stabilized VM prices such that variation peaks ceased to occur (see https://aws.amazon.com/blogs/compute/new-amazon-ec2-spot-pricing/).
As shown in Figure 9, under the previous policy prices could show significant variation peaks, at intervals lasting a few minutes or hours. The hibernation traces were generated considering a fixed threshold of $0.4, which corresponds to the average price over the first 24 hours of the history. Thus, hibernation starts in the periods in which the VM price is higher than this value; analogously, when the price drops below the threshold, we consider that the VM resumes execution. The generated traces have two hibernation points: (1) the c4.8xlarge VMs hibernate 4.21 hours after the start of execution, for 43.51 minutes; (2) the c3.4xlarge VMs hibernate 23.7 minutes after the start of execution, for 1.22 hours. The number of VMs affected by hibernation is not the same for all evaluated jobs: while in jobs J207 and J819 only 2 VMs hibernate, in job J402 there are 8 hibernations of different VMs. This variation is expected, since different job tasks are scheduled on VMs of different types. As can be observed in Figure 10, our solution, Hibernation, presents the lowest monetary cost, with an average difference of 167.43% relative to Immediate Migration and of 240.94% relative to On-demand. It is noteworthy that, for job J402, Immediate Migration has a cost 6.73% higher than the On-demand one: it used 8 on-demand VMs for migration, which raised the monetary cost, added to the costs of the spot VMs used until the beginning of their hibernation. On the other hand, for job J402, our approach presents a significantly lower cost than Immediate Migration (260.26%), since none of the hibernation periods of the corresponding spot VMs was long enough to trigger the migration of their tasks. Regarding the makespan, shown in Figure 11, our approach is 7.52% longer than On-demand and 24.92% shorter than Immediate Migration. These differences can be explained as follows. Since the duration of the VM hibernations is at most 1.22 hours, it is not necessary to start the task migration process in any of the evaluated jobs; the increase over On-demand is therefore due only to the hibernation of the VMs. On the other hand, in Immediate Migration, the scheduling of backup tasks first chooses the cheaper on-demand VMs, which usually have lower computational power. Although our approach increases the makespan compared with the On-demand one, its monetary costs are lower than those of the two other approaches. Thus, the results of the experiments with the hibernation traces confirm those of the previous experiments.
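The threshold rule used to derive hibernation intervals from a price series can be written down directly; the sketch below is only an illustration of that rule, with hypothetical prices sampled at 1-hour steps:

def hibernation_trace(prices, threshold=0.4):
    """Derive (start, duration) hibernation intervals from a price
    series: the VM hibernates while the spot price stays above the
    threshold and resumes when it drops back below it."""
    intervals, start = [], None
    for hour, price in enumerate(prices):
        if price > threshold and start is None:
            start = hour                      # hibernation begins
        elif price <= threshold and start is not None:
            intervals.append((start, hour - start))
            start = None                      # VM resumes
    if start is not None:                     # still hibernated at the end
        intervals.append((start, len(prices) - start))
    return intervals

# Hypothetical hourly price samples:
print(hibernation_trace([0.35, 0.38, 0.55, 0.62, 0.39, 0.41, 0.36]))
# -> [(2, 2), (5, 1)]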
V. CONCLUDING REMARKS AND FUTURE WORK

This paper proposed a static scheduling for bag-of-tasks applications with deadline constraints, using both hibernation-prone spot VMs (for cost sake) and on-demand VMs. Our scheduling aims at minimizing the monetary costs of bag-of-tasks applications, respecting the application's deadline and avoiding temporal failures. Although we evaluated the proposed strategy through simulation, the characteristics of the BoT applications and VMs, as well as the VM market price variation, were acquired from real scenarios. Our results confirmed the effectiveness of our scheduling and that it tolerates temporal failures. Short-term directions of our work comprise automating the computation of the minimum deadline, of $VM_{time\_limit}$, and of the maximum number of hibernations tolerated by each spot virtual machine, in accordance with the characteristics of the application and of the available virtual machines; the user will thus always have a feasible static scheduling for the expected scenario. In the longer term, we also intend to work on a dynamic version of the proposed scheduling that periodically checkpoints the tasks, so that, in the migration case, tasks can resume from their last checkpoints instead of being re-started from the beginning.
4,673
1810.10279
2896757367
Some works take into account the features of Amazon spot VM instances. In @cite_20 , the authors use hybrid instances, including on-demand instances for high-priority tasks and backup, and spot instances for normal computational tasks. The authors of @cite_25 propose switching to on-demand resources when no spot instance is available to ensure the desired performance. Using both on-demand and spot VM instances, SpotCheck @cite_8 provides the illusion of an IaaS platform that offers always-available on-demand VMs at a cost near that of spot VMs. Also aiming at the performance of on-demand VMs at a cost near that of the spot market, the authors in @cite_14 present SpotOn, a batch computing service that uses fault-tolerance mechanisms to mitigate the impact of spot revocations. To our knowledge, no work studies the impact of the new hibernation feature of spot instances on scheduling algorithms.
{ "abstract": [ "Cloud spot markets enable users to bid for compute resources, such that the cloud platform may revoke them if the market price rises too high. Due to their increased risk, revocable resources in the spot market are often significantly cheaper (by as much as 10×) than the equivalent non-revocable on-demand resources. One way to mitigate spot market risk is to use various fault-tolerance mechanisms, such as checkpointing or replication, to limit the work lost on revocation. However, the additional performance overhead and cost for a particular fault-tolerance mechanism is a complex function of both an application's resource usage and the magnitude and volatility of spot market prices. We present the design of a batch computing service for the spot market, called SpotOn, that automatically selects a spot market and fault-tolerance mechanism to mitigate the impact of spot revocations without requiring application modification. SpotOn's goal is to execute jobs with the performance of on-demand resources, but at a cost near that of the spot market. We implement and evaluate SpotOn in simulation and using a prototype on Amazon's EC2 that packages jobs in Linux Containers. Our simulation results using a job trace from a Google cluster indicate that SpotOn lowers costs by 91.9 compared to using on-demand resources with little impact on performance.", "Cloud computing provides an attractive computing paradigm in which computational resources are rented on-demand to users with zero capital and maintenance costs. Cloud providers offer different pricing options to meet computing requirements of a wide variety of applications. An attractive option for batch computing is spot-instances, which allows users to place bids for spare computing instances and rent them at a (often) substantially lower price compared to the fixed on-demand price. However, this raises three main challenges for users: how many instances to rent at any time? what type (on-demand, spot, or both)? and what bid value to use for spot instances? In particular, renting on-demand risks high costs while renting spot instances risks job interruption and delayed completion when the spot market price exceeds the bid. This paper introduces an online learning algorithm for resource allocation to address this fundamental tradeoff between computation cost and performance. Our algorithm dynamically adapts resource allocation by learning from its performance on prior job executions while incorporating history of spot prices and workload characteristics. We provide theoretical bounds on its performance and prove that the average regret of our approach (compared to the best policy in hindsight) vanishes to zero with time. Evaluation on traces from a large datacenter cluster shows that our algorithm outperforms greedy allocation heuristics and quickly converges to a small set of best performing policies.", "Testing and executing large-scale computational applications in public clouds is becoming prevalent due to cost saving, elasticity, and scalability. However, how to increase the reliability and reduce the cost to run large-scale applications in public clouds is still a big challenge. In this paper, we analyzed the pricing schemes of Amazon Elastic Compute Cloud (EC2) and found the disturbance effect that the price of the spot instances can be heavily affected due to the large number of spot instances required. 
We proposed a dynamic approach which schedules and runs large-scale computational applications on a dynamic pool of cloud computational instances. We use hybrid instances, including both on-demand instances for high priority tasks and backup, and spot instances for normal computational tasks so as to further reduce the cost without significantly increasing the completion time. Our proposed method takes the dynamic pricing of cloud instances into consideration, and it reduces the cost and tolerates the failures for running large-scale applications in public clouds. We conducted experimental tests and an agent based Scalable complex System modeling for Sustainable city (S3) application is used to evaluate the scalability, reliability and cost saving. The results show that our proposed method is robust and highly flexible for researchers and users to further reduce cost in real practice.", "Infrastructure-as-a-Service (IaaS) cloud platforms rent resources, in the form of virtual machines (VMs), under a variety of contract terms that offer different levels of risk and cost. For example, users may acquire VMs in the spot market that are often cheap but entail significant risk, since their price varies over time based on market supply and demand and they may terminate at any time if the price rises too high. Currently, users must manage all the risks associated with using spot servers. As a result, conventional wisdom holds that spot servers are only appropriate for delay-tolerant batch applications. In this paper, we propose a derivative cloud platform, called SpotCheck, that transparently manages the risks associated with using spot servers for users. SpotCheck provides the illusion of an IaaS platform that offers always-available VMs on demand for a cost near that of spot servers, and supports all types of applications, including interactive ones. SpotCheck's design combines the use of nested VMs with live bounded-time migration and novel server pool management policies to maximize availability, while balancing risk and cost. We implement SpotCheck on Amazon's EC2 and show that it i) provides nested VMs to users that are 99.9989 available, ii) achieves nearly 5x cost savings compared to using equivalent types of on-demand VMs, and iii) eliminates any risk of losing VM state." ], "cite_N": [ "@cite_14", "@cite_25", "@cite_20", "@cite_8" ], "mid": [ "2033790833", "1463932917", "1997826804", "2162331430" ] }
A Bag-of-Tasks Scheduler Tolerant to Temporal Failures in Clouds
Abstract-Cloud platforms have emerged as a prominent environment to execute high performance computing (HPC) applications providing on-demand resources as well as scalability. They usually offer different classes of Virtual Machines (VMs) which ensure different guarantees in terms of availability and volatility, provisioning the same resource through multiple pricing models. For instance, in Amazon EC2 cloud, the user pays per hour for on-demand VMs while spot VMs are unused instances available for lower price. Despite the monetary advantages, a spot VM can be terminated, stopped, or hibernated by EC2 at any moment. Using both hibernation-prone spot VMs (for cost sake) and on-demand VMs, we propose in this paper a static scheduling for HPC applications which are composed by independent tasks (bag-of-task) with deadline constraints. However, if a spot VM hibernates and it does not resume within a time which guarantees the application's deadline, a temporal failure takes place. Our scheduling, thus, aims at minimizing monetary costs of bagof-tasks applications in EC2 cloud, respecting its deadline and avoiding temporal failures. To this end, our algorithm statically creates two scheduling maps: (i) the first one contains, for each task, its starting time and on which VM (i.e., an available spot or on-demand VM with the current lowest price) the task should execute; (ii) the second one contains, for each task allocated on a VM spot in the first map, its starting time and on which on-demand VM it should be executed to meet the application deadline in order to avoid temporal failures. The latter will be used whenever the hibernation period of a spot VM exceeds a time limit. Performance results from simulation with task execution traces, configuration of Amazon EC2 VM classes, and VMs market history confirm the effectiveness of our scheduling and that it tolerates temporal failures. Index Terms-Clouds, Temporal failures, Scheduling I. INTRODUCTION High Performance Computing (HPC) applications are typically executed in dedicated data centers. However, in the past few years, cloud computing has emerged as an attractive option to run these applications due to several advantages that it brings when compared with a dedicated infrastructure. Clouds provide a significant reduction in operational costs, besides offering a rapid elastic provisioning of computing resources like virtual machines and storage. However, in cloud environments, besides the usual goal of minimizing the execution time of the HPC application, it is also important to minimize the monetary cost of using cloud resources, i.e., there exists a trade-off between performance and monetary cost. In this paper, we are interested in HPC bag-of-task (BoT) applications with time constraints (deadlines) within which they must finish. BoT applications are composed of independent tasks which can be executed in any order and in parallel. Although simple, the BoT approach is used by several HPC applications such as parameter sweep applications, chromosome mapping, Monte Carlo simulation, computer imaging applications [1], [2], [3], [4]. Furthermore, they may require deadline-bounds where the correctness on the computation also depends on the time the computation of all tasks ends. Infrastructure-as-a-Service (IaaS) existing cloud platforms (e.g., Amazon EC2, Microsoft Azure, Google Cloud, etc.) 
enable users to dynamically acquire resources, usually as virtual machines (VMs), according to their application requirements (CPU, memory, I/O, etc,) in a pay-as-you-use price model. They usually offer different classes of VMs which ensure different guarantees in terms of availability and volatility, provisioning the same resource through multiple pricing models. For instance, in Amazon EC2, there are basically three classes 1 : (i) reserved VM instances, where the user pays an upfront price, guaranteeing long-term availability; (ii) ondemand VM instances which are allocated for specific time periods and incur a fixed cost per unit time of use, ensuring availability of the instance during this period; (iii) spot VM instances which are an unused instances available for lower price than on-demand price. The availability of spot VMs instances fluctuates based on the spot market's current demand. The allocation of a spot instance involves defining the VM type and a maximum price for how much the user is willing to pay. However, if there are not enough instances to meet clients demands, the VM in question can be interrupted by the cloud provider (temporarily or definitively). Despite the risk of unavailability, the main advantage of spot VMs is that their cost is much lower than on-demand VMs since the user requests unused instances at steep discounts, reducing the costs significantly. With Amazon's more recent announcement, an interrupted spot can either terminate, stop, or hibernate. Hence, when requesting a spot instance the user specifies the required type as well as the action that Amazon EC2 should take in case the VM instance is interrupted. Whenever a spot instance is hibernated by EC2, its memory and context are saved to the root of EC2 Block Storage (EBS) volume and, during the VM's pause, the user is only charged for EBS storage. EC2 resumes the hibernated instance, reloading the saved memory and context, only when there are enough availability for that type of instance with a spot price which is lower than the user's maximum price. Contrarily to stopped or terminated instances whose user is warned two minutes before the interruption of them, hibernated instances are paused immediately after noticing the user. Our proposal in this work is to provide a static cloud scheduler for Bag-of-Tasks applications using, for cost sake, hibernate-prone spot instances as much as possible, respecting the application deadline constraints while also minimizing the monetary costs of bag-of-tasks applications. However, if a spot instance hibernates, it might happen that it will not resume within a time which guarantees the deadline constraints of the application. In this case, a temporal failure would take place, i.e., correct computation is performed but too late to be useful (inability to meet deadlines). Thus, in order to avoid temporal failure in case of spot instance hibernation, our scheduler statically computes the time interval that an hibernated instance can stay in this state without violating the application's deadline. If the instance does not resume till the end of this interval, our scheduler will move the execution of the current tasks of the spot instance as well as those not executed yet to on-demand instances, in order to guarantee the application's deadline. Note that even after migrating the remaining task execution to on-demand VMs, the scheduler continues to look forward to minimizing monetary costs. The rest of the paper is organized as follows. 
Section II discusses some related work. Section III describes our proposed static scheduling, including its algorithms. Evaluation results from simulations conducted with real traces are presented in section IV. Finally, Section V concludes the paper and presents some future directions. III. A STATIC SCHEDULER OF BAG-OF-TASKS APPLICATIONS IN CLOUDS Aiming at reducing monetary costs, our proposed scheduling uses hibernate-prone spot instances. However, due to the possibility of hibernation and also the need to meet the application's deadline, the scheduler might migrate tasks that run on spot instances to on-demand ones, whenever the duration of an instance hibernation would induce a temporal failure. We denote primary tasks those which are allocated on VMs (spot or on-demand) that guarantee application's deadline with minimum monetary cost and we denote backup tasks those which are allocated on on-demand VMs and were originally primary tasks allocated on spot VMs. Backup tasks are only executed in case the hibernation state remains for such a long period of time that it is impossible to meet the deadline of the application, avoiding, thus, temporal failures. Therefore, a task might have two versions (primary and backup) which are statically scheduled on two different cores with time exclusion. The scheduling outputs two allocation mappings: one with primary tasks and the other one with backup tasks. Concerning the primary mapping, the proposed strategy aims at minimizing the monetary costs, by adopting hibernateprone spot instances with the highest processing power. Regarding the backup mapping, our strategy aims at minimizing monetary costs, by using the minimum number of the cheapest on-demand VMs, without violating the application's deadline. We assume that each task of the BoT application is executed in one core, requiring some main memory and that a set of different types of VMs are usually offered by cloud providers with a varying number of virtual cores (VCPUs) and memory sizes. Therefore, a VM running on a multi-core machine can execute more than one task simultaneously (one VCPU per task) provided there is enough main memory to allocate them. We also consider that VMs are offered in two different markets, spot and on-demand, where, contrarily to the former, the latter can not hibernate. Note that our solution only allocates spot VMs of those types that support hibernation. Figure 1 shows an example where the hibernation does not require backup tasks execution. In this example, a spot instance starts hibernating in time p and finishes in y, before the time limit, start bkp, when the backups should be triggered. Then, the deadline D can be met without executing the backups. On the other hand, Figure 2 presents a case where it is necessary to execute the backup tasks in an on-demand virtual machine to meet the deadline, since the hibernation exceeded the time limit, start bkp. Let M be the set of virtual machines, B the set of tasks that compose a bag-of-task application, and T = {1, . . . , D} the set of feasible periods, where D is the deadline defined by the user. For each VM, M keeps its storage capacity and the number of cores with the corresponding computation power. Set B keeps, for each task, information about (i) its execution time on a machine with known computational power (base time duration) and (ii) the amount of main memory that the task needs. Let Queue vmj ⊂ B be the set with all tasks scheduled on vm j . 
It is worth mentioning that the execution time of a task is re-calculated as the product between the original execution time and the VM slowdown where it will be executed. A VM slowdown is defined as P B Pvm j , where P B is the processing capacity of the machine used to calculate the basis time, and P vmj is the processing capacity of the VM. Thus, the slowdown represents the processing capacity of a VM when compared with the machine used to compute the basis time duration. When a VM is allocated for a user, he/she pays for a full-time interval called slot. That time is usually one hour. Thus, if a VM is used for 61 minutes, for example, the user will be charged for two slots (120 minutes). Note that one slot can correspond to several periods. For example, if each period corresponds to one minute, a slot of one hour would correspond to 60 periods. It is, thus, in the user's best interest to maximize the use of a slot already allocated. Let start slot vmj and end slot vmj be the time when the first slot was allocated to vm j and the end time of the last allocated slot for this same VM respectively, such that start slot vmj < end slot vmj . Whenever the execution time of a task allocated to vm j exceeds the end slot vmj , the user has to pay for another full interval. Thus, if part of that interval is not used by any task, we have a waste of time. To compute that waste, we define waste vmj in Equation 1, that is the time interval inside the last contracted slot at which vm j remains idle after executing all tasks allocated to it. waste vmj = end slot vmj − end tmax(1) Such that end tmax = max ∀t l ∈Queue vm j (end t l ) and end t l is the end time of task t l . A. Primary Task Scheduling Algorithm 1 shows the primary scheduling heuristic which is a greedy algorithm that allocates the set of tasks t i ∈ B to a set of VMs (spot and on-demand VMs). Tables I and II present the used variables and functions respectively. The algorithm receives B, M , D, and V M time limit as input parameters. The V M time limit defines the maximum occupation period of a VM. For example, if D = 100(h) and V M time limit = 0.5, the scheduling of the tasks should be done so as not to exceed the period D * V M time limit = 50(h). Since the objective is to respect the application deadline (even in the presence of hibernation) while minimizing monetary costs, all the choices made by the heuristic are guided by the VMs' prices, and by the deadline D and V M time limit , defined by the user. Initially, tasks are ordered in descending order by the memory size they require (line 1). Then, for each task, the algorithm applies a best fit heuristic that tries to include it in an already allocated slot of a virtual machine that presents the highest waste of time (lines 7 to 13), since it has enough memory and ensures that the task insertion will respect D * V M time limit . If such a VM does not exist, the heuristic tries to allocate new slots in an already allocated VM with enough memory to execute the task, but now with the smallest waste (lines 16 to 23). Similarly to the previous case, the slot allocation must not violate D * V M time limit (line 19). Allocating slots in an already allocated VM reduces boot time overhead in comparison of allocating a new VM. However, if such an allocation is not possible, the algorithm must allocate a new VM. 
A. Primary Task Scheduling

Algorithm 1 shows the primary scheduling heuristic, a greedy algorithm that allocates the set of tasks $t_i \in B$ to a set of VMs (spot and on-demand). Tables I and II present the variables and functions used. The algorithm receives B, M, D, and $VM_{time\ limit}$ as input parameters. $VM_{time\ limit}$ defines the maximum occupation period of a VM: for example, if D = 100 (h) and $VM_{time\ limit}$ = 0.5, tasks must be scheduled so as not to exceed the period $D \times VM_{time\ limit}$ = 50 (h). Since the objective is to respect the application's deadline (even in the presence of hibernation) while minimizing monetary costs, all choices made by the heuristic are guided by the VMs' prices and by D and $VM_{time\ limit}$, both defined by the user.

Initially, tasks are sorted in descending order of required memory (line 1). Then, for each task, the algorithm applies a best-fit heuristic that tries to place it in an already allocated slot of the virtual machine with the highest waste of time (lines 7 to 13), provided the VM has enough memory and the insertion respects $D \times VM_{time\ limit}$. If no such VM exists, the heuristic tries to allocate new slots in an already allocated VM with enough memory to execute the task, now preferring the smallest waste (lines 16 to 23); as in the previous case, the slot allocation must not violate $D \times VM_{time\ limit}$ (line 19). Allocating slots in an already allocated VM reduces boot-time overhead compared with allocating a new VM. If such an allocation is not possible, however, the algorithm must allocate a new VM. In this case, the heuristic defines the best type of VM in terms of execution time (line 25) and then chooses the market where this VM shall be acquired, on-demand or spot, considering the offered prices (lines 26 to 30). Finally, it updates the primary scheduling map (line 35).

Figure 3 shows an example of scheduling nine tasks on a virtual machine with two cores. In the example, there are two gaps (one per core), which occur due to lack of memory to allocate a task within the current slot. The waste of time and the deadline D are also shown.

B. Backup Task Scheduling

Let $Succ^{vm_j}_{t_k} \subseteq Queue_{vm_j}$ be the set containing task $t_k$ and all its successors, i.e., all tasks allocated to the same core as $t_k$ that execute after the end of $t_k$. Let $Parallel^{vm_j}_{t_i} \subseteq Queue_{vm_j}$ be the set of all tasks that execute in parallel with $t_i$ on $vm_j$. These sets are used to build, for each task $t_i \in Queue_{vm_j}$, the recovery group $Rec\_Group^{vm_j}_{t_i}$ of tasks whose backups must be triggered if $vm_j$ hibernates for too long.

The proposed backup scheduling algorithm is presented in Algorithm 2; Table III shows the variables and Table IV describes the procedures and functions used. As can be seen in line 4, $Rec\_Group^{vm_j}_{t_i}$ is created for each task $t_i \in Queue_{vm_j}$. The algorithm employs a scheduling strategy similar to that of Algorithm 1, in which tasks are scheduled on different VMs using a best-fit heuristic. Unlike Algorithm 1, however, the VM selection in Algorithm 2 prioritizes the on-demand VM with the cheapest monetary cost, resulting from the product of its price and the execution time of a backup task on it. Note that the backup scheduling has to ensure that, if a migration event occurs, the number of periods required to perform the backup tasks respects the deadline. Thus, the VMs chosen by the function get_best_VM (lines 8 and 10) guarantee that $end_{t_i} + runtime(Rec\_Group^{vm_j}_{t_i})$ does not exceed the deadline D.
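The feasibility rule that backup scheduling enforces can be sketched directly. The helper names below (backup_feasible, latest_migration_start) are ours, and the code only restates the condition above: a task's end time plus the time to rerun its recovery group on the backup VM must not pass the deadline.

```python
def backup_feasible(end_t: int, rec_group_runtime: int, deadline: int) -> bool:
    """If vm_j hibernates right after t_i ends, the recovery group of t_i must
    still finish on its on-demand backup VM before the deadline D."""
    return end_t + rec_group_runtime <= deadline

def latest_migration_start(rec_group_runtime: int, deadline: int) -> int:
    """start_bkp: the latest period at which the backups can still be triggered."""
    return deadline - rec_group_runtime

# Example: a task ends at period 30; its recovery group needs 50 periods on the
# chosen on-demand VM; the deadline is 100 periods.
print(backup_feasible(30, 50, 100))      # True: 30 + 50 <= 100
print(latest_migration_start(50, 100))   # backups must start by period 50
```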
IV. EXPERIMENTAL RESULTS

This section presents execution times and monetary costs of simulations carried out with real BoT applications, using the configuration of Amazon EC2 virtual machines and a real VM market history. According to the information on Amazon Web Services (AWS)², only the VMs of the families C3, C4, C5, M4, M5, R3, and R4 with memory below 100 GB, running in the spot market, are able to hibernate when an interruption occurs. Therefore, for the purposes of this work, the fourth-generation general-purpose VMs (M4) and the third- and fourth-generation compute-optimized VMs (C3 and C4) were used. By choosing the third- and fourth-generation VMs, it was possible to compute the slowdown using the data from [22].

The workload used in the evaluation was obtained from [23], a database that contains the execution traces of jobs submitted to Google's servers throughout the month of March 2011. Based on these traces, we defined: (i) the number of tasks of a job; (ii) the execution time of each task of the job; and (iii) the average memory footprint. For the experiments, four BoT-type jobs were chosen from the first 10 days of the traces. Table V summarizes the main characteristics of these jobs, followed by the corresponding deadlines considering the virtual machines used in our tests. We adopted the shortest deadlines that enable the generation of valid primary and backup schedulings for each job. These values were computed iteratively, using a $VM_{time\ limit}$ value of 0.5, starting with D = 1 (h) in increments of 1 hour, and stopping at the first valid scheduling produced by Algorithms 1 and 2. The execution times were obtained from Google machines used in 2011.

As the hardware information and computational capacity of these machines are not provided, we assumed that the trace times were obtained on the VM with the lowest computational power whose memory capacity was sufficient to meet the requirements of the tasks. As we can observe in Table VI, the VMs whose VCPUs have the lowest computational power are c3.large and m4.large; they are therefore considered our baseline regarding processing capacity. Spot and on-demand VM prices were obtained on September 10, 2018, considering the us-east-1 and us-east-1a regions. Table VI shows the characteristics of these VMs, along with the corresponding slowdown values of their VCPUs. Based on the latter, and considering, as mentioned above, that the durations of the tasks extracted from the Google traces were obtained by executing them on the slowest VMs (base time duration), the duration of each task on the other VMs was obtained as the product of the respective slowdown value and its base duration.

A. Experimental results in different hibernation scenarios

To evaluate the effectiveness of our scheduling solution in terms of makespan and monetary cost, we compared it with a strategy (On-demand) that uses only on-demand virtual machines; to evaluate the impact of hibernation, we compared it with a strategy that migrates tasks as soon as the VM where they are allocated hibernates (Immediate Migration), i.e., a strategy that does not consider the possibility that the VM might resume. Furthermore, we consider two possible execution scenarios for our scheduling: (1) no spot VM hibernates (No Hibernation); and (2) a spot VM hibernates and either the tasks need to be migrated (Hibernation with Migration) or the VM resumes in time not to violate the deadline (Hibernation). In the hibernation scenarios, hibernation starts two hours after the job starts. For the Hibernation with Migration execution, the duration of hibernation is set to 1000 hours, forcing task migration; for Hibernation, the hibernation lasts just 3 hours and, therefore, no task migration is carried out. Aiming at a more accurate analysis of the results, only one spot VM can hibernate in the Hibernation and Hibernation with Migration executions, and only one backup migration takes place in the Hibernation with Migration execution. Finally, the experiments randomly select the spot VM that hibernates.

1) Hibernation without Migration: Figure 5 presents the monetary costs in the Hibernation scenario, i.e., our scheduling does not migrate tasks because the hibernated spot VM instance resumes in time to meet the application's deadline. As we can observe in the figure, its monetary cost is similar to the one without hibernation (the Hibernation and No Hibernation bars, respectively). Such a result is expected since, according to the new pricing policy defined by AWS in December 2017, the user only pays for the time the spot VMs are running; during hibernation, the user is charged only for storage, whose price on September 10, 2018 was $0.10 per GB per month. As the maximum hibernation time in the experiments is shorter than 30 hours, this cost is negligible. When our solution is compared with the one that migrates tasks as soon as hibernation occurs (Immediate Migration), we observe that the latter is more expensive than ours by 59.97%, 26.74%, 55.15%, and 40.51% for J207, J402, J819, and J595, respectively.
Such a difference in price can be explained by the fact that, in Immediate Migration, the user is charged both for the two hours of execution of the spot VMs and for the on-demand VMs used for migration. On the other hand, it is worth mentioning that the Immediate Migration monetary cost is, for the four jobs, on average 57.31% lower than the On-demand one. This happens because part of the tasks executed as primary ones on spot VMs with high computational power, so fewer slots were needed to complete the execution on the on-demand VMs.

Figure 6 shows that our solution, Hibernation, has a longer makespan than the Immediate Migration strategy. This occurs because the former includes an additional 3 hours of hibernation, while the latter migrates immediately and keeps running the job's tasks during those 3 hours. In contrast, since the VMs usually chosen by the Immediate Migration strategy are low-cost, poorly performing ones, its makespan can be longer than the On-demand and No Hibernation ones, which allocate VMs with higher computational power. Moreover, when a virtual machine hibernates, the executing task is restarted from the beginning on another VM, so its execution time can be counted almost twice in the makespan in the worst case.

2) Hibernation with Migration: Table VII presents the number of VMs and the corresponding types used before and after migration for each job, where the hibernated VM is indicated by (H). Note that we consider that only one VM hibernates in these tests, i.e., even if several instances of the same VM type are allocated, only one of them may hibernate. The monetary cost of the Hibernation with Migration strategy is equal to that of Immediate Migration, since the backup maps used by both are similar. These costs are higher (on average, 30.74% in our experiments) than the one required by the primary scheduling alone (No Hibernation), since the former use spot VMs within the first two hours of execution as well as on-demand VMs for backup migration. On the other hand, they are 136.00% lower than the On-demand strategy's costs. In terms of makespan, the Hibernation with Migration makespans are close to the deadlines defined in Table V. Such a behavior is expected, since our approach waits until start_bkp, the latest time at which hibernation can be tolerated without exceeding the deadline. Note that the makespan of the Immediate Migration strategy is shorter than the Hibernation with Migration one; in our experiments, this difference was, on average, 74.26%. Note also that, although in some cases the tasks can migrate to VMs of equivalent processing power (see the case of J207), the makespan increases even under the Immediate Migration strategy. As pointed out in the previous section, this happens because the execution time of a task started on a VM that hibernates during its execution can, in the worst case, be counted almost twice when it migrates. When comparing Hibernation with Hibernation with Migration, the duration of hibernation impacts both makespans on top of the duration of the execution itself. However, in the case of Hibernation, where the hibernated spot VM resumes in time to respect the deadline, the monetary cost is lower than Hibernation with Migration's, as we can confirm in Figures 5 and 7.

In a second set of experiments, hibernation traces were generated from the price variation history of the VMs in Table VI. That history of price variation predates the changes in AWS pricing policies of December 2017, which stabilized VM prices so that peaks of variation ceased to occur³.
As shown in Figure 9, under the previous policy, prices could have significant variation peaks, with intervals lasting a few minutes or hours. The hibernation traces were generated considering a fixed threshold of $0.40, which represents the average price over the first 24 hours of the history: hibernation starts whenever the VM price rises above this value and, analogously, the VM is considered to resume execution when the price drops below it. The generated traces have two hibernation points: (1) c4.8xlarge VMs hibernate 4.21 hours after the start of execution, for 43.51 minutes; (2) c3.4xlarge VMs hibernate 23.7 minutes after the start of execution, for 1.22 hours. The number of VMs affected by hibernation is not the same for all evaluated jobs: while in jobs J207 and J819 only 2 VMs hibernate, in job J402 there are 8 hibernations of different VMs. This variation is expected, since different job tasks are scheduled to VMs of different types.

As can be observed in Figure 10, our solution, Hibernation, presents the lowest monetary cost, with an average difference of 167.43% relative to Immediate Migration's cost and 240.94% relative to On-demand's. It is noteworthy that, for job J402, Immediate Migration has a cost 6.73% higher than On-demand's: it used 8 on-demand VMs for migration, which raised the monetary cost, on top of the cost of the spot VMs used until the beginning of their hibernation. On the other hand, for job J402, our approach presents a significantly lower cost than Immediate Migration's (260.26%), since none of the hibernations of the corresponding spot VMs lasted long enough to trigger the migration of their tasks.

Regarding makespan, shown in Figure 11, our approach is 7.52% longer than On-demand's and 24.92% shorter than Immediate Migration's. These differences can be explained as follows: since the duration of VM hibernation is at most 1.22 hours, it is not necessary to start the task migration process in any of the evaluated jobs, so the increase is due only to the hibernation of the VMs. In Immediate Migration, on the other hand, the scheduling of backup tasks first chooses cheaper on-demand VMs, usually with lower computational power. Although our approach increases the makespan compared with On-demand's, its monetary costs are lower than those of both other approaches. Thus, the results of the experiments with the hibernation traces confirm those of the previous experiments.

V. CONCLUDING REMARKS AND FUTURE WORK

This paper proposed a static scheduling for bag-of-tasks applications with deadline constraints, using both hibernation-prone spot VMs (for the sake of cost) and on-demand VMs. Our scheduling aims at minimizing the monetary cost of bag-of-tasks applications while respecting the application's deadline and avoiding temporal failures. Although we evaluated the proposed strategy theoretically, the characteristics of the BoT applications and VMs, as well as the VM market price variation, were acquired from real scenarios. Our results confirmed the effectiveness of our scheduling and that it tolerates temporal failures.

3. https://aws.amazon.com/blogs/compute/new-amazon-ec2-spot-pricing/
Short-term directions of our work comprise automating the computation of both the minimum deadline and the $VM_{time\ limit}$, in accordance with the characteristics of the application and the available virtual machines, as well as the maximum number of hibernations tolerated by each spot virtual machine. Thus, the user will always have a feasible static scheduling for the expected scenario. In the longer term, we also intend to work on a dynamic version of the proposed scheduling that periodically takes checkpoints of the tasks so that, in the case of migration, tasks can resume from their last checkpoints instead of being restarted from the beginning.
4,673
1810.09706
2896121525
We address the problem of decomposing a single image into reflectance and shading. The difficulty comes from the fact that the components of the image, namely the surface albedo, the direct illumination, and the ambient illumination, are heavily coupled in the observed image. We propose to infer the shading by ordering pixels by their relative brightness, without knowing the absolute values of the image components beforehand. The pairwise shading orders are estimated in two ways: brightness order and low-order fittings of the local shading field. The brightness order is a non-local measure, which can be applied to any pair of pixels, including those whose reflectance and shading are both different. The low-order fittings are used for pixel pairs within local regions of smooth shading. Together, they capture both the global order structure and the local variations of the shading. We propose a Consistency-aware Selective Fusion (CSF) to integrate the pairwise orders into a globally consistent order. The iterative selection process resolves the conflicts between pairwise orders obtained by different estimation methods. Inconsistent or unreliable pairwise orders are automatically excluded from the fusion to avoid polluting the global order. Experiments on the MIT Intrinsic Images dataset show that the proposed model is effective at recovering the shading, including deep shadows. Our model also works well on natural images from the IIW dataset, the UIUC Shadow dataset, and the NYU-Depth dataset, where the colors of direct lights and ambient lights are quite different.
Edge-based methods rely on classification of image gradients @cite_44 @cite_11 @cite_15 . The problem is that, during the integration of gradients, a single misclassified edge will result in errors over a wide area of the recovered reflectance @cite_40 . Our shading orders capture much more information than gradients, since the measurements are no longer limited to adjacent pixels. The non-local shading orders reduce the adverse influence of misclassified edges. Further, the long-range relations define the large-scale structure directly, which avoids the accumulation of error when integrating local measurements. Some Markov random field models @cite_50 @cite_9 @cite_26 and dense conditional random field models @cite_38 also consider relations between distant pixels. However, their non-local smoothness terms are applicable only to pixels with the same reflectance or the same shading, whereas our orders also handle pixels whose reflectance and shading are both different. Imposing shading smoothness on such pixels (as in @cite_38 ) may cause large errors.
{ "abstract": [ "Intrinsic image decomposition separates an image into a reflectance layer and a shading layer. Automatic intrinsic image decomposition remains a significant challenge, particularly for real-world scenes. Advances on this longstanding problem have been spurred by public datasets of ground truth data, such as the MIT Intrinsic Images dataset. However, the difficulty of acquiring ground truth data has meant that such datasets cover a small range of materials and objects. In contrast, real-world scenes contain a rich range of shapes and materials, lit by complex illumination. In this paper we introduce Intrinsic Images in the Wild, a large-scale, public dataset for evaluating intrinsic image decompositions of indoor scenes. We create this benchmark through millions of crowdsourced annotations of relative comparisons of material properties at pairs of points in each scene. Crowdsourcing enables a scalable approach to acquiring a large database, and uses the ability of humans to judge material comparisons, despite variations in illumination. Given our database, we develop a dense CRF-based intrinsic image algorithm for images in the wild that outperforms a range of state-of-the-art intrinsic image algorithms. Intrinsic image decomposition remains a challenging problem; we release our code and database publicly to support future research on this problem, available online at http: intrinsic.cs.cornell.edu .", "We present a model for intrinsic decomposition of RGB-D images. Our approach analyzes a single RGB-D image and estimates albedo and shading fields that explain the input. To disambiguate the problem, our model estimates a number of components that jointly account for the reconstructed shading. By decomposing the shading field, we can build in assumptions about image formation that help distinguish reflectance variation from shading. These assumptions are expressed as simple nonlocal regularizers. We evaluate the model on real-world images and on a challenging synthetic dataset. The experimental results demonstrate that the presented approach outperforms prior models for intrinsic decomposition of RGB-D images.", "", "Sensations of color show a strong correlation with reflectance, even though the amount of visible light reaching the eye depends on the product of reflectance and illumination. The visual system must achieve this remarkable result by a scheme that does not measure flux. Such a scheme is described as the basis of retinex theory. This theory assumes that there are three independent cone systems, each starting with a set of receptors peaking, respectively, in the long-, middle-, and short-wavelength regions of the visible spectrum. Each system forms a separate image of the world in terms of lightness that shows a strong correlation with reflectance within its particular band of wavelengths. These images are not mixed, but rather are compared to generate color sensations. The problem then becomes how the lightness of areas in these separate images can be independent of flux. This article describes the mathematics of a lightness scheme that generates lightness numbers, the biologic correlate of reflectance, independent of the flux from objects", "", "We propose a method for intrinsic image decomposition based on retinex theory and texture analysis. 
While most previous methods approach this problem by analyzing local gradient properties, our technique additionally identifies distant pixels with the same reflectance through texture analysis, and uses these nonlocal reflectance constraints to significantly reduce ambiguity in decomposition. We formulate the decomposition problem as the minimization of a quadratic function which incorporates both the retinex constraint and our nonlocal texture constraint. This optimization can be solved in closed form with the standard conjugate gradient algorithm. Extensive experimentation with comparisons to previous techniques validate our method in terms of both decomposition accuracy and runtime efficiency.", "We present a method for decomposing an image into its intrinsic reflectance and shading components. Different from previous work, our method examines texture information to obtain constraints on reflectance among pixels that may be distant from one another in the image. We observe that distinct points with the same intensity-normalized texture configuration generally have the same reflectance value. The separation of shading and reflectance components should thus be performed in a manner that guarantees these non-local constraints. We formulate intrinsic image decomposition by adding these non-local texture constraints to the local derivative analysis employed in conventional techniques. Our results show a significant improvement in performance, with better recovery of global reflectance and shading structure than by previous methods.", "Images can be represented as the composition of multiple intrinsic component images, such as shading, albedo, and noise images. In this paper, we present a method for estimating intrinsic component images from a single image, which we apply to the problems of estimating shading and albedo images and image denoising. Our method is based on learning estimators that predict filtered versions of the desired image. Unlike previous approaches, our method does not require unnatural discretizations of the problem. We also demonstrate how to learn a weighting function that properly weights the local estimates when constructing the estimated image. For shading estimation, we introduce a new training set of real-world images. The accuracy of our method is measured both qualitatively and quantitatively, showing better performance on the shading albedo separation problem than previous approaches. The performance on denoising is competitive with the current state of the art." ], "cite_N": [ "@cite_38", "@cite_26", "@cite_9", "@cite_44", "@cite_40", "@cite_50", "@cite_15", "@cite_11" ], "mid": [ "2076491823", "2101856619", "", "2164847484", "", "2087257250", "2104166077", "2154423567" ] }
Consistency-aware Shading Orders Selective Fusion for Intrinsic Image Decomposition
1 INTRODUCTION

An image is the result of several factors, including the material reflectance, the surface's shape, the positions and colors of the illuminants, and the camera sensor responses. Barrow and Tenenbaum [1] proposed to decompose an image into intrinsic images, each of which captures a distinct aspect of the scene. The most common outputs are the shading and the reflectance. The shading captures the strength of the incident illumination at each pixel, while the reflectance shows the surface albedo. The shading is widely used to reconstruct the shapes of surfaces [2]. The albedo is invariant to illumination and geometry, so it is a robust feature for object classification and image segmentation.

In this paper we aim to recover the shading and the reflectance from a single image. This is an underconstrained problem: the absolute values of the unknown variables cannot be measured directly, since they are highly coupled in the observed image. Instead, we measure the relative strength of shading across pixels to recover its essential structure, and determine the absolute values later from boundary conditions. We regard the shading as a global ranking of the pixels in order from dark to bright. The boundary conditions are simply that the start points are fully shadowed pixels, while the end points are fully lit ones. The global shading is inferred from pairwise shading orders, which are signed differences between the shading of pixels. The flow chart is shown in Fig. 1.

We estimate the shading orders in the U V B color space, which is spanned by a 2D shadow-free plane [3] and a brightness dimension. This color space has two major properties:
• Pixels with the same reflectance cluster together on the shadow-free plane.
• The brightness of the image is the sum of the shading brightness and the reflectance brightness.
Based on these properties, we can use clustering-based methods to capture the global order structure of the shading. For pixels with the same reflectance, the shading orders can be obtained directly from the difference of image brightness. For pixels with different reflectance, the shading orders can be calculated in a similar way, but the bias from the difference of reflectance brightness should be compensated. We choose the optimal biases between different clusters of reflectance, namely those that make the shading constant across reflectance boundaries, excluding shading edges. The cluster-wise biases make it possible to handle pixel pairs whose reflectance and shading are both different.

We also model the local shading by low-order fittings to predict the shading orders between nearby pixels. Different models can capture the geometric structure of different types of surfaces; for example, a linear model can describe the shading of a smooth surface. The estimation methods above are complementary. The clustering-based methods can be applied to any pair of pixels, in particular distantly located ones, but their accuracy depends on the quality of the clustering. In contrast, the low-order fittings do not rely on clustering at all, but they capture only the local structure, and their fitting errors are large for irregular surfaces.

The pairwise shading orders are combined into a global shading via Consistency-aware Selective Fusion (CSF).

Fig. 1: The flow chart of our method. First, the image is transformed into the U V B color space.
Based on the brightness and on clustering results over chromaticity, different methods m are used to estimate the shading order O(p, q, m) between each pair of pixels p and q. We also evaluate the reliability C(p, q, m) of the estimates based on image features. Then we use CSF to infer the global shading. CSF repeats two operations: Local Selection, i.e., selecting the estimation methods and the weights for each pair of pixels under the guidance of consistency between the pairwise orders and the global shading; and Angular Embedding (AE), which infers the globally consistent orders from the pairwise estimates. At last, we transform the global shading back into the RGB space.

The major challenge is avoiding inconsistency between estimates from different methods. CSF identifies a sparse set of reliable and consistent pairwise shading orders and fuses them within a unified optimization framework. For each pair of pixels, CSF selects the most reliable estimate exclusively instead of taking a weighted summation of different estimates [4][5][6]. This strategy prevents unreliable estimates from polluting the results. We evaluate the reliability of pairwise orders using not only the image features but also their consistency with the global order. Therefore, estimates that are incompatible with the majority will be suppressed, even when their preconditions happen to be satisfied by the image features. Forcing sparsity of the pairwise connections further reduces unreliable measurements. The global order is obtained from Angular Embedding (AE) [7], which embeds the pixels onto the unit circle in the complex plane. AE uses a complex matrix to encode the pairwise orders and their reliability simultaneously. Moreover, AE applies spectral decomposition to get a near-global optimal solution that best matches the reliable pairwise orders. After locating the darkest points on the unit circle, the absolute values of shading can be determined.

2 IMAGE FORMATION

An image with only body reflection can be modeled as [3]

$$I^i(p) = R_b^i(p)\,\big(\gamma(p)\,L_d^i + L_a^i\big), \qquad (1)$$

where the superscript $i$ indexes the RGB channels and $p$ indexes the pixel. The body reflectance $R_b$ denotes the diffuse reflection under white illumination. The three-dimensional vectors $L_d$ and $L_a$ are the direct illuminant and the ambient illuminant, respectively. $\gamma(p) \in [0, 1]$ is the direct shading, i.e., the proportion of direct illumination reaching the surface. BIDR assumes that the direct and ambient illuminants are constant across the materials [3]. When there are multiple direct illuminants with the same color, their effects can be added.

Inspired by the shadow removal problem [45], we define the reflectance to be the image lit by the full direct illuminant together with the ambient illuminant:

$$R^i(p) = R_b^i(p)\,\big(L_d^i + L_a^i\big). \qquad (2)$$

Accordingly, the shading is defined to be

$$S^i(p) = \frac{I^i(p)}{R^i(p)} = \frac{\gamma(p)\,L_d^i + L_a^i}{L_d^i + L_a^i}. \qquad (3)$$

For a fully lit area (i.e., $\gamma = 1$) the shading reaches its maximum, while for a fully shadowed area (i.e., $\gamma(p) = 0$) the shading is $S(p) = L_a/(L_d + L_a)$. In natural scenes, the direct lights are always much stronger than the ambient lights, so the shading of fully shadowed areas should be a small positive value. The color of the shading in (3) does not have a definite physical meaning, so we show the shading in grayscale for all figures in this paper, following [24] and [12]. Readers interested in the color of the shading are referred to the supplementary material for several examples.
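As a quick numeric check of equations (1) to (3), the sketch below synthesizes one pixel under the BIDR model and recovers its shading; the illuminant and reflectance values are made-up assumptions:

```python
import numpy as np

# Assumed (illustrative) illuminants and body reflectance for one pixel.
L_d = np.array([0.9, 0.8, 0.7])   # direct illuminant (RGB)
L_a = np.array([0.1, 0.1, 0.2])   # ambient illuminant (RGB)
R_b = np.array([0.6, 0.4, 0.3])   # body reflectance of the pixel

gamma = 0.3                        # fraction of direct light reaching the pixel

I = R_b * (gamma * L_d + L_a)      # Eq. (1): observed image
R = R_b * (L_d + L_a)              # Eq. (2): reflectance = fully lit image
S = I / R                          # Eq. (3): shading

# In full shadow (gamma = 0) the shading reduces to L_a / (L_d + L_a),
# a small positive value when the direct light dominates.
print(S, L_a / (L_d + L_a))
```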
3 SHADING ORDERS FROM BRIGHTNESS

We infer shading orders in the U V B color space. We will show that the image brightness has a linear relation to the log of the shading; therefore, pairwise shading orders can be estimated either by brightness orders or by low-order fittings of the local shading.

3.1 The U V B Color Space

The BIDR model delivers a 2D shadow-free plane U V [3]. The normal n of the U V plane points from the shadowed pixels to the lit ones sharing the same body reflectance $R_b$ (see Fig. 2b for an example). We call the normal n the brightening direction. Formally, the brightening direction is defined by

$$\mathbf{n} = \frac{1}{K}\left(\log I(p)\big|_{\gamma(p)=1} - \log I(q)\big|_{\gamma(q)=0}\right) = \frac{1}{K}\,\log\!\left(\frac{L_d}{L_a} + 1\right), \qquad (4)$$

where the pixels p and q satisfy $R_b(p) = R_b(q)$, and K is a normalization factor. From (4) we can see that the brightening direction depends only on the ratio of the illuminants, so all pixels share the same brightening direction (Fig. 2b). If the ratio of illuminants is unknown, we can search for the most probable brightening direction, namely the one that minimizes the entropy of the pixels on the U V plane [3][46]: since pixels with similar reflectance $R_b$ stay close together on the U V plane (Fig. 2c), the entropy of their distribution is minimized.

Let u and v be any pair of basis vectors on the U V plane. Then we have a rotation matrix $H = [\mathbf{u}, \mathbf{v}, \mathbf{n}]$ that transforms the log RGB space into a new color space U V B:

$$[I_u(p), I_v(p), I_b(p)] = \log I(p)\, H. \qquad (5)$$

The dimension $I_b$ captures the intensity of the image, and we call it the brightness. According to (3) and (5), the brightness of the image can be factorized as

$$I_b(p) = \log S(p)\cdot\mathbf{n} + \log R(p)\cdot\mathbf{n} = S_b(p) + R_b(p). \qquad (6)$$

Here we used the fact that $\log I(p) = \log R(p) + \log S(p)$. The shading brightness $S_b(p) = \log S(p)\cdot\mathbf{n}$ is a linear function of $\log S$. The reflectance brightness $R_b(p) = \log R(p)\cdot\mathbf{n}$ can be regarded as a bias determined by the body reflectance $R_b$. This linear relationship is the basis for estimating the shading orders in Section 3.2.

According to (5), the shading in the U V B space should be $[S_u(p), S_v(p), S_b(p)] = \log S(p)\, H$. Note that $S_u$ and $S_v$ are nearly zero, since the U V plane is shadow-free [3]. The only unknown dimension is the shading brightness $S_b$, which we infer from pairwise shading orders in Section 5. Once we obtain $S_b$, the shading in RGB space can be recovered by

$$S(p) = \exp\big([S_u(p), S_v(p), S_b(p)]\, H^{-1}\big), \qquad (7)$$

where exp denotes the element-wise exponential. Note that the rotation matrix H is always invertible. The reflectance can then be obtained from $R(p) = I(p)/S(p)$.

3.2 Measuring Pairwise Shading Orders

The shading order between pixels p and q is defined as the signed difference of their shading brightnesses, i.e., $O(p, q) = S_b(p) - S_b(q)$. We propose four methods $\mathcal{M} = \{BO, BOB, FS, SS\}$ to estimate the shading orders. These methods are illustrated in Fig. 3.

Brightness Order (BO). According to (6), if two pixels have the same reflectance brightness $R_b$, or equivalently the same body reflectance $R_b$, their shading order equals their difference of brightnesses:

$$O(p, q, BO) = I_b(p) - I_b(q) \quad \text{if } R_b(p) = R_b(q). \qquad (8)$$

Brightness Order minus Bias (BOB). For pixels with different body reflectance, the bias of reflectance brightness $\Delta R_b$ should be compensated:

$$O(p, r, BOB) = I_b(p) - I_b(r) - \Delta R_b(p, r) \quad \text{if } R_b(p) \neq R_b(r), \qquad (9)$$

where $\Delta R_b(p, r) = R_b(p) - R_b(r)$ is the bias. The process of calculating the bias is described in Section 3.3.
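A compact sketch of the U V B transform (5) and the BO estimate (8) follows. The brightening direction here is an assumed unit vector rather than one estimated via (4) or entropy minimization, and the basis construction is one arbitrary choice of u and v:

```python
import numpy as np

def uvb_transform(rgb: np.ndarray, n: np.ndarray) -> np.ndarray:
    """Rotate log-RGB so the last axis aligns with the brightening direction n.
    rgb: (..., 3) image with positive values; returns (..., 3) = (I_u, I_v, I_b)."""
    n = n / np.linalg.norm(n)
    # Build an orthonormal basis [u, v, n]; u and v span the shadow-free UV plane.
    u = np.cross(n, [0.0, 0.0, 1.0]); u /= np.linalg.norm(u)
    v = np.cross(n, u)
    H = np.stack([u, v, n], axis=1)          # rotation matrix H = [u, v, n]
    return np.log(rgb) @ H                   # Eq. (5)

# Assumed brightening direction (in practice: Eq. (4) or entropy minimization).
n = np.array([1.0, 1.0, 1.0])
img = np.random.rand(4, 4, 3) + 0.1
uvb = uvb_transform(img, n)
I_b = uvb[..., 2]

# Eq. (8): for two pixels with the same body reflectance, the shading order
# is simply their brightness difference.
O_pq = I_b[0, 0] - I_b[1, 1]
print(O_pq)
```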
BO and BOB together can estimate the shading order between any two pixels. For nearby pixels, we can also fit their shading brightness by low-order functions. This is based on the assumption of local smoothness of shading, which is valid for most parts of natural images.

Fig. 3: Calculating shading orders O from the brightness $I_b$. We align the curves of the brightness $I_b$ and the ground-truth shading brightness $S_b$ so that $I_b(p) = S_b(p)$. The red dashed curve is the brightness after compensating the bias of reflectance brightness $\Delta R_b$. The green masks cover the green pixels, while the uncovered ones are white.

First-order Smoothness (FS). For flat surfaces, the normal directions, and thus the incident angles, change little. According to the cosine law of Lambertian reflection, the variation of the shading brightness will be small, and its first-order derivative should be almost zero where there are no shadow edges. Consequently, adjacent pixels will have nearly identical shading brightness:

$$O(p, s, FS) = 0 \quad \text{if } s \in N(p) \text{ and } \frac{\partial I_b(p)}{\partial p} \approx 0, \qquad (10)$$

where $N(p)$ is the neighborhood of p, and $\frac{\partial I_b(p)}{\partial p}$ is the derivative of $I_b$ evaluated at p.

Second-order Smoothness (SS). For smooth surfaces, the surface normal rotates smoothly. As a result, the shading brightness changes smoothly, and we assume its second-order derivative is close to zero, so the local shading can be fitted by a linear function. Assuming further that adjacent pixels share the same body reflectance, the slope of the linear model is $\frac{\partial S_b(p)}{\partial p} = \frac{\partial I_b(p)}{\partial p}$. The shading order between two nearby pixels is then

$$O(p, t, SS) = \frac{\partial I_b(p)}{\partial p}\cdot(p - t) \quad \text{if } t \in N(p) \text{ and } \frac{\partial^2 I_b(p)}{\partial p^2} \approx 0, \qquad (11)$$

where $p - t$ is the directed spatial distance between p and t. In practice, we calculate the derivative and the spatial distance in the horizontal and vertical directions separately.

The preconditions of the methods above are not mutually exclusive, so different methods may be applicable to the same pair of pixels. Together, the preconditions cover all possible situations, so we can find at least one suitable method for most pairs of pixels. The redundancy and completeness of these methods are the basis for robust estimates of shading orders.
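Each of the four estimators reduces to a few lines over the brightness image. The sketch below uses discrete image gradients for the derivatives in (10) and (11); the cluster labels and the bias table are assumed to come from the clustering step of Section 3.3 below:

```python
import numpy as np

def pairwise_orders(I_b, labels, delta_Rb, p, q):
    """Sketch of the shading-order estimates O(p, q, m) from Section 3.2.
    I_b: brightness image; labels: reflectance cluster per pixel;
    delta_Rb[(j, k)]: bias of reflectance brightness between clusters j and k."""
    gy, gx = np.gradient(I_b)                 # dI_b/dp along each axis
    estimates = {}
    if labels[p] == labels[q]:                # Eq. (8): same body reflectance
        estimates["BO"] = I_b[p] - I_b[q]
    else:                                     # Eq. (9): compensate the bias
        estimates["BOB"] = I_b[p] - I_b[q] - delta_Rb[(labels[p], labels[q])]
    if abs(p[0] - q[0]) + abs(p[1] - q[1]) <= 2:   # crude neighborhood test
        estimates["FS"] = 0.0                 # Eq. (10): flat surface
        dy, dx = p[0] - q[0], p[1] - q[1]     # directed distance p - q
        estimates["SS"] = gy[p] * dy + gx[p] * dx  # Eq. (11): linear shading
    return estimates

I_b = np.random.rand(8, 8)
labels = np.zeros((8, 8), dtype=int); labels[:, 4:] = 1
print(pairwise_orders(I_b, labels, {(0, 1): 0.2, (1, 0): -0.2}, (2, 3), (2, 4)))
```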
3.3 Estimating the Bias of Reflectance Brightness

The biases of reflectance brightness $\Delta R_b$ in (9) are needed to estimate the shading orders between pixels with different body reflectance. The absolute values of the reflectance brightness $R_b$ are unavailable, so we cannot calculate their biases directly. Instead, we cluster the pixels by body reflectance and estimate the biases of reflectance brightness between different clusters. The local smoothness of shading implies that pixels within a small patch have similar shading brightness. According to (6), the bias of reflectance brightness between two clusters can therefore be approximated by their difference of image brightness within small patches. The main process is shown in Fig. 4. The image is divided into dense grids of 10 pixels on each side. For a patch T containing pixels from both categories j and k, the difference of reflectance brightness is calculated by

$$\Delta R_b(j, k, T) = \bar{I}_b(j, T) - \bar{I}_b(k, T),$$

where $\bar{I}_b(j, T)$ and $\bar{I}_b(k, T)$ are the median brightnesses of the pixels belonging to categories j and k, respectively. We generate a histogram of the patch-wise measures $\Delta R_b(j, k, T)$ and take its highest peak as the estimate $\Delta\check{R}_b(j, k)$, as shown in Fig. 4c. The minority votes of the histogram mainly come from patches with shading edges inside them (e.g., patches 3 and 4 in Fig. 4b). The reliability F of the estimate is set to be the number of votes from the patches. When $F_{j,k}$ is 0, categories j and k are not adjacent, and their bias cannot be measured directly. In this case, we resort to their biases with other categories. Taking each reflectance category as a node, we build an undirected graph G = (V, E), where V is the set of nodes and E is the set of edges. The weight of the edge between nodes j and k is set to $1/F_{j,k}$, where $F_{j,k}$ is the reliability of $\Delta\check{R}_b(j, k)$ as described before. We can get an estimate of the bias between two nodes by summing the biases along any path connecting them. We further eliminate the multipath effect by extracting the Minimum Spanning Tree (MST) of the graph G. The MST ensures that there is one and only one path between any two nodes, so the relative reflectance brightness $\check{R}_b$ of each node can be uniquely determined; meanwhile, the total reliability of the remaining pairwise biases is maximized.

Fig. 4: Estimating the bias of reflectance brightness between reflectance categories. (a) The cluster map. The symbols j, k, and l stand for 3 reflectance categories. The squares indicate representative patches for estimating the bias of reflectance brightness between categories j and k. (b) The brightness $I_b$. The biases obtained from patches 3 and 4 are outliers, since there are shadow edges inside them. (c) The histogram of the patch-wise biases of reflectance brightness between categories j and k. The peak of the histogram is selected as the result.

The sparsity of the reflectance spectra [47] ensures that the pixels can be clustered into a small number of categories. Since pixels on the shadow-free plane U V are well organized by their body reflectance, we cluster the pixels by a simple k-means. The number of clusters is set to the number of local maxima in the 2D histogram of $I_u$ and $I_v$. The bin size of the histogram is empirically set to 0.03.
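When two clusters never co-occur in a patch ($F_{j,k} = 0$), their bias is chained along the minimum spanning tree. A sketch with scipy is below; the vote counts and patch-level biases are made-up inputs:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

# Assumed pairwise vote counts F[j, k] and histogram-peak biases dRb[j, k]
# for 3 reflectance clusters; F = 0 means the clusters are never adjacent.
F = np.array([[0, 40, 5],
              [40, 0, 0],
              [5, 0, 0]], dtype=float)
dRb = np.array([[0.0, 0.3, -0.1],
                [-0.3, 0.0, 0.0],
                [0.1, 0.0, 0.0]])   # dRb[j, k] = Rb(j) - Rb(k)

# Edge weight 1/F favors reliable edges; the MST keeps one path per pair.
with np.errstate(divide="ignore"):
    W = np.where(F > 0, 1.0 / F, 0.0)
mst = minimum_spanning_tree(W)

# Accumulate biases along MST paths to get a relative brightness per cluster.
_, pred = shortest_path(mst, directed=False, return_predecessors=True)
Rb_rel = np.zeros(len(F))
for k in range(1, len(F)):          # root the tree at cluster 0
    j = k
    while pred[0, j] >= 0:          # walk back toward the root
        Rb_rel[k] += dRb[j, pred[0, j]]   # telescopes to Rb(k) - Rb(0)
        j = pred[0, j]
print(Rb_rel)   # relative reflectance brightness of each cluster
```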
4 THE RELIABILITY OF PAIRWISE ORDERS

For each pair of pixels, we obtained several estimates of their shading order from the different methods of Section 3.2. These methods rely on certain assumptions about the scene, which may be invalid for certain parts of the image; therefore, the estimated shading orders may differ from the ground truth. We evaluate the reliability of each estimate by checking whether influential perturbations happened there. The reliability of an estimate is the probability of all its premises being valid, which is calculated by a Noisy-Or model:

$$C(p, q, m) = \prod_{f \in \mathcal{C}_m} \big(1 - P_f(p, q)\big), \quad m \in \mathcal{M}, \qquad (12)$$

where $\mathcal{C}_m$ is the set of perturbations that method m is not robust to, as listed in Table 1. The probability $P_f(p, q)$ measures how likely it is that the perturbation f occurs around pixels p and q. For an ideal image without any perturbation, all methods get equally high confidences. Once a perturbation happens, the confidences of the sensitive methods drop. The occurrences of the perturbations are predicted by image features. Generally, we calculate a distance x between the pair of pixels according to each feature, and translate the distance into a probability by a sigmoid function of the form

$$\mathrm{sigm}(x; w) = \frac{2}{1 + e^{-wx}} - 1,$$

where w is a positive weight. The features are described below.

Clustering Error (CE) is the probability that the clustering of pixels on the shadow-free plane is inaccurate, which is calculated by

$$P_{CE}(p, q) = \big(1 - P_C(p)\,P_C(q)\big) \cdot \mathrm{sigm}\big(e_{\hat{S}_b}(p, q); w_1\big), \qquad (13)$$

where the cluster probability $P_C$ is the likelihood of each pixel belonging to its reflectance category, and $e_{\hat{S}_b}$ is the strength of the step edge [48] on the shifted shading brightness $\hat{S}_b$. The first term increases as pixel p or q deviates from the cluster centers. The second term is large when the pixels are improperly categorized or the relative reflectance brightnesses are inaccurately estimated, as shown in Fig. 5c. Here each reflectance category is modeled by a multivariate normal distribution. The shifted shading brightness $\hat{S}_b$ is obtained from the brightness $I_b$ minus the relative reflectance brightness $\check{R}_b$ (Section 3.3), followed by median filtering.

Local Color Variance (LCV) is defined to be

$$P_{LCV}(p, q) = \mathrm{sigm}\big(\max(\sigma(I(p)), \sigma(I(q))); w_2\big), \qquad (14)$$

where $\sigma(I(p))$ is the standard deviation of the chromaticities $I_u$ and $I_v$ within the 3x3 window centered at pixel p. Large color variations mainly appear at reflectance boundaries (Figs. 5a and 5c).

Shadow Edges (SE) are caused by occlusions of the direct light. To locate the shadow edges, we render the direct shading $\hat{\gamma}$ under uniformly sampled illuminants. The direct shading is similar to the visibility map proposed by Lee et al. [5]; the difference is that they assume the illuminants to be infinitely far away, which is inaccurate for indoor scenes. Instead, we sample the feasible positions of the illuminant within the room box. The probability of a shadow edge between pixels p and q is calculated from their direct shading under promising illuminants, as follows:

$$P_{SE}(p, q) = \mathrm{sigm}\Big(\frac{1}{|\mathcal{L}|} \sum_{L_d \in \mathcal{L}} \big|\hat{\gamma}(L_d, p) - \hat{\gamma}(L_d, q)\big|;\ w_3\Big). \qquad (15)$$

Here $\mathcal{L}$ is the set of promising illuminants, and $\hat{\gamma}(L_d, p)$ is the direct shading at pixel p under illuminant $L_d$. We select the promising illuminants according to the correlation between the rendered direct shading $\hat{\gamma}$ and the brightness $I_b$; see the supplementary material for details. The Shadow Edges feature is not applicable to RGB-only images, since the geometric layout is needed for rendering the shading map.

Reflectance Change (RC) distinguishes pixels with different chromaticities or intensities, which are assumed to have different reflectance [24][13][17][12]. We calculate the probability of a reflectance change by

$$P_{RC}(p, q) = \mathrm{sigm}\big(d_{uv}(p, q); w_4\big) \cdot \mathrm{sigm}\big(e_b(p, q); w_5\big), \qquad (16)$$

where $d_{uv}$ is the geometric distance on the shadow-free plane, and $e_b(p, q)$ is the magnitude of the step edge lying between p and q in the brightness $I_b$, which aims at distinguishing colors with similar chromaticity but different intensities, especially achromatic ones.

Surface Normal Change (SNC) generates shading variation [5][6][39]. We calculate the probability of a surface normal change by

$$P_{SNC}(p, q) = \mathrm{sigm}\big(\angle(N(p), N(q)); w_6\big), \qquad (17)$$

where $\angle(N(p), N(q))$ is the angle between the surface normals at pixels p and q. The surface normals are derived from the depth map [5]. SNC is unavailable for RGB-only images.

Spatial Distance (SD) is simply the geometric distance between the pixels [6][12]:

$$P_{SD}(p, q) = \mathrm{sigm}\big(d_s(p, q); w_7\big). \qquad (18)$$

For RGB-Depth images, we first calculate the 3D positions of the pixels in camera coordinates and then compute their distances. For RGB-only images, we use the 2D coordinates in the image plane.

Discussion. The features above help us choose the best estimation method for a given pair of pixels. Among them, CE focuses on whether the biases of the reflectance brightnesses are correctly estimated, which is the key to the success of the BOB method. We check the correctness by both the cause and the effect: the pixels being tightly clustered and the estimated shading being smooth, respectively. LCV and RC capture the local and the large-scale behavior of reflectance change, respectively. Local variation, coupled with image blur, disturbs the measurement of the brightness as well as of its gradient; this causes problems for most estimation methods except FS, which is concerned only with the adjacency of pixels.
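The Noisy-Or of (12) and the sigmoid mapping are straightforward to code. Note that the mapping from methods to their sensitive perturbations below only loosely imitates Table 1, which is not reproduced here, so it should be read as a placeholder:

```python
import numpy as np

def sigm(x, w):
    """Map a non-negative distance to a probability in [0, 1): 2/(1+e^{-wx}) - 1."""
    return 2.0 / (1.0 + np.exp(-w * x)) - 1.0

def confidence(method, P):
    """Eq. (12): Noisy-Or over the perturbations the method is sensitive to.
    P maps a feature name to the probability that the perturbation occurred."""
    sensitive_to = {                      # placeholder stand-in for Table 1
        "BO":  ["CE", "LCV", "SE", "RC"],
        "BOB": ["CE", "LCV", "SE"],
        "FS":  ["SE", "SNC", "SD"],
        "SS":  ["LCV", "SE", "SNC", "SD"],
    }[method]
    conf = 1.0
    for f in sensitive_to:
        conf *= 1.0 - P[f]
    return conf

P = {"CE": 0.1, "LCV": 0.05, "SE": 0.3, "RC": 0.2, "SNC": 0.1, "SD": 0.4}
print(confidence("FS", P), sigm(0.1, np.log(3) / 0.1))  # sigm(0.1; w1) = 0.5
```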
5 GLOBAL SHADING FROM SHADING ORDERS VIA CONSISTENCY-AWARE SELECTIVE FUSION

Thus far we have obtained a matrix O of pairwise shading orders (Section 3.2) together with a confidence matrix C from (12) representing their reliability. Now we use Consistency-aware Selective Fusion (CSF) to select a subset of reliable and consistent pairwise orders and combine them into an optimal global order. CSF is designed under the following criteria:
• For each pair of pixels p and q, the optimal estimation method $M_{p,q} \in \mathcal{M}$ is selected exclusively.
• The pairwise connections $W_{p,q}$ should be sparse, such that outliers are excluded.
• The total confidence of the selected pairwise shading orders should be maximized.
• The global order should match the input pairwise orders.

In practice, the global order is obtained through Angular Embedding (AE) [7]. Let $Z_p = e^{i S_b(p)}$ with $i = \sqrt{-1}$ denote the embedding of pixel p on the unit circle in the complex plane (Fig. 6). The angle $\Theta_{p,q}$ from $Z_p$ to $Z_q$ is the shading order between p and q. AE finds an embedding that makes $\Theta_{p,q}$ consistent with the input shading order $O_{p,q} = O(p, q, M_{p,q})$.

Algorithm 1 Consistency-aware Selective Fusion
Require: Pairwise shading orders O and their relative confidence C, the initial weights $\alpha_1$ and $\alpha_2$ of the regularizer, the threshold $\omega_{min}$ on the density of non-zero elements in W, and the step size $\tau$.
Ensure: Embedding Z.
Initialization: $W = \mathbf{1}_{n,n}$, where n is the number of pixels; $M_{p,q} = \arg\max_m C(p, q, m)$.
while $\alpha_2 > 0$ do
  Optimize Z using (20);
  Choose M using (22);
  Update W using (23);
  $\alpha_2 = \alpha_2 - \tau$;
  if $\|W\|_0 < \omega_{min}\, n^2$ then break;
end while
return Z.

The estimation methods M, the pairwise connections W, and the embedding Z are optimized jointly as follows:

$$\min_{W, M, Z}\ J_{AE}(Z; W, M) + P(W) \quad \text{s.t. } |Z_p| = 1,\ \sum_q C_{p,q} = D_p\ \forall p,\ W(p, q) \geq 0\ \forall p, q, \qquad (19)$$

where the error of Angular Embedding is defined to be [7]

$$J_{AE}(Z; W, M) = \sum_{p,q} C_{p,q} \cdot \big\| Z_p - Z_q\, e^{i O_{p,q}} \big\|^2, \qquad (20)$$

and the regularization term is an elastic net [49]:

$$P(W) = \alpha_1 \|W\|_1 + \frac{\alpha_2}{2} \|W\|_2^2. \qquad (21)$$

Here $C_{p,q} = W_{p,q}\, C(p, q, M_{p,q})$ is the weighted confidence, and the diagonal matrix $D_p = \sum_q \max_{m \in \mathcal{M}} C(p, q, m)$ is a degree matrix.
$\alpha_1$ and $\alpha_2$ are the weights of the lasso (L1) and ridge (L2) terms, respectively. The elastic net enforces group sparsity on the weights, so several groups of reliable neighbors will be selected for each pixel. We optimize the variables M, W, and Z iteratively, as described in Algorithm 1. Fig. 6 illustrates one iteration of the process. The details are given below.

Choose M. Keeping W and Z fixed, we search for the optimal estimation methods by

$$\arg\min_{M}\ \sum_{p,q} W_{p,q}\, C(p, q, M_{p,q}) \cdot \big\| Z_p - Z_q\, e^{i O(p,q,M_{p,q})} \big\|^2 \quad \text{s.t. } \sum_q W_{p,q}\, C(p, q, M_{p,q}) = D_p\ \forall p. \qquad (22)$$

This can be optimized by the Lagrange method: we iteratively pick the optimal $M_{p,q}$ that balances the confidence and the consistency of the orders under the current Lagrangian multiplier, and update the multiplier by dual ascent. In Fig. 6b, the method selected for pixels p and q is the one with the second highest confidence but the best consistency with the global shading.

Update W. Keeping M and Z fixed, the weights are updated by

$$\min_W\ \sum_{p,q} W_{p,q} E_{p,q} + \alpha_1 \|W\|_1 + \frac{\alpha_2}{2} \|W\|_2^2 \quad \text{s.t. } \sum_q W_{p,q}\, \bar{C}_{p,q} = D_p\ \forall p,\ W_{p,q} \geq 0\ \forall p, q, \qquad (23)$$

where $\bar{C}_{p,q} = C(p, q, M_{p,q})$ and the confidence-weighted embedding error is $E_{p,q} = \bar{C}_{p,q} \cdot \| Z_p - Z_q\, e^{i O(p,q,M_{p,q})} \|^2$. This optimization problem can be solved by the Alternating Direction Method of Multipliers (ADMM) [50]; see the supplementary material for details. From (23) we can see that the larger the embedding error $E_{p,q}$ is, the smaller $W_{p,q}$ tends to be. This can be observed in Fig. 6, where the pair p and t gets a low weight, since the embedding error is large for every estimation method. Note that we decrease the value of $\alpha_2$ gradually in Algorithm 1, which makes W more and more sparse. This progressive sparsity has better numerical stability than setting $\alpha_2$ to a small value from the very beginning. When $\alpha_2$ gets too small, the pairwise connections may become overly sparse, producing an ill-conditioned graph; we terminate the iteration of Algorithm 1 in that case.

Optimize Z. Optimizing the embedding error $J_{AE}(Z; W, M)$ in (20) directly is hard in practice, since it has n constraints, where n is the number of pixels. Relaxing the unit-length constraints in (19) to $Z^{\dagger} D Z = \mathbf{1}_n^{\top} D\, \mathbf{1}_n$, the problem can be rewritten in the following matrix form:

$$\min_Z\ Z^{\dagger} L Z \quad \text{s.t. } Z^{\dagger} D Z = \mathbf{1}_n^{\top} D\, \mathbf{1}_n. \qquad (24)$$

Here L is a Laplacian matrix

$$L = D - \Big( C \circ e^{iO} + \big(C \circ e^{iO}\big)^{\dagger} \Big), \qquad (25)$$

where $\circ$ is the matrix Hadamard product, $\dagger$ is the complex conjugate transpose, $\mathbf{1}_n$ is an n × 1 vector of all ones, and the exponentiation acts element-wise. To make the optimization tractable, we consider only the shading orders between nearby pixels, while the confidences of the other shading orders are set to zero. In our experiments we set the neighborhood to be a square of 30 pixels on each side. The optimization problem in (24) is solved by the spectral partitioning algorithm [48] with complex-valued eigenvectors. The solution is given by the angles of the first eigenvector $Z_0$, the one with the smallest eigenvalue. We refer to the paper of Yu [7] for more details.

Recover shading $S_b$. To decode the shading brightness $S_b$ from the angles of $Z_0$, we need to ensure that the angle between any two points is less than 2π; otherwise the points may overlap with each other. To achieve this, we scale the brightness dimension of the U V B color space by a positive scalar. The scaling does not disturb the order of $Z_0$, and we can scale the shading brightness back after the decoding. AE allows the points to rotate as a whole around the origin, so we need to rotate the points back until the angles of the darkest points are zero.
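For the spectral step, a toy dense version of (24) and (25): build the Hermitian Laplacian from the orders and confidences, whiten by the degree matrix, and take the eigenvector with the smallest eigenvalue. Real images need sparse solvers; this sketch assumes a handful of pixels:

```python
import numpy as np

def angular_embedding(O, C):
    """Toy dense solver for Eqs. (24)-(25). O: antisymmetric pairwise orders;
    C: symmetric nonnegative confidences (both n x n). Returns embedding angles."""
    A = C * np.exp(1j * O)                    # confidence-weighted rotations
    d = C.sum(axis=1)                         # degrees
    L = np.diag(d).astype(complex) - (A + A.conj().T) / 2  # Hermitian Laplacian
    Dm = np.diag(1.0 / np.sqrt(d))
    w, V = np.linalg.eigh(Dm @ L @ Dm)        # whitened generalized eigenproblem
    z0 = Dm @ V[:, np.argmin(w)]              # eigenvector of smallest eigenvalue
    return np.angle(z0)

# Three "pixels" with shading brightness 0.0, 0.2 and 0.5 and consistent orders.
S = np.array([0.0, 0.2, 0.5])
O = S[:, None] - S[None, :]
C = np.ones((3, 3)) - np.eye(3)
print(angular_embedding(O, C))   # recovers S up to a global rotation
```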
Note that the darkest pixels and the brightest pixels are always separated by a gap on the circle in the complex plane; Fig. 7b shows an example. The gap can easily be located from the consecutive empty bins of the histogram of the angles of $Z_0$ (Fig. 7c). The pixels falling into the bins to the left of the gap are shifted to the right by 2π.

Fig. 8 shows the change of the variables during the iterations of CSF. In the beginning, the relative shading of some local regions is inaccurate (e.g., the circle inside the red box), since some wrong estimates occasionally get higher confidences than the right ones based solely on the image features. For example, the orders obtained from the BOB method (indicated by green dots) may be wrong where the clustering is inaccurate (see Fig. 5b): some pixels with similar but different colors are mistaken to have the same reflectance (the red dots in the light yellow regions). Furthermore, the FS method is adopted to estimate the shading orders between distant pixels (the yellow dots far away from the center point). When the global order is used to guide the selection, the right estimation methods gradually emerge. At the same time, the weights of unreliable connections are greatly decreased as the sparsity gets stronger. Specifically, pairs of pixels whose orders cannot be accurately estimated by any method are assigned zero weights and excluded from the fusion. As a result, the errors of $Z_0$ are reduced considerably.

6 EXPERIMENTS

We evaluate our method on the MIT Intrinsic Images dataset [24], a widely used benchmark. It contains ground-truth intrinsic images of 20 natural objects, 16 of which are used for testing. The images are taken in a controlled environment, where the direct illuminants are nearly white and the ambient illuminants are limited. To validate against real-world scenes, we evaluate our method on the Intrinsic Images in the Wild (IIW) dataset [12], a large-scale dataset of public photo collections. We also test our method on outdoor scenes from the UIUC Shadow dataset [45], and we further test the utility of depth information on the RGB-Depth images of the NYU-Depth V2 dataset [51].

Error Metrics and Parameter Settings

We evaluate the results on the MIT Intrinsic Images dataset primarily by the standard metric, the Local Mean Squared Error (LMSE) [24]. However, as pointed out by Jiang et al. [27], LMSE is sensitive to the window size and to the difference between the mean values of the recovered intrinsic images and the ground truth. Moreover, LMSE is biased towards edge-based methods [11]. To give a more complete evaluation, we include the absolute LMSE (aLMSE) and the correlation metrics proposed by Jiang et al. [27], as well as the standard MSE metric. The aLMSE is defined as follows:

$$aLMSE(I, \tilde{I}) = \sum_w \min_a \big\| (I_w - \mu_w) - a(\tilde{I}_w - \tilde{\mu}_w) \big\|^2, \qquad (26)$$

where I and $\tilde{I}$ are the ground truth and the estimate of the intrinsic image, respectively, w is the index of the sliding window, and $\mu$ and $\tilde{\mu}$ are the averages of I and $\tilde{I}$, respectively. The optimal scale a is searched to minimize the squared error; the influence of the difference of mean values can thus be eliminated by aLMSE. The correlation is defined to be

$$Cor(I, \tilde{I}) = \frac{E[(I - \mu)(\tilde{I} - \tilde{\mu})]}{\sigma \tilde{\sigma}}, \qquad (27)$$

where σ is the standard deviation of the image and E is the expectation. We refer to the supplementary material of Reference [27] for more details of aLMSE and correlation.
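Metrics (26) and (27) can be written down directly. The sketch below simplifies the window handling to non-overlapping square windows, which is our assumption rather than the exact evaluation protocol:

```python
import numpy as np

def almse(I, I_hat, k=20):
    """Eq. (26): per-window scale-invariant squared error, summed over windows."""
    total = 0.0
    for y in range(0, I.shape[0] - k + 1, k):       # non-overlapping windows
        for x in range(0, I.shape[1] - k + 1, k):
            a_w = I[y:y+k, x:x+k] - I[y:y+k, x:x+k].mean()
            b_w = I_hat[y:y+k, x:x+k] - I_hat[y:y+k, x:x+k].mean()
            denom = (b_w ** 2).sum()
            a = (a_w * b_w).sum() / denom if denom > 0 else 0.0  # optimal scale
            total += ((a_w - a * b_w) ** 2).sum()
    return total

def correlation(I, I_hat):
    """Eq. (27): normalized cross-correlation between estimate and ground truth."""
    a, b = I - I.mean(), I_hat - I_hat.mean()
    return (a * b).mean() / (I.std() * I_hat.std())

gt = np.random.rand(60, 60)
est = 0.5 * gt + 0.1 + 0.01 * np.random.rand(60, 60)
print(almse(gt, est), correlation(gt, est))   # near-zero aLMSE, Cor close to 1
```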
Among these metrics, correlation and MSE measure the error in a global way, while LMSE and aLMSE take an average of local errors over small image windows. For each image, the performance on reflectance and on shading is calculated separately, and their average is taken as the result. The final result is the average of the performances over all images. Results on the IIW dataset are evaluated by the "weighted human disagreement rate" ($WHDR_{10\%}$) [12]. It measures the rate of correct judgements on "which one has a darker reflectance" between two pixels.

The main parameters of our model are the positive weights of the sigmoid functions in Section 4. We set $w_1$ to $\ln 3/0.1$, so the sigmoid function maps a step edge of strength 0.1 to a probability of 0.5. Similarly, we set $w_2 \sim w_6$ to $\ln 3/0.2$, $\ln 3/0.01$, $\ln 3/0.08$, $\ln 3/0.1$, and $\ln 3/0.2$, respectively. For $w_7$, we set the value used by the FS method to be twice that of the SS method: we find the median of the spatial distances of all pixel pairs, $\bar{d}_s$, and set $w_7 = \ln 3/\bar{d}_s$ for the FS method. For RGB-only images, we increase $w_7$ sixfold to compensate for the increased probability of selecting the FS and SS methods. The initial weights $\alpha_1$ and $\alpha_2$ in (21) are set to 1 and 2, respectively. The threshold $\omega_{min}$ and the step size τ in Algorithm 1 are set to 1/3 and 0.2, respectively. We found that our model is insensitive to these parameters.

Evaluation of the components of our method

Individual estimation methods. The results on the MIT Intrinsic Images dataset are compared in Fig. 9a. Our full model (Full) achieves the best performance, while estimating the shading orders without any single method causes a noticeable drop in performance. Disabling BOB (W/o BOB) causes the most severe drop, followed by BO, FS, and SS, consecutively. Fig. 10 shows the changes in the recovered reflectance and shading when different methods are removed. Removing BO breaks the smoothness of reflectance across shadow edges. When BOB is unused, the shading smoothness across different reflectance is broken, leaving sharp edges in the shading. The smoothness-based methods FS and SS are essential for keeping the local shading smooth: without FS, smoothness in textured regions cannot be guaranteed, and SS is important for areas where the biases of reflectance brightness are not accurately estimated.

The brightening direction. We test a special case of our method where the brightening direction is fixed at $[1, 1, 1]^T$, following the Color Retinex [24]. Although the direct illuminants in the MIT Intrinsic Images dataset are nearly white and the ambient illuminants are limited, the performance under a white brightening direction (WB) is much worse than that of our original model (Fig. 9b).

The confidences of pairwise orders. We evaluate the importance of the confidences of the pairwise orders for inferring the global shading by replacing AE with AS [34], i.e., assigning equal weights to the pairwise shading orders. From Fig. 9b we can see that the performance drops significantly.

Depth information. Several depth-based features are used to calculate the confidences of pairwise orders for RGB-Depth images (Section 4). Fig. 11 shows their effects. Utilizing the Surface Normal Change feature increases the probability of applying the shading smoothness constraints to flat surfaces; see the regions in the red and green boxes of Fig. 11 for examples.
These areas are mistaken to be shadowed without depth cues, since they have similar chromaticity to their surroundings, and their boundaries are blurred. The feature of Shadow Edges finds shading changes at depth discontinuities efficiently. It may miss some shadow edges that cannot be generated by any sample of illuminant, when the change of depth is small (e.g., the area in the blue box of Fig. 11), or a large part of the occluder is not visible in the current view (e.g., the area in the yellow box). Results on MIT Intrinsic Images dataset We compare our method to the state-of-art and to several classic approaches as listed in Table 2. These results are either copied from their papers, the report in [11], or by --0.0390 -Color Retinex [24] 0.7146 0.1108 0.0286 0.2541 Jiang-A [27] 0.6184 0.1533 0.0421 0.3988 Jiang-H [27] 0.5829 0.1524 0.0483 0.3476 Jiang-HA [27] 0.6109 0.1579 0.0454 0.3631 Shen-SR [14] 0.7259 0.1223 0.0240 0.2454 Shen-SRC [14] --0.0204 -Zhao et al. [4] --0.0250 -Gehler et al. [13] 0.7748 0.0985 0.0244 0.2544 Serra et al. [11] 0.7862 0.0834 0.0340 0.2958 Bell et al. [12] 0.7229 0.1100 0.0337 0.2763 Li et al. [30] --0.0190 -Chang et al. [19] --0.0229 -SIRFS [15] 0 running their code directly without tuning any parameters 1 . We report the results under the best parameters for the whole dataset. Our method achieves the best performance. Fig. 12 gives some concrete examples. The most remarkable advantage of our method is that it can recover the reflectance under deep shadows. One reason is that we can cluster the pixels with the same reflectance together on the U V shadow-free plane, no matter how dramatically the shading changes. Another reason is that our model fuses estimates from different methods by selecting the optimal one exclusively, which avoids smoothing the shading edges by the other estimates. Clustering-based methods, including Gehler et al. [13], Garces et al. [17], and Bell et al. [12], are sensitive to the change of intensity and color caused by shadows. The edge-based method of Li et al. [30] tends to assign large gradients to reflectance changes, which degrades at sharp shadow edges (e.g., those on the body of the deer). The methods of Gehler et al. [13] and Li et al. [30] smooth the shading extensively, leaving residuals of shadows in the reflectance (e.g., the teabag). SIRFS [15] smoothes the surfaces, which may generate an overly smooth shading (e.g., the frog). Another advantage is that our method can recover the global shading robustly. The main reason is that the clustering-based methods BO and BOB capture the shading orders between distant pixels effectively. Edge-based methods cannot reliably recover the relative shading between unconnected parts (e.g., the shadings recovered by Li et al. [30] are inconsistent between the front and the back of the turtle). Another reason is that BOB can handle the areas where the shading and reflectance change simultaneously (e.g., the mouth and the head of the frog). 1. The method SIRFS is evaluated on the images of cup2, deer, frog2, paper2, raccoon, sun, teabag1 and turtle, while the other images are used for training. The results of Bell et al. [12] are obtained through relaxing the constraints on the absolute values of shading and removing the intensity from the features for clustering the reflectance. Otherwise the deep shadows will be mistaken to be black and clustered into individual categories. 
Our method preserves the subtle variations of reflectance (e.g., the yellow and orange regions of the tea bag), since the intra-cluster variations in the U V plane (Fig. 2c) are represented in the recovered reflectance. In contrast, some clustering-based methods, such as Garces et al. [17] and Bell et al. [12], unify the reflectance of the pixels of each cluster. This operation often leads to block artifacts (e.g., the tea bag). Our method did not handle the feet of the deer well. The black feet and the white legs are both achromatic, so they fall into the same cluster on the shadow-free plane. The image blur further reduces the efficiency of the feature of Reflectance Change (Section 4), so the difference between black and white are not kept into reflectance. Results on Natural Images The quantitative results on the IIW dataset are shown in Table 3. Our method achieved comparable results to the state-of-art. It should be mentioned that W HDR 10% cannot reflect the superiority of our method on inferring the shading orders between pixels with different chromaticity, since only pixels with similar chromaticity are compared [12]. Further, the textured pixels are excluded from evaluation, so the ability to preserve the texture of reflectance is untested. Actually, both the top-performing methods of [16] and [12] remove the texture from the reflectance. For a fair comparison, we report our result that uses the edgepreserving smoothing of [16] to preprocess the input image. Without smoothing, the W HDR 10% increases about 3.7%. The IIW dataset is much more difficult than the MIT Intrinsic Images dataset. The image in the top row of Fig. 13 is comprised of different kinds of objects, some of which are highly textured (e.g., the wall with blue painting). Our method preserves the textures 2 much better than the other methods in comparison. Another difficulty comes from the intensive specular reflections (e.g., the wall in the top row of Fig. 13). Our method puts the specular reflections into reflectance, while some other methods, such as Zhao et al. [4] and Garces et al. [17], put them into shading. The greatest challenge of the IIW dataset comes from the coexistence of multiple direct illuminants in the same scene. In the bottom row of Fig. 13, the areas in the red boxes of the input image are covered by lights in different colors. This situation does not satisfy the bi-illuminant assumption of the BIDR model [3]. No unique brightening direction exists for the whole image, and the brightening direction obtained from entropy minimization (Section 3.1) eliminates the difference improperly. It causes two problems to our method: (1) the error of clustering will increase; and (2) the color of the recovered reflectance will be twisted. The first problem is shared by all the clustering-based methods such as Garces et al. [17] and Bell et al. [12]. The second problem is common, since all the methods in comparison assume a single (direct) illumination. Despite these problems, our model still recovered a globally consistent shading. Discussion. Scene-SIRFS addressed the mixture of illuminations by a soft segmentation of the image with respect to the "ownership" of illuminants [40]. But the segmentation 2. We do not use the edge-preserving smoothing to produce the qualitative results in Fig. 13. is not easy, since the changes of illuminations are often slower than the changes of reflectance. 
Beigpour and Van de Weijer [35] proposed the Multi-illuminant Dichromatic Reflection (MIDR) model to account for secondary illuminants. However, in practice they only dealt with the case of two direct illuminants irradiating a single-colored object. We may consider extending the BIDR model to incorporate multiple direct illuminants. Accordingly, there would be multiple brightening directions, and the brightness would have to be extended to a mixture of sub-coordinates. This would make the problem much more complex.

We further test on the outdoor images from the UIUC shadow dataset [45]. Fig. 14 shows three examples. The ambient illuminant is usually the blue sky, so the shadowed areas are more bluish than the lit areas. We compare to the methods of Jiang-HA [27] and Gehler et al. [13]. We also compare to the region-pair-based shadow removal method proposed by Guo et al. [45]³. Our model recovers the reflectance by brightening the dark pixels along the yellowish brightening direction, while the other intrinsic decomposition methods often fail to recover their colors. The method of Guo et al. [45] is unable to handle thin areas due to the limited resolution of the image segmentation (e.g., the fingers in the last image of Fig. 14).

Evaluation on RGB-Depth Images
We test on the RGB-Depth images from the NYU-Depth V2 dataset. We compare to the methods that take RGB-Depth images [40][6][39] or videos [5] as input⁴. Typical examples are shown in Fig. 15. Our method successfully recovers globally consistent shadings and preserves the textures of the reflectance. In particular, our method is the only one that recovers the smooth shading over the painting in the first row of Fig. 15. In comparison, the method of Lee et al. [5] does not produce consistent shadings between surfaces in different orientations: in their recovered reflectance of the first image in Fig. 15, the backrest of the sofa and the walls are much darker than the seat of the sofa and the floor. The method of Barron and Malik [40] successfully captures the shapes of curved surfaces (e.g., the sofa in the first image of Fig. 15) but not those of objects with sharp boundaries (e.g., the cabinet and the bed in the second image of Fig. 15). The method of Chen and Koltun [6] achieves good smoothness of shading while keeping the sharp surface edges at the same time. However, this method often fails to recover the shading orders between objects with different colors (e.g., the blue pillow and the sofa in the first image of Fig. 15). The method of Jeon et al. [39] preserves the textures in the reflectance very well (e.g., the floor in the second image of Fig. 15), but it tends to reduce the difference of shading between surfaces with similar orientations (e.g., the walls and the cabinet in the second image of Fig. 15).

CONCLUSIONS AND DISCUSSIONS
We proposed shading orders for intrinsic image decomposition. The shading orders capture not only adjacent relations but also distant connections, which overcomes the limitation of edge-based methods that lack the large-scale structure of the shading. The shading orders can be measured by several individual methods, each of which gives a reasonable estimate based on certain assumptions about the scene. Jointly utilizing these methods captures various kinds of priors and observations of the scene. We developed the CSF algorithm to combine the pairwise orders measured by the different methods.
CSF infers a global order by selecting the confident and consistent pairwise orders and resolving their conflicts through AE. The local competition removes unreliable measurements from the fusion, so the results are much cleaner than a weighted sum of different estimates. This is essential for keeping sharp shadow edges and textures. The sparsity-driven neighbor selection further reduces the outliers of local measurements. Experimental results demonstrated that our model is suitable for various indoor and outdoor scenes with noticeable ambient illuminants. However, the BIDR model cannot handle multiple direct illuminants, interreflections, or specular reflections. We need to generalize the BIDR model and the U V B color space for more realistic scenes. Highly textured images are still quite challenging for clustering-based methods, since their reflectance often changes irregularly and thus cannot be clustered properly. Jeon et al. proposed to separate the texture layer before decomposing the shading and reflectance [39], which is a promising way to ease the clustering.

APPENDIX
RENDERING THE SHADING MAP
Fig. 16 shows the rendered shading map of an RGB-Depth image. In the camera coordinate frame, we draw a "gray surface", taking all the pixels as vertices. Both the color of the surface and the illuminant are set to $[1, 1, 1]^T$, and the reflection of the surface is set to be diffuse only (i.e., without any specular reflection). Here we assume that there is only one direct illuminant for each image, while the ambient illumination is set to 0. The illuminant is put inside the room box, and the range of the room box is set to be the scope of all the observable pixels. In particular, we expand the range of the z dimension (orthogonal to the image plane) to the negative part of the coordinate, since the light may be placed behind the camera. The surface is rendered with the Matlab surfl function, and the output intensities of the vertices form a shading map. The bottom row of Fig. 16 shows the rendering results under several sampled illuminants. We can see that some of them are close to the real shading map of the scene, while the others are quite different.

The similarity between the rendered shading $\hat\gamma(L_d)$ and the ground-truth shading brightness $S^b$ is measured by their category-wise correlation:
$$\mathrm{Sim}\big(\hat\gamma(L_d), S^b\big) = \sum_{g\in G}\frac{n_g}{n}\,\mathrm{Cor}\big(\hat\gamma_g(L_d),\, e^{S^b_g}\big) = \sum_{g\in G}\frac{n_g}{n}\,\mathrm{Cor}\big(\hat\gamma_g(L_d),\, e^{I^b_g}\big), \tag{28}$$
where $G$ is the set of reflectance categories, $n$ is the number of pixels, $n_g$ is the number of pixels in the $g$-th category, and Cor is the correlation between two variables. The subscript $g$ denotes the subset of pixels belonging to the $g$-th category. Here we utilized the linear relationship between the brightness $I^b$ and the shading brightness $S^b$ based on (6). We select a set of candidate illuminants $\mathcal{L} = \{L_d \mid \mathrm{Sim}(\hat\gamma(L_d), S^b) > 0.2\}$.

ADMM FOR OPTIMIZING THE WEIGHTS W
Eqn. (23) can be solved for each pixel $p$ individually, since the matrix $W$ can be decomposed into a series of vectors $W_{p,\cdot}$; so can $E$ and $\bar C$. For simplicity, we omit the subscript $p$ from all the matrices from now on, and denote $d = D_p$. We reformulate Eqn. (23)
into the equivalent problem:
$$\underset{W,X,Y}{\arg\min}\;\; g_1(W) + g_2(X) + g_3(Y) \quad \text{s.t.}\;\; \bar C^T W = d,\;\; W = X = Y, \tag{29}$$
where
$$g_1(W) = E^T W + \frac{\alpha_2}{2}\|W\|_2^2,\qquad g_2(X) = \alpha_1\|X\|_1,\qquad g_3(Y) = \begin{cases} 0 & \text{if } Y_q \ge 0,\ \forall q,\\ \infty & \text{otherwise.} \end{cases} \tag{30}$$
By introducing the Lagrange multipliers $\lambda$, $\Gamma_1$, and $\Gamma_2$, we obtain the following augmented Lagrangian [50]:
$$\begin{aligned} \mathcal{L}(W, X, Y, \lambda, \Gamma_1, \Gamma_2) ={}& g_1(W) + g_2(X) + g_3(Y) + \lambda\big(d - \bar C^T W\big) \\ &+ \Gamma_1^T (W - X) + \frac{\rho}{2}\|W - X\|_2^2 + \Gamma_2^T (W - Y) + \frac{\rho}{2}\|W - Y\|_2^2, \end{aligned} \tag{31}$$
where $\rho$ is a scaling parameter. We initialize $W$, $X$, and $Y$ with $\mathbf{1}_n$, while $\lambda = 2$ and $\Gamma_1 = \Gamma_2 = \mathbf{1}_n$. Then we update them iteratively as follows:
$$\begin{aligned} W^{k+1} &= \frac{1}{\alpha_2 + 2\rho}\big(\rho X^k + \rho Y^k - E + \lambda^k \bar C - \Gamma_1^k - \Gamma_2^k\big), \\ X^{k+1} &= \begin{cases} W^{k+1} + \tfrac{1}{\rho}\Gamma_1^k - \tfrac{\alpha_1}{\rho}, & \text{if } W^{k+1} + \tfrac{1}{\rho}\Gamma_1^k > \tfrac{\alpha_1}{\rho}, \\ 0, & \text{if } \big|W^{k+1} + \tfrac{1}{\rho}\Gamma_1^k\big| \le \tfrac{\alpha_1}{\rho}, \\ W^{k+1} + \tfrac{1}{\rho}\Gamma_1^k + \tfrac{\alpha_1}{\rho}, & \text{if } W^{k+1} + \tfrac{1}{\rho}\Gamma_1^k < -\tfrac{\alpha_1}{\rho}, \end{cases} \\ Y^{k+1} &= \Big(W^{k+1} + \tfrac{1}{\rho}\Gamma_2^k\Big)_+, \\ \lambda^{k+1} &= \lambda^k + \eta_1\big(d - \bar C^T W^{k+1}\big), \\ \Gamma_1^{k+1} &= \Gamma_1^k + \eta_2\big(W^{k+1} - X^{k+1}\big), \\ \Gamma_2^{k+1} &= \Gamma_2^k + \eta_3\big(W^{k+1} - Y^{k+1}\big), \end{aligned} \tag{32}$$
where the $X$-update is a soft thresholding, $(\cdot)_+$ truncates all the elements of a vector to be non-negative, and $\eta_1$, $\eta_2$, and $\eta_3$ are step sizes. We terminate the iteration when $\|W - X\|_1 + \|W - Y\|_1$ is less than a threshold $T_W$ and $|d - \bar C^T W|$ is less than a threshold $T_d$. In our implementation, we set $\rho$, $\eta_1$, $\eta_2$, and $\eta_3$ to 5, 0.05, 1, and 1, respectively.

Fig. 17 shows the results of our method for the images of the MIT Intrinsic Images dataset other than those that appear in the paper. Figs. 18, 19, and 20 present several examples from the IIW dataset. Fig. 21 shows more results of our method on the UIUC Shadow Removal dataset. Fig. 22 shows more results of our method on the NYU-Depth V2 dataset, where we compare our method to several recent algorithms, including Bell et al. [12], Zhao et al. [4], Garces et al. [17], Lee et al. [5], Barron and Malik [40], Chen and Koltun [6], and Jeon et al. [39].

Fig. 23 shows the colors of the shading in images from the MIT Intrinsic Images dataset. We can see that most of the shading images are nearly achromatic. The reason is that the images are captured in a controlled environment, where the ambient illuminations are largely suppressed by painting the background black. According to Eq. (3), when the ambient illumination is negligible, the shading will be nearly achromatic, no matter what the color of the direct illumination is. However, for the frog in Fig. 23, the shading is slightly chromatic. Fig. 24 shows the colors of the shading in natural indoor and outdoor scenes. The indoor scenes often have complex illuminations, so the shading colors vary a lot from image to image, and even from place to place in the same image. In comparison, the shading colors in outdoor scenes are more regular. In particular, the shadows in outdoor scenes are often bluish, since the ambient light is often the blue sky.
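As a concrete illustration of the ADMM updates in Eq. (32), the following sketch optimizes the weight vector of a single pixel. The inputs E, Cbar (the vector written C̄ above), and d are hypothetical placeholders; the initialization and the parameter values follow the text above, but this is only a sketch, not the reference implementation.

```python
import numpy as np

def soft(x, t):
    """Element-wise soft thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_weights(E, Cbar, d, a1=1.0, a2=2.0, rho=5.0,
                 etas=(0.05, 1.0, 1.0), iters=1000, tol=1e-4):
    """One pixel's weight vector W, following the updates of Eq. (32)."""
    n = E.size
    W, X, Y = np.ones(n), np.ones(n), np.ones(n)
    lam, G1, G2 = 2.0, np.ones(n), np.ones(n)
    for _ in range(iters):
        W = (rho * X + rho * Y - E + lam * Cbar - G1 - G2) / (a2 + 2.0 * rho)
        X = soft(W + G1 / rho, a1 / rho)       # the X-update of Eq. (32)
        Y = np.maximum(W + G2 / rho, 0.0)      # projection onto the non-negative orthant
        lam += etas[0] * (d - Cbar @ W)        # dual ascent on the constraint Cbar'W = d
        G1 += etas[1] * (W - X)
        G2 += etas[2] * (W - Y)
        if (np.abs(W - X).sum() + np.abs(W - Y).sum() < tol
                and abs(d - Cbar @ W) < tol):
            break
    return W

# Hypothetical inputs: four candidate neighbors with embedding errors E and
# confidences Cbar; d plays the role of the degree D_p.
E = np.array([0.9, 0.1, 0.4, 0.05])
Cbar = np.array([0.8, 0.9, 0.5, 0.7])
print(admm_weights(E, Cbar, d=1.0))   # pairs with large errors receive small weights
```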
We address the problem of decomposing a single image into reflectance and shading. The difficulty comes from the fact that the components of the image (the surface albedo, the direct illumination, and the ambient illumination) are heavily coupled in the observed image. We propose to infer the shading by ordering pixels by their relative brightness, without knowing the absolute values of the image components beforehand. The pairwise shading orders are estimated in two ways: brightness order and low-order fittings of the local shading field. The brightness order is a non-local measure, which can be applied to any pair of pixels, including those whose reflectance and shading are both different. The low-order fittings are used for pixel pairs within local regions of smooth shading. Together, they can capture both the global order structure and the local variations of the shading. We propose Consistency-aware Selective Fusion (CSF) to integrate the pairwise orders into a globally consistent order. The iterative selection process resolves the conflicts between the pairwise orders obtained by different estimation methods. Inconsistent or unreliable pairwise orders are automatically excluded from the fusion to avoid polluting the global order. Experiments on the MIT Intrinsic Images dataset show that the proposed model is effective at recovering the shading, including deep shadows. Our model also works well on natural images from the IIW dataset, the UIUC Shadow dataset, and the NYU-Depth dataset, where the colors of the direct lights and the ambient lights are quite different.
Different constraints often result in quite different shading orders. How to fuse them remains an open problem. Edge-based methods classify each edge into a reflectance edge or a shading edge. Accordingly, the shading order between the two sides of the edge can be decided. In particular, Retinex classified the edges by the magnitude of gradients @cite_44 . This classification method is risky. Some shadow edges are quite strong, while the reflectance edges between similar colors are relatively weak. Extensions of Retinex introduced several new features, including texture similarity @cite_15 , classifiers over local features @cite_46 or patches @cite_17 @cite_11 , correlation between the mean luminance and luminance amplitude @cite_41 , and image sequences under different illumination directions @cite_49 @cite_22 . These features improved the accuracy of classification, but none of them are robust enough to handle all kinds of scenes. CSF faces a similar problem of selecting the optimal pairwise order from several estimates. The difference is that CSF incorporates consistency between the pairwise orders and global order into the selection criteria, which can rectify the inconsistent selections made by noisy image features.
{ "abstract": [ "", "", "Intrinsic images represent the underlying properties of a scene such as illumination (shading) and surface reflectance. Extracting intrinsic images is a challenging, ill-posed problem. Human performance on tasks such as shadow detection and shape-from-shading is improved by adding colour and texture to surfaces. In particular, when a surface is painted with a textured pattern, correlations between local mean luminance and local luminance amplitude promote the interpretation of luminance variations as illumination changes. Based on this finding, we propose a novel feature, local luminance amplitude, to separate illumination and reflectance, and a framework to integrate this cue with hue and texture to extract intrinsic images. The algorithm uses steerable filters to separate images into frequency and orientation components and constructs shading and reflectance images from weighted combinations of these components. Weights are determined by correlations between corresponding variations in local luminance, local amplitude, colour and texture. The intrinsic images are further refined by ensuring the consistency of local texture elements. We test this method on surfaces photographed under different lighting conditions. The effectiveness of the algorithm is demonstrated by the correlation between our intrinsic images and ground truth shading and reflectance data. Luminance amplitude was found to be a useful cue. Results are also presented for natural images.", "Interpreting real-world images requires the ability distinguish the different characteristics of the scene that lead to its final appearance. Two of the most important of these characteristics are the shading and reflectance of each point in the scene. We present an algorithm that uses multiple cues to recover shading and reflectance intrinsic images from a single image. Using both color information and a classifier trained to recognize gray-scale patterns, given the lighting direction, each image derivative is classified as being caused by shading or a change in the surface's reflectance. The classifiers gather local evidence about the surface's form and color, which is then propagated using the generalized belief propagation algorithm. The propagation step disambiguates areas of the image where the correct classification is not clear from local evidence. We use real-world images to demonstrate results and show how each component of the system affects the results.", "Sensations of color show a strong correlation with reflectance, even though the amount of visible light reaching the eye depends on the product of reflectance and illumination. The visual system must achieve this remarkable result by a scheme that does not measure flux. Such a scheme is described as the basis of retinex theory. This theory assumes that there are three independent cone systems, each starting with a set of receptors peaking, respectively, in the long-, middle-, and short-wavelength regions of the visible spectrum. Each system forms a separate image of the world in terms of lightness that shows a strong correlation with reflectance within its particular band of wavelengths. These images are not mixed, but rather are compared to generate color sensations. The problem then becomes how the lightness of areas in these separate images can be independent of flux. 
This article describes the mathematics of a lightness scheme that generates lightness numbers, the biologic correlate of reflectance, independent of the flux from objects", "Intrinsic images are a useful midlevel description of scenes proposed by H.G. Barrow and J.M. Tenenbaum (1978). An image is de-composed into two images: a reflectance image and an illumination image. Finding such a decomposition remains a difficult problem in computer vision. We focus on a slightly, easier problem: given a sequence of T images where the reflectance is constant and the illumination changes, can we recover T illumination images and a single reflectance image? We show that this problem is still imposed and suggest approaching it as a maximum-likelihood estimation problem. Following recent work on the statistics of natural images, we use a prior that assumes that illumination images will give rise to sparse filter outputs. We show that this leads to a simple, novel algorithm for recovering reflectance images. We illustrate the algorithm's performance on real and synthetic image sequences.", "We present a method for decomposing an image into its intrinsic reflectance and shading components. Different from previous work, our method examines texture information to obtain constraints on reflectance among pixels that may be distant from one another in the image. We observe that distinct points with the same intensity-normalized texture configuration generally have the same reflectance value. The separation of shading and reflectance components should thus be performed in a manner that guarantees these non-local constraints. We formulate intrinsic image decomposition by adding these non-local texture constraints to the local derivative analysis employed in conventional techniques. Our results show a significant improvement in performance, with better recovery of global reflectance and shading structure than by previous methods.", "Images can be represented as the composition of multiple intrinsic component images, such as shading, albedo, and noise images. In this paper, we present a method for estimating intrinsic component images from a single image, which we apply to the problems of estimating shading and albedo images and image denoising. Our method is based on learning estimators that predict filtered versions of the desired image. Unlike previous approaches, our method does not require unnatural discretizations of the problem. We also demonstrate how to learn a weighting function that properly weights the local estimates when constructing the estimated image. For shading estimation, we introduce a new training set of real-world images. The accuracy of our method is measured both qualitatively and quantitatively, showing better performance on the shading albedo separation problem than previous approaches. The performance on denoising is competitive with the current state of the art." ], "cite_N": [ "@cite_22", "@cite_46", "@cite_41", "@cite_17", "@cite_44", "@cite_49", "@cite_15", "@cite_11" ], "mid": [ "", "", "1576148925", "2116919352", "2164847484", "2136748901", "2104166077", "2154423567" ] }
Consistency-aware Shading Orders Selective Fusion for Intrinsic Image Decomposition
An image is the result of several factors, including the material reflectance, the surface's shape, the positions and the colors of the illuminants, and the camera sensor responses. Barrow and Tenenbaum [1] proposed to decompose an image into intrinsic images, each of which captures a distinct aspect of the scene. The most common outputs are the shading and the reflectance. The shading captures the strength of the incident illumination at each pixel, while the reflectance shows the surface albedo. The shading is widely used to reconstruct the shapes of surfaces [2]. The albedo is invariant to illumination and geometry, so it is a robust feature for object classification and image segmentation. In this paper we aim to recover the shading and the reflectance from a single image. This is an underconstrained problem. The absolute values of the unknown variables cannot be measured directly, since they are highly coupled in the observed image. Instead, we measure the relative magnitudes of shading across pixels to recover its essential structure, and determine the absolute values later by boundary conditions. We regard the shading as a global ranking of the pixels in the order of dark to bright. The boundary conditions are simply that the start points are fully shadowed pixels, while the end points are fully lit ones. The global shading is inferred from pairwise shading orders, which are signed differences between the shading of pixels. The flow chart is shown in Fig. 1.

We estimate the shading orders in the U V B color space, which is spanned by a 2D shadow-free plane [3] and a brightness dimension. This color space has two major properties:
• Pixels with the same reflectance cluster together on the shadow-free plane.
• The brightness of the image is the sum of the shading brightness and the reflectance brightness.
Based on these properties, we can use clustering-based methods to capture the global order structure of the shading. For pixels with the same reflectance, the shading orders can be obtained directly from the difference of the image brightness. For pixels with different reflectance, the shading orders can be calculated in a similar way, but the bias from the difference of the reflectance brightness should be compensated. We choose the optimal biases between different clusters of reflectance, which make the shading constant across reflectance boundaries excluding shading edges. The cluster-wise biases make it possible to handle pixel pairs whose reflectance and shading are both different.

We also model the local shading by low-order fittings to predict the shading orders between nearby pixels. Different models can capture the geometric structure of different types of surfaces. For example, a linear model can describe the shading of a smooth surface. The estimation methods above are complementary. The clustering-based methods can be applied to any pair of pixels, in particular distantly located ones, but their accuracies depend on the quality of the clustering. In contrast, the low-order fittings do not rely on clustering at all, but they capture only the local structure, and the fitting errors are large for irregular surfaces. The pairwise shading orders are combined into a global shading via Consistency-aware Selective Fusion (CSF).

Fig. 1: The flow chart of our method. Firstly the image is transformed into the U V B color space.
Based on the brightness and the cluster results over chromaticity, different methods m are used to estimate the shading orders O(p, q, m) between each pair of pixels p and q. We also evaluate the reliability C(p, q, m) of the estimates based on the image features. Then we use CSF to infer the global shading. CSF repeats two operations: Local Selection, i.e., selecting the estimation methods and the weights for each pair of pixels under the guidance of consistency between the pairwise orders and the global shading; and Angular Embedding (AE), which infers the globally consistent orders from the pairwise estimates. At last we transform the global shading back into the RGB space.

The major challenge is avoiding inconsistency between estimates from different methods. CSF identifies a sparse set of reliable and consistent pairwise shading orders and fuses them within a unified optimization framework. For each pair of pixels, CSF selects the most reliable estimate exclusively, instead of a weighted summation of different estimates [4][5][6]. This strategy prevents unreliable estimates from polluting the results. We evaluate the reliability of the pairwise orders using not only the image features but also their consistency with the global order. Therefore, estimates that are incompatible with the majority will be suppressed, even when their preconditions happen to be satisfied by the image features. Forcing sparsity of the pairwise connections further reduces unreliable measurements. The global order is obtained from Angular Embedding (AE) [7], which embeds the pixels onto a unit circle in the complex plane. AE uses a complex matrix to encode the pairwise orders and their reliability simultaneously. Moreover, AE applies spectral decomposition to get a near-globally optimal solution that best matches the reliable pairwise orders. After locating the darkest points on the unit circle, the absolute values of shading can be determined.

IMAGE FORMATION
An image with only body reflection can be modeled as [3]
$$I^i(p) = R_b^i(p)\,\big(\gamma(p) L_d^i + L_a^i\big), \tag{1}$$
where the superscript $i$ indexes the RGB channels and $p$ indexes the pixel. The body reflectance $R_b$ denotes the diffuse reflection under white illumination. The three-dimensional vectors $L_d$ and $L_a$ are the direct illuminant and the ambient illuminant, respectively. $\gamma(p) \in [0, 1]$ is the direct shading, i.e., the proportion of direct illumination reaching the surface. BIDR assumes that the direct and ambient illuminants are constant across the materials [3]. When there are multiple direct illuminants with the same color, their effects can be added. Inspired by the shadow removal problem [45], we define the reflectance to be the image lit by the full direct illuminant together with the ambient illuminant:
$$R^i(p) = R_b^i(p)\,\big(L_d^i + L_a^i\big). \tag{2}$$
Accordingly, the shading is defined to be
$$S^i(p) = \frac{I^i(p)}{R^i(p)} = \frac{\gamma(p) L_d^i + L_a^i}{L_d^i + L_a^i}. \tag{3}$$
For a fully lit area (i.e., $\gamma = 1$), the shading reaches its maximum. For a fully shadowed area (i.e., $\gamma(p) = 0$), the shading will be $S(p) = L_a/(L_d + L_a)$. In natural scenes, the direct lights are always much stronger than the ambient lights, so the shading of fully shadowed areas should be a small positive value. The color of the shading in (3) does not have a definite physical meaning, so we show the shading in grayscale for all the figures in this paper, following [24] and [12]. Readers who are interested in the color of the shading are referred to the supplementary material for several examples.
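To make the roles of the two illuminants concrete, the following minimal numerical sketch evaluates Eqs. (1)-(3) for a single pixel. The particular values of R_b, L_d, L_a, and gamma are illustrative only, not taken from the paper.

```python
import numpy as np

# A minimal numerical sketch of the BIDR image model, Eqs. (1)-(3).
R_b = np.array([0.6, 0.4, 0.3])      # body reflectance R_b of one pixel (assumed)
L_d = np.array([0.9, 0.8, 0.7])      # direct illuminant (assumed)
L_a = np.array([0.1, 0.1, 0.2])      # ambient illuminant (assumed)
gamma = 0.25                          # direct shading: a mostly shadowed pixel

I = R_b * (gamma * L_d + L_a)                 # Eq. (1): observed image
R = R_b * (L_d + L_a)                          # Eq. (2): reflectance (fully lit + ambient)
S = (gamma * L_d + L_a) / (L_d + L_a)          # Eq. (3): shading, which also equals I / R
assert np.allclose(I / R, S)
print(S)   # small positive values, as expected for a deeply shadowed pixel
```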
SHADING ORDERS FROM BRIGHTNESS
We infer the shading orders in the U V B color space. We will show that the image brightness has a linear relation to the log of the shading. Therefore pairwise shading orders can be estimated by either brightness orders or low-order fittings of the local shading.

The U V B Color Space
The BIDR model delivers a 2D shadow-free plane U V [3]. The normal $\mathbf{n}$ of the U V plane points from the shadowed pixels to the lit ones sharing the same body reflectance $R_b$ (see Fig. 2b for an example). We call the normal $\mathbf{n}$ the brightening direction. Formally, the brightening direction is defined by
$$\mathbf{n} = \frac{1}{K}\Big(\log I(p)\big|_{\gamma(p)=1} - \log I(q)\big|_{\gamma(q)=0}\Big) = \frac{1}{K}\log\Big(\frac{L_d}{L_a} + 1\Big), \tag{4}$$
where the pixels $p$ and $q$ should satisfy $R_b(p) = R_b(q)$, and $K$ is the normalization factor. From (4) we can see that the brightening direction depends only on the ratio of the illuminants, so all the pixels share the same brightening direction (Fig. 2b). If the ratio of the illuminants is unknown, we can search for the most probable brightening direction, namely the one that minimizes the entropy of the pixels on the U V plane [3][46]. Since pixels with similar reflectance $R_b$ stay close together on the U V plane (Fig. 2c), the correct direction minimizes the entropy of the distribution of pixels.

Let $u$ and $v$ be any pair of basis vectors on the U V plane. Then we have a rotation matrix $H = [u, v, \mathbf{n}]$ that transforms the log RGB space into a new color space U V B:
$$[I^u(p), I^v(p), I^b(p)] = \log I(p)\, H. \tag{5}$$
The dimension $I^b$ captures the intensity of the image, and we call it the brightness. According to (3) and (5), the brightness of the image can be factorized as follows:
$$I^b(p) = \log S(p)\cdot \mathbf{n} + \log R(p)\cdot \mathbf{n} = S^b(p) + R^b(p). \tag{6}$$
Here we used the fact that $\log I(p) = \log R(p) + \log S(p)$. The shading brightness $S^b(p) = \log S(p)\cdot \mathbf{n}$ is a linear function of $\log S$. The reflectance brightness $R^b(p) = \log R(p)\cdot \mathbf{n}$ can be regarded as a bias determined by the body reflectance $R_b$. This linear relationship is the basis for estimating the shading orders in Section 3.2.

According to (5), the shading in the U V B space should be $[S^u(p), S^v(p), S^b(p)] = \log S(p)\, H$. Note that $S^u$ and $S^v$ are nearly zero since the U V plane is shadow-free [3]. The only unknown dimension is the shading brightness $S^b$, and we will infer it from pairwise shading orders in Section 5. Once we obtain $S^b$, the shading in RGB space can be recovered by
$$S(p) = \exp\big([S^u(p), S^v(p), S^b(p)]\, H^{-1}\big), \tag{7}$$
where exp denotes the element-wise exponential. Note that the rotation matrix $H$ is always invertible. The reflectance can then be obtained from $R(p) = I(p)/S(p)$.

Measuring Pairwise Shading Orders
The shading order between pixels $p$ and $q$ is defined to be the signed difference between their shading brightnesses, i.e., $O(p, q) = S^b(p) - S^b(q)$. We propose four methods $M = \{BO, BOB, FS, SS\}$ to estimate the shading orders. These methods are shown in Fig. 3.

Brightness Order (BO). According to (6), if two pixels have the same reflectance brightness $R^b$ or, equivalently, the same body reflectance $R_b$, their shading order will be equal to their difference of brightnesses:
$$O(p, q, BO) = I^b(p) - I^b(q) \quad \text{if } R_b(p) = R_b(q). \tag{8}$$

Brightness Order minus Bias (BOB). For pixels with different body reflectance, the bias of the reflectance brightness $\Delta R^b$ should be compensated as follows:
$$O(p, r, BOB) = I^b(p) - I^b(r) - \Delta R^b(p, r) \quad \text{if } R_b(p) \ne R_b(r), \tag{9}$$
where $\Delta R^b(p, r) = R^b(p) - R^b(r)$ is the bias. The process of calculating the bias will be described in Section 3.3.
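A minimal sketch of the two clustering-based estimators of Eqs. (8) and (9) follows; the arrays I_b, labels, and R_b_rel (the per-cluster relative reflectance brightness from Section 3.3) are hypothetical inputs, and the smoothness-based estimators FS and SS are described below.

```python
import numpy as np

def shading_order_bo(I_b, p, q):
    """BO, Eq. (8): for pixels with the same body reflectance,
    the shading order equals the difference of image brightness."""
    return I_b[p] - I_b[q]

def shading_order_bob(I_b, labels, R_b_rel, p, q):
    """BOB, Eq. (9): for pixels with different body reflectance,
    subtract the bias of reflectance brightness between their clusters."""
    bias = R_b_rel[labels[p]] - R_b_rel[labels[q]]
    return I_b[p] - I_b[q] - bias

# Toy usage: 4 pixels, 2 reflectance clusters (all values assumed).
I_b = np.array([0.9, 0.4, 0.7, 0.2])    # brightness I^b per pixel
labels = np.array([0, 0, 1, 1])          # reflectance cluster index per pixel
R_b_rel = np.array([0.0, 0.3])           # relative reflectance brightness per cluster
print(shading_order_bo(I_b, 0, 1))                    # same cluster: 0.5
print(shading_order_bob(I_b, labels, R_b_rel, 0, 2))  # across clusters: 0.2 - (-0.3) = 0.5
```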
BO and BOB together can estimate the shading order between any two pixels. For nearby pixels, we can fit their shading brightness by low-order functions. This is based on the assumption of local smoothness of shading, which is valid for most parts of natural images.

Fig. 3: Calculating shading orders O from the brightness $I^b$. We align the curves of the brightness $I^b$ and the ground-truth shading brightness $S^b$ to make $I^b(p) = S^b(p)$. The red dashed curve is the brightness after compensating the bias of reflectance brightness $\Delta R^b$. The green masks cover the green pixels while the uncovered ones are white.

First-order Smoothness (FS). For flat surfaces, the normal directions and thus the incident angles change little. According to the cosine law of the Lambertian reflection, the variation of the shading brightness will be small. The first-order derivative of the shading brightness should be almost zero if there are no shadow edges. Consequently, adjacent pixels will have nearly identical shading brightness:
$$O(p, s, FS) = 0 \quad \text{if } s \in N(p),\ \frac{\partial I^b(p)}{\partial p} \approx 0, \tag{10}$$
where $N(p)$ is the neighborhood of $p$, and $\frac{\partial I^b(p)}{\partial p}$ is the derivative of $I^b$ evaluated at $p$.

Second-order Smoothness (SS). For smooth surfaces, the surface normal rotates smoothly. As a result, the shading brightness will change smoothly. We assume that the second-order derivative of the shading is close to zero, so we can fit the local shading by a linear function. We further assume that adjacent pixels share the same body reflectance, so the slope of the linear model is $\frac{\partial S^b(p)}{\partial p} = \frac{\partial I^b(p)}{\partial p}$. The shading order between two nearby pixels will be
$$O(p, t, SS) = \frac{\partial I^b(p)}{\partial p}\cdot (p - t) \quad \text{if } t \in N(p),\ \frac{\partial^2 I^b(p)}{\partial p^2} \approx 0, \tag{11}$$
where $p - t$ is the directed spatial distance between $p$ and $t$. In practice, we calculate the derivative and the spatial distance in the horizontal and vertical directions separately.

The preconditions of the methods above are not mutually exclusive, so different methods may be applicable to the same pair of pixels. The preconditions together cover all possible situations, so we can find at least one suitable method for most pairs of pixels. The redundancy and completeness of these methods are the basis for robust estimates of the shading orders.

Estimating the Bias of Reflectance Brightness
The biases of reflectance brightness $\Delta R^b$ in (9) are needed to estimate the shading orders between pixels with different body reflectance. The absolute values of the reflectance brightness $R^b$ are unavailable, so we cannot calculate their biases directly. Instead, we cluster the pixels by body reflectance, and estimate the biases of reflectance brightness between different clusters. The local smoothness of shading implies that pixels within a small patch have similar shading brightness.
According to (6), the bias of reflectance brightness between two clusters can therefore be approximated by their difference of image brightness within small patches. The main process is shown in Fig. 4. The image is divided into dense grids with 10 pixels on each side. For a patch $T$ containing pixels from both categories $j$ and $k$, the difference of reflectance brightness is calculated by $\Delta R^b(j, k, T) = \bar I^b(j, T) - \bar I^b(k, T)$, where $\bar I^b(j, T)$ and $\bar I^b(k, T)$ are the median brightnesses of the pixels belonging to categories $j$ and $k$, respectively. We generate a histogram of the patch-wise measures $\Delta R^b(j, k, T)$ and take the highest peak to be the estimate $\Delta \check R^b(j, k)$, as shown in Fig. 4c. The minorities of the histogram mainly come from patches with shading edges in them (e.g., patches 3 and 4 in Fig. 4b). The reliability $F$ of the estimate is set to be the number of votes from the patches. When $F_{j,k}$ is 0, it means that categories $j$ and $k$ are not adjacent, and their bias cannot be measured directly. In this case, we resort to their biases with other categories. Taking each reflectance category as a node, we can build an undirected graph $G = (V, E)$, where $V$ is the set of nodes and $E$ is the set of edges. The weight of the edge between nodes $j$ and $k$ is set to be $1/F_{j,k}$, where $F_{j,k}$ is the reliability of $\Delta \check R^b(j, k)$ as described before. We can get an estimate of the bias between two nodes by summing the biases along any path connecting them. We further eliminate the multipath effect by extracting the Minimum Spanning Tree (MST) of the graph $G$. The MST ensures that there is one and only one path between any two nodes, so the relative reflectance brightness $\check R^b$ of each node can be uniquely determined. Meanwhile, the total reliability of the retained pairwise biases is maximized.

Fig. 4: Estimating the bias of reflectance brightness between reflectance categories. (a) The cluster map. The symbols j, k, and l stand for 3 reflectance categories. The squares indicate representative patches for estimating the bias of reflectance brightness between categories j and k; (b) The brightness $I^b$. The biases obtained from patches 3 and 4 are outliers, since there are shadow edges inside them; (c) The histogram of the patch-wise biases of reflectance brightness between categories j and k. The peak of the histogram is selected to be the result.

The sparsity of the reflectance spectra [47] ensures that the pixels can be clustered into a small number of categories. Since pixels on the shadow-free plane U V are well organized by their body reflectance, we cluster the pixels by a simple k-means. The number of clusters is set to be the number of local maxima in the 2D histogram of $I^u$ and $I^v$. The bin size of the histogram is empirically set to 0.03.

THE RELIABILITY OF PAIRWISE ORDERS
For each pair of pixels, we obtained several estimates of their shading order by the different methods of Section 3.2. These methods rely on certain assumptions about the scene, which may be invalid for certain parts of the image. Therefore, the estimated shading orders may differ from the ground truth. We evaluate the reliability of each estimate by checking whether influential perturbations happened there. The reliability of an estimate is the probability of all its premises being valid, which is calculated by a Noisy-Or model:
$$C(p, q, m) = \prod_{f\in\mathcal{C}_m}\big(1 - P_f(p, q)\big), \quad m \in M, \tag{12}$$
where $\mathcal{C}_m$ is the set of perturbations that the method $m$ is not robust to, as listed in Table 1.
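A small sketch of the Noisy-Or reliability of Eq. (12) follows. The perturbation probabilities passed in are placeholders here; they are computed from the image features described next.

```python
import numpy as np

def reliability(perturbation_probs):
    """Noisy-Or model of Eq. (12): an estimate is reliable only when none of
    the perturbations its method is sensitive to has occurred."""
    return float(np.prod([1.0 - p for p in perturbation_probs]))

# Hypothetical example: a method sensitive to two perturbations that occur
# with probabilities 0.5 and 0.2 keeps confidence 0.5 * 0.8 = 0.4.
print(reliability([0.5, 0.2]))
```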
The probability $P_f(p, q)$ measures how likely the perturbation $f$ is to occur around pixels $p$ and $q$. For an ideal image without any perturbation, all the methods get equally high confidences. Once a perturbation happens, the confidences of the sensitive methods drop. The occurrences of the perturbations are predicted by image features. Generally, we calculate a distance $x$ between the pair of pixels according to each feature, and translate the distance into a probability by a sigmoid function of the form
$$\mathrm{sigm}(x; w) = \frac{2}{1 + e^{-wx}} - 1,$$
where $w$ is a positive weight. The features are described below.

Clustering Error (CE) is the probability that the clustering of pixels on the shadow-free plane is inaccurate, which is calculated by
$$P_{CE}(p, q) = \big(1 - P_C(p)P_C(q)\big)\cdot \mathrm{sigm}\big(e_{\hat S^b}(p, q); w_1\big), \tag{13}$$
where the cluster probability $P_C$ is the likelihood of each pixel belonging to its reflectance category, and $e_{\hat S^b}$ is the strength of the step edge [48] on the shifted shading brightness $\hat S^b$. The first term increases as the pixel $p$ or $q$ deviates from the cluster centers. The second term is large when the pixels are improperly categorized or the relative reflectance brightnesses are inaccurately estimated, as shown in Fig. 5c. Here each reflectance category is modeled by a multivariate normal distribution. The shifted shading brightness $\hat S^b$ is obtained from the brightness $I^b$ minus the relative reflectance brightness $\check R^b$ (Section 3.3), followed by a median filtering.

Local Color Variance (LCV) is defined to be
$$P_{LCV}(p, q) = \mathrm{sigm}\big(\max(\sigma(I(p)), \sigma(I(q))); w_2\big), \tag{14}$$
where $\sigma(I(p))$ is the standard deviation of the chromaticities $I^u$ and $I^v$ within the 3x3 window centered at pixel $p$. Large color variations mainly appear at reflectance boundaries (Figs. 5a and 5c).

Shadow Edges (SE) are caused by occlusions of the direct light. To locate the shadow edges, we render the direct shading $\hat\gamma$ under uniformly sampled illuminants. The direct shading is similar to the visibility map proposed by Lee et al. [5]. The difference is that they assume the illuminants to be infinitely far away, which is inaccurate for indoor scenes. Instead, we sample the feasible positions of the illuminant within the room box. The probability of a shadow edge between pixels $p$ and $q$ is calculated from their direct shading under the promising illuminants, as follows:
$$P_{SE}(p, q) = \mathrm{sigm}\Big(\frac{1}{|\mathcal{L}|}\sum_{L_d\in\mathcal{L}} \big|\hat\gamma(L_d, p) - \hat\gamma(L_d, q)\big|;\; w_3\Big). \tag{15}$$
Here $\mathcal{L}$ is the set of promising illuminants, and $\hat\gamma(L_d, p)$ is the direct shading at pixel $p$ under illuminant $L_d$. We select the promising illuminants according to the correlation between the rendered direct shading $\hat\gamma$ and the brightness $I^b$. See the supplementary material for details. The Shadow Edges feature is not applicable to RGB-only images, since the geometric layout is needed for rendering the shading map.

Reflectance Change (RC) distinguishes pixels with different chromaticities or intensities, which are assumed to have different reflectance [24][13][17][12]. We calculate the probability of a reflectance change by
$$P_{RC}(p, q) = \mathrm{sigm}\big(d_{uv}(p, q); w_4\big)\cdot \mathrm{sigm}\big(e_b(p, q); w_5\big), \tag{16}$$
where $d_{uv}$ is the geometric distance on the shadow-free plane, and $e_b(p, q)$ is the magnitude of the step edge lying between $p$ and $q$ in the brightness $I^b$, which aims at distinguishing colors with similar chromaticity but different intensities, especially achromatic ones.

Surface Normal Change (SNC) generates shading variation [5][6][39].
We calculate the probability of a surface normal change by
$$P_{SNC}(p, q) = \mathrm{sigm}\big(\angle(N(p), N(q)); w_6\big), \tag{17}$$
where $\angle(N(p), N(q))$ is the angle between the surface normals at pixels $p$ and $q$. The surface normals are derived from the depth map [5]. SNC is unavailable for RGB-only images.

Spatial Distance (SD) is simply the geometric distance between the pixels [6][12]:
$$P_{SD}(p, q) = \mathrm{sigm}\big(d_s(p, q); w_7\big). \tag{18}$$
For RGB-Depth images, we first calculate the 3D positions of the pixels in camera coordinates and then compute their distances. For RGB-only images, we use the 2D coordinates in the image plane.

Discussion. The features above can help us choose the best estimation method for a certain pair of pixels. Among them, CE focuses on whether the biases of reflectance brightnesses are correctly estimated, which is the key to the success of the BOB method. We check the correctness by both the cause and the effect, i.e., whether the pixels are tightly clustered and whether the estimated shading is smooth, respectively. LCV and RC capture the local and large-scale behaviour of reflectance changes, respectively. Local variation, coupled with image blur, disturbs the measurement of the brightness as well as its gradient. This causes problems for most estimation methods except FS, which is only concerned with the adjacency of pixels.

GLOBAL SHADING FROM SHADING ORDERS VIA CONSISTENCY-AWARE SELECTIVE FUSION
Thus far we have obtained a matrix O of the pairwise shading orders (Section 3.2) together with a confidence matrix C from (12) representing their reliability. Now we use Consistency-aware Selective Fusion (CSF) to select a subset of reliable and consistent pairwise orders, and combine them into an optimal global order. CSF is designed under the following criteria:
• For each pair of pixels p and q, the optimal estimation method $M_{p,q} \in M$ is selected exclusively.
• The pairwise connections $W_{p,q}$ should be sparse such that outliers are excluded.
• The total confidence of the selected pairwise shading orders should be maximized.
• The global order should match the input pairwise orders.
In practice, the global order is obtained through Angular Embedding (AE) [7]. Let $Z_p = e^{iS^b(p)}$ with $i = \sqrt{-1}$ denote the embedding of pixel $p$ on the unit circle in the complex plane (Fig. 6). The angle $\Theta_{p,q}$ from $Z_p$ to $Z_q$ is the shading order between $p$ and $q$. AE finds an embedding that makes $\Theta_{p,q}$ consistent with the input shading order $O_{p,q} = O(p, q, M_{p,q})$.

Algorithm 1: Consistency-aware Selective Fusion
Require: pairwise shading orders O and the relative confidence C; the initial weights $\alpha_1$ and $\alpha_2$ of the regularizer; the threshold $\omega_{min}$ on the density of non-zero elements of W; the step size $\tau$.
Ensure: embedding Z.
Initialization: $W = \mathbf{1}_{n,n}$, where n is the number of pixels; $M_{p,q} = \arg\max_m C(p, q, m)$.
while $\alpha_2 > 0$ do
  Optimize Z using (20);
  Choose M using (22);
  Update W using (23);
  $\alpha_2 = \alpha_2 - \tau$;
  if $\|W\|_0 < \omega_{min}\, n^2$ then break;
end while
return Z.

The estimation methods M, the pairwise connections W, and the embedding Z are optimized jointly as follows:
$$\min_{W,M,Z}\; J_{AE}(Z; W, M) + P(W) \quad \text{s.t.}\; |Z_p| = 1,\; \sum_q C_{p,q} = D_p,\; \forall p,\quad W(p, q) \ge 0,\; \forall p, q, \tag{19}$$
where the error of Angular Embedding is defined to be [7]
$$J_{AE}(Z; W, M) = \sum_{p,q} C_{p,q}\cdot \big\|Z_p - Z_q e^{iO_{p,q}}\big\|^2, \tag{20}$$
and the regularization term is an elastic net [49]
$$P(W) = \alpha_1 \|W\|_1 + \frac{\alpha_2}{2}\|W\|_2^2. \tag{21}$$
Here $C_{p,q} = W_{p,q}\, C(p, q, M_{p,q})$ is the weighted confidence, and the diagonal matrix $D_p = \sum_q \max_{m\in M} C(p, q, m)$ is a degree matrix.
$\alpha_1$ and $\alpha_2$ are the weights of the lasso (L1) and ridge (L2) terms, respectively. The elastic net enforces group sparsity on the weights, so several groups of reliable neighbors will be selected for each pixel. We optimize the variables M, W, and Z iteratively, as described in Algorithm 1. Fig. 6 illustrates one iteration of the process. The details are given below.

Choose M. Keeping W and Z fixed, we can search for the optimal estimation method by
$$\underset{M}{\arg\min} \sum_{p,q} W_{p,q}\, C(p, q, M_{p,q})\cdot \big\|Z_p - Z_q e^{iO(p, q, M_{p,q})}\big\|^2 \quad \text{s.t.}\; \sum_q W_{p,q}\, C(p, q, M_{p,q}) = D_p,\; \forall p. \tag{22}$$
It can be optimized by the Lagrange method. We iteratively pick the optimal $M_{p,q}$ that balances the confidence and the consistency of the orders under the current Lagrangian multiplier, and update the multiplier by dual ascent. In Fig. 6b the selected method for pixels p and q is the one with the second highest confidence but the best consistency with the global shading.

Update W. Keeping M and Z fixed, the weights are updated by
$$\min_W \sum_{p,q} W_{p,q} E_{p,q} + \alpha_1\|W\|_1 + \frac{\alpha_2}{2}\|W\|_2^2 \quad \text{s.t.}\; \sum_q W_{p,q}\bar C_{p,q} = D_p,\; \forall p,\quad W_{p,q} \ge 0,\; \forall p, q, \tag{23}$$
where $\bar C_{p,q} = C(p, q, M_{p,q})$ and the confidence-weighted embedding error is $E_{p,q} = \bar C_{p,q}\cdot \|Z_p - Z_q e^{iO(p, q, M_{p,q})}\|^2$. This optimization problem can be solved by the Alternating Direction Method of Multipliers (ADMM) [50]. See the supplementary material for details. From (23) we can see that the larger the embedding error $E(p, q)$ is, the smaller $W(p, q)$ tends to be. This can be observed in Fig. 6, where the pair p and t gets a low weight, since the embedding error is large for every estimation method. Note that we decrease the value of $\alpha_2$ gradually in Algorithm 1, which makes W more and more sparse. This progressive sparsity has better numerical stability than setting $\alpha_2$ to a small value at the very beginning. When $\alpha_2$ gets too small, the pairwise connections may become overly sparse, producing an ill-conditioned graph; we terminate the iteration of Algorithm 1 in this case.

Optimize Z. Optimizing the embedding error $J(Z; W, M)$ in (20) directly is hard in practice, since it has n constraints, where n is the number of pixels. Relaxing the unit-length constraints in (19) to be $Z^\dagger D Z = \mathbf{1}_n^T D \mathbf{1}_n$, the problem can be rewritten in the following matrix form:
$$\min_Z Z^\dagger L Z \quad \text{s.t.}\; Z^\dagger D Z = \mathbf{1}_n^T D \mathbf{1}_n. \tag{24}$$
Here L is a Laplacian matrix
$$L = D - \big(C \bullet e^{iO} + (C \bullet e^{iO})^\dagger\big), \tag{25}$$
where $\bullet$ is the matrix Hadamard product, $\dagger$ is the complex conjugate transpose, $\mathbf{1}_n$ is an n x 1 vector of all ones, and exponentiation acts element-wise. To make the optimization tractable, we consider only the shading orders between nearby pixels, while the confidences of the other shading orders are set to zero. In our experiments we set the neighborhood to be a square of 30 pixels on each side. The optimization problem in (24) is solved by the spectral partitioning algorithm [48] with complex-valued eigenvectors. The solution is the vector of angles of the first eigenvector $Z_0$, the one with the smallest eigenvalue. We refer to the paper of Yu [7] for more details.

Recover shading $S^b$. To decode the shading brightness $S^b$ from the angles of $Z_0$, we need to ensure that the angle between any two points is less than $2\pi$, otherwise the points may overlap with each other. To achieve this, we scale the brightness dimension of the U V B color space by a positive scalar. The scaling does not disturb the order of $Z_0$, and we can scale the shading brightness back after the decoding. AE allows the points to rotate as a whole around the origin, so we need to rotate the points back until the angles of the darkest points are zero.
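As a numerical illustration of (24)-(25), the sketch below solves the relaxed problem for a toy graph. It makes two simplifying assumptions that are not in the paper: a single fused confidence matrix C stands in for the weighted confidences (the paper uses $D_p = \sum_q \max_m C(p,q,m)$), and the Laplacian is symmetrized with a factor 1/2 so that perfectly consistent orders give zero error. The recovered angles are determined only up to a global rotation, which is fixed by the gap-shifting step described next.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import eigsh

def angular_embedding(C, O):
    """Illustrative solver for Eqs. (24)-(25).
    C: (n, n) symmetric non-negative confidences (zero for non-neighbors).
    O: (n, n) antisymmetric pairwise shading orders, in radians."""
    d = C.sum(axis=1)                              # degree (a simplification of D_p)
    A = csr_matrix(C * np.exp(1j * O))             # C . e^{iO}
    L = diags(d) - 0.5 * (A + A.getH())            # Hermitian Laplacian (symmetrized)
    # Normalizing by D turns the constraint Z'DZ = const into a unit-norm one.
    Dh = diags(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    vals, vecs = eigsh((Dh @ L @ Dh).tocsc(), k=1, which='SA')
    Z0 = Dh @ vecs[:, 0]
    return np.angle(Z0)                            # S^b up to a global rotation

# Toy chain of 5 pixels whose consecutive orders are all 0.2 (p brighter than p+1):
n = 5
O = np.zeros((n, n)); C = np.zeros((n, n))
for p in range(n - 1):
    O[p, p + 1], O[p + 1, p] = 0.2, -0.2
    C[p, p + 1] = C[p + 1, p] = 1.0
print(angular_embedding(C, O))   # angles decrease by ~0.2 per step, up to rotation
```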
Note that the darkest pixels and the brightest pixels are always separated by a gap on the circles in the complex plane; Fig. 7b shows an example. The gap can be easily located by the consecutive empty bins of the histogram of the angles of $Z_0$ (Fig. 7c). The pixels falling into the bins to the left of the gap are shifted to the right by $2\pi$.

Fig. 8 shows the change of the variables during the iterations of CSF. In the beginning, the relative shading of some local regions is inaccurate (e.g., the circle inside the red box), since some wrong estimates occasionally get higher confidences than the right ones based solely on the image features. For example, the orders obtained from the BOB method (indicated by green dots) may be wrong when the clustering is inaccurate (see Fig. 5b): some pixels with similar but different colors are mistaken to have the same reflectance (the red dots in the light yellow regions). Furthermore, the FS method is mistakenly adopted to estimate the shading orders between distant pixels (the yellow dots far away from the center point). When the global order is used to guide the selection, the right estimation methods gradually emerge. At the same time, the weights of unreliable connections are greatly decreased as the sparsity gets stronger. In particular, pairs of pixels whose orders cannot be accurately estimated by any method will be assigned zero weights and excluded from the fusion. As a result, the errors of $Z_0$ are reduced considerably.

EXPERIMENTS
We evaluate our method on the MIT Intrinsic Images dataset [24], which is a widely used benchmark. It contains ground-truth intrinsic images of 20 natural objects, 16 of which are used for testing. The images are taken in a controlled environment, where the direct illuminants are nearly white and the ambient illuminants are limited. To validate against real-world scenes, we evaluate our method on the Intrinsic Images in the Wild (IIW) dataset [12], which is a large-scale dataset of public photo collections. We also test our method on outdoor scenes from the UIUC shadow dataset [45]. We further test the utility of depth information on the RGB-Depth images from the NYU-Depth V2 dataset [51].

Error Metrics and Parameter Settings
We evaluate the results on the MIT Intrinsic Images dataset primarily by the standard metric, namely the Local Mean Squared Error (LMSE) [24]. However, as pointed out by Jiang et al. [27], LMSE is sensitive to the window size and to the difference between the mean values of the recovered intrinsic images and the ground truth. Moreover, LMSE is biased towards edge-based methods [11]. To give a more complete evaluation, we include the absolute LMSE (aLMSE) and the correlation metrics proposed by Jiang et al. [27], as well as the standard MSE metric. The aLMSE is defined as follows:
$$aLMSE(I, \tilde I) = \sum_w \min_a \big\|(I_w - \mu_w) - a(\tilde I_w - \tilde\mu_w)\big\|^2, \tag{26}$$
where $I$ and $\tilde I$ are the ground truth and the estimate of the intrinsic image, respectively, $w$ is the index of the sliding window, and $\mu$ and $\tilde\mu$ are the averages of $I$ and $\tilde I$, respectively. The optimal scale $a$ is searched to minimize the squared error. The influence of the difference of mean values can be eliminated by aLMSE. The correlation is defined to be
$$Cor(I, \tilde I) = \frac{E[(I - \mu)(\tilde I - \tilde\mu)]}{\sigma\tilde\sigma}, \tag{27}$$
where $\sigma$ and $\tilde\sigma$ are the standard deviations of $I$ and $\tilde I$, and $E$ is the expectation. We refer to the supplementary material of Reference [27] for more details of aLMSE and correlation.
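For concreteness, the two metrics of Eqs. (26) and (27) can be sketched as follows. The window size and stride are not specified in the text, so the values below are assumptions; the per-window optimal scale has the usual least-squares closed form.

```python
import numpy as np

def almse(I, I_est, win=20, step=10):
    """Sketch of Eq. (26); win and step are illustrative choices."""
    err = 0.0
    H, W = I.shape
    for y in range(0, H - win + 1, step):
        for x in range(0, W - win + 1, step):
            u = I[y:y+win, x:x+win].ravel()
            v = I_est[y:y+win, x:x+win].ravel()
            u, v = u - u.mean(), v - v.mean()
            a = (u @ v) / max(v @ v, 1e-12)      # closed-form optimal scale a
            err += np.sum((u - a * v) ** 2)
    return err

def correlation(I, I_est):
    """Eq. (27): correlation between the ground truth and the estimate."""
    u = I.ravel() - I.mean()
    v = I_est.ravel() - I_est.mean()
    return (u @ v) / (u.size * I.std() * I_est.std())

# A scaled-and-shifted estimate correlates perfectly and has zero aLMSE:
I = np.random.rand(64, 64)
print(correlation(I, 0.5 * I + 0.1))             # ~1.0
print(almse(I, 0.5 * I + 0.1))                   # ~0.0
```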
Among these metrics, correlation and MSE measure the error in a global way, while LMSE and aLMSE take an average of local errors over small image windows. For each image, the performances on reflectance and shading are calculated separately, and their average is taken to be the result. The final result is the average of the performances over all images. Results on the IIW dataset are evaluated by the metric of "weighted human disagreement rate" (WHDR_10%) [12]. It measures the rate of disagreement with human judgements of "which of the two pixels has the darker reflectance".

The main parameters of our model are the positive weights of the sigmoid functions in Section 4. We set $w_1$ to $\ln 3/0.1$, so that the sigmoid function maps a step edge of strength 0.1 to a probability of 0.5. Similarly, we set $w_2 \sim w_6$ to $\ln 3/0.2$, $\ln 3/0.01$, $\ln 3/0.08$, $\ln 3/0.1$, and $\ln 3/0.2$, respectively. Specifically, we set the $w_7$ of the FS method to be twice that of the SS method: we find the median of the spatial distances of all the pixel pairs, $\bar d_s$, and set $w_7$ to $\ln 3/\bar d_s$ for the FS method. For RGB-only images, we increase $w_7$ by 6 times to compensate for the increased probability of selecting the FS and the SS methods. The initial weights $\alpha_1$ and $\alpha_2$ in (21) are set to 1 and 2, respectively. The threshold $\omega_{min}$ and the step size $\tau$ in Algorithm 1 are set to 1/3 and 0.2, respectively. We found that our model is insensitive to these parameters.

Evaluation of the components of our method
Individual estimation methods. The results on the MIT Intrinsic Images dataset are compared in Fig. 9a. Our full model (Full) achieves the best performance, while estimating the shading orders without any single method causes a noticeable drop of performance. Disabling BOB (W/o BOB) causes the most severe drop, followed by BO, FS, and SS, in that order. Fig. 10 shows the changes of the recovered reflectance and shading when different methods are removed. Removing BO breaks the smoothness of the reflectance across shadow edges. When BOB is unused, the shading smoothness across different reflectance will be broken, leaving sharp edges in the shading. The smoothness-based methods FS and SS are essential for keeping the local shading smooth. Without FS, the smoothness in textured regions cannot be guaranteed. SS is important for the areas where the biases of reflectance brightness are not accurately estimated.

The brightening direction. We test a special case of our method, where the brightening direction is fixed at $[1, 1, 1]^T$ following the Color Retinex [24]. Although the direct illuminants in the MIT Intrinsic Images dataset are nearly white and the ambient illuminants are weak, the performance under a white brightening direction (WB) is much worse than that of our original model (Fig. 9b).

The confidences of pairwise orders. We evaluate the importance of the confidences of the pairwise orders in inferring the global shading by replacing AE with AS [34], i.e., assigning equal weights to the pairwise shading orders. From Fig. 9b we can see that the performance drops significantly.

Depth information. Several depth-based features are used to calculate the confidences of pairwise orders for RGB-Depth images (Section 4). Fig. 11 shows their effects. Utilizing the Surface Normal Change feature increases the probability of applying the shading smoothness constraints to flat surfaces. See the regions in the red and green boxes of Fig. 11 for examples.
These areas are mistaken to be shadowed without depth cues, since they have similar chromaticity to their surroundings and their boundaries are blurred. The Shadow Edges feature finds shading changes at depth discontinuities efficiently. It may miss shadow edges that cannot be generated by any sample of the illuminant, when the change of depth is small (e.g., the area in the blue box of Fig. 11) or a large part of the occluder is not visible in the current view (e.g., the area in the yellow box).

Results on MIT Intrinsic Images dataset

We compare our method to the state of the art and to several classic approaches, as listed in Table 2. These results are either copied from their papers or from the report in [11], or obtained by running their code directly without tuning any parameters^1. We report the results under the best parameters for the whole dataset. Our method achieves the best performance.

Table 2: Results on the MIT Intrinsic Images dataset (Correlation / MSE / LMSE / aLMSE; "-" denotes not reported).

Method | Correlation | MSE | LMSE | aLMSE
(unnamed) | - | - | 0.0390 | -
Color Retinex [24] | 0.7146 | 0.1108 | 0.0286 | 0.2541
Jiang-A [27] | 0.6184 | 0.1533 | 0.0421 | 0.3988
Jiang-H [27] | 0.5829 | 0.1524 | 0.0483 | 0.3476
Jiang-HA [27] | 0.6109 | 0.1579 | 0.0454 | 0.3631
Shen-SR [14] | 0.7259 | 0.1223 | 0.0240 | 0.2454
Shen-SRC [14] | - | - | 0.0204 | -
Zhao et al. [4] | - | - | 0.0250 | -
Gehler et al. [13] | 0.7748 | 0.0985 | 0.0244 | 0.2544
Serra et al. [11] | 0.7862 | 0.0834 | 0.0340 | 0.2958
Bell et al. [12] | 0.7229 | 0.1100 | 0.0337 | 0.2763
Li et al. [30] | - | - | 0.0190 | -
Chang et al. [19] | - | - | 0.0229 | -
SIRFS [15] | 0… | … | … | …

Fig. 12 gives some concrete examples. The most remarkable advantage of our method is that it can recover the reflectance under deep shadows. One reason is that we can cluster the pixels with the same reflectance together on the UV shadow-free plane, no matter how dramatically the shading changes. Another reason is that our model fuses estimates from different methods by selecting the optimal one exclusively, which avoids smoothing the shading edges with the other estimates. Clustering-based methods, including Gehler et al. [13], Garces et al. [17], and Bell et al. [12], are sensitive to the changes of intensity and color caused by shadows. The edge-based method of Li et al. [30] tends to assign large gradients to reflectance changes, which degrades at sharp shadow edges (e.g., those on the body of the deer). The methods of Gehler et al. [13] and Li et al. [30] smooth the shading extensively, leaving residuals of shadows in the reflectance (e.g., the teabag). SIRFS [15] smooths the surfaces, which may generate an overly smooth shading (e.g., the frog).

Another advantage is that our method can recover the global shading robustly. The main reason is that the clustering-based methods BO and BOB capture the shading orders between distant pixels effectively. Edge-based methods cannot reliably recover the relative shading between unconnected parts (e.g., the shadings recovered by Li et al. [30] are inconsistent between the front and the back of the turtle). Another reason is that BOB can handle the areas where the shading and the reflectance change simultaneously (e.g., the mouth and the head of the frog).

1. SIRFS is evaluated on the images of cup2, deer, frog2, paper2, raccoon, sun, teabag1 and turtle, while the other images are used for training. The results of Bell et al. [12] are obtained by relaxing the constraints on the absolute values of shading and removing the intensity from the features for clustering the reflectance. Otherwise the deep shadows would be mistaken to be black and clustered into individual categories.
Our method preserves the subtle variations of reflectance (e.g., the yellow and orange regions of the tea bag), since the intra-cluster variations in the UV plane (Fig. 2c) are represented in the recovered reflectance. In contrast, some clustering-based methods, such as Garces et al. [17] and Bell et al. [12], unify the reflectance of the pixels of each cluster. This operation often leads to block artifacts (e.g., the tea bag). Our method did not handle the feet of the deer well. The black feet and the white legs are both achromatic, so they fall into the same cluster on the shadow-free plane. The image blur further reduces the effectiveness of the Reflectance Change feature (Section 4), so the difference between black and white is not preserved in the reflectance.

Results on Natural Images

The quantitative results on the IIW dataset are shown in Table 3. Our method achieves results comparable to the state of the art. It should be mentioned that WHDR_10% cannot reflect the superiority of our method in inferring the shading orders between pixels with different chromaticity, since only pixels with similar chromaticity are compared [12]. Further, the textured pixels are excluded from the evaluation, so the ability to preserve the texture of reflectance is untested. Actually, both of the top-performing methods [16] and [12] remove the texture from the reflectance. For a fair comparison, we report our result obtained by using the edge-preserving smoothing of [16] to preprocess the input image. Without smoothing, the WHDR_10% increases by about 3.7%.

The IIW dataset is much more difficult than the MIT Intrinsic Images dataset. The image in the top row of Fig. 13 comprises different kinds of objects, some of which are highly textured (e.g., the wall with blue painting). Our method preserves the textures^2 much better than the other methods in comparison. Another difficulty comes from the intense specular reflections (e.g., the wall in the top row of Fig. 13). Our method puts the specular reflections into the reflectance, while some other methods, such as Zhao et al. [4] and Garces et al. [17], put them into the shading. The greatest challenge of the IIW dataset comes from the coexistence of multiple direct illuminants in the same scene. In the bottom row of Fig. 13, the areas in the red boxes of the input image are covered by lights of different colors. This situation does not satisfy the bi-illuminant assumption of the BIDR model [3]. No unique brightening direction exists for the whole image, and the brightening direction obtained from entropy minimization (Section 3.1) eliminates the difference improperly. This causes two problems for our method: (1) the error of clustering increases; and (2) the color of the recovered reflectance is distorted. The first problem is shared by all the clustering-based methods, such as Garces et al. [17] and Bell et al. [12]. The second problem is common, since all the methods in comparison assume a single (direct) illumination. Despite these problems, our model still recovers a globally consistent shading.

Discussion. Scene-SIRFS addresses the mixture of illuminations by a soft segmentation of the image with respect to the "ownership" of illuminants [40]. But the segmentation is not easy, since the changes of illuminations are often slower than the changes of reflectance.

2. We do not use the edge-preserving smoothing to produce the qualitative results in Fig. 13.
Beigpour and Van de Weijer [35] proposed the Multi-illuminant Dichromatic Reflection (MIDR) model to account for secondary illuminants. However, in practice they only dealt with the case of two direct illuminants irradiating a single-colored object. We may consider extending the BIDR model to incorporate multiple direct illuminants. Accordingly, there would be multiple brightening directions, and the brightness would have to be extended to a mixture of sub-coordinates. This would make the problem much more complex.

We further test on the outdoor images from the UIUC shadow dataset [45]. Fig. 14 shows three examples. The ambient illuminant is usually the blue sky, so the shadowed areas are more bluish than the lit areas. We compare to the methods of Jiang-HA [27] and Gehler et al. [13]. We also compare to the region-pair-based shadow removal method proposed by Guo et al. [45]^3. Our model recovers the reflectance by lighting the dark pixels along the yellowish brightening direction, while the other intrinsic decomposition methods often fail to recover their colors. The method of Guo et al. [45] is unable to handle thin areas due to the limited resolution of image segmentation (e.g., the fingers in the last image of Fig. 14).

Evaluation on RGB-Depth Images

We test on the RGB-Depth images from the NYU-Depth V2 dataset. We compare to those methods that take RGB-Depth images [40][6][39] or videos [5] as input^4. Typical examples are shown in Fig. 15. Our method successfully recovers globally consistent shadings and preserves the textures of the reflectance. In particular, our method was the only one that recovered the smooth shading over the painting in the first row of Fig. 15. In comparison, the method of Lee et al. [5] did not get consistent shadings between surfaces in different orientations: in their recovered reflectance of the first image in Fig. 15, the backrest of the sofa and the walls are much darker than the seat of the sofa and the floor. The method of Barron and Malik [40] successfully captured the shapes of curved surfaces (e.g., the sofa in the first image of Fig. 15) but not those of objects with sharp boundaries (e.g., the cabinet and the bed in the second image of Fig. 15). The method of Chen and Koltun [6] achieved good smoothness of shading while keeping the sharp surface edges at the same time. However, this method often failed to recover the shading orders between objects with different colors (e.g., the blue pillow and the sofa in the first image of Fig. 15). The method of Jeon et al. [39] preserved the textures in the reflectance very well (e.g., the floor in the second image of Fig. 15), but it tends to reduce the difference of shading between surfaces with similar orientations (e.g., the walls and the cabinet in the second image of Fig. 15).

CONCLUSIONS AND DISCUSSIONS

We proposed shading orders for intrinsic image decomposition. The shading orders capture not only adjacent relations but also distant connections, which overcomes the limitation of edge-based methods that lack the large-scale structure of the shading. The shading orders can be measured by several individual methods, each of which gives a reasonable estimate based on certain assumptions about the scene. Jointly utilizing these methods captures various kinds of priors and observations of the scene. We developed the CSF algorithm to combine the pairwise orders measured by different methods.
CSF infers a global order by selecting the confident and consistent pairwise orders and solving their conflicts through AE. The local competition removes unreliable measurements from the fusion, so the results are much cleaner than a weighted sum of different estimates. This is essential for keeping sharp shadow edges and textures. The sparsity-driven neighbor selection further reduces the outliers of local measurements. Experimental results demonstrate that our model is suitable for various indoor and outdoor scenes with noticeable ambient illuminants. However, the BIDR model cannot handle multiple direct illuminants, interreflections, or specular reflections. We need to generalize the BIDR model and the UVB color space for more realistic scenes. Highly textured images are still quite challenging for clustering-based methods, since their reflectance often changes irregularly and thus cannot be clustered properly. Jeon et al. proposed to separate the texture layer before decomposing the shading and reflectance [39], which is a promising way to ease the clustering.

APPENDIX

RENDERING THE SHADING MAP

Fig. 16 shows the rendered shading map of an RGB-Depth image. In the camera coordinate system, we draw a "gray surface", taking all the pixels as vertices. Both the color of the surface and the illuminant are set to [1, 1, 1]^T, and the reflection of the surface is set to be diffuse only (i.e., without any specular reflection). Here we assume that there is only one direct illuminant for each image, while the ambient illumination is set to 0. The illuminant is put inside the room box, and the range of the room box is set to the scope of all the observable pixels. In particular, we expand the range of the z dimension (orthogonal to the image plane) to the negative part of the coordinate, since the light may be placed behind the camera. The surface is rendered with the Matlab surfl function, and the output intensities of the vertices form a shading map. The bottom row of Fig. 16 shows the rendering results under several sampled illuminants. We can see that some of them are close to the real shading map of the scene, while the others are quite different.

The similarity between the rendered shading and the ground-truth shading brightness S^b is measured by their category-wise correlation:

$$\mathrm{Sim}\big(\gamma(L_d), S^b\big) = \sum_{g \in G} \frac{n_g}{n}\, \mathrm{Cor}\big(\gamma_g(L_d), e^{S^b_g}\big) = \sum_{g \in G} \frac{n_g}{n}\, \mathrm{Cor}\big(\gamma_g(L_d), e^{I^b_g}\big), \tag{28}$$

where G is the set of reflectance categories, n is the number of pixels, n_g is the number of pixels in the g-th category, and Cor is the correlation between two variables. The subscript g denotes the subset of pixels belonging to the g-th category. Here we utilize the linear relationship between the brightness I^b and the shading brightness S^b based on (6). We select a set of candidate illuminants L = {L_d | Sim(γ(L_d), S^b) > 0.2}.

ADMM FOR OPTIMIZING THE WEIGHTS W

Eqn. (23) can be solved for each pixel p individually, where the matrix W is decomposed into a series of vectors W_{p,·}; the same holds for E and C̄. For simplicity, we omit the subscript p of all the matrices from now on, and denote d = D_p.
We reformulate Eqn. (23) into the equivalent problem

$$\begin{aligned} \arg\min_{W, X, Y}\ & g_1(W) + g_2(X) + g_3(Y)\\ \text{s.t.}\ & \bar{C}^T W = d, \quad W = X = Y, \end{aligned} \tag{29}$$

where

$$g_1(W) = E^T W + \frac{\alpha_2}{2}\|W\|_2^2, \qquad g_2(X) = \alpha_1 \|X\|_1, \qquad g_3(Y) = \begin{cases} 0 & \text{if } Y_q \geq 0,\ \forall q,\\ \infty & \text{otherwise.} \end{cases} \tag{30}$$

By introducing the Lagrange multipliers λ, Γ_1 and Γ_2, we obtain the following augmented Lagrangian [50]:

$$\begin{aligned} \mathcal{L}(W, X, Y, \lambda, \Gamma_1, \Gamma_2) ={}& g_1(W) + g_2(X) + g_3(Y) + \lambda\big(d - \bar{C}^T W\big)\\ & + \Gamma_1^T(W - X) + \frac{\rho}{2}\|W - X\|_2^2 + \Gamma_2^T(W - Y) + \frac{\rho}{2}\|W - Y\|_2^2, \end{aligned} \tag{31}$$

where ρ is a scaling parameter. We initialize W, X and Y with 1_n, while λ = 2 and Γ_1 = Γ_2 = 1_n. Then we update them iteratively as follows:

$$\begin{aligned} W^{k+1} &= \frac{1}{\alpha_2 + 2\rho}\big(\rho X^k + \rho Y^k - E + \lambda^k \bar{C} - \Gamma_1^k - \Gamma_2^k\big),\\ X^{k+1} &= \begin{cases} W^{k+1} + \tfrac{1}{\rho}\Gamma_1^k - \tfrac{1}{\rho}\alpha_1 & \text{if } W^{k+1} + \tfrac{1}{\rho}\Gamma_1^k > \tfrac{1}{\rho}\alpha_1,\\ 0 & \text{if } \big|W^{k+1} + \tfrac{1}{\rho}\Gamma_1^k\big| \leq \tfrac{1}{\rho}\alpha_1,\\ W^{k+1} + \tfrac{1}{\rho}\Gamma_1^k + \tfrac{1}{\rho}\alpha_1 & \text{if } W^{k+1} + \tfrac{1}{\rho}\Gamma_1^k < -\tfrac{1}{\rho}\alpha_1, \end{cases}\\ Y^{k+1} &= \big(W^{k+1} + \tfrac{1}{\rho}\Gamma_2^k\big)_+,\\ \lambda^{k+1} &= \lambda^k + \eta_1\big(d - \bar{C}^T W^{k+1}\big),\\ \Gamma_1^{k+1} &= \Gamma_1^k + \eta_2\big(W^{k+1} - X^{k+1}\big),\\ \Gamma_2^{k+1} &= \Gamma_2^k + \eta_3\big(W^{k+1} - Y^{k+1}\big), \end{aligned} \tag{32}$$

where X^{k+1} is obtained by soft thresholding, (·)_+ truncates all the elements of a vector to be non-negative, and η_1, η_2 and η_3 are step sizes. We terminate the iteration when ‖W − X‖_1 + ‖W − Y‖_1 is less than a threshold T_W and |d − C̄^T W| is less than a threshold T_d. In our implementation, we set ρ, η_1, η_2 and η_3 to 5, 0.05, 1, and 1, respectively.

Fig. 17 shows the results of our method for the images of the MIT Intrinsic Images dataset other than those appearing in the paper. Figs. 18, 19 and 20 present several examples from the IIW dataset. Fig. 21 shows more results of our method on the UIUC Shadow Removal dataset. Fig. 22 shows more results of our method on the NYU-Depth V2 dataset, where we compare our method to several recent algorithms, including Bell et al. [12], Zhao et al. [4], Garces et al. [17], Lee et al. [5], Barron et al. [40], Chen et al. [6], and Jeon et al. [39].

Figure 23 shows the colors of shading in images from the MIT Intrinsic Images dataset. We can see that most of the shading images are nearly achromatic. The reason is that the images are captured in a controlled environment, where the ambient illuminations are largely suppressed by painting the background black. According to Equation (3), when the ambient illumination is negligible, the shading will be nearly achromatic, no matter what the color of the direct illumination is. However, for the frog in Figure 23, the shading is slightly chromatic. Figure 24 shows the colors of shading in natural indoor and outdoor scenes. Indoor scenes often have complex illuminations, so the shading colors vary a lot from image to image, and even from place to place within the same image. In comparison, the shading colors in outdoor scenes are more regular. In particular, the shadows in outdoor scenes are often bluish, since the ambient light is often the blue sky.
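Returning to the ADMM derivation above, the following is a compact numpy sketch of the per-pixel iteration in (32). The soft-thresholding step is written with Γ_1^k/ρ, the standard scaled ADMM form (the extracted equations are ambiguous about this factor), and the stopping thresholds and iteration cap are assumptions.

```python
import numpy as np

def admm_weights(E, C_bar, d, alpha1=1.0, alpha2=2.0, rho=5.0,
                 etas=(0.05, 1.0, 1.0), T_W=1e-4, T_d=1e-4, max_iter=500):
    """Sketch of the per-pixel ADMM updates of Eq. (32) for the weights W."""
    n = E.size
    W, X, Y = np.ones(n), np.ones(n), np.ones(n)   # initialized with 1_n
    lam, G1, G2 = 2.0, np.ones(n), np.ones(n)
    for _ in range(max_iter):
        W = (rho * X + rho * Y - E + lam * C_bar - G1 - G2) / (alpha2 + 2 * rho)
        V = W + G1 / rho                       # soft thresholding handles the L1 term
        X = np.sign(V) * np.maximum(np.abs(V) - alpha1 / rho, 0.0)
        Y = np.maximum(W + G2 / rho, 0.0)      # projection onto Y >= 0
        lam += etas[0] * (d - C_bar.dot(W))    # dual ascent on C_bar^T W = d
        G1 += etas[1] * (W - X)
        G2 += etas[2] * (W - Y)
        if (np.abs(W - X).sum() + np.abs(W - Y).sum() < T_W
                and abs(d - C_bar.dot(W)) < T_d):
            break                              # termination test from the text
    return W
```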
9,123
1810.09706
2896121525
We address the problem of decomposing a single image into reflectance and shading. The difficulty comes from the fact that the components of the image---the surface albedo, the direct illumination, and the ambient illumination---are heavily coupled in the observed image. We propose to infer the shading by ordering pixels by their relative brightness, without knowing the absolute values of the image components beforehand. The pairwise shading orders are estimated in two ways: brightness order and low-order fittings of the local shading field. The brightness order is a non-local measure, which can be applied to any pair of pixels, including those whose reflectance and shading are both different. The low-order fittings are used for pixel pairs within local regions of smooth shading. Together, they can capture both the global order structure and the local variations of the shading. We propose a Consistency-aware Selective Fusion (CSF) to integrate the pairwise orders into a globally consistent order. The iterative selection process solves the conflicts between the pairwise orders obtained by different estimation methods. Inconsistent or unreliable pairwise orders will be automatically excluded from the fusion to avoid polluting the global order. Experiments on the MIT Intrinsic Images dataset show that the proposed model is effective at recovering the shading, including deep shadows. Our model also works well on natural images from the IIW dataset, the UIUC Shadow dataset and the NYU-Depth dataset, where the colors of the direct lights and the ambient lights are quite different.
Ranking elements from their pairwise comparisons has been extensively studied in many fields @cite_33 @cite_8 @cite_45 . Angular Embedding @cite_23 adopts a cosine error function, which is proven to be more robust to outliers than the traditional @math or @math errors used by Least Squares Embedding @cite_33 . Angular Synchronization (AS) also uses the angular space @cite_2 , but it does not consider the confidences of pairwise measures.
{ "abstract": [ "Flash images are known to suffer from several problems: saturation of nearby objects, poor illumination of distant objects, reflections of objects strongly lit by the flash and strong highlights due to the reflection of flash itself by glossy surfaces. We propose to use a flash and no-flash (ambient) image pair to produce better flash images. We present a novel gradient projection scheme based on a gradient coherence model that allows removal of reflections and highlights from flash images. We also present a brightness-ratio based algorithm that allows us to compensate for the falloff in the flash image brightness due to depth. In several practical scenarios, the quality of flash no-flash images may be limited in terms of dynamic range. In such cases, we advocate using several images taken under different flash intensities and exposures. We analyze the flash intensity-exposure space and propose a method for adaptively sampling this space so as to minimize the number of captured images for any given scene. We present several experimental results that demonstrate the ability of our algorithms to produce improved flash images.", "Our goal is to turn an intensity image into its perceived luminance without parsing it into depths, surfaces, or scene illuminations. We start with jarring intensity differences at two scales mixed according to edges, identified by a pixel-centric edge detector. We propose angular embedding as a more robust, efficient, and versatile alternative to LS, LLE, and NCUTS for obtaining a global brightness ordering from local differences. Our model explains a variety of brightness illusions with a single algorithm. Brightness of a pixel can be understood locally as its intensity deviating in the gradient direction and globally as finding its rank relative to others, particularly the lightest and darkest ones.", "", "Given the size and confidence of pairwise local orderings, angular embedding (AE) finds a global ordering with a near-global optimal eigensolution. As a quadratic criterion in the complex domain, AE is remarkably robust to outliers, unlike its real domain counterpart LS, the least squares embedding. Our comparative study of LS and AE reveals that AE's robustness is due not to the particular choice of the criterion, but to the choice of representation in the complex domain. When the embedding is encoded in the angular space, we not only have a nonconvex error function that delivers robustness, but also have a Hermitian graph Laplacian that completely determines the optimum and delivers efficiency. The high quality of embedding by AE in the presence of outliers can hardly be matched by LS, its corresponding L1 norm formulation, or their bounded versions. These results suggest that the key to overcoming outliers lies not with additionally imposing constraints on the embedding solution, but with adaptively penalizing inconsistency between measurements themselves. AE thus significantly advances statistical ranking methods by removing the impact of outliers directly without explicit inconsistency characterization, and advances spectral clustering methods by covering the entire size-confidence measurement space and providing an ordered cluster organization.", "The angular synchronization problem is to obtain an accurate estimation (up to a constant additive phase) for a set of unknown angles θ1,…,θn from m noisy measurements of their offsets θi−θjmod2π. 
Of particular interest is angle recovery in the presence of many outlier measurements that are uniformly distributed in [0,2π) and carry no information on the true offsets. We introduce an efficient recovery algorithm for the unknown angles from the top eigenvector of a specially designed Hermitian matrix. The eigenvector method is extremely stable and succeeds even when the number of outliers is exceedingly large. For example, we successfully estimate n=400 angles from a full set of m=(400 choose 2) offset measurements of which 90% are outliers in less than a second on a commercial laptop. The performance of the method is analyzed using random matrix theory and information theory. We discuss the relation of the synchronization problem to the combinatorial optimization problem Max-2-Lin mod L and present a semidefinite relaxation for angle recovery, drawing similarities with the Goemans–Williamson algorithm for finding the maximum cut in a weighted graph. We present extensions of the eigenvector method to other synchronization problems that involve different group structures and their applications, such as the time synchronization problem in distributed networks and the surface reconstruction problems in computer vision and optics." ], "cite_N": [ "@cite_33", "@cite_8", "@cite_45", "@cite_23", "@cite_2" ], "mid": [ "2064194050", "2126602610", "", "2032802519", "2143703915" ] }
Consistency-aware Shading Orders Selective Fusion for Intrinsic Image Decomposition
An image is the result of several factors, including the material reflectance, the surface's shape, the positions and the colors of the illuminants, and the camera sensor responses. Barrow and Tenenbaum [1] proposed to decompose an image into intrinsic images, each of which captures a distinct aspect of the scene. The most common outputs are the shading and the reflectance. The shading captures the strength of the incident illumination at each pixel, while the reflectance shows the surface albedo. The shading is widely used to reconstruct the shapes of surfaces [2]. The albedo is invariant to illumination and geometry, so it is a robust feature for object classification and image segmentation. In this paper we aim to recover the shading and the reflectance from a single image.

This is an underconstrained problem. The absolute values of the unknown variables cannot be measured directly, since they are highly coupled in the observed image. Instead, we measure the relative sizes of the shading over pixels to recover its essential structure, and determine the absolute values later by boundary conditions. We regard the shading as a global ranking of the pixels in the order of dark to bright. The boundary conditions are simply that the start points are fully shadowed pixels, while the end points are fully lit ones. The global shading is inferred from pairwise shading orders, which are signed differences between the shading of pixels. The flow chart is shown in Fig. 1.

We estimate the shading orders in the UVB color space, which is spanned by a 2D shadow-free plane [3] and a brightness dimension. This color space has two major properties:
• Pixels with the same reflectance cluster together on the shadow-free plane.
• The brightness of the image is the sum of the shading brightness and the reflectance brightness.
Based on these properties, we can use clustering-based methods to capture the global order structure of the shading. For pixels with the same reflectance, the shading orders can be obtained directly from the difference of the image brightness. For pixels with different reflectance, the shading orders can be calculated in a similar way, but the bias from the difference of the reflectance brightness should be compensated. We choose the optimal biases between different clusters of reflectance, which make the shading constant across reflectance boundaries, excluding shading edges. The cluster-wise biases make it possible to handle pixel pairs whose reflectance and shading are both different. We also model the local shading by low-order fittings to predict the shading orders between nearby pixels. Different models can capture the geometric structure of different types of surfaces. For example, a linear model can describe the shading of a smooth surface.

The estimation methods above are complementary. The clustering-based methods can be applied to any pair of pixels, in particular to distantly located pixels, but their accuracy depends on the quality of the clustering. In contrast, the low-order fittings do not rely on clustering at all, but they capture only the local structure, and the fitting errors are large for irregular surfaces.

The pairwise shading orders are combined into a global shading via Consistency-aware Selective Fusion (CSF).

Fig. 1: The flow chart of our method. First, the image is transformed into the UVB color space.
Based on the brightness and the clustering results over chromaticity, different methods m are used to estimate the shading orders O(p, q, m) between each pair of pixels p and q. We also evaluate the reliability C(p, q, m) of the estimates based on the image features. Then we use CSF to infer the global shading. CSF repeats two operations: Local Selection, i.e., selecting the estimation methods and the weights for each pair of pixels under the guidance of consistency between the pairwise orders and the global shading; and Angular Embedding (AE), which infers the globally consistent orders from the pairwise estimates. Finally, we transform the global shading back into the RGB space.

The major challenge is avoiding inconsistency between estimates from different methods. CSF identifies a sparse set of reliable and consistent pairwise shading orders and fuses them within a unified optimization framework. For each pair of pixels, CSF selects the most reliable estimate exclusively, instead of taking a weighted summation of different estimates [4][5][6]. This strategy prevents unreliable estimates from polluting the results. We evaluate the reliability of pairwise orders using not only the image features but also their consistency with the global order. Therefore, estimates that are incompatible with the majority will be suppressed, even when their preconditions happen to be satisfied by the image features. Forcing sparsity of the pairwise connections further reduces unreliable measurements. The global order is obtained from Angular Embedding (AE) [7], which embeds the pixels onto a unit circle in the complex plane. AE uses a complex matrix to encode the pairwise orders and their reliability simultaneously. Moreover, AE applies spectral decomposition to get a near-global optimal solution that best matches the reliable pairwise orders. After locating the darkest points on the unit circle, the absolute values of the shading can be determined.

IMAGE FORMATION

An image with only body reflection can be modeled as [3]

$$I^i(p) = R_b^i(p)\big(\gamma(p) L_d^i + L_a^i\big), \tag{1}$$

where the superscript i indexes the RGB channels, and p indexes the pixel. The body reflectance R_b denotes the diffuse reflection under white illumination. The three-dimensional vectors L_d and L_a are the direct illuminant and the ambient illuminant, respectively. γ(p) ∈ [0, 1] is the direct shading, i.e., the proportion of the direct illumination reaching the surface. BIDR assumes that the direct and ambient illuminants are constant across the materials [3]. When there are multiple direct illuminants with the same color, their effects can be added.

Inspired by the shadow removal problem [45], we define the reflectance to be the image lit by the full direct illuminant together with the ambient illuminant:

$$R^i(p) = R_b^i(p)\big(L_d^i + L_a^i\big). \tag{2}$$

Accordingly, the shading is defined to be

$$S^i(p) = \frac{I^i(p)}{R^i(p)} = \frac{\gamma(p) L_d^i + L_a^i}{L_d^i + L_a^i}. \tag{3}$$

For a fully lit area (i.e., γ = 1), the shading reaches its maximum. For a fully shadowed area (i.e., γ(p) = 0), the shading will be S(p) = L_a/(L_d + L_a). In natural scenes, the direct lights are always much stronger than the ambient lights, so the shading of fully shadowed areas should be a small positive value. The color of the shading in (3) does not have a definite physical meaning, so we show the shading in grayscale for all the figures in this paper, following [24] and [12]. Readers interested in the color of the shading are referred to the supplementary material for several examples.
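As a sanity check of the model (1)-(3), the short snippet below verifies numerically that the decomposition I = R · S holds channel-wise under the BIDR model. The illuminant values are made up for illustration.

```python
import numpy as np

H, W = 4, 4
Rb = np.random.rand(H, W, 3)          # body reflectance
gamma = np.random.rand(H, W, 1)       # direct shading in [0, 1]
Ld = np.array([1.0, 0.9, 0.7])        # direct illuminant (assumed)
La = np.array([0.1, 0.15, 0.2])       # ambient illuminant (assumed)

I = Rb * (gamma * Ld + La)            # Eq. (1): the observed image
R = Rb * (Ld + La)                    # Eq. (2): reflectance = fully lit image
S = (gamma * Ld + La) / (Ld + La)     # Eq. (3): shading
assert np.allclose(I, R * S)          # the decomposition is exact
```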
SHADING ORDERS FROM BRIGHTNESS

We infer the shading orders in the UVB color space. We will show that the image brightness has a linear relation to the log of the shading. Therefore, pairwise shading orders can be estimated by either brightness orders or low-order fittings of the local shading.

The UVB Color Space

The BIDR model delivers a 2D shadow-free plane UV [3]. The normal n of the UV plane points from the shadowed pixels to the lit ones sharing the same body reflectance R_b (see Fig. 2b for an example). We call the normal n the brightening direction. Formally, the brightening direction is defined by

$$\mathbf{n} = \frac{1}{K}\Big(\log I(p)\big|_{\gamma(p)=1} - \log I(q)\big|_{\gamma(q)=0}\Big) = \frac{1}{K}\log\Big(\frac{L_d}{L_a} + 1\Big), \tag{4}$$

where the pixels p and q should satisfy R_b(p) = R_b(q), and K is the normalization factor. From (4) we can see that the brightening direction depends only on the ratio of the illuminants, so all the pixels share the same brightening direction (Fig. 2b). If the ratio of the illuminants is unknown, we can search for the most probable brightening direction, namely the one that minimizes the entropy of the pixels on the UV plane [3][46]. Since pixels with similar reflectance R_b stay close together on the UV plane (Fig. 2c), the entropy of the distribution of pixels will be minimized.

Let u and v be any pair of basis vectors on the UV plane. Then we have a rotation matrix H = [u, v, n] that transforms the log RGB space into a new color space UVB:

$$[I^u(p), I^v(p), I^b(p)] = \log I(p)\, H. \tag{5}$$

The dimension I^b captures the intensity of the image, and we call it the brightness. According to (3) and (5), the brightness of the image can be factorized as follows:

$$I^b(p) = \log S(p) \cdot \mathbf{n} + \log R(p) \cdot \mathbf{n} = S^b(p) + R^b(p). \tag{6}$$

Here we used the fact that log I(p) = log R(p) + log S(p). The shading brightness S^b(p) = log S(p) · n is a linear function of log S. The reflectance brightness R^b(p) = log R(p) · n can be regarded as a bias determined by the body reflectance R_b. This linear relationship is the basis for estimating the shading orders in Section 3.2. According to (5), the shading in the UVB space should be [S^u(p), S^v(p), S^b(p)] = log S(p) H. Note that S^u and S^v are nearly zero, since the UV plane is shadow-free [3]. The only unknown dimension is the shading brightness S^b, which we will infer from pairwise shading orders in Section 5. Once we obtain S^b, the shading in the RGB space can be recovered by

$$S(p) = \exp\big([S^u(p), S^v(p), S^b(p)]\, H^{-1}\big), \tag{7}$$

where exp denotes the element-wise exponential. Note that the rotation matrix H is always invertible. The reflectance can then be obtained from R(p) = I(p)/S(p).

Measuring Pairwise Shading Orders

The shading order between pixels p and q is defined as the signed difference between their shading brightnesses, i.e., O(p, q) = S^b(p) − S^b(q). We propose four methods M = {BO, BOB, FS, SS} to estimate the shading orders. These methods are illustrated in Fig. 3.

Brightness Order (BO). According to (6), if two pixels have the same reflectance brightness R^b or, equivalently, the same body reflectance R_b, their shading order equals their difference of brightnesses:

$$O(p, q, BO) = I^b(p) - I^b(q) \quad \text{if } R_b(p) = R_b(q). \tag{8}$$

Brightness Order minus Bias (BOB). For pixels with different body reflectance, the bias of the reflectance brightness ΔR^b should be compensated as follows:

$$O(p, r, BOB) = I^b(p) - I^b(r) - \Delta R^b(p, r) \quad \text{if } R_b(p) \neq R_b(r), \tag{9}$$

where ΔR^b(p, r) = R^b(p) − R^b(r) is the bias. The process of calculating the bias is described in Section 3.3.
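The following is a small sketch of the UVB transform in (4)-(5) and of the BO estimator in (8). The illuminants are assumed known here (when they are not, the brightening direction comes from the entropy minimization described above), and the particular orthonormal basis chosen for u and v is free; completing n to a basis via QR is an implementation choice, not the paper's.

```python
import numpy as np

def uvb(I, Ld, La):
    """Rotate log-RGB into UVB, Eqs. (4)-(5). I: (N, 3) linear RGB."""
    n = np.log(Ld / La + 1.0)                    # brightening direction, Eq. (4)
    n /= np.linalg.norm(n)
    # complete n into an orthonormal basis; u, v span the shadow-free plane
    Q, _ = np.linalg.qr(np.column_stack([n, [1, 0, 0], [0, 1, 0]]))
    H = np.column_stack([Q[:, 1], Q[:, 2], n])   # H = [u, v, n]
    return np.log(I) @ H                         # columns: I_u, I_v, I_b

# BO estimator, Eq. (8): two pixels assumed to share the same body reflectance
I = np.array([[0.8, 0.5, 0.3],
              [0.4, 0.25, 0.15]])                # p (lit) and q (darker)
Ld, La = np.array([1.0, 0.9, 0.7]), np.array([0.1, 0.15, 0.2])
B = uvb(I, Ld, La)
O_pq = B[0, 2] - B[1, 2]                         # positive: p is brighter than q
```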
BO and BOB together can estimate the shading order between any two pixels. For nearby pixels, we can instead fit their shading brightness by low-order functions. This is based on the assumption of local smoothness of the shading, which is valid for most parts of natural images.

Fig. 3: Calculating the shading orders O from the brightness I^b. We align the curves of the brightness I^b and the ground-truth shading brightness S^b so that I^b(p) = S^b(p). The red dashed curve is the brightness after compensating for the bias of the reflectance brightness ΔR^b. The green masks cover the green pixels, while the uncovered ones are white.

First-order Smoothness (FS). For flat surfaces, the normal directions and thus the incident angles change little. According to the cosine law of Lambertian reflection, the variation of the shading brightness will be small. The first-order derivative of the shading brightness should be almost zero where there are no shadow edges. Consequently, adjacent pixels will have nearly identical shading brightness:

$$O(p, s, FS) = 0 \quad \text{if } s \in N(p),\ \frac{\partial I^b(p)}{\partial p} \approx 0, \tag{10}$$

where N(p) is the neighborhood of p, and ∂I^b(p)/∂p is the derivative of I^b evaluated at p.

Second-order Smoothness (SS). For smooth surfaces, the surface normal rotates smoothly. As a result, the shading brightness changes smoothly. We assume that the second-order derivative of the shading is close to zero, so the local shading can be fitted by a linear function. We further assume that adjacent pixels share the same body reflectance, so the slope of the linear model is ∂S^b(p)/∂p = ∂I^b(p)/∂p. The shading order between two nearby pixels is then

$$O(p, t, SS) = \frac{\partial I^b(p)}{\partial p} \cdot (p - t) \quad \text{if } t \in N(p),\ \frac{\partial^2 I^b(p)}{\partial p^2} \approx 0, \tag{11}$$

where p − t is the directed spatial distance between p and t. In practice, we calculate the derivative and the spatial distance in the horizontal and vertical directions separately.

The preconditions of the methods above are not mutually exclusive, so different methods may be applicable to the same pair of pixels. Together, the preconditions cover all possible situations, so we can find at least one suitable method for most pairs of pixels. The redundancy and completeness of these methods are the basis for robust estimates of the shading orders.

Estimating the Bias of Reflectance Brightness

The biases of the reflectance brightness ΔR^b in (9) are needed to estimate the shading orders between pixels with different body reflectance. The absolute values of the reflectance brightness R^b are unavailable, so we cannot calculate their biases directly. Instead, we cluster the pixels by body reflectance, and estimate the biases of reflectance brightness between different clusters. The local smoothness of shading implies that pixels within a small patch have similar shading brightness.
According to (6), the bias of the reflectance brightness between two clusters can be approximated by their difference of image brightness within small patches. The main process is shown in Fig. 4. The image is divided into dense grids with 10 pixels on each side. For a patch T containing pixels from both categories j and k, the difference of reflectance brightness is calculated by ΔR^b(j, k, T) = Ī^b(j, T) − Ī^b(k, T), where Ī^b(j, T) and Ī^b(k, T) are the median brightnesses of the pixels belonging to categories j and k, respectively. We generate a histogram of the patch-wise measures ΔR^b(j, k, T), and take the highest peak as the estimate ΔŘ^b(j, k), as shown in Fig. 4c. The minority of the histogram mainly comes from patches with shading edges inside them (e.g., patches 3 and 4 in Fig. 4b). The reliability F of the estimate is set to the number of votes from the patches. When F_{j,k} is 0, categories j and k are not adjacent, and their bias cannot be measured directly. In this case, we resort to their biases with other categories. Taking each reflectance category as a node, we can build an undirected graph G = (V, E), where V is the set of nodes and E is the set of edges. The weight of the edge between nodes j and k is set to 1/F_{j,k}, where F_{j,k} is the reliability of ΔŘ^b(j, k) as described before. We can get an estimate of the bias between two nodes by summing the biases along any path connecting them. We further eliminate the multipath effect by extracting the Minimum Spanning Tree (MST) of the graph G. The MST ensures that there is one and only one path between any two nodes, so the relative reflectance brightness Ř^b of each node can be uniquely determined. Meanwhile, the total reliability of the remaining pairwise biases is maximized. The sparsity of the reflectance spectra [47] ensures that the pixels can be clustered into a small number of categories.

Fig. 4: Estimating the bias of reflectance brightness between reflectance categories. (a) The cluster map. The symbols j, k, and l stand for 3 reflectance categories. The squares indicate representative patches for estimating the bias of reflectance brightness between categories j and k; (b) The brightness I^b. The biases obtained from patches 3 and 4 are outliers, since there are shadow edges inside them; (c) The histogram of the patch-wise biases of reflectance brightness between categories j and k. The peak of the histogram is selected as the result.

Since the pixels on the shadow-free plane UV are well organized by their body reflectance, we cluster the pixels by a simple k-means. The number of clusters is set to the number of local maxima in the 2D histogram of I^u and I^v. The bin size of the histogram is empirically set to 0.03.

THE RELIABILITY OF PAIRWISE ORDERS

For each pair of pixels, we obtained several estimates of their shading order by the different methods in Section 3.2. These methods rely on certain assumptions about the scene, which may be invalid for certain parts of the image. Therefore, the estimated shading orders may differ from the ground truth. We evaluate the reliability of each estimate by checking whether influential perturbations happened there. The reliability of an estimate is the probability that all its premises are valid, which is calculated by a Noisy-Or model:

$$C(p, q, m) = \prod_{f \in C_m} \big(1 - P_f(p, q)\big), \quad m \in M, \tag{12}$$

where C_m is the set of perturbations that the method m is not robust to, as listed in Table 1.
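A tiny sketch of the Noisy-Or confidence in (12) is given below; the perturbation probabilities use the sigmoid introduced in the next paragraph, and the feature distances and weights in the example are hypothetical.

```python
import numpy as np

def sigm(x, w):
    # the sigmoid that maps a feature distance to a probability
    return 2.0 / (1.0 + np.exp(-w * x)) - 1.0

def confidence(perturbation_probs):
    # Noisy-Or of Eq. (12): an estimate is reliable only if none of the
    # perturbations it is sensitive to has occurred
    return np.prod([1.0 - p for p in perturbation_probs])

# hypothetical perturbation probabilities for one estimation method
probs = [sigm(0.05, np.log(3) / 0.1), sigm(0.3, np.log(3) / 0.2)]
print(confidence(probs))
```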
The probability P_f(p, q) measures how likely the perturbation f occurs around pixels p and q. For an ideal image without any perturbation, all the methods get equally high confidences. Once a perturbation happens, the confidences of the sensitive methods drop. The occurrences of the perturbations are predicted by image features. Generally, we calculate a distance x between the pair of pixels according to each feature, and translate the distance into a probability by a sigmoid function of the form sigm(x; w) = 2/(1 + e^{−wx}) − 1, where w is a positive weight. The features are described below.

Clustering Error (CE) is the probability that the clustering of the pixels on the shadow-free plane is inaccurate, which is calculated by

$$P_{CE}(p, q) = \big(1 - P_C(p) P_C(q)\big) \cdot \mathrm{sigm}\big(e_{\hat{S}^b}(p, q); w_1\big), \tag{13}$$

where the cluster probability P_C is the likelihood of each pixel belonging to its reflectance category, and e_{Ŝ^b} is the strength of the step edge [48] on the shifted shading brightness Ŝ^b. The first term increases as pixel p or q deviates from the cluster centers. The second term is large when the pixels are improperly categorized or the relative reflectance brightnesses are inaccurately estimated, as shown in Fig. 5c. Here each reflectance category is modeled by a multivariate normal distribution. The shifted shading brightness Ŝ^b is obtained by subtracting the relative reflectance brightness Ř^b (Section 3.3) from the brightness I^b, followed by median filtering.

Local Color Variance (LCV) is defined as

$$P_{LCV}(p, q) = \mathrm{sigm}\big(\max(\sigma(I(p)), \sigma(I(q))); w_2\big), \tag{14}$$

where σ(I(p)) is the standard deviation of the chromaticities I^u and I^v within the 3×3 window centered at pixel p. Large color variations mainly appear at reflectance boundaries (Figs. 5a and 5c).

Shadow Edges (SE) are caused by occlusions of the direct light. To locate the shadow edges, we render the direct shading γ̂ under uniformly sampled illuminants. The direct shading is similar to the visibility map proposed by Lee et al. [5]. The difference is that they assume the illuminants to be infinitely far away, which is inaccurate for indoor scenes. Instead, we sample the feasible positions of the illuminant within the room box. The probability of a shadow edge between pixels p and q is calculated from their direct shading under the promising illuminants, as follows:

$$P_{SE}(p, q) = \mathrm{sigm}\Big(\frac{1}{|L|}\sum_{L_d \in L} \big|\hat{\gamma}(L_d, p) - \hat{\gamma}(L_d, q)\big|;\ w_3\Big). \tag{15}$$

Here L is the set of promising illuminants, and γ̂(L_d, p) is the direct shading at pixel p under illuminant L_d. We select the promising illuminants according to the correlation between the rendered direct shading γ̂ and the brightness I^b. See the supplementary material for details. The Shadow Edges feature is not applicable to RGB-only images, since the geometric layout is needed for rendering the shading map.

Reflectance Change (RC) distinguishes pixels with different chromaticities or intensities, which are assumed to have different reflectance [24][13][17][12]. We calculate the probability of a reflectance change by

$$P_{RC}(p, q) = \mathrm{sigm}\big(d_{uv}(p, q); w_4\big) \cdot \mathrm{sigm}\big(e_b(p, q); w_5\big), \tag{16}$$

where d_{uv} is the geometric distance on the shadow-free plane, and e_b(p, q) is the magnitude of the step edge lying between p and q in the brightness I^b, which aims at distinguishing colors with similar chromaticity but different intensities, especially achromatic ones.

Surface Normal Change (SNC) generates shading variation [5][6][39].
We calculate the probability of a surface normal change by

$$P_{SNC}(p, q) = \mathrm{sigm}\big(\angle(N(p), N(q)); w_6\big), \tag{17}$$

where ∠(N(p), N(q)) is the angle between the surface normals at pixels p and q. The surface normals are derived from the depth map [5]. SNC is unavailable for RGB-only images.

Spatial Distance (SD) is simply the geometric distance between the pixels [6][12]:

$$P_{SD}(p, q) = \mathrm{sigm}\big(d_s(p, q); w_7\big). \tag{18}$$

For RGB-Depth images, we first calculate the 3D positions of the pixels in camera coordinates and then compute their distances. For RGB-only images, we use the 2D coordinates in the image plane.

Discussion. The features above can help us choose the best estimation method for a certain pair of pixels. Among them, CE focuses on whether the biases of the reflectance brightnesses are correctly estimated, which is the key to the success of the BOB method. We check the correctness by both the cause and the effect, i.e., whether the pixels are tightly clustered and whether the estimated shading is smooth, respectively. LCV and RC capture the local and the large-scale behaviour of reflectance change, respectively. Local variation, coupled with image blur, disturbs the measurement of the brightness as well as its gradient. This causes problems for most estimation methods except FS, which is concerned only with the adjacency of pixels.

GLOBAL SHADING FROM SHADING ORDERS VIA CONSISTENCY-AWARE SELECTIVE FUSION

Thus far we have obtained a matrix O of the pairwise shading orders (Section 3.2), together with a confidence matrix C from (12) representing their reliability. Now we use Consistency-aware Selective Fusion (CSF) to select a subset of reliable and consistent pairwise orders, and combine them into an optimal global order. CSF is designed under the following criteria:
• For each pair of pixels p and q, the optimal estimation method M_{p,q} ∈ M is selected exclusively.
• The pairwise connections W_{p,q} should be sparse, such that outliers are excluded.
• The total confidence of the selected pairwise shading orders should be maximized.
• The global order should match the input pairwise orders.
In practice, the global order is obtained through Angular Embedding (AE) [7]. Let Z_p = e^{iS^b(p)} with i = √−1 denote the embedding of pixel p on the unit circle in the complex plane (Fig. 6). The angle Θ_{p,q} from Z_p to Z_q is the shading order between p and q. AE finds an embedding that makes Θ_{p,q} consistent with the input shading order O_{p,q} = O(p, q, M_{p,q}).

Algorithm 1 Consistency-aware Selective Fusion
Require: Pairwise shading orders O and the relative confidences C, the initial weights α_1 and α_2 of the regularizer, the threshold ω_min on the density of non-zero elements in W, and the step size τ.
Ensure: Embedding Z.
Initialization: W = 1_{n,n}, where n is the number of pixels; M_{p,q} = arg max_m C(p, q, m).
while α_2 > 0 do
  Optimize Z using (20);
  Choose M using (22);
  Update W using (23);
  α_2 = α_2 − τ;
  if ‖W‖_0 < ω_min n² then break;
end while
return Z.

The estimation methods M, the pairwise connections W, and the embedding Z are optimized jointly as follows:

$$\begin{aligned} \min_{W, M, Z}\ & J_{AE}(Z; W, M) + P(W)\\ \text{s.t.}\ & |Z_p| = 1,\ \sum_q C_{p,q} = D_p,\ \forall p, \quad W_{p,q} \geq 0,\ \forall p, q, \end{aligned} \tag{19}$$

where the error of Angular Embedding is defined as [7]

$$J_{AE}(Z; W, M) = \sum_{p,q} C_{p,q} \cdot \big| Z_p - Z_q e^{iO_{p,q}} \big|^2, \tag{20}$$

and the regularization term is in the form of an elastic net [49]:

$$P(W) = \alpha_1 \|W\|_1 + \frac{\alpha_2}{2}\|W\|_2^2. \tag{21}$$

Here C_{p,q} = W_{p,q} C(p, q, M_{p,q}) is the weighted confidence, and D is a diagonal degree matrix with D_p = Σ_q max_{m∈M} C(p, q, m).
α_1 and α_2 are the weights of the lasso (L1) and the ridge (L2) penalties, respectively. The elastic net enforces group sparsity on the weights, so several groups of reliable neighbors will be selected for each pixel. We optimize the variables M, W, and Z iteratively, as described in Algorithm 1. Fig. 6 illustrates one iteration of the process. The details are given below.

Choose M. Keeping W and Z fixed, we can search for the optimal estimation method by

$$\begin{aligned} \arg\min_{M}\ & \sum_{p,q} W_{p,q} C(p, q, M_{p,q}) \cdot \big| Z_p - Z_q e^{iO(p,q,M_{p,q})} \big|^2\\ \text{s.t.}\ & \sum_q W_{p,q} C(p, q, M_{p,q}) = D_p,\ \forall p. \end{aligned} \tag{22}$$

This can be optimized by the Lagrange method. We iteratively pick the optimal M_{p,q} that balances the confidence and the consistency of the orders under the current Lagrange multiplier, and update the multiplier by dual ascent. In Fig. 6b the selected method for pixels p and q is the one with the second highest confidence but the best consistency with the global shading.

Update W. Keeping M and Z fixed, the weights are updated by solving

$$\begin{aligned} \arg\min_{W}\ & \sum_{p,q} W_{p,q} E_{p,q} + \alpha_1 \|W\|_1 + \frac{\alpha_2}{2}\|W\|_2^2\\ \text{s.t.}\ & \sum_q W_{p,q} \bar{C}_{p,q} = D_p,\ \forall p, \quad W_{p,q} \geq 0,\ \forall p, q, \end{aligned} \tag{23}$$

where C̄_{p,q} = C(p, q, M_{p,q}) and the confidence-weighted embedding error is E_{p,q} = C̄_{p,q} · |Z_p − Z_q e^{iO(p,q,M_{p,q})}|². This optimization problem can be solved by the Alternating Direction Method of Multipliers (ADMM) [50]; see the supplementary material for details. From (23) we can see that the larger the embedding error E_{p,q} is, the smaller W_{p,q} tends to be. This can be observed in Fig. 6, where the pair p and t gets a low weight, since the embedding error is large for every estimation method. Note that we decrease the value of α_2 gradually in Algorithm 1, which makes W more and more sparse. This progressive sparsity has better numerical stability than setting α_2 to a small value at the very beginning. When α_2 gets too small, the pairwise connections may become overly sparse, producing an ill-conditioned graph; we terminate the iteration of Algorithm 1 in this case.

Optimize Z. Optimizing the embedding error J_{AE}(Z; W, M) in (20) directly is hard in practice, since it has n constraints, where n is the number of pixels. Relaxing the unit-length constraints in (19) to Z† D Z = 1_n† D 1_n, the problem can be rewritten in the following matrix form:

$$\min_Z Z^{\dagger} L Z \quad \text{s.t. } Z^{\dagger} D Z = \mathbf{1}_n^{\dagger} D \mathbf{1}_n, \tag{24}$$

where L is a Laplacian matrix

$$L = D - \big(C \circ e^{iO} + (C \circ e^{iO})^{\dagger}\big), \tag{25}$$

where ∘ is the matrix Hadamard product, † is the complex conjugate transpose, 1_n is an n × 1 vector of all ones, and the exponentiation acts element-wise. To make the optimization tractable, we consider only the shading orders between nearby pixels, while the confidences of the other shading orders are set to zero. In our experiments we set the neighborhood to a square of 30 pixels on each side. The optimization problem in (24) is solved by the spectral partitioning algorithm [48] with complex-valued eigenvectors. The solution is the vector of angles of the first eigenvector Z_0, i.e., the one with the smallest eigenvalue. We refer to the paper of Yu [7] for more details.

Recover shading S^b. To decode the shading brightness S^b from the angles of Z_0, we need to ensure that the angle between any two points is less than 2π; otherwise the points may overlap with each other. To achieve this, we scale the brightness dimension of the UVB color space by a positive scalar. The scaling does not disturb the order of Z_0, and we can scale the shading brightness back after the decoding. AE allows the points to rotate as a whole around the origin, so we need to rotate the points back until the angles of the darkest points are zero.
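The following is a minimal dense-matrix sketch of the spectral solve in (24)-(25). It assumes a small problem with C holding each pair's confidence in one direction only, and it builds the degree matrix from row sums of the symmetrized confidences; the paper instead restricts pairs to 30-pixel neighborhoods, uses D_p = Σ_q max_m C(p, q, m), and solves with the spectral partitioning algorithm of [48]. The wrap-around handling via the histogram gap of the angles is described at the start of the experiments section and is omitted here.

```python
import numpy as np
from scipy.linalg import eigh

def angular_embedding(C, O):
    """Solve Eqs. (24)-(25). C[p, q]: confidence of the order O[p, q],
    stored for one direction of each pair; returns relative shading brightness."""
    A = C * np.exp(1j * O)                 # confidence-weighted rotations
    D = np.diag((C + C.T).sum(axis=1))     # diagonal degree matrix (assumed positive)
    L = D - (A + A.conj().T)               # Hermitian Laplacian, Eq. (25)
    vals, vecs = eigh(L, D)                # generalized eigenproblem L z = lambda D z
    z0 = vecs[:, 0]                        # eigenvector with the smallest eigenvalue
    theta = np.angle(z0)
    return theta - theta.min()             # rotate so the darkest point sits at zero
```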
Note that the darkest pixels and the brightest pixels are always separated by a gap on the circles in the complex plane. Fig. 7b shows an example. The gap can be easily located by the consecutive empty bins of the histogram of the angles Z 0 (Fig. 7c). The pixels falling into the bins to the left of the gap are shifted to the right by 2π. Fig. 8 shows the change of variables during the iterations of CSF. In the beginning, the relative shading of some local regions are inaccurate (e.g., the circle inside the red box), since some wrong estimates occasionally get higher confidences than the right ones based solely on the image features. For example, the orders obtained from the BOB method (indicated by green dots) may possibly be wrong since the clustering is inaccurate (see Fig. 5b). Some pixels with similar but different colors are mistaken to have the same reflectance (the red dots in the light yellow regions). Furthermore, the FS method is adopted to estimate the shading orders between distant pixels (the yellow dots far away from the center point). When the global order is used to guide the selection, the right estimation methods gradually emerge. At the same time, the weights of unreli-able connections are greatly decreased as the sparsity gets stronger. Specifically, pairs of pixels whose orders cannot be accurately estimated by any method will be assigned zero weights and excluded from the fusion. As a result, the errors of Z 0 are reduced considerably. EXPERIMENTS We evaluate our method on the MIT Intrinsic Images dataset [24], which is a widely used benchmark. It contains groundtruth intrinsic images of 20 natural objects, and 16 of them are used for test. The images are taken in a controlled environment, where the direct illuminants are nearly white and the ambient illuminants are limited. To validate against real-world scenes, we evaluate our method on the Intrinsic Image in the Wild (IIW) dataset [12], which is a large-scale dataset of public photo collections. We also test our method on outdoor scenes from the UIUC shadow dataset [45]. We further test the utility of depth information on the RGB-Depth images from the NYU-Depth V2 dataset [51]. Error Metrics and Parameter Settings We evaluate the results on the MIT Intrinsic Images dataset primarily by the standard metric, namely the Local Mean Squared Error (LMSE) [24]. However, as pointed out by Jiang et al. , LMSE is sensitive to the window size and the difference between the mean values of the recovered intrinsic images and the groundtruth [27]. Moreover, LMSE biased towards edge-based methods [11]. To give a more complete evaluation, we include the absolute LMSE (aLMSE) and the correlation metrics proposed by Jiang et al. [27] as well as the standard MSE metric. The aLMSE is defined as follows: (26) where I andĨ are the ground-truth and estimate of intrinsic image, respectively. w is the index of sliding window. µ and µ are the average of I andĨ, respectively. The optimal scale a is searched to minimize the square error. The influence of the difference of mean values can be eliminated by aLMSE. aLM SE(I,Ĩ) = w min a (I w − µ w ) − a(Ĩ w −μ w ) 2 , The correlation is defined to be Cor(I,Ĩ) = E[(I − µ)(Ĩ −μ)] σσ ,(27) where σ is the standard deviation of the image. E is the expectation. We refer to the supplementary material of Reference [27] for more details of aLMSE and correlation. 
Among these metrics, correlation and MSE measure the error in a global way, while LMSE and aLMSE take an average of local errors on small image windows. For each image, the performance of reflectance and shading are calculated separately and the average of them is taken to be the result. The final result is the average of the performances over all images. Results on the IIW dataset are evaluated by the metric of "weighted human disagreement rate" (W HDR 10% ) [12]. It measures the correct rate of judgements on "which one has a darker reflectance" between two pixels. The main parameters of our model are the positive weights of the sigmoid function in Section 4. We set w 1 to be ln3/0.1, so the sigmoid function maps a step edge of strength 0.1 to a probability of 0.5. Similarly, we set w 2 ∼ w 6 to be ln3/0.2, ln3/0.01, ln3/0.08, ln3/0.1, and ln3/0.2, respectively. Specifically, we set the w 7 of the FS method to be twice as much as that of the SS method. We find the medium of the spatial distances of all the pixel pairsd s , and set w 7 to be ln3/d s for the FS method. For RGB-only images, we increase w 7 by 6 times to compensate the increase of probabilities of selecting the FS and the SS method. The initial weights α 1 and α 2 in (21) are set to be 1 and 2, respectively. The threshold ω min and the step size τ in Algorithm 1 are set to be 1/3 and 0.2, respectively. We found that our model is insensitive to these parameters. Evaluation of the components of our method Individual estimation methods. The results on the MIT Intrinsic Images dataset are compared in Fig. 9a. Our full model (Full) achieves the best performance, while estimating the shading orders without any single method will cause a noticeable drop of performance. Disabling BOB (W/o BOB) causes the most severe drop, followed by BO, FS, and SS, consecutively. Fig. 10 shows the changes of the recovered reflectance and shading when different methods are removed. Removing BO will break the smoothness of reflectance across the shadow edges. When BOB is unused, the shading smoothness across different reflectance will be broken, leaving sharp edges in shading. The smoothnessbased methods FS and SS are essential for keeping the local shading smooth. Without using FS, the smoothness in textured regions cannot be guaranteed. SS is important for the areas where the biases of reflectance brightness are not accurately estimated. The brightening direction. We test a special case of our method, where the brightening direction is fixed at [1, 1, 1] T following the Color Retinex [24]. Although the direct illuminants in the MIT Intrinsic Images dataset are nearly white and the ambient illuminants are weak, the performance under a white brightening direction (WB) is much worse than our original model (Fig. 9b). The confidences of pairwise orders. We evaluate the importance of the confidences of the pairwise orders in inferring the global shading by replacing AE with AS [34], i.e., assigning equal weights to the pairwise shading orders. From Fig. 9b we can see that the performance drops significantly. Depth information. Several depth-based features are used to calculate the confidences of pairwise orders for RGB-Depth images (Section 4). Fig. 11 suggests their effects. Utilizing the feature of Surface Normal Change increases the probability of applying the shading smoothness constraints to flat surfaces. See the regions in the red and green boxes of Fig. 11 for examples. 
Evaluation of the components of our method
Individual estimation methods. The results on the MIT Intrinsic Images dataset are compared in Fig. 9a. Our full model (Full) achieves the best performance, while estimating the shading orders without any single method causes a noticeable drop of performance. Disabling BOB (W/o BOB) causes the most severe drop, followed by BO, FS, and SS, consecutively. Fig. 10 shows the changes of the recovered reflectance and shading when different methods are removed. Removing BO breaks the smoothness of the reflectance across shadow edges. When BOB is unused, the smoothness of the shading across different reflectance is broken, leaving sharp edges in the shading. The smoothness-based methods FS and SS are essential for keeping the local shading smooth. Without FS, the smoothness in textured regions cannot be guaranteed. SS is important for the areas where the biases of the reflectance brightness are not accurately estimated.

The brightening direction. We test a special case of our method, where the brightening direction is fixed at [1, 1, 1]^T following the Color Retinex [24]. Although the direct illuminants in the MIT Intrinsic Images dataset are nearly white and the ambient illuminants are weak, the performance under a white brightening direction (WB) is much worse than that of our original model (Fig. 9b).

The confidences of pairwise orders. We evaluate the importance of the confidences of the pairwise orders in inferring the global shading by replacing AE with AS [34], i.e., assigning equal weights to the pairwise shading orders. From Fig. 9b we can see that the performance drops significantly.

Depth information. Several depth-based features are used to calculate the confidences of the pairwise orders for RGB-Depth images (Section 4). Fig. 11 shows their effects. Utilizing the Surface Normal Change feature increases the probability of applying the shading smoothness constraints to flat surfaces; see the regions in the red and green boxes of Fig. 11 for examples. These areas are mistaken to be shadowed without depth cues, since they have similar chromaticity to their surroundings and their boundaries are blurred. The Shadow Edges feature finds shading changes at depth discontinuities efficiently. It may miss shadow edges that cannot be generated by any sampled illuminant, when the change of depth is small (e.g., the area in the blue box of Fig. 11) or a large part of the occluder is not visible in the current view (e.g., the area in the yellow box).

Results on the MIT Intrinsic Images dataset
We compare our method to the state of the art and to several classic approaches, as listed in Table 2. These results are either copied from their papers or from the report in [11], or obtained by running their code directly without tuning any parameters¹. We report the results under the best parameters for the whole dataset.

Table 2: Results on the MIT Intrinsic Images dataset.
Method               Cor.     aLMSE    LMSE     MSE
—                    –        –        0.0390   –
Color Retinex [24]   0.7146   0.1108   0.0286   0.2541
Jiang-A [27]         0.6184   0.1533   0.0421   0.3988
Jiang-H [27]         0.5829   0.1524   0.0483   0.3476
Jiang-HA [27]        0.6109   0.1579   0.0454   0.3631
Shen-SR [14]         0.7259   0.1223   0.0240   0.2454
Shen-SRC [14]        –        –        0.0204   –
Zhao et al. [4]      –        –        0.0250   –
Gehler et al. [13]   0.7748   0.0985   0.0244   0.2544
Serra et al. [11]    0.7862   0.0834   0.0340   0.2958
Bell et al. [12]     0.7229   0.1100   0.0337   0.2763
Li et al. [30]       –        –        0.0190   –
Chang et al. [19]    –        –        0.0229   –
SIRFS [15]           0…

Our method achieves the best performance. Fig. 12 gives some concrete examples. The most remarkable advantage of our method is that it can recover the reflectance under deep shadows. One reason is that we can cluster the pixels with the same reflectance together on the UV shadow-free plane, no matter how dramatically the shading changes. Another reason is that our model fuses the estimates from different methods by selecting the optimal one exclusively, which avoids smoothing the shading edges with the other estimates. Clustering-based methods, including Gehler et al. [13], Garces et al. [17], and Bell et al. [12], are sensitive to the changes of intensity and color caused by shadows. The edge-based method of Li et al. [30] tends to assign large gradients to reflectance changes, which degrades at sharp shadow edges (e.g., those on the body of the deer). The methods of Gehler et al. [13] and Li et al. [30] smooth the shading extensively, leaving residuals of shadows in the reflectance (e.g., the tea bag). SIRFS [15] smoothes the surfaces, which may generate an overly smooth shading (e.g., the frog).

Another advantage is that our method can recover the global shading robustly. The main reason is that the clustering-based methods BO and BOB capture the shading orders between distant pixels effectively. Edge-based methods cannot reliably recover the relative shading between unconnected parts (e.g., the shadings recovered by Li et al. [30] are inconsistent between the front and the back of the turtle). Another reason is that BOB can handle the areas where the shading and the reflectance change simultaneously (e.g., the mouth and the head of the frog).

1. The method SIRFS is evaluated on the images of cup2, deer, frog2, paper2, raccoon, sun, teabag1 and turtle, while the other images are used for training. The results of Bell et al. [12] are obtained by relaxing the constraints on the absolute values of the shading and removing the intensity from the features for clustering the reflectance. Otherwise the deep shadows would be mistaken to be black and clustered into individual categories.
Our method preserves the subtle variations of reflectance (e.g., the yellow and orange regions of the tea bag), since the intra-cluster variations in the UV plane (Fig. 2c) are represented in the recovered reflectance. In contrast, some clustering-based methods, such as Garces et al. [17] and Bell et al. [12], unify the reflectance of the pixels of each cluster. This operation often leads to block artifacts (e.g., the tea bag). Our method did not handle the feet of the deer well. The black feet and the white legs are both achromatic, so they fall into the same cluster on the shadow-free plane. The image blur further reduces the effectiveness of the Reflectance Change feature (Section 4), so the difference between black and white is not preserved in the reflectance.

Results on Natural Images
The quantitative results on the IIW dataset are shown in Table 3. Our method achieves results comparable to the state of the art. It should be mentioned that WHDR_10% cannot reflect the superiority of our method in inferring the shading orders between pixels with different chromaticity, since only pixels with similar chromaticity are compared [12]. Further, the textured pixels are excluded from the evaluation, so the ability to preserve the texture of the reflectance is untested. Actually, both of the top-performing methods [16] and [12] remove the texture from the reflectance. For a fair comparison, we report our result that uses the edge-preserving smoothing of [16] to preprocess the input image. Without smoothing, the WHDR_10% increases by about 3.7%.

The IIW dataset is much more difficult than the MIT Intrinsic Images dataset. The image in the top row of Fig. 13 is comprised of different kinds of objects, some of which are highly textured (e.g., the wall with blue painting). Our method preserves the textures² much better than the other methods in comparison. Another difficulty comes from the intensive specular reflections (e.g., the wall in the top row of Fig. 13). Our method puts the specular reflections into the reflectance, while some other methods, such as Zhao et al. [4] and Garces et al. [17], put them into the shading.

The greatest challenge of the IIW dataset comes from the coexistence of multiple direct illuminants in the same scene. In the bottom row of Fig. 13, the areas in the red boxes of the input image are covered by lights of different colors. This situation does not satisfy the bi-illuminant assumption of the BIDR model [3]. No unique brightening direction exists for the whole image, and the brightening direction obtained from entropy minimization (Section 3.1) eliminates the difference improperly. This causes two problems for our method: (1) the error of clustering will increase; and (2) the color of the recovered reflectance will be twisted. The first problem is shared by all clustering-based methods such as Garces et al. [17] and Bell et al. [12]. The second problem is common, since all the methods in comparison assume a single (direct) illumination. Despite these problems, our model still recovers a globally consistent shading.

Discussion. Scene-SIRFS addressed the mixture of illuminations by a soft segmentation of the image with respect to the "ownership" of illuminants [40]. But the segmentation is not easy, since the changes of illuminations are often slower than the changes of reflectance.

2. We do not use the edge-preserving smoothing to produce the qualitative results in Fig. 13.
Beigpour and van de Weijer [35] proposed the Multi-illuminant Dichromatic Reflection (MIDR) model to account for secondary illuminants. However, in practice they only dealt with the case of two direct illuminants irradiating a single-colored object. We may consider extending the BIDR model to incorporate multiple direct illuminants. Accordingly, there would be multiple brightening directions, and the brightness would have to be extended to a mixture of sub-coordinates. This would make the problem much more complex.

We further test on the outdoor images from the UIUC shadow dataset [45]. Fig. 14 shows three examples. The ambient illuminant is usually the blue sky, so the shadowed areas are more bluish than the lit areas. We compare to the methods of Jiang-HA [27] and Gehler et al. [13]. We also compare to the region-pair-based shadow removal method proposed by Guo et al. [45]³. Our model recovers the reflectance by lighting the dark pixels along the yellowish brightening direction, while the other intrinsic decomposition methods often fail to recover their colors. The method of Guo et al. [45] is unable to handle thin areas due to the limited resolution of the image segmentation (e.g., the fingers in the last image of Fig. 14).

Evaluation on RGB-Depth Images
We test on the RGB-Depth images from the NYU-Depth V2 dataset. We compare to the methods that take RGB-Depth images [40][6][39] or videos [5] as input⁴. Typical examples are shown in Fig. 15. Our method successfully recovers globally consistent shadings and preserves the textures of the reflectance. In particular, our method is the only one that recovers the smooth shading over the painting in the first row of Fig. 15. In comparison, the method of Lee et al. [5] did not produce consistent shadings between surfaces in different orientations: in their recovered reflectance of the first image in Fig. 15, the backrest of the sofa and the walls are much darker than the seat of the sofa and the floor. The method of Barron and Malik [40] successfully captured the shapes of curved surfaces (e.g., the sofa in the first image of Fig. 15) but not those of objects with sharp boundaries (e.g., the cabinet and the bed in the second image of Fig. 15). The method of Chen and Koltun [6] achieved good smoothness of shading while keeping the sharp surface edges at the same time. However, it often failed to recover the shading orders between objects with different colors (e.g., the blue pillow and the sofa in the first image of Fig. 15). The method of Jeon et al. [39] preserves the textures in the reflectance very well (e.g., the floor in the second image of Fig. 15), but it tends to reduce the difference of shading between surfaces with similar orientations (e.g., the walls and the cabinet in the second image of Fig. 15).

CONCLUSIONS AND DISCUSSIONS
We proposed shading orders for intrinsic image decomposition. The shading orders capture not only adjacent relations but also distant connections, which overcomes the limitation of edge-based methods that lack the large-scale structure of the shading. The shading orders can be measured by several individual methods, each of which gives a reasonable estimate under certain assumptions about the scene. Jointly utilizing these methods captures various kinds of priors and observations of the scene. We developed the CSF algorithm to combine the pairwise orders measured by different methods.
CSF infers a global order by selecting the confident and consistent pairwise orders and solving their conflicts through AE. The local competition removes unreliable measurements from the fusion, so the results are much cleaner than a weighted sum of different estimates. This is essential for keeping sharp shadow edges and textures. The sparsity-driven neighbor selection further reduces the outliers of the local measurements. Experimental results demonstrate that our model is suitable for various indoor and outdoor scenes with noticeable ambient illuminants.

However, the BIDR model cannot handle multiple direct illuminants, interreflections, or specular reflections. We need to generalize the BIDR model and the UVB color space for more realistic scenes. Highly textured images are still quite challenging for clustering-based methods, since their reflectance often changes irregularly and thus cannot be clustered properly. Jeon et al. proposed to separate the texture layer before decomposing the shading and reflectance [39], which is a promising way to ease the clustering.

APPENDIX
RENDERING THE SHADING MAP
Fig. 16 shows the rendered shading map of an RGB-Depth image. In the camera coordinate system, we draw a "gray surface", taking all the pixels as vertices. Both the color of the surface and the illuminant are set to [1, 1, 1]^T, and the reflection of the surface is set to be diffuse only (i.e., without any specular reflection). Here we assume that there is only one direct illuminant for each image, while the ambient illumination is set to 0. The illuminant is placed inside the room box, and the range of the room box is set to the scope of all the observable pixels. In particular, we expand the range of the z dimension (orthogonal to the image plane) to the negative part of the coordinate, since the light may be placed behind the camera. The surface is rendered with the MATLAB surfl function, and the output intensities of the vertices form a shading map. The bottom row of Fig. 16 shows the rendering results under several sampled illuminants. We can see that some of them are close to the real shading map of the scene, while the others are quite different.

The similarity between the rendered shading γ̃(L_d) and the ground-truth shading brightness S^b is measured by their category-wise correlation:

Sim(γ̃(L_d), S^b) = Σ_{g∈G} (n_g/n) Cor(γ̃_g(L_d), e^{S^b_g}) = Σ_{g∈G} (n_g/n) Cor(γ̃_g(L_d), e^{I^b_g}),   (28)

where G is the set of reflectance categories, n is the number of pixels, and Cor is the correlation between two variables. The subscript g denotes the subset of pixels belonging to the g-th category. Here we utilize the linear relationship between the brightness I^b and the shading brightness S^b based on (6). We select the set of candidate illuminants L = {L_d | Sim(γ̃(L_d), S^b) > 0.2}.
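To make this concrete, here is a minimal numpy sketch of the diffuse rendering and of the category-wise correlation in (28). It is a simplification under stated assumptions: occlusion testing (and hence the shadow boundaries a surfl-style renderer would produce) is omitted, and the points, normals, light position, brightness, and category labels are synthetic stand-ins:

    import numpy as np

    def render_direct_shading(P, N, light):
        # Diffuse (Lambertian) shading of a point light at 3D points P with
        # unit normals N; occlusion is not tested in this sketch.
        d = light[None, :] - P
        d /= np.linalg.norm(d, axis=1, keepdims=True)
        return np.clip(np.sum(N * d, axis=1), 0.0, None)

    def similarity(gamma, I_b, labels):
        # Category-wise correlation between rendered shading and exp(I^b), Eq. (28)
        sim, n = 0.0, len(labels)
        for g in np.unique(labels):
            m = labels == g
            x, y = gamma[m], np.exp(I_b[m])
            if x.std() < 1e-9 or y.std() < 1e-9:
                continue                      # correlation undefined; skip category
            sim += m.sum() / n * np.corrcoef(x, y)[0, 1]
        return sim

    rng = np.random.default_rng(0)
    P = rng.random((200, 3))
    N = rng.normal(size=(200, 3)); N /= np.linalg.norm(N, axis=1, keepdims=True)
    labels = rng.integers(0, 3, 200)
    gamma = render_direct_shading(P, N, np.array([0.5, 0.5, -1.0]))
    # keep the light as a candidate if similarity(...) > 0.2
    print(similarity(gamma, np.log(gamma + 0.1), labels))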
ADMM FOR OPTIMIZING THE WEIGHTS W
Eqn. (23) can be solved for each pixel p individually, since the matrix W can be decomposed into a series of vectors W_{p,·}, and so can E and C̃. For simplicity, we omit the subscript p of all the matrices from now on, and denote d = D_p. We reformulate (23) into an equivalent problem:

arg min_{W,X,Y} g_1(W) + g_2(X) + g_3(Y)
s.t. C̃^T W = d, W = X = Y,   (29)

where

g_1(W) = E^T W + (α_2/2)‖W‖₂²,
g_2(X) = α_1‖X‖₁,
g_3(Y) = 0 if Y_q ≥ 0 for all q, and ∞ otherwise.   (30)

By introducing the Lagrange multipliers λ, Γ_1, and Γ_2, we obtain the following augmented Lagrangian [50]:

L(W, X, Y, λ, Γ_1, Γ_2) = g_1(W) + g_2(X) + g_3(Y) + λ(d − C̃^T W) + Γ_1^T(W − X) + (ρ/2)‖W − X‖₂² + Γ_2^T(W − Y) + (ρ/2)‖W − Y‖₂²,   (31)

where ρ is a scaling parameter. We initialize W, X, and Y with 1_n, while λ = 2 and Γ_1 = Γ_2 = 1_n. Then we update them iteratively as follows:

W^{k+1} = (ρX^k + ρY^k − E + λ^k C̃ − Γ_1^k − Γ_2^k) / (α_2 + 2ρ),

X^{k+1} = { W^{k+1} + Γ_1^k/ρ − α_1/ρ,   if W^{k+1} + Γ_1^k/ρ > α_1/ρ
          { 0,                           if |W^{k+1} + Γ_1^k/ρ| ≤ α_1/ρ
          { W^{k+1} + Γ_1^k/ρ + α_1/ρ,   if W^{k+1} + Γ_1^k/ρ < −α_1/ρ

Y^{k+1} = (W^{k+1} + Γ_2^k/ρ)_+,

λ^{k+1} = λ^k + η_1(d − C̃^T W^{k+1}),

Γ_1^{k+1} = Γ_1^k + η_2(W^{k+1} − X^{k+1}),

Γ_2^{k+1} = Γ_2^k + η_3(W^{k+1} − Y^{k+1}),   (32)

where the X update is a soft thresholding, (·)_+ truncates all the elements of a vector to be non-negative, and η_1, η_2, and η_3 are step sizes. We terminate the iteration when ‖W − X‖₁ + ‖W − Y‖₁ is less than a threshold T_W and |d − C̃^T W| is less than a threshold T_d. In our implementation, we set ρ, η_1, η_2, and η_3 to 5, 0.05, 1, and 1, respectively. A toy numerical sketch of these updates is given at the end of this appendix.

Fig. 17 shows the results of our method on the images of the MIT Intrinsic Images dataset other than those that appear in the paper. Figs. 18, 19, and 20 present several examples from the IIW dataset. Fig. 21 shows more results of our method on the UIUC Shadow Removal dataset. Fig. 22 shows more results of our method on the NYU-Depth V2 dataset, where we compare our method to several recent algorithms, including Bell et al. [12], Zhao et al. [4], Garces et al. [17], Lee et al. [5], Barron and Malik [40], Chen and Koltun [6], and Jeon et al. [39].

Figure 23 shows the colors of the shading in images from the MIT Intrinsic Images dataset. We can see that most of the shading images are nearly achromatic. The reason is that the images are captured in a controlled environment, where the ambient illuminations are largely suppressed by painting the background black. According to Equation (3), when the ambient illumination is negligible, the shading will be nearly achromatic, no matter what the color of the direct illumination is. However, for the frog in Figure 23, the shading is slightly chromatic. Figure 24 shows the colors of the shading in natural indoor and outdoor scenes. Indoor scenes often have complex illuminations, so the shading colors vary a lot from image to image, and even from place to place in the same image. In comparison, the shading colors in outdoor scenes are more regular. In particular, the shadows in outdoor scenes are often bluish, since the ambient light is often the blue sky.
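The sketch below runs the updates of (32) on random stand-in data for a single pixel's weight vector; E, C̃, and d are not computed from an image here, and the parameters follow the values given above:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    E, C_til = rng.random(n), rng.random(n)        # stand-ins for E and C~
    d = 0.5 * C_til.sum()                          # stand-in degree D_p
    alpha1, alpha2, rho = 1.0, 2.0, 5.0
    eta1, eta2, eta3 = 0.05, 1.0, 1.0

    W = np.ones(n); X = W.copy(); Y = W.copy()     # initialization with 1_n
    lam, G1, G2 = 2.0, np.ones(n), np.ones(n)
    for _ in range(500):
        W = (rho * X + rho * Y - E + lam * C_til - G1 - G2) / (alpha2 + 2 * rho)
        V = W + G1 / rho
        X = np.sign(V) * np.maximum(np.abs(V) - alpha1 / rho, 0.0)  # soft threshold
        Y = np.maximum(W + G2 / rho, 0.0)                           # (.)_+ projection
        lam += eta1 * (d - C_til @ W)                               # dual ascent
        G1 += eta2 * (W - X)
        G2 += eta3 * (W - Y)
    print(abs(d - C_til @ W), np.abs(W - Y).sum())                  # residuals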
9,123
1810.09706
2896121525
We address the problem of decomposing a single image into reflectance and shading. The difficulty comes from the fact that the components of the image (the surface albedo, the direct illumination, and the ambient illumination) are heavily coupled in the observed image. We propose to infer the shading by ordering pixels by their relative brightness, without knowing the absolute values of the image components beforehand. The pairwise shading orders are estimated in two ways: brightness order and low-order fittings of the local shading field. The brightness order is a non-local measure, which can be applied to any pair of pixels, including those whose reflectance and shading are both different. The low-order fittings are used for pixel pairs within local regions of smooth shading. Together, they can capture both the global order structure and the local variations of the shading. We propose Consistency-aware Selective Fusion (CSF) to integrate the pairwise orders into a globally consistent order. The iterative selection process solves the conflicts between the pairwise orders obtained by different estimation methods. Inconsistent or unreliable pairwise orders are automatically excluded from the fusion to avoid polluting the global order. Experiments on the MIT Intrinsic Images dataset show that the proposed model is effective at recovering the shading, including deep shadows. Our model also works well on natural images from the IIW dataset, the UIUC Shadow dataset, and the NYU-Depth dataset, where the colors of the direct lights and the ambient lights are quite different.
Many recent methods address intrinsic components beyond shading and reflectance, including specular reflection @cite_20 , shape and illumination @cite_48 , coarse-scale and detailed shading @cite_21 , direct and indirect irradiance @cite_26 @cite_43 , illuminant color and sensor characteristics @cite_37 , and texture @cite_36 . These detailed decompositions give a more comprehensive analysis of the scene, but they also make the problem much more complex. New constraints have also been formed based on the geometric information of RGB-Depth images @cite_9 @cite_5 @cite_26 @cite_36 . We use the depth map to render shadow maps that can determine the positions of shading edges. More recently, intrinsic video techniques have extended the research to videos @cite_6 @cite_3 @cite_42 .
{ "abstract": [ "Intrinsic characterization of scenes is often the best way to overcome the illumination variability artifacts that complicate most computer vision problems, from 3D reconstruction to object or material recognition. This paper examines the deficiency of existing intrinsic image models to accurately account for the effects of illuminant color and sensor characteristics in the estimation of intrinsic images and presents a generic framework which incorporates insights from color constancy research to the intrinsic image decomposition problem. The proposed mathematical formulation includes information about the color of the illuminant and the effects of the camera sensors, both of which modify the observed color of the reflectance of the objects in the scene during the acquisition process. By modeling these effects, we get a \"truly intrinsic\" reflectance image, which we call absolute reflectance, which is invariant to changes of illuminant or camera sensors. This model allows us to represent a wide range of intrinsic image decompositions depending on the specific assumptions on the geometric properties of the scene configuration and the spectral properties of the light source and the acquisition system, thus unifying previous models in a single general framework. We demonstrate that even partial information about sensors improves significantly the estimated reflectance images, thus making our method applicable for a wide range of sensors. We validate our general intrinsic image framework experimentally with both synthetic data and natural images.", "We present a model for intrinsic decomposition of RGB-D images. Our approach analyzes a single RGB-D image and estimates albedo and shading fields that explain the input. To disambiguate the problem, our model estimates a number of components that jointly account for the reconstructed shading. By decomposing the shading field, we can build in assumptions about image formation that help distinguish reflectance variation from shading. These assumptions are expressed as simple nonlocal regularizers. We evaluate the model on real-world images and on a challenging synthetic dataset. The experimental results demonstrate that the presented approach outperforms prior models for intrinsic decomposition of RGB-D images.", "While intrinsic image decomposition has been studied extensively during the past a few decades, it is still a challenging problem. This is partly because commonly used constraints on shading and reflectance are often too restrictive to capture an important property of natural images, i.e., rich textures. In this paper, we propose a novel image model for handling textures in intrinsic image decomposition, which enables us to produce high quality results even with simple constraints. We also propose a novel constraint based on surface normals obtained from an RGB-D image. Assuming Lambertian surfaces, we formulate the constraint based on a locally linear embedding framework to promote local and global consistency on the shading layer. We demonstrate that combining the novel texture-aware image model and the novel surface normal based constraint can produce superior results to existing approaches.", "We present SIRFS (shape, illumination, and reflectance from shading), the first unified model for recovering shape, chromatic illumination, and reflectance from a single image. Our model is an extension of our previous work [1], which addressed the achromatic version of this problem. 
Dealing with color requires a modified problem formulation, novel priors on reflectance and illumination, and a new optimization scheme for dealing with the resulting inference problem. Our approach outperforms all previously published algorithms for intrinsic image decomposition and shape-from-shading on the MIT intrinsic images dataset [1, 2] and on our own \"naturally\" illuminated version of that dataset.", "", "", "Separating a photograph into its reflectance and illumination intrinsic images is a fundamentally ambiguous problem, and state-of-the-art algorithms combine sophisticated reflectance and illumination priors with user annotations to create plausible results. However, these algorithms cannot be easily extended to videos for two reasons: first, naively applying algorithms designed for single images to videos produce results that are temporally incoherent; second, effectively specifying user annotations for a video requires interactive feedback, and current approaches are orders of magnitudes too slow to support this. We introduce a fast and temporally consistent algorithm to decompose video sequences into their reflectance and illumination components. Our algorithm uses a hybrid e2ep formulation that separates image gradients into smooth illumination and sparse reflectance gradients using look-up tables. We use a multi-scale parallelized solver to reconstruct the reflectance and illumination from these gradients while enforcing spatial and temporal reflectance constraints and user annotations. We demonstrate that our algorithm automatically produces reasonable results, that can be interactively refined by users, at rates that are two orders of magnitude faster than existing tools, to produce high-quality decompositions for challenging real-world video sequences. We also show how these decompositions can be used for a number of video editing applications including recoloring, retexturing, illumination editing, and lighting-aware compositing.", "", "We present a method to decompose a video into its intrinsic components of reflectance and shading, plus a number of related example applications in video editing such as segmentation, stylization, material editing, recolorization and color transfer. Intrinsic decomposition is an ill-posed problem, which becomes even more challenging in the case of video due to the need for temporal coherence and the potentially large memory requirements of a global approach. Additionally, user interaction should be kept to a minimum in order to ensure efficiency. We propose a probabilistic approach, formulating a Bayesian Maximum a Posteriori problem to drive the propagation of clustered reflectance values from the first frame, and defining additional constraints as priors on the reflectance and shading. We explicitly leverage temporal information in the video by building a causal-anticausal, coarse-to-fine iterative scheme, and by relying on optical flow information. We impose no restrictions on the input video, and show examples representing a varied range of difficult cases. Our method is the first one designed explicitly for video; moreover, it naturally ensures temporal consistency, and compares favorably against the state of the art in this regard.", "", "In this paper we extend the “shape, illumination and reflectance from shading” (SIRFS) model [3, 4], which recovers intrinsic scene properties from a single image. 
Though SIRFS performs well on images of segmented objects, it performs poorly on images of natural scenes, which contain occlusion and spatially-varying illumination. We therefore present Scene-SIRFS, a generalization of SIRFS in which we have a mixture of shapes and a mixture of illuminations, and those mixture components are embedded in a “soft” segmentation of the input image. We additionally use the noisy depth maps provided by RGB-D sensors (in this case, the Kinect) to improve shape estimation. Our model takes as input a single RGB-D image and produces as output an improved depth map, a set of surface normals, a reflectance image, a shading image, and a spatially varying model of illumination. The output of our model can be used for graphics applications, or for any application involving RGB-D images.", "" ], "cite_N": [ "@cite_37", "@cite_26", "@cite_36", "@cite_48", "@cite_9", "@cite_21", "@cite_42", "@cite_6", "@cite_3", "@cite_43", "@cite_5", "@cite_20" ], "mid": [ "1990993885", "2101856619", "2581345", "1511909101", "", "", "2146721395", "", "1994246617", "", "2117751343", "" ] }
Consistency-aware Shading Orders Selective Fusion for Intrinsic Image Decomposition
An image is the result of several factors, including the material reflectance, the surface's shape, the positions and colors of the illuminants, and the camera sensor responses. Barrow and Tenenbaum [1] proposed to decompose an image into intrinsic images, each of which captures a distinct aspect of the scene. The most common outputs are the shading and the reflectance. The shading captures the strength of the incident illumination at each pixel, while the reflectance shows the surface albedo. The shading is widely used to reconstruct the shapes of surfaces [2]. The albedo is invariant to illumination and geometry, so it is a robust feature for object classification and image segmentation.

In this paper we aim to recover the shading and the reflectance from a single image. This is an underconstrained problem. The absolute values of the unknown variables cannot be measured directly, since they are highly coupled in the observed image. Instead, we measure the relative magnitudes of the shading across pixels to recover its essential structure, and determine the absolute values later by boundary conditions. We regard the shading as a global ranking of the pixels in order from dark to bright. The boundary conditions are simply that the start points are fully shadowed pixels, while the end points are fully lit ones. The global shading is inferred from pairwise shading orders, which are signed differences between the shading of pixels. The flow chart is shown in Fig. 1.

We estimate the shading orders in the UVB color space, which is spanned by a 2D shadow-free plane [3] and a brightness dimension. This color space has two major properties:
• Pixels with the same reflectance cluster together on the shadow-free plane.
• The brightness of the image is the sum of the shading brightness and the reflectance brightness.
Based on these properties, we can use clustering-based methods to capture the global order structure of the shading. For pixels with the same reflectance, the shading orders can be obtained directly from the difference of the image brightness. For pixels with different reflectance, the shading orders can be calculated in a similar way, but the bias coming from the difference of the reflectance brightness should be compensated. We choose the optimal biases between different clusters of reflectance, which make the shading constant across reflectance boundaries excluding shading edges. The cluster-wise biases make it possible to handle pixel pairs whose reflectance and shading are both different.

We also model the local shading by low-order fittings to predict the shading orders between nearby pixels. Different models can capture the geometric structure of different types of surfaces. For example, a linear model can describe the shading of a smooth surface.

The estimation methods above are complementary. The clustering-based methods can be applied to any pair of pixels, in particular distantly located pixels, but their accuracy depends on the quality of the clustering. In contrast, the low-order fittings do not rely on clustering at all, but they capture only the local structure, and the fitting errors are large for irregular surfaces.

The pairwise shading orders are combined into a global shading via Consistency-aware Selective Fusion (CSF).

Fig. 1: The flow chart of our method. Firstly, the image is transformed into the UVB color space.
Based on the brightness and the clustering results over chromaticity, different methods m are used to estimate the shading orders O(p, q, m) between each pair of pixels p and q. We also evaluate the reliability C(p, q, m) of the estimates based on the image features. Then we use CSF to infer the global shading. CSF repeats two operations: Local Selection, i.e., selecting the estimation methods and the weights for each pair of pixels under the guidance of the consistency between the pairwise orders and the global shading; and Angular Embedding (AE), which infers the globally consistent orders from the pairwise estimates. At last, we transform the global shading back into the RGB space.

The major challenge is avoiding inconsistency between estimates from different methods. CSF identifies a sparse set of reliable and consistent pairwise shading orders and fuses them within a unified optimization framework. For each pair of pixels, CSF selects the most reliable estimate exclusively instead of a weighted summation of different estimates [4][5][6]. This strategy prevents unreliable estimates from polluting the results. We evaluate the reliability of the pairwise orders using not only the image features but also their consistency with the global order. Therefore, estimates that are incompatible with the majority will be suppressed, even when their preconditions happen to be satisfied by the image features. Forcing sparsity of the pairwise connections further reduces unreliable measurements.

The global order is obtained from Angular Embedding (AE) [7], which embeds the pixels onto a unit circle in the complex plane. AE uses a complex matrix to encode the pairwise orders and their reliability simultaneously. Moreover, AE applies spectral decomposition to get a near-global optimal solution that best matches the reliable pairwise orders. After locating the darkest points on the unit circle, the absolute values of the shading can be determined.

IMAGE FORMATION
An image with only body reflection can be modeled as [3]

I^i(p) = R_b^i(p) (γ(p) L_d^i + L_a^i),   (1)

where the superscript i indexes the RGB channels and p indexes the pixel. The body reflectance R_b denotes the diffuse reflection under white illumination. The three-dimensional vectors L_d and L_a are the direct illuminant and the ambient illuminant, respectively. γ(p) ∈ [0, 1] is the direct shading, i.e., the proportion of the direct illumination reaching the surface. BIDR assumes that the direct and ambient illuminants are constant across the materials [3]. When there are multiple direct illuminants with the same color, their effects can be added.

Inspired by the shadow removal problem [45], we define the reflectance to be the image lit by the full direct illuminant together with the ambient illuminant:

R^i(p) = R_b^i(p) (L_d^i + L_a^i).   (2)

Accordingly, the shading is defined to be

S^i(p) = I^i(p) / R^i(p) = (γ(p) L_d^i + L_a^i) / (L_d^i + L_a^i).   (3)

For a fully lit area (i.e., γ = 1), the shading reaches its maximum. For a fully shadowed area (i.e., γ(p) = 0), the shading is S(p) = L_a / (L_d + L_a). In natural scenes, the direct lights are always much stronger than the ambient lights, so the shading of fully shadowed areas should be a small positive value. The color of the shading in (3) does not have a definite physical meaning, so we show the shading in grayscale for all the figures in this paper, following [24] and [12]. Readers interested in the color of the shading are referred to the supplementary material for several examples.
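The relations (1)-(3) are easy to verify numerically. Below is a small sketch for a single pixel; the illuminants and the albedo are arbitrary stand-in values:

    import numpy as np

    L_d = np.array([0.9, 0.8, 0.6])    # direct illuminant (stand-in)
    L_a = np.array([0.1, 0.1, 0.2])    # ambient illuminant (stand-in)
    R_b = np.array([0.5, 0.3, 0.7])    # body reflectance of one pixel (stand-in)
    gamma = 0.4                        # fraction of direct light reaching the pixel

    I = R_b * (gamma * L_d + L_a)      # image formation, Eq. (1)
    R = R_b * (L_d + L_a)              # reflectance, Eq. (2)
    S = I / R                          # shading, Eq. (3)
    assert np.allclose(S, (gamma * L_d + L_a) / (L_d + L_a))
    print(S)                           # small positive values as gamma -> 0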
SHADING ORDERS FROM BRIGHTNESS
We infer the shading orders in the UVB color space. We will show that the image brightness has a linear relation to the log of the shading. Therefore, pairwise shading orders can be estimated by either brightness orders or low-order fittings of the local shading.

The UVB Color Space
The BIDR model delivers a 2D shadow-free plane UV [3]. The normal n of the UV plane points from the shadowed pixels to the lit ones sharing the same body reflectance R_b (see Fig. 2b for an example). We call the normal n the brightening direction. Formally, the brightening direction is defined by

n = (1/K) [log I(p)|_{γ(p)=1} − log I(q)|_{γ(q)=0}] = (1/K) log(L_d/L_a + 1),   (4)

where the pixels p and q satisfy R_b(p) = R_b(q), and K is a normalization factor. From (4) we can see that the brightening direction depends only on the ratio of the illuminants, so all the pixels share the same brightening direction (Fig. 2b). If the ratio of the illuminants is unknown, we can search for the most probable brightening direction, which minimizes the entropy of the pixels on the UV plane [3][46]. Since pixels with similar reflectance R_b stay close together on the UV plane (Fig. 2c), the entropy of the distribution of the pixels is minimized.

Let u and v be any pair of basis vectors on the UV plane. Then we have a rotation matrix H = [u, v, n] that transforms the log RGB space into a new color space UVB:

[I^u(p), I^v(p), I^b(p)] = log I(p) H.   (5)

The dimension I^b captures the intensity of the image, and we call it the brightness. According to (3) and (5), the brightness of the image can be factorized as follows:

I^b(p) = log S(p) · n + log R(p) · n = S^b(p) + R^b(p).   (6)

Here we used the fact that log I(p) = log R(p) + log S(p). The shading brightness S^b(p) = log S(p) · n is a linear function of log S. The reflectance brightness R^b(p) = log R(p) · n can be regarded as a bias determined by the body reflectance R_b. This linear relationship is the basis for estimating the shading orders in Section 3.2.

According to (5), the shading in the UVB space is [S^u(p), S^v(p), S^b(p)] = log S(p) H. Note that S^u and S^v are nearly zero, since the UV plane is shadow-free [3]. The only unknown dimension is the shading brightness S^b, and we will infer it from the pairwise shading orders in Section 5. Once we obtain S^b, the shading in the RGB space can be recovered by

S(p) = exp([S^u(p), S^v(p), S^b(p)] H^{−1}),   (7)

where exp denotes the element-wise exponential. Note that the rotation matrix H is always invertible. The reflectance can then be obtained from R(p) = I(p)/S(p).
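A minimal sketch of the change of coordinates in (4)-(7). The illuminants are synthetic stand-ins, and the basis u, v is obtained by completing n to an orthonormal frame (any orthonormal basis of the UV plane would do):

    import numpy as np

    L_d = np.array([0.9, 0.8, 0.6])
    L_a = np.array([0.1, 0.1, 0.2])
    n = np.log(L_d / L_a + 1.0)
    n /= np.linalg.norm(n)                   # brightening direction, Eq. (4) up to K

    u = np.cross(n, [0.0, 0.0, 1.0]); u /= np.linalg.norm(u)
    v = np.cross(n, u)                       # [u, v, n] is an orthonormal frame
    H = np.stack([u, v, n], axis=1)          # rotation into UVB, columns u, v, n

    I = np.array([0.20, 0.15, 0.30])         # one RGB pixel (stand-in)
    I_u, I_v, I_b = np.log(I) @ H            # Eq. (5); I_b = S^b + R^b by Eq. (6)

    S = np.exp(np.array([0.0, 0.0, -0.5]) @ np.linalg.inv(H))  # Eq. (7), toy S^b
    print(I_b, S)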
Measuring Pairwise Shading Orders
The shading order between pixels p and q is defined to be the signed difference between their shading brightnesses, i.e., O(p, q) = S^b(p) − S^b(q). We propose four methods M = {BO, BOB, FS, SS} to estimate the shading orders. These methods are illustrated in Fig. 3.

Brightness Order (BO). According to (6), if two pixels have the same reflectance brightness R^b or, equivalently, the same body reflectance R_b, their shading order equals their difference of brightnesses:

O(p, q, BO) = I^b(p) − I^b(q), if R_b(p) = R_b(q).   (8)

Brightness Order minus Bias (BOB). For pixels with different body reflectance, the bias of the reflectance brightness ∆R^b should be compensated as follows:

O(p, r, BOB) = I^b(p) − I^b(r) − ∆R^b(p, r), if R_b(p) ≠ R_b(r),   (9)

where ∆R^b(p, r) = R^b(p) − R^b(r) is the bias. The process of calculating the bias is described in Section 3.3.

Fig. 3: Calculating the shading orders O from the brightness I^b. We align the curves of the brightness I^b and the ground-truth shading brightness S^b so that I^b(p) = S^b(p). The red dashed curve is the brightness after compensating the bias of the reflectance brightness ∆R^b. The green masks cover the green pixels, while the uncovered ones are white.

BO and BOB together can estimate the shading order between any two pixels. For nearby pixels, we can also fit their shading brightness by low-order functions. This is based on the assumption of local smoothness of the shading, which is valid for most parts of natural images.

First-order Smoothness (FS). For flat surfaces, the normal directions, and thus the incident angles, change little. According to the cosine law of Lambertian reflection, the variation of the shading brightness will be small, and the first-order derivative of the shading brightness should be almost zero if there are no shadow edges. Consequently, adjacent pixels will have nearly identical shading brightness:

O(p, s, FS) = 0, if s ∈ N(p) and ∂I^b(p)/∂p ≈ 0,   (10)

where N(p) is the neighborhood of p, and ∂I^b(p)/∂p is the derivative of I^b evaluated at p.

Second-order Smoothness (SS). For smooth surfaces, the surface normal rotates smoothly. As a result, the shading brightness changes smoothly. We assume that the second-order derivative of the shading is close to zero, so we can fit the local shading by a linear function. We further assume that adjacent pixels share the same body reflectance, so the slope of the linear model is ∂S^b(p)/∂p = ∂I^b(p)/∂p. The shading order between two nearby pixels is then

O(p, t, SS) = (∂I^b(p)/∂p) · (p − t), if t ∈ N(p) and ∂²I^b(p)/∂p² ≈ 0,   (11)

where p − t is the directed spatial distance between p and t. In practice, we calculate the derivative and the spatial distance in the horizontal and vertical directions separately.

The preconditions of the methods above are not mutually exclusive, so different methods may be applicable to the same pair of pixels. The preconditions together cover all possible situations, so we can find at least one suitable method for most pairs of pixels. The redundancy and completeness of these methods are the basis for robust estimates of the shading orders.
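Once I^b, the cluster labels, and the bias table ∆Ř^b of Section 3.3 are available, each of the four estimators reduces to a line or two of code. A schematic 1-D sketch along a scanline, with stand-in labels and biases:

    import numpy as np

    def order_BO(Ib, p, q):                   # Eq. (8): same body reflectance
        return Ib[p] - Ib[q]

    def order_BOB(Ib, labels, dRb, p, r):     # Eq. (9): compensate the brightness bias
        return Ib[p] - Ib[r] - dRb[labels[p], labels[r]]

    def order_FS(p, s):                       # Eq. (10): flat surface, zero order
        return 0.0

    def order_SS(Ib, p, t):                   # Eq. (11): locally linear shading
        return np.gradient(Ib)[p] * (p - t)

    Ib = np.linspace(0.0, 1.0, 8)             # toy brightness along a scanline
    labels = np.array([0, 0, 0, 1, 1, 1, 0, 0])
    dRb = np.array([[0.0, -0.3],              # stand-in bias table between clusters
                    [0.3,  0.0]])
    print(order_BO(Ib, 2, 0), order_BOB(Ib, labels, dRb, 3, 2), order_SS(Ib, 2, 3))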
Estimating the Bias of Reflectance Brightness
The biases of the reflectance brightness ∆R^b in (9) are needed to estimate the shading orders between pixels with different body reflectance. The absolute values of the reflectance brightness R^b are unavailable, so we cannot calculate their biases directly. Instead, we cluster the pixels by body reflectance and estimate the biases of the reflectance brightness between different clusters. The local smoothness of shading implies that pixels within a small patch have similar shading brightness. According to (6), the bias of the reflectance brightness between two clusters can therefore be approximated by their difference of image brightness within small patches. The main process is shown in Fig. 4.

The image is divided into dense grids with 10 pixels on each side. For a patch T containing pixels from both categories j and k, the difference of reflectance brightness is calculated by ∆R^b(j, k, T) = Ī^b(j, T) − Ī^b(k, T), where Ī^b(j, T) and Ī^b(k, T) are the median brightnesses of the pixels belonging to categories j and k, respectively. We generate a histogram of the patch-wise measures ∆R^b(j, k, T) and take its highest peak as the estimate ∆Ř^b(j, k), as shown in Fig. 4c. The minorities of the histogram mainly come from patches with shading edges in them (e.g., patches 3 and 4 in Fig. 4b). The reliability F of the estimate is set to be the number of votes from the patches.

When F_{j,k} is 0, categories j and k are not adjacent, and their bias cannot be measured directly. In this case, we resort to their biases with other categories. Taking each reflectance category as a node, we build an undirected graph G = (V, E), where V is the set of nodes and E is the set of edges. The weight of the edge between nodes j and k is set to 1/F_{j,k}, where F_{j,k} is the reliability of ∆Ř^b(j, k) as described before. We can get an estimate of the bias between two nodes by summing the biases along any path connecting them. We further eliminate the multipath effect by extracting the Minimum Spanning Tree (MST) of the graph G. The MST ensures that there is one and only one path between any two nodes, so the relative reflectance brightness Ř^b of each node can be uniquely determined. Meanwhile, the total reliability of the remaining pairwise biases is maximized.

Fig. 4: Estimating the bias of the reflectance brightness between reflectance categories. (a) The cluster map. The symbols j, k, and l stand for 3 reflectance categories. The squares indicate representative patches for estimating the bias of the reflectance brightness between categories j and k; (b) The brightness I^b. The biases obtained from patches 3 and 4 are outliers, since there are shadow edges inside them; (c) The histogram of the patch-wise biases of the reflectance brightness between categories j and k. The peak of the histogram is selected as the result.

The sparsity of the reflectance spectra [47] ensures that the pixels can be clustered into a small number of categories. Since pixels on the shadow-free plane UV are well organized by their body reflectance, we cluster the pixels by a simple k-means. The number of clusters is set to be the number of local maxima in the 2D histogram of I^u and I^v. The bin size of the histogram is empirically set to 0.03.
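A sketch of the patch-voting step and of the MST over categories; the patch size follows the text, while the histogram bin width and the toy vote counts are assumptions:

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    def pairwise_bias(Ib, labels, j, k, patch=10, bin_w=0.05):
        # Histogram voting of patch-wise median-brightness differences (Section 3.3)
        votes = []
        H, W = Ib.shape
        for r in range(0, H - patch + 1, patch):
            for c in range(0, W - patch + 1, patch):
                lb = labels[r:r+patch, c:c+patch]
                bb = Ib[r:r+patch, c:c+patch]
                if (lb == j).any() and (lb == k).any():
                    votes.append(np.median(bb[lb == j]) - np.median(bb[lb == k]))
        if not votes:
            return None, 0                   # F_{j,k} = 0: categories not adjacent
        hist, edges = np.histogram(votes, bins=max(1, int(np.ptp(votes) / bin_w) + 1))
        peak = hist.argmax()
        return 0.5 * (edges[peak] + edges[peak + 1]), int(hist[peak])

    F = np.array([[0., 5., 0.],              # toy vote counts F_{j,k}, 3 categories
                  [5., 0., 2.],
                  [0., 2., 0.]])
    weights = np.where(F > 0, 1.0 / np.where(F > 0, F, 1.0), 0.0)   # edges 1/F
    print(minimum_spanning_tree(weights).toarray())  # unique path between categories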
THE RELIABILITY OF PAIRWISE ORDERS
For each pair of pixels, we obtained several estimates of their shading order by the different methods in Section 3.2. These methods rely on certain assumptions about the scene, which may be invalid for some parts of the image. Therefore, the estimated shading orders may differ from the ground truth. We evaluate the reliability of each estimate by checking whether influential perturbations happened there. The reliability of an estimate is the probability of all its premises being valid, which is calculated by a Noisy-Or model:

C(p, q, m) = ∏_{f∈C_m} (1 − P_f(p, q)), m ∈ M,   (12)

where C_m is the set of perturbations that the method m is not robust to, as listed in Table 1. The probability P_f(p, q) measures how likely the perturbation f occurs around pixels p and q. For an ideal image without any perturbation, all the methods get equally high confidences. Once a perturbation happens, the confidences of the sensitive methods drop.

The occurrences of the perturbations are predicted by image features. Generally, we calculate a distance x between the pair of pixels according to each feature, and translate the distance into a probability by a sigmoid function of the form sigm(x; w) = 2/(1 + e^{−wx}) − 1, where w is a positive weight. The features are described below.

Clustering Error (CE) is the probability that the clustering of the pixels on the shadow-free plane is inaccurate, which is calculated by

P_CE(p, q) = (1 − P_C(p) P_C(q)) · sigm(e_{Ŝ^b}(p, q); w_1),   (13)

where the cluster probability P_C is the likelihood of each pixel belonging to its reflectance category, and e_{Ŝ^b} is the strength of the step edge [48] on the shifted shading brightness Ŝ^b. The first term increases as pixel p or q deviates from the cluster centers. The second term is large when the pixels are improperly categorized or the relative reflectance brightnesses are inaccurately estimated, as shown in Fig. 5c. Here each reflectance category is modeled by a multivariate normal distribution. The shifted shading brightness Ŝ^b is obtained from the brightness I^b minus the relative reflectance brightness Ř^b (Section 3.3), followed by a median filtering.

Local Color Variance (LCV) is defined to be

P_LCV(p, q) = sigm(max(σ(I(p)), σ(I(q))); w_2),   (14)

where σ(I(p)) is the standard deviation of the chromaticities I^u and I^v within the 3×3 window centered at pixel p. Large color variations mainly appear at reflectance boundaries (Figs. 5a and 5c).

Shadow Edges (SE) are caused by occlusions of the direct light. To locate the shadow edges, we render the direct shading γ̃ under uniformly sampled illuminants. The direct shading is similar to the visibility map proposed by Lee et al. [5]. The difference is that they assume the illuminants to be infinitely far away, which is inaccurate for indoor scenes. Instead, we sample the feasible positions of the illuminant within the room box. The probability of a shadow edge between pixels p and q is calculated from their direct shading under the promising illuminants, as follows:

P_SE(p, q) = sigm((1/|L|) Σ_{L_d∈L} |γ̃(L_d, p) − γ̃(L_d, q)|; w_3),   (15)

where L is the set of promising illuminants, and γ̃(L_d, p) is the direct shading at pixel p under illuminant L_d. We select the promising illuminants according to the correlation between the rendered direct shading γ̃ and the brightness I^b; see the supplementary material for details. The Shadow Edges feature is not applicable to RGB-only images, since the geometric layout is needed for rendering the shading map.

Reflectance Change (RC) distinguishes pixels with different chromaticities or intensities, which are assumed to have different reflectance [24][13][17][12]. We calculate the probability of a reflectance change by

P_RC(p, q) = sigm(d_{uv}(p, q); w_4) · sigm(e^b(p, q); w_5),   (16)

where d_{uv} is the geometric distance on the shadow-free plane, and e^b(p, q) is the magnitude of the step edge lying between p and q in the brightness I^b, which aims at distinguishing colors with similar chromaticity but different intensities, especially achromatic ones.
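In code, the feature-to-probability mapping and the Noisy-Or combination of (12) take only a few lines. The sketch below wires up two stand-in perturbation probabilities with toy distances; the spatial-distance weight stands in for ln3/d̄_s:

    import numpy as np

    def sigm(x, w):
        # Sigmoid of Section 4: maps a nonnegative feature distance to [0, 1)
        return 2.0 / (1.0 + np.exp(-w * x)) - 1.0

    def confidence(pert_probs):
        # Noisy-Or of Eq. (12): probability that all premises of a method hold
        return float(np.prod([1.0 - p for p in pert_probs]))

    w4, w5 = np.log(3) / 0.08, np.log(3) / 0.1
    P_RC = sigm(0.05, w4) * sigm(0.02, w5)   # Eq. (16) with toy distances
    P_SD = sigm(12.0, np.log(3) / 30.0)      # spatial distance, toy median 30
    print(confidence([P_RC, P_SD]))          # confidence of one hypothetical method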
Surface Normal Change (SNC) generates shading variation [5][6][39]. We calculate the probability of a surface normal change by

P_SNC(p, q) = sigm(∠(N(p), N(q)); w_6),   (17)

where ∠(N(p), N(q)) is the angle between the surface normals at pixels p and q. The surface normals are derived from the depth map [5]. SNC is unavailable for RGB-only images.

Spatial Distance (SD) is simply the geometric distance between the pixels [6][12]:

P_SD(p, q) = sigm(d_s(p, q); w_7).

For RGB-Depth images, we first calculate the 3D positions of the pixels in camera coordinates and then compute their distances. For RGB-only images, we use the 2D coordinates in the image plane.

Discussion. The features above help us choose the best estimation method for a given pair of pixels. Among them, CE focuses on whether the biases of the reflectance brightnesses are correctly estimated, which is the key to the success of the BOB method. We check the correctness by both the cause and the effect, i.e., that the pixels are tightly clustered and that the estimated shading is smooth, respectively. LCV and RC capture the local and the large-scale behavior of reflectance change, respectively. Local variation, coupled with image blur, disturbs the measurement of the brightness as well as of its gradient. This causes problems for most estimation methods except FS, which is only concerned with the adjacency of pixels.

GLOBAL SHADING FROM SHADING ORDERS VIA CONSISTENCY-AWARE SELECTIVE FUSION
Thus far we have obtained a matrix O of the pairwise shading orders (Section 3.2), together with a confidence matrix C from (12) representing their reliability. Now we use Consistency-aware Selective Fusion (CSF) to select a subset of reliable and consistent pairwise orders and combine them into an optimal global order. CSF is designed under the following criteria:
• For each pair of pixels p and q, the optimal estimation method M_{p,q} ∈ M is selected exclusively.
• The pairwise connections W_{p,q} should be sparse, such that outliers are excluded.
• The total confidence of the selected pairwise shading orders should be maximized.
• The global order should match the input pairwise orders.
In practice, the global order is obtained through Angular Embedding (AE) [7]. Let Z_p = e^{iS^b(p)}, with i = √−1, denote the embedding of pixel p on the unit circle in the complex plane (Fig. 6). The angle Θ_{p,q} from Z_p to Z_q is the shading order between p and q. AE finds an embedding that makes Θ_{p,q} consistent with the input shading order O_{p,q} = O(p, q, M_{p,q}).

Algorithm 1 Consistency-aware Selective Fusion
Require: Pairwise shading orders O and their relative confidences C, the initial weights α_1 and α_2 of the regularizer, the threshold ω_min on the density of non-zero elements in W, and the step size τ.
Ensure: Embedding Z.
Initialization: W = 1_{n,n}, where n is the number of pixels; M_{p,q} = arg max_m C(p, q, m).
while α_2 > 0 do
  Optimize Z using (20);
  Choose M using (22);
  Update W using (23);
  α_2 = α_2 − τ;
  if ‖W‖₀ < ω_min n² then
    Break;
  end if
end while
return Z.

The estimation methods M, the pairwise connections W, and the embedding Z are optimized jointly as follows:

min_{W,M,Z} J_AE(Z; W, M) + P(W)
s.t. |Z_p| = 1, Σ_q C_{p,q} = D_p, ∀p; W_{p,q} ≥ 0, ∀p, q,   (19)

where the error of the Angular Embedding is defined to be [7]

J_AE(Z; W, M) = Σ_{p,q} C_{p,q} · |Z_p − Z_q e^{iO_{p,q}}|²,   (20)

and the regularization term is in the form of an elastic net [49]:

P(W) = α_1‖W‖₁ + (α_2/2)‖W‖₂².   (21)

Here C_{p,q} = W_{p,q} C(p, q, M_{p,q}) is the weighted confidence, and the diagonal matrix D_p = Σ_q max_{m∈M} C(p, q, m) is a degree matrix.
α_1 and α_2 are the weights of the lasso (L1) and ridge (L2) terms, respectively. The elastic net enforces group sparsity on the weights, so several groups of reliable neighbors are selected for each pixel. We optimize the variables M, W, and Z iteratively, as described in Algorithm 1. Fig. 6 illustrates one iteration of the process. The details are given below.

Choose M. Keeping W and Z fixed, we can search for the optimal estimation methods by

arg min_M Σ_{p,q} W_{p,q} C(p, q, M_{p,q}) · |Z_p − Z_q e^{iO(p,q,M_{p,q})}|²
s.t. Σ_q W_{p,q} C(p, q, M_{p,q}) = D_p, ∀p.   (22)

This can be optimized by the Lagrange method: we iteratively pick the optimal M_{p,q} that balances the confidence and the consistency of the orders under the current Lagrangian multiplier, and update the multiplier by dual ascent. In Fig. 6b, the selected method for pixels p and q is the one with the second highest confidence but the best consistency with the global shading.

Update W. Keeping M and Z fixed, the weights are updated by solving

min_W Σ_{p,q} W_{p,q} E_{p,q} + α_1‖W‖₁ + (α_2/2)‖W‖₂²
s.t. Σ_q W_{p,q} C̃_{p,q} = D_p, ∀p; W_{p,q} ≥ 0, ∀p, q,   (23)

where C̃_{p,q} = C(p, q, M_{p,q}) and the confidence-weighted embedding error is E_{p,q} = C̃_{p,q} · |Z_p − Z_q e^{iO(p,q,M_{p,q})}|². This optimization problem can be solved by the Alternating Direction Method of Multipliers (ADMM) [50]; see the supplementary material for details. From (23) we can see that the larger the embedding error E_{p,q} is, the smaller W_{p,q} tends to be. This can be observed in Fig. 6, where the pair p and t gets a low weight, since the embedding error is large for every estimation method. Note that we decrease the value of α_2 gradually in Algorithm 1, which makes W more and more sparse. This progressive sparsity has better numerical stability than setting α_2 to a small value at the very beginning. When α_2 gets too small, the pairwise connections may become overly sparse, producing an ill-conditioned graph. We terminate the iteration of Algorithm 1 in this case.

Optimize Z. Optimizing the embedding error J_AE(Z; W, M) in (20) directly is hard in practice, since it has n constraints, where n is the number of pixels. Relaxing the unit-length constraints in (19) to Z^†DZ = 1_n^T D 1_n, the problem can be rewritten in the following matrix form:

min_Z Z^†LZ s.t. Z^†DZ = 1_n^T D 1_n,   (24)

where L is a Laplacian matrix

L = D − (C • e^{iO} + (C • e^{iO})^†),   (25)

• is the matrix Hadamard product, † is the complex conjugate transpose, 1_n is an n × 1 vector of all ones, and the exponentiation acts element-wise. To make the optimization tractable, we consider only the shading orders between nearby pixels, while the confidences of the other shading orders are set to zero. In our experiments we set the neighborhood to be a square of 30 pixels on each side. The optimization problem in (24) is solved by the spectral partitioning algorithm [48] with complex-valued eigenvectors. The solution is given by the angles of the first eigenvector Z_0, i.e., the one with the smallest eigenvalue. We refer to the paper of Yu [7] for more details.

Recover shading S^b. To decode the shading brightness S^b from the angles of Z_0, we need to ensure that the angle between any two points is less than 2π; otherwise the points may overlap with each other. To achieve this, we scale the brightness dimension of the UVB color space by a positive scalar. The scaling does not disturb the order of Z_0, and we can scale the shading brightness back after the decoding. AE allows the points to rotate as a whole around the origin; we need to rotate the points back until the angles of the darkest points are zero.
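The spectral step of (24)-(25) can be sketched with a dense generalized eigensolver. This is a toy stand-in for the spectral partitioning algorithm of [48]: the pairwise confidences and orders below are random, each pair is stored once in the upper triangle, and a small ridge keeps D positive definite:

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(1)
    n = 40
    C = np.triu(rng.random((n, n)) * (rng.random((n, n)) < 0.2), k=1)  # confidences
    O = np.triu(rng.normal(scale=0.3, size=(n, n)), k=1)               # orders

    A = C * np.exp(1j * O)
    A = A + A.conj().T                       # Hermitian adjacency; orders antisymmetric
    D = np.diag((C + C.T).sum(axis=1))       # degree matrix
    L = D - A                                # Laplacian of Eq. (25)

    # smallest generalized eigenvector of L z = mu D z, cf. Eq. (24)
    vals, vecs = eigh(L, D + 1e-9 * np.eye(n))
    S_b = np.angle(vecs[:, 0])               # decoded shading brightness, up to rotation
    print(S_b[:5])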
Note that the darkest pixels and the brightest pixels are always separated by a gap on the circles in the complex plane. Fig. 7b shows an example. The gap can be easily located by the consecutive empty bins of the histogram of the angles Z 0 (Fig. 7c). The pixels falling into the bins to the left of the gap are shifted to the right by 2π. Fig. 8 shows the change of variables during the iterations of CSF. In the beginning, the relative shading of some local regions are inaccurate (e.g., the circle inside the red box), since some wrong estimates occasionally get higher confidences than the right ones based solely on the image features. For example, the orders obtained from the BOB method (indicated by green dots) may possibly be wrong since the clustering is inaccurate (see Fig. 5b). Some pixels with similar but different colors are mistaken to have the same reflectance (the red dots in the light yellow regions). Furthermore, the FS method is adopted to estimate the shading orders between distant pixels (the yellow dots far away from the center point). When the global order is used to guide the selection, the right estimation methods gradually emerge. At the same time, the weights of unreli-able connections are greatly decreased as the sparsity gets stronger. Specifically, pairs of pixels whose orders cannot be accurately estimated by any method will be assigned zero weights and excluded from the fusion. As a result, the errors of Z 0 are reduced considerably. EXPERIMENTS We evaluate our method on the MIT Intrinsic Images dataset [24], which is a widely used benchmark. It contains groundtruth intrinsic images of 20 natural objects, and 16 of them are used for test. The images are taken in a controlled environment, where the direct illuminants are nearly white and the ambient illuminants are limited. To validate against real-world scenes, we evaluate our method on the Intrinsic Image in the Wild (IIW) dataset [12], which is a large-scale dataset of public photo collections. We also test our method on outdoor scenes from the UIUC shadow dataset [45]. We further test the utility of depth information on the RGB-Depth images from the NYU-Depth V2 dataset [51]. Error Metrics and Parameter Settings We evaluate the results on the MIT Intrinsic Images dataset primarily by the standard metric, namely the Local Mean Squared Error (LMSE) [24]. However, as pointed out by Jiang et al. , LMSE is sensitive to the window size and the difference between the mean values of the recovered intrinsic images and the groundtruth [27]. Moreover, LMSE biased towards edge-based methods [11]. To give a more complete evaluation, we include the absolute LMSE (aLMSE) and the correlation metrics proposed by Jiang et al. [27] as well as the standard MSE metric. The aLMSE is defined as follows: (26) where I andĨ are the ground-truth and estimate of intrinsic image, respectively. w is the index of sliding window. µ and µ are the average of I andĨ, respectively. The optimal scale a is searched to minimize the square error. The influence of the difference of mean values can be eliminated by aLMSE. aLM SE(I,Ĩ) = w min a (I w − µ w ) − a(Ĩ w −μ w ) 2 , The correlation is defined to be Cor(I,Ĩ) = E[(I − µ)(Ĩ −μ)] σσ ,(27) where σ is the standard deviation of the image. E is the expectation. We refer to the supplementary material of Reference [27] for more details of aLMSE and correlation. 
Among these metrics, correlation and MSE measure the error in a global way, while LMSE and aLMSE take an average of local errors over small image windows. For each image, the performance on reflectance and on shading are calculated separately, and their average is taken as the result. The final result is the average of the performances over all images.

Results on the IIW dataset are evaluated by the metric of "weighted human disagreement rate" (WHDR$_{10\%}$) [12]. It measures the rate of correct judgments on "which one has a darker reflectance" between two pixels.

The main parameters of our model are the positive weights of the sigmoid functions in Section 4. We set $w_1 = \ln 3 / 0.1$, so the sigmoid function maps a step edge of strength 0.1 to a probability of 0.5. Similarly, we set $w_2 \sim w_6$ to be $\ln 3/0.2$, $\ln 3/0.01$, $\ln 3/0.08$, $\ln 3/0.1$, and $\ln 3/0.2$, respectively. For $w_7$, we find the median of the spatial distances of all the pixel pairs, $\bar{d}_s$, and set $w_7 = \ln 3 / \bar{d}_s$ for the FS method; the $w_7$ of the FS method is set to be twice as large as that of the SS method. For RGB-only images, we increase $w_7$ by 6 times to compensate for the increased probability of selecting the FS and the SS methods. The initial weights $\alpha_1$ and $\alpha_2$ in (21) are set to be 1 and 2, respectively. The threshold $\omega_{min}$ and the step size $\tau$ in Algorithm 1 are set to be 1/3 and 0.2, respectively. We found that our model is insensitive to these parameters.

Evaluation of the components of our method

Individual estimation methods. The results on the MIT Intrinsic Images dataset are compared in Fig. 9a. Our full model (Full) achieves the best performance, while estimating the shading orders without any single method causes a noticeable drop in performance. Disabling BOB (W/o BOB) causes the most severe drop, followed by BO, FS, and SS, in that order. Fig. 10 shows the changes in the recovered reflectance and shading when different methods are removed. Removing BO breaks the smoothness of reflectance across shadow edges. When BOB is unused, the shading smoothness across regions of different reflectance is broken, leaving sharp edges in the shading. The smoothness-based methods FS and SS are essential for keeping the local shading smooth. Without FS, smoothness in textured regions cannot be guaranteed. SS is important for the areas where the biases of reflectance brightness are not accurately estimated.

The brightening direction. We test a special case of our method where the brightening direction is fixed at $[1, 1, 1]^T$, following the Color Retinex [24]. Although the direct illuminants in the MIT Intrinsic Images dataset are nearly white and the ambient illuminants are weak, the performance under a white brightening direction (WB) is much worse than that of our original model (Fig. 9b).

The confidences of pairwise orders. We evaluate the importance of the confidences of the pairwise orders in inferring the global shading by replacing AE with AS [34], i.e., assigning equal weights to the pairwise shading orders. From Fig. 9b we can see that the performance drops significantly.

Depth information. Several depth-based features are used to calculate the confidences of pairwise orders for RGB-Depth images (Section 4). Fig. 11 shows their effects. Utilizing the feature of Surface Normal Change increases the probability of applying the shading smoothness constraints to flat surfaces; see the regions in the red and green boxes of Fig. 11 for examples.
These areas are mistaken to be shadowed without depth cues, since they have chromaticity similar to their surroundings and their boundaries are blurred. The feature of Shadow Edges finds shading changes at depth discontinuities efficiently. It may miss some shadow edges that cannot be generated by any sample of the illuminant, when the change of depth is small (e.g., the area in the blue box of Fig. 11) or a large part of the occluder is not visible in the current view (e.g., the area in the yellow box).

Results on the MIT Intrinsic Images dataset

We compare our method to the state of the art and to several classic approaches, as listed in Table 2. These results are either copied from their papers, taken from the report in [11], or obtained by running their code directly without tuning any parameters 1. We report the results under the best parameters for the whole dataset. Our method achieves the best performance.

Table 2. Quantitative comparison on the MIT Intrinsic Images dataset. The columns are, as inferred from the metrics above, correlation (higher is better) followed by aLMSE, LMSE, and MSE (lower is better); entries marked -- were not reported, and the header row, one method's name, and the SIRFS values were lost in the source.

Color Retinex [24]   0.7146   0.1108   0.0286   0.2541
Jiang-A [27]         0.6184   0.1533   0.0421   0.3988
Jiang-H [27]         0.5829   0.1524   0.0483   0.3476
Jiang-HA [27]        0.6109   0.1579   0.0454   0.3631
Shen-SR [14]         0.7259   0.1223   0.0240   0.2454
Shen-SRC [14]        --       --       0.0204   --
Zhao et al. [4]      --       --       0.0250   --
Gehler et al. [13]   0.7748   0.0985   0.0244   0.2544
Serra et al. [11]    0.7862   0.0834   0.0340   0.2958
Bell et al. [12]     0.7229   0.1100   0.0337   0.2763
Li et al. [30]       --       --       0.0190   --
Chang et al. [19]    --       --       0.0229   --
SIRFS [15]           --       --       --       --

Fig. 12 gives some concrete examples. The most remarkable advantage of our method is that it can recover the reflectance under deep shadows. One reason is that we can cluster pixels with the same reflectance together on the $UV$ shadow-free plane, no matter how dramatically the shading changes. Another reason is that our model fuses estimates from different methods by selecting the optimal one exclusively, which avoids smoothing the shading edges with the other estimates. Clustering-based methods, including Gehler et al. [13], Garces et al. [17], and Bell et al. [12], are sensitive to the changes of intensity and color caused by shadows. The edge-based method of Li et al. [30] tends to assign large gradients to reflectance changes, which degrades at sharp shadow edges (e.g., those on the body of the deer). The methods of Gehler et al. [13] and Li et al. [30] smooth the shading extensively, leaving residuals of shadows in the reflectance (e.g., the teabag). SIRFS [15] smooths the surfaces, which may generate an overly smooth shading (e.g., the frog).

Another advantage is that our method can recover the global shading robustly. The main reason is that the clustering-based methods BO and BOB capture the shading orders between distant pixels effectively. Edge-based methods cannot reliably recover the relative shading between unconnected parts (e.g., the shadings recovered by Li et al. [30] are inconsistent between the front and the back of the turtle). Another reason is that BOB can handle the areas where the shading and the reflectance change simultaneously (e.g., the mouth and the head of the frog).

1. The method SIRFS is evaluated on the images of cup2, deer, frog2, paper2, raccoon, sun, teabag1, and turtle, while the other images are used for training. The results of Bell et al. [12] are obtained by relaxing the constraints on the absolute values of shading and removing the intensity from the features for clustering the reflectance; otherwise the deep shadows would be mistaken to be black and clustered into individual categories.
Our method preserves the subtle variations of reflectance (e.g., the yellow and orange regions of the tea bag), since the intra-cluster variations in the $UV$ plane (Fig. 2c) are represented in the recovered reflectance. In contrast, some clustering-based methods, such as Garces et al. [17] and Bell et al. [12], unify the reflectance of the pixels of each cluster. This operation often leads to block artifacts (e.g., on the tea bag). Our method did not handle the feet of the deer well: the black feet and the white legs are both achromatic, so they fall into the same cluster on the shadow-free plane. The image blur further reduces the effectiveness of the feature of Reflectance Change (Section 4), so the difference between black and white is not preserved in the reflectance.

Results on Natural Images

The quantitative results on the IIW dataset are shown in Table 3. Our method achieves results comparable to the state of the art. It should be mentioned that WHDR$_{10\%}$ cannot reflect the superiority of our method in inferring the shading orders between pixels with different chromaticity, since only pixels with similar chromaticity are compared [12]. Further, textured pixels are excluded from the evaluation, so the ability to preserve the texture of reflectance is untested. In fact, both of the top-performing methods, [16] and [12], remove the texture from the reflectance. For a fair comparison, we report our result using the edge-preserving smoothing of [16] to preprocess the input image. Without smoothing, the WHDR$_{10\%}$ increases by about 3.7%.

The IIW dataset is much more difficult than the MIT Intrinsic Images dataset. The image in the top row of Fig. 13 is composed of different kinds of objects, some of which are highly textured (e.g., the wall with blue painting). Our method preserves the textures 2 much better than the other methods in comparison. Another difficulty comes from the intensive specular reflections (e.g., the wall in the top row of Fig. 13). Our method puts the specular reflections into the reflectance, while some other methods, such as Zhao et al. [4] and Garces et al. [17], put them into the shading.

The greatest challenge of the IIW dataset comes from the coexistence of multiple direct illuminants in the same scene. In the bottom row of Fig. 13, the areas in the red boxes of the input image are covered by lights of different colors. This situation does not satisfy the bi-illuminant assumption of the BIDR model [3]. No unique brightening direction exists for the whole image, and the brightening direction obtained from entropy minimization (Section 3.1) improperly eliminates the difference. This causes two problems for our method: (1) the error of clustering will increase; and (2) the color of the recovered reflectance will be distorted. The first problem is shared by all the clustering-based methods, such as Garces et al. [17] and Bell et al. [12]. The second problem is common to all the methods in comparison, since they all assume a single (direct) illuminant. Despite these problems, our model still recovers a globally consistent shading.

Discussion. Scene-SIRFS addressed the mixture of illuminations by a soft segmentation of the image with respect to the "ownership" of illuminants [40]. But the segmentation is not easy, since the changes of illumination are often slower than the changes of reflectance.

2. We do not use the edge-preserving smoothing to produce the qualitative results in Fig. 13.
Beigpour and Van de Weijer [35] proposed the Multi-illuminant Dichromatic Reflection (MIDR) model to account for secondary illuminants. However, in practice they only dealt with the case of two direct illuminants irradiating a single-colored object. We may consider extending the BIDR model to incorporate multiple direct illuminants. Accordingly, there would be multiple brightening directions, and the brightness would be extended to a mixture of sub-coordinates. This would make the problem much more complex.

We further test on the outdoor images from the UIUC shadow dataset [45]. Fig. 14 shows three examples. The ambient illuminant is usually the blue sky, so the shadowed areas are more blueish than the lit areas. We compare to the methods of Jiang-HA [27] and Gehler et al. [13]. We also compare to the region-pair-based shadow removal method proposed by Guo et al. [45] 3. Our model recovers the reflectance by lighting the dark pixels along the yellowish brightening direction, while the other intrinsic decomposition methods often fail to recover their colors. The method of Guo et al. [45] is unable to handle thin areas due to the limited resolution of image segmentation (e.g., the fingers in the last image of Fig. 14).

Evaluation on RGB-Depth Images

We test on the RGB-Depth images from the NYU-Depth V2 dataset. We compare to those methods that take RGB-Depth images [40], [6], [39] or videos [5] as input 4. Typical examples are shown in Fig. 15. Our method successfully recovered globally consistent shadings and preserved the textures of the reflectance. In particular, our method was the only one that recovered the smooth shading over the painting in the first row of Fig. 15. In comparison, the method of Lee et al. [5] did not produce consistent shadings between surfaces in different orientations: in their recovered reflectance of the first image in Fig. 15, the backrest of the sofa and the walls are much darker than the seat of the sofa and the floor. The method of Barron and Malik [40] successfully captured the shapes of curved surfaces (e.g., the sofa in the first image of Fig. 15) but not those of objects with sharp boundaries (e.g., the cabinet and the bed in the second image of Fig. 15). The method of Chen and Koltun [6] achieved good smoothness of shading while keeping sharp surface edges at the same time. However, it often failed to recover the shading orders between objects with different colors (e.g., the blue pillow and the sofa in the first image of Fig. 15). The method of Jeon et al. [39] preserved the textures in reflectance very well (e.g., the floor in the second image of Fig. 15), but it tends to reduce the difference of shading between surfaces with similar orientations (e.g., the walls and the cabinet in the second image of Fig. 15).

CONCLUSIONS AND DISCUSSIONS

We proposed shading orders for intrinsic image decomposition. The shading orders capture not only adjacent relations but also distant connections, which overcomes the limitation of edge-based methods that lack the large-scale structure of the shading. The shading orders can be measured by several individual methods, each of which gives a reasonable estimate based on certain assumptions about the scene. Jointly utilizing these methods captures various kinds of priors and observations of the scene. We developed the CSF algorithm to combine the pairwise orders measured by the different methods.
CSF infers a global order by selecting the confident and consistent pairwise orders and resolving their conflicts through AE. The local competition removes unreliable measurements from the fusion, so the results are much cleaner than a weighted sum of different estimates. This is essential for keeping sharp shadow edges and textures. The sparsity-driven neighbor selection further reduces the outliers of local measurements.

Experimental results demonstrated that our model is suitable for various indoor and outdoor scenes with noticeable ambient illuminants. However, the BIDR model cannot handle multiple direct illuminants, interreflections, or specular reflections. We need to generalize the BIDR model and the $UVB$ color space for more realistic scenes. Highly textured images are still quite challenging for clustering-based methods, since their reflectance often changes irregularly and thus cannot be clustered properly. Jeon et al. proposed to separate the texture layer before decomposing the shading and reflectance [39], which is a promising way to ease the clustering.

APPENDIX

RENDERING THE SHADING MAP

Fig. 16 shows the rendered shading map of an RGB-Depth image. In the camera coordinate system, we draw a "gray surface", taking all the pixels as vertices. Both the color of the surface and the illuminant are set to $[1, 1, 1]^T$, and the reflection of the surface is set to be diffuse only (i.e., without any specular reflection). Here we assume that there is only one direct illuminant for each image, while the ambient illumination is set to 0. The illuminant is placed inside the room box, whose range is set to the scope of all the observable pixels. In particular, we expand the range of the $z$ dimension (orthogonal to the image plane) into the negative part of the coordinate, since the light may be placed behind the camera. The surface is rendered with the MATLAB surfl function, and the output intensities of the vertices form a shading map. The bottom row of Fig. 16 shows the rendering results under several sampled illuminants. We can see that some of them are close to the real shading map of the scene, while the others are quite different.

The similarity between the rendered shading $\gamma(L_d)$ and the ground-truth shading brightness $S^b$ is measured by their category-wise correlation:

$Sim(\gamma(L_d), S^b) = \sum_{g \in G} \frac{n_g}{n} Cor(\gamma_g(L_d), e^{S^b_g}) = \sum_{g \in G} \frac{n_g}{n} Cor(\gamma_g(L_d), e^{I^b_g}), \qquad (28)$

where $G$ is the set of reflectance categories, $n$ is the number of pixels, and $Cor$ is the correlation between two variables. The subscript $g$ denotes the subset of pixels belonging to the $g$-th category. Here we utilize the linear relationship between the brightness $I^b$ and the shading brightness $S^b$ based on (6). We select a set of candidate illuminants $\mathcal{L} = \{L_d \mid Sim(\gamma(L_d), S^b) > 0.2\}$.
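As an illustration of (28), the following sketch (hypothetical code, not from the paper) scores a rendered shading map against the observed brightness. Here `labels` assigns each pixel to a reflectance category in G, while `rendered` and `brightness` are flattened per-pixel arrays; all names are illustrative.

```python
import numpy as np

def similarity(rendered, brightness, labels):
    """Category-wise correlation of (28): correlate the rendered shading
    with exp(I^b) within each reflectance category, weighted by size."""
    n, sim = labels.size, 0.0
    for g in np.unique(labels):
        mask = labels == g
        if mask.sum() < 2:
            continue                      # correlation undefined for <2 pixels
        r, e = rendered[mask], np.exp(brightness[mask])
        c = np.corrcoef(r, e)[0, 1]       # Cor(gamma_g(L_d), e^{I^b_g})
        sim += (mask.sum() / n) * c
    return sim

# Candidate illuminants are those whose rendered shading scores above 0.2:
# candidates = [L for L in samples if similarity(render(L), Ib, labels) > 0.2]
```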
ADMM FOR OPTIMIZING THE WEIGHTS W

Eqn. (23) can be solved for each pixel $p$ individually, since the matrix $W$ can be decomposed into a series of vectors $W_{p,\cdot}$, and likewise for $E$ and $\bar{C}$. For simplicity, we omit the subscript $p$ of all matrices from now on, and denote $d = D_p$. We reformulate Eqn. (23) into the equivalent problem

$\arg\min_{W,X,Y} \ g_1(W) + g_2(X) + g_3(Y) \quad \text{s.t.} \quad \bar{C}^T W = d, \quad W = X = Y, \qquad (29)$

where

$g_1(W) = E^T W + \frac{\alpha_2}{2} \|W\|_2^2, \qquad g_2(X) = \alpha_1 \|X\|_1, \qquad g_3(Y) = 0 \text{ if } Y_q \ge 0 \ \forall q, \text{ and } \infty \text{ otherwise.} \qquad (30)$

By introducing Lagrange multipliers $\lambda$, $\Gamma_1$, and $\Gamma_2$, we obtain the following augmented Lagrangian [50]:

$L(W, X, Y, \lambda, \Gamma_1, \Gamma_2) = g_1(W) + g_2(X) + g_3(Y) + \lambda (d - \bar{C}^T W) + \Gamma_1^T (W - X) + \frac{\rho}{2} \|W - X\|_2^2 + \Gamma_2^T (W - Y) + \frac{\rho}{2} \|W - Y\|_2^2, \qquad (31)$

where $\rho$ is a scaling parameter. We initialize $W$, $X$, and $Y$ with $\mathbf{1}_n$, while $\lambda = 2$ and $\Gamma_1 = \Gamma_2 = \mathbf{1}_n$. Then we update them iteratively as follows:

$W^{k+1} = \frac{1}{\alpha_2 + 2\rho} \left( \rho X^k + \rho Y^k - E + \lambda^k \bar{C} - \Gamma_1^k - \Gamma_2^k \right),$

$X^{k+1} = \mathcal{S}_{\alpha_1/\rho}\!\left( W^{k+1} + \Gamma_1^k \right), \quad \text{where } \mathcal{S}_\kappa(v) = \mathrm{sign}(v)\max(|v| - \kappa, 0) \text{ is element-wise soft thresholding,}$

$Y^{k+1} = \left( W^{k+1} + \tfrac{1}{\rho} \Gamma_2^k \right)_+,$

$\lambda^{k+1} = \lambda^k + \eta_1 (d - \bar{C}^T W^{k+1}),$

$\Gamma_1^{k+1} = \Gamma_1^k + \eta_2 (W^{k+1} - X^{k+1}),$

$\Gamma_2^{k+1} = \Gamma_2^k + \eta_3 (W^{k+1} - Y^{k+1}), \qquad (32)$

where $(\cdot)_+$ truncates all the elements of a vector to be non-negative, and $\eta_1$, $\eta_2$, and $\eta_3$ are step sizes. We terminate the iteration when $\|W - X\|_1 + \|W - Y\|_1$ is less than a threshold $T_W$ and $d - \bar{C}^T W$ is less than a threshold $T_d$. In our implementation, we set $\rho$, $\eta_1$, $\eta_2$, and $\eta_3$ to be 5, 0.05, 1, and 1, respectively.

Fig. 17 shows the results of our method for the images of the MIT Intrinsic Images dataset other than those that appeared in the paper. Figs. 18, 19, and 20 present several examples from the IIW dataset. Fig. 21 shows more results of our method on the UIUC Shadow Removal dataset. Fig. 22 shows more results of our method on the NYU-Depth V2 dataset, where we compare our method to several recent algorithms, including Bell et al. [12], Zhao et al. [4], Garces et al. [17], Lee et al. [5], Barron et al. [40], Chen et al. [6], and Jeon et al. [39].

Fig. 23 shows the colors of shading in images from the MIT Intrinsic Images dataset. We can see that most of the shading images are nearly achromatic. The reason is that the images are captured in a controlled environment, where the ambient illuminations are largely suppressed by painting the background black. According to Equation 3, when the ambient illumination is negligible, the shading will be nearly achromatic, no matter what the color of the direct illumination is. However, for the frog in Fig. 23, the shading is slightly chromatic.

Fig. 24 shows the colors of shading in natural indoor and outdoor scenes. Indoor scenes often have complex illuminations, so the shading colors vary a lot from image to image, and even from place to place within the same image. In comparison, the shading colors in outdoor scenes are more regular. In particular, the shadows in outdoor scenes are often blueish, since the ambient light often comes from the blue sky.
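As a companion to the ADMM updates in (32) above, here is a minimal per-pixel sketch, assuming `C_bar`, `E`, and `d` are the per-pixel vectors and scalar defined in the appendix; it is an illustrative transcription under the stated initialization and step sizes, not the authors' code.

```python
import numpy as np

def admm_weights(C_bar, E, d, alpha1=1.0, alpha2=2.0, rho=5.0,
                 etas=(0.05, 1.0, 1.0), iters=500, tw=1e-3, td=1e-3):
    """Solve the per-pixel weight problem (29) with the updates in (32)."""
    n = C_bar.size
    W, X, Y = np.ones(n), np.ones(n), np.ones(n)
    lam, G1, G2 = 2.0, np.ones(n), np.ones(n)
    e1, e2, e3 = etas
    for _ in range(iters):
        W = (rho * X + rho * Y - E + lam * C_bar - G1 - G2) / (alpha2 + 2 * rho)
        V = W + G1                                  # soft-thresholding argument
        X = np.sign(V) * np.maximum(np.abs(V) - alpha1 / rho, 0.0)
        Y = np.maximum(W + G2 / rho, 0.0)           # projection onto Y >= 0
        lam += e1 * (d - C_bar @ W)                 # dual ascent on the constraint
        G1 += e2 * (W - X)
        G2 += e3 * (W - Y)
        # stop when the primal residuals of (29) are small
        if (np.abs(W - X).sum() + np.abs(W - Y).sum() < tw
                and abs(d - C_bar @ W) < td):
            break
    return np.maximum(W, 0.0)
```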
The bottleneck distance is a natural measure of the distance between two finite point sets of equal cardinality. In this work, we consider the problem of indexing a collection of planar point sets (of varying sizes) to create a database $D$ that supports nearest bottleneck distance queries: given a query point set $Q$ of size $n$, the point sets $P \in D$ that are closest in terms of bottleneck distance are returned. Without loss of generality, we assume that all point sets belong to the unit box $[0,1]^2$ in the plane. The main contribution of this work is a trie-based data structure that is space efficient and supports 6-approximate nearest bottleneck queries in $O(n \lg(1/d_B(D,Q)))$ time, where $d_B(D,Q)$ is the minimum bottleneck distance from $Q$ to any point set in $D$. A direct consequence, of independent interest, is a simple $O(n \lg(1/d_B(P,Q)))$ time algorithm to 6-approximate $d_B(P,Q)$, for any two point sets $P$ and $Q$. Finally, the querying algorithm proposed is easily adapted to support nearest subset and superset queries.
Bottleneck distance is closely related to the bipartite matching problem, which can be solved by the classic maximum flow technique of Hopcroft and Karp @cite_1. The current best exact algorithm for planar bipartite matching is due to Efrat @cite_9 and runs in $O(n^{1.5} \log n)$ time for point sets of size $n$.

Earlier seminal work by Heffernan and Schirra @cite_3 considered approximation algorithms for the more general problem in which one of the point sets is mapped by an isometry (translated, rotated, and possibly reflected) prior to being matched. In the case of just computing the bottleneck distance, their methods provide an efficient algorithm to test whether the bottleneck distance is below a given threshold, with the answer guaranteed correct when the threshold is not too close to the true distance. A key idea in @cite_3 is to check for bottleneck matchings using a maximum flow computation in a graph that arises from "snap-rounding" the point sets to their nearest points in a grid. Our approach uses a similar idea, in which the maximum flow instance is a planar graph (which is not true for @cite_3), so a recent improved algorithm for multi-source, multi-sink maximum flow due to Borradaile et al. @cite_8, which runs in $O(n \log^3 n)$ time, can be leveraged.

Bottleneck distance arises naturally in the comparison of persistence diagrams in topological data analysis @cite_4. The authors of @cite_0 consider the related problem of building a database of persistence diagrams that permits approximate querying in time logarithmic in the number of diagrams stored. Their approach is also based on representing point sets by snap-rounding each point to neighboring grid points at each level in a multi-level grid: all combinations of snap-roundings are considered, and the resulting grid point distributions are stored in a database. Binary search is used on the hashed values to query the database, and the resulting matches are shown to provide a 6-approximation to the nearest point set in the database. They also observe that the approximation ratio can be improved if more snap-roundings are done per point. Rather than using a hashing scheme, it is possible to use a trie data structure (as described in this work) to achieve $O(n \lg(1/d_B(D,Q)))$ time queries.

Approximation results are known for general bipartite matching in metric spaces; in @cite_2 the authors show that for any $\Delta > 0$, there is an algorithm that computes an $O(1/\Delta^{\alpha})$-approximate minimum-cost matching, where $\alpha = \log_3 2 \approx 0.631$, in $O(n^{2+\Delta} \log n \log^2(1/\Delta))$ time. A variation on minimum-distance bottleneck matching, with the additional constraint that the matched edges cannot cross, was recently shown to be NP-hard to approximate within a factor of less than 1.277 @cite_7.
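To make the connection between bottleneck distance and bipartite matching concrete, here is a small sketch (illustrative, not from any of the cited papers) that computes $d_B(P,Q)$ exactly under the $L_\infty$ norm by binary searching the candidate pairwise distances and testing each threshold with NetworkX's Hopcroft-Karp matching; it is meant to show the reduction, not to match the cited running times.

```python
import networkx as nx

def bottleneck_distance(P, Q):
    """Exact d_B(P, Q) under L-infinity for equal-size point lists P and Q.
    A perfect matching using only edges of length <= t exists iff d_B <= t,
    so we binary search over the O(n^2) candidate pairwise distances."""
    assert len(P) == len(Q)
    dist = lambda p, q: max(abs(p[0] - q[0]), abs(p[1] - q[1]))
    cands = sorted({dist(p, q) for p in P for q in Q})
    top = [('p', i) for i in range(len(P))]

    def feasible(t):
        G = nx.Graph()
        G.add_nodes_from(top)
        G.add_nodes_from(('q', j) for j in range(len(Q)))
        G.add_edges_from((('p', i), ('q', j))
                         for i, p in enumerate(P) for j, q in enumerate(Q)
                         if dist(p, q) <= t)
        m = nx.bipartite.hopcroft_karp_matching(G, top_nodes=top)
        return sum(1 for k in m if k[0] == 'p') == len(P)

    lo, hi = 0, len(cands) - 1
    while lo < hi:            # smallest threshold admitting a perfect matching
        mid = (lo + hi) // 2
        if feasible(cands[mid]):
            hi = mid
        else:
            lo = mid + 1
    return cands[lo]
```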
{ "abstract": [ "Let A and B be two sets of n objects in d , and let Match be a (one-to-one) matching between A and B . Let min(Match ), max(Match ), and Σ(Match) denote the length of the shortest edge, the length of the longest edge, and the sum of the lengths of the edges of Match , respectively. Bottleneck matching— a matching that minimizes max(Match )— is suggested as a convenient way for measuring the resemblance between A and B . Several algorithms for computing, as well as approximating, this resemblance are proposed. The running time of all the algorithms involving planar objects is roughly O(n 1.5 ) . For instance, if the objects are points in the plane, the running time of the exact algorithm is O(n 1.5 log n ) . A semidynamic data structure for answering containment problems for a set of congruent disks in the plane is developed. This data structure may be of independent interest.", "The present paper shows how to construct a maximum matching in a bipartite graph with n vertices and m edges in a number of computation steps proportional to @math ." ], "cite_N": [ "@cite_9", "@cite_1" ], "mid": [ "2052551455", "2157529519" ] }
A Note on Indexing Planar Point Sets for Approximate Bottleneck Distance Queries
The bottleneck distance is a natural measure of the distance between two finite point sets of equal cardinality. The problem of computing the bottleneck distance arises in geometric applications such as comparing persistence diagrams in topological data analysis [3]. The bottleneck distance between two point sets $P$ and $Q$ is defined as

$d_B(P, Q) = \min_{h: P \to Q} \max_{p \in P} \| h(p) - p \|,$

where $h$ is a bijection and $\|\cdot\|$ is chosen as the $L_\infty$ norm, as is common for the persistence diagram comparison application. Given a database $D$ of point sets, we can also define $d_B(D, Q) = \min_{P \in D} d_B(P, Q)$. The problem considered in this work is to identify approximate nearest neighbor point sets $P$ whose bottleneck distance from $Q$ is within a constant factor of $d_B(D, Q)$. Without loss of generality, we assume that all point sets belong to the unit box $[0,1]^2$ in the plane. We first describe a simple approach to represent point sets using strings. This suggests using a trie data structure [6, 5] to store strings associated with each point set in the database.

Preliminaries

Without loss of generality, all point sets are contained within the unit box $B = [0,1]^2$ in the plane. Following the general approach of [8], we recursively divide $B$ into finer grids. The corner $(0,0)$ is designated as the origin. The four corners of $B$ are the grid points at level 1. The grid at level $d$ is subdivided by 2 to form the grid at level $d+1$. Thus, level 2 contains 9 grid points, and in general level $d$ contains $(2^{d-1}+1)^2$ grid points. The grid length at level $d$ is $\delta_d = 2^{1-d}$.

Let $p$ be a point in or on $B$. For $d \ge 1$, we define $n_d(p)$ as the nearest level $d$ grid point to $p$, breaking ties by going in the S and/or W direction with respect to $p$. Observe that $n_d(p)$ is unique and that if $p$ is already a level $d$ grid point, then $n_d(p) = p$. We also define $n_0(p)$ as the origin.

Suppose $P$ is a point set to be stored in $D$. For each $p \in P$, let $n^4_d(p) = \{g : g \text{ is a level } d \text{ grid point}, \|g - p\|_\infty < \delta_d\}$. Note that $|n^4_d(p)| \le 4$. We consider all ways of snapping each $p$ to some grid point $snap(p) \in n^4_d(p)$.

Definition 1. A query point set $Q$ is said to hit a point set $P$ at level $d$ if there is a snapping of $P$ such that $|\{q : q \in Q, n_d(q) = g\}| = |\{p : p \in P, snap(p) = g\}|$ for all level $d$ grid points $g$.

Versions of the following two lemmas appear in [8], and a similar analysis is also found in [9].

Lemma 1. If $Q$ hits $P$ at depth $d$, then $d_B(P,Q) \le \frac{3}{2}\delta_d$.

Proof. Since $Q$ hits $P$, there is a snap-rounding of $P$ that produces the grid configuration $n_d(Q) = \{n_d(q) : q \in Q\}$ (repeats allowed); define a bijection $h : P \to Q$ by mapping each $p$ that snapped to some grid point $g$ to a unique $q$ such that $g = n_d(q)$. Then $\|h(p) - p\|_\infty = \|q - p\|_\infty \le \|q - n_d(q)\|_\infty + \|n_d(q) - p\|_\infty \le \frac{\delta_d}{2} + \delta_d$. Thus $d_B(P,Q) \le \frac{3}{2}\delta_d$.

Lemma 2. If $Q$ does not hit $P$ at level $d$, then $d_B(P,Q) \ge \frac{\delta_d}{2}$.

Proof. Let $h : P \to Q$ be a bijection that realizes $d_B(P,Q)$. We prove the contrapositive. Suppose $d_B(P,Q) < \frac{\delta_d}{2}$. Let $p \in P$ and $q = h(p)$. Then $\|p - n_d(q)\|_\infty \le \|p - q\|_\infty + \|q - n_d(q)\|_\infty \le d_B(P,Q) + \frac{\delta_d}{2} < \delta_d$. It follows that $n_d(q) \in n^4_d(p)$, and so snapping each $p$ to $n_d(q)$ provides a hit to $Q$ at level $d$.

Suppose $d^*$ is the maximum depth at which there is a hit $P \in D$ for a query point set $Q$. Lemma 1 implies $d_B(P,Q) \le \frac{3}{2}\delta_{d^*}$. On the other hand, since no hits were found at depth $d^*+1$, by Lemma 2, $d_B(D,Q) \ge \frac{\delta_{d^*+1}}{2} = \frac{\delta_{d^*}}{4}$. Thus $d_B(P,Q) \le 6\, d_B(D,Q)$, and so for a query point set $Q$, the point set $P$ returned is guaranteed to be a 6-approximation to the nearest point set in $D$.
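A brief sketch of the grid primitives $n_d(p)$ and $n^4_d(p)$ follows, assuming points are (x, y) tuples in $[0,1]^2$; the function names are illustrative.

```python
import math
from itertools import product

def n_d(p, d):
    """Nearest level-d grid point to p (spacing delta_d = 2^(1-d)),
    breaking ties toward the S/W as in the text; n_0 is the origin."""
    if d == 0:
        return (0.0, 0.0)
    delta = 2.0 ** (1 - d)
    rnd = lambda x: delta * math.ceil(x / delta - 0.5)   # round half down
    return (rnd(p[0]), rnd(p[1]))

def n4_d(p, d):
    """All level-d grid points strictly within delta_d of p in L-infinity;
    these are among the four corners of p's grid cell, so there are at most 4."""
    delta = 2.0 ** (1 - d)
    cands = set()
    for dx, dy in product((0.0, delta), repeat=2):
        g = (delta * math.floor(p[0] / delta) + dx,
             delta * math.floor(p[1] / delta) + dy)
        if (max(abs(g[0] - p[0]), abs(g[1] - p[1])) < delta
                and 0.0 <= g[0] <= 1.0 and 0.0 <= g[1] <= 1.0):
            cands.add(g)
    return cands
```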
A Trie-based Data Structure

We propose an indexing approach based on representing configurations of grid points as strings. We first define a string representation for a single grid point at level $d$ as a length-$d$ string, and then interleave $n$ such strings to represent a set of $n$ grid points in the level $d$ grid. The interleaving is done so that the string first describes the level 1 grid points, then level 2, etc.

Let $g$ be a grid point at level $d \ge 1$. We define $N_d(g)$ as the grid point neighbor at level $d$ directly north of $g$, provided this point belongs to the grid. Define similarly for all eight principal compass wind directions, and let $I_d(g) = g$ ($I$ for identity).

We introduce a string encoding of any grid point $g$ at some level $d \ge 1$. The string $s_d(g)$ is constructed in left-to-right order, in $O(1)$ time per symbol, by "walking" in the grid toward $g$, starting at the origin and following the grid points $n_0(g), n_1(g), \ldots, n_d(g) = g$. Observe that for $1 \le i \le d$,

$n_i(g) = \mathrm{dir}_i(g)\big(n_{i-1}(g)\big), \qquad (1)$

where $\mathrm{dir}_i(g)$ is one of the nine moves $N_i, S_i, E_i, W_i, NE_i, NW_i, SE_i, SW_i, I_i$; the $i$-th symbol of $s_d(g)$ records this move.

We now consider how to use the above string encoding to represent grid point configurations. Let $G$ be a set of $n$ grid points (repeats allowed) at level $d > 0$, and let $S_d(G) = \{s_d(g) \mid g \in G\}$ be the set of length-$d$ strings that encode each grid point in $G$. Consider $S_d(G)$ sorted into lexicographic order, i.e., $S_d(G) = \{s_d(g_1) \le s_d(g_2) \le \ldots \le s_d(g_n)\}$. $S_d(G)$ can be encoded as a single interleaved string of length $nd$, defined as

$S_{d,G} = s_d(g_1)_1 \ldots s_d(g_n)_1 \; s_d(g_1)_2 \ldots s_d(g_n)_2 \; \ldots \; s_d(g_1)_d \ldots s_d(g_n)_d. \qquad (2)$

Notice that the first $n$ characters in $S_{d,G}$ describe the level 1 nearest-neighbor grid points for $G$, the next $n$ characters describe the level 2 nearest-neighbor grid points for $G$, and so on. Any distinguishable level $d$ grid point configuration $G$ is encoded uniquely by $S_{d,G}$. The time required to generate $S_{d,G}$ is $O(dn)$ (e.g., by using radix sort).

Lemma 3. Let $p(G) = \{n_{d-1}(g) : g \in G\}$. Then $S_{d,G} = [S_{d-1,p(G)}]\, s_d(g_1)_d \ldots s_d(g_n)_d$.

Proof. This can be seen by noting that each string in $S_d(G)$ is formed from a string in $S_{d-1}(p(G))$ with a single symbol appended to the end, so the lexicographic sortings of $S_d(G)$ and $S_{d-1}(p(G))$ agree up to position $d-1$.

A natural approach to storing a collection of point sets, each represented as a string, is to use a trie-based data structure [6, 5]. We first consider the database scheme proposed in [8], in which many snap-roundings of each point set are stored, and use the aforementioned string representation and trie data structure. To represent a point set $P$, snap-roundings to grid point configurations (up to some maximum grid level $d_{max}$) are stored in a trie $T$. If $|P| = n$, there are up to $4^{d_{max} n}$ such snap-roundings, although there are potentially fewer distinguishable grid point configurations to store. Each snap-rounding at level $d$ represents a grid point configuration $G$ that must be stored in $D$; the string representation $S_{d,G}$ is used to represent each $G$.

Lemma 4. If $G$ is a snap-rounding configuration at level $d > 1$ for a point set $P$, then there is another snap-rounding configuration $G'$ of $P$ at level $d-1$ such that $S_{d-1,G'}$ is a prefix of $S_{d,G}$.

Proof. Let $G = \{g_1, \ldots, g_n\}$ be a snap-rounding of $P$ at level $d > 1$. Each $g_i = snap(p) \in n^4_d(p)$ for some $p \in P$. Let $g'_i = n_{d-1}(g_i)$. Clearly, $g'_i \in n^4_{d-1}(p)$. It follows that the grid configuration $G' = n_{d-1}(G)$ will be snapped to by $P$ in the level $d-1$ grid. Furthermore, $S_{d,G} = [S_{d-1,G'}]\, s_d(g_1)_d \ldots s_d(g_n)_d$, by Lemma 3.
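A sketch of the encoding follows, reusing the hypothetical n_d from the earlier snippet; each symbol records which of the nine moves in (1) takes $n_{i-1}(g)$ to $n_i(g)$, and the symbol and function names are illustrative.

```python
def s_d(g, d):
    """Length-d encoding of a level-d grid point g: the i-th symbol names
    the compass move from n_{i-1}(g) to n_i(g), or 'I' for identity."""
    name = {(-1, -1): 'SW', (-1, 0): 'W', (-1, 1): 'NW', (0, -1): 'S',
            (0, 0): 'I', (0, 1): 'N', (1, -1): 'SE', (1, 0): 'E', (1, 1): 'NE'}
    syms, prev = [], n_d(g, 0)
    for i in range(1, d + 1):
        cur, delta = n_d(g, i), 2.0 ** (1 - i)
        step = (round((cur[0] - prev[0]) / delta),   # each component is -1, 0, or 1
                round((cur[1] - prev[1]) / delta))
        syms.append(name[step])
        prev = cur
    return syms                      # a symbol sequence over a 9-letter alphabet

def interleaved(G, d):
    """S_{d,G} of (2): lexicographically sort the per-point encodings and
    read them off column by column (level by level)."""
    strs = sorted(s_d(g, d) for g in G)
    return [s[i] for i in range(d) for s in strs]
```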
Each trie node also stores a pointer to a list of point sets (initialized to null). As snapped grid point configurations for $P$ are added to $T$, $P$ is appended to this list at each trie node that "finishes" describing a grid point configuration for some level; e.g., if $|P| = k$, then a trie node at depth $dk$ describes a level $d$ grid point configuration for $P$. The time required to add a new point set $P$ of size $n$ to $T$ is $O(4^{d_{max} n} d_{max} n)$, since at most $O(4^{d_{max} n})$ snapped grid configuration strings are stored and each is generated in $O(d_{max} n)$ time. The additional space requirement for $T$ is also $O(4^{d_{max} n} d_{max} n)$.

Handling Queries

Let $Q$ be a query point set of size $n$; our objective is to find those $P \in D$ that approximate $nearest(D, Q)$, where the database $D$ is represented using a trie $T$, as described above. A query string $S_Q$ is constructed in left-to-right order in blocks of size $n$ as follows. For each point $q \in Q$, we consider the sequence of grid points $n_0(q), n_1(q), \ldots, n_{d_{max}}(q)$; the sequence gets monotonically closer to $q$. As before, we can represent this sequence as a string $s_{d_{max}}(q)$, whose $i$-th symbol is $\mathrm{dir}_i(q)$, and $S_{d_{max}}(Q)$ is the collection of these strings for all $q \in Q$.

In order to produce the query string $S_Q$, $S_{d_{max}}(Q)$ must be sorted lexicographically; however, this can be done lazily using radix sort. First, $s_{d_{max}}(q)_1$ is found for all $q \in Q$ and the strings are sorted on index 1. The resulting sorted column provides the first $n$ symbols in $S_Q$. Next, the trie $T$ is searched on this block. If there is a hit, then the search continues to the next index position: the string symbols are computed at that position (in $O(n)$ time), and the radix sort is continued at the next index. This produces the next size-$n$ block of $S_Q$, and $T$ is probed from where the previous hit was found. If $d^* \le d_{max}$ is the maximum hit depth, then $d^* = \lg(1/d_B(D,Q)) + O(1)$, and the query runs in $O(n \lg(1/d_B(D,Q)))$ time.

Discussion

An approach to indexing planar point sets that supports approximate nearest bottleneck distance queries is described, using a trie-based data structure to compactly represent point configurations in a multi-level grid. The obvious drawback is the exponential space complexity: up to $4^{d_{max} n}$ strings are stored for each point set of size $n$. A natural question is whether a more space-efficient database scheme is possible. It would also be interesting to consider whether an indexing approach and querying procedure can be found that permits one of the point sets to be transformed by an isometry, as done in [9].
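To recap the storage and querying scheme, here is a minimal trie sketch (again illustrative, not the paper's implementation): insert is called once per snap-rounding string of each stored set, and query descends one size-n block at a time until a block fails to match.

```python
class TrieNode:
    def __init__(self):
        self.children = {}           # symbol -> TrieNode
        self.point_sets = []         # sets whose level-d configuration ends here

def insert(root, symbols, label, block):
    """Insert one interleaved snap-rounding string; record `label` at every
    depth that completes a block of `block` symbols (i.e., a full level)."""
    node = root
    for i, s in enumerate(symbols, start=1):
        node = node.children.setdefault(s, TrieNode())
        if i % block == 0:
            node.point_sets.append(label)

def query(root, blocks):
    """Descend one size-n block at a time; return the point sets recorded at
    the deepest fully matched level (a 6-approximate nearest neighbor set)."""
    node, best = root, []
    for blk in blocks:
        for s in blk:
            if s not in node.children:
                return best
            node = node.children[s]
        best = node.point_sets or best
    return best
```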
{ "abstract": [ "Abstract This paper considers the computer vision problem of testing whether two equal cardinality points sets A and B in the plane are e-congruent. We say that A and B are e-congruent if there exists an isometry I and bijection l:A → B such that dist(I(a), l(a)) ⩽ e, for all a ϵ A. Since known methods for this problem are expensive, we develop approximate decision algorithms that are considerably faster than the known decision algorithms, and have bounds on their imprecision. Our approach reduces the problem to that of computing maximum flows on a series of graphs with integral capacities.", "We give an O(n log3 n) algorithm that, given an n-node directed planar graph with arc capacities, a set of source nodes, and a set of sink nodes, finds a maximum flow from the sources to the sinks. Previously, the fastest algorithms known for this problem were those for general graphs." ], "cite_N": [ "@cite_3", "@cite_8" ], "mid": [ "2212031862", "2058622993" ] }
{ "abstract": [ "Persistence diagrams are important tools in the field of topological data analysis that describe the presence and magnitude of features in a filtered topological space. However, current approaches for comparing a persistence diagram to a set of other persistence diagrams is linear in the number of diagrams or do not offer performance guarantees. In this paper, we apply concepts from locality-sensitive hashing to support approximate nearest neighbor search in the space of persistence diagrams. Given a set @math of @math @math -bounded persistence diagrams, each with at most @math points, we snap-round the points of each diagram to points on a cubical lattice and produce a key for each possible snap-rounding. Specifically, we fix a grid over each diagram at several resolutions and consider the snap-roundings of each diagram to the four nearest lattice points. Then, we propose a data structure with @math levels @math that stores all snap-roundings of each persistence diagram in @math at each resolution. This data structure has size @math to account for varying lattice resolutions as well as snap-roundings and the deletion of points with low persistence. To search for a persistence diagram, we compute a key for a query diagram by snapping each point to a lattice and deleting points of low persistence. Furthermore, as the lattice parameter decreases, searching our data structure yields a six-approximation of the nearest diagram in @math in @math time and a constant factor approximation of the @math th nearest diagram in @math time.", "We define a new topological summary for data that we call the persistence landscape. Since this summary lies in a vector space, it is easy to combine with tools from statistics and machine learning, in contrast to the standard topological summaries. Viewed as a random variable with values in a Banach space, this summary obeys a strong law of large numbers and a central limit theorem. We show how a number of standard statistical tests can be used for statistical inference using this summary. We also prove that this summary is stable and that it can be used to provide lower bounds for the bottleneck and Wasserstein distances." ], "cite_N": [ "@cite_0", "@cite_4" ], "mid": [ "2907190463", "2149185044" ] }
A Note on Indexing Planar Point Sets for Approximate Bottleneck Distance Queries
The bottleneck distance is a natural measure of the distance between two finite point sets of equal cardinality. The problem of computing the bottleneck distance arises in geometric applications such as comparing persistence diagrams in topological data analysis [3]. Bottleneck distance is defined between two point sets P and Q as d B (P, Q) = min h:P →Q max p∈P h(p) − p , where h is a bijection and · is chosen as the L ∞ norm, as this is common for the persistence diagram comparison application. Given a database D of point sets, we can also define The problem considered in this work is to identify approximate nearest neighbor point sets P , whose bottleneck distance from Q is within a constant factor of d B (D, Q). Without loss of generality, we assume that all point sets belong to the unit box [0, 1] 2 in the plane. We first describe a simple approach to represent point sets using strings. This suggests using a trie data structure [6,5] to store strings associated with each point set in the database. Preliminaries Without loss of generality, all point sets are contained within the unit box B = [0, 1] 2 in the plane. Following the general approach of [8], we recursively divide B into finer grids. The corner (0, 0) is designated as the origin. The four corners of B are the grid points at level 1. The grid at level d is subdivided by 2 to form the grid at level d + 1. Thus, level 2 contains 9 grid points and in general level d contains (2 d−1 + 1) 2 grid points. The grid length at level d is δ d = 2 1−d . Let p be a point in or on B, for d ≥ 1, we define n d (p) as the nearest level d grid point to p, breaking ties by going in the S and/or W direction with respect to p. Observe that n d (p) is unique and that if p is already a level d grid point, then n d (p) = p. We also define n 0 (p) as the origin. Suppose P is a point set to be stored in D. For each p ∈ P , let n 4 d (p) = {g : g is a level d grid point, g − p ∞ < δ d }. Note |n 4 d (p)| ≤ 4. We consider all ways of snapping each p to some grid point snap(p) ∈ n 4 d (p). Definition 1. A query point set Q is said to hit a point set P at level d, if there is a snapping of P such that |q : q ∈ Q, n d (q) = g| = |p : p ∈ P, snap(p) = g| for all level d grid points g. Versions of the following two lemmas appear in [8], and a similar analysis is also found in [9]. Lemma 1. If Q hits P at depth d, then d B (P, Q) ≤ 3 2 δ d . Proof. Since Q hits P , there is a snap-rounding of P that produces the grid configuration n d (Q) = {n d (q) : q ∈ Q} (repeats allowed); define a bijection h : P → Q by mapping each p that snapped to some grid point g to a unique q such that g = n d (q). Then h(p) − p ∞ = q − p ∞ ≤ q − n d (q) ∞ + n d (q) − p ∞ ≤ δ d 2 + δ d . Thus d B (P, Q) ≤ 3 2 δ d . Lemma 2. If Q does not hit P at level d, then d B (P, Q) ≥ δ d 2 . Proof. Let h : P → Q be a bijection that realizes d B (P, Q). We prove the contrapositive: Suppose d B (P, Q) < δ d 2 . Let p ∈ P and q = h(p). Then p − n d (q) ∞ ≤ p − q ∞ + q − n d (q) ∞ ≤ d B (P, Q) + δ d 2 < δ d . It follows that n d (q) ∈ n 4 d (p), and so snapping each p to n d (q) provides a hit to Q at level d. Suppose d * is the maximum depth at which there is a hit P ∈ D for a query point set Q. Lemma 1 implies d B (P, Q) ≤ 3 2 δ d * . On the other hand, since no hits were found at depth d * + 1, by Lemma (D, Q), and so for a query point set Q, the point set returned P is guaranteed to be a 6-approximation to the nearest point set in D. 2, d B (D, Q) ≥ δ d * +1 2 = δ d * 4 . 
Thus d B (P, Q) ≤ 6d B A Trie-based Data Structure We propose an indexing approach based on representing configurations of grid points as strings. We first define a string representation for a single grid point at level d as a length d string and then interleave n such strings to represent a set of n grid points in the level d grid. The interleaving is done so that the string first describes the level 1 grid points, then level 2, etc. Let g be a grid point at level d ≥ 1. We define N d (g) as the grid point neighbor at level d directly north of g, provided this point belongs to the grid. Define similarly for all eight principal compass wind directions and let I d (g) = g (I for identity). We introduce a string encoding of any grid point g at some level d ≥ 1. The string, s d (g) is constructed in left-to-right order, in O(1) time per symbol by "walking" in the grid toward g, starting at the origin, following the grid points n 0 (g), n 1 (g), . . . , n d (g) = g. Observe that for 1 ≤ i ≤ d, We now consider how to use the above string encoding to represent grid point configurations. Let G be a set of n grid points (repeats allowed) at level d > 0 and let S d (G) = {s d (g)|g ∈ G} be the set of length d strings that encode each grid point in G. Consider S d (G) sorted into lexicographic order, i.e. S d (G) = {s d (g 1 ) ≤ s d (g 2 ) ≤ . . . ≤ s d (g n )}. S d (G) can be encoded as a single interleaved string of length nd, defined as: n i (g) = dir i (g)(n i−1 (g)),(1)S d,G = s d (g 1 ) 1 . . . s d (g n ) 1 s d (g 1 ) 2 . . . s d (g n ) 2 . . . s d (g 1 ) n . . . s d (g n ) n . (2) Notice that the first n characters in S d,G describe the level 1 nearest neighbor grid points for G, the next n characters describe the level 2 nearest neighbor grid points for G and so on. Any distinguishable level d grid point configuration G is encoded uniquely by S d,G . The time required to generate S d,G is O(dn) (e.g. by using radix sort). Lemma 3. Let p(G) = {n d−1 (g) : g ∈ G}. Then S d,G = [S d−1,p(G) ]s d (g 1 ) n . . . s d (g n ) n . Proof. This can be seen by noting that each string in S d (G) is formed from a string in S d−1 (G) with a single symbol appended to the end, so the lexicographic sortings of S d (G) and S d−1 (p(G)) agree up to position d − 1. A natural approach to storing a collection of point sets, each represented as a string, is to use a trie-based data structure [6,5]. We first consider the database scheme proposed in [8], in which many snaproundings of each point set are stored and use the aforementioned string representation and trie data structure. To represent a point set P , snap-roundings to grid point configurations (up to some maximum grid level d max ) are stored in a trie T. If |P | = n, there are 4 dmaxn such snap-roundings, although there are potentially fewer distinguishable grid point configurations to store. Each snap-rounding at level d represents a grid point configuration G that must be stored in D; the string representation S d,G is used to represent each G. Lemma 4. If G is a snap-rounding configuration at level d > 1 for a point set P , then then there is another snap-rounding configuration G ′ of P at level d − 1 such that S d−1,G ′ is a prefix of S d,G . Proof. Let G = {g 1 , . . . g n } be a snap-rounding of P at level d > 1. Each g i = snap(p) ∈ n 4 d (p) for some p ∈ P . Let g ′ i = n d−1 (g i ). Clearly, g ′ i ∈ n 4 d−1 (p) . It follows that the grid configuration G ′ = n d−1 (G) will be snapped to by P in the level d − 1 grid. 
Furthermore, S d,G = [S d−1,G ′ ]s d (g 1 ) n . . . s d (g n ) n , by Lemma 3. Each trie node will also store a pointer to the list of point sets (initialized to null). As snapped grid point configurations for P are added to T, P is appended to this list at each trie node that "finishes" describing a grid point configuration for some level, e.g. if |P | = k, then a trie node at depth dk describes a level d grid point configuration for P . The time required to add a new point set P of size n to T is O(4 dmaxn d max n), since at most O(4 dmaxn ) snapped grid configuration strings are stored and each is generated in O(d max n) time. The additional space requirement for T is also O(4 dmaxn d max n). Handling Queries Let Q be a query point set of size n; our objective is to find those P ∈ D that approximate nearest(D, Q), where the database D is represented using a trie T, as described above. A query string S Q is constructed in left-to-right order in blocks of size n as follows: For each point q ∈ Q, we consider the sequence of grid points n 0 (q), n 1 (q), . . . , n dmax (q); the sequence gets monotonically closer to q. As before, we can represent this sequence as a string s dmax (q), whose ith symbol is dir i (q) and S dmax (Q) is the collection of these strings for all q ∈ Q. In order to produce the query string S Q , S dmax (Q) must be sorted lexicographically, however this can be done lazily using radix sort. First, s dmax (q) 1 is found for all q ∈ Q and the strings are sorted on index 1. The resulting sorted column provides the first n symbols in S Q . Next, the trie T is searched on this block. If there is a hit, then the search continues to the next index position, the string symbols are computed at that position (in O(n) time) and the radix sort is continued at the next index. This produces the next size n block of S Q and T is probed from where the previous hit was found. If d * ≤ d max is the maximum hit depth, then d * = − lg(d B (D, Q)) and the query runs in O(− lg(d B (D, Q))n) time. Discussion An approach to indexing planar point sets that supports approximate nearest bottleneck distance queries using a trie-based data structure to compactly represent point configurations in a multi-level grid is described. The obvious drawback is the exponential space complexity; up to 4 dmaxn strings are stored for each point set of size n. A natural question is whether a more space-efficient database scheme is possible. It would also be interesting to consider if an indexing approach and querying procedure can be found that permits one of the point sets to be transformed by an isometry, such as done in [9].
2,152
1810.09482
2897594638
The bottleneck distance is a natural measure of the distance between two finite point sets of equal cardinality. In this work, we consider the problem of indexing a set of @math planar point sets (of varying sizes) to create a database @math that supports nearest bottleneck distance queries: given a query point set @math of size @math , the point sets @math that are closest in terms of bottleneck distance are returned. Without loss of generality, we assume that all point sets belong to the unit box @math in the plane. The main contribution of this work is a trie-based data structure that is space efficient and supports @math -approximate nearest bottleneck queries in @math time, where @math is the minimum bottleneck distance from @math to any point set in @math . A direct consequence, of independent interest, is a simple @math time algorithm to @math -approximate @math , for any two point sets @math and @math . Finally, the querying algorithm proposed is easily adapted to support nearest subset and superset queries.
Approximation results are known for general bipartite matching in metric spaces; in @cite_2 the authors show that for any @math , there is an algorithm that computes a @math -approximate matching, where @math , in @math time. A variation on minimum-distance bottleneck matching, with the additional constraint that the matched edges cannot cross, was recently shown to be NP-hard to approximate within a factor of less than @math @cite_7 .
{ "abstract": [ "Motivated by a crane assignment problem, we consider a Euclidean bipartite matching problem with edge-crossing constraints. Specifically, given n red points and n blue points in the plane, we want to construct a perfect matching between red and blue points that minimizes the length of the longest edge, while imposing a constraint that no two edges may cross each other. We show that the problem cannot be approximately solved within a factor less than 1:277 in polynomial time unless P = NP. We give simple dynamic programming algorithms that solve our problem in two special cases, namely (1) the case where the red and blue points form the vertices of a convex polygon and (2) the case where the red points are collinear and the blue points lie to one side of the line through the red points.", "Let G = G(A∪B,A×B), with |A| = |B| = n, be a weighted bipartite graph, and let d(·,·) be the cost function on the edges. Let w(M) denote the weight of a matching in G, and M* a minimum-cost perfect matching in G. We call a perfect matching M c-approximate, for c ≥ 1, if w(M) ≤ c · w(M*). We present three approximation algorithms for computing minimum-cost perfect matchings in G. First, we consider the case when d(·,·) is a metric. For any Δ > 0, we present an algorithm that, in O(n2+Δ log n log2(1 Δ)) time, computes a O(1 Δα)-approximate matching of G, where α = log3 2 ≈ 0.631. Next, we assume the existence of a dynamic data structure for answering approximate nearest neighbor (ANN) queries under d(··). Given two parameters e, d e) is the query and update time of an (e 2)-ANN data structure. Finally, we present an algorithm that works even if d(·,·) is not a metric but admits an ANN data structure for d(·,·). In particular, we present an algorithm that computes, in O(e---1n3 2τ (n, e) log4(n e) log Δ) time, a (1 + e)-approximate matching of A and B; here Δ is the ratio of the largest to the smallest-cost edge in G, and τ (n, e) is the query and update time of an (e c)-ANN data structure for some constant c > 1. We show that our results lead to faster matching algorithms for many geometric settings." ], "cite_N": [ "@cite_7", "@cite_2" ], "mid": [ "2289848075", "2079168081" ] }
A Note on Indexing Planar Point Sets for Approximate Bottleneck Distance Queries
The bottleneck distance is a natural measure of the distance between two finite point sets of equal cardinality. The problem of computing the bottleneck distance arises in geometric applications such as comparing persistence diagrams in topological data analysis [3]. The bottleneck distance is defined between two point sets P and Q as $d_B(P, Q) = \min_{h: P \to Q} \max_{p \in P} \|h(p) - p\|$, where h is a bijection and $\|\cdot\|$ is chosen as the $L_\infty$ norm, as this is common for the persistence diagram comparison application. Given a database D of point sets, we can also define $d_B(D, Q) = \min_{P \in D} d_B(P, Q)$, with $\mathrm{nearest}(D, Q)$ a point set realizing this minimum. The problem considered in this work is to identify approximate nearest neighbor point sets P, whose bottleneck distance from Q is within a constant factor of $d_B(D, Q)$. Without loss of generality, we assume that all point sets belong to the unit box $[0, 1]^2$ in the plane. We first describe a simple approach to represent point sets using strings. This suggests using a trie data structure [6,5] to store strings associated with each point set in the database.

Preliminaries

Without loss of generality, all point sets are contained within the unit box $B = [0, 1]^2$ in the plane. Following the general approach of [8], we recursively divide B into finer grids. The corner (0, 0) is designated as the origin. The four corners of B are the grid points at level 1. The grid at level d is subdivided by 2 to form the grid at level d + 1. Thus, level 2 contains 9 grid points and in general level d contains $(2^{d-1} + 1)^2$ grid points. The grid length at level d is $\delta_d = 2^{1-d}$. Let p be a point in or on B; for d ≥ 1, we define $n_d(p)$ as the nearest level-d grid point to p, breaking ties by going in the S and/or W direction with respect to p. Observe that $n_d(p)$ is unique and that if p is already a level-d grid point, then $n_d(p) = p$. We also define $n_0(p)$ as the origin. Suppose P is a point set to be stored in D. For each p ∈ P, let $n^4_d(p) = \{g : g \text{ is a level-}d \text{ grid point}, \|g - p\|_\infty < \delta_d\}$. Note $|n^4_d(p)| \le 4$. We consider all ways of snapping each p to some grid point $\mathrm{snap}(p) \in n^4_d(p)$.

Definition 1. A query point set Q is said to hit a point set P at level d if there is a snapping of P such that $|\{q \in Q : n_d(q) = g\}| = |\{p \in P : \mathrm{snap}(p) = g\}|$ for all level-d grid points g.

Versions of the following two lemmas appear in [8], and a similar analysis is also found in [9].

Lemma 1. If Q hits P at depth d, then $d_B(P, Q) \le \frac{3}{2}\delta_d$.

Proof. Since Q hits P, there is a snap-rounding of P that produces the grid configuration $n_d(Q) = \{n_d(q) : q \in Q\}$ (repeats allowed); define a bijection h : P → Q by mapping each p that snapped to some grid point g to a unique q such that $g = n_d(q)$. Then $\|h(p) - p\|_\infty = \|q - p\|_\infty \le \|q - n_d(q)\|_\infty + \|n_d(q) - p\|_\infty \le \frac{\delta_d}{2} + \delta_d$. Thus $d_B(P, Q) \le \frac{3}{2}\delta_d$.

Lemma 2. If Q does not hit P at level d, then $d_B(P, Q) \ge \frac{\delta_d}{2}$.

Proof. Let h : P → Q be a bijection that realizes $d_B(P, Q)$. We prove the contrapositive: suppose $d_B(P, Q) < \frac{\delta_d}{2}$. Let p ∈ P and q = h(p). Then $\|p - n_d(q)\|_\infty \le \|p - q\|_\infty + \|q - n_d(q)\|_\infty \le d_B(P, Q) + \frac{\delta_d}{2} < \delta_d$. It follows that $n_d(q) \in n^4_d(p)$, and so snapping each p to $n_d(q)$ provides a hit to Q at level d.

Suppose $d^*$ is the maximum depth at which there is a hit P ∈ D for a query point set Q. Lemma 1 implies $d_B(P, Q) \le \frac{3}{2}\delta_{d^*}$. On the other hand, since no hits were found at depth $d^* + 1$, by Lemma 2, $d_B(D, Q) \ge \frac{\delta_{d^*+1}}{2} = \frac{\delta_{d^*}}{4}$. Thus $d_B(P, Q) \le 6\, d_B(D, Q)$, and so for a query point set Q, the point set P returned is guaranteed to be a 6-approximation to the nearest point set in D.
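The snapping neighborhoods and the hit test above can be sketched directly. The following Python fragment is a naive illustration: it enumerates all $4^{|P|}$ snappings, mirroring the combinatorics in the text, and uses a small negative bias to realize the S/W tie-breaking; the function names are mine, not the paper's.

```python
from collections import Counter
from itertools import product
import numpy as np

def nearest_grid_point(p, d):
    """n_d(p), with ties broken toward the S/W (same helper as above)."""
    delta = 2.0 ** (1 - d)
    return tuple(np.floor(np.asarray(p, float) / delta + 0.5 - 1e-12) * delta)

def n4(p, d):
    """n^4_d(p): the (at most four) level-d grid points g with
    ||g - p||_inf strictly less than delta_d."""
    delta = 2.0 ** (1 - d)
    cands = set()
    for ox, oy in product((0.0, delta), repeat=2):
        g = (np.floor(p[0] / delta) * delta + ox,
             np.floor(p[1] / delta) * delta + oy)
        if 0.0 <= g[0] <= 1.0 and 0.0 <= g[1] <= 1.0 and \
           max(abs(g[0] - p[0]), abs(g[1] - p[1])) < delta:
            cands.add(g)
    return cands

def hits(P, Q, d):
    """Does Q hit P at level d?  Try every snapping of each p in P to a
    point of n^4_d(p) and compare multisets with {n_d(q) : q in Q}.
    Exponential in |P|, matching the 4^n snap-roundings in the text."""
    target = Counter(nearest_grid_point(q, d) for q in Q)
    for choice in product(*(n4(p, d) for p in P)):
        if Counter(choice) == target:
            return True
    return False

# (0.3, 0.3) can snap to (0.5, 0.5) = n_2((0.6, 0.6)), so this prints True;
# consistent with Lemma 1, d_B here is at most (3/2) * delta_2 = 0.75.
print(hits([(0.3, 0.3)], [(0.6, 0.6)], 2))
```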
2,152
1906.07258
2949855468
Precise knowledge about the size of a crowd, its density and flow can provide valuable information for safety and security applications, event planning, architectural design and for analyzing consumer behavior. Creating a powerful machine learning model for such applications requires a large, highly accurate and reliable dataset. Unfortunately, the existing crowd counting and density estimation benchmark datasets are not only limited in terms of their size, but also lack annotation that is in general too time-consuming to produce. This paper attempts to address this very issue through a content-aware technique that uses combinations of the Chan-Vese segmentation algorithm, a two-dimensional Gaussian filter and brute-force nearest neighbor search. The results show that by simply replacing the commonly used density map generators with the proposed method, a higher level of accuracy can be achieved using the existing state-of-the-art models.
Liu @cite_17 proposed a universal network for counting people in crowds of varying density and scale. In this study, the proposed network is composed of two components: a detection network (DNet) and an encoder-decoder estimation network (ENet). The input is first run through DNet to detect and count the individuals who can be segmented clearly. Then, ENet is utilized to estimate the density maps of the remaining areas, where the numbers of individuals cannot be detected. A modified version of Xception is used as the encoder for feature extraction, and a combination of dilated convolution and transposed convolution is used as the decoder. The authors attempted to address the variations in crowd density with two essentially isolated deep networks, which significantly slows down the process and lacks novelty.
{ "abstract": [ "Counting people or objects with significantly varying scales and densities has attracted much interest from the research community and yet it remains an open problem. In this paper, we propose a simple but an efficient and effective network, named DENet, which is composed of two components, i.e., a detection network (DNet) and an encoder-decoder estimation network (ENet). We first run DNet on an input image to detect and count individuals who can be segmented clearly. Then, ENet is utilized to estimate the density maps of the remaining areas, where the numbers of individuals cannot be detected. We propose a modified Xception as an encoder for feature extraction and a combination of dilated convolution and transposed convolution as a decoder. In the ShanghaiTech Part A, UCF and WorldExpo'10 datasets, our DENet achieves lower Mean Absolute Error (MAE) than those of the state-of-the-art methods." ], "cite_N": [ "@cite_17" ], "mid": [ "2939776820" ] }
Content-aware Density Map for Crowd Counting and Density Estimation
The study of human behavior is a subject of great scientific interest and probably an inexhaustible source of research. One of the most cited and popular research topics in human behavior analysis is the study of crowd features and characteristics. In recent years, crowd analysis has gained a lot of interest, mainly due to its wide range of applications such as safety monitoring, disaster management, public space design, and intelligence gathering, especially in congested scenes like arenas, shopping malls, and airports [1,2]. Crowd counting, localization and density estimation are crucial objectives of an automated crowd analysis system. Accurate knowledge of the crowd size, location and density in a public space can provide valuable insight for tasks such as city planning, analyzing consumer shopping patterns, and maintaining general crowd safety. Several studies attempt to produce an accurate estimate of the true number of people present in a crowded scene through density estimation. Deep learning has proven superior to classic computer vision and machine learning techniques, which tend to struggle with the complexity of crowd counting and behavior analysis models [3].

Generally, crowd counting and density estimation approaches can be divided into two categories: detection-based methods (specific) and regression-based methods (holistic). Detection-based methods generally assume that each person in the crowd can be detected and located individually based on his or her individual features and characteristics. These approaches are preferable in sparse crowd analysis, where crowd occlusion is negligible. Holistic crowd counting and behavior analysis approaches utilize global crowd features and characteristics to estimate crowd size, flow and density. These approaches are preferable in dense crowd analysis, where crowd occlusion is significant; due to the high amount of occlusion, they only utilize heads as the discriminant feature [4].

However, crowd counting and density estimation is not a trivial task. Several key challenges such as severe occlusions, poor illumination, camera perspective and highly dynamic environments further complicate crowd analysis. Moreover, the poor quality of annotated data adds to the complexity of crowd counting and behavior analysis in crowded environments. The existing crowd counting and density estimation benchmark datasets are not only limited in quantity, but also lacking in annotation strategy. In regression-based crowd counting and density estimation approaches, people's heads are the only visible body part in an image, so these approaches use heads as the only discriminant feature. Meanwhile, the existing benchmark datasets such as UCF-CC-50 and ShanghaiTech only provide the centroid pixel of each head instead of masking the entire head region. Hence, the recreation of the ground truth head masks is accomplished through a static two-dimensional Gaussian filter or a dynamic two-dimensional Gaussian based on the k nearest neighbors. Although the dynamic Gaussian approach based on the proximity of the nearest neighbors mitigates the issue to some extent, this technique is not content aware and incorporates a significant amount of noise into the ground truth data [5,6]. In this regard, our study attempts to address this limitation of the existing crowd counting and density estimation benchmark datasets through a content aware annotation technique.
It employs combinations of a nearest neighbor algorithm and unsupervised segmentation to generate the ground truth head masks. The proposed technique first uses brute-force nearest neighbor search to localize the nearest neighboring head point; it then identifies the head boundaries using the Chan-Vese segmentation algorithm and generates a two-dimensional Gaussian filter on that basis. We believe that by simply replacing the kNN/Gaussian based ground truth density maps in an existing state-of-the-art network with the content aware approach proposed in this study, a higher level of accuracy can be achieved. The rest of this paper is organized as follows: section 2 summarizes the related work, section 3 describes the existing datasets and annotation strategies, section 4 presents the proposed methodology, section 5 presents the experimental results, and finally section 6 concludes the findings of this research.

Annotation Strategy

In a dense crowd scenario, aside from people's heads, which are usually fairly visible, the majority of the other body parts are subject to heavy occlusion. This makes heads the only reliable discriminant feature in dense crowd counting and localization. Existing crowd counting and density estimation benchmark datasets such as UCF-CC-50 and ShanghaiTech provide the heads' centroid pixel locations as labels. Conducting crowd counting and density estimation as a regression task calls for regional isolation of the heads in the form of a binary mask. As the head size is subject to various factors such as camera specifications, point of view, perspective, distance and angle, generating such a mask can be a challenging task, given that the head centroid pixel is the only form of annotation provided in the existing benchmark datasets.

The formation of the ground truth binary head masks in the majority of the existing studies is accomplished either through a static two-dimensional Gaussian filter or through a dynamic two-dimensional Gaussian filter paired with a k nearest neighbors approach. The static two-dimensional Gaussian filter assigns a fixed-size Gaussian filter to each head regardless of the head size and the proximity of the nearest neighbor. This approach does not attempt to compensate for crowd density, distance and camera perspective, and it incorporates a significant amount of noise into the ground truth data. The dynamic two-dimensional Gaussian filter approach performs the nearest neighbor search through a k-d tree space partitioning approach, which prioritizes speed over integrity and does not deliver optimal results. In this approach the Gaussian filters are centered on the annotation points and spread based on the average Euclidean distance to the three nearest neighbors. In both approaches, the spatial accumulation of all Gaussians creates the global density map for the given image. The following formula shows the commonly used dynamic two-dimensional Gaussian approach:

$$D(x, f) = \sum_{h=1}^{T} \frac{1}{2\pi f(\sigma_h)} \exp\!\left( -\frac{(x - x_h)^2 + (y - y_h)^2}{2 f(\sigma_h)^2} \right) \quad (1)$$

where T is the total number of heads present in the given image, $\sigma_h$ is the size for each head point positioned at $(x_h, y_h)$, determined by the k-d tree space partitioning approach from the average Euclidean distance to the three nearest neighbors, and f is a scaling constant. The dynamic Gaussian approach based on the k nearest neighbors attempts to mitigate the crowd density, distance and camera perspective issues to some extent.
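For concreteness, a minimal Python sketch of this dynamic kNN/Gaussian generator follows; the scaling constant `beta` (standing in for f), the per-head unit-mass normalization, and the lower bound on sigma are assumptions of mine rather than details fixed by Eq. (1).

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_density_map(points, shape, k=3, beta=0.3):
    """Sketch of the dynamic (kNN) Gaussian generator of Eq. (1): one 2-D
    Gaussian per annotated head, with sigma_h proportional to the mean
    distance to the k nearest neighboring heads. `points` is an (N, 2)
    array of (x, y) pixel coordinates; at least two heads are assumed.
    beta = 0.3 is a common choice in the literature, assumed here."""
    h, w = shape
    pts = np.asarray(points, dtype=np.float64)
    density = np.zeros((h, w))
    k = min(k, len(pts) - 1)                      # guard for tiny point sets
    dists, _ = cKDTree(pts).query(pts, k=k + 1)   # column 0 is the point itself
    ys, xs = np.mgrid[0:h, 0:w]
    for (x0, y0), d in zip(pts, dists):
        sigma = max(beta * d[1:].mean(), 1.0)     # floor at one pixel (my guard)
        g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
        density += g / g.sum()                    # each head contributes mass 1
    return density
```

Normalizing each Gaussian to unit mass keeps the integral of the density map equal to the head count regardless of the analytic prefactor, which is the property the counting loss actually relies on.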
However, this technique is not content aware and it introduces a significant amount of noise into the ground truth data, which in turn negatively affects the model's accuracy. Figure 1 shows some sample images from the ShanghaiTech dataset along with their respective ground truth density maps. It can be observed that both approaches are fairly unreliable and inconsistent in determining the true head sizes.

Methodology

In order to address the shortcomings of the existing ground truth density map generation approaches, this study offers a content aware technique using combinations of the Chan-Vese segmentation algorithm, a two-dimensional Gaussian filter and brute-force nearest neighbor search. This technique is based on the Mumford-Shah functional for segmentation and is widely used in the medical imaging field. The Chan-Vese segmentation algorithm is able to segment objects without prominently defined boundaries. The algorithm is based on level sets that are evolved iteratively to minimize an energy, defined by weighted terms corresponding to the sum of intensity differences from the average value outside the segmented region, the sum of differences from the average value inside the segmented region, and a term that depends on the length of the boundary of the segmented region. As the head boundaries in highly dense crowds are not clearly defined, this technique can be used to segment the head regions from the background. The Chan-Vese algorithm attempts to minimize the following energy function in an iterative process [24]:

$$F(c_1, c_2, G) = \mu \cdot \mathrm{Len}(G) + \nu \cdot \mathrm{Area}(\mathrm{in}(G)) + \lambda_1 \int_{\mathrm{in}(G)} |u_0(x, y) - c_1|^2 \, dx \, dy + \lambda_2 \int_{\mathrm{out}(G)} |u_0(x, y) - c_2|^2 \, dx \, dy \quad (2)$$

where G denotes the initial head region, manually set to a 5×5 bounding box centered on the labelled head point, $c_1$ denotes the average pixel intensity inside the initial head region G, and $c_2$ denotes the average intensity of a square box centered on the annotated head point with its boundary extended to the nearest neighboring head point. $\lambda_1$, $\lambda_2$ and $\mu$ are non-negative scalars, manually set to 1, 1 and 0 respectively. A two-dimensional Gaussian filter, centered on the head point and extending to the mean of G, is used to create the ground truth head mask.

Unlike the k-d tree space partitioning technique, which does not always deliver the absolute nearest neighbors, the brute-force nearest neighbor search is always guaranteed to find the absolute nearest neighbors regardless of the distribution of the points. The brute-force search does take considerably longer ($O(n^2)$ vs. $O(n \log n)$) to find the nearest neighbors. However, since generating the ground truth density maps is a single-pass preliminary operation in crowd counting and density estimation, speed is less of a priority here. Moreover, since the Chan-Vese segmentation algorithm only uses the very nearest neighboring head point to determine the boundary of the outside region, the brute-force search only needs to look for that single nearest head point. To create the global density map, we employed an exclusive accumulation of the Gaussians, which addresses the head mask overlap issue. To maintain count integrity, the density map is normalized at each iteration.

Experimental Results

In order to measure the effectiveness of our content-aware crowd density map generator, we have re-trained some of the notable state-of-the-art deep models, including Sindagi et al. [25], Shi et al. [22], Li et al. [26] and Zhang et al. [27], using the density maps generated by the proposed crowd density map generator.
We have used the original implementations of these algorithms as provided by the authors on GitHub. All algorithms were trained and tested on both the UCF-CC-50 and ShanghaiTech datasets, using the proposed content-aware crowd density map generator as well as the commonly used existing ground truth density map generator. In some cases we were unable to reproduce the performance reported in the original manuscripts. However, as we were consistent across both density map generators in all experiments, the validity and integrity of the comparison are not compromised.

Table 1 shows the mean squared error (MSE) comparison between the proposed and existing density map generators across the ShanghaiTech dataset, parts A and B. It can be observed that, using the proposed content-aware density map generator, the MSE decreases consistently across virtually all investigated models. The improvement is more pronounced on the ShanghaiTech part A dataset, which exhibits more challenging and dynamic crowd scenarios. The results convey that the proposed method delivers a better depiction of the ground truth density maps. Table 2 compares the MSE and mean absolute error (MAE) between the proposed and existing density map generators on the extremely challenging UCF-CC-50 dataset. Similar to the results on the ShanghaiTech dataset, there is a notable improvement in both the MSE and MAE metrics.

Figure 2 compares the density maps generated using the existing approach based on the k-d tree space partitioning technique with those generated by the proposed content-aware crowd density map generator. It can be observed that in highly dense crowds the proposed method generates more granular density maps with less overlap between neighboring Gaussians. The proposed method uses a combination of pixel intensity and nearest neighbors to adjust the size of the Gaussians per head, and Figure 2 shows that this significantly improves the integrity of the density map relative to the input image.

Figure 2: From top to bottom: sample images from the ShanghaiTech dataset, density maps generated using the existing method, and density maps generated using the proposed method.

Conclusion

Creating an accurate model for crowd counting and density estimation demands a large and highly reliable body of ground truth data in the first place. However, the existing crowd counting and density estimation benchmark datasets are not only limited in size, but also lacking in annotation methodology. This study attempted to address this issue through a content-aware technique that employs combinations of the Chan-Vese segmentation algorithm, a two-dimensional Gaussian filter and brute-force nearest neighbor search to generate the ground truth density maps. Experimental results show that by replacing the commonly practiced ground truth density map generators with the proposed content-aware method, existing state-of-the-art crowd counting models can achieve a higher level of counting and localization accuracy.
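As a rough illustration of the content-aware pipeline from the Methodology section, the sketch below combines brute-force nearest neighbor search with scikit-image's `chan_vese`; the patch construction, the choice of the foreground phase, and the area-to-sigma heuristic are simplifications of mine (in particular, the paper seeds the level set from a 5×5 box around the annotation, which this sketch does not reproduce).

```python
import numpy as np
from skimage.segmentation import chan_vese

def content_aware_sigma(gray, head, heads):
    """Rough sketch of the per-head size estimate described above:
    (1) brute-force search for the nearest neighboring head,
    (2) run Chan-Vese (lambda1 = lambda2 = 1, mu = 0, as in the text)
        on the patch whose boundary extends to that neighbor,
    (3) derive a Gaussian sigma from the segmented head area.
    `head` and `heads` use integer (x, y) pixel coordinates."""
    x0, y0 = head
    # (1) brute-force nearest neighbor: O(n) per head, O(n^2) overall
    nn = min((np.hypot(px - x0, py - y0) for (px, py) in heads
              if (px, py) != (x0, y0)), default=16.0)
    r = max(int(np.ceil(nn)), 3)
    h, w = gray.shape
    patch = gray[max(0, y0 - r):min(h, y0 + r + 1),
                 max(0, x0 - r):min(w, x0 + r + 1)].astype(float)
    # (2) unsupervised segmentation of the head region within the patch;
    #     assuming the True phase corresponds to the head (an assumption)
    seg = chan_vese(patch, mu=0.0, lambda1=1.0, lambda2=1.0)
    # (3) sigma of a disk with the same area as the segmented region
    area = max(int(seg.sum()), 1)
    return float(np.sqrt(area / np.pi))
```

The returned sigma would then drive a per-head Gaussian exactly as in the kNN-based sketch shown earlier, replacing the neighbor-distance heuristic with a segmentation-derived head size.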
2,080
In another study, Mehta @cite_26 proposed an independent decoding reinforcement branch acting as a binary classifier, which helps the network converge much earlier and also enables the network to estimate density maps with a high Structural Similarity Index (SSIM). A joint loss strategy combining binary cross-entropy (BCE) loss and mean squared error (MSE) loss is used to train the network in an end-to-end fashion. A variation of the U-Net model is used to generate the density maps. The proposed model shows notable improvements over the existing models in the recreation of crowd density maps.
{ "abstract": [ "Crowd management is of paramount importance when it comes to preventing stampedes and saving lives, especially in a countries like China and India where the combined population is a third of the global population. Millions of people convene annually all around the nation to celebrate a myriad of events and crowd count estimation is the linchpin of the crowd management system that could prevent stampedes and save lives. We present a network for crowd counting which reports state of the art results on crowd counting benchmarks. Our contributions are, first, a U-Net inspired model which affords us to report state of the art results. Second, we propose an independent decoding Reinforcement branch which helps the network converge much earlier and also enables the network to estimate density maps with high Structural Similarity Index (SSIM). Third, we discuss the drawbacks of the contemporary architectures and empirically show that even though our architecture achieves state of the art results, the merit may be due to the encoder-decoder pipeline instead. Finally, we report the error analysis which shows that the contemporary line of work is at saturation and leaves certain prominent problems unsolved." ], "cite_N": [ "@cite_26" ], "mid": [ "2924040726" ] }
A study by Oh @cite_24 attempts to address uncertainty estimation in the domain of crowd counting. This study proposed a scalable neural network framework that quantifies decomposed uncertainty using a bootstrap ensemble. The proposed method incorporates both epistemic and aleatoric uncertainty in a neural network for crowd counting, and the uncertainty quantification provides additional auxiliary insight to the crowd counting model. However, the use of an unsupervised calibration method to re-calibrate the predictions of the pre-trained network is questionable.
{ "abstract": [ "Research in neural networks in the field of computer vision has achieved remarkable accuracy for point estimation. However, the uncertainty in the estimation is rarely addressed. Uncertainty quantification accompanied by point estimation can lead to a more informed decision, and even improve the prediction quality. In this work, we focus on uncertainty estimation in the domain of crowd counting. We propose a scalable neural network framework with quantification of decomposed uncertainty using a bootstrap ensemble. We demonstrate that the proposed uncertainty quantification method provides additional insight to the crowd counting problem and is simple to implement. We also show that our proposed method outperforms the current state of the art method in many benchmark data sets. To the best of our knowledge, we have the best system for ShanghaiTech part A and B, UCF CC 50, UCSD, and UCF-QNRF datasets." ], "cite_N": [ "@cite_24" ], "mid": [ "2920816188" ] }
Content-aware Density Map for Crowd Counting and Density Estimation
The study of human behavior is a subject of great scientific interest and probably an inexhaustible source of research. One of the most cited and popular research topic in human behavior analysis is study of crowd features and characteristics. In recent years, crowd analysis has gained a lot of interest mainly due to its wide range of applications such as safety monitor-ing, disaster management, public spaces design, and intelligence gathering, especially in the congested scenes like arenas, shopping malls, and airports [1,2]. Crowd counting, localization and density estimation are crucial objectives of an automated crowd analysis system. Accurate knowledge of the crowd size, location and density in a public space can provide valuable insight for tasks such as city planning, analyzing consumer shopping patterns as well as maintaining general crowd safety. Several studies attempt to produce an accurate estimation of the true number of people present in a crowded scene through density estimation. Deep learning has proven superior to classic computer vision and machine learning techniques that tend to struggle with the complexity of crowd counting and behavior analysis models. [3]. Generally, crowd counting and density estimation approaches can be divided in two categories: detection-based methods (specific) and regression-based methods (holistic). Detectionbased methods generally assume each person on the crowd can be detected and located individually based on its individual features and characteristics. These approaches are preferable in sparse crowd analysis where crowd occlusion is negligible. Holistic crowd counting and behavior analysis approaches utilize global crowd features and characteristics to estimate crowd size, flow and density. These approaches are preferable in dense crowd analysis, where crowd occlusion is significant. Due to high amount of occlusions these approaches only utilize heads as deterministic feature [4]. However, crowd counting and density estimation is not a trivial task. Several key challenges such as severe occlusions, poor illumination, camera perspective and highly dynamic environments further complicate crowd analysis. Moreover, poor quality of annotated data increases to complexity of crowd counting and behavior analysis in crowded environments. The existing crowd counting and density estimation benchmark datasets are not only limited in terms of the quantity, but also lack in terms of annotation strategy. In regression-based crowd counting and density estimation approaches, people heads are the only visible body part in an image. Thus, these approaches use heads as the only discriminant feature. Meanwhile, the existing benchmark datasets such as UCF-CC-50 and ShanghaiTech only provide people heads centroid pixel instead of masking the entire head region. Hence, the recreation of the ground truth head masks is accomplished through a static two-dimensional Gaussian filter or a dynamic two-dimensional Gaussian based on the K nearest neighbors. However, the dynamic Gaussian approach based on proximity of the nearest neighbors mitigates the issue to some extent, but this technique is not content aware and incorporates significant amount of noise into ground truth data [5,6]. In this regard, our study attempts to address the limitation of the existing crowd counting and density estimation benchmark datasets through a content aware annotation technique. 
It employs combinations of nearest neighbor algorithm and unsupervised segmentation to generate the ground truth head masks. The proposed technique first uses the brute-force nearest neighbor search to localize the nearest neighbor head point, then it identified the head boundaries using Chan-Vese segmentation algorithm and generates a two-dimensional Gaussian filter on that basis. We believe that by simply replacing the kN N /Gaussian based ground truth density maps in an existing state of the art network with the proposed content aware approach in this study, higher level of accuracy can be achieved. The rest of this paper is organized as following: section 2 summarizes the related work, section 3 describes the existing datasets and anno-tation strategies, section 4 presents the proposed methodology, section 5 presents the experimental results and finally section 6 concludes the findings of this research. Annotation Strategy In a dense crowd scenario, aside from people heads which are usually fairly visible, the majority of the other body parts are subject to heavy occlusion. This makes heads the only reliable discriminant feature in dense crowd counting and localization. Existing crowd counting and density estimation benchmark datasets such as UCF-CC-50 and ShanghaiTech provide the heads centroid pixel location as labels. Conducting the crowd counting and density estimation as a regression task, seeks for regional isolation of the heads in the form of a binary mask. As the head size is subject to various factors such as camera specifications, point of view, perspective, distance and angle, generation of such mask could be challenging task, given the heads centroid pixel is the only provided form of annotation in existing benchmark datasets. The formation of the ground truth binary head masks in majority of the existing studies is either accomplished through a static two-dimensional Gaussian filter or a dynamic two-dimensional Gaussian filter paired with k nearest neighbors approach. The static two-dimensional Gaussian filter assigns a fixed size Gaussian filter to each head regardless of the head size and proximity of the nearest neighbor. This approach does not attempt to compensate for crowd density, distance and camera perspective and incorporates significant amount of noise into ground truth data. The dynamic two-dimensional Gaussian filter approach employs the nearest neighbors search through k-d tree space partitioning approach, prioritizes the speed over integrity and does not deliver optimal results. In this approach the Gaussian filters are centered to the annotation points and spread based on the average euclidean distance among the three nearest neighbors. In both approaches, the spatial accumulation of all Gaussians creates the global density map for the given image. The following formula shows the commonly used dynamic twodimensional Gaussian approach: D(x, f ) = T h=1 1 2πf (σ h ) exp(− (x − x h ) 2 + (y − y h ) 2 2f (σ h ) 2 )(1) where T is the total number of the heads presents in the given image, σ h is the sized for each head point positioned at (x h , y h ) determined by k-d tree space partitioning approach based on the the average euclidean distance among the three nearest neighbors and f is a scaling constant. The dynamic Gaussian approach based on the k nearest neighbors attempts to mitigate the crowd density, distance and camera perspective issues to some extent. 
However, this technique is not content aware and it introduces a significant amount of noise into the ground truth data, which in turn negatively affects the model's accuracy. Figure 1 shows some sample images from the ShanghaiTech dataset along with their respective ground truth density maps. It can be observed that both approaches are fairly unreliable and inconsistent in determining the true head sizes. Methodology In order to address the shortcomings of the existing ground truth density maps generation approaches, this study offers a content aware technique using combinations of Chan-Vese segmentation algorithm, two-dimensional Gaussian filter and brute-force nearest neighbor search. This technique is based on the Mumford-Shah functional for segmentation, and is widely used in the medical imaging field. The Chan-Vese segmentation algorithm is able to segment objects without prominently defined boundaries. This algorithm is based on level sets that are evolved iteratively to minimize an energy, which is defined by weighted values corresponding to the sum of differences intensity from the average value outside the segmented region, the sum of differences from the average value inside the segmented region, and a term which is dependent on the length of the boundary of the segmented region. As the head boundaries in highly dense crowds are not clearly defined, this technique can be used to segment the head regions from the background. Chan-Vese algorithm attempt to minimize the following energy function in an iterative process [24]. F (c 1 , c 2 , G) = µ.Len(G) + ν.Area(in(G)) +λ 1 in(G) |u 0 (x, y) − c 1 | 2 dxdy +λ 2 out(G) |u 0 (x, y) − c 2 | 2 dxdy (2) where G denote the initial head which manually set to a 5x5 bounding box centered on the labelled head point, c 1 will denote the average pixels' intensity inside the initial head region G, and c 2 denotes the average intensity of a square box, centered to the annotation head point and its boundary extended to the nearest neighbor head point. λ 1 , λ 2 and µ are positive scalars, manually set to 1, 1 and 0 respectively. A twodimensional Gaussian filter which extends to the G mean and centered to the head point is used to create the ground truth head mask. Unlike k-d tree space partitioning technique which does not always delivers the absolute nearest neighbors, brute-force nearest neighbor search technique always guarantees to find the absolute nearest neighbors regardless of the distribution of the points. The brute-force nearest neighbor search technique does take considerably longer time (O(n 2 ) vs O(n log n)) to find the nearest neighbors. However, since generating the ground truth density maps is a singlepass preliminary operation in crowd counting and density estimation, speed is a less of a priority. Since, the Chan-Vese segmentation algorithm only uses the very nearest neighbor head point to determine the boundary of the outside region, the brute-force nearest neighbor search only looks for the very nearest head point. To create the global density map, we employed an exclusive cumulative of the Gaussians which addresses the head mask overlap issue. To maintain the count integrity, density map has been normalized at each iteration. Experimental Results In order to measure the effectiveness of our content-aware crowd density map generator, we have re-trained some of the notable state of the art deep models including Sindagi et al. [25] , Shi et al. [22] , Li et al. [26] and Zhang et al. 
Experimental Results
In order to measure the effectiveness of our content-aware crowd density map generator, we have retrained some notable state-of-the-art deep models, including Sindagi et al. [25], Shi et al. [22], Li et al. [26] and Zhang et al. [27], using the density maps generated by the proposed crowd density map generator. We have used the original implementations of these algorithms provided by the authors on GitHub. All algorithms were trained and tested on both the UCF-CC-50 and ShanghaiTech datasets using the proposed content-aware crowd density map generator as well as the commonly used existing ground truth density map generator. In some cases we were unable to reproduce the performance reported in the original manuscripts; however, as we were consistent with the experiments across both density map generators, the validity and integrity of the comparison are not compromised. Table 1 shows the mean square error (MSE) comparison between the proposed and the existing density map generator on ShanghaiTech dataset parts A and B. It can be observed that using the proposed content-aware density map generator, the MSE consistently decreased across virtually all investigated models. The improvement is more pronounced on the ShanghaiTech part A dataset, which exhibits more challenging and dynamic crowd scenarios. The results suggest that the proposed method delivers a better depiction of the ground truth density maps. Table 2 compares the MSE and mean absolute error (MAE) between the proposed and the existing density map generator on the extremely challenging UCF-CC-50 dataset. Similar to the results on the ShanghaiTech dataset, there is a notable improvement in both the MSE and MAE metrics. Figure 2 compares the density maps generated using the existing approach based on the k-d tree space partitioning technique and the proposed content-aware crowd density map generator. It can be observed that in highly dense crowds, the proposed method generates more granular density maps with less overlap between neighboring Gaussians. The proposed method uses a combination of pixel intensity and nearest neighbor information to adjust the size of the Gaussian for each head. Figure 2 shows that this technique significantly improves the integrity of the density map relative to the input image. Figure 2: From top to bottom: sample images from the ShanghaiTech dataset, density maps generated using the existing method, and density maps generated using the proposed method.

Conclusion
Creating an accurate model for crowd counting and density estimation demands large and highly reliable ground truth data in the first place. However, the existing crowd counting and density estimation benchmark datasets are not only limited in terms of size, but also lacking in terms of annotation methodology. This study attempted to address this issue through a content-aware technique which employed a combination of the Chan-Vese segmentation algorithm, a two-dimensional Gaussian filter and brute-force nearest neighbor search to generate the ground truth density maps. Experimental results show that by replacing the commonly practiced ground truth density map generators with the proposed content-aware method, the existing state-of-the-art crowd counting models can achieve a higher level of count and localization accuracy.
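As a side note on the evaluation protocol above, the count-level metrics can be computed as in the sketch below; note that in the crowd counting literature the reported MSE is conventionally the root of the mean squared count error, which is the convention assumed here.

```python
import numpy as np

def counting_errors(pred_maps, gt_maps):
    """Count-level MAE and (root) MSE over a set of density maps.
    Each density map integrates to the number of people in its image."""
    pred_counts = np.array([m.sum() for m in pred_maps])
    gt_counts = np.array([m.sum() for m in gt_maps])
    mae = np.abs(pred_counts - gt_counts).mean()
    mse = np.sqrt(((pred_counts - gt_counts) ** 2).mean())
    return mae, mse
```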
2,080
1906.07258
2949855468
Precise knowledge about the size of a crowd, its density and flow can provide valuable information for safety and security applications, event planning, architectural design, and consumer behavior analysis. Creating a powerful machine learning model for such applications requires a large, highly accurate and reliable dataset. Unfortunately, the existing crowd counting and density estimation benchmark datasets are not only limited in terms of their size, but also lacking in annotation, which is in general too time-consuming to implement well. This paper attempts to address this very issue through a content-aware technique that uses a combination of the Chan-Vese segmentation algorithm, a two-dimensional Gaussian filter and brute-force nearest neighbor search. The results show that by simply replacing the commonly used density map generators with the proposed method, a higher level of accuracy can be achieved using the existing state-of-the-art models.
In another study, Olmschenk et al. @cite_18 attempt to address the inefficiency of the existing crowd density map labeling scheme for training deep neural networks. This study proposes a labeling scheme based on inverse k-nearest neighbor ( @math ) maps, which do not explicitly represent the crowd density. The authors claim that a single @math map provides information similar to the commonly practiced accumulation of many density maps with different Gaussian spreads.
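Such a map can be sketched as follows; the exact formula is not given in this text, so the averaging over k neighbors and the inversion 1/(1 + d) are assumptions consistent with the cited description.

```python
import numpy as np
from scipy.spatial import cKDTree

def iknn_map(head_points, image_shape, k=1):
    """Sketch of an inverse k-nearest-neighbor (ikNN) label map: for every pixel,
    take the mean distance to the k nearest annotated heads, then invert it so
    values are high at heads and decay smoothly with distance."""
    ys, xs = np.mgrid[0:image_shape[0], 0:image_shape[1]]
    pixels = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    tree = cKDTree(np.asarray(head_points, dtype=float))
    dists, _ = tree.query(pixels, k=k)
    knn_dist = dists if k == 1 else dists.mean(axis=1)
    return (1.0 / (1.0 + knn_dist)).reshape(image_shape)
```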
{ "abstract": [ "Gatherings of thousands to millions of people frequently occur for an enormous variety of events, and automated counting of these high-density crowds is useful for safety, management, and measuring significance of an event. In this work, we show that the regularly accepted labeling scheme of crowd density maps for training deep neural networks is less effective than our alternative inverse k-nearest neighbor (i @math NN) maps, even when used directly in existing state-of-the-art network structures. We also provide a new network architecture MUD-i @math NN, which uses multi-scale upsampling via transposed convolutions to take full advantage of the provided i @math NN labeling. This upsampling combined with the i @math NN maps further improves crowd counting accuracy. Our new network architecture performs favorably in comparison with the state-of-the-art. However, our labeling and upsampling techniques are generally applicable to existing crowd counting architectures." ], "cite_N": [ "@cite_18" ], "mid": [ "2913127348" ] }
Content-aware Density Map for Crowd Counting and Density Estimation
The study of human behavior is a subject of great scientific interest and probably an inexhaustible source of research. One of the most cited and popular research topics in human behavior analysis is the study of crowd features and characteristics. In recent years, crowd analysis has gained a lot of interest mainly due to its wide range of applications such as safety monitoring, disaster management, public space design, and intelligence gathering, especially in congested scenes like arenas, shopping malls, and airports [1,2]. Crowd counting, localization and density estimation are crucial objectives of an automated crowd analysis system. Accurate knowledge of the crowd size, location and density in a public space can provide valuable insight for tasks such as city planning and analyzing consumer shopping patterns, as well as maintaining general crowd safety. Several studies attempt to produce an accurate estimate of the true number of people present in a crowded scene through density estimation. Deep learning has proven superior to classic computer vision and machine learning techniques, which tend to struggle with the complexity of crowd counting and behavior analysis models [3]. Generally, crowd counting and density estimation approaches can be divided into two categories: detection-based methods (specific) and regression-based methods (holistic). Detection-based methods generally assume each person in the crowd can be detected and located individually based on their individual features and characteristics. These approaches are preferable in sparse crowd analysis where crowd occlusion is negligible. Holistic crowd counting and behavior analysis approaches utilize global crowd features and characteristics to estimate crowd size, flow and density. These approaches are preferable in dense crowd analysis, where crowd occlusion is significant; due to the high amount of occlusion, they utilize only heads as the deterministic feature [4]. However, crowd counting and density estimation is not a trivial task. Several key challenges such as severe occlusions, poor illumination, camera perspective and highly dynamic environments further complicate crowd analysis. Moreover, the poor quality of annotated data adds to the complexity of crowd counting and behavior analysis in crowded environments. The existing crowd counting and density estimation benchmark datasets are not only limited in quantity, but also lacking in annotation strategy. In regression-based crowd counting and density estimation approaches, people's heads are the only visible body part in an image; thus, these approaches use heads as the only discriminant feature. Meanwhile, the existing benchmark datasets such as UCF-CC-50 and ShanghaiTech only provide the head centroid pixel instead of masking the entire head region. Hence, the recreation of the ground truth head masks is accomplished through a static two-dimensional Gaussian filter or a dynamic two-dimensional Gaussian based on the k nearest neighbors. The dynamic Gaussian approach based on the proximity of the nearest neighbors mitigates the issue to some extent, but this technique is not content aware and incorporates a significant amount of noise into the ground truth data [5,6]. In this regard, our study attempts to address the limitation of the existing crowd counting and density estimation benchmark datasets through a content-aware annotation technique.
A study by Idrees et al. @cite_0 stems from the observation that crowd counting, density map estimation and localization are closely interrelated and can be decomposed with respect to each other through a composition loss, which can then be used to train a neural network.
{ "abstract": [ "With multiple crowd gatherings of millions of people every year in events ranging from pilgrimages to protests, concerts to marathons, and festivals to funerals; visual crowd analysis is emerging as a new frontier in computer vision. In particular, counting in highly dense crowds is a challenging problem with far-reaching applicability in crowd safety and management, as well as gauging political significance of protests and demonstrations. In this paper, we propose a novel approach that simultaneously solves the problems of counting, density map estimation and localization of people in a given dense crowd image. Our formulation is based on an important observation that the three problems are inherently related to each other making the loss function for optimizing a deep CNN decomposable. Since localization requires high-quality images and annotations, we introduce UCF-QNRF dataset that overcomes the shortcomings of previous datasets, and contains 1.25 million humans manually marked with dot annotations. Finally, we present evaluation measures and comparison with recent deep CNNs, including those developed specifically for crowd counting. Our approach significantly outperforms state-of-the-art on the new dataset, which is the most challenging dataset with the largest number of crowd annotations in the most diverse set of scenes." ], "cite_N": [ "@cite_0" ], "mid": [ "2886443245" ] }
2,080
1906.07251
2950919816
Generating a photorealistic image with an intended human pose is a promising yet challenging research topic for many applications such as smart photo editing, movie making, virtual try-on, and fashion display. In this paper, we present a novel deep generative model to transfer an image of a person from a given pose to a new pose while keeping the fashion item consistent. To formulate the framework, we employ one generator and two discriminators for image synthesis. The generator includes an image encoder, a pose encoder and a decoder. The two encoders provide good representations of the visual and geometrical context, which are utilized by the decoder to generate a photorealistic image. Unlike existing pose-guided image generation models, we exploit two discriminators to guide the synthesis process, where one discriminator differentiates between the generated image and real images (training samples), and the other verifies the consistency of appearance between a target pose and a generated image. We perform end-to-end training of the network to learn the parameters through back-propagation given ground-truth images. The proposed generative model is capable of synthesizing a photorealistic image of a person given a target pose. We demonstrate our results by conducting rigorous experiments on two datasets, both quantitatively and qualitatively.
Recently, image generative modeling has gained a lot of attention from both the scientific community and the fashion industry. Generative Adversarial Networks (GANs) @cite_15 are the most popular generative models for the tasks of image synthesis and image modification. There have been some works @cite_33 @cite_27 that exploit GANs in a conditional setting. In @cite_3 @cite_33 , generative models are developed conditioned upon class labels. Text @cite_28 @cite_32 and images @cite_8 @cite_27 @cite_22 @cite_11 @cite_13 have also been used as conditions to build image generative models.
{ "abstract": [ "This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.", "", "This paper proposes the novel Pose Guided Person Generation Network (PG @math ) that allows to synthesize person images in arbitrary poses, based on an image of that person and a novel pose. Our generation framework PG @math utilizes the pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person with the target pose. The second stage then refines the initial and blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128 @math 64 re-identification images and 256 @math 256 fashion photos show that our model generates high-quality person images with convincing details.", "Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image model- ing, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.", "We present a novel and effective approach for generating new clothing on a wearer through generative adversarial learning. Given an input image of a person and a sentence describing a different outfit, our model \"redresses\" the person as desired, while at the same time keeping the wearer and her his pose unchanged. Generating new outfits with precise regions conforming to a language description while retaining wearer's body structure is a new challenging task. Existing generative adversarial networks are not ideal in ensuring global coherence of structure given both the input photograph and language description as conditions. We address this challenge by decomposing the complex generative process into two conditional stages. In the first stage, we generate a plausible semantic segmentation map that obeys the wearer's pose as a latent spatial arrangement. 
An effective spatial constraint is formulated to guide the generation of this semantic segmentation map. In the second stage, a generative model with a newly proposed compositional mapping layer is used to render the final image with precise regions and textures conditioned on this map. We extended the DeepFashion dataset [8] by collecting sentence descriptions for 79K images. We demonstrate the effectiveness of our approach through both quantitative and qualitative evaluations. A user study is also conducted. The codes and the data are available at this http URL edu.hk projects FashionGAN .", "", "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "We present the first image-based generative model of people in clothing for the full body. We sidestep the commonly used complex graphics rendering pipeline and the need for high-quality 3D scans of dressed people. Instead, we learn generative models from a large image database. The main challenge is to cope with the high variance in human pose, shape and appearance. For this reason, pure image-based approaches have not been considered so far. We show that this challenge can be overcome by splitting the generating process in two parts. First, we learn to generate a semantic segmentation of the body and clothing. Second, we learn a conditional model on the resulting segments that creates realistic images. The full model is differentiable and can be conditioned on pose, shape or color. The result are samples of people in different clothing items and styles. The proposed model can generate entirely new people with realistic clothing. In several experiments we present encouraging results that suggest an entirely data-driven approach to people generation is possible.", "We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic. In this work, we generate 2048 × 1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing adding objects and changing the object category.
Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing." ], "cite_N": [ "@cite_33", "@cite_22", "@cite_8", "@cite_28", "@cite_32", "@cite_3", "@cite_27", "@cite_15", "@cite_13", "@cite_11" ], "mid": [ "2963636093", "", "2962819541", "2949999304", "2757508077", "", "", "2099471712", "2963630103", "2963800363" ] }
Pose Guided Fashion Image Synthesis Using Deep Generative Model
Keywords: Image Synthesis, bidirectional LSTM, Generative Adversarial Networks
Over the past few years, the online fashion industry has been shaped by recent technological innovations such as augmented reality, virtual reality, wearable tech, and connected fitting rooms. In order to attract online shoppers and to deliver a rich and intuitive online experience, retailers strive to provide high-quality and informative pictures of their products. Online shoppers usually expect to see multiple photos of a garment item from different viewpoints, or multiple photos of a fashion model wearing the same garment from different angles or under different poses. In such scenarios, image synthesis techniques can be exploited to enhance the shopping experience for shoppers and to reduce cost for retailers. In computer vision, image generative models [7,27,28,36], which are capable of generating high-quality photorealistic images, have been successfully applied in numerous applications. In this paper, our main objective is to develop an image generative model that transfers a person from their current pose to an intended target pose. The Generative Adversarial Network (GAN) [7] is one of the prominent approaches for image synthesis and has been widely used. For fashion applications, there have been some prior works that utilize generative models in conditional settings. In [18], a reference image is utilized to transfer a person from a given pose to an intended pose. Shape information is incorporated in [5] to aid the image generation process. Unlike these two methods, which use one discriminator for the pose-guided image generation task, we utilize two specific discriminators: one differentiates between real and generated images, and the other enforces the consistency between the generated image and the target pose. For virtual try-on, Han et al. propose the VITON network [9] that virtually dresses a person with a different fashion item. The objectives of VITON and our work differ: VITON allows a user to virtually try on different garments, while our work allows an online retailer to easily generate various display photos. Moreover, online retailers usually provide multiple photos. In such a scenario, it is advantageous to utilize multiple photos as input in order to extract visual-semantic features both for training and for image generation. Unlike most image generation approaches [9,18], we exploit a set of images of the same fashion item, either the garment itself or a fashion model wearing it, from which a meaningful representation is learned. In this paper, we aim to develop a novel generative model to produce photorealistic images of a person with a new pose different from the current one. The proposed framework exploits a bidirectional convolutional LSTM [4,32] network and a U-Net architecture [23] for the image generation process. The LSTM network is utilized to discover common attributes from multiple images by observing the changes in various semantic image features, such as colors, textures, and shapes. The network is also capable of distinguishing background or noise from the variation in semantic features. A U-Net encoder is used to learn a compact representation of appearance. The representations learned from the convolutional LSTM and the U-Net encoder are then exploited to synthesize a new image. Two discriminators are designed and deployed in order to guide the image generation process. We perform end-to-end training of the generator and discriminator networks.
We show both quantitative and qualitative analyses to evaluate the performance of our image generative model on two datasets. Main Contributions. Our major contributions are as follows. • In this paper, we present a novel generative model which employs two encoders (E_I and E_P) and one decoder to generate a new image. The representations learned by the two encoders from multiple images of the same fashion item are compact and meaningful, and can be applied to other tasks such as image search and garment parsing. • The proposed framework exploits two discriminators, where one discriminator enforces the photorealism of the generated images, and the other enhances the consistency between the generated image and the target pose. • Using multiple images (e.g., images of a person wearing the same garment item with different poses) allows the convolutional LSTM network to learn more visual-semantic context that helps guide the image generation process.

PROPOSED MODEL
Given a set of images and pose maps as input, our objective is to generate a photorealistic image of a person with a new pose different from the current one. The proposed framework has two basic components: (a) a generator and (b) discriminators. Fig. 2 demonstrates the overall architecture.

Generator. In this paper, we develop a generator G to produce a photorealistic image of a person with a target pose. Our generator has three parts: (a) the E_I encoder, (b) the E_P encoder, and (c) a decoder. Fig. 2 illustrates how the different components are combined to form the image generator. The generator exploits the visual-semantic context and pose information obtained from the E_I and E_P encoders respectively, which are then fed into a decoder in order to generate a new image.

3.1.1 Image Encoder E_I. The objective of E_I is to learn a semantic representation from a set of images or from a single image. To extract visual features from images, we use the ResNet [10] architecture, which includes several residual blocks. At each layer, the network learns different features, e.g., texture, color, edges, and contours. Next, these features are fed to a bidirectional convolutional LSTM (bC-LSTM). While LSTMs have been used in several recognition tasks to extract sequential information, the main motivation for using the bC-LSTM network in our work is to connect the common features of the same person wearing the same fashion item seen from different viewpoints. The bC-LSTM network observes the transitions of various semantic image features, such as colors, textures, and shapes, from one image to another. As a result, the network can also distinguish background and noise from the variation in features. After training, E_I is able to learn a useful visual-semantic representation of the images. The learned representation, or 'codes', matches concepts of different aspects of fashion items, such as semantic components of the garment (e.g., sleeves, neck, etc.) and certain textural information of the fashion item. We denote this representation as C_I. The representative code C_I learned by E_I will be utilized by the decoder to generate the new image.

3.1.2 Pose Encoder E_P. Fig. 2 also shows the E_P encoder used in our framework. We use a U-Net architecture [23] to encode the pose information. We provide pose feature maps with 3 channels (R, G, B) as input to the network. A human pose estimation method [1] is used to generate the locations of 18 keypoints. We create a pose map by joining the keypoints with straight lines using different colors, as shown in Fig. 2.
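A minimal sketch of this pose map construction is given below; the limb connectivity list is an OpenPose/COCO-style assumption, since the paper does not enumerate the keypoint pairs, and the per-limb colors are arbitrary.

```python
import numpy as np
import cv2

# Assumed OpenPose/COCO-style limb pairs for 18 keypoints (not listed in the paper).
LIMB_PAIRS = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7),
              (1, 8), (8, 9), (9, 10), (1, 11), (11, 12), (12, 13),
              (0, 14), (14, 16), (0, 15), (15, 17)]

def draw_pose_map(keypoints, height, width):
    """Render a 3-channel pose map by joining 18 keypoints with colored lines.
    keypoints: (18, 2) array of (x, y); NaN marks an undetected joint."""
    pose_map = np.zeros((height, width, 3), dtype=np.uint8)
    for idx, (a, b) in enumerate(LIMB_PAIRS):
        pa, pb = keypoints[a], keypoints[b]
        if np.isnan(pa).any() or np.isnan(pb).any():
            continue  # skip limbs whose endpoints were not detected
        color = ((37 * idx) % 256, (97 * idx + 64) % 256, (17 * idx + 128) % 256)
        cv2.line(pose_map,
                 (int(pa[0]), int(pa[1])), (int(pb[0]), int(pb[1])),
                 color, thickness=3)
    return pose_map
```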
The map is then used by the U-Net encoder to aggregate geometrical features. The U-Net encoder includes two 3 × 3 convolutions, each followed by a rectified linear unit (ReLU) and a 2 × 2 max pooling operation. We also double the number of feature channels at each step, as in [23]. Each layer of the U-Net encoder is connected to the later layers of the U-Net decoder by skip connections in order to produce high-level features. Finally, we obtain a representation C_P. In the following section, we discuss how the outputs of the E_I and E_P encoders are further utilized in the decoder network.

Decoder. The primary focus of the decoder is to generate a new image by decoding the representative codes C_I and C_P obtained from the E_I and E_P encoders respectively. The encoded features C_I and C_P are concatenated at the intermediate stage and taken as input to the decoder. Fig. 2 shows the steps of the image synthesis process. For the decoder, we use the convolutional decoder from the U-Net architecture with skip connections. The advantage of using skip connections with the E_P encoder is that they allow the network to align the visual-semantic features with the appearance context learned in the U-Net encoder. We fuse the visual (C_I) and pose (C_P) encoded features computed from E_I and E_P respectively, and the fused feature maps are fed to the U-Net decoder. At each layer of the decoder, we first aggregate the feature maps obtained from the previous layer with the feature maps precomputed at the early stage and chained by a skip connection. Next, we upsample the feature map, followed by a 2 × 2 up-convolution; this operation also halves the number of channels. The up-convolution is followed by a 3 × 3 convolution and a ReLU operation. Finally, we obtain a synthesized image ŷ as the output of the U-Net decoder.

Discriminator. The main objective of the discriminators is to guide the image generation process toward photorealism by comparing synthesized images against genuine ones. During the training of the network, we apply two discriminators: discriminator D_I, which classifies whether an image is real or fake (generated), and discriminator D_P, which estimates whether a pair, e.g., an image of a person and a pose, is consistent. The architectures of D_I and D_P are shown at the bottom right of Fig. 2. Similar to other traditional GAN models, we use the discriminator network D_I to guide the generation of an image; D_I distinguishes between real and fake (generated) images. Sometimes the generated images look 'real' but are not consistent with the provided pose. In this paper, we therefore propose another discriminator, denoted D_P, which aims to distinguish between a generated image-pose pair (ŷ, p) and a real image-pose pair (y, p) by checking the consistency between them. Here, ŷ, y and p represent the generated (fake) image, the real image and the pose map respectively. This discriminator plays a vital role in aligning a person with a target pose; thus, our model can also generate images with complicated poses by enforcing consistency. The exploitation of two discriminators makes our image generation process more robust, consistent and photorealistic.
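A minimal PyTorch sketch of the two discriminators is given below; PatchGAN-style critics are assumed (consistent with the 70 × 70 PatchGAN training procedure mentioned in the implementation details later), and the layer widths and depths are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # stride-2 convolution halving the spatial resolution, PatchGAN-style
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1),
                         nn.InstanceNorm2d(out_ch), nn.LeakyReLU(0.2))

class PatchDiscriminator(nn.Module):
    """D_I scores an image (3 channels); D_P scores an image-pose pair
    (3 + 3 channels concatenated), producing one realism/consistency
    logit per receptive-field patch."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_channels, 64), conv_block(64, 128), conv_block(128, 256),
            nn.Conv2d(256, 1, 4, padding=1))  # patch-level logits
    def forward(self, x):
        return self.net(x)

d_image = PatchDiscriminator(in_channels=3)   # D_I: real vs. generated image
d_pose = PatchDiscriminator(in_channels=6)    # D_P: (image, pose map) consistency

img = torch.randn(1, 3, 256, 256)
pose = torch.randn(1, 3, 256, 256)
score_image = d_image(img)
score_pair = d_pose(torch.cat([img, pose], dim=1))
```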
Training. During the training of the generator, we define the loss function such that the generated image is judged as 'real' and 'consistent' with the corresponding pose by the discriminators. In contrast, the loss functions for the discriminators are chosen to predict the newly generated image as fake or inconsistent with high confidence. We take advantage of adversarial examples to train the whole network simultaneously. After optimization of the parameters, the proposed generator is able to generate photorealistic images similar to the training images, which cannot be distinguished from real images by the two discriminators. Let us denote a set of images that belong to the same person wearing the same fashion garment with different poses as {x_i}_{i=1}^{N}, and let {p_i}_{i=1}^{N} represent the corresponding pose maps, where N is the number of images (for simplicity, we often omit the subscript). The generator G generates a set of images {ŷ_i}_{i=1}^{N} given {x_i}_{i=1}^{N} and {p_i}_{i=1}^{N}. Here, G denotes the combination of the Image Encoder, the Pose Encoder, and the Decoder, and it learns a mapping function G(x_i, p_i) = ŷ_i. Using the ground-truth images, we can write the loss function for the generator as

L_G(G) = \| y - G(x, p) \|_1 + \sum_k \lambda_k \| \Phi_k(y) - \Phi_k(G(x, p)) \|_1    (1)

Our goal is to generate an image ŷ = G(x, p) which resembles the ground truth y. The first term of Eqn. 1 is an L1 loss. Φ_k(·) denotes the feature maps of an image at the k-th layer of a visual perception network; we use the VGG19 [25] network trained on the ImageNet dataset. λ_k is a hyperparameter which represents the importance of the k-th layer in the loss function. The second term in Eqn. 1 measures the perceptual similarity between the input image y and the output image ŷ. We refer to L_G(G) as the reconstruction loss. In order to train the discriminators, we also consider additional poses taken from different fashion items, as shown in Fig. 2. Let us denote these additional poses as p̃. With p̃ as input, the generator G produces new images ŷ′. The D_I discriminator aims to identify generated images as 'fake'. In order to learn the parameters of D_I, we adopt adversarial training as presented in [7]. The loss function can be written as

L_{D_I}(G, D_I) = E[\log D_I(y)] + E[1 - \log D_I(G(x, \tilde{p}))]    (2)

Similarly, the D_P discriminator distinguishes between real and fake pairs by checking the consistency between a given image and pose. The loss function for D_P can be written as

L_{D_P}(G, D_P) = E[\log D_P(\tilde{y}, \tilde{p})] + E[1 - \log D_P(G(x, \tilde{p}), \tilde{p})]    (3)

where ỹ and p̃ represent an image sample different from the input image and the corresponding pose map from the training set, respectively. We formulate our full objective as

G^\star, D_I^\star, D_P^\star = \arg \min_G \max_{D_I, D_P} L_G + \alpha L_{D_I} + \beta L_{D_P}    (4)

where α and β are the weights on the loss functions of the two discriminators.
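A sketch of the reconstruction loss in Eqn. 1 is shown below, assuming a recent torchvision VGG19 backbone; the chosen feature layers and the λ_k weights are illustrative assumptions, and inputs are assumed to be ImageNet-normalized.

```python
import torch
import torch.nn as nn
from torchvision import models

class ReconstructionLoss(nn.Module):
    """Eqn. 1 sketch: an L1 pixel term plus a VGG19 perceptual term.
    The layer indices and per-layer weights lambda_k are assumptions."""
    def __init__(self, layers=(3, 8, 17), weights=(1.0, 0.75, 0.5)):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)  # the perception network Phi stays fixed
        self.vgg = vgg
        self.lams = dict(zip(layers, weights))
        self.l1 = nn.L1Loss()

    def forward(self, y, y_hat):
        loss = self.l1(y, y_hat)  # pixel-level L1 term
        feat_y, feat_hat = y, y_hat
        last = max(self.lams)
        for i, layer in enumerate(self.vgg):
            feat_y, feat_hat = layer(feat_y), layer(feat_hat)
            if i in self.lams:  # perceptual term at selected VGG layers
                loss = loss + self.lams[i] * self.l1(feat_y, feat_hat)
            if i >= last:
                break
        return loss
```

The adversarial terms L_{D_I} and L_{D_P} of Eqns. 2-4 would then be added on top using standard GAN losses against the two discriminators.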
For a fair comparison, we follow PG² [18] for splitting the training and testing sets.

Implementation Details
Our U-Net encoder and decoder follow the network architecture presented in [38]. The network contains two stride-2 convolution layers, several residual blocks, and two fractionally-strided convolutions with stride 1/2. Each layer of the image encoder contains only convolutional residual blocks. For the DeepFashion dataset, we use 6 residual blocks. In order to train the two discriminators D_I and D_P, we adopt the training procedure of PatchGAN [11]. The discriminator uses 70 × 70 patches and averages all scores by sliding across the image to determine the final output. This allows us to capture high-frequency structures. To optimize the network parameters, we use the Adam optimizer [12] with β_1 = 0.5 and β_2 = 0.999. We use a batch size of 1, with an initial learning rate of 1e-4, decayed by a factor of 0.5 every 50 epochs. Here, one batch corresponds to one SKU, which includes between 2 and 5 images of a fashion item. For data augmentation, we randomly crop images, flip them left-right, and randomly rotate them to enlarge the training set.

Quantitative Results
In order to evaluate our proposed model, we consider two metrics to measure the quality of image synthesis: Structural Similarity (SSIM) [30] and the Inception Score (IS) [24]. Table 1 shows the quantitative results on the DeepFashion and Market-1501 datasets.

Table 1: Quantitative results (SSIM and IS) on the DeepFashion and Market-1501 datasets.

                        DeepFashion       Market-1501
Methods                 SSIM     IS       SSIM     IS
Real Data               1.000    3.415    1.000    3.678
pix2pix [11]            0.646    2.640    0.166    2.289
PG² (G1 + D) [18]       0.761    3.091    0.283    3.490
PG² [18]                0.762    3.090    0.253    3.460
Variational U-Net [5]   0

Next, we compare against some baseline and state-of-the-art methods.

Impact of Two Discriminators. Unlike most pose-guided image generation methods [5, 18], we take advantage of adversarial training with two discriminators, D_I and D_P. In order to analyze the effect of these two discriminators, we remove one discriminator at a time and evaluate the performance. If we remove D_I from the network, the loss function L_{D_I} in Eqn. 4 does not have any impact; in other words, D_I makes no contribution to the objective in Eqn. 4. We denote this model as G + D_P. To verify the effectiveness of the two discriminators, we pick the DeepFashion dataset to run the framework with different configurations. Furthermore, we provide the results of our proposed model with two discriminators on the Market-1501 dataset, as shown in Table 1. As can be seen in Table 1, after the removal of D_I, both the SSIM and IS scores drop significantly, by 6.11% and 9.18% respectively, compared with (G + D_I + D_P) - S. Since D_I distinguishes whether an image is real or generated, the G + D_P model cannot generate photorealistic images with high SSIM and IS scores. Similarly, we refer to the removal of the D_P discriminator from the proposed architecture as G + D_I. D_P helps the model generate photorealistic images of a person in the target pose by comparing real image-pose pairs against generated image-pose pairs. From Table 1, we can see that G + D_I achieves 3.059 and 0.715 in IS and SSIM scores, respectively. We observe a large drop (about 4.9%) in SSIM score compared to the proposed model (G + D_I + D_P) - S. The SSIM score can be improved by exploiting the D_P discriminator, as shown in Table 1.
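The optimizer and schedule above map directly onto standard PyTorch utilities. A hedged sketch follows; the placeholder modules, crop size, and rotation range are assumptions, since the paper reports only the Adam settings and the decay rule.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR
import torchvision.transforms as T

# Placeholder networks standing in for the generator and the two discriminators.
G = torch.nn.Conv2d(3, 3, 3, padding=1)
D_I = torch.nn.Conv2d(3, 1, 3, padding=1)
D_P = torch.nn.Conv2d(6, 1, 3, padding=1)

# Adam with beta1 = 0.5, beta2 = 0.999 and an initial learning rate of 1e-4.
opt_g = Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.999))
opt_d = Adam(list(D_I.parameters()) + list(D_P.parameters()),
             lr=1e-4, betas=(0.5, 0.999))

# Decay the learning rate by a factor of 0.5 every 50 epochs.
sched_g = StepLR(opt_g, step_size=50, gamma=0.5)
sched_d = StepLR(opt_d, step_size=50, gamma=0.5)

# Augmentation as described: random crop, left-right flip, small rotation.
# The crop size and rotation range are assumed; the paper gives no values.
augment = T.Compose([
    T.RandomCrop(224),
    T.RandomHorizontalFlip(),
    T.RandomRotation(degrees=10),
])
```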
Effect of Using Multiple Images. In our proposed architecture, we exploit multiple photos of the same fashion item to extract visual-semantic features. During the training process, we allow the bi-directional convolutional LSTM (bC-LSTM) network to learn common attributes by observing the transitions between multiple images. These attributes, or visual-semantic features, are utilized to generate photorealistic images. The proposed model is also capable of taking a single image as input. Table 1 shows its SSIM and IS scores on DeepFashion and Market-1501. The Multi-Image model outperforms the Single-Image model by a large margin (2.87%) on the DeepFashion and Market-1501 datasets in terms of IS score. The Multi-Image model also achieves a better SSIM score than the Single-Image model on the Market-1501 dataset. For the DeepFashion dataset, both models achieve similar SSIM scores. From Table 1, we can conclude that the bC-LSTM in the generator learns visual-semantic contextual details by exploiting multiple images as input.

4.3.3 Comparison against the State-of-the-Art. In this section, we compare our proposed model against other state-of-the-art deep generative models. We choose some recent works, PG² [18], PG² (G1 + D) [18], pix2pix [11], and the variational U-Net [5], to evaluate the performance of our proposed model. From Table 1, we can see that the proposed method achieves results comparable to other existing works in terms of SSIM score. To measure the quality of the image generation process, we also compare the proposed approach with other state-of-the-art models using the IS score, as shown in Table 1. The proposed model outperforms PG² [18], PG² (G1 + D) [18], pix2pix [11], and the variational U-Net [5] by a large margin in IS score on the DeepFashion and Market-1501 datasets. The Market-1501 [37] dataset has images with varied backgrounds, which makes the target image very difficult to predict since the input carries no information about the background of the target image. Our model is able to generate photorealistic images with high SSIM and IS scores on the Market-1501 dataset, as presented in Table 1. Our model achieves state-of-the-art performance in terms of Inception Score, which indicates that it not only generates realistic images but also outputs a high diversity of images, with a lower probability of mode collapse. As for SSIM, we achieve an improvement over [18] and results comparable with [5].

Qualitative Analysis
Given an image or multiple images of a fashion item along with a target pose, our proposed model is able to transfer a person from the current pose to the intended pose. In Fig. 3, we can see the high resemblance between the synthesized images (4th and 5th columns) and the target ground-truth images (6th column). Furthermore, the proposed model is also able to predict reasonable face details, such as the mouth, eyes, and nose of a person, as illustrated in Fig. 3.

CONCLUSION
In this paper, we present a novel generative model to produce photorealistic images of a person transferred to a target pose. We utilize a convolutional LSTM and the U-Net architecture to develop the generator, in which we 1) exploit multiple images of the same person in order to learn semantic visual context from the convolutional LSTM network; 2) apply a U-Net encoder to learn the appearance/geometrical information; and 3) use a U-Net decoder to generate an image by exploiting the visual and appearance context. In order to better guide the image generation process, we apply two discriminators specifically designed for image authenticity and pose consistency.
Our experimental results show that the proposed model produces high-quality images under both qualitative and quantitative measures. As a future direction, we will explore the use of visual and appearance context for human parsing.
3,544
1906.07251
2950919816
Generating a photorealistic image with an intended human pose is a promising yet challenging research topic for many applications such as smart photo editing, movie making, virtual try-on, and fashion display. In this paper, we present a novel deep generative model to transfer an image of a person from a given pose to a new pose while keeping the fashion item consistent. In order to formulate the framework, we employ one generator and two discriminators for image synthesis. The generator includes an image encoder, a pose encoder, and a decoder. The two encoders provide good representations of the visual and geometrical context, which are utilized by the decoder in order to generate a photorealistic image. Unlike existing pose-guided image generation models, we exploit two discriminators to guide the synthesis process: one discriminator differentiates between generated images and real images (training samples), and the other verifies the consistency of appearance between a target pose and a generated image. We perform end-to-end training of the network to learn the parameters through back-propagation given ground-truth images. The proposed generative model is capable of synthesizing a photorealistic image of a person given a target pose. We demonstrate our results by conducting rigorous experiments on two datasets, both quantitatively and qualitatively.
Image Synthesis in Fashion. Image generative models have also been applied in fashion technology @cite_41 @cite_32 @cite_8 . An image-based virtual try-on network has been proposed in @cite_41 , where the generative model transfers a desired clothing item onto the corresponding region of a person using a coarse-to-fine strategy. A novel approach is presented in @cite_32 for generating new clothing on a wearer through generative adversarial learning by utilizing textual information. In @cite_2 , a conditional U-Net has been used to generate images guided by shape information and conditioned on the output of a variational autoencoder for appearance. In @cite_8 , the authors present a generative model conditioned upon pose to manipulate a person in an image into an arbitrary pose. @cite_20 studies a task similar to ours, but instead transfers poses from a video to a target person in a frame-by-frame manner.
{ "abstract": [ "This paper proposes the novel Pose Guided Person Generation Network (PG @math ) that allows to synthesize person images in arbitrary poses, based on an image of that person and a novel pose. Our generation framework PG @math utilizes the pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person with the target pose. The second stage then refines the initial and blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128 @math 64 re-identification images and 256 @math 256 fashion photos show that our model generates high-quality person images with convincing details.", "We present an image-based VIirtual Try-On Network (VITON) without using 3D information in any form, which seamlessly transfers a desired clothing item onto the corresponding region of a person using a coarse-to-fine strategy. Conditioned upon a new clothing-agnostic yet descriptive person representation, our framework first generates a coarse synthesized image with the target clothing item overlaid on that same person in the same pose. We further enhance the initial blurry clothing area with a refinement network. The network is trained to learn how much detail to utilize from the target clothing item, and where to apply to the person in order to synthesize a photo-realistic image in which the target item deforms naturally with clear visual patterns. Experiments on our newly collected Zalando dataset demonstrate its promise in the image-based virtual try-on task over state-of-the-art generative models.", "We present a novel and effective approach for generating new clothing on a wearer through generative adversarial learning. Given an input image of a person and a sentence describing a different outfit, our model \"redresses\" the person as desired, while at the same time keeping the wearer and her his pose unchanged. Generating new outfits with precise regions conforming to a language description while retaining wearer's body structure is a new challenging task. Existing generative adversarial networks are not ideal in ensuring global coherence of structure given both the input photograph and language description as conditions. We address this challenge by decomposing the complex generative process into two conditional stages. In the first stage, we generate a plausible semantic segmentation map that obeys the wearer's pose as a latent spatial arrangement. An effective spatial constraint is formulated to guide the generation of this semantic segmentation map. In the second stage, a generative model with a newly proposed compositional mapping layer is used to render the final image with precise regions and textures conditioned on this map. We extended the DeepFashion dataset [8] by collecting sentence descriptions for 79K images. We demonstrate the effectiveness of our approach through both quantitative and qualitative evaluations. A user study is also conducted. The codes and the data are available at this http URL edu.hk projects FashionGAN .", "Deep generative models have demonstrated great performance in image synthesis. However, results deteriorate in case of spatial deformations, since they generate images of objects directly, rather than modeling the intricate interplay of their inherent shape and appearance. 
We present a conditional U-Net [30] for shape-guided image generation, conditioned on the output of a variational autoencoder for appearance. The approach is trained end-to-end on images, without requiring samples of the same object with varying pose or appearance. Experiments show that the model enables conditional image generation and transfer. Therefore, either shape or appearance can be retained from a query image, while freely altering the other. Moreover, appearance can be sampled due to its stochastic latent representation, while preserving shape. In quantitative and qualitative experiments on COCO [20], DeepFashion [21, 23], shoes [43], Market-1501 [47] and handbags [49] the approach demonstrates significant improvements over the state-of-the-art.", "This paper presents a simple method for \"do as I do\" motion transfer: given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We approach this problem as video-to-video translation using pose as an intermediate representation. To transfer the motion, we extract poses from the source subject and apply the learned pose-to-appearance mapping to generate the target subject. We predict two consecutive frames for temporally coherent video results and introduce a separate pipeline for realistic face synthesis. Although our method is quite simple, it produces surprisingly compelling results (see video). This motivates us to also provide a forensics tool for reliable synthetic content detection, which is able to distinguish videos synthesized by our system from real data. In addition, we release a first-of-its-kind open-source dataset of videos that can be legally used for training and motion transfer." ], "cite_N": [ "@cite_8", "@cite_41", "@cite_32", "@cite_2", "@cite_20" ], "mid": [ "2962819541", "2769242856", "2757508077", "2962963674", "2888164449" ] }
Pose Guided Fashion Image Synthesis Using Deep Generative Model. Keywords: Image Synthesis, bidirectional LSTM, Generative Adversarial Networks
Over the past few years, the online fashion industry has been shaped by recent technological innovations such as augmented reality, virtual reality, wearable tech, and connected fitting rooms. In order to attract online shoppers and to deliver a rich and intuitive online experience, retailers strive to provide high-quality and informative pictures of their products. Online shoppers usually expect to see multiple photos of a garment item from different viewpoints, or multiple photos of a fashion model wearing the same garment from different angles or under different poses. In such scenarios, image synthesis techniques can be exploited to enhance the shopping experience for shoppers and to reduce cost for retailers. In computer vision, image generative models [7, 27, 28, 36], which are capable of generating high-quality photorealistic images, have been successfully applied in numerous applications. In this paper, our main objective is to develop an image generative model that transfers a person from the current pose to an intended target pose. The Generative Adversarial Network (GAN) [7] is one of the prominent approaches for image synthesis and has been widely used. For fashion applications, there have been some prior works that utilize generative models in conditional settings. In [18], a reference image is utilized to transfer a person from a given pose to an intended pose. Shape information is incorporated in [5] to aid the image generation process. Unlike these two methods, which use one discriminator for the pose-guided image generation task, we utilize two specific discriminators: one discriminator differentiates between real and generated images, and the other enhances the consistency between the generated image and the target pose. For virtual try-on, Han et al. propose the VITON network [9], which virtually dresses a person with a different fashion item. The objectives of VITON and our work differ: VITON allows a user to virtually try on different garments, while our work allows an online retailer to easily generate various display photos. Moreover, online retailers usually provide multiple photos. In such a scenario, it is advantageous to utilize multiple photos as input in order to extract visual-semantic features, both for training and for image generation. Unlike most image generation approaches [9, 18], we exploit a set of images of the same fashion item, either the garment itself or a fashion model wearing the garment, from which a meaningful representation is learned. In this paper, we aim to develop a novel generative model to produce photorealistic images of a person in a new pose different from the current one. The proposed framework exploits a bi-directional convolutional LSTM [4, 32] network and the U-Net architecture [23] for the image generation process. The LSTM network is utilized to discover the common attributes from multiple images by observing the changes in various semantic image features, such as colors, textures, and shapes. The network is also capable of distinguishing background or noise from the variation in semantic features. A U-Net encoder is used to learn a compact representation of appearance. The representations learned from the convolutional LSTM and the U-Net encoder are then exploited to synthesize a new image. Two discriminators are designed and deployed in order to guide the image generation process. We perform end-to-end training of the generator and discriminator networks.
We show both quantitative and qualitative analyses to evaluate the performance of our image generative model on two datasets.

Main Contributions. Our major contributions are as follows. • In this paper, we present a novel generative model which employs two encoders (E_I and E_P) and one decoder to generate a new image. The representations learned by the two encoders from multiple images of the same fashion item are compact and meaningful, and can be applied to other tasks such as image search and garment parsing. • The proposed framework exploits two discriminators, where one discriminator enforces the photorealism of the generated images and the other enhances the consistency between the generated image and the target pose. • Using multiple images (e.g., images of a person wearing the same garment in different poses) allows the convolutional LSTM network to learn more visual-semantic context, which helps guide the image generation process.

PROPOSED MODEL
Given a set of images and pose maps as input, our objective is to generate a photorealistic image of a person in a new pose different from the current one. The proposed framework has two basic components: (a) a generator and (b) two discriminators. Fig. 2 demonstrates the overall architecture.

Generator
In this paper, we develop a generator G to produce a photorealistic image of a person in a target pose. Our generator has three parts: (a) the E_I encoder, (b) the E_P encoder, and (c) a decoder. Fig. 2 illustrates how the different components are combined to form an image generator. The generator exploits the visual-semantic context and pose information obtained from the E_I and E_P encoders, respectively, which are then fed into a decoder in order to generate a new image.

3.1.1 Image Encoder E_I. The objective of E_I is to learn a semantic representation from a set of images or from a single image. To extract visual features from the images, we use the ResNet [10] architecture, which includes several residual blocks. At each layer, the network learns different features, e.g., texture, color, edges, and contours. Next, these features are fed to a bidirectional convolutional LSTM (bC-LSTM). While LSTMs have been used in several recognition tasks to extract sequential information, the main motivation for using the bC-LSTM network in our work is to connect the common features of the same person wearing the same fashion item at different viewpoints. The bC-LSTM network observes the transition of various semantic image features, such as colors, textures, and shapes, from one image to another. As a result, the network can also distinguish background and noise from the variation in features. After training, E_I is able to learn a useful visual-semantic representation of the images. The learned representation, or 'codes', matches concepts of different aspects of fashion items, such as semantic components of the garment (e.g., sleeves, neck, etc.) and certain textural information of the fashion item. We denote the representation as C_I. The representative code C_I learned by E_I will be utilized by the decoder to generate a new image.

3.1.2 Pose Encoder E_P. Fig. 2 also shows the E_P encoder used in our framework. We use a U-Net architecture [23] to encode the pose information. We provide pose feature maps with 3 channels (R, G, B) as input to the network. A human pose estimation method [1] is used to generate the locations of 18 keypoints. We create a pose map by joining the keypoints with straight lines in different colors, as shown in Fig. 2.
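The pose-map construction just described can be sketched with OpenCV as below. The limb connectivity and colors are assumptions in the spirit of the 18-keypoint skeleton of [1]; the paper does not list its exact drawing scheme.

```python
import numpy as np
import cv2

# A reduced, assumed subset of OpenPose-style limb connections; the full
# skeleton in [1] has 18 keypoints. Colors are arbitrary illustrative choices.
LIMBS = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7),
         (1, 8), (8, 9), (9, 10), (1, 11), (11, 12), (12, 13)]

def draw_pose_map(keypoints, height=256, width=256):
    """Render an RGB pose map by joining keypoints with colored straight lines.
    `keypoints` is a list of (x, y) pixel coordinates or None if undetected."""
    pose_map = np.zeros((height, width, 3), dtype=np.uint8)
    for idx, (a, b) in enumerate(LIMBS):
        if keypoints[a] is None or keypoints[b] is None:
            continue
        # A distinct color per limb so the encoder can tell body parts apart.
        color = tuple(int(c) for c in cv2.applyColorMap(
            np.uint8([[idx * 255 // len(LIMBS)]]), cv2.COLORMAP_HSV)[0, 0])
        cv2.line(pose_map, keypoints[a], keypoints[b], color, thickness=3)
    return pose_map

if __name__ == "__main__":
    kps = [(128, 30), (128, 60), (100, 60), (90, 100), (85, 140), (156, 60),
           (166, 100), (171, 140), (112, 130), (110, 180), (108, 230),
           (144, 130), (146, 180), (148, 230)]
    cv2.imwrite("pose_map.png", draw_pose_map(kps))
```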
The map will then be used by the U-Net encoder to aggregate geometrical features. The U-Net encoder includes two 3 × 3 convolutions. Each convolution is followed by a rectified linear unit (ReLU) and a 2 × 2 max pooling operation. We also double the number of feature channels at each stage, as in [23]. Each layer of the U-Net encoder is connected to the corresponding later layer of the U-Net decoder by a skip connection in order to produce high-level features. Finally, we obtain a representation C_P. In the following section, we will discuss how the outputs of the E_I and E_P encoders are further utilized in the decoder network.

Decoder. The primary focus of the decoder is to generate a new image by decoding the representative codes C_I and C_P obtained from the E_I and E_P encoders, respectively. The encoded features C_I and C_P are concatenated in the intermediate stage and taken as input to the decoder. Fig. 2 shows the steps of the image synthesis process. For the decoder, we use the convolutional decoder of the U-Net architecture with skip connections. The advantage of using the skip connections with the E_P encoder is that it allows the network to align the visual-semantic features with the appearance context learned in the U-Net encoder. We fuse the visual features C_I and the pose features C_P computed from E_I and E_P, respectively. The fused feature maps are fed to the U-Net decoder. At each layer of the decoder, we first aggregate the feature maps obtained from the previous layer and the feature maps precomputed at the early stage, chained by a skip connection. Next, we upsample the feature map with a 2 × 2 up-convolution, which also halves the number of channels. The up-convolution is followed by a 3 × 3 convolution and a ReLU operation. Finally, we obtain a synthesized image ŷ as the output of the U-Net decoder.

Discriminator
The main objective of the discriminator is to guide the image generation process toward photorealism by comparing synthesized images against genuine ones. During the training of the network, we apply two discriminators: discriminator D_I, which classifies whether an image is real or fake (generated), and discriminator D_P, which estimates whether a pair, e.g., an image of a person and a pose, is consistent. The architectures of D_I and D_P are shown at the bottom right of Fig. 2. Similar to other traditional GAN models, we use a discriminator network D_I to guide the generation of an image. The D_I discriminator distinguishes between a real image and a fake (generated) image. Sometimes a generated image looks 'real' but is not consistent with the provided pose. In this paper, we propose another discriminator, denoted D_P, which aims to distinguish between a generated image-pose pair (ŷ, p) and a real image-pose pair (y, p) by checking the consistency between them. Here, ŷ, y, and p denote the generated (fake) image, the real image, and the pose map, respectively. This discriminator plays a vital role in aligning a person with a target pose. Thus, by enforcing consistency, our model can also generate images with complicated poses. Exploiting two discriminators makes our image generation process more robust, consistent, and photorealistic.

Training
During the training of the generator, we define the loss function so that the generated image is judged as 'real' and 'consistent' with the corresponding pose by the discriminators. In contrast, the loss functions for the discriminators are chosen to predict the newly generated image as fake or inconsistent with high confidence.
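One plausible realization of the pair discriminator D_P described above is to stack the image and its pose map along the channel axis and apply a small PatchGAN-style convolutional classifier. The layer widths below are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PairDiscriminator(nn.Module):
    """D_P: judges whether an (image, pose map) pair is consistent.
    The image (3 ch) and pose map (3 ch) are concatenated channel-wise."""
    def __init__(self):
        super().__init__()
        layers, ch = [], 6
        for out_ch in (64, 128, 256, 512):   # assumed widths
            layers += [nn.Conv2d(ch, out_ch, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch = out_ch
        layers += [nn.Conv2d(ch, 1, 4, padding=1)]  # patch-wise real/fake map
        self.net = nn.Sequential(*layers)

    def forward(self, image, pose_map):
        return self.net(torch.cat([image, pose_map], dim=1))

if __name__ == "__main__":
    d_p = PairDiscriminator()
    logits = d_p(torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256))
    print(logits.shape)  # one score per patch, averaged at evaluation time
```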
We take advantage of adversarial training to optimize the whole network simultaneously. After the parameters are optimized, the proposed generator is able to generate photorealistic images similar to the training images, images that cannot be distinguished from real ones by the two discriminators. Let {x_i}_{i=1}^N denote a set of images of the same person wearing the same fashion garment in different poses, and let {p_i}_{i=1}^N denote the corresponding pose maps (for simplicity, we often omit the subscript), where N is the number of images. The generator G generates a set of images {ŷ_i}_{i=1}^N given {x_i}_{i=1}^N and {p_i}_{i=1}^N. Here, G denotes the combination of the image encoder, the pose encoder, and the decoder. The generator G learns a mapping function G(x_i, p_i) = ŷ_i. Using the ground-truth images, we can write the loss function for the generator as

L_G(G) = ||y − G(x, p)||_1 + Σ_k λ_k ||Φ_k(y) − Φ_k(G(x, p))||_1    (1)

Our goal is to generate an image ŷ = G(x, p) which resembles the ground truth y. The first term of Eqn. 1 is an L1 loss. Φ_k(·) denotes the feature maps of an image at the k-th layer of a visual perception network; we use a VGG19 [25] network trained on the ImageNet dataset. λ_k is a hyperparameter representing the importance of the k-th layer in the loss function. The second term in Eqn. 1 measures the perceptual similarity between the ground-truth image y and the output image ŷ. We refer to L_G(G) as the reconstruction loss. In order to train the discriminators, we also consider additional poses taken from a different fashion item, as shown in Fig. 2. Let us denote these additional poses as p̃. With p̃ as input, the generator G produces new images ŷ′. The D_I discriminator aims to identify generated images as 'fake'. In order to learn the parameters of D_I, we adopt adversarial training as presented in [7]. The loss function can be written as

L_{D_I}(G, D_I) = E[log D_I(y)] + E[log(1 − D_I(G(x, p̃)))]    (2)

Similarly, the D_P discriminator distinguishes between real and fake by checking the consistency between a given image and pose pair. The loss function for D_P can be written as

L_{D_P}(G, D_P) = E[log D_P(ỹ, p̃)] + E[log(1 − D_P(G(x, p̃), p̃))]    (3)

where ỹ and p̃ denote an image sample different from the input image and the corresponding pose map from the training set, respectively. We formulate our full objective as

(G*, D_I*, D_P*) = arg min_G max_{D_I, D_P} L_G + α L_{D_I} + β L_{D_P}    (4)

where α and β are the weights on the loss functions of the two discriminators.

EXPERIMENTAL RESULTS
In this section, we demonstrate our experimental results for generating photorealistic images of a person guided by a target pose. We evaluate the proposed network on two datasets: DeepFashion [16] and Market-1501 [37]. We show both qualitative and quantitative results and compare our approach against recent state-of-the-art pose-guided image generation methods.

Dataset
In our experiments, the DeepFashion [16] and Market-1501 [37] datasets are used for evaluation. We use the In-shop Clothes Retrieval benchmark of the DeepFashion dataset. DeepFashion includes multiple images of a person in different poses and contains 52,712 in-shop clothes images. We use the same training and testing split as presented in [18]. The resolution of each image is 256 × 256. We also demonstrate our experiments on the Market-1501 dataset. This dataset is very challenging due to its variety of poses, illumination, and backgrounds. It has 32,668 images of 1,501 persons at 128 × 64 resolution, captured from six different viewpoints.
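Translated into code, Eqns. 2 and 3 are standard binary cross-entropy objectives over discriminator outputs. A minimal sketch, assuming the discriminators return pre-sigmoid logits:

```python
import torch
import torch.nn.functional as F

def d_loss(real_logits, fake_logits):
    """Eqns. 2-3: E[log D(real)] + E[log(1 - D(fake))], written as BCE.
    `real_logits`/`fake_logits` are discriminator outputs before sigmoid."""
    real = F.binary_cross_entropy_with_logits(
        real_logits, torch.ones_like(real_logits))
    fake = F.binary_cross_entropy_with_logits(
        fake_logits, torch.zeros_like(fake_logits))
    return real + fake

def g_adv_loss(fake_logits):
    """The generator's adversarial term: fool the discriminator into 'real'."""
    return F.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))
```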
For a fair comparison, we follow PG² [18] for splitting the training and testing sets.

Implementation Details
Our U-Net encoder and decoder follow the network architecture presented in [38]. The network contains two stride-2 convolution layers, several residual blocks, and two fractionally-strided convolutions with stride 1/2. Each layer of the image encoder contains only convolutional residual blocks. For the DeepFashion dataset, we use 6 residual blocks. In order to train the two discriminators D_I and D_P, we adopt the training procedure of PatchGAN [11]. The discriminator uses 70 × 70 patches and averages all scores by sliding across the image to determine the final output. This allows us to capture high-frequency structures. To optimize the network parameters, we use the Adam optimizer [12] with β_1 = 0.5 and β_2 = 0.999. We use a batch size of 1, with an initial learning rate of 1e-4, decayed by a factor of 0.5 every 50 epochs. Here, one batch corresponds to one SKU, which includes between 2 and 5 images of a fashion item. For data augmentation, we randomly crop images, flip them left-right, and randomly rotate them to enlarge the training set.

Quantitative Results
In order to evaluate our proposed model, we consider two metrics to measure the quality of image synthesis: Structural Similarity (SSIM) [30] and the Inception Score (IS) [24]. Table 1 shows the quantitative results on the DeepFashion and Market-1501 datasets.

Table 1: Quantitative results (SSIM and IS) on the DeepFashion and Market-1501 datasets.

                        DeepFashion       Market-1501
Methods                 SSIM     IS       SSIM     IS
Real Data               1.000    3.415    1.000    3.678
pix2pix [11]            0.646    2.640    0.166    2.289
PG² (G1 + D) [18]       0.761    3.091    0.283    3.490
PG² [18]                0.762    3.090    0.253    3.460
Variational U-Net [5]   0

Next, we compare against some baseline and state-of-the-art methods.

Impact of Two Discriminators. Unlike most pose-guided image generation methods [5, 18], we take advantage of adversarial training with two discriminators, D_I and D_P. In order to analyze the effect of these two discriminators, we remove one discriminator at a time and evaluate the performance. If we remove D_I from the network, the loss function L_{D_I} in Eqn. 4 does not have any impact; in other words, D_I makes no contribution to the objective in Eqn. 4. We denote this model as G + D_P. To verify the effectiveness of the two discriminators, we pick the DeepFashion dataset to run the framework with different configurations. Furthermore, we provide the results of our proposed model with two discriminators on the Market-1501 dataset, as shown in Table 1. As can be seen in Table 1, after the removal of D_I, both the SSIM and IS scores drop significantly, by 6.11% and 9.18% respectively, compared with (G + D_I + D_P) - S. Since D_I distinguishes whether an image is real or generated, the G + D_P model cannot generate photorealistic images with high SSIM and IS scores. Similarly, we refer to the removal of the D_P discriminator from the proposed architecture as G + D_I. D_P helps the model generate photorealistic images of a person in the target pose by comparing real image-pose pairs against generated image-pose pairs. From Table 1, we can see that G + D_I achieves 3.059 and 0.715 in IS and SSIM scores, respectively. We observe a large drop (about 4.9%) in SSIM score compared to the proposed model (G + D_I + D_P) - S. The SSIM score can be improved by exploiting the D_P discriminator, as shown in Table 1.
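For reference, the SSIM metric used above can be computed with scikit-image (version 0.19 or later for the channel_axis argument); the images below are random stand-ins.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Stand-ins for a ground-truth image and a synthesized image (H x W x 3, uint8).
rng = np.random.default_rng(0)
y = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
y_hat = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

# SSIM over the RGB channels; identical images score 1.0, as in the Real Data row.
score = structural_similarity(y, y_hat, channel_axis=2)
print(f"SSIM = {score:.3f}")
```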
Effect of Using Multiple Images. In our proposed architecture, we exploit multiple photos of the same fashion item to extract visual-semantic features. During the training process, we allow the bi-directional convolutional LSTM (bC-LSTM) network to learn common attributes by observing the transitions between multiple images. These attributes, or visual-semantic features, are utilized to generate photorealistic images. The proposed model is also capable of taking a single image as input. Table 1 shows its SSIM and IS scores on DeepFashion and Market-1501. The Multi-Image model outperforms the Single-Image model by a large margin (2.87%) on the DeepFashion and Market-1501 datasets in terms of IS score. The Multi-Image model also achieves a better SSIM score than the Single-Image model on the Market-1501 dataset. For the DeepFashion dataset, both models achieve similar SSIM scores. From Table 1, we can conclude that the bC-LSTM in the generator learns visual-semantic contextual details by exploiting multiple images as input.

4.3.3 Comparison against the State-of-the-Art. In this section, we compare our proposed model against other state-of-the-art deep generative models. We choose some recent works, PG² [18], PG² (G1 + D) [18], pix2pix [11], and the variational U-Net [5], to evaluate the performance of our proposed model. From Table 1, we can see that the proposed method achieves results comparable to other existing works in terms of SSIM score. To measure the quality of the image generation process, we also compare the proposed approach with other state-of-the-art models using the IS score, as shown in Table 1. The proposed model outperforms PG² [18], PG² (G1 + D) [18], pix2pix [11], and the variational U-Net [5] by a large margin in IS score on the DeepFashion and Market-1501 datasets. The Market-1501 [37] dataset has images with varied backgrounds, which makes the target image very difficult to predict since the input carries no information about the background of the target image. Our model is able to generate photorealistic images with high SSIM and IS scores on the Market-1501 dataset, as presented in Table 1. Our model achieves state-of-the-art performance in terms of Inception Score, which indicates that it not only generates realistic images but also outputs a high diversity of images, with a lower probability of mode collapse. As for SSIM, we achieve an improvement over [18] and results comparable with [5].

Qualitative Analysis
Given an image or multiple images of a fashion item along with a target pose, our proposed model is able to transfer a person from the current pose to the intended pose. In Fig. 3, we can see the high resemblance between the synthesized images (4th and 5th columns) and the target ground-truth images (6th column). Furthermore, the proposed model is also able to predict reasonable face details, such as the mouth, eyes, and nose of a person, as illustrated in Fig. 3.

CONCLUSION
In this paper, we present a novel generative model to produce photorealistic images of a person transferred to a target pose. We utilize a convolutional LSTM and the U-Net architecture to develop the generator, in which we 1) exploit multiple images of the same person in order to learn semantic visual context from the convolutional LSTM network; 2) apply a U-Net encoder to learn the appearance/geometrical information; and 3) use a U-Net decoder to generate an image by exploiting the visual and appearance context. In order to better guide the image generation process, we apply two discriminators specifically designed for image authenticity and pose consistency.
Our experimental results show that the proposed model produces high-quality images under both qualitative and quantitative measures. As a future direction, we will explore the use of visual and appearance context for human parsing.
3,544
1906.07251
2950919816
Generating a photorealistic image with an intended human pose is a promising yet challenging research topic for many applications such as smart photo editing, movie making, virtual try-on, and fashion display. In this paper, we present a novel deep generative model to transfer an image of a person from a given pose to a new pose while keeping the fashion item consistent. In order to formulate the framework, we employ one generator and two discriminators for image synthesis. The generator includes an image encoder, a pose encoder, and a decoder. The two encoders provide good representations of the visual and geometrical context, which are utilized by the decoder in order to generate a photorealistic image. Unlike existing pose-guided image generation models, we exploit two discriminators to guide the synthesis process: one discriminator differentiates between generated images and real images (training samples), and the other verifies the consistency of appearance between a target pose and a generated image. We perform end-to-end training of the network to learn the parameters through back-propagation given ground-truth images. The proposed generative model is capable of synthesizing a photorealistic image of a person given a target pose. We demonstrate our results by conducting rigorous experiments on two datasets, both quantitatively and qualitatively.
Even though we aim at solving a problem similar to @cite_8 , our work differs from @cite_8 in terms of architectural choices in both the generator and the discriminator. Unlike most image generative approaches, we exploit multiple images of a fashion item as input, which are usually available on e-commerce shopping platforms.
{ "abstract": [ "This paper proposes the novel Pose Guided Person Generation Network (PG @math ) that allows to synthesize person images in arbitrary poses, based on an image of that person and a novel pose. Our generation framework PG @math utilizes the pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person with the target pose. The second stage then refines the initial and blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128 @math 64 re-identification images and 256 @math 256 fashion photos show that our model generates high-quality person images with convincing details." ], "cite_N": [ "@cite_8" ], "mid": [ "2962819541" ] }
Pose Guided Fashion Image Synthesis Using Deep Generative Model. Keywords: Image Synthesis, bidirectional LSTM, Generative Adversarial Networks
Over the past few years, the online fashion industry has been shaped by recent technological innovations such as augmented reality, virtual reality, wearable tech, and connected fitting rooms. In order to attract online shoppers and to deliver a rich and intuitive online experience, retailers strive to provide high-quality and informative pictures of their products. Online shoppers usually expect to see multiple photos of a garment item from different viewpoints, or multiple photos of a fashion model wearing the same garment from different angles or under different poses. In such scenarios, image synthesis techniques can be exploited to enhance the shopping experience for shoppers and to reduce cost for retailers. In computer vision, image generative models [7, 27, 28, 36], which are capable of generating high-quality photorealistic images, have been successfully applied in numerous applications. In this paper, our main objective is to develop an image generative model that transfers a person from the current pose to an intended target pose. The Generative Adversarial Network (GAN) [7] is one of the prominent approaches for image synthesis and has been widely used. For fashion applications, there have been some prior works that utilize generative models in conditional settings. In [18], a reference image is utilized to transfer a person from a given pose to an intended pose. Shape information is incorporated in [5] to aid the image generation process. Unlike these two methods, which use one discriminator for the pose-guided image generation task, we utilize two specific discriminators: one discriminator differentiates between real and generated images, and the other enhances the consistency between the generated image and the target pose. For virtual try-on, Han et al. propose the VITON network [9], which virtually dresses a person with a different fashion item. The objectives of VITON and our work differ: VITON allows a user to virtually try on different garments, while our work allows an online retailer to easily generate various display photos. Moreover, online retailers usually provide multiple photos. In such a scenario, it is advantageous to utilize multiple photos as input in order to extract visual-semantic features, both for training and for image generation. Unlike most image generation approaches [9, 18], we exploit a set of images of the same fashion item, either the garment itself or a fashion model wearing the garment, from which a meaningful representation is learned. In this paper, we aim to develop a novel generative model to produce photorealistic images of a person in a new pose different from the current one. The proposed framework exploits a bi-directional convolutional LSTM [4, 32] network and the U-Net architecture [23] for the image generation process. The LSTM network is utilized to discover the common attributes from multiple images by observing the changes in various semantic image features, such as colors, textures, and shapes. The network is also capable of distinguishing background or noise from the variation in semantic features. A U-Net encoder is used to learn a compact representation of appearance. The representations learned from the convolutional LSTM and the U-Net encoder are then exploited to synthesize a new image. Two discriminators are designed and deployed in order to guide the image generation process. We perform end-to-end training of the generator and discriminator networks.
We show both quantitative and qualitative analyses to evaluate the performance of our image generative model on two datasets.

Main Contributions. Our major contributions are as follows. • In this paper, we present a novel generative model which employs two encoders (E_I and E_P) and one decoder to generate a new image. The representations learned by the two encoders from multiple images of the same fashion item are compact and meaningful, and can be applied to other tasks such as image search and garment parsing. • The proposed framework exploits two discriminators, where one discriminator enforces the photorealism of the generated images and the other enhances the consistency between the generated image and the target pose. • Using multiple images (e.g., images of a person wearing the same garment in different poses) allows the convolutional LSTM network to learn more visual-semantic context, which helps guide the image generation process.

PROPOSED MODEL
Given a set of images and pose maps as input, our objective is to generate a photorealistic image of a person in a new pose different from the current one. The proposed framework has two basic components: (a) a generator and (b) two discriminators. Fig. 2 demonstrates the overall architecture.

Generator
In this paper, we develop a generator G to produce a photorealistic image of a person in a target pose. Our generator has three parts: (a) the E_I encoder, (b) the E_P encoder, and (c) a decoder. Fig. 2 illustrates how the different components are combined to form an image generator. The generator exploits the visual-semantic context and pose information obtained from the E_I and E_P encoders, respectively, which are then fed into a decoder in order to generate a new image.

3.1.1 Image Encoder E_I. The objective of E_I is to learn a semantic representation from a set of images or from a single image. To extract visual features from the images, we use the ResNet [10] architecture, which includes several residual blocks. At each layer, the network learns different features, e.g., texture, color, edges, and contours. Next, these features are fed to a bidirectional convolutional LSTM (bC-LSTM). While LSTMs have been used in several recognition tasks to extract sequential information, the main motivation for using the bC-LSTM network in our work is to connect the common features of the same person wearing the same fashion item at different viewpoints. The bC-LSTM network observes the transition of various semantic image features, such as colors, textures, and shapes, from one image to another. As a result, the network can also distinguish background and noise from the variation in features. After training, E_I is able to learn a useful visual-semantic representation of the images. The learned representation, or 'codes', matches concepts of different aspects of fashion items, such as semantic components of the garment (e.g., sleeves, neck, etc.) and certain textural information of the fashion item. We denote the representation as C_I. The representative code C_I learned by E_I will be utilized by the decoder to generate a new image.

3.1.2 Pose Encoder E_P. Fig. 2 also shows the E_P encoder used in our framework. We use a U-Net architecture [23] to encode the pose information. We provide pose feature maps with 3 channels (R, G, B) as input to the network. A human pose estimation method [1] is used to generate the locations of 18 keypoints. We create a pose map by joining the keypoints with straight lines in different colors, as shown in Fig. 2.
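Returning briefly to the bC-LSTM of Sec. 3.1.1 before the pose map's use in the encoder, the sketch below shows a minimal convolutional LSTM cell and a bidirectional pass over a sequence of feature maps. It is an illustrative reconstruction, not the authors' implementation; the hidden sizes and the fusion into C_I by concatenation are assumptions.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A minimal convolutional LSTM cell: all four gates come from one
    convolution over the concatenated input and hidden state."""
    def __init__(self, in_ch, hidden_ch, kernel_size=3):
        super().__init__()
        self.hidden_ch = hidden_ch
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

def bidirectional_convlstm(features, cell_fw, cell_bw):
    """Run a forward and a backward ConvLSTM over a list of feature maps and
    concatenate the two final hidden states, standing in for the code C_I."""
    b, _, hgt, wdt = features[0].shape
    h_fw = features[0].new_zeros(b, cell_fw.hidden_ch, hgt, wdt)
    c_fw = torch.zeros_like(h_fw)
    for x in features:
        h_fw, c_fw = cell_fw(x, (h_fw, c_fw))
    h_bw = features[0].new_zeros(b, cell_bw.hidden_ch, hgt, wdt)
    c_bw = torch.zeros_like(h_bw)
    for x in reversed(features):
        h_bw, c_bw = cell_bw(x, (h_bw, c_bw))
    return torch.cat([h_fw, h_bw], dim=1)

if __name__ == "__main__":
    cell_fw, cell_bw = ConvLSTMCell(64, 32), ConvLSTMCell(64, 32)
    seq = [torch.randn(1, 64, 32, 32) for _ in range(4)]
    print(bidirectional_convlstm(seq, cell_fw, cell_bw).shape)  # (1, 64, 32, 32)
```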
The map will then be used by the U-Net encoder to aggregate geometrical features. The U-Net encoder includes two 3 × 3 convolutions. Each convolution is followed by a rectified linear unit (ReLU) and a 2 × 2 max pooling operation. We also double the number of feature channels at each stage, as in [23]. Each layer of the U-Net encoder is connected to the corresponding later layer of the U-Net decoder by a skip connection in order to produce high-level features. Finally, we obtain a representation C_P. In the following section, we will discuss how the outputs of the E_I and E_P encoders are further utilized in the decoder network.

Decoder. The primary focus of the decoder is to generate a new image by decoding the representative codes C_I and C_P obtained from the E_I and E_P encoders, respectively. The encoded features C_I and C_P are concatenated in the intermediate stage and taken as input to the decoder. Fig. 2 shows the steps of the image synthesis process. For the decoder, we use the convolutional decoder of the U-Net architecture with skip connections. The advantage of using the skip connections with the E_P encoder is that it allows the network to align the visual-semantic features with the appearance context learned in the U-Net encoder. We fuse the visual features C_I and the pose features C_P computed from E_I and E_P, respectively. The fused feature maps are fed to the U-Net decoder. At each layer of the decoder, we first aggregate the feature maps obtained from the previous layer and the feature maps precomputed at the early stage, chained by a skip connection. Next, we upsample the feature map with a 2 × 2 up-convolution, which also halves the number of channels. The up-convolution is followed by a 3 × 3 convolution and a ReLU operation. Finally, we obtain a synthesized image ŷ as the output of the U-Net decoder.

Discriminator
The main objective of the discriminator is to guide the image generation process toward photorealism by comparing synthesized images against genuine ones. During the training of the network, we apply two discriminators: discriminator D_I, which classifies whether an image is real or fake (generated), and discriminator D_P, which estimates whether a pair, e.g., an image of a person and a pose, is consistent. The architectures of D_I and D_P are shown at the bottom right of Fig. 2. Similar to other traditional GAN models, we use a discriminator network D_I to guide the generation of an image. The D_I discriminator distinguishes between a real image and a fake (generated) image. Sometimes a generated image looks 'real' but is not consistent with the provided pose. In this paper, we propose another discriminator, denoted D_P, which aims to distinguish between a generated image-pose pair (ŷ, p) and a real image-pose pair (y, p) by checking the consistency between them. Here, ŷ, y, and p denote the generated (fake) image, the real image, and the pose map, respectively. This discriminator plays a vital role in aligning a person with a target pose. Thus, by enforcing consistency, our model can also generate images with complicated poses. Exploiting two discriminators makes our image generation process more robust, consistent, and photorealistic.

Training
During the training of the generator, we define the loss function so that the generated image is judged as 'real' and 'consistent' with the corresponding pose by the discriminators. In contrast, the loss functions for the discriminators are chosen to predict the newly generated image as fake or inconsistent with high confidence.
We take advantage of adversarial training to optimize the whole network simultaneously. After the parameters are optimized, the proposed generator is able to generate photorealistic images similar to the training images, images that cannot be distinguished from real ones by the two discriminators. Let {x_i}_{i=1}^N denote a set of images of the same person wearing the same fashion garment in different poses, and let {p_i}_{i=1}^N denote the corresponding pose maps (for simplicity, we often omit the subscript), where N is the number of images. The generator G generates a set of images {ŷ_i}_{i=1}^N given {x_i}_{i=1}^N and {p_i}_{i=1}^N. Here, G denotes the combination of the image encoder, the pose encoder, and the decoder. The generator G learns a mapping function G(x_i, p_i) = ŷ_i. Using the ground-truth images, we can write the loss function for the generator as

L_G(G) = ||y − G(x, p)||_1 + Σ_k λ_k ||Φ_k(y) − Φ_k(G(x, p))||_1    (1)

Our goal is to generate an image ŷ = G(x, p) which resembles the ground truth y. The first term of Eqn. 1 is an L1 loss. Φ_k(·) denotes the feature maps of an image at the k-th layer of a visual perception network; we use a VGG19 [25] network trained on the ImageNet dataset. λ_k is a hyperparameter representing the importance of the k-th layer in the loss function. The second term in Eqn. 1 measures the perceptual similarity between the ground-truth image y and the output image ŷ. We refer to L_G(G) as the reconstruction loss. In order to train the discriminators, we also consider additional poses taken from a different fashion item, as shown in Fig. 2. Let us denote these additional poses as p̃. With p̃ as input, the generator G produces new images ŷ′. The D_I discriminator aims to identify generated images as 'fake'. In order to learn the parameters of D_I, we adopt adversarial training as presented in [7]. The loss function can be written as

L_{D_I}(G, D_I) = E[log D_I(y)] + E[log(1 − D_I(G(x, p̃)))]    (2)

Similarly, the D_P discriminator distinguishes between real and fake by checking the consistency between a given image and pose pair. The loss function for D_P can be written as

L_{D_P}(G, D_P) = E[log D_P(ỹ, p̃)] + E[log(1 − D_P(G(x, p̃), p̃))]    (3)

where ỹ and p̃ denote an image sample different from the input image and the corresponding pose map from the training set, respectively. We formulate our full objective as

(G*, D_I*, D_P*) = arg min_G max_{D_I, D_P} L_G + α L_{D_I} + β L_{D_P}    (4)

where α and β are the weights on the loss functions of the two discriminators.

EXPERIMENTAL RESULTS
In this section, we demonstrate our experimental results for generating photorealistic images of a person guided by a target pose. We evaluate the proposed network on two datasets: DeepFashion [16] and Market-1501 [37]. We show both qualitative and quantitative results and compare our approach against recent state-of-the-art pose-guided image generation methods.

Dataset
In our experiments, the DeepFashion [16] and Market-1501 [37] datasets are used for evaluation. We use the In-shop Clothes Retrieval benchmark of the DeepFashion dataset. DeepFashion includes multiple images of a person in different poses and contains 52,712 in-shop clothes images. We use the same training and testing split as presented in [18]. The resolution of each image is 256 × 256. We also demonstrate our experiments on the Market-1501 dataset. This dataset is very challenging due to its variety of poses, illumination, and backgrounds. It has 32,668 images of 1,501 persons at 128 × 64 resolution, captured from six different viewpoints.
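In practice, the min-max objective of Eqn. 4 is optimized by alternating generator and discriminator updates. The loop below is a schematic sketch with tiny stand-in modules; the weights alpha and beta, the use of only the L1 part of Eqn. 1, and the exact pairing fed to D_P are assumptions, since the paper does not report them.

```python
import torch
import torch.nn.functional as F

# Tiny stand-in modules so the loop runs end to end; in the paper these are
# the full generator (both encoders + decoder) and the two discriminators.
G = torch.nn.Conv2d(6, 3, 3, padding=1)     # input: image and pose map stacked
D_I = torch.nn.Conv2d(3, 1, 4, stride=2)
D_P = torch.nn.Conv2d(6, 1, 4, stride=2)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(list(D_I.parameters()) + list(D_P.parameters()),
                         lr=1e-4, betas=(0.5, 0.999))
alpha, beta = 1.0, 1.0      # assumed values; the paper does not report them
bce = F.binary_cross_entropy_with_logits

for step in range(2):                  # stand-in for the real data loader
    x = torch.rand(1, 3, 64, 64)       # input image
    p = torch.rand(1, 3, 64, 64)       # pose map paired with ground truth y
    y = torch.rand(1, 3, 64, 64)       # ground-truth image for pose p
    p_t = torch.rand(1, 3, 64, 64)     # additional pose map (p tilde, Fig. 2)

    # Discriminator step (Eqns. 2-3): real toward 1, generated toward 0.
    y_hat = G(torch.cat([x, p_t], dim=1)).detach()
    out = D_I(y)
    ones, zeros = torch.ones_like(out), torch.zeros_like(out)
    d_loss = (bce(out, ones) + bce(D_I(y_hat), zeros)
              + bce(D_P(torch.cat([y, p], dim=1)), ones)
              + bce(D_P(torch.cat([y_hat, p_t], dim=1)), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step (Eqn. 4): reconstruction (L1 part of Eqn. 1 only, for
    # brevity) plus the adversarial terms weighted by alpha and beta.
    y_rec = G(torch.cat([x, p], dim=1))
    y_hat = G(torch.cat([x, p_t], dim=1))
    g_loss = (F.l1_loss(y_rec, y)
              + alpha * bce(D_I(y_hat), ones)
              + beta * bce(D_P(torch.cat([y_hat, p_t], dim=1)), ones))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```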
For a fair comparison, we follow PG² [18] for splitting the training and testing sets.

Implementation Details
Our U-Net encoder and decoder follow the network architecture presented in [38]. The network contains two stride-2 convolution layers, several residual blocks, and two fractionally-strided convolutions with stride 1/2. Each layer of the image encoder contains only convolutional residual blocks. For the DeepFashion dataset, we use 6 residual blocks. In order to train the two discriminators D_I and D_P, we adopt the training procedure of PatchGAN [11]. The discriminator uses 70 × 70 patches and averages all scores by sliding across the image to determine the final output. This allows us to capture high-frequency structures. To optimize the network parameters, we use the Adam optimizer [12] with β_1 = 0.5 and β_2 = 0.999. We use a batch size of 1, with an initial learning rate of 1e-4, decayed by a factor of 0.5 every 50 epochs. Here, one batch corresponds to one SKU, which includes between 2 and 5 images of a fashion item. For data augmentation, we randomly crop images, flip them left-right, and randomly rotate them to enlarge the training set.

Quantitative Results
In order to evaluate our proposed model, we consider two metrics to measure the quality of image synthesis: Structural Similarity (SSIM) [30] and the Inception Score (IS) [24]. Table 1 shows the quantitative results on the DeepFashion and Market-1501 datasets.

Table 1: Quantitative results (SSIM and IS) on the DeepFashion and Market-1501 datasets.

                        DeepFashion       Market-1501
Methods                 SSIM     IS       SSIM     IS
Real Data               1.000    3.415    1.000    3.678
pix2pix [11]            0.646    2.640    0.166    2.289
PG² (G1 + D) [18]       0.761    3.091    0.283    3.490
PG² [18]                0.762    3.090    0.253    3.460
Variational U-Net [5]   0

Next, we compare against some baseline and state-of-the-art methods.

Impact of Two Discriminators. Unlike most pose-guided image generation methods [5, 18], we take advantage of adversarial training with two discriminators, D_I and D_P. In order to analyze the effect of these two discriminators, we remove one discriminator at a time and evaluate the performance. If we remove D_I from the network, the loss function L_{D_I} in Eqn. 4 does not have any impact; in other words, D_I makes no contribution to the objective in Eqn. 4. We denote this model as G + D_P. To verify the effectiveness of the two discriminators, we pick the DeepFashion dataset to run the framework with different configurations. Furthermore, we provide the results of our proposed model with two discriminators on the Market-1501 dataset, as shown in Table 1. As can be seen in Table 1, after the removal of D_I, both the SSIM and IS scores drop significantly, by 6.11% and 9.18% respectively, compared with (G + D_I + D_P) - S. Since D_I distinguishes whether an image is real or generated, the G + D_P model cannot generate photorealistic images with high SSIM and IS scores. Similarly, we refer to the removal of the D_P discriminator from the proposed architecture as G + D_I. D_P helps the model generate photorealistic images of a person in the target pose by comparing real image-pose pairs against generated image-pose pairs. From Table 1, we can see that G + D_I achieves 3.059 and 0.715 in IS and SSIM scores, respectively. We observe a large drop (about 4.9%) in SSIM score compared to the proposed model (G + D_I + D_P) - S. The SSIM score can be improved by exploiting the D_P discriminator, as shown in Table 1.
Effect of Using Multiple Images. In our proposed architecture, we exploit multiple photos of the same fashion item to extract visual-semantic features. During the training process, we allow the bi-directional convolutional LSTM (bC-LSTM) network to learn common attributes by observing the transitions between multiple images. These attributes, or visual-semantic features, are utilized to generate photorealistic images. The proposed model is also capable of taking a single image as input. Table 1 shows its SSIM and IS scores on DeepFashion and Market-1501. The Multi-Image model outperforms the Single-Image model by a large margin (2.87%) on the DeepFashion and Market-1501 datasets in terms of IS score. The Multi-Image model also achieves a better SSIM score than the Single-Image model on the Market-1501 dataset. For the DeepFashion dataset, both models achieve similar SSIM scores. From Table 1, we can conclude that the bC-LSTM in the generator learns visual-semantic contextual details by exploiting multiple images as input.

4.3.3 Comparison against the State-of-the-Art. In this section, we compare our proposed model against other state-of-the-art deep generative models. We choose some recent works, PG² [18], PG² (G1 + D) [18], pix2pix [11], and the variational U-Net [5], to evaluate the performance of our proposed model. From Table 1, we can see that the proposed method achieves results comparable to other existing works in terms of SSIM score. To measure the quality of the image generation process, we also compare the proposed approach with other state-of-the-art models using the IS score, as shown in Table 1. The proposed model outperforms PG² [18], PG² (G1 + D) [18], pix2pix [11], and the variational U-Net [5] by a large margin in IS score on the DeepFashion and Market-1501 datasets. The Market-1501 [37] dataset has images with varied backgrounds, which makes the target image very difficult to predict since the input carries no information about the background of the target image. Our model is able to generate photorealistic images with high SSIM and IS scores on the Market-1501 dataset, as presented in Table 1. Our model achieves state-of-the-art performance in terms of Inception Score, which indicates that it not only generates realistic images but also outputs a high diversity of images, with a lower probability of mode collapse. As for SSIM, we achieve an improvement over [18] and results comparable with [5].

Qualitative Analysis
Given an image or multiple images of a fashion item along with a target pose, our proposed model is able to transfer a person from the current pose to the intended pose. In Fig. 3, we can see the high resemblance between the synthesized images (4th and 5th columns) and the target ground-truth images (6th column). Furthermore, the proposed model is also able to predict reasonable face details, such as the mouth, eyes, and nose of a person, as illustrated in Fig. 3.

CONCLUSION
In this paper, we present a novel generative model to produce photorealistic images of a person transferred to a target pose. We utilize a convolutional LSTM and the U-Net architecture to develop the generator, in which we 1) exploit multiple images of the same person in order to learn semantic visual context from the convolutional LSTM network; 2) apply a U-Net encoder to learn the appearance/geometrical information; and 3) use a U-Net decoder to generate an image by exploiting the visual and appearance context. In order to better guide the image generation process, we apply two discriminators specifically designed for image authenticity and pose consistency.
CONCLUSION
In this paper, we present a novel generative model to produce photorealistic images of a person transferred to a target pose. We use a convolutional LSTM and a U-Net architecture to build the generator, in which we 1) exploit multiple images of the same person so that the convolutional LSTM network learns semantic visual context; 2) apply a U-Net encoder to learn appearance/geometrical information; and 3) use a U-Net decoder to generate an image by exploiting the visual and appearance context. To better guide the image generation process, we apply two discriminators specifically designed for image authenticity and pose consistency. Our experimental results show that the proposed model can produce high-quality images by both qualitative and quantitative measures. As a future direction, we will explore the usage of visual and appearance context for human parsing.
3,544
1906.07374
2952165569
Imitation from observation is the framework of learning tasks by observing demonstrated state-only trajectories. Recently, adversarial approaches have achieved significant performance improvements over other methods for imitating complex behaviors. However, these adversarial imitation algorithms often require many demonstration examples and learning iterations to produce a policy that successfully imitates a demonstrator's behavior. This high sample complexity often prohibits these algorithms from being deployed on physical robots. In this paper, we propose an algorithm that addresses the sample-inefficiency problem by utilizing ideas from trajectory-centric reinforcement learning algorithms. We evaluate our algorithm on an imitation task with a physical robot arm and its simulated version in Gazebo, and show improvements in learning rate and efficiency.
Techniques for imitation learning differ in the way they approach the problem. Two popular approaches to imitation learning have been behavioral cloning @cite_11 and inverse reinforcement learning (IRL) @cite_17 @cite_7 . Behavioral cloning views the imitation learning problem as a supervised learning problem that attempts to learn a direct mapping from states to actions. Inverse reinforcement learning, on the other hand, works to find a cost function under which the expert demonstrator is optimal. One approach of this type is guided cost learning @cite_8 , which builds on maximum entropy IRL @cite_1 and the guided policy search algorithm @cite_19 , and achieves impressive results on physical robots. Later, generative adversarial networks were used to imitate policies when both states and actions are available, in a technique called generative adversarial imitation learning (GAIL) @cite_28 . One imitator network attempts to imitate the policy while another attempts to discriminate between the imitation and the provided demonstration data @cite_9 . Several follow-up works have improved upon this approach in different aspects @cite_20 @cite_21 , and recently there have been efforts to address the sample efficiency of this algorithm by proposing approaches for unbiasing rewards and deriving an off-policy formulation of adversarial imitation learning algorithms @cite_4 .
{ "abstract": [ "The ALVINN (Autonomous Land Vehicle In a Neural Network) project addresses the problem of training artificial neural networks in real time to perform difficult perception tasks. ALVINN is a backpropagation network designed to drive the CMU Navlab, a modified Chevy van. This paper describes the training techniques that allow ALVINN to learn in under 5 minutes to autonomously control the Navlab by watching the reactions of a human driver. Using these techniques, ALVINN has been trained to drive in a variety of circumstances including single-lane paved and unpaved roads, and multilane lined and unlined roads, at speeds of up to 20 miles per hour.", "", "", "Reinforcement learning can acquire complex behaviors from high-level specifications. However, defining a cost function that can be optimized effectively and encodes the correct task is challenging in practice. We explore how inverse optimal control (IOC) can be used to learn behaviors from demonstrations, with applications to torque control of high-dimensional robotic systems. Our method addresses two key challenges in inverse optimal control: first, the need for informative features and effective regularization to impose structure on the cost, and second, the difficulty of learning the cost function under unknown dynamics for high-dimensional continuous systems. To address the former challenge, we present an algorithm capable of learning arbitrary nonlinear cost functions, such as neural networks, without meticulous feature engineering. To address the latter challenge, we formulate an efficient sample-based approximation for MaxEnt IOC. We evaluate our method on a series of simulated tasks and real-world robotic manipulation problems, demonstrating substantial improvement over prior methods both in terms of task complexity and sample efficiency.", "", "", "", "Recent research has shown the benefit of framing problems of imitation learning as solutions to Markov Decision Problems. This approach reduces learning to the problem of recovering a utility function that makes the behavior induced by a near-optimal policy closely mimic demonstrated behavior. In this work, we develop a probabilistic approach based on the principle of maximum entropy. Our approach provides a well-defined, globally normalized distribution over decision sequences, while providing the same performance guarantees as existing methods. We develop our technique in the context of modeling real-world navigation and driving behaviors where collected data is inherently noisy and imperfect. Our probabilistic approach enables modeling of route preferences as well as a powerful new approach to inferring destinations and routes based on partial trajectories.", "We present a policy search method that uses iteratively refitted local linear models to optimize trajectory distributions for large, continuous problems. These trajectory distributions can be used within the framework of guided policy search to learn policies with an arbitrary parameterization. Our method fits time-varying linear dynamics models to speed up learning, but does not rely on learning a global model, which can be difficult when the dynamics are complex and discontinuous. We show that this hybrid approach requires many fewer samples than model-free methods, and can handle complex, nonsmooth dynamics that can pose a challenge for model-based techniques. 
We present experiments showing that our method can be used to learn complex neural network policies that successfully execute simulated robotic manipulation tasks in partially observed environments with numerous contact discontinuities and underactuation.", "Reinforcement learning provides a powerful and general framework for decision making and control, but its application in practice is often hindered by the need for extensive feature and reward engineering. Deep reinforcement learning methods can remove the need for explicit engineering of policy or value features, but still require a manually specified reward function. Inverse reinforcement learning holds the promise of automatic reward acquisition, but has proven exceptionally difficult to apply to large, high-dimensional problems with unknown dynamics. In this work, we propose adverserial inverse reinforcement learning (AIRL), a practical and scalable inverse reinforcement learning algorithm based on an adversarial reward learning formulation. We demonstrate that AIRL is able to recover reward functions that are robust to changes in dynamics, enabling us to learn policies even under significant variation in the environment seen during training. Our experiments show that AIRL greatly outperforms prior methods in these transfer settings.", "Objective—To evaluate the pharmacokinetics of a novel commercial formulation of ivermectin after administration to goats. Animals—6 healthy adult goats. Procedure—Ivermectin (200 μg kg) was initially administered IV to each goat, and plasma samples were obtained for 36 days. After a washout period of 3 weeks, each goat received a novel commercial formulation of ivermectin (200 μg kg) by SC injection. Plasma samples were then obtained for 42 days. Drug concentrations were quantified by use of high-performance liquid chromatography with fluorescence detection. Results—Pharmacokinetics of ivermectin after IV administration were best described by a 2-compartment open model; values for main compartmental variables included volume of distribution at a steady state (9.94 L kg), clearance (1.54 L kg d), and area under the plasma concentration-time curve (AUC; 143 [ng•d] mL). Values for the noncompartmental variables included mean residence time (7.37 days), AUC (153 [ng•d] mL), and clearance (1.43 L kg d). After ..." ], "cite_N": [ "@cite_11", "@cite_4", "@cite_7", "@cite_8", "@cite_28", "@cite_9", "@cite_21", "@cite_1", "@cite_19", "@cite_20", "@cite_17" ], "mid": [ "2051228319", "", "", "2963590100", "", "", "", "2098774185", "2121103318", "2766610320", "2061562262" ] }
Sample-efficient Adversarial Imitation Learning from Observation
Teaching new actions to robots through demonstration is one of the most attractive methods for behavior learning. While robots can learn new behaviors using reinforcement learning with a pre-specified reward function (Sutton & Barto, 1998), significant exploration is often required to extract the behavior from the reward. In some cases, denser reward functions can help speed up the exploration process, but designing them requires a certain level of skill and understanding of the reinforcement learning process, and can often result in unexpected behaviors when the reward function doesn't precisely guide the action. Instead, teaching a robot a behavior simply by demonstrating it removes the requirement of explicitly specifying a reward function altogether. Anyone who knows how to perform the task can demonstrate it without understanding the learning process.

While being able to imitate a behavior after observing the states and actions of a demonstrator is useful, there are many situations where the actions of the demonstrator are unknown. Common approaches to learning from demonstration (LfD) require both the states and actions of the demonstrator to be recorded (Argall et al., 2009). In imitation from external observation (IfO) (Liu et al., 2018; Torabi et al., 2019c), on the other hand, just the observable states of the demonstrator are known--no action information is available. Imitating behaviors solely from observable data greatly expands the set of possible demonstrators: behaviors could be learned from in-person human demonstrators or even the vast collection of videos available online.

While imitation from external observation has been studied and performed with some success for two decades (Ijspeert et al., 2001), recent advances in deep neural networks have widened the set of behaviors that can be imitated and the ways that demonstration data can be collected. One way deep learning has been applied to IfO is through generative adversarial networks (Torabi et al., 2018b; Ho & Ermon, 2016; Chen et al., 2016). In this approach--generative adversarial imitation from observation (GAIfO)--one network learns a control policy for imitating the demonstrator while the other learns to discriminate between the demonstrator's behavior and that of the imitator.

While GAIfO advanced the state of the art in imitation from observation, it comes with its own set of challenges. First, in comparison with simpler regressed models, deep networks are notorious for requiring orders of magnitude more training data, and GAIfO is no exception. Second, this algorithm uses model-free reinforcement learning algorithms, which are usually very data-inefficient. Some of the possible benefits of applying IfO break down when a high sample size is required. Therefore, in practice, this algorithm has largely been limited to being studied in simulation. In simulation, many experiences and large demonstration sets can be collected quickly. Physical demonstrations are more costly to perform, and real-time constraints limit the speed at which control policies can be evaluated and thus behavior learned. For imitation from observation to work on a physical robot, a higher degree of sample efficiency is required. Deep reinforcement learning has faced similar obstacles with learning from limited samples, especially in the context of robotic control policies with complex dynamics.
However, trajectory-centric reinforcement learning algorithms have recently been used to guide neural network policy search, an approach which has been shown to be very sample-efficient (Levine & Koltun, 2013; Levine & Abbeel, 2014; Levine et al., 2015). These algorithms achieve this sample efficiency in part by gaining insight into dynamics through the iterative training of linear quadratic regulators (iLQRs) (Tassa et al., 2012) on a set of trajectory controllers.

In this paper, we propose an imitation from observation algorithm, LQR+GAIfO, that takes advantage of both (1) the high performance of adversarial learning algorithms, and (2) the sample efficiency of trajectory-centric reinforcement learning algorithms. We apply the proposed algorithm to a 6-degree-of-freedom robot arm to learn to imitate behaviors from a set of low-level state trajectories. We find that this new method results in successful imitation learning with fewer samples than previous algorithms.

In Section 2 of this paper, we discuss previous work related to this topic. In Section 3, we cover the techniques involved in GAIfO and LQR. Section 4 describes our approach to combining LQR and GAIfO into one functional algorithm. In Section 5, we share our experimental setup and results, and we discuss these results in Section 6. Finally, in Section 7, we summarize and discuss potential future work.

Preliminaries and Overview
In this section, we describe the notation used throughout the paper and the two methods that our proposed algorithm is based on: (1) adversarial imitation from observation, and (2) trajectory-centric reinforcement learning.

Notation
We consider agents acting within the broad framework of Markov decision processes (MDPs). We denote an MDP using the 5-tuple M = {S, A, P, r, γ}, where S is the agent's state space, A is its action space, P(s_{t+1}|s_t, a_t) is a function denoting the probability of the agent transitioning from state s_t to s_{t+1} after taking action a_t, r : S × A → R is a function specifying the immediate reward that the agent receives for taking a specific action in a given state, and γ is a discount factor. In this framework, agent behavior can be specified by a policy, π : S → A, which specifies the action (or distribution over actions) that the agent should use when in a particular state. In reinforcement learning, the goal is to learn a policy π that maximizes the accumulated reward r through interaction with the environment. Imitation learning, however, considers the setting M\r, i.e. the reward function is excluded; instead, the agent has access to some demonstrated trajectories. The problem that we are interested in in this paper is imitation from observation, where these demonstrations only include state trajectories of the expert, τ_E = {s_t}.

Adversarial Imitation from Observation
Generative adversarial imitation from observation (Torabi et al., 2018b) is an algorithm of this type which attempts to learn tasks by bringing the state-transition distribution of the imitator closer to that of the demonstrator. The algorithm works as follows. There is an imitator policy network, π_φ, that is initialized randomly. This policy is then executed in the environment to generate trajectories τ_π, where each trajectory is a sequence of states (s_0, s_1, ..., s_n).
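As a minimal sketch of this rollout step, the code below collects state-only trajectories by executing a policy in a Gym-style environment; the env and policy interfaces here are assumptions made for the example, not part of the paper.

```python
import numpy as np

def collect_state_trajectories(env, policy, n_episodes=5, horizon=100):
    """Execute a policy and record only the visited states, matching the
    state-only trajectories tau_pi used by GAIfO."""
    trajectories = []
    for _ in range(n_episodes):
        s = env.reset()
        states = [s]
        for _ in range(horizon):
            a = policy(s)                        # policy maps state -> action
            s, _reward, done, _info = env.step(a)
            states.append(s)
            if done:
                break
        trajectories.append(np.asarray(states))
    return trajectories
```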
There is also a discriminator network, parameterized by weights θ, that maps input state transitions to a score between 0 and 1, D_θ : S × S → [0, 1]. The discriminator is trained to output values close to zero for data coming from the expert and close to one for data coming from the imitator. To do so, θ is updated by taking an iteration towards solving the following optimization problem:

max_{D_θ ∈ (0,1)^{S×S}}  E_{τ_π}[log(D_θ(s, s'))] + E_{τ_E}[log(1 − D_θ(s, s'))]    (1)

From a reinforcement learning point of view, the discriminator network provides a cost function that could change φ to move the distribution of trajectories created by π_φ towards the distribution of the demonstrated trajectories τ_E. Therefore, following the update to D_θ, the imitator policy π_φ is updated using the technique of Trust Region Policy Optimization (Schulman et al., 2015) under the cost function

log(D_θ(s, s'))    (2)

where D_θ is the newly updated discriminator network. The whole process is repeated until convergence. It is well known that model-free reinforcement learning algorithms (e.g. TRPO) often require a large number of environment interactions; therefore, it is not practical to deploy these types of algorithms on physical robots. Model-based RL algorithms, on the other hand, have shown promising performance in the real world.

Trajectory Centric Reinforcement Learning
Linear quadratic regulators (LQRs) learn control policies under two assumptions (Bemporad et al., 2002):

1. The dynamics of the environment are linear. This means that the transition from a particular state given an action, f(s_t, a_t), can be represented as the product of a matrix F_t with the stacked state-action vector plus a constant vector f_t:

f(s_t, a_t) = F_t [s_t; a_t] + f_t

2. The cost is quadratic, represented by a quadratic term C_t and a linear term c_t:

c(s_t, a_t) = (1/2) [s_t; a_t]^T C_t [s_t; a_t] + [s_t; a_t]^T c_t

The algorithm attempts to solve an optimization problem that returns the actions with the highest return over the course of an episode. Solving this optimization problem results in a linear controller:

a_t = K_t s_t + k_t    (3)

where the K_t's and k_t's are matrices and vectors, computable for each time-step, that are combinations of the F_t's, C_t's, f_t's, and c_t's. In situations where the dynamics are assumed to be close to linear but are not completely known or are non-deterministic, the linear transition function is often replaced by a conditional probability specified under a Gaussian distribution, with the linear dynamics as the mean and a covariance:

p(s_{t+1}|s_t, a_t) = N(F_t [s_t; a_t] + f_t, σ²)

When the covariance is constant (independent of the state and action), the optimal policy is identical to that of the non-stochastic LQR. In non-linear systems where the cost is not quadratic, the techniques of LQR can be used by approximating the dynamics with a first-order Taylor expansion and approximating the cost with a second-order Taylor expansion:

F_t = ∇_{s_t,a_t} f(s_t, a_t),  C_t = ∇²_{s_t,a_t} c(s_t, a_t),  c_t = ∇_{s_t,a_t} c(s_t, a_t)

Iterative linear quadratic regulators (iLQRs) can be used to find optimal controllers under non-linear models by running LQR with the approximated dynamics, then updating the dynamics fit on each iteration (Li & Todorov, 2004). The resulting controller is:

a_t = K_t(s_t − ŝ_t) + k_t + â_t

where ŝ_t and â_t are the states and actions around which the Taylor expansion is computed. LQR assumes that the dynamics of the environment are known.
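To make the LQR recursion concrete, here is a minimal numpy sketch of the finite-horizon backward pass for the fully known, deterministic case. For brevity it omits the linear cost terms (so k_t = 0) and the iLQR relinearization loop, and the double-integrator system at the bottom is only a toy example, not the paper's setup.

```python
import numpy as np

def lqr_backward(A, B, Q, R, horizon):
    """Finite-horizon discrete LQR: returns time-varying gains K_t such that
    the controller a_t = K_t s_t minimizes sum_t s_t^T Q s_t + a_t^T R a_t
    under the linear dynamics s_{t+1} = A s_t + B a_t."""
    P = Q.copy()          # cost-to-go at the final step
    gains = []
    for _ in range(horizon):
        # Optimal gain for this step, given the next step's cost-to-go P.
        K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati recursion for the new cost-to-go.
        P = Q + K.T @ R @ K + (A + B @ K).T @ P @ (A + B @ K)
        gains.append(K)
    return gains[::-1]    # gains ordered t = 0 .. horizon-1

# Toy double-integrator example: state = (position, velocity).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = lqr_backward(A, B, Q=np.eye(2), R=0.1 * np.eye(1), horizon=50)

s = np.array([1.0, 0.0])  # roll the controller forward from an initial state
for K_t in K:
    s = A @ s + B @ (K_t @ s)
```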
Learning dynamics for a given situation involves building a model of f(s_t, a_t) from a set of observed state/action transitions τ = {(s_t, a_t, s_{t+1})}. A simple approach to this model building is to use linear regression to estimate the dynamics, finding matrices X and Y that model the transition as f(s_t, a_t) = X s_t + Y a_t + c, or, in a stochastic environment, p(s_{t+1}|s_t, a_t) = N(X s_t + Y a_t + c, σ²). Modelling dynamics with a Gaussian approximation of the linear regression (often called linear Gaussian models) has the advantage of being very sample-efficient.

To avoid the erroneous pursuit of an incorrect global optimum, a set of local models can be used in place of a global model. The most expressive case is a set of local models with a single model for every time-step. In the linear regression approach, this amounts to fitting new X_t and Y_t for every time-step, yielding time-varying models. Because dynamics are often highly correlated between time-steps, this approach can be refined by using a global model as a prior for a Bayesian linear regression at each time-step. For a better approximation of the local models, it has been shown that linear-Gaussian controllers, p(a_t|s_t) = N(K_t(s_t − ŝ_t) + k_t + â_t, Σ_t), should be used to generate the training data. The covariance depends on the sensitivity of the total cost to the choice of action. Because linear regression can overshoot the optima of nonlinear dynamics, policy adjustment can be bounded so that each iteration's update to the model's transition distribution (or trajectory distribution) is not too large. This can be achieved with a bound on the Kullback-Leibler (KL) divergence--a relative measure of divergence between distributions--between the previous trajectory distribution and the current trajectory distribution.

Proposed Algorithm
In this section, we propose an imitation from observation algorithm, LQR+GAIfO, to learn an imitation policy from state-only demonstrations, τ_E. Our algorithm takes advantage of the high performance of adversarial imitation from observation algorithms and the sample efficiency of trajectory-centric reinforcement learning algorithms. To do so, we build upon the methods described in Section 3. For LQR to be useful in an imitation learning scenario, it can no longer depend on a pre-specified reward function that defines the task. Instead, the trajectory optimization step in LQR should be based on the existing controller's ability to imitate the expert demonstration. To achieve this capability, we train a discriminator network on each iteration and use an approximate version of its loss on the sampled trajectories to optimize the controllers.

Our algorithm begins by initializing the linear Gaussian controller and executing it in the environment to collect state-action trajectories {(s_t, a_t)}. It then randomly initializes a time-varying model p of the trajectory dynamics, specified as p(s_{t+1}|s_t, a_t) = N(F_t [s_t; a_t] + f_t, σ²). Given a set of state-action trajectories {(s_t, a_t)}, F_t, f_t, and σ² are fit to the sample data at each time-step using Bayesian linear regression with a normal-inverse-Wishart prior. For this prior, the entire trajectory sample is fit to a Gaussian mixture model (GMM), which previous research has found to be effective.
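The following numpy sketch shows the per-time-step regression at the core of this dynamics fit. For brevity it uses ordinary least squares rather than the Bayesian regression with the normal-inverse-Wishart/GMM prior described above, and the array layout is an assumption made for the example.

```python
import numpy as np

def fit_time_varying_dynamics(states, actions):
    """states: (N, T+1, ds) array, actions: (N, T, da) array over N rollouts.
    Fits s_{t+1} ~ N(F_t [s_t; a_t] + f_t, Sigma_t) by least squares per
    time-step. (N should exceed ds + da + 1 for a well-posed fit.)"""
    N, T, da = actions.shape
    models = []
    for t in range(T):
        X = np.concatenate([states[:, t], actions[:, t]], axis=1)  # (N, ds+da)
        X1 = np.concatenate([X, np.ones((N, 1))], axis=1)          # bias column
        Y = states[:, t + 1]                                       # (N, ds)
        W, *_ = np.linalg.lstsq(X1, Y, rcond=None)                 # (ds+da+1, ds)
        F_t, f_t = W[:-1].T, W[-1]
        resid = Y - X1 @ W
        Sigma_t = resid.T @ resid / max(N - 1, 1)                  # noise covariance
        models.append((F_t, f_t, Sigma_t))
    return models

# Toy usage: 10 rollouts, horizon 5, 4-dim state, 2-dim action.
rng = np.random.default_rng(0)
models = fit_time_varying_dynamics(rng.normal(size=(10, 6, 4)),
                                   rng.normal(size=(10, 5, 2)))
```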
Following the dynamics model update, a randomly initialized neural network is considered as the discriminator, D_θ, which takes state transitions (s_t, s_{t+1}) as input and outputs a value. Similar to Section 3.2, the goal is to train the discriminator to distinguish between the state transitions coming from the controller and those coming from the demonstrator. However, in order to stabilize learning, our algorithm uses the Wasserstein loss and takes an iteration on the following optimization problem:

min_{D_θ}  E_{τ_{p(a|s)}}[D_θ(s, s')] − E_{τ_E}[D_θ(s, s')]

Algorithm 1 LQR+GAIfO
1: Initialize controller p(a|s)
2: Initialize a neural network discriminator D_θ with random parameters θ
3: Obtain state-only expert demonstration trajectories τ_E = {s_t}
4: while controller improves do
5:   Execute the controller p(a|s) and store the resulting trajectories τ_{p(a|s)} = {(s, a, s')}
6:   Learn the dynamics model p(s'|s, a) over τ
7:   Update D_θ using the loss min_{D_θ} E_{τ_{p(a|s)}}[D_θ(s, s')] − E_{τ_E}[D_θ(s, s')]
8:   Create the composite function C(s_t, a_t) = (D_θ ∘ f_t)(s_t, a_t)
9:   Compute the quadratically approximated cost function by taking the second-order Taylor expansion of C(s_t, a_t):
     c_q(s_t, a_t) = (1/2) [s_t; a_t]^T ∇²_{s,a} C(s_t, a_t) [s_t; a_t] + [s_t; a_t]^T ∇_{s,a} C(s_t, a_t)
10:  Improve the controller p(a|s) by LQR
11: end while

Gradient penalties are also used as regularization to further stabilize the learning process (Gulrajani et al., 2017). As discussed in Section 3, the discriminator--a function of the state transition (s_t, s_{t+1})--can be used as the cost function for training the controller. However, LQR requires the cost function to be a quadratic function of states and actions. Therefore, the discriminator is first combined with the Gaussian dynamics models to create a composite cost function C(s_t, a_t) = (D_θ ∘ f_t)(s_t, a_t). This composite function is then quadratically approximated by taking its second-order Taylor expansion:

c_q(s_t, a_t) = (1/2) [s_t; a_t]^T ∇²_{s,a} C(s_t, a_t) [s_t; a_t] + [s_t; a_t]^T ∇_{s,a} C(s_t, a_t)

where ∇²_{s,a} and ∇_{s,a} are the Hessian and gradient with respect to the concatenation of the s and a vectors, respectively. Finally, an iteration of LQR uses this cost approximation c_q to optimize the trajectory and form a new linear-Gaussian controller. The step size of this update is bounded by the KL-divergence constraint described in Section 3.
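A hedged PyTorch sketch of this Wasserstein critic update with a gradient penalty is shown below; the MLP critic, batch shapes, and penalty weight are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

def critic_loss(disc, imitator_pairs, expert_pairs, gp_weight=10.0):
    """Wasserstein loss over (s, s') transition pairs: imitator transitions
    scored high, expert transitions low, plus a gradient penalty on random
    interpolates (Gulrajani et al., 2017)."""
    loss = disc(imitator_pairs).mean() - disc(expert_pairs).mean()
    eps = torch.rand(imitator_pairs.size(0), 1)
    mix = (eps * imitator_pairs + (1 - eps) * expert_pairs).requires_grad_(True)
    grad, = torch.autograd.grad(disc(mix).sum(), mix, create_graph=True)
    penalty = ((grad.norm(2, dim=1) - 1.0) ** 2).mean()
    return loss + gp_weight * penalty

# Toy usage with a small MLP critic over concatenated (s, s') vectors.
state_dim = 6
disc = nn.Sequential(nn.Linear(2 * state_dim, 64), nn.Tanh(), nn.Linear(64, 1))
imitator = torch.randn(32, 2 * state_dim)
expert = torch.randn(32, 2 * state_dim)
critic_loss(disc, imitator, expert).backward()
```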
Experiments
To evaluate the performance of our algorithm, we studied its ability to imitate a reaching task on a robot arm--both on a physical arm and in a simulator.

Setup
For a testing platform, we used a Universal Robots UR5, a 6-degree-of-freedom robotic arm (Figure 2). The demonstrated task is a reaching task in which the arm begins in a consistent, retracted position and reaches towards a point in Cartesian space. When the end effector (the gripper at the end of the arm) reaches this point, the arm stops moving. This task is shown in Figure 4. The expert is trained by iterating between iLQR and dynamics learning with a specified reward function until convergence. This policy is then executed and recorded a number of times to create the demonstration data. We modified the software to record the state of the arm and the action chosen at every time-step of the trajectory execution. For the initial experiments, the state consisted of:

For testing in simulation, we used the Gazebo simulation environment (Figure 3) with a model of the UR5. Each trial lasts for 100 timesteps (10 seconds) and ends regardless of whether the end effector has reached the goal state. At each iteration, the policy being evaluated is executed five times to collect five sample trajectories. The policy is also evaluated once per iteration without noise, and the performance according to the cost function is logged. The cost function takes into account the distance from the end effector to the target position, weighted linearly as the trial progresses. With d_{t_i} the distance from the goal position to the end effector at time-step t_i, the cost of a trajectory with n time-steps is calculated as:

C(τ) = d_{t_n} + Σ_{i=0}^{n} (i/n) d_{t_i}

The same cost function is used both to train the expert through reinforcement learning and to evaluate the performance of the imitator. In this sense, the task of imitation learning can be seen as recovering the cost function that guided the expert (Torabi et al., 2018b). For a more complex task or a more specific cost function than the one studied, it is possible that the imitator could recover the task behavior correctly while not performing well in the eyes of the cost function, or vice versa. However, for the arm reaching task, the cost function is simple and directly related to the task, making it appropriate as an evaluator of imitation performance. For the imitation tasks, this cost function was used to evaluate each trajectory sample at a given iteration. The results were normalized on a range from zero to one, with zero mapping to the average cost of a random policy and one mapping to the cost achieved by the expert. A policy that performs as well as the expert would achieve a score of one on this normalized performance scale.
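The cost and normalization just described translate directly into code; the sketch below is a literal transcription of the two formulas, with the linspace distances serving only as toy data.

```python
import numpy as np

def trajectory_cost(dists):
    """C(tau) = d_{t_n} + sum_i (i/n) d_{t_i}, where dists[i] is the
    end-effector-to-goal distance at time-step i of an n-step trial."""
    n = len(dists) - 1
    return dists[-1] + sum((i / n) * d for i, d in enumerate(dists))

def normalized_performance(cost, random_cost, expert_cost):
    """Map a trajectory cost onto [0, 1]: 0 = average random policy, 1 = expert."""
    return (random_cost - cost) / (random_cost - expert_cost)

# Toy example: distances shrinking over a 100-step trial.
dists = np.linspace(1.0, 0.05, 101)
print(trajectory_cost(dists))
```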
We compare our algorithm with GAIfO, which is instrumented to interface with the arm control and simulation platform. Trials for GAIfO also involved taking five samples per iteration, in the same way as ours. The GAIfO policy network was updated using Proximal Policy Optimization (PPO).

Experimental Design
We conducted three main experiments to evaluate our algorithm. In the first experiment, the learning rate is compared to learning under GAIfO. In the second experiment, we test our algorithm's ability to generalize to unseen target positions. Finally, we compare the performance of the algorithm in the simulated environment to that on the physical arm.

COMPARISON TO GAIFO
To compare the learning rate of our algorithm to that of GAIfO, we ran trials for both algorithms for 100 iterations and tracked the policy's performance at each iteration using the cost function described in Section 5.1. This process was repeated for both algorithms (n=30 for ours, n=55 for GAIfO) to collect average performance data. The algorithms' performance, along with the mean standard error, is plotted in Figure 5. The performance of our algorithm quickly exceeds GAIfO and peaks around iteration 30.

GENERALIZATION
To test our algorithm's ability to generalize a policy to a point that is not in the expert demonstration data, we collected expert demonstration trajectories for 8 points on the edge of a square (shown in Figure 7). For each point, we trained the expert and recorded five sample trajectories once the expert converged. Then, after choosing a subset of the points on the square as {τ_E}, we tasked the arm with moving to a point in the center of the square. Because the center point was not in {τ_E}, the control policy was required to generalize the expert trajectories to this unseen point. We varied the number of points included in {τ_E} and tracked the normalized performance of our algorithm over 15 iterations. As shown in Figure 6, while performance was similar in the early iterations, our algorithm generally performed better in later iterations when more points were included in {τ_E}.

PERFORMANCE ON PHYSICAL ARM
Our algorithm was run on both the simulator and the physical arm to examine how closely simulated performance mapped to real-world performance. Over 25 iterations, the policy performance on the physical arm began to surpass the performance of the simulated arm, as shown in Figure 8.

Discussion
Our research began by asking whether a combination of LQR and GAIfO could increase sample efficiency in imitation learning. The comparison of LQR+GAIfO to GAIfO suggests that LQR+GAIfO can indeed produce a policy that is better at imitating a behavior within a limited number of iterations, confirming our hypothesis. The steep initial learning curve of LQR+GAIfO indicates significantly higher sample efficiency compared to GAIfO alone. However, the performance of LQR+GAIfO seems to degrade around iteration 60. Without this performance degradation, LQR+GAIfO would outperform GAIfO past iteration 100. The reason for this degradation may be that in adversarial algorithms, the generator and the discriminator should improve at relatively similar rates; in our algorithm, since the controller's representational complexity is limited, after some number of iterations the controller does not improve as fast as the discriminator. In addition, even without this degradation, the GAIfO approach would eventually surpass the performance of LQR+GAIfO, likely due to the ability of the generator network in GAIfO to produce more complex policies than those that can be represented with linear Gaussian controllers in LQR.

Although most of the ability of a policy to perform a task different from the expert trajectories in GAIfO and GPS results from the complex model used for the policy (a neural network), the linear Gaussian controllers in LQR+GAIfO still have some ability to generalize. As expected, the ability to generalize successfully increases with more demonstration trajectories, as shown in Figure 6. The reason may be that the discriminator learns a general cost function that can be applied to new target points, so that LQR can learn a relatively good controller. Future work integrating the full GPS approach would likely lead to better generalization.

We studied the performance of LQR+GAIfO on the physical arm to validate the tractability of this technique on a real robot and to establish a sense of how directly the performance observed in the simulator would translate to the physical arm. Our results, as seen in Figure 8, show that the policy performance seen in the simulator can be trusted to model policy performance on the real arm. Surprisingly, the performance of LQR+GAIfO on the physical arm exceeds the simulator performance. It is possible that the noise introduced by the physical arm as a result of actuator noise or other physical effects leads to wider exploration and faster policy improvement. If this is the case, it could be possible to achieve similar performance in the simulator by introducing more policy noise.
Conclusion and Future Work
We have found that combining generative adversarial imitation from observation with linear quadratic regulators leads to faster learning of imitation behavior over fewer samples than with GAIfO alone, confirming our hypothesis. While LQR+GAIfO doesn't reach the absolute imitation performance of GAIfO over an extended training period with thousands of samples, achieving adequate imitation performance with limited samples opens the door to imitation research on physical robotic systems, for which imitation learning has posed logistical challenges in the past. While LQR is a powerful technique by itself, a policy based solely on Gaussian controllers has limits in complexity. Work in GPS has already produced a method for combining sample-efficient Gaussian controllers with a deep network model that is trained through the controllers. Using a deep network as part of the policy offers increased performance in the long run and greatly increased generalization ability. Incorporating this deep network policy, driven by importance-weighted samples of the linear Gaussian controllers, is an obvious and promising next step for this work. To validate the LQR+GAIfO technique, we represented the expert trajectories using low-level data such as the Cartesian position of the arm's end effector. GAIfO has had success using higher-level data--such as a visual recording of the demonstrator--as the state in trajectories. Additionally, GPS has been used to learn neural network policies from visual observation. Pursuing imitation learning from visual data alone would greatly widen the situations in which demonstration data could be collected. Adding a convolutional layer to the discriminator so that it can accept visual data is a natural next step for extending this research.
4,407
1906.07374
2952165569
On the other hand, in reinforcement learning, where policies are learned only through environment-provided reward functions, direct policy search in a large state-action space requires numerous samples and can often fall into poor local optima. Guided policy search (GPS) is a method to improve the sample efficiency of direct policy search and to guide learning in a large space away from poor local optima @cite_18 . The basis of GPS is to use trajectory optimization to focus policy learning on high-reward actions.
{ "abstract": [ "Direct policy search can effectively scale to high-dimensional systems, but complex policies with hundreds of parameters often present a challenge for such methods, requiring numerous samples and often falling into poor local optima. We present a guided policy search algorithm that uses trajectory optimization to direct policy learning and avoid poor local optima. We show how differential dynamic programming can be used to generate suitable guiding samples, and describe a regularized importance sampled policy optimization that incorporates these samples into the policy search. We evaluate the method by learning neural network controllers for planar swimming, hopping, and walking, as well as simulated 3D humanoid running." ], "cite_N": [ "@cite_18" ], "mid": [ "2104733512" ] }
Teaching new actions to robot actors through demonstration is one of the most attractive methods for behavior learning. While robots can learn new behaviors using reinforcement learning with a pre-specified reward function (Sutton & Barto, 1998), significant exploration is often required to extract the behavior from the reward. In some cases, denser reward functions can help speed up the exploration process, but designing them requires a certain level of skill and understanding of the reinforcement learning process, and can often result in unexpected behaviors when the reward function doesn't precisely guide the action. Instead, teaching a robot a behavior simply by demonstrating it removes the requirement of explicitly specifying a reward function altogether. Anyone who knows how to perform the task can demonstrate it without understanding the learning process, While being able to imitate a behavior after observing the state and actions of a demonstrator is useful, there are many situations where the actions of the demonstrator are unknown. Common approaches to LfD require both the states and actions of the demonstrator to be recorded (Argall et al., 2009). In imitation from external observation (IfO) (Liu et al., 2018;Torabi et al., 2019c), on the other hand, just the observable states of the demonstrator are known--no action information is available. Imitating behaviors solely from observable data greatly expands the set of possible demonstrators: behaviors could be learned from in-person human demonstrators or even the vast collection of videos available online. While imitation from external observation has been studied and performed with some success for two decades (Ijspeert et al., 2001), recent advances in deep neural networks have widened the set of behaviors that can be imitated and the ways that demonstration data can be collected. One way deep learning has been applied to IfO is through generative adversarial networks (Torabi et al., 2018b;Ho & Ermon, 2016;Chen et al., 2016). In this approach--generative adversarial imitation from observation (GAIfO)--one network learns a control policy for imitating the demonstrator while the other learns to discriminate between the demonstrator's behavior and that of the imitator. While GAIfO advanced the state of the art in imitation from observation, it comes with its own set of challenges. First, in comparison with simpler regressed models, deep networks are notorious for requiring orders of magnitude more training data, and GAIfO is no exception. Second, this algorithm uses model-free reinforcement algorithms which are usually very data inefficient. Some of the possible benefits of the applications of IfO break down when a high sample size is required. Therefore, in practice, this algorithm has been largely limited to being studied in simulation. In simulation, many experiences and large demonstration sets can be collected quickly. Physical demonstrations are more costly to perform, and real-time constraints limit the speed at which control policies can be arXiv:1906.07374v1 [cs. LG] 18 Jun 2019 evaluated and thus behavior learned. For imitation from observation to work on a physical robot, a higher degree of sample efficiency is required. Deep reinforcement learning has faced similar obstacles with learning with limited samples, especially in the context of robotic control policies with complex dynamics. 
However, recently, trajectory centric reinforcement learning algorithms are being used to guide neural network policy search which has been shown that is very sample-efficient (Levine & Koltun, 2013;Levine & Abbeel, 2014;Levine et al., 2015;. These algorithms achieve this sample efficiency in part by gaining insight into dynamics through the iterative training of linear quadratic regulators (iLQR's) (Tassa et al., 2012) on a set of trajectory controllers. In this paper, we propose an imitation from observation algorithm, LQR+GAIfO, that takes advantage of both (1) the high performance of the adversarial learning algorithms, and (2) the sample efficiency of trajectory centric reinforcement learning algorithms. We apply the proposed algorithm to a 6-degree-of-freedom robot arm to learn to imitate behaviors from a set of low-level state trajectories. We find that this new method results in successful imitation learning with fewer samples than the previous algorithms. In Section 2 of this paper, we discuss previous work related to this topic. In Section 3, we cover the techniques involved in GAIfO and LQR. Section 4, describes our approach to combining LQR and GAIfO into one functional algorithm. In Section 5, we share our experimental setup and results, and we discuss results in Section 6. Finally, in Section 7, we summarize and discuss potential future work. Preliminaries and Overview In this section, we describe the notation considered throughout the paper, and the two methods that our proposed algorithm are based on, (1) adversarial imitation from observation, and (2) trajectory centric reinforcement learning. Notation We consider agents acting within the broad framework of Markov decision processes (MDPs). We denote a MDP using the 5-tuple M = {S, A, P, r, γ}, where S is the agent's state space, A is its action space, P (s t+1 |s t , a t ) is a function denoting the probability of the agent transitioning from state s t to s t+1 after taking action a t , r : S × A → R is a function specifying the immediate reward that the agent receives for taking a specific action in a given state, and γ is a discount factor. In this framework, agent behavior can be specified by a policy, π : S → A, which specifies the action (or distribution over actions) that the agent should use when in a particular state. In reinforcement Learning the goal is to learn a policy, π, by maximizing the accumulated reward, r, through interaction with the environment. However, imitation learning considers the setting of M\r, i.e. the reward function is excluded. Instead the agent has access to some demonstrated trajectories. The problem that we are interested in this paper is imitation from observation where these demonstrations only include state-trajectories of the expert τ E = {s t }. Adversarial Imitation from Observation Generative adversarial imitation from observation (Torabi et al., 2018b) is an algorithm of this type in which attempts to learn tasks by bringing the state transition distribution of the imitator closer to that of the demonstrator. The algorithm works as follows. There is an imitator policy network, π φ , that is initialized randomly. This policy is then executed in the environment to generate trajectories τ π where each trajectory is a set of states {(s 0 , s 1 , ..., s n )}. 
There is also a discriminator network parameterized by weights θ and maps input trajectories to a score between 0 and 1: D θ : S × A → [0, 1], The discriminator is trained in a way to output values close to zero for the data coming from the expert and close to one for the data coming from the imitator. To do so, θ is updated by taking an iteration towards solving the following optimization problem. max D θ ∈(0,1) S×S E τπ [log(D θ (s, s ))]+E τ E [log(1−D θ (s, s ))] (1) From a reinforcement learning point of view, the discriminator network provides a cost function that could change φ to move the distribution of trajectories created by π φ towards the distribution of the demonstrated trajectories τ E . Therefore, following the update to D θ , the imitator policy, π φ , is updated using the technique of Trust Region Policy Optimization (Schulman et al., 2015) under the cost function log(D θ (s, s ))(2) where D θ is the newly updated discriminator network. The whole process is repeated until convergence. It is a quite well-known fact that model-free reinforcement learning algorithms (e.g. TRPO) often require a large number of environment interactions. Therefore, it is not practical to deploy these types of algorithms on physical robots. On the other hand, model-based RL algorithms have shown promising performance in the real world . Trajectory Centric Reinforcement Learning Linear quadratic regulators (LQR's) learn control policies under two assumptions (Bemporad et al., 2002): 1. The dynamics of the environment are linear. This means that the transition from a particular state given an action f (s t , a t ) can be represented as the product of the state/action and a matrix F t plus a constant vector f t : f (s t , a t ) = F t s t a t + f t 2. The cost is quadratic. The cost is represented by a quadratic term C t and a linear vector c t : c(s t , a t ) = 1 2 s t a t T C t s t a t + s t a t T c t The algorithm attempts to solve an optimization problem that returns the actions that have the highest return in the course of an episode. Solving this optimization problem, results in a linear controller: a t = K t s t + k t(3) where the K t s and k t s are matrices and vectors which are combinations of F t s, C t s, f t s, and c t s that can be computed for each time-step. In situations where the dynamics are assumed to be close to linear but are not completely known or are non-deterministic, the linear transition function is often replaced by a conditional probability specified under a normal Gaussian distribution, with a mean of the linear dynamics and a covariance: p(s t+1 |s t , a t ) = N (F t s t a t + f t , σ 2 ) When the covariance is constant (independent of the state and action), the optimal policy is identical to the nonstochastic LQR. In non-linear systems where the cost is not quadratic, the techniques of LQR can be used by approximating the dynamics with a first-order Taylor expansion and approximating the cost with a second-order Taylor expansion: F t = ∇ st,at f (s t , a t ), C t = ∇ 2 st,at c(s t , a t ), c t = ∇ st,at c(s t , a t ) Iterative linear quadratic regulators (iLQR's) can be used to find optimal controllers under non-linear models by running LQR with the approximated dynamics, then updating the dynamics fit on each iteration (Li & Todorov, 2004). The resulting controller is: a t = K t (s t −ŝ t ) + k t +â t Whereŝ t andâ t are the states and actions around which the Taylor expansion is computed. LQR assumes that the dynamics of the environment are known. 
Learning dynamics for a given situation involves building a model to define f (s t , a t ) from a set of observed state/action transitions τ = {(s t , a t , s t+1 )}. A simple approach to this model building is to use linear regression to estimate the dynamics, finding some matrices X and Y that model the transition as f (s t , a t ) = Xs t + Y a t + c, or in a stochastic environment, p(s t+1 |s t , a t ) = N (Xs t + Y a t + c, σ 2 ). Modelling dynamics with a Gaussian approximation of the linear regression (often called linear Gaussian models) has the advantage of being very sample-efficient. To avoid the erroneous pursuit of an incorrect global optimal, a set of local models can be used to replace a global model. The most expressive case of local models is a set of models with a single model for every time-step. In the linear regression approach, this amounts to fitting new X t and Y t for every time-step, often called time-varying controllers. Because dynamics are often highly correlated between time-steps, this approach can be refined by using a global model as a prior for a Bayesian linear regression at each time-step. For a better approximation of the local models it is shown that linear-Gaussian controllers, p(a t |s t ) = N (K t (s t −ŝ t ) + k t +â t , Σ t ), should be used for generating the training data . The covariance depends on the sensitivity of the total cost to the choice of action. Because linear regression can overshoot optimals of nonlinear dynamics, policy adjustment can be bounded so that each iteration's update to the model's transition distribution (or trajectory distribution) is not too large. This can be achieved with a bound on the Kullback-Leibler (KL) divergence-a relative measure of divergence between distributions-between the previous trajectory distribution and the current trajectory distribution. Proposed Algorithm In this section, we propose an imitation from observation algorithm, LQR+GAIfO, to learn an imitation policy from state only demonstrations, τ E . Our algorithm takes advantage of the high performance of adversarial imitation from observation algorithms and the sample efficiency of trajectory-centric reinforcement learning algorithms. To do so, we build upon the methods described in Section 3. For LQR to be useful in an imitation learning scenario, it can no longer depend on a pre-specified reward function that defines the task. Instead, the trajectory optimization step in LQR should be based on the existing controller's ability to imitate the expert demonstration. To achieve this capability, we train a discriminator network on each iteration and use an approximate version of its loss on the sampled trajectories to optimize the controllers. Our algorithm begins by initializing the linear Gaussian controller and executing it inside the environment to collect state-action trajectories {(s t , a t )}. Then it randomly initializes a time-varying model p to model the trajectory dynamics. p is specified as p(s t+1 |s t , a t ) = N (F t s t a t + f t , σ 2 ). Given a set of state-action trajectories {s t , a t }, F t , f t , and σ 2 are fit to the sample data at each time-step using Bayesian linear regression with a normal-inverse-Wishart prior. For this prior, it fits the entire trajectory sample to a Gaussian mixture model (GMM), which previous research has found to be effective . 
Following the dynamics model update, a randomly initialized neural network is considered as the discriminator, D θ , which takes state-transitions (s t , s t+1 ) as input and outputs a value. Similar to Section 3.2, The goal is to train the discriminator to distinguish between the state-transitions coming from the controller and the demonstrator. However, in order to stabilize the learning, our algorithm uses Wasserstein loss and takes an iteration on the following optimization problem. min D S×S θ E p(a|s) [D θ (s, s )] − E τ E [D θ (s, s ))] Algorithm 1 LQR+GAIfO 1: Initialize controller p(a|s) 2: Initialize a neural network discriminator D θ with random parameter θ 3: Obtain state-only expert demonstration trajectories τ E = {s t } 4: while Controller Improves do 5: Execute the controller, p(a|s), and store the resulting trajectories τ p(a|s) = {(s, a, s )} 6: Learn dynamics model p(s |s, a) over τ 7: Update D θ using loss min D S×S θ E τ p(a|s) [D θ (s, s )] − E τ E [D θ (s, s ))] 8: Create the composite function C(s t , a t ) = (D θ • f t )(s t , a t ) 9: Compute the quadratically approximated cost function by taking the second order Taylor expansion of C(s t , a t ) c q (s t , a t ) = 1 2 s t a t T ∇ 2 s,a C(s t , a t ) s t a t + s t a t T ∇ s,a C(s t , a t ) 10: Improve controller p(a|s) by LQR 11: end while Gradient penalties are also used as the regularization for further stabilization of the learning process (Gulrajani et al., 2017). As discussed in Section 3, the discriminator-a function of state-transition (s t , s t+1 )-can be used as the cost function for training the controller. However, LQR requires the cost function to be a quadratic function of states and actions. Therefore, first, the discriminator is combined with the Gaussian dynamics models to create a composite cost function C(s t , a t ) = (D θ • f t )(s t , a t ). This composite function is then quadratically approximated by taking the second order Taylor expansions of the cost: c q (s t , a t ) = 1 2 s t a t T ∇ 2 s,a C(s t , a t ) s t a t + s t a t T ∇ s,a C(s t , a t ) Where ∇ 2 s,a and ∇ s,a are the Hessian and gradient with respect to the concatenation of s and a vectors, respectively. Finally, an iteration of LQR uses this cost approximation c q to optimize the trajectory to form a new linear-Gaussian controller. The step size of this update is bounded by the Experiments To evaluate the performance of our algorithm, we studied its ability to imitate a reaching task on a robot arm-both on a physical arm and in a simulator. Setup For a testing platform, we used a Universal Robotics UR5, a 6-degree-of-freedom robotic arm (Figure 2). The task that is demonstrated is a reaching task in which the arm begins in a consistent, retracted position and reaches towards a point in Cartesian space. When the end effector (the gripper at the end of the arm) reaches this point, the arm stops moving. This task is shown in Figure 4. The expert is trained by iterating between iLQR and dynamics learning with a specified reward function until convergence. This policy is then executed and recorded a number of times to create the demonstration data. We modified the software to record the state of the arm and the action chosen at every time-step of the trajectory execution. For the initial experiments, the state consisted of: For testing in simulation, we used the Gazebo simulation environment (Figure 3) with a model of the UR5. 
Each trial lasts for 100 timesteps (10 seconds) and ends regardless of the end effector reaching the goal state. At each iteration, the policy being evaluated is executed five times to collect five sample trajectories. The policy is also evaluated once without noise per iteration, and the performance according to the cost function is logged. The cost function used takes into account the distance from the end effector to the target position, weighted linearly as the trial progresses. With the distance from the goal position to the end effector at a given time-step d t , the cost of a trajectory with n time-steps is calculated as: C(τ ) = d tn + n i=0 i n d ti The same cost function is used to train the expert through reinforcement learning as well as to evaluate the performance of the imitator. In this sense, the task of imitation learning can be seen as recovering the cost function that guided the expert (Torabi et al., 2018b). For a more complex task or more specific cost function than the one studied, it's possible that the imitator could recover the task behavior correctly while not performing well in the eyes of the cost function, or vice versa. However, for the arm reaching task, the cost function is simple and directly related to the task, making it appropriate as an evaluator of imitation performance. For the imitation tasks, this cost function was used to evaluate each trajectory sample at a given iteration. The results were normalized on a range from zero to one, with zero mapping to the average cost of a random policy, and one mapping to the cost achieved by the expert. A policy that performs as well as the expert would achieve a score of one on this normalized performance scale. We compare our algorithm with GAIfO which is instrumented to interface with the arm control and simulation platform. Trials for the GAIfO also involved taking five samples per iteration, in the same way as ours. The GAIfO policy network was updated using Proximal Policy Optimization (PPO). Experimental Design We conducted three main experiments to evaluate our algorithm. In the first experiment, the learning rate is compared to learning under GAIfO. In the second experiment, we test our algorithm's ability to generalize to unseen target positions. Finally, we compare the performance of the algorithm in the simulated environment to the physical arm. COMPARISON TO GAIFO To compare the learning rate of our algorithm to that of GAIfO, we ran trials for both algorithms for 100 iterations and tracked the policy's performance at each iteration using the cost function described in Section 5.1. This process was repeated for both algorithms (n=30 for ours, n=55 for GAIfO) to collect average performance data. The algorithms' performance along with the mean standard error is plotted in Figure 5. The performance of our algorithm quickly exceeds GAIfO and peaks around iteration 30. GENERALIZATION To test our algorithm's ability to generalize a policy for a point that is not in the expert demonstration data, we collected expert demonstration trajectories for 8 points on the edge of a square (shown in Figure 7). For each point, we trained the expert and recorded five sample trajectories when the expert converged. Then, after choosing a subset of the points on the square as {τ E }, we tasked the arm with moving to a point in the center of the square. Because the center point was not in {τ E }, the control policy was required to generalize the expert trajectories to this unseen point. 
We varied the number of points included in $\{\tau_E\}$ and tracked the normalized performance of our algorithm over 15 iterations. As shown in Figure 6, while performance was similar in the early iterations, our algorithm generally performed better in later iterations when more points were included in $\{\tau_E\}$.

PERFORMANCE ON PHYSICAL ARM

Our algorithm was run on both the simulator and the physical arm to examine how closely simulated performance mapped to real-world performance. Over 25 iterations, the policy performance on the physical arm began to surpass the performance of the simulated arm, as shown in Figure 8.

Discussion

Our research began by asking whether a combination of LQR and GAIfO could increase sample efficiency in imitation learning. The comparison of LQR+GAIfO to GAIfO suggests that LQR+GAIfO can indeed produce a policy that is better at imitating a behavior in a limited number of iterations, confirming our hypothesis. The steep initial learning curve of LQR+GAIfO indicates significantly higher sample efficiency compared to GAIfO alone. However, the performance of LQR+GAIfO seems to degrade around iteration 60. Without this performance degradation, LQR+GAIfO would outperform GAIfO past iteration 100. The reason for this degradation may be that in adversarial algorithms, improvement of the generator and the discriminator should occur at relatively similar rates. However, in our algorithm, since the controller's representation complexity is limited, after some number of iterations the controller does not improve as fast as the discriminator. In addition, even without this degradation, the GAIfO approach would eventually surpass the performance of LQR+GAIfO, likely due to the ability of the generator network in GAIfO to produce more complex policies than those that can be represented with linear Gaussian controllers in LQR. Although most of the ability of GAIfO and GPS policies to perform a task that differs from the expert trajectories results from the complex model used for the policy (a neural network), the linear Gaussian controllers in LQR+GAIfO still have the ability to generalize to some degree. As expected, the ability to successfully generalize increases with the number of demonstration trajectories, as shown in Figure 6. The reason may be that the discriminator learns a general cost function that can be applied to new target points, and as a result LQR can learn a relatively good controller. Future work integrating the full GPS approach would likely lead to better generalization. We studied the performance of LQR+GAIfO on the physical arm to validate the tractability of this technique on a real robot and to establish a sense of how directly the performance studied in the simulator would translate to the physical arm. Our results, as seen in Figure 8, show that the policy performance seen in the simulator can be trusted to model policy performance on the real arm. Surprisingly, the performance of LQR+GAIfO on the physical arm exceeds the simulator performance. It is possible that the noise introduced by the physical arm as a result of actuator noise or other physical effects leads to wider exploration and faster policy improvement. If this is the case, it could be possible to achieve similar performance in the simulator by introducing more policy noise.
Conclusion and Future Work

We have found that combining generative adversarial imitation from observation with Linear Quadratic Regulators leads to faster learning of imitation behavior over fewer samples than with GAIfO alone, confirming our hypothesis. While LQR+GAIfO does not reach the absolute imitation performance of GAIfO over an extended training period with thousands of samples, achieving adequate imitation performance with limited samples opens the door to imitation research on physical robotic systems, for which imitation learning has posed logistical challenges in the past. While LQR is a powerful technique by itself, a policy based solely on Gaussian controllers has limits in complexity. Work in GPS has already produced a method for combining sample-efficient Gaussian controllers with a deep network model that is trained through the controllers. Using a deep network as part of the policy offers increased performance in the long run and greatly increased generalization ability. Incorporating this deep network policy, driven by importance-weighted samples of the linear Gaussian controllers, is an obvious and promising next step for this work. To validate the LQR+GAIfO technique, we represented the expert trajectories using low-level data like the Cartesian position of the arm's end effector. GAIfO has had success in using higher-level data, like a visual recording of the demonstrator, as the state in trajectories. Additionally, GPS has been used to learn neural network policies from visual observation. Pursuing imitation learning from visual data alone would greatly widen the situations in which demonstration data could be collected. Adding a convolutional layer to the discriminator so that it can accept visual data is a natural next step for extending this research.
4,407
1906.07374
2952165569
Imitation from observation is the framework of learning tasks by observing demonstrated state-only trajectories. Recently, adversarial approaches have achieved significant performance improvements over other methods for imitating complex behaviors. However, these adversarial imitation algorithms often require many demonstration examples and learning iterations to produce a policy that is successful at imitating a demonstrator's behavior. This high sample complexity often prohibits these algorithms from being deployed on physical robots. In this paper, we propose an algorithm that addresses the sample inefficiency problem by utilizing ideas from trajectory-centric reinforcement learning algorithms. We evaluate our algorithm on an imitation task with a physical robot arm and its simulated version in Gazebo, and show improvements in learning rate and sample efficiency.
In guided policy search under unknown dynamics, time-varying linear-Gaussian models of the dynamics for a small set of specific tasks are first fit to a small set of sample data and used by LQR to optimize local controllers @cite_19. These Gaussian controllers are then sampled to generate training data for optimizing a general policy, a model with thousands of parameters that would typically require much more training data. Specifically, samples are generated in regions of trajectories that have been found to lead to higher reward, guiding the policy learning.
{ "abstract": [ "We present a policy search method that uses iteratively refitted local linear models to optimize trajectory distributions for large, continuous problems. These trajectory distributions can be used within the framework of guided policy search to learn policies with an arbitrary parameterization. Our method fits time-varying linear dynamics models to speed up learning, but does not rely on learning a global model, which can be difficult when the dynamics are complex and discontinuous. We show that this hybrid approach requires many fewer samples than model-free methods, and can handle complex, nonsmooth dynamics that can pose a challenge for model-based techniques. We present experiments showing that our method can be used to learn complex neural network policies that successfully execute simulated robotic manipulation tasks in partially observed environments with numerous contact discontinuities and underactuation." ], "cite_N": [ "@cite_19" ], "mid": [ "2121103318" ] }
Sample-efficient Adversarial Imitation Learning from Observation
Teaching new actions to robot actors through demonstration is one of the most attractive methods for behavior learning. While robots can learn new behaviors using reinforcement learning with a pre-specified reward function (Sutton & Barto, 1998), significant exploration is often required to extract the behavior from the reward. In some cases, denser reward functions can help speed up the exploration process, but designing them requires a certain level of skill and understanding of the reinforcement learning process, and can often result in unexpected behaviors when the reward function does not precisely guide the action. Instead, teaching a robot a behavior simply by demonstrating it removes the requirement of explicitly specifying a reward function altogether. Anyone who knows how to perform the task can demonstrate it without understanding the learning process. While being able to imitate a behavior after observing the states and actions of a demonstrator is useful, there are many situations where the actions of the demonstrator are unknown. Common approaches to learning from demonstration (LfD) require both the states and actions of the demonstrator to be recorded (Argall et al., 2009). In imitation from external observation (IfO) (Liu et al., 2018; Torabi et al., 2019c), on the other hand, just the observable states of the demonstrator are known; no action information is available. Imitating behaviors solely from observable data greatly expands the set of possible demonstrators: behaviors could be learned from in-person human demonstrators or even the vast collection of videos available online. While imitation from external observation has been studied and performed with some success for two decades (Ijspeert et al., 2001), recent advances in deep neural networks have widened the set of behaviors that can be imitated and the ways that demonstration data can be collected. One way deep learning has been applied to IfO is through generative adversarial networks (Torabi et al., 2018b; Ho & Ermon, 2016; Chen et al., 2016). In this approach, generative adversarial imitation from observation (GAIfO), one network learns a control policy for imitating the demonstrator while the other learns to discriminate between the demonstrator's behavior and that of the imitator. While GAIfO advanced the state of the art in imitation from observation, it comes with its own set of challenges. First, in comparison with simpler regressed models, deep networks are notorious for requiring orders of magnitude more training data, and GAIfO is no exception. Second, this algorithm uses model-free reinforcement learning algorithms, which are usually very data-inefficient. Some of the potential benefits of applying IfO break down when a high sample count is required. Therefore, in practice, this algorithm has been largely limited to being studied in simulation. In simulation, many experiences and large demonstration sets can be collected quickly. Physical demonstrations are more costly to perform, and real-time constraints limit the speed at which control policies can be evaluated and thus behavior learned. For imitation from observation to work on a physical robot, a higher degree of sample efficiency is required. Deep reinforcement learning has faced similar obstacles in learning with limited samples, especially in the context of robotic control policies with complex dynamics.
Recently, however, trajectory-centric reinforcement learning algorithms have been used to guide neural network policy search, an approach which has been shown to be very sample-efficient (Levine & Koltun, 2013; Levine & Abbeel, 2014; Levine et al., 2015). These algorithms achieve this sample efficiency in part by gaining insight into the dynamics through the iterative training of linear quadratic regulators (iLQRs) (Tassa et al., 2012) on a set of trajectory controllers. In this paper, we propose an imitation from observation algorithm, LQR+GAIfO, that takes advantage of both (1) the high performance of adversarial learning algorithms, and (2) the sample efficiency of trajectory-centric reinforcement learning algorithms. We apply the proposed algorithm to a 6-degree-of-freedom robot arm to learn to imitate behaviors from a set of low-level state trajectories. We find that this new method results in successful imitation learning with fewer samples than previous algorithms. In Section 2 of this paper, we discuss previous work related to this topic. In Section 3, we cover the techniques involved in GAIfO and LQR. Section 4 describes our approach to combining LQR and GAIfO into one functional algorithm. In Section 5, we share our experimental setup and results, and we discuss the results in Section 6. Finally, in Section 7, we summarize and discuss potential future work.

Preliminaries and Overview

In this section, we describe the notation used throughout the paper and the two methods that our proposed algorithm builds on: (1) adversarial imitation from observation, and (2) trajectory-centric reinforcement learning.

Notation

We consider agents acting within the broad framework of Markov decision processes (MDPs). We denote an MDP using the 5-tuple $M = \{S, A, P, r, \gamma\}$, where $S$ is the agent's state space, $A$ is its action space, $P(s_{t+1}|s_t, a_t)$ is a function denoting the probability of the agent transitioning from state $s_t$ to $s_{t+1}$ after taking action $a_t$, $r: S \times A \to \mathbb{R}$ is a function specifying the immediate reward that the agent receives for taking a specific action in a given state, and $\gamma$ is a discount factor. In this framework, agent behavior can be specified by a policy, $\pi: S \to A$, which specifies the action (or distribution over actions) that the agent should use when in a particular state. In reinforcement learning, the goal is to learn a policy, $\pi$, by maximizing the accumulated reward, $r$, through interaction with the environment. Imitation learning, however, considers the setting $M \setminus r$, i.e., the reward function is excluded; instead, the agent has access to some demonstrated trajectories. The problem we are interested in in this paper is imitation from observation, where these demonstrations only include state trajectories of the expert, $\tau_E = \{s_t\}$.

Adversarial Imitation from Observation

Generative adversarial imitation from observation (Torabi et al., 2018b) is an algorithm of this type, which attempts to learn tasks by bringing the state-transition distribution of the imitator closer to that of the demonstrator. The algorithm works as follows. There is an imitator policy network, $\pi_\phi$, that is initialized randomly. This policy is then executed in the environment to generate trajectories $\tau_\pi$, where each trajectory is a set of states $\{(s_0, s_1, \dots, s_n)\}$.
There is also a discriminator network, parameterized by weights $\theta$, that maps state transitions to a score between 0 and 1, $D_\theta: S \times S \to [0, 1]$. The discriminator is trained to output values close to zero for the data coming from the expert and close to one for the data coming from the imitator. To do so, $\theta$ is updated by taking an iteration towards solving the following optimization problem:

$$\max_{D_\theta \in (0,1)^{S \times S}} \mathbb{E}_{\tau_\pi}[\log(D_\theta(s, s'))] + \mathbb{E}_{\tau_E}[\log(1 - D_\theta(s, s'))] \quad (1)$$

From a reinforcement learning point of view, the discriminator network provides a cost function that could change $\phi$ to move the distribution of trajectories created by $\pi_\phi$ towards the distribution of the demonstrated trajectories $\tau_E$. Therefore, following the update to $D_\theta$, the imitator policy, $\pi_\phi$, is updated using the technique of Trust Region Policy Optimization (Schulman et al., 2015) under the cost function

$$\log(D_\theta(s, s')) \quad (2)$$

where $D_\theta$ is the newly updated discriminator network. The whole process is repeated until convergence. It is well known that model-free reinforcement learning algorithms (e.g., TRPO) often require a large number of environment interactions; therefore, it is not practical to deploy these types of algorithms on physical robots. Model-based RL algorithms, on the other hand, have shown promising performance in the real world.

Trajectory Centric Reinforcement Learning

Linear quadratic regulators (LQRs) learn control policies under two assumptions (Bemporad et al., 2002):

1. The dynamics of the environment are linear. This means that the transition from a particular state given an action, $f(s_t, a_t)$, can be represented as the product of the state/action and a matrix $F_t$, plus a constant vector $f_t$: $f(s_t, a_t) = F_t \begin{bmatrix} s_t \\ a_t \end{bmatrix} + f_t$

2. The cost is quadratic. The cost is represented by a quadratic term $C_t$ and a linear vector $c_t$: $c(s_t, a_t) = \frac{1}{2}\begin{bmatrix} s_t \\ a_t \end{bmatrix}^T C_t \begin{bmatrix} s_t \\ a_t \end{bmatrix} + \begin{bmatrix} s_t \\ a_t \end{bmatrix}^T c_t$

The algorithm attempts to solve an optimization problem that returns the actions with the highest return over the course of an episode. Solving this optimization problem results in a linear controller:

$$a_t = K_t s_t + k_t \quad (3)$$

where the $K_t$ and $k_t$ are matrices and vectors, built from combinations of the $F_t$, $C_t$, $f_t$, and $c_t$, that can be computed for each time-step. In situations where the dynamics are assumed to be close to linear but are not completely known or are non-deterministic, the linear transition function is often replaced by a conditional probability specified under a normal Gaussian distribution, with a mean of the linear dynamics and a covariance: $p(s_{t+1}|s_t, a_t) = \mathcal{N}(F_t \begin{bmatrix} s_t \\ a_t \end{bmatrix} + f_t, \sigma^2)$. When the covariance is constant (independent of the state and action), the optimal policy is identical to that of the non-stochastic LQR. In non-linear systems where the cost is not quadratic, the techniques of LQR can be used by approximating the dynamics with a first-order Taylor expansion and approximating the cost with a second-order Taylor expansion: $F_t = \nabla_{s_t, a_t} f(s_t, a_t)$, $C_t = \nabla^2_{s_t, a_t} c(s_t, a_t)$, $c_t = \nabla_{s_t, a_t} c(s_t, a_t)$. Iterative linear quadratic regulators (iLQRs) can be used to find optimal controllers under non-linear models by running LQR with the approximated dynamics, then updating the dynamics fit on each iteration (Li & Todorov, 2004). The resulting controller is $a_t = K_t(s_t - \hat{s}_t) + k_t + \hat{a}_t$, where $\hat{s}_t$ and $\hat{a}_t$ are the states and actions around which the Taylor expansion is computed. LQR assumes that the dynamics of the environment are known.
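To make the backward pass behind Equation (3) concrete, the following is a minimal numpy sketch of the finite-horizon LQR recursion that produces the gains from the linear dynamics $(F_t, f_t)$ and quadratic cost $(C_t, c_t)$. The function name, array layout (state dimensions before action dimensions), and the use of exact linear solves are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def lqr_backward(F, f, C, c, ds, da):
    """Finite-horizon LQR backward pass.

    F[t] ((ds, ds+da)) and f[t] ((ds,)) define s_{t+1} = F[t] @ [s_t; a_t] + f[t];
    C[t] ((ds+da, ds+da)) and c[t] ((ds+da,)) define the quadratic cost.
    Returns gains K[t] ((da, ds)) and k[t] ((da,)) with a_t = K[t] @ s_t + k[t].
    """
    T = len(F)
    V, v = np.zeros((ds, ds)), np.zeros(ds)  # quadratic/linear value-function terms
    K, k = [None] * T, [None] * T
    for t in reversed(range(T)):
        # Q-function terms at time t: immediate cost plus value of the next state
        Q = C[t] + F[t].T @ V @ F[t]
        q = c[t] + F[t].T @ (V @ f[t] + v)
        Qaa, Qas, qa = Q[ds:, ds:], Q[ds:, :ds], q[ds:]
        K[t] = -np.linalg.solve(Qaa, Qas)   # feedback gain
        k[t] = -np.linalg.solve(Qaa, qa)    # feedforward term
        # Propagate the value function one step backwards
        V = Q[:ds, :ds] + Q[:ds, ds:] @ K[t] + K[t].T @ Qas + K[t].T @ Qaa @ K[t]
        v = q[:ds] + Q[:ds, ds:] @ k[t] + K[t].T @ qa + K[t].T @ Qaa @ k[t]
    return K, k
```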
Learning the dynamics for a given situation involves building a model of $f(s_t, a_t)$ from a set of observed state/action transitions $\tau = \{(s_t, a_t, s_{t+1})\}$. A simple approach to this model building is to use linear regression to estimate the dynamics, finding some matrices $X$ and $Y$ that model the transition as $f(s_t, a_t) = X s_t + Y a_t + c$, or, in a stochastic environment, $p(s_{t+1}|s_t, a_t) = \mathcal{N}(X s_t + Y a_t + c, \sigma^2)$. Modelling dynamics with a Gaussian approximation of the linear regression (often called linear Gaussian models) has the advantage of being very sample-efficient. To avoid the erroneous pursuit of an incorrect global optimum, a set of local models can be used in place of a global model. The most expressive case is a set of local models with a separate model for every time-step. In the linear regression approach, this amounts to fitting new $X_t$ and $Y_t$ for every time-step, yielding time-varying linear models. Because dynamics are often highly correlated between time-steps, this approach can be refined by using a global model as a prior for a Bayesian linear regression at each time-step. For a better approximation of the local models, it has been shown that linear-Gaussian controllers, $p(a_t|s_t) = \mathcal{N}(K_t(s_t - \hat{s}_t) + k_t + \hat{a}_t, \Sigma_t)$, should be used for generating the training data. The covariance depends on the sensitivity of the total cost to the choice of action. Because linear regression can overshoot the optima of nonlinear dynamics, the policy adjustment can be bounded so that each iteration's update to the model's transition distribution (or trajectory distribution) is not too large. This can be achieved with a bound on the Kullback-Leibler (KL) divergence, a relative measure of divergence between distributions, between the previous trajectory distribution and the current trajectory distribution.

Proposed Algorithm

In this section, we propose an imitation from observation algorithm, LQR+GAIfO, that learns an imitation policy from state-only demonstrations, $\tau_E$. Our algorithm takes advantage of the high performance of adversarial imitation from observation algorithms and the sample efficiency of trajectory-centric reinforcement learning algorithms. To do so, we build upon the methods described in Section 3. Our algorithm begins by initializing the linear Gaussian controller and executing it inside the environment to collect state-action trajectories $\{(s_t, a_t)\}$. It then randomly initializes a time-varying model $p$ to model the trajectory dynamics, specified as $p(s_{t+1}|s_t, a_t) = \mathcal{N}(F_t \begin{bmatrix} s_t \\ a_t \end{bmatrix} + f_t, \sigma^2)$. Given a set of state-action trajectories $\{(s_t, a_t)\}$, the parameters $F_t$, $f_t$, and $\sigma^2$ are fit to the sample data at each time-step using Bayesian linear regression with a normal-inverse-Wishart prior. For this prior, the algorithm fits the entire trajectory sample to a Gaussian mixture model (GMM), which previous research has found to be effective.
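As an illustration of the dynamics-fitting step, the sketch below fits time-varying linear-Gaussian dynamics by plain ridge-regularized least squares across $N$ sampled trajectories. The paper's actual estimator is Bayesian linear regression with a GMM-based normal-inverse-Wishart prior; that prior is omitted here, and all names and array shapes are illustrative assumptions.

```python
import numpy as np

def fit_time_varying_dynamics(S, A, reg=1e-6):
    """Fit p(s_{t+1} | s_t, a_t) = N(F_t [s_t; a_t] + f_t, Sigma_t) per time-step.

    S: (N, T+1, ds) states of N sampled trajectories; A: (N, T, da) actions.
    """
    N, T, da = A.shape
    ds = S.shape[2]
    F, f, Sigma = [], [], []
    for t in range(T):
        X = np.hstack([S[:, t], A[:, t], np.ones((N, 1))])  # (N, ds+da+1)
        Y = S[:, t + 1]                                     # (N, ds)
        # Ridge-regularized least squares over the N samples at time t
        W = np.linalg.solve(X.T @ X + reg * np.eye(ds + da + 1), X.T @ Y).T
        F.append(W[:, :ds + da])   # F_t
        f.append(W[:, -1])         # f_t
        resid = Y - X @ W.T
        Sigma.append(resid.T @ resid / max(N - 1, 1))  # empirical residual covariance
    return F, f, Sigma
```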
Following the dynamics model update, a randomly initialized neural network is considered as the discriminator, $D_\theta$, which takes state transitions $(s_t, s_{t+1})$ as input and outputs a value. Similar to Section 3.2, the goal is to train the discriminator to distinguish between state transitions coming from the controller and those coming from the demonstrator. However, in order to stabilize the learning, our algorithm uses the Wasserstein loss and takes an iteration on the following optimization problem over $D_\theta: \mathcal{S} \times \mathcal{S} \to \mathbb{R}$:

$$\min_{D_\theta} \; \mathbb{E}_{\tau_{p(a|s)}}[D_\theta(s, s')] - \mathbb{E}_{\tau_E}[D_\theta(s, s')]$$

Algorithm 1 LQR+GAIfO
1: Initialize the controller $p(a|s)$
2: Initialize a neural network discriminator $D_\theta$ with random parameters $\theta$
3: Obtain state-only expert demonstration trajectories $\tau_E = \{s_t\}$
4: while the controller improves do
5: Execute the controller $p(a|s)$ and store the resulting trajectories $\tau_{p(a|s)} = \{(s, a, s')\}$
6: Learn the dynamics model $p(s'|s, a)$ over $\tau_{p(a|s)}$
7: Update $D_\theta$ using the loss $\min_{D_\theta} \mathbb{E}_{\tau_{p(a|s)}}[D_\theta(s, s')] - \mathbb{E}_{\tau_E}[D_\theta(s, s')]$
8: Create the composite function $C(s_t, a_t) = (D_\theta \circ f_t)(s_t, a_t)$
9: Compute the quadratically approximated cost function by taking the second-order Taylor expansion of $C(s_t, a_t)$: $c_q(s_t, a_t) = \frac{1}{2}\begin{bmatrix} s_t \\ a_t \end{bmatrix}^T \nabla^2_{s,a} C(s_t, a_t) \begin{bmatrix} s_t \\ a_t \end{bmatrix} + \begin{bmatrix} s_t \\ a_t \end{bmatrix}^T \nabla_{s,a} C(s_t, a_t)$
10: Improve the controller $p(a|s)$ by LQR
11: end while

Gradient penalties are also used as regularization to further stabilize the learning process (Gulrajani et al., 2017). As discussed in Section 3, the discriminator, a function of the state transition $(s_t, s_{t+1})$, can be used as the cost function for training the controller. However, LQR requires the cost function to be a quadratic function of states and actions. Therefore, first, the discriminator is combined with the Gaussian dynamics models to create a composite cost function $C(s_t, a_t) = (D_\theta \circ f_t)(s_t, a_t)$. This composite function is then quadratically approximated by taking the second-order Taylor expansion of the cost:

$$c_q(s_t, a_t) = \frac{1}{2}\begin{bmatrix} s_t \\ a_t \end{bmatrix}^T \nabla^2_{s,a} C(s_t, a_t) \begin{bmatrix} s_t \\ a_t \end{bmatrix} + \begin{bmatrix} s_t \\ a_t \end{bmatrix}^T \nabla_{s,a} C(s_t, a_t)$$

where $\nabla^2_{s,a}$ and $\nabla_{s,a}$ are the Hessian and gradient with respect to the concatenation of the $s$ and $a$ vectors, respectively. Finally, an iteration of LQR uses this cost approximation $c_q$ to optimize the trajectory to form a new linear-Gaussian controller. The step size of this update is bounded by the KL divergence between the new and the previous trajectory distributions, as described in Section 3.3.

Experiments

To evaluate the performance of our algorithm, we studied its ability to imitate a reaching task on a robot arm, both on a physical arm and in a simulator.

Setup

For a testing platform, we used a Universal Robots UR5, a 6-degree-of-freedom robotic arm (Figure 2). The demonstrated task is a reaching task in which the arm begins in a consistent, retracted position and reaches towards a point in Cartesian space. When the end effector (the gripper at the end of the arm) reaches this point, the arm stops moving. This task is shown in Figure 4. The expert is trained by iterating between iLQR and dynamics learning with a specified reward function until convergence. This policy is then executed and recorded a number of times to create the demonstration data. We modified the software to record the state of the arm and the action chosen at every time-step of the trajectory execution. For the initial experiments, the state consisted of:

For testing in simulation, we used the Gazebo simulation environment (Figure 3) with a model of the UR5.
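Returning to the quadratization step of Algorithm 1, the sketch below approximates an arbitrary scalar cost, such as the composite $C(s, a) = D_\theta(s, f_t(s, a))$, by its gradient and Hessian around a nominal state-action pair using central finite differences. Finite differences are an illustrative stand-in here; differentiating through the discriminator with automatic differentiation would serve the same purpose.

```python
import numpy as np

def quadratize_cost(cost_fn, s_hat, a_hat, eps=1e-4):
    """Second-order Taylor terms of a scalar cost around (s_hat, a_hat).

    cost_fn maps a concatenated [s; a] vector to a scalar. Returns the
    Hessian and gradient that play the roles of the LQR cost terms C_t, c_t.
    """
    x0 = np.concatenate([s_hat, a_hat])
    n = x0.size
    grad = np.zeros(n)
    hess = np.zeros((n, n))
    basis = np.eye(n) * eps  # perturbation directions
    for i in range(n):
        grad[i] = (cost_fn(x0 + basis[i]) - cost_fn(x0 - basis[i])) / (2 * eps)
        for j in range(i, n):
            hess[i, j] = (cost_fn(x0 + basis[i] + basis[j])
                          - cost_fn(x0 + basis[i] - basis[j])
                          - cost_fn(x0 - basis[i] + basis[j])
                          + cost_fn(x0 - basis[i] - basis[j])) / (4 * eps ** 2)
            hess[j, i] = hess[i, j]  # the Hessian is symmetric
    return hess, grad
```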
Each trial lasts for 100 time-steps (10 seconds) and ends regardless of whether the end effector has reached the goal state. At each iteration, the policy being evaluated is executed five times to collect five sample trajectories. The policy is also evaluated once without noise per iteration, and the performance according to the cost function is logged. The cost function used takes into account the distance from the end effector to the target position, weighted linearly as the trial progresses. With $d_{t_i}$ denoting the distance from the goal position to the end effector at time-step $t_i$, the cost of a trajectory with $n$ time-steps is calculated as:

$$C(\tau) = d_{t_n} + \sum_{i=0}^{n} \frac{i}{n} d_{t_i}$$

The same cost function is used to train the expert through reinforcement learning as well as to evaluate the performance of the imitator. In this sense, the task of imitation learning can be seen as recovering the cost function that guided the expert (Torabi et al., 2018b). For a more complex task or a more specific cost function than the one studied, it is possible that the imitator could recover the task behavior correctly while not performing well in the eyes of the cost function, or vice versa. However, for the arm reaching task, the cost function is simple and directly related to the task, making it appropriate as an evaluator of imitation performance. For the imitation tasks, this cost function was used to evaluate each trajectory sample at a given iteration. The results were normalized on a range from zero to one, with zero mapping to the average cost of a random policy, and one mapping to the cost achieved by the expert. A policy that performs as well as the expert would achieve a score of one on this normalized performance scale. We compare our algorithm with GAIfO, which we instrumented to interface with the arm control and simulation platform. Trials for GAIfO also involved taking five samples per iteration, in the same way as for our algorithm. The GAIfO policy network was updated using Proximal Policy Optimization (PPO).

Experimental Design

We conducted three main experiments to evaluate our algorithm. In the first experiment, the learning rate is compared to learning under GAIfO. In the second experiment, we test our algorithm's ability to generalize to unseen target positions. Finally, we compare the performance of the algorithm in the simulated environment to the physical arm.

COMPARISON TO GAIFO

To compare the learning rate of our algorithm to that of GAIfO, we ran trials for both algorithms for 100 iterations and tracked the policy's performance at each iteration using the cost function described in Section 5.1. This process was repeated for both algorithms (n=30 for ours, n=55 for GAIfO) to collect average performance data. The algorithms' performance, along with the standard error of the mean, is plotted in Figure 5. The performance of our algorithm quickly exceeds GAIfO and peaks around iteration 30.

GENERALIZATION

To test our algorithm's ability to generalize a policy to a point that is not in the expert demonstration data, we collected expert demonstration trajectories for 8 points on the edge of a square (shown in Figure 7). For each point, we trained the expert and recorded five sample trajectories when the expert converged. Then, after choosing a subset of the points on the square as $\{\tau_E\}$, we tasked the arm with moving to a point in the center of the square. Because the center point was not in $\{\tau_E\}$, the control policy was required to generalize the expert trajectories to this unseen point.
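As a concrete reading of the evaluation metric from Section 5.1, the sketch below computes the trajectory cost and one plausible linear normalization. The exact normalization formula is not given in the text, so normalized_score is an assumption consistent with the description: zero for the average cost of a random policy and one for the expert's cost.

```python
import numpy as np

def trajectory_cost(dists):
    """C(tau) = d_{t_n} + sum_{i=0}^{n} (i/n) * d_{t_i}, dists = [d_{t_0}, ..., d_{t_n}]."""
    d = np.asarray(dists, dtype=float)
    n = max(len(d) - 1, 1)  # guard against single-step trajectories
    return d[-1] + np.sum((np.arange(len(d)) / n) * d)

def normalized_score(cost, random_cost, expert_cost):
    """Linearly map a raw cost onto [0, 1]: 0 = average random policy, 1 = expert."""
    return (random_cost - cost) / (random_cost - expert_cost)
```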
We varied the number of points included in $\{\tau_E\}$ and tracked the normalized performance of our algorithm over 15 iterations. As shown in Figure 6, while performance was similar in the early iterations, our algorithm generally performed better in later iterations when more points were included in $\{\tau_E\}$.

PERFORMANCE ON PHYSICAL ARM

Our algorithm was run on both the simulator and the physical arm to examine how closely simulated performance mapped to real-world performance. Over 25 iterations, the policy performance on the physical arm began to surpass the performance of the simulated arm, as shown in Figure 8.

Discussion

Our research began by asking whether a combination of LQR and GAIfO could increase sample efficiency in imitation learning. The comparison of LQR+GAIfO to GAIfO suggests that LQR+GAIfO can indeed produce a policy that is better at imitating a behavior in a limited number of iterations, confirming our hypothesis. The steep initial learning curve of LQR+GAIfO indicates significantly higher sample efficiency compared to GAIfO alone. However, the performance of LQR+GAIfO seems to degrade around iteration 60. Without this performance degradation, LQR+GAIfO would outperform GAIfO past iteration 100. The reason for this degradation may be that in adversarial algorithms, improvement of the generator and the discriminator should occur at relatively similar rates. However, in our algorithm, since the controller's representation complexity is limited, after some number of iterations the controller does not improve as fast as the discriminator. In addition, even without this degradation, the GAIfO approach would eventually surpass the performance of LQR+GAIfO, likely due to the ability of the generator network in GAIfO to produce more complex policies than those that can be represented with linear Gaussian controllers in LQR. Although most of the ability of GAIfO and GPS policies to perform a task that differs from the expert trajectories results from the complex model used for the policy (a neural network), the linear Gaussian controllers in LQR+GAIfO still have the ability to generalize to some degree. As expected, the ability to successfully generalize increases with the number of demonstration trajectories, as shown in Figure 6. The reason may be that the discriminator learns a general cost function that can be applied to new target points, and as a result LQR can learn a relatively good controller. Future work integrating the full GPS approach would likely lead to better generalization. We studied the performance of LQR+GAIfO on the physical arm to validate the tractability of this technique on a real robot and to establish a sense of how directly the performance studied in the simulator would translate to the physical arm. Our results, as seen in Figure 8, show that the policy performance seen in the simulator can be trusted to model policy performance on the real arm. Surprisingly, the performance of LQR+GAIfO on the physical arm exceeds the simulator performance. It is possible that the noise introduced by the physical arm as a result of actuator noise or other physical effects leads to wider exploration and faster policy improvement. If this is the case, it could be possible to achieve similar performance in the simulator by introducing more policy noise.
Conclusion and Future Work

We have found that combining generative adversarial imitation from observation with Linear Quadratic Regulators leads to faster learning of imitation behavior over fewer samples than with GAIfO alone, confirming our hypothesis. While LQR+GAIfO does not reach the absolute imitation performance of GAIfO over an extended training period with thousands of samples, achieving adequate imitation performance with limited samples opens the door to imitation research on physical robotic systems, for which imitation learning has posed logistical challenges in the past. While LQR is a powerful technique by itself, a policy based solely on Gaussian controllers has limits in complexity. Work in GPS has already produced a method for combining sample-efficient Gaussian controllers with a deep network model that is trained through the controllers. Using a deep network as part of the policy offers increased performance in the long run and greatly increased generalization ability. Incorporating this deep network policy, driven by importance-weighted samples of the linear Gaussian controllers, is an obvious and promising next step for this work. To validate the LQR+GAIfO technique, we represented the expert trajectories using low-level data like the Cartesian position of the arm's end effector. GAIfO has had success in using higher-level data, like a visual recording of the demonstrator, as the state in trajectories. Additionally, GPS has been used to learn neural network policies from visual observation. Pursuing imitation learning from visual data alone would greatly widen the situations in which demonstration data could be collected. Adding a convolutional layer to the discriminator so that it can accept visual data is a natural next step for extending this research.
4,407
1810.08606
2894886314
Dropout is a crucial regularization technique for Recurrent Neural Network (RNN) models of Natural Language Inference (NLI). However, the effectiveness of dropout at different layers and with different dropout rates has not been evaluated in NLI models. In this paper, we propose a novel RNN model for NLI and empirically evaluate the effect of applying dropout at different layers in the model. We also investigate the impact of varying dropout rates at these layers. Our empirical evaluation on a large (Stanford Natural Language Inference (SNLI)) and a small (SciTail) dataset suggests that dropout at each feed-forward connection severely affects the model accuracy at increasing dropout rates. We also show that regularizing the embedding layer is efficient for SNLI, whereas regularizing the recurrent layer improves the accuracy for SciTail. Our model achieved an accuracy of 86.14% on the SNLI dataset and 77.05% on SciTail.
Different NLI models apply dropout at different layers in the general NLI architecture. The NLI models proposed by @cite_8 and @cite_3 apply dropout to each feed-forward layer in the network, whereas others have applied dropout only to the final classifier layer @cite_15. @cite_13 apply dropout only to the input and output of the sentence encoding layers. The models proposed by @cite_9 and @cite_6 apply dropout to the output of the embedding layer and to the input and output of the classifier layer. @cite_5 and @cite_11 use dropout but do not elaborate on its location.
{ "abstract": [ "We present a novel deep learning architecture to address the natural language inference (NLI) task. Existing approaches mostly rely on simple reading mechanisms for independent encoding of the premise and hypothesis. Instead, we propose a novel dependent reading bidirectional LSTM network (DR-BiLSTM) to efficiently model the relationship between a premise and a hypothesis during encoding and inference. We also introduce a sophisticated ensemble strategy to combine our proposed models, which noticeably improves final predictions. Finally, we demonstrate how the results can be improved further with an additional preprocessing step. Our evaluation shows that DR-BiLSTM obtains the best single model and ensemble model results achieving the new state-of-the-art scores on the Stanford NLI dataset.", "Tree-structured neural networks exploit valuable syntactic parse information as they interpret the meanings of sentences. However, they suer from two key technical problems that make them slow and unwieldyforlarge-scaleNLPtasks: theyusually operate on parsed sentences and they do not directly support batched computation. We address these issues by introducingtheStack-augmentedParser-Interpreter NeuralNetwork(SPINN),whichcombines parsing and interpretation within a single tree-sequence hybrid model by integrating tree-structured sentence interpretation into the linear sequential structure of a shiftreduceparser. Ourmodelsupportsbatched computation for a speedup of up to 25◊ over other tree-structured models, and its integrated parser can operate on unparsed data with little loss in accuracy. We evaluate it on the Stanford NLI entailment task and show that it significantly outperforms other sentence-encoding models.", "This paper presents a new deep learning architecture for Natural Language Inference (NLI). Firstly, we introduce a new compare-propagate architecture where alignments pairs are compared and then propagated to upper layers for enhanced representation learning. Secondly, we adopt novel factorization layers for efficient compression of alignment vectors into scalar valued features, which are then be used to augment the base word representations. The design of our approach is aimed to be conceptually simple, compact and yet powerful. We conduct experiments on three popular benchmarks, SNLI, MultiNLI and SciTail, achieving state-of-the-art performance on all. A lightweight parameterization of our model enjoys a @math reduction in parameter size compared to the ESIM and DIIN, while maintaining competitive performance. Visual analysis shows that our propagated features are highly interpretable, opening new avenues to explainability in neural NLI models.", "", "Modeling informal inference in natural language is very challenging. With the recent availability of large annotated data, it has become feasible to train complex models such as neural networks to perform natural language inference (NLI), which have achieved state-of-the-art performance. Although there exist relatively large annotated data, can machines learn all knowledge needed to perform NLI from the data? If not, how can NLI models benefit from external knowledge and how to build NLI models to leverage it? In this paper, we aim to answer these questions by enriching the state-of-the-art neural natural language inference models with external knowledge. 
We demonstrate that the proposed models with external knowledge further improve the state of the art on the Stanford Natural Language Inference (SNLI) dataset.", "In this paper, we proposed a sentence encoding-based model for recognizing text entailment. In our approach, the encoding of sentence is a two-stage process. Firstly, average pooling was used over word-level bidirectional LSTM (biLSTM) to generate a first-stage sentence representation. Secondly, attention mechanism was employed to replace average pooling on the same sentence for better representations. Instead of using target sentence to attend words in source sentence, we utilized the sentence's first-stage representation to attend words appeared in itself, which is called \"Inner-Attention\" in our paper . Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus has proved the effectiveness of \"Inner-Attention\" mechanism. With less number of parameters, our model outperformed the existing best sentence encoding-based approach by a large margin.", "Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.", "" ], "cite_N": [ "@cite_8", "@cite_9", "@cite_3", "@cite_6", "@cite_5", "@cite_15", "@cite_13", "@cite_11" ], "mid": [ "2785375517", "2308720496", "2782363479", "2963580443", "2768523431", "2415204069", "1840435438", "2267186426" ] }
An Exploration of Dropout with RNNs for Natural Language Inference
Natural Language Understanding (NLU) is the process of enabling computers to understand the semantics of natural language text. The inherent complexities and ambiguities of natural language text make NLU challenging for computers. Natural Language Inference (NLI) is a fundamental step towards NLU [14]. NLI involves logically inferring a hypothesis sentence from a given premise sentence. The recent release of a large public dataset, the Stanford Natural Language Inference (SNLI) corpus [2], has made it feasible to train complex neural network models for NLI. Recurrent Neural Networks (RNNs), particularly bidirectional LSTMs (BiLSTMs), have shown state-of-the-art results on the SNLI dataset [9]. However, RNNs are susceptible to overfitting, the case when a neural network learns the exact patterns present in the training data but fails to generalize to unseen data [21]. In NLI models, regularization techniques such as early stopping [4], L2 regularization, and dropout [20] are used to prevent overfitting. For RNNs, dropout is an effective regularization technique [21]. The idea of dropout is to randomly omit computing units in a neural network during training but to keep all of them for testing. Dropout consists of element-wise multiplication of the neural network layer activations with a zero-one mask ($r_j$) during training. Each element of the zero-one mask is drawn independently from $r_j \sim \mathrm{Bernoulli}(p)$, where $p$ is the probability with which the units are retained in the network. During testing, the activations of the layer are multiplied by $p$ [19]. Dropout is a crucial regularization technique for NLI [9] [20]. However, the location of dropout varies considerably between NLI models and is based on trial-and-error experiments with different locations in the network. To the best of our knowledge, no prior work has been performed to evaluate the effectiveness of dropout locations and rates in RNN NLI models. In this paper, we study the effect of applying dropout at different locations in an RNN model for NLI. We also investigate the effect of varying the dropout rate. Our results suggest that applying dropout at every feed-forward connection, especially at higher dropout rates, degrades the performance of the RNN. Our best model achieves an accuracy of 86.14% on the SNLI dataset and an accuracy of 77.05% on the SciTail dataset. To the best of our knowledge, this research is the first exploratory analysis of dropout for NLI. The main contributions of this paper are as follows: (1) an RNN model based on BiLSTMs for NLI; (2) a comparative analysis of different dropout locations and rates in the proposed RNN NLI model; (3) recommendations for the usage of dropout in RNN models for the NLI task. The layout of the paper is as follows. In Section 2, we describe related work. In Section 3, we discuss the proposed RNN-based NLI model. Experiments and results are presented in Section 4. Recommendations for the application of dropout are presented in Section 5. We conclude in Section 6.

Recurrent Neural Network Model for NLI Task

The proposed RNN NLI model follows the general architecture of NLI models and is depicted in Fig. 1. The model combines the intra-attention model [13] with the soft-attention mechanism [11].
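A minimal numpy sketch of the dropout operation defined above may be helpful. It follows the retention-probability convention just described, multiplying by a Bernoulli mask during training and scaling by $p$ at test time, rather than the inverted-dropout variant common in modern libraries; the function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p, train=True):
    """Standard dropout: keep each unit with probability p during training;
    at test time, scale the activations by p instead of masking."""
    if train:
        mask = rng.binomial(1, p, size=activations.shape)  # r_j ~ Bernoulli(p)
        return activations * mask
    return activations * p
```

With a retention probability of p=0.8, for example, each unit is kept with probability 0.8 during training, and all activations are scaled by 0.8 at test time.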
The embedding layer takes word vectors as input, and the intra-attention layer computes the representation

$M = \tanh(W^y Y + W^h R_{avg} \otimes e_L) \quad (1)$
$\alpha = \mathrm{softmax}(w^T M) \quad (2)$
$R = Y \alpha^T \quad (3)$

where $W^y$ and $W^h$ are trained projection matrices, $w^T$ is the transpose of the trained parameter vector $w$, $Y$ is the matrix of hidden output vectors of the BiLSTM layer, $R_{avg}$ is obtained from the average pooling of $Y$, $e_L \in \mathbb{R}^L$ is a vector of 1s, $\alpha$ is a vector of attention weights, and $R$ is the attention-weighted sequence representation. The attention-weighted sequence representations generated for the premise and hypothesis are denoted $R_p$ and $R_h$. The attention-weighted representation gives more importance to the words which are important to the semantics of the sequence and also captures its global context. The interaction between $R_p$ and $R_h$ is performed by the inter-attention layer, following Equations (4)-(6):

$I_v = R_p^T R_h \quad (4)$
$\tilde{R}_p = \mathrm{softmax}(I_v) R_h \quad (5)$
$\tilde{R}_h = \mathrm{softmax}(I_v) R_p \quad (6)$

where $I_v$ is the interaction vector. $\tilde{R}_p$ contains the words which are relevant based on the content of sequence $R_h$. Similarly, $\tilde{R}_h$ contains words which are important with respect to the content of sequence $R_p$. The final sequence encoding is obtained from the element-wise multiplication of the intra-attention-weighted representation and the inter-attention-weighted representation as follows:

$F_p = \tilde{R}_p \odot R_p \quad (7)$
$F_h = \tilde{R}_h \odot R_h \quad (8)$

To classify the relationship between premise and hypothesis, a relation vector is formed from the encodings of the premise and hypothesis generated in Equations (7) and (8), as follows:

$v_{p,avg} = \mathrm{avgpool}(F_p), \quad v_{p,max} = \mathrm{maxpool}(F_p)$
$v_{h,avg} = \mathrm{avgpool}(F_h), \quad v_{h,max} = \mathrm{maxpool}(F_h) \quad (9)$
$F_{relation} = [v_{p,avg}; v_{p,max}; v_{h,avg}; v_{h,max}] \quad (10)$

where each $v$ is a vector of length $L$. The relation vector $F_{relation}$ is fed to the MLP layer. The three-way softmax layer outputs the probability for each class of NLI.

Experiments and Results

Experimental Setup

The standard train, validation, and test splits of SNLI [2] and SciTail [10] are used. The Adam [12] optimizer, with the first momentum set to 0.9 and the second to 0.999, is used. The word embeddings are initialized with pre-trained 300-D GloVe 840B vectors [17]. Extensive experiments with dropout locations and hidden units were conducted; however, we show only the best results for brevity and space limits. Table 1 presents the models with the different combinations of layers to whose output dropout is applied in the model depicted in Fig. 1. Table 2 shows the results for the models in Table 1. Each model is evaluated with dropout rates ranging from 0.1 to 0.5 with a granularity of 0.1.

Dropout at Individual Layers

We first apply dropout at each layer, including the embedding layer. Although the embedding layer is the largest layer, it is often not regularized for many language applications [8]. However, we observe the benefit of regularizing it. For SNLI, the highest accuracy is achieved when the embedding layer is regularized (Model 2, DR 0.4). For SciTail, the highest accuracy is obtained when the recurrent layer is regularized (Model 3, DR 0.1). The dropout-injected noise at the lower layers prevents the higher fully connected layers from overfitting. We further experimented with regularizing the higher fully connected layers (intra-attention, inter-attention, MLP) individually; however, no significant performance gains were observed.

Dropout at Multiple Layers

We next explore the effect of applying dropout at multiple layers.
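Stepping back to the model definition, a numpy sketch of the attention computations in Equations (1)-(6) may be useful. The paper's notation leaves some dimension conventions ambiguous, so this sketch assumes the BiLSTM hidden vectors are the columns of $Y$ and keeps per-word columns after intra-attention by scaling $Y$ with the attention weights (rather than summing them into a single vector), since the inter-attention step requires word-level representations; these conventions are our assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def intra_attention(Y, Wy, Wh, w):
    """Equations (1)-(3): Y ((d, L)) holds the BiLSTM hidden vectors as columns."""
    L = Y.shape[1]
    R_avg = Y.mean(axis=1, keepdims=True)                  # average pooling of Y
    M = np.tanh(Wy @ Y + (Wh @ R_avg) @ np.ones((1, L)))   # R_avg ⊗ e_L term
    alpha = softmax(w @ M)                                 # attention weights over words
    return Y * alpha                                       # attention-weighted sequence (d, L)

def inter_attention(Rp, Rh):
    """Equations (4)-(6) on attention-weighted sequences Rp ((d, Lp)), Rh ((d, Lh))."""
    Iv = Rp.T @ Rh                                         # interaction matrix (Lp, Lh)
    Rp_tilde = Rh @ softmax(Iv, axis=1).T                  # Rh content aligned to Rp words
    Rh_tilde = Rp @ softmax(Iv, axis=0)                    # Rp content aligned to Rh words
    return Rp_tilde, Rh_tilde
```

Equations (7)-(8) then combine the two views element-wise, e.g. Fp = Rp_tilde * Rp.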
For SNLI and SciTail, the models achieve higher performance when dropout is applied to the embedding and recurrent layers (Model 4, DR 0.2). This supports the importance of regularizing the embedding and recurrent layers, as shown for the individual layers. It is interesting to note that regularizing the recurrent layer helps SciTail (Model 7, DR 0.2), whereas regularizing the embedding layer helps SNLI (Model 8, DR 0.2). A possible explanation is that for the smaller SciTail dataset the model cannot afford to lose information in the input, whereas for the larger SNLI dataset the model has a chance to learn even with the loss of information in the input. Also, the results from models 7 and 8 suggest that applying dropout at a single lower layer (embedding or recurrent, depending on the amount of training data) and to the inputs and outputs of the MLP layer improves performance. We can infer from models 9, 10, 11, and 12 that applying dropout to each feed-forward connection helps prevent the model from overfitting on SciTail (DR 0.1 and 0.2). However, for both datasets and for different dropout locations, the performance of the model decreases as the dropout rate increases (Section 4.4).

The Effectiveness of Dropout for Overfitting

We study the efficacy of dropout against overfitting. The main results are shown in Fig. 2. For SNLI, Figs. 2(a)-(b) show the convergence curves for the baseline model and the model achieving the highest accuracy (Model 2, DR 0.4). The convergence curves show that dropout is very effective in preventing overfitting. However, for the smaller SciTail dataset, when regularizing multiple layers we observe that the highest-accuracy model (Model 9, DR 0.2) overfits significantly (Fig. 2(d)). This overfitting is due to the large model size. With the limited training data of SciTail, our model with a higher number of hidden units learns the relationships between the premise and the hypothesis most accurately (Fig. 2(d)). However, these relationships are not representative of the validation set data, and thus the model does not generalize well. When we reduced the model size (50, 100, and 200 hidden units), we achieved the best accuracy for SciTail at 100 hidden units (Table 3). The convergence curve (Fig. 2(c)) shows that dropout effectively prevents overfitting in the model with 100 hidden units in comparison to 300 units. The results of this experiment suggest that, given the high learning capacity of RNNs, an appropriate model size selection according to the amount of training data is essential. Dropout may independently be insufficient to prevent overfitting in such scenarios.

Dropout Rate Effect on Accuracy and Dropout Location

We next investigate the effect of varying dropout rates on the accuracy of the models and on the various dropout locations. Fig. 3 illustrates the varying dropout rates and the corresponding test accuracy for SNLI. We observe some distinct trends from the plot. First, the dropout rate and location do not affect the accuracy of models 2 and 8 over the baseline. Second, in the dropout range [0.2-0.5], the dropout locations affect the accuracy of the models significantly. Increasing the dropout rate from 0.2 to 0.5 decreases the accuracy of models 5 and 12 significantly, by 21.3% and 15.9% respectively. For most of the models (3, 4, 6, 7, 9, and 10) a dropout rate of 0.5 decreases accuracy. From the experiments on the SciTail dataset (Fig.
4), we observed that the dropout rate and its location do not have a significant effect on most of the models, with the exception of model 8 (which shows erratic performance). Finally, for almost all the experiments, a large dropout rate (0.5) decreases the accuracy of the models. A dropout rate of 0.5 works for a wide range of neural networks and tasks [19]. However, our results show that this is not desirable for RNN models of NLI. Based on our evaluations, a dropout range of [0.2-0.4] is advised.

Recommendations for Dropout Application

Based on our empirical evaluations, the following is recommended for regularizing an RNN model for the NLI task: (1) The embedding layer should be regularized for large datasets like SNLI. For smaller datasets such as SciTail, regularizing the recurrent layer is an efficient option. The dropout-injected noise at these layers prevents the higher fully connected layers from overfitting. (2) When regularizing multiple layers, regularizing a lower layer (embedding or recurrent, depending on the amount of data) along with the inputs and outputs of the MLP layer should be considered. The performance of our model decreased when dropout was applied at each intermediate feed-forward connection. (3) When dropout is applied at multiple feed-forward connections, it is almost always better to apply it at a lower rate, in the range [0.2-0.4]. (4) Given the high learning capacity of RNNs, an appropriate model size selection according to the amount of training data is essential. Dropout alone may be insufficient to prevent overfitting otherwise.

Conclusions

In this paper, we reported the outcome of experiments conducted to investigate the effect of applying dropout at different layers in an RNN model for the NLI task. Based on our empirical evaluations, we recommended probable locations for dropout to gain high performance on the NLI task. Through extensive exploration of dropout locations in our model, we achieved accuracies of 86.14% on SNLI and 77.05% on SciTail. In future research, we aim to investigate the effect of different dropout rates at distinct layers.
2,046
1810.08606
2894886314
Dropout is a crucial regularization technique for Recurrent Neural Network (RNN) models of Natural Language Inference (NLI). However, the effectiveness of dropout at different layers and with different dropout rates has not been evaluated in NLI models. In this paper, we propose a novel RNN model for NLI and empirically evaluate the effect of applying dropout at different layers in the model. We also investigate the impact of varying dropout rates at these layers. Our empirical evaluation on a large (Stanford Natural Language Inference (SNLI)) and a small (SciTail) dataset suggests that dropout at each feed-forward connection severely affects the model accuracy at increasing dropout rates. We also show that regularizing the embedding layer is efficient for SNLI, whereas regularizing the recurrent layer improves the accuracy for SciTail. Our model achieved an accuracy of 86.14% on the SNLI dataset and 77.05% on SciTail.
Dropout rates are also crucial for NLI models @cite_12. Even models which apply dropout at the same locations vary their dropout rates.
{ "abstract": [ "Neural networks with recurrent or recursive architecture have shown promising results on various natural language processing (NLP) tasks. The recurrent and recursive architectures have their own strength and limitations. The recurrent networks process input text sequentially and model the conditional transition between word tokens. In contrast, the recursive networks explicitly model the compositionality and the recursive structure of natural language. Current recursive architecture is based on syntactic tree, thus limiting its practical applicability in different NLP applications. In this paper, we introduce a class of tree structured model, Neural Tree Indexers (NTI) that provides a middle ground between the sequential RNNs and the syntactic tree-based recursive models. NTI constructs a full n-ary tree by processing the input text with its node function in a bottom-up fashion. Attention mechanism can then be applied to both structure and different forms of node function. We demonstrated the effectiveness and the flexibility of a binary-tree model of NTI, showing the model achieved the state-of-the-art performance on three different NLP tasks: natural language inference, answer sentence selection, and sentence classification." ], "cite_N": [ "@cite_12" ], "mid": [ "2496570145" ] }
An Exploration of Dropout with RNNs for Natural Language Inference
Natural Language Understanding (NLU) is the process of enabling computers to understand the semantics of natural language text. The inherent complexities and ambiguities of natural language text make NLU challenging for computers. Natural Language Inference (NLI) is a fundamental step towards NLU [14]. NLI involves logically inferring a hypothesis sentence from a given premise sentence. The recent release of a large public dataset, the Stanford Natural Language Inference (SNLI) corpus [2], has made it feasible to train complex neural network models for NLI. Recurrent Neural Networks (RNNs), particularly bidirectional LSTMs (BiLSTMs), have shown state-of-the-art results on the SNLI dataset [9]. However, RNNs are susceptible to overfitting, the case when a neural network learns the exact patterns present in the training data but fails to generalize to unseen data [21]. In NLI models, regularization techniques such as early stopping [4], L2 regularization, and dropout [20] are used to prevent overfitting. For RNNs, dropout is an effective regularization technique [21]. The idea of dropout is to randomly omit computing units in a neural network during training but to keep all of them for testing. Dropout consists of element-wise multiplication of the neural network layer activations with a zero-one mask ($r_j$) during training. Each element of the zero-one mask is drawn independently from $r_j \sim \mathrm{Bernoulli}(p)$, where $p$ is the probability with which the units are retained in the network. During testing, the activations of the layer are multiplied by $p$ [19]. Dropout is a crucial regularization technique for NLI [9] [20]. However, the location of dropout varies considerably between NLI models and is based on trial-and-error experiments with different locations in the network. To the best of our knowledge, no prior work has been performed to evaluate the effectiveness of dropout locations and rates in RNN NLI models. In this paper, we study the effect of applying dropout at different locations in an RNN model for NLI. We also investigate the effect of varying the dropout rate. Our results suggest that applying dropout at every feed-forward connection, especially at higher dropout rates, degrades the performance of the RNN. Our best model achieves an accuracy of 86.14% on the SNLI dataset and an accuracy of 77.05% on the SciTail dataset. To the best of our knowledge, this research is the first exploratory analysis of dropout for NLI. The main contributions of this paper are as follows: (1) an RNN model based on BiLSTMs for NLI; (2) a comparative analysis of different dropout locations and rates in the proposed RNN NLI model; (3) recommendations for the usage of dropout in RNN models for the NLI task. The layout of the paper is as follows. In Section 2, we describe related work. In Section 3, we discuss the proposed RNN-based NLI model. Experiments and results are presented in Section 4. Recommendations for the application of dropout are presented in Section 5. We conclude in Section 6.

Recurrent Neural Network Model for NLI Task

The proposed RNN NLI model follows the general architecture of NLI models and is depicted in Fig. 1. The model combines the intra-attention model [13] with the soft-attention mechanism [11].
The embedding layer takes word embeddings as input, and the BiLSTM layer produces the hidden output vectors over which intra-attention is computed: M = tanh(W_y Y + W_h (R_avg ⊗ e_L)) (1); α = softmax(w^T M) (2); R = Y α^T (3); where W_y and W_h are trained projection matrices, w^T is the transpose of the trained parameter vector w, Y is the matrix of hidden output vectors of the BiLSTM layer, R_avg is obtained from the average pooling of Y, e_L ∈ R^L is a vector of ones, α is a vector of attention weights, and R is the attention-weighted sequence representation. The attention-weighted sequence representation is generated for the premise and the hypothesis and is denoted R_p and R_h respectively. The attention-weighted representation gives more importance to the words which are important to the semantics of the sequence and also captures its global context. The interaction between R_p and R_h is performed by the inter-attention layer, following Equations (4)-(6): I_v = R_p^T R_h (4); R̃_p = softmax(I_v) R_h (5); R̃_h = softmax(I_v) R_p (6); where I_v is the interaction vector. R̃_p contains the words which are relevant based on the content of sequence R_h. Similarly, R̃_h contains words which are important with respect to the content of sequence R_p. The final sequence encoding is obtained from the element-wise multiplication of the intra-attention weighted representation and the inter-attention weighted representation as follows: F_p = R̃_p ⊙ R_p (7); F_h = R̃_h ⊙ R_h (8). To classify the relationship between the premise and the hypothesis, a relation vector is formed from the encodings of the premise and the hypothesis generated in Equations (7) and (8), as follows: v_p,avg = averagepooling(F_p), v_p,max = maxpooling(F_p), v_h,avg = averagepooling(F_h), v_h,max = maxpooling(F_h) (9); F_relation = [v_p,avg ; v_p,max ; v_h,avg ; v_h,max] (10); where each v is a vector of length L. The relation vector F_relation is fed to the MLP layer. The three-way softmax layer outputs the probability for each class of NLI. Experiments and Results Experimental Setup The standard train, validation and test splits of SNLI [2] and SciTail [10] are used. The Adam [12] optimizer is used, with the first momentum set to 0.9 and the second to 0.999. The word embeddings are initialized with pre-trained 300-D GloVe 840B vectors [17]. Extensive experiments with dropout locations and numbers of hidden units were conducted; however, we show only the best results for brevity and space limits. Table 1 presents the models with the different combinations of layers to whose outputs dropout is applied in our model depicted in Fig. 1. Table 2 shows the results for the models in Table 1. Each model is evaluated with dropout rates ranging from 0.1 to 0.5 with a granularity of 0.1. Dropout at Individual Layers We first apply dropout at each layer, including the embedding layer. Although the embedding layer is the largest layer, it is often not regularized for many language applications [8]. However, we observe the benefit of regularizing it. For SNLI, the highest accuracy is achieved when the embedding layer is regularized (Model 2, DR 0.4). For SciTail, the highest accuracy is obtained when the recurrent layer is regularized (Model 3, DR 0.1). The dropout-injected noise at lower layers prevents the higher fully connected layers from overfitting. We further experimented with regularizing the higher fully connected layers (intra-attention, inter-attention, MLP) individually; however, no significant performance gains were observed. Dropout at Multiple Layers We next explore the effect of applying dropout at multiple layers.
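Since the notation above compresses several steps, the following NumPy sketch spells out a dimension-consistent reading of Equations (1)-(10), with words stored as rows (the paper's column-based notation is transposed here); the weight shapes and function names are our own illustrative choices, not the authors' code:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def intra_attention(Y, Wy, Wh, w):
    # Eqs. (1)-(3): Y is (L, d), one BiLSTM output vector per word.
    L, d = Y.shape
    R_avg = Y.mean(axis=0, keepdims=True)                   # average pooling of Y, (1, d)
    M = np.tanh(Y @ Wy + np.repeat(R_avg @ Wh, L, axis=0))  # R_avg copied to every position (the e_L outer product)
    alpha = softmax(M @ w)                                  # (L,) attention weights
    return Y * alpha[:, None]                               # attention-weighted sequence, (L, d)

def inter_attention(Rp, Rh):
    # Eqs. (4)-(6): soft alignment between the two sequences.
    Iv = Rp @ Rh.T                                 # interaction matrix, (Lp, Lh)
    Rp_t = softmax(Iv, axis=1) @ Rh                # hypothesis content relevant to each premise word
    Rh_t = softmax(Iv.T, axis=1) @ Rp              # premise content relevant to each hypothesis word
    return Rp_t, Rh_t

def relation_vector(Yp, Yh, Wy, Wh, w):
    Rp, Rh = intra_attention(Yp, Wy, Wh, w), intra_attention(Yh, Wy, Wh, w)
    Rp_t, Rh_t = inter_attention(Rp, Rh)
    Fp, Fh = Rp_t * Rp, Rh_t * Rh                  # Eqs. (7)-(8), element-wise
    pool = lambda F: np.concatenate([F.mean(axis=0), F.max(axis=0)])
    return np.concatenate([pool(Fp), pool(Fh)])    # Eqs. (9)-(10), input to the MLP
```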
For SNLI and SciTail, the models achieve higher performance when dropout is applied to the embedding and recurrent layers (Model 4, DR 0.2). This supports the importance of regularizing the embedding and recurrent layers, as shown for the individual layers. It is interesting to note that regularizing the recurrent layer helps SciTail (Model 7, DR 0.2), whereas regularizing the embedding layer helps SNLI (Model 8, DR 0.2). A possible explanation is that for the smaller SciTail dataset the model cannot afford to lose information in the input, whereas for the larger SNLI dataset the model has a chance to learn even with some loss of information in the input. Also, the results from models 7 and 8 suggest that applying dropout at a single lower layer (embedding or recurrent, depending on the amount of training data) and to the inputs and outputs of the MLP layer improves performance. We can infer from models 9, 10, 11 and 12 that applying dropout to each feed-forward connection helps prevent the model from overfitting on SciTail (DR 0.1 and 0.2). However, for both datasets and across dropout locations, the performance of the model decreases as the dropout rate increases (Section 4.4). The Effectiveness of Dropout for Overfitting We study the efficacy of dropout against overfitting. The main results are shown in Fig. 2. For SNLI, Fig. 2 (a)-(b) shows the convergence curves for the baseline model and the model achieving the highest accuracy (Model 2, DR 0.4). The convergence curves show that dropout is very effective in preventing overfitting. However, for the smaller SciTail dataset, when regularizing multiple layers we observe that the model achieving the highest accuracy (Model 9, DR 0.2) overfits significantly (Fig. 2(d)). This overfitting is due to the large model size. With the limited training data of SciTail, our model with a higher number of hidden units learns the relationship between the premise and the hypothesis most accurately (Fig. 2(d)). However, these relationships are not representative of the validation set data, and thus the model does not generalize well. When we reduced the model size (50, 100 and 200 hidden units), we achieved the best accuracy for SciTail at 100 hidden units (Table 3). The convergence curve (Fig. 2(c)) shows that dropout effectively prevents overfitting in the model with 100 hidden units in comparison to 300 units. Furthermore, for the SciTail dataset, the model with 100 hidden units generalizes better. The results of this experiment suggest that, given the high learning capacity of RNNs, an appropriate model size selection according to the amount of training data is essential. Dropout may independently be insufficient to prevent overfitting in such scenarios. Dropout Rate Effect on Accuracy and Dropout Location We next investigate the effect of varying dropout rates on the accuracy of the models and on various dropout locations. Fig. 3 illustrates varying dropout rates and the corresponding test accuracy for SNLI. We observe some distinct trends from the plot. First, the dropout rate and location do not affect the accuracy of models 2 and 8 relative to the baseline. Second, in the dropout range [0.2-0.5], the dropout locations affect the accuracy of the models significantly. Increasing the dropout rate from 0.2 to 0.5 decreases the accuracy of models 5 and 12 significantly, by 21.3% and 15.9% respectively. For most of the models (3, 4, 6, 7, 9 and 10), a dropout rate of 0.5 decreases accuracy.
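Since both the dropout location and the model size turn out to matter, a sweep over these knobs is easiest when they are constructor arguments. The PyTorch sketch below shows one possible way to wire this up; it is our illustration of the idea, not the paper's implementation, and the layer names and hyperparameters are ours:

```python
import torch
import torch.nn as nn

class NLIEncoder(nn.Module):
    """BiLSTM sentence encoder with configurable dropout locations,
    mirroring the kind of location/rate/size sweep reported above."""
    def __init__(self, vocab_size, emb_dim=300, hidden=300, rate=0.2,
                 drop_at=("embedding", "recurrent")):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.drop = nn.Dropout(rate)
        self.drop_at = set(drop_at)

    def forward(self, tokens):
        x = self.embed(tokens)
        if "embedding" in self.drop_at:
            x = self.drop(x)          # dropout on embedding outputs
        y, _ = self.bilstm(x)
        if "recurrent" in self.drop_at:
            y = self.drop(y)          # dropout on BiLSTM outputs (a non-recurrent connection)
        return y

# e.g. the SciTail-friendly configuration suggested by the experiments above:
enc = NLIEncoder(vocab_size=50000, hidden=100, rate=0.2, drop_at=("recurrent",))
```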
From the experiments on the SciTail dataset (Fig. 4), we observed that the dropout rate and its location do not have a significant effect on most of the models, with the exception of model 8 (which shows erratic performance). Finally, for almost all the experiments, a large dropout rate (0.5) decreases the accuracy of the models. A dropout rate of 0.5 works for a wide range of neural networks and tasks [19]. However, our results show that this is not desirable for RNN models of NLI. Based on our evaluations, a dropout range of [0.2-0.4] is advised. Recommendations for Dropout Application Based on our empirical evaluations, the following is recommended for regularizing an RNN model for the NLI task: (1) The embedding layer should be regularized for large datasets like SNLI. For smaller datasets such as SciTail, regularizing the recurrent layer is an efficient option. The dropout-injected noise at these layers prevents the higher fully connected layers from overfitting. (2) When regularizing multiple layers, regularizing a lower layer (embedding or recurrent, depending on the amount of data) together with the inputs and outputs of the MLP layer should be considered. The performance of our model decreased when dropout was applied at each intermediate feed-forward connection. (3) When dropout is applied at multiple feed-forward connections, it is almost always better to apply it at a lower rate, in the range [0.2-0.4]. (4) Given the high learning capacity of RNNs, an appropriate model size selection according to the amount of training data is essential. Dropout may independently be insufficient to prevent overfitting otherwise. Conclusions In this paper, we reported the outcome of experiments conducted to investigate the effect of applying dropout at different layers in an RNN model for the NLI task. Based on our empirical evaluations, we recommended probable dropout locations for achieving high performance on the NLI task. Through extensive exploration of dropout locations in our model, we achieved accuracies of 86.14% on SNLI and 77.05% on SciTail. In future research, we aim to investigate the effect of different dropout rates at distinct layers.
2,046
1810.08606
2894886314
Dropout is a crucial regularization technique for the Recurrent Neural Network (RNN) models of Natural Language Inference (NLI). However, the effectiveness of dropout at different layers and dropout rates has not been evaluated in NLI models. In this paper, we propose a novel RNN model for NLI and empirically evaluate the effect of applying dropout at different layers in the model. We also investigate the impact of varying dropout rates at these layers. Our empirical evaluation on a large (Stanford Natural Language Inference (SNLI)) and a small (SciTail) dataset suggests that dropout at each feed-forward connection severely affects the model accuracy at increasing dropout rates. We also show that regularizing the embedding layer is efficient for SNLI, whereas regularizing the recurrent layer improves the accuracy for SciTail. Our model achieved an accuracy of 86.14% on the SNLI dataset and 77.05% on SciTail.
Previous research on dropout for RNNs in applications such as neural language models @cite_16 , handwriting recognition @cite_14 and machine translation @cite_19 has established that dropout should not be applied to the recurrent connections of RNNs, as it affects the long-term dependencies in sequential data.
{ "abstract": [ "We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation.", "Recurrent neural networks (RNNs) with Long Short-Term memory cells currently hold the best known results in unconstrained handwriting recognition. We show that their performance can be greatly improved using dropout - a recently proposed regularization method for deep architectures. While previous works showed that dropout gave superior performance in the context of convolutional networks, it had never been applied to RNNs. In our approach, dropout is carefully used in the network so that it does not affect the recurrent connections, hence the power of RNNs in modeling sequences is preserved. Extensive experiments on a broad range of handwritten databases confirm the effectiveness of dropout on deep architectures even when the network mainly consists of recurrent and shared connections.", "Neural language models (LMs) based on recurrent neural networks (RNN) are some of the most successful word and character-level LMs. Why do they work so well, in particular better than linear neural LMs? Possible explanations are that RNNs have an implicitly better regularization or that RNNs have a higher capacity for storing patterns due to their nonlinearities or both. Here we argue for the first explanation in the limit of little training data and the second explanation for large amounts of text data. We show state-of-the-art performance on the popular and small Penn dataset when RNN LMs are regularized with random dropout. Nonetheless, we show even better performance from a simplified, much less expressive linear RNN model without off-diagonal entries in the recurrent matrix. We call this model an impulse-response LM (IRLM). Using random dropout, column normalization and annealed learning rates, IRLMs develop neurons that keep a memory of up to 50 words in the past and achieve a perplexity of 102.5 on the Penn dataset. On two large datasets however, the same regularization methods are unsuccessful for both models and the RNN's expressivity allows it to overtake the IRLM by 10 and 20 percent perplexity, respectively. Despite the perplexity gap, IRLMs still outperform RNNs on the Microsoft Research Sentence Completion (MRSC) task. We develop a slightly modified IRLM that separates long-context units (LCUs) from short-context units and show that the LCUs alone achieve a state-of-the-art performance on the MRSC task of 60.8 . Our analysis indicates that a fruitful direction of research for neural LMs lies in developing more accessible internal representations, and suggests an optimization regime of very high momentum terms for effectively training such models." ], "cite_N": [ "@cite_19", "@cite_14", "@cite_16" ], "mid": [ "1591801644", "2964325005", "1836307405" ] }
An Exploration of Dropout with RNNs for Natural Language Inference
Natural Language Understanding (NLU) is the process of enabling computers to understand the semantics of natural language text. The inherent complexities and ambiguities in natural language text make NLU challenging for computers. Natural Language Inference (NLI) is a fundamental step towards NLU [14]. NLI involves logically inferring a hypothesis sentence from a given premise sentence. The recent release of a large public dataset, the Stanford Natural Language Inference (SNLI) corpus [2], has made it feasible to train complex neural network models for NLI. Recurrent Neural Networks (RNNs), particularly bidirectional LSTMs (BiLSTMs), have shown state-of-the-art results on the SNLI dataset [9]. However, RNNs are susceptible to overfitting, the case in which a neural network learns the exact patterns present in the training data but fails to generalize to unseen data [21]. In NLI models, regularization techniques such as early stopping [4], L2 regularization and dropout [20] are used to prevent overfitting. For RNNs, dropout is an effective regularization technique [21]. The idea of dropout is to randomly omit computing units in a neural network during training but to keep all of them for testing. Dropout consists of element-wise multiplication of the neural network layer activations with a zero-one mask (r_j) during training. Each element of the zero-one mask is drawn independently from r_j ~ Bernoulli(p), where p is the probability with which the units are retained in the network. During testing, activations of the layer are multiplied by p [19]. Dropout is a crucial regularization technique for NLI [9] [20]. However, the location of dropout varies considerably between NLI models and is based on trial-and-error experiments with different locations in the network. To the best of our knowledge, no prior work has been performed to evaluate the effectiveness of dropout locations and rates in RNN NLI models. In this paper, we study the effect of applying dropout at different locations in an RNN model for NLI. We also investigate the effect of varying the dropout rate. Our results suggest that applying dropout at every feed-forward connection, especially at higher dropout rates, degrades the performance of the RNN. Our best model achieves an accuracy of 86.14% on the SNLI dataset and an accuracy of 77.05% on the SciTail dataset. To the best of our knowledge, this research is the first exploratory analysis of dropout for NLI. The main contributions of this paper are as follows: (1) An RNN model based on BiLSTMs for NLI. (2) A comparative analysis of different dropout locations and rates in the proposed RNN NLI model. (3) Recommendations for the usage of dropout in RNN models for the NLI task. The layout of the paper is as follows. In Section 2, we describe the related work. In Section 3, we discuss the proposed RNN-based NLI model. Experiments and the results are presented in Section 4. Recommendations for the application of dropout are presented in Section 5. We conclude in Section 6. Recurrent Neural Network Model for NLI Task The proposed RNN NLI model follows the general architecture of NLI models and is depicted in Fig. 1. The model combines the intra-attention model [13] with the soft-attention mechanism [11].
The embedding layer takes word embeddings as input, and the BiLSTM layer produces the hidden output vectors over which intra-attention is computed: M = tanh(W_y Y + W_h (R_avg ⊗ e_L)) (1); α = softmax(w^T M) (2); R = Y α^T (3); where W_y and W_h are trained projection matrices, w^T is the transpose of the trained parameter vector w, Y is the matrix of hidden output vectors of the BiLSTM layer, R_avg is obtained from the average pooling of Y, e_L ∈ R^L is a vector of ones, α is a vector of attention weights, and R is the attention-weighted sequence representation. The attention-weighted sequence representation is generated for the premise and the hypothesis and is denoted R_p and R_h respectively. The attention-weighted representation gives more importance to the words which are important to the semantics of the sequence and also captures its global context. The interaction between R_p and R_h is performed by the inter-attention layer, following Equations (4)-(6): I_v = R_p^T R_h (4); R̃_p = softmax(I_v) R_h (5); R̃_h = softmax(I_v) R_p (6); where I_v is the interaction vector. R̃_p contains the words which are relevant based on the content of sequence R_h. Similarly, R̃_h contains words which are important with respect to the content of sequence R_p. The final sequence encoding is obtained from the element-wise multiplication of the intra-attention weighted representation and the inter-attention weighted representation as follows: F_p = R̃_p ⊙ R_p (7); F_h = R̃_h ⊙ R_h (8). To classify the relationship between the premise and the hypothesis, a relation vector is formed from the encodings of the premise and the hypothesis generated in Equations (7) and (8), as follows: v_p,avg = averagepooling(F_p), v_p,max = maxpooling(F_p), v_h,avg = averagepooling(F_h), v_h,max = maxpooling(F_h) (9); F_relation = [v_p,avg ; v_p,max ; v_h,avg ; v_h,max] (10); where each v is a vector of length L. The relation vector F_relation is fed to the MLP layer. The three-way softmax layer outputs the probability for each class of NLI. Experiments and Results Experimental Setup The standard train, validation and test splits of SNLI [2] and SciTail [10] are used. The Adam [12] optimizer is used, with the first momentum set to 0.9 and the second to 0.999. The word embeddings are initialized with pre-trained 300-D GloVe 840B vectors [17]. Extensive experiments with dropout locations and numbers of hidden units were conducted; however, we show only the best results for brevity and space limits. Table 1 presents the models with the different combinations of layers to whose outputs dropout is applied in our model depicted in Fig. 1. Table 2 shows the results for the models in Table 1. Each model is evaluated with dropout rates ranging from 0.1 to 0.5 with a granularity of 0.1. Dropout at Individual Layers We first apply dropout at each layer, including the embedding layer. Although the embedding layer is the largest layer, it is often not regularized for many language applications [8]. However, we observe the benefit of regularizing it. For SNLI, the highest accuracy is achieved when the embedding layer is regularized (Model 2, DR 0.4). For SciTail, the highest accuracy is obtained when the recurrent layer is regularized (Model 3, DR 0.1). The dropout-injected noise at lower layers prevents the higher fully connected layers from overfitting. We further experimented with regularizing the higher fully connected layers (intra-attention, inter-attention, MLP) individually; however, no significant performance gains were observed. Dropout at Multiple Layers We next explore the effect of applying dropout at multiple layers.
For SNLI and SciTail, the models achieve higher performance when dropout is applied to the embedding and recurrent layers (Model 4, DR 0.2). This supports the importance of regularizing the embedding and recurrent layers, as shown for the individual layers. It is interesting to note that regularizing the recurrent layer helps SciTail (Model 7, DR 0.2), whereas regularizing the embedding layer helps SNLI (Model 8, DR 0.2). A possible explanation is that for the smaller SciTail dataset the model cannot afford to lose information in the input, whereas for the larger SNLI dataset the model has a chance to learn even with some loss of information in the input. Also, the results from models 7 and 8 suggest that applying dropout at a single lower layer (embedding or recurrent, depending on the amount of training data) and to the inputs and outputs of the MLP layer improves performance. We can infer from models 9, 10, 11 and 12 that applying dropout to each feed-forward connection helps prevent the model from overfitting on SciTail (DR 0.1 and 0.2). However, for both datasets and across dropout locations, the performance of the model decreases as the dropout rate increases (Section 4.4). The Effectiveness of Dropout for Overfitting We study the efficacy of dropout against overfitting. The main results are shown in Fig. 2. For SNLI, Fig. 2 (a)-(b) shows the convergence curves for the baseline model and the model achieving the highest accuracy (Model 2, DR 0.4). The convergence curves show that dropout is very effective in preventing overfitting. However, for the smaller SciTail dataset, when regularizing multiple layers we observe that the model achieving the highest accuracy (Model 9, DR 0.2) overfits significantly (Fig. 2(d)). This overfitting is due to the large model size. With the limited training data of SciTail, our model with a higher number of hidden units learns the relationship between the premise and the hypothesis most accurately (Fig. 2(d)). However, these relationships are not representative of the validation set data, and thus the model does not generalize well. When we reduced the model size (50, 100 and 200 hidden units), we achieved the best accuracy for SciTail at 100 hidden units (Table 3). The convergence curve (Fig. 2(c)) shows that dropout effectively prevents overfitting in the model with 100 hidden units in comparison to 300 units. Furthermore, for the SciTail dataset, the model with 100 hidden units generalizes better. The results of this experiment suggest that, given the high learning capacity of RNNs, an appropriate model size selection according to the amount of training data is essential. Dropout may independently be insufficient to prevent overfitting in such scenarios. Dropout Rate Effect on Accuracy and Dropout Location We next investigate the effect of varying dropout rates on the accuracy of the models and on various dropout locations. Fig. 3 illustrates varying dropout rates and the corresponding test accuracy for SNLI. We observe some distinct trends from the plot. First, the dropout rate and location do not affect the accuracy of models 2 and 8 relative to the baseline. Second, in the dropout range [0.2-0.5], the dropout locations affect the accuracy of the models significantly. Increasing the dropout rate from 0.2 to 0.5 decreases the accuracy of models 5 and 12 significantly, by 21.3% and 15.9% respectively. For most of the models (3, 4, 6, 7, 9 and 10), a dropout rate of 0.5 decreases accuracy.
From the experiments on the SciTail dataset (Fig. 4), we observed that the dropout rate and its location do not have a significant effect on most of the models, with the exception of model 8 (which shows erratic performance). Finally, for almost all the experiments, a large dropout rate (0.5) decreases the accuracy of the models. A dropout rate of 0.5 works for a wide range of neural networks and tasks [19]. However, our results show that this is not desirable for RNN models of NLI. Based on our evaluations, a dropout range of [0.2-0.4] is advised. Recommendations for Dropout Application Based on our empirical evaluations, the following is recommended for regularizing an RNN model for the NLI task: (1) The embedding layer should be regularized for large datasets like SNLI. For smaller datasets such as SciTail, regularizing the recurrent layer is an efficient option. The dropout-injected noise at these layers prevents the higher fully connected layers from overfitting. (2) When regularizing multiple layers, regularizing a lower layer (embedding or recurrent, depending on the amount of data) together with the inputs and outputs of the MLP layer should be considered. The performance of our model decreased when dropout was applied at each intermediate feed-forward connection. (3) When dropout is applied at multiple feed-forward connections, it is almost always better to apply it at a lower rate, in the range [0.2-0.4]. (4) Given the high learning capacity of RNNs, an appropriate model size selection according to the amount of training data is essential. Dropout may independently be insufficient to prevent overfitting otherwise. Conclusions In this paper, we reported the outcome of experiments conducted to investigate the effect of applying dropout at different layers in an RNN model for the NLI task. Based on our empirical evaluations, we recommended probable dropout locations for achieving high performance on the NLI task. Through extensive exploration of dropout locations in our model, we achieved accuracies of 86.14% on SNLI and 77.05% on SciTail. In future research, we aim to investigate the effect of different dropout rates at distinct layers.
2,046
1810.08606
2894886314
Dropout is a crucial regularization technique for the Recurrent Neural Network (RNN) models of Natural Language Inference (NLI). However, the effectiveness of dropout at different layers and dropout rates has not been evaluated in NLI models. In this paper, we propose a novel RNN model for NLI and empirically evaluate the effect of applying dropout at different layers in the model. We also investigate the impact of varying dropout rates at these layers. Our empirical evaluation on a large (Stanford Natural Language Inference (SNLI)) and a small (SciTail) dataset suggests that dropout at each feed-forward connection severely affects the model accuracy at increasing dropout rates. We also show that regularizing the embedding layer is efficient for SNLI, whereas regularizing the recurrent layer improves the accuracy for SciTail. Our model achieved an accuracy of 86.14% on the SNLI dataset and 77.05% on SciTail.
@cite_4 studied dropout at different positions with respect to the LSTM units in the network proposed by @cite_14 for handwriting recognition. The results show that a significant performance difference is observed when dropout is applied at distinct positions. They concluded that applying dropout only after recurrent layers (as applied by @cite_14 ) or between every feed-forward layer (as done by @cite_19 ) does not always yield good results. @cite_10 investigated the effect of applying dropout in LSTMs. They randomly switch off the outputs of various gates of the LSTM, achieving an optimal word error rate when dropout is applied to the output, forget and input gates of the LSTM.
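One way to read the gate-level variant described last is sketched below: dropout is applied to the outputs of selected LSTM gates inside a manually unrolled cell. This is a loose, illustrative reconstruction of the idea under our own names and shapes, not the cited authors' code:

```python
import torch
import torch.nn.functional as F

def lstm_step_gate_dropout(x, h, c, W, U, b, p=0.3,
                           drop_gates=("i", "f", "o"), training=True):
    # Pre-activations for all four gates at once; hidden size H, so 4*H outputs.
    z = x @ W + h @ U + b
    i, f, g, o = z.chunk(4, dim=-1)
    i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
    g = torch.tanh(g)
    # Randomly switch off the outputs of the selected gates only.
    if "i" in drop_gates: i = F.dropout(i, p=p, training=training)
    if "f" in drop_gates: f = F.dropout(f, p=p, training=training)
    if "o" in drop_gates: o = F.dropout(o, p=p, training=training)
    c_new = f * c + i * g          # cell state update
    h_new = o * torch.tanh(c_new)  # hidden state update
    return h_new, c_new
```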
{ "abstract": [ "We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation.", "Recurrent neural networks (RNNs) with Long Short-Term memory cells currently hold the best known results in unconstrained handwriting recognition. We show that their performance can be greatly improved using dropout - a recently proposed regularization method for deep architectures. While previous works showed that dropout gave superior performance in the context of convolutional networks, it had never been applied to RNNs. In our approach, dropout is carefully used in the network so that it does not affect the recurrent connections, hence the power of RNNs in modeling sequences is preserved. Extensive experiments on a broad range of handwritten databases confirm the effectiveness of dropout on deep architectures even when the network mainly consists of recurrent and shared connections.", "The dropout technique is a data-driven regularization method for neural networks. It consists in randomly setting some activations from a given hidden layer to zero during training. Repeating the procedure for each training example, it is equivalent to sample a network from an exponential number of architectures that share weights. The goal of dropout is to prevent feature detectors to rely on each other. Dropout has successfully been applied to Deep MLPs and to convolutional neural networks, for various tasks of Speech Recognition and Computer Vision. We recently proposed a way to use dropout in MDLSTM-RNNs for handwritten word and line recognition. In this paper, we show that further improvement can be achieved by implementing dropout differently, more specifically by applying it at better positions relative to the LSTM units.", "" ], "cite_N": [ "@cite_19", "@cite_14", "@cite_4", "@cite_10" ], "mid": [ "1591801644", "2964325005", "2162456913", "2747135936" ] }
An Exploration of Dropout with RNNs for Natural Language Inference
Natural Language Understanding (NLU) is the process of enabling computers to understand the semantics of natural language text. The inherent complexities and ambiguities in natural language text make NLU challenging for computers. Natural Language Inference (NLI) is a fundamental step towards NLU [14]. NLI involves logically inferring a hypothesis sentence from a given premise sentence. The recent release of a large public dataset, the Stanford Natural Language Inference (SNLI) corpus [2], has made it feasible to train complex neural network models for NLI. Recurrent Neural Networks (RNNs), particularly bidirectional LSTMs (BiLSTMs), have shown state-of-the-art results on the SNLI dataset [9]. However, RNNs are susceptible to overfitting, the case in which a neural network learns the exact patterns present in the training data but fails to generalize to unseen data [21]. In NLI models, regularization techniques such as early stopping [4], L2 regularization and dropout [20] are used to prevent overfitting. For RNNs, dropout is an effective regularization technique [21]. The idea of dropout is to randomly omit computing units in a neural network during training but to keep all of them for testing. Dropout consists of element-wise multiplication of the neural network layer activations with a zero-one mask (r_j) during training. Each element of the zero-one mask is drawn independently from r_j ~ Bernoulli(p), where p is the probability with which the units are retained in the network. During testing, activations of the layer are multiplied by p [19]. Dropout is a crucial regularization technique for NLI [9] [20]. However, the location of dropout varies considerably between NLI models and is based on trial-and-error experiments with different locations in the network. To the best of our knowledge, no prior work has been performed to evaluate the effectiveness of dropout locations and rates in RNN NLI models. In this paper, we study the effect of applying dropout at different locations in an RNN model for NLI. We also investigate the effect of varying the dropout rate. Our results suggest that applying dropout at every feed-forward connection, especially at higher dropout rates, degrades the performance of the RNN. Our best model achieves an accuracy of 86.14% on the SNLI dataset and an accuracy of 77.05% on the SciTail dataset. To the best of our knowledge, this research is the first exploratory analysis of dropout for NLI. The main contributions of this paper are as follows: (1) An RNN model based on BiLSTMs for NLI. (2) A comparative analysis of different dropout locations and rates in the proposed RNN NLI model. (3) Recommendations for the usage of dropout in RNN models for the NLI task. The layout of the paper is as follows. In Section 2, we describe the related work. In Section 3, we discuss the proposed RNN-based NLI model. Experiments and the results are presented in Section 4. Recommendations for the application of dropout are presented in Section 5. We conclude in Section 6. Recurrent Neural Network Model for NLI Task The proposed RNN NLI model follows the general architecture of NLI models and is depicted in Fig. 1. The model combines the intra-attention model [13] with the soft-attention mechanism [11].
The embedding layer takes word embeddings as input, and the BiLSTM layer produces the hidden output vectors over which intra-attention is computed: M = tanh(W_y Y + W_h (R_avg ⊗ e_L)) (1); α = softmax(w^T M) (2); R = Y α^T (3); where W_y and W_h are trained projection matrices, w^T is the transpose of the trained parameter vector w, Y is the matrix of hidden output vectors of the BiLSTM layer, R_avg is obtained from the average pooling of Y, e_L ∈ R^L is a vector of ones, α is a vector of attention weights, and R is the attention-weighted sequence representation. The attention-weighted sequence representation is generated for the premise and the hypothesis and is denoted R_p and R_h respectively. The attention-weighted representation gives more importance to the words which are important to the semantics of the sequence and also captures its global context. The interaction between R_p and R_h is performed by the inter-attention layer, following Equations (4)-(6): I_v = R_p^T R_h (4); R̃_p = softmax(I_v) R_h (5); R̃_h = softmax(I_v) R_p (6); where I_v is the interaction vector. R̃_p contains the words which are relevant based on the content of sequence R_h. Similarly, R̃_h contains words which are important with respect to the content of sequence R_p. The final sequence encoding is obtained from the element-wise multiplication of the intra-attention weighted representation and the inter-attention weighted representation as follows: F_p = R̃_p ⊙ R_p (7); F_h = R̃_h ⊙ R_h (8). To classify the relationship between the premise and the hypothesis, a relation vector is formed from the encodings of the premise and the hypothesis generated in Equations (7) and (8), as follows: v_p,avg = averagepooling(F_p), v_p,max = maxpooling(F_p), v_h,avg = averagepooling(F_h), v_h,max = maxpooling(F_h) (9); F_relation = [v_p,avg ; v_p,max ; v_h,avg ; v_h,max] (10); where each v is a vector of length L. The relation vector F_relation is fed to the MLP layer. The three-way softmax layer outputs the probability for each class of NLI. Experiments and Results Experimental Setup The standard train, validation and test splits of SNLI [2] and SciTail [10] are used. The Adam [12] optimizer is used, with the first momentum set to 0.9 and the second to 0.999. The word embeddings are initialized with pre-trained 300-D GloVe 840B vectors [17]. Extensive experiments with dropout locations and numbers of hidden units were conducted; however, we show only the best results for brevity and space limits. Table 1 presents the models with the different combinations of layers to whose outputs dropout is applied in our model depicted in Fig. 1. Table 2 shows the results for the models in Table 1. Each model is evaluated with dropout rates ranging from 0.1 to 0.5 with a granularity of 0.1. Dropout at Individual Layers We first apply dropout at each layer, including the embedding layer. Although the embedding layer is the largest layer, it is often not regularized for many language applications [8]. However, we observe the benefit of regularizing it. For SNLI, the highest accuracy is achieved when the embedding layer is regularized (Model 2, DR 0.4). For SciTail, the highest accuracy is obtained when the recurrent layer is regularized (Model 3, DR 0.1). The dropout-injected noise at lower layers prevents the higher fully connected layers from overfitting. We further experimented with regularizing the higher fully connected layers (intra-attention, inter-attention, MLP) individually; however, no significant performance gains were observed. Dropout at Multiple Layers We next explore the effect of applying dropout at multiple layers.
For SNLI and SciTail, the models achieve higher performance when dropout is applied to the embedding and recurrent layers (Model 4, DR 0.2). This supports the importance of regularizing the embedding and recurrent layers, as shown for the individual layers. It is interesting to note that regularizing the recurrent layer helps SciTail (Model 7, DR 0.2), whereas regularizing the embedding layer helps SNLI (Model 8, DR 0.2). A possible explanation is that for the smaller SciTail dataset the model cannot afford to lose information in the input, whereas for the larger SNLI dataset the model has a chance to learn even with some loss of information in the input. Also, the results from models 7 and 8 suggest that applying dropout at a single lower layer (embedding or recurrent, depending on the amount of training data) and to the inputs and outputs of the MLP layer improves performance. We can infer from models 9, 10, 11 and 12 that applying dropout to each feed-forward connection helps prevent the model from overfitting on SciTail (DR 0.1 and 0.2). However, for both datasets and across dropout locations, the performance of the model decreases as the dropout rate increases (Section 4.4). The Effectiveness of Dropout for Overfitting We study the efficacy of dropout against overfitting. The main results are shown in Fig. 2. For SNLI, Fig. 2 (a)-(b) shows the convergence curves for the baseline model and the model achieving the highest accuracy (Model 2, DR 0.4). The convergence curves show that dropout is very effective in preventing overfitting. However, for the smaller SciTail dataset, when regularizing multiple layers we observe that the model achieving the highest accuracy (Model 9, DR 0.2) overfits significantly (Fig. 2(d)). This overfitting is due to the large model size. With the limited training data of SciTail, our model with a higher number of hidden units learns the relationship between the premise and the hypothesis most accurately (Fig. 2(d)). However, these relationships are not representative of the validation set data, and thus the model does not generalize well. When we reduced the model size (50, 100 and 200 hidden units), we achieved the best accuracy for SciTail at 100 hidden units (Table 3). The convergence curve (Fig. 2(c)) shows that dropout effectively prevents overfitting in the model with 100 hidden units in comparison to 300 units. Furthermore, for the SciTail dataset, the model with 100 hidden units generalizes better. The results of this experiment suggest that, given the high learning capacity of RNNs, an appropriate model size selection according to the amount of training data is essential. Dropout may independently be insufficient to prevent overfitting in such scenarios. Dropout Rate Effect on Accuracy and Dropout Location We next investigate the effect of varying dropout rates on the accuracy of the models and on various dropout locations. Fig. 3 illustrates varying dropout rates and the corresponding test accuracy for SNLI. We observe some distinct trends from the plot. First, the dropout rate and location do not affect the accuracy of models 2 and 8 relative to the baseline. Second, in the dropout range [0.2-0.5], the dropout locations affect the accuracy of the models significantly. Increasing the dropout rate from 0.2 to 0.5 decreases the accuracy of models 5 and 12 significantly, by 21.3% and 15.9% respectively. For most of the models (3, 4, 6, 7, 9 and 10), a dropout rate of 0.5 decreases accuracy.
From the experiments on the SciTail dataset (Fig. 4), we observed that the dropout rate and its location do not have a significant effect on most of the models, with the exception of model 8 (which shows erratic performance). Finally, for almost all the experiments, a large dropout rate (0.5) decreases the accuracy of the models. A dropout rate of 0.5 works for a wide range of neural networks and tasks [19]. However, our results show that this is not desirable for RNN models of NLI. Based on our evaluations, a dropout range of [0.2-0.4] is advised. Recommendations for Dropout Application Based on our empirical evaluations, the following is recommended for regularizing an RNN model for the NLI task: (1) The embedding layer should be regularized for large datasets like SNLI. For smaller datasets such as SciTail, regularizing the recurrent layer is an efficient option. The dropout-injected noise at these layers prevents the higher fully connected layers from overfitting. (2) When regularizing multiple layers, regularizing a lower layer (embedding or recurrent, depending on the amount of data) together with the inputs and outputs of the MLP layer should be considered. The performance of our model decreased when dropout was applied at each intermediate feed-forward connection. (3) When dropout is applied at multiple feed-forward connections, it is almost always better to apply it at a lower rate, in the range [0.2-0.4]. (4) Given the high learning capacity of RNNs, an appropriate model size selection according to the amount of training data is essential. Dropout may independently be insufficient to prevent overfitting otherwise. Conclusions In this paper, we reported the outcome of experiments conducted to investigate the effect of applying dropout at different layers in an RNN model for the NLI task. Based on our empirical evaluations, we recommended probable dropout locations for achieving high performance on the NLI task. Through extensive exploration of dropout locations in our model, we achieved accuracies of 86.14% on SNLI and 77.05% on SciTail. In future research, we aim to investigate the effect of different dropout rates at distinct layers.
2,046
1810.08606
2894886314
Dropout is a crucial regularization technique for the Recurrent Neural Network (RNN) models of Natural Language Inference (NLI). However, the effectiveness of dropout at different layers and dropout rates has not been evaluated in NLI models. In this paper, we propose a novel RNN model for NLI and empirically evaluate the effect of applying dropout at different layers in the model. We also investigate the impact of varying dropout rates at these layers. Our empirical evaluation on a large (Stanford Natural Language Inference (SNLI)) and a small (SciTail) dataset suggests that dropout at each feed-forward connection severely affects the model accuracy at increasing dropout rates. We also show that regularizing the embedding layer is efficient for SNLI, whereas regularizing the recurrent layer improves the accuracy for SciTail. Our model achieved an accuracy of 86.14% on the SNLI dataset and 77.05% on SciTail.
Evaluations in previous research were conducted on datasets with fewer samples. We evaluate the RNN model on the large SNLI dataset (570,000 data samples) as well as on the smaller SciTail dataset (27,000 data samples). Furthermore, previous studies concentrate only on the location of dropout in the network with a fixed dropout rate. We further investigate the effect of varying dropout rates. We focus on the application of the widely used conventional dropout @cite_18 to the non-recurrent connections in RNNs.
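As one concrete reference point for this convention: PyTorch's stacked LSTM exposes it directly, since its dropout argument affects only the outputs passed between stacked layers (a non-recurrent connection) and never the hidden-to-hidden transition within a layer. The hyperparameter values below are illustrative:

```python
import torch.nn as nn

# Dropout here is applied to the outputs of every LSTM layer except the last,
# i.e. only to non-recurrent, between-layer connections.
encoder = nn.LSTM(input_size=300, hidden_size=300, num_layers=2,
                  batch_first=True, bidirectional=True, dropout=0.2)
```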
{ "abstract": [ "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets." ], "cite_N": [ "@cite_18" ], "mid": [ "2095705004" ] }
An Exploration of Dropout with RNNs for Natural Language Inference
Natural Language Understanding (NLU) is the process to enable computers to understand the semantics of natural language text. The inherent complexities and ambiguities in natural language text make NLU challenging for computers. Natural Language Inference (NLI) is a fundamental step towards NLU [14]. NLI involves logically inferring a hypothesis sentence from a given premise sentence. The recent release of a large public dataset the Stanford Natural Language Inference (SNLI) [2] has made it feasible to train complex neural network models for NLI. Recurrent Neural Networks (RNNs), particularly bidirectional LSTMs (BiLSTMs) have shown state-of-the-art results on the SNLI dataset [9]. However, RNNs are susceptible to overfitting − the case when a neural network learns the exact patterns present in the training data but fails to generalize to unseen data [21]. In NLI models, regularization techniques such as early stopping [4], L2 regularization and dropout [20] are used to prevent overfitting. For RNNs, dropout is an effective regularization technique [21]. The idea of dropout is to randomly omit computing units in a neural network during training but to keep all of them for testing. Dropout consists of element-wise multiplication of the neural network layer activations with a zero-one mask (r j ) during training. Each element of the zero-one mask is drawn independently from r j ∼ Bernoulli(p), where p is the probability with which the units are retained in the network. During testing, activations of the layer are multiplied by p [19]. Dropout is a crucial regularization technique for NLI [9] [20]. However, the location of dropout varies considerably between NLI models and is based on trail-and-error experiments with different locations in the network. To the best of our knowledge no prior work has been performed to evaluate the effectiveness of dropout location and rates in the RNN NLI models. In this paper, we study the effect of applying dropout at different locations in an RNN model for NLI. We also investigate the effect of varying the dropout rate. Our results suggest that applying dropout for every feed forward connection, especially at higher dropout rates degrades the performance of RNN. Our best model achieves an accuracy of 86.14% on the SNLI dataset and an accuracy of 77.05% on SciTail dataset. To the best of our knowledge this research is the first exploratory analysis of dropout for NLI. The main contributions of this paper are as follows: (1) A RNN model based on BiLSTMs for NLI. (2) A comparative analysis of different locations and dropout rates in the proposed RNN NLI model. (3) Recommendations for the usage of dropout in the RNN models for NLI task. The layout of the paper is as follows. In Section 2, we describe the related work. In Section 3, we discuss the proposed RNN based NLI model. Experiments and the results are presented in Section 4. Recommendations for the application of dropouts are presented in Section 5. We conclude in Section 6. Recurrent Neural Network Model for NLI Task The proposed RNN NLI model follows the general architecture of NLI models and is depicted in Fig.1. The model combines the intra-attention model [13] with soft-attention mechanism [11]. 
The embedding layer takes as input word M = tanh W y Y + W h R avg ⊗ e L (1) α = sof tmax w T M (2) R = Y α T (3) where, W y , W h are trained projection matrices, w T is the transpose of trained parameter vector w, Y is the matrix of hidden output vectors of the BiLSTM layer, R avg is obtained from the average pooling of Y , e L ∈ R L is a vector of 1s, α is a vector of attention weights and R is the attention weighted sequence representation. The attention weighted sequence representation is generated for premise and hypothesis and is denoted as R p and R h . The attention weighted representation gives more importance to the words which are important to the semantics of the sequence and also captures its global context. The interaction between R p and R h is performed by inter-attention layer, following the Equations (4) − (6). I v = R T p R h (4) R p = sof tmax(I v )R h (5) R h = sof tmax(I v )R p(6) where, I v is the interaction vector.R p contains the words which are relevant based on the content of sequence R h . Similarly,R h contains words which are important with respect to the content of sequence R p . The final sequence encoding is obtained from the element-wise multiplication of intra-attention weighted representation and inter-attention weighted representation as follows: F p =R p R p (7) F h =R h R h(8) To classify the relationship between premise and hypothesis a relation vector is formed from the encoding of premise and hypothesis generated in Equation (7) and (8), as follows: v p,avg = averagepooling(F p ), v p,max = maxpooling(F p ) v h,avg = averagepooling(F h ), v h,max = maxpooling(F h )(9)F relation = [v p,avg ; v p,max ; v h,avg ; v h,max ](10) where v is a vector of length L. The relation vector (F relation ) is fed to the MLP layer. The three-way softmax layer outputs the probability for each class of NLI. Experiments and Results Experimental Setup The standard train, validation and test splits of SNLI [2] and SciTail [10] [12] optimizer with first momentum is set to 0.9 and the second to 0.999 is used. The word embeddings are initialized with pre-trained 300-D Glove 840B vectors [17]. Extensive experiments with dropout locations and hidden units were conducted however we show only the best results for brevity and space limits. Table 1 presents the models with different combinations of layers to the output of which dropout are applied in our model depicted in Fig. 1. Table 2. shows the results for the models in Table 1. Each model is evaluated with dropout rates ranging from 0.1 to 0.5 with a granularity of 0.1. Dropout at Individual Layers We first apply dropout at each layer including the embedding layer. Although the embedding layer is the largest layer it is often not regularized for many language applications [8]. However, we observe the benefit of regularizing it. For SNLI, the highest accuracy is achieved when the embedding layer is regularized (Model 2, DR 0.4). Dropout at Different Layers for NLI Model For SciTail, the highest accuracy is obtained when the recurrent layer is regularized (Model 3, DR 0.1). The dropout injected noise at lower layers prevents higher fully connected layers from overfitting. We further experimented regularizing higher fully connected layers (Intra-Attention, Inter-Attention, MLP) individually, however no significant performance gains observed. Dropout at Multiple Layers We next explore the effect of applying dropout at multiple layers. 
For SNLI and SciTail, the models achieve higher performance when dropout is applied to embedding and recurrent layer (Model 4, DR 0.2). This supports the importance of regularizing embedding and recurrent layer as shown for individual layers. It is interesting to note that regularizing the recurrent layer helps SciTail (Model 7, DR 0.2) whereas regularizing the embedding layer helps SNLI (Model 8, DR 0.2). A possible explanation to this is that for the smaller SciTail dataset the model can not afford to lose information in the input, whereas for the larger SNLI dataset the model has a chance to learn even with the loss of information in input. Also, the results from models 7 and 8 suggests that applying dropout at a single lower layer (Embedding or Recurrent; depending on the amount of training data) and to the inputs and outputs of MLP layer improves performance. We can infer from models 9, 10, 11 and 12 that applying dropout to each feed forward connection helps preventing the model overfit for SciTail (DR 0.1 and 0.2). However, for both the datasets with different dropout locations the performance of the model decreases as the dropout rate increases (Section 4.4). The Effectiveness of Dropout for Overfitting We study the efficacy of dropout on overfitting. The main results are shown in Fig. 2. For SNLI, Fig. 2 (a) -(b), shows the convergence curves for the baseline model and the model achieving the highest accuracy (Model 2, DR 0.4). The convergence curve show that dropout is very effective in preventing overfitting. However, for the smaller SciTail dataset when regularizing multiple layers we observe that the highest accuracy achieving model (Model 9, DP 0.2), overfits significantly ( Fig. 2(d)). This overfitting is due to the large model size. With limited training data of SciTail, our model with higher number of hidden units learns the relationship between the premise and the hypothesis most accurately (Fig. 2(d)). However, these relationships are not representative of the validation set data and thus the model does not generalize well. When we reduced the model size (50, 100 and 200 hidden units) we achieved the best accuracy for SciTail at 100 hidden units ( Table 3). The convergence curve (Fig. 2(c)) shows that dropout effectively prevents overfitting in the model with 100 hidden units in comparison to 300 units. Furthermore, for SciTail dataset, the model with 100 The results of this experiment suggest that given the high learning capacity of RNNs an appropriate model size selection according to the amount of training data is essential. Dropout may independently be insufficient to prevent overfitting in such scenarios. Dropout Rate Effect on Accuracy and Dropout Location We next investigate the effect of varying dropout rates on the accuracy of the models and on various dropout locations. Fig 3. illustrates varying dropout rates and the corresponding test accuracy for SNLI. We observe some distinct trends from the plot. First, the dropout rate and location does not affect the accuracy of the models 2 and 8 over the baseline. Second, in the dropout range [0.2 -0.5], the dropout locations affect the accuracy of the models significantly. Increasing the dropout rate from 0.2 to 0.5 the accuracy of models 5 and 12 decreases significantly by 21.3% and 15.9% respectively. For most of the models (3, 4, 6, 7, 9 and 10) the dropout rate of 0.5 decreases accuracy. From the experiments on SciTail dataset (Fig. 
4), we observed that the dropout rate and its location do not have significant effect on most of the models, with the exception of model 8 (which shows erratic performance). Finally, for almost all the experiments a large dropout rate (0.5) decreases the accuracy of the models. The dropout rate of 0.5 works for a wide rang of neural networks and tasks [19]. However, our results show that this is not desirable for RNN models of NLI. Based on our evaluations a dropout range of [0.2 − 0.4] is advised. Recommendations for Dropout Application Based on our empirical evaluations, the following is recommended for regularizing a RNN model for NLI task: (1) Embedding layer should be regularized for large datasets like SNLI. For smaller datasets such as SciTail regularizing recurrent layer is an efficient option. The dropout injected noise at these layers prevent the higher fully connected layers from overfitting. (2) When regularizing multiple layers, regularizing a lower layer (embedding or recurrent; depending on the amount of data) with the inputs and outputs of MLP layer should be considered. The performance of our model decreased when dropout is applied at each intermediate feed-forward connection. (3) When dropout is applied at multiple feed forward connections, it is almost always better to apply it at lower rate − [0.2 − 0.4]. (4) Given the high learning capacity of RNNs, an appropriate model size selection according to the amount of training data is essential. Dropout may independently be insufficient to prevent overfitting in the scenarios otherwise. Conclusions In this paper, we reported the outcome of experiments conducted to investigate the effect of applying dropout at different layers in an RNN model for the NLI task. Based on our empirical evaluations we recommended the probable locations of dropouts to gain high performance on NLI task. Through extensive exploration, for the correct dropout location in our model, we achieved the accuracies of 86.14% on SNLI and 77.05% on SciTail datasets. In future research, we aim to investigate the effect of different dropout rates at distinct layers.
2,046
1810.08393
2951836492
This paper addresses the challenge of dense pixel correspondence estimation between two images. This problem is closely related to the optical flow estimation task, where ConvNets (CNNs) have recently achieved significant progress. While optical flow methods produce very accurate results for small pixel translations and limited appearance variation, they can hardly deal with the strong geometric transformations that we consider in this work. In this paper, we propose a coarse-to-fine CNN-based framework that leverages the advantages of optical flow approaches and extends them to the case of large transformations, providing dense and subpixel-accurate estimates. It is trained on synthetic transformations and generalizes very well to unseen, realistic data. Further, we apply our method to the problem of relative camera pose estimation and demonstrate that the model outperforms existing dense approaches.
Applying machine learning techniques has proven very effective for the optical flow estimation problem @cite_31 @cite_30 @cite_29 , which is closely related to the task of finding pixel correspondences. The recently proposed methods PWC-Net @cite_29 and FlowNet2 @cite_31 utilize a correlation layer to predict image similarities in some neighborhood around the center pixel in a coarse-to-fine manner. While such a spatially constrained correlation layer leads to state-of-the-art results in optical flow, it performs poorly for the very strong geometric transformations that we consider in this work. Rocco et al. @cite_8 proposed a CNN-based approach for determining correspondences between two images and applied it to instance-level and category-level tasks. In contrast to optical flow methods @cite_31 @cite_29 , it comprises a matching layer that calculates the correlation between target and reference feature maps without any spatial constraint. The method casts the task of finding pixel correspondences as a regression problem; it consists of two independent Siamese CNNs, trained separately, that directly predict affine and TPS geometric transformations parametrized as 6-element and 18-element vectors, respectively. On the contrary, we propose a more general approach that handles more diverse transformations and operates in an end-to-end fashion.
{ "abstract": [ "", "The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50 . It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.", "We present a compact but effective CNN model for optical flow, called PWC-Net. PWC-Net has been designed according to simple and well-established principles: pyramidal processing, warping, and the use of a cost volume. Cast in a learnable feature pyramid, PWC-Net uses the current optical flow estimate to warp the CNN features of the second image. It then uses the warped features and features of the first image to construct a cost volume, which is processed by a CNN to estimate the optical flow. PWC-Net is 17 times smaller in size and easier to train than the recent FlowNet2 model. Moreover, it outperforms all published optical flow methods on the MPI Sintel final pass and KITTI 2015 benchmarks, running at about 35 fps on Sintel resolution (1024 A— 436) images. Our models are available on our project website.", "We address the problem of determining correspondences between two images in agreement with a geometric model such as an affine or thin-plate spline transformation, and estimating its parameters. The contributions of this work are three-fold. First, we propose a convolutional neural network architecture for geometric matching. The architecture is based on three main components that mimic the standard steps of feature extraction, matching and simultaneous inlier detection and model parameter estimation, while being trainable end-to-end. Second, we demonstrate that the network parameters can be trained from synthetically generated imagery without the need for manual annotation and that our matching layer significantly increases generalization capabilities to never seen before images. Finally, we show that the same model can perform both instance-level and category-level matching giving state-of-the-art results on the challenging Proposal Flow dataset." ], "cite_N": [ "@cite_30", "@cite_31", "@cite_29", "@cite_8" ], "mid": [ "", "2560474170", "2963782415", "2604233003" ] }
DGC-Net: Dense Geometric Correspondence Network
Finding correspondences between images is a key task in many computer vision applications, including image alignment [28,29], visual localization [31,34,35], image retrieval [2,11], structure-from-motion [30], semantic correspondence [12,16], optical flow [14,15,26,33], and relative camera pose estimation [23,36]. In general, there are two ways to establish a pixel-wise correspondence field between images. The first group of methods is based on applying feature descriptors to an image pair and utilizing a nearest-neighbor criterion to match keypoints globally. However, these approaches do not produce dense correspondences explicitly and apply interpolation or local affine transformations [18] to turn a sparse set into pixel-wise correspondences. Another possible direction for finding dense correspondences is to compare image patches in feature space. Neural networks have been widely used to learn discriminative and robust descriptors [3,21]. Those descriptors are then compared pair-wise by thresholding the Euclidean distance between them [6,9,22] or by predicting a binary label [37,38]. In contrast, the proposed approach processes the image as a whole, and thus it can handle a broader set of geometric changes in images and directly predict dense correspondences without any post-processing steps. Recent optical flow methods [14,33] have demonstrated great success at estimating dense sub-pixel correspondences. However, the main limitation of these methods is a spatially constrained correlation layer predicting the matches in a small vicinity around the center pixel of each image patch. Thus, the captured transformations are very restricted. To some extent this restriction can be alleviated with a pyramid structure [33], but not completely. In this paper we propose a convolutional neural network (CNN) architecture, called DGC-Net, for learning dense pixel correspondences between a pair of images with strong geometric transformations. Following more recent optical flow methods [14,26,33] and the concept introduced by Lucas-Kanade [20], we exploit a coarse-to-fine image warping idea by creating a hierarchical network structure. Rather than considering only affine and thin-plate spline (TPS) transformations [28], we train our system on synthetic data in an end-to-end manner, handling diverse geometric transformations present in the real world. We demonstrate that the proposed approach substantially outperforms CNN-based optical flow and image matching methods on the challenging HPatches [4] and DTU [1] datasets. The main contributions of this paper are: 1) We propose an end-to-end CNN-based method, DGC-Net, to establish dense pixel correspondences between images with strong geometric transformations; 2) We demonstrate that even if DGC-Net is trained only on synthetic transformations, it generalizes well to real data; 3) We apply the proposed approach to the problem of relative camera pose estimation and demonstrate that our method outperforms strong baseline approaches by a large margin. In addition, we modify the original structure of DGC-Net and seamlessly integrate a matchability decoder into DGC-Net that can significantly improve the computational efficiency of the relative camera pose estimation pipeline by removing tentative correspondences with low confidence scores.

Method

Our goal is to determine correspondences between two input images $I_s, I_t \in \mathbb{R}^{W \times H \times 3}$.
The most straightforward way to solve this task is to predict the parameters of a relative transformation matrix parametrized for different geometric transformations, such as a homography [8], an affine, or a TPS [28] transformation. However, realistic scenes usually contain more complex geometric transformations which can hardly be described by such a parametrization. Inspired by recent work in image compositing [17] and optical flow estimation, we propose to predict a dense pixel correspondence map $\omega \in \mathbb{R}^{W \times H \times 2}$ in a coarse-to-fine manner.

Network Architecture

In this section, we first present the structure of the proposed network and the general principles behind it, then formulate the view correspondence objective function to predict geometric transformations between image pairs. A schematic representation of the proposed approach is shown in Fig. 1. A pair of input images is fed into a module consisting of two pre-trained CNN branches which construct a feature pyramid. The correlation layer takes feature maps of the source and target images from the coarse (top) level of the pyramid and estimates the pairwise similarity between them. Then, the correspondence map decoder takes the output of the correlation layer and directly predicts pixel correspondences for this particular level of the pyramid. The estimates are then refined in an iterative manner.

Feature pyramid creator. In order to create a representation of an input image pair in feature space, we construct a Siamese neural network with two branches with shared weights. The branches use the VGG-16 architecture [32] trained on ImageNet [7] and truncated at the last pooling layer, followed by L2-normalization [28]. We extract features $f_s$, $f_t$ at different parts of each branch to create a 5-layer feature pyramid with the following spatial resolutions (from top to bottom): [15 × 15, 30 × 30, 60 × 60, 120 × 120, 240 × 240], encoded with different colors in Fig. 1. The weights of the CNN branches are then fixed throughout the rest of the network training procedure.

Correlation layer. In order to estimate a similarity score between two images, we follow an idea proposed in [28] and calculate the correlation volume between the normalized feature maps of the source and target images. In contrast to optical flow approaches [14,33], where the correlation volume is computed for the raw features in a restricted area around the center pixel, we compute a global correlation and apply L2-normalization before and after the correlation layer to strongly down-weight ambiguous matches (cf. Fig. 1). Specifically, the correlation layer computes the scalar product between each feature vector of the source $f_s$ and all vectors of the target $f_t$ feature maps, $f_s, f_t \in \mathbb{R}^{W \times H \times C}$, and can be defined in the following way:

$$c_{st}(i, j) = \langle f_s(i, j), f_t(i, j) \rangle, \quad (1)$$

Figure 1: Overview of our proposed iterative architecture DGC-Net, consisting of four major components: 1) the feature pyramid creator; 2) the correlation layer, which estimates the pairwise similarity score of the source and target feature descriptors; 3) the fully convolutional correspondence map decoders, which predict the dense correspondence map between the input image pair at each level of the feature pyramid; 4) the warping layer, which warps features of the source image using the upsampled transformation grid from a correspondence map decoder. The matchability decoder is a tiny CNN that predicts a confidence map with higher scores for those pixels in the source image that have correspondences in the target.
See Sec. 3.1 for more details.

In Eq. (1), $\langle \cdot, \cdot \rangle$ denotes the scalar product and $c_{st}$ is an L2-normalized correlation volume, $c_{st} \in \mathbb{R}^{W \times H \times (W \times H)}$. Since the third dimension of the correlation volume is the product of its $W$ and $H$, it is not feasible to calculate such volumes at the bottom layers of the pyramid, where the spatial resolution of the feature maps is large. Thus, at the bottom feature pyramid layers, we concatenate descriptors channel-wise.

Correspondence map decoder. The output of the correlation layer is then fed into a correspondence map decoder consisting of 5 convolutional blocks (Conv-BN-ReLU) to estimate a 2D dense correspondence field $\omega_{est}^{(l)}$ at a particular level $l$ of the feature pyramid. The estimates are parameterized such that each predicted pixel location in the map belongs to the interval [−1, 1], representing width- and height-normalized image coordinates. That is, we upsample the predicted correspondence field at the $(l-1)$th level to warp the feature maps of the source image at the $l$th level toward the target features. Finally, the upsampled field, the warped source features $f_s(\omega_{est}^{(l)})$, and the target features $f_t^{(l)}$ are concatenated along the channel dimension and provided as input to the correspondence map decoder at the $l$th level. Each convolution layer in the decoder is padded to keep the spatial resolution of the feature maps intact. Moreover, in order to capture more spatial context at the bottom layers of the pyramid, starting from $l = 3$, different dilation factors have been added to the convolution blocks to increase the receptive field. The feature pyramid creator, the correlation layer, and a hierarchical chain of correspondence map decoders together form a CNN architecture that we will refer to as DGC-Net in the following. Given an image pair and the ground truth pixel correspondence map $\omega_{gt}$, we can define a hierarchical objective loss function as follows:

$$\mathcal{L}_c = \sum_{l=0}^{L-1} \alpha^{(l)} \frac{1}{N_{val}^{(l)}} \sum_{x}^{N_{val}^{(l)}} M_{gt}^{(l)} \left\lVert \omega_{est}^{(l)}(x) - \omega_{gt}^{(l)}(x) \right\rVert_1 \quad (2)$$

where $\lVert \cdot \rVert_1$ is the L1 distance between the estimated and ground truth correspondence maps, computed over the $N_{val}^{(l)}$ valid pixels $x$ selected according to the ground truth mask $M_{gt}^{(l)}$ at each level $l$ of the $L$-level feature pyramid. In order to adjust the weight of different pyramid layers, we introduce a vector of scalar weight coefficients $\alpha^{(l)}$.

Matchability decoder. According to recent advances in optical flow [15,33], it is still very challenging to estimate correct correspondences for ill-posed cases, such as occluded regions of an image pair. Thus, in addition to the pixel correspondence map produced by DGC-Net, we would like to directly predict a measure of confidence for each correspondence. Specifically, we modify the DGC-Net structure by adding a matchability branch. It contains four convolutional layers outputting a probability map (parametrized as a sigmoid) indicating a confidence score for each pixel location in the predicted correspondence map. We will refer to this architecture as DGC+M-Net. Since we consider this problem as a pixel classification task, we optimize a binary cross entropy (BCE) with logits loss, defined as:

$$\mathcal{L}_m = -\frac{1}{N} \sum_{i=0}^{N-1} \left( y_i \log \sigma(\hat{y}_i) + (1 - y_i) \log (1 - \sigma(\hat{y}_i)) \right) \quad (3)$$

where $y_i$ and $\hat{y}_i$ are the ground truth and estimated matchability masks, respectively, and $\sigma$ is the element-wise sigmoid function.
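To make Eqs. (2) and (3) concrete, a minimal PyTorch sketch of the two loss terms is given below. The tensor shapes, pyramid depth, and level weights are illustrative assumptions rather than the authors' implementation.

```python
# A minimal sketch (an assumption, not the authors' code) of the hierarchical
# correspondence loss of Eq. (2) and the matchability BCE loss of Eq. (3).
import torch.nn.functional as F

def correspondence_loss(est_maps, gt_maps, gt_masks, alpha):
    """est_maps/gt_maps: lists of (B, 2, H_l, W_l) correspondence fields,
    one per pyramid level; gt_masks: list of (B, 1, H_l, W_l) validity
    masks; alpha: per-level scalar weights."""
    loss = 0.0
    for w_est, w_gt, mask, a in zip(est_maps, gt_maps, gt_masks, alpha):
        l1 = (w_est - w_gt).abs().sum(dim=1, keepdim=True)  # per-pixel L1
        n_valid = mask.sum().clamp(min=1)                   # N_val^(l)
        loss = loss + a * (mask * l1).sum() / n_valid
    return loss

def matchability_loss(logits, gt_mask):
    """logits: (B, 1, H, W) raw matchability scores; gt_mask: same shape,
    1 where a pixel has a valid correspondence."""
    return F.binary_cross_entropy_with_logits(logits, gt_mask)

# Total loss for DGC+M-Net (Eq. 4), with beta = 1 as in the text:
# loss = correspondence_loss(...) + 1.0 * matchability_loss(...)
```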
The total loss for the DGC+M-Net model is the sum of the correspondence loss $\mathcal{L}_c$ and the matchability loss $\mathcal{L}_m$ with a weighting coefficient $\beta$ ($\beta = 1$):

$$\mathcal{L} = \mathcal{L}_c + \beta \mathcal{L}_m. \quad (4)$$

We provide detailed information about the hyperparameters used in training, as well as the exact network definitions of all components, in the supplementary material.

Experiments

We discuss the experimental settings and evaluate the proposed method on two closely related tasks, i.e. finding correspondences between images and relative camera pose estimation.

Baselines

In this work we compare our approach with several strong baselines. Image alignment. Rocco et al. [28] propose a CNN-based method to estimate geometric transformations between two images, achieving state-of-the-art results in a semantic correspondence task. The transformations are parameterized as an 18-element vector and directly regressed by the network. We apply the estimates to a regular grid of the size of the input images to produce a dense pixel correspondence map. Optical flow estimation requires finding correspondences between two input images. Therefore, we consider three CNN-based optical flow approaches, i.e. SPyNet [26], FlowNet2 [14], and the recently proposed PWC-Net [33], as baseline methods. In detail, PWC-Net is based on a coarse-to-fine paradigm and predicts optical flow at different scales of feature maps produced by a Siamese CNN. The coarse estimates are then used to refine the flow. For the optical flow methods, we use pre-trained models from the original authors. DeepMatching [27] is a matching algorithm aiming at finding semi-dense image correspondences. Specifically, it relies on a multi-scale image pyramid architecture with no trainable parts and can cope with very challenging scenes, such as repetitive textures and non-rigid image transformations.

Datasets

We compare the proposed approach with the different baseline methods on two evaluation datasets. HPatches [4] consists of several sequences of real images with varying photometric and geometric changes. Each image sequence contains a reference (target) image and 5 source images taken under a different viewpoint. For all images the estimated ground truth homography $H$ is provided; thus, dense correspondence maps can be obtained for each test image pair. There are 59 image sequences with challenging geometric transformations in total. DTU. The pixel correspondences produced by our method can also be used for the relative camera pose estimation problem. Thus, in order to measure the performance of the proposed approach on this task, we utilize the DTU image dataset [1], consisting of 124 scenes with very accurate absolute camera poses collected by a precisely positioned robot. We create a list of camera pairs which have overlapping fields of view and then randomly choose about 3k image pairs covering all the scenes. Training datasets. We use the training and validation splits proposed by [28] to compare both approaches fairly. Specifically, Rocco et al. [28] generate synthetic affine (aff) and thin-plate spline (TPS) transformations and apply them to images from the Pascal VOC 2011 (P) and Tokyo Time Machine (T) datasets. Each synthetic dataset has 20k training and validation image pairs, respectively. However, those transformations are not very diverse. To be able to estimate the correspondences for HPatches scenes accurately, we therefore generate 20k labeled training examples [8] by applying random homography transformations to the (T) dataset.
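The following is a minimal sketch of one plausible recipe for generating such a synthetic homography training pair, together with its ground-truth dense correspondence map; it is an assumption on my part, not the authors' exact generation code, and the corner-perturbation scheme and parameter values are illustrative.

```python
# A minimal sketch (assumed recipe, not the authors' code) of producing a
# synthetic training pair via a random homography, with its ground-truth
# dense correspondence map and validity mask (cf. Eq. (2)).
import cv2
import numpy as np

def random_homography_pair(image, max_shift=0.25):
    h, w = image.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Perturb each corner by up to max_shift of the image size.
    jitter = ((np.random.rand(4, 2) - 0.5) * 2 * max_shift
              * np.float32([w, h])).astype(np.float32)
    H = cv2.getPerspectiveTransform(corners, corners + jitter)
    target = cv2.warpPerspective(image, H, (w, h))
    # Ground-truth correspondence map: where each target pixel comes from
    # in the source image (inverse homography applied to the pixel grid).
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    grid = np.stack([xs, ys], axis=-1).reshape(-1, 1, 2).astype(np.float32)
    src = cv2.perspectiveTransform(grid, np.linalg.inv(H)).reshape(h, w, 2)
    valid = ((src[..., 0] >= 0) & (src[..., 0] < w) &
             (src[..., 1] >= 0) & (src[..., 1] < h))  # ground-truth mask
    return target, src, valid
```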
All the training datasets mentioned above represent only synthetic geometric transformations between images. However, it is hard to artificially generate transformations as diverse as those present in the real 3D world. Therefore, in addition to synthetic data, we utilize the Citywall dataset used for 3D reconstruction and provided by [10]. Based on camera poses and depth maps estimated with the Multiview Reconstruction Environment [10], we create a list of 10k image pairs and ground truth correspondence maps. We use this data to fine-tune the proposed model. We emphasize that the objective of this experiment is to demonstrate that fine-tuning on realistic data leads to further improvement of the results.

Metrics

As predicting a dense correspondence grid is closely related to optical flow estimation, we follow the standard evaluation metric used in that task, i.e. the average endpoint error (AEPE). AEPE is defined as the average Euclidean distance between the estimated and ground truth correspondence maps. In addition to AEPE, we also use the Percentage of Correct Keypoints (PCK) as an evaluation metric. PCK is the percentage of correctly matched estimated points $\hat{x}_i$ that are within a certain threshold (in pixels) of the ground truth corresponding points $x_i$. In order to estimate the accuracy of matchability mask predictions, we report the normalized Jaccard index (Intersection over Union, IoU), i.e. $0 \le J \le 1$, for the ground truth and estimated masks. This metric is interpreted as a similarity measure between two finite sample sets and is widely used in semantic segmentation [13].

Results

Synthetic datasets. First, we experimentally compare the proposed DGC-Net and DGC+M-Net models with [28] by calculating AEPE. All the models have been trained on the data provided by [28]. More specifically, *-aff methods utilize only synthetic affine transformations during training, while *-aff+tps methods are additionally trained on TPS transformations. AEPE is measured only for valid pixel locations of the (P) and (T) test data by applying the ground-truth mask. For DGC+M-Net models we also report the normalized Jaccard index. Tab. 1 shows that DGC-Net significantly outperforms all baseline methods on both evaluation datasets. Although the DGC+M-Net model is marginally worse than DGC-Net in the case where the transformation between images can be described by an affine transformation, it is a more universal approach, as it additionally predicts a matchability map which is quite accurate according to the Jaccard similarity score. It is worth noting that the proposed models generalize well to unseen data, since the AEPE metric varies only slightly between the (P) and (T) evaluation datasets. This shows that the model has learned the geometric transformations rather than overfitting to the visual appearance of the images. (Table 1: AEPE metric on the data from [28]; for DGC+M-Net models, the Jaccard index is also reported.)

Realistic datasets. To demonstrate the performance on more realistic data, we evaluate all baseline methods and our approach on the HPatches dataset. That is, we calculate AEPE over all image sequences belonging to the same viewpoint ID and report the numbers in Tab. 2. Compared to the *-aff models, fine-tuning on TPS transformations leads to a significant improvement in performance, reducing the overall EPE by 20% for Viewpoint II and by 9% for Viewpoint V, respectively. The performance is improved further by fine-tuning the model on synthetic homography data.
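For clarity, the two main metrics of the evaluation protocol can be computed as in the following minimal sketch; this is my own illustration of the definitions above, not the authors' evaluation code.

```python
# A minimal sketch of the AEPE and PCK metrics described in the Metrics
# section (my own illustration, not the authors' evaluation code).
import numpy as np

def aepe(est, gt, mask=None):
    """est, gt: (H, W, 2) correspondence maps in pixels;
    mask: optional (H, W) boolean validity mask."""
    err = np.linalg.norm(est - gt, axis=-1)
    return err[mask].mean() if mask is not None else err.mean()

def pck(est, gt, threshold, mask=None):
    """Fraction of points whose endpoint error is within `threshold` pixels."""
    err = np.linalg.norm(est - gt, axis=-1)
    if mask is not None:
        err = err[mask]
    return (err <= threshold).mean()
```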
To prevent large errors caused by interpolation, we directly calculate the AEPE metric for the semi-dense DeepMatching [27] estimates (hence, [27] has an unfair advantage in terms of AEPE). The Jaccard index for DGC+M-Net-* models is provided in Tab. 3. In addition, we report the number of correctly matched pixels between two images by calculating the PCK metric with different thresholds. Especially the comparison with [28] is interesting, as the coarse level of our pipeline is based on its matching strategy. As shown in Fig. 2, the proposed method correctly matches around 85% of pixels for the case where the geometric transformations are quite small (Viewpoint I). It significantly outperforms [28] trained on the same data without any external synthetic datasets, and can be further improved by utilizing more diverse transformations during training. Compared to FlowNet2 and PWC-Net, our method, DGC-Net, can handle scenarios exhibiting drastic changes between views (Viewpoints IV and V), achieving 59% PCK with a 1-pixel threshold for the most challenging case. (Table 3: Normalized Jaccard index (higher is better) produced by the DGC+M-Net model on the HPatches evaluation dataset with different types of synthetic transformations of the (T) training dataset.) Qualitative results on HPatches and DTU are illustrated in Fig. 4 and Fig. 5, respectively.

Relative camera pose. In this section, we demonstrate the application of the proposed method to predicting relative camera pose. Given a list of correspondences and the intrinsic camera parameter matrix $K$, we estimate the essential matrix $E$ by applying RANSAC. To decrease the randomness of RANSAC, for each image pair we run a 1000-iteration loop 5 times and choose the estimated essential matrix corresponding to the maximum inlier count. Once $E$ is estimated, the relative pose can be recovered based on $E$ and $K$. Similarly to [23], we use the relative orientation error and the relative translation error as metrics for evaluating the performance. Both metrics compute the angle between the estimated orientation/translation and the ground truth. Fig. 3a and 3b show a set of normalized cumulative histograms of relative orientation and translation errors for each baseline model evaluated on all scenes of the DTU dataset (Sec. 4.2). As before, DGC-Net and DGC+M-Net have been trained only on synthetic transformations (aff+tps+homo). For a fair comparison, we resize images to 256 × 256 for all baseline methods and change the internal camera parameters accordingly. Interestingly, both PWC-Net [33] and FlowNet2 [14] estimate relative orientation quite well, achieving 20° and 24° median error (computed at the 0.5 level of the cumulative histograms), respectively. The proposed approach outperforms all CNN-based baselines, improving the median relative orientation and translation errors by 18% and 40% compared to PWC-Net. We also evaluate the DGC+M-Net model, which additionally predicts a matchability mask. This mask can be considered as a filter to remove tentative correspondences with small confidence scores from the relative pose estimation pipeline. According to Fig. 3, DGC+M-Net falls slightly behind DGC-Net in estimating relative pose, but it achieves significant advantages in terms of computational efficiency, decreasing the elapsed time for estimating relative camera pose for all test image pairs from 312 sec. to 162 sec. To experiment with more realistic transformations, we fine-tune the DGC-Net model on the Citywall dataset (Sec. 4.2), illustrated in the supplementary material.
We refer to this model as DGC-Net-Citywall. As can be clearly seen, the ground-truth transformation maps are incomplete, leading to multiple missing regions in the warped reference images (see the supplementary). However, using external data with more diverse transformations helps to improve the performance of the method remarkably, decreasing the median relative translation error by 17% according to Fig. 3b. In addition, we calculate the epipolar error for the matches produced by our method, PWC-Net, and FlowNet2. The error is defined in terms of the squared distances $d^2$ between points and the corresponding epipolar lines, as follows:

$$D_e = \frac{d^2(x'_i, F x_i) + d^2(x_i, F^T x'_i)}{2}, \quad \forall i \in N, \quad (5)$$

where $x_i = (x_i, y_i, 1)^T$ and $x'_i = (x'_i, y'_i, 1)^T$ denote a pair of matching points in the two images, $F$ is the ground-truth fundamental matrix between the two views, and $N$ is the number of image pixels (image resolution). The normalized cumulative histogram of the error is presented in Fig. 3c. Quantitatively, the proposed method provides quite accurate pixel correspondences between two views, achieving a median error of less than 4 pixels across the whole test dataset.

Ablation Study

In this section, we analyze some design decisions of the proposed approach. More specifically, our goal is to investigate the benefits of using a global correlation layer compared to the one utilized in recent optical flow methods [14,33]. In addition, we experiment with another type of parametrization of the ground truth data, representing it in an optical flow manner, i.e. as pixel displacements (see "Different parametrization" below). (Table 4a: Performance on synthetic data; all the models are trained on the synthetic affine transformations provided by [28].)

Global correlation layer: In contrast to the proposed approach, the PWC-Net architecture comprises a local correlation layer computing similarities between two feature maps in some restricted area around the center pixel at each level of the feature pyramid. However, it is very hard to compare DGC-Net and the off-the-shelf PWC-Net approach fairly due to the significant difference in network structures (see Tab. 4c). Therefore, we construct a new coarse-to-fine N-level CNN model by keeping all the blocks of DGC-Net except the correlation layer. More specifically, each feature pyramid level is complemented by a local correlation layer as used in the PWC-Net structure. We dub this model PWCm-Net. As shown in Tab. 4a, the global correlation layer achieves a significant improvement over the case with a set of spatially constrained correlation layers. In particular, the error is reduced from 6.73 to 0.95 pixels on the (P) dataset. All results have been obtained with only affine transformations in the training data.

L2 normalization: As explained in Sec. 3.1, we L2-normalize the output of the correlation layer to down-weight putative matches. In Tab. 4a we compare the original DGC-Net model and its modified version without the correlation layer normalization step (DGC-Net no L2norm). According to the results, the normalization improves the error by about 15% for all test cases, demonstrating the importance of this step.

Different parametrization: Given two images, the proposed approach predicts a dense pixel correspondence map representing the absolute location of each image pixel. In contrast, all optical flow methods estimate pixel displacements between images. To dispel doubt about this parameterization, we train the DGC-Net model on the same synthetic data as before, but with ground-truth labels recalculated in an optical flow manner. We title this model DGC-Net-flow and provide the results in Tab.
4a and Tab. 4b. Interestingly, while the DGC-Net-flow model performs marginally better on synthetic data, DGC-Net produces more accurate results in the case of large geometric transformations (Tab. 4b), demonstrating the benefit of the original parametrization.

Conclusion

Our paper addressed the challenging problem of finding dense pixel correspondences. We have proposed a coarse-to-fine network architecture that efficiently handles diverse transformations between two views. We have shown that our contributions were crucial to outperforming strong baselines on challenging realistic datasets. Additionally, we have applied the proposed method to the relative camera pose estimation problem, demonstrating very promising results. We hope this paper inspires more research into applying deep learning to accurate and reliable dense pixel correspondence estimation.

Implementation details

We train our network end-to-end using the Adam [4] solver with $\beta_1 = 0.9$ and $\beta_2 = 0.999$. As a preprocessing step, the training images are resized to 240 × 240 and further mean-centered and normalized using the mean and standard deviation of the ImageNet dataset [1]. We use a batch size of 32 and an initial learning rate of $10^{-2}$, which is gradually decreased during training. For fine-tuning on the Citywall dataset (Fig. 1), the learning rate is set to $10^{-4}$. The weight decay is set to $10^{-5}$ in all experiments, and no dropout was used. Our method is implemented using the PyTorch framework [5] and trained on two NVIDIA Titan X GPUs.

Ablation study

Dilated convolutions: The quantitative evaluation of the proposed method without any dilation factors in the correspondence map decoders is presented in Tab.

Qualitative results

We show more qualitative results of pixel-wise dense correspondence estimation on the HPatches and DTU datasets in Fig. 4 and Fig. 5.
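For readers who want to reproduce the relative pose evaluation pipeline and the epipolar error of Eq. (5), the following is a minimal sketch using OpenCV; the specific RANSAC settings and the use of cv2.findEssentialMat/recoverPose are assumptions about one reasonable implementation, not the authors' exact code.

```python
# A minimal sketch (assumed pipeline, not the authors' code) of recovering
# relative camera pose from matched points and computing the symmetric
# epipolar error of Eq. (5).
import cv2
import numpy as np

def relative_pose(pts_src, pts_tgt, K):
    """pts_src, pts_tgt: (N, 2) matched pixel coordinates; K: 3x3 intrinsics."""
    E, inliers = cv2.findEssentialMat(pts_src, pts_tgt, K,
                                      method=cv2.RANSAC, prob=0.999,
                                      threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_src, pts_tgt, K, mask=inliers)
    return R, t

def symmetric_epipolar_error(pts_src, pts_tgt, F):
    """Eq. (5): mean of squared point-to-epipolar-line distances in both images."""
    x = np.hstack([pts_src, np.ones((len(pts_src), 1))])   # homogeneous points
    xp = np.hstack([pts_tgt, np.ones((len(pts_tgt), 1))])
    l_tgt = x @ F.T   # epipolar lines in the target image (F x)
    l_src = xp @ F    # epipolar lines in the source image (F^T x')
    d_tgt = (np.sum(xp * l_tgt, axis=1) ** 2
             / (l_tgt[:, 0] ** 2 + l_tgt[:, 1] ** 2))
    d_src = (np.sum(x * l_src, axis=1) ** 2
             / (l_src[:, 0] ** 2 + l_src[:, 1] ** 2))
    return (d_tgt + d_src) / 2.0
```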
4,129
1906.07138
2900464026
Mapping road networks today is labor-intensive. As a result, road maps have poor coverage outside urban centers in many countries. Systems to automatically infer road network graphs from aerial imagery and GPS trajectories have been proposed to improve coverage of road maps. However, because of high error rates, these systems have not been adopted by mapping communities. We propose machine-assisted map editing, where automatic map inference is integrated into existing, human-centric map editing workflows. To realize this, we build Machine-Assisted iD (MAiD), where we extend the web-based OpenStreetMap editor, iD, with machine-assistance functionality. We complement MAiD with a novel approach for inferring road topology from aerial imagery that combines the speed of prior segmentation approaches with the accuracy of prior iterative graph construction methods. We design MAiD to tackle the addition of major, arterial roads in regions where existing maps have poor coverage, and the incremental improvement of coverage in regions where major roads are already mapped. We conduct two user studies and find that, when participants are given a fixed time to map roads, they are able to add as much as 3.5x more roads with MAiD.
Most state-of-the-art approaches for inferring road maps from aerial imagery apply convolutional neural networks (CNNs) to segment the imagery into "road" and "non-road" pixels, and then post-process the segmentation output to extract a road network graph. @cite_17 develop a cascaded CNN architecture with two jointly trained components, where the first component detects pixels on the road and the second focuses on pixels close to the road centerline. They then threshold and thin the centerline segmentation output to extract a graph.
{ "abstract": [ "Accurate road detection and centerline extraction from very high resolution (VHR) remote sensing imagery are of central importance in a wide range of applications. Due to the complex backgrounds and occlusions of trees and cars, most road detection methods bring in the heterogeneous segments; besides for the centerline extraction task, most current approaches fail to extract a wonderful centerline network that appears smooth, complete, as well as single-pixel width. To address the above-mentioned complex issues, we propose a novel deep model, i.e., a cascaded end-to-end convolutional neural network (CasNet), to simultaneously cope with the road detection and centerline extraction tasks. Specifically, CasNet consists of two networks. One aims at the road detection task, whose strong representation ability is well able to tackle the complex backgrounds and occlusions of trees and cars. The other is cascaded to the former one, making full use of the feature maps produced formerly, to obtain the good centerline extraction. Finally, a thinning algorithm is proposed to obtain smooth, complete, and single-pixel width road centerline network. Extensive experiments demonstrate that CasNet outperforms the state-of-the-art methods greatly in learning quality and learning speed. That is, CasNet exceeds the comparing methods by a large margin in quantitative performance, and it is nearly 25 times faster than the comparing methods. Moreover, as another contribution, a large and challenging road centerline data set for the VHR remote sensing image will be publicly available for further studies." ], "cite_N": [ "@cite_17" ], "mid": [ "2593886839" ] }
Machine-Assisted Map Editing
In many countries, road maps have poor coverage outside urban centers. For example, in Indonesia, roads in the OpenStreetMap dataset [9] cover only 55% of the country's road infrastructure; the closest mapped road to a small village may be tens of miles away. Map coverage improves slowly because mapping road networks is very labor-intensive. For example, when adding roads visible in aerial imagery, users need to perform repeated clicks to draw lines corresponding to road segments. This issue has motivated significant interest in automatic map inference. Several systems have been proposed for automatically constructing road maps from aerial imagery [6,11] and GPS trajectories [4,15]. Yet, despite over a decade of research in this space, these systems have not gained traction in OpenStreetMap and other mapping communities. Indeed, OpenStreetMap contributors continue to add roads solely by tracing them by hand. Fundamentally, high error rates make full automation impractical. Even state-of-the-art automatic map inference approaches have error rates between 5% and 10% [2,15]. Navigating the road network using road maps with such high frequencies of errors would be virtually impossible. Thus, we believe that automatic map inference can only be useful when it is integrated with existing, human-centric map editing workflows. In this paper, we propose machine-assisted map editing to do exactly that. Our primary contribution is the design and development of Machine-Assisted iD (MAiD), where we integrate machine-assistance functionality into iD, a web-based OpenStreetMap editor. At its core, MAiD replaces manual tracing of roads with human validation of automatically inferred road segments. We designed MAiD with a holistic view of the map editing process, focusing on the parts of the workflow that can benefit substantially from machine-assistance. Specifically, MAiD accelerates map editing in two ways. In regions where the map has low coverage, MAiD focuses the user's effort on validation of major, arterial roads that form the backbone of the road network. Incorporating these roads into the map is very useful since arterial roads are crucial to many routes. At the same time, because major roads span large distances, validating automatically inferred segments covering major roads is significantly faster than tracing the roads manually. However, road networks inferred by map inference methods include both major and minor roads. Thus, we propose a novel shortest-path-based pruning scheme that operates on an inferred road network graph to retain only inferred segments that correspond to major roads.
In regions where the map has high coverage, further improving map coverage requires users to painstakingly scan the aerial imagery and other data sources for unmapped roads. We reduce this scanning time by adding a "teleport" feature that immediately pans the user to an inferred road segment. Because many inferred segments correspond to service roads and residential roads that are not crucial to the road network, we design a segment ranking scheme to prioritize segments that are more useful. We find that existing schemes to automatically infer roads from aerial imagery are not suitable for the interactive workflow in MAiD. Segmentation-based approaches [6,11,14], which apply a CNN to label pixels in the imagery as "road" or "non-road", have low accuracy because they require an error-prone post-processing stage to extract a road network graph from the pixel labels. Iterative graph construction (IGC) approaches [2,17] improve accuracy by extracting road topology directly from the CNN, but have execution times six times slower than segmentation, which is too slow for interactivity. To facilitate machine-assisted interactive mapping, we develop a novel method for extracting road topology from aerial imagery that combines the speed of segmentation-based approaches with the high-accuracy of iterative graph construction (IGC) approaches. Our method adapts the IGC process to use a CNN that outputs road directions for all pixels in one shot; this substantially reduces the number of CNN evaluations, thereby reducing inference time for IGC by almost 8x with near-identical accuracy. Furthermore, in contrast to prior work, our approach infers not only unmapped roads, but also their connections to an existing road network graph. To evaluate MAiD, we conduct two user studies where we compare the mapping productivity of our validation-based editor (coupled with our map inference approach) to an editor that requires manual tracing. In the first study, we ask participants to map roads in an area of Indonesia with no coverage in OpenStreetMap, with the goal of maximizing the percentage of houses covered by the mapped road network. We find that, given a fixed time to map roads, participants are able to produce road network graphs with 1.7x the coverage and comparable error when using MAiD. In the second study, participants add roads in an area of Washington where major roads are already mapped. With MAiD, participants add 3.5x more roads with comparable error. In summary, the contributions of this paper are: • We develop MAiD, a machine-assisted map editing tool that enables efficient human validation of automatically inferred roads. • We propose a novel pruning algorithm and teleport feature that focus validation efforts on tasks where machine-assisted editing offers the greatest improvement in mapping productivity. • We develop an approach for inferring road topology from aerial imagery that complements MAiD by improving on prior work. • We conduct user studies to evaluate MAiD in realistic editing scenarios, where we use the current state of OpenStreetMap, and find that MAiD improves mapping productivity by as much as 3.5x. The remainder of this paper is organized as follows. In Section 2, we discuss related work. Then, in Section 3, we detail the machine-assisted map editing features that we develop to incorporate automatic map inference into the map editing process. In Section 4, we introduce our novel approach for map inference from aerial imagery. 
Finally, we evaluate MAiD and our map inference algorithm in Section 5, and conclude in Section 6.

UI for Validation

We build MAiD, where we incorporate our machine-assistance features into iD, a web-based OpenStreetMap editor. A road network graph is a graph where vertices are annotated with spatial coordinates (latitude and longitude) and edges correspond to straight-line road segments. MAiD inputs an existing road network graph $G_0 = (V_0, E_0)$ containing roads already incorporated in the map. To use MAiD, users first select a region of interest for improving map coverage. MAiD runs an automatic map inference approach in this region to obtain an inferred road network graph $G = (V, E)$ containing inferred segments corresponding to unmapped roads. $G$ should satisfy $E_0 \cap E = \emptyset$; however, $G$ and $G_0$ share vertices at the points where inferred segments connect with the existing map. To make validation of automatically inferred segments intuitive, MAiD then produces a yellow overlay that highlights inferred segments in $G$ over the aerial imagery. Although the overlay is partially transparent, in some cases it is nevertheless difficult to verify the position of the road in the imagery when the overlay is active; thus, users can press and hold a key to temporarily hide the overlay so that they can consult the imagery. After verifying that an inferred segment is correct, users can left-click the segment to incorporate it into the map. Existing functionality in the editor can then be used to adjust the geometry or topology of the road. If an inferred segment is erroneous, users can either ignore the segment, or right-click on the segment to hide it. Figure 2 shows the MAiD editing workflow.

Mapping Major Roads

However, we find that this validation-based UI alone does not significantly increase mapping productivity. To address this, we first consider adding roads in regions where the map has low coverage. In practice, when mapping these regions, users typically focus on tracing major, arterial roads that form the backbone of the road network. More precisely, major roads connect centers of activity within a city, or link towns and villages outside cities; in OpenStreetMap, these roads are labelled "primary", "secondary", or "tertiary". Users skip short, minor roads because they are not useful until these important links are mapped. Because major roads span large distances, though, tracing them is slow. Thus, validation can substantially reduce the mapping time for these roads. Supporting efficient validation of major roads requires the pruning of inferred segments corresponding to minor roads. However, automatically distinguishing major roads is difficult. Often, major roads have the same width and appearance as minor roads in aerial imagery. Similarly, while major roads in general have higher coverage by GPS trajectories, more trips may traverse minor roads in population centers than major roads in rural regions. Rather than detecting major roads from the data source, we propose a shortest-path-based pruning scheme that operates on an inferred road network graph to retain only inferred segments that correspond to major roads. Intuitively, major roads are related to shortest paths: because major roads offer fast connections between far apart locations, they should appear on shortest paths between such locations. We initially applied betweenness centrality [8], a measure of edge importance based on shortest paths.
The betweenness centrality of an edge is the number of shortest paths between unique origin-destination pairs that pass through the edge. (When computing shortest paths in the road network graph, the length of an edge is simply the distance between its endpoints.) Formally, for a road network graph $G = (V, E)$, the betweenness centrality of an edge $e$ is:

$$g(e) = \sum_{s \ne t \in V} \mathbb{I}\left[e \in \text{shortest-path}(s, t)\right]$$

We can then filter edges in the graph by thresholding based on the betweenness centrality scores. However, we find that segments with high betweenness centrality often do not correspond to important links in the road network. When using a high threshold, the segments produced after thresholding cover major roads connecting dense clusters in the original graph, but miss connections to smaller clusters. When using a low threshold, most major roads are retained, but minor roads in dense clusters are also retained. Figure 3 shows an example of this issue. (Figure 3: Grey segments are pruned to produce a road network graph containing the blue segments. On the left, a high threshold misses the road to the eastern cluster. On the right, a low threshold includes small roads in the northern and southern clusters.) Additionally, different regions require very different thresholds. Thus, we propose an adaptation of betweenness centrality for our pruning problem.

Pruning Minor Roads

Fundamentally, betweenness centrality fails to consider the overall spatial distribution of vertices in the road network graph. Dense but compact clusters in the road network should not have an undue influence on the pruning process. Our pruning approach builds on our earlier intuition that major roads connect far apart locations. Thus, rather than considering all shortest paths in the graph, we focus on long shortest paths. Additionally, we observe that a path may use minor roads near the source and near the destination, but edges in the middle of a shortest path are more likely to be major roads. We first cluster the vertices of the road network. Then, we compute shortest paths between cluster centers that are at least a minimum radius $R$ apart. Rather than computing a score and then thresholding on it, we build a set of edges $E_{major}$ containing edges corresponding to major roads that we will retain. For each shortest path, we trim a fixed distance from the ends of the path, and add all edges in the remaining middle of the path to $E_{major}$. We prune any edge that does not appear in $E_{major}$. Figure 4 illustrates our approach, and a sketch of the procedure is given below. We find that our approach is robust to the choice of the clustering algorithm. Clustering is primarily used to avoid placing cluster centers at vertices that are at the end of a long road that only connects a small number of destinations (and, thus, isn't a major road). In our implementation, we use a simple grid-based clustering scheme: we divide the road network into a grid of $r \times r$ cells, remove cells that contain less than a minimum number of vertices, and then place cluster centers at the mean position of the vertices in the remaining cells. We use $r$ = 1 km, $R$ = 5 km. In practice, we find that for constant $R$, the runtime of our approach scales linearly with the length of the input road network. MAiD Implementation. We add a button to toggle between an overlay containing all inferred roads and an overlay after pruning. Figure 5 shows an example of pruning in Indonesia.
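The following is a minimal sketch of the pruning scheme as I read it, using networkx; the graph representation ('pos' vertex attributes, 'length' edge weights) and helper names are hypothetical assumptions, not the authors' implementation.

```python
# A minimal sketch (my reading of the pruning scheme, with hypothetical
# helper names) using networkx. Vertices carry 'pos' attributes in meters.
import itertools
import networkx as nx

def euclidean(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def prune_minor_roads(G, centers, R=5000.0, trim=1000.0):
    """Keep only edges on the middles of long shortest paths.
    G: weighted road graph (edge attr 'length'); centers: cluster-center
    vertex ids; R: minimum center separation; trim: distance trimmed from
    both ends of each path."""
    E_major = set()
    for s, t in itertools.combinations(centers, 2):
        if euclidean(G.nodes[s]['pos'], G.nodes[t]['pos']) < R:
            continue  # only consider far-apart cluster centers
        try:
            path = nx.shortest_path(G, s, t, weight='length')
        except nx.NetworkXNoPath:
            continue
        # Cumulative distance along the path, then keep edges whose
        # distance from both endpoints exceeds `trim`.
        dists = [0.0]
        for u, v in zip(path, path[1:]):
            dists.append(dists[-1] + G.edges[u, v]['length'])
        total = dists[-1]
        for i, (u, v) in enumerate(zip(path, path[1:])):
            if dists[i] >= trim and total - dists[i + 1] >= trim:
                E_major.add(frozenset((u, v)))
    return G.edge_subgraph(
        [(u, v) for u, v in G.edges if frozenset((u, v)) in E_major])
```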
Teleporting to Unmapped Roads

In regions where the map already has high coverage, further improving the map coverage is tedious. Because most roads already appear in the map, users need to slowly scan the aerial imagery to identify unmapped roads in a very time-consuming process. To address this, we add a teleport capability to the map editor, which pans the editor viewport directly to an area with unmapped roads. Specifically, we identify connected components in the inferred road network $G$, and pan to a connected component. This functionality enables a user to teleport to an unmapped component, add the roads, and then immediately teleport to another component. By eliminating the time cost of searching for unmapped roads in the imagery, we speed up the mapping process significantly. However, there may be hundreds of thousands of connected components, and validating all of them may not be practical. Thus, we propose a prioritization scheme so that longer roads that offer more alternate connections between points on the existing road network are validated first. Let $\text{area}(C)$ be the area of a convex hull containing the edges of a connected component $C$ in $G$, and let $\text{conn}(C)$ be the number of vertices that appear in both $C$ and $G_0$, i.e., the number of connections between the existing road network and the inferred component $C$. We rank connected components by $\text{score}(C) = \text{area}(C) + \lambda\,\text{conn}(C)$, for a weighting factor $\lambda$.

FAST, ACCURATE MAP INFERENCE

In the map inference problem, given an existing road network graph $G_0 = (V_0, E_0)$, we want to produce an inferred road network graph $G = (V, E)$ where each edge in $E$ corresponds to a road segment visible in the imagery but missing from the existing map. Prior work on extracting road topology from aerial imagery generally employs a two-stage segmentation-based architecture. First, a convolutional neural network (CNN) is trained to label pixels in the aerial imagery as either "road" or "non-road". To extract a road network graph, the CNN output is passed through a heuristic post-processing pipeline that begins with thresholding, morphological thinning [20], and Douglas-Peucker simplification [7]. However, robustly extracting a graph from the CNN output is challenging, and the post-processing pipeline is error-prone; often, noise in the CNN output is amplified in the final road network graph [2]. Rather than segmenting the imagery, RoadTracer [2] and IDL [17] propose an iterative graph construction (IGC) approach that improves accuracy by deriving the road network graph more directly from the CNN. IGC uses a step-by-step process to construct the graph, where each step contributes a short segment of road to a partial graph. To decide where to place this segment, IGC queries the CNN, which outputs the most likely direction of an unexplored road. Because IGC queries the CNN on each step, though, it requires an order of magnitude more inference steps than segmentation-based approaches. We find that IGC is over six times slower than segmentation. Thus, existing map inference methods are not suitable for the interactive nature of MAiD. We combine the two-stage architecture of segmentation-based approaches with the road-direction output and iterative search process of IGC to achieve a high-speed, high-accuracy approach. In the first stage, rather than labeling pixels as road or non-road, we apply a CNN on the aerial imagery to annotate each pixel in the imagery with the direction of roads near that pixel. Figure 6 shows an example of these annotations. In the second stage, we iteratively construct a road network graph by following these directions in a search process.
Ground Truth Direction Labels

We first describe how we obtain the per-pixel road-direction information shown in Figure 6 from a ground truth road network $G^* = (V^*, E^*)$. For each pixel $(i, j)$, we compute a set of angles $A^*_{i,j}$. If there are no edges in $G^*$ within a matching threshold of $(i, j)$, then $A^*_{i,j} = \emptyset$. Otherwise, suppose $e$ is the closest edge to $(i, j)$, and let $p$ be the closest point on $e$, computed by projecting $(i, j)$ onto $e$. Let $P_{i,j}$ be the set of points in $G^*$ that are a fixed distance $D$ from $p$; put another way, $P_{i,j}$ contains each point $p'$ such that $p'$ falls on some edge $e' \in E^*$, and the shortest distance from $p$ to $p'$ in $G^*$ is $D$. Then, $A^*_{i,j} = \{\text{angle}(p' - (i, j)) \mid p' \in P_{i,j}\}$, i.e., $A^*_{i,j}$ contains the angle from $(i, j)$ to each point in $P_{i,j}$. Figure 7 shows an example of computing $A^*_{i,j}$.

Representing Road Directions. We represent $A^*$ as a 3-dimensional matrix $U^*$ that can be output by a CNN. We discretize the space of angles corresponding to road directions into $b = 64$ buckets, where the $k$th bucket covers the range of angles from $\frac{2k\pi}{b}$ to $\frac{2(k+1)\pi}{b}$. We then convert each set of road directions $A^*_{i,j}$ to a $b$-vector $u^*(i, j)$, where $u^*(i, j)_k = 1$ if there is some angle in $A^*_{i,j}$ falling into the $k$th angle bucket, and $u^*(i, j)_k = 0$ otherwise. Then, $U^*_{i,j,k} = u^*(i, j)_k$. A sketch of this encoding follows the architecture description below.

CNN Architecture. Our CNN model inputs the RGB channels from the $w \times h$ aerial imagery, and outputs a $w \times h \times b$ matrix $U$. We apply 16 convolutional layers in a U-Net-like configuration [12], where the first 11 layers downsample to 1/32 the input resolution, and the last 5 layers upsample back up to 1/4 the input resolution. We use 3 × 3 kernels in all layers. We use sigmoid activation in the output layer and rectified linear activation in all other layers. We use batch normalization in the 14 intermediate layers between the input and output layers. We train the CNN on random 256 × 256 crops of the imagery with a mean-squared-error loss, $\sum_{i,j,k} (U_{i,j,k} - U^*_{i,j,k})^2$, and use the ADAM gradient descent optimizer [10].
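The angle-bucket encoding above is simple to express in code. The following is my own illustration of the encoding (the helper name and angle conventions are assumptions), not the authors' implementation.

```python
# A minimal sketch (my own illustration, not the authors' code) of encoding
# a set of road-direction angles A*_{i,j} into the b-vector u*(i,j).
import math
import numpy as np

B = 64  # number of angle buckets, as in the text

def encode_directions(angles, b=B):
    """angles: iterable of angles in radians (any range); returns a
    length-b 0/1 vector with 1 for each bucket containing an angle."""
    u = np.zeros(b, dtype=np.float32)
    for a in angles:
        a = a % (2 * math.pi)           # normalize into [0, 2*pi)
        k = int(a / (2 * math.pi / b))  # bucket covering [2k*pi/b, 2(k+1)*pi/b)
        u[min(k, b - 1)] = 1.0
    return u

# Example: a pixel on a straight east-west road sees roads at 0 and pi.
print(np.nonzero(encode_directions([0.0, math.pi]))[0])  # -> [ 0 32]
```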
Search Process. At inference time, after applying the CNN on the aerial imagery to obtain $U$, we perform a search process using the predicted road directions in $U$ to derive a road network graph. We adapt the search process from IGC. Essentially, the search iteratively follows directions in $U$ to construct the graph, adding a fixed-length road segment on each step. We assume that a set of points $V_{init}$ known to be on the road network is provided. If there is an existing map $G_0$, we will show later how to derive $V_{init}$ from $G_0$. Otherwise, $V_{init}$ may be derived from peaks in the two-dimensional matrix $m(U)_{i,j} = \max_k U_{i,j,k}$. We initialize a road network graph $G$ and a vertex stack $S$, and populate both with vertices at the points in $V_{init}$. Let $S_{top}$ be the vertex at the head of $S$, and let $u_{top} = U(S_{top})$ be the vector in $U$ corresponding to the position of $S_{top}$. For an angle bucket $a$, $u_{top,a}$ is the predicted likelihood that there is a road in the direction corresponding to $a$ from $S_{top}$. On each step of the search, we use $u_{top}$ to decide whether there is a road segment adjacent to $S_{top}$ that hasn't yet been mapped in $G$, and if there is such a segment, what direction that segment extends in. We first mask out directions in $u_{top}$ corresponding to roads already incorporated into $G$ to obtain a masked vector $\text{mask}(u_{top})$; we discuss the masking procedure below. Masking ensures that we do not add a road segment that duplicates a road that we captured earlier in the search process. Then, $\text{mask}(u_{top})_a$ is the likelihood that there is an unexplored road in direction $a$. If the maximum likelihood after masking, $\max_a \text{mask}(u_{top})_a$, exceeds a threshold $T$, then we decide to add a road segment. Let $a_{best} = \arg\max_a \text{mask}(u_{top})_a$ be the direction with the highest likelihood after masking, and let $w_{a_{best}}$ be a unit vector corresponding to angle bucket $a_{best}$. We add a vertex $v$ at $S_{top} + D w_{a_{best}}$, i.e., at the point $D$ away from $S_{top}$ in the direction indicated by $a_{best}$. We then add an edge $(S_{top}, v)$ and push $v$ onto $S$. Otherwise, if $\max_a \text{mask}(u_{top})_a < T$, we stop searching from $S_{top}$ (since there are no unexplored directions with a high enough confidence in $U$) by popping $S_{top}$ from $S$. On the next search step, we will return to the previous vertex in $S$. Figure 8 illustrates the search process. At the top, we show three search iterations, where we add a segment, stop, and then add another segment. At the bottom, we show the fourth iteration in detail. Likelihoods in $u_{top}$ peak to the left, top-left, and right. After masking, only the blue bars pointing right remain, since the left and top-left directions correspond to roads that we already mapped. We take the maximum of these remaining likelihoods and compare it to the threshold $T$ to decide whether to add a segment from $S_{top}$ or stop. When searching, we may need to merge the current search path with other parts of the graph. For example, in the fourth iteration of Figure 8, we approach an intersection on the right where the perpendicular road was already added to $G$ earlier in the search. We handle merging with a simple heuristic that avoids creating spurious loops. Let $N_k(S_{top})$ be the set of vertices within $k$ edges of $S_{top}$. If $S_{top}$ is within $2D$ of another vertex $v$ in $G$ such that $v \notin N_5(S_{top})$, then we add an edge $(S_{top}, v)$.

Masking Explored Roads. If we did not mask during the search, we would repeatedly explore the same road in a loop. Masking out directions corresponding to roads that were explored earlier in the search ensures that roads are not duplicated in $G$. We first mask out directions that are similar to the angle of edges incident to $S_{top}$: for each edge $e$ incident to $S_{top}$, if the angle of $e$ falls in bucket $a$, we set $\text{mask}(u_{top})_{a+k} = 0$ for all $k$, $-5 \le k \le 5$. However, this is not sufficient. In the fourth iteration of Figure 8, there is an explored road to the north of $S_{top}$, but that road is connected to a neighbor west of $S_{top}$ rather than directly to $S_{top}$. Thus, we also mask directions that are similar to the angle from $S_{top}$ to any vertex in $N_5(S_{top})$. A sketch of one search step follows.
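The following is a hypothetical sketch of a single search step, using simplified data structures of my own choosing (a dict-of-sets graph keyed by vertex positions); it masks only incident-edge angles, omitting the N_5 neighbor masking and the merge heuristic for brevity, so it is an illustration of the idea rather than the authors' algorithm.

```python
# A minimal sketch (hypothetical names and data structures, not the
# authors' code) of one step of the direction-following search.
import numpy as np

B, D, T = 64, 12.0, 0.4  # angle buckets, step length (pixels), threshold

def search_step(G, S, U):
    """G: dict vertex -> set of neighbor vertices (vertices are (x, y));
    S: vertex stack (list); U: (h, w, B) predicted direction likelihoods."""
    top = S[-1]
    u = U[int(top[1]), int(top[0])].copy()
    # Mask buckets within 5 of the angle of each incident edge
    # (the paper additionally masks angles toward N_5 neighbors).
    for nbr in G[top]:
        a = np.arctan2(nbr[1] - top[1], nbr[0] - top[0]) % (2 * np.pi)
        k = int(a / (2 * np.pi / B))
        for off in range(-5, 6):
            u[(k + off) % B] = 0.0
    if u.max() < T:
        S.pop()  # no unexplored direction: backtrack
        return
    k_best = int(u.argmax())
    theta = (k_best + 0.5) * 2 * np.pi / B
    v = (top[0] + D * np.cos(theta), top[1] + D * np.sin(theta))
    G.setdefault(v, set()).add(top)  # add edge (top, v)
    G[top].add(v)
    S.append(v)
```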
To initialize the search, we set $G = G'_0$ and add the vertices of $G'_0$ to $S$. We then run the search process to termination. The search produces a merged road network graph $G$ that contains both the segments in the existing map and the inferred segments. We extract the inferred road network graph by removing the edges of $G'_0$ from this output graph $G$.
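Putting the pieces together, the following sketch shows one iteration of the search, including the masking of explored directions. The graph helpers (neighbors, near_hops, add_vertex, add_edge) are hypothetical stand-ins for whatever graph structure is used, and the values of D and T are assumptions, not the paper's tuned parameters.

```python
import numpy as np

B = 64        # number of angle buckets (matches the CNN output)
D = 20.0      # segment step length, in pixels (assumed value)
T = 0.4       # likelihood threshold (assumed; tuned in practice)

def angle_bucket(dx, dy):
    """Map a direction vector to its angle bucket in [0, B)."""
    theta = np.arctan2(dy, dx) % (2 * np.pi)
    return int(theta // (2 * np.pi / B)) % B

def search_step(G, S, U):
    """One search iteration: extend from the stack head S[-1] or backtrack.
    A simplified sketch of the procedure described above. G is assumed to
    expose neighbors(v), near_hops(v, k) (vertices within k edges),
    add_vertex(v), and add_edge(u, v) -- hypothetical helpers, not an
    actual MAiD API. Vertices are (x, y) tuples; U is h x w x B."""
    top = S[-1]
    u = U[int(top[1]), int(top[0])].copy()  # u_top: likelihoods at S_top
    # Mask directions toward explored roads: edges incident to S_top and
    # vertices within 5 hops, zeroing the 5 neighboring buckets each way.
    for v in set(G.neighbors(top)) | set(G.near_hops(top, k=5)):
        a = angle_bucket(v[0] - top[0], v[1] - top[1])
        for k in range(-5, 6):
            u[(a + k) % B] = 0.0
    if u.max() < T:
        S.pop()  # no unexplored direction is confident enough: backtrack
        return
    a_best = int(u.argmax())
    theta = (a_best + 0.5) * 2 * np.pi / B  # center of the winning bucket
    v = (top[0] + D * np.cos(theta), top[1] + D * np.sin(theta))
    G.add_vertex(v)
    G.add_edge(top, v)
    S.append(v)  # continue the search from the new vertex
```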
EVALUATION

To evaluate MAiD, we perform two user studies. In Section 5.1, we consider a region of Indonesia where OpenStreetMap has poor coverage to evaluate our pruning approach. In Section 5.2, we turn to a region of Washington where major roads are already mapped to evaluate the teleport functionality. In Section 5.3, we compare our map inference scheme against prior work in map inference from aerial imagery on the RoadTracer dataset [2]. We show qualitative results when using MAiD with our map inference approach in Section 5.4.

Indonesia Region: Low Coverage

We first conduct a user study to evaluate mapping productivity when adding roads in a small area of Indonesia with no coverage in OSM. With MAiD, the interface includes a yellow overlay of automatically inferred roads; to obtain these roads, we generate an inferred graph from aerial imagery using our map inference method and then apply our pruning algorithm to retain only the major roads. After validating the geometry of a road, the user can click it to incorporate the road into the map. In the baseline unmodified editor, users manually trace roads by performing repeated clicks along the road in the imagery.

Procedure. The task is to map major roads in a region using the imagery, with the goal of maximizing coverage in terms of the percentage of houses within 1000 ft of the road network. Users are also asked to produce a connected road network and to minimize the distance between road segments and the road position in the imagery. We define two metrics to measure this distance: road geometry error (RGE), the average distance between road segments that the participants add and a ground truth map that we hand-label, and max-RGE, the maximum such distance. Ten volunteers, all graduate and postdoctoral students aged 20-30, participate in our study. We use a within-subjects design; five participants perform the task first on the baseline editor and then on MAiD, and five participants proceed in the opposite order. Participants perform the experiment in a twenty-minute session. We select three regions from the unmapped area: an example region, a training region, and a test region. We first introduce participants to the iD editor and enumerate the editor features as they add one road. We then describe the task and show them the example region, where the task has already been completed. Participants briefly practice the task on the training region, and then have four minutes to perform the task on the test region. We repeat the training and testing for both editors. We choose the test region so that it is too large to map within the allotted four minutes. We then evaluate the road network graphs that the participants produce using each editor in terms of coverage (percentage of houses covered), RGE, and max-RGE.

Results. We report the mean and standard error of the percentage of houses covered by the participants with the two editors in Figure 9. We find that MAiD improves the mean percentage covered by 1.7x (from 17% to 29%). While manually tracing a major road may take 15-30 clicks, the road can be captured with a single click in MAiD once the geometry of an inferred segment is verified. RGE and max-RGE are comparable for both editors, although there is more variance between participants with the baseline editor.

Washington Region: High Coverage

Next, we evaluate mapping productivity in a high-coverage region of rural Washington. With MAiD, users can press a Teleport button to immediately pan to a group of unmapped roads. A yellow overlay includes all inferred segments covering those roads; we do not use our pruning approach for this study. With the baseline editor, users need to pan around the imagery to find unmapped roads. After finding an unmapped road, users manually trace it.

Procedure. The task is to add roads that are visible in the aerial imagery but not yet covered by the map. Because major roads in this region are already mapped, rather than measuring house coverage, we ask users to add as much length of unmapped roads as possible. We again ask users to minimize the distance between road segments and the road position in the imagery, and to ensure that new segments are connected to the existing map. Ten volunteers (graduate students, postdoctoral students, and professional software engineers, all aged 20-30) participate in our study. We again use a within-subjects design and counterbalance the order of the baseline editor and MAiD. Participants perform the experiment in a fifteen- to twenty-minute session. For each editing interface, we first provide instructions on the task and editor functionality (accompanied by a 30-second video in which we use the editor), and show images of example unmapped roads. Participants then practice the task on a training region in a warm-up phase, with a suggested duration of two to three minutes. After participants finish the warm-up, they are given three minutes to perform the task on a test region. As before, we repeat training and testing for both interfaces. We evaluate the road network graphs that the participants produce in terms of total road length, RGE, and max-RGE.

Results. We report the mean and standard error of total road length added by the participants in Figure 10. MAiD improves mapping productivity in terms of road length by 3.5x (from 25 km to 88 km). Most of this improvement can be attributed to the teleport functionality eliminating the need to pan around the imagery to find unmapped roads. Additionally, because teleport prioritizes large unmapped components with many connections to the existing road network, validating these components is much faster than manually tracing them. As before, RGE and max-RGE are comparable for the two editors. Mean and standard error of RGE is 7.0 m ± 0.7 m with the baseline editor and 5.3 m ± 0.1 m with MAiD. For max-RGE, it is 53 m ± 14 m with the baseline and 39 m ± 4 m with MAiD.

Automatic Map Inference

Dataset. We evaluate our approach for inferring road topology from aerial imagery on the RoadTracer dataset [2], which contains imagery and ground truth road network graphs from forty cities. The data is split into a training set and a test set; the test set includes data for a 16 sq km region around the city center of each of 15 cities, while the training set contains data from 25 other cities. Imagery is from Google Maps, and road network data is from OpenStreetMap. The test set includes 9 cities in the U.S., 3 in Canada, and 1 each in France, the Netherlands, and Japan.

Baselines. We compare against the baseline segmentation approach and the IGC implementation from [2]. The segmentation approach applies a 13-layer CNN and then extracts a road network graph using thresholding, thinning, and refinement. The IGC approach, RoadTracer, trains a CNN using a supervised dynamic-labels procedure that resembles reinforcement learning. This approach achieves state-of-the-art performance on the dataset, on which DeepRoadMapper [11] has also been evaluated.

Metrics. We evaluate the road network graphs output by the map inference schemes with the TOPO metric [3], which is commonly used in the automatic road map inference literature [1]. TOPO evaluates both the geometrical accuracy (how closely the inferred segments align with the actual road) and the topological accuracy (correct connectivity) of an inferred map. It simulates an agent traveling on the road network from an origin location and compares the destinations that can be reached within a fixed radius in the inferred map with those that can be reached in the ground truth map. This comparison is repeated over a large number of randomly selected origins to obtain an average precision and recall. We also evaluate the execution time of the schemes on an AWS p2.xlarge instance with an NVIDIA Tesla K80 GPU.
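For intuition, here is a much-simplified, TOPO-style computation. It captures the reachable-destination comparison but omits details of the reference implementation; graphs are assumed to be weighted undirected networkx graphs with $(x, y)$ coordinate nodes, and the radius and matching thresholds are illustrative values.

```python
import networkx as nx

def topo_like(G_inf, G_gt, origin_pairs, r=300.0, match_dist=15.0):
    """Simplified TOPO-style precision/recall.
    Assumes weighted, undirected networkx graphs whose nodes are (x, y)
    coordinates and whose edges carry a 'weight' attribute in meters.
    A rough sketch of the metric's idea, not the implementation in [3]."""
    def reachable(G, origin):
        # Nodes reachable within network distance r of the origin.
        return set(nx.single_source_dijkstra_path_length(
            G, origin, cutoff=r, weight='weight'))

    def matched(A, B):
        # Count points in A lying within match_dist of some point in B.
        return sum(
            1 for p in A
            if any(abs(p[0] - q[0]) <= match_dist and
                   abs(p[1] - q[1]) <= match_dist for q in B))

    m_inf = m_gt = n_inf = n_gt = 0
    for o_inf, o_gt in origin_pairs:  # paired origins, one node per graph
        R_inf, R_gt = reachable(G_inf, o_inf), reachable(G_gt, o_gt)
        m_inf += matched(R_inf, R_gt); n_inf += len(R_inf)
        m_gt += matched(R_gt, R_inf); n_gt += len(R_gt)
    precision = m_inf / n_inf if n_inf else 0.0
    recall = m_gt / n_gt if n_gt else 0.0
    return precision, recall
```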
Results. We show TOPO precision-recall curves obtained by varying parameter choices in Figure 11, and average execution time over the fifteen test regions, for parameters corresponding to a 10% error rate, in Table 1. We find that our approach exhibits both the high accuracy of IGC and the speed of segmentation methods. Our map inference approach has TOPO performance comparable to IGC, while outperforming the segmentation approach on error rate by up to 1.6x. This improvement in error rate is crucial for machine-assisted map editing, as it reduces the time users spend validating incorrect inferred segments. On execution time, our approach performs comparably to the segmentation approach, while IGC is almost 8x slower. A low execution time is crucial to MAiD's interactive workflow: users can explore a new region for two to three minutes while the automatic map inference approach runs, but a fifteen-minute runtime breaks the workflow.

Qualitative Results

In Figure 12, we show qualitative results from MAiD when using segments inferred by our map inference algorithm.

CONCLUSION

Full automation for building road maps has proven infeasible due to high error rates in automatic map inference methods. We instead propose machine-assisted map editing, where we integrate automatically inferred road segments into the existing map editing process by having humans validate these segments before they are incorporated into the map. Our map editor, Machine-Assisted iD (MAiD), improves mapping productivity by as much as 3.5x by focusing on tasks where machine assistance provides the most benefit. We believe that, by improving mapping productivity, MAiD has the potential to substantially improve coverage in road maps.

Figure 12: Qualitative results from MAiD with our map inference algorithm. Segments in the existing map are in white. We show our pruning approach applied to a region of Indonesia in the top image, with pruned roads in purple and retained roads in yellow. The middle and bottom images show connected components of inferred segments to which the teleport feature pans the user, in Washington and Bangkok respectively.