Columns: aid (string), mid (string), abstract (string), related_work (string), ref_abstract (dict), title (string), text_except_rw (string), total_words (int64)
1808.06356
2886884222
We consider the problem of inferring the directed, causal graph from observational data, assuming no hidden confounders. We take an information theoretic approach, and make three main contributions. First, we show how through algorithmic information theory we can obtain SCI, a highly robust, effective and computationally efficient test for conditional independence---and show it outperforms the state of the art when applied in constraint-based inference methods such as stable PC. Second, building upon SCI, we show how to tell apart the parents and children of a given node based on the algorithmic Markov condition. We give the Climb algorithm to efficiently discover the directed, causal Markov blanket---and show it is at least as accurate as inferring the global network, while being much more efficient. Last, but not least, we detail how we can use the Climb score to direct those edges that state of the art causal discovery algorithms based on PC or GES leave undirected---and show this improves their precision, recall and F1 scores by up to 20%.
The discovery of Markov blankets is important in two regards. First, the Markov blanket represents the optimal set of variables for feature selection @cite_15 , and second, for investigating the local structure around a target it is much faster to discover than the whole Bayesian network @cite_2 . The idea of first discovering the neighbourhood of a node, instead of the full Bayesian network, became more common with the grow and shrink (GS) algorithm @cite_8 . It consists of two subroutines. First, it discovers the potential parents and children in a bottom-up approach. Then, it finds the spouses based on the parents and children from the previous step.
{ "abstract": [ "In recent years, Bayesian networks have become highly successful tool for diagnosis, analysis, and decision making in real-world domains. We present an efficient algorithm for learning Bayes networks from data. Our approach constructs Bayesian networks by first identifying each node's Markov blankets, then connecting nodes in a maximally consistent way. In contrast to the majority of work, which typically uses hill-climbing approaches that may produce dense and causally incorrect nets, our approach yields much more compact causal networks by heeding independencies in the data. Compact causal networks facilitate fast inference and are also easier to understand. We prove that under mild assumptions, our approach requires time polynomial in the size of the data and the number of nodes. A randomized variant, also presented here, yields comparable results at much higher speeds.", "We present an algorithmic framework for learning local causal structure around target variables of interest in the form of direct causes effects and Markov blankets applicable to very large data sets with relatively small samples. The selected feature sets can be used for causal discovery and classification. The framework (Generalized Local Learning, or GLL) can be instantiated in numerous ways, giving rise to both existing state-of-the-art as well as novel algorithms. The resulting algorithms are sound under well-defined sufficient conditions. In a first set of experiments we evaluate several algorithms derived from this framework in terms of predictivity and feature set parsimony and compare to other local causal discovery methods and to state-of-the-art non-causal feature selection methods using real data. A second set of experimental evaluations compares the algorithms in terms of ability to induce local causal neighborhoods using simulated and resimulated data and examines the relation of predictivity with causal induction performance. Our experiments demonstrate, consistently with causal feature selection theory, that local causal feature selection methods (under broad assumptions encompassing appropriate family of distributions, types of classifiers, and loss functions) exhibit strong feature set parsimony, high predictivity and local causal interpretability. Although non-causal feature selection methods are often used in practice to shed light on causal relationships, we find that they cannot be interpreted causally even when they achieve excellent predictivity. Therefore we conclude that only local causal techniques should be used when insight into causal structure is sought. In a companion paper we examine in depth the behavior of GLL algorithms, provide extensions, and show how local techniques can be used for scalable and accurate global causal graph learning.", "Learning Markov blanket (MB) structures has proven useful in performing feature selection, learning Bayesian networks (BNs), and discovering causal relationships. We present a formula for efficiently determining the number of MB structures given a target variable and a set of other variables. As expected, the number of MB structures grows exponentially. However, we show quantitatively that there are many fewer MB structures that contain the target variable than there are BN structures that contain it. In particular, the ratio of BN structures to MB structures appears to increase exponentially in the number of variables." ], "cite_N": [ "@cite_8", "@cite_15", "@cite_2" ], "mid": [ "2129564794", "2133091666", "1660826287" ] }
Causal Discovery by Telling Apart Parents and Children
Many mechanisms, including gene regulation and control mechanisms of complex systems, can be modelled naturally by causal graphs. While in theory it is easy to infer causal directions if we can manipulate parts of the network, i.e. through controlled experiments, in practice controlled experiments are often too expensive or simply impossible, which means we have to work with observational data [21]. Constructing the causal graph given observations over its joint distribution can be understood globally, as finding the whole directed network, or locally, as discovering the local environment of a target variable in a causal graph [20], [29]. For both problem settings, constraint-based algorithms using conditional independence tests belong to the state of the art [2], [8], [29], [22]. As the name suggests, those algorithms strongly rely on one key component: the independence test. For discrete data, the state-of-the-art methods often rely on the G^2 test [1], [26]. While it performs well given enough data, as we will see, it has a very strong bias towards indicating independence in case of sparsity. Another often used method is conditional mutual information (CMI) [31], which like G^2 performs well in theory, but in practice has the opposite problem: in case of sparsity it tends to find spurious dependencies, i.e. it is likely to find no independence at all, unless we set arbitrary cut-offs.

To overcome these limitations, we propose a new independence measure based on algorithmic conditional independence [11], which we instantiate through the Minimum Description Length principle [10], [24]. In particular, we do so using stochastic complexity [25], which allows us to measure the complexity of a sample in a minimax optimal way. That is, it performs as close to the true model as possible (with optimality guarantees), regardless of whether the true model lies inside or outside the model class [10]. As we consider discrete data, we instantiate our test using stochastic complexity for multinomials. As we will show, our new measure overcomes the drawbacks of both G^2 and CMI, and performs much better in practice, especially under sparsity.

Building upon our independence test, we consider the problem of finding the Markov blanket, short MB (see example in Fig. 1). Precisely, the Markov blanket of a target variable T is defined as the minimal set of variables, conditioned on which all other variables are independent of the target [20]. This set includes its parents, children and parents of common children, also called spouses. Simply put, the Markov blanket of a target node T is the minimal set of nodes that contains all information about T [1]. Algorithms for finding the Markov blanket stop at this point and return a set of nodes, without identifying the parents, the children or the spouses [2], [8], [22]. We propose CLIMB, a new method based on the algorithmic Markov condition [11], that is not only faster than state-of-the-art algorithms for discovering the MB, but is, to the best of our knowledge, the first algorithm for discovering the directed, or causal, Markov blanket of a target node without further exploration of the network. That is, it tells apart parents, children and spouses.

Last but not least, we consider recovering the full causal graph. Current state-of-the-art constraint-based and score-based algorithms [4], [29] only discover partially directed graphs, as they cannot distinguish between Markov equivalent subgraphs.
Based on CLIMB, we propose a procedure to infer the remaining undirected edges with high precision. The key contributions of this paper are that we (a) define SCI, a new conditional independence test, (b) derive a score to tell apart parents and children of a node, (c) propose CLIMB to discover causal Markov blankets, and (d) show how to use the CLIMB score to orient those edges that cannot be oriented by the PC or GES algorithm.

This paper is structured as follows. In Sec. II we introduce the main concepts and notation, as well as properties of the stochastic complexity. Sec. III discusses related work. Since our contributions build upon each other, we first introduce SCI, our new independence test, in Sec. IV. Next, we define and explain the CLIMB algorithm to find the causal Markov blanket in Sec. V, and extend this idea to decide between Markov equivalent DAGs. We empirically evaluate in Sec. VI and round off with discussion and conclusion in Sec. VII.

II. PRELIMINARIES

In this section, we introduce the notation and main concepts we build upon in this paper. All logarithms are to base 2, and we use the common convention that 0 log 0 = 0.

A. Bayesian Networks

Given an m-dimensional vector (X_1, . . . , X_m), a Bayesian network defines the joint probability over the random variables X_1, . . . , X_m. We specifically consider discrete data, i.e. each variable X_i has a domain X_i of k_i distinct values. Further, we assume we are given n i.i.d. samples drawn from the joint distribution of the network. We express the n-dimensional data vector for the i-th node with x^n_i. To describe interactions between the nodes, we use a directed acyclic graph (DAG) G. We denote the parent set of a node X_i with PA_i, its children with CH_i, and nodes that have a common child with X_i as its spouses SP_i. A set of variables X contains k_X = ∏_{X_j ∈ X} k_j value combinations that can be non-ambiguously enumerated. We write X = j to refer to the j-th value combination of X. For instance, such a set could be the set of parents, children or spouses of a node. As is common for both inferring the Markov blanket as well as the complete network, we assume causal sufficiency, that is, we assume that we have measured all common causes of the measured variables. Further, we assume that the Bayesian network G is faithful to the underlying probability distribution P [29].

Definition 1 (Faithfulness). A Bayesian network G is faithful to a probability distribution P if, for each pair of nodes X_i and X_j in G, X_i and X_j are adjacent in G iff X_i and X_j are dependent given Z, for each Z ⊂ G with X_i, X_j ∉ Z.

(Fig. 2 caption: F ⊥⊥ T | D, E, i.e. F is d-separated from T given D and E, whereas D and E remain dependent on T given the respective other variables.)

Equivalently, it holds that X_i and X_j are d-separated by Z, with Z ⊂ G and X_i, X_j ∉ Z, iff X_i ⊥⊥ X_j | Z [11]. Generally, d-separation is an important concept for constraint-based algorithms, since it is used to prune out false positives. As an example consider Fig. 2. All nodes D, E and F are associated with the target T. However, F ⊥⊥ T | D, E, and hence F is d-separated from T given D and E, which means that it can be excluded from the parent set of T. Building on the faithfulness assumption, it follows that the probability of the whole network can be written as

P(X_1, . . . , X_m) = ∏_{i=1}^{m} P(X_i | PA_i) , (1)

which means that we only need to know the conditional distributions for each node X_i given its parents [11].
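To make the factorisation in Eq. (1) concrete, here is a minimal Python sketch that computes the joint probability of a toy discrete network A → B → C as the product of per-node conditional probability tables; the network, its CPTs and all probability values are invented purely for illustration and are not taken from the paper.

```python
# Minimal sketch of Eq. (1): P(X_1, ..., X_m) = prod_i P(X_i | PA_i)
# for a toy network A -> B -> C with made-up conditional probability tables.

parents = {"A": [], "B": ["A"], "C": ["B"]}

# CPTs: (value, tuple of parent values) -> probability (illustrative numbers only).
cpt = {
    "A": {(0, ()): 0.6, (1, ()): 0.4},
    "B": {(0, (0,)): 0.9, (1, (0,)): 0.1, (0, (1,)): 0.3, (1, (1,)): 0.7},
    "C": {(0, (0,)): 0.8, (1, (0,)): 0.2, (0, (1,)): 0.5, (1, (1,)): 0.5},
}

def joint(assignment):
    """P(X_1 = x_1, ..., X_m = x_m) as the product of P(X_i = x_i | PA_i)."""
    p = 1.0
    for node, pa in parents.items():
        pa_vals = tuple(assignment[q] for q in pa)
        p *= cpt[node][(assignment[node], pa_vals)]
    return p

print(joint({"A": 1, "B": 1, "C": 0}))  # 0.4 * 0.7 * 0.5 = 0.14
```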
Having defined a Bayesian network, it is now easy to explain what a Markov blanket is.

B. Markov Blankets

Markov blankets were first described by Pearl [20]. A Markov blanket MB_T of a target T is the minimal set of nodes in a graph G, given which all other nodes are conditionally independent of T. That is, knowing the values of MB_T, we can fully explain the probability distribution of T, and any further information is redundant [20]. Concretely, the Markov blanket of T consists of the parents, children and spouses of T. An example MB is shown in Fig. 1. Discovering the Markov blanket of a node has several advantages. First, the Markov blanket contains all information that we need to describe a target variable as well as its neighbourhood in the graph. In addition, the MB is theoretically the optimal set of attributes to predict the target values [16] and can be used for feature selection [1], [7]. In addition to those properties, the Markov blanket of a single target can be inferred much more efficiently than the whole Bayesian network [30]. This is especially beneficial if we are only interested in a single target in a large network, e.g. the activation of one gene. Most algorithms to discover the MB rely on faithfulness and a conditional independence test [1]. For the former, we have to trust the data, while for the latter we have a choice. To introduce the independence test we propose, we first need to explain the notions of Kolmogorov complexity and stochastic complexity.

C. Kolmogorov Complexity

The Kolmogorov complexity of a finite binary string x is the length of the shortest binary program p* for a universal Turing machine U that generates x, and then halts [12], [14]. Formally, we have

K(x) = min{ |p| : p ∈ {0, 1}*, U(p) = x } .

That is, program p* is the most succinct algorithmic description of x, or in other words, the ultimate lossless compressor for that string. We will also need conditional Kolmogorov complexity, K(x | y) ≤ K(x), which is again the length of the shortest binary program p* that generates x, and halts, but now given y as input for free. By definition, Kolmogorov complexity will make maximal use of any structure in x that can be expressed more succinctly algorithmically than by printing it verbatim. As such it is the theoretically optimal measure for complexity. However, due to the halting problem it is not computable, nor approximable up to arbitrary precision [14]. We can, however, approximate it from above through MDL.

D. The Minimum Description Length Principle

The Minimum Description Length (MDL) principle [24] provides a statistically well-founded and computable framework to approximate Kolmogorov complexity from above [10]. In refined MDL we measure the stochastic complexity of data x with regard to a model class M, L(x | M) ≥ K(x). The larger this model class, the closer we get to K(x); the ideal version of MDL considers the set of all programs that we know output x and halt, and hence is the same as Kolmogorov complexity. By using a refined MDL code L̄, we have the guarantee that L̄(x | M) is only a constant away from the number of bits we would need to encode the data if we already knew the best model in the model class. This constant is called the regret. Importantly, it does not depend on the data, but only on the model class; hence these guarantees hold even in settings where the data was drawn adversarially [10]. Only for a few model classes is it known how to efficiently compute the stochastic complexity.
One of these is the stochastic complexity for multinomials [13], which we introduce below.

E. Stochastic Complexity for Multinomials

Given n samples of a discrete-valued univariate attribute X with a domain X of k distinct values, x^n ∈ X^n, let θ̂(x^n) denote the maximum likelihood estimator for x^n. Shtarkov [27] defined the minimax optimal normalized maximum likelihood (NML) as

P_NML(x^n | M_k) = P(x^n | θ̂(x^n), M_k) / R^n_{M_k} , (2)

where the normalizing factor, or regret, R^n_{M_k}, relative to the model class M_k is defined as

R^n_{M_k} = Σ_{x^n ∈ X^n} P(x^n | θ̂(x^n), M_k) . (3)

The sum goes over every possible x^n over the domain of X, and for each considers the maximum likelihood for that data given model class M_k. For discrete data, we can rewrite Eq. (2) as

P_NML(x^n | M_k) = ∏_{v=1}^{k} (h_v / n)^{h_v} / R^n_{M_k} ,

writing h_v for the frequency of value v in x^n, resp. Eq. (3) as

R^n_{M_k} = Σ_{h_1+···+h_k = n} n! / (h_1! ··· h_k!) ∏_{v=1}^{k} (h_v / n)^{h_v} .

Mononen and Myllymäki [19] derived a formula to calculate the regret in sub-linear time, meaning that the whole formula can be computed in linear time w.r.t. the number of samples n. We obtain the stochastic complexity for x^n with regard to model class M_k by simply taking the negative logarithm of P_NML, which decomposes into a Shannon entropy and a log-regret term,

SC(x^n | M_k) = − log P_NML(x^n | M_k) = n H(x^n) + log R^n_{M_k} . (4)

Conditional stochastic complexity [28] is defined analogously to conditional entropy, i.e. we have for any x^n, y^n that

SC(x^n | y^n, M_k) = Σ_{v ∈ Y} SC(x^n | y^n = v, M_k) = Σ_{v ∈ Y} h_v H(x^n | y^n = v) + Σ_{v ∈ Y} log R^{h_v}_{M_k} , (5)

with Y the domain of Y, and h_v the frequency of value v in y^n. For notational convenience, wherever clear from context we will write SC(X) for SC(x^n | M_k), resp. SC(X | Y) for SC(x^n | y^n, M_k). For the log-regret terms in Eq. (4), resp. Eq. (5), we will write ∆(X) resp. ∆(X | ·). To introduce our independence test and methods, we need the following property of the conditional stochastic complexity.

Proposition 1. Given two discrete random variables X and Y and a set Z of discrete random variables, it holds that ∆(X | Z) ≤ ∆(X | Z, Y); that is, ∆ is monotone w.r.t. the number of conditioning variables.

To prove Prop. 1, we first show that the regret term is log-concave in n.

Lemma 1. For n ≥ 1, the regret term R^n_{M_k} of the multinomial stochastic complexity of a random variable with a domain size of k ≥ 2 is log-concave in n.

For conciseness, we postpone the proof of Lemma 1 to Appendix VIII-A. Based on Lemma 1 we can now prove Prop. 1.

Proof of Prop. 1: In order to show Prop. 1, we can reduce the statement as follows. Consider that Z contains p distinct value combinations {r_1, . . . , r_p}. If we add Y to Z, the number of distinct value combinations, {l_1, . . . , l_q}, increases to q, where p ≤ q. Consequently, to show that Prop. 1 holds, it suffices to show that

Σ_{i=1}^{p} log R^{|r_i|}_{M_k} ≤ Σ_{j=1}^{q} log R^{|l_j|}_{M_k} , (6)

where Σ_{i=1}^{p} |r_i| = Σ_{j=1}^{q} |l_j| = n. Next, consider w.l.o.g. that each value combination r_i, i = 1, . . . , p, is mapped to one or more value combinations in {l_1, . . . , l_q}. Hence, Eq. (6) holds if log R^n_{M_k} is sub-additive in n. Since we know from Lemma 1 that the regret term is log-concave in n, sub-additivity follows by definition.

IV. INDEPENDENCE TESTING USING STOCHASTIC COMPLEXITY

Most algorithms to discover the Markov blanket rely on two tests: association and conditional independence [1], [7]. The association test has to be precise for two reasons.
First, by being too restrictive it might miss dependencies, which results in a bad recall. Second, if the test is too lenient, we have to test more candidates. This may not sound bad, but as we face an exponential runtime w.r.t. the number of candidates, it very much is in practice. The quality of the conditional independence test is even more important. Algorithms proven to be correct are only correct under the assumption that the conditional independence test is, too [7], [22]. Commonly used conditional independence tests for categorical data, like G^2 or conditional mutual information (CMI), have good properties in the limit, but show several drawbacks at practical sample sizes. In the following, we propose and justify a new test for conditional independence on discrete data. We start with Shannon conditional mutual information as a measure of independence. It is defined as follows.

Definition 2 (Shannon Conditional Independence [6]). Given random variables X, Y and Z. If

I(X; Y | Z) = H(X | Z) − H(X | Z, Y) = 0 , (7)

then X and Y are called statistically independent given Z.

In essence, I(X; Y | Z) is a measure of association of X and Y conditioned on Z, where an association of 0 corresponds to statistical independence. If Z is the empty set, it reduces to standard mutual information, meaning we directly measure the association between X and Y. As I is based on Shannon entropy, it assumes that we know the true distributions. In practice, we of course do not know these, but rather estimate Ĥ from finite samples. This becomes a problem when estimating conditional entropy, as to obtain a reasonable estimate we need a number of samples that is exponential in the domain size of the conditioning variable [15]; if we have too few samples, we tend to underestimate the conditional entropy, overestimate the conditional mutual information, and Eq. (7) will seldom be 0, even when X ⊥⊥ Y | Z. Hence, we would need to set an arbitrary cut-off δ and declare independence whenever I ≤ δ. The problem is, however, that δ is hard to define, since it depends on the complexity of the variables, the amount of noise and the sample size. Therefore, we propose a new independence test based on (conditional) algorithmic independence that remains zero for X ⊥⊥ Y | Z even in settings where the sample size is small and the complexity of the variables is high. This we can achieve by considering not only the complexity of the data under the model (i.e. the entropy) but also the complexity of the model (i.e. the distribution). Before introducing our test, we first need to define algorithmic conditional independence.

Definition 3 (Algorithmic Conditional Independence). Given the strings x, y and z, we write z* to denote the shortest program for z, and analogously (z, y)* for the shortest program for the concatenation of z and y. If

I_A(x; y | z) := K(x | z*) − K(x | (z, y)*) = 0 (8)

holds up to an additive constant that is independent of the data, then x and y are called algorithmically independent given z [3].

As discussed above, Kolmogorov complexity is not computable, and to use I_A in practice we will need to instantiate it through MDL. We do so using stochastic complexity for multinomials. That is, we rewrite Eq. (8) in terms of stochastic complexity,

I_SC(X; Y | Z) = SC(X | Z) − SC(X | Z, Y) (9)
              = n · I(X; Y | Z) + ∆(X | Z) − ∆(X | Z, Y) ,

where n is the number of samples. Note that the regret terms ∆(X | Z) and ∆(X | Z, Y) in Eq. (9) are over the same domain.
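To make Eqs. (4), (5) and (9) concrete, here is a minimal Python sketch of how they could be computed, including the symmetrised SCI score defined next in the text. It is our own illustration, not the reference implementation released by the authors. It assumes the standard linear-time recurrence for the multinomial regret, R^n_{j+2} = R^n_{j+1} + (n/j)·R^n_j (due to Kontkanen and Myllymäki; a different route than the sub-linear formula of [19]), with R^n_1 = 1 and R^n_2 computed by direct summation. Samples are plain Python lists of hashable values; an empty conditioning set is encoded as a constant list.

```python
import math
from collections import Counter

def log_regret(n, k):
    """log2 of the multinomial regret R^n_{M_k} (Eq. 3), via the recurrence
    R^n_{j+2} = R^n_{j+1} + (n/j) R^n_j, starting from R^n_1 = 1 and R^n_2."""
    if n == 0 or k <= 1:
        return 0.0
    # R^n_2 by direct summation over the count h of the first symbol,
    # done in the log domain to stay numerically safe for larger n.
    r_prev = 1.0
    r_curr = sum(math.exp(math.lgamma(n + 1) - math.lgamma(h + 1) - math.lgamma(n - h + 1)
                          + (h * math.log(h / n) if h else 0.0)
                          + ((n - h) * math.log((n - h) / n) if n - h else 0.0))
                 for h in range(n + 1))
    for j in range(1, k - 1):               # walk the recurrence up to R^n_k
        r_prev, r_curr = r_curr, r_curr + (n / j) * r_prev
    return math.log2(r_curr)

def entropy(xs):
    """Empirical Shannon entropy H(x^n) in bits."""
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def sc(xs, k=None):
    """SC(x^n | M_k) = n H(x^n) + log R^n_{M_k}  (Eq. 4)."""
    k = len(set(xs)) if k is None else k
    return len(xs) * entropy(xs) + log_regret(len(xs), k)

def sc_cond(xs, zs, k=None):
    """Conditional stochastic complexity (Eq. 5): sum the plain SC of x^n
    restricted to each value (combination) of the conditioning data z^n."""
    k = len(set(xs)) if k is None else k
    groups = {}
    for x, z in zip(xs, zs):
        groups.setdefault(z, []).append(x)
    return sum(len(g) * entropy(g) + log_regret(len(g), k) for g in groups.values())

def i_sc(xs, ys, zs):
    """I_SC(X; Y | Z) = SC(X | Z) - SC(X | Z, Y)  (Eq. 9).
    The same domain size of X is used for both regret terms."""
    k = len(set(xs))
    return sc_cond(xs, zs, k) - sc_cond(xs, list(zip(zs, ys)), k)

def sci(xs, ys, zs):
    """Symmetrised score SCI(X; Y | Z); independence is concluded iff <= 0."""
    return max(i_sc(xs, ys, zs), i_sc(ys, xs, zs))
```

With an all-constant conditioning list, e.g. sci(x, y, [0] * len(x)), the same sketch plays the role of an unconditional association test.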
From Prop. 1, we know that ∆(X | Z) is smaller than or equal to ∆(X | Z, Y). Hence, the new variable Y has to lead to a significant reduction of H(X | Z, Y) in Eq. (7) to overcome the penalty from its regret term. To use I_SC as an independence measure, we need one further adjustment. Since the regret terms depend on the domain size, it can happen that I_SC(X; Y | Z) ≠ I_SC(Y; X | Z). We make the score symmetric by simply taking the maximum of both directions, and define the Stochastic Complexity based Independence measure as

SCI(X; Y | Z) = max{ I_SC(X; Y | Z), I_SC(Y; X | Z) } .

We have that X ⊥⊥ Y | Z iff SCI(X; Y | Z) ≤ 0. Note that SCI can be smaller than zero, if e.g. H(X | Z) = H(X | Y, Z) but ∆(X | Z) < ∆(X | Z, Y). In the following, we explain why SCI is a well-defined measure for conditional independence. In particular, we show that it detects independence, i.e. SCI(X; Y | Z) ≤ 0 holds if X ⊥⊥ Y | Z, and that it converges to I.

Lemma 2. SCI(X; Y | Z) ≤ 0, iff X ⊥⊥ Y | Z.

Proof of Lemma 2: It suffices to show that I_SC(X; Y | Z) ≤ 0, as I_SC(Y; X | Z) ≤ 0 follows analogously. Since the first part of I_SC(X; Y | Z) is equal to n times I(X; Y | Z), this part is zero under the given independence. Based on Prop. 1, we have that ∆(X | Z) − ∆(X | Z, Y) ≤ 0, which concludes the argument.

Next, we show that in the limit (1/n) SCI(X; Y | Z) behaves like I(X; Y | Z).

Lemma 3. Given two random variables X and Y and a set of random variables Z, it holds that

lim_{n→∞} (1/n) SCI(X; Y | Z) = I(X; Y | Z) ,

where n denotes the number of samples.

Proof of Lemma 3: To show the claim, it suffices to show that I_SC(X; Y | Z) asymptotically behaves like I(X; Y | Z), as I_SC(Y; X | Z) has the same asymptotic behaviour. We have

lim_{n→∞} (1/n) I_SC(X; Y | Z) = lim_{n→∞} I(X; Y | Z) + (1/n)(∆(X | Z) − ∆(X | Z, Y)) .

Hence it remains to show that the second term goes to zero. Since log R^n_{M_k} is concave in n, (1/n) ∆(X | Z) and (1/n) ∆(X | Z, Y) approach zero as n → ∞.

In sum, SCI asymptotically behaves like conditional mutual information, but in contrast to I, it is robust given only few samples, and hence does not need an arbitrary threshold. In practice it also performs favourably compared to the G^2 test, as we will show in the experiments. Next, we build upon SCI and introduce CLIMB for discovering causal Markov blankets.

V. CAUSAL MARKOV BLANKETS

In this section, we introduce CLIMB to discover directed, or causal, Markov blankets. As an example, consider Fig. 1 again and further assume that we only observe T, its parents P_1, P_2, P_3 and its children C_1, C_2. Only in specific cases can we identify some of the parents using conditional independence tests: in particular, only if there exist at least two parents P_i and P_j that are not connected by an edge and do not have any ancestor in common, i.e. when P_i ⊥⊥ P_j | ∅ but P_i and P_j become dependent given T. We hence need another approach to tell apart all the parents and children of T.

A. Telling apart Parents and Children

To tell apart parents and children, we define a partition π(PC_T) on the set of parents and children of a target node T, which separates the parents and children into exactly two non-intersecting sets. We refer to the first set as the parents PA_T and to the second as the children CH_T, for which PA_T ∪ CH_T = PC_T and PA_T ∩ CH_T = ∅. Further, we consider two special cases, where we allow either PA_T or CH_T to be empty, which leaves the remaining set to contain all elements of PC_T.
Note that there exist 2^{|PC_T|} − 1 possible partitions of PC_T. To decide which of the partitions fits the given data best, we need to be able to score a partition. Building on the faithfulness assumption, we know that we can describe each node by the conditional distribution given its parents (see Eq. (1)). For causal networks, Janzing and Schölkopf [11] showed that this equation can be expressed in terms of Kolmogorov complexity.

Postulate 1 (Algorithmic Independence of Conditionals). A causal hypothesis is only acceptable if the shortest description of the joint density P is given by the concatenation of the shortest descriptions of the Markov kernels. Formally, we write

K(P(X_1, . . . , X_m)) = Σ_j K(P(X_j | PA_j)) , (10)

which holds up to an additive constant independent of the input, and where PA_j corresponds to the parents of X_j in a causal directed acyclic graph (DAG).

Further, Janzing and Schölkopf [11] show that this equation only holds for the true causal model. This means that it is minimal if each parent is correctly assigned to its corresponding children. Like in SCI, we again use stochastic complexity to approximate Kolmogorov complexity. In particular, we reformulate Eq. (10) such that we are able to describe the local neighbourhood of a target node T by its parents and children. In other words, we score a partition π as

SC(π(PC_T)) = SC(T | PA_T) + Σ_{P ∈ PA_T} SC(P) + Σ_{C ∈ CH_T} SC(C | T) , (11)

where we calculate the costs of T given its parents, the unconditioned costs of the parents, and the costs of the children given T. Further, by MDL, the best partition π*(PC_T) is the one minimizing Eq. (11). By exploring the whole search space, we can find the optimal partition with regard to our score. The corresponding computational complexity, which is exponential in the number of parents and children, is not the bottleneck for finding the causal Markov blanket, as it does not exceed the runtime for finding the parents and children in the first place. Moreover, in most real-world data sets the average number of parents and children is rather small, leaving us on average with few computational steps here.

B. The Climb Algorithm

Now that we have defined how to score a partition, we can introduce the CLIMB algorithm. In essence, the algorithm builds upon and extends PCMB and IPCMB, but, unlike these, can discover the causal Markov blanket. CLIMB (Algorithm 1) consists of three main steps. First, we need to find the parents and children of the target node T (line 1), which can be done with any sound parents-and-children algorithm. Second, we compute the best partition π*(PC_T) using Eq. (11) (line 2). The last step is the search for spouses. Here, we only have to iterate over the children to find spouses, which saves computational time. To remove children of children, we apply the fast symmetry correction criterion as suggested by Fu et al. [8] (lines 6-8). Further, we find the spouses as suggested in the PCMB algorithm [22] (lines 9-12), where the separating set S (line 10) can be recovered from the procedure that found the parents and children. In the last line, we output the causal MB by returning the distinct sets of parents, children and spouses.

Complexity and Correctness: At worst, the computational complexity of CLIMB is as good as that of common Markov blanket discovery algorithms. This worst case is the scenario of having only children, and therefore having to search each element in the parents-and-children set for the spouses.
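As an illustration of the exhaustive search over partitions scored by Eq. (11), here is a small Python sketch; it is not the authors' implementation. The helpers sc and sc_cond are assumed to be the stochastic complexity routines sketched in Sec. IV, data is assumed to map node names to lists of observed values, and pc is the list of candidate parents and children of the target.

```python
from itertools import combinations

def best_partition(target, pc, data, sc, sc_cond):
    """Score every split of pc into (parents, children) with Eq. (11),
    SC(T | PA_T) + sum_P SC(P) + sum_C SC(C | T), and return the cheapest."""
    best, best_cost = None, float("inf")
    n = len(data[target])
    for r in range(len(pc) + 1):
        for parents in combinations(pc, r):
            children = [c for c in pc if c not in parents]
            # joint value combinations of the chosen parents (constant if none)
            pa_vals = list(zip(*(data[p] for p in parents))) if parents else [()] * n
            cost = sc_cond(data[target], pa_vals)                              # SC(T | PA_T)
            cost += sum(sc(data[p]) for p in parents)                          # sum_P SC(P)
            cost += sum(sc_cond(data[c], data[target]) for c in children)      # sum_C SC(C | T)
            if cost < best_cost:
                best, best_cost = (list(parents), children), cost
    return best, best_cost
```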
Given |V| as the number of nodes, we have to apply O(2^{|MB|} |V|) independence tests, which reduces to O(|MB|^k |V|) if we restrict the number of conditioning variables in the independence test to k [2]. Calculating the independence test is linear, as calculating the conditional mutual information takes linear time and the regret term of SCI can be computed in sub-linear time [19]. In practice, CLIMB saves a lot of computation compared to PCMB or IPCMB because it can identify the children and hence does not need to iterate through the parents to search for spouses. If we search through the whole set of parents and children to identify the spouses, the correctness of the Markov blanket under the faithfulness condition and a correct independence test follows trivially from PCMB [22] and IPCMB [8]. To guarantee that we correctly infer the causal Markov blanket, we would need to minimize our score over the complete causal network, which scales exponentially, is infeasible for all but the smallest networks, and would eliminate the computational advantage of only discovering the Markov blanket. Instead, we compute the locally optimal score to tell apart parents and children. This makes CLIMB feasible for large networks.

C. Deciding between Markov equivalent DAGs

In the previous section, we showed how to find the causal MB by telling apart parents and children. Besides, we can use this information to enhance current state-of-the-art causal discovery algorithms. In particular, many causal discovery algorithms, such as GES [4] and the PC algorithm [29], find partial DAGs. That is, they cannot orient all edges and leave some of them undirected. Precisely, if we assigned any direction at random to these undirected edges, the corresponding graph would be in the same Markov equivalence class as the original partial DAG. We can, however, use the score of CLIMB to also orient these edges in a post-processing step, as follows. First, we determine the parents and children of each node using the partial DAG. For an undirected edge connecting two nodes A and B, we assign B as a parent of A and vice versa A as a parent of B. It is easy to see that such an assignment creates a loop between A and B in the causal graph. In the second step, we iteratively resolve loops between two nodes A and B by determining the configuration with the minimum costs according to Eq. (11). This we do by first assigning B as a parent of A in PC_A, while keeping A as a child of B in PC_B. We compute the sum of the costs of this configuration for A and B according to Eq. (11), compare the result to the costs of the inverse assignment, and select the one with smaller costs. We repeat this until all edges have been directed. In the experiments we show that this simple technique significantly improves the precision, recall, and F1 score on edge directions for both stable PC and FGES.

VI. EXPERIMENTS

In this section, we empirically evaluate our independence test, as well as the CLIMB algorithm and the corresponding edge orientation scheme. For research purposes, we provide the code for SCI, CLIMB and the orientation scheme online.¹ All experiments were run single-threaded on a Linux server with Intel Xeon E5-2643 v2 processors and 64 GB RAM. For the competing approaches, we evaluate different significance thresholds and present the best results.

A. Independence Testing

In this experiment, we illustrate the practical performance of SCI, the G^2 test and conditional mutual information.
In particular, we evaluate how well they can distinguish between true d-separation and false alarms. To do so, we simulate dependencies as depicted in Fig. 2 and generate data under various sample sizes (100-2 500) and noise settings (0%-95%). For each setup we generate 200 data sets and report the average accuracy.

¹ http://eda.mmci.uni-saarland.de/climb/

First, we focus on the results for G^2 and SCI, which we show in Fig. 3. SCI performs with close to 100% accuracy for less than 70% noise and then starts to drop in the presence of more noise. At the level of 0% noise, D and E can be fully explained by F, and therefore an accuracy of 50%, i.e. all independences hold, is the correct choice. In comparison, the G^2 test marks everything as independent given less than 1 500 data points and starts to lose performance with more than 30% noise. However, it is better in very high noise setups (more than 75% noise), where it is questionable whether the real signal can still be detected. Next we consider the results for CMI. As theoretically discussed, we expect that finding a sensible cut-off value is impossible, since it depends on the size of the data, as well as the domain sizes of both the target and the conditioning set. As shown in Fig. 4, CMI with zero as cut-off performs nearly at random. In addition, we can clearly see that CMI is highly dependent on the sample size as well as on the amount of noise.

B. Plug and Play with SCI

To evaluate how well SCI performs in practice, we plug it into both the PCMB [22] and the IPCMB [8] algorithm. As the results for IPCMB are similar to those for PCMB, we skip them for conciseness. In particular, we compare the results of PCMB using G^2 and SCI to CLIMB using SCI and the PCMB subroutine to find the parents and children. We refer to PCMB with G^2 as PCMB_G2 and with SCI as PCMB_SCI. To compare those variants, we generate data from the Alarm network, where for each sample size from 100 up to 20 000 we generate ten data sets, and plot the average F1 score as well as the number of performed independence tests in Fig. 5. As we can see, the F1 score for PCMB using G^2 reaches at most 67%. In comparison, PCMB using SCI as well as CLIMB obtain F1 scores of more than 90%. For both, the precision is greater than 95% given at least 1 000 data points. Note that CLIMB only uses the inferred children to search for the spouses and hence can at most be as good as PCMB_SCI. When we consider the runtime, we observe that CLIMB has superior performance: both PCMB_SCI and PCMB_G2 need more than 10 times as many tests as CLIMB.

C. Telling apart Parents and Children

Next, we evaluate how well we can tell apart parents and children. We again generate synthetic data from the Alarm network as above and average over the results. For each node in the network, we infer, given the true parents and children set, which are the parents and which are the children using our score, and plot the averaged accuracy in Fig. 6. Given only 100 samples, the accuracy is already around 80%, and it further increases to 88% given more data. In addition, the results show that there is no bias towards preferring either parents or children.

D. Finding Causal Markov Blankets

In the next experiment, we go one step further. Again, we consider generated data from the Alarm network for different sample sizes. This time, however, we apply CLIMB and compute the precision and recall for the directed edges.
As a comparison, we infer the causal skeleton with stable PC [5] using the G^2 test, let it orient as many edges as possible, and then extract the Markov blanket. We plot the results in Fig. 7. We see that the precision of CLIMB reaches up to 90% and is always better than that of the PC algorithm, which reaches at most 79%, for up to 10 000 data points. In terms of recall, CLIMB is better than the PC algorithm; at 20 000 data points, however, they are about equal. Since CLIMB is, as far as we know, the first method to extract the causal Markov blanket directly from the data, we cannot make a fair comparison of runtimes. Given 20 000 data points, CLIMB needs on average 221 independence tests per node.

E. Causal Discovery

Last but not least, we evaluate the use of SCI, respectively CLIMB, for causal discovery. First, we show that SCI significantly improves the F1 score over the directed edges of stable PC for small sample sizes compared to the G^2 test. Second, we apply the CLIMB edge orientation procedure as post-processing on top of FGES and stable PC, and show how it improves their precision and recall on the directed edges.

a) SCI for PC: First, we evaluate stable PC [5], using the standard G^2 test with α = 0.01 and SCI, on the Alarm network with sample sizes between 100 and 20 000. We generate ten data sets for each sample size and calculate the average F1 score for stable PC using G^2 (stable PC_G2) and our independence test (stable PC_SCI). To calculate the F1 score, we use the precision and recall on the directed edges, which means that we count an edge as a true positive only if it is present with the correct orientation. We show the results in Fig. 8. Stable PC_SCI performs much better than stable PC_G2 given only few samples. When the sample size approaches 20 000, stable PC_SCI still has an advantage of ∼12% over PC_G2.

Figure 8. [Higher is better] F1 score on directed edges for stable PC using G^2 and SCI on the Alarm network, given different sample sizes.

b) Climb-based FGES and PC: To show that our CLIMB-based edge orientation scheme improves not only constraint-based algorithms, but also score-based methods, we apply it on top of FGES and stable PC, and refer to the enhanced versions as FGES_CLIMB and PC_CLIMB. In Table I we report the results on networks from the Bayesian Network Repository.² We generate ten random data sets with 20 000 samples for each network and average over the results. Applying CLIMB to the undirected edges clearly improves the results of both PC and FGES. In all cases, except FGES on the Hailfinder network, the precision and recall of the enhanced method are more than one standard deviation, and up to ∼25%, better than the original results. To test the significance of these results, we apply the exact one-sided Wilcoxon rank-sum test [17] on the precision and recall. As a result, the enhanced versions significantly improve the precision and recall, with all p-values < 0.0018.

VII. CONCLUSION

This work includes three key contributions: SCI, a conditional independence test for discrete data; the CLIMB algorithm to mine the causal Markov blanket; and the edge orientation scheme based on CLIMB for causal discovery. Through thorough empirical evaluation we showed that each of those contributions strongly improves local and global causal discovery on observational data. Moreover, we showed that SCI converges to the true conditional mutual information. In contrast to CMI, it does not require any cut-off value and is more robust in the presence of noise.
In particular, incorporating SCI in either common Markov blanket discovery algorithms or the PC algorithm leads to much better results than using the standard state-of-the-art G^2 test, especially for small sample sizes. Further, we proposed CLIMB to efficiently find causal Markov blankets in large networks. By applying our edge orientation scheme based on CLIMB on top of common causal discovery algorithms, we can improve their precision and recall on the edge directions. For future work, we want to consider sparsification, e.g. by removing candidates that do not have a significant association to a target node. One possible way of doing this could be to formulate a significance test based on the no-hypercompression inequality [10], [18]. Last but not least, we want to develop fast approximations to find and orient the parents and children of hub nodes, to extend the applicability of our method.

VIII. APPENDIX

A. Proof of log-concavity of the regret term

Proof of Lemma 1: To improve the readability of this proof, we write R^n_L as shorthand for R^n_{M_L} of a random variable with a domain size of L. Since n is an integer, each R^n_L > 0 and R^0_L = 1, we can prove Lemma 1 by showing that the fraction R^n_L / R^{n−1}_L is decreasing for n ≥ 1 as n increases. We know from Mononen and Myllymäki [19] that R^n_L can be written as the sum given in Eq. (12), where m(0, n) is equal to 1. It is easy to see that from n = 1 to n = 2 the fraction R^n_L / R^{n−1}_L decreases, as R^0_L = 1, R^1_L = L and R^2_L = L + L(L − 1)/4. In the following, we show the general case. We rewrite the fraction as in Eq. (13). Next, we show that both parts of the sum in Eq. (13) are decreasing when n increases. We start with the left part, which we rewrite as Eq. (14). When n increases, each term of the sum in the numerator of Eq. (14) decreases, while each element of the sum in the denominator increases. Hence, the whole term is decreasing. In the next step, we show that the right term in Eq. (13), m(n, n) / Σ_{k=0}^{n−1} m(k, n − 1), also decreases when n increases. It holds that Σ_{k=0}^{n−1} m(k, n − 1) ≥ m(n − 1, n − 1). Using Eq. (12) we can reformulate the term as follows. After rewriting, it is easy to see that also the second term of Eq. (13) is decreasing, and hence we can conclude the proof.

Figure 9. [Zero-Baseline] F, F̂ and F̂_SCI on X ⊥⊥ Y, for different domain sizes k_Y of Y. F is the ideal score, F̂ uses empirical estimates, while F̂_SCI uses stochastic complexity. F̂ identifies spurious associations for larger k_Y, whereas F̂_SCI correctly does not pick up any signal.

B. SCI fulfills the Zero-Baseline Property

Given a random variable X and a set of random variables Y, where each Y ∈ Y is jointly independent of X, an independence measure fulfills the zero-baseline property if it remains zero, i.e. indicates no association, independent of the sample size or the complexity of Y [15]. To illustrate that SCI fulfills the zero-baseline property, while CMI does not, we consider their behaviour when the conditioning set is empty. We generate X and Y independently, setting the domain size of X to k_X = 4 while increasing k_Y from 4^0 to 4^5. For a score between zero and one, we divide I by H(X) and write F(X; Y) = I(X; Y | ∅) / H(X). We instantiate F using the true entropy, F̂ using empirical estimates of H(·), and F̂_SCI using SCI. For each setup between X and Y we generate 100 data sets with 1 000 samples, average over the results and plot them in Fig. 9.
F̂_SCI correctly identifies independence, whereas F̂ is almost immediately non-zero and quickly rises up to 1, identifying a functional relationship (!) instead of independence.
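In the same spirit, a small Python sketch of such a zero-baseline check is given below; sci and entropy stand for the helper routines sketched in Sec. IV, the sample and domain sizes are arbitrary choices, and dividing the (truncated) SCI score by n·Ĥ(X) is just one plausible way to put it on the same [0, 1] scale as the plug-in estimate F̂.

```python
import random

def zero_baseline_demo(sci, entropy, n=1000, k_x=4, seed=0):
    """X and Y are drawn independently, so any sound score should stay near 0.
    The plug-in estimate inflates as the domain of Y grows; the SCI-based one does not."""
    rng = random.Random(seed)
    x = [rng.randrange(k_x) for _ in range(n)]
    z = [0] * n                                   # empty conditioning set
    for k_y in (4, 16, 64, 256, 1024):            # roughly k_Y = 4^1 ... 4^5
        y = [rng.randrange(k_y) for _ in range(n)]
        mi_hat = entropy(x) + entropy(y) - entropy(list(zip(x, y)))  # empirical I(X;Y)
        f_plugin = mi_hat / entropy(x)
        f_sci = max(sci(x, y, z), 0.0) / (n * entropy(x))            # one plausible normalisation
        print(f"k_Y={k_y:5d}  plug-in={f_plugin:.3f}  SCI-based={f_sci:.3f}")

# zero_baseline_demo(sci, entropy)  # assuming the Sec. IV sketch is in scope
```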
7,323
To the best of our knowledge, there exists no algorithm that directly discovers the directed, causal Markov blanket, i.e. that can tell apart parents, children and spouses given only the Markov blanket of a target. In this work, we first discover the Markov blanket, and then orient the edges. To discover the blanket, we build upon and extend the state-of-the-art @cite_17 and @cite_24 algorithms. Both follow the general structure of the GS algorithm, employing fast symmetry correction to exclude children of children. Zhu and Yang @cite_13 proposed to speed up the discovery by pre-filtering based on mutual information, whereas @cite_14 discover Markov blankets based on conditional mutual information. Unlike our approach, these approaches require the user to set a cut-off value and a scaling parameter @math . For an in-depth overview of related algorithms, we refer to the following articles @cite_15 @cite_25 .
{ "abstract": [ "Finding an efficient way to discover Markov blanket is one of the core issue s in data mining. This paper first discusses the problems existed in IAMB algorithm which is a typical algorithm for discovering the Markov b lanket of a target variable from the training dat a, and then proposes an improved algorithm λ -IAMB based on the improving approach which contains two aspects: code optimization and the improving strategy for conditional independence testing. E xperimental results show that λ -IAMB algorithm performs better than IAMB by finding Markov blanket of variables in typical Bayesian network and by testing the performance of them as feature selection method on some well-known real world datasets .", "", "We present an algorithmic framework for learning local causal structure around target variables of interest in the form of direct causes effects and Markov blankets applicable to very large data sets with relatively small samples. The selected feature sets can be used for causal discovery and classification. The framework (Generalized Local Learning, or GLL) can be instantiated in numerous ways, giving rise to both existing state-of-the-art as well as novel algorithms. The resulting algorithms are sound under well-defined sufficient conditions. In a first set of experiments we evaluate several algorithms derived from this framework in terms of predictivity and feature set parsimony and compare to other local causal discovery methods and to state-of-the-art non-causal feature selection methods using real data. A second set of experimental evaluations compares the algorithms in terms of ability to induce local causal neighborhoods using simulated and resimulated data and examines the relation of predictivity with causal induction performance. Our experiments demonstrate, consistently with causal feature selection theory, that local causal feature selection methods (under broad assumptions encompassing appropriate family of distributions, types of classifiers, and loss functions) exhibit strong feature set parsimony, high predictivity and local causal interpretability. Although non-causal feature selection methods are often used in practice to shed light on causal relationships, we find that they cannot be interpreted causally even when they achieve excellent predictivity. Therefore we conclude that only local causal techniques should be used when insight into causal structure is sought. In a companion paper we examine in depth the behavior of GLL algorithms, provide extensions, and show how local techniques can be used for scalable and accurate global causal graph learning.", "", "The problem of learning the Markov network structure from data has become increasingly important in machine learning, and in many other application fields. Markov networks are probabilistic graphical models, a widely used formalism for handling probability distributions in intelligent systems. This document focuses on a technology called independence-based learning, which allows for the learning of the independence structure of Markov networks from data in an efficient and sound manner, whenever the dataset is sufficiently large, and data is a representative sample of the target distribution. 
In the analysis of such technology, this work surveys the current state-of-the-art algorithms, discussing its limitations, and posing a series of open problems where future work may produce some advances in the area, in terms of quality and efficiency.", "We propose algorithms for learning Markov boundaries from data without having to learn a Bayesian network flrst. We study their correctness, scalability and data e‐ciency. The last two properties are important because we aim to apply the algorithms to identify the minimal set of features that is needed for probabilistic classiflcation in databases with thousands of features but few instances, e.g. gene expression databases. We evaluate the algorithms on synthetic and real databases, including one with 139351 features." ], "cite_N": [ "@cite_14", "@cite_24", "@cite_15", "@cite_13", "@cite_25", "@cite_17" ], "mid": [ "2114211380", "", "2133091666", "", "2052200456", "2182868835" ] }
Many mechanisms, including gene regulation and control mechanisms of complex systems, can be modelled naturally by causal graphs. While in theory it is easy to infer causal directions if we can manipulate parts of the network-i.e. through controlled experiments-in practice, however, controlled experiments are often too expensive or simply impossible, which means we have to work with observational data [21]. Constructing the causal graph given observations over its joint distribution can be understood as a global task, as finding the whole directed network, or locally, as discovering the local environment of a target variable in a causal graph [20], [29]. For both problem settings, constraint based algorithms using conditional independence tests, belong to the state of the art [2], [8], [29], [22]. As the name suggests, those algorithms strongly rely on one key component: the independence test. For discrete data, the state of the art methods often rely on the G 2 test [1], [26]. While it performs well given enough data, as we will see, it has a very strong bias to indicating independence in case of sparsity. Another often used method is conditional mutual information (CMI) [31], which like G 2 performs well in theory, but in practice has the opposite problem; in case of sparsity it tends to find spurious dependencies-i.e. it is likely to find no independence at all, unless we set arbitrary cut-offs. To overcome these limitations, we propose a new independence measure based on algorithmic conditional independence [11], which we instantiate through the Minimum Description Length principle [10], [24]. In particular, we do so using stochastic complexity [25], which allows us to measure the complexity of a sample in a mini-max optimal way. That is, it performs as close to the true model as possible (with optimality guarantees), regardless of whether the true model lies inside or outside the model class [10]. As we consider discrete data, we instantiate our test using stochastic complexity for multinomials. As we will show, our new measure overcomes the drawbacks of both G 2 and CMI, and performs much better in practice, especially under sparsity. Building upon our independence test, we consider the problem of finding the Markov blanket, short MB (see example in Fig. 1). Precisely, the Markov blanket of a target variable T is defined as the minimal set of variables, conditioned on which all other variables are independent of the target [20]. This set includes its parents, children and parents of common children, also called spouses. Simply put, the Markov blanket of a target node T is considered as the minimal set of nodes that contains all information about T [1]. Algorithms for finding the Markov blanket stop at this point and return a set of nodes, without identifying the parents, the children or the spouses [2], [8], [22]. We propose CLIMB, a new method based on the algorithmic Markov condition [11], that is not only faster than state of the art algorithms for discovering the MB, but is to the best of our knowledge the first algorithm for discovering the directed, or causal Markov blanket of a target node, without further exploration of the network. That is, it tells apart parents, children and spouses. Last but not least, we consider recovering the full causal graph. Current state of the art constraint based and score based algorithms [4], [29] only discover partially directed graphs but can not distinguish between Markov equivalent subgraphs. 
Based on CLIMB, we propose a procedure to infer the remaining undirected edges with a high precision. The key contributions of this paper are, that we (a) define SCI , a new conditional independence test, (b) derive a score to tell apart parents and children of a node, (c) propose CLIMB, to discover causal Markov blankets, and (d) show how to use the CLIMB score to orient those edges that can not be oriented by the PC or GES algorithm. This paper is structured as follows. In Sec. II we introduce the main concepts and notation, as well as properties of the stochastic complexity. Sec. III discusses related work. Since our contributions build upon each other, we first introduce SCI , our new independence test, in Sec. IV. Next, we define and explain the CLIMB algorithm to find the causal Markov blanket in Sec. V and extend this idea to decide between Markov equivalent DAGs. We empirically evaluate in Sec VI and round up with discussion and conclusion in Sec. VII. II. PRELIMINARIES In this section, we introduce the notation and main concepts we build upon in this paper. All logarithms are to base 2, and we use the common convention that 0 log 0 = 0. A. Bayesian Networks Given an m-dimensional vector (X 1 , . . . , X m ), a Bayesian network defines the joint probability over random variables X 1 , . . . , X m . We specifically consider discrete data, i.e. each variable X i has a domain X i of k i distinct values. Further, we assume we are given n i.i.d. samples drawn from the joint distribution of the network. We express the n-dimensional data vector for the i-th node with x n i . To describe interactions between the nodes, we use a directed acyclic graph (DAG) G. We denote the parent set of a node X i with PA i , its children with CH i and nodes that have a common child with X i as its spouses SP i . A set of variables X contains k X = Xj ∈X k j value combinations that can be non-ambiguously enumerated. We write X = j to refer to the j-th value combination of X. For instance such a set could be the set of parents, children or spouses of a node. As it is common for both inferring the Markov blanket as well as the complete network, we assume causal sufficiency, that is, we assume that we have measured all common causes of the measured variables. Further, we assume that the Bayesian network G is faithful to the underlying probability distribution P [29]. Definition 1 (Faithfulness). If a Bayesian network G is faithful to a probability distribution P , then for each pair of nodes X i and X j in G, X i and X j are adjacent in G iff. X i ⊥ ⊥ X j | Z, for each Z ⊂ G, with X i , X j ∈ Z. F ⊥ ⊥ T | D, E, or F is d-separated of T given D, E. Note that D ⊥ ⊥ T | E, F and E ⊥ ⊥ T | D, F . Equivalently, it holds that X i and X j are d-separated by Z, with Z ⊂ G and X i , X j ∈ Z, iff. X i ⊥ ⊥ X j | Z [11]. Generally, d-separation is an important concept for constraint based algorithms, since it is used to prune out false positives. As an example consider Fig. 2. All nodes D, E and F are associated with the target T . However, F ⊥ ⊥ T | D, E and hence F is d-separated from T given D and E, which means that it can be excluded from the parent set of T . Building on the faithfulness assumption, it follows that the probability to describe the whole network can be written as P (X 1 , . . . , X m ) = m i=1 P (X i | PA i ) ,(1) which means that we only need to know the conditional distributions for each node X i given its parents [11]. 
Having defined a Bayesian network, it is now easy to explain what a Markov blanket is. B. Markov Blankets Markov blankets were first described by Pearl [20]. A Markov blanket MB T of a target T is the minimal set of nodes in a graph G, given which all other nodes are conditionally independent of T . That is, knowing the values of MB T , we can fully explain the probability distribution of T and any further information is redundant [20]. Concretely, the Markov blanket of T consists of the parents, children and spouses of T . An example MB is shown in Fig. 1. Discovering the Markov blanket of a node has several advantages. First, the Markov blanket contains all information that we need to describe a target variable as well as its neighbourhood in the graph. In addition, the MB is theoretically the optimal set of attributes to predict the target values [16] and can be used for feature selection [1], [7]. In addition to those properties, the Markov blanket of a single target can be inferred much more efficiently than the whole Bayesian network [30]. This is especially beneficial if we are only interested in a single target in a large network, e.g. the activation of one gene. Most algorithms to discover the MB rely on faithfulness and a conditional independence test [1]. For the former, we have to trust the data, while for the latter we have a choice. To introduce the independence test we propose, we first need to explain the notions of Kolmogorov complexity and Stochastic complexity. C. Kolmogorov Complexity The Kolmogorov complexity of a finite binary string x is the length of the shortest binary program p * for a universal Turing machine U that generates x, and then halts [12], [14]. Formally, we have K(x) = min{|p| | p ∈ {0, 1} * , U(p) = x} . That is, program p * is the most succinct algorithmic description of x, or in other words, the ultimate lossless compressor for that string. We will also need conditional Kolmogorov complexity, K(x | y) ≤ K(x), which is again the length of the shortest binary program p * that generates x, and halts, but now given y as input for free. By definition, Kolmogorov complexity will make maximal use of any structure in x that can be expressed more succinctly algorithmically than by printing it verbatim. As such it is the theoretical optimal measure for complexity. However, due to the halting problem it is not computable, nor approximable up to arbitrary precision [14]. We can, however, approximate it from above through MDL. D. The Minimum Description Length Principle The Minimum Description Length (MDL) principle [24] provides a statistically well-founded and computable framework to approximate Kolmogorov complexity from above [10]. In refined MDL we measure the stochastic complexity of data x with regard to a model class M, L(x | M) ≥ K(x). The larger this model class, the closer we get to K(x); the ideal version of MDL considers the set of all programs that we know output x and halt, and hence is the same as Kolmogorov complexity. By using a refined MDL codeL, we have the guarantee thatL(x | M) is only a constant away from the number of bits we would need to encode the data if we already knew the best model This constant is called the regret. Importantly, it does not depend on the data, but only on the model class; and hence these guarantees hold even in settings where the data was drawn adversarially [10]. Only for a few model classes it is known how to efficiently compute the stochastic complexity. 
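Before turning to the multinomial stochastic complexity, here is a small, hedged sketch of the sets that make up the Markov blanket of Sec. II-B: given a DAG, the parents, children and spouses of a target node. The example graph and node names are hypothetical and only serve to show the three sets.

```python
# Illustration of the Markov blanket definition: parents, children, and spouses
# (other parents of the children) of a target node in a DAG. Example graph is hypothetical.
import networkx as nx

def markov_blanket(G: nx.DiGraph, target):
    parents = set(G.predecessors(target))
    children = set(G.successors(target))
    spouses = set()
    for c in children:
        spouses |= set(G.predecessors(c))
    spouses -= {target}
    # return three distinct sets; a spouse that is also a parent/child is kept in the latter
    return parents, children, spouses - parents - children

G = nx.DiGraph([("P1", "T"), ("P2", "T"), ("T", "C1"), ("T", "C2"), ("S1", "C1")])
pa, ch, sp = markov_blanket(G, "T")
# pa == {"P1", "P2"}, ch == {"C1", "C2"}, sp == {"S1"}
```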
One of these is the stochastic complexity for multinomials [13], which we will introduce below. E. Stochastic Complexity for Multinomials Given n samples of a discrete-valued univariate attribute X with a domain X of k distinct values, x^n ∈ X^n, let \hat{\theta}(x^n) denote the maximum likelihood estimator for x^n. Shtarkov [27] defined the mini-max optimal normalized maximum likelihood (NML) as
P_{NML}(x^n \mid M_k) = \frac{P(x^n \mid \hat{\theta}(x^n), M_k)}{R^n_{M_k}} , (2)
where the normalizing factor, or regret, R^n_{M_k} relative to the model class M_k is defined as
R^n_{M_k} = \sum_{x^n \in \mathcal{X}^n} P(x^n \mid \hat{\theta}(x^n), M_k) . (3)
The sum goes over every possible x^n over the domain of X, and for each considers the maximum likelihood of that data given model class M_k. For discrete data, we can rewrite Eq. (2) as
P_{NML}(x^n \mid M_k) = \frac{\prod_{v=1}^{k} \left(\frac{h_v}{n}\right)^{h_v}}{R^n_{M_k}} ,
writing h_v for the frequency of value v in x^n, resp. Eq. (3) as
R^n_{M_k} = \sum_{h_1 + \cdots + h_k = n} \frac{n!}{h_1! \cdots h_k!} \prod_{v=1}^{k} \left(\frac{h_v}{n}\right)^{h_v} .
Mononen and Myllymäki [19] derived a formula to calculate the regret in sub-linear time, so that the whole complexity can be computed in time linear in the number of samples n. We obtain the stochastic complexity for x^n with regard to model class M_k by taking the negative logarithm of P_{NML}, which decomposes into a Shannon entropy and a log-regret term,
SC(x^n \mid M_k) = -\log P_{NML}(x^n \mid M_k) = n H(x^n) + \log R^n_{M_k} . (4)
Conditional stochastic complexity [28] is defined analogously to conditional entropy, i.e. we have for any x^n, y^n that
SC(x^n \mid y^n, M_k) = \sum_{v \in \mathcal{Y}} SC(x^n \mid y^n = v, M_k) = \sum_{v \in \mathcal{Y}} h_v H(x^n \mid y^n = v) + \sum_{v \in \mathcal{Y}} \log R^{h_v}_{M_k} , (5)
with \mathcal{Y} the domain of Y, and h_v the frequency of value v in y^n. For notational convenience, wherever clear from context we write SC(X) for SC(x^n | M_k), resp. SC(X | Y) for SC(x^n | y^n, M_k). For the log-regret terms in Eq. (4), resp. Eq. (5), we write ∆(X) resp. ∆(X | ·). To introduce our independence test and methods, we need the following property of the conditional stochastic complexity. Proposition 1. Given two discrete random variables X and Y and a set Z of discrete random variables, it holds that ∆(X | Z) ≤ ∆(X | Z, Y); that is, ∆ is monotone w.r.t. the number of conditioning variables. To prove Prop. 1, we first show that the regret term is log-concave in n. Lemma 1. For n ≥ 1, the regret term R^n_{M_k} of the multinomial stochastic complexity of a random variable with a domain size of k ≥ 2 is log-concave in n. For conciseness, we postpone the proof of Lemma 1 to Appendix VIII-A. Based on Lemma 1 we can now prove Prop. 1. Proof of Prop. 1: To show Prop. 1, we can reduce the statement as follows. Consider that Z contains p distinct value combinations {r_1, . . . , r_p}. If we add Y to Z, the number of distinct value combinations, {l_1, . . . , l_q}, increases to q, where p ≤ q. Consequently, to show that Prop. 1 holds, it suffices to show that
\sum_{i=1}^{p} \log R^{|r_i|}_{M_k} \le \sum_{j=1}^{q} \log R^{|l_j|}_{M_k} , (6)
where \sum_{i=1}^{p} |r_i| = \sum_{j=1}^{q} |l_j| = n. Next, consider w.l.o.g. that each value combination r_i, i = 1, . . . , p, is mapped to one or more value combinations in {l_1, . . . , l_q}. Hence, Eq. (6) holds if \log R^n_{M_k} is sub-additive in n. Since we know from Lemma 1 that the regret term is log-concave in n, sub-additivity follows by definition. IV. INDEPENDENCE TESTING USING STOCHASTIC COMPLEXITY Most algorithms to discover the Markov blanket rely on two tests: association and conditional independence [1], [7]. The association test has to be precise for two reasons.
First, by being too restrictive it might miss dependencies, which results in a bad recall. Second, if the test is too lenient we have to test more candidates. This may not sound bad, but as we face an exponential runtime w.r.t. the number of candidates, it is very much so in practice. The quality of the conditional independence test is even more important. Algorithms proven to be correct, are only correct under the assumption that the conditional independence test is so, too [7], [22]. Commonly used conditional independence tests for categorical data like the G 2 or the conditional mutual information (CMI), have good properties in the limit, but show several drawbacks on practical sample sizes. In the following, we will propose and justify a new test for conditional independence on discrete data. We start with Shannon conditional mutual information as a measure of independence. It is defined as follows. Definition 2 (Shannon Conditional Independence [6]). Given random variables X, Y and Z. If I(X; Y | Z) = H(X | Z) − H(X | Z, Y ) = 0(7) then X and Y are called statistically independent given Z. In essence, I(X; Y | Z) is a measure of association of X and Y conditioned on Z, where an association of 0 corresponds to statistical independence. If Z is the empty set it reduces to standard mutual information, meaning we directly measure the association between X and Y . As I is based on Shannon entropy, it assumes that we know the true distributions. In practice, we of course do not know these, but rather estimateĤ from finite samples. This becomes a problem when estimating conditional entropy, as to obtain a reasonable estimate we need a number of samples that is exponential in the domain size of the conditioning variable [15]; if we have too few samples, we tend to underestimate the conditional entropy, overestimate the conditional mutual information, and Eq. (7) will seldom be 0-even when X ⊥ ⊥ Y | Z. Hence, we needed to set an arbitrary cut-off δ, such that I ≤ δ. The problem is, however, that δ is hard to define, since it is dependent on the complexity of the variables, the amount of noise and the sample size. Therefore, we propose a new independence test based on (conditional) algorithmic independence that remains zero for X ⊥ ⊥ Y | Z even in settings where the sample size is small and the complexity of the variables is high. This we can achieve, by not only considering the complexity of the data under the model (i.e. the entropy) but also the complexity of the model (i.e. distribution). Before introducing our test, we first need to define algorithmic conditional independence. Definition 3 (Algorithmic Conditional Independence). Given the strings x, y and z, We write z * to denote the shortest program for z, and analogously (z, y) * for the shortest program for the concatenation of z and y. If I A (x; y | z) := K(x | z * ) − K(x | (z, y) * ) + = 0 (8) holds up to an additive constant that is independent of the data, then x and y are called algorithmically independent given z [3]. As discussed above, Kolmogorov complexity is not computable, and to use I A in practice we will need to instantiate it through MDL. We do so using stochastic complexity for multinomials. That is, we rewrite Eq. (8) in terms of stochastic complexity, I SC (X; Y | Z) = SC (X | Z) − SC (X | Z, Y ) (9) = n · I(X; Y | Z) + ∆(X | Z) − ∆(X | Z, Y ) where n is the number of samples. Note that the regret terms ∆(X | Z) and ∆(X | Z, Y ) in Eq. (9) are over the same domain. 
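The following Python sketch makes these quantities concrete: the log-regret, the (conditional) stochastic complexity of Eqs. (4)-(5), and I_SC of Eq. (9); the symmetrized SCI score introduced in the next paragraph is included for completeness. For the regret we use the classic recurrence over the number of categories rather than the sub-linear formula of [19]; all function and variable names are our own.

```python
# Hedged sketch of: log-regret log R^n_{M_k}, SC(x^n | M_k) = n*H + log R (Eq. 4),
# conditional SC (Eq. 5), and I_SC (Eq. 9). The regret is computed with the classic
# recurrence R^n_j = R^n_{j-1} + n/(j-2) * R^n_{j-2}, not the sub-linear formula of [19].
from collections import Counter, defaultdict
from math import exp, lgamma, log

def multinomial_regret(n: int, k: int) -> float:
    """log2 of the regret R^n_{M_k} for n samples and a domain of size k."""
    if n == 0 or k <= 1:
        return 0.0
    r_prev, r_curr = 1.0, 0.0                 # R^n_1 and (to be filled) R^n_2
    for h in range(n + 1):                    # R^n_2 = sum_h C(n,h) (h/n)^h ((n-h)/n)^(n-h)
        log_term = lgamma(n + 1) - lgamma(h + 1) - lgamma(n - h + 1)
        if h > 0:
            log_term += h * log(h / n)
        if h < n:
            log_term += (n - h) * log((n - h) / n)
        r_curr += exp(log_term)
    for j in range(3, k + 1):                 # recurrence over the number of categories
        r_prev, r_curr = r_curr, r_curr + (n / (j - 2)) * r_prev
    return log(r_curr, 2)                     # for very large n, work in log-space instead

def stochastic_complexity(x, k=None) -> float:
    """SC(x^n | M_k) in bits; k defaults to the number of observed values."""
    n, counts = len(x), Counter(x)
    k = len(counts) if k is None else k
    entropy = -sum(c / n * log(c / n, 2) for c in counts.values())
    return n * entropy + multinomial_regret(n, k)

def conditional_sc(x, z_cols, k=None):
    """SC(X | Z): sum of SC over the subsamples induced by the values of Z (Eq. 5)."""
    k = len(set(x)) if k is None else k
    if not z_cols:
        return stochastic_complexity(x, k)
    groups = defaultdict(list)
    for i, xi in enumerate(x):
        groups[tuple(col[i] for col in z_cols)].append(xi)
    return sum(stochastic_complexity(g, k) for g in groups.values())

def i_sc(x, y, z_cols=()):
    """I_SC(X;Y|Z) = SC(X|Z) - SC(X|Z,Y), as in Eq. (9)."""
    k = len(set(x))
    return conditional_sc(x, list(z_cols), k) - conditional_sc(x, list(z_cols) + [list(y)], k)

def sci(x, y, z_cols=()):
    """Symmetrized score of the paper: SCI = max(I_SC(X;Y|Z), I_SC(Y;X|Z)); <= 0 means independence."""
    return max(i_sc(x, y, z_cols), i_sc(y, x, z_cols))
```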
From Prop 1, we know that ∆(X | Z) is smaller or equal than ∆(X | Z, Y ). Hence, the new variable Y has to provide a significant gain in the term H(X | Z, Y ) in Eq. (7) to overcome the penalty from its regret term. To use I SC as an independence measure, we need one further adjustment. Since the regret terms are dependent on the domain size, it can happen that I SC (X; Y | Z) = I SC (Y ; X | Z). We make the score symmetric by simply taking the maximum of both directions, and define the Stochastic Complexity based Independence measure as SCI (X; Y | Z) = max{I SC (X; Y | Z), I SC (Y ; X | Z)} . We have that X ⊥ ⊥ Y | Z, iff SCI (X; Y | Z) ≤ 0. Note that SCI can be smaller than zero, if e.g. H(X | Z) = H(X | Y, Z) but ∆(X | Z) < ∆(X | Z, Y ). In the following, we explain why the SCI is a well defined measure for conditional independence. In particular, we show that it detects independence, i.e. SCI (X; Y | Z) ≤ 0 holds, if X ⊥ ⊥ Y | Z and that it converges to I. Lemma 2. SCI (X; Y | Z) ≤ 0, iff X ⊥ ⊥ Y | Z. Proof of Lemma 2: It suffices to show that I SC (X; Y | Z) ≤ 0, as I SC (Y ; X | Z) ≤ 0 follows analogously. Since the first part of I SC (X; Y | Z) is equal to n times I, this part will be zero by definition. Based on Prop 1, we have that ∆(X | Z)−∆(X | Z, Y ) ≤ 0, which concludes the argument. Next, we show that in the limit 1 n SCI (X; Y | Z) behaves like I(X; Y | Z). Lemma 3. Given two random variables X and Y and a set of random variables Z, it holds that lim n→∞ 1 n SCI (X; Y | Z) = I(X; Y | Z) , whereas n denotes the number of samples. Proof of Lemma 3: To show the claim, it suffices to show that I SC (X; Y | Z) asymptotically behaves like I(X; Y | Z), as I SC (Y ; X | Z) has the same asymptotic behaviour. We have lim n→∞ 1 n I SC (X; Y | Z) = lim n→∞ I(X; Y | Z) + 1 n (∆(X | Z) − ∆(X | Z, Y )) . Hence it remains to show that the second term goes to zero. Since log R n M k is concave in n, 1 n ∆(X | Z) and 1 n ∆(X | Z, Y ) will approach zero if n → ∞. In sum, asymptotically SCI behaves like conditional mutual information, but in contrast to I, it is robust given only few samples, and hence does not need an arbitrary threshold. In practice it also performs favourably compared to the G 2 test, as we will show in the experiments. Next, we build upon SCI and introduce CLIMB for discovering causal Markov blankets. V. CAUSAL MARKOV BLANKETS In this section, we introduce CLIMB, to discover directed, or causal Markov blankets. As an example, consider Fig. 1 again and further assume that we only observe T , its parents P 1 , P 2 , P 3 and its children C 1 , C 2 . Only in specific cases we can identify some of the parents using conditional independence tests. In particular, only where exist at least two parents P i and P j that are not connected by an edge or do not have any ancestor in common, i.e. when P i ⊥ ⊥ P j | ∅ but P i ⊥ ⊥ P j | T . We hence need another approach to tell all apart the parents and children of T . A. Telling apart Parents and Children To tell apart parents and children, we define a partition π(PC T ) on the set of parents and children of a target node T , which separates the parents and children into exactly two non-intersecting sets. We refer to the first set as the parents PA T and to the second as children CH T , for which PA T ∪ CH T = PC T and PA T ∩ CH T = ∅. Further, we consider two special cases, where we allow either PC T or CH T to be empty, which leaves the remaining set to contain all elements of PC T . 
Note that there exist 2 |PC T | −1 possible partitions of PC T . To decide which of the partitions fits best to the given data, we need to be able to score a partition. Building on the faithfulness assumption, we know that we can describe the each node as the conditional distribution given its parents (see Eq. 1). For causal networks, Janzing and Schölkopf [11] showed this equation can be expressed in terms of Kolmogorov complexity. Postulate 1 (Algorithmic Independence of Conditionals). A causal hypothesis is only acceptable if the shortest description of the joint density P is given by the concatenation of the shortest description of the Markov kernels. Formally, we write K(P (X 1 , . . . , X m )) + = j K(P (X j | PA j )) ,(10) which holds up to an additive constant independent of the input, and where PA j corresponds to the parents of X j in a causal directed acyclic graph (DAG). Further, Janzing and Schölkopf [11] show that this equation only holds for the true causal model. This means that it is minimal if each parent is correctly assigned to its corresponding children. Like in SCI , we again use stochastic complexity to approximate Kolmogorov complexity. In particular, we reformulate Eq. 10, such that we are able to describe the local neighbourhood of a target node T by its parents and children. In other words, we score a partition π as SC (π(PC T )) = SC (T | PA T ) + P ∈PA T SC (P ) + C∈CH T SC (C | T ) .(11) where, we calculate the costs of T given its parents, the unconditioned costs of the parents and the children given T . Further, by MDL, the best partition π * (PC T ) is the one minimizing Eq. (11). By exploring the whole search space, we can find the optimal partition with regard to our score. The corresponding computational complexity, which is exponential in the number of parents and children, is not the bottle neck for finding the causal Markov blanket, as it does not exceed the runtime for finding the parents and children in the first place. Moreover, in most real world data sets the average number of parents and children is rather small, leaving us on average with few computational steps here. B. The Climb Algorithm Now that we defined how to score a partition, we can now introduce the CLIMB algorithm. In essence, the algorithm builds upon and extends PCMB and IPCMB, but, unlike these, can discover the causal Markov blanket. CLIMB (Algorithm 1) consists of three main steps: First, we need to find the parents and children of the target node T (line 1), which can be done with any sound parents and children algorithm. Second, we compute the best partition π * (PC T ) using Eq. (11) (line 2). The last step is the search for spouses. Here, we only have to iterate over the children to find spouses, which saves computational time. To remove children of children, we apply the fast symmetry correction criterion as suggested by Fu et al. [8] (lines [6][7][8]. Further, we find the spouses as suggested in the PCMB algorithm [22] (lines 9-12), whereas the separating set S (line 10) can be recovered from the procedure that found the parents and children. In the last line, we output the causal MB by returning the distinct sets of parents, children and spouses. Complexity and Correctness: At worst, the computational complexity of CLIMB is as good as common Markov blanket discovery algorithms. This worst case is the scenario of having only children and therefore having to search each element in the parents and children set for the spouses. 
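To make the partition search concrete, the following hedged sketch scores a partition with Eq. (11) and enumerates all assignments of PC_T into parents and children, reusing the stochastic complexity helpers sketched earlier; data columns are plain lists and all names are ours.

```python
# Sketch of Eq. (11): score a partition of PC_T into parents and children and pick
# the minimizer by exhaustive enumeration (feasible because |PC_T| is usually small).
# Reuses stochastic_complexity() and conditional_sc() from the earlier sketch.
from itertools import combinations

def partition_score(t, parents, children):
    """SC(T | PA_T) + sum_P SC(P) + sum_C SC(C | T), all arguments are data columns."""
    cost = conditional_sc(t, list(parents))
    cost += sum(stochastic_complexity(p) for p in parents)
    cost += sum(conditional_sc(c, [t]) for c in children)
    return cost

def best_partition(t, pc):
    """Enumerate every split of the parent-child set pc (a list of data columns)."""
    idx = range(len(pc))
    best, best_cost = None, float("inf")
    for r in range(len(pc) + 1):
        for pa_idx in combinations(idx, r):
            parents = [pc[i] for i in pa_idx]
            children = [pc[i] for i in idx if i not in pa_idx]
            cost = partition_score(t, parents, children)
            if cost < best_cost:
                best, best_cost = (parents, children), cost
    return best, best_cost
```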
Given |V | as the number of nodes, we have to apply O(2 |M B| |V |) independence tests, which reduces to O(|M B| k |V |), if we restrict the number of conditioning variables in the independence test to k [2]. Calculating the independence test is linear, as calculating the the conditional mutual information takes linear time and the regret term of SCI can be computed in sublinear time [19]. In practice, CLIMB saves a lot of computation compared to PCMB or IPCMB because it can identify the children and hence does not need to iterate through the parents to search for spouses. If we search through the whole set of parents and children to identify the spouses, the correctness of the Markov blanket under the faithfulness condition and a correct independence test follows trivially from PCMB [22] and IPCMB [8]. To correctly infer the causal Markov blanket, we need to minimize our score over the complete causal network, which scales exponentially and is infeasible for networks with more than the causal network, which would eliminate the computational advantage of only discovering the Markov blanket. Instead, we compute the local optimal score to tell apart parents and children. This makes CLIMB feasible for large networks. C. Deciding between Markov equivalent DAGs In the previous section, we showed how to find the causal MB by telling apart parents and children. Besides, we can use this information to enhance current state of the art causal discovery algorithms. In particular, many causal discovery algorithms as GES [4] and the PC [29] algorithm find partial DAGs. That is, they can not orient all edges and leave some of them undirected. Precisely, if we would assign any direction at random to these undirected edges, the corresponding graph would be in the same Markov equivalence class as the original partial DAG. We can, however, use the score of CLIMB to also orient these edges in a post processing step as follows. First, we determine the parents and children of each node using the partial DAG. For an undirected edge connecting two nodes A and B, we assign B as a parent of A and vice versa A as a parent of B. It is easy to see that such an assignment creates a loop between A and B in the causal graph. In the second step, we iteratively resolve loops between two nodes A and B by we determining that configuration with the minimum costs according to Eq. 11. This we do by first assigning B as a parent of A in P C A , while keeping A as a child of B in P C B . We compute the sum of the costs of this configuration for A and B according to Eq. 11, compare the result to the costs of the inverse assignment and select the one with smaller costs. We repeat this until all edges have been directed. In the experiments we show that this simple technique significantly improves the precision, recall, and F1 in edge directions for both stable PC and FGES. VI. EXPERIMENTS In this section, we empirically evaluate our independence test, as well as the CLIMB algorithm and the corresponding edge orientation scheme. For research purposes, we provide the code for SCI , CLIMB and the orientation scheme online. 1 All experiments were ran single threaded on a Linux server with Intel Xenon E5-2643v2 processors and 64GB RAM. For the competing approaches, we evaluate different significance thresholds and present the best results. A. Independence Testing In this experiment, we illustrate the practical performance of SCI , the G 2 test and conditional mutual information. 
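As an aside before the evaluation, the following hedged sketch illustrates the CLIMB-based orientation of edges left undirected by PC or GES (Sec. V-C): for each such edge we compare the total cost of Eq. (11) under its two possible orientations and keep the cheaper one. This is a simplified single pass over the edges, whereas the paper resolves the induced two-node loops iteratively; it reuses partition_score() from the sketch above, and all names are ours.

```python
# Simplified sketch of the CLIMB-based post-processing that orients undirected edges.
# `data` maps node names to data columns; `pa`/`ch` map nodes to sets of already
# directed parents/children from the partial DAG.
def orientation_cost(data, pa, ch, a, b, a_to_b):
    """Summed partition scores (Eq. 11) of nodes a and b for one orientation of a - b."""
    pa_a, ch_a, pa_b, ch_b = set(pa[a]), set(ch[a]), set(pa[b]), set(ch[b])
    if a_to_b:
        ch_a.add(b); pa_b.add(a)
    else:
        ch_b.add(a); pa_a.add(b)
    return sum(partition_score(data[n], [data[p] for p in ps], [data[c] for c in cs])
               for n, ps, cs in ((a, pa_a, ch_a), (b, pa_b, ch_b)))

def orient_undirected_edges(data, pa, ch, undirected):
    for a, b in undirected:
        if orientation_cost(data, pa, ch, a, b, True) <= orientation_cost(data, pa, ch, a, b, False):
            ch[a].add(b); pa[b].add(a)        # keep A -> B
        else:
            ch[b].add(a); pa[a].add(b)        # keep B -> A
    return pa, ch
```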
In particular, we evaluate how well they can distinguish between true d-separation and false alarms. To do so, we simulate dependencies as depicted in Fig. 2 and generated data under various samples sizes (100-2 500) and noise settings (0%-95%). For each setup we generated 200 data sets and assess the accuracy. In particular, we report 1 http://eda.mmci.uni-saarland.de/climb/ First, we focus on the results for G 2 and SCI , which we show in Fig 3. SCI performs with near to 100% accuracy for less than 70% noise and then starts to drop in the presence of more noise. At the level of 0% noise D and E can be fully explained by F and therefore an accuracy of 50%, i.e. all independences hold, is the correct choice. In comparison, the G 2 test marks everything as independent given less than 1 500 data points and starts to lose performance with more than 30% noise. However, it is better in very high noise setups (more than 75% noise), whereas it si questionable if the real signal can still be detected. Next we consider the results for CMI. As theoretically discussed, we expect that finding a sensible cut-off value is impossible, since it depends on the size of the data, as well as the domain sizes of both the target and conditioning set. As shown in Fig. 4, CMI with zero as cut-off performs nearly random. In addition, we can clearly see that CMI is highly dependent on the sample size as well as on the amount of noise. B. Plug and Play with SCI To evaluate how well SCI performs in practice, we plug it into both the PCMB [22] and the IPCMB [8] algorithm. As the results for IPCMB are similar to those for PCMB, we skip them for conciseness. In particular, we compare the results of PCMB using G 2 and SCI to CLIMB using SCI and the PCMB subroutine to find the parents and children. We refer to PCMB with G 2 as PCMB G 2 and using SCI as PCMB SCI . To compare those variants, we generate data from the Alarm network, where we generate for 100 up to 20 000 samples each ten data sets and plot the average F 1 score as well as the number of performed independence tests in Fig. 5. As we can see, the F 1 score for PCMB using G 2 reaches at most 67%. In comparison, PCMB using SCI as well as CLIMB obtain F 1 scores of more than 90%. For both the precision is greater than 95% given at least 1 000 data points. Note that CLIMB only uses the inferred children to search for the spouses and hence can at most be as good as PCMB SCI . When we consider the runtime, we observe that CLIMB has superior performance: Both the PCMB SCI and PCMB G 2 need more than 10 times as many tests than CLIMB. C. Telling apart Parents and Children Next, we evaluate how well we can tell apart parents and children. We again generate synthetic data from the Alarm network as above and average over the results. For each node in the network, we infer, given the true parents and children set, which are the parents and which the children using our score and plot the averaged accuracy in Fig. 6. Given only 100 samples, the accuracy is already around 80% and further increases to 88% given more data. In addition, the results show that there is no bias towards preferring either parents or children. D. Finding Causal Markov Blankets In the next experiment, we go one step further. Again, we consider generated data from the Alarm network for different sample sizes. This time, however, we apply CLIMB and compute the precision and recall for the directed edges. 
As a comparison, we infer the causal skeleton with stable PC [5] using the G 2 test, let it orientate as many edges as possible and then extract the Markov blanket. We plot the results in Fig. 7. We see that the precision of CLIMB reaches up to 90% and is always better than the PC algorithm, which reaches at most 79%, for up to 10 000 data points. In terms of recall, CLIMB is better than the PC algorithm, however, at 20 000 data points, they are about equal. Since CLIMB is, as far as we know, the first method to extract the causal Markov blanket directly from the data, we can not have a fair comparison of runtimes. Given 20 000 data points, CLIMB needs on average 221 independence tests per node. E. Causal Discovery Last but not least, we evaluate the use of SCI , respectively CLIMB, for causal discovery. First, we show that SCI significantly improves the F 1 score over the directed edges of stable PC for small sample sizes compared to the G 2 test. Second, we apply the CLIMB edge orientation procedure as post processing on top of the FGES and stable PC, and show how it improves their precision and recall on the directed edges. a) SCI for PC: First, we evaluate stable PC [5], using the standard G 2 test with α = 0.01 and SCI on the Alarm network with sample sizes between 100 and 20 000. We generate ten data sets for each sample size and calculate the average F 1 score for stable PC using G 2 (stable PC G 2 ) and and our independence test (stable PC SCI ). To calculate the F 1 score, we use the precision and recall on the directed edges, which means that only if an edge is present with the correct orientation, we count it as a true positive. We show the results in Fig. 8. Stable PC SCI has a much better performance than stable PC G 2 , given only few samples. When the sample size approaches 20 000, stable PC SCI still has an advantage of ∼ 12% over PC G 2 . b) Climb-based FGES and PC: To show that our CLIMB-based edge orientation scheme improves not only constraint based algorithms, but also score based methods, we apply it on top of FGES and stable PC, and correspond to the enhanced versions as FGES CLIMB and PC CLIMB . In Table I Figure 8. [Higher is better] F 1 score on directed edges for stable PC using G 2 and SCI on the Alarm network given different sample sizes. the Bayesian Network Repository. 2 We generate ten random data sets with 20 000 samples for each network and average over the results. Applying CLIMB to the undirected edges clearly improves both the results of PC and FGES. In all cases, except FGES on the Hailfinder network, the precision and recall of the enhanced method are more than one standard deviation and up to ∼ 25% better than the original results. To test the significance of these results, we apply the exact one-sided Wilcoxon rank-sum test [17] on the precision and recall. As result, the enhanced versions significantly improve the precision and recall with all p-values < 0.0018. VII. CONCLUSION This work includes three key contributions: SCI , a conditional independence test for discrete data, the CLIMB algorithm to mine the causal Markov blanket and the edge orienting scheme based on CLIMB for causal discovery. Through thorough empirical evaluation we showed that each of those contributions strongly improves local and global causal discovery on observational data. Moreover, we showed that SCI converges to the true conditional mutual information. In contrast to CMI, it does not require any cut-off value and is more robust in the presence of noise. 
In particular, incorporating SCI in either common Markov blanket discovery algorithms, or the PC algorithm leads to much better results than using the standard state of the art G 2 test-especially for small sample sizes. Further, we proposed CLIMB to efficiently find causal Markov blankets in large networks. By applying our edge orientation scheme based on CLIMB on top of common causal discovery algorithms, we can improve their precision and recall on the edge directions. For future work, we want to consider sparsification, by e.g. removing candidates that do not have a significant association to a target node. One possible way of doing this could be to formulate a significance test based on the nohypercompression inequality [10], [18]. Last but not least, we want to develop fast approximations to find and orient the parents and children of hub nodes, to extend the applicability of our method. VIII. APPENDIX A. Proof of log-concavity of the regret term Proof of Lemma 1: To improve the readability of this proof, we write R n L as shorthand for R n M L of a random variable with a domain size of L. Since n is an integer, each R n L > 0 and R 0 L = 1, we can prove Lemma 1, by showing that the fraction R n L /R n−1 L is decreasing for n ≥ 1, when n increases. We know from Mononen and Myllymäki [19] that R n L can be written as the sum where m(0, n) is equal to 1. It is easy to see that from n = 1 to n = 2 the fraction R n L /R n−1 L decreases, as R 0 L = 1, R 1 L = L and R 2 L = L + L(L − 1)/2. In the following, we will show the general case. We rewrite the fraction as follows. Next, we will show that both parts of the sum in Eq. 13 are decreasing when n increases. We start with the left part, which we rewrite to . When n increases, each term of the sum in the numerator in Eq. 14 decreases, while each element of the sum in the denominator increases. Hence, the whole term is decreasing. In the next step, we show that the right term in Eq. 13 also decreases when n increases. It holds that m(n, n) n−1 k=0 m(k, n − 1) ≥ m(n, n) m(n − 1, n − 1) . Using Eq. 12 we can reformulate the term as follows. After rewriting, it is easy to see that also the second term of Eq. 13 is decreasing and hence we can conclude the proof. Figure 9. [Zero-Baseline]F , F andF SCI on X ⊥ ⊥ Y , for different domain sizes k Y of Y . F is the ideal score,F uses empirical estimates, whileF SCI uses stochastic complexity.F identifies spurious associations for larger k Y , whereasF SCI correctly does not pick up any signal. B. SCI fulfills the Zero-Baseline Property Given a random variable X and a set of random variables Y, where each Y ∈ Y is jointly independent of X. An independence measure fulfills the zero-baseline property, if it remains zero, or indicates no association, independent of the sample size or the complexity of Y [15]. To illustrate that SCI fulfills the zero-baseline property, while CMI does not, we consider their behaviour when the conditioning set is empty. We generate X and Y independently, setting their domain sizes respectively to k X = 4 while increasing k Y from 4 0 to 4 5 . For a score between zero and one we divide I by H(X) and writeF (X; Y ) = I(X; Y | ∅)/H(X). We instantiate F using the true entropy,F using empirical estimates of H(·), andF SCI using SCI . For each setup between X and Y we generate 100 data sets with 1 000 samples, average over the results and plot them in Fig. 9. 
F̂_SCI correctly identifies independence, whereas the empirical estimate F̂ is non-zero almost immediately and quickly rises up to 1, indicating a functional relationship instead of independence.
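As a quick empirical companion to the proof of Lemma 1 in the appendix, the following snippet checks numerically that the log-regret differences are non-increasing in n (equivalently, that the ratio R^n_{M_k}/R^{n-1}_{M_k} decreases), using the multinomial_regret() helper sketched earlier; this is a sanity check, not a proof.

```python
# Numerical sanity check of Lemma 1: log R^n_{M_k} is concave in n, i.e. the
# consecutive differences of the log-regret should be non-increasing in n.
def check_log_concavity(k: int, n_max: int = 200) -> bool:
    diffs = [multinomial_regret(n, k) - multinomial_regret(n - 1, k)
             for n in range(2, n_max + 1)]
    return all(later <= earlier + 1e-9 for earlier, later in zip(diffs, diffs[1:]))

for k in (2, 3, 5, 10):
    print(k, check_log_concavity(k))   # expected: True for every k
```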
7,323
1808.06347
2885143803
Probabilistic graphical models compactly represent joint distributions by decomposing them into factors over subsets of random variables. In Bayesian networks, the factors are conditional probability distributions. For many problems, common information exists among those factors. Adding similarity restrictions can be viewed as imposing prior knowledge for model regularization. With proper restrictions, learned models usually generalize better. In this work, we study methods that exploit such high-level similarities to regularize the learning process and apply them to the task of modeling the wave propagation in inhomogeneous media. We propose a novel distribution-based penalization approach that encourages similar conditional probability distribution rather than force the parameters to be similar explicitly. We show in experiment that our proposed algorithm solves the modeling wave propagation problem, which other baseline methods are not able to solve.
Multi-Task Learning (MTL) is a learning paradigm in machine learning whose goal is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks. Considering the wave propagation problem, we can formalize it as finding the distribution @math , where @math denotes the state of all nodes at time @math . The assumption is that the state of each node at time @math is independent of other nodes at time @math given the states of its neighbors at time @math . So the distribution can be decomposed into @math (assuming we have @math nodes in total) conditional distributions. Rather than using @math neural networks to approximate those distributions, MTL uses one neural network with @math outputs. The first few hidden layers are shared among nodes, while the following layers are node-specific. @cite_9 shows that such a setting greatly reduces the risk of overfitting. This makes sense intuitively: the more factors we learn simultaneously, the more our model has to find a representation that captures all of them, and the smaller the chance of overfitting. We implement an MTL model as a baseline to compare against our distribution-based penalization model; a minimal sketch of such an architecture is given below.
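A minimal sketch of such a shared-trunk multi-task network, assuming PyTorch; the layer sizes, the choice of one output per node and the use of the full previous state as input are illustrative assumptions, not taken from the cited work.

```python
# Hedged sketch of a multi-task network: a trunk shared by all nodes followed by
# one small node-specific head per node. Sizes are illustrative only.
import torch
import torch.nn as nn

class SharedTrunkMTL(nn.Module):
    def __init__(self, in_dim: int, n_nodes: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())   # shared layers
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_nodes)])  # node-specific

    def forward(self, x):
        h = self.trunk(x)                                      # shared representation
        return torch.cat([head(h) for head in self.heads], dim=-1)  # one output per node

model = SharedTrunkMTL(in_dim=2500, n_nodes=2500)  # e.g. the full previous state as input (illustrative)
```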
{ "abstract": [ "A Bayesian model of learning to learn by sampling from multiple tasks is presented. The multiple tasks are themselves generated by sampling from a distribution over an environment of related tasks. Such an environment is shown to be naturally modelled within a Bayesian context by the concept of an objective prior distribution. It is argued that for many common machine learning problems, although in general we do not know the true (objective) prior for the problem, we do have some idea of a set of possible priors to which the true prior belongs. It is shown that under these circumstances a learner can use Bayesian inference to learn the true prior by learning sufficiently many tasks from the environment. In addition, bounds are given on the amount of information required to learn a task when it is simultaneously learnt with several other tasks. The bounds show that if the learner has little knowledge of the true prior, but the dimensionality of the true prior is small, then sampling multiple tasks is highly advantageous. The theory is applied to the problem of learning a common feature set or equivalently a low-dimensional-representation (LDR) for an environment of related tasks." ], "cite_N": [ "@cite_9" ], "mid": [ "2143419558" ] }
A Distribution Similarity Based Regularizer for Learning Bayesian Networks Its Application on Modeling Wave Propagation in Inhomogeneous Media
Probabilistic graphical models compactly represent joint distributions by decomposing them into factors over subsets of random variables. In Bayesian networks, the factors are conditional probability distributions. For many problems, common information exists among those factors. For instance, the simplified Ising model (a typical graphical model of ferromagnetism) restricts the parameters of all local potentials and all of the neighbor interactions to be identical [MW14]. Adding similarity restrictions can be viewed as imposing prior knowledge for model regularization. With proper restrictions, learned models usually generalize better. However, for problems with inhomogeneous space, the identical assumption oversimplifies the problem. But still, we believe that common information would exist in a higher level. In this work, we study methods that exploit such high-level similarities to regularize the learning process and apply them to the task of modeling the wave propagation in inhomogeneous media. Mathematically, wave propagation is modeled using differential equations of a perturbation function, and closed form solution does not exist for inhomogeneous material. Using numerical finite difference methods, we generate the dataset that stores the sequential states of the wave propagation in a 2-D grid. This process can be considered as a stationary Markov chain. The task is to learn a transition dynamics that maps the state of current perturbations to perturbations of next time step. We propose a novel distribution-based penalization approach that encourages similar conditional probability distribution rather than force the parameters to be similar explicitly. We also implement three other methods: the "free" model (will be illustrated later), strict parameter sharing and multi-task learning, and show that our approach outperforms these models on our wave propagation dataset. Parameter Sharing Architectures based on deep artificial neural networks have improved the state of the art across a wide range of diverse tasks. Most prominently Convolutional Neural Networks have raised the bar on image classification tasks [KSH12, SZ14, HZRS16]. Parameter sharing, namely sharing of weights by all neurons in a particular feature map, is crucial to deep CNN models since it controls the capacity of the model and encourages spatial invariance. However, explicitly tying the parameters together may oversimply the problem in some scenarios. Take Ising models as example, the energy of a configuration σ is given by the Hamiltonian function H(σ) = − i,j J ij σ i σ j − µ j h j σ j . If the external magnetic field is homogeneous, it's reasonable to simplify the model by setting h j = h for all j and J ij = J for all i, j paris. When the external magnetic field is inhomogeneous, whereas, strict parameter sharing does not refer to reality anymore. A model with strict parameter sharing is likewise too naïve to approximate the wave propagation problem, nevertheless we implement such model as a baseline. Multi-task Learning Multi-Task Learning (MTL) is a learning paradigm in machine learning and its target is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks. Consider the wave propagation problem, we can formulize it as finding the distribution P (x t |x t−1 ), where x t denotes the state of all nodes at time t. 
The assumption is that the state of each node at time t is independent of other nodes at time t − 1 given the states of its neighbors at time t − 1. So the distribution can be decomposed into N (assume we have N nodes in total) conditional distributions. Rather than using N neural networks to approximate those distributions, MTL uses one neural network with N outputs. The first few hidden layers are shared among nodes, while the following layers are node-specific. [Bax97] shows that such setting greatly reduces the risk of overfitting. This makes sense intuitively: The more factors we are learning simultaneously, the more our model has to find a representation that captures all of the factors and the less is the chance of overfitting. We implement a MTL model as the opponent of our distribution-based penalization model. Similarity Measure Between Distributions Statistical distance measures dissimilarity between two probabilistic distributions. It was originally developed for testing samples follow hypothesized distributions [Tha10]. In our work, the targets are to learn conditional distributions, and penalize on their dissimilarities. [MV15] provides a comprehensive overview of this field. Typical statistical distances can be classified as metric and non-metric measures. For our purpose, metric measures are easier to work with since many optimization algorithms are developed in metric space, and symmetry simplifies the possibilities. However the non-metric measures often encode desired information such as KL divergence measures information gain. In this work, we employ KL divergence and Bhattacharyya distance for regularization. Since we work on Gaussian random fields, the difference measures exist in closed forms. For fixed µ 1 , σ 1 , µ 2 , σ 2 , which are the means and standard deviations that define two normal distributions, the KL divergence and Bhattacharyya distance are KL(µ 1 , σ 1 , µ 2 , σ 2 ) = ln(σ 2 ) − ln(σ 1 ) + σ 2 1 + (µ 1 − µ 2 ) 2 2σ 2 2 − 1 2 , and BH(µ 1 , σ 1 , µ 2 , σ 2 ) = 1 4 ln( 1 4 ( σ 2 1 σ 2 2 + σ2 2 σ 2 1 + 2)) + (µ 1 − µ 2 ) 2 σ 2 1 + σ 2 2 . Notice that Bhattacharyya distance is symmetric. Wave Propagation in Inhomogeneous Media and Damage Identification The problem of damage identification using ultrasonic waves is to find singular material properties location by analyzing ultrasonic waves propagating through the material. This problem can be applied to monitoring health of structure materials such as aircrafts. The difficulty of this problem is that the material properties as a function over space interacts with the wave through a partial differential equation. A closed form solution of the equation does not exist. Therefore, learning the material properties from observations of the wave function is very hard. For two-dimensional elastic wave propagation problem, P-SV wave and its approximation using finite difference method was proposed in [Vir86]. Later work shows finite difference method for lamb wave, which is a special case of P-SV wave [Gop16], is useful for the task of damage identification [LS03a,LS03b,Yan13]. Because of the accessibility of source code, we employ a finite difference implementation of P-SV wave [BW16] for data generation. Existing methods for damage identification are limited in the following perspectives. First, many of existing works are not quantitative. 
For example [LS03b] systematically studies the wave behavior for different configuration of damages, and suggest further measurements and possible damage locations given observed wave behavior. The problems of this kind of approaches are that the behaviors are described subjectively, the inferences are inductive, and the results are hard to test. Second, the existing quantitative works rely too much on feature engineering based on simplified models. For example [Yan13] builds a Bayesian model of damage location and time-of-fly, which is the trivial time of reflective field. To compute the time-of-fly, it is necessary to assume single point damage with strong reflectivity and an infinite homogeneous media. In practice, many of these assumptions oversimplify the problem, and measurements are far from assumed model [WJT + 15]. In this work, we propose to learn the local interactions directly from data using graphical models. This will overcome the oversimplification, and a successfully learned graphical model is capable for many kinds of quantitative inference including damage identification. Problem Specification and Algorithms The task we work on is to model the wave propagation on a d × d grid (d was set to 50 in our experiments). At time t, a 50 × 50 tensor represents the state (its position) of these 2500 nodes. The problem can be formulized as finding the distribution of the state of all nodes at time t, given the state at time t−1. We assume the sequential data is a homogeneous Markov chain. The independence assumption is that the state of each node at time t is independent of other nodes at time t − 1 given the states of its neighbors at time t − 1. Furthermore, we assume that the state of each node given its neighbors of previous time follows a Gaussian distribution. We use two neural networks to approximate the distribution, one for producing the mean and the other for variance. For a d by d grid, a model without any regularization would contain d × d × 2 = 5000 neural nets. We call it "free" model. The inputs to each neural net are the states of neighbors at time t − 1, and the output is the mean or variance of the node at time t. For all the experiments, we set the neighbors of a node be a 5 × 5 blanket whose center is the chosen node. And because of the conditional independencies, the losses can be easily written as a sum of sub-functions, where each of the sub-functions involves very few of the networks. Therefore, one can use coordinate gradient descent to train those neural nets. Data Generation The training and test data are generated as two time sequences of wave function evaluations at d × d grid points on a plate. We simulate the wave propagation using the finite difference method that solves P-SV wave equations [Inc13]. The densities of plate over the grid points, which is used to solve the differential equation, is set as a constant equals 2200g/cm 3 except on the 5 × 5 grids centered at (23, 23). The 5 × 5 densities centered at (23, 23) are set to The velocities on the grid points are set to 3000m/s. The training and test data are simulated on this material with different wave source points for 1.5 seconds. The data of first 0.5s are truncated for each of the sequences. The training data at time 0.6s is visualized in figure 1. Baseline Algorithms We implement three baseline methods, i.e., free model, model with strict parameter sharing and multi-task learning model. 
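For illustration only, the following sketch generates data in the same spirit as the setup just described: a simplified scalar wave equation solved with a leapfrog finite-difference scheme on a 50×50 grid with an inhomogeneous 5×5 patch. Here the inhomogeneity is modelled as a velocity change with periodic boundaries, whereas the paper varies the density in an existing P-SV (elastic) solver; all constants below are made up.

```python
# Illustrative data generation: simplified 2-D scalar wave, leapfrog finite differences.
import numpy as np

d, steps, dx, dt = 50, 300, 1.0, 0.1
c = np.full((d, d), 3.0)          # background wave speed (illustrative units)
c[21:26, 21:26] = 2.0             # a 5x5 patch with different speed, roughly around (23, 23)
u_prev = np.zeros((d, d))
u = np.zeros((d, d))
u[10, 10] = 1.0                   # point source (illustrative location)

frames = []
for _ in range(steps):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2   # periodic boundaries
    u_next = 2 * u - u_prev + (c * dt)**2 * lap                    # leapfrog update
    u_prev, u = u, u_next
    frames.append(u.copy())       # sequence of 50x50 states used as training data
data = np.stack(frames)           # shape (steps, 50, 50)
```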
The free model consists of 50 × 50 × 2 = 5000 neural nets, computing the mean and variance for the distribution of each node individually. Figure 2.a shows the structure of one such neural net. For the model with strict parameter sharing, we use two neural nets to approximate the distribution of conditional mean and variance for all the nodes. It can be regarded as tying the weights of 5000 neural nets all together. The assumption is strong: given the state of its neighbors at time t − 1, the distribution of the state of each node is identical, regardless the location of node. In this setting, the model is trained on more data (For one sequential data, we only need to train 2 neural nets rather than d × d × 2). It's easier to converge and less likely to overfit. But the negative point of this model is that it oversimplifies the problem by ignoring the inhomogeneous property of the material. In another word, it makes a homogeneous assumption. Our last baseline model uses multi-task learning technique. We have one neural net for computing the mean of all the nodes and one single neural net for the variance. The first hidden layer is shared but the second hidden layer corresponds to different distributions of each node. Figure 2.b shows the structure of MLT based model. This model exploits the common information among the grid nodes to some extend without simplifying the problem too much. Distribution Similarity Based Penalization Our proposed model is the same as the aforementioned free model, except that we add distribution-based penalization. Now we still have d × d × 2 neural nets, but we restrict the neural nets that approximate the distributions of nearby nodes to be similar. The regularization is at the distribution level, that is, the statistical distance between two conditional Gaussians. More specifically, there are two neural nets being associated with every node, whose outputs are the mean and variance respectively. The loss function consists of negative log likelihood and a penalization term. The penalization is the closed form statistical distances (a function of two pairs of mean and variance) among nearby Gaussians conditioned on some identical neighborhood values. The neighborhood values are selected in different ways. Mathematically, the loss function is written as loss(Θ) = −L(D|Θ) + λ t i j∈N B(i) P (µ(X i,j,t |Θ i ), σ(X i,j,t |Θ i ), µ(X i,j,t |Θ j ), σ(X i,j,t |Θ j ))(1) where Θ are the parameters that define the neural nets, L is the log likelihood function and P is a penalization function that measures the difference between two normal distributions given the parameters. The hyperparamter λ is set to one for all the experiments. Notice that for each node the two networks define a conditional normal distribution, and the distribution difference measures only apply on unconditioned probability distributions. The conditional distributions' differences need to be measured given the parent data. In the loss function, at each time step t, we regularize the parameters by summing the distribution difference measures given parent values X i,j,t over all neighborhoods i, j. The parent values (i.e. the X i,j,t 's) can be selected in different ways. One is to randomly select 5 × 5 neighborhood values over the whole dataset. The other way is to select the parent values as the 5 × 5 neighborhood values at time t − 1 whose center is node i. We employ two statistical distances for experiments: the KL divergence and Bhattacharyya distance. 
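To make the regularizer concrete, the following hedged PyTorch sketch implements the two closed-form distances (using the standard 1/4 factor on the mean term of the Bhattacharyya distance) and the penalized loss of Eq. (1). The per-node architecture, the neighbourhood wiring and all names are our illustrative choices, not the paper's exact setup.

```python
# Hedged sketch of the distribution-similarity penalty (Eq. (1)): per node, one
# network for the conditional mean and one for the log-variance; the loss is the
# Gaussian NLL plus lambda times a statistical distance between the conditional
# Gaussians of neighbouring nodes, evaluated at shared parent values.
import torch
import torch.nn as nn

def kl_gauss(mu1, s1, mu2, s2):
    """Closed-form KL(N(mu1,s1^2) || N(mu2,s2^2)); works elementwise on tensors."""
    return torch.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

def bhattacharyya_gauss(mu1, s1, mu2, s2):
    return (0.25 * torch.log(0.25 * (s1**2 / s2**2 + s2**2 / s1**2 + 2))
            + 0.25 * (mu1 - mu2)**2 / (s1**2 + s2**2))

class NodeModel(nn.Module):
    """Conditional Gaussian of one node's state given its 5x5 neighbourhood at t-1."""
    def __init__(self, in_dim=25, hidden=32):
        super().__init__()
        self.mean = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.logvar = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, parents):                       # parents: (batch, 25)
        return self.mean(parents), torch.exp(0.5 * self.logvar(parents))

def penalized_loss(models, parent_batch, targets, neighbours, lam=1.0, dist=kl_gauss):
    """models: dict node -> NodeModel; parent_batch: dict node -> (batch, 25) parent values;
    targets: dict node -> (batch, 1); neighbours: dict node -> ids of nearby nodes."""
    nll, penalty = 0.0, 0.0
    for i, model in models.items():
        mu_i, s_i = model(parent_batch[i])
        nll = nll + 0.5 * (((targets[i] - mu_i) / s_i) ** 2
                           + 2 * torch.log(s_i)).sum()       # Gaussian NLL (up to a constant)
        for j in neighbours[i]:                # same parent values fed to both networks
            mu_j, s_j = models[j](parent_batch[i])
            penalty = penalty + dist(mu_i, s_i, mu_j, s_j).sum()
    return nll + lam * penalty
```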
Experimental Results We use an epoch of 3, a learning rate of 0.01 and Adam optimizer to train all models. Both the training data and test data contain 50×50 node states of 119 time steps. For each model, we record the training loss (negative log likelihood) and test loss at every time step. Since these values are in a great magnitude, we process them by taking ln to the values. For negative values x, − ln(−x) are used. For |x| < 1, we simply display a zero. We create figures with an x-axis denoting time steps (119 per epoch × 3 epochs) and y-axis denoting the log value of loss. There's no sign of convergence and its losses are larger than the free model. It constrains the model complexity by sharing the first hidden layer, but the second layer alone may not be complex enough to fit the data. Figure 6.a and 6.b show the losses of KL divergence regularized model. When computing equation (1), we pick parent values X i,j,t as the 5 × 5 neighborhood values of previous time centered at node i. We can see that the regularized model converges much faster. It also effectively avoids the risk of overfitting since the test loss keeps decreasing as the model is trained with more epochs. Figure 7.a and 7.b show the losses of Bhattacharyya distance regularized model. Again, the regularized model boosts the convergence and makes the training more stable. For the test loss, however, we do not see the similar pattern of Figure 6.b. So we turn to another strategy of picking the parent values X i,j,t by randomly selecting 5 × 5 neighborhood values over the whole dataset. Figure 8 shows the results in this setting. Now the Bhattacharyya distance regularized model behaves similar to the KL divergence one, though the latter is more stable. A possible interpretation of this observation is that since the purpose of evaluating penalization given X i,j,t is to make the two conditional probabilities similar for all possible parent values, using the same X i,j,t as the input to the neural nets makes it biased. Conclusion Modeling wave propagation requires learning a numerous number of conditional probabilities from limited data. The experiments show that the three baseline algorithms do not solve this problem. The reasons for the "free" and strict parameter sharing model do not perform well is trivial. For the multi-task learning method using layer-sharing neural nets, comparing to our method, it dose not use the spacial information to reduce the model complexity. It uses a single common layer shared by all nodes instead of various common layers shared by neighbors. Because of the difficulty of implementation, we did not implement a parameter similarity based penalization. It will be more fair to compare our proposed distribution similarity based penalization with a parameter based one since they could use the same amount of prior knowledge of the problem, i.e. the spacial information. However the results we provide is sufficient to show the proposed penalization method is practically valuable. We test our algorithm with wave function observation data on all grid points to avoid the difficulty of learning with latent variables. In practice, it is more likely that there are only tens or hundreds of the wave function locations (possibly vary over time) available. A algorithm that builds the transition model of the wave with unobserved data will be more significant from a practical point of view. This work does not limit to the damage identification problem. 
With simple modifications, it would generalize to other systems that can be solved by finite difference methods. In summary, we propose a distribution-similarity-based penalization function to regularize the conditional probability distributions learned for Bayesian networks. The experiments address the problem of modeling a wave function transition, where the model is valuable for material damage identification. With our regularization, the conditional probabilities, parameterized as neural nets that define a Bayesian network, are successfully learned, which the other three baseline methods fail to do.
2,948
1808.06296
2886849295
Although the stochastic gradient descent (SGD) method and its variants (e.g., stochastic momentum methods, AdaGrad) are the algorithms of choice for solving non-convex problems (especially deep learning), there still remain big gaps between theory and practice, with many questions unresolved. For example, there is still a lack of convergence theory for SGD and its variants that use a stagewise step size and return an averaged solution, as is done in practice. In addition, theoretical insight into why the adaptive step size of AdaGrad could improve over the non-adaptive step size of SGD is still missing for non-convex optimization. This paper aims to address these questions and fill the gap between theory and practice. We propose a universal stagewise optimization framework for a broad family of non-smooth non-convex (namely weakly convex) problems with the following key features: (i) at each stage any suitable stochastic convex optimization algorithm (e.g., SGD or AdaGrad) that returns an averaged solution can be employed for minimizing a regularized convex problem; (ii) the step size is decreased in a stagewise manner; (iii) an averaged solution is returned as the final solution, selected from all stagewise averaged solutions with sampling probabilities increasing with the stage number. Our theoretical results for stagewise AdaGrad exhibit its adaptive convergence, and therefore shed insight on its faster convergence, compared to stagewise SGD, for problems with sparse stochastic gradients. To the best of our knowledge, these new results are the first of their kind to address the unresolved issues of existing theories mentioned earlier. Besides the theoretical contributions, our empirical studies show that our stagewise SGD and ADAGRAD improve the generalization performance of existing variants and implementations of SGD and ADAGRAD.
Finally, we refer readers to several recent papers for other algorithms for weakly convex problems @cite_12 @cite_45 . For example, Drusvyatskiy and Paquette @cite_45 studied a subclass of weakly convex problems whose objective consists of a composition of a convex function and a smooth map, and proposed a prox-linear method that can enjoy a lower iteration complexity than @math by smoothing the objective of each subproblem. Davis and Drusvyatskiy @cite_12 studied a more general algorithm that successively minimizes a proximally regularized stochastic model of the objective function. When the objective function is smooth and has a finite-sum form, variance-reduction based methods have also been studied @cite_20 @cite_19 @cite_38 @cite_22 @cite_26 , which have provably faster convergence in terms of @math . However, in all of these studies the convergence is provided for an impractical solution: either the solution that attains the minimum (proximal) subgradient norm @cite_45 , or a solution sampled uniformly from all iterations @cite_20 @cite_19 @cite_38 @cite_22 .
{ "abstract": [ "We consider the fundamental problem in non-convex optimization of efficiently reaching a stationary point. In contrast to the convex case, in the long history of this basic problem, the only known theoretical results on first-order non-convex optimization remain to be full gradient descent that converges in @math iterations for smooth objectives, and stochastic gradient descent that converges in @math iterations for objectives that are sum of smooth functions. We provide the first improvement in this line of research. Our result is based on the variance reduction trick recently introduced to convex optimization, as well as a brand new analysis of variance reduction that is suitable for non-convex optimization. For objectives that are sum of smooth functions, our first-order minibatch stochastic method converges with an @math rate, and is faster than full gradient descent by @math . We demonstrate the effectiveness of our methods on empirical risk minimizations with non-convex loss functions and training neural nets.", "In this paper, we present new stochastic methods for solving two important classes of nonconvex optimization problems. We first introduce a randomized accelerated proximal gradient (RapGrad) method for solving a class of nonconvex optimization problems consisting of the sum of @math component functions, and show that it can significantly reduce the number of gradient computations especially when the condition number @math (i.e., the ratio between the Lipschitz constant and negative curvature) is large. More specifically, RapGrad can save up to @math gradient computations than existing deterministic nonconvex accelerated gradient methods. Moreover, the number of gradient computations required by RapGrad can be @math (at least @math ) times smaller than the best-known randomized nonconvex gradient methods when @math . Inspired by RapGrad, we also develop a new randomized accelerated proximal dual (RapDual) method for solving a class of multi-block nonconvex optimization problems coupled with linear constraints. We demonstrate that RapDual can also save up to a factor of @math projection subproblems than its deterministic counterpart, where @math denotes the number of blocks. To the best of our knowledge, all these complexity results associated with RapGrad and RapDual seem to be new in the literature. We also illustrate potential advantages of these algorithms through our preliminary numerical experiments.", "Given a nonconvex function that is an average of @math smooth functions, we design stochastic first-order methods to find its approximate stationary points. The convergence of our new methods depends on the smallest (negative) eigenvalue @math of the Hessian, a parameter that describes how nonconvex the function is. Our methods outperform known results for a range of parameter @math , and can be used to find approximate local minima. Our result implies an interesting dichotomy: there exists a threshold @math so that the currently fastest methods for @math and for @math have different behaviors: the former scales with @math and the latter scales with @math .", "", "We consider global efficiency of algorithms for minimizing a sum of a convex function and a composition of a Lipschitz convex function with a smooth map. The basic algorithm we rely on is the prox-linear method, which in each iteration solves a regularized subproblem formed by linearizing the smooth map. 
When the subproblems are solved exactly, the method has efficiency @math , akin to gradient descent for smooth minimization. We show that when the subproblems can only be solved by first-order methods, a simple combination of smoothing, the prox-linear method, and a fast-gradient scheme yields an algorithm with complexity @math . The technique readily extends to minimizing an average of @math composite functions, with complexity @math in expectation. We round off the paper with an inertial prox-linear method that automatically accelerates in presence of convexity.", "We consider an algorithm that successively samples and minimizes stochastic models of the objective function. We show that under weak-convexity and Lipschitz conditions, the algorithm drives the expected norm of the gradient of the Moreau envelope to zero at the rate @math . Our result yields new complexity guarantees for the stochastic proximal point algorithm on weakly convex problems and for the stochastic prox-linear algorithm for minimizing compositions of convex functions with smooth maps. Moreover, our result also recovers the recently obtained complexity estimate for the stochastic proximal subgradient method on weakly convex problems.", "" ], "cite_N": [ "@cite_38", "@cite_26", "@cite_22", "@cite_19", "@cite_45", "@cite_12", "@cite_20" ], "mid": [ "2301983558", "2803240098", "2728499573", "", "2570286083", "2790417304", "" ] }
Universal Stagewise Learning for Non-Convex Problems with Convergence on Averaged Solutions
Non-convex optimization has recently received increasing attention due to its popularity in emerging machine learning tasks, particularly for learning deep neural networks. One of the keys to the success of deep learning for big data problems is the employment of simple stochastic algorithms such as SGD or ADAGRAD [22,9]. Analysis of these stochastic algorithms for non-convex optimization is an important and interesting research topic, which already attracts much attention from the community of theoreticians [14,15,16,36,7,32,24]. However, one issue that has been largely ignored in existing theoretical results is that the employed algorithms in practice usually differ from their plain versions that are well understood in theory. Below, we will mention several important heuristics used in practice that have not been well understood for non-convex optimization, which motivates this work. First, a heuristic for setting the step size in training deep neural networks is to change it in a stagewise manner from a large value to a small value (i.e., a constant step size is used in a stage for a number of iterations and is decreased for the next stage) [22], which lacks theoretical analysis to date. In existing literature [14,7], SGD with an iteratively decreasing step size or a small constant step size has been well analyzed for non-convex optimization problems with guaranteed convergence to a stationary point. For example, the existing theory usually suggests an iteratively decreasing step size proportional to 1/ √ t at the t-th iteration or a small constant step size, e.g., proportional to ǫ 2 with ǫ ≪ 1 for finding an ǫ-stationary solution whose gradient's magnitude (in expectation) is small than ǫ. Second, the averaging heuristic is usually used in practice, i.e., an averaged solution is returned for prediction [3], which could yield improved stability and generalization [18]. However, existing theory for many stochastic non-convex optimization algorithms only provides guarantee on a uniformly sampled solution or a non-uniformly sampled solution with decreasing probabilities for latest solutions [14,36,7]. In particular, if an iteratively decreasing step size proportional to 1/ √ t at the t-th iteration is employed, the convergence guarantee was provided for a random solution that is non-uniformly selected from all iterates with a sampling probability proportional to 1/ √ t for the t-th iterate. This means that the latest solution always has the smallest probability to be selected as the final solution, which contradicts to the common wisdom. If a small constant step size is used, then usually a uniformly sampled solution is returned with convergence guarantee. However, both options are seldomly used in practice. A third common heuristic in practice is to use adaptive coordinate-wise step size of ADAGRAD [9]. Although adaptive step size has been well analyzed for convex problems (i.e., when it can yield faster convergence than SGD) [12,5], it still remains an mystery for non-convex optimization with missing insights from theory. Several recent studies have attempted to analyze ADAGRAD for non-convex problems [32,24,4,39]. Nonetheless, none of them are able to exhibit the adaptive convergence of ADAGRAD to data as in the convex case and its advantage over SGD for non-convex problems. 
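To make the stagewise step-size and averaging heuristic described above concrete, the following is a minimal sketch in plain NumPy: a constant step size is used within each stage, decreased between stages, and the next stage is restarted from the averaged iterate. The toy objective, the decay rule eta0/s and the stage length are illustrative assumptions only, not the schedule analyzed later in the paper.

```python
import numpy as np

# Toy smooth non-convex objective: phi(x) = sum_i x_i^2 / (1 + x_i^2), a robust loss.
def stoch_grad(x, rng):
    g = 2 * x / (1 + x ** 2) ** 2                      # exact gradient of the toy loss
    return g + 0.1 * rng.standard_normal(x.shape)      # plus noise to mimic a stochastic gradient

rng = np.random.default_rng(0)
x = rng.standard_normal(10)

eta0, num_stages, iters_per_stage = 0.5, 5, 200        # illustrative values
for s in range(1, num_stages + 1):
    eta = eta0 / s                                     # constant step size within the stage,
                                                       # decreased between stages
    avg = np.zeros_like(x)
    for _ in range(iters_per_stage):
        x = x - eta * stoch_grad(x, rng)
        avg += x
    x = avg / iters_per_stage                          # restart the next stage from the averaged iterate
print("final point:", x)
```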
To overcome the shortcomings of existing theories for stochastic non-convex optimization, this paper analyzes new algorithms that employ some or all of these commonly used heuristics in a systematic framework, aiming to fill the gap between theory and practice. The main results and contributions are summarized below: • We propose a universal stagewise optimization framework for solving a family of nonconvex problems, i.e., weakly convex problems, which includes some non-smooth nonconvex problems and is broader than smooth non-convex problems. At each stage, any suitable stochastic convex optimization algorithms (e.g., SGD, ADAGRAD) with a constant step size parameter can be employed for optimizing a regularized convex problem with a number of iterations, which usually return an averaged solution. The step size parameter is decreased in a stagewise manner. • We analyze several variants of the proposed framework by employing different basic algorithms, including SGD, stochastic heavy-ball (SHB) method, stochastic Nesterov's accelerated gradient (SNAG) method, stochastic alternating direction methods of multipliers (ADMM), and ADAGRAD. We prove the convergence of their stagewise versions for an averaged solution that is non-uniformly selected from all stagewise averaged solutions with sampling probabilities increasing as the stage number. In particular, for the stagewise algorithm using the first three basic algorithms (SGD, SHB, SNAG), we establish the same order of iteration complexity for finding a stationary point as the existing theories of these basic algorithms. For stagewise ADAGRAD, we establish an adaptive convergence for finding a stationary point, which is provably better than stagewise SGD or SGD when the cumulative growth of stochastic gradient is slow. Preliminaries The problem of interest in this paper is: min x∈Ω φ(x) = E ξ [φ(x; ξ)](1) where Ω is a closed convex set, ξ ∈ U is a random variable, φ(x) and φ(x; ξ) are non-convex functions, with the basic assumptions on the problem given in Assumption 1. To state the convergence property of an algorithm for solving the above problem. We need to introduce some definitions. These definitions can be also found in related literature, e.g., [8,7]. In the sequel, we let · denote an Euclidean norm, [S] = {1, . . . , S} denote a set, and δ Ω (·) denote the indicator function of the set Ω. Definition 1. (Fréchet subgradient) For a non-smooth and non-convex function f (·) , ∂ F f (x) = v ∈ R d |f (y) ≥ f (x) + v, y − x + o( y − x ), ∀y ∈ R d denotes the Fréchet subgradient of f . Definition 2. (First-order stationarity) For problem (1), a point x ∈ Ω is a first-order stationary point if 0 ∈ ∂ F (φ + δ Ω )(x), where δ Ω denotes the indicator function of Ω. Moreover, a point x is said to be ǫ-stationary if dist(0, ∂ F (φ + δ Ω )(x)) ≤ ǫ.(2) where dist denotes the Euclidean distance from a point to a set. Definition 3. (Moreau Envelope and Proximal Mapping) For any function f and λ > 0, the following function is called a Moreau envelope of f f λ (x) = min z f (z) + 1 2λ z − x 2 . Further, the optimal solution to the above problem denoted by prox λf (x) = arg min z f (z) + 1 2λ z − x 2 is called a proximal mapping of f . Definition 4. (Weakly convex) A function f is ρ-weakly convex, if f (x) + ρ 2 x 2 is convex. It is known that if f (x) is ρ-weakly convex and λ < ρ −1 , then its Moreau envelope f λ (x) is C 1smooth with the gradient given by (see e.g. 
[7]) ∇f λ (x) = λ −1 (x − prox λf (x)) A small norm of ∇f λ (x) has an interpretation that x is close to a point that is nearly stationary. In particular for any x ∈ R d , let x = prox λf (x), then we have f ( x) ≤ f (x), x − x = λ ∇f λ (x) , dist(0, ∂f ( x)) ≤ ∇f λ (x) .(3) This means that a point x satisfying ∇f λ (x) ≤ ǫ is close to a point in distance of O(ǫ) that is ǫ-stationary. It is notable that for a non-smooth non-convex function f (·), there could exist a sequence of solutions {x k } such that ∇f λ (x k ) converges while dist(0, ∂f (x k )) may not converge [11]. To handle such a challenging issue for non-smooth non-convex problems, we will follow existing works [6,11,8] to prove the near stationarity in terms of ∇f λ (x). In the case when f is smooth, ∇f λ (x) is closely related to the magnitude of the projected gradient G λ (x) defined below, which has been used as a criterion for constrained non-convex optimization [29], G λ (x) = 1 λ (x − prox λδΩ (x − λ∇f (x))).(4) It was shown that when f (·) is smooth with L-Lipschitz continuous gradient [10]: (1 − Lλ) G λ (x) ≤ ∇f λ (x) ≤ (1 + Lλ) G λ (x) , ∀x ∈ Ω.(5) Thus, the near stationarity in terms of ∇f λ (x) implies the near stationarity in terms of G λ (x) for a smooth function f (·). Now, we are ready to state the basic assumptions of the considered problem (1). Assumption 1. (A1) There is a measurable mapping g : Ω × U → R such that E ξ [g(x; ξ)] ∈ ∂ F φ(x) for any x ∈ Ω. (A2) For any x ∈ Ω, E[ g(x; ξ) 2 ] ≤ G 2 . (A3) Objective function φ is µ-weakly convex. (A4) there exists ∆ φ > 0 such that φ(x) − min z∈Ω φ(z) ≤ ∆ φ for any x ∈ Ω. Remark: Assumption 1-(A1), 1-(A2) assume a stochastic subgradient is available for the objective function and its Euclidean norm square is bounded in expectation, which are standard assumptions for non-smooth optimization. Assumption (A3) assumes weak convexity of the objective function, which is weaker than assuming smoothness. Assumption (A4) assumes that the objective value with respect to the optimal value is bounded. Below, we present some examples of objective functions in machine learning that are weakly convex. Ex. 1: Smooth Non-Convex Functions. If φ(·) is a L-smooth function (i.e., its gradient is L-Lipschitz continuous), then it is L-weakly convex. Ex. 2: Additive Composition. Consider φ(x) = E[f (x; ξ)] + g(x),(6) where E[f (x; ξ)] is a L-weakly convex function, and g(x) is a closed convex function. In this case, φ(x) is L-weakly convex. This class includes many interesting regularized problems in machine learning with smooth losses and convex regularizers. For smooth non-convex loss functions, one can consider truncated square loss for robust learning, i.e., f (x; a, b) = φ α ((x ⊤ a − b) 2 ) , where a ∈ R d denotes a random data and b ∈ R denotes its corresponding output, and φ α is a smooth non-convex truncation function (e.g., φ α (x) = α log(1 + x/α), α > 0). Such truncated non-convex losses have been considered in [25]. When |x 2 φ ′′ α | ≤ k and a ≤ R, it was proved that f (x; a, b) is a smooth function with Lipschitz continuous gradient [35]. For g(x), one can consider any existing convex regularizers, e.g., ℓ 1 norm, group-lasso regularizer [37], graph-lasso regularizer [20]. Ex. 3: Convex and Smooth Composition Consider φ(x; ξ) = h(c(x; ξ)) where h(·) : R m → R is closed convex and M -Lipschitz continuous, and c(x; ξ) : R d → R m is nonlinear smooth mapping with L-Lipschitz continuous gradient. 
This class of functions has been considered in [11] and it was proved that φ(x; ξ) is M L-weakly convex. An interesting example is phase retrieval [], where φ(x; a, b) = |(x ⊤ a) 2 − b|. More examples of this class can be found in [6]. Ex. 4: Smooth and Convex Composition Consider φ(x; ξ) = h(c(x; ξ)) where h(·) : R → R is a L-smooth function satisfying h ′ (·) ≥ 0, and c(x; ξ) : R d → R is convex and M -Lipschitz continuous. This class of functions has been considered in [35] for robust learning and it was proved that φ(x; ξ) is M L-weakly convex. An interesting example is truncated Lipschitz continuous loss φ(x; a, b) = φ α (ℓ(x ⊤ a, b)), where φ α is a smooth truncation function with φ ′ (·) ≥ 0 (e.g., φ α = α log(1 + x/α)) and ℓ(x ⊤ a, b) is a convex and Lipschitz-continuous function (e.g., |x ⊤ a − b| with bounded a ). Ex. 5: Weakly Convex Sparsity-Promoting Regularizers Consider φ(x; ξ) = f (x; ξ) + g(x), where f (x; ξ) is a convex or a weakly-convex function, and g(x) is a weakly-convex sparsitypromoting regularizer. Examples of weakly-convex sparsity-promoting regularizers include: • Smoothly Clipped Absolute Deviation (SCAD) penalty [13]: g(x) = d i=1 g λ (x i ) and g λ (x) =      λ|x| |x| ≤ λ − x 2 −2aλ|x|+λ 2 2(a−1) λ < |x| ≤ aλ (a+1)λ 2 2 |x| > aλ where a > 2 is fixed and λ > 0. It can be shown that SCAD penalty is (1/(a − 1))-weakly convex [25]. • Minimax Convex Penalty (MCP) [38]: g(x) = d i=1 g λ (x i ) and g λ (x) = sign(x)λ |x| 0 1 − z λb + dz where b > 0 is fixed and λ > 0. MCP is 1/b-weakly convex [25]. Stagewise Optimization: Algorithms and Analysis In this section, we will present the proposed algorithms and the analysis of their convergence. We will first present a Meta algorithmic framework highlighting the key features of the proposed algorithms and then present several variants of the Meta algorithm by employing different basic algorithms. The Meta algorithmic framework is described in Algorithm 1. There are several key features that differentiate Algorithm 1 from existing stochastic algorithms that come with theoretical guarantee. First, the algorithm is run with multiple stages. At each stage, a stochastic algorithm (SA) is called to optimize a proximal problem f s (x) inexactly that consists of the original objective function and a quadratic term, which is guaranteed to be convex due to the weak convexity of φ and γ < µ −1 . The convexity of f s allows one to employ any suitable existing stochastic algorithms (cf. Theorem 1) that have convergence guarantee for convex problems. It is notable that SA usually returns an averaged solution x s at each stage. Second, a decreasing sequence of step size parameters η s is used. At each stage, the SA uses a constant step size parameter η s and runs the updates for a number of T s iterations. We do not initialize T s as it might be adaptive to the data as in stagewise ADAGRAD. Third, the final solution is selected from the stagewise averaged solutions {x s } with non-uniform sampling probabilities proportional to a sequence of non-decreasing positive weights {w s }. In the sequel, we are particularly interested in w s = s α with α > 0. The setup of η s and T s will depend on the specific choice of SA, which will be exhibited later for different variants. To illustrate that Algorithm 1 is a universal framework such that any suitable SA algorithm can be employed, we present the following result by assuming that SA has an appropriate convergence for a convex problem. Theorem 1. 
Let f (·) be a convex function, x * = arg min x∈Ω f (x) and Θ denote some problem dependent parameters. Suppose for x + = SA(f, x 0 , η, T ), we have E[f (x + ) − f (x * )] ≤ ε 1 (η, T, Θ) x 0 − x * 2 2 + ε 2 (η, T, Θ)(f (x 0 ) − f (x * )) + ε 3 (η,(η s , T s , Θ) ≤ 1/(48γ), ε 2 (η s , T s , Θ) ≤ 1/2, we have E ∇φ γ (x τ ) 2 ≤ 32∆ φ (α + 1) γ(S + 1) + 48 S s=1 w s ε 3 (η s , T s , Θ) γ s=1 w s , where τ is randomly selected from {0, . . . , S} with probabilities p τ ∝ w τ +1 , τ = 0, . . . , S. If ε 3 (η s , T s , Θ) ≤ c 3 /s for some constant c 3 ≥ 0 that may depend on Θ, we have E ∇φ γ (x τ ) 2 ≤ 32∆ φ (α + 1) γ(S + 1) + 48c 3 (α + 1) γ(S + 1)α I(α<1) .(8) Remark: The convergence upper bound in (7) of SA covers the results of a broad family of stochastic convex optimization algorithms. The upper bound in (8) is derived assuming ε 1 (η s , T s , Θ), ε 2 (η s , T s , Θ), ε 3 (η s , T s , Θ) are non-zeros. When ε 2 (η s , T s , Θ) = 0 (as in SGD), the upper bound can be improved by a constant factor. Moreover, we do not optimize the value of γ. Indeed, any γ < 1/µ will work, which only has an effect on constant factor in the convergence upper bound. Proof. Below, we use E s to denote expectation over randomness in the s-th stage given all history before s-th stage. Define z s = arg min x∈Ω f s (x) = prox γ(φ+δΩ) (x s−1 )(9) Then ∇φ γ (x s−1 ) = γ −1 (x s−1 − z s ). Then we have φ(x s ) ≥ φ(z s+1 ) + 1 2γ x s − z s+1 2 . Next, we apply Lemma 1 to each call of SGD in stagewise SGD, E[f s (x s ) − f s (z s )] ≤ ε 1 (η s , T s , Θ) x s−1 − z s 2 2 + ε 2 (η s , T s , Θ)(f s (x s−1 ) − f s (z s )) + ε 3 (η s , T s , Θ) Es . Then E s φ(x s ) + 1 2γ x s − x s−1 2 ≤ f s (z s ) + E s ≤ f s (x s−1 ) + E s ≤ φ(x s−1 ) + E s On the other hand, we have that x s − x s−1 2 = x s − z s + z s − x s−1 2 = x s − z s 2 + z s − x s−1 2 + 2 x s − z s , z s − x s−1 ≥(1 − α −1 s ) x s − z s 2 + (1 − α s ) x s−1 − z s 2 where the inequality follows from the Young's inequality with 0 < α s < 1. Thus we have that E s (1 − α s ) 2γ x s−1 − z s 2 ≤ E s φ(x s−1 ) − φ(x s ) + (α −1 s − 1) 2γ x s − z s 2 + E s ≤ E s φ(x s−1 ) − φ(x s ) + (α −1 s − 1) γ(γ −1 − µ) (f s (x s ) − f s (z s )) + E s ≤ E s φ(x s−1 ) − φ(x s ) + α −1 s − γµ (1 − γµ) E s ≤ E s φ(x s−1 ) − φ(x s ) + E s α −1 s − γµ (1 − γµ) {ε 1 (η s , T s , Θ) x s−1 − z s 2 + ε 2 (η s , T s , Θ)(f s (x s−1 ) − f s (z s )) + ε 3 (η s , T s , Θ)}(10) Next, we bound f s (x s−1 ) − f s (z s ) given that x s−1 is fixed. According to the definition of f s (·), we have f s (x s−1 ) − f s (z s ) = φ(x s−1 ) − φ(z s ) − 1 2γ z s − x s−1 2 = φ(x s−1 ) − φ(x s ) + φ(x s ) − φ(z s ) − 1 2γ z s − x s−1 2 = [φ(x s−1 ) − φ(x s )] + f s (x s ) − f s (z s ) + 1 2γ z s − x s−1 2 − 1 2γ x s − x s−1 2 − 1 2γ z s − x s−1 2 ≤ [φ(x s−1 ) − φ(x s )] + [f s (x s ) − f s (z s )]. Taking expectation over randomness in the s-th stage on both sides, we have f s (x s−1 ) − f s (z s ) ≤ E s [φ(x s−1 ) − φ(x s )] + E s [f s (x s ) − f s (z s )] ≤ E[φ(x s−1 ) − φ(x s )] + ε 1 (η s , T s , Θ) x s−1 − z s 2 2 + ε 2 (η s , T s , Θ)(f s (x s−1 ) − f s (z s )) + ε 3 (η s , T s , Θ). Thus, (1 − ε 2 (η s , T s , Θ))(f s (x s−1 ) − f s (z s )) ≤ E[φ(x s−1 ) − φ(x s )] + ε 1 (η s , T s , Θ) x s−1 − z s 2 2 + ε 3 (η s , T s , Θ). Assuming that ε 2 (η s , T s , Θ) ≤ 1/2, we have ε 2 (η s , T s , Θ)(f s (x s−1 ) − f s (z s )) ≤ E s [φ(x s−1 ) − φ(x s )] + ε 1 (η s , T s , Θ) x s−1 − z s 2 2 + ε 3 (η s , T s , Θ). 
Plugging this upper bound into (10), we have E s (1 − α s ) 2γ x s−1 − z s 2 ≤ E s φ(x s−1 ) − φ(x s ) + E s α −1 s − γµ (1 − γµ) {2ε 1 (η s , T s , Θ) x s−1 − z s 2 + φ(x s−1 ) − φ(x s ) + 2ε 3 (η s , T s , Θ)}(11) By setting α s = 1/2, γ = 1/(2µ) and assuming ε 1 (η s , T s , Θ) ≤ 1/(48γ), we have E s 1 8γ x s−1 − z s 2 ≤ 4E s φ(x s−1 ) − φ(x s ) + 6ε 3 (η s , T s , Θ)} Multiplying both sides by w s , we have that w s γE s [ ∇φ γ (x s−1 ) 2 ] ≤ E s 32w s ∆ s + 48ε 3 (η s , T s , Θ)w s By summing over s = 1, . . . , S + 1, we have S+1 s=1 w s E[ ∇φ γ (x s−1 ) 2 ] ≤ E 32 γ S+1 s=1 w s ∆ s + 48 γ S+1 s=1 w s ε 3 (η s , T s , Θ) Taking the expectation w.r.t. τ ∈ {0, . . . , S}, we have that E[ ∇φ γ (x τ ) 2 ]] ≤ E 32 S+1 s=1 w s ∆ s γ S+1 s=1 w s + 48 S+1 s=1 w s ε 3 (η s , T s , Θ)) γ S+1 s=1 w s Algorithm 2 SGD(f, x 1 , η, T ) for t = 1, . . . , T do Compute a stochastic subgradient g t for f (x t ). x t+1 = Π Ω [x t − ηg t ] end for Output: x T = T t=1 x t /T For the first term on the R.H.S, we have that S+1 s=1 w s ∆ s = S+1 s=1 w s (φ(x s−1 ) − φ(x s )) = S+1 s=1 (w s−1 φ(x s−1 ) − w s φ(x s )) + S+1 s=1 (w s − w s−1 )φ(x s−1 ) = w 0 φ(x 0 ) − w S+1 φ(x S+1 ) + S+1 s=1 (w s − w s−1 )φ(x s−1 ) = S+1 s=1 (w s − w s−1 )(φ(x s−1 ) − φ(x S+1 )) ≤ ∆ φ S+1 s=1 (w s − w s−1 ) = ∆ φ w S+1 Then, E[ ∇φ γ (x τ ) 2 ] ≤ 32∆ φ w S+1 γ S+1 s=1 w s + 48 S+1 s=1 w s ε 3 (η s , T s , Θ) γ S+1 s=1 w s The standard calculus tells that S s=1 s α ≥ S 0 x α dx = 1 α + 1 S α+1 S s=1 s α−1 ≤ SS α−1 = S α , ∀α ≥ 1, S s=1 s α−1 ≤ S 0 x α−1 dx = S α α , ∀0 < α < 1 Combining these facts and the assumption ε 3 (η s , T s , Θ) ≤ c/s, we have that E[ ∇φ γ (x τ ) 2 ] ≤      32∆ φ (α+1) γ(S+1) + 48c(α+1) γ(S+1) α ≥ 1 32∆ φ (α+1) γ(S+1) + 48c(α+1) γ(S+1)α 0 < α < 1 In order to have E[ ∇φ γ (x τ ) 2 ] ≤ ǫ 2 , we can set S = O(1/ǫ 2 ). The total number of iterations is S s=1 T s ≤ S s=1 12γs ≤ 6γS(S + 1) = O(1/ǫ 4 ) Next, we present several variants of the Meta algorithm by employing SGD, stochastic momentum methods, and ADAGRAD as the basic SA algorithm, to which we refer as stagewise SGD, stagewise stochastic momentum methods, and stagewise ADAGRAD, respectively. Stagewise SGD In this subsection, we analyze the convergence of stagewise SGD, in which SGD shown in Algorithm 2 is employed in the Meta framework. Besides Assumption 1, we impose the following assumption in this subsection. Assumption 2. the domain Ω is bounded, i.e., there exists D > 0 such that x − y ≤ D for any x, y ∈ Ω. Algorithm 3 Unified Stochastic Momentum Methods: SUM(f, x 0 , η, T ) Set parameters: ρ ≥ 0 and β ∈ (0, 1). for t = 0, . . . , T do Compute a stochastic subgradient g t for f (x t ). y t+1 = x t − ηg t y ρ t+1 = x t − ρηg t x t+1 = y t+1 + β(y ρ t+1 − y ρ t ) end for Output: x T = T t=0 x t /(T + 1) It is worth mentioning that bounded domain assumption is imposed for simplicity, which is usually assumed in convex optimization. For machine learning problems, one usually imposes some bounded norm constraint to achieve a regularization. Recently, several studies have found that imposing a norm constraint is more effective than an additive norm regularization term in the objective for deep learning [17,26]. Nevertheless, the bounded domain assumption is not essential for the proposed algorithm. We present a more involved analysis in the next subsection for unbounded domain Ω = R d . The following is a basic convergence result of SGD, whose proof can be found in the literature and is omitted. Lemma 1. 
For Algorithm 2, assume that f (·) is convex and E g t 2 ≤ G 2 , t ∈ [T ], then for any x ∈ Ω we have E[f ( x T ) − f (x)] ≤ x − x 1 2 2ηT + ηG 2 2 To state the convergence, we introduce a notation ∇φ γ (x) = γ −1 (x − prox γ(φ+δΩ) (x)),(12) which is the gradient of the Moreau envelope of the objective function φ + δ Ω . The following theorem exhibits the convergence of stagewise SGD Theorem 2. Suppose Assumption 1 and 2 hold. By setting γ = 1/(2µ), w s = s α , α > 0, η s = c/s, T s = 12γs/c where c > 0 is a free parameter, then stagewise SGD (Algorithm 1 employing SGD) returns a solution x τ satisfying E ∇φ γ (x τ ) 2 ≤ 16µ∆ φ (α + 1) S + 1 + 24µcĜ 2 (α + 1) (S + 1)α I(α<1) , whereĜ 2 = 2G 2 + 2γ −2 D 2 , and τ is randomly selected from {0, . . . , S} with probabilities p τ ∝ w τ +1 , τ = 0, . . . , S. Remark: To find a solution with E ∇φ γ (x τ ) 2 ≤ ǫ 2 , we can set S = O(1/ǫ 2 ) and the total iteration complexity is in the order of O(1/ǫ 4 ). The above theorem is essentially a corollary of Theorem 1 by applying 1 to f s (·) at each stage. We present a complete proof in the appendix. Stagewise stochastic momentum (SM) methods In this subsection, we present stagewise stochastic momentum methods and their analysis. In the literature, there are two popular variants of stochastic momentum methods, namely, stochastic heavyball method (SHB) and stochastic Nesterov's accelerated gradient method (SNAG). Both methods have been used for training deep neural networks [22,31], and have been analyzed by [36] for non-convex optimization. To contrast with the results in [36], we will consider the same unified stochastic momentum methods that subsume SHB, SNAG and SGD as special cases when Ω = R d . The updates are presented in Algorithm 3. To present the analysis of stagewise SM methods, we first provide a convergence result for minimizing f s (x) at each stage. Lemma 2. For Algorithm 3, assume f (x) = φ(x) + 1 2γ x − x 0 2 is a λ-strongly convex function, Compute a stochastic subgradient g t for f (x t ) 4: g t = g(x t ; ξ) + 1 γ (x t − x 0 ) where g(x; ξ) ∈ ∂ F φ(x t ) such that E[ g(x; ξ) 2 ] ≤ G 2 , Update g 1:t = [g 1:t−1 , g(x t )], s t,i = g 1:t,i 2 5: Set H t = H 0 + diag(s t ) and ψ t (x) = 1 2 (x − x 1 ) ⊤ H t (x − x 1 ) 6: Let x t+1 = arg min x∈Ω ηx ⊤ 1 t t τ =1 g τ + 1 t ψ t (x) 7: end while 8: Output: x T = T t=1 x t /T (1 − β)γ 2 λ/(8ρβ + 4), then we have that E[f ( x T ) − f (x * )] ≤ (1 − β) x 0 − x * 2 2η(T + 1) + β(f (x 0 ) − f (x * )) (1 − β)(T + 1) + 2ηG 2 (2ρβ + 1) 1 − β + 4ρβ + 4 (1 − β) η γ 2 x 0 − x * 2 (13) where x T = T t=0 x t /(1 + T ) and x * ∈ arg min x∈R d f (x). Remark: It is notable that in the above result, we do not use the bounded domain assumption since we consider Ω = R d for the unified momentum methods in this subsection. The key to get rid of bounded domain assumption is by exploring the strong convexity of f (x) = φ(x) + 1 2γ x − x 0 2 . Theorem 3. Suppose Assumption 1 holds. By setting γ = 1/(2µ), w s = s α , α > 0, η s = (1 − β)γ/(96s(ρβ + 1)), T s ≥ 2304(ρβ + 1)s, then we have E[ ∇φ γ (x τ ) 2 ] ≤ 16µ∆ φ (α + 1) S + 1 + (βG 2 + 96G 2 (2ρβ + 1)(1 − β))(α + 1) 96(S + 1)(2ρβ + 1)(1 − β)α I(α<1) , where τ is randomly selected from {0, . . . , S} with probabilities p τ ∝ w τ +1 , τ = 0, . . . , S. Remark: The bound in the above theorem is in the same order as that in Theorem 2. The total iteration complexity for finding a solution x τ with E ∇φ γ (x τ ) 2 ≤ ǫ 2 is O(1/ǫ 4 ). 
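As an illustration of the stagewise SGD variant analyzed above (Algorithm 1 with SGD as the basic solver), here is a minimal NumPy sketch under the parameter choices of Theorem 2: gamma = 1/(2*mu), eta_s = c/s, T_s = 12*gamma*s/c, and weights w_s = s^alpha for the final non-uniform sampling. The toy weakly convex objective (a truncated square loss in the spirit of Ex. 2), the weak-convexity constant and the unconstrained domain are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weakly convex objective: phi(x) = E[ alpha * log(1 + (a^T x - b)^2 / alpha) ]
A = rng.standard_normal((1000, 5))
b = A @ np.ones(5) + 0.1 * rng.standard_normal(1000)
alpha = 1.0

def stoch_grad(x):
    i = rng.integers(len(b))
    r = A[i] @ x - b[i]
    return (2.0 * alpha * r / (alpha + r ** 2)) * A[i]   # gradient of the truncated loss on one sample

mu = 2.0                        # assumed weak-convexity constant (illustration only)
gamma = 1.0 / (2.0 * mu)        # gamma = 1/(2*mu) as in Theorem 2
c, alpha_w, S = 0.5, 1.0, 20    # free step-size constant, weight exponent, number of stages

def sgd_stage(x0, eta, T):
    """Inner SGD on f_s(x) = phi(x) + ||x - x0||^2 / (2*gamma); returns the averaged iterate."""
    x, avg = x0.copy(), np.zeros_like(x0)
    for _ in range(T):
        g = stoch_grad(x) + (x - x0) / gamma             # stochastic subgradient of f_s
        x = x - eta * g
        avg += x
    return avg / T

solutions = [np.zeros(5)]                                # x_0
for s in range(1, S + 1):
    eta_s = c / s
    T_s = int(np.ceil(12.0 * gamma * s / c))             # T_s proportional to s
    solutions.append(sgd_stage(solutions[-1], eta_s, T_s))

# Output: x_tau with tau sampled from {0,...,S} and p_tau proportional to w_{tau+1} = (tau+1)^alpha
w = np.array([(t + 1) ** alpha_w for t in range(S + 1)], dtype=float)
x_out = solutions[rng.choice(S + 1, p=w / w.sum())]
print("sampled solution:", x_out)
```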
Stagewise ADAGRAD In this subsection, we analyze stagewise ADAGRAD and establish its adaptive complexity. In particular, we consider the Meta algorithm that employs ADAGRAD in Algorithm 4. The key difference of stagewise ADAGRAD from stagewise SGD and stagewise SM is that the number of iterations T s at each stage is adaptive to the history of learning. It is this adaptiveness that makes the proposed stagewise ADAGRAD achieve adaptive convergence. It is worth noting that such adaptive scheme has been also considered in [5] for solving stochastic strongly convex problems. In contrast, we consider stochastic weakly convex problems. Similar to previous analysis of ADAGRAD [12,5], we assume g(x; ξ) ∞ ≤ G, ∀x ∈ Ω in this subsection. Note that this is stronger than Assumption 1-(A1). We formally state this assumption required in this subsection below. Assumption 3. g(x; ξ) ∞ ≤ G for any x ∈ Ω. The convergence analysis of stagewise ADAGRAD is build on the following lemma, which is attributed to [5]. E[f ( x T ) − f (x * )] ≤ 1 M η x 0 − x * 2 + η M ,(14) where x * = arg min x∈Ω f (x), g 1:t = (g(x 1 ), . . . , g(x t )) and g 1:t,i denotes the i-th row of g 1:t . The convergence property of stagewise ADAGRAD is described by following theorem. Algorithm 5 SADMM(f, x 0 , η, β, t) 1: Input: x 0 ∈ R d , a step size η, penalty parameter β, the number of iterations t and a domain Ω. 2: Initialize: x 1 = x 0 , y 1 = Ax 1 , λ 1 = 0 3: for τ = 1, . . . , t do 4: Update x τ +1 by (16) 5: Update y τ +1 by (17) 6: Update λ τ +1 by (18) 7: end for 8: Output: x t = t τ =1 x τ /tE[ ∇φ γ (x τ ) 2 ] ≤ 16µ∆ φ (α + 1) S + 1 + 4µ 2 c 2 (α + 1) (S + 1)α I(α<1) , whereĜ = G + γ −1 D, and g s 1:t,i denotes the cumulative stochastic gradient of the i-th coordinate at the s-th stage. Remark: It is obvious that the total number of iterations S s=1 T s is adaptive to the data. Next, let us present more discussion on the iteration complexity. Note that M s = O( √ s). By the boundness of stochastic gradient g 1:Ts,i ≤ O( √ T s ), therefore T s in the order of O(s) will satisfy the condition in Theorem 4. Thus in the worst case, the iteration complexity for finding E[ ∇φ γ (x τ ) 2 ] ≤ ǫ 2 is in the order of S s=1 O(s) ≤ O(1/ǫ 4 ). To show the potential advantage of adaptive step size as in the convex case, let us consider a good case when the cumulative growth of stochastic gradient is slow, e.g., assuming g s 1:Ts,i ≤ O(T s α ) with α < 1/2. Then T s = O(s 1/(2(1−α)) ) will work, and then the total number of iterations S s=1 T s ≤ S 1+1/(2(1−α)) ≤ O(1/ǫ 2+1/(1−α) ), which is better than O(1/ǫ 4 ). Finally, we remark that the bounded domain assumption could be removed similar to last subsection. Stagewise Stochastic ADMM for Solving Problems with Structured Regularizers In this subsection, we consider solving a regularized problem with a structured regularizer, i.e., min x∈Ω φ(x) := E[f (x; ξ)] + ψ(Ax),(15) where A ∈ R d×m , and ψ(·) : R m → R is some convex structured regularizer (e.g., generalized Lasso ψ(Ax) = Ax 1 ). We assume that φ(·) is µ-weakly convex. Although Stagewise SGD can be employed to solve the above problem, it is usually expected to generate a sequence of solutions that respect certain properties (e.g., sparsity) promoted by the regularizer. 
When E[f (x; ξ)] is convex, the problem is usually solved by stochastic ADMM shown in Algorithm 5 (assuming f (x) = E[f (x; ξ)] + ψ(Ax)), in which the following steps are alternatively executed: x τ +1 = arg min x∈Ω ∂f (x t , ξ τ ) ⊤ x + β 2 (Ax − y τ ) − 1 β λ τ 2 + x − x τ 2 C η ,(16)y τ +1 = arg min x∈Ω ψ(y) + β 2 (Ax τ +1 − y) − 1 β λ t 2 ,(17)λ τ +1 = λ τ − β(Ax τ +1 − y τ +1 ),(18) where β > 0 is the penalty parameter of ADMM, x 2 C = x ⊤ Cx, and C = αI − ηβA ⊤ A I with some appropriate α > 0. In order to employ SADMM for solving (15) with a weakly convex objective, we usef s (·; ξ) = f (·; ξ) + 1 2γ x − x s−1 2 to define f s (x) = E[f s (x; ξ)] + ψ(Ax) in the s-th call of SADMM in the Meta framework. A convergence upper bound of stochastic ADMM for solving min x∈Ω f (x) = E[f (x; ξ)] + 1 2γ x − x 0 2 + ψ(Ax),(19) is given in the following lemma. Lemma 4. For Algorithm 5, assume f (x) is a convex function and ψ(·) is a ρ-Lipschitz continuous convex function, g(x t ; ξ) ∈ ∂ F f (x t ; ξ t ) is used in the update, C = αI − ηβA ⊤ A I, and Assumption 2 holds. Then, E[f ( x t ) − f (x * )] ≤ α x 0 − x * 2 2 2ηt + β A 2 2 x 0 − x * 2 2 2t + ρ 2 2βt + ηĜ 2 2 + ρ A 2 D t . whereĜ = G + γ −1 D.E[ ∇φ γ (x τ ) 2 ] ≤ 16µ∆ φ (α + 1) S + 1 + C(α + 1) (S + 1)α I(α<1) , where τ is randomly selected from {0, . . . , S} with probabilities p τ ∝ w τ +1 , τ = 0, . . . , S, and C is some constant depending on c 1 , c 2 , ρ, G, D, A 2 . Remark: The above result can be easily proved. Therefore, the proof is omitted. Conclusion In this paper, we proposed a universal stagewise learning framework for solving non-convex problems, which employs well-known heuristics in practice that have not been well analyzed theoretically. Our results address shortcomings of existing theories by providing convergence on randomly selected averaged solutions with increasing sampling probabilities. Moreover, we established an adaptive convergence of a stochastic algorithm using data adaptive coordinate-wise step size of ADAGRAD, and exhibited its faster convergence than non-adaptive stepsize for slowly growing cumulative stochastic gradients similar to that in the convex case. Acknowledgement T. Yang are partially supported by National Science Foundation (IIS-1545995). Part of this work was done when Chen is interning at JD AI Research and Yang is visiting JD AI Research. A Proof of Theorem 2 Proof. Below, we use E s to denote expectation over randomness in the s-th stage given all history before s-th stage. Define z s = arg min x∈Ω f s (x) = prox γ(φ+δΩ) (x s−1 )(20) Then ∇φ γ (x s−1 ) = γ −1 (x s−1 − z s ). Then we have φ(x s ) ≥ φ(z s+1 ) + 1 2γ x s − z s+1 2 . Next, we apply Lemma 1 to each call of SGD in stagewise SGD, E[f s (x s ) − f s (z s )] ≤ z s − x s−1 2 2η s T s + η sĜ 2 2 Es , whereĜ 2 is the upper bound of E[ g(x; ξ) + γ −1 (x − x s−1 ) 2 ] , which exists and can be set to 2G 2 + 2γ −2 D 2 due to the Assumption 1-(A1) and the bounded assumption of the domain. Then E s φ(x s ) + 1 2γ x s − x s−1 2 ≤ f s (z s ) + E s ≤ f s (x s−1 ) + E s ≤ φ(x s−1 ) + E s On the other hand, we have that x s − x s−1 2 = x s − z s + z s − x s−1 2 = x s − z s 2 + z s − x s−1 2 + 2 x s − z s , z s − x s−1 ≥(1 − α −1 s ) x s − z s 2 + (1 − α s ) x s−1 − z s 2 where the inequality follows from the Young's inequality with 0 < α s < 1. 
Thus we have that E s (1 − α s ) 2γ x s−1 − z s 2 ≤E s φ(x s−1 ) − φ(x s ) + (α −1 s − 1) 2γ x s − z s 2 + E s ≤E φ(x s−1 ) − φ(x s ) + (α −1 s − 1) γ(γ −1 − µ) (f s (x s ) − f s (z s )) + E s ≤E φ(x s−1 ) − φ(x s ) + α −1 s − γµ (1 − γµ) E s(21) Combining the above inequalities, we have that (1 − α s )γ − γ 2 (α −1 s − µγ) (1 − µγ)η s T s E s [ ∇φ γ (x s−1 ) 2 ] ≤ E s 2∆ s + (α −1 s − µγ)η sĜ 2 (1 − µγ) Multiplying both sides by w s , we have that w s (1 − α s )γ − γ 2 (α −1 s − µγ) (1 − µγ)η s T s E s [ ∇φ γ (x s−1 ) 2 ] ≤ E s 2w s ∆ s + (α −1 s − µγ)w s η sĜ 2 (1 − µγ) By setting α s = 1/2 and γ = 1/(2µ), T s η s ≥ 12γ, we have 1 4 w s γE s [ ∇φ γ (x s−1 ) 2 ] ≤ E s [2w s ∆ s + 3w s η sĜ 2 ] By summing over s = 1, . . . , S + 1, we have w s ∆ s = S+1 s=1 w s (φ(x s−1 ) − φ(x s )) = S+1 s=1 (w s−1 φ(x s−1 ) − w s φ(x s )) + S+1 s=1 (w s − w s−1 )φ(x s−1 ) ≤ w 0 φ(x 0 ) − w S+1 φ(x S+1 ) + S+1 s=1 (w s − w s−1 )φ(x s−1 ) = S+1 s=1 (w s − w s−1 )(φ(x s−1 ) − φ(x S+1 )) ≤ ∆ φ S+1 s=1 (w s − w s−1 ) = ∆ φ w S+1 Then, E[ ∇φ γ (x τ ) 2 ] ≤ 16µ∆ φ w S+1 S+1 s=1 w s + 24µ S+1 s=1 w s η sĜ 2 S+1 s=1 w s The standard calculus tells that S s=1 s α ≥ S 0 x α dx = 1 α + 1 S α+1 S s=1 s α−1 ≤ SS α−1 = S α , ∀α ≥ 1, S s=1 s α−1 ≤ S 0 x α−1 dx = S α α , ∀0 < α < 1 Combining these facts, we have that E[ ∇φ γ (x τ ) 2 ] ≤      16µ∆ φ (α+1) S+1 + 24µĜ 2 (α+1) S+1 α ≥ 1 16µ∆ φ (α+1) S+1 + 24µĜ 2 (α+1) (S+1)α 0 < α < 1 In order to have E[ ∇φ γ (x τ ) 2 ] ≤ ǫ 2 , we can set S = O(1/ǫ 2 ). The total number of iterations is S s=1 T s ≤ S s=1 12γs ≤ 6γS(S + 1) = O(1/ǫ 4 ) B Proof of Theorem 3 Proof. According to the definition of z s in (20) and Lemma 2, we have that E s φ(x s ) + 1 2γ x s − x s−1 2 ≤ f s (z s ) + β(f s (x s−1 ) − f s (z s )) (1 − β)(T s + 1) + (1 − β) x s−1 − z s 2 2η s (T s + 1) + 2η s G 2 (2ρβ + 1) 1 − β + 1 24γ x s−1 − z s 2 Es ≤ φ(x s−1 ) + E s . Similar to the proof of Theorem 2, we have (1 − α s ) 2γ x s−1 − z s 2 ≤E s [φ(x s−1 ) − φ(x s )] + α −1 s − γµ (1 − γµ) E s(22) Rearranging above inequality, we have that (1 − α s )γ − γ 2 (α −1 s − µγ)(1 − β) (1 − µγ)η s (T s + 1) − α −1 s − γµ (1 − γµ) γ 24 ∇φ γ (x s−1 ) 2 ≤2E s [∆ s ] + 2(α −1 s − µγ) (1 − µγ) β(f s (x s−1 ) − f s (z s )) (1 − β)(T s + 1) + 2η sĜ 2 (2ρβ + 1) 1 − β The definition of f s gives that f s (x s−1 ) − f s (z s ) = φ(x s−1 ) − φ(z s ) − 1 2γ z s − x s−1 2 On the other hand, the µ-weakly convexity of φ gives that φ(z s ) ≥ φ(x s−1 ) + g(x s−1 ), z s − x s−1 − µ 2 z s − x s−1 2 , where g(x s−1 ) ∈ ∂ F φ(x s−1 ). Combing these two inequalities we have that f s (x s−1 ) − f s (z s ) ≤ g(x s−1 ), x s−1 − z s − µ 2 z s − x s−1 2 ≤ G 2 2µ + µ − µ 2 z s − x s−1 2 = G 2 2µ where the second inequality follows from Jensen's inequality for · and Young's inequality. Combining above inequalities and multiplying both side by w s , we have that w s (1 − α s )γ − γ 2 (α −1 s − µγ)(1 − β) (1 − µγ)η s (T s + 1) − α −1 s − γµ (1 − γµ) γ 24 ∇φ γ (x s−1 ) 2 ≤2w s E s [∆ s ] + 2w s (α −1 s − µγ) (1 − µγ) βG 2 2µ(1 − β)(T s + 1) + 2η s G 2 (2ρβ + 1) 1 − β(23) By setting α s = 1/2, η s (T s + 1) ≥ 24(1 − β)γ, we have that w s γ 4 ∇φ γ (x s−1 ) 2 ≤ 2w s E s [∆ s ] + w s η s βG 2 8(1 − β) 2 + 12w s η s G 2 (2ρβ + 1) 1 − β Summing over s = 1, . . . , S + 1 and rearranging, we have S+1 s=1 w s ∇φ γ (x s−1 ) 2 = E S+1 s=1 8 γ w s ∆ s + w s η s G 2 (β + 96(2ρβ + 1)(1 − β)) 2γ(1 − β) 2 Following similar analysis as in the proof of Theorem 2, we can finish the proof. C Proof of Theorem 4 Proof. 
Applying Lemma 3 with T s ≥ M s max{Ĝ +maxi g s 1:Ts,i 2 , d i=1 g s 1:Ts,i } M s > 0, and the fact that φ(x s−1 ) ≥ φ(z s ) + 1 2γ x s−1 − z s 2 in sth stage, we have that E s φ(x s ) + 1 2γ s x s − x s−1 2 ≤ f s (z s ) + 1 M s η s x s−1 − z s 2 + η s M s Es ≤ φ(x s ) + E s According to (22), we have that (1 − α s ) 2γ E s [ x s−1 − z s 2 ] ≤φ(x s−1 ) − φ(x s ) + (α −1 s − 1) 2γ x s − z s 2 + E s ≤φ(x s−1 ) − φ(x s ) + α −1 s − γµ (1 − γµ) 1 M s η s x s−1 − z s 2 + η s M s Rearranging above inequality then multiplying both side by w s , we have that By the definition of p s in the theorem, taking expectation of ∇φ γ (x τ ) 2 w.r.t. τ ∈ {0, . . . , S} we have that E[ ∇φ γ (x τ ) 2 ] =E 8 γ S+1 s=1 w s ∆ s S+1 i=1 w i + c 2 γ 2 S+1 s=1 s α−1 S+1 i=1 w i ≤ 8∆ φ (α + 1) γ(S + 1) + c 2 (α + 1) γ 2 (S + 1)α I(α<1) D Proof of Lemma 2 Proof. Following the analysis in [36], we directly have the following inequality, E[ x k+1 + p k+1 − x * 2 ] = = E[ x k + p k − x * 2 ] − 2η 1 − β E[(x k − x * ) ⊤ ∂f (x k )] − 2ηβ (1 − β) 2 E[(x k − x k−1 ) ⊤ ∂f (x k )] − 2ρη 2 β (1 − β) 2 E[g ⊤ k−1 ∂f (x k )] + η 1 − β 2 E[|g k 2 ] Taking the summation of objective gap in all iterations, we have Dividing by T on both sides and setting x = x * , following the inequality (3) and the convexity of f (x) we have f ( x) − f * ≤ 1 M η x 0 − x * 2 + η M + 1 T T t=1 ∆ t Let {F t } be the filtration associated with Algorithm 1 in the paper. Noticing that T is a random variable with respect to {F t }, we cannot get rid of the last term directly. Define the Sequence {X t } t∈N+ as X t = 1 t t i=1 ∆ i = 1 t t i=1 g i − E[g i ], x i − x *(25) where E[g i ] ∈ ∂f (x i ). Since E [g t+1 − E[g t+1 ]] = 0 and x t+1 = arg min x∈Ω ηx ⊤ 1 t t τ =1 g τ + 1 t ψ t (x), which is measurable with respect to g 1 , . . . , g t and x 1 , . . . , x t , it is easy to see {∆ t } t∈N is a martingale difference sequence with respect to {F t }, e.g. E[∆ t |F t−1 ] = 0. On the other hand, since g t 2 is upper bounded by G, following the statement of T in the theorem, T ≤ N = M 2 max{ (G+1) 2 4 , d 2 G 2 } < ∞ always holds. Then following Lemma 1 in [5] we have that E[X T ] = 0. Now taking the expectation we have that E[f ( x) − f * ] ≤ 1 M η x 0 − x * 2 + η M Then we finish the proof.
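As a complement to the stagewise ADAGRAD section above, the following sketch shows a coordinate-wise adaptive step size applied to the proximally regularized stage objective f_s. It uses the common diagonal AdaGrad step rather than the exact dual-averaging update of Algorithm 4, and the toy gradient oracle and constants are assumptions for illustration only.

```python
import numpy as np

def adagrad_stage(stoch_grad, x0, gamma, eta, T, h0=1e-6):
    """One stage of coordinate-wise AdaGrad on f_s(x) = phi(x) + ||x - x0||^2 / (2*gamma).

    Sketch only: uses the simplified diagonal step
        x_{t+1} = x_t - eta * g_t / (h0 + sqrt(cumulative squared gradients)),
    not the dual-averaging update of Algorithm 4. Returns the averaged iterate.
    """
    x = x0.copy()
    sq_sum = np.zeros_like(x0)          # running sum of squared coordinate gradients
    avg = np.zeros_like(x0)
    for _ in range(T):
        g = stoch_grad(x) + (x - x0) / gamma
        sq_sum += g ** 2
        x = x - eta * g / (h0 + np.sqrt(sq_sum))   # per-coordinate step size
        avg += x
    return avg / T

# Usage on a toy problem (assumed, for illustration only):
rng = np.random.default_rng(0)
toy_grad = lambda x: 2 * x / (1 + x ** 2) ** 2 + 0.1 * rng.standard_normal(x.shape)
x = adagrad_stage(toy_grad, x0=rng.standard_normal(8), gamma=0.25, eta=0.1, T=500)
print("stage output:", x)
```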
8,761
1808.05492
2887894061
When neural networks process images which do not resemble the distribution seen during training, so-called out-of-distribution images, they often make wrong predictions, and do so too confidently. The capability to detect out-of-distribution images is therefore crucial for many real-world applications. We divide out-of-distribution detection into novelty detection (images of classes which are not in the training set but are related to those) and anomaly detection (images of classes which are unrelated to the training set). By related we mean that they contain the same type of objects, like digits in MNIST and SVHN. Most existing work has focused on anomaly detection, and has addressed this problem with networks trained with the cross-entropy loss. Differently from them, we propose to use metric learning, which avoids a drawback inherent to cross-entropy methods: the softmax layer forces the network to divide its prediction power over the learned classes. We perform extensive experiments and evaluate both novelty and anomaly detection, including a relevant application such as traffic sign recognition, obtaining comparable or better results than previous works.
Anomaly and novelty detection. Also known as out-of-distribution detection, it aims at identifying inputs that are completely different from or unknown to the original data distribution used for training @cite_14 . In @cite_18 , novelty detection is performed by learning a distance in an embedding. The method proposes a Kernel Null Foley-Sammon transform that projects all the samples of each in-distribution class onto a single point in a certain space. Consequently, novelty detection can be performed by thresholding the distance of a test sample to the nearest of the collapsed class representations. However, they employ handcrafted features, thus optimizing only the transform parameters and not the representation, as is done in the presently dominating deep learning paradigm.
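To illustrate the distance-thresholding idea described above (scoring a test sample by its distance to the nearest collapsed class representation), here is a minimal sketch that simply uses class centroids in a given embedding space; it is not the Kernel Null Foley-Sammon transform itself, and the embeddings, labels and threshold are placeholders.

```python
import numpy as np

def novelty_scores(train_emb, train_labels, test_emb):
    """Score each test embedding by its distance to the nearest class centroid.

    Sketch of the generic distance-thresholding idea: a larger score means more novel.
    A null-space method would first learn a projection collapsing each class to a point;
    here we simply use class means in a fixed embedding.
    """
    classes = np.unique(train_labels)
    centroids = np.stack([train_emb[train_labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_emb[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.min(axis=1)

# Usage with random placeholder embeddings (illustration only):
rng = np.random.default_rng(0)
train_emb = rng.standard_normal((100, 32)); train_labels = rng.integers(0, 3, 100)
test_emb = rng.standard_normal((10, 32))
is_novel = novelty_scores(train_emb, train_labels, test_emb) > 5.0   # threshold is a placeholder
```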
{ "abstract": [ "Detecting samples from previously unknown classes is a crucial task in object recognition, especially when dealing with real-world applications where the closed-world assumption does not hold. We present how to apply a null space method for novelty detection, which maps all training samples of one class to a single point. Beside the possibility of modeling a single class, we are able to treat multiple known classes jointly and to detect novelties for a set of classes with a single model. In contrast to modeling the support of each known class individually, our approach makes use of a projection in a joint subspace where training samples of all known classes have zero intra-class variance. This subspace is called the null space of the training data. To decide about novelty of a test sample, our null space approach allows for solely relying on a distance measure instead of performing density estimation directly. Therefore, we derive a simple yet powerful method for multi-class novelty detection, an important problem not studied sufficiently so far. Our novelty detection approach is assessed in comprehensive multi-class experiments using the publicly available datasets Caltech-256 and Image Net. The analysis reveals that our null space approach is perfectly suited for multi-class novelty detection since it outperforms all other methods.", "Novelty detection is the task of classifying test data that differ in some respect from the data that are available during training. This may be seen as ''one-class classification'', in which a model is constructed to describe ''normal'' training data. The novelty detection approach is typically used when the quantity of available ''abnormal'' data is insufficient to construct explicit models for non-normal classes. Application includes inference in datasets from critical systems, where the quantity of available normal data is very large, such that ''normality'' may be accurately modelled. In this review we aim to provide an updated and structured investigation of novelty detection research papers that have appeared in the machine learning literature during the last decade." ], "cite_N": [ "@cite_18", "@cite_14" ], "mid": [ "2081604642", "2115627867" ] }
Metric Learning for Novelty and Anomaly Detection
Deep neural networks have obtained excellent performance for many applications. However, one of the known shortcomings of these systems is that they can be overly confident when presented with images (and classes) which were not present in the training set. Therefore, a desirable property of these systems would be the capacity to not produce an answer if an input sample belongs to an unknown class, that is, a class for which it has not been trained. The field of research which is dedicated to this goal is called out-of-distribution detection [10,17,18]. Performing out-of-distribution detection is important not only to avoid classification errors but also as the first step towards lifelong learning systems [3]. Such systems would detect out-of-distribution samples in order to later update the model accordingly [13,20]. The problem of out-of-distribution detection has also been called one-class classification, novelty and anomaly detection [23]. More recently, associated to deep neural network classifiers, some works refer to it as open-set recognition [1]. In this paper, we distinguish two cases of out-of-distribution which we believe are quite different: we propose to term as novelty an image from a class different from those contained in a dataset from which to train, but that bears some resemblance to them, for instance because it shows the same kind of object from untrained points of view. This is a very important problem in many computer vision applications. For example, imagine a system that classifies traffic signs on-board a car and takes automatic decisions accordingly. It can happen that it finds a class of local traffic signs which was not included in the training set, and this must be detected to avoid taking wrong decisions. We reserve the word anomaly for completely unrelated samples, like different type of objects, images from another unrelated dataset, or background patches in the case of traffic sign classification. This is also relevant from the point of view of commercial applications. In fact, most previous works focus on anomaly detection. Novelty detection remains rather unexplored. To the best of our knowledge only [26] and [18] perform some intra-dataset out-of-distribution detection experiments. The three previous works closest to ours [10,17,18], revolve around one idea: given a discriminative neural network model, use the output probabilities to take the decision of seen/unseen class. These networks are optimized to distinguish between the classes present in the training set, and are not required to explicitly model the marginal data distribution. As a consequence, at testing time the system cannot assess the probability of the presented data, complicating the assessment of novelty cases. Here we explore a completely different approach: to learn an embedding where one can use Euclidean distance as a measure of "out-of-distributioness". We propose a loss that learns an embedding where samples from the same in-distribution class form clusters, well separated from the space of other in-distribution classes and also from out-of-distribution samples. The contributions to the problem of out-of-distribution detection presented in this paper are the following. First, the use of metric learning for out-of-distribution detection, instead of doing it on the basis of the cross-entropy loss and corresponding softmax scores. Second, we distinguish between novelty and anomaly detection and show that research should focus on the more challenging problem of novelty detection. 
Third, we obtain comparable or better results than state-of-the-art in both anomaly and novelty detection. Last, in addition to the experiments with benchmark datasets in order to compare with previous works, we address also a real-world classification problem, traffic sign recognition, for which we obtain good detection and accuracy results. Metric Learning for Out-of-Distribution Most recent works on out-of-distribution detection are based on supervisely trained neural networks which optimize the cross-entropy loss. In these cases the network output has a direct correspondence with the solution of the task, namely a probability for each class. However, the representation of the output vector is forced to always sum up to one. This means that when the network is shown an input which is not part of the training distribution, it will still give probabilities to the nearest classes so that they sum up to one. This phenomena has led to the known problem of neural networks being too overconfident about content that they have never seen [10]. Several works have focused on improving the accuracy of the confidence estimate of methods based on the cross entropy; adapting them in such a way that they would yield lower confidences for out-of-distribution [10,17,18]. We hypothesize that the problem of the overconfident network predictions is inherent to the used cross-entropy, and therefore propose to study another class of network objectives, namely those used for metric learning. In metric learning methods, we minimize an objective which encourages images with the same label to be close and images with different labels to be at least some margin apart in an embedding space. These networks do not apply a softmax layer, and therefore are not forced to divide images which are out-of-distribution over the known classes. Metric Learning For applications such as image retrieval, images are represented by an embedding in some feature space. Images can be ordered (or classified) according to the distance to other images in that embedding space. It has been shown that using metric learning methods to improve the embeddings could significantly improve their performance [8]. The theory of metric learning was extended to deep neural networks by Chopra et al. [4]. They proposed to pass images through two parallel network branches which share the weights (also called a Siamese network). A loss considers both embeddings, and adapts the embedding in such a way that similar classes are close and dissimilar classes are far in that embedding space. Traditionally these networks have been trained with contrastive loss [9], which is formulated as: L(x 1 , x 2 , y ; W ) = 1 2 (1 − y) D 2 w + 1 2 y (max (0, m − D w )) 2 ,(1) where D w = || f W (x 1 ) − f W (x 2 )|| 2 is the distance between the embeddings of images x 1 and x 2 computed by network f W with weights W . The label y = 0 indicates that the two images are from the same class, and y = 1 is used for images from different classes. The loss therefore minimizes the distance between images of the same class, and increases the distance of images of different classes until this distance surpasses the margin m. Several other losses have been proposed for Siamese networks [11,25,28,31,32] but in this paper we will evaluate results with the contrastive loss to provide a simple baseline on which to improve. Out-of-Distribution Mining (ODM) In the previous section, we considered that during training only examples of in-distribution data are provided. 
However, some methods consider the availability of some out-of-distribution data during training [17]. This is often a realistic assumption since it is relatively easy to obtain data from other datasets or create out-of-distribution examples, such as samples generated with Gaussian noise. However, it has to be noted that the out-of-distribution data is used unlabeled, and is of a different distribution from the out-of-distribution used at testing. The objective is to help the network be less confident about what it does not know. Therefore, noise or even unlabeled data can be used to strengthen the knowledge boundaries of the network. We propose to adapt the contrastive loss to incorporate the out-of-distribution data: L(x 1 , x 2 , y ; W ) = 1 2 (1 − y) zD 2 w + 1 2 yz (max (0, m − D w )) 2 ,(2) where we have introduced a label z which is zero when both images are from the out-ofdistribution and one otherwise. This loss is similar to Eq. 1, but with the difference that in case of a pair of images where one is an out-of-distribution image (z = 1, y = 1) they are encouraged to be at least m distance apart. Note that we do not enforce the out-of-distribution images to be close, since when z = 0 the pair does not contribute to the loss. It is important to make sure that there are no pairs of out-of-distribution samples so that they are not treated as a single new class and forced to be grouped into a single cluster. In practice, we have not implemented a two-branches Siamese network but followed recent works [19,30] which devise a more efficient approach to minimize losses traditionally computed with Siamese networks. The idea is to sample a minibatch of images which we forward through a single branch until the embedding layer. We then sample pairs from them in the loss layer and backpropagate the gradient. This allows the network to be defined with only one copy of the weights instead of having two branches with shared weights. At the same time, computing the pairs after the embedding also allows to use any subgroup of possible pairs among all the images from the minibatch. When computing the pairs we make sure that pairs of out-of-distribution samples are not used. As a result z will never be 0 and we can in practice directly apply Eq. 1 instead of Eq. 2. Anomaly and Novelty detection In this paper we distinguish between two categories of out-of-distribution data: Novelty: samples that share some common space with the trained distribution, which are usually concepts or classes which the network could include when expanding its knowledge. If you train a network specialized in different dog breeds, an example would be a new dog breed that was not in the training set. Furthermore, if the classes are more complex, some novelty out-of-distribution could be new viewpoints or modifications of an existing learned class. Anomaly: samples that are not related with the trained distribution. In this category we could include background images, Gaussian noise, or unrelated classes to the trained distribution (i.e. SVHN would be a meaningful anomaly for CIFAR-10). Since anomalies are further from the in-distribution than novelties these are expected to be easier to detect. To further illustrate the difference between novelties and anomalies consider the following experiment. We train a LeNet on the classes 2, 6 and 7 from the MNIST dataset [16] under the same setup for both cross-entropy (CE) and contrastive (ML) losses. 
We also train it with our proposed method which introduces out-of-distribution mining during training (ODM). We use classes 0, 3, 4, and 8 as those seen out-of-distribution samples during training. Then, we visualize the embeddings for different out-of-distribution cases from closer to further resemblance to the train set : 1) similar numbers 5, 9 and 1 as novelty, 2) SVHN [22] and CIFAR-10 [14] as anomalies with a meaning, and 3) the simpler Gaussian noise anomalies. In Figure 1 we show the 3-dimensional output embedding spaces for CE, ML and ODM in rows 1, 2 and 3 respectively. As expected, the CE space is bounded inside the shown triangle, since the three dimensions of the output (the number of classes) have to always sum up to 1. For SVHN, CE correctly assigns low confidence for all classes. However, for CIFAR-10, Gaussian noise and Novelty it increasingly is more confident about the probability of an out-of-distribution image to be classified as an in-distribution one. In the case of ML, all anomalies seem to be more separated from the in-distributions for each class, and only the Novelty is still too close to the cluster centers. With the introduction of out-of-distribution samples during training, ODM shows how out-of-distribution images are kept away from the in-distribution, allowing the network to be confident about what it is capable of classifying and what not. We provide quantitative performance results for this experiment in the Supplementary Material. In conclusion, this experiment shows that there is a difference between novel and anomaly out-of-distribution samples for both cross-entropy and metric learning approaches, stressing that those have to be approached differently. Furthermore, the overconfidence of the cross-entropy methods is more clear on novelty detection cases, and among the anomaly cases, the Gaussian noise seems to be the one with more overconfident cases. In those cases, a metric learning approach presents more benefits when doing out-of-distribution detection. It allows for the output embedding space to be more representative of the learned classes around the class centers, and naturally has the ability to give low scores to unseen data. Finally, when some out-of-distribution samples are shown during training, the network is more capable of adapting the embedding space to be more separable against anomaly data. Results To assess the performance of the proposed method, we first compare with existing stateof-the-art out-of-distribution detection methods on SVHN [22] and CIFAR-10 [14] datasets trained on VGGnet [29] and evaluated with the metrics provided in [17]. Furthermore, as a more application-based benchmark, we propose to compare cross-entropy based strategies and metric learning strategies on the Tsinghua dataset [35] of traffic signs. In this second set of experiments we use our own implementation of the metrics defined in [18]. More about the metrics used can be found in the Supplementary Material. 1 Comparison with state-of-the-art We compare our method with two very recent state-of-the-art methods. One of them uses a confidence classifier and an adversarial generator (CC-AG) [17] and like ours uses outof-distribution images during training. The second method is ODIN [18] which does not consider out-of-distribution images during training. In [17] they compare CC-AG with ODIN [18], and show that they can perform much better in the novelty case but similar for the anomaly cases. 
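For concreteness, the following is a minimal PyTorch sketch of the pair-based contrastive loss of Eq. (1) combined with the out-of-distribution masking of Eq. (2), following the single-branch minibatch pairing described in Section 2.2. Function and variable names are illustrative, and practical details such as the ratio of (in-dist, out-dist) pairs are omitted; it is a sketch rather than the exact training code.

```python
import torch

def odm_contrastive_loss(emb, labels, is_out, margin=10.0):
    """Pairwise contrastive loss with out-of-distribution masking (sketch of Eqs. 1-2).

    emb:    (N, d) embeddings from a single forward pass over the minibatch
    labels: (N,) class ids; the entries of out-of-distribution samples are ignored
    is_out: (N,) bool, True for out-of-distribution samples
    """
    d = torch.cdist(emb, emb)                              # pairwise Euclidean distances
    same = labels[:, None].eq(labels[None, :])             # y = 0 pairs (same class)
    in_in = ~is_out[:, None] & ~is_out[None, :]            # both in-distribution
    out_out = is_out[:, None] & is_out[None, :]            # both out-of-distribution: z = 0, dropped
    not_self = ~torch.eye(len(emb), dtype=torch.bool, device=emb.device)
    valid = not_self & ~out_out

    pull = same & in_in & valid                            # attract: same in-distribution class
    push = (~same | (is_out[:, None] ^ is_out[None, :])) & valid   # repel: different class or mixed pair

    loss_pull = 0.5 * d[pull].pow(2).sum()
    loss_push = 0.5 * torch.clamp(margin - d[push], min=0).pow(2).sum()
    n_pairs = (pull.sum() + push.sum()).clamp(min=1)
    return (loss_pull + loss_push) / n_pairs

# Usage with placeholder embeddings (illustration only):
emb = torch.randn(16, 32)
labels = torch.randint(0, 3, (16,))
is_out = torch.zeros(16, dtype=torch.bool); is_out[12:] = True     # last 4 samples are out-of-distribution
loss = odm_contrastive_loss(emb, labels, is_out)
```

In practice the embedding network (e.g., the VGGnet used in the experiments) produces `emb`, and the loss is backpropagated through that single branch, which avoids duplicating weights as in a two-branch Siamese network.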
We train each SVHN and CIFAR-10 as the in-distribution datasets while using the other dataset as the seen out-distribution during training. We train on VGGnet, just like [17], with a contrastive loss of margin 10 and a 25% of (in-dist, out-dist) pairs every two batches. Following the experiments of [17], we test the resulting networks on the in-distribution test set for classification, and TinyImageNet [6], LSUN [33] and Gaussian noise for out-of-distribution detection. For evaluation we use the proposed metrics from their implementation, namely: true negative rate (TNR) when true positive rate (TPR) is at 95%, detection accuracy, area under the receiver operating characteristic curve (AUROC) and both area under the precisionrecall curve for in-distribution (AUPR-in) and out-distribution (AUPR-out). Table 1 shows the results. For SVHN as the in-distribution results are as expected, with ODIN having lower results due to not using any out-of-distribution during training, and both CC-AG and ODM having near perfect performance. In the case of CIFAR-10 being the indistribution, the same pattern is repeated for the seen distribution from SVHN. However, for the unseen out-distributions, CC-AG achieves the lower performance on both TinyImageNet and LSUN datasets, and ODIN the lower for Gaussian noise. Although not always achieving the best performance, ODM is able to compete with the best cases, and is never the worse performer. Gaussian noise seems to be the most difficult case on CIFAR-10, which is a more complex dataset than SVHN. For ODIN, as it is only based on cross-entropy, it becomes to overconfident. In the case of CC-AG and ODM, the low results might be related to Gaussian noise being too different from the out-distribution seen during training. Finally, it is important to note that metric learning has a lower classification accuracy of the in-distribution. This has already been observed in [12], where features learned by classification networks with typical softmax layers are compared with metric learning based features, with regard to several benchmark datasets. For good classification results our metric learning network should be combined with those of a network trained with cross-entropy. One could also consider a network with two heads, where after some initial shared layers a cross-entropy branch and a metric learning branch are trained in a multi-task setting. Tsinghua traffic sign dataset We evaluate our method on a real application, i.e. traffic sign recognition in the presence of unseen traffic signs (novelty) and not-a-traffic-sign detection (anomaly). We compare our proposed method ODM against ODIN [18], as a cross-entropy based method, on the Tsinghua dataset [35]. We divide traffic sign classes into three disjoint partitions : the indistribution classes, seen out-of-distribution images used for training, and unseen out-ofdistribution images used for testing on out-of-distribution detection. Since Tsinghua contains some very similar traffic sign classes which would rarely be learned without each other (i.e. all speed limits, all turning arrows, ...), we group those that are too similar in order to build a more reasonable and natural split than just a random one (See Supplementary Material for more on the usual random splits). For the same reason, we also discard classes with less than 10 images as they introduce errors. Therefore, we generate a random split which applies by the mentioned restrictions (see Fig. 
2), by taking a 50-20-30% split of the classes for the in-distribution, seen out-distribution and unseen out-distribution respectively. Regarding anomalies, we consider Gaussian noise, but also background patches from the same Tsinghua dataset images. Those patches are generated randomly from the central area of the original full frames to avoid an unbalanced ratio of ground and sky images, which can Table 2: Comparison between ODIN and our proposed learning strategies on a WRN-28-10 architecture, when using novelty, anomaly (background patches and Gaussian noise) as seen out-of-distribution data as well as not seen out-of-distribution. Method In be semantically richer and more challenging. In a real traffic sign detector application, where detected possible traffic signs are fed to a classifier, this kind of anomalies are more realistic and account for possible detection errors more than Gaussian noise. The global performance of the system can be improved by avoiding that those anomalies reach the classifier and produce an overconfident error. For this experiment, we learn a 32-dimensional embedding space, training a WRN-28-10 model [34] with an Adam optimizer at learning rate 0.0001 for 10,000 steps. The same training parameters are used for ODIN since they provided the best combination on the validation set. Table 2 shows the results of the comparison between ODIN, ML and ODM for both seen novelty and anomaly cases. Note that our implementation of the Detection Error metric is fixed to use the FPR at a TPR of 95%, making a value of 2.50 the one of a perfect detector (see Supplementary Material). In terms of in-distribution classification accuracy, both methods are equivalent. However, the comparison of plain metric learning (Ours-ML) with ODIN shows that learning an embedding can be more suitable for out-of-distribution detection of both novelty and anomalies. Introducing out-distribution samples during training slightly improves all cases. Using anomalies as seen out-of-distribution during training helps the detection of the same kind of anomaly as expected since anomalies will be forced to be further away from the indistribution in the embedding space. However, in some cases, it can damage the detection of novelty, which would not be guaranteed to be pushed away from the learned classes. In this paper, we propose a metric learning approach to improve out-of-distribution detection which performs comparable or better than the state-of-the-art. We show that metric learning provides a better output embedding space to detect data outside the learned distribution than cross-entropy softmax based models. This opens an opportunity to further research on how this embedding space should be learned, with restrictions that could further improve the field. The presented results suggest that out-of-distribution data might not all be seen as a single type of anomaly, but instead a continuous representation between novelty and anomaly data. In that spectrum, anomaly detection is the easier task, giving more focus at the difficulty of novelty detection. Finally, we also propose a new benchmark for out-of-distribution detection on the Tsinghua dataset, as a more realistic scenario for novelty detection. Supplementary Material Metric Learning for Novelty and Anomaly Detection A Out-of-Distribution detection metrics In out-of-distribution detection, comparing different detector approaches cannot be done by measuring only accuracy. 
The question we want to answer is if a given test sample is from a different distribution than that of the training data. The detector will be using some information from the classifier or embedding space, but the prediction is whether that processed sample is part of the in-distribution or the out-distribution. To measure that, we adopt the metrics proposed in [18]: • FPR at 95% TPR is the corresponding False Positive Rate (FPR=FP/(FP+TN)) when the True Positive Rate (TPR=TP/(TP+FN)) is at 95%. It can be interpreted as the misclassification probability of a negative (out-distribution) sample to be predicted as a positive (in-distribution) sample. • Detection Error measures the probability of misclassifying a sample when the TPR is at 95%. Assuming that a sample has equal probability of being positive or negative in the test, it is defined as 0.5(1 − TPR) + 0.5FPR. where TP, FP, TN, FN correspond to true positives, false positives, true negatives and false negatives respectively. Those two metrics were also changed to TNR at 95% TPR and Detection Accuracy in [17], which can be calculated by doing 1 − x from the two metrics above explained respectively. We use the latter metrics only when comparing to other stateof-the-art methods. This is also done because the implementation in both [17,18] allows for using a TPR which is not at 95% in some cases, meaning that the Detection Error can go below 2.5 since TPR is not fixed to 0.95. In order to avoid the biases between the likelihood of an in-distribution sample to being more frequent than an out-distribution one, we need threshold independent metrics that measure the trade-off between false negatives and false positives. We adopt the following performance metrics proposed in [10]: • AUROC is the Area Under the Receiver Operating Characteristic proposed in [5]. It measures the relation between between TPR and FPR interpreted as the probability of a positive sample being assigned a higher score than a negative sample. • AUPR is the Area Under the Precision-Recall curve proposed in [21]. It measures the relationship between precision (TP/(TP+FP)) and recall (TP/(TP+FN)) and is more robust when positive and negative classes have different base rates. For this metric we provide both AUPR-in and AUPR-out when treating in-distribution and outdistribution samples as positive, respectively. B Quantitative results of the MNIST experiment In this section we present the quantitative results of the comparison on the MNIST dataset. In this case we allowed a 5-dimensional embedding space for ML so the representation is rich enough to make the discrimination between in-dist and out-dist. For CE, as it is fixed to the number of classes, the embedding space is 3-dimensional. In Table 3 we see that ML performs a better than CE on all cases. ODM almost solves the novelty problem while keeping a similar performance on anomalies as ML. It is noticeable that CE struggles a bit more with Gaussian noise than the other anomalies. In this case, CE still produces highly confident predictions for some of the noise images. C Experimental results on additional Tsinghua splits Alternatively to the Tsinghua split generated with the restrictions introduced in Section 4.2, we also perform the comparison in a set of 10 random splits without applying any restriction to the partition classes. We still discard the classes with less than 10 images per class. Table 4 shows the average performance for this set of splits with their respective standard deviation. 
Since the split of the classes is random, this leads to highly similar or mirrored classes to be separated into in-distribution and out-distribution, creating situations that are very difficult to predict correctly. For instance, detecting that a turn-left traffic sign is part of the in-distribution while the turn-right traffic sign is part of the out-distribution, is very difficult in many cases. Therefore, the results from the random splits have a much lower performance, specially for the novelty case. When comparing the metric learning based methods, ODM improves over ML for the test set that has been seen as out-distribution during training. In general, using novelty data as out-distribution makes an improvement over said test set, as well as for background and noise. However, when using background images to push the out-of-distribution further from the in-distribution class clusters in the embedding space, novelty is almost unaffected. The same happens when noise is used as out-distribution during training. This could be explained by those cases improving the embedding space for data that is initially not so far away from Table 4: Comparison between ODIN and our proposed learning strategies on a WRN-28-10 architecture, when using novelty, anomaly (background patches and Gaussian noise) as seen out-of-distribution data as well as not seen out-of-distribution. The experiments are performed on a set of 10 random splits and the metrics provided are the mean of the metrics on the individual splits ± its standard deviation. Method In the in-distribution class clusters. This would change the embedding space to push further the anomalies, but would leave the novelty classes, originally much closer to the clusters, almost at the same location. When introducing out-of-distribution samples, the behaviour on the random splits is the same as for the restricted splits: while introducing novelty helps the detection on all cases, introducing anomaly helps the detection of the same kind of anomaly. Figure 3 shows the embeddings for ODM (with novelty as seen out-of-distribution) and ML after applying PCA. When using ML, the novelties are not forced to be pushed away from the in-distribution clusters so they share the embedding space in between those same indistribution clusters. In the case of ODM, the out-of-distribution clusters are more clearly separated from the in-distribution ones. Figure 3: Embedding spaces after PCA for ODM (left) and ML (right) tested for in-dist (blue shaded) and out-dist (yellow shaded). Results are for TSinghua (first row), background patches (second row) and Gaussian noise (third row). Best viewed in color.
4,302
1808.05492
2887894061
When neural networks process images which do not resemble the distribution seen during training, so-called out-of-distribution images, they often make wrong predictions, and do so too confidently. The capability to detect out-of-distribution images is therefore crucial for many real-world applications. We divide out-of-distribution detection into novelty detection (images of classes which are not in the training set but are related to those) and anomaly detection (images of classes which are unrelated to the training set). By related we mean that they contain the same type of objects, like digits in MNIST and SVHN. Most existing work has focused on anomaly detection, and has addressed this problem with networks trained using the cross-entropy loss. Differently from them, we propose to use metric learning, which does not have the drawback of the softmax layer (inherent to cross-entropy methods) that forces the network to divide its prediction power over the learned classes. We perform extensive experiments and evaluate both novelty and anomaly detection, including a relevant application such as traffic sign recognition, obtaining comparable or better results than previous works.
Metric Learning. Several computer vision tasks such as retrieval, matching, verification, and even multi-class classification share the need to measure the similarity between pairs of images. Deriving such a measure from data samples is known as metric learning @cite_31. Two often-cited seminal works on learning such a measure with neural networks are @cite_11 @cite_6, where the Siamese architecture was proposed for this purpose. Differently from classification networks, the goal is not to learn a representation amenable to classification, but one that measures how similar two instances are in terms of the Euclidean distance. Another popular architecture is the triplet network @cite_27. For both of them, many authors have realized that mining the samples of the training set in order to find hard or challenging pairs or triplets is important in order to converge faster or to better minima @cite_32 @cite_9 @cite_23. Like them, we have also resorted to a mining strategy in order to obtain good results in the task of out-of-distribution detection.
{ "abstract": [ "Learning the distance metric between pairs of examples is of great importance for learning and visual recognition. With the remarkable success from the state of the art convolutional neural networks, recent works have shown promising results on discriminatively training the networks to learn semantic feature embeddings where similar examples are mapped close to each other and dissimilar examples are mapped farther apart. In this paper, we describe an algorithm for taking full advantage of the training batches in the neural network training by lifting the vector of pairwise distances within the batch to the matrix of pairwise distances. This step enables the algorithm to learn the state of the art feature embedding by optimizing a novel structured prediction objective on the lifted problem. Additionally, we collected Online Products dataset: 120k images of 23k classes of online products for metric learning. Our experiments on the CUB-200-2011, CARS196, and Online Products datasets demonstrate significant improvement over existing deep feature embedding methods on all experimented embedding sizes with the GoogLeNet network.", "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.", "Dimensionality reduction involves mapping a set of high dimensional input points onto a low dimensional manifold so that 'similar\" points in input space are mapped to nearby points on the manifold. We present a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold. The learning relies solely on neighborhood relationships and does not require any distancemeasure in the input space. The method can learn mappings that are invariant to certain transformations of the inputs, as is demonstrated with a number of experiments. Comparisons are made to other techniques, in particular LLE.", "", "Deep metric learning has gained much popularity in recent years, following the success of deep learning. However, existing frameworks of deep metric learning based on contrastive loss and triplet loss often suffer from slow convergence, partially because they employ only one negative example while not interacting with the other negative classes in each update. In this paper, we propose to address this problem with a new metric learning objective called multi-class N-pair loss. The proposed objective function firstly generalizes triplet loss by allowing joint comparison among more than one negative examples - more specifically, N-1 negative examples - and secondly reduces the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy using only N pairs of examples, instead of (N+1) x N. 
We demonstrate the superiority of our proposed loss to the triplet loss as well as other competing loss functions for a variety of tasks on several visual recognition benchmark, including fine-grained object recognition and verification, image clustering and retrieval, and face verification and identification.", "The metric learning problem is concerned with learning a distance function tuned to a particular task, and has been shown to be useful when used in conjunction with nearest-neighbor methods and other techniques that rely on distances or similarities. This survey presents an overview of existing research in metric learning, including recent progress on scaling to high-dimensional feature spaces and to data sets with an extremely large number of data points. A goal of the survey is to present as unified as possible a framework under which existing research on metric learning can be cast. The first part of the survey focuses on linear metric learning approaches, mainly concentrating on the class of Mahalanobis distance learning methods. We then discuss nonlinear metric learning approaches, focusing on the connections between the nonlinear and linear approaches. Finally, we discuss extensions of metric learning, as well as applications to a variety of problems in computer vision, text analysis, program analysis, and multimedia. Full text available at: http: dx.doi.org 10.1561 2200000019", "" ], "cite_N": [ "@cite_9", "@cite_32", "@cite_6", "@cite_27", "@cite_23", "@cite_31", "@cite_11" ], "mid": [ "2176040302", "2096733369", "2138621090", "", "2555897561", "2121949863", "" ] }
Metric Learning for Novelty and Anomaly Detection
Deep neural networks have obtained excellent performance for many applications. However, one of the known shortcomings of these systems is that they can be overly confident when presented with images (and classes) which were not present in the training set. Therefore, a desirable property of these systems would be the capacity to not produce an answer if an input sample belongs to an unknown class, that is, a class for which it has not been trained. The field of research which is dedicated to this goal is called out-of-distribution detection [10,17,18]. Performing out-of-distribution detection is important not only to avoid classification errors but also as the first step towards lifelong learning systems [3]. Such systems would detect out-of-distribution samples in order to later update the model accordingly [13,20]. The problem of out-of-distribution detection has also been called one-class classification, novelty and anomaly detection [23]. More recently, associated to deep neural network classifiers, some works refer to it as open-set recognition [1]. In this paper, we distinguish two cases of out-of-distribution which we believe are quite different: we propose to term as novelty an image from a class different from those contained in a dataset from which to train, but that bears some resemblance to them, for instance because it shows the same kind of object from untrained points of view. This is a very important problem in many computer vision applications. For example, imagine a system that classifies traffic signs on-board a car and takes automatic decisions accordingly. It can happen that it finds a class of local traffic signs which was not included in the training set, and this must be detected to avoid taking wrong decisions. We reserve the word anomaly for completely unrelated samples, like different type of objects, images from another unrelated dataset, or background patches in the case of traffic sign classification. This is also relevant from the point of view of commercial applications. In fact, most previous works focus on anomaly detection. Novelty detection remains rather unexplored. To the best of our knowledge only [26] and [18] perform some intra-dataset out-of-distribution detection experiments. The three previous works closest to ours [10,17,18], revolve around one idea: given a discriminative neural network model, use the output probabilities to take the decision of seen/unseen class. These networks are optimized to distinguish between the classes present in the training set, and are not required to explicitly model the marginal data distribution. As a consequence, at testing time the system cannot assess the probability of the presented data, complicating the assessment of novelty cases. Here we explore a completely different approach: to learn an embedding where one can use Euclidean distance as a measure of "out-of-distributioness". We propose a loss that learns an embedding where samples from the same in-distribution class form clusters, well separated from the space of other in-distribution classes and also from out-of-distribution samples. The contributions to the problem of out-of-distribution detection presented in this paper are the following. First, the use of metric learning for out-of-distribution detection, instead of doing it on the basis of the cross-entropy loss and corresponding softmax scores. Second, we distinguish between novelty and anomaly detection and show that research should focus on the more challenging problem of novelty detection. 
Third, we obtain comparable or better results than the state-of-the-art in both anomaly and novelty detection. Last, in addition to the experiments with benchmark datasets in order to compare with previous works, we also address a real-world classification problem, traffic sign recognition, for which we obtain good detection and accuracy results. Metric Learning for Out-of-Distribution Most recent works on out-of-distribution detection are based on supervised neural networks which optimize the cross-entropy loss. In these cases the network output has a direct correspondence with the solution of the task, namely a probability for each class. However, the output vector is forced to always sum up to one. This means that when the network is shown an input which is not part of the training distribution, it will still assign probabilities to the nearest classes so that they sum up to one. This phenomenon has led to the known problem of neural networks being too overconfident about content that they have never seen [10]. Several works have focused on improving the accuracy of the confidence estimate of cross-entropy based methods, adapting them in such a way that they yield lower confidences for out-of-distribution inputs [10,17,18]. We hypothesize that the problem of overconfident network predictions is inherent to the cross-entropy loss, and therefore propose to study another class of network objectives, namely those used for metric learning. In metric learning methods, we minimize an objective which encourages images with the same label to be close and images with different labels to be at least some margin apart in an embedding space. These networks do not apply a softmax layer, and are therefore not forced to divide images which are out-of-distribution over the known classes. Metric Learning For applications such as image retrieval, images are represented by an embedding in some feature space. Images can be ordered (or classified) according to the distance to other images in that embedding space. It has been shown that using metric learning methods to improve the embeddings can significantly improve their performance [8]. The theory of metric learning was extended to deep neural networks by Chopra et al. [4]. They proposed to pass images through two parallel network branches which share their weights (also called a Siamese network). A loss considers both embeddings, and adapts the embedding in such a way that similar classes are close and dissimilar classes are far apart in that embedding space. Traditionally these networks have been trained with the contrastive loss [9], which is formulated as $L(x_1, x_2, y; W) = \frac{1}{2}(1-y)\,D_W^2 + \frac{1}{2}\,y\,(\max(0, m - D_W))^2$ (1), where $D_W = \| f_W(x_1) - f_W(x_2) \|_2$ is the distance between the embeddings of images $x_1$ and $x_2$ computed by network $f_W$ with weights $W$. The label $y = 0$ indicates that the two images are from the same class, and $y = 1$ is used for images from different classes. The loss therefore minimizes the distance between images of the same class, and increases the distance between images of different classes until this distance surpasses the margin $m$ (a minimal implementation sketch of this loss is given after this block). Several other losses have been proposed for Siamese networks [11,25,28,31,32], but in this paper we evaluate results with the contrastive loss to provide a simple baseline on which to improve. Out-of-Distribution Mining (ODM) In the previous section, we considered that during training only examples of in-distribution data are provided.
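As a concrete reference for Eq. (1), the following is a minimal sketch of the contrastive loss over a batch of embedding pairs. It is written in PyTorch for illustration; the batch averaging and the default margin value are choices of this sketch rather than details taken from the text above.

```python
import torch

def contrastive_loss(emb1, emb2, y, margin=10.0):
    """Contrastive loss of Eq. (1).

    emb1, emb2: (B, D) embeddings f_W(x1) and f_W(x2).
    y:          (B,) float labels, 0 = same class, 1 = different class.
    """
    d = torch.norm(emb1 - emb2, p=2, dim=1)                    # D_W for every pair
    same = 0.5 * (1.0 - y) * d.pow(2)                          # pull same-class pairs together
    diff = 0.5 * y * torch.clamp(margin - d, min=0.0).pow(2)   # push different-class pairs apart
    return (same + diff).mean()                                # batch average (choice of this sketch)
```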
However, some methods consider the availability of some out-of-distribution data during training [17]. This is often a realistic assumption since it is relatively easy to obtain data from other datasets or to create out-of-distribution examples, such as samples generated with Gaussian noise. However, it has to be noted that the out-of-distribution data is used unlabeled, and is of a different distribution than the out-of-distribution data used at testing. The objective is to help the network be less confident about what it does not know. Therefore, noise or even unlabeled data can be used to strengthen the knowledge boundaries of the network. We propose to adapt the contrastive loss to incorporate the out-of-distribution data: $L(x_1, x_2, y; W) = \frac{1}{2}(1-y)\,z\,D_W^2 + \frac{1}{2}\,y\,z\,(\max(0, m - D_W))^2$ (2), where we have introduced a label $z$ which is zero when both images are from the out-of-distribution and one otherwise. This loss is similar to Eq. 1, but with the difference that for a pair of images where one is an out-of-distribution image ($z = 1$, $y = 1$), the two are encouraged to be at least $m$ apart. Note that we do not enforce the out-of-distribution images to be close to each other, since when $z = 0$ the pair does not contribute to the loss. It is important to make sure that there are no pairs of out-of-distribution samples, so that they are not treated as a single new class and forced to be grouped into a single cluster. In practice, we have not implemented a two-branch Siamese network but followed recent works [19,30] which devise a more efficient approach to minimize losses traditionally computed with Siamese networks. The idea is to sample a minibatch of images which we forward through a single branch until the embedding layer. We then sample pairs from them in the loss layer and backpropagate the gradient. This allows the network to be defined with only one copy of the weights instead of having two branches with shared weights. At the same time, computing the pairs after the embedding also allows using any subgroup of possible pairs among all the images from the minibatch. When computing the pairs we make sure that pairs of out-of-distribution samples are not used. As a result, $z$ will never be 0 and we can in practice directly apply Eq. 1 instead of Eq. 2 (a sketch of this in-batch pair construction is given after this block). Anomaly and Novelty detection In this paper we distinguish between two categories of out-of-distribution data: Novelty: samples that share some common space with the trained distribution, which are usually concepts or classes which the network could include when expanding its knowledge. If you train a network specialized in different dog breeds, an example would be a new dog breed that was not in the training set. Furthermore, if the classes are more complex, some novelty out-of-distribution could be new viewpoints or modifications of an existing learned class. Anomaly: samples that are not related to the trained distribution. In this category we could include background images, Gaussian noise, or classes unrelated to the trained distribution (i.e. SVHN would be a meaningful anomaly for CIFAR-10). Since anomalies are further from the in-distribution than novelties, they are expected to be easier to detect. To further illustrate the difference between novelties and anomalies, consider the following experiment. We train a LeNet on the classes 2, 6 and 7 from the MNIST dataset [16] under the same setup for both the cross-entropy (CE) and contrastive (ML) losses.
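The in-batch pair construction described above can be sketched as follows, reusing contrastive_loss from the previous sketch. Marking out-of-distribution samples with the label -1 and enumerating all valid pairs are assumptions of this sketch, not details from the text; any subset of the valid pairs (e.g. a fixed fraction of (in-dist, out-dist) pairs) could be sampled instead.

```python
import itertools
import torch

def odm_pair_loss(embeddings, labels, margin=10.0):
    """Pair construction for Eq. (2) inside one mini-batch.

    embeddings: (B, D) tensor from a single forward pass.
    labels:     list of ints; class index for in-distribution samples,
                -1 for (unlabeled) out-of-distribution samples.
    Pairs of two out-of-distribution samples are skipped (z = 0 in Eq. (2)),
    so the remaining pairs reduce to the plain contrastive loss of Eq. (1).
    """
    idx1, idx2, y = [], [], []
    for i, j in itertools.combinations(range(len(labels)), 2):
        li, lj = int(labels[i]), int(labels[j])
        if li == -1 and lj == -1:
            continue                      # never pair two out-distribution samples
        idx1.append(i)
        idx2.append(j)
        # y = 0 only for two in-distribution samples of the same class
        y.append(0.0 if (li == lj and li != -1) else 1.0)
    y = torch.tensor(y, device=embeddings.device)
    return contrastive_loss(embeddings[idx1], embeddings[idx2], y, margin)
```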
We also train it with our proposed method, which introduces out-of-distribution mining during training (ODM). We use classes 0, 3, 4, and 8 as the seen out-of-distribution samples during training. Then, we visualize the embeddings for different out-of-distribution cases, from closer to further resemblance to the training set: 1) the similar numbers 5, 9 and 1 as novelty, 2) SVHN [22] and CIFAR-10 [14] as anomalies with a meaning, and 3) the simpler Gaussian noise anomalies. In Figure 1 we show the 3-dimensional output embedding spaces for CE, ML and ODM in rows 1, 2 and 3 respectively. As expected, the CE space is bounded inside the shown triangle, since the three dimensions of the output (the number of classes) always have to sum up to 1. For SVHN, CE correctly assigns low confidence to all classes. However, for CIFAR-10, Gaussian noise and novelty it is increasingly confident that an out-of-distribution image should be classified as an in-distribution one. In the case of ML, all anomalies seem to be more separated from the in-distribution clusters of each class, and only the novelty is still too close to the cluster centers. With the introduction of out-of-distribution samples during training, ODM shows how out-of-distribution images are kept away from the in-distribution, allowing the network to be confident about what it is capable of classifying and what not. We provide quantitative performance results for this experiment in the Supplementary Material. In conclusion, this experiment shows that there is a difference between novel and anomalous out-of-distribution samples for both cross-entropy and metric learning approaches, stressing that those have to be approached differently. Furthermore, the overconfidence of cross-entropy methods is clearer in novelty detection cases, and among the anomaly cases, Gaussian noise seems to be the one with the most overconfident cases. In those cases, a metric learning approach presents more benefits when doing out-of-distribution detection. It allows the output embedding space to be more representative of the learned classes around the class centers, and naturally has the ability to give low scores to unseen data (one plausible way of turning distances in this space into an in/out score is sketched after this block). Finally, when some out-of-distribution samples are shown during training, the network is more capable of adapting the embedding space to be more separable against anomaly data. Results To assess the performance of the proposed method, we first compare with existing state-of-the-art out-of-distribution detection methods on the SVHN [22] and CIFAR-10 [14] datasets, trained on VGGnet [29] and evaluated with the metrics provided in [17]. Furthermore, as a more application-based benchmark, we propose to compare cross-entropy based strategies and metric learning strategies on the Tsinghua dataset [35] of traffic signs. In this second set of experiments we use our own implementation of the metrics defined in [18]. More about the metrics used can be found in the Supplementary Material. 1 Comparison with state-of-the-art We compare our method with two very recent state-of-the-art methods. One of them uses a confidence classifier and an adversarial generator (CC-AG) [17] and, like ours, uses out-of-distribution images during training. The second method is ODIN [18], which does not consider out-of-distribution images during training. In [17], CC-AG is compared with ODIN [18] and shown to perform much better in the novelty case but similarly in the anomaly cases.
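As a reference for how the learned embedding can be turned into an out-of-distribution score, the sketch below scores a test sample by its distance to the nearest in-distribution class center. This is one plausible rule consistent with the description above; the exact scoring rule used by the authors is not spelled out in this text, so the function names and the choice of "negative distance to the nearest centroid" are assumptions of this sketch.

```python
import numpy as np

def class_centroids(train_emb, train_labels):
    """Mean embedding per in-distribution class (train_emb: (N, D), train_labels: (N,))."""
    classes = np.unique(train_labels)
    return np.stack([train_emb[train_labels == c].mean(axis=0) for c in classes])

def in_distribution_score(test_emb, centroids):
    """Higher score = more in-distribution: negative distance to the nearest class center."""
    d = np.linalg.norm(test_emb[:, None, :] - centroids[None, :, :], axis=-1)  # (M, C)
    return -d.min(axis=1)
```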
We train with each of SVHN and CIFAR-10 as the in-distribution dataset while using the other dataset as the seen out-distribution during training. We train on VGGnet, just like [17], with a contrastive loss of margin 10 and 25% of (in-dist, out-dist) pairs every two batches. Following the experiments of [17], we test the resulting networks on the in-distribution test set for classification, and on TinyImageNet [6], LSUN [33] and Gaussian noise for out-of-distribution detection. For evaluation we use the proposed metrics from their implementation, namely: true negative rate (TNR) when the true positive rate (TPR) is at 95%, detection accuracy, area under the receiver operating characteristic curve (AUROC), and the area under the precision-recall curve for both in-distribution (AUPR-in) and out-distribution (AUPR-out). Table 1 shows the results. For SVHN as the in-distribution, results are as expected, with ODIN having lower results due to not using any out-of-distribution data during training, and both CC-AG and ODM having near perfect performance. In the case of CIFAR-10 being the in-distribution, the same pattern is repeated for the seen distribution from SVHN. However, for the unseen out-distributions, CC-AG achieves the lowest performance on both the TinyImageNet and LSUN datasets, and ODIN the lowest for Gaussian noise. Although not always achieving the best performance, ODM is able to compete with the best cases, and is never the worst performer. Gaussian noise seems to be the most difficult case on CIFAR-10, which is a more complex dataset than SVHN. ODIN, as it is only based on cross-entropy, becomes too overconfident. In the case of CC-AG and ODM, the low results might be related to Gaussian noise being too different from the out-distribution seen during training. Finally, it is important to note that metric learning has a lower classification accuracy on the in-distribution. This has already been observed in [12], where features learned by classification networks with typical softmax layers are compared with metric learning based features on several benchmark datasets. For good classification results, our metric learning network should be combined with a network trained with cross-entropy. One could also consider a network with two heads, where after some initial shared layers a cross-entropy branch and a metric learning branch are trained in a multi-task setting. Tsinghua traffic sign dataset We evaluate our method on a real application, i.e. traffic sign recognition in the presence of unseen traffic signs (novelty) and not-a-traffic-sign detection (anomaly). We compare our proposed method ODM against ODIN [18], as a cross-entropy based method, on the Tsinghua dataset [35]. We divide the traffic sign classes into three disjoint partitions: the in-distribution classes, seen out-of-distribution images used for training, and unseen out-of-distribution images used for testing out-of-distribution detection. Since Tsinghua contains some very similar traffic sign classes which would rarely be learned without each other (i.e. all speed limits, all turning arrows, ...), we group those that are too similar in order to build a more reasonable and natural split than just a random one (see Supplementary Material for more on the usual random splits). For the same reason, we also discard classes with fewer than 10 images, as they introduce errors. Therefore, we generate a random split which complies with the mentioned restrictions (see Fig.
2), by taking a 50-20-30% split of the classes for the in-distribution, seen out-distribution and unseen out-distribution respectively. Regarding anomalies, we consider Gaussian noise, but also background patches from the same Tsinghua dataset images. Those patches are generated randomly from the central area of the original full frames to avoid an unbalanced ratio of ground and sky images, which can be semantically richer and more challenging. (Table 2: Comparison between ODIN and our proposed learning strategies on a WRN-28-10 architecture, when using novelty or anomaly (background patches and Gaussian noise) as seen out-of-distribution data, as well as unseen out-of-distribution data.) In a real traffic sign detector application, where detected candidate traffic signs are fed to a classifier, this kind of anomaly is more realistic and accounts for possible detection errors better than Gaussian noise does. The global performance of the system can be improved by preventing those anomalies from reaching the classifier and producing an overconfident error. For this experiment, we learn a 32-dimensional embedding space, training a WRN-28-10 model [34] with an Adam optimizer at learning rate 0.0001 for 10,000 steps. The same training parameters are used for ODIN, since they provided the best combination on the validation set. Table 2 shows the results of the comparison between ODIN, ML and ODM for both seen novelty and anomaly cases. Note that our implementation of the Detection Error metric is fixed to use the FPR at a TPR of 95%, so that a value of 2.50 corresponds to a perfect detector (see Supplementary Material). In terms of in-distribution classification accuracy, both methods are equivalent. However, the comparison of plain metric learning (Ours-ML) with ODIN shows that learning an embedding can be more suitable for out-of-distribution detection of both novelty and anomalies. Introducing out-distribution samples during training slightly improves all cases. Using anomalies as seen out-of-distribution during training helps the detection of the same kind of anomaly, as expected, since anomalies will be forced to be further away from the in-distribution in the embedding space. However, in some cases it can damage the detection of novelty, which would not be guaranteed to be pushed away from the learned classes. In this paper, we propose a metric learning approach to improve out-of-distribution detection which performs comparably to or better than the state-of-the-art. We show that metric learning provides a better output embedding space to detect data outside the learned distribution than cross-entropy softmax based models. This opens an opportunity for further research on how this embedding space should be learned, with restrictions that could further improve the field. The presented results suggest that out-of-distribution data might not all be seen as a single type of anomaly, but instead as a continuum between novelty and anomaly data. In that spectrum, anomaly detection is the easier task, which places more focus on the difficulty of novelty detection. Finally, we also propose a new benchmark for out-of-distribution detection on the Tsinghua dataset, as a more realistic scenario for novelty detection. Supplementary Material Metric Learning for Novelty and Anomaly Detection A Out-of-Distribution detection metrics In out-of-distribution detection, comparing different detector approaches cannot be done by measuring only accuracy.
The question we want to answer is whether a given test sample is from a different distribution than that of the training data. The detector will use some information from the classifier or embedding space, but the prediction is whether that processed sample is part of the in-distribution or the out-distribution. To measure that, we adopt the metrics proposed in [18]: • FPR at 95% TPR is the corresponding False Positive Rate (FPR = FP/(FP+TN)) when the True Positive Rate (TPR = TP/(TP+FN)) is at 95%. It can be interpreted as the probability of a negative (out-distribution) sample being misclassified as a positive (in-distribution) sample. • Detection Error measures the probability of misclassifying a sample when the TPR is at 95%. Assuming that a sample has equal probability of being positive or negative in the test, it is defined as 0.5(1 − TPR) + 0.5 FPR, where TP, FP, TN, FN correspond to true positives, false positives, true negatives and false negatives respectively. Those two metrics were changed to TNR at 95% TPR and Detection Accuracy in [17], which can be obtained as 1 minus each of the two metrics above, respectively. We use the latter metrics only when comparing to other state-of-the-art methods. This is also done because the implementation in both [17,18] allows using a TPR which is not exactly at 95% in some cases, meaning that the Detection Error can go below 2.5 since the TPR is not fixed to 0.95. In order to avoid biases due to in-distribution samples being more frequent than out-distribution ones, we need threshold-independent metrics that measure the trade-off between false negatives and false positives. We adopt the following performance metrics proposed in [10]: • AUROC is the Area Under the Receiver Operating Characteristic proposed in [5]. It measures the relation between TPR and FPR, and can be interpreted as the probability of a positive sample being assigned a higher score than a negative sample. • AUPR is the Area Under the Precision-Recall curve proposed in [21]. It measures the relationship between precision (TP/(TP+FP)) and recall (TP/(TP+FN)) and is more robust when positive and negative classes have different base rates. For this metric we provide both AUPR-in and AUPR-out, treating in-distribution and out-distribution samples as positive, respectively. (A short sketch of how these metrics can be computed is given after this block.) B Quantitative results of the MNIST experiment In this section we present the quantitative results of the comparison on the MNIST dataset. In this case we allowed a 5-dimensional embedding space for ML, so the representation is rich enough to discriminate between in-dist and out-dist. For CE, as it is fixed to the number of classes, the embedding space is 3-dimensional. In Table 3 we see that ML performs better than CE in all cases. ODM almost solves the novelty problem while keeping a performance on anomalies similar to ML. It is noticeable that CE struggles a bit more with Gaussian noise than with the other anomalies. In this case, CE still produces highly confident predictions for some of the noise images. C Experimental results on additional Tsinghua splits As an alternative to the Tsinghua split generated with the restrictions introduced in Section 4.2, we also perform the comparison on a set of 10 random splits without applying any restriction to the partition classes. We still discard the classes with fewer than 10 images per class. Table 4 shows the average performance for this set of splits with their respective standard deviation.
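The metrics of Section A can be computed from two arrays of scores (higher = more in-distribution). The sketch below uses scikit-learn, with in-distribution treated as the positive class; interpolating the ROC curve at TPR = 0.95 is an implementation choice of this sketch and may differ in small details from the implementations of [17,18].

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score, roc_curve

def ood_metrics(scores_in, scores_out):
    """scores_in / scores_out: 1-D arrays, higher score = more in-distribution."""
    y = np.concatenate([np.ones_like(scores_in), np.zeros_like(scores_out)])
    s = np.concatenate([scores_in, scores_out])
    fpr, tpr, _ = roc_curve(y, s)
    fpr95 = float(np.interp(0.95, tpr, fpr))                  # FPR at 95% TPR
    return {
        "FPR@95TPR": fpr95,
        "DetectionError": 0.5 * (1 - 0.95) + 0.5 * fpr95,     # 2.5% for a perfect detector
        "AUROC": roc_auc_score(y, s),
        "AUPR-in": average_precision_score(y, s),             # in-distribution as positive
        "AUPR-out": average_precision_score(1 - y, -s),       # out-distribution as positive
    }
```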
Since the split of the classes is random, this leads to highly similar or mirrored classes being separated into in-distribution and out-distribution, creating situations that are very difficult to predict correctly. For instance, detecting that a turn-left traffic sign is part of the in-distribution while the turn-right traffic sign is part of the out-distribution is very difficult in many cases. Therefore, the results from the random splits have a much lower performance, especially for the novelty case. When comparing the metric learning based methods, ODM improves over ML for the test set that has been seen as out-distribution during training. In general, using novelty data as out-distribution yields an improvement on that test set, as well as for background and noise. However, when using background images to push the out-of-distribution further from the in-distribution class clusters in the embedding space, novelty is almost unaffected. The same happens when noise is used as out-distribution during training. This could be explained by those cases improving the embedding space for data that is initially not so far away from the in-distribution class clusters. (Table 4: Comparison between ODIN and our proposed learning strategies on a WRN-28-10 architecture, when using novelty or anomaly (background patches and Gaussian noise) as seen out-of-distribution data, as well as unseen out-of-distribution data. The experiments are performed on a set of 10 random splits and the metrics provided are the mean over the individual splits ± its standard deviation.) This would change the embedding space to push the anomalies further, but would leave the novelty classes, originally much closer to the clusters, almost at the same location. When introducing out-of-distribution samples, the behaviour on the random splits is the same as for the restricted splits: while introducing novelty helps detection in all cases, introducing anomaly helps the detection of the same kind of anomaly. Figure 3 shows the embeddings for ODM (with novelty as seen out-of-distribution) and ML after applying PCA. When using ML, the novelties are not forced to be pushed away from the in-distribution clusters, so they share the embedding space in between those same in-distribution clusters. In the case of ODM, the out-of-distribution clusters are more clearly separated from the in-distribution ones. Figure 3: Embedding spaces after PCA for ODM (left) and ML (right) tested for in-dist (blue shaded) and out-dist (yellow shaded). Results are for Tsinghua (first row), background patches (second row) and Gaussian noise (third row). Best viewed in color.
4,302
1808.05498
2951863869
Rotation estimation of known rigid objects is important for robotic applications such as dexterous manipulation. Most existing methods for rotation estimation use intermediate representations such as templates, global or local feature descriptors, or object coordinates, which require multiple steps in order to infer the object pose. We propose to directly regress a pose vector from raw point cloud segments using a convolutional neural network. Experimental results show that our method can potentially achieve competitive performance compared to a state-of-the-art method, while also showing more robustness against occlusion. Our method does not require any post processing such as refinement with the iterative closest point algorithm.
6D pose estimation using only RGB information has been widely studied @cite_27 @cite_12 @cite_10 @cite_0 . Since this work concentrates on using point cloud inputs, which contain depth information, we mainly review works that also consider depth information. We also review how depth information can be represented.
{ "abstract": [ "Estimating the 6D pose of objects from images is an important problem in various applications such as robot manipulation and virtual reality. While direct regression of images to object poses has limited accuracy, matching rendered images of an object against the observed image can produce accurate results. In this work, we propose a novel deep neural network for 6D pose matching named DeepIM. Given an initial pose estimation, our network is able to iteratively refine the pose by matching the rendered image against the observed image. The network is trained to predict a relative pose transformation using an untangled representation of 3D location and 3D orientation and an iterative training process. Experiments on two commonly used benchmarks for 6D pose estimation demonstrate that DeepIM achieves large improvements over state-of-the-art methods. We furthermore show that DeepIM is able to match previously unseen objects.", "", "We propose a single-shot approach for simultaneously detecting an object in an RGB image and predicting its 6D pose without requiring multiple stages or having to examine multiple hypotheses. Unlike a recently proposed single-shot technique for this task (, ICCV'17) that only predicts an approximate 6D pose that must then be refined, ours is accurate enough not to require additional post-processing. As a result, it is much faster - 50 fps on a Titan X (Pascal) GPU - and more suitable for real-time processing. The key component of our method is a new CNN architecture inspired by the YOLO network design that directly predicts the 2D image locations of the projected vertices of the object's 3D bounding box. The object's 6D pose is then estimated using a PnP algorithm. For single object and multiple object pose estimation on the LINEMOD and OCCLUSION datasets, our approach substantially outperforms other recent CNN-based approaches when they are all used without post-processing. During post-processing, a pose refinement step can be used to boost the accuracy of the existing methods, but at 10 fps or less, they are much slower than our method.", "We introduce a novel method for 3D object detection and pose estimation from color images only. We first use segmentation to detect the objects of interest in 2D even in presence of partial occlusions and cluttered background. By contrast with recent patch-based methods, we rely on a “holistic” approach: We apply to the detected objects a Convolutional Neural Network (CNN) trained to predict their 3D poses in the form of 2D projections of the corners of their 3D bounding boxes. This, however, is not sufficient for handling objects from the recent T-LESS dataset: These objects exhibit an axis of rotational symmetry, and the similarity of two images of such an object under two different poses makes training the CNN challenging. We solve this problem by restricting the range of poses used for training, and by introducing a classifier to identify the range of a pose at run-time before estimating it. We also use an optional additional step that refines the predicted poses. We improve the state-of-the-art on the LINEMOD dataset from 73.7 [2] to 89.3 of correctly registered RGB frames. We are also the first to report results on the Occlusion dataset [1 ] using color images only. We obtain 54 of frames passing the Pose 6D criterion on average on several sequences of the T-LESS dataset, compared to the 67 of the state-of-the-art [10] on the same sequences which uses both color and depth. 
The full approach is also scalable, as a single network can be trained for multiple objects simultaneously." ], "cite_N": [ "@cite_0", "@cite_27", "@cite_10", "@cite_12" ], "mid": [ "2795999188", "", "2952717317", "2604236302" ] }
Occlusion Resistant Object Rotation Regression from Point Cloud Segments
The 6D pose of an object is composed of 3D location and 3D orientation. The pose describes the transformation from a local coordinate system of the object to a reference coordinate system (e.g. camera or robot coordinate) [20], as shown in Figure 1. Knowing the accurate 6D pose of an object is necessary for robotic applications such as dexterous grasping and manipulation. This problem is challenging due to occlusion, clutter and varying lighting conditions. Many methods for pose estimation using only color information have been proposed [17,25,32,21]. Since depth cameras are commonly used, there have been many methods using both color and depth information [1,18,15]. Recently, there are also many CNN based methods [18,15]. In general, methods that use depth information can handle both textured and texture-less objects, and they are more robust to occlusion compared to methods using only color information [18,15]. The 6D pose of an object is an inherently continuous quantity. Some works discretize the continuous pose space [8,9], and formulate the problem as classification. Others avoid discretization by representing the pose using, e.g., quaternions [34], or the axis-angle representation [22,4]. Work outside the domain of pose estimation has also considered rotation matrices [24], or in a more general case parametric representations of affine transformations [14]. In these cases the problem is often formulated as regression. The choice of rotation representation has a major impact on the performance of the estimation method. In this work, we propose a deep learning based pose estimation method that uses point clouds as an input. To the best of our knowledge, this is the first attempt at applying deep learning for directly estimating 3D rotation using point cloud segments. We formulate the problem of estimating the rotation of a rigid object as regression from a point cloud segment to the axis-angle representation of the rotation. This representation is constraint-free and thus well-suited for application in supervised learning. Our experimental results show that our method reaches state-of-the-art performance. We also show that our method exceeds the state-of-the-art in pose estimation tasks with moderate amounts of occlusion. Our approach does not require any post-processing, such as pose refinement by the iterative closest point (ICP) algorithm [3]. In practice, we adapt PointNet [24] for the rotation regression task. Our input is a point cloud with spatial and color information. We use the geodesic distance between rotations as the loss function. The remainder of the paper is organized as follows. Section 2 reviews related work in pose estimation. In Section 3, we argue why the axis-angle representation is suitable for supervised learning. We present our system architecture and network details in Section 4. Section 5 presents our experimental results. In Section 6 we provide concluding remarks and discuss future work. Pose estimation RGB-D methods. A template matching method which integrates color and depth information is proposed by Hinterstoisser et al. [8,9]. Templates are built with quantized image gradients on object contour from RGB information and surface normals on object interior from depth information, and annotated with viewpoint information. The effectiveness of template matching is also shown in [12,19]. However, template matching methods are sensitive to occlusions [18]. 
Voting-based methods attempt to infer the pose of an object by accumulating evidence from local or global features of image patches. One example is the Latent-Class Hough Forest [31,30] which adapts the template feature from [8] for generating training data. During inference stage, a random set of patches is sampled from the input image. The patches are used in Hough voting to obtain pose hypotheses for verification. 3D object coordinates and object instance probabilities are learned using a Decision Forest in [1]. The 6D pose estimation is then formulated as an energy optimization problem which compares synthetic images rendered with the estimated pose with observed depth values. 3D object coordinates are also used in [18,23]. However, those approaches tend to be very computationally intensive due to generation and verification of hypotheses [18]. Most recent approaches rely on convolutional neural networks (CNNs). In [20], the work in [1] is extended by adding a CNN to describe the posterior density of an object pose. A combination of using a CNN for object segmentation and geometry-based pose estimation is proposed in [16]. PoseCNN [34] uses a similar two-stage network, in which the first stage extracts feature maps from RGB input and the second stage uses the generated maps for object segmentation, 3D translation estimation and 3D rotation regression in quaternion format. Depth data and ICP are used for pose refinement. Jafari et al. [15] propose a three-stage, instance-aware approach for 6D object pose estimation. An instance segmentation network is first applied, followed by an encoder-decoder network which estimates the 3D object coordinates for each segment. The 6D pose is recovered with a geometric pose optimization step similar to [1]. The approaches [20,15,34] do not directly use CNN to predict the pose. Instead, they provide segmentation and other intermediate information, which are used to infer the object pose. Point cloud-based. Drost et al. [5] propose to extract a global model description from oriented point pair features. With the global description, scene data are matched with models using a voting scheme. This approach is further improved by [10] to be more robust against sensor noise and background clutter. Compared to [5,10], our approach uses a CNN to learn the global description. Depth representation Depth information in deep learning systems can be represented with, e.g., voxel grids [28,26], truncated signed distance functions (TSDF) [29], or point clouds [24]. Voxel grids are simple to generate and use. Because of their regular grid structure, voxel grids can be directly used as inputs to 3D CNNs. However, voxel grids are inefficient since they also have to explicitly represent empty space. They also suffer from discretization artifacts. TSDF tries to alleviate these problems by storing the shortest distance to the surface represented in each voxel. This allows a more faithful representation of the 3D information. In comparison to other depth data representations, a point cloud has a simple representation without redundancy, yet contains rich geometric information. Recently, PointNet [24] has allowed to use raw point clouds directly as an input of a CNN. Supervised learning for rotation regression The aim of object pose estimation is to find the translation and rotation that describe the transformation from the object coordinate system O to the camera coordinate system C ( Figure 1). 
The translation consists of the displacements along the three coordinate axes, and the rotation specifies the rotation around the three coordinate axes. Here we concentrate on the problem of estimating rotation. For supervised learning, we require a loss function that measures the difference between the predicted rotation and the ground truth rotation. To find a suitable loss function, we begin by considering a suitable representation for a rotation. We argue that the axis-angle representation is the best suited for a learning task. We then review the connection of the axis-angle representation to the Lie algebra of rotation matrices. The Lie algebra provides us with the tools needed to define our loss function as the geodesic distance of rotation matrices. These steps allow our network to directly make predictions in the axis-angle format. Notation. In the following, we denote by $(\cdot)^T$ vector or matrix transpose. By $\|\cdot\|_2$ we denote the Euclidean or 2-norm. We write $I_{3\times 3}$ for the 3-by-3 identity matrix. Axis-angle representation of rotations A rotation can be represented, e.g., as Euler angles, a rotation matrix, a quaternion, or with the axis-angle representation. Euler angles are known to suffer from gimbal lock discontinuity [11]. Rotation matrices and quaternions have orthogonality and unit norm constraints, respectively. Such constraints may be problematic in an optimization-based approach such as supervised learning, since they restrict the range of valid predictions. To avoid these issues, we adopt the axis-angle representation. In the axis-angle representation, a vector $r \in \mathbb{R}^3$ represents a rotation of $\theta = \|r\|_2$ radians around the unit vector $r / \|r\|_2$ [7]. The Lie group SO(3) The special orthogonal group $SO(3) = \{R \in \mathbb{R}^{3\times 3} \mid RR^T = I_{3\times 3},\ \det R = 1\}$ is a compact Lie group that contains the 3-by-3 orthogonal matrices with determinant one, i.e., all rotation matrices [6]. Associated with $SO(3)$ is the Lie algebra $\mathfrak{so}(3)$, consisting of the set of skew-symmetric 3-by-3 matrices. Let $r = (r_1, r_2, r_3)^T \in \mathbb{R}^3$ be an axis-angle representation of a rotation. The corresponding element of $\mathfrak{so}(3)$ is the skew-symmetric matrix $r_\times = \begin{pmatrix} 0 & -r_3 & r_2 \\ r_3 & 0 & -r_1 \\ -r_2 & r_1 & 0 \end{pmatrix}$ (1). The exponential map $\exp: \mathfrak{so}(3) \to SO(3)$ connects the Lie algebra with the Lie group by $\exp(r_\times) = I_{3\times 3} + \frac{\sin\theta}{\theta}\, r_\times + \frac{1-\cos\theta}{\theta^2}\, r_\times^2$ (2), where $\theta = \sqrt{r^T r} = \|r\|_2$ as above (for small $\theta$, Taylor approximations of the coefficients $\sin\theta/\theta$ and $(1-\cos\theta)/\theta^2$ should be used for numerical stability). Now let $R$ be a rotation matrix in the Lie group $SO(3)$. The logarithmic map $\log: SO(3) \to \mathfrak{so}(3)$ connects $R$ with an element in the Lie algebra by $\log(R) = \frac{\phi(R)}{2\sin(\phi(R))}(R - R^T)$ (3), where $\phi(R) = \arccos\left(\frac{\operatorname{trace}(R) - 1}{2}\right)$ (4) can be interpreted as the magnitude of rotation related to $R$ in radians. If desired, we can now obtain an axis-angle representation of $R$ by first extracting from $\log(R)$ the corresponding elements indicated in Eq. (1), and then setting the norm of the resulting vector to $\phi(R)$. Loss function for rotation regression We regress to a predicted rotation $\hat{r}$ represented in the axis-angle form. The prediction is compared against the ground truth rotation $r$ via a loss function $l: \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}_{\geq 0}$. Let $\hat{R}$ and $R$ denote the two rotation matrices corresponding to $\hat{r}$ and $r$, respectively. We use as loss function the geodesic distance $d(\hat{R}, R)$ of $\hat{R}$ and $R$ [13,7], i.e., $l(\hat{r}, r) = d(\hat{R}, R) = \phi(\hat{R} R^T)$ (5), where we first obtain $\hat{R}$ and $R$ via the exponential map, and then calculate $\phi(\hat{R} R^T)$ to obtain the loss value. This loss function directly measures the magnitude of rotation between $\hat{R}$ and $R$, making it convenient to interpret (a small numerical sketch of these maps and of the loss is given after this block).
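A small NumPy reference implementation of Eqs. (1)-(5) is sketched below, useful for checking values; for training, the same operations would have to be expressed in a differentiable framework. Function names and the small-angle cutoff are choices of this sketch.

```python
import numpy as np

def hat(r):
    """Skew-symmetric matrix r_x of Eq. (1) for an axis-angle vector r = (r1, r2, r3)."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def exp_map(r, eps=1e-8):
    """Eq. (2): axis-angle vector -> rotation matrix (Rodrigues formula)."""
    theta = np.linalg.norm(r)
    rx = hat(r)
    if theta < eps:                      # small-angle case, first-order approximation
        return np.eye(3) + rx
    return np.eye(3) + np.sin(theta) / theta * rx + (1 - np.cos(theta)) / theta**2 * rx @ rx

def phi(R):
    """Eq. (4): magnitude of the rotation R in radians."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(c)

def geodesic_loss(r_pred, r_true):
    """Eq. (5): geodesic distance between the rotations encoded by two axis-angle vectors."""
    return phi(exp_map(r_pred) @ exp_map(r_true).T)
```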
Furthermore, using the axis-angle representation allows the network to make predictions free of constraints such as the unit norm requirement of quaternions. This also makes the loss function convenient to implement in a supervised learning approach. (¹ A series expansion of Eq. (2) should be used for small θ for numerical stability.) 4 System architecture. Figure 2 shows the system overview. We train our system for a specific target object, in Figure 2 the drill. The inputs to our system are the RGB color image, the depth image, and a segmentation mask indicating which pixels belong to the target object. We first create a point cloud segment of the target object based on these inputs. Each point has 6 dimensions: 3 for the spatial coordinates and 3 for the color information. We randomly sample n points from this point cloud segment to create a fixed-size downsampled point cloud. In all of our experiments, we use n = 256. We then remove the estimated translation from the point coordinates to normalize the data. The normalized point cloud segment is then fed into a network which outputs a rotation prediction in the axis-angle format. During training, we use the ground truth segmentation and translation. As we focus on rotation estimation, during testing we apply the segmentation and translation outputs of PoseCNN [34]. We consider two variants of our network, presented in the following subsections. The first variant processes the point cloud as a set of independent points without regard to the local neighbourhoods of points. The second variant explicitly takes the local neighbourhood of a point into account by considering its nearest neighbours. PointNet (PN). Our PN network is based on PointNet [24], as illustrated in Figure 3. The PointNet architecture is invariant to all n! possible permutations of the input point cloud, and is hence an ideal structure for processing raw point clouds. The invariance is achieved by processing all points independently using multi-layer perceptrons (MLPs) with shared weights. The obtained feature vectors are then max-pooled to create a global feature representation of the input point cloud. Finally, we attach a three-layer regression MLP on top of this global feature to predict the rotation. Dynamic nearest neighbour graph (DG). In the PN architecture, each feature is extracted from a single point only, so the network does not explicitly consider the local neighbourhoods of individual points. However, local neighbourhoods can contain useful geometric information for pose estimation [27]. We therefore also consider an alternative network structure based on the dynamic nearest-neighbour graph network proposed in [33]. For each point $P_i$ in the point set, a k-nearest-neighbour graph is calculated. In all our experiments, we use k = 10. The graph contains directed edges $(i, j_{i1}), \ldots, (i, j_{ik})$, such that $P_{j_{i1}}, \ldots, P_{j_{ik}}$ are the k closest points to $P_i$. For an edge $e_{ij}$, the edge feature $(P_i,\ P_j - P_i)$ is calculated, i.e., the point is concatenated with the difference to its neighbour. The edge features are then processed in a similar manner as in PointNet to preserve permutation invariance. This dynamic graph convolution can then be repeated, now calculating the nearest-neighbour graph for the feature vectors of the first shared MLP layer, and so on for the subsequent layers. We use the implementation provided by the authors of [33], and call the resulting network DG for dynamic graph.
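As a sketch of the edge features used in the DG variant above, the snippet below builds the k-nearest-neighbour graph for a point set and forms the concatenated features $(P_i,\ P_j - P_i)$. It mirrors the description in the text rather than the released implementation of [33], and it uses XYZ coordinates only, matching the experimental setup described later.

```python
import numpy as np

def knn_edge_features(points, k=10):
    """Build the k-NN graph and edge features described for the DG variant.

    points: (n, 3) array of XYZ coordinates (estimated translation already removed).
    Returns an (n, k, 6) array where entry [i, m] is the concatenation
    (P_i, P_j - P_i) for the m-th nearest neighbour j of point i.
    """
    # Pairwise squared distances, shape (n, n).
    diff = points[:, None, :] - points[None, :, :]
    dist2 = np.sum(diff ** 2, axis=-1)
    np.fill_diagonal(dist2, np.inf)             # exclude self-edges
    knn_idx = np.argsort(dist2, axis=1)[:, :k]  # indices of the k closest points

    center = np.repeat(points[:, None, :], k, axis=1)  # P_i, broadcast to each edge
    neighbour = points[knn_idx]                          # P_j for each edge
    return np.concatenate([center, neighbour - center], axis=-1)

# Example: 256 random points, as in the paper's fixed-size segments.
segment = np.random.rand(256, 3)
edge_feat = knn_edge_features(segment, k=10)
print(edge_feat.shape)  # (256, 10, 6)
```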
Experimental results. This section shows experimental results of the proposed approach on the YCB video dataset [34], and compares the performance with the state-of-the-art PoseCNN method [34]. Besides prediction accuracy, we investigate the effect of occlusions and the quality of the segmentation and translation estimates. Experiment setup. The YCB video dataset [34] is used for training and testing with the original train/test split. The dataset contains 133,827 frames of 21 objects selected from the YCB object set [2] with 6D pose annotations. 80,000 frames of synthetic data are also provided as an extension to the training set. We select a set of four objects to test on, shown in Figure 4. As our approach does not consider object symmetry, we use objects that have 1-fold rotational symmetry (power drill, banana and pitcher base) or 2-fold rotational symmetry (extra large clamp). We run all experiments using both the PointNet based (PN) and dynamic graph (DG) networks. During training, the Adam optimizer is used with a learning rate of 0.008 and a batch size of 128. Batch normalization is applied to all layers. No dropout is used. For training, ground truth segmentations and translations are used as the corresponding inputs shown in Fig. 2. When evaluating 3D rotation estimation in Subsection 5.3, the translation and segmentation predicted by PoseCNN are used. We observed that the color information, represented in RGB color space, varies inconsistently across different video sequences; hence all the following experimental results are obtained using only the XYZ coordinates of the point cloud. Moreover, our current system does not address the classification problem; an individual network is trained for each object. Due to the differences in experimental setup between our method and PoseCNN, the performance comparison mainly serves to illustrate the potential of the proposed approach. Evaluation metrics. For evaluating rotation estimation, we directly use the geodesic distance described in Section 3 to quantify the rotation error. We evaluate 6D pose estimation using the average distance of model points (ADD) proposed in [9]. For a 3D model $M$ represented as a set of points, with ground truth rotation $R$ and translation $t$, and estimated rotation $\hat R$ and translation $\hat t$, the ADD is defined as $\mathrm{ADD} = \frac{1}{m}\sum_{x \in M} \|(Rx + t) - (\hat R x + \hat t)\|_2$, (6) where $m$ is the number of points. The 6D pose estimate is considered correct if the ADD is smaller than a given threshold. The accuracy of rotation angle prediction is the fraction of predictions with an error smaller than the threshold. Results are shown for our method and PoseCNN [34]. The suffix +gt denotes the variants where the ground truth segmentation is provided. Figure 5 shows the estimation accuracy as a function of the rotation angle error threshold, i.e., the fraction of predictions that have an angle error smaller than the horizontal axis value. Results are shown for PoseCNN, PoseCNN with ICP refinement (PoseCNN+ICP), and our method with the PointNet structure (PN) and with the dynamic graph structure (DG). To determine the effect of the translation and segmentation input, we additionally test our methods while giving the ground truth translation and segmentation as input. The cases with ground truths provided are indicated by +gt and shown with a dashed line. The performance without ground truth translation and segmentation is significantly worse than the performance with ground truth information.
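A minimal NumPy sketch of the ADD metric of Eq. (6), and of the accuracy-versus-threshold evaluation used for the curves in Figure 5, is given below; the function names and array conventions are our own illustrative choices, not a reproduction of the evaluation code of [9] or [34].

```python
import numpy as np

def add_metric(model_points, R_gt, t_gt, R_est, t_est):
    """Average distance of model points (ADD), Eq. (6).

    model_points: (m, 3) array of 3D model points x in M.
    R_gt, R_est: (3, 3) rotation matrices; t_gt, t_est: (3,) translations.
    """
    gt = model_points @ R_gt.T + t_gt      # (R x + t) for every model point
    est = model_points @ R_est.T + t_est   # (R_hat x + t_hat)
    return np.mean(np.linalg.norm(gt - est, axis=1))

def accuracy_below_threshold(errors, threshold):
    """Fraction of estimates whose error (ADD or rotation angle) is below the
    threshold, i.e. one point of an accuracy curve plotted against the threshold."""
    errors = np.asarray(errors)
    return float(np.mean(errors < threshold))
```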
These results show that good translation and segmentation estimates are crucial for accurate rotation estimation. Also, even with ground truth information, the performance for the extra large clamp (2-fold rotational symmetry) is worse than for the other objects, which illustrates that object symmetry should be taken into account during the learning process. Rotation estimation. The results also confirm that ICP-based refinement usually only improves the estimation quality if the initial guess is already good enough. When the initial estimate is not accurate enough, the use of ICP can even decrease the accuracy, as shown by the PoseCNN+ICP curve falling below the PoseCNN curve for large angle thresholds. Table 1 (excerpt): Ours (PN+gt): 9.9°±0.5°, 5.7°±0.1°, 6.5°±0.3°, 13°±0.8°, 11.2°±0.4°, 5.7°±0.4°; Ours (DG+gt): 7.1°±0.3°, 9.8°±1.2°, 4.3°±0.2°, 2.6°±0.3°, 34.1°±1.6°, 68.2°±8.9°. Effect of occlusion. We quantify the effect of occlusion on the rotation prediction accuracy. For a given frame and target object, we estimate the occlusion factor $O$ of the object by $O = 1 - \frac{\lambda}{\mu}$, (7) where $\lambda$ is the number of pixels in the 2D ground truth segmentation, and $\mu$ is the number of pixels in the projection of the 3D model of the object onto the image plane using the camera intrinsic parameters and the ground truth 6D pose, i.e., the number of pixels the object would cover if it were fully visible. We noted that for the test frames of the YCB video dataset, $O$ is mostly below 0.5. We categorize $O < 0.2$ as low occlusion and $O \geq 0.2$ as moderate occlusion. Table 1 shows the average rotation angle error (in degrees) and its 95% confidence interval for PoseCNN and our method in the low and moderate occlusion categories. We also investigated the effect of the translation and segmentation by considering variants of our methods that were provided with the ground truth translation and segmentation. These variants are indicated in the table by +gt. We observe that with ground truth information, our methods show potential in cases of both low and moderate occlusion. Furthermore, with the dynamic graph architecture (DG), the average error tends to be lower for objects with 1-fold rotational symmetry. This shows that the local neighbourhood information extracted by DG is useful for rotation estimation when there is no pose ambiguity. One observation is that for the banana, the rotation error under low occlusion is significantly higher than in the moderate occlusion case for PoseCNN. This is because nearly 25% of the test frames in the low occlusion case exhibit a rotation error in the range of 160° to 180°. Qualitative results for rotation estimation are shown in Figure 6. The leftmost column gives the occlusion factor $O$ of the target object. Then, from left to right, we show the ground truth, PoseCNN+ICP, our method using DG, and our method using DG with ground truth translation and segmentation (DG+gt). In all cases, the ground truth pose, or respectively the pose estimate, is indicated by the green overlay in the figures. To focus on the difference in the rotation estimate, we use the ground truth translation for all methods in the visualization. The rotation predictions for Ours (DG) are still based on the translation and segmentation from PoseCNN. The first two rows of Figure 6 show cases with moderate occlusion. When the discriminative part of the banana is occluded (top row), PoseCNN cannot recover the rotation, while our method still produces a good estimate. The situation is similar in the second row for the drill.
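The occlusion factor of Eq. (7) and the low/moderate binning used for Table 1 can be computed from two binary masks as sketched below; representing the inputs as boolean arrays is our own bookkeeping assumption, and the projection step that produces the full-visibility mask (model, pose, and intrinsics) is not shown.

```python
import numpy as np

def occlusion_factor(gt_mask, full_mask):
    """Occlusion factor O of Eq. (7).

    gt_mask:   boolean (H, W) ground truth segmentation (lambda = visible pixels).
    full_mask: boolean (H, W) projection of the 3D model rendered with the ground
               truth pose and camera intrinsics (mu = pixels if fully visible).
    """
    lam = np.count_nonzero(gt_mask)
    mu = np.count_nonzero(full_mask)
    return 1.0 - lam / mu

def occlusion_category(o, threshold=0.2):
    """Binning used for Table 1: low (O < 0.2) vs. moderate (O >= 0.2) occlusion."""
    return "low" if o < threshold else "moderate"
```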
The third row of Figure 6 illustrates that the quality of segmentation has a strong impact on the accuracy of rotation estimation. In this case the segmentation fails to detect the black clamp on the black background, which leads to a poor rotation estimate for both PoseCNN and our method. When we provide the ground truth segmentation (third row, last column), our method is still unable to recover the correct rotation due to the pose ambiguity. Conclusion. We propose to directly predict the 3D rotation of a known rigid object from a point cloud segment. We use the axis-angle representation of rotations as the regression target. Our network learns a global representation either from individual input points, or from point sets of nearest neighbours. The geodesic distance is used as the loss function to supervise the learning process. Without using ICP refinement, our experiments show that the proposed method can reach competitive and sometimes superior performance compared to PoseCNN. Our results show that point cloud segments contain enough information for inferring object pose. The axis-angle representation does not have any constraints, making it a suitable regression target. The Lie algebra of rotation matrices provides a valid distance measure for rotations, which can be used as a loss function during training. We found that the performance of our method is strongly affected by the quality of the target object translation and segmentation, which will be further investigated in future work. We will extend the proposed method to full 6D pose estimation by additionally predicting the object translation. We also plan to integrate object classification into our system, and to study a wider range of target objects.
3,705
1808.05498
2951863869
Rotation estimation of known rigid objects is important for robotic applications such as dexterous manipulation. Most existing methods for rotation estimation use intermediate representations such as templates, global or local feature descriptors, or object coordinates, which require multiple steps in order to infer the object pose. We propose to directly regress a pose vector from raw point cloud segments using a convolutional neural network. Experimental results show that our method can potentially achieve competitive performance compared to a state-of-the-art method, while also showing more robustness against occlusion. Our method does not require any post processing such as refinement with the iterative closest point algorithm.
Voting-based methods attempt to infer the pose of an object by accumulating evidence from local or global features of image patches. One example is the Latent-Class Hough Forest @cite_15 @cite_2 , which adapts the template features from @cite_32 to generate training data. During the inference stage, a random set of patches is sampled from the input image. The patches are used in Hough voting to obtain pose hypotheses for verification.
{ "abstract": [ "In this paper we propose a novel framework, Latent-Class Hough Forests, for 3D object detection and pose estimation in heavily cluttered and occluded scenes. Firstly, we adapt the state-of-the-art template matching feature, LINEMOD [14], into a scale-invariant patch descriptor and integrate it into a regression forest using a novel template-based split function. In training, rather than explicitly collecting representative negative samples, our method is trained on positive samples only and we treat the class distributions at the leaf nodes as latent variables. During the inference process we iteratively update these distributions, providing accurate estimation of background clutter and foreground occlusions and thus a better detection rate. Furthermore, as a by-product, the latent class distributions can provide accurate occlusion aware segmentation masks, even in the multi-instance scenario. In addition to an existing public dataset, which contains only single-instance sequences with large amounts of clutter, we have collected a new, more challenging, dataset for multiple-instance detection containing heavy 2D and 3D clutter as well as foreground occlusions. We evaluate the Latent-Class Hough Forest on both of these datasets where we outperform state-of-the art methods.", "We present a method for detecting 3D objects using multi-modalities. While it is generic, we demonstrate it on the combination of an image and a dense depth map which give complementary object information. It works in real-time, under heavy clutter, does not require a time consuming training stage, and can handle untextured objects. It is based on an efficient representation of templates that capture the different modalities, and we show in many experiments on commodity hardware that our approach significantly outperforms state-of-the-art methods on single modalities.", "In this paper we present Latent-Class Hough Forests , a method for object detection and 6 DoF pose estimation in heavily cluttered and occluded scenarios. We adapt a state of the art template matching feature into a scale-invariant patch descriptor and integrate it into a regression forest using a novel template-based split function. We train with positive samples only and we treat class distributions at the leaf nodes as latent variables. During testing we infer by iteratively updating these distributions, providing accurate estimation of background clutter and foreground occlusions and, thus, better detection rate. Furthermore, as a by-product, our Latent-Class Hough Forests can provide accurate occlusion aware segmentation masks, even in the multi-instance scenario. In addition to an existing public dataset, which contains only single-instance sequences with large amounts of clutter, we have collected two, more challenging, datasets for multiple-instance detection containing heavy 2D and 3D clutter as well as foreground occlusions. We provide extensive experiments on the various parameters of the framework such as patch size, number of trees and number of iterations to infer class distributions at test time. We also evaluate the Latent-Class Hough Forests on all datasets where we outperform state of the art methods." ], "cite_N": [ "@cite_15", "@cite_32", "@cite_2" ], "mid": [ "1022526533", "1969868017", "2400577036" ] }
Occlusion Resistant Object Rotation Regression from Point Cloud Segments
The 6D pose of an object is composed of its 3D location and 3D orientation. The pose describes the transformation from a local coordinate system of the object to a reference coordinate system (e.g., the camera or robot coordinate system) [20], as shown in Figure 1. Knowing the accurate 6D pose of an object is necessary for robotic applications such as dexterous grasping and manipulation. This problem is challenging due to occlusion, clutter and varying lighting conditions. Many methods for pose estimation using only color information have been proposed [17,25,32,21]. Since depth cameras are commonly used, there have also been many methods using both color and depth information [1,18,15]. Recently, many CNN-based methods have been proposed [18,15]. In general, methods that use depth information can handle both textured and texture-less objects, and they are more robust to occlusion compared to methods using only color information [18,15]. The 6D pose of an object is an inherently continuous quantity. Some works discretize the continuous pose space [8,9] and formulate the problem as classification. Others avoid discretization by representing the pose using, e.g., quaternions [34] or the axis-angle representation [22,4]. Work outside the domain of pose estimation has also considered rotation matrices [24], or, more generally, parametric representations of affine transformations [14]. In these cases the problem is often formulated as regression. The choice of rotation representation has a major impact on the performance of the estimation method. In this work, we propose a deep learning based pose estimation method that uses point clouds as input. To the best of our knowledge, this is the first attempt at applying deep learning to directly estimate 3D rotation from point cloud segments. We formulate the problem of estimating the rotation of a rigid object as regression from a point cloud segment to the axis-angle representation of the rotation. This representation is constraint-free and thus well suited for supervised learning. Our experimental results show that our method reaches state-of-the-art performance. We also show that our method exceeds the state of the art in pose estimation tasks with moderate amounts of occlusion. Our approach does not require any post-processing, such as pose refinement by the iterative closest point (ICP) algorithm [3]. In practice, we adapt PointNet [24] for the rotation regression task. Our input is a point cloud with spatial and color information. We use the geodesic distance between rotations as the loss function. The remainder of the paper is organized as follows. Section 2 reviews related work in pose estimation. In Section 3, we argue why the axis-angle representation is suitable for supervised learning. We present our system architecture and network details in Section 4. Section 5 presents our experimental results. In Section 6 we provide concluding remarks and discuss future work. Pose estimation. RGB-D methods. A template matching method which integrates color and depth information is proposed by Hinterstoisser et al. [8,9]. Templates are built from quantized image gradients on the object contour (RGB information) and surface normals on the object interior (depth information), and are annotated with viewpoint information. The effectiveness of template matching is also shown in [12,19]. However, template matching methods are sensitive to occlusion [18].
3,705
1808.05498
2951863869
Rotation estimation of known rigid objects is important for robotic applications such as dexterous manipulation. Most existing methods for rotation estimation use intermediate representations such as templates, global or local feature descriptors, or object coordinates, which require multiple steps in order to infer the object pose. We propose to directly regress a pose vector from raw point cloud segments using a convolutional neural network. Experimental results show that our method can potentially achieve competitive performance compared to a state-of-the-art method, while also showing more robustness against occlusion. Our method does not require any post processing such as refinement with the iterative closest point algorithm.
3D object coordinates and object instance probabilities are learned using a Decision Forest in @cite_33 . 6D pose estimation is then formulated as an energy optimization problem that compares synthetic images rendered with the estimated pose against observed depth values. 3D object coordinates are also used in @cite_22 @cite_23 . However, those approaches tend to be very computationally intensive due to the generation and verification of hypotheses @cite_22 .
{ "abstract": [ "", "This work addresses the problem of estimating the 6D Pose of specific objects from a single RGB-D image. We present a flexible approach that can deal with generic objects, both textured and texture-less. The key new concept is a learned, intermediate representation in form of a dense 3D object coordinate labelling paired with a dense class labelling. We are able to show that for a common dataset with texture-less objects, where template-based techniques are suitable and state of the art, our approach is slightly superior in terms of accuracy. We also demonstrate the benefits of our approach, compared to template-based techniques, in terms of robustness with respect to varying lighting conditions. Towards this end, we contribute a new ground truth dataset with 10k images of 20 objects captured each under three different lighting conditions. We demonstrate that our approach scales well with the number of objects and has capabilities to run fast.", "This paper addresses the task of estimating the 6D pose of a known 3D object from a single RGB-D image. Most modern approaches solve this task in three steps: i) Compute local features; ii) Generate a pool of pose-hypotheses; iii) Select and refine a pose from the pool. This work focuses on the second step. While all existing approaches generate the hypotheses pool via local reasoning, e.g. RANSAC or Hough-voting, we are the first to show that global reasoning is beneficial at this stage. In particular, we formulate a novel fully-connected Conditional Random Field (CRF) that outputs a very small number of pose-hypotheses. Despite the potential functions of the CRF being non-Gaussian, we give a new and efficient two-step optimization procedure, with some guarantees for optimality. We utilize our global hypotheses generation procedure to produce results that exceed state-of-the-art for the challenging \"Occluded Object Dataset\"." ], "cite_N": [ "@cite_22", "@cite_33", "@cite_23" ], "mid": [ "", "132147841", "2953257837" ] }
Occlusion Resistant Object Rotation Regression from Point Cloud Segments
The 6D pose of an object is composed of 3D location and 3D orientation. The pose describes the transformation from a local coordinate system of the object to a reference coordinate system (e.g. camera or robot coordinate) [20], as shown in Figure 1. Knowing the accurate 6D pose of an object is necessary for robotic applications such as dexterous grasping and manipulation. This problem is challenging due to occlusion, clutter and varying lighting conditions. Many methods for pose estimation using only color information have been proposed [17,25,32,21]. Since depth cameras are commonly used, there have been many methods using both color and depth information [1,18,15]. Recently, there are also many CNN based methods [18,15]. In general, methods that use depth information can handle both textured and texture-less objects, and they are more robust to occlusion compared to methods using only color information [18,15]. The 6D pose of an object is an inherently continuous quantity. Some works discretize the continuous pose space [8,9], and formulate the problem as classification. Others avoid discretization by representing the pose using, e.g., quaternions [34], or the axis-angle representation [22,4]. Work outside the domain of pose estimation has also considered rotation matrices [24], or in a more general case parametric representations of affine transformations [14]. In these cases the problem is often formulated as regression. The choice of rotation representation has a major impact on the performance of the estimation method. In this work, we propose a deep learning based pose estimation method that uses point clouds as an input. To the best of our knowledge, this is the first attempt at applying deep learning for directly estimating 3D rotation using point cloud segments. We formulate the problem of estimating the rotation of a rigid object as regression from a point cloud segment to the axis-angle representation of the rotation. This representation is constraint-free and thus well-suited for application in supervised learning. Our experimental results show that our method reaches state-of-the-art performance. We also show that our method exceeds the state-of-the-art in pose estimation tasks with moderate amounts of occlusion. Our approach does not require any post-processing, such as pose refinement by the iterative closest point (ICP) algorithm [3]. In practice, we adapt PointNet [24] for the rotation regression task. Our input is a point cloud with spatial and color information. We use the geodesic distance between rotations as the loss function. The remainder of the paper is organized as follows. Section 2 reviews related work in pose estimation. In Section 3, we argue why the axis-angle representation is suitable for supervised learning. We present our system architecture and network details in Section 4. Section 5 presents our experimental results. In Section 6 we provide concluding remarks and discuss future work. Pose estimation RGB-D methods. A template matching method which integrates color and depth information is proposed by Hinterstoisser et al. [8,9]. Templates are built with quantized image gradients on object contour from RGB information and surface normals on object interior from depth information, and annotated with viewpoint information. The effectiveness of template matching is also shown in [12,19]. However, template matching methods are sensitive to occlusions [18]. 
Voting-based methods attempt to infer the pose of an object by accumulating evidence from local or global features of image patches. One example is the Latent-Class Hough Forest [31,30] which adapts the template feature from [8] for generating training data. During inference stage, a random set of patches is sampled from the input image. The patches are used in Hough voting to obtain pose hypotheses for verification. 3D object coordinates and object instance probabilities are learned using a Decision Forest in [1]. The 6D pose estimation is then formulated as an energy optimization problem which compares synthetic images rendered with the estimated pose with observed depth values. 3D object coordinates are also used in [18,23]. However, those approaches tend to be very computationally intensive due to generation and verification of hypotheses [18]. Most recent approaches rely on convolutional neural networks (CNNs). In [20], the work in [1] is extended by adding a CNN to describe the posterior density of an object pose. A combination of using a CNN for object segmentation and geometry-based pose estimation is proposed in [16]. PoseCNN [34] uses a similar two-stage network, in which the first stage extracts feature maps from RGB input and the second stage uses the generated maps for object segmentation, 3D translation estimation and 3D rotation regression in quaternion format. Depth data and ICP are used for pose refinement. Jafari et al. [15] propose a three-stage, instance-aware approach for 6D object pose estimation. An instance segmentation network is first applied, followed by an encoder-decoder network which estimates the 3D object coordinates for each segment. The 6D pose is recovered with a geometric pose optimization step similar to [1]. The approaches [20,15,34] do not directly use CNN to predict the pose. Instead, they provide segmentation and other intermediate information, which are used to infer the object pose. Point cloud-based. Drost et al. [5] propose to extract a global model description from oriented point pair features. With the global description, scene data are matched with models using a voting scheme. This approach is further improved by [10] to be more robust against sensor noise and background clutter. Compared to [5,10], our approach uses a CNN to learn the global description. Depth representation Depth information in deep learning systems can be represented with, e.g., voxel grids [28,26], truncated signed distance functions (TSDF) [29], or point clouds [24]. Voxel grids are simple to generate and use. Because of their regular grid structure, voxel grids can be directly used as inputs to 3D CNNs. However, voxel grids are inefficient since they also have to explicitly represent empty space. They also suffer from discretization artifacts. TSDF tries to alleviate these problems by storing the shortest distance to the surface represented in each voxel. This allows a more faithful representation of the 3D information. In comparison to other depth data representations, a point cloud has a simple representation without redundancy, yet contains rich geometric information. Recently, PointNet [24] has allowed to use raw point clouds directly as an input of a CNN. Supervised learning for rotation regression The aim of object pose estimation is to find the translation and rotation that describe the transformation from the object coordinate system O to the camera coordinate system C ( Figure 1). 
The translation consists of the displacements along the three coordinate axes, and the rotation specifies the rotation around the three coordinate axes. Here we concentrate on the problem of estimating rotation. For supervised learning, we require a loss function that measures the difference between the predicted rotation and the ground truth rotation. To find a suitable loss function, we begin by considering a suitable representation for a rotation. We argue that the axis-angle representation is the best suited for a learning task. We then review the connection of the axis-angle representation to the Lie algebra of rotation matrices. The Lie algebra provides us with tools needed to define our loss function as the geodesic distance of rotation matrices. These steps allow our network to directly make predictions in the axis-angle format. Notation. In the following, we denote by (·) T vector or matrix transpose. By · 2 , we denote the Euclidean or 2-norm. We write I 3×3 for the 3-by-3 identity matrix. Axis-angle representation of rotations A rotation can be represented, e.g., as Euler angles, a rotation matrix, a quaternion, or with the axis-angle representation. Euler angles are known to suffer from gimbal lock discontinuity [11]. Rotation matrices and quaternions have orthogonality and unit norm constraints, respectively. Such constraints may be problematic in an optimization-based approach such as supervised learning, since they restrict the range of valid predictions. To avoid these issues, we adopt the axisangle representation. In the axis-angle representation, a vector r ∈ R 3 represents a rotation of θ = r 2 radians around the unit vector r r 2 [7]. The Lie group SO(3) The special orthogonal group SO(3) = {R ∈ R 3×3 | RR T = I 3×3 , det R = 1} is a compact Lie group that contains the 3-by-3 orthogonal matrices with determinant one, i.e., all rotation matrices [6]. Associated with SO(3) is the Lie algebra so (3), consisting of the set of skew-symmetric 3-by-3 matrices. Let r = r 1 r 2 r 3 T ∈ R 3 be an axis-angle representation of a rotation. The corresponding element of so (3) is the skew-symmetric matrix r × =   0 −r 3 r 2 r 3 0 −r 1 −r 2 r 1 0   .(1) The exponential map exp : so(3) → SO(3) connects the Lie algebra with the Lie group by exp(r × ) = I 3×3 + sin θ θ r × + 1 − cos θ θ 2 r 2 × ,(2) where θ = r T r = r 2 as above 1 . Now let R be a rotation matrix in the Lie group SO (3). The logarithmic map log : SO(3) → so (3) connects R with an element in the Lie algebra by log(R) = φ(R) 2 sin(φ(R)) (R − R T ),(3) where φ(R) = arccos trace(R) − 1 2 (4) can be interpreted as the magnitude of rotation related to R in radians. If desired, we can now obtain an axis-angle representation of R by first extracting from log(R) the corresponding elements indicated in Eq. (1), and then setting the norm of the resulting vector to φ(R). Loss function for rotation regression We regress to a predicted rotationr represented in the axis-angle form. The prediction is compared against the ground truth rotation r via a loss function l : R 3 × R 3 → R ≥0 . LetR and R denote the two rotation matrices corresponding tor and r, respectively. We use as loss function the geodesic distance d(R, R) of R and R [13,7], i.e., l(r, r) = d(R, R) = φ(RR T ),(5) where we first obtainR and R via the exponential map, and then calculate φ(RR T ) to obtain the loss value. This loss function directly measures the magnitude of rotation betweenR and R, making it convenient to interpret. 
Furthermore, using the axis-angle representation allows to make predictions free of constraints such as the unit norm requirement of quaternions. This makes the loss function also convenient to implement in a supervised learning approach. should be used for small θ for numerical stability. 4 System architecture Figure 2 shows the system overview. We train our system for a specific target object, in Figure 2 the drill. The inputs to our system are the RGB color image, the depth image, and a segmentation mask indicating which pixels belong to the target object. We first create a point cloud segment of the target object based on the inputs. Each point has 6 dimensions: 3 dimensions for spatial coordinates and 3 dimensions for color information. We randomly sample n points from this point cloud segment to create a fixed-size downsampled point cloud. In all of our experiments, we use n = 256. We then remove the estimated translation from the point coordinates to normalize the data. The normalized point cloud segment is then fed into a network which outputs a rotation prediction in the axis-angle format. During training, we use the ground truth segmentation and translation. As we focus on the rotation estimation, during testing, we apply the segmentation and translation outputs of PoseCNN [34]. We consider two variants for our network presented in the following subsections. The first variant processes the point cloud as a set of independent points without regard to the local neighbourhoods of points. The second variant explicitly takes into account the local neighbourhoods of a point by considering its nearest neighbours. PointNet (PN) Our PN network is based on PointNet [24], as illustrated in Figure 3. The Point-Net architecture is invariant to all n! possible permutations of the input point cloud, and hence an ideal structure for processing raw point clouds. The invariance is achieved by processing all points independently using multi-layer perceptrons (MLPs) with shared weights. The obtained feature vectors are finally max-pooled to create a global feature representation of the input point cloud. Finally, we attach a three-layer regression MLP on top of this global feature to predict the rotation. Dynamic nearest neighbour graph (DG) In the PN architecture, all features are extracted based only on a single point. Hence it does not explicitly consider the local neighbourhoods of individual points. However, local neighbourhoods can contain useful geometric information for pose estimation [27]. The local neighbourhoods are considered by an alternative network structure based on the dynamic nearest-neighbour graph network proposed in [33]. For each point P i in the point set, a k-nearest neighbor graph is calculated. In all our experiments, we use k = 10. The graph contains directed edges (i, j i1 ), . . . , (i, j ik ), such that P ji1 , . . . , P j ik are the k closest points to P i . For an edge e ij , an edge feature P i , (P j − P i ) T is calculated. The edge features are then processed in a similar manner as in PointNet to preserve permutation invariance. This dynamic graph convolution can then be repeated, now calculating the nearest neighbour graph for the feature vectors of the first shared MLP layer, and so on for the subsequent layers. We use the implementation 2 provided by authors from [33], and call the resulting network DG for dynamic graph. 
Experimental results This section shows experimental results of the proposed approach on the YCB video dataset [34], and compares the performance with state-of-the-art PoseCNN method [34]. Besides prediction accuracy, we investigate the effect of occlusions and the quality of the segmentation and translation estimates. Experiment setup The YCB video dataset [34] is used for training and testing with the original train/test split. The dataset contains 133,827 frames of 21 objects selected from the YCB object set [2] with 6D pose annotation. 80,000 frames of synthetic data are also provided as an extension to the training set. We select a set of four objects to test on, shown in Figure 4. As our approach does not consider object symmetry, we use objects that have 1-fold rotational symmetry (power drill, banana and pitcher base) or 2-fold rotational symmetry (extra large clamp). We run all experiments using both the PointNet based (PN) and dynamic graph (DG) networks. During training, Adam optimizer is used with learning rate 0.008, and batch size of 128. Batch normalization is applied to all layers. No dropout is used. For training, ground truth segmentations and translations are used as the corresponding inputs shown in Fig. 2. While evaluating 3D rotation estimation in Subsection 5.3, the translation and segmentation predicted by PoseCNN are used. We observed that the color information represented by RGB color space varies in an inconsistent manner across different video sequences, hence all the following experimental results are obtained only with XYZ coordinate information of point cloud. Moreover, our current system does not deal with classification problem, individual network is trained for each object. Due to the difference of experimental setup between our method and PoseCNN, the performance comparison are mainly for illustrating the potential of proposed approach. Evaluation metrics For evaluating rotation estimation, we directly use geodesic distance described in Section 3 to quantify the rotation error. We evaluate 6D pose estimation using average distance of model points (ADD) proposed in [9]. For a 3D model M represented as a set of points, with ground truth rotation R and translation t, and estimated rotationR and translationt, the ADD is defined as: ADD = 1 m x∈M (Rx + t) − (Rx +t) 2 ,(6) where m is the number of points. The 6D pose estimate is considered to be correct if ADD is smaller than a given threshold. Accuracy of rotation angle prediction shows the fraction of predictions with error smaller than the threshold. Results are shown for our method and PoseCNN [34]. The additional +gt denotes the variants where ground truth segmentation is provided. Figure 5 shows the estimation accuracy as function of the rotation angle error threshold, i.e., the fraction of predictions that have an angle error smaller than the horizontal axis value. Results are shown for PoseCNN, PoseCNN with ICP refinement (PoseCNN+ICP), and our method with PointNet structure (PN), and with dynamic graph structure (DG). To determine the effect of the translation and segmentation input, we additionally test our methods while giving the ground truth translation and segmentation as input. The cases with ground truths provided are indicated by +gt, and shown with a dashed line. The performance without ground truth translation and segmentation is significantly worse than the performance with ground truth information. 
The gap between the two shows that good translation and segmentation results are crucial for accurate rotation estimation. Also, even with ground truth information, the performance for the extra large clamp (2-fold rotational symmetry) is worse than for the other objects, which illustrates that object symmetry should be taken into consideration during the learning process.

Rotation estimation
The results also confirm that ICP-based refinement usually only improves the estimation quality if the initial guess is already good enough. When the initial estimate is not accurate enough, the use of ICP can even decrease the accuracy, as shown by the PoseCNN+ICP curve falling below the PoseCNN curve for large angle thresholds.

(Fragment of Table 1, average rotation angle error — Ours (PN+gt): 9.9° ± 0.5°, 5.7° ± 0.1°, 6.5° ± 0.3°, 13° ± 0.8°, 11.2° ± 0.4°, 5.7° ± 0.4°; Ours (DG+gt): 7.1° ± 0.3°, 9.8° ± 1.2°, 4.3° ± 0.2°, 2.6° ± 0.3°, 34.1° ± 1.6°, 68.2° ± 8.9°.)

Effect of occlusion. We quantify the effect of occlusion on the rotation prediction accuracy. For a given frame and target object, we estimate the occlusion factor O of the object by
$$O = 1 - \frac{\lambda}{\mu}, \qquad (7)$$
where λ is the number of pixels in the 2D ground truth segmentation, and µ is the number of pixels in the projection of the 3D model of the object onto the image plane, using the camera intrinsic parameters and the ground truth 6D pose, under the assumption that the object is fully visible. We noted that for the test frames of the YCB video dataset, O is mostly below 0.5. We categorize O < 0.2 as low occlusion and O ≥ 0.2 as moderate occlusion. Table 1 shows the average rotation angle error (in degrees) and its 95% confidence interval for PoseCNN and our method in the low and moderate occlusion categories. We also investigated the effect of the translation and segmentation by considering variants of our methods that were provided with the ground truth translation and segmentation. These variants are indicated by +gt in the table. We observe that with ground truth information, our methods show potential in cases of both low and moderate occlusion. Furthermore, with the dynamic graph architecture (DG), the average error tends to be lower for 1-fold rotational symmetry objects. This shows that the local neighbourhood information extracted by DG is useful for rotation estimation when there is no pose ambiguity. One observation is that for the banana, the rotation error of PoseCNN in the low occlusion case is significantly higher than in the moderate case. This is because nearly 25% of the test frames in the low occlusion case have a rotation error in the range of 160° to 180°.

Qualitative results for rotation estimation are shown in Figure 6. The leftmost column denotes the occlusion factor O of the target object. Then, from left to right, we show the ground truth, PoseCNN+ICP, our method using DG, and our method using DG with ground truth translation and segmentation (DG+gt). In all cases, the ground truth pose, or respectively the pose estimate, is indicated by the green overlay on the figures. To focus on the difference in the rotation estimate, we use the ground truth translation for all methods in the visualization. The rotation predictions for Ours (DG) are still based on the translation and segmentation from PoseCNN. The first two rows of Figure 6 show cases with moderate occlusion. When the discriminative part of the banana is occluded (top row), PoseCNN cannot recover the rotation, while our method still produces a good estimate. The situation is similar in the second row for the drill.
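Eq. (7) only requires counting pixels in two binary masks; a minimal sketch, assuming the visible segmentation and the full-visibility rendering of the model are given as boolean arrays:

```python
import numpy as np

def occlusion_factor(visible_mask, full_projection_mask):
    """Occlusion factor O of Eq. (7).
    visible_mask:         boolean mask of the visible object pixels (lambda pixels).
    full_projection_mask: boolean mask of the model projected with the ground truth
                          pose and intrinsics as if fully visible (mu pixels)."""
    lam = np.count_nonzero(visible_mask)
    mu = np.count_nonzero(full_projection_mask)
    return 1.0 - lam / mu
```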
The third row of Figure 6 illustrates that the quality of the segmentation has a strong impact on the accuracy of rotation estimation. In this case the segmentation fails to detect the black clamp on the black background, which leads to a poor rotation estimate for both PoseCNN and our method. When we provide the ground truth segmentation (third row, last column), our method is still unable to recover the correct rotation due to the pose ambiguity.

Conclusion
We propose to directly predict the 3D rotation of a known rigid object from a point cloud segment. We use the axis-angle representation of rotations as the regression target. Our network learns a global representation either from individual input points or from point sets of nearest neighbours. The geodesic distance is used as the loss function to supervise the learning process. Without using ICP refinement, experiments show that the proposed method can reach competitive and sometimes superior performance compared to PoseCNN. Our results show that point cloud segments contain enough information for inferring object pose. The axis-angle representation does not have any constraints, making it a suitable regression target. Using the Lie algebra as a tool provides a valid distance measure for rotations, and this distance measure can be used as a loss function during training. We discovered that the performance of our method is strongly affected by the quality of the target object translation and segmentation, which will be further investigated in future work. We will extend the proposed method to full 6D pose estimation by additionally predicting the object translation. We also plan to integrate object classification into our system and study a wider range of target objects.
3,705
1808.05498
2951863869
Rotation estimation of known rigid objects is important for robotic applications such as dexterous manipulation. Most existing methods for rotation estimation use intermediate representations such as templates, global or local feature descriptors, or object coordinates, which require multiple steps in order to infer the object pose. We propose to directly regress a pose vector from raw point cloud segments using a convolutional neural network. Experimental results show that our method can potentially achieve competitive performance compared to a state-of-the-art method, while also showing more robustness against occlusion. Our method does not require any post processing such as refinement with the iterative closest point algorithm.
Most recent approaches rely on convolutional neural networks (CNNs). In @cite_1 , the work in @cite_33 is extended by adding a CNN to describe the posterior density of an object pose. A combination of using a CNN for object segmentation and geometry-based pose estimation is proposed in @cite_8 . PoseCNN @cite_7 uses a similar two-stage network, in which the first stage extracts feature maps from the RGB input and the second stage uses the generated maps for object segmentation, 3D translation estimation and 3D rotation regression in quaternion format. Depth data and ICP are used for pose refinement. @cite_14 propose a three-stage, instance-aware approach for 6D object pose estimation. An instance segmentation network is first applied, followed by an encoder-decoder network which estimates the 3D object coordinates for each segment. The 6D pose is recovered with a geometric pose optimization step similar to @cite_33 . The approaches @cite_1 @cite_14 @cite_7 do not directly use a CNN to predict the pose. Instead, they provide segmentation and other intermediate information, which are used to infer the object pose.
{ "abstract": [ "We address the task of 6D pose estimation of known rigid objects from single input images in scenarios where the objects are partly occluded. Recent RGB-D-based methods are robust to moderate degrees of occlusion. For RGB inputs, no previous method works well for partly occluded objects. Our main contribution is to present the first deep learning-based system that estimates accurate poses for partly occluded objects from RGB-D and RGB input. We achieve this with a new instance-aware pipeline that decomposes 6D object pose estimation into a sequence of simpler steps, where each step removes specific aspects of the problem. The first step localizes all known objects in the image using an instance segmentation network, and hence eliminates surrounding clutter and occluders. The second step densely maps pixels to 3D object surface positions, so called object coordinates, using an encoder-decoder network, and hence eliminates object appearance. The third, and final, step predicts the 6D pose using geometric optimization. We demonstrate that we significantly outperform the state-of-the-art for pose estimation of partly occluded objects for both RGB and RGB-D input.", "This work addresses the problem of estimating the 6D Pose of specific objects from a single RGB-D image. We present a flexible approach that can deal with generic objects, both textured and texture-less. The key new concept is a learned, intermediate representation in form of a dense 3D object coordinate labelling paired with a dense class labelling. We are able to show that for a common dataset with texture-less objects, where template-based techniques are suitable and state of the art, our approach is slightly superior in terms of accuracy. We also demonstrate the benefits of our approach, compared to template-based techniques, in terms of robustness with respect to varying lighting conditions. Towards this end, we contribute a new ground truth dataset with 10k images of 20 objects captured each under three different lighting conditions. We demonstrate that our approach scales well with the number of objects and has capabilities to run fast.", "Estimating the 6D pose of known objects is important for robots to interact with the real world. The problem is challenging due to the variety of objects as well as the complexity of a scene caused by clutter and occlusions between objects. In this work, we introduce PoseCNN, a new Convolutional Neural Network for 6D object pose estimation. PoseCNN estimates the 3D translation of an object by localizing its center in the image and predicting its distance from the camera. The 3D rotation of the object is estimated by regressing to a quaternion representation. We also introduce a novel loss function that enables PoseCNN to handle symmetric objects. In addition, we contribute a large scale video dataset for 6D object pose estimation named the YCB-Video dataset. Our dataset provides accurate 6D poses of 21 objects from the YCB dataset observed in 92 videos with 133,827 frames. We conduct extensive experiments on our YCB-Video dataset and the OccludedLINEMOD dataset to show that PoseCNN is highly robust to occlusions, can handle symmetric objects, and provide accurate pose estimation using only color images as input. When using depth data to further refine the poses, our approach achieves state-of-the-art results on the challenging OccludedLINEMOD dataset. 
Our code and dataset are available at this https URL.", "", "Analysis-by-synthesis has been a successful approach for many tasks in computer vision, such as 6D pose estimation of an object in an RGB-D image which is the topic of this work. The idea is to compare the observation with the output of a forward process, such as a rendered image of the object of interest in a particular pose. Due to occlusion or complicated sensor noise, it can be difficult to perform this comparison in a meaningful way. We propose an approach that \"learns to compare\", while taking these difficulties into account. This is done by describing the posterior density of a particular object pose with a convolutional neural network (CNN) that compares an observed and rendered image. The network is trained with the maximum likelihood paradigm. We observe empirically that the CNN does not specialize to the geometry or appearance of specific objects, and it can be used with objects of vastly different shapes and appearances, and in different backgrounds. Compared to state-of-the-art, we demonstrate a significant improvement on two different datasets which include a total of eleven objects, cluttered background, and heavy occlusion." ], "cite_N": [ "@cite_14", "@cite_33", "@cite_7", "@cite_8", "@cite_1" ], "mid": [ "2796874547", "132147841", "2767032778", "", "2953350888" ] }
Occlusion Resistant Object Rotation Regression from Point Cloud Segments
The 6D pose of an object is composed of 3D location and 3D orientation. The pose describes the transformation from a local coordinate system of the object to a reference coordinate system (e.g. camera or robot coordinate) [20], as shown in Figure 1. Knowing the accurate 6D pose of an object is necessary for robotic applications such as dexterous grasping and manipulation. This problem is challenging due to occlusion, clutter and varying lighting conditions. Many methods for pose estimation using only color information have been proposed [17,25,32,21]. Since depth cameras are commonly used, there have been many methods using both color and depth information [1,18,15]. Recently, there are also many CNN based methods [18,15]. In general, methods that use depth information can handle both textured and texture-less objects, and they are more robust to occlusion compared to methods using only color information [18,15]. The 6D pose of an object is an inherently continuous quantity. Some works discretize the continuous pose space [8,9], and formulate the problem as classification. Others avoid discretization by representing the pose using, e.g., quaternions [34], or the axis-angle representation [22,4]. Work outside the domain of pose estimation has also considered rotation matrices [24], or in a more general case parametric representations of affine transformations [14]. In these cases the problem is often formulated as regression. The choice of rotation representation has a major impact on the performance of the estimation method. In this work, we propose a deep learning based pose estimation method that uses point clouds as an input. To the best of our knowledge, this is the first attempt at applying deep learning for directly estimating 3D rotation using point cloud segments. We formulate the problem of estimating the rotation of a rigid object as regression from a point cloud segment to the axis-angle representation of the rotation. This representation is constraint-free and thus well-suited for application in supervised learning. Our experimental results show that our method reaches state-of-the-art performance. We also show that our method exceeds the state-of-the-art in pose estimation tasks with moderate amounts of occlusion. Our approach does not require any post-processing, such as pose refinement by the iterative closest point (ICP) algorithm [3]. In practice, we adapt PointNet [24] for the rotation regression task. Our input is a point cloud with spatial and color information. We use the geodesic distance between rotations as the loss function. The remainder of the paper is organized as follows. Section 2 reviews related work in pose estimation. In Section 3, we argue why the axis-angle representation is suitable for supervised learning. We present our system architecture and network details in Section 4. Section 5 presents our experimental results. In Section 6 we provide concluding remarks and discuss future work. Pose estimation RGB-D methods. A template matching method which integrates color and depth information is proposed by Hinterstoisser et al. [8,9]. Templates are built with quantized image gradients on object contour from RGB information and surface normals on object interior from depth information, and annotated with viewpoint information. The effectiveness of template matching is also shown in [12,19]. However, template matching methods are sensitive to occlusions [18]. 
Voting-based methods attempt to infer the pose of an object by accumulating evidence from local or global features of image patches. One example is the Latent-Class Hough Forest [31,30] which adapts the template feature from [8] for generating training data. During inference stage, a random set of patches is sampled from the input image. The patches are used in Hough voting to obtain pose hypotheses for verification. 3D object coordinates and object instance probabilities are learned using a Decision Forest in [1]. The 6D pose estimation is then formulated as an energy optimization problem which compares synthetic images rendered with the estimated pose with observed depth values. 3D object coordinates are also used in [18,23]. However, those approaches tend to be very computationally intensive due to generation and verification of hypotheses [18]. Most recent approaches rely on convolutional neural networks (CNNs). In [20], the work in [1] is extended by adding a CNN to describe the posterior density of an object pose. A combination of using a CNN for object segmentation and geometry-based pose estimation is proposed in [16]. PoseCNN [34] uses a similar two-stage network, in which the first stage extracts feature maps from RGB input and the second stage uses the generated maps for object segmentation, 3D translation estimation and 3D rotation regression in quaternion format. Depth data and ICP are used for pose refinement. Jafari et al. [15] propose a three-stage, instance-aware approach for 6D object pose estimation. An instance segmentation network is first applied, followed by an encoder-decoder network which estimates the 3D object coordinates for each segment. The 6D pose is recovered with a geometric pose optimization step similar to [1]. The approaches [20,15,34] do not directly use CNN to predict the pose. Instead, they provide segmentation and other intermediate information, which are used to infer the object pose. Point cloud-based. Drost et al. [5] propose to extract a global model description from oriented point pair features. With the global description, scene data are matched with models using a voting scheme. This approach is further improved by [10] to be more robust against sensor noise and background clutter. Compared to [5,10], our approach uses a CNN to learn the global description. Depth representation Depth information in deep learning systems can be represented with, e.g., voxel grids [28,26], truncated signed distance functions (TSDF) [29], or point clouds [24]. Voxel grids are simple to generate and use. Because of their regular grid structure, voxel grids can be directly used as inputs to 3D CNNs. However, voxel grids are inefficient since they also have to explicitly represent empty space. They also suffer from discretization artifacts. TSDF tries to alleviate these problems by storing the shortest distance to the surface represented in each voxel. This allows a more faithful representation of the 3D information. In comparison to other depth data representations, a point cloud has a simple representation without redundancy, yet contains rich geometric information. Recently, PointNet [24] has allowed to use raw point clouds directly as an input of a CNN. Supervised learning for rotation regression The aim of object pose estimation is to find the translation and rotation that describe the transformation from the object coordinate system O to the camera coordinate system C ( Figure 1). 
The translation consists of the displacements along the three coordinate axes, and the rotation specifies the rotation around the three coordinate axes. Here we concentrate on the problem of estimating rotation. For supervised learning, we require a loss function that measures the difference between the predicted rotation and the ground truth rotation. To find a suitable loss function, we begin by considering a suitable representation for a rotation. We argue that the axis-angle representation is best suited for a learning task. We then review the connection of the axis-angle representation to the Lie algebra of rotation matrices. The Lie algebra provides us with the tools needed to define our loss function as the geodesic distance of rotation matrices. These steps allow our network to directly make predictions in the axis-angle format.

Notation. In the following, we denote by $(\cdot)^T$ the vector or matrix transpose. By $\|\cdot\|_2$, we denote the Euclidean or 2-norm. We write $I_{3\times 3}$ for the 3-by-3 identity matrix.

Axis-angle representation of rotations
A rotation can be represented, e.g., as Euler angles, a rotation matrix, a quaternion, or with the axis-angle representation. Euler angles are known to suffer from gimbal lock discontinuity [11]. Rotation matrices and quaternions have orthogonality and unit norm constraints, respectively. Such constraints may be problematic in an optimization-based approach such as supervised learning, since they restrict the range of valid predictions. To avoid these issues, we adopt the axis-angle representation. In the axis-angle representation, a vector $r \in \mathbb{R}^3$ represents a rotation of $\theta = \|r\|_2$ radians around the unit vector $r / \|r\|_2$ [7].

The Lie group SO(3)
The special orthogonal group $SO(3) = \{R \in \mathbb{R}^{3\times 3} \mid RR^T = I_{3\times 3}, \det R = 1\}$ is a compact Lie group that contains the 3-by-3 orthogonal matrices with determinant one, i.e., all rotation matrices [6]. Associated with $SO(3)$ is the Lie algebra $\mathfrak{so}(3)$, consisting of the set of skew-symmetric 3-by-3 matrices. Let $r = (r_1\; r_2\; r_3)^T \in \mathbb{R}^3$ be an axis-angle representation of a rotation. The corresponding element of $\mathfrak{so}(3)$ is the skew-symmetric matrix
$$r_\times = \begin{pmatrix} 0 & -r_3 & r_2 \\ r_3 & 0 & -r_1 \\ -r_2 & r_1 & 0 \end{pmatrix}. \qquad (1)$$
The exponential map $\exp : \mathfrak{so}(3) \to SO(3)$ connects the Lie algebra with the Lie group by
$$\exp(r_\times) = I_{3\times 3} + \frac{\sin\theta}{\theta}\, r_\times + \frac{1 - \cos\theta}{\theta^2}\, r_\times^2, \qquad (2)$$
where $\theta = \sqrt{r^T r} = \|r\|_2$ as above. Now let $R$ be a rotation matrix in the Lie group $SO(3)$. The logarithmic map $\log : SO(3) \to \mathfrak{so}(3)$ connects $R$ with an element in the Lie algebra by
$$\log(R) = \frac{\phi(R)}{2\sin(\phi(R))}\,(R - R^T), \qquad (3)$$
where
$$\phi(R) = \arccos\left(\frac{\operatorname{trace}(R) - 1}{2}\right) \qquad (4)$$
can be interpreted as the magnitude of rotation related to $R$ in radians. If desired, we can now obtain an axis-angle representation of $R$ by first extracting from $\log(R)$ the corresponding elements indicated in Eq. (1), and then setting the norm of the resulting vector to $\phi(R)$.

Loss function for rotation regression
We regress to a predicted rotation $\hat{r}$ represented in the axis-angle form. The prediction is compared against the ground truth rotation $r$ via a loss function $l : \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}_{\ge 0}$. Let $\hat{R}$ and $R$ denote the two rotation matrices corresponding to $\hat{r}$ and $r$, respectively. We use as loss function the geodesic distance $d(\hat{R}, R)$ of $\hat{R}$ and $R$ [13,7], i.e.,
$$l(\hat{r}, r) = d(\hat{R}, R) = \phi(\hat{R} R^T), \qquad (5)$$
where we first obtain $\hat{R}$ and $R$ via the exponential map, and then calculate $\phi(\hat{R} R^T)$ to obtain the loss value. This loss function directly measures the magnitude of rotation between $\hat{R}$ and $R$, making it convenient to interpret.
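As a concrete illustration of Eqs. (1)–(5), the exponential map and the geodesic loss can be written in a few lines of NumPy. The following is a minimal sketch; the function names are ours, not the authors', and the small-θ branch is a simple first-order fallback.

```python
import numpy as np

def skew(r):
    """Skew-symmetric matrix r_x of Eq. (1)."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def exp_map(r):
    """Rodrigues / exponential map of Eq. (2): axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(r)
    rx = skew(r)
    if theta < 1e-8:
        return np.eye(3) + rx          # first-order fallback near theta = 0
    return (np.eye(3)
            + (np.sin(theta) / theta) * rx
            + ((1.0 - np.cos(theta)) / theta ** 2) * rx @ rx)

def rotation_magnitude(R):
    """phi(R) of Eq. (4): rotation angle of R in radians."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)   # clip guards against round-off
    return np.arccos(c)

def geodesic_loss(r_pred, r_true):
    """Eq. (5): geodesic distance between predicted and ground truth rotations."""
    return rotation_magnitude(exp_map(r_pred) @ exp_map(r_true).T)

# Example: a 30 degree error around the z-axis gives a loss of ~0.524 rad.
print(geodesic_loss(np.array([0.0, 0.0, np.pi / 3]),
                    np.array([0.0, 0.0, np.pi / 6])))
```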
Furthermore, using the axis-angle representation allows to make predictions free of constraints such as the unit norm requirement of quaternions. This makes the loss function also convenient to implement in a supervised learning approach. should be used for small θ for numerical stability. 4 System architecture Figure 2 shows the system overview. We train our system for a specific target object, in Figure 2 the drill. The inputs to our system are the RGB color image, the depth image, and a segmentation mask indicating which pixels belong to the target object. We first create a point cloud segment of the target object based on the inputs. Each point has 6 dimensions: 3 dimensions for spatial coordinates and 3 dimensions for color information. We randomly sample n points from this point cloud segment to create a fixed-size downsampled point cloud. In all of our experiments, we use n = 256. We then remove the estimated translation from the point coordinates to normalize the data. The normalized point cloud segment is then fed into a network which outputs a rotation prediction in the axis-angle format. During training, we use the ground truth segmentation and translation. As we focus on the rotation estimation, during testing, we apply the segmentation and translation outputs of PoseCNN [34]. We consider two variants for our network presented in the following subsections. The first variant processes the point cloud as a set of independent points without regard to the local neighbourhoods of points. The second variant explicitly takes into account the local neighbourhoods of a point by considering its nearest neighbours. PointNet (PN) Our PN network is based on PointNet [24], as illustrated in Figure 3. The Point-Net architecture is invariant to all n! possible permutations of the input point cloud, and hence an ideal structure for processing raw point clouds. The invariance is achieved by processing all points independently using multi-layer perceptrons (MLPs) with shared weights. The obtained feature vectors are finally max-pooled to create a global feature representation of the input point cloud. Finally, we attach a three-layer regression MLP on top of this global feature to predict the rotation. Dynamic nearest neighbour graph (DG) In the PN architecture, all features are extracted based only on a single point. Hence it does not explicitly consider the local neighbourhoods of individual points. However, local neighbourhoods can contain useful geometric information for pose estimation [27]. The local neighbourhoods are considered by an alternative network structure based on the dynamic nearest-neighbour graph network proposed in [33]. For each point P i in the point set, a k-nearest neighbor graph is calculated. In all our experiments, we use k = 10. The graph contains directed edges (i, j i1 ), . . . , (i, j ik ), such that P ji1 , . . . , P j ik are the k closest points to P i . For an edge e ij , an edge feature P i , (P j − P i ) T is calculated. The edge features are then processed in a similar manner as in PointNet to preserve permutation invariance. This dynamic graph convolution can then be repeated, now calculating the nearest neighbour graph for the feature vectors of the first shared MLP layer, and so on for the subsequent layers. We use the implementation 2 provided by authors from [33], and call the resulting network DG for dynamic graph. 
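As a hedged sketch of the PN variant described above, the PyTorch module below applies a shared per-point MLP, max-pools to a global feature, and regresses a 3-dimensional axis-angle vector. The layer widths are assumptions on our part (the paper refers to its Figure 3 for the exact architecture).

```python
import torch
import torch.nn as nn

class PointNetRotation(nn.Module):
    """PN-style rotation regressor: shared per-point MLP -> max pool -> regression MLP.
    Layer widths are illustrative assumptions, not the authors' exact configuration."""
    def __init__(self, in_dim=3):
        super().__init__()
        self.point_mlp = nn.Sequential(               # shared weights across all points
            nn.Conv1d(in_dim, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        self.head = nn.Sequential(                    # three-layer regression MLP
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 3),                        # unconstrained axis-angle output
        )

    def forward(self, pts):                           # pts: (batch, n, in_dim)
        f = self.point_mlp(pts.transpose(1, 2))       # (batch, 1024, n)
        g = f.max(dim=2).values                       # permutation-invariant global feature
        return self.head(g)                           # (batch, 3)

# Example: a batch of 4 clouds with n = 256 XYZ points each.
print(PointNetRotation()(torch.rand(4, 256, 3)).shape)   # torch.Size([4, 3])
```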
Experimental results This section shows experimental results of the proposed approach on the YCB video dataset [34], and compares the performance with state-of-the-art PoseCNN method [34]. Besides prediction accuracy, we investigate the effect of occlusions and the quality of the segmentation and translation estimates. Experiment setup The YCB video dataset [34] is used for training and testing with the original train/test split. The dataset contains 133,827 frames of 21 objects selected from the YCB object set [2] with 6D pose annotation. 80,000 frames of synthetic data are also provided as an extension to the training set. We select a set of four objects to test on, shown in Figure 4. As our approach does not consider object symmetry, we use objects that have 1-fold rotational symmetry (power drill, banana and pitcher base) or 2-fold rotational symmetry (extra large clamp). We run all experiments using both the PointNet based (PN) and dynamic graph (DG) networks. During training, Adam optimizer is used with learning rate 0.008, and batch size of 128. Batch normalization is applied to all layers. No dropout is used. For training, ground truth segmentations and translations are used as the corresponding inputs shown in Fig. 2. While evaluating 3D rotation estimation in Subsection 5.3, the translation and segmentation predicted by PoseCNN are used. We observed that the color information represented by RGB color space varies in an inconsistent manner across different video sequences, hence all the following experimental results are obtained only with XYZ coordinate information of point cloud. Moreover, our current system does not deal with classification problem, individual network is trained for each object. Due to the difference of experimental setup between our method and PoseCNN, the performance comparison are mainly for illustrating the potential of proposed approach. Evaluation metrics For evaluating rotation estimation, we directly use geodesic distance described in Section 3 to quantify the rotation error. We evaluate 6D pose estimation using average distance of model points (ADD) proposed in [9]. For a 3D model M represented as a set of points, with ground truth rotation R and translation t, and estimated rotationR and translationt, the ADD is defined as: ADD = 1 m x∈M (Rx + t) − (Rx +t) 2 ,(6) where m is the number of points. The 6D pose estimate is considered to be correct if ADD is smaller than a given threshold. Accuracy of rotation angle prediction shows the fraction of predictions with error smaller than the threshold. Results are shown for our method and PoseCNN [34]. The additional +gt denotes the variants where ground truth segmentation is provided. Figure 5 shows the estimation accuracy as function of the rotation angle error threshold, i.e., the fraction of predictions that have an angle error smaller than the horizontal axis value. Results are shown for PoseCNN, PoseCNN with ICP refinement (PoseCNN+ICP), and our method with PointNet structure (PN), and with dynamic graph structure (DG). To determine the effect of the translation and segmentation input, we additionally test our methods while giving the ground truth translation and segmentation as input. The cases with ground truths provided are indicated by +gt, and shown with a dashed line. The performance without ground truth translation and segmentation is significantly worse than the performance with ground truth information. 
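As an aside, the curves in Figure 5 are empirical cumulative accuracies over the per-frame angle errors; they can be computed as in the short sketch below (the threshold grid is illustrative).

```python
import numpy as np

def accuracy_vs_threshold(angle_errors_deg, thresholds_deg=np.arange(0.0, 180.5, 2.5)):
    """Fraction of predictions whose rotation angle error is below each threshold,
    i.e. one curve of Figure 5."""
    errors = np.asarray(angle_errors_deg, dtype=float)
    return np.array([(errors < t).mean() for t in thresholds_deg])
```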
This shows that good translation and segmentation results are crucial for accurate rotation estimation. Also, by using ground truth information, the performance for extra large clamp (2-fold rotational symmetry) is worse than other objects, which illustrates that the object symmetry should be taken into consideration during learning process. Rotation estimation The results also confirm the fact that ICP based refinement usually only improves the estimation quality if the initial guess is already good enough. When the initial estimation is not accurate enough, the use of ICP can even decrease the accuracy, as shown by the PoseCNN+ICP curve falling below the PoseCNN curve for large angle thresholds. Ours (PN+gt) 9.9 • ±0.5 • 5.7 • ± 0.1 • 6.5 • ±0.3 • 13 • ±0.8 • 11.2 • ± 0.4 • 5.7 • ± 0.4 • Ours (DG+gt) 7.1 • ± 0.3 • 9.8 • ±1.2 • 4.3 • ± 0.2 • 2.6 • ± 0.3 • 34.1 • ±1.6 • 68.2 • ±8.9 • Effect of occlusion. We quantify the effect of occlusion on the rotation prediction accuracy. For a given frame and target object, we estimate the occlusion factor O of the object by O = 1 − λ µ ,(7) where λ is the number of pixels in the 2D ground truth segmentation, and µ is the number of pixels in the projection of the 3D model of the object onto the image plane using the camera intrinsic parameters and the ground truth 6D pose, when we assume that the object would be fully visible. We noted that for the test frames of the YCB-video dataset O is mostly below 0.5. We categorize O < 0.2 as low occlusion and O ≥ 0.2 as moderate occlusion. Table 1 shows the average rotation angle error (in degrees) and its 95% confidence interval 3 for PoseCNN and our method in the low and moderate occlusion categories. We also investigated the effect of the translation and segmentation by considering variants of our methods that were provided with the ground truth translation and segmentation. These variants are shown in the table indicated by +gt. We observe that with ground truth information, our methods shows potential in cases of both low and moderate occlusion. Furthermore, with the dynamic graph architecture (DG), the average error tends to be lower for 1-fold rotational symmetry objects. This shows the local neighbourhood information extracted by DG is useful for rotation estimation when there is no pose ambiguity. One observation is that for banana, the rotation error in low occlusion is significantly higher than it is in the moderate case for PoseCNN. This is because near 25% of the test frames in low occlusion case present an rotation error in range of 160 • to 180 • . Qualitative results for rotation estimation are shown in Figure 6. In the leftmost column, the occlusion factor O of the target object is denoted. Then, from left to right, we show the ground truth, PoseCNN+ICP, and our method using DG and our method using DG with ground truth translation and segmentation (DG+gt) results. In all cases, the ground truth pose, or respectively, the pose estimate, are indicated by the green overlay on the figures. To focus on the difference in the rotation estimate, we use the ground truth translation for all methods for the visualization. The rotation predictions for Ours (DG) are still based on translation and segmentation from PoseCNN. The first two rows of Figure 6 show cases with moderate occlusion. When the discriminative part of the banana is occluded (top row), PoseCNN can not recover the rotation, while our method still produces a good estimate. The situation is similar in the second row for the drill. 
The third row illustrates that the quality of segmentation has a strong impact on the accuracy of rotation estimation. In this case the segmentation fails to detect the black clamp on the black background, which leads to a poor rotation estimate for both PoseCNN and our method. When we provide the ground truth segmentation (third row, last column), our method is still unable to recover the correct rotation due to the pose ambiguity. Conclusion We propose to directly predict the 3D rotation of a known rigid object from a point cloud segment. We use axis-angle representation of rotations as the regression target. Our network learns a global representation either from individual input points, or from point sets of nearest neighbors. Geodesic distance is used as the loss function to supervise the learning process. Without using ICP refinement, experiments shows that the proposed method can reach competitive and sometimes superior performance compared to PoseCNN. Our results show that point cloud segments contain enough information for inferring object pose. The axis-angle representation does not have any constraints, making it a suitable regression target. Using Lie algebra as a tool provides a valid distance measure for rotations. This distance measure can be used as a loss function during training. We discovered that the performance of our method is strongly affected by the quality of the target object translation and segmentation, which will be further investigated in future work. We will extend the proposed method to full 6D pose estimation by additionally predicting the object translations. We also plan to integrate object classification into our system, and study a wider range of target objects.
3,705
1808.05498
2951863869
Rotation estimation of known rigid objects is important for robotic applications such as dexterous manipulation. Most existing methods for rotation estimation use intermediate representations such as templates, global or local feature descriptors, or object coordinates, which require multiple steps in order to infer the object pose. We propose to directly regress a pose vector from raw point cloud segments using a convolutional neural network. Experimental results show that our method can potentially achieve competitive performance compared to a state-of-the-art method, while also showing more robustness against occlusion. Our method does not require any post processing such as refinement with the iterative closest point algorithm.
Depth information in deep learning systems can be represented with, e.g., voxel grids @cite_30 @cite_18 , truncated signed distance functions (TSDF) @cite_9 , or point clouds @cite_5 . Voxel grids are simple to generate and use. Because of their regular grid structure, voxel grids can be directly used as inputs to 3D CNNs. However, voxel grids are inefficient since they also have to explicitly represent empty space. They also suffer from discretization artifacts. TSDF tries to alleviate these problems by storing the shortest distance to the surface represented in each voxel. This allows a more faithful representation of the 3D information. In comparison to other depth data representations, a point cloud has a simple representation without redundancy, yet contains rich geometric information. Recently, PointNet @cite_5 has allowed to use raw point clouds directly as an input of a CNN.
{ "abstract": [ "Recent work has shown good recognition results in 3D object recognition using 3D convolutional networks. In this paper, we show that the object orientation plays an important role in 3D recognition. More specifically, we argue that objects induce different features in the network under rotation. Thus, we approach the category-level classification task as a multi-task problem, in which the network is trained to predict the pose of the object in addition to the class label as a parallel task. We show that this yields significant improvements in the classification results. We test our suggested architecture on several datasets representing various 3D data sources: LiDAR data, CAD models, and RGB-D images. We report state-of-the-art results on classification as well as significant improvements in precision and speed over the baseline on 3D detection.", "We focus on the task of amodal 3D object detection in RGB-D images, which aims to produce a 3D bounding box of an object in metric form at its full extent. We introduce Deep Sliding Shapes, a 3D ConvNet formulation that takes a 3D volumetric scene from a RGB-D image as input and outputs 3D object bounding boxes. In our approach, we propose the first 3D Region Proposal Network (RPN) to learn objectness from geometric shapes and the first joint Object Recognition Network (ORN) to extract geometric features in 3D and color features in 2D. In particular, we handle objects of various sizes by training an amodal RPN at two different scales and an ORN to regress 3D bounding boxes. Experiments show that our algorithm outperforms the state-of-the-art by 13.8 in mAP and is 200x faster than the original Sliding Shapes. All source code and pre-trained models will be available at GitHub.", "We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks which are both deep and high resolution. Towards this goal, we exploit the sparsity in the input data to hierarchically partition the space using a set of unbalanced octrees where each leaf node stores a pooled feature representation. This allows to focus memory allocation and computation to the relevant dense regions and enables deeper networks without compromising resolution. We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks including 3D object classification, orientation estimation and point cloud labeling.", "" ], "cite_N": [ "@cite_30", "@cite_9", "@cite_18", "@cite_5" ], "mid": [ "2336098239", "2949768986", "2556802233", "" ] }
Occlusion Resistant Object Rotation Regression from Point Cloud Segments
The 6D pose of an object is composed of 3D location and 3D orientation. The pose describes the transformation from a local coordinate system of the object to a reference coordinate system (e.g. camera or robot coordinate) [20], as shown in Figure 1. Knowing the accurate 6D pose of an object is necessary for robotic applications such as dexterous grasping and manipulation. This problem is challenging due to occlusion, clutter and varying lighting conditions. Many methods for pose estimation using only color information have been proposed [17,25,32,21]. Since depth cameras are commonly used, there have been many methods using both color and depth information [1,18,15]. Recently, there are also many CNN based methods [18,15]. In general, methods that use depth information can handle both textured and texture-less objects, and they are more robust to occlusion compared to methods using only color information [18,15]. The 6D pose of an object is an inherently continuous quantity. Some works discretize the continuous pose space [8,9], and formulate the problem as classification. Others avoid discretization by representing the pose using, e.g., quaternions [34], or the axis-angle representation [22,4]. Work outside the domain of pose estimation has also considered rotation matrices [24], or in a more general case parametric representations of affine transformations [14]. In these cases the problem is often formulated as regression. The choice of rotation representation has a major impact on the performance of the estimation method. In this work, we propose a deep learning based pose estimation method that uses point clouds as an input. To the best of our knowledge, this is the first attempt at applying deep learning for directly estimating 3D rotation using point cloud segments. We formulate the problem of estimating the rotation of a rigid object as regression from a point cloud segment to the axis-angle representation of the rotation. This representation is constraint-free and thus well-suited for application in supervised learning. Our experimental results show that our method reaches state-of-the-art performance. We also show that our method exceeds the state-of-the-art in pose estimation tasks with moderate amounts of occlusion. Our approach does not require any post-processing, such as pose refinement by the iterative closest point (ICP) algorithm [3]. In practice, we adapt PointNet [24] for the rotation regression task. Our input is a point cloud with spatial and color information. We use the geodesic distance between rotations as the loss function. The remainder of the paper is organized as follows. Section 2 reviews related work in pose estimation. In Section 3, we argue why the axis-angle representation is suitable for supervised learning. We present our system architecture and network details in Section 4. Section 5 presents our experimental results. In Section 6 we provide concluding remarks and discuss future work. Pose estimation RGB-D methods. A template matching method which integrates color and depth information is proposed by Hinterstoisser et al. [8,9]. Templates are built with quantized image gradients on object contour from RGB information and surface normals on object interior from depth information, and annotated with viewpoint information. The effectiveness of template matching is also shown in [12,19]. However, template matching methods are sensitive to occlusions [18]. 
Voting-based methods attempt to infer the pose of an object by accumulating evidence from local or global features of image patches. One example is the Latent-Class Hough Forest [31,30] which adapts the template feature from [8] for generating training data. During inference stage, a random set of patches is sampled from the input image. The patches are used in Hough voting to obtain pose hypotheses for verification. 3D object coordinates and object instance probabilities are learned using a Decision Forest in [1]. The 6D pose estimation is then formulated as an energy optimization problem which compares synthetic images rendered with the estimated pose with observed depth values. 3D object coordinates are also used in [18,23]. However, those approaches tend to be very computationally intensive due to generation and verification of hypotheses [18]. Most recent approaches rely on convolutional neural networks (CNNs). In [20], the work in [1] is extended by adding a CNN to describe the posterior density of an object pose. A combination of using a CNN for object segmentation and geometry-based pose estimation is proposed in [16]. PoseCNN [34] uses a similar two-stage network, in which the first stage extracts feature maps from RGB input and the second stage uses the generated maps for object segmentation, 3D translation estimation and 3D rotation regression in quaternion format. Depth data and ICP are used for pose refinement. Jafari et al. [15] propose a three-stage, instance-aware approach for 6D object pose estimation. An instance segmentation network is first applied, followed by an encoder-decoder network which estimates the 3D object coordinates for each segment. The 6D pose is recovered with a geometric pose optimization step similar to [1]. The approaches [20,15,34] do not directly use CNN to predict the pose. Instead, they provide segmentation and other intermediate information, which are used to infer the object pose. Point cloud-based. Drost et al. [5] propose to extract a global model description from oriented point pair features. With the global description, scene data are matched with models using a voting scheme. This approach is further improved by [10] to be more robust against sensor noise and background clutter. Compared to [5,10], our approach uses a CNN to learn the global description. Depth representation Depth information in deep learning systems can be represented with, e.g., voxel grids [28,26], truncated signed distance functions (TSDF) [29], or point clouds [24]. Voxel grids are simple to generate and use. Because of their regular grid structure, voxel grids can be directly used as inputs to 3D CNNs. However, voxel grids are inefficient since they also have to explicitly represent empty space. They also suffer from discretization artifacts. TSDF tries to alleviate these problems by storing the shortest distance to the surface represented in each voxel. This allows a more faithful representation of the 3D information. In comparison to other depth data representations, a point cloud has a simple representation without redundancy, yet contains rich geometric information. Recently, PointNet [24] has allowed to use raw point clouds directly as an input of a CNN. Supervised learning for rotation regression The aim of object pose estimation is to find the translation and rotation that describe the transformation from the object coordinate system O to the camera coordinate system C ( Figure 1). 
The translation consists of the displacements along the three coordinate axes, and the rotation specifies the rotation around the three coordinate axes. Here we concentrate on the problem of estimating rotation. For supervised learning, we require a loss function that measures the difference between the predicted rotation and the ground truth rotation. To find a suitable loss function, we begin by considering a suitable representation for a rotation. We argue that the axis-angle representation is the best suited for a learning task. We then review the connection of the axis-angle representation to the Lie algebra of rotation matrices. The Lie algebra provides us with tools needed to define our loss function as the geodesic distance of rotation matrices. These steps allow our network to directly make predictions in the axis-angle format. Notation. In the following, we denote by (·) T vector or matrix transpose. By · 2 , we denote the Euclidean or 2-norm. We write I 3×3 for the 3-by-3 identity matrix. Axis-angle representation of rotations A rotation can be represented, e.g., as Euler angles, a rotation matrix, a quaternion, or with the axis-angle representation. Euler angles are known to suffer from gimbal lock discontinuity [11]. Rotation matrices and quaternions have orthogonality and unit norm constraints, respectively. Such constraints may be problematic in an optimization-based approach such as supervised learning, since they restrict the range of valid predictions. To avoid these issues, we adopt the axisangle representation. In the axis-angle representation, a vector r ∈ R 3 represents a rotation of θ = r 2 radians around the unit vector r r 2 [7]. The Lie group SO(3) The special orthogonal group SO(3) = {R ∈ R 3×3 | RR T = I 3×3 , det R = 1} is a compact Lie group that contains the 3-by-3 orthogonal matrices with determinant one, i.e., all rotation matrices [6]. Associated with SO(3) is the Lie algebra so (3), consisting of the set of skew-symmetric 3-by-3 matrices. Let r = r 1 r 2 r 3 T ∈ R 3 be an axis-angle representation of a rotation. The corresponding element of so (3) is the skew-symmetric matrix r × =   0 −r 3 r 2 r 3 0 −r 1 −r 2 r 1 0   .(1) The exponential map exp : so(3) → SO(3) connects the Lie algebra with the Lie group by exp(r × ) = I 3×3 + sin θ θ r × + 1 − cos θ θ 2 r 2 × ,(2) where θ = r T r = r 2 as above 1 . Now let R be a rotation matrix in the Lie group SO (3). The logarithmic map log : SO(3) → so (3) connects R with an element in the Lie algebra by log(R) = φ(R) 2 sin(φ(R)) (R − R T ),(3) where φ(R) = arccos trace(R) − 1 2 (4) can be interpreted as the magnitude of rotation related to R in radians. If desired, we can now obtain an axis-angle representation of R by first extracting from log(R) the corresponding elements indicated in Eq. (1), and then setting the norm of the resulting vector to φ(R). Loss function for rotation regression We regress to a predicted rotationr represented in the axis-angle form. The prediction is compared against the ground truth rotation r via a loss function l : R 3 × R 3 → R ≥0 . LetR and R denote the two rotation matrices corresponding tor and r, respectively. We use as loss function the geodesic distance d(R, R) of R and R [13,7], i.e., l(r, r) = d(R, R) = φ(RR T ),(5) where we first obtainR and R via the exponential map, and then calculate φ(RR T ) to obtain the loss value. This loss function directly measures the magnitude of rotation betweenR and R, making it convenient to interpret. 
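The coefficients sin θ/θ and (1 − cos θ)/θ² in Eq. (2) are numerically delicate as θ → 0, which is what the note about small θ refers to. A common remedy, given here as an assumption rather than the authors' exact prescription, is to switch to a short Taylor expansion near zero:

```python
import numpy as np

def rodrigues_coefficients(theta, eps=1e-4):
    """Coefficients A = sin(theta)/theta and B = (1 - cos(theta))/theta^2 of Eq. (2),
    evaluated with Taylor expansions for small theta to avoid division by ~0."""
    if theta < eps:
        A = 1.0 - theta**2 / 6.0 + theta**4 / 120.0
        B = 0.5 - theta**2 / 24.0 + theta**4 / 720.0
    else:
        A = np.sin(theta) / theta
        B = (1.0 - np.cos(theta)) / theta**2
    return A, B
```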
Furthermore, using the axis-angle representation allows to make predictions free of constraints such as the unit norm requirement of quaternions. This makes the loss function also convenient to implement in a supervised learning approach. should be used for small θ for numerical stability. 4 System architecture Figure 2 shows the system overview. We train our system for a specific target object, in Figure 2 the drill. The inputs to our system are the RGB color image, the depth image, and a segmentation mask indicating which pixels belong to the target object. We first create a point cloud segment of the target object based on the inputs. Each point has 6 dimensions: 3 dimensions for spatial coordinates and 3 dimensions for color information. We randomly sample n points from this point cloud segment to create a fixed-size downsampled point cloud. In all of our experiments, we use n = 256. We then remove the estimated translation from the point coordinates to normalize the data. The normalized point cloud segment is then fed into a network which outputs a rotation prediction in the axis-angle format. During training, we use the ground truth segmentation and translation. As we focus on the rotation estimation, during testing, we apply the segmentation and translation outputs of PoseCNN [34]. We consider two variants for our network presented in the following subsections. The first variant processes the point cloud as a set of independent points without regard to the local neighbourhoods of points. The second variant explicitly takes into account the local neighbourhoods of a point by considering its nearest neighbours. PointNet (PN) Our PN network is based on PointNet [24], as illustrated in Figure 3. The Point-Net architecture is invariant to all n! possible permutations of the input point cloud, and hence an ideal structure for processing raw point clouds. The invariance is achieved by processing all points independently using multi-layer perceptrons (MLPs) with shared weights. The obtained feature vectors are finally max-pooled to create a global feature representation of the input point cloud. Finally, we attach a three-layer regression MLP on top of this global feature to predict the rotation. Dynamic nearest neighbour graph (DG) In the PN architecture, all features are extracted based only on a single point. Hence it does not explicitly consider the local neighbourhoods of individual points. However, local neighbourhoods can contain useful geometric information for pose estimation [27]. The local neighbourhoods are considered by an alternative network structure based on the dynamic nearest-neighbour graph network proposed in [33]. For each point P i in the point set, a k-nearest neighbor graph is calculated. In all our experiments, we use k = 10. The graph contains directed edges (i, j i1 ), . . . , (i, j ik ), such that P ji1 , . . . , P j ik are the k closest points to P i . For an edge e ij , an edge feature P i , (P j − P i ) T is calculated. The edge features are then processed in a similar manner as in PointNet to preserve permutation invariance. This dynamic graph convolution can then be repeated, now calculating the nearest neighbour graph for the feature vectors of the first shared MLP layer, and so on for the subsequent layers. We use the implementation 2 provided by authors from [33], and call the resulting network DG for dynamic graph. 
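The preprocessing described above (random downsampling to n = 256 points and subtraction of the estimated translation) can be sketched as follows; the handling of segments with fewer than n points is an assumption on our part.

```python
import numpy as np

def preprocess_segment(points, translation, n=256, rng=None):
    """Downsample a segmented object point cloud to n points and centre its
    XYZ coordinates by subtracting the estimated object translation.
    points:      (N, d) array whose first three columns are XYZ (d = 3, or 6 with RGB).
    translation: (3,) estimated translation of the object in the camera frame."""
    rng = np.random.default_rng() if rng is None else rng
    replace = len(points) < n                   # sample with replacement if the segment is small (assumption)
    idx = rng.choice(len(points), size=n, replace=replace)
    sample = points[idx].copy()
    sample[:, :3] -= translation                # normalize the spatial coordinates
    return sample
```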
Experimental results This section shows experimental results of the proposed approach on the YCB video dataset [34], and compares the performance with state-of-the-art PoseCNN method [34]. Besides prediction accuracy, we investigate the effect of occlusions and the quality of the segmentation and translation estimates. Experiment setup The YCB video dataset [34] is used for training and testing with the original train/test split. The dataset contains 133,827 frames of 21 objects selected from the YCB object set [2] with 6D pose annotation. 80,000 frames of synthetic data are also provided as an extension to the training set. We select a set of four objects to test on, shown in Figure 4. As our approach does not consider object symmetry, we use objects that have 1-fold rotational symmetry (power drill, banana and pitcher base) or 2-fold rotational symmetry (extra large clamp). We run all experiments using both the PointNet based (PN) and dynamic graph (DG) networks. During training, Adam optimizer is used with learning rate 0.008, and batch size of 128. Batch normalization is applied to all layers. No dropout is used. For training, ground truth segmentations and translations are used as the corresponding inputs shown in Fig. 2. While evaluating 3D rotation estimation in Subsection 5.3, the translation and segmentation predicted by PoseCNN are used. We observed that the color information represented by RGB color space varies in an inconsistent manner across different video sequences, hence all the following experimental results are obtained only with XYZ coordinate information of point cloud. Moreover, our current system does not deal with classification problem, individual network is trained for each object. Due to the difference of experimental setup between our method and PoseCNN, the performance comparison are mainly for illustrating the potential of proposed approach. Evaluation metrics For evaluating rotation estimation, we directly use geodesic distance described in Section 3 to quantify the rotation error. We evaluate 6D pose estimation using average distance of model points (ADD) proposed in [9]. For a 3D model M represented as a set of points, with ground truth rotation R and translation t, and estimated rotationR and translationt, the ADD is defined as: ADD = 1 m x∈M (Rx + t) − (Rx +t) 2 ,(6) where m is the number of points. The 6D pose estimate is considered to be correct if ADD is smaller than a given threshold. Accuracy of rotation angle prediction shows the fraction of predictions with error smaller than the threshold. Results are shown for our method and PoseCNN [34]. The additional +gt denotes the variants where ground truth segmentation is provided. Figure 5 shows the estimation accuracy as function of the rotation angle error threshold, i.e., the fraction of predictions that have an angle error smaller than the horizontal axis value. Results are shown for PoseCNN, PoseCNN with ICP refinement (PoseCNN+ICP), and our method with PointNet structure (PN), and with dynamic graph structure (DG). To determine the effect of the translation and segmentation input, we additionally test our methods while giving the ground truth translation and segmentation as input. The cases with ground truths provided are indicated by +gt, and shown with a dashed line. The performance without ground truth translation and segmentation is significantly worse than the performance with ground truth information. 
This shows that good translation and segmentation results are crucial for accurate rotation estimation. Also, by using ground truth information, the performance for extra large clamp (2-fold rotational symmetry) is worse than other objects, which illustrates that the object symmetry should be taken into consideration during learning process. Rotation estimation The results also confirm the fact that ICP based refinement usually only improves the estimation quality if the initial guess is already good enough. When the initial estimation is not accurate enough, the use of ICP can even decrease the accuracy, as shown by the PoseCNN+ICP curve falling below the PoseCNN curve for large angle thresholds. Ours (PN+gt) 9.9 • ±0.5 • 5.7 • ± 0.1 • 6.5 • ±0.3 • 13 • ±0.8 • 11.2 • ± 0.4 • 5.7 • ± 0.4 • Ours (DG+gt) 7.1 • ± 0.3 • 9.8 • ±1.2 • 4.3 • ± 0.2 • 2.6 • ± 0.3 • 34.1 • ±1.6 • 68.2 • ±8.9 • Effect of occlusion. We quantify the effect of occlusion on the rotation prediction accuracy. For a given frame and target object, we estimate the occlusion factor O of the object by O = 1 − λ µ ,(7) where λ is the number of pixels in the 2D ground truth segmentation, and µ is the number of pixels in the projection of the 3D model of the object onto the image plane using the camera intrinsic parameters and the ground truth 6D pose, when we assume that the object would be fully visible. We noted that for the test frames of the YCB-video dataset O is mostly below 0.5. We categorize O < 0.2 as low occlusion and O ≥ 0.2 as moderate occlusion. Table 1 shows the average rotation angle error (in degrees) and its 95% confidence interval 3 for PoseCNN and our method in the low and moderate occlusion categories. We also investigated the effect of the translation and segmentation by considering variants of our methods that were provided with the ground truth translation and segmentation. These variants are shown in the table indicated by +gt. We observe that with ground truth information, our methods shows potential in cases of both low and moderate occlusion. Furthermore, with the dynamic graph architecture (DG), the average error tends to be lower for 1-fold rotational symmetry objects. This shows the local neighbourhood information extracted by DG is useful for rotation estimation when there is no pose ambiguity. One observation is that for banana, the rotation error in low occlusion is significantly higher than it is in the moderate case for PoseCNN. This is because near 25% of the test frames in low occlusion case present an rotation error in range of 160 • to 180 • . Qualitative results for rotation estimation are shown in Figure 6. In the leftmost column, the occlusion factor O of the target object is denoted. Then, from left to right, we show the ground truth, PoseCNN+ICP, and our method using DG and our method using DG with ground truth translation and segmentation (DG+gt) results. In all cases, the ground truth pose, or respectively, the pose estimate, are indicated by the green overlay on the figures. To focus on the difference in the rotation estimate, we use the ground truth translation for all methods for the visualization. The rotation predictions for Ours (DG) are still based on translation and segmentation from PoseCNN. The first two rows of Figure 6 show cases with moderate occlusion. When the discriminative part of the banana is occluded (top row), PoseCNN can not recover the rotation, while our method still produces a good estimate. The situation is similar in the second row for the drill. 
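As an aside, the per-category statistics of Table 1 can be reproduced from per-frame angle errors and occlusion factors. The sketch below assumes a normal-approximation 95% confidence interval, since the text does not state how the interval is computed.

```python
import numpy as np

def summarize_by_occlusion(angle_errors_deg, occlusion_factors, split=0.2):
    """Mean rotation angle error and a normal-approximation 95% CI half-width
    for the low (O < split) and moderate (O >= split) occlusion categories."""
    errors = np.asarray(angle_errors_deg, dtype=float)
    occ = np.asarray(occlusion_factors, dtype=float)
    summary = {}
    for name, mask in (("low", occ < split), ("moderate", occ >= split)):
        e = errors[mask]
        if len(e) < 2:
            continue                             # not enough frames for an interval
        half_width = 1.96 * e.std(ddof=1) / np.sqrt(len(e))
        summary[name] = (e.mean(), half_width)
    return summary
```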
The third row of Figure 6 illustrates that the quality of the segmentation has a strong impact on the accuracy of rotation estimation. In this case the segmentation fails to detect the black clamp on the black background, which leads to a poor rotation estimate for both PoseCNN and our method. When we provide the ground truth segmentation (third row, last column), our method is still unable to recover the correct rotation due to the pose ambiguity.

Conclusion

We propose to directly predict the 3D rotation of a known rigid object from a point cloud segment. We use the axis-angle representation of rotations as the regression target. Our network learns a global representation either from individual input points, or from point sets of nearest neighbors. The geodesic distance is used as the loss function to supervise the learning process. Without using ICP refinement, experiments show that the proposed method can reach competitive and sometimes superior performance compared to PoseCNN. Our results show that point cloud segments contain enough information for inferring object pose. The axis-angle representation does not have any constraints, making it a suitable regression target. Using the Lie algebra as a tool provides a valid distance measure for rotations, and this distance measure can be used as a loss function during training. We discovered that the performance of our method is strongly affected by the quality of the target object translation and segmentation, which will be further investigated in future work. We will extend the proposed method to full 6D pose estimation by additionally predicting the object translations. We also plan to integrate object classification into our system, and to study a wider range of target objects.
3,705
1906.04838
2951865326
In this paper we propose an edge-direct visual odometry algorithm that efficiently utilizes edge pixels to find the relative pose that minimizes the photometric error between images. Prior work on exploiting edge pixels instead treats edges as features and employs various techniques to match edge lines or pixels, which adds unnecessary complexity. Direct methods typically operate on all pixel intensities, which proves to be highly redundant. In contrast, our method builds on direct visual odometry methods naturally with minimal added computation. It is not only more efficient than direct dense methods, since we iterate with a fraction of the pixels, but also more accurate. We achieve high accuracy and efficiency by extracting edges from only one image and utilizing robust Gauss-Newton to minimize the photometric error of these edge pixels. This simultaneously finds the edge pixels in the reference image, as well as the relative camera pose that minimizes the photometric error. We test various edge detectors, including learned edges, and determine that the optimal edge detector for this method is the Canny edge detection algorithm using automatic thresholding. We highlight key differences between our edge-direct method and direct dense methods, in particular how higher levels of image pyramids can lead to significant aliasing effects and result in incorrect solution convergence. We show experimentally that reducing the photometric error of edge pixels also reduces the photometric error of all pixels, and we show through an ablation study the increase in accuracy obtained by optimizing edge pixels only. We evaluate our method on the TUM RGB-D benchmark, on which we achieve state-of-the-art performance.
Due to the high level of inaccuracies present in feature extraction and matching, such algorithms must compute the fundamental matrix or homography in a RANSAC loop. While feature-based methods have achieved accurate results, they remain computationally wasteful due to their reliance on RANSAC for robust estimation of such parameters. Several examples of such systems that use indirect methods are ORB-SLAM, ORB-SLAM2 @cite_4 @cite_23 and Parallel Tracking and Mapping (PTAM) @cite_5 . Alternatively, direct methods directly use the sensor inputs, such as image intensities, to optimize an error function to determine relative camera pose.
{ "abstract": [ "This paper presents a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand-held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer: one thread deals with the task of robustly tracking erratic hand-held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real-time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame-rate, with an accuracy and robustness rivalling that of state-of-the-art model-based systems.", "This paper presents ORB-SLAM, a feature-based monocular simultaneous localization and mapping (SLAM) system that operates in real time, in small and large indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.", "We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields." ], "cite_N": [ "@cite_5", "@cite_4", "@cite_23" ], "mid": [ "2151290401", "1612997784", "2535547924" ] }
Edge-Direct Visual Odometry
Visual odometry (VO), or the task of tracking camera pose from a stream of images, has received increased attention due to its widespread applications in robotics and augmented reality. Camera tracking in unknown environments is one of the most difficult challenges of computer vision. While VO has become a more popular area of research, there are still several challenges present. Such challenges include operating in low-texture environments, achieving higher frame-rate processing for more responsive positional control, and reducing the drift of the trajectory estimate. Any new algorithm must also deal with the inherent challenges of tracking camera pose; in particular, it must be able to handle high-bandwidth image streams, which requires efficient solutions for extracting useful information from such large amounts of data.

Contributions

In this paper we propose a sparse visual odometry algorithm that efficiently utilizes edges to track the camera motion with state-of-the-art accuracy quantified by low relative pose drift. More formally, we outline our main contributions:
• An edge-direct visual odometry algorithm that outperforms state-of-the-art methods on public datasets.
• We provide experimental evidence, through an ablation study, that edges are the essential pixels in direct methods.
• We compare our edge method against a direct dense method.
• We present key differences in reducing the photometric error on edges as opposed to on full image intensities.
• We optimize our algorithm with respect to several different types of edges.

Visual Odometry vs. SLAM

Simultaneous localization and mapping (SLAM) algorithms have taken visual odometry algorithms a step further by jointly mapping the environment, and performing optimization over the joint poses and map. Additionally, SLAM algorithms implement loop closure, which enables the system to identify locations it has visited before and to optimize the trajectory by matching feature points against the prior image in memory. With the success of Bundle Adjustment and loop closure in producing near drift-free results, much of the attention has shifted from the performance of visual odometry algorithms to overall system performance. In reality the two are tightly coupled, and it is very important that visual odometry provides low-drift pose estimates for two reasons. Firstly, Bundle Adjustment requires a good initialization in order for it to converge to a drift-free solution. Secondly, it is computationally expensive and slow compared to the high frame rate at which visual odometry runs. For these reasons we focus solely on VO performance in this work, and we show competitive performance even against such SLAM systems.

Edge-Direct Visual Odometry Overview

In this section we formulate the edge-direct visual odometry algorithm. The key concept behind direct visual odometry is to align images with respect to pose parameters using gradients. This is an extension of the Lucas-Kanade algorithm [2,15]. At each timestamp we have a reference RGB image and a depth image. When we obtain a new frame, we assume we only receive an RGB image. This enables our method to be extended to monocular VO by simply keeping a depth map and updating it at each new time step. Note also that we convert the RGB image into a grayscale image. The key step of our algorithm is that we then extract edges from the new image, and use them as a mask on the reference image we are localizing with respect to.
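As an illustration of this masking step, the sketch below extracts Canny edges from the new grayscale frame and uses them to index pixels of the reference frame; the median-based automatic thresholding is an assumption on our part and not necessarily the exact scheme used by the authors.

```python
# Hedged sketch of the edge-mask step: edges come from the *new* image only and
# select the pixels of the reference image used in the optimization.
import cv2
import numpy as np

def edge_mask(gray_new, sigma=0.33):
    """Boolean Canny edge mask with data-driven (median-based) thresholds."""
    med = float(np.median(gray_new))
    lo = int(max(0, (1.0 - sigma) * med))
    hi = int(min(255, (1.0 + sigma) * med))
    return cv2.Canny(gray_new, lo, hi) > 0

# Usage (gray_ref, gray_new: uint8 grayscale frames; depth_ref: reference depth map):
# mask = edge_mask(gray_new)
# edge_uv = np.argwhere(mask)[:, ::-1]     # (u, v) pixel coordinates of edge pixels
# ref_intensities = gray_ref[mask]         # reference intensities at the masked locations
```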
We then align the images by iteratively minimizing the photometric error over these edge pixels. The objective is to minimize the nonlinear photometric error

$r_i(\xi) = I_2(\tau(x_i, d_i, \xi)) - I_1(x_i)$,   (1)

where $\tau$ is the warp function that maps image intensities in the second image to image intensities in the first image through a rigid body transform. The warping function $\tau(x_i, d_i, \xi)$ depends on the pixel positions $x_i$, the depth $d_i$ of the corresponding 3D point, and the camera pose $\xi$. Note that the pixels we use are only edge pixels, i.e.,

$x_i \in E(I_2)$,   (2)

where $E(I_2)$ denotes the edges of the new image.

Camera Model

In order to minimize the photometric error we need to be able to associate image pixels with 3D points in space. Using the standard pinhole camera model, which maps 3D points to image pixels, we have

$\pi(P) = \left( \frac{f_x X}{Z} + c_x, \; \frac{f_y Y}{Z} + c_y \right)^T$,   (3)

where $f_x$ and $f_y$ are the focal lengths and $c_x$ and $c_y$ are the image coordinates of the principal point. If we know the depth, we can find the inverse mapping that takes image coordinates and backprojects them to a 3D point $P$ in homogeneous coordinates:

$P = \pi^{-1}(x_i, Z) = \left( \frac{x - c_x}{f_x} Z, \; \frac{y - c_y}{f_y} Z, \; Z, \; 1 \right)^T$.   (4)

Camera Motion

We are interested in determining the motion of the camera from a sequence of frames, which we model as a rigid body transformation. The camera motion will therefore lie in the Special Euclidean Group SE(3). The rigid body transform is given by $T \in SE(3)$,

$T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$,   (5)

where $R$ is a $3 \times 3$ rotation matrix and $t$ is a $3 \times 1$ translation vector. Since we are performing Gauss-Newton optimization, we need to parameterize the camera pose as a 6-vector through the exponential map $T = \exp_{\mathfrak{se}(3)}(\xi)$ so that we can optimize over the SO(3) manifold for rotations. At each iteration we compose the relative pose update $\Delta\xi$ with the estimate from the previous iteration,

$\xi^{(n+1)} = \Delta\xi^{(n)} \circ \xi^{(n)}$,   (6)

where the composition is defined by $\Delta\xi \circ T = \exp_{\mathfrak{se}(3)}(\Delta\xi)\, T$. We also use a constant motion assumption, where the pose initialization is taken to be the relative pose motion from the previous update, as opposed to initializing with the identity pose. The pose initialization for frame $F_i$ with respect to frame $F_k$ can thus be expressed as

$\xi_{ki,\mathrm{init}} = \xi_{k,i-1} \circ \xi_{i-2,i-1}$.   (7)

Experimentally we have found that this greatly improves performance by providing the system with an accurate initialization from which it can converge to a low-error solution.

Robust Gauss-Newton on Edge Maps

Similar to other direct methods, we employ a coarse-to-fine approach to Gauss-Newton minimization to avoid false convergence. The selection of the image pyramid scheme has a large effect on the system performance and must be chosen carefully. Some systems such as [8] report using up to six levels, while [12] report using four levels. Simply extending these large pyramid sizes to edge maps causes the system to fail to converge to the correct solution. This is due to the effects of aliasing; a much smaller pyramid size is required. We found that three levels worked well for the original 640×480 resolution. Using additional levels caused the system to fail due to edge aliasing effects, as illustrated in Figure 3, which shows the same edge image at different levels of the pyramid; any pyramid deeper than three edge images starts to suffer from heavy aliasing, which led us to cut off our edge pyramid at the third level. After level three, the edge image becomes unrecognizable. For this reason, we recommend using images no smaller than 160 × 120 in resolution.
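The following sketch ties Eqs. (1)-(5) together for a batch of edge pixels: backproject with the reference depth, apply the rigid-body transform, project into the new image, and take the intensity difference. Bilinear interpolation is replaced by nearest-neighbour sampling for brevity, and (R, t) are assumed to have been obtained from $\xi$ via the SE(3) exponential map (e.g. with a Lie-group library); none of the names below come from the authors' code.

```python
# Hedged sketch of the photometric residual (Eq. 1) at edge pixels.
# edge_uv: (N, 2) integer (u, v) coordinates; K = (fx, fy, cx, cy).
import numpy as np

def backproject(u, v, Z, fx, fy, cx, cy):
    """Eq. (4): pixels with depth -> 3D points in the reference camera frame."""
    return np.stack([(u - cx) / fx * Z, (v - cy) / fy * Z, Z], axis=-1)

def project(P, fx, fy, cx, cy):
    """Eq. (3): 3D points -> pixel coordinates."""
    return fx * P[..., 0] / P[..., 2] + cx, fy * P[..., 1] / P[..., 2] + cy

def photometric_residuals(I_ref, I_new, depth_ref, edge_uv, R, t, K):
    fx, fy, cx, cy = K
    Z = depth_ref[edge_uv[:, 1], edge_uv[:, 0]]
    keep = Z > 0                                    # discard pixels without valid depth
    uv, Z = edge_uv[keep], Z[keep]
    P = backproject(uv[:, 0].astype(float), uv[:, 1].astype(float), Z, fx, fy, cx, cy)
    Pw = P @ R.T + t                                # rigid-body transform (Eq. 5)
    uw, vw = project(Pw, fx, fy, cx, cy)
    uw, vw = np.round(uw).astype(int), np.round(vw).astype(int)
    ok = (uw >= 0) & (uw < I_new.shape[1]) & (vw >= 0) & (vw < I_new.shape[0])
    return (I_new[vw[ok], uw[ok]].astype(float)
            - I_ref[uv[ok, 1], uv[ok, 0]].astype(float))   # r_i of Eq. (1)
```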
A common approach in direct methods is to incorporate a weighting function that increases robustness to outliers when solving the error function. We use an iteratively re-weighted residual error function that we minimize with Gauss-Newton. We found that iteratively re-weighting using Huber weights worked quite well for our application, following the work of [8]. The Huber weights are defined as

$w_i(r_i) = \begin{cases} 1, & |r_i| \le k \\ \frac{k}{|r_i|}, & |r_i| > k \end{cases}$   (8)

The error function now becomes

$E(\xi) = \sum_i w_i(\xi)\, r_i^2(\xi)$.   (9)

Our goal is to find the relative camera pose that minimizes this function,

$\arg\min_\xi E(\xi) = \arg\min_\xi \sum_i w_i(\xi)\, r_i^2(\xi)$.   (10)

In order to minimize this nonlinear error function with Gauss-Newton, we must linearize the equation. We can then solve it as a first-order approximation by iteratively solving

$\Delta\xi^{(n)} = -\left(J^T W J\right)^{-1} J^T W\, r(\xi^{(n)})$,   (11)

where $W$ is a diagonal matrix containing the weights, and the Jacobian $J$ is defined as

$J = \nabla I_2 \, \frac{\partial \pi}{\partial P} \, \frac{\partial P}{\partial T} \, \frac{\partial T}{\partial \xi}$,   (12)

with $\nabla I_2$ the image gradient of the new image. We can then iteratively update the relative pose with Equation 6. Note that we use the inverse-compositional [2] formulation so that we do not have to recompute the Jacobian matrix at every iteration. This is what makes the algorithm extremely efficient, as shown in [2]. (Figure 4 caption: minimizing the residuals for just the edge pixels jointly minimizes the residuals for all pixels; after about 3 images the minimization starts to become more inaccurate, which is also a function of camera velocity and rotational velocity.)

Optimizing over Edge Points

We present the theory of selecting and incorporating edge points in the formulation, and provide some insight on why it is so effective in implementation. For the edge selection process, note that we have two images, a reference and a new image, and therefore two sets of edges. We wish to avoid the problems that arise from using both sets: there will be a different number of edge pixels in each, and dealing with this through matching algorithms is inefficient and error-prone. We use a more elegant solution, which is to use the edges of the new image as a mask on the first image. This initialization causes the mask to select pixels in the reference image that are slightly off from the reference image edges, assuming the camera has moved. At each iteration, we follow a gradient from this position towards a point that reduces the photometric error. By definition, edges are regions with large photometric variation on either side. Intuitively, we argue that the optimization should therefore converge and settle at the correct edge. To summarize, we initialize the edge mask at an offset position from the reference image's edges, and iteratively force these edge pixels to overlap with the reference edges. In doing so we achieve a highly accurate relative pose.

Keyframe Selection

Another implementation detail has to do with keyframes. Frame-to-frame alignment is inherently noisy and prone to accumulating drift. To mitigate this, VO algorithms often select a keyframe which is used as the reference image for multiple new frames. The error accumulation is decreased by comparing against fewer reference frames, which directly results in a smaller error stack-up. There have been several strategies for selecting keyframes, and the choice depends on the type of VO algorithm being used. Feature-based methods such as [17] usually impose the restriction that a significant number of frames pass, on the order of tens of frames.
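Returning to the robust update of Eqs. (8)-(11), a minimal sketch of one iteratively re-weighted Gauss-Newton step is shown below; it assumes the residual vector r and the N×6 Jacobian J of Eq. (12) have already been evaluated at the current pose, and the Huber threshold k is a placeholder value, not one taken from the paper.

```python
# Hedged sketch of one Huber-weighted Gauss-Newton step (Eqs. 8-11).
import numpy as np

def huber_weights(r, k):
    """Eq. (8): unit weight for small residuals, k/|r| for outliers."""
    w = np.ones_like(r)
    big = np.abs(r) > k
    w[big] = k / np.abs(r[big])
    return w

def gauss_newton_step(J, r, k=10.0):
    """Eq. (11): solve the weighted normal equations for the pose increment delta_xi."""
    w = huber_weights(r, k)
    WJ = J * w[:, None]              # diag(w) @ J without forming the full matrix
    H = J.T @ WJ                     # 6x6 Gauss-Newton Hessian, J^T W J
    g = WJ.T @ r                     # gradient term, J^T W r
    return -np.linalg.solve(H, g)    # composed with the current pose via Eq. (6)
```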
Continuing with keyframe selection, in [11] the authors summarize several common approaches that direct methods use for creating a new keyframe: every n frames, after a certain relative pose threshold has been met, when the variance of the error function exceeds a threshold, or when the differential entropy of the covariance matrix reaches a threshold. However, each metric is not without its problems. Furthermore, the performance of the tracking degrades the further apart the baselines become. Figure 4 demonstrates this phenomenon: it shows the residuals from five consecutive frames with respect to the first frame. We observe that in general after 4 frames, the residuals become harder to minimize for most sequences. Note that this is a function of camera motion. We assume that this camera tracking will be used for moderate motion and select an every-n-frames approach.

(Table 1 caption: Comparison of the performance of our system using three different types of edges. Blue denotes the best performing frame-to-frame VO, excluding SLAM or keyframe systems. Bold denotes the best performing system overall. A dashed line indicates that using keyframes did not improve performance.)

Experiments

We evaluate our system using the TUM RGB-D benchmark [21], which is provided by the Technical University of Munich. The benchmark has been widely used by various SLAM and VO algorithms to benchmark their accuracy and performance over various test sequences. Each sequence contains RGB images, depth images, accelerometer data, as well as groundtruth. The camera intrinsics are also provided. Groundtruth was obtained by an external motion capture system through triangulation, and the data was synchronized. There are several challenging datasets within this benchmark. Each sequence varies in duration, trajectory, and translational and rotational velocities. We follow the work of [19], which uses seven sequences to benchmark system performance, so as to achieve a direct comparison with other methods.

Evaluation Metrics

We use the Relative Pose Error (RPE) and Absolute Trajectory Error (ATE) to evaluate our system. The Relative Pose Error was proposed for evaluating the drift of VO algorithms in [21]. It measures the accuracy of the camera pose over a fixed time interval $\Delta t$:

$RPE_t = \left(Q_t^{-1} Q_{t+\Delta t}\right)^{-1} \left(P_t^{-1} P_{t+\Delta t}\right)$,   (13)

where $Q_1 \ldots Q_n \in SE(3)$ are the camera poses associated with the groundtruth trajectory and $P_1 \ldots P_n \in SE(3)$ are the camera poses associated with the estimated camera trajectory. Similarly, the Absolute Trajectory Error is defined as

$ATE_i = Q_i^{-1} S P_i$,   (14)

where the poses $Q$ and $P$ are aligned by the rigid body transformation $S$ obtained through a least-squares solution. (Figure 5 caption: XY cross-section of our estimated trajectory compared with ground truth. The error is shown in green. The start position is shown as a black dot, while the final positions are shown as colored dots corresponding to the trajectory. Areas without green indicate missing groundtruth data in the sequence.) A common practice has been to use the RMSE value of both the RPE and ATE, as RMSE is a more robust metric that gives more weight to outliers compared with the mean or median values. Thus the RMSE is a much more stringent performance metric for benchmarking system drift. Following the example set by [9,12,22], we provide the RMSE camera pose drift over several sequences of the dataset. As first pointed out in [12], choosing too small a $\Delta t$ creates erroneous error estimates, as the ground truth motion capture system has finite error as well.
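A compact sketch of the RMSE drift metrics of Eqs. (13)-(14) follows; trajectories are assumed to be lists of 4×4 homogeneous pose matrices, and the alignment S used by the ATE (a least-squares rigid fit, e.g. Horn's method) is assumed to be computed elsewhere.

```python
# Hedged sketch of RPE and ATE RMSE over a trajectory of 4x4 pose matrices.
import numpy as np

def rpe_translation_rmse(Q, P, delta):
    """RMSE of the translational part of Eq. (13) over a fixed frame offset delta."""
    errs = []
    for t in range(len(Q) - delta):
        rel_gt = np.linalg.inv(Q[t]) @ Q[t + delta]
        rel_est = np.linalg.inv(P[t]) @ P[t + delta]
        E = np.linalg.inv(rel_gt) @ rel_est
        errs.append(np.linalg.norm(E[:3, 3]))
    return float(np.sqrt(np.mean(np.square(errs))))

def ate_rmse(Q, P, S):
    """RMSE of the translational part of Eq. (14) after aligning P to Q with S."""
    errs = [np.linalg.norm((np.linalg.inv(Qi) @ S @ Pi)[:3, 3]) for Qi, Pi in zip(Q, P)]
    return float(np.sqrt(np.mean(np.square(errs))))
```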
Conversely, choosing too large a value of $\Delta t$ penalizes rotations more at the beginning of the trajectory than towards the end [21]. Therefore, a reasonably sized $\Delta t$ needs to be chosen. We use a $\Delta t$ of 1 s to achieve a direct comparison with other methods.

Results on the TUM RGB-D Benchmark

We compare the performance of our algorithm using four different edge extraction algorithms, namely Canny, LoG, Sobel, and Structured Edges. We compare to other methods using frame-to-frame tracking for all variants. We selected Canny to perform keyframe tracking due to its consistent accuracy. Although all of the edge types performed well on the sequences, Canny edges performed the best overall on average. Note that we used automatic thresholding, as opposed to REVO [19], which used fixed threshold values and thus introduces a dependency on photometric consistency. Since we utilize automatic thresholding, our system is more robust to photometric variations across frames. See Figure 2 for examples of edge extractions. From our experiments we observed that edge-direct VO is highly accurate in frame-to-frame tracking, despite the inherent accumulation of drift in such a scheme that does not utilize keyframes. In terms of RPE, our frame-to-frame variants perform better than, or in the worst case as well as, REVO, an edge-based method which uses the distance transform on edges. Our method also outperforms ORB-SLAM2 run in VO mode for all sequences except fr1/xyz. This is a result of ORB-SLAM2 keeping a local map, and in this particular sequence the camera keeps the majority of the initial scene in view at all times. We confirmed this hypothesis by turning off the local mapping, at which point we outperform it on this sequence as well. Our results are shown in Table 1. In terms of ATE, we again perform well against all non-SLAM algorithms. Even though we do not use any Bundle Adjustment or global optimization as employed by RGBD-SLAM [11], we perform competitively with such systems over all sequences. We provide plots of the edge-direct estimated trajectories over time compared to groundtruth in Figure 7. Our estimated trajectory closely follows the groundtruth. In Figure 5 we show the edge-direct estimated trajectory along the XY plane, along with the error between our estimate and the groundtruth.

Ablation Study

In order to experimentally demonstrate the effect of using edge pixels we perform an ablation study. This two-fold ablation study demonstrates the relative efficacy of optimizing over edge pixels compared with optimizing over the same number of randomly chosen pixels, and additionally demonstrates the stability of using edge pixels. We randomly select a fraction of the edge pixels to use, and compare this to our system randomly selecting the same number of pixels from the entire image. We average over 5 runs to account for variability. All parameters are identical for both methods. Additionally, for these tests we utilize keyframes and drop the constant motion assumption. This forces the system to rely more heavily on the optimization, and provides a better measurement of the quality of convergence. We additionally record the latency of our system per frame. Operating on edge pixels is more accurate, while additionally enabling ∼50 fps on average on an Intel i7 CPU. Note that at our optimization settings, a dense method is far from real-time. Since we use the Lucas-Kanade inverse-compositional formulation, we expected our algorithm to have time complexity linear in the number of pixels used.
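For the pixel-selection ablation just described, a small sketch of the sampling step is given below: keep a random fraction of the edge pixels and draw an equally sized random set from the whole image, so both sets can be fed to the same optimizer; the function and parameter names are illustrative assumptions.

```python
# Hedged sketch of the ablation sampling: edge subset vs. equally sized random subset.
import numpy as np

def sample_pixel_sets(edge_mask, fraction, seed=0):
    rng = np.random.default_rng(seed)
    edge_idx = np.flatnonzero(edge_mask.ravel())
    n = max(1, int(fraction * edge_idx.size))
    edge_subset = rng.choice(edge_idx, size=n, replace=False)
    random_subset = rng.choice(edge_mask.size, size=n, replace=False)
    return edge_subset, random_subset    # flat indices into the image
```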
We confirm this linear scaling experimentally as well. Refer to Figure 6 for both the ablation study and the timing measurements. We save approximately 90% of the computation on average by using edge pixels compared to using all pixels. Note that, regarding the stability of edge pixels, the Kinect sensor used in the sequences filters out unstable points in its depth map, and from qualitative inspection this still leaves a large number of reliable edge pixels. This is also confirmed by the relative stability of the selected edge pixels compared to all pixels. The ablation study further supports our claim that edge pixels are essential for robust and accurate camera tracking.

Discussion

Our edge-direct VO algorithm performs well across all sequences compared to other state-of-the-art methods. The trajectory in Figure 5 shows accurate camera tracking in a sequence that is 99 seconds long and travels over 18 m, without the use of Bundle Adjustment or loop closure. Note that our algorithm would perform even better if coupled with such global optimization methods, as our VO would initialize them closer to the correct solution than other algorithms. Such an increase in accuracy can enable SLAM systems to rely less heavily on computationally expensive global optimizations, and perhaps run these threads less frequently. Note that in Figure 5, the regions missing green are due to missing groundtruth data in the sequence. The estimated trajectories over time in Figure 7 show remarkably accurate results as well. (Figure 7 caption: our estimated trajectory for four sequences; each sequence plots the trajectory in solid colors corresponding to the axes, with groundtruth shown as a red dotted line for all axes. Our estimates closely match the ground truth. For the sequence fr2/desk, there is no ground truth during the interval at approximately 31-43 seconds, which is why there appears to be a straight line in the groundtruth trajectory.) It is important to note that even though we explicitly minimize the photometric error only for edge pixels, Figure 4 shows that we simultaneously minimize the residuals for all pixels. This is an important observation, as it supports the claim that minimizing the residuals of edge pixels is the minimally sufficient objective. Moreover, the ablation study supports the claim that minimizing the photometric residuals for just the edge pixels provides fewer pixels to iterate over while still enabling accurate tracking. It is interesting to note that utilizing keyframes did not improve the system on many of the sequences once we added the constant motion assumption. Prior to adding this camera motion model, utilizing keyframes helped significantly.

Conclusion

We have presented a novel edge-direct visual odometry algorithm that determines an accurate relative pose between two frames by minimizing the photometric error of only edge pixels. We demonstrate experimentally that minimizing the edge residuals jointly minimizes the residuals over the entire image. This minimalist representation reduces the computation required compared to operating on all pixels, and also results in more accurate tracking. We benchmark its performance on the TUM RGB-D dataset, where it achieves state-of-the-art performance as quantified by low relative pose drift and low absolute trajectory error.
3,493
1906.04838
2951865326
There have been many iterations of direct dense methods, such as direct dense VO @cite_1 , RGB-D SLAM @cite_22 , and LSD-SLAM @cite_25 . Even though they are dense, these systems achieve real-time performance on modern CPUs due to the highly efficient nature of these types of algorithms. More recent advances highlight the fact that the information contained in image intensities is highly redundant, and attempt to minimize the photometric error only over sparse random points in the image in order to increase efficiency and thus speed @cite_10 . Another direct method that has been used with success is the iterative closest point (ICP) algorithm, which is used in systems such as @cite_8 @cite_19 . These systems minimize a point-alignment (geometric) error rather than differences in image intensities.
{ "abstract": [ "In this paper, we propose a dense visual SLAM method for RGB-D cameras that minimizes both the photometric and the depth error over all pixels. In contrast to sparse, feature-based methods, this allows us to better exploit the available information in the image data which leads to higher pose accuracy. Furthermore, we propose an entropy-based similarity measure for keyframe selection and loop closure detection. From all successful matches, we build up a graph that we optimize using the g2o framework. We evaluated our approach extensively on publicly available benchmark datasets, and found that it performs well in scenes with low texture as well as low structure. In direct comparison to several state-of-the-art methods, our approach yields a significantly lower trajectory error. We release our software as open-source.", "KinectFusion enables a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. Only the depth data from Kinect is used to track the 3D pose of the sensor and reconstruct, geometrically precise, 3D models of the physical scene in real-time. The capabilities of KinectFusion, as well as the novel GPU-based pipeline are described in full. Uses of the core system for low-cost handheld scanning, and geometry-aware augmented reality and physics-based interactions are shown. Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction. These extensions are used to enable real-time multi-touch interactions anywhere, allowing any planar or non-planar reconstructed physical surface to be appropriated for touch.", "We present an energy-based approach to visual odometry from RGB-D images of a Microsoft Kinect camera. To this end we propose an energy function which aims at finding the best rigid body motion to map one RGB-D image into another one, assuming a static scene filmed by a moving camera. We then propose a linearization of the energy function which leads to a 6×6 normal equation for the twist coordinates representing the rigid body motion. To allow for larger motions, we solve this equation in a coarse-to-fine scheme. Extensive quantitative analysis on recently proposed benchmark datasets shows that the proposed solution is faster than a state-of-the-art implementation of the iterative closest point (ICP) algorithm by two orders of magnitude. While ICP is more robust to large camera motion, the proposed method gives better results in the regime of small displacements which are often the case in camera tracking applications.", "", "Direct Sparse Odometry (DSO) is a visual odometry method based on a novel, highly accurate sparse and direct structure and motion formulation. It combines a fully direct probabilistic model (minimizing a photometric error) with consistent, joint optimization of all model parameters, including geometry-represented as inverse depth in a reference frame-and camera motion. This is achieved in real time by omitting the smoothness prior used in other direct methods and instead sampling pixels evenly throughout the images. Since our method does not depend on keypoint detectors or descriptors, it can naturally sample pixels from across all image regions that have intensity gradient, including edges or smooth intensity variations on essentially featureless walls. 
The proposed model integrates a full photometric calibration, accounting for exposure time, lens vignetting, and non-linear response functions. We thoroughly evaluate our method on three different datasets comprising several hours of video. The experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, both in terms of tracking accuracy and robustness.", "We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU." ], "cite_N": [ "@cite_22", "@cite_8", "@cite_1", "@cite_19", "@cite_10", "@cite_25" ], "mid": [ "2064451896", "2099940712", "2091226544", "", "2474281075", "612478963" ] }
Edge-Direct Visual Odometry
1906.04838
2951865326
Extending direct methods to edge pixels is a logical direction, yet to the best of our knowledge no prior work has used edge pixels alone in a direct method that minimizes the photometric error. In @cite_18 the authors reduce a Euclidean geometric error using the distance transform on edges, which does not exploit all of the intensity information available in the scene. In @cite_0 the authors minimize a joint error function that combines photometric error over all pixels with geometric error over edge pixels. Minimizing a joint error function always raises the question of how to weight each term, and the weighting can have a significant effect on the converged solution. In @cite_10 the authors threshold by image gradient, which does not guarantee edges due to noise, and they additionally select texture-less regions.
{ "abstract": [ "", "In this work, we present a real-time robust edge-based visual odometry framework for RGBD sensors (REVO). Even though our method is independent of the edge detection algorithm, we show that the use of state-of-the-art machine-learned edges gives significant improvements in terms of robustness and accuracy compared to standard edge detection methods. In contrast to approaches that heavily rely on the photo-consistency assumption, edges are less influenced by lighting changes and the sparse edge representation offers a larger convergence basin while the pose estimates are also very fast to compute. Further, we introduce a measure for tracking quality, which we use to determine when to insert a new key frame. We show the feasibility of our system on real-world datasets and extensively evaluate on standard benchmark sequences to demonstrate the performance in a wide variety of scenes and camera motions. Our framework runs in real-time on the CPU of a laptop computer and is available online.", "Direct Sparse Odometry (DSO) is a visual odometry method based on a novel, highly accurate sparse and direct structure and motion formulation. It combines a fully direct probabilistic model (minimizing a photometric error) with consistent, joint optimization of all model parameters, including geometry-represented as inverse depth in a reference frame-and camera motion. This is achieved in real time by omitting the smoothness prior used in other direct methods and instead sampling pixels evenly throughout the images. Since our method does not depend on keypoint detectors or descriptors, it can naturally sample pixels from across all image regions that have intensity gradient, including edges or smooth intensity variations on essentially featureless walls. The proposed model integrates a full photometric calibration, accounting for exposure time, lens vignetting, and non-linear response functions. We thoroughly evaluate our method on three different datasets comprising several hours of video. The experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, both in terms of tracking accuracy and robustness." ], "cite_N": [ "@cite_0", "@cite_18", "@cite_10" ], "mid": [ "2611998939", "2773890870", "2474281075" ] }
Edge-Direct Visual Odometry
Visual odometry (VO), or the task of tracking camera pose from a stream of images, has received increased attention due to its widespread applications in robotics and augmented reality. Camera tracking in unknown environments is one of the most difficult challenges of computer vision. While VO has become a more popular area of research, there are still several challenges present. Such challenges are operating in low-texture environments, achieving higher frame rate processing capabilities for increased positional control, and reducing the drift of the trajectory estimate. Any new algorithm must also deal with inherent challenges of tracking camera pose, in particular they must be able to handle the high bandwidth image streams, which requires efficient solutions to extract useful information from such large amounts of data. Contributions In this paper we propose a sparse visual odometry algorithm that efficiently utilizes edges to track the camera motion with state-of-the-art accuracy quantified by low relative pose drift. More formally, we outline our main contributions: • An edge-direct visual odometry algorithm that outperforms state-of-the-art methods in public datasets. • We provide experimental evidence that edges are the essential pixels in direct methods through an ablation study. • We compare our edge method relative to a direct dense method. • We present key differences on reducing photometric error on edges as opposed to full image intensities. • We optimize our algorithm with respect to several different types of edges. Visual Odometry vs. SLAM Simultaneous localization and mapping (SLAM) algorithms have taken visual odometry algorithms a step further by jointly mapping the environment, and performing optimization over the joint poses and map. Additionally, SLAM algorithms implement loop closure, which enables systems to identify locations which it has visited before and optimize the trajectory by matching feature points against the prior image in memory. With the success of Bundle Adjustment and loop closure in producing near drift-free results, much of the attention has shifted from the performance of visual odometry algorithms to overall system performance. In reality the two are tightly coupled, and it is very important that visual odometry provides low-drift pose for two reasons. Firstly, Bundle Adjustment requires a good initialization in order for it to converge to a drift-free solution. Secondly, it is computationally expensive and is comparatively slow compared to the high frame-rate at which visual odometry performs. For these reasons we focus solely on VO performance in this work, and we show competitive performance even against such SLAM systems. Edge-Direct Visual Odometry Overview In this section we formulate the edge direct visual odometry algorithm. The key concept behind direct visual odometry is to align images with respect to pose parameters using gradients. This is an extension of the Lucas-Kanade algo-rithm [2,15]. At each timestamp we have a reference RGB image and a depth image. When we obtain a new frame, we assume we only receive an RGB image. This enables our method to be extended to monocular VO by simply keeping a depth map and updating at each new time step. Note also that we convert the RGB image into a grayscale image. The key step of our algorithm is that we then extract edges from the new image, and use them as a mask on the reference image we are localizing with respect to. 
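As a rough illustration of this masking step, the short Python sketch below (OpenCV and NumPy) extracts a Canny edge mask from the new grayscale image and converts it into the list of pixel coordinates at which the photometric error will later be evaluated. The helper names and the median-based automatic threshold are our own assumptions; the text only states that automatic thresholding is used.

```python
import cv2
import numpy as np

def edge_pixel_mask(gray_new, sigma=0.33):
    """Binary Canny edge mask of the *new* image. The median-based
    threshold rule is a common heuristic, assumed here for illustration."""
    med = np.median(gray_new)
    lo = int(max(0, (1.0 - sigma) * med))
    hi = int(min(255, (1.0 + sigma) * med))
    return cv2.Canny(gray_new, lo, hi) > 0

def edge_coordinates(gray_new):
    """(x, y) coordinates of the edge pixels; these index the reference
    image and its depth map when evaluating the photometric error."""
    ys, xs = np.nonzero(edge_pixel_mask(gray_new))
    return np.stack([xs, ys], axis=1)   # shape (N, 2), integer pixels
```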
We then align the images by iteratively minimizing the photometric error over these edge pixels. The objective is to minimize the nonlinear photometric error r i (ξ) = I 2 (τ (x i , d i , ξ)) − I 1 (x i ),(1) where τ is the warp function that maps image intensities in the second image to image intensities in the first image through a rigid body transform. The warping function τ (x i , d i , ξ) is dependent on the pixel positions x i , the depth d i of the corresponding 3D point, and the camera pose ξ. Note that now the pixels we are using are only edge pixels, ie. x i ∈ E(I 2 ),(2) where E(I 2 ) are the edges of the new image. Camera Model In order to minimize the photometric error we need to be able to associate image pixels with 3D points in space. Using the standard pinhole camera model, which maps 3D points to image pixels, we have π(P ) = fxX Z + c x , fyY Z + c y T ,(3) where f x and f y are the focal lengths and c x and c y are the image coordinates of the principal point. If we know the depth then we can find the inverse mapping that takes image coordinates and backprojects them to a 3D point P in homogenous coordinates P = π −1 (x i , Z) = x−cx fx Z, y−cy fy Z, Z, 1 T . (4) Camera Motion We are interested in determining the motion of the camera from a sequence of frames, which we model as a rigid body transformation. The camera motion will therefore be in the Special Euclidean Group SE(3). The rigid body transform is given by T ∈ SE(3) Any pyramid greater than three edge images deep starts to suffer from heavy amounts of aliasing, which led us to cut off our edge pyramid at the third level. T = R t 0 1 ,(5) where R is a 3 × 3 rotation matrix and t is a 3 × 1 translation vector. Since we are performing Gauss-Newton optimization, we need to parameterize camera pose as a 6vector through the exponential map T = exp se(3) (ξ) so that we can optimize over the SO(3) manifold for rotations. At each iteration we can compose the relative pose update ∆ξ with the previous iteration estimate. ξ (n+1) = ∆ξ (n) ξ (n) ,(6) where ∆ξ T = exp se(3) (∆ξ)T . We also use a constant motion assumption, where the pose initialization is taken to be the relative pose motion from the previous update, as opposed to initializing with the identity pose. The pose initialization for frame F i with respect to frame F k thus can be expressed as ξ ki,init = ξ k,i−1 ξ i−2,i−1 .(7) . Experimentally we have found that this greatly improves performance by providing the system with an accurate initialization such that it can converge to a low-error solution. Robust Gauss-Newton on Edge Maps Similar to other direct methods, we employ a coarse-tofine approach to Gauss-Newton minimization to avoid false convergence. The selection of the image pyramid scheme has a large effect on the system performance, and must be chosen carefully. Some systems such as [8] report using up to six levels, while [12] report using four levels. Simply extending these large pyramid sizes to edge maps causes the system to fail to converge to the correct solution. This is due to the effects of aliasing. A much smaller pyramid size is required. We found that three levels worked well for the original 640×480 resolution. Using additional levels caused the system to fail due to edge aliasing effects which is illustrated in Figure 3, which shows the same edge image at different levels of the pyramid. After level three, it becomes unrecognizable. For this reason, we recommend using images no smaller than 160 × 120 in resolution. 
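The sketch below illustrates, under stated assumptions, the backprojection of Eq. 4 and the edge-pixel photometric residual of Eq. 1, with the pinhole projection of Eq. 3 applied inline after warping by the current pose estimate. It is not the authors' implementation: nearest-neighbour sampling stands in for interpolation, the intrinsics are passed as a tuple K = (fx, fy, cx, cy), and points projecting behind the camera or outside the image are simply masked out.

```python
import numpy as np

def backproject(coords, depth, K):
    """Eq. 4: lift edge pixels (x, y) with known depth to 3D points."""
    fx, fy, cx, cy = K
    xs, ys = coords[:, 0], coords[:, 1]
    Z = depth[ys, xs].astype(np.float64)
    X = (xs - cx) / fx * Z
    Y = (ys - cy) / fy * Z
    return np.stack([X, Y, Z], axis=1)                 # (N, 3)

def photometric_residuals(I1, I2, coords, P_ref, T, K):
    """Eq. 1: r_i = I2(tau(x_i, d_i, xi)) - I1(x_i) over the edge pixels,
    with the projection of Eq. 3 applied after warping the backprojected
    points by the current 4x4 rigid-body estimate T."""
    fx, fy, cx, cy = K
    P = (T[:3, :3] @ P_ref.T).T + T[:3, 3]             # warp 3D points
    X, Y, Z = P[:, 0], P[:, 1], P[:, 2]
    in_front = Z > 1e-6
    u = np.full_like(Z, -1.0)
    v = np.full_like(Z, -1.0)
    u[in_front] = fx * X[in_front] / Z[in_front] + cx
    v[in_front] = fy * Y[in_front] / Z[in_front] + cy
    ui, vi = np.round(u).astype(int), np.round(v).astype(int)
    valid = (in_front & (ui >= 0) & (ui < I2.shape[1])
             & (vi >= 0) & (vi < I2.shape[0]))
    r = (I2[vi[valid], ui[valid]].astype(np.float64)
         - I1[coords[valid, 1], coords[valid, 0]])
    return r, valid
```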
A common approach in direct methods is to incorporate a weighting function that increases robustness to outliers when solving the error function. We use an iteratively re-weighted residual error function that we minimize with Gauss-Newton. We found that iteratively re-weighting using Huber weights worked quite well for our application, following the work of [8]. The Huber weights are defined as w i (r i ) = 1, r i ≤ k k |ri| , r i > k .(8) The error function now becomes E(ξ) = i w i (ξ)r 2 i (ξ).(9) Our goal is to find the relative camera pose that minimizes this function arg min ξ E(ξ) = arg min ξ i w i (ξ)r 2 i (ξ).(10) In order to minimize this nonlinear error function with Gauss-Newton, we must linearize the equation. We can then solve this as a first-order approximation by iteratively solving the equation ∆ξ (n) = −(J T W J ) −1 J T W r(ξ (n) ),(11) where W is a diagonal matrix with the weights, and the Jacobian J is defined as J = ∇I 2 ∂π ∂P ∂P ∂T ∂T ∂ξ ,(12) and ∇I 2 is the image gradient of the new image. We can then iteratively update the relative pose with Equation 6. Note that we use the inverse-composition [2] formulation such that we do not have to recompute the Jacobian matrix every iteration. This is what makes this algorithm extremely efficient, as shown in [2]. This shows that minimizing the residuals for just the edge pixels jointly minimizes the residuals for all pixels. After 3 images, the minimization starts to become more inaccurate. This is also a function of camera velocity and rotational velocity. Optimizing over Edge Points We present the theory of selecting and incorporating edge points in the formulation, and provide some insight on why it is so effective in implementation. For edge selection process, note that we have two images, a reference and a new image, and therefore two sets of edges. We wish to avoid the problems that arise from using both sets, namely there will be a different number of edge pixels, and dealing with this through matching algorithms is inefficient and error-prone. We use a more elegant solution, which is to use the edges of the new image as a mask on the first image. This initialization causes the mask to select pixels in the reference image that are slightly off from the reference image edges, assuming the camera has moved. At each iteration, we follow a gradient from this position towards a point that reduces photometric error. By definition, edges are regions of large photometric variation on either side. Intuitively we argue that the optimization should therefore converge and settle at the correct edge. To summarize, we initialize the edge mask to an offset position from the reference image's edges, and iteratively force these edge pixels to overlap with the reference edges. In doing this we achieve a highly accurate relative pose. Keyframe Selection Another implementation detail has to do with keyframes. Frame-to-frame alignment is inherently noisy and prone to accumulate drift. To mitigate this, VO algorithms often se-lect a key-frame which is used as the reference image for multiple new frames. The error accumulation is decreased by comparing against fewer reference frames, which directly results in a smaller error stackup. There have been several strategies for selecting keyframes. The selection of keyframes is dependent on the type of VO algorithm being used. Feature-based methods such as [17] usually impose the restriction that a significant number of frames to pass, on the order of tens of frames. 
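A minimal sketch of one iteratively re-weighted Gauss-Newton update (Eqs. 8-11) is shown below. The Huber constant k = 1.345 is a conventional default assumed for illustration (the text does not state a value), and the Jacobian J is taken as precomputed once per pyramid level, consistent with the inverse-compositional formulation.

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Eq. 8: weight 1 for small residuals, k/|r| for outliers."""
    w = np.ones_like(r)
    big = np.abs(r) > k
    w[big] = k / np.abs(r[big])
    return w

def gauss_newton_step(J, r, k=1.345):
    """Eq. 11: delta_xi = -(J^T W J)^{-1} J^T W r with diagonal Huber
    weights W. J is the N x 6 Jacobian of the residuals with respect to
    the pose increment; r is the N-vector of photometric residuals."""
    w = huber_weights(r, k)
    JW = J * w[:, None]              # W J for a diagonal weight matrix
    H = J.T @ JW                     # 6 x 6 normal matrix J^T W J
    g = JW.T @ r                     # J^T W r
    return -np.linalg.solve(H, g)    # 6-vector pose increment
```

The returned increment would then be mapped through the se(3) exponential and composed with the current estimate as in Eq. 6.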
In [11] the authors summarize several common approaches that direct methods use for creating a new keyframe: every n frames, after a certain relative pose threshold has been met, the variance of the error function exceeds a threshold, or the differential entropy of the covariance matrix reaches a threshold. However, each metric is not without its problems. Furthermore, the performance of the tracking degrades the further apart the baselines. Figure 4 demonstrates this phenomena, in which the residuals from five consecutive frames with respect to the first frame are shown. We observe that in general after 4 frames, the residuals become harder to minimize for most sequences. Note that this is a function of camera motion. We make the assumption that this camera tracking will be used for moderate motion and select an every n frames approach. Table 1. Comparison of the performance of our system using three different types of edges. Blue denotes best performing frame-to-frame VO, excluding SLAM or keyframe systems. Bold denotes best performing system overall. A dashed line indicates that using keyframes did not improve performance. Experiments We evaluate our system using the TUM RGB-D benchmark [21] , which is provided by the Technical University of Munich. The benchmark has been widely used by various SLAM and VO algorithms to benchmark their accuracy and performance over various test sequences. Each sequence contains RGB images, depth images, accelerometer data, as well as groundtruth. The camera intrinsics are also provided. Groundtruth was obtained by an external motion capture system through triangulation, and the data was synchronized. There are several challenging datasets within this benchmark. Each sequence ranges in duration, trajectory, and translational and rotational velocities. We follow the work of [19] which uses seven sequences to benchmark their system performance so to achieve a direct comparison with other methods. Evaluation Metrics We use the Relative Pose Error (RPE) and Absolute Trajectory Error (ATE) to evaluate our system. The Relative Pose Error is proposed for evaluation of drift for VO algorithms in [21]. It measures the accuracy of the camera pose over a fixed time interval ∆t RP E t = (Q −1 t Q t+∆t )(P −1 t P t+∆t ),(13) where Q 1 . . . Q n ∈ SE(3) are the camera poses associated with the groundtruth trajectory and P 1 . . . P n ∈ SE(3) are the camera poses associated with the estimated camera trajectory. Similarly the Absolute Trjectory Error is defined as Figure 5. XY cross-section of our estimated trajectory compared with ground truth. The error is shown in green. The start position is shown as a black dot, while the final positions are shown as colored dots corresponding to the trajectory. Areas without green indicate missing groundtruth data from sequence. AT E t = Q −1 i SP i ,(14) where poses Q and P are aligned by the rigid body transformation S obtained through a least-squares solution. A common practice has been to use the RMSE value of both the RPE and ATE, as RMSE values are a more robust metric that gives more weight to outliers as compared with the mean or median values. Thus the RMSE is a much more stringent performance metric to benchmark system drift. Following the example set by [9,12,22], we provide the RMSE camera pose drift over several sequences of the dataset. As first pointed out in [12], choosing too small of a ∆t creates erroneous error estimates as the ground truth motion capture system has finite error as well. 
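For completeness, the drift metrics of Eqs. 13-14 can be computed along the lines of the sketch below, which follows the usual TUM benchmark convention of reporting the translational RMSE; the trajectory alignment S for the ATE is assumed to be supplied, e.g. by a least-squares (Horn) alignment.

```python
import numpy as np

def rpe_rmse(Q, P, delta):
    """Relative pose error over a fixed frame offset `delta` (Eq. 13):
    compare ground-truth and estimated relative motions, then take the
    RMSE of the translational component. Q, P are lists of 4x4 poses."""
    errs = []
    for t in range(len(Q) - delta):
        dQ = np.linalg.inv(Q[t]) @ Q[t + delta]    # ground-truth motion
        dP = np.linalg.inv(P[t]) @ P[t + delta]    # estimated motion
        E = np.linalg.inv(dQ) @ dP                 # residual motion
        errs.append(np.linalg.norm(E[:3, 3]))
    return float(np.sqrt(np.mean(np.square(errs))))

def ate_rmse(Q, P, S):
    """Absolute trajectory error (Eq. 14) after aligning the estimate to
    the ground truth with the rigid-body transform S."""
    errs = [np.linalg.norm((np.linalg.inv(Q[i]) @ S @ P[i])[:3, 3])
            for i in range(len(Q))]
    return float(np.sqrt(np.mean(np.square(errs))))
```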
Too large of a value leads to penalizing rotations more so at the beginning than rotations towards the end [21]. Therefore, a reasonably sized ∆t needs to be chosen. We use a ∆t of 1s to achieve direct comparison with other methods. Results on the TUM RGB-D Benchmark We compare the performance of our algorithm using four different edge extraction algorithms, namely Canny, LoG, Sobel, and Structured Edges. We compare to other methods using frame-to-frame tracking for all variants. We selected Canny to perform keyframe tracking due to its consistent accuracy. Although all of the edge types performed well on the sequences, Canny edges performed the best overall on average. Note that we used automatic thresholding as opposed to REVO [19] which used fixed threshold values, which introduces a dependency on photometric consistency. Since we utilize automatic thresholding, our system is more robust to photometric variations across frames. See Figure 2 for examples of edge extractions. From our experiments we observed that edge-direct VO is highly accurate in frame-to-frame tracking, despite the inherent accumulation of drift in such a scheme that does not utilize keyframes. In terms of RPE, our frame-toframe variants perform better than or in worst case as well as REVO, an edge-based method which uses the distance transform on edges. Our method also outperforms ORB-SLAM2 run in VO mode for all sequences, except on f r1/xyz. This is a result of ORB-SLAM2 keeping a local map, and in this particular sequence the camera keeps the majority of the initial scene in view at all times. We confirmed this hypothesis by turning off the local mapping, at which case we outperform it on this sequence as well. Our results are shown in Table 1. In terms of ATE, we again perform well across all non-SLAM algorithms. Even though we do not use any Bundle Adjustment or global optimization as employed by RGBD-SLAM [11], we perform competitively over all sequences with such systems. We provide plots of the edge-direct estimated trajectories over time compared to groundtruth in Figure 7. Our estimated trajectory closely follows that of the groundtruth. In Figure 5 we show the edge-direct estimated trajectory along the XY plane, along with the error between our estimate and groundtruth. Ablation Study In order to experimentally demonstrate the effect of using edge pixels we perform an ablation study. This two-fold ablation study demonstrates the relative efficacy between optimizing over edge pixels compared with optimizing over the same number of randomly chosen pixels, and additionally demonstrates the stability of using edge pixels. We randomly select a fraction of the edge pixels to use, and compare it to our system randomly selecting the same number of pixels from the entire image. We average over 5 runs to account for variability. All parameters are identical for both methods. Additionally, for these tests we utilize keyframes as well as dropping the constant motion assumption. This forces the system to rely on the optimization more heavily, and provides a better measurement of the quality of convergence. We additionally record the latency of our system per frame. Operating on edge pixels is more accurate, while additionally enabling ∼50 fps on average on an Intel i7 CPU. Note that at our optimization settings, a dense method is far from real-time. Since we use the Lucas-Kanade Inverse Compositional formulation we expected our algorithm to be linear time complexity with the number of pixels used. 
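The pixel-selection side of the ablation could be reproduced with a sketch like the following, in which a fraction of the edge coordinates is kept and compared against an equal number of pixels drawn uniformly from the whole image; the exact sampling and seeding scheme is an assumption on our part.

```python
import numpy as np

def sample_edge_pixels(edge_coords, fraction, rng):
    """Keep a random fraction of the detected edge pixels."""
    n = max(1, int(fraction * len(edge_coords)))
    idx = rng.choice(len(edge_coords), size=n, replace=False)
    return edge_coords[idx]

def sample_random_pixels(height, width, n, rng):
    """Baseline: the same number of pixels drawn uniformly at random
    from the entire image."""
    xs = rng.integers(0, width, size=n)
    ys = rng.integers(0, height, size=n)
    return np.stack([xs, ys], axis=1)

# e.g. rng = np.random.default_rng(0); run both variants and average
# the resulting RPE/ATE over several seeds, as described in the text.
```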
We confirm this experimentally as well. Refer to Figure 6 for both the ablation study and timing measurements. We save approximately 90% computation on average by using edge pixels compared to using all pixels. Note that for stability of edge pixels, the Kinect sensor used in the sequences filters out unstable points in its depth map, and from qualitative inspection still leaves a large number of reliable edge pixels. This is confirmed via the relative stability of selected edge pixels compared to all pixels as well. This ablation study further supports our claim that edge pixels are essential for robust and accurate camera tracking. Discussion Our edge-direct VO algorithm performs well across all sequences compared to other state-of-the-art methods. The trajectory in Figure 5 shows accurate camera tracking in a sequence that is 99 seconds long, and travels over 18 m without the use of Bundle Adjustment or loop closure. Note that our algorithm would perform even better if coupled Figure 7. Shown is our estimated trajectory for four sequences. Each sequence plots the trajectory in solid colors corresponding to the axis. Groundtruth is shown as a red dotted line for all axes. As can be seen our estimates closely match that of the ground truth. Note that for the sequence fr2/desk, there is no ground truth during the interval at approximately 31-43 seconds, which is why there appears to be a straight line in groundtruth trajectory. with such global optimization methods, as our VO would initialize the algorithms closer to the correct solution compared with other algorithms. Such an increase in accuracy can enable SLAM systems to rely less heavily on computationally expensive global optimizations, and perhaps run these threads less frequently. Note that in this figure, the regions that are missing green regions are due to missing groundtruth data in the sequence. The estimated trajectory over time in Figure 7 shows remarkably accurate results as well. It is important to note that even though we explicitly only minimize the photometric error for edge pixels, Figure 4 shows that we simultaneously minimize the residuals for all pixels. This is an important observation, as it supports the claim that minimizing the residuals of edge pixels is the minimally sufficient objective. Moreover, the ablation study supports the claim that minimizing the photometric residuals for just the edge pixels provides less pixels to iterate over while enabling accurate tracking. It is interesting to note that utilizing keyframes did not help the system improve on many of the sequences once we added the constant motion assumption. Prior to adding this camera motion model, utilizing keyframes helped significantly. Conclusion We have presented a novel edge-direct visual odometry algorithm that determines an accurate relative pose between two frames by minimizing the photometric error of only edge pixels. We demonstrate experimentally that minimizing the edge residuals jointly minimizes the residuals over the entire image. This minimalist representation reduces computation required by operating on all pixels, and also results in more accurate tracking. We benchmark its performance on the TUM RGB-D dataset where it achieves stateof-the-art performance as quantified by low relative pose drift and low absolute trajectory error.
3,493
1906.04838
2951865326
In this paper we propose an edge-direct visual odometry algorithm that efficiently utilizes edge pixels to find the relative pose that minimizes the photometric error between images. Prior work on exploiting edge pixels instead treats edges as features and employs various techniques to match edge lines or pixels, which adds unnecessary complexity. Direct methods typically operate on all pixel intensities, which proves to be highly redundant. In contrast, our method builds naturally on direct visual odometry methods with minimal added computation. It is not only more efficient than direct dense methods, since we iterate with a fraction of the pixels, but also more accurate. We achieve high accuracy and efficiency by extracting edges from only one image and using robust Gauss-Newton to minimize the photometric error of these edge pixels. This simultaneously finds the edge pixels in the reference image and the relative camera pose that minimizes the photometric error. We test various edge detectors, including learned edges, and determine that the optimal edge detector for this method is the Canny edge detection algorithm with automatic thresholding. We highlight key differences between our edge-direct method and direct dense methods, in particular how higher levels of image pyramids can lead to significant aliasing effects and result in incorrect solution convergence. We show experimentally that reducing the photometric error of edge pixels also reduces the photometric error of all pixels, and we show through an ablation study the increase in accuracy obtained by optimizing over edge pixels only. We evaluate our method on the TUM RGB-D benchmark, on which we achieve state-of-the-art performance.
Any system that extracts edges must choose among several edge extraction algorithms. The most prominent are Canny edges @cite_6 , followed by edges extracted with Laplacian of Gaussian (LoG) filters, which are efficiently approximated using a Difference of Gaussians (DoG). Another type of edge that is less popular but very simple is the Sobel edge. More recently, there has been research on learning edge features: in @cite_20 the authors use structured forests, and in @cite_15 the authors use deep learning. Instead of selecting one a priori, we test various edge extraction algorithms with our system to determine the optimal one. Note that @cite_15 requires a GPU and is far from real-time, so we do not consider this method.
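As a rough illustration of the candidate edge detectors discussed above, the OpenCV sketch below produces binary edge maps with Canny, Sobel, and LoG; the thresholds are illustrative assumptions, and the learned structured-forest detector is omitted because it needs a pretrained model file.

```python
import cv2
import numpy as np

def canny_edges(gray, sigma=0.33):
    """Canny with a median-based automatic threshold (common heuristic)."""
    med = np.median(gray)
    return cv2.Canny(gray, int(max(0, (1 - sigma) * med)),
                     int(min(255, (1 + sigma) * med))) > 0

def sobel_edges(gray, thresh=50.0):
    """Sobel gradient magnitude thresholded to a binary edge map."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    return np.hypot(gx, gy) > thresh

def log_edges(gray, ksize=5, thresh=4.0):
    """Laplacian of Gaussian, approximated by thresholding the Laplacian
    of a Gaussian-smoothed image."""
    blurred = cv2.GaussianBlur(gray, (ksize, ksize), 0)
    return np.abs(cv2.Laplacian(blurred, cv2.CV_32F)) > thresh
```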
{ "abstract": [ "", "Edge detection is a critical component of many vision systems, including object detectors and image segmentation algorithms. Patches of edges exhibit well-known forms of local structure, such as straight lines or T-junctions. In this paper we take advantage of the structure present in local image patches to learn both an accurate and computationally efficient edge detector. We formulate the problem of predicting local edge masks in a structured learning framework applied to random decision forests. Our novel approach to learning decision trees robustly maps the structured labels to a discrete space on which standard information gain measures may be evaluated. The result is an approach that obtains real time performance that is orders of magnitude faster than many competing state-of-the-art approaches, while also achieving state-of-the-art edge detection results on the BSDS500 Segmentation dataset and NYU Depth dataset. Finally, we show the potential of our approach as a general purpose edge detector by showing our learned edge models generalize well across datasets.", "This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge." ], "cite_N": [ "@cite_15", "@cite_20", "@cite_6" ], "mid": [ "", "2129587342", "2145023731" ] }
Edge-Direct Visual Odometry
Visual odometry (VO), or the task of tracking camera pose from a stream of images, has received increased attention due to its widespread applications in robotics and augmented reality. Camera tracking in unknown environments is one of the most difficult challenges of computer vision. While VO has become a more popular area of research, there are still several challenges present. Such challenges are operating in low-texture environments, achieving higher frame rate processing capabilities for increased positional control, and reducing the drift of the trajectory estimate. Any new algorithm must also deal with inherent challenges of tracking camera pose, in particular they must be able to handle the high bandwidth image streams, which requires efficient solutions to extract useful information from such large amounts of data. Contributions In this paper we propose a sparse visual odometry algorithm that efficiently utilizes edges to track the camera motion with state-of-the-art accuracy quantified by low relative pose drift. More formally, we outline our main contributions: • An edge-direct visual odometry algorithm that outperforms state-of-the-art methods in public datasets. • We provide experimental evidence that edges are the essential pixels in direct methods through an ablation study. • We compare our edge method relative to a direct dense method. • We present key differences on reducing photometric error on edges as opposed to full image intensities. • We optimize our algorithm with respect to several different types of edges. Visual Odometry vs. SLAM Simultaneous localization and mapping (SLAM) algorithms have taken visual odometry algorithms a step further by jointly mapping the environment, and performing optimization over the joint poses and map. Additionally, SLAM algorithms implement loop closure, which enables systems to identify locations which it has visited before and optimize the trajectory by matching feature points against the prior image in memory. With the success of Bundle Adjustment and loop closure in producing near drift-free results, much of the attention has shifted from the performance of visual odometry algorithms to overall system performance. In reality the two are tightly coupled, and it is very important that visual odometry provides low-drift pose for two reasons. Firstly, Bundle Adjustment requires a good initialization in order for it to converge to a drift-free solution. Secondly, it is computationally expensive and is comparatively slow compared to the high frame-rate at which visual odometry performs. For these reasons we focus solely on VO performance in this work, and we show competitive performance even against such SLAM systems. Edge-Direct Visual Odometry Overview In this section we formulate the edge direct visual odometry algorithm. The key concept behind direct visual odometry is to align images with respect to pose parameters using gradients. This is an extension of the Lucas-Kanade algo-rithm [2,15]. At each timestamp we have a reference RGB image and a depth image. When we obtain a new frame, we assume we only receive an RGB image. This enables our method to be extended to monocular VO by simply keeping a depth map and updating at each new time step. Note also that we convert the RGB image into a grayscale image. The key step of our algorithm is that we then extract edges from the new image, and use them as a mask on the reference image we are localizing with respect to. 
We then align the images by iteratively minimizing the photometric error over these edge pixels. The objective is to minimize the nonlinear photometric error r i (ξ) = I 2 (τ (x i , d i , ξ)) − I 1 (x i ),(1) where τ is the warp function that maps image intensities in the second image to image intensities in the first image through a rigid body transform. The warping function τ (x i , d i , ξ) is dependent on the pixel positions x i , the depth d i of the corresponding 3D point, and the camera pose ξ. Note that now the pixels we are using are only edge pixels, ie. x i ∈ E(I 2 ),(2) where E(I 2 ) are the edges of the new image. Camera Model In order to minimize the photometric error we need to be able to associate image pixels with 3D points in space. Using the standard pinhole camera model, which maps 3D points to image pixels, we have π(P ) = fxX Z + c x , fyY Z + c y T ,(3) where f x and f y are the focal lengths and c x and c y are the image coordinates of the principal point. If we know the depth then we can find the inverse mapping that takes image coordinates and backprojects them to a 3D point P in homogenous coordinates P = π −1 (x i , Z) = x−cx fx Z, y−cy fy Z, Z, 1 T . (4) Camera Motion We are interested in determining the motion of the camera from a sequence of frames, which we model as a rigid body transformation. The camera motion will therefore be in the Special Euclidean Group SE(3). The rigid body transform is given by T ∈ SE(3) Any pyramid greater than three edge images deep starts to suffer from heavy amounts of aliasing, which led us to cut off our edge pyramid at the third level. T = R t 0 1 ,(5) where R is a 3 × 3 rotation matrix and t is a 3 × 1 translation vector. Since we are performing Gauss-Newton optimization, we need to parameterize camera pose as a 6vector through the exponential map T = exp se(3) (ξ) so that we can optimize over the SO(3) manifold for rotations. At each iteration we can compose the relative pose update ∆ξ with the previous iteration estimate. ξ (n+1) = ∆ξ (n) ξ (n) ,(6) where ∆ξ T = exp se(3) (∆ξ)T . We also use a constant motion assumption, where the pose initialization is taken to be the relative pose motion from the previous update, as opposed to initializing with the identity pose. The pose initialization for frame F i with respect to frame F k thus can be expressed as ξ ki,init = ξ k,i−1 ξ i−2,i−1 .(7) . Experimentally we have found that this greatly improves performance by providing the system with an accurate initialization such that it can converge to a low-error solution. Robust Gauss-Newton on Edge Maps Similar to other direct methods, we employ a coarse-tofine approach to Gauss-Newton minimization to avoid false convergence. The selection of the image pyramid scheme has a large effect on the system performance, and must be chosen carefully. Some systems such as [8] report using up to six levels, while [12] report using four levels. Simply extending these large pyramid sizes to edge maps causes the system to fail to converge to the correct solution. This is due to the effects of aliasing. A much smaller pyramid size is required. We found that three levels worked well for the original 640×480 resolution. Using additional levels caused the system to fail due to edge aliasing effects which is illustrated in Figure 3, which shows the same edge image at different levels of the pyramid. After level three, it becomes unrecognizable. For this reason, we recommend using images no smaller than 160 × 120 in resolution. 
A common approach in direct methods is to incorporate a weighting function that increases robustness to outliers when solving the error function. We use an iteratively re-weighted residual error function that we minimize with Gauss-Newton. We found that iteratively re-weighting using Huber weights worked quite well for our application, following the work of [8]. The Huber weights are defined as w i (r i ) = 1, r i ≤ k k |ri| , r i > k .(8) The error function now becomes E(ξ) = i w i (ξ)r 2 i (ξ).(9) Our goal is to find the relative camera pose that minimizes this function arg min ξ E(ξ) = arg min ξ i w i (ξ)r 2 i (ξ).(10) In order to minimize this nonlinear error function with Gauss-Newton, we must linearize the equation. We can then solve this as a first-order approximation by iteratively solving the equation ∆ξ (n) = −(J T W J ) −1 J T W r(ξ (n) ),(11) where W is a diagonal matrix with the weights, and the Jacobian J is defined as J = ∇I 2 ∂π ∂P ∂P ∂T ∂T ∂ξ ,(12) and ∇I 2 is the image gradient of the new image. We can then iteratively update the relative pose with Equation 6. Note that we use the inverse-composition [2] formulation such that we do not have to recompute the Jacobian matrix every iteration. This is what makes this algorithm extremely efficient, as shown in [2]. This shows that minimizing the residuals for just the edge pixels jointly minimizes the residuals for all pixels. After 3 images, the minimization starts to become more inaccurate. This is also a function of camera velocity and rotational velocity. Optimizing over Edge Points We present the theory of selecting and incorporating edge points in the formulation, and provide some insight on why it is so effective in implementation. For edge selection process, note that we have two images, a reference and a new image, and therefore two sets of edges. We wish to avoid the problems that arise from using both sets, namely there will be a different number of edge pixels, and dealing with this through matching algorithms is inefficient and error-prone. We use a more elegant solution, which is to use the edges of the new image as a mask on the first image. This initialization causes the mask to select pixels in the reference image that are slightly off from the reference image edges, assuming the camera has moved. At each iteration, we follow a gradient from this position towards a point that reduces photometric error. By definition, edges are regions of large photometric variation on either side. Intuitively we argue that the optimization should therefore converge and settle at the correct edge. To summarize, we initialize the edge mask to an offset position from the reference image's edges, and iteratively force these edge pixels to overlap with the reference edges. In doing this we achieve a highly accurate relative pose. Keyframe Selection Another implementation detail has to do with keyframes. Frame-to-frame alignment is inherently noisy and prone to accumulate drift. To mitigate this, VO algorithms often se-lect a key-frame which is used as the reference image for multiple new frames. The error accumulation is decreased by comparing against fewer reference frames, which directly results in a smaller error stackup. There have been several strategies for selecting keyframes. The selection of keyframes is dependent on the type of VO algorithm being used. Feature-based methods such as [17] usually impose the restriction that a significant number of frames to pass, on the order of tens of frames. 
In [11] the authors summarize several common approaches that direct methods use for creating a new keyframe: every n frames, after a certain relative pose threshold has been met, the variance of the error function exceeds a threshold, or the differential entropy of the covariance matrix reaches a threshold. However, each metric is not without its problems. Furthermore, the performance of the tracking degrades the further apart the baselines. Figure 4 demonstrates this phenomena, in which the residuals from five consecutive frames with respect to the first frame are shown. We observe that in general after 4 frames, the residuals become harder to minimize for most sequences. Note that this is a function of camera motion. We make the assumption that this camera tracking will be used for moderate motion and select an every n frames approach. Table 1. Comparison of the performance of our system using three different types of edges. Blue denotes best performing frame-to-frame VO, excluding SLAM or keyframe systems. Bold denotes best performing system overall. A dashed line indicates that using keyframes did not improve performance. Experiments We evaluate our system using the TUM RGB-D benchmark [21] , which is provided by the Technical University of Munich. The benchmark has been widely used by various SLAM and VO algorithms to benchmark their accuracy and performance over various test sequences. Each sequence contains RGB images, depth images, accelerometer data, as well as groundtruth. The camera intrinsics are also provided. Groundtruth was obtained by an external motion capture system through triangulation, and the data was synchronized. There are several challenging datasets within this benchmark. Each sequence ranges in duration, trajectory, and translational and rotational velocities. We follow the work of [19] which uses seven sequences to benchmark their system performance so to achieve a direct comparison with other methods. Evaluation Metrics We use the Relative Pose Error (RPE) and Absolute Trajectory Error (ATE) to evaluate our system. The Relative Pose Error is proposed for evaluation of drift for VO algorithms in [21]. It measures the accuracy of the camera pose over a fixed time interval ∆t RP E t = (Q −1 t Q t+∆t )(P −1 t P t+∆t ),(13) where Q 1 . . . Q n ∈ SE(3) are the camera poses associated with the groundtruth trajectory and P 1 . . . P n ∈ SE(3) are the camera poses associated with the estimated camera trajectory. Similarly the Absolute Trjectory Error is defined as Figure 5. XY cross-section of our estimated trajectory compared with ground truth. The error is shown in green. The start position is shown as a black dot, while the final positions are shown as colored dots corresponding to the trajectory. Areas without green indicate missing groundtruth data from sequence. AT E t = Q −1 i SP i ,(14) where poses Q and P are aligned by the rigid body transformation S obtained through a least-squares solution. A common practice has been to use the RMSE value of both the RPE and ATE, as RMSE values are a more robust metric that gives more weight to outliers as compared with the mean or median values. Thus the RMSE is a much more stringent performance metric to benchmark system drift. Following the example set by [9,12,22], we provide the RMSE camera pose drift over several sequences of the dataset. As first pointed out in [12], choosing too small of a ∆t creates erroneous error estimates as the ground truth motion capture system has finite error as well. 
Too large of a value leads to penalizing rotations more so at the beginning than rotations towards the end [21]. Therefore, a reasonably sized ∆t needs to be chosen. We use a ∆t of 1s to achieve direct comparison with other methods. Results on the TUM RGB-D Benchmark We compare the performance of our algorithm using four different edge extraction algorithms, namely Canny, LoG, Sobel, and Structured Edges. We compare to other methods using frame-to-frame tracking for all variants. We selected Canny to perform keyframe tracking due to its consistent accuracy. Although all of the edge types performed well on the sequences, Canny edges performed the best overall on average. Note that we used automatic thresholding as opposed to REVO [19] which used fixed threshold values, which introduces a dependency on photometric consistency. Since we utilize automatic thresholding, our system is more robust to photometric variations across frames. See Figure 2 for examples of edge extractions. From our experiments we observed that edge-direct VO is highly accurate in frame-to-frame tracking, despite the inherent accumulation of drift in such a scheme that does not utilize keyframes. In terms of RPE, our frame-toframe variants perform better than or in worst case as well as REVO, an edge-based method which uses the distance transform on edges. Our method also outperforms ORB-SLAM2 run in VO mode for all sequences, except on f r1/xyz. This is a result of ORB-SLAM2 keeping a local map, and in this particular sequence the camera keeps the majority of the initial scene in view at all times. We confirmed this hypothesis by turning off the local mapping, at which case we outperform it on this sequence as well. Our results are shown in Table 1. In terms of ATE, we again perform well across all non-SLAM algorithms. Even though we do not use any Bundle Adjustment or global optimization as employed by RGBD-SLAM [11], we perform competitively over all sequences with such systems. We provide plots of the edge-direct estimated trajectories over time compared to groundtruth in Figure 7. Our estimated trajectory closely follows that of the groundtruth. In Figure 5 we show the edge-direct estimated trajectory along the XY plane, along with the error between our estimate and groundtruth. Ablation Study In order to experimentally demonstrate the effect of using edge pixels we perform an ablation study. This two-fold ablation study demonstrates the relative efficacy between optimizing over edge pixels compared with optimizing over the same number of randomly chosen pixels, and additionally demonstrates the stability of using edge pixels. We randomly select a fraction of the edge pixels to use, and compare it to our system randomly selecting the same number of pixels from the entire image. We average over 5 runs to account for variability. All parameters are identical for both methods. Additionally, for these tests we utilize keyframes as well as dropping the constant motion assumption. This forces the system to rely on the optimization more heavily, and provides a better measurement of the quality of convergence. We additionally record the latency of our system per frame. Operating on edge pixels is more accurate, while additionally enabling ∼50 fps on average on an Intel i7 CPU. Note that at our optimization settings, a dense method is far from real-time. Since we use the Lucas-Kanade Inverse Compositional formulation we expected our algorithm to be linear time complexity with the number of pixels used. 
We confirm this experimentally as well. Refer to Figure 6 for both the ablation study and timing measurements. We save approximately 90% computation on average by using edge pixels compared to using all pixels. Note that for stability of edge pixels, the Kinect sensor used in the sequences filters out unstable points in its depth map, and from qualitative inspection still leaves a large number of reliable edge pixels. This is confirmed via the relative stability of selected edge pixels compared to all pixels as well. This ablation study further supports our claim that edge pixels are essential for robust and accurate camera tracking. Discussion Our edge-direct VO algorithm performs well across all sequences compared to other state-of-the-art methods. The trajectory in Figure 5 shows accurate camera tracking in a sequence that is 99 seconds long, and travels over 18 m without the use of Bundle Adjustment or loop closure. Note that our algorithm would perform even better if coupled Figure 7. Shown is our estimated trajectory for four sequences. Each sequence plots the trajectory in solid colors corresponding to the axis. Groundtruth is shown as a red dotted line for all axes. As can be seen our estimates closely match that of the ground truth. Note that for the sequence fr2/desk, there is no ground truth during the interval at approximately 31-43 seconds, which is why there appears to be a straight line in groundtruth trajectory. with such global optimization methods, as our VO would initialize the algorithms closer to the correct solution compared with other algorithms. Such an increase in accuracy can enable SLAM systems to rely less heavily on computationally expensive global optimizations, and perhaps run these threads less frequently. Note that in this figure, the regions that are missing green regions are due to missing groundtruth data in the sequence. The estimated trajectory over time in Figure 7 shows remarkably accurate results as well. It is important to note that even though we explicitly only minimize the photometric error for edge pixels, Figure 4 shows that we simultaneously minimize the residuals for all pixels. This is an important observation, as it supports the claim that minimizing the residuals of edge pixels is the minimally sufficient objective. Moreover, the ablation study supports the claim that minimizing the photometric residuals for just the edge pixels provides less pixels to iterate over while enabling accurate tracking. It is interesting to note that utilizing keyframes did not help the system improve on many of the sequences once we added the constant motion assumption. Prior to adding this camera motion model, utilizing keyframes helped significantly. Conclusion We have presented a novel edge-direct visual odometry algorithm that determines an accurate relative pose between two frames by minimizing the photometric error of only edge pixels. We demonstrate experimentally that minimizing the edge residuals jointly minimizes the residuals over the entire image. This minimalist representation reduces computation required by operating on all pixels, and also results in more accurate tracking. We benchmark its performance on the TUM RGB-D dataset where it achieves stateof-the-art performance as quantified by low relative pose drift and low absolute trajectory error.
3,493
1906.04825
2954829969
Determining the optimal location of control cabinet components requires the exploration of a large configuration space. For real-world control cabinets it is impractical to evaluate all possible cabinet configurations. Therefore, we need to apply methods for intelligent exploration of cabinet configuration space that enable to find a near-optimal configuration without evaluation of all possible configurations. In this paper, we describe an approach for multi-objective optimization of control cabinet layout that is based on Pareto Simulated Annealing. Optimization aims at minimizing the total wire length used for interconnection of components and the heat convection within the cabinet. We simulate heat convection to study the warm air flow within the control cabinet and determine the optimal position of components that generate heat during the operation. We evaluate and demonstrate the effectiveness of our approach empirically for various control cabinet sizes and usage scenarios.
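As a rough, hedged illustration of the optimization strategy, the sketch below implements a simplified single-chain variant of Pareto Simulated Annealing over two objectives (e.g. total wire length and a heat-convection score). The neighbour move, objective functions, and cooling schedule are placeholders of our own, and the method described in the paper may differ in detail.

```python
import math
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (both minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_sa(initial, neighbour, objectives, iters=10000, t0=1.0, alpha=0.999):
    """Simplified single-chain Pareto Simulated Annealing over two
    objectives, e.g. (total wire length, heat-convection score).
    `neighbour` proposes a perturbed cabinet layout and `objectives`
    returns a tuple of objective values; both are placeholders here."""
    current, f_cur = initial, objectives(initial)
    archive = [(current, f_cur)]          # non-dominated layouts found
    temp = t0
    for _ in range(iters):
        cand = neighbour(current)
        f_cand = objectives(cand)
        if dominates(f_cand, f_cur):
            accept = True
        else:
            # scalarise the change with a random weight, as in Pareto SA
            w = random.random()
            delta = w * (f_cand[0] - f_cur[0]) + (1 - w) * (f_cand[1] - f_cur[1])
            accept = random.random() < math.exp(-max(delta, 0.0) / temp)
        if accept:
            current, f_cur = cand, f_cand
            if not any(dominates(f, f_cur) for _, f in archive):
                archive = [(s, f) for s, f in archive if not dominates(f_cur, f)]
                archive.append((current, f_cur))
        temp *= alpha                     # geometric cooling schedule
    return archive
```

A full implementation along the lines of the paper would additionally run several interacting annealing chains and adapt the objective weights, but the acceptance rule above captures the core idea.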
@cite_4 propose an approach based on Bayesian networks for the automatic generation of residential building layouts in the context of computer graphics applications (such as computer games). The authors define a cost function that aims at avoiding layout anomalies, such as ill-formed rooms or incompatibilities between floors.
{ "abstract": [ "We present a method for automated generation of building layouts for computer graphics applications. Our approach is motivated by the layout design process developed in architecture. Given a set of high-level requirements, an architectural program is synthesized using a Bayesian network trained on real-world data. The architectural program is realized in a set of floor plans, obtained through stochastic optimization. The floor plans are used to construct a complete three-dimensional building with internal structure. We demonstrate a variety of computer-generated buildings produced by the presented approach." ], "cite_N": [ "@cite_4" ], "mid": [ "1977371710" ] }
0
1906.04825
2954829969
Determining the optimal location of control cabinet components requires the exploration of a large configuration space. For real-world control cabinets it is impractical to evaluate all possible cabinet configurations. Therefore, we need to apply methods for intelligent exploration of cabinet configuration space that enable to find a near-optimal configuration without evaluation of all possible configurations. In this paper, we describe an approach for multi-objective optimization of control cabinet layout that is based on Pareto Simulated Annealing. Optimization aims at minimizing the total wire length used for interconnection of components and the heat convection within the cabinet. We simulate heat convection to study the warm air flow within the control cabinet and determine the optimal position of components that generate heat during the operation. We evaluate and demonstrate the effectiveness of our approach empirically for various control cabinet sizes and usage scenarios.
@cite_8 survey various methods (such as Simulated Annealing) for decentralized scheduling in Grid computing environments. Grid scheduling involves mapping a collection of tasks to resources with the aim of minimizing the total execution time of all considered tasks. In their evaluation, Simulated Annealing achieved a lower average scheduling time per task than a game-theoretic scheduling method.
{ "abstract": [ "In grid environments applications require dynamic scheduling for optimized assignment of tasks on available resources, so the optimization represents a key solution for scheduling. This paper presents an evaluation of multi-objective decentralized scheduling models for the problem of task allocation. It also presents a survey of existing optimization solutions for grid scheduling. The surveyed scheduling solutions are: random and best of n random, exhaustive search, simulated annealing, game theory, ad-hoc greedy scheduler, and genetic algorithm for decentralized scheduling. We carry out our experiments with various scheduling scenarios and with heterogeneous input tasks and computation resources. We also present the methods to evaluate and validate the described scheduling methods. We present several experimental results that offer a support for near-optimal algorithm selection." ], "cite_N": [ "@cite_8" ], "mid": [ "2116946066" ] }
0
1906.04825
2954829969
Determining the optimal location of control cabinet components requires the exploration of a large configuration space. For real-world control cabinets it is impractical to evaluate all possible cabinet configurations. Therefore, we need to apply methods for intelligent exploration of cabinet configuration space that enable to find a near-optimal configuration without evaluation of all possible configurations. In this paper, we describe an approach for multi-objective optimization of control cabinet layout that is based on Pareto Simulated Annealing. Optimization aims at minimizing the total wire length used for interconnection of components and the heat convection within the cabinet. We simulate heat convection to study the warm air flow within the control cabinet and determine the optimal position of components that generate heat during the operation. We evaluate and demonstrate the effectiveness of our approach empirically for various control cabinet sizes and usage scenarios.
@cite_12 @cite_2 use Simulated Annealing for optimization of DNA sequence analysis on heterogeneous computing systems that comprise a host with multi-core processors and one or more many-core devices. The optimization procedure aims at determining the number of threads, thread affinities, and DNA sequence fractions for host and device, such that the overall execution time of DNA sequence analysis is minimized.
{ "abstract": [ "Analysis of DNA sequences is a data and computational intensive problem, and therefore, it requires suitable parallel computing resources and algorithms. In this paper, we describe our parallel alg ...", "While modern parallel computing systems offer high performance, utilizing these powerful computing resources to the highest possible extent demands advanced knowledge of various hardware architectures and parallel programming models. Furthermore, optimized software execution on parallel computing systems demands consideration of many parameters at compile-time and run-time. Determining the optimal set of parameters in a given execution context is a complex task, and therefore to address this issue researchers have proposed different approaches that use heuristic search or machine learning. In this paper, we undertake a systematic literature review to aggregate, analyze and classify the existing software optimization methods for parallel computing systems. We review approaches that use machine learning or meta-heuristics for software optimization at compile-time and run-time. Additionally, we discuss challenges and future research directions. The results of this study may help to better understand the state-of-the-art techniques that use machine learning and meta-heuristics to deal with the complexity of software optimization for parallel computing systems. Furthermore, it may aid in understanding the limitations of existing approaches and identification of areas for improvement." ], "cite_N": [ "@cite_12", "@cite_2" ], "mid": [ "2604520782", "2786013445" ] }
0
1906.04825
2954829969
Determining the optimal location of control cabinet components requires the exploration of a large configuration space. For real-world control cabinets it is impractical to evaluate all possible cabinet configurations. Therefore, we need to apply methods for intelligent exploration of cabinet configuration space that enable to find a near-optimal configuration without evaluation of all possible configurations. In this paper, we describe an approach for multi-objective optimization of control cabinet layout that is based on Pareto Simulated Annealing. Optimization aims at minimizing the total wire length used for interconnection of components and the heat convection within the cabinet. We simulate heat convection to study the warm air flow within the control cabinet and determine the optimal position of components that generate heat during the operation. We evaluate and demonstrate the effectiveness of our approach empirically for various control cabinet sizes and usage scenarios.
Drexl and Nikulin @cite_17 use Pareto Simulated Annealing to solve the airport gate assignment problem. Several objectives are considered, such as the total passenger walking distance, the number of ungated flights, connection times, and gate assignment preferences.
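Pareto Simulated Annealing maintains an archive of mutually non-dominated solutions rather than a single incumbent. The sketch below shows the dominance test and archive update that such an approach builds on; the objective vectors (e.g. walking distance and number of ungated flights) are placeholders, and the archive logic is a common textbook variant rather than the exact procedure of the cited work.

```python
# Pareto-dominance test and archive update at the core of multi-objective
# (Pareto) simulated annealing. All objectives are assumed to be minimized.
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if `a` is at least as good as `b` in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive: List[Sequence[float]],
                   candidate: Sequence[float]) -> List[Sequence[float]]:
    """Keep only non-dominated solutions after considering `candidate`."""
    if any(dominates(kept, candidate) for kept in archive):
        return archive                       # candidate is dominated: discard it
    archive = [kept for kept in archive if not dominates(candidate, kept)]
    archive.append(candidate)
    return archive

# Example: (walking distance, ungated flights) for four candidate assignments.
front: List[Sequence[float]] = []
for objectives in [(1200.0, 3), (1500.0, 1), (1100.0, 4), (1600.0, 2)]:
    front = update_archive(front, objectives)
print(front)   # the non-dominated set found so far
```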
{ "abstract": [ "This paper addresses an airport gate assignment problem with multiple objectives. The objectives are to minimize the number of ungated flights and the total passenger walking distances or connection times as well as to maximize the total gate assignment preferences. The problem examined is an integer program with multiple objectives (one of them being quadratic) and quadratic constraints. Of course, such a problem is inherently difficult to solve. We tackle the problem by Pareto simulated annealing in order to get a representative approximation for the Pareto front. Results of computational experiments are presented. To the best of our knowledge, this is the first attempt to consider the airport gate assignment problem with multiple objectives." ], "cite_N": [ "@cite_17" ], "mid": [ "2106120336" ] }
0
1906.04825
2954829969
Determining the optimal location of control cabinet components requires the exploration of a large configuration space. For real-world control cabinets it is impractical to evaluate all possible cabinet configurations. Therefore, we need to apply methods for intelligent exploration of cabinet configuration space that enable to find a near-optimal configuration without evaluation of all possible configurations. In this paper, we describe an approach for multi-objective optimization of control cabinet layout that is based on Pareto Simulated Annealing. Optimization aims at minimizing the total wire length used for interconnection of components and the heat convection within the cabinet. We simulate heat convection to study the warm air flow within the control cabinet and determine the optimal position of components that generate heat during the operation. We evaluate and demonstrate the effectiveness of our approach empirically for various control cabinet sizes and usage scenarios.
@cite_16 propose Simulated Annealing for solving the hybrid vehicle routing problem. The aim is to minimize the total travel cost of hybrid vehicles that use both fuel and electricity, while considering the time limit, the electric and fuel capacities, and the locations of fuel stations and electricity charging stations.
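The cited approach augments simulated annealing with a restart strategy. The sketch below illustrates one common way such a restart can be layered on top of an annealing loop: when no improvement over the best-known route has been seen for a while, the search jumps back to that route. The `route_cost` and `perturb` functions are hypothetical stand-ins for a real HVRP evaluation and neighborhood move.

```python
# Simulated annealing with a simple restart strategy: if the best-known route has
# not improved for `patience` iterations, continue the search from that best route.
import math
import random

def route_cost(route):
    # Placeholder: a real implementation would price fuel/electric legs,
    # recharging detours, and feasibility penalties.
    return sum(abs(a - b) for a, b in zip(route, route[1:]))

def perturb(route):
    r = list(route)
    i, j = random.sample(range(len(r)), 2)   # simple swap move
    r[i], r[j] = r[j], r[i]
    return r

def anneal_with_restart(route, iters=5000, t0=1.0, patience=200):
    cost = route_cost(route)
    best, best_cost = route, cost
    stall = 0
    for i in range(iters):
        t = t0 * (1.0 - i / iters) + 1e-6    # linear cooling, kept positive
        cand = perturb(route)
        cand_cost = route_cost(cand)
        if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / t):
            route, cost = cand, cand_cost
        if cost < best_cost:
            best, best_cost, stall = route, cost, 0
        else:
            stall += 1
            if stall >= patience:            # restart from the best route so far
                route, cost, stall = best, best_cost, 0
    return best, best_cost

print(anneal_with_restart([random.random() for _ in range(20)]))
```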
{ "abstract": [ "Display Omitted This research proposes the hybrid vehicle routing problem (HVRP), which is an extension of the green vehicle routing problem.A simulated annealing (SA) heuristic is proposed to solve HVRP.Computational results show that the proposed SA effectively solves HVRP.Sensitivity analysis has been conducted to understand the effect of hybrid vehicles and charging stations on the travel cost. This study proposes the Hybrid Vehicle Routing Problem (HVRP), which is an extension of the Green Vehicle Routing Problem (G-VRP). We focus on vehicles that use a hybrid power source, known as the Plug-in Hybrid Electric Vehicle (PHEV) and generate a mathematical model to minimize the total cost of travel by driving PHEV. Moreover, the model considers the utilization of electric and fuel power depending on the availability of either electric charging or fuel stations.We develop simulated annealing with a restart strategy (SA_RS) to solve this problem, and it consists of two versions. The first version determines the acceptance probability of a worse solution using the Boltzmann function, denoted as SA_RSBF. The second version employs the Cauchy function to determine the acceptance probability of a worse solution, denoted as SA_RSCF. The proposed SA algorithm is first verified with benchmark data of the capacitated vehicle routing problem (CVRP), with the result showing that it performs well and confirms its efficiency in solving CVRP. Further analysis show that SA_RSCF is preferable compared to SA_RSBF and that SA with a restart strategy performs better than without a restart strategy. We next utilize the SA_RSCF method to solve HVRP. The numerical experiment presents that vehicle type and the number of electric charging stations have an impact on the total travel cost." ], "cite_N": [ "@cite_16" ], "mid": [ "2562801040" ] }
0
1808.05089
2964249268
The evolving explosion in high data rate services and applications will soon require the use of untapped, abundant unregulated spectrum of the visible light for communications to adequately meet the demands of the fifth-generation (5G) mobile technologies. Radio-frequency (RF) networks are proving to be too scarce to cover the escalation in data rate services. Visible light communication (VLC) has emerged as a great potential solution, either in replacement of, or a complement to, existing RF networks, to support the projected traffic demands. Despite the prolific advantages of VLC networks, VLC faces many challenges that must be resolved in the near future to achieve full standardization and to be integrated to future wireless systems. Here, we review the emerging research in the field of VLC networks and lay out the challenges, technological solutions, and future work predictions. Specifically, we first review the VLC channel capacity derivation and discuss the performance metrics and the associated variables. The optimization of VLC networks are also discussed, including resources and power allocation techniques, user-to-access point (AP) association and APs-to-clustered-users-association, APs coordination techniques, non-orthogonal multiple access (NOMA) VLC networks, simultaneous energy harvesting and information transmission using the visible light, and the security issues in VLC networks. Finally, we propose several open research problems to optimize the various VLC networks by maximizing either the sum rate, fairness, energy efficiency, secrecy rate, or harvested energy.
Some review articles have focused on specific aspects of VLC, such as VLC channel modeling methods @cite_47 , optical noise sources and noise mitigation mechanisms @cite_57 , VLC-based positioning techniques for indoor and outdoor applications @cite_18 , and the issues associated with the outdoor use of VLC in vehicular communication @cite_177 . These reviews generally identify emerging challenges and propose future research directions.
{ "abstract": [ "", "In the context of an increasing interest toward reducing the number of traffic accidents and of associated victims, communication-based vehicle safety applications have emerged as one of the best solutions to enhance road safety. In this area, visible light communications (VLC) have a great potential for applications due to their relatively simple design for basic functioning, efficiency, and large geographical distribution. This paper addresses the issues related to the VLC usage in vehicular communication applications, being the first extensive survey dedicated to this topic. Although VLC has been the focus of an intensive research during the last few years, the technology is still in its infancy and requires continuous efforts to overcome the current challenges, especially in outdoor applications, such as the automotive communications. This paper is aimed at providing an overview of several research directions that could transform VLC into a reliable component of the transportation infrastructure. The main challenges are identified and the status of the accomplishments in each direction is presented, helping one to understand what has been done, where the technology stands and what is still missing. The challenges for VLC usage in vehicle applications addressed by this survey are: 1) increasing the robustness to noise; 2) increasing the communication range; 3) enhancing mobility; 4) performing distance measurements and visible light positioning; 5) increasing data rate; 6) developing parallel VLC; and 7) developing heterogeneous dedicated short range communications and VLC networks. Addressing and solving these challenges lead to the perspective of fully demonstrating the high potential of VLC, and therefore, to enable the VLC usage in road safety applications. This paper also proposes several future research directions for the automotive VLC applications and offers a brief review on the associated standardization activities.", "Visible light communication VLC is a newly emerging technology, which integrates communications and lighting purposes, and has become a very active research topic in the areas of wireless communications. It is expected to become an important part of the next generation wireless communications because of its unique features in using unlicensed spectrum, support of high data rate, and its resistance to electromagnetic interferences. In this survey paper, we begin with a review on the basis of photometry, which is used to establish channel models of VLC systems. Then, we will continue to address various issues on the fundamental characteristic features of VLC systems, the impact of indoor environments on system performance, and the analysis and discussions of five different types of typical VLC channel models and other related parameters in VLC channel models. Finally, in terms of the future works, we will show the possible follow-up research focuses and directions as an effort to identify some new research topics on VLCs. Copyright © 2016 John Wiley & Sons, Ltd.", "As Global Positioning System (GPS) cannot provide satisfying performance in indoor environments, indoor positioning technology, which utilizes indoor wireless signals instead of GPS signals, has grown rapidly in recent years. Meanwhile, visible light communication (VLC) using light devices such as light emitting diodes (LEDs) has been deemed to be a promising candidate in the heterogeneous wireless networks that may collaborate with radio frequencies (RF) wireless networks. 
In particular, light-fidelity has a great potential for deployment in future indoor environments because of its high throughput and security advantages. This paper provides a comprehensive study of a novel positioning technology based on visible white LED lights, which has attracted much attention from both academia and industry. The essential characteristics and principles of this system are deeply discussed, and relevant positioning algorithms and designs are classified and elaborated. This paper undertakes a thorough investigation into current LED-based indoor positioning systems and compares their performance through many aspects, such as test environment, accuracy, and cost. It presents indoor hybrid positioning systems among VLC and other systems (e.g., inertial sensors and RF systems). We also review and classify outdoor VLC positioning applications for the first time. Finally, this paper surveys major advances as well as open issues, challenges, and future research directions in VLC positioning systems." ], "cite_N": [ "@cite_57", "@cite_177", "@cite_47", "@cite_18" ], "mid": [ "", "2619151842", "2320983102", "2791124749" ] }
0
1808.05089
2964249268
The evolving explosion in high data rate services and applications will soon require the use of untapped, abundant unregulated spectrum of the visible light for communications to adequately meet the demands of the fifth-generation (5G) mobile technologies. Radio-frequency (RF) networks are proving to be too scarce to cover the escalation in data rate services. Visible light communication (VLC) has emerged as a great potential solution, either in replacement of, or a complement to, existing RF networks, to support the projected traffic demands. Despite the prolific advantages of VLC networks, VLC faces many challenges that must be resolved in the near future to achieve full standardization and to be integrated to future wireless systems. Here, we review the emerging research in the field of VLC networks and lay out the challenges, technological solutions, and future work predictions. Specifically, we first review the VLC channel capacity derivation and discuss the performance metrics and the associated variables. The optimization of VLC networks are also discussed, including resources and power allocation techniques, user-to-access point (AP) association and APs-to-clustered-users-association, APs coordination techniques, non-orthogonal multiple access (NOMA) VLC networks, simultaneous energy harvesting and information transmission using the visible light, and the security issues in VLC networks. Finally, we propose several open research problems to optimize the various VLC networks by maximizing either the sum rate, fairness, energy efficiency, secrecy rate, or harvested energy.
Section II of this paper provides an overview of VLC technology and defines and discusses the objectives and constraints that must be taken into account when optimizing VLC networks. Special emphasis is placed on channel capacity derivations and on the unique properties of VLC. We also discuss the variables, parameters, and constraints that affect the performance of VLC networks. All optimization techniques are reviewed in Section III, including power and resource allocation, user-to-AP association, cell formation, and AP cooperation, which are used to mitigate the disadvantages of VLC networks and improve their performance. This important topic was previously investigated by Li et al. @cite_202 . However, their study focused on the difference between user-centric and network-centric cell formations and on interference reduction techniques, whereas in this paper we concentrate on the techniques, used in hybrid RF/VLC and in stand-alone VLC networks, that aim to alleviate the limitations of VLC networks. In other words, we show how optimization problems are formulated, which techniques are used to solve them, how the different objectives, limitations, and constraints are evaluated, and how adding RF APs can overcome the limitations of stand-alone VLC networks.
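To make the discussion of problem formulation concrete, the following is an illustrative, generic example of a user-to-AP association problem maximizing the sum rate; the binary variables, rate terms, and constraints are standard textbook elements and are not taken from the surveyed paper.

```latex
% Illustrative user-to-AP association formulation: x_{u,a} = 1 means user u is
% served by AP a, R_{u,a} is the achievable rate of that link, and N_a is a
% per-AP load limit. The single-association and load constraints are typical
% assumptions, not the exact model of the surveyed work.
\begin{align}
  \max_{x_{u,a} \in \{0,1\}} \quad & \sum_{u \in \mathcal{U}} \sum_{a \in \mathcal{A}} x_{u,a}\, R_{u,a} \\
  \text{s.t.} \quad & \sum_{a \in \mathcal{A}} x_{u,a} = 1, \quad \forall u \in \mathcal{U}, \\
  & \sum_{u \in \mathcal{U}} x_{u,a} \le N_a, \quad \forall a \in \mathcal{A}.
\end{align}
```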
{ "abstract": [ "In order to counteract the explosive escalation of wireless tele-traffic, the communication spectrum has been gradually expanded from the conventional radio frequency (RF) band to the optical wireless (OW) domain. By integrating the classic RF band relying on diverse radio techniques and optical bands, the next-generation heterogeneous networks (HetNets) are expected to offer a potential solution for supporting the ever-increasing wireless tele-traffic. Owing to its abundant unlicensed spectral resources, visible light communications (VLC) combined with advanced illumination constitute a competent candidate for complementing the existing RF networks. Although the advantages of VLC are multi-fold, some challenges arise when incorporating VLC into the classic RF HetNet environments, which may require new system architectures. The user-centric (UC) design principle for VLC environments constitutes a novel and competitive design paradigm for the super dense multi-tier cell combinations of HetNets. The UC concept may be expected to become one of the disruptive techniques to be used in the forthcoming fifth-generation era. This paper provides a comprehensive survey of visible-light-aided OW systems with special emphasis on the design and optimization of VLC networks, where the radically new UC design philosophy is reviewed. Finally, design guidelines are provided for VLC systems." ], "cite_N": [ "@cite_202" ], "mid": [ "2789388912" ] }
0
1808.04581
2949220746
The Authentication and Authorization for Constrained Environments (ACE) framework provides fine-grained access control in the Internet of Things, where devices are resource-constrained and with limited connectivity. The ACE framework defines separate profiles to specify how exactly entities interact and what security and communication protocols to use. This paper presents the novel ACE IPsec profile, which specifies how a client establishes a secure IPsec channel with a resource server, contextually using the ACE framework to enforce authorized access to remote resources. The profile makes it possible to establish IPsec Security Associations, either through their direct provisioning or through the standard IKEv2 protocol. We provide the first Open Source implementation of the ACE IPsec profile for the Contiki OS and test it on the resource-constrained Zolertia Firefly platform. Our experimental performance evaluation confirms that the IPsec profile and its operating modes are affordable and deployable also on constrained IoT platforms.
Finally, Sciancalepore et al. propose a different authorization framework for the IoT @cite_24 , also based on OAuth 2.0 and other standard protocols. In particular, it provides access control through an intermediary gateway acting as a mediator between IoT networks and non-constrained Internet segments. However, unlike the ACE framework, @cite_24 displays a considerably higher level of complexity and requires the intermediary gateway to be fully trusted.
{ "abstract": [ "While the Internet of Things is breaking into the market, the controlled access to constrained resources still remains a blocking concern. Unfortunately, conventional solutions already accepted for both web and cloud applications cannot be directly used in this context. In fact, they generally require high computational and bandwidth capabilities (that are impossible to reach with constrained devices) and offer poor interoperability against standardized communication protocols for the Internet of Things. To solve this issue, this contribution presents a flexible authentication and authorization framework for the Internet of Things, namely OAuth-IoT. It leverages and properly harmonizes existing open-standards (including the OAuth 2.0 authorization framework, different token formats, and the protocol suite for the Internet of Things tailored by the Internet Engineering Task Force), while carefully taking into account the limited capabilities of constrained devices. Functionalities and benefits offered by OAuth-IoT are pragmatically shown by means of an experimental testbed, and further demonstrated with a very preliminary performance assessment." ], "cite_N": [ "@cite_24" ], "mid": [ "2751141094" ] }
ACE of Spades in the IoT Security Game: A Flexible IPsec Security Profile for Access Control
Abstract-The Authentication and Authorization for Constrained Environments (ACE) framework provides fine-grained access control in the Internet of Things, where devices are resource-constrained and with limited connectivity. The ACE framework defines separate profiles to specify how exactly entities interact and what security and communication protocols to use. This paper presents the novel ACE IPsec profile, which specifies how a client establishes a secure IPsec channel with a resource server, contextually using the ACE framework to enforce authorized access to remote resources. The profile makes it possible to establish IPsec Security Associations, either through their direct provisioning or through the standard IKEv2 protocol. We provide the first Open Source implementation of the ACE IPsec profile for the Contiki OS and test it on the resource-constrained Zolertia Firefly platform. Our experimental performance evaluation confirms that the IPsec profile and its operating modes are affordable and deployable also on constrained IoT platforms. I. INTRODUCTION The Internet of Things (IoT) refers to network scenarios where billions of devices communicate over IP networks and are available on the Internet. This includes everyday objects and appliances, and has been constantly fostering a number of use cases and business opportunities, from sensor and actuator networks to smart buildings, from monitoring of critical infrastructures to controlled resource sharing. As more and more applications are being developed, the IoT is expected to have a huge impact on the way we live and work. At the same time, security plays a fundamental role, even during this transition process. In fact, ensuring security in IoT scenarios is of vital importance to counteract information breaches and service dysfunctions, which may result in severe performance degradation and privacy violations, or even threaten safety of people and infrastructures. Securing the IoT is thus vital to ensure its successful deployment and adoption. However, unlike in traditional networks, IoT devices are typically resource-constrained, i.e. equipped with limited resources. That is, they are scarce as to processing power, storage and energy availability, often being battery-powered. Besides, most IoT devices are wirelessly connected over lowpower and lossy networks, thus exhibiting limited connectivity and availability. Also, they often lack traditional user interfaces, and are likely deployed in unattended environments. As a result, protecting billions of IoT devices with traditional approaches is challenging, which fosters the development of novel security solutions suitable for the IoT. Yet, many of these solutions do not base on established standards and are difficult to scrutinize in terms of their security guarantees. The first security challenge consists in efficiently enabling secure communication and message exchange. Due to the resource-constrained nature of typical IoT devices, their great heterogeneity and their large-scale deployment, it is not feasible to rely on solutions for traditional network environments. To this end, a number of secure communication protocols for the IoT are available and have been increasingly adopted in constrained environments. In particular, [1] and [2] show how 6LoWPAN header compression mechanisms optimize security protocols to be deployable in resource-constrained networked scenarios. 
However, it can be very difficult to provision millions, or even billions, of resource-constrained IoT devices with the cryptographic keys necessary to securely communicate and operate. Even the establishment of secure sessions based on pre-shared symmetric keys can easily result in hard-to-manage and poorly scalable key distribution. The second critical security aspect concerns authorization and access control. Typically, a Client wants to access a resource hosted on a Resource Server (RS), which is often deployed as a resource-constrained device. This requires the Client and the RS to mutually authenticate, and must permit the RS to verify Client requests as previously authorized. In order to enable fine-grained and flexible access control in the IoT, the Authentication and Authorization for Constrained Environments (ACE) framework has been proposed [3], building on the authorization framework OAuth 2.0 [4]. The ACE framework relies on an Authorization Server (AS), that has a trust relation with the RS and authorizes resource accesses from requesting Clients, based on pre-defined policies. However, the ACE framework admits the definition of separate profiles describing how these actors interact with each other and what communication and security protocols they use. A few profiles have been proposed, including [5] for the DTLS protocol [6], as well as [7] for OSCORE [8]. The choice of the particular profile to use has to take into account the specific use case and its security requirements, as well as the related trust and security models. This naturally leads to the most suitable communication and security protocols to adopt, and hence to the related profile describing how to use them in the ACE framework. Figure 1 shows how an IoT application for home automation can leverage on the ACE Framework profiles, e.g. a traditional Internet host like a Smart phone can secure its communications with a smart lock using IPsec, DTLS or OSCORE. This paper presents the novel ACE IPsec profile, which describes how Client and RS set up and use an IPsec channel [9], contextually with the access control enforced by the AS. The profile displays two key benefits tightly paired with the access control provided by the ACE framework. First, it enables secure communication between Client and RS at the network layer, by flexibly leveraging the IPsec security protocols AH [10] and ESP [11], and thus counteracting network-layer attacks such as IP spoofing. This is fundamentally achieved by establishing IPsec Security Associations between Client and RS. Second, it efficiently addresses the provisioning of key material, by embedding the process in the authorization workflow of the ACE framework and taking advantage of the AS. Specifically, the IPsec Security Associations can be generated by the AS, and then directly provided to Client and RS. As an alternative, the AS provides the Client and RS with the necessary key material to establish the IPsec Security Associations through the standard IKEv2 protocol [12], based either on symmetric or asymmetric cryptography. In order to encourage wider acceptance and interoperability across multiple vendors, we submitted a draft description of our profile to the IETF for possible standardization [13]. The draft focuses on the theoretical contribution and practical considerations, and it does not refer to a particular implementation or experimental evaluation of the IPsec profile. 
In this paper, we additionally describe our implementation of the ACE framework and the ACE IPsec profile for the Contiki OS [14]. We test it on real IoT devices using the resource-constrained Zolertia Firefly platform [15]. Our implementation covers all the actors in the ACE framework and is available as open source software at [16]. To the best of our knowledge, this is the first implementation of the ACE framework for the Contiki OS, and the first one ever of its IPsec profile. Additionally, it targets scenarios where even the AS is a resource-constrained device. We utilize our implementation to experimentally evaluate the performance of the ACE framework when using the novel IPsec profile under different channel establishment and authentication methods. In particular, we consider message size, memory and energy consumption, and time required for the Client to perform an authorized resource access at the RS. Our results confirm that the IPsec profile is affordable on resource-constrained devices, and hence is effectively deployable in IoT scenarios to enforce access control paired with IPsec-based secure communication. The rest of the paper is organized as follows. Section II discusses the related work. Section III introduces background concepts and technologies. In Section IV, the ACE IPsec profile is introduced. Section V presents our performance evaluation. Finally, Section VI draws conclusive remarks. II. RELATED WORK Different profiles have been proposed for the Authentication and Authorization for Constrained Environments (ACE) framework. In [5], Gerdes et al. describe the Datagram Transport Layer Security (DTLS) profile, which delegates the authorization and authentication of a Client device to the establishment of a DTLS session [6] between the Client and a Resource Server (RS). Specifically, DTLS can be used in the symmetric Pre-Shared Key (PSK) mode or the asymmetric Raw Public Key (RPK) mode. If the PSK mode is used, the successful establishment of a DTLS session also acts as a proof-of-possession (PoP) for the Client's PSK. In case the RPK mode is used, the Client is authenticated through its asymmetric public key. Finally, this profile uses the Constrained Application Protocol (CoAP) [17] over DTLS between Client and RS. The feasibility of securing CoAP messages with DTLS has been investigated in [1]. The OSCORE profile of ACE proposed by Seitz et al. [7] provides communication security between Client and RS by means of the Object Security for Constrained RESTful Environments (OSCORE) protocol [8]. OSCORE ensures request/response binding and selectively protects CoAP messages at the application layer, by using the compact CBOR Object Signing and Encryption (COSE) [18] based on the Concise Binary Object Representation (CBOR) [19] as data encoding format. This provides true end-to-end secure communication between Client and RS, even in the presence of (untrusted) intermediary CoAP proxies, which remain able to perform their intended operations (e.g. message caching). This is not possible when DTLS is used, as it requires transport-layer security to be terminated at the proxy, which is thus able to inspect and possibly alter the entire content of CoAP messages exchanged between Client and RS. A secure context can be established directly from a symmetric PoP key, or by using external key establishment protocols. Currently, the DTLS and OSCORE profiles have not been implemented or evaluated for resource-constrained IoT devices. 
Compared to the OSCORE profile, the IPSec profile presented in this paper preserves and leverages a flexible key establishment based on the IKEv2 protocol [12], tightly paired with ACE authorization process. In addition, it makes it possible to employ policy-based traffic filtering, also during the actual establishment of IPsec channels between Client and RS. In contrast, this feature is not available for the DTLS and OSCORE profiles. Besides, we have implemented the IPsec profile together with the ACE framework on the Contiki OS, and tested it over resource-constrained IoT devices. Finally, Sciancalepore et al. propose a different authorization framework for the IoT [20], also based on OAuth 2.0 and other standard protocols. In particular, it provides access control through an intermediary gateway acting as mediator between IoT networks and non-constrained Internet segments. However, unlike the ACE framework, [20] displays a considerably higher level of complexity and requires the intermediary gateway to be fully trusted. A. OAuth 2.0 A typical security requirement in the Internet is authorization, i.e. the process for granting approval to a client that wants to access a resource [21]. The Open Authentication 2.0 (OAuth 2.0) authorization framework has asserted itself among the most adopted standards to enforce authorization [4]. OAuth 2.0 relies on an Authorization Server (AS) entity, and addresses all common issues of alternative approaches based on credential sharing, by introducing a proper authorization layer and separating the role of the actual resource owner from the role of the client accessing a resource. Specifically, OAuth 2.0 allows a client entity (e.g. a user, a host) to obtain a specific and limited access to a remote resource, hosted at a Resource Server (RS), while enforcing the permission from the original resource owner. That is, the resource owner grants authorization through the intermediary AS, which in turn provides the client with an access token including the actual authorization information. Access tokens consist of strings that are opaque to the client and encode decisions for authorized resource access in terms of duration and scope. Such decisions are ultimately taken by the AS and enforced by the RS upon processing the access token. In addition, the AS prevents non-authorized parties from tampering with issued access token or possibly generating bogus ones. To this end, the client presents the access token to the RS upon accessing the intended resource. Then, the RS verifies that the access token is valid, before proceeding with processing and serving the request from the client. This requires that: i) the client is pre-registered at the AS; ii) the AS securely communicates with both the client and the RS; and iii) the AS and RS have pre-established a trust relation. An AS may be associated with multiple RSs at the same time. The involved parties perform RESTful interactions via the HTTP protocol [22], contacting the RESTful endpoints associated to specific steps in the OAuth 2.0 flow. The approach adopted by OAuth 2.0 has become more and more important in IoT scenarios, where heterogeneous and resource constrained devices are deployed on a large scale, often configured as RS. However, these peculiarities make OAuth 2.0 as is not suitable for the IoT. This motivated the design of the Authentication and Authorization for Constrained Environments (ACE) framework [3], as a standard proposal under the Internet Engineering Task Force (IETF). 
The ACE framework builds on OAuth 2.0 in order to adapt and extend it for enforcing authorization in constrained IoT environments. To this end, it uses the basic OAuth 2.0 mechanisms where possible, while also providing application developers with extensions, profiles and additional guidance to ensure a privacy-oriented and secure usage. a) Actors: The ACE framework considers the following four actors, in accordance with the main paradigm inherited from OAuth 2.0. Client: the entity accessing a remote protected resource. Resource Server (RS): the entity hosting protected resources and serving requests from authorized clients. Authorization is enforced through access tokens that requesting clients provide to the endpoint /authz-info at the RS via a POST request. Authorization Server (AS): the entity authorizing Clients to access protected resources at the RS. The AS is typically equipped with plenty of resources and hosts two endpoints: i) the /token endpoint, for receiving Access Token Requests from Clients; and ii) the /introspect endpoint, that the RS can use to query for extra information on received access tokens. Resource Owner (RO): the entity owning a protected resource hosted at the RS, and entitled to grant access to it. The RO can dynamically provide its consent for giving a Client access to a protected resource, according to the traditional OAuth flows. However, the ACE framework is especially tailored to resource-constrained settings, where such consent is typically pre-configured as authorization policies at the AS. Such policies are then evaluated by the AS upon receiving a token request from a Client. In particular, the policies from the RO influence what claims the AS ultimately includes into the access token released to a requesting client. b) Building Blocks: From an operational point of view, the ACE framework consists of the following building blocks. OAuth [4] defines the overall authentication paradigm resulting in the protocol flows and actors' interaction. CoAP [17] is a RESTful application-layer protocol for the IoT, typically running over UDP and able to greatly limit overhead and message exchanges. As CoAP is lightweight and tailored to resource-constrained IoT devices, it is the preferred choice in the ACE framework. Also, CoAP has been designed to explicitly support operations of intermediary Proxy nodes. Concise Binary Object Representation (CBOR) [19] is a compact version of JavaScript Object Notation (JSON) [23], i.e. a light-weight format for data interchange which is easy to create and process. In particular, CBOR enables binary encoding of small messages conveying self-contained access tokens, CoAP POST parameters, and CoAP responses. CBOR Object Signing and Encryption (COSE) [18] enables application-layer security in the ACE framework, especially in order to secure access tokens. c) Authorization credentials: In order to access protected resources hosted at a RS, a Client must get the right authorization credentials in the form of an access token. Specifically, an access token is a data structure including authorization permissions issued by the AS, provided to the Client, and delivered to the RS for authorized resource access. Access tokens are opaque to the Client, i.e. their semantics are unknown to the Client, and are cryptographically protected, e.g. by means of COSE [18]. That is, access tokens are intelligible only to the RS and the AS. 
A proof-of-possession (PoP) token is an access token bound to a cryptographic key, which is used by the RS to authenticate a Client request. PoP tokens rely on the AS to act as Trusted Third Party (TTP), in order to bind a PoP key (PoPK) to an access token. PoP keys can be based on symmetric or asymmetric cryptography. In case of a symmetric PoP key, the AS generates it and provides it to the Client and the RS. To this end, the AS can: i) make it available at the /introspect endpoint; or ii) provide it to the RS (Client) in the access token (Access Token Response). For asymmetric PoP keys, the Client generates a key pair, and provides the public key to the AS in the Access Token Request. Also, the AS provides the RS' public key to the Client in the Access Token Response. The Client's public key is made available to the RS through the /introspect endpoint, or conveyed in the access token. The ACE framework delegates to separate security profiles the description of how enforcing secure communication and mutual authentication among the involved parties, as well as the details about their specific interactions. In particular, a security profile must specify: i) the communication and security protocols between the RS and the Client, as well as the methods to achieve mutual authentication; ii) the communication and security protocols for interactions between the Client and the AS; iii) the PoP protocols to use and how to select one; and iv) the mechanisms to protect the /authzinfo endpoint at the RS. The AS informs the Client of the specific profile to use by means of the profile parameter in the Access Token Response. Also, the AS is expected to know what profiles are supported by the Client and RS. B. The ACE framework The protocol flow in the ACE framework consists of the following steps, also shown in Figure 2. Communications between Client and AS as well as between RS and AS should be secured, in accordance with the used security profile. Once the access token has been successfully validated and a secure channel has been established, the RS processes the Resource Request received from the Client at step (C). Then, the RS provides the requested resource to the Client over the established secure channel, in accordance with the used security profile. C. IPsec and IKEv2 The IPsec suite is a collection of protocols to secure IP-based communications at the network layer [9]. It fundamentally relies on Security Associations (SAs), each of which describes how to secure a one-way channel between two parties. Thus, two SAs are required to secure a two-way communication channel. An IPsec SA is identified by a Security Parameters Index (SPI), and it specifies cryptographic material, as well as the parameters and protocols to secure IP packets through the IPsec channel. This includes the security protocol to be used, i.e. Authentication Header Protocol (AH) [10] or Encapsulating Security Protocol (ESP) [11]. In particular, AH enables connectionless integrity and data origin authentication. Instead, ESP provides confidentiality, data origin authentication, connectionless integrity, replay protection and limited traffic flow confidentiality. Although both protocols provide integrity protection, AH additionally protects the header of IP packets. Both AH and ESP can operate in two modes, namely transport and tunnel. The former processes IP packets without changing the IP headers, while the latter encapsulates the original IP packet into a new one, thus protecting its payload and header. 
Finally, SAs are established manually or dynamically, e.g. by using Internet Key Exchange Protocol version 2 (IKEv2) [12] as key exchange protocol. In particular, IKEv2 enables mutual authentication between two parties through a Diffie-Hellman (DH) key exchange, using the pre-shared key (PSK) or the certificate raw public key (Cert) mode. The usual execution of IKEv2 consists of two pairs of request/response messages, i.e. IKE SA INIT and IKE AUTH. This establishes: i) an IKEv2 SA to protect IKEv2 traffic; and ii) a first IPsec SA to protect the actual IP traffic. Further SAs can be derived through CREATE CHILD SA messages. IV. PROTOCOL OVERVIEW In this section, we describe the ACE IPsec profile. The profile provides an operative instance of the ACE framework, by defining the communication and security protocols used by a Client to perform an authenticated and authorized access to a protected resource hosted at a Resource Server (RS). In particular, it considers the IPsec protocol suite and the IKEv2 key management protocol to enforce secure communications between Client and RS, server authentication and proof-ofpossession bound to an ACE access token. Hereafter, we denote with SA-C the SA used for the unidirectional IPsec channel from the Client to the RS, while with SA-RS the SA used for the unidirectional IPsec channel from the RS to the Client. Also, information to build SAs is encoded as the newly introduced ipsec structure in the ACE access token. Such information includes: i) two SPIs, namely SPI SA C and SPI SA RS; ii) the IPsec mode, i.e. transport or tunnel; iii) the security protocol, i.e. AH or ESP; iv) cryptographic keys; v) the key establishment method to fully setup the two-way IPsec channel; and vi) the SAs' lifetime. In particular, SPI SA C (SPI SA RS) refers to SA-C (SA-RS). In case tunnel mode is chosen, source and destination IP addresses are also specified. A. Key Establishment Methods The IPsec profile provides three methods for establishing a pair of SAs, and hence a two-way IPsec channel between Client and RS. The three methods are: i) Direct Provisioning (DP); ii) establishment with IKEv2 and symmetrickey authentication; and iii) establishment with IKEv2 and asymmetric-key authentication. For every method, the ipsec structure always specifies the protocol mode, the security protocol and the SAs' lifetime. Instead, the SPIs, algorithm and cryptographic keys are specified in different ways, depending on the specific key establishment method. That is, if the Direct Provisioning (DP) method is used, this set of information are explicitly provided. Otherwise, that is IKEv2 is used as Key Management Protocol (KMP), this set of information is not explicitly provided, but rather negotiated and established when the Client and RS performs IKEv2. The choice of the particular method to use should be driven by the capabilities of the Client and RS, as well as by the policies and infrastructure used in the specific use case for provisioning and managing key material. In particular, the DP method is extremely efficient and hence preferable for very constrained devices, as the Client and RS do not take the explicit burden to establish an SA pair. However, it does not provide strict assurances in terms of perfect forward secrecy. On the other hand, the two methods based on IKEv2 do provide perfect forward secrecy, as a native feature of the IKEv2 protocol. 
However, this requires the Client and RS to perform a full establishment of their SA pair through IKEv2, with a consequent considerable commitment in terms of resources. The particular choice among IKEv2 symmetric-key and asymmetric-key authentication method really depends on the key infrastructure of the specific use case. While Certificate-based public keys are typically more cumbersome to handle and process, they are often preferable to pre-shared keys that do result in more efficient processing while at the same in more complicated provisioning and management operations. In the following, we provide more details about the three key establishment methods. 1) Direct Provisioning (DP). In this method, the SA pair is pre-defined by the AS. That is, SA-RS and SA-C are specified in the access token and in the RS Information of the Access Token Response that the AS sends to the Client. Note that the AS cannot guarantee the uniqueness of the SPI SA C identifier at the RS, and of the SPI SA RS identifier at the Client. In order to address possible collisions with a previously defined SPI, the AS generates SPI SA C and SPI SA RS as random values. By doing so, the probability of a collision to occur is at most 2 −32 for 32-bit long SPIs. In case a collision occurs at the RS, i.e. the RS receives an access token with a SPI SA C value already used by another SA, the RS replies to the Client with an error message and aborts the setup of the IPsec channel. In network scenario scenarios where such additional overhead is not affordable, it is possible to reserve in advance a pool of SPI values intended to be used only with the DP method. This pool is exclusively managed by the AS. Then, when an IPsec channel is closed and the related pair of SAs become stale, the RS asks the AS to restore the SPI of that SA-C as available. Instead, in case a collision occurs at the Client, i.e. the Client receives a SPI SA RS value already used by other SA, the Client sends a second Access Token Request to the AS, asking for an updated access token. This token request also includes an ipsec structure containing only the field SPI SA RS specifying an available identifier to use. Then, the AS replies with the corresponding Access Token and RS Information updated only as to the requested SPI SA RS. 2) IKEv2 with symmetric-key authentication. This method uses the IKEv2 protocol to establish the SA pair between Client and RS, while providing mutual authentication through symmetric cryptography. The Client and RS run IKEv2 in symmetric mode, using a symmetric PSK provided by AS and bound to the access token as a PoP key. The PSK is made available to the Client in the Access Token Response, and to the RS in the access token. If the Client is interacting with the RS for the first time, the AS includes also a unique key identifier of the PSK in the Access Token Response. Otherwise, the Client includes in the Access Token Request a key identifier pointing at a previously established PSK. 3) IKEv2 with asymmetric-key authentication. This method uses the IKEv2 protocol to establish the SA pair between Client and RS, while providing mutual authentication through asymmetric cryptography. The Client and RS run IKEv2 in asymmetric mode, using their RPK or Certificatebased Public Key (CPK) bound to the access token as PoP keys. The RS's RPK/CPK is made available to the Client in the Access Token Response, while the Client's RPK/CPK is made available to the RS in access token. 
Similarly to the previous method, if the Client is interacting with the AS for the first time, it includes its RPK or CPK in the Access Token Request. Otherwise, the Client includes a key identifier linked to its own RPK or CPK, which is already available at the AS. B. Protocol Description In this section, we describe the message exchanges occuring in the ACE framework, in the presence of our Internet Protocol Security (IPsec) profile. Intuitively, the workflow consists of three phases, as shown in Figure 3. Phase (I) -Unauthorized Client to Resource Server. During this phase, the Client can retrieve information necessary to contact the AS, unless already available. In particular, the Client sends an unauthorized request to the RS, which formally denies the request and replies by indicating the associated AS to contact for obtaining an access token. Phase (II) -Client to Authorization Server. During this phase, the Client sends an Access Token Request to the /token endpoint at the AS, indicating the resource of interest at the RS and the access scope, i.e. the intended operations on such resource. Then, the AS processes the Access Token Request and verifies that the Client is allowed to access the specified protected resource at the RS. In such a case, the AS replies with an access token and the RS information as part of the Access Token Response. In particular, the access token (RS information) includes parameters and key material intended for the RS (the Client) to set up an IPsec as a pair of SAs. The exact information to exchange between the Client and the AS depends on the SA establishment method IV-A. Unlike the DP method, the alternative ones require the Client and the RS to establish the SA pair by running IKEv2. To this end, the AS indicates the specific KMP to use in the kmp field of the access token and of the RS Information. Specifically, kmp is set to "ikev2" to signal the use of the IKEv2 protocol. Provided that the involved parties have the necessary support, it is possible to use and specify a different key management protocol. Note that the AS is aware of the Client's and RS's capabilities as well as of RS's preferred and supported communication settings [3]. Therefore, the AS is able to set the security and network Parameters for the SA pair consistently with that Client-RS pair. Phase (III) -Client to Resource Server In this phase, the Client posts the access token to the /authz-info endpoint at the AS, through a POST CoAP message. Then, the Client and the RS set up the SA pair and the IPsec channel, based on the establishment method signalled by the AS. In particular: a) The DP method is signalled by the presence of the ipsec structure, while the "COSE Key" field is not present. 1 . b) A symmetric-key authenticated establishment is signalled by including a "COSE Key" object with the key type parameter "kty" set to "Symmetric", and by indicating the usage of IKEv2 with the kmp field set to "ikev2". c) An asymmetric-key authenticated establishment is indicated by including a "COSE Key" object with the key type parameter "kty" indicating the usage of asymmetric cryptography, e.g. "EC", by and indicating the usage of IKEv2 with the kmp field set to "ikev2". In case the DP method is used, the Client and the RS already have all the information to start the IPsec channel, and do not need to explicitly interact with each other. 
Instead, if any of the authenticated establishment methods is used, the Client and the RS perform an actual SA pair establishment through IKEv2 according to the authentication mode indicated by the "kty" field. In the following, we describe how the client and Client and RS finalize/setup the IPsec channel, given the specific establishment method. a) Direct Provisioning. The Client derives all the necessary key material from the "seed" field of the "ipsec" structure in the RS Information. The Client uses the seed to perform the a key derivation algorithm as in IKEv2 [12]. Upon correct submission and successful verification of the access token at the /authz-info endpoint, the RS performs the same key derivation process. The RS replies to the Client over the IPsec channel, according to what specified in SA-RS. Thereafter, any further communication performed during b) Authenticated SA Establishment using IKEv2. the Client and the RS run the IKEv2 protocol, and use the key material in the respectively received "COSE key" object in order to achieve mutual authentication. In particular, the Client posts the access token to the /authz-info endpoint to the RS, which sends back the first IKEv2 message IKE SA INIT to acknowledge the correct reception of the access token. Depending on the type of key used as PoP Key (PoPK), i.e. symmetric or asymmetric, the IKEv2 protocol is executed in the corresponding mode [12], i.e. PSK, CPK or RPK, with no modifications. If the IKEv2 execution is successfully completed, the Client and the RS agree on key material, parameters and algorithms used to enforce the IPsec channel. V. PERFORMANCE EVALUATION AND DISCUSSION In this section we present the performance evaluation of the different key establishment methods for the ACE IPsec Framework, described in Section IV. The implementation is written for Contiki OS [14] based on existing libraries and protocol implementations [24]. In particular, our code was evaluated on the Zolertia Firefly platform, equipped with the CC2538 radio chipset, 32 kB of RAM and 512 kB of flash ROM [15]. A device of this class supports a power supply from two AA (AAA) batteries, each of which typically provides an energy content of 9.36 (5.07) KJ. Our implementation leverage on hardware-based cryptography and utilizes the following algorithms: AES-CCM* to secure the IEEE 802.15.4 link layer and the COSE objects; AES-128 to provide confidentiality for IPsec and IKEv2 with an 8-bytes long Initialization Vector (IV); ECC-DH with 256bit Random ECP Group [25], SHA-2 and a Hash-based MAC (HMAC) based on SHA-256 for the authenticated exchange of IKEv2 [26], [27]. IPsec and IKEv2 traffic is encrypted using ESP in transport mode. The scenario to be evaluated is the following: a Client requests access a protected resource stored in a constrained RS. The authentication and authorization of this request are delegated to the AS. In our experimental setup the AS is as well a resource-constrained device and performs routing related activities. Namely, it is set as the root of the Direct Acyclic Graph (DAG) and [28]. We evaluate four setup configurations: a baseline configuration (Base), i.e. the ACE Framework w.o. the IPsec profile; and the three key establishment methods: DP, Establishment with symmetric key authentication (IKE-PSK) and Establishment with asymmetric key authentication (IKE-CPK) [12]. Our experimental results include: memory footprint, packet size and time and energy measurements. 
The latter measurements are evaluated as the shown in Figure 4 which depicts where time and energy measurements are performed. This measurements are labeled as follows: (0) for Access Token decoding; (1) for the Client to AS Exchange; (2) for Access Token and RS Information encoding; (3) for the Client to RS Exchange; (4) for the Access Token setup; and (5) for the IPsec channel establishment. In Table I we provide the size of packets exchanged by our profile. The packet exchanges are labeled as in Figure 2. This measurements reflect the size of the CoAP messages. The last row of Table I provides the size of the Access token, which has a big influence on the message size. In Figure 5 we provide the Memory Footprint evaluation results for the different SA establishment methods of the IPsec profile. We show the absolute value of ROM and RAM footprints for the setups configuration Base, DP, IKE-PSK and IKE-CPK. Time measurements are collected using the system clock measured in system ticks. To convert our measurements to seconds the following formula is applied: time = sys clock ticks/second Note that measurements (1) To measure the energy consumed by the devices we use powertrace, a run-time power profiling mechanism which is part of Contiki. This tool has an accuracy of 94% with an 0.6% overhead [29]. The energy consumption out of the powertrace measurements is computed as follows: energy = powertrace value * current * voltage ticks/second For every setup configuration, 20 runs of the protocol were considered. We give average results for successful handshakes without packet loss. Note that wireless communication can be lossy in constrained environments with a loss rate typically increasing for larger packet sizes. In this case, the handshake duration as well as the energy consumption increase due to the retransmission of the packets. In Figure 6 we show the time and energy results of the access token processing. On the right side of the figure we depict the contribution of the four steps involving token processing, i.e. (0) Token decoding performed at the Client and the RS, (2) Token encoding and (4) Token Setup, as in Figure 4. On the left side, we categorized these operations in crypto-and non-crypto-related actions. The Client-to-AS message exchange results are shown in Figure 7. In this figure we can observe Client-to-AS network latency, processing time and energy consumption, i.e. measurement (1) in Figure 4. A comparable performance disregarding the SA establishment method can be observed. Namely, the Access Token Request/Response present a consistent behavior across the different setup configurations. In Figure 8 we show the Client-to-RS message exchange evaluation results, i.e. the measurement tag as (3) in Figure 4. We can observe that the results for the IKEv2-based methods are comparable with the IKE-CPK, showing a slightly bigger energy consumption. At the same time, DP time and energy results are notably lower than the IKE-based key establishment methods. The total energy spent in a DP establishment for (3) is on average 15 mJ, and the exchange is done in less than 1 ms on average. The evaluation results of the the establishment of a secure channel between RS and the Client are shown in Figure 9. Note that only IKE-based establishments perform an IPsec SA establishment, since in the DP method the IPsec SAs are provided by the AS. The Client and the RS perform similarly during (5), as in Figure 4. 
The comparable performance of the Client and the RS during (5) is aligned with the symmetric nature of the IPsec protocol: both ends of the communication play a similar role, unlike protocols such as DTLS, where the client and server roles have different responsibilities. Within this symmetry, however, it is noticeable that (5) takes longer for the RS than for the Client, which reflects the fact that the RS is the initiator of the IPsec channel establishment. The energy spent in the transmission state, labeled TX in Figures 7, 8 and 9, is negligible when compared with the energy spent by the CPU or in the reception state, labeled RX in the same figures. The RX measurements, on the other hand, represent a significant share of the energy spent during a protocol run. This behavior is due to the fact that the radio receiver is always kept on in our resource-constrained devices. Energy optimization techniques such as Radio Duty Cycling (RDC), specified and benchmarked in [30], are out of the scope of this work. VI. CONCLUSION This paper has presented our novel ACE IPsec profile for authentication and authorization in the IoT. Our profile enables the scalable and flexible establishment of IPsec communication channels between Clients and Resource Servers, while contextually enforcing fine-grained access control from the ACE framework. In particular, IPsec Security Associations can be either directly provided to the Client and the Resource Server, or established through the standard IKEv2 key management protocol. We have implemented the IPsec profile for the Contiki OS and carried out an experimental performance evaluation, considering resource-constrained IoT devices of the Zolertia Firefly platform. Results show that, under different configurations and authentication modes, our ACE IPsec profile is affordable even on resource-constrained devices. It is therefore effectively deployable in IoT scenarios to successfully enforce access control paired with IPsec-based secure communication. Future work will focus on implementing alternative profiles of ACE and comparing their performance in resource-constrained IoT settings.
6,397
1808.04446
2950895666
Recent breakthroughs in computer vision and natural language processing have spurred interest in challenging multi-modal tasks such as visual question-answering and visual dialogue. For such tasks, one successful approach is to condition image-based convolutional network computation on language via Feature-wise Linear Modulation (FiLM) layers, i.e., per-channel scaling and shifting. We propose to generate the parameters of FiLM layers going up the hierarchy of a convolutional network in a multi-hop fashion rather than all at once, as in prior work. By alternating between attending to the language input and generating FiLM layer parameters, this approach is better able to scale to settings with longer input sequences such as dialogue. We demonstrate that multi-hop FiLM generation achieves state-of-the-art for the short input sequence task ReferIt --- on-par with single-hop FiLM generation --- while also significantly outperforming prior state-of-the-art and single-hop FiLM generation on the GuessWhat?! visual dialogue task.
The game @cite_23 has been a testbed for various vision-and-language tasks over the past years, including object retrieval @cite_19 @cite_15 @cite_32 @cite_34 @cite_11 @cite_44 , semantic image segmentation @cite_28 @cite_1 , and generating referring descriptions @cite_15 @cite_11 @cite_32 . To tackle object retrieval, @cite_19 @cite_15 @cite_44 extract additional visual features such as relative object locations and @cite_32 @cite_11 use reinforcement learning to iteratively train the object retrieval and description generation models. Closer to our work, @cite_25 @cite_34 use the full image and the object crop to locate the correct object. While some previous work relies on task-specific modules @cite_15 @cite_44 , our approach is general and can be easily extended to other vision-and-language tasks.
{ "abstract": [ "In this paper we approach the novel problem of segmenting an image based on a natural language expression. This is different from traditional semantic segmentation over a predefined set of semantic classes, as e.g., the phrase “two men sitting on the right bench” requires segmenting only the two people on the right bench and no one standing or sitting on another bench. Previous approaches suitable for this task were limited to a fixed set of categories and or rectangular regions. To produce pixelwise segmentation for the language expression, we propose an end-to-end trainable recurrent and convolutional network model that jointly learns to process visual and linguistic information. In our model, a recurrent neural network is used to encode the referential expression into a vector representation, and a fully convolutional network is used to a extract a spatial feature map from the image and output a spatial response map for the target object. We demonstrate on a benchmark dataset that our model can produce quality segmentation output from the natural language expression, and outperforms baseline methods by a large margin.", "Grounding (i.e. localizing) arbitrary, free-form textual phrases in visual content is a challenging problem with many applications for human-computer interaction and image-text reference resolution. Few datasets provide the ground truth spatial localization of phrases, thus it is desirable to learn from data with no or little grounding supervision. We propose a novel approach which learns grounding by reconstructing a given phrase using an attention mechanism, which can be either latent or optimized directly. During training our approach encodes the phrase using a recurrent network language model and then learns to attend to the relevant image region in order to reconstruct the input phrase. At test time, the correct attention, i.e., the grounding, is evaluated. If grounding supervision is available it can be directly applied via a loss over the attention mechanism. We demonstrate the effectiveness of our approach on the Flickr30k Entities and ReferItGame datasets with different levels of supervision, ranging from no supervision over partial supervision to full supervision. Our supervised variant improves by a large margin over the state-of-the-art on both datasets.", "Referring expressions are natural language constructions used to identify particular objects within a scene. In this paper, we propose a unified framework for the tasks of referring expression comprehension and generation. Our model is composed of three modules: speaker, listener, and reinforcer. The speaker generates referring expressions, the listener comprehends referring expressions, and the reinforcer introduces a reward function to guide sampling of more discriminative expressions. The listener-speaker modules are trained jointly in an end-to-end learning framework, allowing the modules to be aware of one another during learning while also benefiting from the discriminative reinforcer&#x2019;s feedback. We demonstrate that this unified framework and training achieves state-of-the-art results for both comprehension and generation on three referring expression datasets.", "In this paper, we address referring expression comprehension: localizing an image region described by a natural language expression. 
While most recent work treats expressions as a single unit, we propose to decompose them into three modular components related to subject appearance, location, and relationship to other objects. This allows us to flexibly adapt to expressions containing different types of information in an end-to-end framework. In our model, which we call the Modular Attention Network (MAttNet), two types of attention are utilized: language-based attention that learns the module weights as well as the word phrase attention that each module should focus on; and visual attention that allows the subject and relationship modules to focus on relevant image components. Module weights combine scores from all three modules dynamically to output an overall score. Experiments show that MAttNet outperforms previous state-of-art methods by a large margin on both bounding-box-level and pixel-level comprehension tasks.", "Referring expressions usually describe an object using properties of the object and relationships of the object with other objects. We propose a technique that integrates context between objects to understand referring expressions. Our approach uses an LSTM to learn the probability of a referring expression, with input features from a region and a context region. The context regions are discovered using multiple-instance learning (MIL) since annotations for context objects are generally not available for training. We utilize max-margin based MIL objective functions for training the LSTM. Experiments on the Google RefExp and UNC RefExp datasets show that modeling context between objects provides better performance than modeling only object properties. We also qualitatively show that our technique can ground a referring expression to its referred region along with the supporting context region.", "In this paper we introduce a new game to crowd-source natural language referring expressions. By designing a two player game, we can both collect and verify referring expressions directly within the game. To date, the game has produced a dataset containing 130,525 expressions, referring to 96,654 distinct objects, in 19,894 photographs of natural scenes. This dataset is larger and more varied than previous REG datasets and allows us to study referring expressions in real-world scenes. We provide an in depth analysis of the resulting dataset. Based on our findings, we design a new optimization based model for generating referring expressions and perform experimental evaluations on 3 test sets.", "Humans refer to objects in their environments all the time, especially in dialogue with other people. We explore generating and comprehending natural language referring expressions for objects in images. In particular, we focus on incorporating better measures of visual context into referring expression models and find that visual comparison to other objects within an image helps improve performance significantly. We also develop methods to tie the language generation process together, so that we generate expressions for all objects of a particular category jointly. Evaluation on three recent datasets - RefCOCO, RefCOCO+, and RefCOCOg (Datasets and toolbox can be downloaded from https: github.com lichengunc refer), shows the advantages of our methods for both referring expression generation and comprehension.", "Recognising objects according to a pre-defined fixed set of class labels has been well studied in the Computer Vision. 
There are a great many practical applications where the subjects that may be of interest are not known beforehand, or so easily delineated, however. In many of these cases natural language dialog is a natural way to specify the subject of interest, and the task achieving this capability (a.k.a, Referring Expression Comprehension) has recently attracted attention. To this end we propose a unified framework, the ParalleL AttentioN (PLAN) network, to discover the object in an image that is being referred to in variable length natural expression descriptions, from short phrases query to long multi-round dialogs. The PLAN network has two attention mechanisms that relate parts of the expressions to both the global visual content and also directly to object candidates. Furthermore, the attention mechanisms are recurrent, making the referring process visualizable and explainable. The attended information from these dual sources are combined to reason about the referred object. These two attention mechanisms can be trained in parallel and we find the combined system outperforms the state-of-art on several benchmarked datasets with different length language input, such as RefCOCO, RefCOCO+ and GuessWhat?!.", "In this paper, we address the task of natural language object retrieval, to localize a target object within a given image based on a natural language query of the object. Natural language object retrieval differs from text-based image retrieval task as it involves spatial information about objects within the scene and global scene context. To address this issue, we propose a novel Spatial Context Recurrent ConvNet (SCRC) model as scoring function on candidate boxes for object retrieval, integrating spatial configurations and global scene-level contextual information into the network. Our model processes query text, local image descriptors, spatial configurations and global context features through a recurrent network, outputs the probability of the query text conditioned on each candidate box as a score for the box, and can transfer visual-linguistic knowledge from image captioning domain to our task. Experimental results demonstrate that our method effectively utilizes both local and global information, outperforming previous baseline methods significantly on different datasets and scenarios, and can exploit large scale vision and language datasets for knowledge transfer.", "We consider generation and comprehension of natural language referring expression for objects in an image. Unlike generic image captioning which lacks natural standard evaluation criteria, quality of a referring expression may be measured by the receivers ability to correctly infer which object is being described. Following this intuition, we propose two approaches to utilize models trained for comprehension task to generate better expressions. First, we use a comprehension module trained on human-generated expressions, as a critic of referring expression generator. The comprehension module serves as a differentiable proxy of human evaluation, providing training signal to the generation module. Second, we use the comprehension model in a generate-and-rerank pipeline, which chooses from candidate expressions generated by a model according to their performance on the comprehension task. We show that both approaches lead to improved referring expression generation on multiple benchmark datasets." 
], "cite_N": [ "@cite_28", "@cite_1", "@cite_32", "@cite_44", "@cite_19", "@cite_23", "@cite_15", "@cite_34", "@cite_25", "@cite_11" ], "mid": [ "2302548814", "2247513039", "2571175805", "2784458614", "2964284374", "2251512949", "2489434015", "2770129969", "2963735856", "2583360688" ] }
Visual Reasoning with Multi-hop Feature Modulation
Computer vision has witnessed many impressive breakthroughs over the past decades in image classification [27,15], image segmentation [30], and object detection [12] by applying convolutional neural networks to large-scale, labeled datasets, often exceeding human performance. These systems give outputs such as class labels, segmentation masks, or bounding boxes, but it would be more natural for humans to interact with these systems through natural language. To this end, the research community has introduced various multi-modal tasks, such as image captioning [48], referring expressions [23], visual question-answering [1,34], visual reasoning [21], and visual dialogue [6,5]. These tasks require models to effectively integrate information from both vision and language. One common approach is to process both modalities independently with large unimodal networks before combining them through concatenation [34], element-wise product [25,31], or bilinear pooling [11]. Inspired by the success of attention in machine translation [3], several works have proposed to incorporate various forms of spatial attention to bias models towards focusing on question-specific image regions [48,47]. (Fig. 1: The ReferIt task identifies a selected object (in the bounding box) using a single expression, while in GuessWhat?!, a speaker localizes the object with a series of yes or no questions. Example — ReferIt: "the girl with a sweater", "the fourth person", "the girl holding a white frisbee"; GuessWhat?!: "Is it a person? Yes", "Is it a girl? Yes", "Does she have a blue frisbee? No".) However, spatial attention sometimes only gives modest improvements over simple baselines for visual question answering [20] and can struggle on questions involving multi-step reasoning [21]. More recently, [44,38] introduced Feature-wise Linear Modulation (FiLM) layers as a promising approach for vision-and-language tasks. These layers apply a per-channel scaling and shifting to a convolutional network's visual features, conditioned on an external input such as language, e.g., captions, questions, or full dialogues. Such feature-wise affine transformations allow models to dynamically highlight the key visual features for the task at hand. The parameters of FiLM layers which scale and shift features or feature maps are determined by a separate network, the so-called FiLM generator, which predicts these parameters using the external conditioning input. Within various architectures, FiLM has outperformed the prior state-of-the-art for visual question-answering [44,38], multi-modal translation [7], and language-guided image segmentation [40]. However, the best way to design the FiLM generator is still an open question. For visual question-answering and visual reasoning, prior work uses single-hop FiLM generators that predict all FiLM parameters at once [38,44]. That is, a Recurrent Neural Network (RNN) sequentially processes input language tokens and then outputs all FiLM parameters via a Multi-Layer Perceptron (MLP). In this paper, we argue that using a Multi-hop FiLM Generator is better suited for tasks involving longer input sequences and multi-step reasoning such as dialogue. Even for shorter input sequence tasks, single-hop FiLM generators can require a large RNN to achieve strong performance; on the CLEVR visual reasoning task [21], which only involves a small vocabulary and templated questions, the FiLM generator in [38] uses an RNN with 4096 hidden units that comprises almost 90% of the model's parameters.
Models with Multi-hop FiLM Generators may thus be easier to scale to more difficult tasks involving human-generated language with larger vocabularies and more ambiguity. As an intuitive example, consider the dialogue in Fig. 1 through which one speaker localizes the second girl in the image, the one who does not "have a blue frisbee." For this task, a single-hop model must determine upfront what steps of reasoning to carry out over the image and in what order; thus, it might decide in a single shot to highlight feature maps throughout the visual network detecting either non-blue colors or girls. In contrast, a multi-hop model may first determine the most immediate step of reasoning necessary (i.e., locate the girls), highlight the relevant visual features, and then determine the next immediate step of reasoning necessary (i.e., locate the blue frisbee), and so on. While it may be appropriate to reason in either way, the latter approach may scale better to longer language inputs and/or to ambiguous images where the full sequence of reasoning steps is hard to determine upfront, which can even be further enhanced by having intermediate feedback while processing the image. In this paper, we therefore explore several approaches to generating FiLM parameters in multiple hops. These approaches introduce an intermediate context embedding that controls the language and visual processing, and they alternate between updating the context embedding via an attention mechanism over the language sequence (and optionally by incorporating image activations) and predicting the FiLM parameters. We evaluate Multi-hop FiLM generation on ReferIt [23] and GuessWhat?! [6], two vision-and-language tasks illustrated in Fig. 1. We show that Multi-hop FiLM models significantly outperform their single-hop counterparts and prior state-of-the-art for the longer input sequence, dialogue-based GuessWhat?! task while matching the state-of-the-art performance of other models on ReferIt. Our best GuessWhat?! model only updates the context embedding using the language input, while for ReferIt, incorporating visual feedback to update the context embedding improves performance. In summary, this paper makes the following contributions: -We introduce the Multi-hop FiLM architecture and demonstrate that our approach matches or significantly improves state-of-the-art on the GuessWhat?! Oracle task, GuessWhat?! Guesser task, and ReferIt Guesser task. -We show Multi-hop FiLM models outperform their single-hop counterparts on vision-and-language tasks involving complex visual reasoning. -We find that updating the context embedding of the Multi-hop FiLM Generator based on visual feedback may be helpful in some cases, such as for tasks which do not include object category labels, like ReferIt. Recurrent Neural Networks One common approach in natural language processing is to use a Recurrent Neural Network (RNN) to encode some linguistic input sequence l into a fixed-size embedding. The input (such as a question or dialogue) consists of a sequence of words $\omega_{1:T}$ of length T, where each word $\omega_t$ is contained within a predefined vocabulary V. We embed each input token via a learned look-up table e and obtain a dense word-embedding $e_{\omega_t} = e(\omega_t)$.
The sequence of embeddings $\{e_{\omega_t}\}_{t=1}^{T}$ is then fed to an RNN, which produces a sequence of hidden states $\{s_t\}_{t=1}^{T}$ by repeatedly applying a transition function f: $s_{t+1} = f(s_t, e_{\omega_t})$. To better handle long-term dependencies in the input sequence, we use a Gated Recurrent Unit (GRU) [4] with layer normalization [2] as the transition function. In this work, we use a bidirectional GRU, which consists of one forward GRU, producing hidden states $\overrightarrow{s}_t$ by running from $\omega_1$ to $\omega_T$, and a second backward GRU, producing states $\overleftarrow{s}_t$ by running from $\omega_T$ to $\omega_1$. We concatenate both unidirectional GRU states $s_t = [\overrightarrow{s}_t; \overleftarrow{s}_t]$ at each step t to get a final GRU state, which we then use as the compressed embedding $e_l$ of the linguistic sequence l. Attention The form of attention we consider was first introduced in the context of machine translation [3,33]. This mechanism takes a weighted average of the hidden states of an encoding RNN based on their relevance to a decoding RNN at various decoding time steps. Subsequent spatial attention mechanisms have extended the original mechanism to image captioning [48] and other vision-and-language tasks [47,24]. More formally, given an arbitrary linguistic embedding $e_l$ and image activations $F_{w,h,c}$, where w, h, c are the width, height, and channel indices, respectively, of the image features F at one layer, we obtain a final visual embedding $e_v$ as follows: $\xi_{w,h} = MLP(g(F_{w,h,\cdot}, e_l))$; $\alpha_{w,h} = \frac{\exp(\xi_{w,h})}{\sum_{w',h'} \exp(\xi_{w',h'})}$; $e_v = \sum_{w,h} \alpha_{w,h} F_{w,h,\cdot}$, (1) where MLP is a multi-layer perceptron and g(., .) is an arbitrary fusion mechanism (concatenation, element-wise product, etc.). We will use Multi-modal Low-rank Bilinear (MLB) attention [24], which defines g(., .) as: $g(F_{w,h,\cdot}, e_l) = \tanh(U^T F_{w,h,\cdot}) \circ \tanh(V^T e_l)$, (2) where $\circ$ denotes an element-wise product and where U and V are trainable weight matrices. We choose MLB attention because it is parameter efficient and has shown strong empirical performance [24,22]. Feature-wise Linear Modulation Feature-wise Linear Modulation was introduced in the context of image stylization [8] and extended and shown to be highly effective for multi-modal tasks such as visual question-answering [44,38,7]. A Feature-wise Linear Modulation (FiLM) layer applies a per-channel scaling and shifting to the convolutional feature maps. Such layers are parameter efficient (only two scalars per feature map) while still retaining high capacity, as they are able to scale up or down, zero-out, or negate whole feature maps. In vision-and-language tasks, another network, the so-called FiLM generator h, predicts these modulating parameters from the linguistic input $e_l$. More formally, a FiLM layer computes a modulated feature map $\hat{F}_{w,h,c}$ as follows: $[\gamma; \beta] = h(e_l)$; $\hat{F}_{\cdot,\cdot,c} = \gamma_c F_{\cdot,\cdot,c} + \beta_c$, (3) where $\gamma$ and $\beta$ are the scaling and shifting parameters which modulate the activations of the original feature map $F_{\cdot,\cdot,c}$. We will use the superscript $k \in [1; K]$ to refer to the k-th FiLM layer in the network. FiLM layers may be inserted throughout the hierarchy of a convolutional network, either pre-trained and fixed [6] or trained from scratch [38]. Prior FiLM-based models [44,38,7] have used a single-hop FiLM generator to predict the FiLM parameters in all layers, e.g., an MLP which takes the language embedding $e_l$ as input [44,38,7]. Multi-hop FiLM In this section, we introduce the Multi-hop FiLM architecture (shown in Fig.
2) to predict the parameters of FiLM layers in an iterative fashion, to better scale to longer input sequences such as in dialogue. Another motivation was to better disentangle the linguistic reasoning from the visual one by iteratively attending to both pipelines. We introduce a context vector $c^k$ that acts as a controller for the linguistic and visual pipelines. We initialize the context vector with the final state of a bidirectional RNN, $s_T$, and repeat the following procedure for each of the FiLM layers in sequence (from the lowest to the highest convolutional layer): first, the context vector is updated by performing attention over the RNN states (extracting relevant language information), and second, the context is used to predict a layer's FiLM parameters (dynamically modulating the visual information). Thus, the context vector enables the model to perform multi-hop reasoning over the linguistic pipeline while iteratively modulating the image features. More formally, the context vector is computed as follows: $c^0 = s_T$, $c^k = \sum_t \kappa^k_t(c^{k-1}, s_t)\, s_t$, (4) where: $\kappa^k_t(c^{k-1}, s_t) = \frac{\exp(\chi^k_t)}{\sum_{t'} \exp(\chi^k_{t'})}$; $\chi^k_t(c^{k-1}, s_t) = MLP_{Attn}(g'(c^{k-1}, s_t))$, (5) where the dependence of $\chi^k_t$ on $(c^{k-1}, s_t)$ may be omitted to simplify notation. $MLP_{Attn}$ is a network (shared across layers) which aids in producing attention weights. g' can be any fusion mechanism that facilitates selecting the relevant context to attend to; here we use a simple dot-product following [33], so $g'(c^{k-1}, s_t) = c^{k-1} \cdot s_t$. Finally, FiLM is carried out using a layer-dependent neural network $MLP^k_{FiLM}$: $[\gamma^k; \beta^k] = MLP^k_{FiLM}(c^k)$; $\hat{F}^k_{\cdot,\cdot,c} = \gamma^k_c F^k_{\cdot,\cdot,c} + \beta^k_c$. (6) As a regularization, we append a normalization layer [2] on top of the context vector after each attention step. External information. Some tasks provide additional information which may be used to further improve the visual modulation. For instance, GuessWhat?! provides spatial features of the ground truth object to models which must answer questions about that object. Our model incorporates such features by concatenating them to the context vector before generating FiLM parameters. Visual feedback. Inspired by the co-attention mechanism [31,54], we also explore incorporating visual feedback into the Multi-hop FiLM architecture. To do so, we first extract the image or crop features $F^k$ (immediately before modulation) and apply a global mean-pooling over the spatial dimensions. We then concatenate this visual state into the context vector $c^k$ before generating the next set of FiLM parameters. Experiments In this section, we first introduce the ReferIt and GuessWhat?! datasets and respective tasks and then describe our overall Multi-hop FiLM architecture. Dataset ReferIt [23,51] is a cooperative two-player game. The first player (the Oracle) selects an object in a rich visual scene, for which they must generate an expression that refers to it (e.g., "the person eating ice cream"). Based on this expression, the second player (the Guesser) must then select an object within the image. Four ReferIt datasets exist: RefClef, RefCOCO, RefCOCO+ and RefCOCOg. The first dataset contains 130K references over 20K images from the ImageClef dataset [35], while the three other datasets respectively contain 142K, 142K and 86K references over 20K, 20K and 27K images from the MSCOCO dataset [29]. Each dataset has small differences. RefCOCO and RefClef were constructed using different image sets.
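Returning to the Multi-hop FiLM update of Eqs. 4-6, the following minimal NumPy sketch shows one pass of the generator: attention over the Bi-GRU states refreshes the context vector, which then yields the FiLM parameters that modulate the k-th block's feature maps. The shapes, the random linear maps standing in for $MLP^k_{FiLM}$, and the omission of $MLP_{Attn}$ and of the context normalization are simplifying assumptions of this sketch, not the paper's exact configuration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

T, D, C, K = 12, 64, 32, 4     # tokens, Bi-GRU state size, feature channels, FiLM layers
rng = np.random.default_rng(0)

S = rng.normal(size=(T, D))    # Bi-GRU states s_1 .. s_T (assumed precomputed)
feats = [rng.normal(size=(14, 14, C)) for _ in range(K)]        # F^k before modulation
W_film = [rng.normal(size=(D, 2 * C)) * 0.1 for _ in range(K)]  # stands in for MLP^k_FiLM

c = S[-1]                      # c^0 = s_T
for k in range(K):
    # Hop k: dot-product fusion g'(c, s_t) = c . s_t scores every language state;
    # MLP_Attn and the layer normalization of the context are omitted for brevity.
    kappa = softmax(S @ c)     # attention weights kappa^k_t  (Eq. 5)
    c = kappa @ S              # updated context vector c^k   (Eq. 4)

    # Predict per-channel FiLM parameters and modulate the k-th feature maps (Eq. 6).
    gamma, beta = np.split(c @ W_film[k], 2)
    feats[k] = gamma[None, None, :] * feats[k] + beta[None, None, :]
```

In the full model, this loop runs once per FiLM layer of the visual pipeline described in the following sections.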
RefCOCO+ forbids certain words to prevent object references from being too simplistic, and RefCOCOg only relies on images containing 2-4 objects from the same category. RefCOCOg also contains longer and more complex sentences than RefCOCO (8.4 vs. 3.5 average words). Here, we will show results on both the Guesser and Oracle tasks. GuessWhat?! [6] is a cooperative three-agent game in which players see the picture of a rich visual scene with several objects. One player (the Oracle) is randomly assigned an object in the scene. The second player (the Questioner) aims to ask a series of yes-no questions to the Oracle to collect enough evidence to allow the third player (the Guesser) to correctly locate the object in the image. The GuessWhat?! dataset is composed of 131K successful natural language dialogues containing 650K question-answer pairs on over 63K images from MSCOCO [29]. Dialogues contain 5.2 question-answer pairs and 34.4 words on average. Here, we will focus on the Guesser and Oracle tasks. Task Descriptions Game Features. Both games consist of triplets (I, l, o), where $I \in \mathbb{R}^{3 \times M \times N}$ is an RGB image and l is some language input (i.e., a series of words) describing an object o in I. The object o is defined by an object category, a pixel-wise segmentation, an RGB crop of I based on bounding box information, and handcrafted spatial information $x_{spatial}$, where $x_{spatial} = [x_{min}, y_{min}, x_{max}, y_{max}, x_{center}, y_{center}, w_{box}, h_{box}]$. (7) We replace words with two or fewer occurrences with an <unk> token. The Oracle task. Given an image I, an object o, a question q, and a sequence of $\delta$ previous question-answer pairs $(q, a)_{1:\delta}$, where $a \in \{\text{Yes}, \text{No}, \text{N/A}\}$, the Oracle's task is to produce an answer a that correctly answers the question q. The Guesser task. Given an image I, a list of objects $O = o_{1:\Phi}$, a target object $o^* \in O$ and the dialogue D, the Guesser needs to output a probability $\sigma_\phi$ that each object $o_\phi$ is the target object $o^*$. Following [17], the Guesser is evaluated by selecting the object with the highest probability of being correct. Note that even if the individual probabilities $\sigma_\phi$ are between 0 and 1, their sum can be greater than 1. More formally, the Guesser loss and error are computed as follows: $L_{Guesser} = \frac{-1}{N_{games}} \sum_n^{N_{games}} \frac{1}{\Phi_n} \sum_\phi^{\Phi_n} \log p(o^* | I_n, o^n_\phi, D_n)$, (8) $E_{Guesser} = \frac{-1}{N_{games}} \sum_n^{N_{games}} \mathbb{1}(o^* = o_{\arg\max_\phi \sigma^n_\phi})$, (9) where $\mathbb{1}$ is the indicator function and $\Phi_n$ the number of objects in the n-th game. Model We use similar models for both ReferIt and GuessWhat?! and provide their architectural details in this subsection. Object embedding The object category is fed into a dense look-up table $e_{cat}$, and the spatial information is scaled to [-1; 1] before being up-sampled via nonlinear projection to $e_{spat}$. We do not use the object category in ReferIt models. Visual Pipeline We first resize the image and object crop to 448×448 before extracting 14 × 14 × 1024-dimensional features from a ResNet-152 [15] (block3) pre-trained on ImageNet [41]. Following [38], we feed these features to a 3 × 3 convolution layer with Batch Normalization [19] and a Rectified Linear Unit [37] (ReLU). We then stack four modulated residual blocks (shown in Fig. 2), each producing a set of feature maps $F^k$ via (in order) a 1 × 1 convolutional layer (128 units), ReLU activations, a 3 × 3 convolutional layer (128 units), and an untrainable Batch Normalization layer. The residual block then modulates $F^k$ with a FiLM layer to get $\hat{F}^k$, before again applying ReLU activations.
Lastly, a residual connection sums the activations of both ReLU outputs. After the last residual block, we use a 1 × 1 convolution layer (512 units) with Batch Normalization and ReLU, followed by MLB attention [24] (256 units and 1 glimpse) to obtain the final embedding $e_v$. Note that our model uses two independent visual pipeline modules: one to extract modulated image features $e^{img}_v$, and one to extract modulated crop features $e^{crop}_v$. To incorporate spatial information, we concatenate two coordinate feature maps indicating the relative x and y spatial positions (scaled to [−1, 1]) with the image features before each convolution layer (except for convolutional layers followed by FiLM layers). In addition, the pixel-wise segmentations $S \in \{0, 1\}^{M \times N}$ are rescaled to 14 × 14 floating-point masks before being concatenated to the feature maps. Linguistic Pipeline We compute the language embedding by using a word-embedding look-up (200 dimensions) with dropout, followed by a Bi-GRU (512 × 2 units) with Layer Normalization [2]. As described in Section 3, we initialize the context vector with the last RNN state, $c^0 = s_T$. We then attend to the other Bi-GRU states via an attention mechanism with a linear projection and ReLU activations, and regularize the new context vector with Layer Normalization. FiLM parameter generation We concatenate the spatial information $e_{spat}$ and the object category information $e_{cat}$ to the context vector. In some experiments, we also concatenate a fourth embedding consisting of intermediate visual features. Training Process We train our model end-to-end with Adam [26] (learning rate $3e^{-4}$), dropout (0.5), weight decay ($5e^{-6}$) for convolutional network layers, and a batch size of 64. We report results after early stopping on the validation set with a maximum of 15 epochs. Baselines In our experiments, we re-implement several baseline models to benchmark the performance of our models. The standard Baseline NN simply concatenates the mean-pooled image and object crop features, the linguistic embedding, the spatial embedding, and the category embedding (GuessWhat?! only), passing those features to the same final layers described in our proposed model. We refer to a model which uses the MLB attention mechanism to pool the visual features as Baseline NN+MLB. We also implement a Single-hop FiLM mechanism, which is equivalent to setting all context vectors equal to the last state of the Bi-GRU, $e_{l,T}$. Finally, we experiment with injecting intermediate visual features into the FiLM Generator input, and we refer to this model as Multi-hop FiLM (+img). Results ReferIt Guesser We report the best test error of the outlined methods on the ReferIt Guesser task in Tab. 1. GuessWhat?! Oracle We report the best test error of several variants of GuessWhat?! Oracle models in Tab. 2. First, we quantify visual and language biases by predicting the Oracle's target answer using only the image (46.7% error) or the question (41.1% error). As first reported in [6], we observe that the baseline methods perform worse when integrating the image and crop inputs (21.1%) rather than solely using the object category and spatial location (20.6%). On the other hand, concatenating previous question-answer pairs to answer the current question is beneficial in our experiments. Finally, using Single-hop FiLM reduces the error to 17.6% and Multi-hop FiLM further to 16.9%, outperforming the previous best model by 2.4%. GuessWhat?! Guesser We provide the best test error of the outlined methods on the GuessWhat?!
Guesser task in Tab. 3. As a baseline, we find that random object selection achieves an error rate of 82.9%. Our initial baseline model performs significantly worse (38.3%) than concurrent models (36.6%). Discussion Single-hop FiLM vs. Multi-hop FiLM In the GuessWhat?! task, Multi-hop FiLM outperforms Single-hop FiLM by 6.1% on the Guesser task but only 0.7% on the Oracle task. We think that the small performance gain for the Oracle task is due to the nature of the task; to answer the current question, it is often not necessary to look at previous question-answer pairs, and in most cases this task does not require a long chain of reasoning. On the other hand, the Guesser task needs to gather information across the whole dialogue in order to correctly retrieve the object, and it is therefore more likely to benefit from multi-hop reasoning. The same trend can be observed for ReferIt. Single-hop FiLM and Multi-hop FiLM perform similarly on RefClef and RefCOCO, while we observe 1.3% and 2% gains on RefCOCO+ and RefCOCOg, respectively. This pattern of performance is intuitive, as the former datasets consist of shorter referring expressions (3.5 average words) than the latter (8.4 average words in RefCOCOg), and the latter datasets also consist of richer, more complex referring expressions, due, e.g., to taboo words (RefCOCO+). In short, our experiments demonstrate that Multi-hop FiLM is better able to reason over complex linguistic sequences. Reasoning mechanism We conduct several experiments to better understand our method. First, we assess whether Multi-hop FiLM performs better because of increased network capacity. We remove the attention mechanism over the linguistic sequence and update the context vector via a shared MLP. We observe that this change significantly hurts performance across all tasks, e.g., increasing the Multi-hop FiLM error of the Guesser from 30.5 to 37.3%. Second, we investigate how the model attends to GuessWhat?! dialogues for the Oracle and Guesser tasks, providing more insight into how the model reasons over the language input. We first look at the top activation in the (crop) attention layers to observe where the most prominent information is. Note that similar trends are observed for the image pipeline. As one would expect, the Oracle is focused on a specific word in the last question 99.5% of the time, one which is crucial to answer the question at hand. However, this ratio drops to 65% in the Guesser task, suggesting the model is reasoning in a different way. If we then extract the top 3 activations per layer, the attention points to <yes> or <no> tokens (respectively) at least once, 50% of the time for the Oracle and Guesser, showing that the attention is able to correctly split the dialogue into question-answer pairs. Finally, we plot the attention masks for each FiLM layer to have a better intuition of this reasoning process in Fig. 4. Crop vs. Image. We also evaluate the impact of using the image and/or crop on the final error for the Guesser task (Tab. 3). Using the image alone (while still including object category and spatial information) performs worse than using the crop. However, using image and crop together inarguably gives the lowest errors, though prior work has not always used the crop due to architecture-specific GPU limitations [44]. Visual feedback We explore whether adding visual feedback to the context embedding improves performance. While it has little effect on the GuessWhat?!
Oracle and Guesser tasks, it improves the accuracy on ReferIt by 1-2%. Note that ReferIt does not include class labels of the selected object, so the visual feedback might act as a surrogate for this information. To further investigate this hypothesis, we remove the object category from the GuessWhat?! task and report results in Tab. 5 in the supplementary material. In this setup, we indeed observe a relative improvement of 0.4% on the Oracle task, further confirming this hypothesis. Pointing Task In GuessWhat?!, the Guesser must select an object from among a list of items. A more natural task would be to have the Guesser directly point out the object, as a human might. Thus, in the supplementary material, we introduce this task and provide initial baselines (Tab. 7), which include FiLM models. This task shows ample room for improvement, with a best test error of 84.0%. Related Work The ReferIt game [23] has been a testbed for various vision-and-language tasks over the past years, including object retrieval [36,51,52,54,32,50], semantic image segmentation [16,39], and generating referring descriptions [51,32,52]. To tackle object retrieval, [36,51,50] extract additional visual features such as relative object locations, and [52,32] use reinforcement learning to iteratively train the object retrieval and description generation models. Closer to our work, [17,54] use the full image and the object crop to locate the correct object. While some previous work relies on task-specific modules [51,50], our approach is general and can be easily extended to other vision-and-language tasks. The GuessWhat?! game [6] can be seen as a dialogue version of the ReferIt game, one which additionally draws on visual question answering ability. [42,28,53] make headway on the dialogue generation task via reinforcement learning. However, these approaches are bottlenecked by the accuracy of the Oracle and Guesser models, despite existing modeling advances [54,44]; accurate Oracle and Guesser models are crucial for providing a meaningful learning signal for dialogue generation models, so we believe the Multi-hop FiLM architecture will facilitate high quality dialogue generation as well. A special case of Feature-wise Linear Modulation was first successfully applied to image style transfer [8], whose approach modulates image features according to some image style (i.e., cubism or impressionism). [44] extended this approach to vision-and-language tasks, injecting FiLM-like layers along the entire visual pipeline of a pre-trained ResNet. [38] demonstrates that a convolutional network with FiLM layers achieves strong performance on CLEVR [21], a task that focuses on answering reasoning-oriented, multi-step questions about synthetic images. Subsequent work has demonstrated that FiLM and variants thereof are effective for video object segmentation, where the conditioning input is the first image's segmentation (instead of language) [49], and for language-guided image segmentation [40]. Even more broadly, [9] overviews the strength of FiLM-related methods across machine learning domains, ranging from reinforcement learning to generative modeling to domain adaptation. There are other notable models that decompose reasoning into different modules. For instance, Neural Turing Machines [13,14] divide a model into a controller with read and write units. Memory networks use an attention mechanism to answer a query by reasoning over a linguistic knowledge base [45,43] or image features [46].
A memory network updates a query vector by performing several attention hops over the memory before outputting a final answer from this query vector. Although Multi-hop FiLM computes a similar context vector, this intermediate embedding is used to predict FiLM parameters rather than the final answer. Thus, Multi-hop FiLM includes a second reasoning step over the image. Closer to our work, [18] designed networks composed of Memory, Attention, and Control (MAC) cells to perform visual reasoning. Similar to Neural Turing Machines, each MAC cell is composed of a control unit that attends over the language input, a read unit that attends over the image, and a write unit that fuses both pipelines. Though conceptually similar to Multi-hop FiLM models, Compositional Attention Networks differ structurally, for instance using a dynamic neural architecture and relying on spatial attention rather than FiLM. Conclusion In this paper, we introduce a new way to exploit Feature-wise Linear Modulation (FiLM) layers for vision-and-language tasks. Our approach generates the parameters of FiLM layers going up the visual pipeline by attending to the language input in multiple hops rather than all at once. We show that Multi-hop FiLM Generator architectures are better able to handle longer sequences than their single-hop counterparts. We outperform state-of-the-art vision-and-language models significantly on the long input sequence GuessWhat?! tasks, while maintaining state-of-the-art performance for the shorter input sequence ReferIt task. Finally, this Multi-hop FiLM Generator approach uses few problem-specific priors, and thus we believe it can be extended to a variety of vision-and-language tasks, particularly those requiring complex visual reasoning. Oracle (Without Category Label) For existing tasks on the GuessWhat?! dataset, the Guesser selects its predicted target object from among a provided list of possible answers. A more natural task would be for the Guesser to directly point out the object, much as a human might. Thus, we introduce a pointing task as a new benchmark for GuessWhat?!. The specific task is to locate the intended object based on a series of questions and answers; however, instead of selecting the object from a list, the Guesser must output a bounding box around the object of its guess, making the task more challenging. This task also does not include important side information, namely the object category and (x,y)-position [6], making the object retrieval more difficult than the originally introduced Guesser task as well. The bounding box is defined more specifically as the 4-tuple (x, y, width, height), where (x, y) is the coordinate of the top-left corner of the box within the original image I, given an input dialogue. Additional Results ReferIt ImageClef We assess bounding box accuracy using the Intersection over Union (IoU) metric: the area of the intersection of the predicted and ground truth bounding boxes, divided by the area of their union. Prior work [10,12] generally considers an object found if the IoU exceeds 0.5: $IoU = \frac{|bbox_A \cap bbox_B|}{|bbox_A \cup bbox_B|} = \frac{|bbox_A \cap bbox_B|}{|bbox_A| + |bbox_B| - |bbox_A \cap bbox_B|}$. (10) We report model error in Table 7. Interestingly, the baseline obtains 92.0% error while Multi-hop FiLM obtains 84.0% error. As previously mentioned, re-injecting visual features into the Multi-hop FiLM Generator's context cell is beneficial. The error rates are relatively high but still in line with those of similar pointing tasks such as SCRC [16,17] (around 90%) on ReferIt.
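The IoU criterion of Eq. 10 is straightforward to implement; the sketch below assumes boxes in the (x, y, width, height) format used for the pointing task above. The example values are arbitrary.

```python
def iou(box_a, box_b):
    # Boxes are (x, y, width, height) with (x, y) the top-left corner.
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    inter_w = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    inter_h = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = inter_w * inter_h
    union = aw * ah + bw * bh - inter       # |A| + |B| - |A intersect B|
    return inter / union if union > 0 else 0.0

# An object is typically counted as found when IoU exceeds 0.5.
print(iou((10, 10, 50, 40), (30, 20, 50, 40)) > 0.5)
```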
5,091
1808.04446
2950895666
Recent breakthroughs in computer vision and natural language processing have spurred interest in challenging multi-modal tasks such as visual question-answering and visual dialogue. For such tasks, one successful approach is to condition image-based convolutional network computation on language via Feature-wise Linear Modulation (FiLM) layers, i.e., per-channel scaling and shifting. We propose to generate the parameters of FiLM layers going up the hierarchy of a convolutional network in a multi-hop fashion rather than all at once, as in prior work. By alternating between attending to the language input and generating FiLM layer parameters, this approach is better able to scale to settings with longer input sequences such as dialogue. We demonstrate that multi-hop FiLM generation achieves state-of-the-art for the short input sequence task ReferIt --- on-par with single-hop FiLM generation --- while also significantly outperforming prior state-of-the-art and single-hop FiLM generation on the GuessWhat?! visual dialogue task.
The game @cite_22 can be seen as a dialogue version of the game, one which additionally draws on visual question answering ability. @cite_52 @cite_45 @cite_49 make headway on the dialogue generation task via reinforcement learning. However, these approaches are bottlenecked by the accuracy of Oracle and Guesser models, despite existing modeling advances @cite_34 @cite_2 ; accurate Oracle and Guesser models are crucial for providing a meaningful learning signal for dialogue generation models, so we believe the architecture will facilitate high quality dialogue generation as well.
{ "abstract": [ "We introduce GuessWhat?!, a two-player guessing game as a testbed for research on the interplay of computer vision and dialogue systems. The goal of the game is to locate an unknown object in a rich image scene by asking a sequence of questions. Higher-level image understanding, like spatial reasoning and language grounding, is required to solve the proposed task. Our key contribution is the collection of a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images. We explain our design decisions in collecting the dataset and introduce the oracle and questioner tasks that are associated with the two players of the game. We prototyped deep learning models to establish initial baselines of the introduced tasks.", "", "Goal-oriented dialogue has been paid attention for its numerous applications in artificial intelligence. To solve this task, deep learning and reinforcement learning have recently been applied. However, these approaches struggle to find a competent recurrent neural questioner, owing to the complexity of learning a series of sentences. Motivated by theory of mind, we propose \"Answerer in Questioner's Mind\" (AQM), a novel algorithm for goal-oriented dialogue. With AQM, a questioner asks and infers based on an approximated probabilistic model of the answerer. The questioner figures out the answerer's intent via selecting a plausible question by explicitly calculating the information gain of the candidate intentions and possible answers to each question. We test our framework on two goal-oriented visual dialogue tasks: \"MNIST Counting Dialog\" and \"GuessWhat?!.\" In our experiments, AQM outperforms comparative algorithms and makes human-like dialogue. We further use AQM as a tool for analyzing the mechanism of deep reinforcement learning approach and discuss the future direction of practical goal-oriented neural dialogue systems.", "", "It is commonly assumed that language refers to high-level visual concepts while leaving low-level visual processing unaffected. This view dominates the current literature in computational models for language-vision tasks, where visual and linguistic inputs are mostly processed independently before being fused into a single representation. In this paper, we deviate from this classic pipeline and propose to modulate the by a linguistic input. Specifically, we introduce Conditional Batch Normalization (CBN) as an efficient mechanism to modulate convolutional feature maps by a linguistic embedding. We apply CBN to a pre-trained Residual Network (ResNet), leading to the MODulatEd ResNet ( ) architecture, and show that this significantly improves strong baselines on two visual question answering tasks. Our ablation study confirms that modulating from the early stages of the visual processing is beneficial.", "Recognising objects according to a pre-defined fixed set of class labels has been well studied in the Computer Vision. There are a great many practical applications where the subjects that may be of interest are not known beforehand, or so easily delineated, however. In many of these cases natural language dialog is a natural way to specify the subject of interest, and the task achieving this capability (a.k.a, Referring Expression Comprehension) has recently attracted attention. 
To this end we propose a unified framework, the ParalleL AttentioN (PLAN) network, to discover the object in an image that is being referred to in variable length natural expression descriptions, from short phrases query to long multi-round dialogs. The PLAN network has two attention mechanisms that relate parts of the expressions to both the global visual content and also directly to object candidates. Furthermore, the attention mechanisms are recurrent, making the referring process visualizable and explainable. The attended information from these dual sources are combined to reason about the referred object. These two attention mechanisms can be trained in parallel and we find the combined system outperforms the state-of-art on several benchmarked datasets with different length language input, such as RefCOCO, RefCOCO+ and GuessWhat?!." ], "cite_N": [ "@cite_22", "@cite_52", "@cite_45", "@cite_49", "@cite_2", "@cite_34" ], "mid": [ "2558809543", "2599940792", "2785722920", "", "2963245493", "2770129969" ] }
Visual Reasoning with Multi-hop Feature Modulation
RefCOCO+ forbids certain words to prevent object references from being too simplistic, and RefCOCOg only relies on images containing 2-4 objects from the same category. RefCOCOg also contains longer and more complex sentences than RefCOCO (8.4 vs. 3.5 average words). Here, we will show results on both the Guesser and Oracle tasks. GuessWhat?! [6] is a cooperative three-agent game in which players see the picture of a rich visual scene with several objects. One player (the Oracle) is randomly assigned an object in the scene. The second player (Questioner) aims to ask a series of yes-no questions to the Oracle to collect enough evidence to allow the third player (Guesser) to correctly locate the object in the image. The GuessWhat?! dataset is composed of 131K successful natural language dialogues containing 650k question-answer pairs on over 63K images from MSCOCO [29]. Dialogues contain 5.2 question-answer pairs and 34.4 words on average. Here, we will focus on the Guesser and Oracle tasks. Task Descriptions Game Features. Both games consist of triplets (I, l, o), where I ∈ R 3×M ×N is an RGB image and l is some language input (i.e., a series of words) describing an object o in I. The object o is defined by an object category, a pixel-wise segmentation, an RGB crop of I based on bounding box information, and handcrafted spatial information x spatial , where x spatial = [x min , y min , x max , y max , x center , y center , w box , h box ](7) We replace words with two or fewer occurrences with an <unk> token. The Oracle task. Given an image I, an object o, a question q, and a sequence δ of previous question-answer pairs (q, a) 1:δ where a ∈ {Yes, No, N/A}, the oracle's task is to produce an answer a that correctly answers the question q. The Guesser task. Given an image I, a list of objects O = o 1:Φ , a target object o * ∈ O and the dialogue D, the guesser needs to output a probability σ φ that each object o φ is the target object o * . Following [17], the Guesser is evaluated by selecting the object with the highest probability of being correct. Note that even if the individual probabilities σ φ are between 0 and 1, their sum can be greater than 1. More formally, the Guesser loss and error are computed as follows: L Guesser = −1 N games Ngames n 1 Φ n Φ φ log(p(o * |I n , o n φ , D n ))(8)E Guesser = −1 N games Ngames n 1(o * = o argmax φ σ n φ )(9) where 1 is the indicator function and Φ n the number of objects in the n th game. Model We use similar models for both ReferIt and GuessWhat?! and provide its architectural details in this subsection. Object embedding The object category is fed into a dense look-up table e cat , and the spatial information is scaled to [-1;1] before being up-sampled via nonlinear projection to e spat . We do not use the object category in ReferIt models. Visual Pipeline We first resized the image and object crop to 448×448 before extracting 14 × 14 × 1024 dimensional features from a ResNet-152 [15] (block3) pre-trained on ImageNet [41]. Following [38], we feed these features to a 3 × 3 convolution layer with Batch Normalization [19] and Rectified Linear Unit [37] (ReLU). We then stack four modulated residual blocks (shown in Fig 2), each producing a set of feature maps F k via (in order) a 1 × 1 convolutional layer (128 units), ReLU activations, a 3 × 3 convolutional layer (128 units), and an untrainable Batch Normalization layer. The residual block then modulates F k with a FiLM layer to getF k , before again applying ReLU activations. 
Lastly, a residual connection sums the activations of both ReLU outputs. After the last residual block, we use a 1 × 1 convolution layer (512 units) with Batch Normalization and ReLU followed by MLB attention [24] (256 units and 1 glimpse) to obtain the final embedding e v . Note our model uses two independent visual pipeline modules: one to extract modulated image features e img v , one to extract modulated crop features e crop v . To incorporate spatial information, we concatenate two coordinate feature maps indicating relative x and y spatial position (scaled to [−1, 1]) with the image features before each convolution layer (except for convolutional layers followed by FiLM layers). In addition, the pixel-wise segmentations S ∈ {0, 1} M ×N are rescaled to 14 × 14 floating point masks before being concatenated to the feature maps. Linguistic Pipeline We compute the language embedding by using a wordembedding look-up (200 dimensions) with dropout followed by a Bi-GRU (512 × 2units) with Layer Normalization [2]. As described in Section 3, we initialize the context vector with the last RNN state c 0 = s T . We then attend to the other Bi-GRU states via an attention mechanism with a linear projection and ReLU activations and regularize the new context vector with Layer Normalization. FiLM parameter generation We concatenate spatial information e spat and object category information e cat to the context vector. In some experiments, we also concatenate a fourth embedding consisting of intermediate visual features Training Process We train our model end-to-end with Adam [26] (learning rate 3e −4 ), dropout (0.5), weight decay (5e −6 ) for convolutional network layers, and a batch size of 64. We report results after early stopping on the validation set with a maximum of 15 epochs. Baselines In our experiments, we re-implement several baseline models to benchmark the performance of our models. The standard Baseline NN simply concatenates the image and object crop features after mean pooling, the linguistic embedding, and the spatial embedding and the category embedding (GuessWhat?! only), passing those features to the same final layers described in our proposed model. We refer to a model which uses the MLB attention mechanism to pool the visual features as Baseline NN+MLB. We also implement a Single-hop FiLM mechanism which is equivalent to setting all context vectors equal to the last state of the Bi-GRU e l,T . Finally, we experiment with injecting intermediate visual features into the FiLM Generator input, and we refer to the model as Multi-hop FiLM (+img). Results ReferIt Guesser We report the best test error of the outlined methods on the ReferIt Guesser task in Tab GuessWhat?! Oracle We report the best test error of several variants of GuessWhat?! Oracle models in Tab. 2. First, we baseline any visual or language biases by predicting the Oracle's target answer using only the image (46.7% error) or the question (41.1% error). As first reported in [6], we observe that the baseline methods perform worse when integrating the image and crop inputs (21.1%) rather than solely using the object category and spatial location (20.6%). On the other hand, concatenating previous question-answer pairs to answer the current question is beneficial in our experiments. Finally, using Single-hop FiLM reduces the error to 17.6% and Multi-hop FiLM further to 16.9%, outperforming the previous best model by 2.4%. GuessWhat?! Guesser We provide the best test error of the outlined methods on the GuessWhat?! 
Guesser task in Tab. 3. As a baseline, we find that random object selection achieves an error rate of 82.9%. Our initial model baseline performs significantly worse (38.3%) than concurrent models (36.6%), highlighting Discussion Single-hop FiLM vs. Multi-hop FiLM In the GuessWhat?! task, Multi-hop FiLM outperforms Single-hop FiLM by 6.1% on the Guesser task but only 0.7% on the Oracle task. We think that the small performance gain for the Oracle task is due to the nature of the task; to answer the current question, it is often not necessary to look at previous question-answer pairs, and in most cases this task does not require a long chain of reasoning. On the other hand, the Guesser task needs to gather information across the whole dialogue in order to correctly retrieve the object, and it is therefore more likely to benefit from multi-hop reasoning. The same trend can be observed for ReferIt. Single-hop FiLM and Multi-hop FiLM perform similarly on RefClef and RefCOCO, while we observe 1.3% and 2% gains on RefCOCO+ and RefCOCOg, respectively. This pattern of performance is intuitive, as the former datasets consist of shorter referring expressions (3.5 average words) than the latter (8.4 average words in RefCOCOg), and the latter datasets also consist of richer, more complex referring expressions due e.g. to taboo words (RefCOCO+). In short, our experiments demonstrate that Multihop FiLM is better able reason over complex linguistic sequences. Reasoning mechanism We conduct several experiments to better understand our method. First, we assess whether Multi-hop FiLM performs better because of increased network capacity. We remove the attention mechanism over the linguistic sequence and update the context vector via a shared MLP. We observe that this change significantly hurts performance across all tasks, e.g., increasing the Multi-hop FiLM error of the Guesser from 30.5 to 37.3%. Second, we in- vestigate how the model attends to GuessWhat?! dialogues for the Oracle and Guesser tasks, providing more insight into how to the model reasons over the language input. We first look at the top activation in the (crop) attention layers to observe where the most prominent information is. Note that similar trends are observed for the image pipeline. As one would expect, the Oracle is focused on a specific word in the last question 99.5% of the time, one which is crucial to answer the question at hand. However, this ratio drops to 65% in the Guesser task, suggesting the model is reasoning in a different way. If we then extract the top 3 activations per layer, the attention points to <yes> or <no> tokens (respectively) at least once, 50% of the time for the Oracle and Guesser, showing that the attention is able to correctly split the dialogue into question-answer pairs. Finally, we plot the attention masks for each FiLM layer to have a better intuition of this reasoning process in Fig. 4. Crop vs. Image. We also evaluate the impact of using the image and/or crop on the final error for the Guesser task 3. Using the image alone (while still including object category and spatial information) performs worse than using the crop. However, using image and crop together inarguably gives the lowest errors, though prior work has not always used the crop due to architecture-specific GPU limitations [44]. Visual feedback We explore whether adding visual feedback to the context embedding improves performance. While it has little effect on the GuessWhat?! 
Oracle and Guesser tasks, it improves the accuracy on ReferIt by 1-2%. Note that ReferIt does not include class labels of the selected object, so the visual feedback might act as a surrogate for this information. To further investigate this hypothesis, we remove the object category from the GuessWhat?! task and report results in Tab. 5 in the supplementary material. In this setup, we indeed observe a relative improvement 0.4% on the Oracle task, further confirming this hypothesis. Pointing Task In GuessWhat?!, the Guesser must select an object from among a list of items. A more natural task would be to have the Guesser directly point out the object as a human might. Thus, in the supplementary material, we introduce this task and provide initial baselines (Tab. 7) which include FiLM models. This task shows ample room for improvement with a best test error of 84.0%. Related Work The ReferIt game [23] has been a testbed for various vision-and-language tasks over the past years, including object retrieval [36,51,52,54,32,50], semantic image segmentation [16,39], and generating referring descriptions [51,32,52]. To tackle object retrieval, [36,51,50] extract additional visual features such as relative object locations and [52,32] use reinforcement learning to iteratively train the object retrieval and description generation models. Closer to our work, [17,54] use the full image and the object crop to locate the correct object. While some previous work relies on task-specific modules [51,50], our approach is general and can be easily extended to other vision-and-language tasks. The GuessWhat?! game [6] can be seen as a dialogue version of the ReferIt game, one which additionally draws on visual question answering ability. [42,28,53] make headway on the dialogue generation task via reinforcement learning. However, these approaches are bottlenecked by the accuracy of Oracle and Guesser models, despite existing modeling advances [54,44]; accurate Oracle and Guesser models are crucial for providing a meaningful learning signal for dialogue generation models, so we believe the Multi-hop FiLM architecture will facilitate high quality dialogue generation as well. A special case of Feature-wise Linear Modulation was first successfully applied to image style transfer [8], whose approach modulates image features according to some image style (i.e., cubism or impressionism). [44] extended this approach to vision-and-language tasks, injecting FiLM-like layers along the entire visual pipeline of a pre-trained ResNet. [38] demonstrates that a convolutional network with FiLM layers achieves strong performance on CLEVR [21], a task that focuses on answering reasoning-oriented, multi-step questions about synthetic images. Subsequent work has demonstrated that FiLM and variants thereof are effective for video object segmentation where the conditioning input is the first image's segmentation (instead of language) [49] and language-guided image segmentation [40]. Even more broadly, [9] overviews the strength of FiLMrelated methods across machine learning domains, ranging from reinforcement learning to generative modeling to domain adaptation. There are other notable models that decompose reasoning into different modules. For instance, Neural Turing Machines [13,14] divide a model into a controller with read and write units. Memory networks use an attention mechanism to answer a query by reasoning over a linguistic knowledge base [45,43] or image features [46]. 
A memory network updates a query vector by performing several attention hops over the memory before outputting a final answer from this query vector. Although Multi-hop FiLM computes a similar context vector, this intermediate embedding is used to predict FiLM parameters rather than the final answer. Thus, Multi-hop FiLM includes a second reasoning step over the image. Closer to our work, [18] designed networks composed of Memory, Attention, and Control (MAC) cells to perform visual reasoning. Similar to Neural Turing Machines, each MAC cell is composed of a control unit that attends over the language input, a read unit that attends over the image and a write unit that fuses both pipelines. Though conceptually similar to Multi-hop FiLM models, Compositional Attention Networks differ structurally, for instance using a dynamic neural architecture and relying on spatial attention rather than FiLM. Conclusion In this paper, we introduce a new way to exploit Feature-wise Linear Modulation (FiLM) layers for vision-and-language tasks. Our approach generates the parameters of FiLM layers going up the visual pipeline by attending to the language input in multiple hops rather than all at once. We show Multi-hop FiLM Generator architectures are better able to handle longer sequences than their single-hop counterparts. We outperform state-of-the-art vision-and-language models significantly on the long input sequence GuessWhat?! tasks, while maintaining stateof-the-art performance for the shorter input sequence ReferIt task. Finally, this Multi-hop FiLM Generator approach uses few problem-specific priors, and thus we believe it can extended to a variety of vision-and-language tasks, particularly those requiring complex visual reasoning. Oracle (Without Category Label) For existing tasks on the GuessWhat?! dataset, the Guesser selects its predicted target object from among a provided list of possible answers. A more natural task would be for the Guesser to directly point out the object, much as a human might. Thus, we introduce a pointing task as a new benchmark for GuessWhat?!. The specific task is to locate the intended object based on a series of questions and answers; however, instead of selecting the object from a list, the Guesser must output a bounding box around the object of its guess, making the task more challenging. This task also does not include important side information, namely object category and (x,y)-position [6], making the object retrieval more difficult than the originally introduced Guesser task as well. The bounding box is defined more specifically as the 4-tuple (x, y, width, height), where (x, y) is the coordinate of the top left corner of the box within the original image I, given an input dialogue. Additional Results ReferIt ImageClef We assess bounding box accuracy using the Intersection Over Union (IoU) metric: the area of the intersection of predicted and ground truth bounding boxes, divided by the area of their union. Prior work [10,12], generally considers an object found if IoU exceeds 0.5. IoU = |bboxA ∩ bboxB| |bboxA ∪ bboxB| = |bboxA ∩ bboxB| |bboxA| + |bboxB| − |bboxA ∩ bboxB| (10) We report model error in Table 7. Interestingly, the baseline obtains 92.0% error while Multi-hop FiLM obtains 84.0% error. As previously mentioned, reinjecting visual features into the Multi-hop FiLM Generator's context cell is beneficial. The error rates are relatively high but still in line with those of similar pointing tasks such as SCRC [16,17] (around 90%) on ReferIt.
5,091
1808.04446
2950895666
Recent breakthroughs in computer vision and natural language processing have spurred interest in challenging multi-modal tasks such as visual question-answering and visual dialogue. For such tasks, one successful approach is to condition image-based convolutional network computation on language via Feature-wise Linear Modulation (FiLM) layers, i.e., per-channel scaling and shifting. We propose to generate the parameters of FiLM layers going up the hierarchy of a convolutional network in a multi-hop fashion rather than all at once, as in prior work. By alternating between attending to the language input and generating FiLM layer parameters, this approach is better able to scale to settings with longer input sequences such as dialogue. We demonstrate that multi-hop FiLM generation achieves state-of-the-art for the short input sequence task ReferIt --- on-par with single-hop FiLM generation --- while also significantly outperforming prior state-of-the-art and single-hop FiLM generation on the GuessWhat?! visual dialogue task.
A special case of Feature-wise Linear Modulation was first successfully applied to image style transfer @cite_27, whose approach modulates image features according to some image style (i.e., cubism or impressionism). @cite_2 extended this approach to vision-and-language tasks, injecting FiLM-like layers along the entire visual pipeline of a pre-trained ResNet. @cite_4 demonstrates that a convolutional network with FiLM layers achieves strong performance on CLEVR @cite_43, a task that focuses on answering reasoning-oriented, multi-step questions about synthetic images. Subsequent work has demonstrated that FiLM and variants thereof are effective for video object segmentation where the conditioning input is the first image's segmentation (instead of language) @cite_36 and language-guided image segmentation @cite_29. Even more broadly, @cite_50 overviews the strength of FiLM-related methods across machine learning domains, ranging from reinforcement learning to generative modeling to domain adaptation.
{ "abstract": [ "We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning - answering image-related questions which require a multi-step, high-level process - a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot.", "", "Interaction and collaboration between humans and intelligent machines has become increasingly important as machine learning methods move into real-world applications that involve end users. While much prior work lies at the intersection of natural language and vision, such as image captioning or image generation from text descriptions, less focus has been placed on the use of language to guide or improve the performance of a learned visual processing algorithm. In this paper, we explore methods to flexibly guide a trained convolutional neural network through user input to improve its performance during inference. We do so by inserting a layer that acts as a spatio-semantic guide into the network. This guide is trained to modify the network's activations, either directly via an energy minimization scheme or indirectly through a recurrent model that translates human language queries to interaction weights. Learning the verbal interaction is fully automatic and does not require manual text annotations. We evaluate the method on two datasets, showing that guiding a pre-trained network can improve performance, and provide extensive insights into the interaction between the guide and the CNN.", "When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover short-comings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.", "The diversity of painting styles represents a rich visual vocabulary for the construction of an image. The degree to which one may learn and parsimoniously capture this visual vocabulary measures our understanding of the higher level features of paintings, if not images in general. In this work we investigate the construction of a single, scalable deep network that can parsimoniously capture the artistic style of a diversity of paintings. We demonstrate that such a network generalizes across a diversity of artistic styles by reducing a painting to a point in an embedding space. Importantly, this model permits a user to explore new painting styles by arbitrarily combining the styles learned from individual paintings. 
We hope that this work provides a useful step towards building rich models of paintings and offers a window on to the structure of the learned representation of artistic style.", "", "It is commonly assumed that language refers to high-level visual concepts while leaving low-level visual processing unaffected. This view dominates the current literature in computational models for language-vision tasks, where visual and linguistic inputs are mostly processed independently before being fused into a single representation. In this paper, we deviate from this classic pipeline and propose to modulate the by a linguistic input. Specifically, we introduce Conditional Batch Normalization (CBN) as an efficient mechanism to modulate convolutional feature maps by a linguistic embedding. We apply CBN to a pre-trained Residual Network (ResNet), leading to the MODulatEd ResNet ( ) architecture, and show that this significantly improves strong baselines on two visual question answering tasks. Our ablation study confirms that modulating from the early stages of the visual processing is beneficial." ], "cite_N": [ "@cite_4", "@cite_36", "@cite_29", "@cite_43", "@cite_27", "@cite_50", "@cite_2" ], "mid": [ "2760103357", "", "2794719971", "2561715562", "2953054324", "", "2963245493" ] }
Visual Reasoning with Multi-hop Feature Modulation
Computer vision has witnessed many impressive breakthroughs over the past decades in image classification [27,15], image segmentation [30], and object detection [12] by applying convolutional neural networks to large-scale, labeled datasets, often exceeding human performance. These systems give outputs such as class labels, segmentation masks, or bounding boxes, but it would be more natural for humans to interact with these systems through natural language. To this end, the research community has introduced various multi-modal tasks, such as image captioning [48], referring expressions [23], visual question-answering [1,34], visual reasoning [21], and visual dialogue [6,5]. These tasks require models to effectively integrate information from both vision and language. One common approach is to process both modalities independently with large unimodal networks before combining them through concatenation [34], element-wise product [25,31], or bilinear pooling [11]. Inspired by the success of attention in machine translation [3], several works have proposed to incorporate various forms of spatial attention to bias models towards focusing on question-specific image regions [48,47]. [Fig. 1: The ReferIt task identifies a selected object (in the bounding box) using a single expression, while in GuessWhat?!, a speaker localizes the object with a series of yes or no questions.] However, spatial attention sometimes only gives modest improvements over simple baselines for visual question answering [20] and can struggle on questions involving multi-step reasoning [21]. More recently, [44,38] introduced Feature-wise Linear Modulation (FiLM) layers as a promising approach for vision-and-language tasks. These layers apply a per-channel scaling and shifting to a convolutional network's visual features, conditioned on an external input such as language, e.g., captions, questions, or full dialogues. Such feature-wise affine transformations allow models to dynamically highlight the key visual features for the task at hand. The parameters of FiLM layers, which scale and shift features or feature maps, are determined by a separate network, the so-called FiLM generator, which predicts these parameters using the external conditioning input. Within various architectures, FiLM has outperformed prior state-of-the-art for visual question-answering [44,38], multi-modal translation [7], and language-guided image segmentation [40]. However, the best way to design the FiLM generator is still an open question. For visual question-answering and visual reasoning, prior work uses single-hop FiLM generators that predict all FiLM parameters at once [38,44]. That is, a Recurrent Neural Network (RNN) sequentially processes input language tokens and then outputs all FiLM parameters via a Multi-Layer Perceptron (MLP). In this paper, we argue that using a Multi-hop FiLM Generator is better suited for tasks involving longer input sequences and multi-step reasoning such as dialogue. Even for shorter input sequence tasks, single-hop FiLM generators can require a large RNN to achieve strong performance; on the CLEVR visual reasoning task [21], which only involves a small vocabulary and templated questions, the FiLM generator in [38] uses an RNN with 4096 hidden units that comprises almost 90% of the model's parameters.
Models with Multi-hop FiLM Generators may thus be easier to scale to more difficult tasks involving human-generated language with larger vocabularies and more ambiguity. As an intuitive example, consider the dialogue in Fig. 1 through which one speaker localizes the second girl in the image, the one who does not "have a blue frisbee." For this task, a single-hop model must determine upfront what steps of reasoning to carry out over the image and in what order; thus, it might decide in a single shot to highlight feature maps throughout the visual network detecting either non-blue colors or girls. In contrast, a multi-hop model may first determine the most immediate step of reasoning necessary (i.e., locate the girls), highlight the relevant visual features, and then determine the next immediate step of reasoning necessary (i.e., locate the blue frisbee), and so on. While it may be appropriate to reason in either way, the latter approach may scale better to longer language inputs and/or to ambiguous images where the full sequence of reasoning steps is hard to determine upfront, and it can even be further enhanced by having intermediate feedback while processing the image. In this paper, we therefore explore several approaches to generating FiLM parameters in multiple hops. These approaches introduce an intermediate context embedding that controls the language and visual processing, and they alternate between updating the context embedding via an attention mechanism over the language sequence (and optionally by incorporating image activations) and predicting the FiLM parameters. We evaluate Multi-hop FiLM generation on ReferIt [23] and GuessWhat?! [6], two vision-and-language tasks illustrated in Fig. 1. We show that Multi-hop FiLM models significantly outperform their single-hop counterparts and prior state-of-the-art for the longer input sequence, dialogue-based GuessWhat?! task while matching the state-of-the-art performance of other models on ReferIt. Our best GuessWhat?! model only updates the context embedding using the language input, while for ReferIt, incorporating visual feedback to update the context embedding improves performance. In summary, this paper makes the following contributions: - We introduce the Multi-hop FiLM architecture and demonstrate that our approach matches or significantly improves state-of-the-art on the GuessWhat?! Oracle task, GuessWhat?! Guesser task, and ReferIt Guesser task. - We show Multi-hop FiLM models outperform their single-hop counterparts on vision-and-language tasks involving complex visual reasoning. - We find that updating the context embedding of the Multi-hop FiLM Generator based on visual feedback may be helpful in some cases, such as for tasks which, like ReferIt, do not include object category labels. Recurrent Neural Networks One common approach in natural language processing is to use a Recurrent Neural Network (RNN) to encode some linguistic input sequence $l$ into a fixed-size embedding. The input (such as a question or dialogue) consists of a sequence of words $\omega_{1:T}$ of length $T$, where each word $\omega_t$ is contained within a predefined vocabulary $V$. We embed each input token via a learned look-up table $e$ and obtain a dense word embedding $e_{\omega_t} = e(\omega_t)$.
The sequence of embeddings $\{e_{\omega_t}\}_{t=1}^{T}$ is then fed to an RNN, which produces a sequence of hidden states $\{s_t\}_{t=1}^{T}$ by repeatedly applying a transition function $f$: $s_{t+1} = f(s_t, e_{\omega_t})$. To better handle long-term dependencies in the input sequence, we use a Gated Recurrent Unit (GRU) [4] with layer normalization [2] as transition function. In this work, we use a bidirectional GRU, which consists of one forward GRU, producing hidden states $\overrightarrow{s}_t$ by running from $\omega_1$ to $\omega_T$, and a second backward GRU, producing states $\overleftarrow{s}_t$ by running from $\omega_T$ to $\omega_1$. We concatenate both unidirectional GRU states $s_t = [\overrightarrow{s}_t ; \overleftarrow{s}_t]$ at each step $t$ to get a final GRU state, which we then use as the compressed embedding $e_l$ of the linguistic sequence $l$. Attention The form of attention we consider was first introduced in the context of machine translation [3,33]. This mechanism takes a weighted average of the hidden states of an encoding RNN based on their relevance to a decoding RNN at various decoding time steps. Subsequent spatial attention mechanisms have extended the original mechanism to image captioning [48] and other vision-and-language tasks [47,24]. More formally, given an arbitrary linguistic embedding $e_l$ and image activations $F_{w,h,c}$, where $w$, $h$, $c$ are the width, height, and channel indices, respectively, of the image features $F$ at one layer, we obtain a final visual embedding $e_v$ as follows:
$$\xi_{w,h} = MLP(g(F_{w,h,\cdot}, e_l)) \;;\quad \alpha_{w,h} = \frac{\exp(\xi_{w,h})}{\sum_{w',h'} \exp(\xi_{w',h'})} \;;\quad e_v = \sum_{w,h} \alpha_{w,h} F_{w,h,\cdot}, \tag{1}$$
where $MLP$ is a multi-layer perceptron and $g(\cdot,\cdot)$ is an arbitrary fusion mechanism (concatenation, element-wise product, etc.). We will use Multi-modal Low-rank Bilinear (MLB) attention [24], which defines $g(\cdot,\cdot)$ as:
$$g(F_{w,h,\cdot}, e_l) = \tanh(U^T F_{w,h,\cdot}) \circ \tanh(V^T e_l), \tag{2}$$
where $\circ$ denotes an element-wise product and where $U$ and $V$ are trainable weight matrices. We choose MLB attention because it is parameter efficient and has shown strong empirical performance [24,22]. Feature-wise Linear Modulation Feature-wise Linear Modulation was introduced in the context of image stylization [8] and was extended and shown to be highly effective for multi-modal tasks such as visual question-answering [44,38,7]. A Feature-wise Linear Modulation (FiLM) layer applies a per-channel scaling and shifting to the convolutional feature maps. Such layers are parameter efficient (only two scalars per feature map) while still retaining high capacity, as they are able to scale up or down, zero-out, or negate whole feature maps. In vision-and-language tasks, another network, the so-called FiLM generator $h$, predicts these modulating parameters from the linguistic input $e_l$. More formally, a FiLM layer computes a modulated feature map $\hat{F}_{w,h,c}$ as follows:
$$[\gamma ; \beta] = h(e_l) \;;\quad \hat{F}_{\cdot,\cdot,c} = \gamma_c F_{\cdot,\cdot,c} + \beta_c, \tag{3}$$
where $\gamma$ and $\beta$ are the scaling and shifting parameters which modulate the activations of the original feature map $F_{\cdot,\cdot,c}$. We will use the superscript $k \in [1;K]$ to refer to the $k$-th FiLM layer in the network. FiLM layers may be inserted throughout the hierarchy of a convolutional network, either pre-trained and fixed [6] or trained from scratch [38]. Prior FiLM-based models [44,38,7] have used a single-hop FiLM generator to predict the FiLM parameters in all layers, e.g., an MLP which takes the language embedding $e_l$ as input [44,38,7].
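To make Eqs. (1)-(3) concrete, the following sketch shows a minimal PyTorch implementation of a FiLM layer and of MLB attention pooling. This is an illustrative re-implementation under assumed tensor shapes, not the authors' code; the class names FiLMLayer and MLBAttention, the hidden dimensions, and the single-layer scoring MLP are our own choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FiLMLayer(nn.Module):
    """Per-channel scaling and shifting of a feature map (Eq. 3)."""
    def forward(self, feats, gamma, beta):
        # feats: (B, C, H, W); gamma, beta: (B, C)
        return gamma[..., None, None] * feats + beta[..., None, None]


class MLBAttention(nn.Module):
    """Multi-modal Low-rank Bilinear attention pooling (Eqs. 1-2)."""
    def __init__(self, vis_dim, lang_dim, hidden_dim):
        super().__init__()
        self.U = nn.Linear(vis_dim, hidden_dim, bias=False)
        self.V = nn.Linear(lang_dim, hidden_dim, bias=False)
        self.score = nn.Linear(hidden_dim, 1)  # stands in for the MLP of Eq. (1)

    def forward(self, feats, e_l):
        # feats: (B, C, H, W); e_l: (B, lang_dim)
        B, C, H, W = feats.shape
        flat = feats.view(B, C, H * W).transpose(1, 2)              # (B, HW, C)
        fused = torch.tanh(self.U(flat)) * torch.tanh(self.V(e_l)).unsqueeze(1)
        alpha = F.softmax(self.score(fused).squeeze(-1), dim=-1)    # attention weights
        return (alpha.unsqueeze(-1) * flat).sum(dim=1)              # pooled e_v, (B, C)


if __name__ == "__main__":
    feats = torch.randn(2, 128, 14, 14)
    e_l = torch.randn(2, 1024)
    gamma, beta = torch.randn(2, 128), torch.randn(2, 128)
    print(FiLMLayer()(feats, gamma, beta).shape)            # torch.Size([2, 128, 14, 14])
    print(MLBAttention(128, 1024, 256)(feats, e_l).shape)   # torch.Size([2, 128])
```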
Multi-hop FiLM In this section, we introduce the Multi-hop FiLM architecture (shown in Fig. 2) to predict the parameters of FiLM layers in an iterative fashion, to better scale to longer input sequences such as in dialogue. Another motivation was to better disentangle the linguistic reasoning from the visual one by iteratively attending to both pipelines. We introduce a context vector $c^k$ that acts as a controller for the linguistic and visual pipelines. We initialize the context vector with the final state of a bidirectional RNN, $s_T$, and repeat the following procedure for each of the FiLM layers in sequence (from lowest to highest convolutional layer): first, the context vector is updated by performing attention over RNN states (extracting relevant language information), and second, the context is used to predict a layer's FiLM parameters (dynamically modulating the visual information). Thus, the context vector enables the model to perform multi-hop reasoning over the linguistic pipeline while iteratively modulating the image features. More formally, the context vector is computed as follows:
$$c^0 = s_T \;;\quad c^k = \sum_t \kappa^k_t(c^{k-1}, s_t)\, s_t, \tag{4}$$
where:
$$\kappa^k_t(c^{k-1}, s_t) = \frac{\exp(\chi^k_t)}{\sum_{t'} \exp(\chi^k_{t'})} \;;\quad \chi^k_t(c^{k-1}, s_t) = MLP_{Attn}\big(g'(c^{k-1}, s_t)\big), \tag{5}$$
where the dependence of $\chi^k_t$ on $(c^{k-1}, s_t)$ may be omitted to simplify notation. $MLP_{Attn}$ is a network (shared across layers) which aids in producing attention weights. $g'$ can be any fusion mechanism that facilitates selecting the relevant context to attend to; here we use a simple dot-product following [33], so $g'(c^{k-1}, s_t) = c^{k-1} \cdot s_t$. Finally, FiLM is carried out using a layer-dependent neural network $MLP^k_{FiLM}$:
$$[\gamma^k ; \beta^k] = MLP^k_{FiLM}(c^k) \;;\quad \hat{F}^k_{\cdot,\cdot,c} = \gamma^k_c F^k_{\cdot,\cdot,c} + \beta^k_c. \tag{6}$$
As a regularization, we append a normalization layer [2] on top of the context vector after each attention step. External information. Some tasks provide additional information which may be used to further improve the visual modulation. For instance, GuessWhat?! provides spatial features of the ground truth object to models which must answer questions about that object. Our model incorporates such features by concatenating them to the context vector before generating FiLM parameters. Visual feedback. Inspired by the co-attention mechanism [31,54], we also explore incorporating visual feedback into the Multi-hop FiLM architecture. To do so, we first extract the image or crop features $F^k$ (immediately before modulation) and apply a global mean-pooling over spatial dimensions. We then concatenate this visual state into the context vector $c^k$ before generating the next set of FiLM parameters.
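As an illustration of Eqs. (4)-(6), the sketch below shows one way to implement the Multi-hop FiLM Generator loop: the context vector starts at the final Bi-GRU state and, for each of the K FiLM layers, is updated by attention over the RNN states before a layer-specific MLP predicts that layer's (gamma, beta). This is a simplified re-implementation under our own naming; it treats the fusion $g'$ as an element-wise product followed by the shared scoring MLP (one reading of the dot-product fusion above) and omits the optional external information and visual feedback.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHopFiLMGenerator(nn.Module):
    """Predicts FiLM parameters for K layers in K attention hops (Eqs. 4-6)."""
    def __init__(self, state_dim, n_channels, n_layers):
        super().__init__()
        self.attn_mlp = nn.Sequential(                       # shared MLP_Attn (Eq. 5)
            nn.Linear(state_dim, state_dim), nn.ReLU(), nn.Linear(state_dim, 1))
        self.film_mlps = nn.ModuleList(                      # layer-dependent MLP^k_FiLM
            [nn.Linear(state_dim, 2 * n_channels) for _ in range(n_layers)])
        self.norm = nn.LayerNorm(state_dim)                  # normalization after each hop

    def forward(self, states, mask=None):
        # states: (B, T, D) Bi-GRU states; mask: (B, T) bool, True for real tokens
        context = states[:, -1]                              # c^0 = s_T
        film_params = []
        for film_mlp in self.film_mlps:
            # chi^k_t: fuse previous context with each state, then score (Eq. 5)
            scores = self.attn_mlp(context.unsqueeze(1) * states).squeeze(-1)
            if mask is not None:
                scores = scores.masked_fill(~mask, float("-inf"))
            kappa = F.softmax(scores, dim=-1)
            context = self.norm((kappa.unsqueeze(-1) * states).sum(1))   # Eq. (4)
            gamma, beta = film_mlp(context).chunk(2, dim=-1)             # Eq. (6)
            film_params.append((gamma, beta))
        return film_params


if __name__ == "__main__":
    gen = MultiHopFiLMGenerator(state_dim=1024, n_channels=128, n_layers=4)
    states = torch.randn(2, 30, 1024)                        # 30 dialogue tokens
    for gamma, beta in gen(states):
        print(gamma.shape, beta.shape)                       # torch.Size([2, 128]) each
```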
Experiments In this section, we first introduce the ReferIt and GuessWhat?! datasets and respective tasks and then describe our overall Multi-hop FiLM architecture. Dataset ReferIt [23,51] is a cooperative two-player game. The first player (the Oracle) selects an object in a rich visual scene, for which they must generate an expression that refers to it (e.g., "the person eating ice cream"). Based on this expression, the second player (the Guesser) must then select an object within the image. Four ReferIt datasets exist: RefClef, RefCOCO, RefCOCO+ and RefCOCOg. The first dataset contains 130K references over 20K images from the ImageClef dataset [35], while the three other datasets respectively contain 142K, 142K and 86K references over 20K, 20K and 27K images from the MSCOCO dataset [29]. Each dataset has small differences. RefCOCO and RefClef were constructed using different image sets. RefCOCO+ forbids certain words to prevent object references from being too simplistic, and RefCOCOg only relies on images containing 2-4 objects from the same category. RefCOCOg also contains longer and more complex sentences than RefCOCO (8.4 vs. 3.5 average words). Here, we will show results on both the Guesser and Oracle tasks. GuessWhat?! [6] is a cooperative three-agent game in which players see the picture of a rich visual scene with several objects. One player (the Oracle) is randomly assigned an object in the scene. The second player (the Questioner) aims to ask a series of yes-no questions to the Oracle to collect enough evidence to allow the third player (the Guesser) to correctly locate the object in the image. The GuessWhat?! dataset is composed of 131K successful natural language dialogues containing 650K question-answer pairs on over 63K images from MSCOCO [29]. Dialogues contain 5.2 question-answer pairs and 34.4 words on average. Here, we will focus on the Guesser and Oracle tasks. Task Descriptions Game Features. Both games consist of triplets $(I, l, o)$, where $I \in \mathbb{R}^{3 \times M \times N}$ is an RGB image and $l$ is some language input (i.e., a series of words) describing an object $o$ in $I$. The object $o$ is defined by an object category, a pixel-wise segmentation, an RGB crop of $I$ based on bounding box information, and handcrafted spatial information $x_{spatial}$, where
$$x_{spatial} = [x_{min}, y_{min}, x_{max}, y_{max}, x_{center}, y_{center}, w_{box}, h_{box}] \tag{7}$$
We replace words with two or fewer occurrences with an <unk> token. The Oracle task. Given an image $I$, an object $o$, a question $q$, and a sequence of $\delta$ previous question-answer pairs $(q, a)_{1:\delta}$, where $a \in \{\text{Yes}, \text{No}, \text{N/A}\}$, the Oracle's task is to produce an answer $a$ that correctly answers the question $q$. The Guesser task. Given an image $I$, a list of objects $O = o_{1:\Phi}$, a target object $o^* \in O$ and the dialogue $D$, the Guesser needs to output a probability $\sigma_\phi$ that each object $o_\phi$ is the target object $o^*$. Following [17], the Guesser is evaluated by selecting the object with the highest probability of being correct. Note that even if the individual probabilities $\sigma_\phi$ are between 0 and 1, their sum can be greater than 1. More formally, the Guesser loss and error are computed as follows:
$$L_{Guesser} = \frac{-1}{N_{games}} \sum_n^{N_{games}} \frac{1}{\Phi_n} \sum_\phi^{\Phi_n} \log\big(p(o^*\,|\,I_n, o^n_\phi, D_n)\big) \tag{8}$$
$$E_{Guesser} = \frac{-1}{N_{games}} \sum_n^{N_{games}} \mathbb{1}\big(o^* = o_{\arg\max_\phi \sigma^n_\phi}\big) \tag{9}$$
where $\mathbb{1}$ is the indicator function and $\Phi_n$ the number of objects in the $n$-th game.
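To make the Guesser objective of Eqs. (8)-(9) concrete, here is a small sketch of how per-object scores and the loss and error could be computed for one batch of games. It assumes each candidate object is scored with an independent sigmoid (so the target contributes log sigma and non-targets log(1 - sigma)), which is one common reading of Eq. (8) consistent with the remark that the probabilities need not sum to one; the error is computed as the fraction of games where the argmax object is not the target. Function and variable names are ours, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def guesser_loss_and_error(scores, target_idx, obj_mask):
    """Sketch of Eqs. (8)-(9).

    scores:     (B, Phi_max) raw logits, one per candidate object (padded slots masked).
    target_idx: (B,) index of the true object o* in each game.
    obj_mask:   (B, Phi_max) bool, True for real (non-padded) objects.
    """
    sigma = torch.sigmoid(scores)                       # independent sigma_phi per object
    is_target = F.one_hot(target_idx, scores.size(1)).bool()

    # Eq. (8): per-object binary log-likelihood of "is this the target?",
    # averaged over each game's Phi_n objects, then over games.
    log_p = torch.where(is_target,
                        torch.log(sigma + 1e-8),
                        torch.log(1.0 - sigma + 1e-8))
    log_p = log_p.masked_fill(~obj_mask, 0.0)
    n_obj = obj_mask.sum(dim=1).clamp(min=1)
    loss = -(log_p.sum(dim=1) / n_obj).mean()

    # Eq. (9): a game counts as an error when argmax_phi sigma_phi misses the target.
    masked_sigma = sigma.masked_fill(~obj_mask, -1.0)
    error = (masked_sigma.argmax(dim=1) != target_idx).float().mean()
    return loss, error


if __name__ == "__main__":
    scores = torch.randn(4, 6)                          # 4 games, up to 6 candidate objects
    target = torch.tensor([0, 2, 1, 5])
    mask = torch.ones(4, 6, dtype=torch.bool)
    print(guesser_loss_and_error(scores, target, mask))
```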
Model We use similar models for both ReferIt and GuessWhat?! and provide their architectural details in this subsection. Object embedding The object category is fed into a dense look-up table $e_{cat}$, and the spatial information is scaled to $[-1, 1]$ before being up-sampled via a non-linear projection to $e_{spat}$. We do not use the object category in ReferIt models. Visual Pipeline We first resize the image and object crop to 448x448 before extracting 14x14x1024-dimensional features from a ResNet-152 [15] (block3) pre-trained on ImageNet [41]. Following [38], we feed these features to a 3x3 convolution layer with Batch Normalization [19] and Rectified Linear Unit [37] (ReLU). We then stack four modulated residual blocks (shown in Fig. 2), each producing a set of feature maps $F^k$ via (in order) a 1x1 convolutional layer (128 units), ReLU activations, a 3x3 convolutional layer (128 units), and an untrainable Batch Normalization layer. The residual block then modulates $F^k$ with a FiLM layer to get $\hat{F}^k$, before again applying ReLU activations. Lastly, a residual connection sums the activations of both ReLU outputs. After the last residual block, we use a 1x1 convolution layer (512 units) with Batch Normalization and ReLU followed by MLB attention [24] (256 units and 1 glimpse) to obtain the final embedding $e_v$. Note that our model uses two independent visual pipeline modules: one to extract modulated image features $e_v^{img}$, one to extract modulated crop features $e_v^{crop}$. To incorporate spatial information, we concatenate two coordinate feature maps indicating relative x and y spatial position (scaled to $[-1, 1]$) with the image features before each convolution layer (except for convolutional layers followed by FiLM layers). In addition, the pixel-wise segmentations $S \in \{0, 1\}^{M \times N}$ are rescaled to 14x14 floating-point masks before being concatenated to the feature maps. Linguistic Pipeline We compute the language embedding by using a word-embedding look-up (200 dimensions) with dropout followed by a Bi-GRU (512 x 2 units) with Layer Normalization [2]. As described in Section 3, we initialize the context vector with the last RNN state, $c^0 = s_T$. We then attend to the other Bi-GRU states via an attention mechanism with a linear projection and ReLU activations and regularize the new context vector with Layer Normalization. FiLM parameter generation We concatenate the spatial information $e_{spat}$ and object category information $e_{cat}$ to the context vector. In some experiments, we also concatenate a fourth embedding consisting of intermediate visual features (the mean-pooled visual feedback described in Section 3). Training Process We train our model end-to-end with Adam [26] (learning rate 3e-4), dropout (0.5), weight decay (5e-6) for convolutional network layers, and a batch size of 64. We report results after early stopping on the validation set with a maximum of 15 epochs.
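For concreteness, here is a sketch of one modulated residual block as described in the Visual Pipeline paragraph above: a 1x1 convolution, ReLU, a 3x3 convolution, a Batch Normalization layer without learned affine parameters, FiLM modulation, ReLU, and a residual connection that sums the two ReLU outputs. The channel counts and the coordinate-map concatenation before the 1x1 convolution follow the text; the class name, initialization, and other details are our own assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModulatedResBlock(nn.Module):
    """One FiLM-ed residual block of the visual pipeline (128 units)."""
    def __init__(self, in_channels, n_channels=128):
        super().__init__()
        # +2 input channels for the relative x/y coordinate maps
        self.conv1 = nn.Conv2d(in_channels + 2, n_channels, kernel_size=1)
        self.conv2 = nn.Conv2d(n_channels, n_channels, kernel_size=3, padding=1)
        # "untrainable" BatchNorm: no learned scale/shift, FiLM provides them
        self.bn = nn.BatchNorm2d(n_channels, affine=False)

    @staticmethod
    def _append_coords(x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return torch.cat([x, xs, ys], dim=1)

    def forward(self, x, gamma, beta):
        h1 = F.relu(self.conv1(self._append_coords(x)))          # first ReLU output
        f_k = self.bn(self.conv2(h1))                             # F^k before modulation
        f_mod = gamma[..., None, None] * f_k + beta[..., None, None]  # FiLM (Eq. 6)
        return h1 + F.relu(f_mod)                                 # sum of both ReLU outputs


if __name__ == "__main__":
    block = ModulatedResBlock(in_channels=128)
    x = torch.randn(2, 128, 14, 14)
    gamma, beta = torch.randn(2, 128), torch.randn(2, 128)
    print(block(x, gamma, beta).shape)                            # torch.Size([2, 128, 14, 14])
```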
Baselines In our experiments, we re-implement several baseline models to benchmark the performance of our models. The standard Baseline NN simply concatenates the mean-pooled image and object crop features, the linguistic embedding, the spatial embedding and the category embedding (GuessWhat?! only), passing those features to the same final layers described in our proposed model. We refer to a model which uses the MLB attention mechanism to pool the visual features as Baseline NN+MLB. We also implement a Single-hop FiLM mechanism, which is equivalent to setting all context vectors equal to the last state of the Bi-GRU, $e_{l,T}$. Finally, we experiment with injecting intermediate visual features into the FiLM Generator input, and we refer to this model as Multi-hop FiLM (+img). Results ReferIt Guesser We report the best test error of the outlined methods on the ReferIt Guesser task in Tab. 1. GuessWhat?! Oracle We report the best test error of several variants of GuessWhat?! Oracle models in Tab. 2. First, we baseline any visual or language biases by predicting the Oracle's target answer using only the image (46.7% error) or the question (41.1% error). As first reported in [6], we observe that the baseline methods perform worse when integrating the image and crop inputs (21.1%) rather than solely using the object category and spatial location (20.6%). On the other hand, concatenating previous question-answer pairs to answer the current question is beneficial in our experiments. Finally, using Single-hop FiLM reduces the error to 17.6% and Multi-hop FiLM further to 16.9%, outperforming the previous best model by 2.4%. GuessWhat?! Guesser We provide the best test error of the outlined methods on the GuessWhat?! Guesser task in Tab. 3. As a baseline, we find that random object selection achieves an error rate of 82.9%. Our initial model baseline performs significantly worse (38.3%) than concurrent models (36.6%). Discussion Single-hop FiLM vs. Multi-hop FiLM In the GuessWhat?! task, Multi-hop FiLM outperforms Single-hop FiLM by 6.1% on the Guesser task but only 0.7% on the Oracle task. We think that the small performance gain for the Oracle task is due to the nature of the task; to answer the current question, it is often not necessary to look at previous question-answer pairs, and in most cases this task does not require a long chain of reasoning. On the other hand, the Guesser task needs to gather information across the whole dialogue in order to correctly retrieve the object, and it is therefore more likely to benefit from multi-hop reasoning. The same trend can be observed for ReferIt. Single-hop FiLM and Multi-hop FiLM perform similarly on RefClef and RefCOCO, while we observe 1.3% and 2% gains on RefCOCO+ and RefCOCOg, respectively. This pattern of performance is intuitive, as the former datasets consist of shorter referring expressions (3.5 average words) than the latter (8.4 average words in RefCOCOg), and the latter datasets also consist of richer, more complex referring expressions, due, e.g., to taboo words (RefCOCO+). In short, our experiments demonstrate that Multi-hop FiLM is better able to reason over complex linguistic sequences. Reasoning mechanism We conduct several experiments to better understand our method. First, we assess whether Multi-hop FiLM performs better because of increased network capacity. We remove the attention mechanism over the linguistic sequence and update the context vector via a shared MLP. We observe that this change significantly hurts performance across all tasks, e.g., increasing the Multi-hop FiLM error of the Guesser from 30.5% to 37.3%. Second, we investigate how the model attends to GuessWhat?! dialogues for the Oracle and Guesser tasks, providing more insight into how the model reasons over the language input. We first look at the top activation in the (crop) attention layers to observe where the most prominent information is. Note that similar trends are observed for the image pipeline. As one would expect, the Oracle is focused on a specific word in the last question 99.5% of the time, one which is crucial to answer the question at hand. However, this ratio drops to 65% in the Guesser task, suggesting the model is reasoning in a different way. If we then extract the top 3 activations per layer, the attention points to <yes> or <no> tokens (respectively) at least once, 50% of the time for the Oracle and Guesser, showing that the attention is able to correctly split the dialogue into question-answer pairs. Finally, we plot the attention masks for each FiLM layer in Fig. 4 to give a better intuition of this reasoning process. Crop vs. Image We also evaluate the impact of using the image and/or crop on the final error for the Guesser task (Tab. 3). Using the image alone (while still including object category and spatial information) performs worse than using the crop. However, using image and crop together inarguably gives the lowest errors, though prior work has not always used the crop due to architecture-specific GPU limitations [44]. Visual feedback We explore whether adding visual feedback to the context embedding improves performance. While it has little effect on the GuessWhat?!
Oracle and Guesser tasks, it improves the accuracy on ReferIt by 1-2%. Note that ReferIt does not include class labels of the selected object, so the visual feedback might act as a surrogate for this information. To further investigate this hypothesis, we remove the object category from the GuessWhat?! task and report results in Tab. 5 in the supplementary material. In this setup, we indeed observe a relative improvement of 0.4% on the Oracle task, further confirming this hypothesis. Pointing Task In GuessWhat?!, the Guesser must select an object from among a list of items. A more natural task would be to have the Guesser directly point out the object as a human might. Thus, in the supplementary material, we introduce this task and provide initial baselines (Tab. 7) which include FiLM models. This task shows ample room for improvement with a best test error of 84.0%. Related Work The ReferIt game [23] has been a testbed for various vision-and-language tasks over the past years, including object retrieval [36,51,52,54,32,50], semantic image segmentation [16,39], and generating referring descriptions [51,32,52]. To tackle object retrieval, [36,51,50] extract additional visual features such as relative object locations, and [52,32] use reinforcement learning to iteratively train the object retrieval and description generation models. Closer to our work, [17,54] use the full image and the object crop to locate the correct object. While some previous work relies on task-specific modules [51,50], our approach is general and can be easily extended to other vision-and-language tasks. The GuessWhat?! game [6] can be seen as a dialogue version of the ReferIt game, one which additionally draws on visual question answering ability. [42,28,53] make headway on the dialogue generation task via reinforcement learning. However, these approaches are bottlenecked by the accuracy of Oracle and Guesser models, despite existing modeling advances [54,44]; accurate Oracle and Guesser models are crucial for providing a meaningful learning signal for dialogue generation models, so we believe the Multi-hop FiLM architecture will facilitate high-quality dialogue generation as well. A special case of Feature-wise Linear Modulation was first successfully applied to image style transfer [8], whose approach modulates image features according to some image style (i.e., cubism or impressionism). [44] extended this approach to vision-and-language tasks, injecting FiLM-like layers along the entire visual pipeline of a pre-trained ResNet. [38] demonstrates that a convolutional network with FiLM layers achieves strong performance on CLEVR [21], a task that focuses on answering reasoning-oriented, multi-step questions about synthetic images. Subsequent work has demonstrated that FiLM and variants thereof are effective for video object segmentation where the conditioning input is the first image's segmentation (instead of language) [49] and for language-guided image segmentation [40]. Even more broadly, [9] overviews the strength of FiLM-related methods across machine learning domains, ranging from reinforcement learning to generative modeling to domain adaptation. There are other notable models that decompose reasoning into different modules. For instance, Neural Turing Machines [13,14] divide a model into a controller with read and write units. Memory networks use an attention mechanism to answer a query by reasoning over a linguistic knowledge base [45,43] or image features [46].
A memory network updates a query vector by performing several attention hops over the memory before outputting a final answer from this query vector. Although Multi-hop FiLM computes a similar context vector, this intermediate embedding is used to predict FiLM parameters rather than the final answer. Thus, Multi-hop FiLM includes a second reasoning step over the image. Closer to our work, [18] designed networks composed of Memory, Attention, and Control (MAC) cells to perform visual reasoning. Similar to Neural Turing Machines, each MAC cell is composed of a control unit that attends over the language input, a read unit that attends over the image and a write unit that fuses both pipelines. Though conceptually similar to Multi-hop FiLM models, Compositional Attention Networks differ structurally, for instance using a dynamic neural architecture and relying on spatial attention rather than FiLM. Conclusion In this paper, we introduce a new way to exploit Feature-wise Linear Modulation (FiLM) layers for vision-and-language tasks. Our approach generates the parameters of FiLM layers going up the visual pipeline by attending to the language input in multiple hops rather than all at once. We show Multi-hop FiLM Generator architectures are better able to handle longer sequences than their single-hop counterparts. We outperform state-of-the-art vision-and-language models significantly on the long input sequence GuessWhat?! tasks, while maintaining state-of-the-art performance for the shorter input sequence ReferIt task. Finally, this Multi-hop FiLM Generator approach uses few problem-specific priors, and thus we believe it can be extended to a variety of vision-and-language tasks, particularly those requiring complex visual reasoning. Oracle (Without Category Label) For existing tasks on the GuessWhat?! dataset, the Guesser selects its predicted target object from among a provided list of possible answers. A more natural task would be for the Guesser to directly point out the object, much as a human might. Thus, we introduce a pointing task as a new benchmark for GuessWhat?!. The specific task is to locate the intended object based on a series of questions and answers; however, instead of selecting the object from a list, the Guesser must output a bounding box around the object of its guess, making the task more challenging. This task also does not include important side information, namely object category and (x,y)-position [6], making the object retrieval more difficult than the originally introduced Guesser task as well. The bounding box is defined more specifically as the 4-tuple (x, y, width, height), where (x, y) is the coordinate of the top left corner of the box within the original image I, given an input dialogue. Additional Results ReferIt ImageClef We assess bounding box accuracy using the Intersection Over Union (IoU) metric: the area of the intersection of predicted and ground truth bounding boxes, divided by the area of their union. Prior work [10,12] generally considers an object found if IoU exceeds 0.5. IoU = |bbox_A ∩ bbox_B| / |bbox_A ∪ bbox_B| = |bbox_A ∩ bbox_B| / (|bbox_A| + |bbox_B| − |bbox_A ∩ bbox_B|) (10) We report model error in Table 7. Interestingly, the baseline obtains 92.0% error while Multi-hop FiLM obtains 84.0% error. As previously mentioned, reinjecting visual features into the Multi-hop FiLM Generator's context cell is beneficial. The error rates are relatively high but still in line with those of similar pointing tasks such as SCRC [16,17] (around 90%) on ReferIt.
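As a concrete illustration of the IoU metric in Eq. (10) used for the pointing task above, the following is a minimal Python sketch; it assumes boxes given as (x, y, width, height) tuples with (x, y) the top-left corner, as described in the text.

```python
# Minimal sketch of the IoU metric in Eq. (10), assuming boxes are given as
# (x, y, width, height) tuples with (x, y) the top-left corner.
def iou(box_a, box_b):
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    inter_w = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    inter_h = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = inter_w * inter_h
    union = aw * ah + bw * bh - inter          # |A| + |B| - |A ∩ B|
    return inter / union if union > 0 else 0.0

# An object is typically counted as found when IoU exceeds 0.5.
print(iou((10, 10, 100, 50), (20, 10, 100, 50)) > 0.5)  # True for heavily overlapping boxes
```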
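For readers unfamiliar with the underlying mechanism, the following PyTorch sketch shows a generic feature-wise linear modulation step of the kind discussed in this work: parameters gamma and beta, predicted from a language-derived context vector, rescale and shift each channel of a convolutional feature map. Module names, dimensions, and the residual-style (1 + gamma) variant are illustrative assumptions, not the exact architecture used in the paper.

```python
import torch
import torch.nn as nn

class FiLMLayer(nn.Module):
    """Feature-wise linear modulation: scale and shift conv features per channel."""
    def __init__(self, context_dim, n_channels):
        super().__init__()
        self.to_gamma_beta = nn.Linear(context_dim, 2 * n_channels)

    def forward(self, features, context):
        # features: (batch, channels, H, W); context: (batch, context_dim)
        gamma, beta = self.to_gamma_beta(context).chunk(2, dim=-1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)   # (batch, channels, 1, 1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return (1 + gamma) * features + beta        # common residual-style variant (an assumption)

film = FiLMLayer(context_dim=128, n_channels=64)
feats = torch.randn(2, 64, 14, 14)
ctx = torch.randn(2, 128)                            # e.g. a context vector from the language pipeline
print(film(feats, ctx).shape)                        # torch.Size([2, 64, 14, 14])
```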
5,091
1808.04091
2887561040
In this paper, we propose the task of live comment generation. Live comments are a new form of comments on videos, which can be regarded as a mixture of comments and chats. A high-quality live comment should be not only relevant to the video, but also interactive with other users. In this work, we first construct a new dataset for live comment generation. Then, we propose a novel end-to-end model to generate the human-like live comments by referring to the video and the other users' comments. Finally, we evaluate our model on the constructed dataset. Experimental results show that our method can significantly outperform the baselines.
One task that is similar to live comment generation is image caption generation, which is an area that has been studied for a long time. Early work tried to generate descriptions of an image by retrieving from a big sentence pool, while other work proposed to generate descriptions based on the parsing result of the image with a simple language model. These systems are often applied in a pipeline fashion, and the generated descriptions are not creative. More recent work uses a stepwise merging network to improve the performance @cite_2 .
{ "abstract": [ "The encode-decoder framework has shown recent success in image captioning. Visual attention, which is good at detailedness, and semantic attention, which is good at comprehensiveness, have been separately proposed to ground the caption on the image. In this paper, we propose the Stepwise Image-Topic Merging Network (simNet) that makes use of the two kinds of attention at the same time. At each time step when generating the caption, the decoder adaptively merges the attentive information in the extracted topics and the image according to the generated context, so that the visual information and the semantic information can be effectively combined. The proposed approach is evaluated on two benchmark datasets and reaches the state-of-the-art performances.(The code is available at this https URL)" ], "cite_N": [ "@cite_2" ], "mid": [ "2952978943" ] }
Live Video Comment Generation Based on Surrounding Frames and Live Comments
In this paper, we focus on the task of automatically generating live comments. Live comments (also known as "弹幕", or "bullet screen", in Chinese) are a new form of comments that appear in videos. We show an example of live comments in Figure 1. Live comments are popular among young viewers, as they serve not only for sharing opinions but also for chatting. Automatically generating live comments can make the video more interesting and appealing. Different from other video-to-text tasks, such as video captioning, a live comment appears at a certain point of the video timeline, which gives it some unique characteristics. The live comments can be casual chats about a topic with other users instead of serious descriptions of the videos. Therefore, a human-like comment should be not only relevant to the video, but also interactive with other users. In this paper, we aim at generating human-like live comments for the videos. We propose a novel end-to-end model to generate the comments by referring to the video and other users' comments. We have access to not only the current frame but also the surrounding frames and live comments, because a live comment and its associated frame are in the context of a series of surrounding frames and live comments. To make use of the information in those two parts, we design a model that encodes the surrounding frames and live comments together into a vector, based on which we decode the new live comment. Experimental results show that our model can generate human-like live comments. Our contributions are twofold: • We propose a new task of automatically generating live comments and build a dataset with videos and live comments for live comment generation. • We propose a novel joint video and live comment model to make use of the current frame, the surrounding frames, and the surrounding live comments to generate a new live comment. Experimental results show that our model can generate human-like live comments. Figure 2: An illustration of our joint video and live comment model. We make use of not only the surrounding frames but also the surrounding live comments to generate the target live comment. Proposed Model In Figure 2 we show the architecture of our model. Our live comment generation model is composed of four parts: a video encoder, a text encoder, a gated component, and a live comment generator. The video encoder encodes n consecutive frames, and the text encoder encodes m surrounding live comments into vectors. The gated component aggregates the video and the comments into a joint representation. Finally, the live comment generator generates the target live comment. Video Encoder In our task, each generated live comment is attached to n consecutive frames. In the video encoding part, each frame $f^{(i)}$ is first encoded into a vector $v_f^{(i)}$ by a convolutional layer. We then use a GRU layer to encode all the frame vectors into their hidden states $h_v^{(i)}$: $v_f^{(i)} = \mathrm{CNN}(f^{(i)})$ (1), $h_v^{(i)} = \mathrm{GRU}(v_f^{(i)}, h_v^{(i-1)})$ (2). We set the last hidden state as the video vector $v_v = h_v^{(n)}$. Text Encoder In the comment encoding part, a live comment $c^{(i)}$ with $L^{(i)}$ words $(w_1^{(i)}, w_2^{(i)}, \cdots, w_{L^{(i)}}^{(i)})$ is first encoded into a series of word-level hidden states $(h_{w_1}^{(i)}, h_{w_2}^{(i)}, \cdots, h_{w_{L^{(i)}}}^{(i)})$, using a word-level GRU layer. 
We use the last word-level hidden state as the comment vector $v_c^{(i)}$: $h_{w_j}^{(i)} = \mathrm{GRU}(w_j^{(i)}, h_{w_{j-1}}^{(i)})$ (3). A comment-level GRU then encodes the comment vectors into hidden states $h_c^{(i)}$: $h_c^{(i)} = \mathrm{GRU}(v_c^{(i)}, h_c^{(i-1)})$ (4). The last hidden state $h_c^{(m)}$ is used as the comment representation $v_c$. Gated Selection In order to decide how much information we should take from the video and from the live comments, we apply a gated multi-layer perceptron (MLP) to combine $v_c$ and $v_v$ and obtain the final vector $h$: $s_v = u^{\top}\,\mathrm{ReLU}(W_v v_v + b_v)$ (5), $s_c = u^{\top}\,\mathrm{ReLU}(W_c v_c + b_c)$ (6), and $h = \frac{e^{s_c}}{e^{s_v}+e^{s_c}} v_c + \frac{e^{s_v}}{e^{s_v}+e^{s_c}} v_v$ (7), where $u$, $W$, and $b$ are trainable parameters. Live Comment Generator We use a GRU to decode the live comment. The encoder encodes the frames and live comments jointly into a vector $h$. The probability of generating a sentence given the encoded vector $h$ is defined as $p(w_0, \ldots, w_T \mid h) = \prod_{t=1}^{T} p(w_t \mid w_0, \ldots, w_{t-1}, h)$ (8). More specifically, the probability distribution of word $w_i$ is calculated as follows: $h_i = \mathrm{GRU}(w_i, h_{i-1})$ (9), $p(w_i \mid w_0, \ldots, w_{i-1}, h) = \mathrm{softmax}(W_o h_i)$ (10). Experiments In this section, we show the experimental results of our proposed model and compare it with three baselines on the dataset we construct from Youku. Live Comment Dataset Construction Video frames: We extract frames from an animated TV series named "Tianxingjiuge" ("天行九歌") at a frequency of 1 frame per second. We get 21 600 frames from 40 videos in total, with a shape of 128 × 72 for each frame. We split the frames into a training set (21 000) and a test set (600). Live comments: We use the developer tools in Google Chrome to manually detect the live comment sources, via which we get all live comments of the 40 videos. For each extracted frame, we select the 5 live comments that appear nearest in time to the frame. Reference set: Besides the live comments in our training set and test set, we crawl 1 036 978 extra live comments to serve as the reference set for calculating the BLEU score and perplexity, which evaluate the fluency of the generated live comments (refer to Table 2). Copyright statement: The dataset we construct can only be used for scientific research. The copyright belongs to the original website Youku. Baselines Besides the model described in Section 2, we have three baseline methods: Frame-to-Comment (F2C) (Vinyals et al., 2015) applies a CNN to encode the current frame to a vector, based on which the decoder generates the target live comment. Moment-to-Comment (M2C) applies an RNN to make use of one live comment near the current frame besides the CNN for the frame. The two encoded vectors are concatenated to be the initial hidden state for the decoder. Context-to-Comment (C2C) is similar to (Venugopalan et al., 2015), which makes use of a series of surrounding frames and live comments by encoding them with extra higher-level RNNs. Evaluation Metrics We design two types of evaluation metrics: human evaluation and automatic evaluation. Human Evaluation: We evaluate in three aspects: Fluency is designed to measure whether the generated live comments are fluent, setting aside the relevance to videos. Relevance is designed to measure the relevance between the generated live comments and their associated frames. Overall Score is designed to synthetically measure the confidence that the generated live comments are made by humans in the context of the video. For all of the above three aspects, we stipulate the score to be an integer in {1, 2, 3, 4, 5}. The higher the better. The scores are evaluated by three seasoned native speakers and finally we take the average of the three raters as the final result. 
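As a concrete illustration of the encoders in Eqs. (1)-(4) above, the following PyTorch sketch encodes n frames with a CNN plus a frame-level GRU and m comments with word-level and comment-level GRUs. The CNN here is deliberately simplified (the paper reports 3 convolution layers and 3 linear layers), and all layer sizes and module names are illustrative assumptions rather than the authors' exact implementation.

```python
# Illustrative sketch of the video and text encoders (Eqs. 1-4); sizes are assumptions.
import torch
import torch.nn as nn

class VideoEncoder(nn.Module):
    def __init__(self, dim=300):
        super().__init__()
        self.cnn = nn.Sequential(                       # simplified stand-in for the paper's CNN
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.gru = nn.GRU(dim, dim, batch_first=True)

    def forward(self, frames):                          # frames: (n, 3, H, W)
        v_f = self.cnn(frames).unsqueeze(0)             # (1, n, dim), one vector per frame (Eq. 1)
        _, h_n = self.gru(v_f)                          # last hidden state h_v^(n) (Eq. 2)
        return h_n.squeeze(0).squeeze(0)                # video vector v_v, shape (dim,)

class CommentEncoder(nn.Module):
    def __init__(self, vocab=34100, dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.word_gru = nn.GRU(dim, dim, batch_first=True)     # Eq. (3)
        self.comment_gru = nn.GRU(dim, dim, batch_first=True)  # Eq. (4)

    def forward(self, comments):                        # comments: (m, max_len) word ids
        _, h_w = self.word_gru(self.embed(comments))    # (1, m, dim): one vector per comment
        v_c_seq = h_w[0].unsqueeze(0)                   # comment vectors as a sequence (1, m, dim)
        _, h_c = self.comment_gru(v_c_seq)              # last hidden state h_c^(m)
        return h_c.squeeze(0).squeeze(0)                # comment vector v_c, shape (dim,)

frames = torch.randn(5, 3, 72, 128)                     # n = 5 consecutive frames
comments = torch.randint(0, 34100, (5, 12))             # m = 5 surrounding comments, 12 tokens each
print(VideoEncoder()(frames).shape, CommentEncoder()(comments).shape)  # torch.Size([300]) x2
```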
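The gated selection and decoding steps in Eqs. (5)-(10) above can likewise be sketched compactly. The weighting in Eq. (7) is written exactly as reconstructed (the score derived from v_c weights v_c, and vice versa), and since the text does not spell out how h initializes the 600-dimensional decoder state, the demo below simply uses a placeholder initial state; all of this is illustrative, not the authors' code.

```python
# Illustrative sketch of the gated selection (Eqs. 5-7) and one decoder step (Eqs. 9-10).
import torch
import torch.nn as nn

class GatedSelection(nn.Module):
    """Eqs. (5)-(7): scalar scores decide how much of v_c and v_v enters the joint vector h."""
    def __init__(self, dim=300):
        super().__init__()
        self.w_v, self.w_c = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.u = nn.Linear(dim, 1, bias=False)

    def forward(self, v_v, v_c):
        s_v = self.u(torch.relu(self.w_v(v_v)))               # Eq. (5)
        s_c = self.u(torch.relu(self.w_c(v_c)))               # Eq. (6)
        w = torch.softmax(torch.cat([s_c, s_v], dim=-1), dim=-1)
        return w[..., :1] * v_c + w[..., 1:] * v_v            # Eq. (7)

class DecoderStep(nn.Module):
    """Eqs. (9)-(10): one GRU step followed by a softmax over the vocabulary."""
    def __init__(self, dim=300, hidden=600, vocab=34100):
        super().__init__()
        self.cell = nn.GRUCell(dim, hidden)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, w_embed, h_prev):
        h = self.cell(w_embed, h_prev)
        return torch.softmax(self.out(h), dim=-1), h

v_v, v_c = torch.randn(1, 300), torch.randn(1, 300)
h = GatedSelection()(v_v, v_c)                                # joint representation, (1, 300)
probs, _ = DecoderStep()(torch.randn(1, 300), torch.zeros(1, 600))  # placeholder initial state
print(h.shape, probs.shape)                                   # torch.Size([1, 300]) torch.Size([1, 34100])
```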
Automatic Evaluation: We adopt two metrics: BLEU score (Papineni et al., 2002) and perplexity. These two metrics are designed to measure whether the generated live comments accord with the human-like style. To get BLEU scores, for each generated live comment, we calculate its BLEU-4 score with all live comments in the reference set, and then we pick the maximal one to be its final score. Perplexity measures the language quality of the generated live comments, and is estimated as $\text{perplexity} = 2^{-\frac{1}{n}\sum_i \log p(w_i \mid h_i)}$, where the sum runs over each word $w_i$ in the sentence and $h_i$ is the decoder hidden state from which $w_i$ is predicted. Experimental Details The vocabulary is limited to the 34,100 most common words in the training dataset. We use a shared embedding between encoder and decoder and set the word embedding size to 300. The word embedding is randomly initialized and learned automatically during the training. For the 3 GRUs used in the encoding stage, we set the hidden size to 300. For the decoding GRU, we set the hidden size to 600. For the encoding CNN, we use 3 convolution layers and 3 linear layers, and get a final vector with a size of 300. The batch size is 512. We use the Adam (Kingma and Ba, 2014) optimization method to train the model. For the hyper-parameters of the Adam optimizer, we set the learning rate α = 0.0003, the two momentum parameters β1 = 0.9 and β2 = 0.999 respectively, and ε = 1 × 10^-8. During training, we use "teacher forcing" (Williams and Zipser, 1989) to make our model converge faster and we set the teacher forcing ratio p = 0.5. Results and Analysis As shown in Table 1, our model achieves the highest scores over the baseline models in all three aspects. When only the current frame or one extra live comment is considered (F2C and M2C), the generated live comments have low scores. After considering more surrounding frames and live comments (C2C), all of the scores get higher. Finally, with the gate mechanism that can automatically decide the weights of surrounding frames and live comments, our proposal achieves the highest scores, which come close to those of real-world live comments. We use Spearman's rank correlation coefficients to evaluate the agreement among the raters. The coefficients between any two raters are all near 0.6, with an average of 0.63. These high coefficients show that our human evaluation scores are consistent and credible. Relevance: The relevance scores presented in Table 1 show that the live comments generated by all models do not achieve high relevance scores, which means that many of the generated live comments are not relevant to the current frame. We go through the real live comments and find that about 75.7% of the live comments are not relevant to the current frame, but are just chatting. In fact, we can find from Table 1 that the relevance score of real live comments is not high either. Therefore, the low relevance scores are reasonable. Still, our proposal can generate more relevant live comments owing to its ability to combine the information from the surrounding frames and live comments. Fluency: From the fluency score presented in Table 1, and the BLEU-4 score and the perplexity score presented in Table 2, we can see that our proposal generates live comments which best accord with the human-like style. Informativeness: From the Average Length in Table 2, we can see that our proposal increases the length of the generated live comments, which indicates that they carry more meaningful information. Conclusion In this paper, we propose the task of live comment generation. 
In order to generate high-quality comments, we propose a novel neural model which makes use of the surrounding frames in the video and other surrounding live comments. Experimental results show that our model performs better than the baselines on various metrics, and even approaches human performance.
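The automatic metrics described above admit a small, self-contained sketch under stated assumptions: perplexity is computed with base-2 logarithms (matching the 2^(...) form of the formula), and the BLEU score of a generated comment is taken as the maximum sentence-level BLEU-4 against the reference set, here via NLTK's sentence_bleu with smoothing, which is one reasonable choice and not necessarily the authors' exact setup.

```python
# Sketch of the automatic metrics: perplexity (base-2 logs assumed) and max BLEU-4
# over a reference set via NLTK (smoothing is an added assumption).
import math
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def perplexity(token_probs):
    # token_probs: model probabilities p(w_i | ...) of the generated words
    return 2 ** (-sum(math.log2(p) for p in token_probs) / len(token_probs))

def max_bleu4(candidate_tokens, reference_set):
    smooth = SmoothingFunction().method1
    return max(sentence_bleu([ref], candidate_tokens,
                             weights=(0.25, 0.25, 0.25, 0.25),
                             smoothing_function=smooth)
               for ref in reference_set)

print(round(perplexity([0.5, 0.25, 0.125]), 2))            # 2 ** mean of {1, 2, 3} bits = 4.0
refs = [["the", "hero", "is", "so", "cool"], ["this", "scene", "is", "great"]]
print(max_bleu4(["the", "hero", "is", "cool"], refs) > 0)  # True
```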
1,926
1808.04091
2887561040
In this paper, we propose the task of live comment generation. Live comments are a new form of comments on videos, which can be regarded as a mixture of comments and chats. A high-quality live comment should be not only relevant to the video, but also interactive with other users. In this work, we first construct a new dataset for live comment generation. Then, we propose a novel end-to-end model to generate the human-like live comments by referring to the video and the other users' comments. Finally, we evaluate our model on the constructed dataset. Experimental results show that our method can significantly outperform the baselines.
We cast this problem as a natural language generation problem, and we are also inspired by recent work on natural language generation models with text inputs @cite_5 @cite_6 @cite_8 @cite_3 .
{ "abstract": [ "", "Video captioning, the task of describing the content of a video, has seen some promising improvements in recent years with sequence-to-sequence models, but accurately learning the temporal and logical dynamics involved in the task still remains a challenge, especially given the lack of sufficient annotated data. We improve video captioning by sharing knowledge with two related directed-generation tasks: a temporally-directed unsupervised video prediction task to learn richer context-aware video encoder representations, and a logically-directed language entailment generation task to learn better video-entailed caption decoder representations. For this, we present a many-to-many multi-task learning model that shares parameters across the encoders and decoders of the three tasks. We achieve significant improvements and the new state-of-the-art on several standard video captioning datasets using diverse automatic and human evaluations. We also show mutual multi-task improvements on the entailment generation task.", "A sentence can be translated into more than one correct sentences. However, most of the existing neural machine translation models only use one of the correct translations as the targets, and the other correct sentences are punished as the incorrect sentences in the training stage. Since most of the correct translations for one sentence share the similar bag-of-words, it is possible to distinguish the correct translations from the incorrect ones by the bag-of-words. In this paper, we propose an approach that uses both the sentences and the bag-of-words as targets in the training stage, in order to encourage the model to generate the potentially correct sentences that are not appeared in the training set. We evaluate our model on a Chinese-English translation dataset, and experiments show our model outperforms the strong baselines by the BLEU score of 4.55.", "We introduce the first goal-driven training for visual question answering and dialog agents. Specifically, we pose a cooperative 'image guessing' game between two agents -- Qbot and Abot -- who communicate in natural language dialog so that Qbot can select an unseen image from a lineup of images. We use deep reinforcement learning (RL) to learn the policies of these agents end-to-end -- from pixels to multi-agent multi-round dialog to game reward. We demonstrate two experimental results. First, as a 'sanity check' demonstration of pure RL (from scratch), we show results on a synthetic world, where the agents communicate in ungrounded vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find that two bots invent their own communication protocol and start using certain symbols to ask answer about certain visual attributes (shape color style). Thus, we demonstrate the emergence of grounded language and communication among 'visual' dialog agents with no human supervision. Second, we conduct large-scale real-image experiments on the VisDial dataset, where we pretrain with supervised dialog data and show that the RL 'fine-tuned' agents significantly outperform SL agents. Interestingly, the RL Qbot learns to ask questions that Abot is good at, ultimately resulting in more informative dialog and a better team." ], "cite_N": [ "@cite_5", "@cite_3", "@cite_6", "@cite_8" ], "mid": [ "", "2609138599", "2799090016", "2953119472" ] }
Live Video Comment Generation Based on Surrounding Frames and Live Comments
1,926
1808.03810
2887150966
In this paper, we investigate the potential of the Boyer-Moore waterfall model for the automation of inductive proofs within a modern proof assistant. We analyze the basic concepts and methodology underlying this 30-year-old model and implement a new, fully integrated tool in the theorem prover HOL Light that can be invoked as a tactic. We also describe several extensions and enhancements to the model. These include the integration of existing HOL Light proof procedures and the addition of state-of-the-art generalization techniques into the waterfall. Various features, such as proof feedback and heuristics dealing with non-termination, that are needed to make this automated tool useful within our interactive setting are also discussed. Finally, we present a thorough evaluation of the approach using a set of 150 theorems, and discuss the effectiveness of our additions and relevance of the model in light of our results.
One of the main heuristic tools in Proof Planning is Rippling @cite_3 . Rippling gives a direction to the rewriting process. The idea is based on the observation that throughout the process, on each side of the equality being proved, there is an unchanging part (skeleton) and a changing part (wave-front). In principle the proof is guided towards rewrite rules that move the wave-front upwards in the syntax tree of the term. Rippling has proved to be a powerful heuristic for inductive proofs. However, there is still a possibility that the proof will block, thus requiring a "patch" or "critic". Critics include lemma speculations and generalizations, some of which can be produced automatically in modern Proof Planning systems.
{ "abstract": [ "Preface 1. An introduction to rippling 2. Varieties of rippling 3. Productive use of failure 4. A formal account of rippling 5. The scope and limitations of rippling 6. From rippling to a general methodology 7. Conclusions Appendix 1. An annotated calculus and a unification algorithm Appendix 2. Definitions of functions used in this book Bibliography Index." ], "cite_N": [ "@cite_3" ], "mid": [ "1992696924" ] }
The Boyer-Moore Waterfall Model Revisited
work are still being used in modern research for automated inductive proofs. For example, the Nqthm system [5], which started off as an implementation of a similar model to Boyer-Moore's original prover, later evolved into ACL2, system which is still under development [14]. Although ACL2 is now a much more sophisticated and powerful system than the original Boyer-Moore waterfall approach, we wanted to investigate whether this venerable model could still be beneficial to modern, general-purpose theorem proving systems. Our investigation involves the integration of the Boyer and Moore waterfall model into the HOL Light theorem prover, followed by its extension with modern algorithms and procedures. Our work reconstructs Boulton's implementation of the Boyer-Moore system [3] from HOL 90 (an earlier version of HOL), which is believed to be a quite faithful reconstruction of the Boyer-Moore approach. The paper is organized as follows: In Section 2, we briefly discuss HOL Light, a state-of-the-art theorem prover. In Section 3, we review the waterfall model as it was originally suggested by Boyer and Moore. This is followed, in Section 4, by the details of our implementation and the extensions that we added, including state-ofthe-art generalization algorithms. In Section 5, we analyze the setup and results of the system evaluation. A brief review of related work in included in Section 6. We describe our suggestions for future work in Section 7 and summarize our conclusions in Section 8. HOL Light HOL Light [10] is a relatively recent member of the HOL family of theorem provers that was initially built in an attempt to overcome certain disadvantages of its predecessors. The system has equality as the only primitive concept and a few primitive inference rules that form the basis of more complex rules and tactics. Built on top of these, HOL Light has its own automated methods for proofs such as the model elimination method MESON [11]. Additionally, it has an array of conversion methods that allow for efficient and fine-grained manipulation (such as rewriting or numerical reduction) of formulas. HOL Light has significant advantages over the other modern systems especially for the current work. It is a lightweight, flexible system written in OCaml, that allows for interaction at every level. This allows for relatively easy implementation and integration of tools that can seamlessly interact with the internals and methods of HOL Light. Unfortunately, there are also a few disadvantages. HOL Light is not too userfriendly when writing proofs due to its relatively large number of low-level tactics and complicated syntax. It has a steep learning curve and its procedural proofs have reduced readability compared to systems such as Isabelle [15], where the declarative proof-style seems now to be the norm. In a nutshell, HOL Light can be characterised as a system functioning at a lower programming level rather than the higher but limiting user level. For our purpose though, the advantage of a smooth and direct interaction, coupled with the fact that HOL Light is a well-regarded and powerful system were convincing enough to select it as the backend for the Boyer-Moore system. The Boyer & Moore Model In the next few sections, we provide a review of the Boyer-Moore waterfall model. In particular, we describe its architecture and the various heuristics that are applied in the automatic search for a proof. 
We note that that this description encompasses both the original model and Boulton's HOL reconstruction, which, as mentioned before, is believed to be mostly faithful. Nevertheless, we shall point out any aspects, where Boulton's HOL version seems to diverge slightly from the original model either by design or due to the use of the HOL system as a vehicle. The Waterfall Metaphor One of the main principles underlying the Boyer-Moore model is the application of "black-box" procedures. According to Boyer and Moore, induction should be applied only as a last resort when all the other procedures have failed. Moreover, one must ensure that induction is applied to the simplest and most general clauses. The black-box procedures either prove or, failing that, simplify and generalize the clauses as much as possible so as to prepare them for induction. It should be noted that Boyer and Moore call all such procedures "heuristics" even though not all of them use heuristic methods and, for this paper, we shall follow their terminology. The heuristics are organised and applied in a way that metaphorically resembles rocks in an initially dry waterfall. Clauses that are to be proven are poured from the top of the waterfall and each heuristic is then applied to the clause sequentially. The application of each heuristic can have one of the following results: -It may prove a clause, in which case the latter 'evaporates'. -Sometimes it may simplify or split the clause into smaller ones. In this case, the proof of the resulting clauses is sufficient for the proof of the initial clause. We then say that the heuristic has been applied successfully or simply was successful. The newly created clause or clauses are recursively poured from the top of the waterfall. -It may disprove the clause, for example by reducing it to False, in which case the system immediately fails. -If it cannot deal with the clause, it passes it on to the next heuristic. In that case we say that the heuristic has failed. If all the heuristics fail, the clause ends up at the bottom of the waterfall and, together with all the clauses that the waterfall failed to prove, forms a pool. The aim then is to prove each clause of the pool by induction. Doing so is sufficient to prove the initial conjecture. An illustration of the model we have just described can be found in Fig. 1. Once induction is applied to one of the clauses in the pool, the newly produced clauses (the base case and step case) are in turn poured over a new waterfall. The same process of heuristic application as before is then used. New pools of clauses may be formed and another induction may be applied to them as illustrated in Fig. 2. Eventually, assuming the system is successful, all clauses will be proved and will have evaporated from all pools and waterfalls, resulting in the proof of the initial clause. The Shell The "Shell principle" [4] was used in the original Boyer-Moore model to define and describe recursive datatypes. Such a principle was crucial to this initial description because of the lack of support for such datatypes in Lisp, the underlying programming language for the system. Boulton, in his HOL90 implementation, uses an extended version of the original Shell that contains more defined properties and information. Even though the HOL systems, including HOL Light, have full support for recursive data types, the implementation and usage of the Shell within the automated system is still necessary. 
This is because it contains useful, explicitly defined information about the types that needs to be readily available to the automated waterfall heuristics at any given point. Boyer and Moore describe a Shell as a "colored n-tuple with restrictions on the colors of objects that can occupy its components" [4]. The color represents a unique identifier for a datatype. Apart from identifying the datatype and separating it from other similar data types, a number of properties are defined for it within the Shell. For example it will contain constructors, bottom objects and accessors (also known as "destructors" in the more recent literature) as some of its main parts. Boulton, in his HOL versions, included additional properties such as a type axiom, an induction theorem, a theorem for splitting cases, and theorems to ensure distinctness of constructors and one-one restrictions. As an example, we provide the shell for natural numbers (type num in HOL Light) in Table 1.
Table 1: The shell for natural numbers (type num):
Accessors: PRE : ∀n. PRE (SUC n) = n
Type Axiom: ∀e f. ∃g. g 0 = e ∧ (∀n. g (SUC n) = f (g n) n)
Induction theorem: ∀P. P 0 ∧ (∀n. P n ⇒ P (SUC n)) ⇒ (∀n. P n)
Cases theorem: ∀m. m = 0 ∨ (∃n. m = SUC n)
Distinctness theorem(s): ∀n. ¬(SUC n = 0)
One-one restriction(s): ∀m n. SUC m = SUC n ⇔ m = n
The heuristics There are seven heuristics proposed in the original Boyer-Moore system. These include the transformation to clausal form and the induction heuristic which applies the induction scheme. As mentioned previously, the induction heuristic is applied separately from the waterfall loop, but it is still implemented using the same structure and output as all the other heuristics. We will describe the six heuristics that form part of the waterfall next, focusing on their functionality, limitations and output. The Clausal Form Heuristic Boyer and Moore decided to rely on Clausal Normal Form (CNF) because they could avoid an asymmetry they observed with conditionals. Generally, a term can be transformed to CNF (i.e. a conjunction of disjunctions) by eliminating existential quantifiers through Skolemisation and removing universal quantifiers. Each conjunct is then a disjunction of literals and is called a clause. For example, the term m + n = 0 ⇒ m = 0 ∧ n = 0 is transformed into (¬(m + n = 0) ∨ m = 0) ∧ (¬(m + n = 0) ∨ n = 0), which is a conjunction of two clauses. In the Boyer-Moore model, the Clausal Form heuristic is responsible for the transformation of quantifier-free sentences to CNF. It fails if the input term is a single clause already in CNF. It also splits a conjunction of clauses and returns them as a list. We note that an important limitation of this heuristic as implemented in the Boyer-Moore system is that it cannot deal with quantifiers, i.e. it assumes quantifiers have already been eliminated. The Substitution Heuristic The Substitution heuristic is a simplification procedure used to eliminate negations of equalities between variables and terms. For example, assume we have the following input, where x is a variable and A1, A2, A3 do not contain x: A1 ∨ ¬(x = t) ∨ A2 ∨ P(x) ∨ A3. If t is a term that does not contain x as a variable then we can substitute x in P(x) with t, thus obtaining: A1 ∨ F ∨ A2 ∨ P(t) ∨ A3. We note that negations of equalities often appear in CNF because such equalities are often on the left-hand side of an implication, either as part of the initial conjecture or as a by-product of induction.
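The substitution step just illustrated can be made concrete with a small Python sketch. The clause and term representation below (literals as tuples, variables as plain strings, applications as tuples) is an assumption for illustration only and is unrelated to the HOL Light data structures.

```python
# Sketch of the substitution heuristic on a clause given as a list of literals.
# Terms are variable strings or (function, arg, ...) tuples; ("neq", lhs, rhs)
# stands for a negated equality ~(lhs = rhs). All of this is an illustrative
# representation, not the system's actual one.
def occurs(var, term):
    if isinstance(term, tuple):
        return any(occurs(var, a) for a in term[1:])
    return term == var

def substitute(term, var, value):
    if isinstance(term, tuple):
        return (term[0],) + tuple(substitute(a, var, value) for a in term[1:])
    return value if term == var else term

def substitution_heuristic(clause):
    for i, lit in enumerate(clause):
        if lit[0] == "neq":
            for var, value in ((lit[1], lit[2]), (lit[2], lit[1])):
                if isinstance(var, str) and not occurs(var, value):
                    rest = clause[:i] + clause[i + 1:]
                    return [tuple(substitute(a, var, value) if j else a
                                  for j, a in enumerate(l)) for l in rest]
    return None  # heuristic fails: no usable negated equality

# A1 \/ ~(x = t) \/ P(x)  becomes  A1 \/ P(t)
clause = [("atom", "A1"), ("neq", "x", ("t",)), ("pred", "P", "x")]
print(substitution_heuristic(clause))  # [('atom', 'A1'), ('pred', 'P', ('t',))]
```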
The heuristic fails if it cannot be applied, meaning that there is no such negated equality. Otherwise, the heuristic returns a single simplified clause. The Simplify Heuristic This heuristic applies rewriting to the clause in an attempt to simplify or prove it. It uses rewrite rules defined by the user (see Section 3.4), definitions of recursive functions, and a few special rules for specific cases of clauses, such as conditionals (eg. the rule if p then q else q ⇔ q). The heuristic fails if no rules can be applied, or if no changes are made to the clause. It also uses lexicographic ordering in an attempt to avoid looping that may be caused by permutative rules and supports conditional rewrite rules. Unfortunately, this does not eliminate all possible loops and it is left to the user to ensure the set of rewrite rules is terminating. The methods to manipulate the set of rules are rather limited: they only allow the creation of new rules from existing, proved theorems and there is no explicit mechanism to remove a rule from the set. The Equality Heuristic The Equality heuristic is similar to the substitution one. It uses equalities for "cross" (or "weak") fertilization. Cross-fertilization is the replacement of part of the induction hypothesis within the induction conclusion (as opposed to a complete replacement in strong "fertilization"). Instead of negations of equalities between a variable and terms not containing the variable as in the substitution heuristic, the equality heuristic checks for negations of equalities involving a term which is not a so-called explicit value template. An explicit value template is a non-variable term consisting of constants or any constructor application to bottom objects or variables. For example 0, SU C(0) and SU C(SU C(x)) are all explicit value templates. If the clause is the result of an induction step, the negated equality is eliminated (because of cross-fertilization). The heuristic fails if no such negated equality is found. As an example, during the proof of the commutative property of multiplication we obtain the following clause as an induction step: ¬(n × 0 = 0) ∨ (n × 0) + 0 = 0. The equality heuristic is applied in this case, as n × 0 is not an explicit value template, giving us the result: F ∨ 0 + 0 = 0 ie. 0 + 0 = 0. The Generalization Heuristic The Generalization heuristic attempts to substitute the clause with a more general one that might be easier to prove. In particular, the generalization proposed by Boyer and Moore, and thus implemented by Boulton, is based on the elimination of minimal common subterms. First, the generalizable terms of the clause are calculated. A term is generalizable if it is neither a variable, nor an explicit value template (see section 3.3.4), nor an application of accessor functions. From the generalizable terms, candidates for generalization are picked based on the common subterm criterion. According to this criterion, a generalizable term is a candidate for generalization if it appears in more than one generalizable subterms, or on both sides of an equation, or on both sides of a negated equation. Finally, from the list of candidates, the minimal common subterms are picked, meaning that candidates that have other candidates as subterms are rejected. Thus the "smallest" candidates are generalized simultaneously. These subterms are replaced with fresh variables. The heuristic fails if no such subterms are found. Otherwise, it returns a single generalized clause. 
The original clause follows by a simple instantiation of the variable in the generalized clause. As an example, taken from an actual test case, when trying to prove the commutative property for the multiplication of natural numbers, the system ends up with the clause (m × n) + n = n + (m × n) after a few steps. In this clause, the term (m × n) is on both sides of the equation and may be generalized and substituted with a new variable n'. The resulting clause is n' + n = n + n'. It is worth noting that generalization produces new clauses, some of which can be particularly interesting. In our example, generalization yields the commutative property of addition for natural numbers. Unfortunately, as expected, the process is not flawless. It may over-generalize, resulting in a clause which is no longer provable. In other words, it is sometimes the case that the original clause might have been provable if we had just proceeded with induction rather than generalization. Creating a generalization process that minimizes the risk of over-generalizing remains an open issue. The heuristic additionally supports the use of generalization lemmas supplied by the user. These are theorems that "point out properties of terms that are good to keep in mind when generalizing formulas" [4]. If one of the candidate subterms is an instantiation of the generalization lemma, the lemma is added to the clause so as to "keep in mind" the property that it represents. For instance, if we use the (trivial) generalization lemma m × n ≥ 0 then, in our last example, after generalization, we obtain the additional restriction n' ≥ 0 and the result is n' ≥ 0 ⇒ n' + n = n + n'. We should note that this result is not in CNF but will be converted in the next proof step, as it will be poured at the top of the waterfall and go through the Clausal Form Heuristic (see Section 3.3.1). The Irrelevance Heuristic This heuristic is another form of generalization. It attempts to eliminate irrelevant subterms from the clause. Firstly, the subterms of the clause are split into partitions based on common variables, meaning that two subterms are in the same partition if they share at least one variable. One such partition is irrelevant if it is falsifiable. Judging if a partition of subterms is falsifiable is done using two actual heuristics. The first one checks if there are any occurrences of recursive functions. If not, then the subterms consist only of functions of the shell (constructors, accessors, constants, etc.). Therefore, if the partition was always true, we should have proved it by simplification. Since the irrelevance check comes after the simplifier in the waterfall, we have certainly failed to do so and consequently we can assume that the partition of subterms can be falsified. The second heuristic checks if a subterm is an application of a function over variables. If so, it can only be a theorem if the function always returns true. Again it is assumed that it should have been simplified by rewriting. If a partition of subterms can be falsified, it is safe to eliminate the subterms from the clause. The resulting clause will be a theorem if and only if the original one is a theorem. We illustrate the above idea using an example. 
Consider the clause: p = [] ∨ REV ERSE (AP P EN D (REV ERSE p) [a]) = CON S a (REV ERSE (REV ERSE p)) which is generalized to: p = [] ∨ REV ERSE (AP P EN D l [a]) = CON S a (REV ERSE l) In this result, the subterm p = [] is deemed irrelevant because it does not have common variables with the rest of the term, does not have any application of recursive functions nor is an application of a function to variables. After eliminating the irrelevant term, the resulting clause is: REV ERSE (AP P EN D l [a]) = CON S a (REV ERSE l) which is a generalization of the original one. Unfortunately, these heuristics are unsafe and may eliminate relevant subterms, thereby rendering the clause unprovable. The heuristic fails if no irrelevant terms are found, or it indicates that the clause cannot be proved if it finds that all subterms are irrelevant. Otherwise, it returns a simplified clause and the proof of the original one. User Interaction However systematic the system that we describe might be, it still does not guarantee to find the proof of a true statement, i.e. it is not complete. Thus, it may require some user intervention to "set the tracks" and guide the proof procedures. The user can interact with the system and affect its performance in various simple ways: -Firstly, the user is responsible for providing the shell for the data type and the definitions of the functions, both simple and recursive. -Moreover, the user can manipulate the sets of rewrite rules and generalization lemmas. Picking the set of rewrite rules carefully may prove crucial for achieving the proof. Allowing the user to manipulate the set offers significant control over the proof procedure. -Additionally, picking generalization lemmas (see Section 3.3.5) containing useful properties may help guide or unlock proofs that would otherwise fail. -The user may also choose which main waterfall heuristics will be used and in what order. Different combinations of heuristics may produce different results. For instance, the user may choose to remove the generalization heuristic which, as an unsafe operation, might over-generalize and render a conjecture unprovable. The Boyer-Moore Waterfall Model implemented and extended in HOL Light In this section we discuss our implementation of the Boyer-Moore model in HOL Light. The main system consists of a reimplementation of Richard Boulton's old code HOL90 [3]. The main issues of this reimplementation are discussed in Section 4.1. We also proceeded to develop various enhancements and improvements to the system in our attempt to evaluate its potential and effectiveness within a state-ofthe-art theorem prover. The enhancements were applied in number of steps. Firstly, we made some effort to fix some issues and upgrade the system so that it is a better fit to our current interactive setting. These changes are discussed in Section 4.2. Secondly, we focused on integrating some of HOL Light's features into the system and these attempts are analysed in Section 4.3. Finally, as will be described in Section 4.4, we attempted to upgrade the generalization heuristics by introducing some of the latest work in this area. Moreover, in an attempt to address overgeneralization issues we implemented and integrated a simple disprover, which is discussed in Section 4.4.3 Main issues The primary implementation task was to reconstruct the old code by [3] for HOL Light. 
It is important to note that this was not a simple, straightforward translation from one environment to another, since HOL Light has significant differences from HOL90, and the systems lack complete documentation. The encountered issues can be split into two basic categories, which we briefly discuss. i) The first one involves those caused by the differences between Standard ML (SML) used to implement HOL90 and OCaml used of HOL Light. There are syntactic variations, such as those in function and data type declarations, in case splits, in the test for the empty list (equality to an empty list is used instead of the null function) and many more. Combined with the limited documentation for these platforms (consisting mainly of expert users offering solutions through mailing lists), these made some aspects of the re-implementation task a tedious process. Dealing with logical differences and resulting errors was even harder. ii) The second category involves those caused by the difference in system functionalities. For example, some inference rules and tactics that existed in HOL90 have no counterparts in HOL Light. For instance, "SUBS OCCS" (a rule used to substitute occurrences of a term in a theorem using other equational theorems) and "INDUCT TAC" (a tactic used to apply induction based on a given induction rule). We were compelled to reconstruct the missing rules and tactics based on the existing ones in HOL Light. Differences in the system behaviour also had an impact on the reconstruction. As an example, HOL Light treats natural numbers not as constants (as is the case in HOL) but as applications of the N U M ERAL function. Fitting the model into an interactive setting The first challenge that we encountered, once the initial reconstruction of the system had been accomplished, involved augmenting the means of user interaction so as to improve the fit of the automatic system within HOL Light's interactive setting. The two main steps we took towards this goal were the extension and improvement of the feedback provided by the system (Section 4.2.1) and the attempt to minimize non-termination (Section 4.2.2). Increasing the System Verbosity One of the simplest, yet important, issues involved in testing, evaluating and improving the system is to have the option of producing a trace for every proof attempt. In order to investigate the reasons for failure or error, one naturally desires as much information as possible. However, this need has to be balanced against the fact that too much information may lead to clutter when dealing with large proofs, making the trace unreadable. Boulton's original implementation of the system contained a minimalistic proof printer. Upon activation, it would give information about which clause is being evaluated by the waterfall at any given time. We enhanced this proof printer so as to offer richer information about the mechanics of the system. Each heuristic, upon success, prints out its name before the resulting clause. Many offer even more information about their results. For example the Clausal Form heuristic shows the number of new clauses produced (by breaking conjunctions) and the Generalization heuristic shows which subterms were generalized. A message is also printed out whenever induction is applied, indicating the clause to which it was applied. Therefore, the steps followed in the proof process are now made explicit. 
Moreover, machinery was included to indicate the reason for failure, wherever possible, as well as the theorem produced upon the successful proof of a clause. Finally, the user was given the option of viewing the proof tree created by the waterfall upon its completion and before moving to induction. As the proof trace may increase drastically when dealing with complicated theorems, we aimed to keep the messages compact and easy to read. The improved tracing mechanism effectively gave us means of properly monitoring the system, when need be, and of analyzing its performance and finding solutions to its problems. As a simple example, the proof trace for the simple lemma SUC(m) = m + SUC(0) is given in Fig. 3. Eliminating Loops One of the first disadvantages of the original Boulton implementation, as noticed during its reconstruction in HOL Light, was that the system would in some cases fall into endless loops. This is a tricky issue for such automated systems because the user is in no position to know whether progress is being made towards the proof or whether the system will never terminate. This becomes particularly troublesome when the system is used as part of an extensive run on a set of hundreds of theorems. As a way of dealing with this issue, we decided to introduce two techniques: a warehouse filter and the imposition of a maximum depth limit on the size of terms. These are applied outside the waterfall model, as described next. The warehouse filter is a storage of clauses that have already been evaluated successfully by a given waterfall. If the same clause is poured on top of the same waterfall it means that at least one of the heuristics was successful but after one or more loops the system ended up with the same result. Consequently, if we allow it to proceed further, the same heuristic will be applied and the same result will loop through the waterfall forever. Our filter checks if the clause has already been evaluated by the waterfall and which heuristic was applied to it. It then skips the heuristic that led to the loop and tries the next one instead in the hope of eventually achieving the proof. It is worth noting that the warehouse is local. Therefore, if the same clause is poured over a different waterfall (e.g. after at least one induction step) it will not be filtered, as it is not certain in that case that there is a loop. For example it might just be a subterm that occurs more than once in the same proof. The same warehouse filtering technique is also applied in the induction scheme. Before applying induction we check if induction has already been applied to the same clause in the same proof branch. If this is the case, the system fails because further induction will only lead to the same result. Despite our efforts with the warehouse filter and its effectiveness in some situations, it was still insufficient as the system still looped fairly often. After careful observation of various non-terminating cases, we noticed that in most of them the repetitive application of rewriting and inductions led to a constant increase in the size of the term by having multiple constructors or function applications to a variable. Our "maximum depth" heuristic measures the maximum depth in the syntax tree of a term at which a variable occurs. By adding a user-defined limit to this depth we accomplished a drastic decrease in the number of looping cases (see Section 5 for detailed results and evaluation). 
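A minimal OCaml sketch of this depth measure might look as follows (a reconstruction for illustration, not the original code); it walks a term with HOL Light's standard destructors and reports the deepest position at which a variable occurs, so that a clause can be rejected once the measure exceeds the user-defined limit.

(* Sketch of the "maximum depth" measure: the depth of the deepest variable
   occurrence in a term, or -1 if the term contains no variables at all. *)
let rec var_depth tm =
  if is_var tm then 0
  else if is_comb tm then
    let f, x = dest_comb tm in
    let d = max (var_depth f) (var_depth x) in
    if d < 0 then -1 else d + 1
  else if is_abs tm then
    let d = var_depth (snd (dest_abs tm)) in
    if d < 0 then -1 else d + 1
  else -1  (* a constant: no variables below this node *)

(* Reject a clause whose variables sit deeper than the chosen limit
   (Section 5.1 reports that a limit of 12 was used in the evaluation). *)
let exceeds_max_depth limit tm = var_depth tm > limit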
Unfortunately, it is possible for the heuristic to interrupt proofs that might eventually succeed. However, given our interactive environment, early termination was favoured over lengthy proof times. Moreover, despite this heuristic, not all loops were eliminated. In some cases, for example, the terms can expand very slowly (more than 10 minutes to reach the maximum depth limit in some of our evaluation tests). In other cases, one of the terms kept being split into multiple clauses after being rewritten. Investigating more heuristics to tackle these cases or more sophisticated techniques used in similar automated systems (such as the incremental depth search used in HOL Light's MESON tactic) is part of future work (see Section 7). Integrating HOL Light tools In this section, we discuss the integration of two HOL Light tools into the waterfall and some of the resulting issues. In particular, we tried to exploit HOL Light's tautology prover and simplifier within our system. The Tautology heuristic HOL Light includes an automated procedure that can be used to prove tautologies. It can successfully deal with terms such as p ∨ ¬p, p = p and (p ⇒ q) ∨ (q ⇒ p) where p and q are atomic formulas that are not necessarily propositional. We exploited this function to build a tautology heuristic for the waterfall. The heuristic is placed at the very top of the waterfall for maximum efficiency: it does not alter the clause in any way, it only proves the clause immediately if it can. The HOL Light Simplifier HOL Light's simplifier is a powerful and efficient tool, which is the workhorse for many proofs. In an attempt to exploit the efficiency of this simplifier in our system, we devised a version of the system in which we replaced the simplify heuristic with one of HOL Light's conversions, the so-called REWRITE_CONV. The new simplify heuristic works in a similar way to the original one (see Section 3.3.3). One of the major differences, though, is that the original simplifier only rewrote recursive functions based on their definitions. The new heuristic is allowed to apply all rules (i.e. both derived rewrite rules and definitions) at all times. Such behaviour, we realised, could be both an advantage, as it might provide more powerful simplification in some cases, and a disadvantage, because of the increased likelihood of looping. The Setify heuristic The use of HOL Light's simplifier required a new, straightforward heuristic to deal with an issue that the original Boyer-Moore simplifier dealt with as one of its steps. In some cases, after several proof steps, a clause may end up including the same subterm as a disjunct more than once. The original Boyer-Moore simplifier would then remove such duplications, keeping only one copy of any disjunct in a clause. To achieve the same behaviour, we therefore created a heuristic to simplify such clauses by eliminating duplicate disjuncts. So, for a clause such as A ∨ B ∨ A, the second A term is eliminated, giving A ∨ B as a result. The heuristic also helped prevent some loops where a clause would endlessly expand with multiple identical disjuncts. Next, our attempts focused on the improvement of the original Boyer-Moore generalization heuristics using some state-of-the-art techniques described in Section 4.4. Finally, we made an attempt at a simple counterexample checker, described in Section 4.4.3, which allowed us to avoid several overgeneralizations. 
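Returning briefly to the Setify heuristic just described, the following OCaml sketch illustrates the term-level operation (a reconstruction for illustration, not the original code); it relies on HOL Light's disjuncts and list_mk_disj, and the real heuristic must of course also justify the step by proving the original clause from the simplified one, for instance via a propositional argument such as the one TAUT provides.

(* Sketch of the Setify heuristic at the term level: remove duplicate
   disjuncts, e.g. turn A \/ B \/ A into A \/ B, keeping the first
   occurrence of each disjunct and failing when there is nothing to do. *)
let setify_clause tm =
  let ds = disjuncts tm in
  let ds' =
    List.fold_left
      (fun acc d -> if List.mem d acc then acc else acc @ [d]) [] ds in
  if List.length ds' = List.length ds
  then failwith "setify_clause: no duplicate disjuncts"
  else list_mk_disj ds'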
Incorporating state-of-the-art Generalization techniques Recent research on formula generalization has provided better heuristics and more filters to avoid over-generalizations. In particular, we studied Aderhold's approach, which is summarized in a recent paper [1]. In this work, a generalization heuristic and a tactic are created for a verification system called VeriFun [17] and are shown to be effective at dealing with a substantial range and number of inductive properties. Aderhold's research builds on well-regarded generalization mechanisms, such as the ones used in the Boyer-Moore system, as well as novel ideas. Aderhold's generalization heuristic contains five subprocesses, each handling a different aspect of generalization, and not all of them are applicable to our system. We chose to implement the generalization of common subterms as an alternative to the generalization heuristic in the waterfall. We also implemented the algorithm for generalizing variables apart. We note that, in any comparison between Aderhold's algorithms and the original waterfall algorithms within the Boyer-Moore model, one should bear in mind that Aderhold's techniques are applied in a system with destructive-style rather than constructor-style induction. This leads to different handling of accessors, constructors, and the induction hypothesis (which in this case is a more general term). Generalizing Common Subterms The algorithm for the generalization of common subterms proposed by Aderhold is quite similar to the Boyer-Moore generalization of minimal common subterms, but with important differences. It is split into three steps: identifying generalizable subterms, generating proposals, and evaluating them. The only difference when identifying generalizable subterms is that, in addition to the criteria in the Boyer-Moore generalization (i.e. being neither a variable, nor an explicit value template, nor an application of accessor functions), generalizable terms should not contain constructors. For instance, let us consider (m × n + n) + SUC(m) = (m × n + m) + SUC(n). This clause occurs during the proof of the commutativity property of multiplication. The generalizable subterms are: m × n, m × n + n and m × n + m. Notice that the newly extended generalization criteria discard (m × n + n) + SUC(m) and (m × n + m) + SUC(n) as potentially generalizable subterms because they contain the constructor SUC, whereas in the Boyer-Moore generalization they would be accepted. The second step generates proposals, which are sets of generalizable subterms that occur in a recursive position of a function or form one of the sides of an equation, e.g. the subterm a + b in the equation a + b = (a + b) + 0. Proposals are filtered and only the "suitable" ones are kept, following an idea similar to the one in the Boyer-Moore system but with a different algorithm. A proposal is suitable for a formula ϕ if the proposed terms are generalizable subterms of ϕ and each occurs at least twice in ϕ. Aderhold also mentions a special check for equations, where the proposed term must also occur on both sides of the equation (the equation criterion), or at least twice on one side. Having established that, each subterm of the formula is examined recursively for suitable proposals. Thus, in our first example, there is only a single suitable proposal, the singleton containing m × n, which is proposed twice. Two further differences in Aderhold's algorithm compared to the Boyer-Moore heuristic can be found in its third step. 
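Before turning to that third step, the sketch below illustrates the spirit of the first step, collecting candidate subterms that are neither variables, nor explicit values, nor accessor applications, and that contain no constructor; the three predicates passed in stand for look-ups in the shell environment and are hypothetical names used only for this illustration, not functions of HOL Light or of the original system.

(* Sketch of the subterm-collection step.  is_constructor, is_accessor and
   is_explicit_value are placeholder predicates assumed to query the shell
   environment; the term traversal uses HOL Light's standard destructors. *)
let rec subterms tm =
  if is_comb tm then
    let f, x = dest_comb tm in tm :: (subterms f @ subterms x)
  else if is_abs tm then tm :: subterms (snd (dest_abs tm))
  else [tm]

let generalizable_subterms is_constructor is_accessor is_explicit_value clause =
  let ok tm =
    not (is_var tm) &&
    not (is_explicit_value tm) &&
    not (is_accessor (fst (strip_comb tm))) &&
    (* Aderhold's additional restriction: no constructor anywhere inside *)
    not (List.exists is_constructor (subterms tm)) in
  List.filter ok (subterms clause)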
The Boyer-Moore system picks all of the minimal common subterms to generalize simultaneously (see Section 3.3.5). Aderhold's algorithm only applies the single best proposal, after ordering these with respect to a number of criteria. The first criterion is the induction test: the induction scheme is used to test if an induction is possible on the generalized variable. A successful induction test shows that the proposal is much more likely to be correct. Other criteria include how often the proposal was made in the generation step and how many occurrences of the terms of the proposal can be found in the formula. After sorting the proposals, the first one is picked and applied. In VeriFun, the disprover is also used at this point to filter out over-generalizations. In our example, the generalized lemma produced by the single proposal m × n is (n' + n) + SUC(m) = (n' + m) + SUC(n). It passes the induction test because an induction is possible on n'. It is worth noting that, without some special machinery, the recursive nature of the waterfall model would defeat the purpose of only applying the best proposal. This is because the successfully generalized clause will be poured on top of the waterfall again and go through the same generalization heuristic which will essentially generalize the second proposal. Eventually all proposals will be generalized and not just the best one. We took special care to prevent this behaviour by using a technique similar to the warehouse filter (see Section 4.2.2), storing the generalized terms and preventing a second generalization. Generalizing Variables Apart As indicated by one of Aderhold's VeriFun examples [1], it is often necessary to generalize apart the occurrences of x in an expression such as x + (x + x) = (x + x) + x. In his algorithm, it is deemed necessary to rename the occurrences of the variable in the recursive position of the functions involved. First, a heuristic filter is applied to detect the need for generalizing apart. The filter searches for a function f and a variable v that match the following criteria: f should appear twice in the clause and v should be an argument in the recursive position in the first appearance and an argument in a non-recursive position in the second appearance. If such a function and variable are found, the generalization of that variable is proposed. Two functions are used to ensure the variable is generalized in the correct positions of the clause. The variable is replaced in those positions by a fresh variable v'. A term t is said to have been generalized apart successfully if the whole term t is replaced by v' (i.e. t = v') or at least one but not all occurrences of v in t were replaced by v'. For equations, it is required that both sides are generalized apart successfully. Once the generalization is applied, a check is used to verify if this is a useful generalization. A useful generalization is one which was generalized successfully and in which all the equations were generalized apart successfully as well. A disprover is also used to rule out over-generalizations. Following this algorithm, our example is generalized to n + (x + x) = (x + x) + n. If the first generalization proposal is not a useful generalization, another attempt is made. For all functions g, other than f, that appear in the clause and have the same recursive argument position as f, the variable is generalized apart in all such positions. This generalization is also checked for usefulness. 
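Before the worked examples that follow, here is a rough OCaml sketch of the detection filter just described (a reconstruction under stated assumptions, not the original code): recursive_position is a placeholder for the look-up of the single recursive argument position our system records for a defined function, and apps is assumed to list the applications of defined functions in the clause as pairs of a function and its argument list.

(* Sketch: propose (f, v) when the variable v occurs in f's recursive
   argument position in one application of f and in a non-recursive
   position in another.  recursive_position and apps are placeholders. *)
let generalize_apart_candidates recursive_position apps =
  let occurrences =
    List.concat
      (List.map
         (fun (f, args) ->
            let indexed = List.mapi (fun i a -> (i, a)) args in
            List.map (fun (i, a) -> (f, a, i = recursive_position f))
              (List.filter (fun (_, a) -> is_var a) indexed))
         apps) in
  List.filter
    (fun (f, v, in_rec_pos) ->
       in_rec_pos &&
       List.exists
         (fun (g, w, in_rec_pos') -> g = f && w = v && not in_rec_pos')
         occurrences)
    occurrences

When the first proposal built from such a pair does not give a useful generalization, the fallback over the other functions g described above is tried instead.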
This part of the algorithm accomplishes the generalization of an expression such as LENGTH (APPEND x x) = LENGTH x + LENGTH x to LENGTH (APPEND x' x) = LENGTH x' + LENGTH x. At this point we should emphasize an important aspect of Aderhold's original algorithm. In his case, the algorithm allows for multiple recursive argument positions in functions. In fact a recursive position powerset is defined for each function, allowing for multiple definitions of the function with a different set of recursive argument positions for each definition. In our system, only functions with one recursive argument are allowed and hence only one position needs to be stored for each function. This simplifies the algorithm but the capability of the system to deal with different function definitions remains limited. Dealing with over-generalizations Careful observation of several proof traces where the new algorithm did not contribute to a successful proof, combined with the fact that Aderhold uses a disprover to filter out over-generalizations in several stages of the algorithms, led to the implementation of a simple counterexample checker. For each generalized clause, a random example is generated for every free variable in it. Then simplification is used in an attempt to evaluate the grounded clause. The definitions of functions, constructors and accessors, as well as some particular rewrite rules (such as SUC 0 = 1 in order to deal with the HOL Light numeric 1 that appears in some definitions), are given to the simplifier in order to accomplish this task. Additionally, we use the HOL Light conversion NUM_REDUCE_CONV to evaluate numeric expressions faster. This allows increased efficiency when handling terms that would otherwise take a long time to evaluate (such as terms containing exponential expressions). If the simplifier reduces the term to False, it disproves the clause and the generalization is rejected. Otherwise, if the term is reduced to True, the generalization is allowed to proceed. It is also worth noting that, in some cases, there may not be enough rewrite rules to fully reduce the grounded term to either True or False. We have chosen the safe option, i.e. to consider the corresponding clauses unsafe for generalization, and thus reject them. As a simple illustration of our disprover in action, consider the clause m + n = n + m. The generalization apart algorithm attempts to generalize this to m + n = n' + m. The counterexample checker, however, can produce a counterexample by instantiating m, n, and n' to SUC 0, 0, and SUC(SUC(SUC 0)) respectively. Then our simplifier is able to reduce the grounded clause SUC 0 + 0 = SUC(SUC(SUC 0)) + SUC 0 to False and thus we can reject the overgeneralization. In order to generate the random examples, we use the constructors defined in the Shell for the type. A "maximum depth" parameter is used to limit the size of the example. The constructors are applied randomly, with a gradually increasing probability of using a bottom object. The same procedure is called for each constructor parameter. In the simple example of natural numbers, we have the option of using either SUC or the bottom object 0. In this case, 0 has an increasing probability of being used and thus terminating the procedure. Given that the counterexamples are generated randomly, often one random instance is insufficient to disprove a clause. 
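The following OCaml sketch shows what such a random instance generator can look like for the natural numbers (a reconstruction for illustration, not the original code): the constructor SUC is applied with a probability of stopping that grows towards the bottom object 0, capped by the maximum depth parameter; suc_tm and zero_tm are expected to be the shell's constructor and bottom object as HOL Light terms, e.g. `SUC` and `0`.

(* Sketch of the random example generator for the naturals: build a term
   SUC (SUC (... 0)) whose height is random but bounded by max_depth, with
   the probability of choosing the bottom object growing at each level. *)
let random_nat_instance suc_tm zero_tm max_depth =
  let rec go depth p_stop =
    if depth >= max_depth || Random.float 1.0 < p_stop then zero_tm
    else mk_comb (suc_tm, go (depth + 1) (p_stop +. 0.1)) in
  go 0 0.1

For instance, random_nat_instance `SUC` `0` 8 might return the term SUC(SUC 0). Because each draw is independent, a single instance can easily miss the few assignments that falsify a clause, which is why several checks are run per generalization, as discussed next.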
In particular, for formulae that are falsified by few variable instantiations (such as m × n < m × SUC n, which is false only if m = 0) the counterexample checker will most likely fail to disprove them. Therefore, we apply multiple counterexample checks so as to achieve a more thorough (yet still incomplete) check. The number of such checks can be set by the user while taking into consideration the tradeoff between efficiency and thoroughness. Usage of this counterexample checker altered the evaluation results significantly. The number of disproved clauses in every proof was added as a measure in our evaluation. The details of these results, along with all the others, are discussed in the next section. Evaluation Our primary aim for the proper evaluation of such a system is to investigate its theorem proving potential as an automated tool within HOL Light. We also aim to evaluate the effect of our additions, including the loop elimination methods and new generalization techniques, on the performance of the system. There are a considerable number of parameters to take into consideration and various measures, so an exhaustive evaluation of all scenarios is not possible. We describe the setup of our evaluation in Section 5.1 and we discuss the results in Section 5.2. Setup Our evaluation involves inputting known theorems from existing theories into the system as conjectures and having it attempt to prove them fully automatically. In particular, we chose a total of 145 theorems from two test sets (see Appendix B). The first 120 form the basis of Peano arithmetic in HOL Light. The rest of the theorems were picked among the 50 examples from both Peano arithmetic and the list theory used for an evaluation of Rippling [2]. The same test set is used by Aderhold for the evaluation of his generalization algorithms in VeriFun [17]. It is worth noting that we were unable to test the whole set of 50 theorems, as some of them used functions that are not primitive recursive and thus cannot be defined within our system. The definitions of the functions used in our test sets that were added to the system are shown in Appendix A. Deciding which parameters to test is important for the proper evaluation of the system. We first decided to consider six instances of the system. The first instance, named "BOYER MOORE" (BM), is the pure reconstruction of Boulton's implementation with the addition of the counterexample checker. The second instance, named "BOYER MOORE EXT" (BME), is the extension of the original implementation with all the additions we described in Section 4 except for HOL Light's simplifier (see Section 4.3.2) and the improved Generalization heuristic of Section 4.4. We replaced the Boyer-Moore rewrite engine with the HOL Light simplifier (see Section 4.3.2) to form the third instance of the system, named "BOYER MOORE REWRITE" (BMR). The improved Generalization heuristic is tested in the fourth instance, named "BOYER MOORE GEN" (BMG), where we substitute it for the generalization method in "BOYER MOORE REWRITE". After some result analysis, we tested a fifth instance of the system, denoted by (BMG'), which is the same as "BOYER MOORE GEN" except that it lacks the equation criterion (see Section 4.4.1 for a description of the criterion and Section 5.2.2 for the reasoning behind its removal). Finally, having completed a detailed evaluation of our test sets, we combined the elements that were giving the best results into a final instance of the system called "BOYER MOORE FINAL" (BMF). 
Table 2 shows the elements used in each of the six instances.

                                 BM   BME   BMR   BMG   BMG'   BMF
Basic Heuristics                  x     x     x     x     x      x
Counterexample checker            x     x     x     x     x      x
Boyer-Moore simplifier            x     x
HOL Light simplifier                          x     x     x      x
Warehouse filter                        x     x     x     x      x
Maximum depth heuristic                 x     x     x     x      x
Tautology heuristic                     x     x     x     x      x
Setify heuristic                        x     x     x     x      x
Boyer-Moore generalization        x     x     x                  x
Aderhold's generalization                           x     x
Variables apart generalization                      x     x      x
Equation criterion                                  x

Table 2 The six evaluated system instances

Another crucial aspect involved finding an appropriate setup for the numerous parameters that affect the system performance, given the fully automatic evaluation process. We decided to test the system with a minimum number of rewrite rules (see Appendix A) so as to have the least possible user intervention. The rules that were added are mainly properties of the involved datatypes and are not provable in the Boyer-Moore system (since they only involve constructors and accessors, not functions). Most of these properties are included in the datatype's shell. We also included a theorem involving the abbreviation of SUC 0 as 1. Having the system automatically add rewrite rules depending on their potential usefulness in future proofs is an outstanding issue. Moreover, after some experimentation we decided that 5 counterexample checks per generalization attempt were sufficient to provide some useful results without a major impact on efficiency. Finally, after some observation of successful proofs, we chose a value of 12 for the maximum depth heuristic (see Section 4.2.2). Clauses involved in successful proofs were never nearly as complex as those cut off by a maximum depth of 12. There are also various measures that one could record to extract useful conclusions. We chose to log the result and the following four measures:
1. The time it takes for the system to prove a theorem (or to fail), as this is quite essential for this kind of system.
2. The number of proof steps, as measured by the number of calls to a waterfall plus the number of inductions. The resulting number is proportional to the number of intermediate clauses produced and, upon success, proved by the system.
3. The number of intermediate lemmas produced by generalization. Generalization is an unsafe operation, therefore having fewer generalizations is better for the system.
4. The number of over-generalizations detected by the counterexample checker. This measure is used for the evaluation of the generalization techniques. Fewer over-generalizations indicate a better heuristic method.
In addition to the above, we also examined the output of the generalization heuristic in relative detail as part of the evaluation. The clauses produced by generalization are separate lemmas speculated by the system (as opposed to the resulting clauses of the other heuristics, which are simplifications or rewrites of the initial clause). Since these speculated lemmas often express interesting properties or theorems, they are investigated and evaluated separately. Our evaluation setup was implemented with the help of a wrapper function that recorded and gave the various measures as output. The data was collected in a spreadsheet and examined. At that point, we picked the most interesting or unexpected cases and examined them more closely in an attempt to analyse and explain them. Given the size of the test set and the numerous parameters that can be taken into consideration, one can extract a multitude of useful conclusions and ideas for the improvement of the system. 
Some of these are described in the following section. Results We begin with an evaluation of the Boyer-Moore system by discussing some general results in Section 5.2.1. This is followed in Section 5.2.2 by an analysis of the results obtained by having the improved generalization techniques in BMG when compared to BMR. We conclude our evaluation with a brief description of the results from BMF in Section 5.2.3. General Results The first results from the tests showed that our reconstruction of Boulton's code worked as intended. We compared the results of BM with those originally given by Boulton [3] and they matched. Moreover, based on our results we believe that the system can be a useful automated tactic for inductive proofs in HOL Light. The system was able to prove around 43% of the 120 theorems in the first set and 33% of the 25 theorems in the second test set automatically, i.e. without any user interaction. An excerpt from the evaluation results containing successful proofs of BM is shown in Table 3. As another metric, if we look at a number of current HOL Light proofs (see Figure 4), we see that the same theorems are now proven automatically by BM or BMG in half a second or less. Therefore, we believe that it may prove useful as an automatic tactic in the hands of HOL Light users.
Table 3 The evaluation results (time, proof steps, inductions, and generalizations) for some successful proofs of BM and BMG. Set "H" corresponds to the HOL Light test set, whereas set "R" corresponds to the Rippling test set.
Looping examples One of the most noticeable problems with the Boyer-Moore system in our initial evaluation runs was the sheer number of non-terminating examples. The BM instance of the system looped for more than a third of the cases that were tried. Some examples of theorems whose proofs loop in BM are shown in Table 4. This led to the implementation of the warehouse filter and the maximum depth heuristic (see Section 4.2.2) and the creation of the BME version of the system. The two procedures effectively reduced the number of looping examples in our two sets to 1%. In particular, the maximum depth heuristic prevented 4 times as many loops as the warehouse filter. Although we are aware that the maximum depth heuristic may block proofs that would eventually succeed, we were unable to find such examples within our test sets. Failed proofs After careful investigation of some of the failed proofs, it was clear that the performance of the system could be greatly enhanced by properly managing the rewrite rule set manually. We observed that often a group of theorems could not be straightforwardly proven because some simple lemmas were missing. For example, a number of theorems involving EVEN and ODD, shown in Table 4, are provable by the system if we can demonstrate the theorem ¬ODD n ⇔ EVEN n separately (e.g. interactively without the waterfall) and add it to the rewrite rule set.
Table 4 Examples of theorems that cause BM to loop, but can be proven automatically once ¬ODD n ⇔ EVEN n is added as a rewrite rule.
¬EVEN n ⇔ ODD n
EVEN n ∨ ODD n
¬(EVEN n ∧ ODD n)
EVEN (m + n) ⇔ EVEN m ⇔ EVEN n
EVEN (m * n) ⇔ EVEN m ∨ EVEN n
EVEN (m EXP n) ⇔ EVEN m ∧ ¬(n = 0)
ODD (m + n) ⇔ ¬(ODD m ⇔ ODD n)
This strengthens our view of the Boyer-Moore system as an automated procedure within an interactive theorem prover, where the user can manage the rewrite rule set properly so as to achieve optimal results. 
Efficiency Having timed the evaluation, we observed that the average proof time for successful proofs was under half a second for all five system instances and both test sets. Failed proofs (including those blocked by the loop elimination methods) took an average of 5 seconds, with a maximum of 25 seconds for BMR and 1 minute 15 seconds for BMG. If we consider 30 seconds as an acceptable time for an average user to expect a result in an interactive setting, these times are tolerable and provide enough room for more optimized cutoff heuristics, especially given the fact that successful proofs take very little time to complete. We also noted that the HOL Light enhancements in BME, compared to the original BM, led to an average of 7% fewer proof steps for successful proofs. For example, the lemma m < SUC n ⇔ m ≤ n is proven in 45 steps in BME as opposed to 54 in BM. Comparing rewrite engines The comparison between the results of BME and BMR is essentially a comparison between the original rewrite algorithm by Boulton and the use of the HOL Light simplifier. BMR proved the same number of conjectures as BME. However, some small differences were observed in the efficiency of the two instances of the system. For BMR, there was a 6% drop in the average number of proof steps in successful proofs, as well as small drops in the number of inductions and generalizations. Even though the differences were small, they are still noteworthy as they are expected to scale up in larger proofs. The drop in the average number of inductions for successful proofs is an indication that some of the proofs required fewer inductions, which, in turn, is a considerable advantage. Overall, using the HOL Light simplifier did not decrease the proof power of the system for the given test sets, but did offer a minor boost in the efficiency of the system. Lemma speculation The last important point, which is indicative of the power of the system, is the set of generalized terms. We filtered the speculated lemmas in successful proofs from BM and BMG. Examining the list of lemmas, we can discover conjectures expressing interesting properties of our theory that are automatically speculated and proved. For natural numbers, these properties include commutativity of addition (x + y = y + x), associativity of multiplication (m × (n × p) = (m × n) × p) and distributivity of multiplication over addition (n × p + m × p = (n + m) × p), amongst many others. A few trivial lemmas are speculated, especially in BM, because of the lack of the tautology checker, which in BMG solves them before generalization is reached. To sum up, we observed that the system is capable of speculating interesting lemmas that may prove useful additions to the theory. This leads us to propose a filtering process at the end of a successful proof, which could heuristically select the most "interesting" theorems that were created by generalizations and make them available in the theory for the user to use in other proofs. Evaluating Generalization techniques within the Boyer-Moore system Having established the potential of the system as an automated proof procedure, we investigated its usefulness when augmented with state-of-the-art generalization techniques such as the ones described in Section 4.4. Results showed that on average 36% of the successful proofs required one or more generalizations, so the importance and power of the generalization heuristic seems quite apparent. 
Unfortunately, the initial results with the new heuristic were not as expected, especially for the first test set. The success rate of BMG initially dropped significantly compared to BMR (29% of the set proven compared to BMR's 44%). Careful observation and result analysis was required to investigate the reasons for this somewhat unexpected decrease. Rejecting over-generalizations One of the immediately apparent problems of the new generalization heuristic was over-generalization that often led to non-theorems. This was mainly caused by generalizing variables apart. Noticing how Aderhold specifically mentions the necessity of a disprover, we implemented a simple counterexample checker (as described in Section 4.4.3). The number of disproven generalizations then demonstrated some interesting facts. Primarily, BME and BMR made no overgeneralizations in any of the successful proofs. BMG's performance, however, increased to 37% (compared to 29% without the counterexample checker) and the measure showed an average of 0.7 overgeneralizations per successful proof. This made it clear that the counterexample checker is essential for the new generalization heuristic to work properly, since it often overgeneralizes. Examination of particular examples showed that in the vast majority of cases the problems were caused by the generalization of variables apart. For example, n ≤ n and n ≤ n × n were both generalized to the non-theorems n ≤ n' and n ≤ n' × n'. The counterexample checker is able to prevent both these overgeneralizations. The Equation criterion We were able to discover two particular cases where generalizing common terms should have been applied but was filtered out by our new generalization heuristic. In one of the cases, for instance, the clause (m × n = 0) ⇔ (m = 0) ∨ (n = 0) is transformed into n × n = 0 ∨ ¬(n × n + n = 0) ∨ (n = 0) after a few proof steps using the waterfall heuristics. At that point the Boyer-Moore generalization heuristic generalizes n × n to n', giving n' = 0 ∨ ¬(n' + n = 0) ∨ (n = 0), which is then easily proved. However, the new heuristic based on Aderhold's approach does not allow this generalization. This is because n × n appears only once on the left-hand side of the equation. According to the criterion for equations (see Section 4.4.1), this generalization is ruled out and the system is unable to prove the original clause. However, in the given example this is a rational and useful generalization, so empirically there is no reason why it should be ruled out. Having observed this issue, we decided to rerun the evaluation for BMG while ignoring the equation criterion. BMG' had the same results as BMG with the addition of the proofs for the two problematic cases. There were no cases in our test sets where the lack of the equation criterion blocked a proof or led to an overgeneralization.
Table 5 Examples of theorems that demonstrate the differences in the results for BMR, BMG and BMG'. The comments refer to the main reasons behind these differences.
Comparing the heuristics Even though BMR and BMG had similar results, rather surprisingly BMG appeared to be slightly less capable than BMR. We examined the particular cases where the Boyer-Moore generalization versus Aderhold's generalization produced different results. There were a number of theorems that BMR was able to solve and BMG failed to prove. Examining these examples in detail showed that Aderhold's algorithm for generalizing common subterms rejected some crucial generalizations. 
As an example, it was unable to generalize PRE(SUC (m + n) − m) = (m + n) − m, which the Boyer-Moore generalization heuristic generalizes to PRE(SUC n − m) = n − m and is then able to prove. Such behaviour occurs because the procedure that generates the proposals for generalization in Aderhold's approach does not investigate deeper into constructors or accessors recursively, but only does so for functions and equations. Notably, there were a few cases where the new heuristic overgeneralized but the counterexample checker was unable to detect it. Finally, a few examples, mostly in the second test set, were proven by BMG but not by BMR. Further investigation showed this success can be attributed to the generalization of variables apart. For example, the statement DBL x = x + x is rewritten, using the definition of DBL: DBL (SUC x) = SUC (SUC (DBL x)), to SUC (n + n) = n + SUC n. BMR attempts to prove the latter by continuously applying induction until stopped by the maximum depth heuristic. In contrast, BMG proves this by generalizing n apart, resulting in SUC (n' + n) = n' + SUC n, which is then proven with a single induction on n'. Some more examples of theorems that demonstrate the differences between BMR, BMG and BMG' are given in Table 5. Combining the best components in BMF Having completed a detailed analysis of our results, we were able to identify the components that led to the most successful proofs with the fewest possible steps. Since the results of BMR were slightly improved compared to BM, all the added components, including the loop detection heuristics and the HOL Light tools, were kept in BMF. Moreover, we concluded that the best choice for a generalization heuristic in our system is a combination of the original Boyer-Moore generalization heuristic with Aderhold's generalization of variables apart. The evaluation of BMF using our two test sets showed some improvement over the original version (BM). In particular, BMF managed to prove 47% of the lemmas in the first set and 37% of the second set (as opposed to 43% and 33% respectively for BM), and these are the best results we were able to achieve so far with the given evaluation setup. Notably, BMF used fewer proof steps for successful proofs on average than BM (an 11% reduction) and only slightly more inductions and generalizations (1-3% on average). The complete evaluation results for BMF can be found in Appendix B. Future Work The encouraging results, produced in a relatively limited timespan, provide multiple pointers for future work. The evaluation of the system was a time-consuming process which, however, produced interesting results. We believe there is a lot of room for further evaluation of the system. On the one hand, one could expand the evaluation set using more theorems from different domains, e.g. recursively-defined trees. On the other hand, one could delve deeper into the particular examples where the system failed or produced unexpected results and draw even more conclusions and ideas for improvement of the system. There is also scope for improvement of the loop elimination heuristics. We have already considered possible measures, such as the number of clauses that remain to be proven in the pool of the waterfall and the number of inductions applied. We have also considered an incremental depth approach similar to the one used in the MESON tactic. 
As far as the generalization heuristic is concerned, immediate future work would involve replacing the irrelevance heuristic by its counterpart from Aderhold's approach, known as "inverse weakening". Further experimentation to find the optimal combination of criteria for generalizing common subterms within our system is also among our future plans. Conclusion In this paper, we discussed the reconstruction and extension of the Boyer-Moore waterfall model for automated inductive proofs in HOL Light. An extensive and detailed evaluation of our implementation led to a plethora of useful and interesting conclusions about the relevance of the approach. Of those, the most important was the conclusion that the model, despite being over 30 years old, can improve the support for automated inductive proofs within HOL Light's interactive setting. Proofs, such as those in Fig. 4, were fully automated with a single use of the Boyer-Moore tactic. Even though we were only able to prove 47% of the evaluation set, if we keep in mind the simplicity and fully automated setup of the evaluation, this result is promising. In an interactive setup, the user will be able to manipulate various system parameters (see Section 3.4) so as to achieve optimal results. Often, adding a simple rewrite rule may be enough to unblock a proof. Moreover, the user will be able to stop a looping procedure manually even if our loop elimination heuristics fail. This is a common step during interactive theorem proving and is often used with automated tactics such as HOL Light's model elimination procedure MESON. B Evaluation results for BMF Evaluation results for the final version (BMF) of our system. These include whether the system was successful or not (false* indicates failure by loop detection), the time in seconds (Time), the number of proof steps (Steps), inductions (Inds), generalizations (Gens), and detected overgeneralizations (Over).
10,756
1808.03715
2963408210
Music generation has generally been focused on either creating scores or interpreting them. We discuss differences between these two problems and propose that, in fact, it may be valuable to work in the space of direct performance generation: jointly predicting the notes and also their expressive timing and dynamics. We consider the significance and qualities of the dataset needed for this. Having identified both a problem domain and characteristics of an appropriate dataset, we show an LSTM-based recurrent network model that subjectively performs quite well on this task. Critically, we provide generated examples. We also include feedback from professional composers and musicians about some of these examples.
Perhaps it is precisely because music is so often perceived as a profoundly human endeavour that there has also been, in parallel, an ongoing fascination with automating its creation. This fascination long predates notions such as the Turing test (ostensibly for discriminating automation of the most human behaviour), and has spawned a range of efforts: from attempts at the formalization of unambiguously strict rules of composition to incorporation of complete random chance into scores and performances. The use of rules exemplifies the algorithmic (and largely deterministic) approach to music generation, one that is interesting and outside the scope of the current work; for background on this we refer the reader, for example, to the text by Nierhaus @cite_1 . Our present work, on the other hand, lies in a part of the spectrum that incorporates probability and sampling.
{ "abstract": [ "Algorithmic composition composing by means of formalizable methods has a century old tradition not only in occidental music history. This is the first book to provide a detailed overview of prominent procedures of algorithmic composition in a pragmatic way rather than by treating formalizable aspects in single works. In addition to an historic overview, each chapter presents a specific class of algorithm in a compositional context by providing a general introduction to its development and theoretical basis and describes different musical applications. Each chapter outlines the strengths, weaknesses and possible aesthetical implications resulting from the application of the treated approaches. Topics covered are: markov models, generative grammars, transition networks, chaos and self-similarity, genetic algorithms, cellular automata, neural networks and artificial intelligence are covered. The comprehensive bibliography makes this work ideal for the musician and the researcher alike." ], "cite_N": [ "@cite_1" ], "mid": [ "1556624199" ] }
This Time with Feeling: Learning Expressive Musical Performance Preamble/Request
0
1808.03715
2963408210
Music generation has generally been focused on either creating scores or interpreting them. We discuss differences between these two problems and propose that, in fact, it may be valuable to work in the space of direct performance generation: jointly predicting the notes and also their expressive timing and dynamics. We consider the significance and qualities of the dataset needed for this. Having identified both a problem domain and characteristics of an appropriate dataset, we show an LSTM-based recurrent network model that subjectively performs quite well on this task. Critically, we provide generated examples. We also include feedback from professional composers and musicians about some of these examples.
Two centuries later, as the foundations of AI were being set, the notion of automatically understanding (and therefore generating) music was among the earliest applications to capture the imagination of researchers, with papers on computational approaches to perception, interpretation and generation of music by Simon, Longuet-Higgins and others @cite_4 @cite_12 @cite_30 @cite_28 @cite_6 . Since then, many interesting efforts were made @cite_42 @cite_18 @cite_22 @cite_2 @cite_11 @cite_17 , and it is clear that in recent years both interest and progress in score generation has continued to advance, e.g. @cite_38 , Boulanger- @cite_32 , @cite_0 , @cite_41 , @cite_19 , Sturm @cite_5 , to name only a few. @cite_20 provide a survey of generative music models that involve machine learning. @cite_43 provide a comprehensive survey and satisfying taxonomy of music generation systems. McDonald @cite_3 gives an overview highlighting some key examples of such work.
{ "abstract": [ "AbstractThe author recently described elsewhere a computer program which would transcribe classical melodies played on an organ console into the equivalent of standard musical notation. This program was the fruit of a prolonged effort to understand how Western musicians succeed in making sense of music, in discerning rhythm and the tonal relation hip between notes. The interest of the problem arise from the fact that no two performance of the same piece of music are ever identical so that the listener has to discriminate between those variation of timing and pitch which are structurally significant and those which are merely expressive. In order to under land the ability of some musician to reproduce the cores of music that they hear it is necessary to develop a formally precise theory of rhythm and tonality – a theory which is couched in terms reminiscent of Chomskyan linguistics. But it is also necessary to consider how the listener builds in his mind a representation of the rhythm and tonality of perfo...", "Abstract In algorithmic music composition, a simple technique involves selecting notes sequentially according to a transition table that specifies the probability of the next note as a function of the previous context. An extension of this transition-table approach is described, using a recurrent autopredictive connectionist network called CONCERT. CONCERT is trained on a set of pieces with the aim of extracting stylistic regularities. CONCERT can then be used to compose new pieces. A central ingredient of CONCERT is the incorporation of psychologically grounded representations of pitch, duration and harmonic structure. CONCERT was tested on sets of examples artificially generated according to simple rules and was shown to learn the underlying structure, even where other approaches failed. In larger experiments, CONCERT was trained on sets of J. S. Bach pieces and traditional European folk melodies and was then allowed to compose novel melodies. Although the compositions are occasionally pleasant, and are...", "Generating music with long-term structure is one of the main challenges in the field of automatic composition. This article describes MorpheuS, a music generation system. MorpheuS uses state-of-the-art pattern detection techniques to find repeated patterns in a template piece. These patterns are then used to constrain the generation process for a new polyphonic composition. The music generation process is guided by an efficient optimization algorithm, variable neighborhood search, which uses a mathematical model of tonal tension to derive its objective function. The ability to generate music according to a tension profile could be useful in a game or film music context. Pieces generated by MorpheuS have been performed in live concerts.", "This volume presents the most up-to-date collection of neural network models of music and creativity gathered together in one place. Chapters by leaders in the field cover new connectionist models of pitch perception, tonality, musical streaming, sequential and hierarchical melodic structure, composition, harmonization, rhythmic analysis, sound generation, and creative evolution. The collection combines journal papers on connectionist modeling, cognitive science, and music perception with new papers solicited for this volume. It also contains an extensive bibliography of related work. Contributors: Shumeet Baluja, M.I. Bellgard, Michael A. Casey, Garrison W. Cottrell, Peter Desain, Robert O. 
Gjerdingen, Mike Greenhough, Niall Griffith, Stephen Grossberg, Henkjan Honing, Todd Jochem, Bruce F. Katz, John F. Kolen, Edward W. Large, Michael C. Mozer, Michael P.A. Page, Caroline Palmer, Jordan B. Pollack, Dean Pomerleau, Stephen W. Smoliar, Ian Taylor, Peter M. Todd, C.P. Tsang, Gregory M. Werner.", "Automatic music generation systems have gained in popularity and sophistication as advances in cloud computing have enabled large-scale complex computations such as deep models and optimization algorithms on personal devices. Yet, they still face an important challenge, that of long-term structure, which is key to conveying a sense of musical coherence. We present the MorpheuS music generation system designed to tackle this problem. MorpheuS' novel framework has the ability to generate polyphonic pieces with a given tension profile and long- and short-term repeated pattern structures. A mathematical model for tonal tension quantifies the tension profile and state-of-the-art pattern detection algorithms extract repeated patterns in a template piece. An efficient optimization metaheuristic, variable neighborhood search, generates music by assigning pitches that best fit the prescribed tension profile to the template rhythm while hard constraining long-term structure through the detected patterns. This ability to generate affective music with specific tension profile and long-term structure is particularly useful in a game or film music context. Music generated by the MorpheuS system has been performed live in concerts.", "Digital advances have transformed the face of automatic music generation since its beginnings at the dawn of computing. Despite the many breakthroughs, issues such as the musical tasks targeted by different machines and the degree to which they succeed remain open questions. We present a functional taxonomy for music generation systems with reference to existing systems. The taxonomy organizes systems according to the purposes for which they were designed. It also reveals the inter-relatedness amongst the systems. This design-centered approach contrasts with predominant methods-based surveys and facilitates the identification of grand challenges to set the stage for new breakthroughs.", "We consider the problem of extracting essential ingredients of music signals, such as a well-defined global temporal structure in the form of nested periodicities (or meter). We investigate whether we can construct an adaptive signal processing device that learns by example how to generate new instances of a given musical style. Because recurrent neural networks (RNNs) can, in principle, learn the temporal structure of a signal, they are good candidates for such a task. Unfortunately, music composed by standard RNNs often lacks global coherence. The reason for this failure seems to be that RNNs cannot keep track of temporally distant events that indicate global music structure. Long short-term memory (LSTM) has succeeded in similar domains where other RNNs have failed, such as timing and counting and the learning of context sensitive languages. We show that LSTM is also a good mechanism for learning to compose music. We present experimental results showing that LSTM successfully learns a form of blues music and is able to compose novel (and we believe pleasing) melodies in that style. 
Remarkably, once the network has found the relevant structure, it does not drift from it: LSTM is able to play the blues with good timing and proper structure as long as one is willing to listen.", "We apply deep learning methods, specifically long short-term memory (LSTM) networks, to music transcription modelling and composition. We build and train LSTM networks using approximately 23,000 music transcriptions expressed with a high-level vocabulary (ABC notation), and use them to generate new transcriptions. Our practical aim is to create music transcription models useful in particular contexts of music composition. We present results from three perspectives: 1) at the population level, comparing descriptive statistics of the set of training transcriptions and generated transcriptions; 2) at the individual level, examining how a generated transcription reflects the conventions of a music practice in the training transcriptions (Celtic folk); 3) at the application level, using the system for idea generation in music composition. We make our datasets, software and sound examples open and available: this https URL .", "This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis: Objective - What musical content is to be generated? Examples are: melody, polyphony, accompaniment or counterpoint. - For what destination and for what use? To be performed by a human(s) (in the case of a musical score), or by a machine (in the case of an audio file). Representation - What are the concepts to be manipulated? Examples are: waveform, spectrogram, note, chord, meter and beat. - What format is to be used? Examples are: MIDI, piano roll or text. - How will the representation be encoded? Examples are: scalar, one-hot or many-hot. Architecture - What type(s) of deep neural network is (are) to be used? Examples are: feedforward network, recurrent network, autoencoder or generative adversarial networks. Challenge - What are the limitations and open challenges? Examples are: variability, interactivity and creativity. Strategy - How do we model and control the process of generation? Examples are: single-step feedforward, iterative feedforward, sampling or input manipulation. For each dimension, we conduct a comparative analysis of various models and techniques and we propose some tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning based systems for music generation selected from the relevant literature. These systems are described and are used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and some prospects.", "We introduce a method for imposing higher-level structure on generated, polyphonic music. A Convolutional Restricted Boltzmann Machine (C-RBM) as a generative model is combined with gradient des- cent constraint optimisation to provide further control over the genera- tion process. Among other things, this allows for the use of a “template” piece, from which some structural properties can be extracted, and trans- ferred as constraints to the newly generated material. The sampling pro- cess is guided with Simulated Annealing to avoid local optima, and to find solutions that both satisfy the constraints, and are relatively stable with respect to the C-RBM. 
Results show that with this approach it is possible to control the higher-level self-similarity structure, the meter, and the tonal properties of the resulting musical piece, while preserving its local musical coherence.", "As one of our highest expressions of thought and creativity, music has always been a difficult realm to capture, model, and understand. The connectionist paradigm, now beginning to provide insights into many realms of human behavior, offers a new and unified viewpoint from which to investigate the subtleties of musical experience. Music and Connectionism provides a fresh approach to both fields, using the techniques of connectionism and parallel distributed processing to look at a wide range of topics in music research, from pitch perception to chord fingering to composition.The contributors, leading researchers in both music psychology and neural networks, address the challenges and opportunities of musical applications of network models. The result is a current and thorough survey of the field that advances understanding of musical phenomena encompassing perception, cognition, composition, and performance, and in methods for network design and analysis.Peter M. Todd is a doctoral candidate in the PDP Research Group of the Psychology Department at Stanford University. Gareth Loy is an award-winning composer, a lecturer in the Music Department of the University of California, San Diego, and a member of the technical staff of Frox Inc.Contributors. Jamshed J. Bharucha. Peter Desain. Mark Dolson. Robert Gjerclingen. Henkjan Honing. B. Keith Jenkins. Jacqueline Jons. Douglas H. Keefe. Tuevo Kohonen. Bernice Laden. Pauli Laine. Otto Laske. Marc Leman. J. P. Lewis. Christoph Lischka. D. Gareth Loy. Ben Miller. Michael Mozer. Samir I. Sayegh. Hajime Sano. Todd Soukup. Don Scarborough. Kalev Tiits. Peter M. Todd. Kari Torkkola.", "", "HARMONET, a system employing connectionist networks for music processing, is presented. After being trained on some dozen Bach chorales using error backpropagation, the system is capable of producing four-part chorales in the style of J.S. Bach, given a one-part melody. Our system solves a musical real-world problem on a performance level appropriate for musical practice. HARMONET's power is based on (a) a new coding scheme capturing musically relevant information and (b) the integration of backpropagation and symbolic algorithms in a hierarchical system, combining the advantages of both.", "", "We investigate the problem of modeling symbolic sequences of polyphonic music in a completely general piano-roll representation. We introduce a probabilistic model based on distribution estimators conditioned on a recurrent neural network that is able to discover temporal dependencies in high-dimensional sequences. Our approach outperforms many traditional models of polyphonic music on a variety of realistic datasets. We show how our musical language model can serve as a symbolic prior to improve the accuracy of polyphonic transcription.", "", "", "", "", "We propose a system, the Continuator, that bridges the gap between two classes of traditionally incompatible musical systems: (1) interactive musical systems, limited in their ability to generate stylistically consistent material, and (2) music imitation systems, which are fundamentally not interactive. Our purpose is to allow musicians to extend their technical ability with stylistically consistent, automatically learnt material. 
This goal requires the ability for the system to build operational representations of musical styles in a real time context. Our approach is based on a Markov model of musical styles augmented to account for musical issues such as management of rhythm, beat, harmony, and imprecision. The resulting system is able to learn and generate music in any style, either in standalone mode, as continuations of musician’s input, or as interactive improvisation back up. Lastly, the very design of the system makes possible new modes of musical collaborative playing. We describe the architectu..." ], "cite_N": [ "@cite_30", "@cite_22", "@cite_41", "@cite_42", "@cite_3", "@cite_43", "@cite_2", "@cite_5", "@cite_20", "@cite_38", "@cite_18", "@cite_4", "@cite_17", "@cite_28", "@cite_32", "@cite_6", "@cite_19", "@cite_12", "@cite_0", "@cite_11" ], "mid": [ "2120408038", "2067516917", "2559726422", "1501340791", "2744457411", "2758804652", "2137619888", "2343635552", "2752134738", "2579406683", "2078265833", "", "2118730391", "2802416084", "1819710477", "", "", "2417853901", "2604567995", "2169264582" ] }
This Time with Feeling: Learning Expressive Musical Performance
0
1808.03715
2963408210
Music generation has generally been focused on either creating scores or interpreting them. We discuss differences between these two problems and propose that, in fact, it may be valuable to work in the space of direct performance generation: jointly predicting the notes and also their expressive timing and dynamics. We consider the significance and qualities of the dataset needed for this. Having identified both a problem domain and characteristics of an appropriate dataset, we show an LSTM-based recurrent network model that subjectively performs quite well on this task. Critically, we provide generated examples. We also include feedback from professional composers and musicians about some of these examples.
The authors of @cite_15 observe that many previous performance rendering systems ``often consist of many heuristic rules and tend to be complex. It makes [it] difficult to generate and select the useful rules, or perform the optimization of parameters in the rules.'' They therefore present a method based on Gaussian Processes in which some parameters can be learned. In their ostensibly simpler system, ``for each single note, three outputs and corresponding thirteen input features are defined, and three functions each of which returns one of three outputs and receive the thirteen input features, are independently learned''. However, some of these features also depend on information that is not always well defined: e.g., they compute the differences between successive pitches, which only works in compositions where the voice leading is absolutely clear; in the majority of the classical piano repertoire, this is not the case. In Laminae @cite_34 , the authors systematize a set of context-dependent models, building a decision tree that renders a performance by combining contextual information.
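To make the shape of such a per-note regression setup concrete, here is a minimal sketch assuming scikit-learn. The thirteen input features and three outputs are represented by random placeholder arrays; none of the feature definitions, kernels, or names reproduce the cited system.

```python
# Illustrative per-note regression setup (placeholder data, not the cited system):
# one independently trained Gaussian Process per expressive output.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
N_NOTES, N_FEATURES = 200, 13                    # thirteen per-note input features
X = rng.normal(size=(N_NOTES, N_FEATURES))       # stand-in score-derived features
targets = {                                      # the three per-note outputs
    "dynamics": rng.normal(size=N_NOTES),
    "attack_time": rng.normal(size=N_NOTES),
    "release_time": rng.normal(size=N_NOTES),
}

models = {}
for name, y in targets.items():
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    models[name] = gp.fit(X, y)                  # each output learned independently

x_new = rng.normal(size=(1, N_FEATURES))         # feature vector of one new note
rendering = {name: m.predict(x_new)[0] for name, m in models.items()}
print(rendering)
```

Training one regressor per output mirrors the independence assumption described above; a joint model would be needed to capture interactions between timing and dynamics.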
{ "abstract": [ "So far, many of the computational models for rendering music performance have been proposed, but they often consist of many heuristic rules and tend to be complex. It makes difficult to generate and select the useful rules, or perform the optimization of parameters in the rules. In this study, we present a new approach that automatically learns a computational model for rendering music performance with score information as an input and the corresponding real performance data as an output. We use a Gaussian Process (GP) incorporated with a Bayesian Committee Machine to reduce naive GP's heavy computation cost, to learn those input-output relationships. We compared three normalized errors: dynamics, attack time and release time between the real and predicted performance by the trained GP to evaluate our proposed scheme. We evaluated the learning ability and the generalization ability. The results show that the trained GP has an acceptable learning ability for 'known' pieces, but show insufficient generalization ability for 'unknown' pieces, suggesting that the GP can learn the expressive music performance without setting many parameters manually, but the size of the current training dataset is not sufficiently large so as to generalize the training pieces to 'unknown' test pieces.", "This paper proposes a system for performance rendering of keyboard instruments. The goal is fully autonomous rendition of a performance with musical smoothness without losing any of the characteristics of the actual performer. The system is based on a method that systematizes combinations of constraints and thereby elucidates the rendering process of the performer’s performance by defining stochastic models that associate artistic deviations observed in a performance with the contextual information notated in its musical score. The proposed system can be used to search for a sequence of optimum cases from the combination of all existing cases of the existing performance observed to render an unseen performance efficiently. Evaluations conducted indicate that musical features expected in existing performances are transcribed appropriately in the performances rendered by the system. The evaluations also demonstrate that the system is able to render performances with natural expressions stably, even for compositions with unconventional styles. Consequently, performances rendered via the proposed system have won first prize in the autonomous section of a performance rendering contest for computer systems." ], "cite_N": [ "@cite_15", "@cite_34" ], "mid": [ "2572594408", "2296217271" ] }
This Time with Feeling: Learning Expressive Musical Performance
0
1808.03715
2963408210
Music generation has generally been focused on either creating scores or interpreting them. We discuss differences between these two problems and propose that, in fact, it may be valuable to work in the space of direct performance generation: jointly predicting the notes and also their expressive timing and dynamics. We consider the significance and qualities of the dataset needed for this. Having identified both a problem domain and characteristics of an appropriate dataset, we show an LSTM-based recurrent network model that subjectively performs quite well on this task. Critically, we provide generated examples. We also include feedback from professional composers and musicians about some of these examples.
Moulieras and Pachet @cite_27 use a maximum entropy model to generate expressive music, but their focus is again on monophonic music plus simple harmonic information. They also explicitly assume that musical expression consists in ``local texture, rather than long-range correlations''. While this is fairly reasonable at this point, and indeed it is hard to say how much long-range correlation is captured by our model, we wished to choose a model that, at least in principle, allowed the possibility of modeling long-range correlations: ultimately, we believe these correlations are of fundamental importance. Malik and Ek @cite_7 use a neural network to predict the dynamic levels of individual notes while assuming quantized and steady timing.
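As a rough illustration of the note-level dynamics-prediction idea, the following PyTorch sketch maps a sequence of quantized note features to one velocity value per note. It is not the cited architecture; the feature set and dimensions are assumptions made up for the example.

```python
# Minimal stand-in for note-level dynamics prediction under quantized, steady timing.
import torch
import torch.nn as nn

class VelocityLSTM(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)              # one dynamic level per note

    def forward(self, notes):                         # notes: (batch, seq_len, n_features)
        h, _ = self.lstm(notes)
        return self.head(h).squeeze(-1)               # (batch, seq_len)

model = VelocityLSTM()
notes = torch.randn(8, 32, 4)        # e.g. pitch, duration, beat position, metric weight
velocities = torch.rand(8, 32)       # normalized target velocities in [0, 1]
loss = nn.MSELoss()(model(notes), velocities)
loss.backward()                      # a single illustrative gradient step
```

Per-note velocity regression of this kind deliberately ignores expressive timing, which is exactly the limitation the surrounding text contrasts with joint performance generation.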
{ "abstract": [ "In the context of contemporary monophonic music, expression can be seen as the difference between a musical performance and its symbolic representation, i.e. a musical score. In this paper, we show how Maximum Entropy (MaxEnt) models can be used to generate musical expression in order to mimic a human performance. As a training corpus, we had a professional pianist play about 150 melodies of jazz, pop, and latin jazz. The results show a good predictive power, validating the choice of our model. Additionally, we set up a listening test whose results reveal that on average, people significantly prefer the melodies generated by the MaxEnt model than the ones without any expression, or with fully random expression. Furthermore, in some cases, MaxEnt melodies are almost as popular as the human performed ones.", "Music is an expressive form of communication often used to convey emotion in scenarios where \"words are not enough\". Part of this information lies in the musical composition where well-defined language exists. However, a significant amount of information is added during a performance as the musician interprets the composition. The performer injects expressiveness into the written score through variations of different musical properties such as dynamics and tempo. In this paper, we describe a model that can learn to perform sheet music. Our research concludes that the generated performances are indistinguishable from a human performance, thereby passing a test in the spirit of a \"musical Turing test\"." ], "cite_N": [ "@cite_27", "@cite_7" ], "mid": [ "2530415313", "2747023714" ] }
This Time with Feeling: Learning Expressive Musical Performance
0
1808.03413
2950506508
We propose a framework called inverse augmented reality (IAR) which describes the scenario that a virtual agent living in the virtual world can observe both virtual objects and real objects. This is different from the traditional augmented reality. The traditional virtual reality, mixed reality and augmented reality are all generated for humans, i.e., they are human-centered frameworks. On the contrary, the proposed inverse augmented reality is a virtual agent-centered framework, which represents and analyzes the reality from a virtual agent's perspective. In this paper, we elaborate the framework of inverse augmented reality to argue the equivalence of the virtual world and the physical world regarding the whole physical structure.
In the past, much related research on augmented reality has been presented. Before IAR, several novel notions of reality were proposed. For example, Lifton et al. @cite_4 proposed the ``dual reality'' system to make the virtual world and the physical world correspond to each other. Roo et al. @cite_0 proposed the ``one reality'' system, which contains a 6-level mixture of virtual and real content ranging from the purely physical to the purely virtual world. However, both describe mixed reality from the perspective of humans, ignoring the view from the virtual world.
{ "abstract": [ "Most of our daily activities take place in the physical world, which inherently imposes physical constraints. In contrast, the digital world is very flexible, but usually isolated from its physical counterpart. To combine these two realms, many Mixed Reality (MR) techniques have been explored, at different levels in the continuum. In this work we present an integrated Mixed Reality ecosystem that allows users to incrementally transition from pure physical to pure virtual experiences in a unique reality. This system stands on a conceptual framework composed of 6 levels. This paper presents these levels as well as the related interaction techniques.", "This paper proposes the convergence of sensor networks and virtual worlds not only as a possible solution to their respective limitations, but also as the beginning of a new creative medium. In such a “dual reality,” both real and virtual worlds are complete unto themselves, but also enhanced by the ability to mutually reflect, influence, and merge by means of sensor actuator networks deeply embedded in everyday environments. This paper describes a full implementation of a dual reality system using a popular online virtual world and a human-centric sensor network designed around a common electrical power strip. Example applications (e.g., browsing sensor networks in online virtual worlds), interaction techniques, and design strategies for the dual reality domain are demonstrated and discussed." ], "cite_N": [ "@cite_0", "@cite_4" ], "mid": [ "2744963411", "2160971146" ] }
Inverse Augmented Reality: A Virtual Agent's Perspective
The basic framework for augmented reality (AR), mixed reality (MR) and virtual reality (VR) was proposed by Milgram et al. [8]. These paradigms are designed for the human-centered world. As the artificial intelligence develops rapidly, a virtual agent will finally possess an independent mind similar to that of humans. Based on Minsky's analysis of the human's mind [9], a virtual agent could develop its own independent mind and live successfully in the virtual world as humans can do in the real world. For this reason, a virtual agent can have an equal status with real humans. The well-known augmented reality can transfer from a human-centered framework to a virtual agent-centered framework. When the virtual agent is the center of the system, it can observe both virtual objects in the virtual world and real objects in the real world. This is called inverse augmented reality (IAR), because it uses an exactly opposite observing direction compared to the traditional augmented reality. The idea of IAR is originally inspired by the concept of the parallel world in the discipline of physics [1]. Based on the consideration of physics, IAR requires that the virtual world exists with similar structures and interaction roles to that of the physical world. These similar structures and interaction roles have been applied to virtual reality in order to define inverse virtual reality (IVR) [13]. In this paper, we would talk about inverse augmented reality using the similar methodology. The study about IAR is significant for two following reasons. First, it figures out the relationship between the virtual world and the physical world under the background of IAR, promoting the development of the scientific architecture of virtual agent-centered inverse augmented reality. Second, it lays the foundation of inverse augmented reality applications which do not treat the human as the system center, increasing the diversity of augmented reality systems. For these reasons, the proposed IAR is expected to make a breakthrough in both theory and practice. * [email protected] Figure 1: A typical scene of inverse augmented reality. In the left side, the virtual agent is represented as an orange avatar. A real chair is registered into the virtual world, so that a virtual one corresponds to a real one. Meanwhile, the virtual yellow table in the virtual world can exist independently with no relationship with the desks in the real world. The real world can be observed by the virtual agent, but only the registered real objects are available data which can augment the virtual world. This paper proposes the concept of IAR, and concretely shows the relationship between the virtual world and the physical world. As shown in Fig. 1, it is a typical scene of IAR. Contribution In this paper, our main contributions are listed as follows. • Propose the concept of inverse augmented reality and elaborate the formulations according to physical properties. • Show the typical structure of inverse augmented reality systems and present the proof of concept for IAR. FRAMEWORK OF INVERSE AUGMENTED REALITY 2.1 Dual-World Structure The proposed inverse augmented reality and the traditional augmented reality, as shown in Fig. 2, are under the unified dual-world structure. The traditional augmented reality (human-centered observation) is to augment the physical world with virtual objects, while the inverse augmented reality (virtual agent-centered observation) is to augment the virtual world with real objects. 
There might be a misconception between the proposed "inverse augmented reality" and another well-known concept called "augmented virtuality". Even though the two concepts are all describing using real elements in the physical world to augment virtual elements in the virtual world, their positions are definitely different. The augmented virtuality means that it is the human who can see a scene where the virtual elements are augmented by real elements, and the human himself is located in the real world. Conversely, the inverse augmented reality means that it is the virtual agent who can see a scene where the virtual elements are augmented by real elements, and the virtual agent itself is located in the virtual world. Mathematical Model Take the visual AR and IAR as the example, the formulation for AR and IAR can be as follows. Let O R denote the real objects, O V the virtual objects, H the humans, A the virtual agents, then we get AR ⇔ S H (O R , O V , A) IAR ⇔ S A (O R , O V , H)(1) where S H denotes the observation function of humans, and S A denotes the observation function of virtual agents. PHYSICAL PERSPECTIVE OF INVERSE AUGMENTED RE-ALITY In this work, we emphasize the equivalence of the virtual world and the physical world regarding the structure in physics. The referred physics here contains both the physical world and the virtual world, i.e., the virtual world is treated as a kind of existence in physics, which possesses the same structure with the physical world. In this way, IAR has the same important role as the traditional AR. We use a definition called physical equivalence to elaborate the equivalence of the physical world and the virtual world. This means the two worlds should be the same when talking about the physical structure, which can also be seen in Equation 1. Spatial Structure In the traditional augmented reality, there are three key components, i.e., the humans, the physical world and the virtual contents added to the physical world. As a correspondence, the same structure applies to inverse augmented reality. Concretely, inverse augmented reality also contains three key components, i.e., the virtual character, the programmable virtual world and the physical contents added to the virtual world. We emphasize the spatial structure rather than the appearance, because the difference regarding appearance is obvious. For example, all objects in the virtual world are data that are first created by human and then develop independently. Though the appearance is different, the spatial structure can be similar, especially the physical roles and interaction ways. Self Development As a common knowledge, the physical world we live in is keeping developing all the time. It seems to be driven by a kind of energy with the form of physical roles. Meanwhile, humans are born with intelligence, so they can actively interact with the physical world. Since the virtual world is expected to be developing by itself, it should have two kinds of agents, i.e., the character agent and the environment agent [7]. The character agent can be treated as a virtual human in the virtual world, while the environment agent determines how the virtual environment can develop automatically. The two agents are created by our physical world, then they construct the virtual world and develop independently without being directly controlled by the physical world. The agents can not only learn from physical world but also evolve by themselves. 
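A toy sketch of the two observation functions S_H and S_A from Equation (1); the class and function names are ours and only illustrate the symmetry of the formulation, not any implementation from this paper.

```python
# Toy model of the dual observation functions: the human sees all real objects
# plus registered virtual ones (AR); the agent sees all virtual objects plus
# registered real ones (IAR). Names and structure are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Obj:
    name: str
    registered: bool = True      # has a counterpart across the two worlds

@dataclass
class World:
    real: list = field(default_factory=list)       # O_R
    virtual: list = field(default_factory=list)    # O_V

def observe_human(w: World):     # S_H: human-centred, traditional AR
    return w.real + [o for o in w.virtual if o.registered]

def observe_agent(w: World):     # S_A: agent-centred, inverse AR
    return w.virtual + [o for o in w.real if o.registered]

w = World(real=[Obj("chair"), Obj("desk", registered=False)],
          virtual=[Obj("yellow table"), Obj("floating cube")])
print([o.name for o in observe_human(w)])   # ['chair', 'desk', 'yellow table', 'floating cube']
print([o.name for o in observe_agent(w)])   # ['yellow table', 'floating cube', 'chair']
```

The symmetry of the two functions is the point of the formulation: swapping which world hosts the observer swaps which objects need to be registered across the bridge.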
Notice that only the character agents can observe things in the proposed framework of inverse augmented reality. Equal-Status Interaction Considering the traditional AR and the proposed IAR, the physical world and the virtual world are equal to each other regarding interaction. As we often see in the traditional AR, a human can interact with both real and virtual objects that have been observed by him. Similarly, the character agent in the virtual world can interact with both virtual and real objects that have been observed by the agent. The two interaction processes are dual processes with the exactly symmetrical interaction style, as shown in Fig. 3. The interaction from virtual world to physical world means the virtual agent can control some physical power in order to change the physical state of real objects, e.g., if the virtual agent want to put a real box on a virtual table, it is required to find a certain physical way to support the real box so that it seems to be on the virtual table. And the physical way to realize this physical effect is expected to be controlled by the virtual agent. This is surely very hard for the current technology, but it is an essential part for IAR to support an equal interaction process compared with the traditional AR. Therefore, the equal-status interaction may need to be further studied and realized in the future. Framework Representation Since the basic framework has been illustrated above, we present a typical demonstration of IAR using an office environment. We add a virtual cube floating above the table, which is located by a small photo. The small photo is fixed on the top of a table, which serves as a bridge connecting the physical world and the virtual world. After the environment is constructed, two views from the different worlds are shown in Fig. 4. In the traditional augmented reality, the user can see the physical environment and the virtual element (a cube with the checkerboard pattern), and she can also interact with the virtual element. In the inverse augmented reality, a virtual agent is constructed, and it can behave like a physical human. Though what can be "seen" by the agent is absolutely some data, we can still figure out the meaning of these data. Usually, these data include the virtual cube that is connected with the physical world, the virtual table that corresponds to the real table in the physical world, and some other virtual objects that do not exist in the physical world. DISCUSSION AND CONCLUSION The equivalence between the virtual world and the real world is proposed regarding the structure. As for the structure, it is already illustrated by introducing all essential parts of the traditional augmented reality and the inverse augmented reality. Though the specific expression forms are different, the two paradigms possess the same structure with each other. Our demonstration is about the concept verification, and all the results are shown directly by images observing from different worlds. This is a clear way to show the concept of IAR. In this paper, we propose the big framework of the traditional augmented reality and the inverse augmented reality. Then we illustrate the main properties of this framework. Under this framework, we emphasize that the self-intelligence would play an important role in the virtual world, which contributes greatly to building an inverse augmented reality system. 
We also present a typical implementation of an inverse augmented reality system, which shows that inverse augmented reality can largely be realized with current techniques. The remaining challenges in the field of inverse augmented reality mainly include three aspects: (1) physical construction of virtual objects in the physical world; (2) specific design of virtual-to-physical bridges; (3) intelligence and knowledge for the self-driven virtual world. Future work will unify the proposed IAR and the previously proposed IVR into a more general framework in order to represent reality at a higher level than the present work does. In this way, what the virtual agent can experience in both the virtual and the real world can be illustrated more completely.
1,811
1808.03413
2950506508
We propose a framework called inverse augmented reality (IAR) which describes the scenario that a virtual agent living in the virtual world can observe both virtual objects and real objects. This is different from the traditional augmented reality. The traditional virtual reality, mixed reality and augmented reality are all generated for humans, i.e., they are human-centered frameworks. On the contrary, the proposed inverse augmented reality is a virtual agent-centered framework, which represents and analyzes the reality from a virtual agent's perspective. In this paper, we elaborate the framework of inverse augmented reality to argue the equivalence of the virtual world and the physical world regarding the whole physical structure.
To make the virtual world intelligent, Taylor et al. @cite_7 discussed the possibility of making a virtual world evolve by itself, taking advantage of the principles of biological evolution observed in the physical world. Though self-learning is not simple, there are many learning frameworks that can be used to obtain this ability, such as evolutionary computation @cite_2 , reinforcement learning @cite_6 and deep learning @cite_9 .
{ "abstract": [ "", "Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.", "This chapter discusses the possibility of instilling a virtual world with mechanisms for evolution and natural selection in order to generate rich ecosystems of complex organisms in a process akin to biological evolution. Some previous work in the area is described, and successes and failures are discussed. The components of a more comprehensive framework for designing such worlds are mapped out, including the design of the individual organisms, the properties and dynamics of the environmental medium in which they are evolving, and the representational relationship between organism and environment. Some of the key issues discussed include how to allow organisms to evolve new structures and functions with few restrictions, and how to create an interconnectedness between organisms in order to generate drives for continuing evolutionary activity.", "From the Publisher: In this revised and significantly expanded second edition, distinguished scientist David B. Fogel presents the latest advances in both the theory and practice of evolutionary computation to help you keep pace with developments in this fast-changing field.. \"In-depth and updated, Evolutionary Computation shows you how to use simulated evolution to achieve machine intelligence. You will gain current insights into the history of evolutionary computation and the newest theories shaping research. Fogel carefully reviews the \"no free lunch theorem\" and discusses new theoretical findings that challenge some of the mathematical foundations of simulated evolution. This second edition also presents the latest game-playing techniques that combine evolutionary algorithms with neural networks, including their success in playing competitive checkers. Chapter by chapter, this comprehensive book highlights the relationship between learning and intelligence.. \"Evolutionary Computation features an unparalleled integration of history with state-of-the-art theory and practice for engineers, professors, and graduate students of evolutionary computation and computer science who need to keep up-to-date in this developing field." ], "cite_N": [ "@cite_9", "@cite_6", "@cite_7", "@cite_2" ], "mid": [ "", "2121863487", "2626320532", "1978970913" ] }
Inverse Augmented Reality: A Virtual Agent's Perspective
The basic framework for augmented reality (AR), mixed reality (MR) and virtual reality (VR) was proposed by Milgram et al. [8]. These paradigms are designed for the human-centered world. As the artificial intelligence develops rapidly, a virtual agent will finally possess an independent mind similar to that of humans. Based on Minsky's analysis of the human's mind [9], a virtual agent could develop its own independent mind and live successfully in the virtual world as humans can do in the real world. For this reason, a virtual agent can have an equal status with real humans. The well-known augmented reality can transfer from a human-centered framework to a virtual agent-centered framework. When the virtual agent is the center of the system, it can observe both virtual objects in the virtual world and real objects in the real world. This is called inverse augmented reality (IAR), because it uses an exactly opposite observing direction compared to the traditional augmented reality. The idea of IAR is originally inspired by the concept of the parallel world in the discipline of physics [1]. Based on the consideration of physics, IAR requires that the virtual world exists with similar structures and interaction roles to that of the physical world. These similar structures and interaction roles have been applied to virtual reality in order to define inverse virtual reality (IVR) [13]. In this paper, we would talk about inverse augmented reality using the similar methodology. The study about IAR is significant for two following reasons. First, it figures out the relationship between the virtual world and the physical world under the background of IAR, promoting the development of the scientific architecture of virtual agent-centered inverse augmented reality. Second, it lays the foundation of inverse augmented reality applications which do not treat the human as the system center, increasing the diversity of augmented reality systems. For these reasons, the proposed IAR is expected to make a breakthrough in both theory and practice. * [email protected] Figure 1: A typical scene of inverse augmented reality. In the left side, the virtual agent is represented as an orange avatar. A real chair is registered into the virtual world, so that a virtual one corresponds to a real one. Meanwhile, the virtual yellow table in the virtual world can exist independently with no relationship with the desks in the real world. The real world can be observed by the virtual agent, but only the registered real objects are available data which can augment the virtual world. This paper proposes the concept of IAR, and concretely shows the relationship between the virtual world and the physical world. As shown in Fig. 1, it is a typical scene of IAR. Contribution In this paper, our main contributions are listed as follows. • Propose the concept of inverse augmented reality and elaborate the formulations according to physical properties. • Show the typical structure of inverse augmented reality systems and present the proof of concept for IAR. FRAMEWORK OF INVERSE AUGMENTED REALITY 2.1 Dual-World Structure The proposed inverse augmented reality and the traditional augmented reality, as shown in Fig. 2, are under the unified dual-world structure. The traditional augmented reality (human-centered observation) is to augment the physical world with virtual objects, while the inverse augmented reality (virtual agent-centered observation) is to augment the virtual world with real objects. 
There might be a misconception between the proposed "inverse augmented reality" and another well-known concept called "augmented virtuality". Even though the two concepts are all describing using real elements in the physical world to augment virtual elements in the virtual world, their positions are definitely different. The augmented virtuality means that it is the human who can see a scene where the virtual elements are augmented by real elements, and the human himself is located in the real world. Conversely, the inverse augmented reality means that it is the virtual agent who can see a scene where the virtual elements are augmented by real elements, and the virtual agent itself is located in the virtual world. Mathematical Model Take the visual AR and IAR as the example, the formulation for AR and IAR can be as follows. Let O R denote the real objects, O V the virtual objects, H the humans, A the virtual agents, then we get AR ⇔ S H (O R , O V , A) IAR ⇔ S A (O R , O V , H)(1) where S H denotes the observation function of humans, and S A denotes the observation function of virtual agents. PHYSICAL PERSPECTIVE OF INVERSE AUGMENTED RE-ALITY In this work, we emphasize the equivalence of the virtual world and the physical world regarding the structure in physics. The referred physics here contains both the physical world and the virtual world, i.e., the virtual world is treated as a kind of existence in physics, which possesses the same structure with the physical world. In this way, IAR has the same important role as the traditional AR. We use a definition called physical equivalence to elaborate the equivalence of the physical world and the virtual world. This means the two worlds should be the same when talking about the physical structure, which can also be seen in Equation 1. Spatial Structure In the traditional augmented reality, there are three key components, i.e., the humans, the physical world and the virtual contents added to the physical world. As a correspondence, the same structure applies to inverse augmented reality. Concretely, inverse augmented reality also contains three key components, i.e., the virtual character, the programmable virtual world and the physical contents added to the virtual world. We emphasize the spatial structure rather than the appearance, because the difference regarding appearance is obvious. For example, all objects in the virtual world are data that are first created by human and then develop independently. Though the appearance is different, the spatial structure can be similar, especially the physical roles and interaction ways. Self Development As a common knowledge, the physical world we live in is keeping developing all the time. It seems to be driven by a kind of energy with the form of physical roles. Meanwhile, humans are born with intelligence, so they can actively interact with the physical world. Since the virtual world is expected to be developing by itself, it should have two kinds of agents, i.e., the character agent and the environment agent [7]. The character agent can be treated as a virtual human in the virtual world, while the environment agent determines how the virtual environment can develop automatically. The two agents are created by our physical world, then they construct the virtual world and develop independently without being directly controlled by the physical world. The agents can not only learn from physical world but also evolve by themselves. 
Notice that only the character agents can observe things in the proposed framework of inverse augmented reality. Equal-Status Interaction Considering the traditional AR and the proposed IAR, the physical world and the virtual world are equal to each other regarding interaction. As we often see in the traditional AR, a human can interact with both real and virtual objects that have been observed by him. Similarly, the character agent in the virtual world can interact with both virtual and real objects that have been observed by the agent. The two interaction processes are dual processes with the exactly symmetrical interaction style, as shown in Fig. 3. The interaction from virtual world to physical world means the virtual agent can control some physical power in order to change the physical state of real objects, e.g., if the virtual agent want to put a real box on a virtual table, it is required to find a certain physical way to support the real box so that it seems to be on the virtual table. And the physical way to realize this physical effect is expected to be controlled by the virtual agent. This is surely very hard for the current technology, but it is an essential part for IAR to support an equal interaction process compared with the traditional AR. Therefore, the equal-status interaction may need to be further studied and realized in the future. Framework Representation Since the basic framework has been illustrated above, we present a typical demonstration of IAR using an office environment. We add a virtual cube floating above the table, which is located by a small photo. The small photo is fixed on the top of a table, which serves as a bridge connecting the physical world and the virtual world. After the environment is constructed, two views from the different worlds are shown in Fig. 4. In the traditional augmented reality, the user can see the physical environment and the virtual element (a cube with the checkerboard pattern), and she can also interact with the virtual element. In the inverse augmented reality, a virtual agent is constructed, and it can behave like a physical human. Though what can be "seen" by the agent is absolutely some data, we can still figure out the meaning of these data. Usually, these data include the virtual cube that is connected with the physical world, the virtual table that corresponds to the real table in the physical world, and some other virtual objects that do not exist in the physical world. DISCUSSION AND CONCLUSION The equivalence between the virtual world and the real world is proposed regarding the structure. As for the structure, it is already illustrated by introducing all essential parts of the traditional augmented reality and the inverse augmented reality. Though the specific expression forms are different, the two paradigms possess the same structure with each other. Our demonstration is about the concept verification, and all the results are shown directly by images observing from different worlds. This is a clear way to show the concept of IAR. In this paper, we propose the big framework of the traditional augmented reality and the inverse augmented reality. Then we illustrate the main properties of this framework. Under this framework, we emphasize that the self-intelligence would play an important role in the virtual world, which contributes greatly to building an inverse augmented reality system. 
We also present a typical implementation of an inverse augmented reality system, which shows that inverse augmented reality can largely be realized with current techniques. The remaining challenges in the field of inverse augmented reality mainly include three aspects: (1) physical construction of virtual objects in the physical world; (2) specific design of virtual-to-physical bridges; (3) intelligence and knowledge for the self-driven virtual world. Future work will unify the proposed IAR and the previously proposed IVR into a more general framework in order to represent reality at a higher level than the present work does. In this way, what the virtual agent can experience in both the virtual and the real world can be illustrated more completely.
1,811
1808.02974
2887133168
Shamir's celebrated secret sharing scheme provides an efficient method for encoding a secret of arbitrary length @math among any @math players such that for a threshold parameter @math , (i) the knowledge of any @math shares does not reveal any information about the secret and, (ii) any choice of @math shares fully reveals the secret. It is known that any such threshold secret sharing scheme necessarily requires shares of length @math , and in this sense Shamir's scheme is optimal. The more general notion of ramp schemes requires the reconstruction of secret from any @math shares, for a positive integer gap parameter @math . Ramp secret sharing scheme necessarily requires shares of length @math . Other than the bound related to secret length @math , the share lengths of ramp schemes can not go below a quantity that depends only on the gap ratio @math . In this work, we study secret sharing in the extremal case of bit-long shares and arbitrarily small gap ratio @math , where standard ramp secret sharing becomes impossible. We show, however, that a slightly relaxed but equally effective notion of semantic security for the secret, and negligible reconstruction error probability, eliminate the impossibility. Moreover, we provide explicit constructions of such schemes. One of the consequences of our relaxation is that, unlike standard ramp schemes with perfect secrecy, adaptive and non-adaptive adversaries need different analysis and construction. For non-adaptive adversaries, we explicitly construct secret sharing schemes that provide secrecy against any @math fraction of observed shares, and reconstruction from any @math fraction of shares, for any choices of @math . Our construction achieves secret length @math , which we show to be optimal. For adaptive adversaries, we construct explicit schemes attaining a secret length @math .
In @cite_23 @cite_17 , the secret is one bit. In @cite_20 , secrets of length equal to a fraction of @math (the number of players) are considered. There, binary secret sharing with adaptive and non-adaptive adversaries, similar to the model we consider in this work, is defined. However, the paper considers only a privacy threshold @math , and reconstruction is from the full share set ( @math always). Their goal is to achieve large secrets @math over binary shares with a large privacy parameter @math , which is also similar to ours. They have the additional goal of keeping the computational complexity of the reconstruction algorithm within AC @math , which we do not consider in this work. Their large privacy parameter @math comes with a @math much smaller than @math , which means that the relative threshold gap @math cannot be arbitrarily small.
{ "abstract": [ "We present a novel method for constructing linear secret sharing schemes (LSSS) from linear error correcting codes and linear universal hash functions in a blackbox way. The main advantage of this new construction is that the privacy property of the resulting secret sharing scheme essentially becomes independent of the code we use, only depending on its rate. This allows us to fully harness the algorithmic properties of recent code constructions such as efficient encoding and decoding or efficient list-decoding. Choosing the error correcting codes and universal hash functions involved carefully, we obtain solutions to the following open problems:", "", "Shamir's scheme for sharing secrets is closely related to Reed-Solomon coding schemes. Decoding algorithms for Reed-Solomon codes provide extensions and generalizations of Shamir's method." ], "cite_N": [ "@cite_20", "@cite_23", "@cite_17" ], "mid": [ "602475741", "", "1990304797" ] }
Secret Sharing with Binary Shares
Secret sharing, introduced independently by Blakley [3] and Shamir [21], is one of the most fundamental cryptographic primitives with far-reaching applications, such as being a major tool in secure multiparty computation (cf. [12]). The general goal in secret sharing is to encode a secret s into a number of shares X 1 , . . . , X N that are distributed among N players such that only certain authorized subsets of the players can reconstruct the secret. An authorized subset of players is a set A ⊆ [N ] such that the set of shares with indices in A can collectively be used to reconstruct the secret s (perfect reconstructiblity). On the other hand, A is an unauthorized subset if the knowledge of the shares with indices in A reveals no information about the secret (perfect privacy). The set of authorized and unauthorized sets define an access structure, of which the most widely used is the so-called threshold structure. A secret sharing scheme with threshold access structure, is defined with respect to an integer parameter t and satisfies the following properties. Any set A ⊆ [N ] with |A| ≤ t is an unauthorized set. That is, the knowledge of any t shares, or fewer, does not reveal any information about the secret. On the other hand, any set A with |A| > t is an authorized set. That is, the knowledge of any t + 1 or more shares completely reveals the secret. Shamir's secret sharing scheme [21] gives an elegant construction for the threshold access structure that can be interpreted as the use of Reed-Solomon codes for encoding the secret. Suppose the secret s is an ℓ-bit string and N ≤ 2 ℓ . Then, Shamir's scheme treats the secret as an element of the finite field F q , where q = 2 ℓ , padded with t uniformly random and independent elements from the same field. The resulting vector over F t+1 q is then encoded using a Reed-Solomon code of length N , providing N shares of length ℓ bits each. The fact that a Reed-Solomon code is Maximum Distance Separable (MDS) can then be used to show that the threshold guarantee for privacy and reconstruction is satisfied. Remarkably, Shamir's scheme is optimal for threshold secret sharing in the following sense: Any threshold secret sharing scheme sharing ℓ-bit secrets necessarily requires shares of length at least ℓ, and Shamir's scheme attains this lower bound [23]. It is natural to ask whether secret sharing is possible at share lengths below the secret length log q < ℓ, where log is to base 2 throughout this work. Of course, in this case, the threshold guarantee that requires all subsets of participants be either authorized, or unauthorized, can no longer be attained. Instead, the notion can be relaxed to ramp secret sharing which allows some subset of participants to learn some information about the secret. A ramp scheme is defined with respect to two threshold parameters, t and r > t + 1. As in threshold scheme, the knowledge of any t shares or fewer does not reveal any information about the secret. On the other hand, any r shares can be used to reconstruct the secret. The subsets of size between t + 1 and r − 1, may learn some information about the secret. The information-theoretic bound (see e.g. [18]) now becomes ℓ ≤ (r − t) log q. (1) Ideally, one would like to obtain equality in (1) for as general parameter settings as possible. Let g : = r − t denote the gap between the privacy and reconstructibility parameters. Let the secret length ℓ and the number of players N be unconstrained integer parameters. 
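As a concrete reference point for the threshold scheme described above, here is a toy Shamir implementation over a small prime field (the text works over GF(2^l); a prime field keeps the arithmetic readable). The function names and the modulus are ours, and this is a sketch for intuition, not hardened cryptographic code.

```python
# Toy Shamir threshold scheme over GF(P): a degree-t polynomial with the secret
# as its constant term; any t shares reveal nothing, any t+1 reconstruct exactly.
import random

P = 2**31 - 1                      # a prime; shares and secret live in GF(P)

def share(secret, t, n):
    """Encode `secret` into n shares with privacy threshold t."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]   # random degree-t polynomial
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t+1 shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=7)
assert reconstruct(shares[:4]) == 123456789    # any t + 1 = 4 shares suffice
```

A ramp variant can pack up to r - t field elements of secret into the polynomial instead of one, which is what attains the bound (1) quoted above with equality.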
It is known that, using Reed-Solomon code interpretation of Shamir's approach applied to a random linear code, for every fixed relative gap γ : = g/N , there is a constant q only depending on γ such that a ramp secret sharing scheme with share size q exists. Such schemes can actually be constructed by using explicit algebraic geometry codes instead of random linear codes. In fact, this dependence of share size q on relative gap g/N is inherent for threshold and more generally ramp schemes. It is shown in an unpublished work of Kilian and Nisan 1 for threshold schemes, and later more generally in [8], that for ramp schemes with share size q, threshold gap g, privacy threshold t and unconstrained number of players N , the following always holds: q ≥ (N − t + 1)/g. Very recently in [4], a new bound with respect to the reconstruction parameter r is proved through connecting secret sharing for one bit secret to game theory: q ≥ (r + 1)/g. These two bounds together yield q ≥ (N + g + 2)/(2g). (2) Note that the bound (2) is very different from the bound (1) in nature. The bound (1) is the fundamental limitation of information-theoretic security, bearing the same flavour as the One-Time-Pad. The bound (2) is independent of the secret length and holds even when the secret is one bit. We ask the following question: For a fixed constant share size q (in particular, q = 2), is it possible to construct (relaxed but equally effective) ramp secret sharing schemes with arbitrarily small relative gap γ > 0 that asymptotically achieve equality in (1)? Our results in this work show that the restriction (2) can be overcome if we allow a negligible privacy error in statistical distance (semantic security) and a negligible reconstruction error probability. Our contributions We motivate the study of secret sharing scheme with fixed share size q, and study the extremal case of binary shares. Our goal is to show that even in this extremely restrictive case, a slight relaxation of the privacy and reconstruction notions of ramp secret sharing guarantees explicit construction of families of ramp schemes 2 with any constant relative privacy and reconstruction thresholds 0 ≤ τ < ρ ≤ 1, in particular, the relative threshold gap γ = ρ − τ can be an arbitrarily small constant. Namely, for any constants 0 ≤ τ < ρ ≤ 1, it can be guaranteed that any τ N or fewer shares reveal essentially no information about the secret, whereas any ρN or more shares can reconstruct the exact secret with a negligible failure probability. While we only focus on the extremal special case q = 2 in this presentation, all our results can be extended to any constant q (see Section 6). We consider binary sharing of a large ℓ-bit secret and for this work focus on the asymptotic case where the secret length ℓ, and consequently the number of players N , are sufficiently large. We replace perfect privacy with semantic security, the strongest cryptographic notion of privacy second only to perfect privacy. That is, for any two secrets (possibly chosen by the adversary), we require the adversary's view to be statistically indistinguishable. The view of the adversary is a random variable with randomness coming solely from the internal randomness of the sharing algorithm. The notion of indistinguishability that we use is statistical (total variation) distance bounded by a leakage error parameter ε that is negligible in N . Using non-perfect privacy creates a distinction between non-adaptive and adaptive secrecy. 
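As an aside before the adversary models are detailed: the combined bound (2) quoted above follows from the two preceding bounds by simple averaging. This is our arithmetic spelled out, using g = r - t.

```latex
2q \;\ge\; \frac{N-t+1}{g} + \frac{r+1}{g}
   \;=\; \frac{N + (r-t) + 2}{g}
   \;=\; \frac{N+g+2}{g}
\qquad\Longrightarrow\qquad
q \;\ge\; \frac{N+g+2}{2g}.
```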
A non-adaptive adversary chooses any τ fraction of the N players at once, and receives their corresponding shares. An adaptive adversary however, selects share holders one by one, receives their shares and uses its available information to make its next choice. When ε = 0, i.e., when perfect privacy holds, non-adaptive secrecy automatically implies adaptive secrecy as well. However, this is not necessarily the case when ε > 0 and we thus study the two cases separately. Similarly, we replace the perfect reconstruction with probabilistic reconstruction allowing a failure probability δ that is negligible in N . The special case of δ = 0 means perfect reconstruction. Note that secret sharing with fixed share size necessarily imposes certain restrictions that are not common in standard secret sharing. Unlike secret sharing with share length dependent on the secret length (for threshold schemes) or secret length and threshold gap (for ramp schemes), binary sharing of an ℓ-bit secret obviously requires at least ℓ shares to accommodate the secret information. For a family of ramp secret sharing schemes with fixed share size q and fixed relative thresholds 0 ≤ τ < ρ ≤ 1, as N grows the absolute gap length (ρ−τ )N grows, and the accommadatable length of the secret is expected to grow and so the ratio ℓ/N ∈ (0, 1] becomes a key parameter of interest for the family, referred to as the coding rate. As is customary in coding theory, it is desired to characterize the maximum possible ratio ℓ/N ∈ (0, 1] for binary secret sharing. We use the relation (a similar relation was used in [10] for robust secret sharing) between a binary secret sharing family with relative threshold (τ, ρ) and codes for a Wyner wiretap channel with two BEC's to derive a coding rate upper bound of ρ − τ for binary secret sharing (see Lemma 11). Our main technical contributions are explicit constructions of binary secret sharing schemes in both the non-adaptive and adaptive models, and proving optimality of non-adaptive construction. Namely, we prove the following: Theorem 1 (informal summary of Lemma 11, Corollary 17, and Corollary 21). For any choice of 0 ≤ τ < ρ ≤ 1, and large enough N , there is an explicit construction of a binary secret sharing scheme with N players that provides (adaptive or non-adaptive) privacy against leakage of any τ N or fewer shares, as well as reconstruction from any ρN or more of the shares (achieving semantic secrecy with negligible error and imperfect reconstruction with negligible failure probability). For non-adaptive secrecy, the scheme shares a secret of length ℓ = (ρ − τ − o(1))N , which is asymptotically optimal. For adaptive secrecy, the scheme shares a secret of length ℓ = Ω((ρ − τ )N ). As a side contribution, our findings unify the Wyner wiretap model and its adversarial analogue. Our capacity-achieving construction of binary secret sharing for non-adaptive adversaries implies that the secrecy capacity of the adversarial analogue of the erasure scenario Wyner wiretap channel is similarly characterized by the erasure ratios of the two channels. Moreover, the secrecy can be strengthened to semantic security. This answers an open question posted in [1]. The authors studied a generalisation of the wiretap II model, where the adversary chooses t bits to observe and erases them. They showed that the rate 1 − τ − h 2 (τ ), where h 2 (·) is the binary entropy function, can be achieved and left open the question of whether a higher rate is achievable. 
Our result specialized to their setting shows that, the rate 1 − 2τ can be explicitly achieved. Our approach and techniques Our explicit constructions follow the paradigm of invertible randomness extractors formalized in [11]. Invertible extractors were used in [11] for explicit construction of optimal wiretap coding schemes in the Wiretap channel II [19]. This, in particular, is corresponding to the ρ = 1 special case of secret sharing where reconstruction is only required when all shares are available. Moreover, the secrecy there is an information-theoretic notion, and only required to hold for uniform messages. The consequence of the latter is that the construction in [11] does not directly give us binary secret sharing, not even for the ρ = 1 special case. The exposition below is first focused on how semantic security is achieved. As in [11], we rely on invertible affine extractors as our primary technical tool. Such an extractor is an explicit function AExt : {0, 1} n → {0, 1} ℓ such that, for any random variable X uniformly distributed over an unknown k-dimensional affine subspace of F n 2 , the distribution of AExt(X) is close to the uniform distribution over F ℓ 2 in statistical distance. Furthermore, the invertibility guarantee provides an efficient algorithm for sampling a uniform element from the set AExt −1 (s) of pre-images for any given output s ∈ F ℓ 2 . It is then natural to consider the affine extractor's uniform inverter as a candidate building block for the sharing algorithm of a secret sharing scheme. Intuitively, if the secret s is chosen uniformly at random, we have the guarantee that for any choice of a bounded number of the bits of its random pre-image revealed to the adversary, the distribution of the random pre-image conditioned on the revealed value satisfies that of an affine source. Now according to the definition of an affine extractor, the extractor's output (i.e., the secret s) remains uniform (and thus unaffected in distribution) given the information revealed to the adversary. Consequently, secrecy should at least hold in an information-theoretic sense, i.e. the mutual information between the secret and the revealed vector components is zero. This is what was formalized and used in [11] for the construction of Wiretap channel II codes. For non-adaptive adversaries, in fact it is possible to use invertible seeded extractors rather than invertible affine extractors described in the above construction. A (strong) seeded extractor assumes, in addition to the main input, an independent seed as an auxiliary input and ensures uniformity of the output for most fixings of the seed. The secret sharing encoder appends a randomly chosen seed to the encoding and inverts the extractor with respect to the chosen seed. Then, the above argument would still hold even if the seed is completely revealed to the adversary. The interest in the use of seeded, as opposed to seedless affine, extractors is twofold. First, nearly optimal and very efficient constructions of seeded extractors are known in the literature that extract nearly the entire source entropy with only a short seed. This allows us to attain nearly optimal rates for the non-adaptive case. Furthermore, and crucially, such nearly optimal extractor constructions (in particular, Trevisan's extractor [24,20]) can in fact be linear functions for every fixed choice of the seed (in contrast, seedless affine extractors can never be linear functions). 
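The inversion step for a linear extractor reduces to sampling a uniform solution of a linear system over GF(2): for a fixed seed the extractor is some matrix A, and a uniform preimage of an output s is a particular solution of Ax = s plus a uniformly chosen element of the null space. The sketch below uses a random matrix as a stand-in for the actual extractor; it is our illustration of the idea, not the construction in the paper.

```python
# Uniformly sample a preimage of s under a linear map A over GF(2):
# Gauss-Jordan elimination gives a particular solution and a null-space basis.
import numpy as np

rng = np.random.default_rng(1)

def solve_gf2(A, s):
    """Return (particular solution, null-space basis) of A x = s over GF(2)."""
    n = A.shape[1]
    aug = np.concatenate([A % 2, (s % 2).reshape(-1, 1)], axis=1).astype(np.uint8)
    pivots, row = [], 0
    for col in range(n):
        hits = np.nonzero(aug[row:, col])[0]
        if len(hits) == 0:
            continue                                   # free column
        aug[[row, row + hits[0]]] = aug[[row + hits[0], row]]   # swap pivot row up
        for r in range(aug.shape[0]):
            if r != row and aug[r, col]:
                aug[r] ^= aug[row]                     # eliminate column everywhere else
        pivots.append(col)
        row += 1
        if row == aug.shape[0]:
            break
    x0 = np.zeros(n, dtype=np.uint8)
    for r, col in enumerate(pivots):                   # free variables set to 0
        x0[col] = aug[r, -1]
    basis = []
    for f in [c for c in range(n) if c not in pivots]:
        v = np.zeros(n, dtype=np.uint8)
        v[f] = 1
        for r, col in enumerate(pivots):
            v[col] = aug[r, f]
        basis.append(v)
    return x0, basis

ell, n = 8, 20
A = rng.integers(0, 2, size=(ell, n), dtype=np.uint8)  # extractor for one fixed seed
x_true = rng.integers(0, 2, size=n, dtype=np.uint8)
s = A.dot(x_true) % 2                                  # an output that certainly has preimages

x0, basis = solve_gf2(A, s)
x = x0.copy()
for v in basis:                                        # add a uniform null-space element
    if rng.integers(2):
        x ^= v
assert np.array_equal(A.dot(x) % 2, s)                 # x is a uniform preimage of s
```

Sampling the preimage this way is uniform over the solution set because every solution is the particular solution plus exactly one null-space element.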
We take advantage of the linearity of the extractor in a crucial way and use a rather delicate analysis to show that in fact the linearity of the extractor can be utilized to prove that the resulting secret sharing scheme provides the stringent worst-case secret guarantee which is a key requirement distinguishing secret sharing schemes (a cryptographic primitive) from wiretap codes (an information-theoretic notion). Using a seeded extractor instead of a seedless extractor, however, introduces a new challenge. In order for the seeded extractor to work, the seed has to be independent of the main input, which is a distribution induced by the adversary's choice of reading positions. The independence of the seed and the main input can be directly argued when the adversary is non-adaptive. An adaptive adversary, however, may choose its reading positions to learn about the seed first, and then choose the rest of the reading positions according the value of the seed. In this case, we can not prove the independence of the seed and the main input. For adaptive adversaries, we go back to using an invertible affine extractor. We prove that both security for worst-case messages and against adaptive adversaries are guaranteed if the affine extractor provides the strong guarantee of having a nearly uniform output with respect to the ℓ ∞ measure rather than ℓ 1 . However, this comes at the cost of the extractor not being able to extract the entire entropy of the source, leading to ramp secret sharing schemes with slightly sub-optimal rates, albeit still achieving rates within a constant factor of the optimum. As a proof of concept, we utilize a simple padding and truncation technique to convert any off-the-shelf seedless affine extractor (such as those of Bourgain [7] or Li [17]) to one that satisfies the stronger uniformity condition that we require. We now turn to reconstruction from an incomplete set of shares. In order to provide reconstructibility from a subset of size r of the shares, we naturally compose the encoding obtained from the extractor's inversion routine with a linear erasure-correcting code. The linearity of the code ensures that the extractor's input subject to the adversary's observation (which now can consist of linear combinations of the original encoding) remains uniform on some affine space, thus preserving the privacy guarantee. However, since by the known rate-distance trade-offs of binary error-correcting codes, no deterministic coding scheme can correct more than a 1/2 fraction of erasures (a constraint that would limit the choice of ρ), the relaxed notion of stochastic coding schemes is necessary for us to allow reconstruction for all choices of ρ ∈ (τ, 1]. Intuitively, a stochastic code is a randomized encoder with a deterministic decoder, that allows the required fraction of errors to be corrected. We utilize what we call a stochastic affine code. Such codes are equipped with encoders that are affine functions of the message for every fixing of the encoder's internal randomness. We show that such codes are as suitable as deterministic linear codes for providing the linearity properties that our construction needs. In fact, we need capacity-achieving stochastic erasure codes, i.e., those that correct every 1 − ρ fraction of erasures at asymptotic rate ρ, to be able to construct binary secret sharing schemes with arbitrarily small relative gap γ = ρ − τ . 
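Before turning to the construction of such codes, here is a quick runnable check (ours) of what the affineness requirement means: for a fixed randomness r the encoder is an affine map m -> mG_r + Delta_r over GF(2), so the encodings of m1, m2 and 0 XOR to the encoding of m1 XOR m2. G_r and Delta_r below are random stand-ins, not an actual erasure code.

```python
# Check the affine structure of a fixed-randomness encoder over GF(2).
import numpy as np

rng = np.random.default_rng(7)
m_len, n_len = 4, 10
G_r = rng.integers(0, 2, size=(m_len, n_len), dtype=np.uint8)   # generator for this r
Delta_r = rng.integers(0, 2, size=n_len, dtype=np.uint8)        # offset for this r

def enc(m):                                   # affine map m -> m G_r + Delta_r
    return (m @ G_r + Delta_r) % 2

m1 = rng.integers(0, 2, size=m_len, dtype=np.uint8)
m2 = rng.integers(0, 2, size=m_len, dtype=np.uint8)
zero = np.zeros(m_len, dtype=np.uint8)

lhs = enc(m1) ^ enc(m2) ^ enc(zero)
rhs = enc(m1 ^ m2)
assert np.array_equal(lhs, rhs)               # affineness over GF(2)
```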
To construct capacity-achieving stochastic affine erasure codes, we utilize a construction of stochastic codes due to Guruswami and Smith [16] for bit-flip errors. We observe that this construction can be modified to yield capacity-achieving erasure codes. Roughly speaking, this is achieved by taking an explicit capacity-achieving linear code for BEC and pseudo-randomly shuffling the codeword positions. Combined with a delicate encoding of hidden "control information" to communicate the choice of the permutation to the decoder in a robust manner, the construction transforms robustness against random erasures to worst-case erasures at the cost of making the encoder randomized. Organization of the paper Section 2 contains a brief introduction to the two building blocks for our constructions: randomness extractors and stochastic codes. In Section 3, we formally define the binary secret sharing model and prove a coding rate upper bound. Section 4 contains a capacity-achieving construction with privacy against non-adaptive adversaries. Section 5 contains a constant rate construction with privacy against adaptive adversaries. Finally, we conclude the paper and discuss open problems in Section 6. Preliminaries and definitions In this section, we review the necessary facts and results about randomness extractors, both the seeded and seedless affine variants, as well as the stochastic erasure correcting codes. Randomness extractors extract close to uniform bits from input sequences that are not uniform but have some guaranteed entropy. The closeness to uniform of the extractor output is measured by the statistical distance (half the ℓ 1 -norm). For a set X , we use X ← X to denote that X is distributed over the set X . For two random variables X, Y ← X , the statistical distance between X and Y is defined as, SD(X; Y) = 1 2 x∈X |Pr[X = x] − Pr[Y = x]| . We say X and Y are ε-close if SD(X, Y) ≤ ε. A randomness source is a random variable with lower bound on its min-entropy, which is defined by H ∞ (X) = − log max x {Pr[X = x]}. We say a random variable X ← {0, 1} n is a (n, k)-source if H ∞ (X) ≥ k. For well structured sources, there exist deterministic functions that can extract close to uniform bits. The support of X ← X is the set of x ∈ X such that Pr[X = x] > 0. An affine (n, k)-source is an (n, k)-source whose support is an affine sub-space of {0, 1} n and each vector in the support occurs with the same probability. Let U m denote the random variable uniformly distributed over {0, 1} m . where S is chosen uniformly from {0, 1} d . A seeded extractor Ext(·, ·) is called linear if for any fixed seed S = s, the function Ext(s, ·) is a linear function. We will use Trevisan's extractor [24] in our first construction. In particular, we use the following improvement of this extractor due to Raz, Reingold and Vadhan [20]. We will use Bourgain's affine extractor in our second construction. We note, however, that we could have used other explicit extractors for this purpose, such as [17]. Explicit constructions of randomness extractors have efficient forward direction of extraction. In some applications, we usually need to efficiently invert the process: Given an extractor output, sample a random pre-image. • (Inversion) Given y ∈ {0, 1} m such that its pre-image f −1 (y) is nonempty, for every r ∈ {0, 1} r we have f (Inv(y, r)) = y. • (Uniformity) Inv(U m , U r ) is v-close to U n . 
A v-inverter is called efficient if there is a randomized algorithm that runs in worst-case polynomial time and, given y ∈ {0, 1} m and r as a random seed, computes Inv(y, r). We call a mapping vinvertible if it has an efficient v-inverter, and drop the prefix v from the notation when it is zero. We abuse the notation and denote the inverter of f by f −1 . A stochastic code has a randomised encoder and a deterministic decoder. The encoder Enc : {0, 1} m × R → {0, 1} n uses local randomness R ← R to encode a message m ∈ {0, 1} m . The decoder is a deterministic function Dec : {0, 1} n → {0, 1} m ∪ {⊥}. The decoding probability is defined over the encoding randomness R ← R. Stochastic codes are known to explicitly achieve the capacity of some adversarial channels [16]. Affine sources play an important role in our constructions. We define a general requirement for the stochastic code used in our constructions. Definition 7 (Stochastic Affine codes). Let Enc : {0, 1} m × R → {0, 1} n be the encoder of a stochastic code. We say it is a stochastic affine code if for any r ∈ R, the encoding function Enc(·, r) specified by r is an affine function of the message. That is we have Enc(m, r) = mG r + ∆ r , where G r ∈ {0, 1} m×n and ∆ r ∈ {0, 1} n are specified by the randomness r. We then adapt a construction in [16] to obtain the following capacity-achieving Stochastic Affine-Erasure Correcting Code (SA-ECC). In particular, we show for any p ∈ [0, 1), there is an explicit stochastic affine code that corrects p fraction of adversarial erasures and achieves the rate 1 − p (see Appendix A for more details). Lemma 8 (Adapted from [16]). For every p ∈ [0, 1), and every ξ > 0, there is an efficiently encodable and decodable stochastic affine code (Enc, Dec) with rate R = 1 − p − ξ such that for every m ∈ {0, 1} N R and erasure pattern of at most p fraction, we have Pr[Dec( Enc(m)) = m] ≥ 1 − exp(−Ω(ξ 2 N/ log 2 N )), where Enc(m) denotes the partially erased random codeword and N denotes the length of the codeword. Binary secret sharing schemes In this section, we define our model of nearly-threshold binary secret sharing schemes. We begin with a description of the two models of non-adaptive and adaptive adversaries which can access up to t of the N shares. A leakage oracle is a machine O(·) that takes as input an N -bit string c ∈ {0, 1} N and then answers the leakage queries of the type I j , for I j ⊂ [N ], j = 1, 2, . . . , q. Each query I j is answered with c I j . An interactive machine A that issues the leakage queries is called a leakage adversary. Let A c = ∪ q j=1 I j denote the union of all the index sets chosen by A when the oracle input is c. The oracle is called t-bounded, denoted by O t (·), if it rejects leakage queries from A if there exists some c ∈ {0, 1} N such that |A c | > t. An adaptive leakage adversary decides the index set I j+1 according to the oracle's answers to all previous queries I 1 , . . . , I j . A non-adaptive leakage adversary has to decide the index set A c before any information about c is given. This means that for a non-adaptive adversary, given any oracle input c ∈ {0, 1} N , we always have A c = A for some A ⊂ [N ]. Let View Ot(·) A denote the view of the leakage adversary A interacting with a t-bounded leakage oracle. When A is non-adaptive, we use the shorthand View Ot(·) A = (·) A , for some A ⊂ [N ] of size |A| ≤ t. A function ε : N → R is called negligible if for every positive integer k, there exists an N k ∈ N such that |ε(N )| < 1 N k for all N > N k . 
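As a small companion to the preliminaries above (illustrative code only; none of the routines below appear in the paper), the following sketch computes statistical distance and min-entropy for explicitly given distributions and draws samples from an affine (n, k)-source:

# Companion to the definitions above: statistical distance, min-entropy,
# and sampling from an affine (n, k)-source over F_2. Purely illustrative.
import numpy as np

def statistical_distance(p, q):
    """SD(P; Q) = 1/2 * sum_x |P(x) - Q(x)| for distributions given as arrays."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def min_entropy(p):
    """H_inf(P) = -log2 max_x P(x)."""
    return -np.log2(np.max(np.asarray(p)))

def sample_affine_source(n, k, num_samples, rng):
    """Uniform samples from offset + span(b_1, ..., b_k), an affine source in {0,1}^n."""
    basis = rng.integers(0, 2, size=(k, n), dtype=np.uint8)   # may have rank < k; fine for a demo
    offset = rng.integers(0, 2, size=n, dtype=np.uint8)
    coeffs = rng.integers(0, 2, size=(num_samples, k), dtype=np.uint8)
    return (coeffs @ basis + offset) % 2

rng = np.random.default_rng(2)
p = [0.5, 0.25, 0.25, 0.0]          # a distribution on {0,1}^2 with H_inf = 1, i.e. a (2,1)-source
u = [0.25] * 4
print(statistical_distance(p, u))   # 0.25
print(min_entropy(p))               # 1.0
print(sample_affine_source(n=8, k=3, num_samples=4, rng=rng))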
The following definition of ramp Secret Sharing Scheme (SSS) allows imperfect privacy and reconstruction with errors bounded by negligible functions ε(·) and δ(·), respectively. Definition 9. For any 0 ≤ τ < ρ ≤ 1, an (ε(N ), δ(N ))-SSS with relative threshold pair (τ, ρ) is a pair of polynomial-time algorithms (Share, Recst), Share : {0, 1} ℓ(N ) × R → {0, 1} N , where R denote the randomness set, and Recst : {0, 1} N → {0, 1} ℓ(N ) ∪ {⊥}, where {0, 1} N denotes the subset of ({0, 1} ∪ {?}) N with at least N ρ components not equal to the erasure symbol "?", that satisfy the following properties. • Reconstruction: Given r(N ) = N ρ correct shares of a share vector Share(s), the reconstruct algorithm Recst reconstructs the secret s with probability at least 1 − δ(N ). When δ(N ) = 0, we say the SSS has perfect reconstruction. • Privacy (non-adaptive/adaptive): -Non-adaptive: for any s 0 , s 1 ∈ {0, 1} ℓ(N ) , any A ⊂ [N ] of size |A| ≤ t(N ) = N τ , SD(Share(s 0 ) A ; Share(s 1 ) A ) ≤ ε(N ).(3) -Adaptive: for any s 0 , s 1 ∈ {0, 1} ℓ(N ) and any adaptive adversary A interacting with a t(N )-bounded leakage oracle O t(N ) (·) for t(N ) = N τ , SD View O t(N) (Share(s 0 )) A ; View O t(N) (Share(s 1 )) A ≤ ε(N ).(4) When ε(N ) = 0, we say the SSS has perfect privacy. The difference γ = ρ − τ is called the relative gap, since N γ = r(N ) − t(N ) is the threshold gap of the scheme. When clear from context, we write ε, δ, t, k, ℓ instead of ε(N ), δ(N ), t(N ), r(N ), ℓ(N ). When the parameters are not specified, we call a (ε, δ)-SSS simply a binary SSS. In the above definition, a binary SSS has a pair of designed relative thresholds (τ, ρ). In this work, we are concerned with constructing nearly-threshold binary SSS, namely, binary SSS with arbitrarily small relative gap γ = ρ − τ . We also want our binary SSS to share a large secret ℓ = Ω(N ). Definition 10. For any 0 ≤ τ < ρ ≤ 1, a coding rate R ∈ [0, 1] is achievable if there exists a family of (ε, δ)-SSS with relative threshold pair (τ, ρ) such that ε and δ are both negligible in N and ℓ N → R. The highest achievable coding rate of binary SSS for a pair (τ, ρ) is called its capacity. By relating binary SSS to Wyner wiretap codes with a pair of BEC's, we obtain the following coding rate upper bound for binary SSS. Lemma 11. For 0 ≤ τ < ρ ≤ 1, the coding rate capacity of binary SSS with relative threshold pair (τ, ρ) is asymptotically upper-bounded by ρ − τ . Proof. Let (Share, Recst) be a non-adaptive binary SSS with relative threshold pair (τ, ρ). We use Share as the encoder and Recst as the decoder, and verify in the following that we obtain a Wyner wiretap code for a BEC pm main channel and a BEC pw wiretapper channel, where p m = 1 − ρ − ξ and p w = 1 − τ + ξ, respectively, for arbitrarily small ξ > 0. Erasure in binary SSS is worst-case, while it is probabilistic in the Wyner wiretap model. We however note that asymptotically, the number of random erasures of BEC pm and BEC pw approaches N p m and N p w , respectively, with overwhelming probability, and so a code that protects against worst-case erasure can be used as a wiretap code with probabilistic erasure. In our proof we also take into account the difference in the secrecy notion in SSS and in the case of Wyner wiretap code. The N -bit output Y = Y 1 , . . . , Y N of a BEC p has a distribution where each bit is identically independently erased with probability p. By the Chernoff-Hoeffding bounds, the fraction η of erasures satisfies the following. 
For arbitrarily small ξ > 0,

Pr[η ≥ p + ξ] ≤ ( (p/(p+ξ))^{p+ξ} · ((1−p)/(1−p−ξ))^{1−p−ξ} )^N ;
Pr[η ≤ p − ξ] ≤ ( (p/(p−ξ))^{p−ξ} · ((1−p)/(1−p+ξ))^{1−p+ξ} )^N .

Applying the two inequalities to BEC_{p_m} and BEC_{p_w}, respectively, we obtain the following conclusions. The probability that BEC_{p_m} produces at least a p_m + ξ = 1 − ρ fraction of erasures and the probability that BEC_{p_w} produces at most a p_w − ξ = 1 − τ fraction of erasures are both at most exp(−Ω(N)) for arbitrarily small ξ > 0. We are ready to prove the Wyner wiretap reliability and secrecy properties as defined in [25,14]. We show correct decoding with probability 1 − o(1). When the erasure fraction is below p_m + ξ = 1 − ρ, it follows directly from the reconstructability of the SSS that the decoding error is bounded from above by δ, which is arbitrarily small for big enough N, where the probability is over the randomness of the encoder. When the erasure fraction is not below p_m + ξ = 1 − ρ, we have no guarantee of correct decoding. But as argued above, this only occurs with negligible probability over the randomness of the BEC_{p_m}. Averaging over the channel randomness of the BEC_{p_m}, we have correct decoding with probability 1 − o(1). We show random message equivocation secrecy H(S|W) ≥ ℓ(1 − o(1)), where S is a uniform secret and W = BEC_{p_w}(Share(S)) is the view of the wiretapper. We in fact first prove the wiretap indistinguishability security as defined in [2] and then deduce that it implies Wyner wiretap secrecy as defined in [25,14]. For each erasure pattern of BEC_{p_w} with at least a p_w − ξ = 1 − τ fraction of erasures (say the positions in A ⊂ [N] are not erased; equivalently, |A| ≤ Nτ), the binary SSS privacy gives that for any two secrets, the corresponding views W|(S = s_0, A not erased) and W|(S = s_1, A not erased) are indistinguishable with error ε, which is arbitrarily small for big enough N. The distributions (W|S = s_0) and (W|S = s_1) are convex combinations of W|(S = s_0, A not erased) and W|(S = s_1, A not erased), respectively, over all the erasure patterns A of BEC_{p_w}. As argued before, the probability that the erasure fraction does not exceed p_w − ξ = 1 − τ is negligible. We average over the channel randomness of the wiretapper channel BEC_{p_w} and conclude that the statistical distance between (W|S = s_0) and (W|S = s_1) is arbitrarily small for big enough N. According to [2], this is strictly stronger than Wyner wiretap secrecy. Finally, we use the coding rate upper bound of the Wyner wiretap code to bound the coding rate of binary SSS. We have shown that a binary SSS with relative threshold pair (τ, ρ) is a wiretap code for the pair (BEC_{p_m}, BEC_{p_w}). According to [25,14], the achievable coding rate for the Wyner wiretap code is (1 − p_m) − (1 − p_w) = p_w − p_m = ρ − τ + 2ξ. Since this holds for arbitrarily small ξ > 0, we obtain an upper bound of ρ − τ for binary SSS with relative threshold pair (τ, ρ). In the rest of the paper, we give two constant-rate constructions of nearly-threshold binary SSS against non-adaptive and adaptive adversaries, respectively. The non-adaptive construction is optimal in the sense that its coding rate achieves the upper bound in Lemma 11. Secret sharing against non-adaptive adversaries We first present our construction of capacity-achieving binary SSS against non-adaptive adversaries, using linear strong seeded extractors and optimal-rate stochastic erasure correcting codes. The following theorem describes the construction using these components.
Share(s) = SA-ECCenc(Z||Ext −1 (Z, s)), where Z $ ← {0, 1} d ; Recst(ṽ) = Ext(z, x), where (z||x) = SA-ECCdec(ṽ). Hereṽ denotes an incomplete version of a share vector v ∈ {0, 1} N with some of its components replaced by erasure symbols. The proof of Theorem 12 will follow naturally from Lemma 13. We first state and prove this general property of a linear strong extractor, which is of independent interest. For the property to hold, we in fact only need the extractor to be able to extract from affine sources. The proof of Lemma 13 is a bit long. We then break it into a claim and two propositions. SD(f A (Z, X); f A (Z ′ , X ′ )) ≤ 8ε.(5) Proof. For the above pairwise guarantee (5) to hold, it suffices to show that for every fixed choice of m ∈ {0, 1} m , the distribution of f A (Z, X) is (4ε)-close to U d ×D, where U d is the uniform distribution on {0, 1} d . Without loss of generality, we assume that the linear function Ext(z, ·) : {0, 1} n → {0, 1} m , for every seed z, has the entire {0, 1} m as its image 3 . Without loss of generality, it suffices to assume that f A is of the form f A (Z, X) = (Z, W (X)) for some affine function W : {0, 1} n → {0, 1} t . This is because for any arbitrary f A , the information contained in f A (Z, X) can be obtained from (Z, W (X)) for a suitable choice of W . Let D be the uniform distribution on the image of W . Let K ← {0, 1} n be a random variable uniformly distributed over the kernel of the linear transformation defined by W , and note that it has entropy at least n − t ≥ k. The extractor Ext thus guarantees that Ext(Z, K), for a uniform and independent seed Z, is ε-close to uniform. By averaging, it follows that for at least 1 − 4ε fraction of the choices of the seed z ∈ {0, 1} d , the distribution of Ext(z, K) is (1/4)-close to uniform. Claim 14. Let U be uniformly distributed on {0, 1} m and U ′ be any affine source that is not uniform on {0, 1} m . Then, the statistical distance between U and U ′ is at least 1/2. Claim 14 follows from the observation that any affine source U ′ that is not uniform on {0, 1} m will have a support (the set of vectors u such that Pr[U ′ = u] > 0) that is an affine subspace of {0, 1} m with dimension at most m − 1. Continuing with the previous argument, since Ext is a linear function for every seed, the distribution of Ext(z, K) for any seed z is an affine source. Therefore, the above claim allows us to conclude that for at least 1 − 4ε fraction of the choices of z, the distribution of Ext(z, K) is exactly uniform. Let G ⊆ {0, 1} d be the set of such choices of the seed. Observe that if Ext(z, K) is uniform for some seed z, then for any affine translation of K, namely, K + v for any v ∈ {0, 1} n , we have that Ext(z, K + v) is uniform as well. This is due to the linearity of the extractor. Recall that our goal is to show that f A (Z, X) = (Z, W (X)) is (4ε)-close to U d × D. The distribution (Z, W (X)) is obtained as (U d , W (U n ))|(Ext(U d , U n ) = m). For the rest of the proof, we first find out the distribution (U d , W (U n ))|(Ext(U d , U n ) = m, U d = z) for a seed z ∈ G (Proposition 15) and then take the convex combination over the uniform seed to obtain (Z, W (X)) (Proposition To prove Proposition 15, note that the distribution of (Z, Y) is uniform on {0, 1} d+n . Now, fix any z ∈ G and let w ∈ {0, 1} t be any element in the image of W (·). 
Since the conditional distribution Y|(Z = z) is uniform over {0, 1}^n, further conditioning on W(Y) = w yields that Y|(Z = z, W = w) is uniform over a translation of the kernel of W(·). By the assumption z ∈ G and recalling M = Ext(Z, Y), we therefore know that the extractor output is exactly uniform over {0, 1}^m. That is, M|(Z = z, W = w) is exactly uniform over {0, 1}^m and hence in this case M and W are independent. On the other hand, the distribution of (Z, W) is exactly U_d × D, since the map W(·) is linear. In particular, for any z ∈ {0, 1}^d, the conditional distribution W|(Z = z) is exactly D. This, together with the fact that M and W are independent, yields that the conditional distribution of (M, W)|(Z = z) is exactly U_m × D. We have therefore proved Proposition 15.

Pr[(Z, W) ∈ E | M = m]
 = Σ_{(z,w)∈E, z∈G} Pr[Z = z, W = w | M = m] + η
 = 2^{−d} Σ_{(z,w)∈E, z∈G} Pr[W = w | M = m, Z = z] + η    (6)
 = 2^{−d} Σ_{(z,w)∈E, z∈G} D(w) + η    (7)
 = 2^{−d} ( Σ_{(z,w)∈E} D(w) − Σ_{(z,w)∈E, z∉G} D(w) ) + η,

where (6) uses the independence of W and Z and (7) follows from Proposition 15. Observe that

η′ := 2^{−d} Σ_{(z,w)∈E, z∉G} D(w) = 2^{−d} Σ_{z∉G} Σ_{w : (z,w)∈E} D(w) ≤ 2^{−d} (2^d − |G|) ≤ 4ε.

Therefore, Pr[(Z, W) ∈ E | M = m] = p + η − η′ = p ± 4ε = Pr[(Z, W) ∈ E] ± 4ε, since 0 ≤ η ≤ 4ε and 0 ≤ η′ ≤ 4ε. We have therefore proved Proposition 16. With Lemma 13 at hand, we are now in a good position to prove Theorem 12. Proof of Theorem 12. The reconstruction from r shares follows trivially from the definition of a stochastic erasure correcting code. We now prove the privacy. The sharing algorithm of the SSS (before applying the stochastic affine code) takes a secret, which is a particular extractor output s ∈ {0, 1}^ℓ, uniformly samples a seed z ∈ {0, 1}^d of Ext, and then uniformly finds an x ∈ {0, 1}^n such that Ext(z, x) = s. This process of obtaining (z, x) is the same as sampling (U_d, U_n) uniformly at random from {0, 1}^{d+n} and then restricting to Ext(U_d, U_n) = s. We define the random variable tuple

(Z, X) := (U_d, U_n) | (Ext(U_d, U_n) = s),    (8)

and the tuple (Z′, X′) analogously for the secret s′. We can now formulate the privacy of the SSS in this context. We want to prove that the statistical distance of the views of the adversary for a pair of secrets s and s′ can be made arbitrarily small. The views of the adversary are the outputs of the affine function f_A with inputs (Z, X) and (Z′, X′) for the secrets s and s′, respectively. According to Lemma 13, we then have that the privacy error is 8 × (ε/8) = ε. We now analyze the coding rate of the (ε, δ)-SSS with relative threshold pair (t/N, r/N) constructed in Theorem 12 when instantiated with the SA-ECC from Lemma 8 and the Ext from Lemma 4. The secret length is ℓ = n − t − O(d), where the seed length is d = O(log^3(2n/ε)). The SA-ECC encodes d + n bits into N bits with coding rate R_ECC = ρ − ξ for a small ξ determined by δ (satisfying the relation δ = exp(−Ω(ξ^2 N / log^2 N)) according to Lemma 8). We then have n = N(ρ − ξ) − d, resulting in the coding rate

R = ℓ/N = (n − t − O(d))/N = (N(ρ − ξ) − t − O(d))/N = ρ − τ − (ξ + O(d)/N) = ρ − τ − o(1).

Corollary 17. For any 0 ≤ τ < ρ ≤ 1, there is an explicit construction of non-adaptive (ε, δ)-SSS with relative threshold pair (τ, ρ) achieving coding rate ρ − τ − o(1), where ε and δ are both negligible. The binary SSS obtained in Corollary 17 is asymptotically optimal as it achieves the upper bound in Lemma 11. Secret sharing against adaptive adversaries In this section, we describe our construction which achieves privacy against adaptive adversaries, using seedless affine extractors.
We start with the specific extraction property needed from our affine extractors. The almost-perfect property can be trivially achieved by requiring an exponentially (in m) small error in statistical distance, using the relation between the ℓ∞-norm and the ℓ1-norm. Theorem 19. Let AExt : {0, 1}^n → {0, 1}^ℓ be an invertible (n − t, ε/2)-almost perfect affine extractor and AExt^{−1} : {0, 1}^ℓ × R_1 → {0, 1}^n be its v-inverter that maps an s ∈ {0, 1}^ℓ to one of its preimages chosen uniformly at random. Let (SA-ECCenc, SA-ECCdec) be a stochastic affine-erasure correcting code with encoder SA-ECCenc : {0, 1}^n × R_2 → {0, 1}^N that tolerates N − r erasures and decodes with success probability at least 1 − δ. Then the pair (Share, Recst) defined as follows is an adaptive (ε + v, δ)-SSS with threshold pair (t, r).

Share(s) = SA-ECCenc(AExt^{−1}(s));    Recst(ṽ) = AExt(SA-ECCdec(ṽ)),

where ṽ denotes an incomplete version of a share vector v ∈ {0, 1}^N with some of its components replaced by erasure symbols. Proof. The (r, δ)-reconstructability follows directly from the erasure correcting capability of the SA-ECC. For any ṽ with at most N − r erasure symbols and the rest of its components consistent with a valid codeword v ∈ {0, 1}^N, the SA-ECC decoder identifies the unique codeword v with probability 1 − δ over the encoder randomness. The corresponding SA-ECC message of v is then input to AExt and the original secret s is reconstructed with the same probability. We next prove the (t, ε)-privacy. Without loss of generality, we first assume the inverter of the affine extractor is perfect, namely, v = 0. When v is negligible but not equal to zero, the overall privacy error increases slightly, but still remains negligible. For any r ∈ R_2, the affine encoder of the SA-ECC is characterised by a matrix G_r ∈ {0, 1}^{n×N} and an offset ∆_r. For n unknowns x = (x_1, . . . , x_n), we have SA-ECCenc(x) = xG_r + ∆_r = (xG_1, . . . , xG_N) + ∆_r, where G_i = (g_{1,i}, . . . , g_{n,i})^T (here the subscript "r" is omitted to avoid double subscripts) denotes the ith column of G_r, i = 1, . . . , N. This means that knowing a component c_i of the SA-ECC codeword is equivalent to obtaining a linear equation c_i ⊕ ∆_i = xG_i = g_{1,i}x_1 + · · · + g_{n,i}x_n in the n unknowns x_1, . . . , x_n, where ∆_i (again, the subscript "r" is omitted) denotes the ith component of ∆_r. Now, we investigate the distribution of View^{O_t(Share(s))}_A. For any w in the support of the adversary's view,

Pr[View^{O_t(Share(s))}_A = w]
 = Pr[View^{O_t(SA-ECCenc(X))}_A = w | AExt(X) = s]
 = Pr[AExt(X) = s | View^{O_t(SA-ECCenc(X))}_A = w] · Pr[View^{O_t(SA-ECCenc(X))}_A = w] / Pr[AExt(X) = s]
 (i) = (1 ± ε/2) · 2^{−ℓ} · Pr[View^{O_t(SA-ECCenc(X))}_A = w] / Pr[AExt(X) = s]
 (ii) = (1 ± ε/2) · 2^{−ℓ} · (2^{n−rank(A)} / 2^n) / 2^{−ℓ}
 = (1 ± ε/2) · 2^{−rank(A)},

where the notation X, ±, rank(A) and the equalities (i), (ii) are explained as follows. In the above, we first use the fact that Pr[View^{O_t(Share(s))}_A = w] can be seen as the probability of uniformly selecting X from {0, 1}^n, conditioned on AExt(X) = s. This is true because the sets AExt^{−1}(s), over all s, partition {0, 1}^n, and the rest follows from Definition 6. The shorthand "y = 1 ± ε/2" denotes "1 − ε/2 ≤ y ≤ 1 + ε/2". The shorthand "rank(A)" denotes the rank of the (up to t) columns of G_r corresponding to the index set A adaptively chosen by A. The equality (i) follows from the fact that AExt is an (n − t, 2^{−(ℓ+1)}ε)-affine extractor and the uniform X conditioned on at most t linear equations is an affine source with at least n − t bits of entropy. The equality (ii) holds if and only if w is in the set {SA-ECCenc(x)_A : x ∈ {0, 1}^n}.
Indeed, consider X as unknowns for equations, the number of solutions to the linear system SA-ECCenc(X) A = w is either 0 or equal to 2 n−rank(A) . The distribution of View Ot(Share(s)) A for any secret s is determined by the quantity rank(A), which is independent of the secret s. Let W be the uniform distribution over the set {SA-ECCenc(x) A : x ∈ {0, 1} n }. Then by the triangular inequality, we have SD View Ot(Share(s 0 )) A ; View Ot(Share(s 1 )) A ≤ SD View Ot(Share(s 0 )) A ; W + SD W; View Ot(Share(s 1 )) A ≤ ε 2 + ε 2 = ε. When the inverter of the affine extractor is not perfect, the privacy error is upper bounded by ε + v. This concludes the privacy proof. There are explicit constructions of binary affine extractors that, given a constant fraction of entropy, outputs a constant fraction of random bits with exponentially small error (see Lemma 5). There are known methods for constructing an invertible affine extractor AExt ′ from any affine extractor AExt such that the constant fraction output size and exponentially small error properties are preserved. A simple method is to let AExt ′ (U n ||M ) := AExt(U n ) ⊕ M (see Appendix B for a discussion). This is summarized in the lemma below. Lemma 20. For any δ ∈ (0, 1], there is an explicit seedless (δn, ε)-almost perfect affine extractor AExt : {0, 1} n → {0, 1} m where m = Ω(n) and ε = exp(−Ω(n)). Moreover, there is an efficiently computable ε-inverter for the extractor. Proof. Let AExt : {0, 1} n → {0, 1} m be Bourgain's affine extractor (Lemma 5) for entropy rate µ, output length m = Ω(n), and achieving exponentially small error ε = exp(−Ω(n)). Using the one-time pad trick (Appendix B), we construct an invertible variant achieving output length m ′ = Ω(m) = Ω(n) and exponentially small error. Finally, we simply truncate the output length of the resulting extractor to m ′′ = Ω(m ′ ) = Ω(n) bits so that the closeness to uniformity measured by ℓ ∞ norm required for almost-perfect extraction is satisfied. The truncated extractor is still invertible since the inverter can simply pad the given input with random bits and invoke the original inverter function. It now suffices to instantiate Theorem 19 with the explicit construction of SA-ECC and the invertible affine extractor AExt of Lemma 20. Let R ECC denote the rate of the SA-ECC. Then we have R ECC = n N , where n is the input length of the affine extractor AExt and N is the number of players. The intuition of the construction in Theorem 19 is that if a uniform secret is shared and conditioning on the revealed shares the secret still has a uniform distribution (being the output of a randomness extractor), then no information is leaked. In fact, the proof of Theorem 19 above is this intuition made exact, with special care on handling the imperfectness of the affine extractor. So as long as the "source" of the affine extractor AExt has enough entropy, privacy is guaranteed. Here the "source" is the distribution U n conditioned on the adversary's view, which is the output of a t-bit affine function. The "source" then is affine and has at least n − τ N = n(1 − τ R ECC ) bits of entropy. Now as long as τ < R ECC , using the AExt from Lemma 5 (more precisely, an invertible affine extractor AExt ′ : {0, 1} n ′ → {0, 1} ℓ constructed from AExt) with µ = 1 − τ R ECC , a constant fraction of random bits can be extracted with exponentially small error. This says that privacy is guaranteed for τ ∈ [0, R ECC ). The stochastic affine ECC in Lemma 8 asymptotically achieves the rate 1 − (1 − ρ) = ρ. 
We then have the following corollary. Corollary 21. For any 0 ≤ τ < ρ ≤ 1, there is an explicit constant coding rate adaptive (ε, δ)-SSS with relative threshold pair (τ, ρ), where ε and δ are both negligible. The construction above achieves a constant coding rate for any (τ, ρ) pair satisfying 0 ≤ τ < ρ ≤ 1. However, since the binary affine extractor in Lemma 5 does not extract all the entropy from the source and moreover the step that transforms an affine extractor into an invertible affine extractor incurs non-negligible overhead, the coding rate of the above construction does not approach ρ − τ . We leave explicit constructions of binary SSS against adaptive adversary with better coding rate as an interesting technical open question. Conclusion We studied the problem of sharing arbitrary long secrets using constant length shares and required a nearly-threshold access structure. By nearly-threshold we mean a ramp scheme with arbitrarily small gap to number of players ratio. We show that by replacing perfect privacy and reconstructibility with slightly relaxed notions and inline with similar strong cryptographic notions, one can explicitly construct such nearly-threshold schemes. We gave two constructions with security against non-adaptive and adaptive adversaries, respectively, and proved optimality of the former. Our work also make a new connection between secret sharing and wiretap coding. We presented our model and constructions for the extremal case of binary shares. However, we point out that the model and our constructions can be extended to shares over any desired alphabet size q. Using straightforward observations (such as assigning multiple shares to each player), this task reduces to extending the constructions over any prime q. In this case, the building blocks that we use; namely, the stochastic error-correcting code, seeded and seedless affine extractors need to be extended to the q-ary alphabet. The constructions [24,20,16] that we use, however, can be extended to general alphabets with straightforward modifications. The only exception is Bourgain's seedless affine extractor [7]. The extension of [7] to arbitrary alphabets is not straightforward and has been accomplished in a work by Yehudayoff [26]. Our constructions are not linear: even the explicit non-adaptive construction that uses an affine function for every fixing of the encoder's randomness does not result in a linear secret sharing. Linearity is an essential property in applications such as multiparty computation and so explicit constructions of linear secret sharing schemes in our model will be an important achievement. Yet another important direction for future work is deriving bounds and constructing optimal codes for finite length (N ) case. Such result will also be of high interest for wiretap coding. explicit construction of capacity achieving codes for BEC 1−ρ for REC and use a similar argument of [22]. We now refer to the Algorithm 1. in the proof of [16, Theorem 6.1] and show that, with the SC and REC replaced accordingly, we do have a SA-ECC. The error correction capability and optimal rate follow similarly as in the proof of [16, Theorem 6.1]. We next show affine property. Phase 1 and Phase 2 are about the control information, which are part of the encoding randomness r of the SA-ECC to be fixed to constant value in the analysis of affine property. During Phase 3, the message m is linearly encoded (our REC is linear) and then permuted, followed by adding a translation term ∆ r . 
Since permutation is a linear transformation, we combine the two linear transformations and write mG r + ∆ r , where G r is a binary matrix. Finally, during Phase 4, some blocks that contain the control information are inserted into mG r + ∆ r . We add dummy zero columns into G r and zero blocks into ∆ r to the corresponding positions where the control information blocks are inserted. Let mĜ r +∆ r be the vector after padding dummy zeros. Let∆ ′ r be the vector obtained from padding dummy zero blocks, complementary to the padding above, to the control information blocks. We then write the final codeword of the SA-ECC in the form mĜ r + (∆ r +∆ ′ r ), which is indeed an affine function of the message m. B One-Time-Pad trick of inverting extractors There is a well known way to transform an efficient function into one that is also efficiently invertible through a "One-Time-Pad" trick. We give a proof for the special case of affine extractors, for completeness. AExt ′ (z) = AExt ′ (x||y) = AExt(x) + y, where the input z ∈ {0, 1} n+m is separated into two parts: x ∈ {0, 1} n and y ∈ {0, 1} m . Proof. Let Z be a random variable with flat distribution supported on an affine subspace of {0, 1} n+m of dimension at least k + m. Separate Z into two parts Z = (X||Y), where X ∈ {0, 1} n and Y ∈ {0, 1} m . Then conditioned on any Y = y, X has a distribution supported on an affine subspace of {0, 1} n of dimension at least k. This asserts that conditioned on any Y = y, SD(AExt(X) + y; U {0,1} m ) ≤ ε. Averaging over the distribution of Y concludes the extractor proof. We next show an efficient inverter AExt ′−1 for AExt ′ . For any s ∈ {0, 1} m , define AExt ′−1 (s) = (U n ||AExt(U n ) + s). The randomised function AExt ′−1 is efficient and AExt ′−1 (U m ) ε ∼ U n+m .
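The one-time-pad trick of this appendix is short enough to spell out in code. In the sketch below the inner extractor is only a stand-in (a fixed linear map, not Bourgain's construction), so it illustrates the transformation AExt′(x||y) = AExt(x) + y and its inverter rather than any actual affine extractor:

# Sketch of the One-Time-Pad trick for making an extractor invertible:
#   AExt'(x || y) = AExt(x) XOR y,   AExt'^{-1}(s) = (r, AExt(r) XOR s) for uniform r.
# The inner "extractor" below is a fixed random linear map, used only as a stand-in.
import numpy as np

rng = np.random.default_rng(3)
n, m = 16, 6
A = rng.integers(0, 2, size=(m, n), dtype=np.uint8)   # stand-in for AExt, not a real affine extractor

def aext(x):
    return (A @ x) % 2

def aext_prime(z):
    x, y = z[:n], z[n:]
    return aext(x) ^ y

def aext_prime_inverse(s, rng):
    r = rng.integers(0, 2, size=n, dtype=np.uint8)     # uniform first half
    return np.concatenate([r, aext(r) ^ s])            # second half masks the secret

secret = rng.integers(0, 2, size=m, dtype=np.uint8)
z = aext_prime_inverse(secret, rng)
assert np.array_equal(aext_prime(z), secret)           # inverting then extracting recovers s
# The preimage is uniform over AExt'^{-1}(s): r is uniform and y is determined by r and s.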
9,499
1808.02974
2887133168
Shamir's celebrated secret sharing scheme provides an efficient method for encoding a secret of arbitrary length @math among any @math players such that for a threshold parameter @math , (i) the knowledge of any @math shares does not reveal any information about the secret and, (ii) any choice of @math shares fully reveals the secret. It is known that any such threshold secret sharing scheme necessarily requires shares of length @math , and in this sense Shamir's scheme is optimal. The more general notion of ramp schemes requires the reconstruction of secret from any @math shares, for a positive integer gap parameter @math . Ramp secret sharing scheme necessarily requires shares of length @math . Other than the bound related to secret length @math , the share lengths of ramp schemes can not go below a quantity that depends only on the gap ratio @math . In this work, we study secret sharing in the extremal case of bit-long shares and arbitrarily small gap ratio @math , where standard ramp secret sharing becomes impossible. We show, however, that a slightly relaxed but equally effective notion of semantic security for the secret, and negligible reconstruction error probability, eliminate the impossibility. Moreover, we provide explicit constructions of such schemes. One of the consequences of our relaxation is that, unlike standard ramp schemes with perfect secrecy, adaptive and non-adaptive adversaries need different analysis and construction. For non-adaptive adversaries, we explicitly construct secret sharing schemes that provide secrecy against any @math fraction of observed shares, and reconstruction from any @math fraction of shares, for any choices of @math . Our construction achieves secret length @math , which we show to be optimal. For adaptive adversaries, we construct explicit schemes attaining a secret length @math .
The notion of secrecy in wiretap codes has evolved over the years. More recently, the notion of semantic security for the wiretap model was introduced @cite_6 , which allows an arbitrary message distribution and is shown to be equivalent to negligible leakage with respect to statistical distance. There remains one last distinction between the semantically secure wiretap model and secret sharing with fixed share size: the main and wiretapper channels are typically stochastic (e.g., the erasure channel with random i.i.d. erasures), whereas for secret sharing a worst-case guarantee over the erasure patterns is required. Namely, in secret sharing, reconstruction with overwhelming probability is required for every choice of @math or more shares, and privacy of the secret is required for every (adaptive or non-adaptive) choice of the @math shares observed by the adversary.
{ "abstract": [ "The wiretap channel is a setting where one aims to provide information-theoretic privacy of communicated data based solely on the assumption that the channel from sender to adversary is \"noisier\" than the channel from sender to receiver. It has developed in the Information and Coding IC it has optimal rate; and both the encryption and decryption algorithms are proven to be polynomial-time." ], "cite_N": [ "@cite_6" ], "mid": [ "182801106" ] }
Secret Sharing with Binary Shares
Secret sharing, introduced independently by Blakley [3] and Shamir [21], is one of the most fundamental cryptographic primitives with far-reaching applications, such as being a major tool in secure multiparty computation (cf. [12]). The general goal in secret sharing is to encode a secret s into a number of shares X 1 , . . . , X N that are distributed among N players such that only certain authorized subsets of the players can reconstruct the secret. An authorized subset of players is a set A ⊆ [N ] such that the set of shares with indices in A can collectively be used to reconstruct the secret s (perfect reconstructiblity). On the other hand, A is an unauthorized subset if the knowledge of the shares with indices in A reveals no information about the secret (perfect privacy). The set of authorized and unauthorized sets define an access structure, of which the most widely used is the so-called threshold structure. A secret sharing scheme with threshold access structure, is defined with respect to an integer parameter t and satisfies the following properties. Any set A ⊆ [N ] with |A| ≤ t is an unauthorized set. That is, the knowledge of any t shares, or fewer, does not reveal any information about the secret. On the other hand, any set A with |A| > t is an authorized set. That is, the knowledge of any t + 1 or more shares completely reveals the secret. Shamir's secret sharing scheme [21] gives an elegant construction for the threshold access structure that can be interpreted as the use of Reed-Solomon codes for encoding the secret. Suppose the secret s is an ℓ-bit string and N ≤ 2 ℓ . Then, Shamir's scheme treats the secret as an element of the finite field F q , where q = 2 ℓ , padded with t uniformly random and independent elements from the same field. The resulting vector over F t+1 q is then encoded using a Reed-Solomon code of length N , providing N shares of length ℓ bits each. The fact that a Reed-Solomon code is Maximum Distance Separable (MDS) can then be used to show that the threshold guarantee for privacy and reconstruction is satisfied. Remarkably, Shamir's scheme is optimal for threshold secret sharing in the following sense: Any threshold secret sharing scheme sharing ℓ-bit secrets necessarily requires shares of length at least ℓ, and Shamir's scheme attains this lower bound [23]. It is natural to ask whether secret sharing is possible at share lengths below the secret length log q < ℓ, where log is to base 2 throughout this work. Of course, in this case, the threshold guarantee that requires all subsets of participants be either authorized, or unauthorized, can no longer be attained. Instead, the notion can be relaxed to ramp secret sharing which allows some subset of participants to learn some information about the secret. A ramp scheme is defined with respect to two threshold parameters, t and r > t + 1. As in threshold scheme, the knowledge of any t shares or fewer does not reveal any information about the secret. On the other hand, any r shares can be used to reconstruct the secret. The subsets of size between t + 1 and r − 1, may learn some information about the secret. The information-theoretic bound (see e.g. [18]) now becomes ℓ ≤ (r − t) log q. (1) Ideally, one would like to obtain equality in (1) for as general parameter settings as possible. Let g : = r − t denote the gap between the privacy and reconstructibility parameters. Let the secret length ℓ and the number of players N be unconstrained integer parameters. 
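For concreteness, here is a minimal Python rendering of Shamir's scheme (over the prime field F_257 rather than GF(2^ℓ), purely to keep the arithmetic simple); it is meant only to illustrate the threshold behaviour described above, not the constructions of this paper:

# Minimal Shamir secret sharing over the prime field F_257 (illustration only;
# the discussion above uses GF(2^l), but any field with more than N elements works).
import random

P = 257  # field size

def share(secret, t, n, rng=random):
    """Split `secret` into n shares so that any t shares reveal nothing and any t+1 reconstruct."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]        # degree-t polynomial, f(0) = secret
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from at least t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P       # den^{-1} via Fermat's little theorem
    return secret

t, n = 3, 7
s = 123
shares = share(s, t, n)
assert reconstruct(shares[:t + 1]) == s                             # any t+1 shares reconstruct
assert reconstruct(random.sample(shares, t + 1)) == s

Each share is a single field element, which is why the share length matches the secret length here and cannot be shrunk without giving up the perfect threshold guarantee.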
It is known that, using Reed-Solomon code interpretation of Shamir's approach applied to a random linear code, for every fixed relative gap γ : = g/N , there is a constant q only depending on γ such that a ramp secret sharing scheme with share size q exists. Such schemes can actually be constructed by using explicit algebraic geometry codes instead of random linear codes. In fact, this dependence of share size q on relative gap g/N is inherent for threshold and more generally ramp schemes. It is shown in an unpublished work of Kilian and Nisan 1 for threshold schemes, and later more generally in [8], that for ramp schemes with share size q, threshold gap g, privacy threshold t and unconstrained number of players N , the following always holds: q ≥ (N − t + 1)/g. Very recently in [4], a new bound with respect to the reconstruction parameter r is proved through connecting secret sharing for one bit secret to game theory: q ≥ (r + 1)/g. These two bounds together yield q ≥ (N + g + 2)/(2g). (2) Note that the bound (2) is very different from the bound (1) in nature. The bound (1) is the fundamental limitation of information-theoretic security, bearing the same flavour as the One-Time-Pad. The bound (2) is independent of the secret length and holds even when the secret is one bit. We ask the following question: For a fixed constant share size q (in particular, q = 2), is it possible to construct (relaxed but equally effective) ramp secret sharing schemes with arbitrarily small relative gap γ > 0 that asymptotically achieve equality in (1)? Our results in this work show that the restriction (2) can be overcome if we allow a negligible privacy error in statistical distance (semantic security) and a negligible reconstruction error probability. Our contributions We motivate the study of secret sharing scheme with fixed share size q, and study the extremal case of binary shares. Our goal is to show that even in this extremely restrictive case, a slight relaxation of the privacy and reconstruction notions of ramp secret sharing guarantees explicit construction of families of ramp schemes 2 with any constant relative privacy and reconstruction thresholds 0 ≤ τ < ρ ≤ 1, in particular, the relative threshold gap γ = ρ − τ can be an arbitrarily small constant. Namely, for any constants 0 ≤ τ < ρ ≤ 1, it can be guaranteed that any τ N or fewer shares reveal essentially no information about the secret, whereas any ρN or more shares can reconstruct the exact secret with a negligible failure probability. While we only focus on the extremal special case q = 2 in this presentation, all our results can be extended to any constant q (see Section 6). We consider binary sharing of a large ℓ-bit secret and for this work focus on the asymptotic case where the secret length ℓ, and consequently the number of players N , are sufficiently large. We replace perfect privacy with semantic security, the strongest cryptographic notion of privacy second only to perfect privacy. That is, for any two secrets (possibly chosen by the adversary), we require the adversary's view to be statistically indistinguishable. The view of the adversary is a random variable with randomness coming solely from the internal randomness of the sharing algorithm. The notion of indistinguishability that we use is statistical (total variation) distance bounded by a leakage error parameter ε that is negligible in N . Using non-perfect privacy creates a distinction between non-adaptive and adaptive secrecy. 
A non-adaptive adversary chooses any τ fraction of the N players at once, and receives their corresponding shares. An adaptive adversary, however, selects share holders one by one, receives their shares, and uses its available information to make its next choice. When ε = 0, i.e., when perfect privacy holds, non-adaptive secrecy automatically implies adaptive secrecy as well. However, this is not necessarily the case when ε > 0 and we thus study the two cases separately. Similarly, we replace the perfect reconstruction with probabilistic reconstruction allowing a failure probability δ that is negligible in N. The special case of δ = 0 means perfect reconstruction. Note that secret sharing with fixed share size necessarily imposes certain restrictions that are not common in standard secret sharing. Unlike secret sharing with share length dependent on the secret length (for threshold schemes) or secret length and threshold gap (for ramp schemes), binary sharing of an ℓ-bit secret obviously requires at least ℓ shares to accommodate the secret information. For a family of ramp secret sharing schemes with fixed share size q and fixed relative thresholds 0 ≤ τ < ρ ≤ 1, as N grows the absolute gap length (ρ−τ)N grows, and the length of secret that can be accommodated is expected to grow, and so the ratio ℓ/N ∈ (0, 1] becomes a key parameter of interest for the family, referred to as the coding rate. As is customary in coding theory, it is desired to characterize the maximum possible ratio ℓ/N ∈ (0, 1] for binary secret sharing. We use the relation (a similar relation was used in [10] for robust secret sharing) between a binary secret sharing family with relative threshold (τ, ρ) and codes for a Wyner wiretap channel with two BEC's to derive a coding rate upper bound of ρ − τ for binary secret sharing (see Lemma 11). Our main technical contributions are explicit constructions of binary secret sharing schemes in both the non-adaptive and adaptive models, and proving optimality of the non-adaptive construction. Namely, we prove the following: Theorem 1 (informal summary of Lemma 11, Corollary 17, and Corollary 21). For any choice of 0 ≤ τ < ρ ≤ 1, and large enough N, there is an explicit construction of a binary secret sharing scheme with N players that provides (adaptive or non-adaptive) privacy against leakage of any τN or fewer shares, as well as reconstruction from any ρN or more of the shares (achieving semantic secrecy with negligible error and imperfect reconstruction with negligible failure probability). For non-adaptive secrecy, the scheme shares a secret of length ℓ = (ρ − τ − o(1))N, which is asymptotically optimal. For adaptive secrecy, the scheme shares a secret of length ℓ = Ω((ρ − τ)N). As a side contribution, our findings unify the Wyner wiretap model and its adversarial analogue. Our capacity-achieving construction of binary secret sharing for non-adaptive adversaries implies that the secrecy capacity of the adversarial analogue of the erasure-scenario Wyner wiretap channel is similarly characterized by the erasure ratios of the two channels. Moreover, the secrecy can be strengthened to semantic security. This answers an open question posed in [1]. The authors studied a generalisation of the wiretap II model, where the adversary chooses t bits to observe and erases them. They showed that the rate 1 − τ − h_2(τ), where h_2(·) is the binary entropy function, can be achieved and left open the question of whether a higher rate is achievable.
Our result specialized to their setting shows that, the rate 1 − 2τ can be explicitly achieved. Our approach and techniques Our explicit constructions follow the paradigm of invertible randomness extractors formalized in [11]. Invertible extractors were used in [11] for explicit construction of optimal wiretap coding schemes in the Wiretap channel II [19]. This, in particular, is corresponding to the ρ = 1 special case of secret sharing where reconstruction is only required when all shares are available. Moreover, the secrecy there is an information-theoretic notion, and only required to hold for uniform messages. The consequence of the latter is that the construction in [11] does not directly give us binary secret sharing, not even for the ρ = 1 special case. The exposition below is first focused on how semantic security is achieved. As in [11], we rely on invertible affine extractors as our primary technical tool. Such an extractor is an explicit function AExt : {0, 1} n → {0, 1} ℓ such that, for any random variable X uniformly distributed over an unknown k-dimensional affine subspace of F n 2 , the distribution of AExt(X) is close to the uniform distribution over F ℓ 2 in statistical distance. Furthermore, the invertibility guarantee provides an efficient algorithm for sampling a uniform element from the set AExt −1 (s) of pre-images for any given output s ∈ F ℓ 2 . It is then natural to consider the affine extractor's uniform inverter as a candidate building block for the sharing algorithm of a secret sharing scheme. Intuitively, if the secret s is chosen uniformly at random, we have the guarantee that for any choice of a bounded number of the bits of its random pre-image revealed to the adversary, the distribution of the random pre-image conditioned on the revealed value satisfies that of an affine source. Now according to the definition of an affine extractor, the extractor's output (i.e., the secret s) remains uniform (and thus unaffected in distribution) given the information revealed to the adversary. Consequently, secrecy should at least hold in an information-theoretic sense, i.e. the mutual information between the secret and the revealed vector components is zero. This is what was formalized and used in [11] for the construction of Wiretap channel II codes. For non-adaptive adversaries, in fact it is possible to use invertible seeded extractors rather than invertible affine extractors described in the above construction. A (strong) seeded extractor assumes, in addition to the main input, an independent seed as an auxiliary input and ensures uniformity of the output for most fixings of the seed. The secret sharing encoder appends a randomly chosen seed to the encoding and inverts the extractor with respect to the chosen seed. Then, the above argument would still hold even if the seed is completely revealed to the adversary. The interest in the use of seeded, as opposed to seedless affine, extractors is twofold. First, nearly optimal and very efficient constructions of seeded extractors are known in the literature that extract nearly the entire source entropy with only a short seed. This allows us to attain nearly optimal rates for the non-adaptive case. Furthermore, and crucially, such nearly optimal extractor constructions (in particular, Trevisan's extractor [24,20]) can in fact be linear functions for every fixed choice of the seed (in contrast, seedless affine extractors can never be linear functions). 
We take advantage of the linearity of the extractor in a crucial way and use a rather delicate analysis to show that in fact the linearity of the extractor can be utilized to prove that the resulting secret sharing scheme provides the stringent worst-case secret guarantee which is a key requirement distinguishing secret sharing schemes (a cryptographic primitive) from wiretap codes (an information-theoretic notion). Using a seeded extractor instead of a seedless extractor, however, introduces a new challenge. In order for the seeded extractor to work, the seed has to be independent of the main input, which is a distribution induced by the adversary's choice of reading positions. The independence of the seed and the main input can be directly argued when the adversary is non-adaptive. An adaptive adversary, however, may choose its reading positions to learn about the seed first, and then choose the rest of the reading positions according the value of the seed. In this case, we can not prove the independence of the seed and the main input. For adaptive adversaries, we go back to using an invertible affine extractor. We prove that both security for worst-case messages and against adaptive adversaries are guaranteed if the affine extractor provides the strong guarantee of having a nearly uniform output with respect to the ℓ ∞ measure rather than ℓ 1 . However, this comes at the cost of the extractor not being able to extract the entire entropy of the source, leading to ramp secret sharing schemes with slightly sub-optimal rates, albeit still achieving rates within a constant factor of the optimum. As a proof of concept, we utilize a simple padding and truncation technique to convert any off-the-shelf seedless affine extractor (such as those of Bourgain [7] or Li [17]) to one that satisfies the stronger uniformity condition that we require. We now turn to reconstruction from an incomplete set of shares. In order to provide reconstructibility from a subset of size r of the shares, we naturally compose the encoding obtained from the extractor's inversion routine with a linear erasure-correcting code. The linearity of the code ensures that the extractor's input subject to the adversary's observation (which now can consist of linear combinations of the original encoding) remains uniform on some affine space, thus preserving the privacy guarantee. However, since by the known rate-distance trade-offs of binary error-correcting codes, no deterministic coding scheme can correct more than a 1/2 fraction of erasures (a constraint that would limit the choice of ρ), the relaxed notion of stochastic coding schemes is necessary for us to allow reconstruction for all choices of ρ ∈ (τ, 1]. Intuitively, a stochastic code is a randomized encoder with a deterministic decoder, that allows the required fraction of errors to be corrected. We utilize what we call a stochastic affine code. Such codes are equipped with encoders that are affine functions of the message for every fixing of the encoder's internal randomness. We show that such codes are as suitable as deterministic linear codes for providing the linearity properties that our construction needs. In fact, we need capacity-achieving stochastic erasure codes, i.e., those that correct every 1 − ρ fraction of erasures at asymptotic rate ρ, to be able to construct binary secret sharing schemes with arbitrarily small relative gap γ = ρ − τ . 
To construct capacity-achieving stochastic affine erasure codes, we utilize a construction of stochastic codes due to Guruswami and Smith [16] for bit-flip errors. We observe that this construction can be modified to yield capacity-achieving erasure codes. Roughly speaking, this is achieved by taking an explicit capacity-achieving linear code for BEC and pseudo-randomly shuffling the codeword positions. Combined with a delicate encoding of hidden "control information" to communicate the choice of the permutation to the decoder in a robust manner, the construction transforms robustness against random erasures to worst-case erasures at the cost of making the encoder randomized. Organization of the paper Section 2 contains a brief introduction to the two building blocks for our constructions: randomness extractors and stochastic codes. In Section 3, we formally define the binary secret sharing model and prove a coding rate upper bound. Section 4 contains a capacity-achieving construction with privacy against non-adaptive adversaries. Section 5 contains a constant rate construction with privacy against adaptive adversaries. Finally, we conclude the paper and discuss open problems in Section 6. Preliminaries and definitions In this section, we review the necessary facts and results about randomness extractors, both the seeded and seedless affine variants, as well as the stochastic erasure correcting codes. Randomness extractors extract close to uniform bits from input sequences that are not uniform but have some guaranteed entropy. The closeness to uniform of the extractor output is measured by the statistical distance (half the ℓ 1 -norm). For a set X , we use X ← X to denote that X is distributed over the set X . For two random variables X, Y ← X , the statistical distance between X and Y is defined as, SD(X; Y) = 1 2 x∈X |Pr[X = x] − Pr[Y = x]| . We say X and Y are ε-close if SD(X, Y) ≤ ε. A randomness source is a random variable with lower bound on its min-entropy, which is defined by H ∞ (X) = − log max x {Pr[X = x]}. We say a random variable X ← {0, 1} n is a (n, k)-source if H ∞ (X) ≥ k. For well structured sources, there exist deterministic functions that can extract close to uniform bits. The support of X ← X is the set of x ∈ X such that Pr[X = x] > 0. An affine (n, k)-source is an (n, k)-source whose support is an affine sub-space of {0, 1} n and each vector in the support occurs with the same probability. Let U m denote the random variable uniformly distributed over {0, 1} m . where S is chosen uniformly from {0, 1} d . A seeded extractor Ext(·, ·) is called linear if for any fixed seed S = s, the function Ext(s, ·) is a linear function. We will use Trevisan's extractor [24] in our first construction. In particular, we use the following improvement of this extractor due to Raz, Reingold and Vadhan [20]. We will use Bourgain's affine extractor in our second construction. We note, however, that we could have used other explicit extractors for this purpose, such as [17]. Explicit constructions of randomness extractors have efficient forward direction of extraction. In some applications, we usually need to efficiently invert the process: Given an extractor output, sample a random pre-image. • (Inversion) Given y ∈ {0, 1} m such that its pre-image f −1 (y) is nonempty, for every r ∈ {0, 1} r we have f (Inv(y, r)) = y. • (Uniformity) Inv(U m , U r ) is v-close to U n . 
A v-inverter is called efficient if there is a randomized algorithm that runs in worst-case polynomial time and, given y ∈ {0, 1} m and r as a random seed, computes Inv(y, r). We call a mapping vinvertible if it has an efficient v-inverter, and drop the prefix v from the notation when it is zero. We abuse the notation and denote the inverter of f by f −1 . A stochastic code has a randomised encoder and a deterministic decoder. The encoder Enc : {0, 1} m × R → {0, 1} n uses local randomness R ← R to encode a message m ∈ {0, 1} m . The decoder is a deterministic function Dec : {0, 1} n → {0, 1} m ∪ {⊥}. The decoding probability is defined over the encoding randomness R ← R. Stochastic codes are known to explicitly achieve the capacity of some adversarial channels [16]. Affine sources play an important role in our constructions. We define a general requirement for the stochastic code used in our constructions. Definition 7 (Stochastic Affine codes). Let Enc : {0, 1} m × R → {0, 1} n be the encoder of a stochastic code. We say it is a stochastic affine code if for any r ∈ R, the encoding function Enc(·, r) specified by r is an affine function of the message. That is we have Enc(m, r) = mG r + ∆ r , where G r ∈ {0, 1} m×n and ∆ r ∈ {0, 1} n are specified by the randomness r. We then adapt a construction in [16] to obtain the following capacity-achieving Stochastic Affine-Erasure Correcting Code (SA-ECC). In particular, we show for any p ∈ [0, 1), there is an explicit stochastic affine code that corrects p fraction of adversarial erasures and achieves the rate 1 − p (see Appendix A for more details). Lemma 8 (Adapted from [16]). For every p ∈ [0, 1), and every ξ > 0, there is an efficiently encodable and decodable stochastic affine code (Enc, Dec) with rate R = 1 − p − ξ such that for every m ∈ {0, 1} N R and erasure pattern of at most p fraction, we have Pr[Dec( Enc(m)) = m] ≥ 1 − exp(−Ω(ξ 2 N/ log 2 N )), where Enc(m) denotes the partially erased random codeword and N denotes the length of the codeword. Binary secret sharing schemes In this section, we define our model of nearly-threshold binary secret sharing schemes. We begin with a description of the two models of non-adaptive and adaptive adversaries which can access up to t of the N shares. A leakage oracle is a machine O(·) that takes as input an N -bit string c ∈ {0, 1} N and then answers the leakage queries of the type I j , for I j ⊂ [N ], j = 1, 2, . . . , q. Each query I j is answered with c I j . An interactive machine A that issues the leakage queries is called a leakage adversary. Let A c = ∪ q j=1 I j denote the union of all the index sets chosen by A when the oracle input is c. The oracle is called t-bounded, denoted by O t (·), if it rejects leakage queries from A if there exists some c ∈ {0, 1} N such that |A c | > t. An adaptive leakage adversary decides the index set I j+1 according to the oracle's answers to all previous queries I 1 , . . . , I j . A non-adaptive leakage adversary has to decide the index set A c before any information about c is given. This means that for a non-adaptive adversary, given any oracle input c ∈ {0, 1} N , we always have A c = A for some A ⊂ [N ]. Let View Ot(·) A denote the view of the leakage adversary A interacting with a t-bounded leakage oracle. When A is non-adaptive, we use the shorthand View Ot(·) A = (·) A , for some A ⊂ [N ] of size |A| ≤ t. A function ε : N → R is called negligible if for every positive integer k, there exists an N k ∈ N such that |ε(N )| < 1 N k for all N > N k . 
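To make the leakage-oracle and adversary-view formalism above concrete, here is a toy Python sketch (ours; it uses 3-out-of-3 additive sharing of a single bit, not the schemes constructed in this paper) that enumerates the encoder randomness, builds the distribution of a non-adaptive adversary's view on a chosen index set, and computes the statistical distance between the views induced by the two possible secrets.

```python
from itertools import product

def xor_share(secret_bit, r1, r2):
    """Additive (XOR) sharing of one bit into three shares; any two shares
    together are independent of the secret."""
    return (r1, r2, secret_bit ^ r1 ^ r2)

def view_distribution(secret_bit, observed_positions):
    """Distribution of the non-adaptive adversary's view: the shares at the
    observed positions, over uniform encoder randomness (r1, r2)."""
    counts = {}
    for r1, r2 in product([0, 1], repeat=2):
        shares = xor_share(secret_bit, r1, r2)
        view = tuple(shares[i] for i in observed_positions)
        counts[view] = counts.get(view, 0.0) + 0.25
    return counts

def statistical_distance(p, q):
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Any set of t = 2 positions gives identical views for secrets 0 and 1 (SD = 0),
# while all 3 shares determine the secret, mirroring the privacy/reconstruction split.
for positions in [(0, 1), (0, 2), (1, 2)]:
    sd = statistical_distance(view_distribution(0, positions),
                              view_distribution(1, positions))
    print(f"positions {positions}: SD between views for s=0 and s=1 is {sd}")
```

For this toy scheme the brute-force check gives perfect privacy (ε = 0) for every two-element index set; the definitions that follow relax exactly this quantity to a negligible function.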
The following definition of ramp Secret Sharing Scheme (SSS) allows imperfect privacy and reconstruction with errors bounded by negligible functions ε(·) and δ(·), respectively. Definition 9. For any 0 ≤ τ < ρ ≤ 1, an (ε(N ), δ(N ))-SSS with relative threshold pair (τ, ρ) is a pair of polynomial-time algorithms (Share, Recst), Share : {0, 1} ℓ(N ) × R → {0, 1} N , where R denote the randomness set, and Recst : {0, 1} N → {0, 1} ℓ(N ) ∪ {⊥}, where {0, 1} N denotes the subset of ({0, 1} ∪ {?}) N with at least N ρ components not equal to the erasure symbol "?", that satisfy the following properties. • Reconstruction: Given r(N ) = N ρ correct shares of a share vector Share(s), the reconstruct algorithm Recst reconstructs the secret s with probability at least 1 − δ(N ). When δ(N ) = 0, we say the SSS has perfect reconstruction. • Privacy (non-adaptive/adaptive): -Non-adaptive: for any s 0 , s 1 ∈ {0, 1} ℓ(N ) , any A ⊂ [N ] of size |A| ≤ t(N ) = N τ , SD(Share(s 0 ) A ; Share(s 1 ) A ) ≤ ε(N ).(3) -Adaptive: for any s 0 , s 1 ∈ {0, 1} ℓ(N ) and any adaptive adversary A interacting with a t(N )-bounded leakage oracle O t(N ) (·) for t(N ) = N τ , SD View O t(N) (Share(s 0 )) A ; View O t(N) (Share(s 1 )) A ≤ ε(N ).(4) When ε(N ) = 0, we say the SSS has perfect privacy. The difference γ = ρ − τ is called the relative gap, since N γ = r(N ) − t(N ) is the threshold gap of the scheme. When clear from context, we write ε, δ, t, k, ℓ instead of ε(N ), δ(N ), t(N ), r(N ), ℓ(N ). When the parameters are not specified, we call a (ε, δ)-SSS simply a binary SSS. In the above definition, a binary SSS has a pair of designed relative thresholds (τ, ρ). In this work, we are concerned with constructing nearly-threshold binary SSS, namely, binary SSS with arbitrarily small relative gap γ = ρ − τ . We also want our binary SSS to share a large secret ℓ = Ω(N ). Definition 10. For any 0 ≤ τ < ρ ≤ 1, a coding rate R ∈ [0, 1] is achievable if there exists a family of (ε, δ)-SSS with relative threshold pair (τ, ρ) such that ε and δ are both negligible in N and ℓ N → R. The highest achievable coding rate of binary SSS for a pair (τ, ρ) is called its capacity. By relating binary SSS to Wyner wiretap codes with a pair of BEC's, we obtain the following coding rate upper bound for binary SSS. Lemma 11. For 0 ≤ τ < ρ ≤ 1, the coding rate capacity of binary SSS with relative threshold pair (τ, ρ) is asymptotically upper-bounded by ρ − τ . Proof. Let (Share, Recst) be a non-adaptive binary SSS with relative threshold pair (τ, ρ). We use Share as the encoder and Recst as the decoder, and verify in the following that we obtain a Wyner wiretap code for a BEC pm main channel and a BEC pw wiretapper channel, where p m = 1 − ρ − ξ and p w = 1 − τ + ξ, respectively, for arbitrarily small ξ > 0. Erasure in binary SSS is worst-case, while it is probabilistic in the Wyner wiretap model. We however note that asymptotically, the number of random erasures of BEC pm and BEC pw approaches N p m and N p w , respectively, with overwhelming probability, and so a code that protects against worst-case erasure can be used as a wiretap code with probabilistic erasure. In our proof we also take into account the difference in the secrecy notion in SSS and in the case of Wyner wiretap code. The N -bit output Y = Y 1 , . . . , Y N of a BEC p has a distribution where each bit is identically independently erased with probability p. By the Chernoff-Hoeffding bounds, the fraction η of erasures satisfies the following. 
For arbitrarily small ξ > 0,        Pr[η ≥ p + ξ] ≤ p p+ξ p+ξ 1−p 1−p−ξ 1−p−ξ N ; Pr[η ≤ p − ξ] ≤ p p−ξ p−ξ 1−p 1−p+ξ 1−p+ξ N . Applying the two inequalities to BEC pm and BEC pw , respectively, we obtain the following conclusions. The probability that BEC pm has at most p m + ξ = 1 − ρ fraction of erasures and the probability that BEC pw has at least p w −ξ = 1−τ fraction of erasures are both at most exp(−Ω(N )) for arbitrarily small ξ > 0. We are ready to prove the Wyner wiretap reliability and secrecy properties as defined in [25,14]. We show correct decoding with probability 1− o(1). When the erasures are below p m + ξ = 1− ρ fraction, it follows directly from the reconstructability of SSS that the decoding error is bounded from above by δ, which is arbitrarily small for big enough N , where the probability is over the randomness of the encoder. When the erasures are not below p m + ξ = 1 − ρ fraction, we do not have correct decoding guarantee. But as argued above, this only occurs with a negligible probability over the randomness of the BEC pm . Averaging over the channel randomness of the BEC pm , we have correct decoding with probability 1 − o(1). We show random message equivocation secrecy H(S|W) ≥ ℓ(1 − o(1)), where S is a uniform secret and W = BEC pw (Share(S)) is the view of the wiretapper. We in fact first prove the wiretap indistinguishability security as defined in [2] and then deduce that it implies Wyner wiretap secrecy as defined in [25,14]. For each of the erasure patterns (say A ⊂ [N ] are not erased) of BEC pw that exceeds p w − ξ = 1 − τ fraction (equivalently, |A| ≤ N τ ), the binary SSS privacy gives that for any two secrets, the corresponding views W|(S = s 0 , A not erased) and W|(S = s 1 , A not erased) are indistinguishable with error ε, which is arbitrarily small for big enough N . The distribution (W|S = s 0 ) and (W|S = s 1 ) are convex combinations of W|(S = s 0 , A not erased) and W|(S = s 1 , A not erased), respectively, for all the erasure patterns A of BEC pw . As argued before, the probability that the erasures does not exceed p w − ξ = 1 − τ fraction is negligible. We average over the channel randomness of the wiretapper channel BEC pw and claim that the statistical distance of (W|S = s 0 ) and (W|S = s 1 ) is arbitrarily small for big enough N . According to [2], this is strictly stronger than the Wyner wiretap secrecy. Finally we use the coding rate upper bound of the Wyner wiretap code to bound the coding rate of binary SSS. We have shown that a binary SSS with relative threshold pair (τ, ρ) is a wiretap code for the pair (BEC pm ,BEC pw ). According to [25,14], the achievable coding rate for the Wyner wiretap code is ( 1 − p m ) − (1 − p w ) = p w − p m = ρ − τ + 2ξ. Since this holds for arbitrarily small ξ > 0, we obtain an upper bound of ρ − τ for binary SSS with relative threshold pair (τ, ρ). In the rest of the paper, we give two constant rate constructions of nearly-threshold binary SSS against non-adaptive adversary and adaptive adversary, respectively. The non-adaptive adversary construction is optimal in the sense that the coding rate achieves the upper bound in Lemma 11. Secret sharing against non-adaptive adversaries We first present our construction of capacity-achieving binary SSS against non-adaptive adversaries, using linear strong seeded extractors and optimal rate stochastic erasure correcting codes. The following theorem describes the construction using these components. 
Share(s) = SA-ECCenc(Z||Ext −1 (Z, s)), where Z $ ← {0, 1} d ; Recst(ṽ) = Ext(z, x), where (z||x) = SA-ECCdec(ṽ). Hereṽ denotes an incomplete version of a share vector v ∈ {0, 1} N with some of its components replaced by erasure symbols. The proof of Theorem 12 will follow naturally from Lemma 13. We first state and prove this general property of a linear strong extractor, which is of independent interest. For the property to hold, we in fact only need the extractor to be able to extract from affine sources. The proof of Lemma 13 is a bit long. We then break it into a claim and two propositions. SD(f A (Z, X); f A (Z ′ , X ′ )) ≤ 8ε.(5) Proof. For the above pairwise guarantee (5) to hold, it suffices to show that for every fixed choice of m ∈ {0, 1} m , the distribution of f A (Z, X) is (4ε)-close to U d ×D, where U d is the uniform distribution on {0, 1} d . Without loss of generality, we assume that the linear function Ext(z, ·) : {0, 1} n → {0, 1} m , for every seed z, has the entire {0, 1} m as its image 3 . Without loss of generality, it suffices to assume that f A is of the form f A (Z, X) = (Z, W (X)) for some affine function W : {0, 1} n → {0, 1} t . This is because for any arbitrary f A , the information contained in f A (Z, X) can be obtained from (Z, W (X)) for a suitable choice of W . Let D be the uniform distribution on the image of W . Let K ← {0, 1} n be a random variable uniformly distributed over the kernel of the linear transformation defined by W , and note that it has entropy at least n − t ≥ k. The extractor Ext thus guarantees that Ext(Z, K), for a uniform and independent seed Z, is ε-close to uniform. By averaging, it follows that for at least 1 − 4ε fraction of the choices of the seed z ∈ {0, 1} d , the distribution of Ext(z, K) is (1/4)-close to uniform. Claim 14. Let U be uniformly distributed on {0, 1} m and U ′ be any affine source that is not uniform on {0, 1} m . Then, the statistical distance between U and U ′ is at least 1/2. Claim 14 follows from the observation that any affine source U ′ that is not uniform on {0, 1} m will have a support (the set of vectors u such that Pr[U ′ = u] > 0) that is an affine subspace of {0, 1} m with dimension at most m − 1. Continuing with the previous argument, since Ext is a linear function for every seed, the distribution of Ext(z, K) for any seed z is an affine source. Therefore, the above claim allows us to conclude that for at least 1 − 4ε fraction of the choices of z, the distribution of Ext(z, K) is exactly uniform. Let G ⊆ {0, 1} d be the set of such choices of the seed. Observe that if Ext(z, K) is uniform for some seed z, then for any affine translation of K, namely, K + v for any v ∈ {0, 1} n , we have that Ext(z, K + v) is uniform as well. This is due to the linearity of the extractor. Recall that our goal is to show that f A (Z, X) = (Z, W (X)) is (4ε)-close to U d × D. The distribution (Z, W (X)) is obtained as (U d , W (U n ))|(Ext(U d , U n ) = m). For the rest of the proof, we first find out the distribution (U d , W (U n ))|(Ext(U d , U n ) = m, U d = z) for a seed z ∈ G (Proposition 15) and then take the convex combination over the uniform seed to obtain (Z, W (X)) (Proposition To prove Proposition 15, note that the distribution of (Z, Y) is uniform on {0, 1} d+n . Now, fix any z ∈ G and let w ∈ {0, 1} t be any element in the image of W (·). 
Since the conditional distribution Y|(Z = z) is uniform over {0, 1} n , further conditioning on W (Y) = w yields that Y|(Z = z, W = w) is uniform over a translation of the kernel of W (·). By the assumption z ∈ G and recalling M = Ext(Z, Y), we therefore know that the extractor output is exactly uniform over {0, 1} m . That is, M|(Z = z, W = w) is exactly uniform over {0, 1} m and hence in this case M and W are independent. On the other hand, the distribution of (Z, W) is exactly U d × D, since the map W (·) is linear. In particular, for any z ∈ {0, 1} d , the conditional distribution W|(Z = z) is exactly D. This together with the fact that M and W are independent yield that the conditional distribution of (M, W)|(Z = z) is exactly U m × D. We have therefore proved Proposition 15. Pr[Z = z, W = w|M = m] + η = 2 −d (z,w)∈E,z∈G Pr[W = w|M = m, Z = z] + η(6)= 2 −d (z,w)∈E,z∈G D(w) + η (7) = 2 −d (z,w)∈E D(w) − (z,w)∈E,z / ∈G D(w) + η where (6) uses the independence of W and Z and (7) follows from Proposition 15. Observe that η ′ := 2 −d (z,w)∈E,z / ∈G D(w) = 2 −d z / ∈G w : (z,w)∈E D(w) ≤ 2 −d (2 d − |G|) ≤ 4ε. Therefore, Pr[(Z, W) ∈ E|M = m] = p + η − η ′ = p ± 4ε = Pr[(Z, W) ∈ E] ± 4ε, since 0 ≤ η ≤ 4ε and 0 ≤ η ′ ≤ 4ε. We have therefore proved Proposition 16. With Lemma 13 at hand, we are now at a good position to prove Theorem 12. Proof of Theorem 12. The reconstruction from r shares follows trivially from the definition of stochastic erasure correcting code. We now prove the privacy. The sharing algorithm of the SSS (before applying the stochastic affine code) takes a secret, which is a particular extractor output s ∈ {0, 1} ℓ , and uniformly samples a seed z ∈ {0, 1} d of Ext before uniformly finds an x ∈ {0, 1} n such that Ext(z, x) = s. This process of obtaining (z, x) is the same as sampling (U d , U n ) $ ← {0, 1} d+n and then restrict to Ext(U d , U n ) = s. We define the random variable tuple (Z, We can now formulate the privacy of the SSS in this context. We want to prove that the statistical distance of the views of the adversary for a pair of secrets s and s ′ can be made arbitrarily small. The views of the adversary are the outputs of the affine function f A with inputs (Z, X) and (Z ′ , X ′ ) for the secret s and s ′ , respectively. According to Lemma 13, we then have that the privacy error is 8 × ε 8 = ε. X) := (U d , U n )| (Ext(U d , U n ) = s)(8) We now analyze the coding rate of the (ε, δ)-SSS with relative threshold pair ( t N , r N ) constructed in Theorem 12 when instantiated with the SA-ECC from Lemma 8 and the Ext from Lemma 4. The secret length is ℓ = n − t − O(d), where the seed length is d = O(log 3 (2n/ε)). The SA-ECC encodes d + n bits to N bits and with coding rate R ECC = ρ − ξ for a small ξ determined by δ (satisfying the relation δ = exp(−Ω(ξ 2 N/ log 2 N )) according to Lemma 8). We then have n = N (ρ − ξ) − d, resulting in the coding rate R = ℓ N = n − t − O(d) N = N (ρ − ξ) − t − O(d) N = ρ − τ − (ξ + O(d) N ) = ρ − τ − o(1). Corollary 17. For any 0 ≤ τ < ρ ≤ 1, there is an explicit construction of non-adaptive (ε, δ)-SSS with relative threshold pair (τ, ρ) achieving coding rate ρ − τ − o(1), where ε and δ are both negligible. The binary SSS obtained in Corollary 17 is asymptotically optimal as it achieves the upper bound in Lemma 11. Secret sharing against adaptive adversaries In this section, we will describe our construction which achieves privacy against adaptive adversaries, using seedless affine extractors. 
We start with the specific extraction property needed from our affine extractors. Almost perfect property can be trivially achieved by requiring an exponentially (in m) small error in statistical distance, using the relation between ℓ ∞ -norm and ℓ 1 -norm. Theorem 19. Let AExt : {0, 1} n → {0, 1} ℓ be an invertible (n − t, ε 2 )-almost perfect affine extractor and AExt −1 : {0, 1} ℓ × R 1 → {0, 1} n be its v-inverter that maps an s ∈ {0, 1} ℓ to one of its preimages chosen uniformly at random. Let (SA-ECCenc, SA-ECCdec) be a stochastic affine-erasure correcting code with encoder SA-ECCenc : {0, 1} n × R 2 → {0, 1} N that tolerates N − r erasures and decodes with success probability at least 1 − δ. Then the (Share, Recst) defined as follows is an adaptive (ε + v, δ)-SSS with threshold pair (t, r). Share(s) = SA-ECCenc(AExt −1 (s)); Recst(ṽ) = AExt(SA-ECCdec(ṽ)), whereṽ denotes an incomplete version of a share vector v ∈ {0, 1} N with some of its components replaced by erasure symbols. Proof. The (r, δ)-reconstructability follows directly from the erasure correcting capability of the SA-ECC. For anyṽ with at most N − r erasure symbols and the rest of its components consistent with a valid codeword v ∈ {0, 1} N , the SA-ECC decoder identifies the unique codeword v with probability 1 − δ over the encoder randomness. The corresponding SA-ECC message of v is then inputted to AExt and the original secret s is reconstructed with the same probability. We next prove the (t, ε)-privacy. Without loss of generality, we first assume the inverter of the affine extractor is perfect, namely, v = 0. When v is negligible but not equal to zero, the overall privacy error will increase slightly, but still remain negligible. For any r ∈ R 2 , the affine encoder of SA-ECC is characterised by a matrix G r ∈ {0, 1} n×N and an offset ∆ r . For n unknowns x = (x 1 , . . . , x n ), we have SA-ECCenc(x) = xG r + ∆ r = (xG 1 , . . . , xG N ) + ∆ r , where G i = (g 1,i , . . . , g n,i ) T (here the subscript " r " is omitted to avoid double subscripts) denotes the ith column of G r , i = 1, . . . , N . This means that knowing a component c i of the SA-ECC codeword is equivalent to obtaining a linear equation c i ⊕ ∆ i = xG i = g 1,i x 1 + · · · + g n,i x n about the n unknowns x 1 , . . . , x n , where ∆ i (again, the subscript " r " is omitted) denotes the ith component of ∆ r . Now, we investigate the distribution of View = (1 ± ε 2 )2 −ℓ · Pr[View SA-ECCenc(X) A = w] Pr[AExt(X) = s] (ii) = (1 ± ε 2 )2 −ℓ · 2 n−rank(A) 2 n 2 −ℓ = (1 ± ε 2 ) · 2 −rank(A) , where notations X, ±, rank(A) and (i), (ii) are explained as follows. In above, we first use the fact that Pr[View Ot(Share(s)) A = w] can be seen as the probability of uniformly selecting X from {0, 1} n , with the condition that AExt(X) = s. This is true because the sets AExt −1 (s) for all s, partition {0, 1} n and the rest follows from Definition 6. The shorthand "y = 1 ± ε 2 " denotes "1 − ε 2 ≤ y ≤ 1 + ε 2 ". The shorthand "rank(A)" denotes the rank of the up to t columns of G corresponding to the index set A adaptively chosen by A. The equality (i) follows from the fact that AExt is an (n − t, 2 −(ℓ+1) ε)-affine extractor and the uniform X conditioned on at most t linear equations is an affine source with at least n − t bits entropy. The equality (ii) holds if and only if w is in the set {SA-ECCenc(x) A : x ∈ {0, 1} n }. 
Indeed, consider X as unknowns for equations, the number of solutions to the linear system SA-ECCenc(X) A = w is either 0 or equal to 2 n−rank(A) . The distribution of View Ot(Share(s)) A for any secret s is determined by the quantity rank(A), which is independent of the secret s. Let W be the uniform distribution over the set {SA-ECCenc(x) A : x ∈ {0, 1} n }. Then by the triangular inequality, we have SD View Ot(Share(s 0 )) A ; View Ot(Share(s 1 )) A ≤ SD View Ot(Share(s 0 )) A ; W + SD W; View Ot(Share(s 1 )) A ≤ ε 2 + ε 2 = ε. When the inverter of the affine extractor is not perfect, the privacy error is upper bounded by ε + v. This concludes the privacy proof. There are explicit constructions of binary affine extractors that, given a constant fraction of entropy, outputs a constant fraction of random bits with exponentially small error (see Lemma 5). There are known methods for constructing an invertible affine extractor AExt ′ from any affine extractor AExt such that the constant fraction output size and exponentially small error properties are preserved. A simple method is to let AExt ′ (U n ||M ) := AExt(U n ) ⊕ M (see Appendix B for a discussion). This is summarized in the lemma below. Lemma 20. For any δ ∈ (0, 1], there is an explicit seedless (δn, ε)-almost perfect affine extractor AExt : {0, 1} n → {0, 1} m where m = Ω(n) and ε = exp(−Ω(n)). Moreover, there is an efficiently computable ε-inverter for the extractor. Proof. Let AExt : {0, 1} n → {0, 1} m be Bourgain's affine extractor (Lemma 5) for entropy rate µ, output length m = Ω(n), and achieving exponentially small error ε = exp(−Ω(n)). Using the one-time pad trick (Appendix B), we construct an invertible variant achieving output length m ′ = Ω(m) = Ω(n) and exponentially small error. Finally, we simply truncate the output length of the resulting extractor to m ′′ = Ω(m ′ ) = Ω(n) bits so that the closeness to uniformity measured by ℓ ∞ norm required for almost-perfect extraction is satisfied. The truncated extractor is still invertible since the inverter can simply pad the given input with random bits and invoke the original inverter function. It now suffices to instantiate Theorem 19 with the explicit construction of SA-ECC and the invertible affine extractor AExt of Lemma 20. Let R ECC denote the rate of the SA-ECC. Then we have R ECC = n N , where n is the input length of the affine extractor AExt and N is the number of players. The intuition of the construction in Theorem 19 is that if a uniform secret is shared and conditioning on the revealed shares the secret still has a uniform distribution (being the output of a randomness extractor), then no information is leaked. In fact, the proof of Theorem 19 above is this intuition made exact, with special care on handling the imperfectness of the affine extractor. So as long as the "source" of the affine extractor AExt has enough entropy, privacy is guaranteed. Here the "source" is the distribution U n conditioned on the adversary's view, which is the output of a t-bit affine function. The "source" then is affine and has at least n − τ N = n(1 − τ R ECC ) bits of entropy. Now as long as τ < R ECC , using the AExt from Lemma 5 (more precisely, an invertible affine extractor AExt ′ : {0, 1} n ′ → {0, 1} ℓ constructed from AExt) with µ = 1 − τ R ECC , a constant fraction of random bits can be extracted with exponentially small error. This says that privacy is guaranteed for τ ∈ [0, R ECC ). The stochastic affine ECC in Lemma 8 asymptotically achieves the rate 1 − (1 − ρ) = ρ. 
We then have the following corollary. Corollary 21. For any 0 ≤ τ < ρ ≤ 1, there is an explicit constant coding rate adaptive (ε, δ)-SSS with relative threshold pair (τ, ρ), where ε and δ are both negligible. The construction above achieves a constant coding rate for any (τ, ρ) pair satisfying 0 ≤ τ < ρ ≤ 1. However, since the binary affine extractor in Lemma 5 does not extract all the entropy from the source and moreover the step that transforms an affine extractor into an invertible affine extractor incurs non-negligible overhead, the coding rate of the above construction does not approach ρ − τ . We leave explicit constructions of binary SSS against adaptive adversary with better coding rate as an interesting technical open question. Conclusion We studied the problem of sharing arbitrary long secrets using constant length shares and required a nearly-threshold access structure. By nearly-threshold we mean a ramp scheme with arbitrarily small gap to number of players ratio. We show that by replacing perfect privacy and reconstructibility with slightly relaxed notions and inline with similar strong cryptographic notions, one can explicitly construct such nearly-threshold schemes. We gave two constructions with security against non-adaptive and adaptive adversaries, respectively, and proved optimality of the former. Our work also make a new connection between secret sharing and wiretap coding. We presented our model and constructions for the extremal case of binary shares. However, we point out that the model and our constructions can be extended to shares over any desired alphabet size q. Using straightforward observations (such as assigning multiple shares to each player), this task reduces to extending the constructions over any prime q. In this case, the building blocks that we use; namely, the stochastic error-correcting code, seeded and seedless affine extractors need to be extended to the q-ary alphabet. The constructions [24,20,16] that we use, however, can be extended to general alphabets with straightforward modifications. The only exception is Bourgain's seedless affine extractor [7]. The extension of [7] to arbitrary alphabets is not straightforward and has been accomplished in a work by Yehudayoff [26]. Our constructions are not linear: even the explicit non-adaptive construction that uses an affine function for every fixing of the encoder's randomness does not result in a linear secret sharing. Linearity is an essential property in applications such as multiparty computation and so explicit constructions of linear secret sharing schemes in our model will be an important achievement. Yet another important direction for future work is deriving bounds and constructing optimal codes for finite length (N ) case. Such result will also be of high interest for wiretap coding. explicit construction of capacity achieving codes for BEC 1−ρ for REC and use a similar argument of [22]. We now refer to the Algorithm 1. in the proof of [16, Theorem 6.1] and show that, with the SC and REC replaced accordingly, we do have a SA-ECC. The error correction capability and optimal rate follow similarly as in the proof of [16, Theorem 6.1]. We next show affine property. Phase 1 and Phase 2 are about the control information, which are part of the encoding randomness r of the SA-ECC to be fixed to constant value in the analysis of affine property. During Phase 3, the message m is linearly encoded (our REC is linear) and then permuted, followed by adding a translation term ∆ r . 
Since permutation is a linear transformation, we combine the two linear transformations and write mG r + ∆ r , where G r is a binary matrix. Finally, during Phase 4, some blocks that contain the control information are inserted into mG r + ∆ r . We add dummy zero columns into G r and zero blocks into ∆ r to the corresponding positions where the control information blocks are inserted. Let mĜ r +∆ r be the vector after padding dummy zeros. Let∆ ′ r be the vector obtained from padding dummy zero blocks, complementary to the padding above, to the control information blocks. We then write the final codeword of the SA-ECC in the form mĜ r + (∆ r +∆ ′ r ), which is indeed an affine function of the message m. B One-Time-Pad trick of inverting extractors There is a well known way to transform an efficient function into one that is also efficiently invertible through a "One-Time-Pad" trick. We give a proof for the special case of affine extractors, for completeness. AExt ′ (z) = AExt ′ (x||y) = AExt(x) + y, where the input z ∈ {0, 1} n+m is separated into two parts: x ∈ {0, 1} n and y ∈ {0, 1} m . Proof. Let Z be a random variable with flat distribution supported on an affine subspace of {0, 1} n+m of dimension at least k + m. Separate Z into two parts Z = (X||Y), where X ∈ {0, 1} n and Y ∈ {0, 1} m . Then conditioned on any Y = y, X has a distribution supported on an affine subspace of {0, 1} n of dimension at least k. This asserts that conditioned on any Y = y, SD(AExt(X) + y; U {0,1} m ) ≤ ε. Averaging over the distribution of Y concludes the extractor proof. We next show an efficient inverter AExt ′−1 for AExt ′ . For any s ∈ {0, 1} m , define AExt ′−1 (s) = (U n ||AExt(U n ) + s). The randomised function AExt ′−1 is efficient and AExt ′−1 (U m ) ε ∼ U n+m .
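As a concrete illustration of the one-time-pad inversion trick, the following Python sketch (ours) instantiates AExt with a trivial stand-in, a few fixed parities of the input; this stand-in is linear but has none of the parameters of a real affine extractor such as Bourgain's. The point is only the mechanics of AExt′(x||y) = AExt(x) + y and its inverter AExt′−1(s) = (r || AExt(r) + s).

```python
import random

N, M = 8, 3  # toy lengths: the stand-in "AExt" maps {0,1}^N -> {0,1}^M

def toy_aext(x):
    """Stand-in for a seedless affine extractor: M fixed parities of the input.
    (A real construction is far more involved; only inversion is illustrated.)"""
    return [sum(x[i::M]) % 2 for i in range(M)]

def aext_prime(z):
    """AExt'(x || y) = AExt(x) XOR y, defined on N + M input bits."""
    x, y = z[:N], z[N:]
    return [a ^ b for a, b in zip(toy_aext(x), y)]

def aext_prime_inverse(s):
    """Sample a uniform preimage of s under AExt': pick r uniformly and
    output (r || AExt(r) XOR s)."""
    r = [random.randint(0, 1) for _ in range(N)]
    return r + [a ^ b for a, b in zip(toy_aext(r), s)]

# Sanity check: extracting from a sampled preimage always returns the target s.
for _ in range(1000):
    s = [random.randint(0, 1) for _ in range(M)]
    assert aext_prime(aext_prime_inverse(s)) == s
print("inversion check passed: AExt'(AExt'^{-1}(s)) == s")
```

Since the inverter outputs a uniformly random r together with the matching pad, its output on a uniform s is uniform over all of its input bits, which is exactly the uniformity property required of an inverter.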
9,499
1808.02513
2752721426
With ever-increasing computational demand for deep learning, it is critical to investigate the implications of the numeric representation and precision of DNN model weights and activations on computational efficiency. In this work, we explore unconventional narrow-precision floating-point representations as they relate to inference accuracy and efficiency to steer the improved design of future DNN platforms. We show that inference using these custom numeric representations on production-grade DNNs, including GoogLeNet and VGG, achieves an average speedup of 7.6x with less than 1% degradation in inference accuracy relative to a state-of-the-art baseline platform representing the most sophisticated hardware using single-precision floating point. To facilitate the use of such customized precision, we also present a novel technique that drastically reduces the time required to derive the optimal precision configuration.
To the best of our knowledge, our work is the first to examine the impact of numeric representations on the accuracy-efficiency trade-offs of large-scale, deployed DNNs with over half a million neurons (GoogLeNet, VGG, AlexNet), whereas prior work has only reported results on much smaller networks such as CIFARNET and LeNet-5 @cite_16 @cite_15 @cite_10 @cite_17 @cite_20 @cite_11 . Many of these works focused on fixed-point computation because fixed-point representations work well on small-scale neural networks. We reach very different conclusions when considering production-ready DNNs.
{ "abstract": [ "Neural network algorithms simulated on standard computing platforms typically make use of high resolution weights, with floating-point notation. However, for dedicated hardware implementations of such algorithms, fixed-point synaptic weights with low resolution are preferable. The basic approach of reducing the resolution of the weights in these algorithms by standard rounding methods incurs drastic losses in performance. To reduce the resolution further, in the extreme case even to binary weights, more advanced techniques are necessary. To this end, we propose two methods for mapping neural network algorithms with high resolution weights to corresponding algorithms that work with low resolution weights and demonstrate that their performance is substantially better than standard rounding. We further use these methods to investigate the performance of three common neural network algorithms under fixed memory size of the weight matrix with different weight resolutions. We show that dedicated hardware systems, whose technology dictates very low weight resolutions (be they electronic or biological) could in principle implement the algorithms we study.", "Many companies are deploying services, either for consumers or industry, which are largely based on machine-learning algorithms for sophisticated processing of large amounts of data. The state-of-the-art and most popular such machine-learning algorithms are Convolutional and Deep Neural Networks (CNNs and DNNs), which are known to be both computationally and memory intensive. A number of neural network accelerators have been recently proposed which can offer high computational capacity area ratio, but which remain hampered by memory accesses. However, unlike the memory wall faced by processors on general-purpose workloads, the CNNs and DNNs memory footprint, while large, is not beyond the capability of the on-chip storage of a multi-chip system. This property, combined with the CNN DNN algorithmic characteristics, can lead to high internal bandwidth and low external communications, which can in turn enable high-degree parallelism at a reasonable area cost. In this article, we introduce a custom multi-chip machine-learning architecture along those lines. We show that, on a subset of the largest known neural network layers, it is possible to achieve a speedup of 450.65x over a GPU, and reduce the energy by 150.31x on average for a 64-chip system. We implement the node down to the place and route at 28nm, containing a combination of custom storage and computational units, with industry-grade interconnects.", "Today advanced computer vision (CV) systems of ever increasing complexity are being deployed in a growing number of application scenarios with strong real-time and power constraints. Current trends in CV clearly show a rise of neural network-based algorithms, which have recently broken many object detection and localization records. These approaches are very flexible and can be used to tackle many different challenges by only changing their parameters. In this paper, we present the first convolutional network accelerator which is scalable to network sizes that are currently only handled by workstation GPUs, but remains within the power envelope of embedded systems. 
The architecture has been implemented on 3.09 mm2 core area in UMC 65 nm technology, capable of a throughput of 274 GOp s at 369 GOp s W with an external memory bandwidth of just 525 MB s full-duplex \" a decrease of more than 90 from previous work.", "We simulate the training of a set of state of the art neural networks, the Maxout networks (, 2013a), on three benchmark datasets: the MNIST, CIFAR10 and SVHN, with three distinct arithmetics: floating point, fixed point and dynamic fixed point. For each of those datasets and for each of those arithmetics, we assess the impact of the precision of the computations on the final error of the training. We find that very low precision computation is sufficient not just for running trained networks but also for training them. For example, almost state-of-the-art results were obtained on most datasets with 10 bits for computing activations and gradients, and 12 bits for storing updated parameters.", "", "Recent development of neuromorphic hardware offers great potential to speed up simulations of neural networks. SpiNNaker is a neuromorphic hardware and software system designed to be scalable and flexible enough to implement a variety of different types of simulations of neural systems, including spiking simulations with plasticity and learning. Spike-timing dependent plasticity (STDP) rules are the most common form of learning used in spiking networks. However, to date very few such rules have been implemented on SpiNNaker, in part because implementations must be designed to fit the specialized nature of the hardware. Here we explain how general STDP rules can be efficiently implemented in the SpiNNaker system. We give two examples of applications of the implemented rule: learning of a temporal sequence, and balancing inhibition and excitation of a neural network. Comparing the results from the SpiNNaker system to a conventional double-precision simulation, we find that the network behavior is comparable, and the final weights differ by less than 3 between the two simulations, while the SpiNNaker simulation runs much faster, since it runs in real time, independent of network size." ], "cite_N": [ "@cite_11", "@cite_15", "@cite_16", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "2118717252", "2048266589", "2070167224", "2950769435", "", "2150389614" ] }
RETHINKING NUMERICAL REPRESENTATIONS FOR DEEP NEURAL NETWORKS
Recently, deep neural networks (DNNs) have yielded state-of-the-art performance on a wide array of AI tasks, including image classification Krizhevsky et al. (2012), speech recognition Hannun et al. (2014), and language understanding . In addition to algorithmic innovations Nair & Hinton (2010); Srivastava et al. (2014); Taigman et al. (2014), a key driver behind these successes are advances in computing infrastructure that enable large-scale deep learning-the training and inference of large DNN models on massive datasets Dean et al. (2012); Farabet et al. (2013). Indeed, highly efficient GPU implementations of DNNs played a key role in the first breakthrough of deep learning for image classification Krizhevsky et al. (2012). Given the ever growing amount of data available for indexing, analysis, and training, and the increasing prevalence of everlarger DNNs as key building blocks for AI applications, it is critical to design computing platforms to support faster, more resource-efficient DNN computation. A set of core design decisions are common to the design of these infrastructures. One such critical choice is the numerical representation and precision used in the implementation of underlying storage and computation. Several recent works have investigated the numerical representation for DNNs Cavigelli et al. (2015); ; Du et al. (2014); Muller & Indiveri (2015). One recent work found that substantially lower precision can be used for training when the correct numerical rounding method is employed Gupta et al. (2015). Their work resulted in the design of a very energy-efficient DNN platform. This work and other previous numerical representation studies for DNNs have either limited themselves to a small subset of the customized precision design space or drew conclusions using only small neural networks. For example, the work from Gupta et al. 2015 evaluates 16-bit fixed-point and wider computational precision on LeNet-5 LeCun et al. (1998) and CIFARNET Krizhevsky & Hinton (2009). The fixed-point representation (Figure 1) is only one of many possible numeric representations. Exploring a limited customized precision design space inevitably results in designs lacking in energy efficiency and computational performance. Evaluating customized precision accuracy based on small neural networks requires the assumption that much larger, production-grade neural networks would operate comparably when subjected to the same customized precision. In this work, we explore the accuracy-efficiency trade-off made available via specialized customprecision hardware for inference and present a method to efficiently traverse this large design space to find an optimal design. Specifically, we evaluate the impact of a wide spectrum of customized precision settings for fixed-point and floating-point representations on accuracy and computational performance. We evaluate these customized precision configurations on large, state-of-the-art neural networks. By evaluating the full computational precision design space on a spectrum of these production-grade DNNs, we find that: 1. Precision requirements do not generalize across all neural networks. This prompts designers of future DNN infrastructures to carefully consider the applications that will be executed on their platforms, contrary to works that design for large networks and evaluate accuracy on small networks Cavigelli et al. (2015); . 2. 
Many large-scale DNNs require considerably more precision for fixed-point arithmetic than previously found from small-scale evaluations Cavigelli et al. (2015); Du et al. (2014). For example, we find that GoogLeNet requires on the order of 40 bits when implemented with fixed-point arithmetic, as opposed to less than 16 bits for LeNet-5. 3. Floating-point representations are more efficient than fixed-point representations when selecting optimal precision settings. For example, a 17-bit floating-point representation is acceptable for GoogLeNet, while over 40 bits are required for the fixed-point representation, a more expensive computation than the standard single precision floating-point format. To make these conclusions on large-scale customized precision design readily actionable for DNN infrastructure designers, we propose and validate a novel technique to quickly search the large customized precision design space. This technique leverages the activations in the last layer to build a model to predict accuracy based on the insight that these activations effectively capture the propagation of numerical error from computation. Using this method on deployable DNNs, including GoogLeNet Szegedy et al. (2015) and VGG Simonyan & Zisserman (2014), we find that using these recommendations to introduce customized precision into a DNN accelerator fabric results in an average speedup of 7.6× with less than 1% degradation in inference accuracy. CUSTOMIZED PRECISION HARDWARE We begin with an overview of the available design choices in the representation of real numbers in binary and discuss how these choices impact hardware performance. DESIGN SPACE We consider three aspects of customized precision number representations. First, we contrast the high-level choice between fixed-point and floating-point representations. Fixed-point binary arithmetic is computationally identical to integer arithmetic, simply changing the interpretation of each bit position. Floating-point arithmetic, however, represents the sign, mantissa, and exponent of a real number separately. Floating-point calculations involve several steps absent in integer arithmetic. In particular, addition operations require aligning the mantissas of each operand. As a result, floating-point computation units are substantially larger, slower, and more complex than integer units. In CPUs and GPUs, available sizes for both integers and floating-point calculations are fixed according to the data types supported by the hardware. Thus, the second aspect of precision customization we examine is to consider customizing the number of bits used in representing floating-point and fixed-point numbers. Third, we may vary the interpretation of fixed-point numbers and the assignment of bits to the mantissa and exponent in a floating-point value. CUSTOMIZED PRECISION TYPES In a fixed-point representation, we select the number of bits as well as the position of the radix point, which separates integer and fractional bits, as illustrated in Figure 1. A bit array, x, encoded in fixed point with the radix point at bit l (counting from the right) represents the value $2^{-l} \sum_{i=0}^{N-1} 2^i \cdot x_i$. In contrast to floating point, fixed-point representations with a particular number of bits have a fixed level of precision. By varying the position of the radix point, we change the representable range. An example floating-point representation is depicted in Figure 2.
As shown in the figure, there are three parameters to select when designing a floating-point representation: the bit-width of the mantissa, the bit-width of the exponent, and an exponent bias. The widths of the mantissa and exponent control precision and dynamic range, respectively. The exponent bias adjusts the offset of the exponent (which is itself represented as an unsigned integer) relative to zero to facilitate positive and negative exponents. Finally, an additional bit represents the sign. Thus, a floating-point format with N m mantissa bits, N e exponent bits, and a bias of b encodes the value $2^{\left(\sum_{i=0}^{N_e-1} 2^i \cdot e_i\right) - b} \left(1 + \sum_{i=1}^{N_m} 2^{-i} \cdot m_i\right)$, where m and e are the segments of a bit array representing the mantissa and exponent, respectively. Note that the leading bit of the mantissa is assumed to be 1 and hence is not explicitly stored, eliminating redundant encodings of the same value. A single-precision value in the IEEE-754 standard (i.e. float) comprises 23 mantissa bits, 8 exponent bits, and a sign bit. IEEE-754 standardized floating-point formats include special encodings for specific values, such as zero and infinity. Both fixed-point and floating-point representations have limitations in terms of the precision and the dynamic ranges available given particular representations, manifesting themselves computationally as rounding and saturation errors. These errors propagate through the deep neural network in a way that is difficult to estimate holistically, prompting experimentation on the DNN itself. HARDWARE IMPLICATIONS The key hardware building block for implementing DNNs is the multiply-accumulate (MAC) operation. The MAC operation implements the sum-of-products operation that is fundamental to the activation of each neuron. We show a high-level hardware block diagram of a MAC unit in Figure 3 (a). Figure 3 (b) adds detail for the addition operation, the more complex of the two operations. As seen in the figure, floating-point addition operations involve a number of sub-components that compare exponents, align mantissas, perform the addition, and normalize the result. Nearly all of the sub-components of the MAC unit scale in speed, power, and area with the bit width. Reducing the floating-point bit width improves hardware performance in two ways. First, reduced bit width makes a computation unit faster. Binary arithmetic computations involve chains of logic operations that typically grow at least logarithmically, and sometimes linearly (e.g., the propagation of carries in an addition, see Figure 3 (c)), in the number of bits. Reducing the bit width reduces the length of these chains, allowing the logic to operate at a higher clock frequency. Second, reduced bit width makes a computation unit smaller and requires less energy, typically linearly in the number of bits. The circuit delay and area are shown in Figure 4 when the mantissa bit widths are varied. As shown in the figure, scaling the length of the mantissa provides substantial opportunity because it defines the size of the internal addition unit. Similar trends follow for bit-widths in other representations. When a unit is smaller, more replicas can fit within the same chip area and power budget, all of which can operate in parallel. Hence, for computations like those in DNNs, where ample parallelism is available, area reductions translate into proportional performance improvement. This trend of bit width versus speed, power, and area is applicable to every computation unit in hardware DNN implementations.
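The following Python sketch (ours; rounding details such as the treatment of denormals and ties are simplifications rather than the exact behavior of any hardware or framework) shows one way to round a real value to the custom floating-point format just described, with a chosen number of mantissa bits, exponent bits, and an exponent bias, flushing too-small magnitudes to zero and saturating too-large ones.

```python
import math

def quantize_float(x, n_m, n_e, bias):
    """Round x to the nearest value representable with n_m mantissa bits,
    n_e exponent bits and the given exponent bias (sign handled separately).
    Denormals are flushed to zero and out-of-range magnitudes saturate."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)

    e_min = -bias                      # smallest stored exponent
    e_max = (2 ** n_e - 1) - bias      # largest stored exponent
    e = math.floor(math.log2(mag))     # unbiased exponent of x

    if e < e_min:                      # too small to represent: flush to zero
        return 0.0
    if e > e_max:                      # too large: saturate to largest finite value
        return sign * (1.0 + (2 ** n_m - 1) / 2 ** n_m) * 2.0 ** e_max

    frac = mag / 2.0 ** e              # mantissa in [1, 2)
    frac = round(frac * 2 ** n_m) / 2 ** n_m
    if frac >= 2.0:                    # rounding overflowed into the next binade
        frac /= 2.0
        e += 1
        if e > e_max:
            return sign * (1.0 + (2 ** n_m - 1) / 2 ** n_m) * 2.0 ** e_max
    return sign * frac * 2.0 ** e

# Example: a format with 7 mantissa bits and 6 exponent bits (bias chosen as 31).
for v in [0.1, 3.14159, 123456.0, 1e-12]:
    print(v, "->", quantize_float(v, n_m=7, n_e=6, bias=31))
```

Running the example shows the two failure modes discussed above: very small values vanish entirely, while values near the top of the range collapse onto the saturation point.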
Thus, in designing hardware that uses customized representations there is a trade-off between accuracy on the one hand and power, area, and speed on the other. Our goal is to use precision that delivers sufficient accuracy while attaining large improvements in power, area, and speed over standard floating-point designs. METHODOLOGY We describe the methodology we use to evaluate the customized precision design space, using image classification tasks of varying complexity as a proxy for computer vision applications. We evaluate DNN implementations using several metrics, classification accuracy, speedup, and energy savings relative to a baseline custom hardware design that uses single-precision floating-point representations. Using the results of this analysis, we propose and validate a search technique to efficiently determine the correct customized precision design point. ACCURACY We evaluate accuracy by modifying the Caffe deep learning framework to perform calculations with arbitrary fixed-point and floating-point formats. We continue to store values as C floats in Caffe, but truncate the mantissa and exponent to the desired format after each arithmetic operation. Accuracy, using a set of test inputs disjoint from the training input set, is then measured by running the forward pass of a DNN model with the customized format and comparing the outputs with the ground truth. We use the standard accuracy metrics that accompany the dataset for each DNN. For MNIST (LeNet-5) and CIFAR-10 (CIFARNET) we use top-1 accuracy and for Ima-geNet (GoogLeNet, VGG, and AlexNet) we use top-5 accuracy. Top-1 accuracy denotes the percent of inputs that the DNN predicts correctly after a single prediction attempt, while top-5 accuracy represents the percent of inputs that DNN predicts correctly after five attempts. EFFICIENCY We quantify the efficiency advantages of customized floating-point representations by designing a floating-point MAC unit in each candidate precision and determining its silicon area and delay characteristics. We then report speedup and energy savings relative to a baseline custom hardware implementation of a DNN that uses standard single-precision floating-point computations. We design each variant of the MAC unit using Synopsys Design Compiler and Synopsys PrimeTime, industry standard ASIC design tools, targeting a commercial 28nm silicon manufacturing process. The tools report the power, delay, and area characteristics of each precision variant. As shown in Figure 5, we compute speedups and energy savings relative to the standardized IEEE-754 floating-point representation considering both the clock frequency advantage and improved parallelism due to area reduction of the narrower bit-width MAC units. This allows customized precision designs to yield a quadratic improvement in total system throughput. EFFICIENT CUSTOMIZED PRECISION SEARCH To exploit the benefits of customized precision, a mechanism to select the correct configuration must be introduced. There are hundreds of designs among floating-point and fixed-point formats due to designs varying by the total bit width and the allocation of those bits. This spectrum of designs strains the ability to select an optimal configuration. A straightforward approach to select the customized precision design point is to exhaustively compute the accuracy of each design with a large number of neural network inputs. 
This strategy requires substantial computational resources that are proportional to the size of the network and variety of output classifications. We describe our technique that significantly reduces the time required to search for the correct configuration in order to facilitate the use of customized precision.
Figure 6: The inference accuracy versus speedup design space for each of the neural networks, showing substantial computational performance improvements for minimal accuracy degradation when customized precision floating-point formats are used. (Panels include AlexNet, CIFARNET, and LeNet-5; each plots accuracy against speedup for custom floating point, custom fixed point, and IEEE 754 single precision.)
The key insight behind our search method is that customized precision impacts the underlying internal computation, which is hidden by evaluating only the NN final accuracy metric. Thus, instead of comparing the final accuracy generated by networks with different precision configurations, we compare the original NN activations to the customized precision activations. This circumvents the need to evaluate the large number of inputs required to produce representative neural network accuracy. Furthermore, instead of examining all of the activations, we only analyze the last layer, since the last layer captures the usable output from the neural network as well as the propagation of lost accuracy. Our method summarizes the differences between the last layer of two configurations by calculating the linear coefficient of determination between the last layer activations. A method to translate the coefficient of determination to a more desirable metric, such as end-to-end inference accuracy, is necessary. We find that a linear model provides such a transformation. The customized precision setting with the highest speedup that meets a specified accuracy threshold is then selected. In order to account for slight inaccuracies in the model, inference accuracy for a subset of configurations is evaluated. If the configuration provided by the accuracy model results in insufficient accuracy, then an additional bit is added and the process repeats. Similarly, if the accuracy threshold is met, then a bit is removed from the customized precision format. EXPERIMENTS In this section, we evaluate five common neural networks spanning a range of sizes and depths in the context of customized precision hardware. We explore the trade-off between accuracy and efficiency when various customized precision representations are employed. Next, we address the sources of accuracy degradation when customized precision is utilized. Finally, we examine the characteristics of our customized precision search technique. For each DNN, we use the canonical benchmark validation set: ImageNet for GoogLeNet, VGG, and AlexNet; CIFAR-10 for CIFARNET; MNIST for LeNet-5. We utilize the entire validation set for all experiments, except for GoogLeNet and VGG experiments involving the entire design space.
In these cases we use a randomly-selected 1% of the validation set to make the experiments tractable. EXPERIMENTAL SETUP ACCURACY VERSUS EFFICIENCY TRADE-OFFS To evaluate the benefits of customized precision hardware, we swept the design space for accuracy and performance characteristics. This performance-accuracy trade off is shown in Figure 6. This figure shows the DNN inference accuracy across the full input set versus the speedup for each of the five DNN benchmarks. The black star represents the IEEE 754 single precision representation (i.e. the original accuracy with 1× speedup), while the red circles and blue triangles represent the complete set of our customized precision floating-point and fixed-point representations, respectively. For GoogLeNet, VGG, and AlexNet it is clear that the floating-point format is superior to the fixedpoint format. In fact, the standard single precision floating-point format is faster than all fixedpoint configurations that achieve above 40% accuracy. Although fixed-point computation is simpler and faster than floating-point computation when the number of bits is fixed, customized precision floating-point representations are more efficient because less bits are needed for similar accuracy. Figure 7: The speedup and energy savings as the two parameters are adjusted for the custom floating point and fixed-point representations. The marked area denotes configurations where the total loss in AlexNet accuracy is less than 1%. By comparing the results across the five different networks in Figure 6, it is apparent that the size and structure of the network impacts the customized precision flexibility of the network. This insight suggests that hardware designers should carefully consider which neural network(s) they expect their device to execute as one of the fundamental steps in the design process. The impact of network size on accuracy is discussed in further detail in the following section. The specific impact of bit assignments on performance and energy efficiency are illustrated in Figure 7. This figure shows the the speedup and energy improvements over the single precision floatingpoint representation as the number of allocated bits is varied. For the floating-point representations, the number of bits allocated for the mantissa (x-axis) and exponent (y-axis) are varied. For the fixedpoint representations, the number of bits allocated for the integer (x-axis) and fraction (y-axis) are varied. We highlight a region in the plot deemed to have acceptable accuracy. In this case, we define acceptable accuracy to be 99% normalized AlexNet accuracy (i.e., no less than a 1% degradation in accuracy from the IEEE 754 single precision accuracy on classification in AlexNet). The fastest and most energy efficient representation occurs at the bottom-left corner of the region with acceptable accuracy, since a minimal number of bits are used. The configuration with the highest performance that meets this requirement is a floating-point representation with 6 exponent bits and 7 mantissa bits, which yields a 7.2× speedup and a 3.4× savings in energy over the single precision IEEE 754 floating-point format. If a more stringent accuracy requirement is necessary, 0.3% accuracy degradation, the representation with one additional bit in the mantissa can be used, which achieves a 5.7× speedup and 3.0× energy savings. 
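How much accuracy a given bit allocation costs ultimately depends on how quantization interacts with the values flowing through the network. The sketch below (ours; the weight and activation values are made up and the rounding/saturation policy is a simplification) simulates a serialized multiply-accumulate in a signed fixed-point format, re-quantizing the running sum after every addition, which makes the saturation effect easy to reproduce for narrow formats.

```python
def quantize_fixed(x, total_bits, frac_bits):
    """Quantize x to a signed fixed-point format with total_bits bits and the
    radix point frac_bits positions from the right, saturating on overflow.
    A sketch of the simulated number format, not of any specific hardware unit."""
    scale = 2 ** frac_bits
    max_int = 2 ** (total_bits - 1) - 1
    min_int = -2 ** (total_bits - 1)
    q = round(x * scale)
    q = max(min_int, min(max_int, q))      # saturate instead of wrapping
    return q / scale

def quantized_dot(weights, activations, total_bits, frac_bits):
    """Serialized multiply-accumulate with the running sum re-quantized after
    every addition, the point at which saturation and rounding errors build up."""
    acc = 0.0
    for w, a in zip(weights, activations):
        prod = quantize_fixed(w * a, total_bits, frac_bits)
        acc = quantize_fixed(acc + prod, total_bits, frac_bits)
    return acc

weights = [0.75, -0.5, 1.25, 2.0, 1.5] * 40
activations = [1.1, 0.9, 1.3, 1.2, 0.8] * 40
exact = sum(w * a for w, a in zip(weights, activations))
for total_bits, frac_bits in [(16, 8), (12, 6), (8, 4)]:
    approx = quantized_dot(weights, activations, total_bits, frac_bits)
    print(f"{total_bits}-bit (radix at {frac_bits}): {approx:.3f}  vs exact {exact:.3f}")
```

With these illustrative inputs the exact sum grows well past the representable range of the narrower formats, so the running sum pins at the saturation value, the same failure mode examined for real networks in the next section.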
SOURCES OF ACCUMULATION ERROR In order to understand how customized precision degrades DNN accuracy among numeric representations, we examine the impact of various reduced precision computations on a neuron. Figure 8 presents the serialized accumulation of neuron inputs in the third convolution layer of AlexNet. The x-axis represents the number of inputs that have been accumulated, while the y-axis represents the current value of the running sum. The black line represents the original DNN computation, a baseline for customized precision settings to match. We find two causes of error between the customized precision fixed-point and floating-point representations, saturation and excessive rounding. In the fixed-point case (green line, representing 16 bits with the radix point in the center), the central cause of error is from saturation at the extreme values. The running sum exceeds 255, the maximum representable value in this representation, after 60 inputs are accumulated, as seen in the figure. For the next case, the floating-point configuration with 2 bits and 14 bits for the mantissa and exponent (blue line), respectively, we find that the lack of precision for large values causes excessive rounding errors. As shown in the figure, after accumulating 120 inputs, this configuration's running sum exceeds 256, which limits the minimum adjustment in magnitude to 64 (the exponent normalizes the mantissa to 256, so the two mantissa bits represent 128 and 64). Finally, one of the customized precision types that has high performance and accuracy for AlexNet, 8 mantissa bits and 6 exponent bits (red line), is shown as well. This configuration almost perfectly matches the IEEE 754 floating-point configuration, as expected based on the final output accuracy. The other main cause of accuracy loss is from values that are too small to be encoded as a non-zero value in the chosen customized precision configuration. These values, although not critical during addition, cause significant problems when multiplied with a large value, since the output should be encoded as a non-zero value in the specific precision setting. We found that the weighted input is minimally impacted, until the precision is reduced low enough for the weight to become zero. While it may be intuitive based on these results to apply different customized precision settings to various stages of the neural network in order to mitigate the sudden loss in accuracy, the realizable gains of multi-precision configurations present significant challenges. The variability between units will cause certain units to be unused during specific layers of the neural network causing gains to diminish (e.g., 11-bit units are idle when 16-bit units are required for a particular layer). Also, the application specific hardware design is already an extensive process and multiple customized precision configurations increases the difficulty of the hardware design and verification process. CUSTOMIZED PRECISION SEARCH Now we evaluate our proposed customized precision search method. The goal of this method is to significantly reduce the required time to navigate the customized precision design space and still provide an optimal design choice in terms of speedup, limited by an accuracy constraint. Correlation model. 
First, we present the linear correlation-accuracy model in Figure 9, which shows the relationship between the normalized accuracy of each setting in the design space and the correlation between its last layer activations and those of the original NN. This model, although built using all of the customized precision configurations from the AlexNet, CIFARNET, and LeNet-5 neural networks, produces a good fit with a correlation of 0.96. It is important that the model matches across networks and precision design choices (e.g., floating point versus fixed point), since creating this model for each DNN individually requires as much time as exhaustive search.

Validation. To validate our search technique, Figure 10 presents the accuracy-speedup trade-off curves from our method compared to the ideal design points. We first obtain optimal results via exhaustive search.

Figure 11: The speedup resulting from searching for the fastest setting with less than 1% inference accuracy degradation. All selected customized precision DNNs meet this accuracy constraint.

We present our search with a variable number of refinement iterations, where we evaluate the accuracy of the current design point and adjust the precision if necessary. To verify robustness, the accuracy models were generated using cross-validation where all configurations in the DNN being searched are excluded (e.g., we build the AlexNet model with LeNet and CIFARNET accuracy/correlation pairs). The prediction is made using only ten randomly selected inputs, a tiny subset compared to that needed for classification accuracy, some of which are even incorrectly classified by the original neural network. Thus, the cost of prediction using the model is negligible. We observe that, in all cases, the accuracy model combined with the evaluation of just two customized precision configurations provides the same result as the exhaustive search. Evaluating two designs out of 340 is 170× faster than exhaustively evaluating all designs. When only one configuration is evaluated instead of two (i.e., a further 50% reduction in search time), the selected customized precision setting never violates the target accuracy, but concedes a small amount of performance. Finally, we note that our search mechanism, without evaluating inference accuracy for any of the design points, provides a representative prediction of the optimal customized precision setting. Although occasionally violating the target accuracy (i.e., the cases where the speedup is higher than the exhaustive search), this prediction can be used to gauge the amenability of the NN to customized precision without investing any considerable amount of time in experimentation.

Speedup. We present the final speedup produced by our search method in Figure 11 when the algorithm is configured for 99% target accuracy and to use two samples for refinement. In all cases, the chosen customized precision configuration meets the targeted accuracy constraint. In most cases, we find that the larger networks require more precision (DNNs are sorted from left to right in descending order based on size). VGG requires less precision than expected, but VGG also uses smaller convolution kernels than all of the other DNNs except LeNet-5.

CONCLUSION

In this work, we demonstrated the importance of carefully considering customized precision when realizing neural networks. We show that using the IEEE 754 single precision floating point representation in hardware results in surrendering substantial performance.
On the other hand, picking a configuration that has lower precision than optimal will result in severe accuracy loss. By reconsidering the representation from the ground up in designing custom precision hardware and using our search technique, we find an average speedup across deployable DNNs, including GoogLeNet and VGG, of 7.6× with less than 1% degradation in inference accuracy.
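As a compact summary of the search procedure described in the preceding sections, the following sketch shows the overall control flow: predict accuracy from the coefficient of determination between last-layer activations, pick the fastest predicted-feasible configuration, then refine with a couple of real accuracy evaluations. The hooks `ref_acts_fn`, `acts_fn`, `speedup_fn`, `eval_fn`, and `acc_model` are placeholders for machinery the text describes but does not name; this is a sketch of the idea, not the authors' code.

```python
import numpy as np

def r_squared(ref_acts, test_acts):
    """Linear coefficient of determination between two sets of last-layer activations."""
    x = np.asarray(ref_acts, dtype=np.float64).ravel()
    y = np.asarray(test_acts, dtype=np.float64).ravel()
    r = np.corrcoef(x, y)[0, 1]
    return r * r

def choose_precision(configs, ref_acts_fn, acts_fn, speedup_fn, eval_fn,
                     acc_model, acc_target, refine_steps=2):
    """Pick the fastest configuration whose predicted accuracy meets acc_target,
    then refine with a small number of real accuracy evaluations.

    configs        : candidate formats, e.g. (mantissa_bits, exponent_bits) tuples
    ref_acts_fn()  : last-layer activations of the single-precision baseline
    acts_fn(cfg)   : last-layer activations under cfg (a handful of inputs suffices)
    speedup_fn(cfg): modeled speedup of cfg
    eval_fn(cfg)   : measured inference accuracy of cfg (the expensive step)
    acc_model(r2)  : linear model mapping correlation to normalized accuracy
    """
    ref = ref_acts_fn()
    predicted = [(cfg, acc_model(r_squared(ref, acts_fn(cfg)))) for cfg in configs]
    feasible = [cfg for cfg, acc in predicted if acc >= acc_target]
    if not feasible:                                   # nothing predicted feasible: fall back
        feasible = [max(predicted, key=lambda t: t[1])[0]]
    cfg = max(feasible, key=speedup_fn)                # fastest predicted-feasible design
    for _ in range(refine_steps):                      # cheap correction loop
        if eval_fn(cfg) >= acc_target:
            break                                      # met the target; could also try removing a bit
        cfg = (cfg[0] + 1, cfg[1])                     # insufficient accuracy: add a mantissa bit
    return cfg
```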
4,320
1808.02586
2949760630
Domain Adaptation arises when we aim at learning from a source domain a model that can perform acceptably well on a different target domain. It is especially crucial for Natural Language Generation (NLG) in Spoken Dialogue Systems when there are sufficient annotated data in the source domain, but there is limited labeled data in the target domain. How to effectively utilize existing knowledge from source domains is a crucial issue in domain adaptation. In this paper, we propose an adversarial training procedure to train a Variational encoder-decoder based language generator via multiple adaptation steps. In this procedure, a model is first trained on source domain data and then fine-tuned on a small set of target domain utterances under the guidance of two proposed critics. Experimental results show that the proposed method can effectively leverage the existing knowledge in the source domain to adapt to another related domain by using only a small amount of in-domain data.
Generally, Domain Adaptation involves two different types of datasets, one from a source domain and the other from a target domain. The source domain typically contains a sufficient amount of annotated data such that a model can be efficiently built, while there is often little or no labeled data in the target domain. Domain adaptation for NLG has been less studied despite its important role in developing multi-domain SDS. Earlier work proposed a SPoT-based generator to address domain adaptation problems. Subsequent systems focused on tailoring user preferences @cite_19 and on controlling user perceptions of linguistic style @cite_3 . Moreover, a phrase-based statistical generator @cite_13 was built using graphical models and active learning, and a multi-domain procedure @cite_21 was proposed via data counterfeiting and discriminative training.
{ "abstract": [ "One of the biggest challenges in the development and deployment of spoken dialogue systems is the design of the spoken language generation module. This challenge arises from the need for the generator to adapt to many features of the dialogue domain, user population, and dialogue context. A promising approach is trainable generation, which uses general-purpose linguistic knowledge that is automatically adapted to the features of interest, such as the application domain, individual user, or user group. In this paper we present and evaluate a trainable sentence planner for providing restaurant information in the MATCH dialogue system. We show that trainable sentence planning can produce complex information presentations whose quality is comparable to the output of a template-based generator tuned to this domain. We also show that our method easily supports adapting the sentence planner to individuals, and that the individualized sentence planners generally perform better than models trained and tested on a population of individuals. Previous work has documented and utilized individual preferences for content selection, but to our knowledge, these results provide the first demonstration of individual preferences for sentence planning operations, affecting the content order, discourse structure and sentence structure of system responses. Finally, we evaluate the contribution of different feature sets, and show that, in our application, n-gram features often do as well as features based on higher-level linguistic representations.", "Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains. Therefore, it is important to leverage existing resources and exploit similarities between domains to facilitate domain adaptation. In this paper, we propose a procedure to train multi-domain, Recurrent Neural Network-based (RNN) language generators via multiple adaptation steps. In this procedure, a model is first trained on counterfeited data synthesised from an out-of-domain dataset, and then fine tuned on a small set of in-domain utterances with a discriminative objective function. Corpus-based evaluation results show that the proposed procedure can achieve competitive performance in terms of BLEU score and slot error rate while significantly reducing the data needed to train generators in new, unseen domains. In subjective testing, human judges confirm that the procedure greatly improves generator performance when only a small amount of data is available in the domain.", "Most previous work on trainable language generation has focused on two paradigms: (a) using a statistical model to rank a set of generated utterances, or (b) using statistics to inform the generation decision process. Both approaches rely on the existence of a handcrafted generator, which limits their scalability to new domains. This paper presents Bagel, a statistical language generator which uses dynamic Bayesian networks to learn from semantically-aligned data produced by 42 untrained annotators. A human evaluation shows that Bagel can generate natural and informative utterances from unseen inputs in the information presentation domain. 
Additionally, generation performance on sparse datasets is improved significantly by using certainty-based active learning, yielding ratings close to the human gold standard with a fraction of the data.", "Recent work in natural language generation has begun to take linguistic variation into account, developing algorithms that are capable of modifying the system's linguistic style based either on the user's linguistic style or other factors, such as personality or politeness. While stylistic control has traditionally relied on handcrafted rules, statistical methods are likely to be needed for generation systems to scale to the production of the large range of variation observed in human dialogues. Previous work on statistical natural language generation (SNLG) has shown that the grammaticality and naturalness of generated utterances can be optimized from data; however these data-driven methods have not been shown to produce stylistic variation that is perceived by humans in the way that the system intended. This paper describes Personage, a highly parameterizable language generator whose parameters are based on psychological findings about the linguistic reflexes of personality. We present a novel SNLG method which uses parameter estimation models trained on personality-annotated data to predict the generation decisions required to convey any combination of scalar values along the five main dimensions of personality. A human evaluation shows that parameter estimation models produce recognizable stylistic variation along multiple dimensions, on a continuous scale, and without the computational cost incurred by overgeneration techniques." ], "cite_N": [ "@cite_19", "@cite_21", "@cite_13", "@cite_3" ], "mid": [ "2099542783", "2951718774", "2161181481", "2018116724" ] }
Adversarial Domain Adaptation for Variational Neural Language Generation in Dialogue Systems
Traditionally, Spoken Dialogue Systems are developed for various specific domains, including finding a hotel, searching for a restaurant (Wen et al., 2015a), buying a tv or laptop (Wen et al., 2015b), flight reservations (Levin et al., 2000), etc. Such systems often require a well-defined ontology, which is essentially a structured data representation that the dialogue system can converse about. Statistical approaches to multi-domain SDS have shown promising results in how to reuse data efficiently in a domain-scalable framework (Young et al., 2013). Mrkšić et al. (2015) addressed the question of multi-domain SDS belief tracking by training a general model and adapting it to each domain. Recently, Recurrent Neural Network (RNN) based methods have shown improved results in tackling the domain adaptation issue (Chen et al., 2015; Shi et al., 2015; Wen et al., 2016a; Wen et al., 2016b). Such generators have also achieved promising results when adequate annotated datasets are provided (Wen et al., 2015b; Wen et al., 2015a; Tran and Nguyen, 2017a; Tran and Nguyen, 2017b). More recently, the development of the variational autoencoder (VAE) framework (Kingma and Welling, 2013; Rezende and Mohamed, 2015) has paved the way for learning large-scale, directed latent variable models. This has brought considerable benefits and significant progress in natural language processing (Bowman et al., 2015; Miao et al., 2016; Purushotham et al., 2017; Mnih and Gregor, 2014) and dialogue systems (Wen et al., 2017; Serban et al., 2017).

This paper presents an adversarial training procedure to train a variational neural language generator via multiple adaptation steps, which enables the generator to learn more efficiently when in-domain data is in short supply. In summary, we make the following contributions: (1) We propose a variational approach for the NLG problem which enables the generator to adapt faster to a new, unseen domain despite scarce target resources; (2) We propose two critics in an adversarial training procedure, which can guide the generator to generate outputs that resemble the sentences drawn from the target domain; (3) We propose a unifying variational domain adaptation architecture which performs acceptably well in a new, unseen domain by using a limited amount of in-domain data; (4) We investigate the effectiveness of the proposed method in different scenarios, including ablation, domain adaptation, scratch, and unsupervised training with various amounts of data.

Variational Domain-Adaptation Neural Language Generator
Drawing inspiration from the variational autoencoder (Kingma and Welling, 2013), and assuming that there exists a continuous latent variable $z$ from an underlying semantic space of Dialogue Act (DA) and utterance pairs $(d, y)$, we explicitly model this space together with the variable $d$ to guide the generation process, i.e., $p(y|z, d)$.
With this assumption, the original conditional probability can be reformulated as follows:
$$p(y|d) = \int_z p(y, z|d)\,dz = \int_z p(y|z, d)\,p(z|d)\,dz \quad (1)$$
This latent variable enables us to model the underlying semantic space as a global signal for generation, in which the variational lower bound of the variational generator can be formulated as follows:
$$\mathcal{L}_{VAE}(\theta, \phi, d, y) = -\mathrm{KL}(q_\phi(z|d, y)\,\|\,p_\theta(z|d)) + \mathbb{E}_{q_\phi(z|d,y)}[\log p_\theta(y|z, d)] \quad (2)$$
where $p_\theta(z|d)$ is the prior model, $q_\phi(z|d, y)$ is the posterior approximator, $p_\theta(y|z, d)$ is the decoder guided by the global signal $z$, and $\mathrm{KL}(Q\|P)$ is the Kullback-Leibler divergence between $Q$ and $P$.

Variational Neural Encoder
The variational neural encoder aims at encoding a given input sequence $w_1, w_2, \dots, w_L$ into continuous vectors. In this work, we use a 1-layer Bidirectional LSTM (BiLSTM) to encode the sequence embedding. The BiLSTM consists of forward and backward LSTMs, which read the sequence from left to right and from right to left to produce the forward and backward sequences of hidden states $(\overrightarrow{h}_1, \dots, \overrightarrow{h}_L)$ and $(\overleftarrow{h}_1, \dots, \overleftarrow{h}_L)$, and $h_E = (h_1, h_2, \dots, h_L)$ where $h_i = \overrightarrow{h}_i + \overleftarrow{h}_i$. We utilize this encoder to represent both the sequence of slot-value pairs $\{sv_i\}_{i=1}^{T_{DA}}$ in a given Dialogue Act and the corresponding utterance $\{y_i\}_{i=1}^{T_Y}$ (see the red parts in Figure 1). We finally apply mean-pooling over the BiLSTM hidden vectors to obtain the representations $h_D = \frac{1}{T_{DA}}\sum_{i}^{T_{DA}} h_i$ and $h_Y = \frac{1}{T_Y}\sum_{i}^{T_Y} h_i$. The encoder, accordingly, produces both the DA representation vector, which flows into the inferer and decoder, and the utterance representation, which streams to the posterior approximator.

Variational Neural Inferer
In this section, we describe our approach to model both the prior $p_\theta(z|d)$ and the posterior $q_\phi(z|d, y)$ by utilizing neural networks.

Neural Posterior Approximator
Modeling the true posterior $p(z|d, y)$ is usually intractable. Traditional mean-field approaches fail to capture the true posterior distribution of $z$ due to their oversimplified assumptions. Following the work of (Kingma and Welling, 2013), in this paper we employ a neural network to approximate the posterior distribution of $z$ to simplify the posterior inference. We assume the approximation has the following form:
$$q_\phi(z|d, y) = \mathcal{N}(z;\ \mu(f(h_D, h_Y)),\ \sigma^2(f(h_D, h_Y))\,I) \quad (3)$$
where the mean $\mu$ and standard deviation $\sigma$ are the outputs of neural networks based on the representations $h_D$ and $h_Y$. The function $f$ is a non-linear transformation that projects both the DA and utterance representations onto the latent space:
$$h_z = f(h_D, h_Y) = g(W_z[h_D; h_Y] + b_z) \quad (4)$$
where $W_z \in \mathbb{R}^{d_z \times (d_{h_D} + d_{h_Y})}$ and $b_z \in \mathbb{R}^{d_z}$ are the weight matrix and bias parameters respectively, $d_z$ is the dimensionality of the latent space, and $g(\cdot)$ is an element-wise activation function, which we set to be ReLU in our experiments. In this latent space, we obtain the diagonal Gaussian distribution parameters $\mu$ and $\log\sigma^2$ through linear regression:
$$\mu = W_\mu h_z + b_\mu, \quad \log\sigma^2 = W_\sigma h_z + b_\sigma \quad (5)$$
where $\mu$ and $\log\sigma^2$ are both $d_z$-dimensional vectors.

Neural Prior Model
We model the prior as follows:
$$p_\theta(z|d) = \mathcal{N}(z;\ \mu'(d),\ \sigma'(d)^2 I) \quad (6)$$
where $\mu'$ and $\sigma'$ of the prior are neural models based on the DA representation only, which are the same as those of the posterior $q_\phi(z|d, y)$ in Eq. 4 and Eq. 5, except for the absence of $h_Y$.
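For concreteness, a minimal PyTorch-style sketch of the inference networks in Eqs. (3)-(6) and of the KL term in Eq. (2) is given below; the layer names and sizes are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class VariationalInferer(nn.Module):
    """Prior p(z|d) and posterior q(z|d, y) as diagonal Gaussians (Eqs. 3-6)."""
    def __init__(self, d_da, d_utt, d_z):
        super().__init__()
        # posterior: conditioned on both the DA and the utterance representations (Eqs. 4-5)
        self.post_proj = nn.Linear(d_da + d_utt, d_z)
        self.post_mu = nn.Linear(d_z, d_z)
        self.post_logvar = nn.Linear(d_z, d_z)
        # prior: conditioned on the DA representation only (Eq. 6)
        self.prior_proj = nn.Linear(d_da, d_z)
        self.prior_mu = nn.Linear(d_z, d_z)
        self.prior_logvar = nn.Linear(d_z, d_z)

    def posterior(self, h_d, h_y):
        h_z = torch.relu(self.post_proj(torch.cat([h_d, h_y], dim=-1)))
        return self.post_mu(h_z), self.post_logvar(h_z)

    def prior(self, h_d):
        h_z = torch.relu(self.prior_proj(h_d))
        return self.prior_mu(h_z), self.prior_logvar(h_z)

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) between diagonal Gaussians, the first term of Eq. 2 and Eq. 13."""
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0,
        dim=-1)
```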
To acquire a representation of the latent variable $z$, we utilize the same technique as proposed in the VAE (Kingma and Welling, 2013) and re-parameterize it as follows:
$$h_z = \mu + \sigma \odot \epsilon, \quad \epsilon \sim \mathcal{N}(0, I) \quad (7)$$
In addition, we set $h_z$ to be the mean of the prior $p_\theta(z|d)$, i.e., $\mu'$, during decoding due to the absence of the utterance $y$. Intuitively, by parameterizing the hidden distribution this way, we can back-propagate the gradient to the parameters of the encoder and train the whole network with stochastic gradient descent. Note that the parameters of the prior and the posterior are independent of each other.
In order to integrate the latent variable $h_z$ into the decoder, we use a non-linear transformation to project it onto the output space for generation:
$$h_e = g(W_e h_z + b_e) \quad (8)$$
where $h_e \in \mathbb{R}^{d_e}$. It is important to notice that, due to the sample noise $\epsilon$, the representation $h_e$ is not fixed for the same input DA and model parameters. This helps the model learn to quickly adapt to a new domain (see Table 1-(a) and Table 3, sec. 3).

Variational Neural Decoder
Given a DA $d$ and the latent variable $z$, the decoder calculates the probability over the generation $y$ as a joint probability of ordered conditionals:
$$p(y|z, d) = \prod_{t=1}^{T_Y} p(y_t|y_{<t}, z, d) \quad (9)$$
where $p(y_t|y_{<t}, z, d) = g(\mathrm{RNN}(y_t, h_{t-1}, d_t))$. In this paper, we borrow the $d_t$ calculation and the computational RNN cell from (Tran and Nguyen, 2017a), where RNN(.) = RALSTM(.), with a slight modification in order to integrate the representation of the latent variable, i.e., $h_e$, into the RALSTM cell, which is denoted by the bold dashed orange arrow in Figure 1-(iii). We modify the cell calculation as follows:
$$\begin{pmatrix} i_t \\ f_t \\ o_t \\ c_t \end{pmatrix} = \begin{pmatrix} \sigma \\ \sigma \\ \sigma \\ \tanh \end{pmatrix} W_{4d_h, 4d_h} \begin{pmatrix} h_e \\ d_t \\ h_{t-1} \\ y_t \end{pmatrix} \quad (10)$$
where $i_t$, $f_t$, $o_t$ are the input, forget and output gates respectively, $d_h$ is the hidden layer size, and $W_{4d_h, 4d_h}$ is a model parameter. The resulting Variational RALSTM (VRALSTM) model is demonstrated in Figure 1-(i), (ii), (iii), in which the latent variable can affect the hidden representation through the gates. This allows the model to indirectly take advantage of the underlying semantic information from the latent variable $z$. In addition, when the model learns to adapt to a new domain with unseen dialogue acts, the semantic representation $h_e$ can help to guide the generation process (see sec. 6.3 for details).

Critics
In this section, we introduce a text-similarity critic and a domain critic to guarantee, as much as possible, that the generated sentences resemble the sentences drawn from the target domain.

Text similarity critic
To check the relevance between sentence pairs in the two domains and to encourage the model to generate sentences in a style that is highly similar to those in the target domain, we propose a Text Similarity Critic (SC) to classify $(y_{(1)}, y_{(2)})$ as 1-similar or 0-unsimilar in text style. The SC model consists of two parts: a BiLSTM $h_Y$ shared with the Variational Neural Encoder to represent the $y_{(1)}$ sentence, and a second BiLSTM to encode the $y_{(2)}$ sentence. The SC model takes as input a pair $(y_{(1)}, y_{(2)})$ of ([target], source), ([target], generated), or ([generated], source). Note that we give priority to encoding the $y_{(1)}$ sentence in [.] using the shared BiLSTM, which guides the model to learn the sentence style from the target domain, and also contributes the target domain information into the global latent variables.
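As a concrete illustration of Eq. (7) and of the modified cell in Eq. (10) above, the following is a minimal sketch; the cell-state and hidden-state updates shown are the standard LSTM ones, whereas the actual RALSTM cell of (Tran and Nguyen, 2017a) differs in its details, so this should be read as an approximation rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def reparameterize(mu, logvar):
    """Eq. 7: h_z = mu + sigma * eps with eps ~ N(0, I)."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

class VRALSTMCell(nn.Module):
    """Sketch of the modified cell of Eq. 10: the projected latent h_e is fed to all
    four gates together with the DA vector d_t, the previous hidden state and the
    previous token embedding. The state update below is the standard LSTM one."""
    def __init__(self, d_h, d_e, d_d, d_y):
        super().__init__()
        self.gates = nn.Linear(d_e + d_d + d_h + d_y, 4 * d_h)

    def forward(self, h_e, d_t, h_prev, c_prev, y_prev):
        pre = self.gates(torch.cat([h_e, d_t, h_prev, y_prev], dim=-1))
        i, f, o, c_hat = pre.chunk(4, dim=-1)
        i, f, o, c_hat = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(c_hat)
        c = f * c_prev + i * c_hat
        h = o * torch.tanh(c)
        return h, c
```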
For the SC model, we further utilize Siamese recurrent architectures (Neculoiu et al., 2016) for learning sentence similarity, which allow us to learn useful representations with limited supervision.

Domain critic
In consideration of the shift between domains, we introduce a Domain Critic (DC) to classify a sentence as belonging to the source, target, or generated domain. Drawing inspiration from the work of (Ganin et al., 2016), we model the DC with a gradient reversal layer and two standard feed-forward layers. It is important to notice that our DC model shares parameters with the Variational Neural Encoder and the Variational Neural Inferer. The DC model takes as input a pair of a given DA and the corresponding utterance, produces a concatenation of its representation and its latent variable in the output space, which is then passed through a feed-forward layer and a 3-label classifier. In addition, the gradient reversal layer, which multiplies the gradient by a specific negative value during back-propagation training, ensures that the feature distributions over the two domains are made as similar, and as indistinguishable as possible, for the domain critic, hence resulting in domain-invariant features.

Training Domain Adaptation Model
Given a training instance represented by a pair of DA and sentence $(d^{(i)}, y^{(i)})$ from the rich source domain $S$ and the limited target domain $T$, the task aims at finding a set of parameters $\Theta_T$ that can perform acceptably well on the target domain.

Training Critics
We give below the training objectives of SC and DC. For SC, the goal is to classify a sentence pair as having 1-similar or 0-unsimilar textual style. This procedure can be formulated as a supervised classification training objective function:
$$\mathcal{L}_s(\psi) = -\sum_{n=1}^{N} \log C_s(l_s^n \mid y_{(1)}^n, y_{(2)}^n, \psi) \quad (11)$$
$$l_s^n = \begin{cases} 1\text{-similar} & \text{if } (y_{(1)}^n, y_{(2)}^n) \in P_{sim}, \\ 0\text{-unsimilar} & \text{if } (y_{(1)}^n, y_{(2)}^n) \in P_{unsim}, \end{cases}$$
with $Y_G = \{y \mid y \sim G(\cdot|d_T, \cdot)\}$, $P_{sim} = \{(y_T^n, y_{Y_G}^n)\}$, and $P_{unsim} = \{(y_T^n, y_S^n), (y_{Y_G}^n, y_S^n)\}$, where $N$ is the number of sentences, $\psi$ denotes the model parameters of SC, and $Y_G$ denotes sentences generated from the current generator $G$ given the target domain dialogue act $d_T$. The scalar probability $C_s(1|y_T^n, y_{Y_G}^n)$ indicates how relevant a generated sentence $y_{Y_G}^n$ is to a target sentence $y_T^n$.
The DC critic aims at classifying a DA-utterance pair into the source, target, or generated domain. This can also be formulated as a supervised classification training objective as follows:
$$\mathcal{L}_d(\varphi) = -\sum_{n=1}^{N} \log C_d(l_d^n \mid d^n, y^n, \varphi) \quad (12)$$
$$l_d^n = \begin{cases} \text{source} & \text{if } (d^n, y^n) \in (D_S, Y_S), \\ \text{target} & \text{if } (d^n, y^n) \in (D_T, Y_T), \\ \text{generated} & \text{if } (d^n, y^n) \in (D_T, Y_G), \end{cases}$$
where $\varphi$ denotes the model parameters of DC, and $(D_S, Y_S)$, $(D_T, Y_T)$ are the DA-utterance pairs from the source and target domain, respectively. Note also that the scalar probability $C_d(\text{target}|d^n, y^n)$ indicates how likely the DA-utterance pair $(d^n, y^n)$ is to come from the target domain.

Training Variational Generator
We utilize the Monte Carlo method to approximate the expectation over the posterior in Eq. 2, i.e., $\mathbb{E}_{q_\phi(z|d,y)}[\cdot] \simeq \frac{1}{M}\sum_{m=1}^{M} \log p_\theta(y|d, h_z^{(m)})$, where $M$ is the number of samples. In this study, the joint training objective for a training instance $(d, y)$ is formulated as follows:
$$\mathcal{L}(\theta, \phi) \simeq -\mathrm{KL}(q_\phi(z|d, y)\,\|\,p_\theta(z|d)) + \frac{1}{M}\sum_{m=1}^{M}\sum_{t=1}^{T_y} \log p_\theta(y_t|y_{<t}, d, h_z^{(m)}) \quad (13)$$
where $h_z^{(m)} = \mu + \sigma \odot \epsilon^{(m)}$, and $\epsilon^{(m)} \sim \mathcal{N}(0, I)$.
The first term is the KL divergence between two Gaussian distributions, and the second term is the approximated expectation. We simply set M = 1, which degenerates the second term to the objective of a conventional generator. Since the objective function in Eq. 13 is differentiable, we can jointly optimize the parameter $\theta$ and the variational parameter $\phi$ using standard gradient ascent techniques.

Adversarial Training
Our domain adaptation architecture is demonstrated in Figure 1, in which the generator $G$ and the critics $C_s$ and $C_d$ are jointly trained by pursuing competing goals as follows. Given a dialogue act $d_T$ in the target domain, the generator generates $K$ sentences $y$. It would prefer a "good" generated sentence $y$ if the values of $C_d(\text{target}|d_T, y)$ and $C_s(1|y_T, y)$ are large. In contrast, the critics would prefer large values of $C_d(\text{generated}|d_T, y)$ and $C_s(1|y, y_S)$, which imply small values of $C_d(\text{target}|d_T, y)$ and $C_s(1|y_T, y)$. We propose a domain-adversarial training procedure to iteratively update the generator and critics as described in Algorithm 1. While the parameters of the generator are optimized to minimize their loss on the training set, the parameters of the critics are optimized to minimize the error of text similarity and to maximize the loss of the domain classifier.

[Algorithm 1, excerpt: 7: (G1) Compute $g_G = \{\nabla_\theta \mathcal{L}(\theta, \phi), \nabla_\phi \mathcal{L}(\theta, \phi)\}$ using Eq. 13; 8: (G2) Adam update of $\theta, \phi$ for $G$ using $g_G$; 9: (S1) Compute $g_s = \nabla_\psi \mathcal{L}_s(\psi)$ using Eq. 11 for $(y_T, y_S)$; 10: (S2) Adam update of $\psi$ for $C_s$ using $g_s$; 11: $Y_G \leftarrow \{\hat{y}_k\}_{k=1}^{K}$, where $\hat{y}_k \sim G(\cdot|d_T^{(i)}, \cdot)$.]

Generally, at each training iteration $i$ the current generator $G$ takes a target dialogue act $d_T^{(i)}$ as input to over-generate a set $Y_G$ of $K$ candidate sentences (step 11). We then choose the top $k$ best sentences in the $Y_G$ set (step 12) after re-ranking, and measure how "good" the generated sentences are by using the critics (steps 14-15). These "good" signals from the critics can guide the generator, step by step, to generate outputs which resemble the sentences drawn from the target domain. Note that the re-ranking step is important for separating the "correct" sentences from the current generated outputs $Y_G$ by penalizing generated sentences which have redundant or missing slots.

Experiments
We conducted experiments on the proposed models in different scenarios: Adaptation, Scratch, and All, using several model architectures, evaluation metrics, datasets (Wen et al., 2016a), and configurations (see Appendix A). The KL cost annealing strategy (Bowman et al., 2015) encourages the model to encode meaningful representations into the latent vector $z$; we gradually anneal the weight of the KL term from 0 to 1. This helps our model to achieve solutions with a non-zero KL term. The gradient reversal layer (Ganin et al., 2016) leaves the input unchanged during forward propagation and reverses the gradient by multiplying it with a negative scalar $-\lambda_p$ during back-propagation-based training. We set the domain adaptation parameter $\lambda_p$, which gradually increases from 0 to 1, by using the following schedule for each training step $i$: $p = i / \mathrm{num\_steps}$ and $\lambda_p = \frac{2}{1 + \exp(-10 \cdot p)} - 1$, where num_steps is a constant set to 8600 and $p$ is the training progress. This strategy allows the Domain Critic to be less sensitive to noisy signals at the early training stages.
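A small sketch of the gradient reversal layer and of the two schedules described above (the $\lambda_p$ ramp and KL cost annealing) is given below; the gradient reversal implementation follows the common pattern for (Ganin et al., 2016)-style layers, and the linear form of the KL annealing schedule is an assumption, since the text only states that the weight is annealed from 0 to 1.

```python
import math
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda_p on the way back."""
    @staticmethod
    def forward(ctx, x, lambda_p):
        ctx.lambda_p = lambda_p
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_p * grad_output, None

def grad_reverse(x, lambda_p):
    return GradReverse.apply(x, lambda_p)

def lambda_schedule(step, num_steps=8600):
    """Domain-adaptation weight lambda_p: ramps from 0 to 1 as training progresses."""
    p = float(step) / num_steps
    return 2.0 / (1.0 + math.exp(-10.0 * p)) - 1.0

def kl_weight(step, anneal_steps):
    """KL cost annealing: increase the KL term's weight from 0 to 1 (linear ramp assumed)."""
    return min(1.0, step / float(anneal_steps))
```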
6 Results and Analysis Integrating Variational Inference We compare the original model RALSTM with its modification by integrating Variational Inference (VRALSTM) as demonstrated in Table 2 and Table 1-(a). It clearly shows that the VRALSTM not only preserves the power of the original RALSTM on generation task since its performances are very competitive to those of RALSTM, but also provides a compelling evidence on adapting to a new, unseen domain when the target domain data is scarce, i.e., from 1% to 7%. Table 3, sec. 3 further shows the necessity of the integrating in which the VRALSTM achieved a significant improvement over the RALSTM in Scratch scenario, and of the adversarial domain adaptation algorithm in which although both the RALSTM and VRALSTM model can perform well when providing sufficient in-domain training data (Table 2), the performances are extremely impaired when training from Scratch with only a limited data. These indicate that the proposed variational method can learn the underlying semantic of DAutterance pairs in the source domain via the representation of the latent variable z, from which when adapting to another domain, the models can leverage the existing knowledge to guide the generation process. Ablation Studies The ablation studies (Table 3, sec. 1, 2) demonstrate the contribution of two Critics, in which the models were assessed with either no Critics or both or only one. It clearly sees that combining both Critics makes a substantial contribution to increasing the BLEU score and decreasing the slot error rate by a large margin in every dataset pairs. A comparison of model adapting from source Laptop domain between VRALSTM without Critics (Laptop ) and VDANLG (Laptop ) evaluated on the target Hotel domain shows that the VDANLG not only has better performance with much higher the BLEU score, 82.18 in comparison to 78.70, but also significantly reduce the ERR, from 15.17% down to 2.89%. The trend is consistent across all the other domain pairs. These stipulate the necessary Critics in effective learning to adapt to a new domain. Table 3, sec. 4 further demonstrates that using DC only (sec. 4) brings a benefit of effectively utilizing similar slot-value pairs seen in the training data to closer domain pairs such as: Hotel→Restaurant (68.23 BLEU, 4.97 ERR), Restaurant→Hotel (80.31 BLEU, 6.71 ERR), Laptop→Tv (51.14 BLEU, 10.07 ERR), and Tv→Laptop (50.01 BLEU, 15.40 ERR) pairs. Whereas it is inefficient for the longer domain pairs since their performances are worse than those without Critics, or in some cases even worse than the VRALSTM in scr10 scenario, such as Restaurant→Tv (41.69 BLEU, 34.74 ERR), and the cases where Laptop to be a Target domain. On the other hand, using only SC (sec. 5) helps the models achieve better results since it is aware of the sentence style when adapting to the target domain. Distance of Dataset Pairs To better understand the effectiveness of the methods, we analyze the learning behavior of the proposed model between different dataset pairs. The datasets' order of difficulty was, from easiest to hardest: Hotel↔Restaurant↔Tv↔Laptop. On the one hand, it might be said that the longer datasets' distance is, the more difficult of domain adaptation task becomes. This clearly shows in Table 3, sec. 1, at Hotel column where the adaptation ability gets worse regarding decreasing the BLEU score and increasing the ERR score alongside the order of Restaurant→Tv→Laptop datasets. 
On the other hand, the closer the dataset pair is, the faster the model can adapt. It can be expected that the model can better adapt to the target Tv/Laptop domain from source Laptop/Tv than from source Restaurant or Hotel, and vice versa, that the model can more easily adapt to the target Restaurant/Hotel domain from source Hotel/Restaurant than from Laptop or Tv. However, this is not always the case, since the proposed method can perform acceptably well from the easy source domains (Hotel, Restaurant) to the more difficult target domains (Tv, Laptop) and vice versa (Table 3, sec. 1, 2). Table 3, sec. 2 further shows that the proposed method is able to leverage out-of-domain knowledge, since the adaptation models trained on a union source dataset, such as [R+H] or [L+T], show better performances than those trained on an individual source domain. A specific example can be found in Table 3.

Adaptation vs. All Training Scenario
It is interesting to compare Adaptation (Table 3, sec. 2) with the All training scenario (Table 2). The VDANLG model shows a considerable ability to shift to another domain with a limited amount of in-domain labels; its results are competitive with, or in some cases better than, those of the previous models trained on the full labels of the Target domain. A specific comparison on the Tv domain shows that the VDANLG model trained on the source Laptop achieved better performance, at 52.43 BLEU and 1.52 ERR, than HLSTM (52.40, 2.65), SCLSTM (52.35, 2.41), and (…, 3.38). The VDANLG models, in many cases, also have lower slot error rate (ERR) scores than the Enc-Dec model. These indicate the stable strength of the VDANLG models in adapting to a new domain when the target domain data is scarce.

Unsupervised Domain Adaptation
We further examine the effectiveness of the proposed methods by training the VDANLG models on target Counterfeit datasets (Wen et al., 2016a). The promising results are shown in Table 1-(b), despite the fact that the models were instead adaptation-trained on the Counterfeit datasets, or in other words, were only indirectly trained on the (Test) domains. However, the proposed models still showed positive signs in remarkably reducing the slot error rate ERR in the cases where Hotel and Tv are the (Test) domains. Surprisingly, even though the source domains (Hotel/Restaurant) are far from the (Test) domain Tv, and the Target domain Counterfeit L2T is also very different from the source domains, the model can still adapt acceptably well, since its BLEU scores on the (Test) Tv domain reached (41.83/42.11) and it also produced very low ERR scores (2.38/2.74). This phenomenon will be further investigated in the unsupervised scenario in future work.

Comparison on Generated Outputs
On the one hand, the VRALSTM models (trained from Scratch or adaptation-trained from Source domains) produce outputs with a diverse range of error types, including missing, misplaced, redundant, or wrong slots, or even spelling mistakes, leading to a very high slot error rate (ERR) score. Specifically, the VRALSTM trained from Scratch tends to produce repeated slots and also many missing slots in the generated outputs, since the training data may be inadequate for the model to generally handle the unseen dialog acts. [Example generated utterance: "the tecra erebus 20 is for business computing , has a 2 gb of memory. the satellite heracles 45 has 4 gb of memory , is not for business computing. which one do you want"] Whereas the VRALSTM models without Critics, adaptation-trained from Source domains (denoted in Table 4 and Appendix B.
Table 5) tend to generate the outputs with fewer error types than the model from Scratch because the VRALSTM models may capture the overlap slots of both source and target domain during adaptation training. On the other hand, under the guidance of the Critics (SC and DC) in an adversarial training procedure, the VDANLG model (denoted by ) can effectively leverage the existing knowledge of the source domains to better adapt to the target domains. The VDANLG models can generate the outputs in style of the target domain with much fewer the error types compared with the two above models. Moreover, the VDANLG models seem to produce satisfactory utterances with more correct generated slots. For example, a sample outputted by the [R+H] in Table 4-example 1 contains all the required slots with only a misplaced information of two slots 2 gb and 4 gb, while the generated output produced by Hotel is a successful generation. Another samples in Appendix B. Table 5 generated by the Hotel , Tv , [R+H] (in DA 2) and Laptop (DA 3) models are all fulfilled responses. An analysis of the generated responses in Table 5-example 2 illustrates that the VDANLG models seem to generate a concise response since the models show a tendency to form some potential slots into a concise phrase, i.e., "SLOT NAME SLOT TYPE". For example, the VDANLG models tend to concisely response as "the portege phosphorus 43 laptop ..." instead of "the portege phosphorus 43 is a laptop ...". All these above demonstrate that the VDANLG models have ability to produce better results with a much lower of the slot error rate ERR score. Conclusion and Future Work We have presented an integrating of a variational generator and two Critics in an adversarial training algorithm to examine the model ability in domain adaptation task. Experiments show that the proposed models can perform acceptably well in a new, unseen domain by using a limited amount of in-domain data. The ablation studies also demonstrate that the variational generator contributes to effectively learn the underlying semantic of DA-utterance pairs, while the Critics show its important role of guiding the model to adapt to a new domain. The proposed models further show a positive sign in unsupervised domain adaptation, which would be a worthwhile study in the future.
4,611
Neural variational frameworks for generative models of text have been studied extensively. Prior work proposed a recurrent latent variable model, VRNN, for sequential data by integrating latent random variables into the hidden state of an RNN model. A hierarchical multiscale recurrent neural network was proposed to learn both hierarchical and temporal representations @cite_23 . Other work introduced a variational neural machine translation model that incorporates a continuous latent variable to model the underlying semantics of sentence pairs, and presented a variational autoencoder for an unsupervised generative language model.
{ "abstract": [ "Learning both hierarchical and temporal representation has been among the long-standing challenges of recurrent neural networks. Multiscale recurrent neural networks have been considered as a promising approach to resolve this issue, yet there has been a lack of empirical evidence showing that this type of models can actually capture the temporal dependencies by discovering the latent hierarchical structure of the sequence. In this paper, we propose a novel multiscale approach, called the hierarchical multiscale recurrent neural networks, which can capture the latent hierarchical structure in the sequence by encoding the temporal dependencies with different timescales using a novel update mechanism. We show some evidence that our proposed multiscale architecture can discover underlying hierarchical structure in the sequences without using explicit boundary information. We evaluate our proposed model on character-level language modelling and handwriting sequence modelling." ], "cite_N": [ "@cite_23" ], "mid": [ "2510842514" ] }
Table 5) tend to generate the outputs with fewer error types than the model from Scratch because the VRALSTM models may capture the overlap slots of both source and target domain during adaptation training. On the other hand, under the guidance of the Critics (SC and DC) in an adversarial training procedure, the VDANLG model (denoted by ) can effectively leverage the existing knowledge of the source domains to better adapt to the target domains. The VDANLG models can generate the outputs in style of the target domain with much fewer the error types compared with the two above models. Moreover, the VDANLG models seem to produce satisfactory utterances with more correct generated slots. For example, a sample outputted by the [R+H] in Table 4-example 1 contains all the required slots with only a misplaced information of two slots 2 gb and 4 gb, while the generated output produced by Hotel is a successful generation. Another samples in Appendix B. Table 5 generated by the Hotel , Tv , [R+H] (in DA 2) and Laptop (DA 3) models are all fulfilled responses. An analysis of the generated responses in Table 5-example 2 illustrates that the VDANLG models seem to generate a concise response since the models show a tendency to form some potential slots into a concise phrase, i.e., "SLOT NAME SLOT TYPE". For example, the VDANLG models tend to concisely response as "the portege phosphorus 43 laptop ..." instead of "the portege phosphorus 43 is a laptop ...". All these above demonstrate that the VDANLG models have ability to produce better results with a much lower of the slot error rate ERR score. Conclusion and Future Work We have presented an integrating of a variational generator and two Critics in an adversarial training algorithm to examine the model ability in domain adaptation task. Experiments show that the proposed models can perform acceptably well in a new, unseen domain by using a limited amount of in-domain data. The ablation studies also demonstrate that the variational generator contributes to effectively learn the underlying semantic of DA-utterance pairs, while the Critics show its important role of guiding the model to adapt to a new domain. The proposed models further show a positive sign in unsupervised domain adaptation, which would be a worthwhile study in the future.
4,611
1808.02586
2949760630
Domain adaptation arises when we aim to learn, from a source domain, a model that performs acceptably well on a different target domain. It is especially crucial for Natural Language Generation (NLG) in Spoken Dialogue Systems when there is sufficient annotated data in the source domain but only limited labeled data in the target domain. How to effectively utilize as much of the existing knowledge from source domains as possible is a central issue in domain adaptation. In this paper, we propose an adversarial training procedure to train a variational encoder-decoder based language generator via multiple adaptation steps. In this procedure, a model is first trained on source domain data and then fine-tuned on a small set of target domain utterances under the guidance of two proposed critics. Experimental results show that the proposed method can effectively leverage the existing knowledge in the source domain to adapt to another related domain using only a small amount of in-domain data.
Adversarial adaptation methods have shown promising improvements in many machine learning applications despite the presence of domain shift or dataset bias: they reduce the difference between the training and test domain distributions and thus improve generalization performance. Prior work proposed an improved unsupervised domain adaptation method that learns a discriminative mapping of target images to the source feature space by fooling a domain discriminator that tries to differentiate the encoded target images from source examples. We borrow the idea of @cite_9 , where a domain-adversarial neural network is proposed to learn features that are discriminative for the main learning task on the source domain and indiscriminate with respect to the shift between domains.
{ "abstract": [ "We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application." ], "cite_N": [ "@cite_9" ], "mid": [ "1731081199" ] }
Adversarial Domain Adaptation for Variational Neural Language Generation in Dialogue Systems
Traditionally, Spoken Dialogue Systems (SDS) are developed for specific domains, such as finding a hotel or searching for a restaurant (Wen et al., 2015a), buying a tv or laptop (Wen et al., 2015b), or flight reservations (Levin et al., 2000). Such systems usually require a well-defined ontology, which is essentially a structured representation of the data that the dialogue system can converse about. Statistical approaches to multi-domain SDS have shown promising results on efficiently reusing data within a domain-scalable framework (Young et al., 2013). Mrkšić et al. (2015) addressed multi-domain SDS belief tracking by training a general model and adapting it to each domain. Recently, methods based on Recurrent Neural Networks (RNNs) have shown improved results in tackling the domain adaptation issue (Chen et al., 2015; Shi et al., 2015; Wen et al., 2016a; Wen et al., 2016b). Such generators have also achieved promising results when adequate annotated datasets are available (Wen et al., 2015b; Wen et al., 2015a; Tran and Nguyen, 2017a; Tran and Nguyen, 2017b). More recently, the development of the variational autoencoder (VAE) framework (Kingma and Welling, 2013; Rezende and Mohamed, 2015) has paved the way for learning large-scale, directed latent variable models, bringing considerable benefits to natural language processing (Bowman et al., 2015; Miao et al., 2016; Purushotham et al., 2017; Mnih and Gregor, 2014) and dialogue systems (Wen et al., 2017; Serban et al., 2017). This paper presents an adversarial training procedure to train a variational neural language generator via multiple adaptation steps, which enables the generator to learn more efficiently when in-domain data is in short supply. In summary, we make the following contributions: (1) we propose a variational approach to the NLG problem which helps the generator adapt faster to a new, unseen domain despite scarce target resources; (2) we propose two critics in an adversarial training procedure, which guide the generator to produce outputs that resemble the sentences drawn from the target domain; (3) we propose a unifying variational domain adaptation architecture which performs acceptably well in a new, unseen domain using a limited amount of in-domain data; (4) we investigate the effectiveness of the proposed method in different scenarios, including ablation, domain adaptation, scratch, and unsupervised training with various amounts of data.
Variational Domain-Adaptation Neural Language Generator Drawing inspiration from the variational autoencoder (Kingma and Welling, 2013), we assume that there exists a continuous latent variable z from an underlying semantic space of Dialogue Act (DA) and utterance pairs (d, y), and we explicitly model this space together with the variable d to guide the generation process, i.e., p(y|z, d).
With this assumption, the original conditional probability can be reformulated as follows:

p(y|d) = ∫_z p(y, z|d) dz = ∫_z p(y|z, d) p(z|d) dz   (1)

This latent variable enables us to model the underlying semantic space as a global signal for generation, and the variational lower bound of the variational generator can be formulated as follows:

L_VAE(θ, φ, d, y) = −KL(q_φ(z|d, y) || p_θ(z|d)) + E_{q_φ(z|d,y)}[log p_θ(y|z, d)]   (2)

where p_θ(z|d) is the prior model, q_φ(z|d, y) is the posterior approximator, p_θ(y|z, d) is the decoder guided by the global signal z, and KL(Q||P) is the Kullback-Leibler divergence between Q and P.
Variational Neural Encoder The variational neural encoder encodes a given input sequence w_1, w_2, .., w_L into continuous vectors. In this work, we use a 1-layer Bidirectional LSTM (BiLSTM) to encode the sequence embedding. The BiLSTM consists of forward and backward LSTMs, which read the sequence left-to-right and right-to-left to produce forward and backward sequences of hidden states; we then set h^E = (h_1, h_2, .., h_L), where h_i is the sum of the forward and backward hidden states at position i. We utilize this encoder to represent both the sequence of slot-value pairs {sv_i}_{i=1}^{T_DA} in a given Dialogue Act and the corresponding utterance {y_i}_{i=1}^{T_Y} (see the red parts in Figure 1). We finally apply mean-pooling over the BiLSTM hidden vectors to obtain the representations h_D = (1/T_DA) ∑_{i}^{T_DA} h_i and h_Y = (1/T_Y) ∑_{i}^{T_Y} h_i. The encoder, accordingly, produces both the DA representation vector, which flows into the inferer and decoder, and the utterance representation, which streams to the posterior approximator.
Variational Neural Inferer In this section, we describe our approach to modeling both the prior p_θ(z|d) and the posterior q_φ(z|d, y) with neural networks. Neural Posterior Approximator Modeling the true posterior p(z|d, y) is usually intractable, and traditional mean-field approaches fail to capture the true posterior distribution of z due to their oversimplified assumptions. Following Kingma and Welling (2013), we employ a neural network to approximate the posterior distribution of z and simplify posterior inference. We assume the approximation has the following form:

q_φ(z|d, y) = N(z; μ(f(h_D, h_Y)), σ²(f(h_D, h_Y)) I)   (3)

where the mean μ and standard deviation σ are outputs of a neural network based on the representations h_D and h_Y. The function f is a non-linear transformation that projects both the DA and utterance representations onto the latent space:

h_z = f(h_D, h_Y) = g(W_z [h_D; h_Y] + b_z)   (4)

where W_z ∈ R^{d_z × (d_{h_D} + d_{h_Y})} and b_z ∈ R^{d_z} are the weight matrix and bias, d_z is the dimensionality of the latent space, and g(·) is an element-wise activation function which we set to ReLU in our experiments. In this latent space, we obtain the diagonal Gaussian parameters μ and log σ² through linear regression:

μ = W_μ h_z + b_μ,   log σ² = W_σ h_z + b_σ   (5)

where μ and log σ² are both d_z-dimensional vectors. Neural Prior Model We model the prior as follows:

p_θ(z|d) = N(z; μ′(d), σ′(d)² I)   (6)

where μ′ and σ′ of the prior are neural models based on the DA representation only; they take the same form as those of the posterior q_φ(z|d, y) in Eq. 4 and Eq. 5, except for the absence of h_Y.
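For concreteness, the Gaussian parameterization of Eqs. 3-6 can be written in a few lines of PyTorch. This is a minimal sketch under our own names and dimensions, not the authors' released code; the posterior conditions on the concatenation [h_D; h_Y], while the prior sees h_D only.

import torch
import torch.nn as nn

class GaussianInferer(nn.Module):
    def __init__(self, d_in, d_z):
        super().__init__()
        self.proj = nn.Linear(d_in, d_z)     # h_z = ReLU(W_z [h_D; h_Y] + b_z), Eq. 4
        self.mu = nn.Linear(d_z, d_z)        # mu = W_mu h_z + b_mu, Eq. 5
        self.logvar = nn.Linear(d_z, d_z)    # log sigma^2 = W_sigma h_z + b_sigma, Eq. 5

    def forward(self, h):
        h_z = torch.relu(self.proj(h))
        return self.mu(h_z), self.logvar(h_z)

d_h, d_z = 128, 64                           # illustrative sizes
posterior = GaussianInferer(2 * d_h, d_z)    # conditioned on [h_D; h_Y]
prior = GaussianInferer(d_h, d_z)            # conditioned on h_D only

h_D, h_Y = torch.randn(8, d_h), torch.randn(8, d_h)   # mean-pooled encoder outputs
mu_q, logvar_q = posterior(torch.cat([h_D, h_Y], dim=-1))
mu_p, logvar_p = prior(h_D)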
To acquire a representation of the latent variable z, we utilize the same technique as proposed in the VAE (Kingma and Welling, 2013) and re-parameterize it as follows:

h_z = μ + σ ⊙ ε,   ε ∼ N(0, I)   (7)

In addition, we set h_z to the mean of the prior p_θ(z|d), i.e., μ′, during decoding, due to the absence of the utterance y. Intuitively, by parameterizing the hidden distribution this way, we can back-propagate the gradient to the parameters of the encoder and train the whole network with stochastic gradient descent. Note that the parameters of the prior and the posterior are independent of each other. In order to integrate the latent variable h_z into the decoder, we use a non-linear transformation to project it onto the output space for generation:

h_e = g(W_e h_z + b_e)   (8)

where h_e ∈ R^{d_e}. It is important to notice that, due to the sampling noise ε, the representation h_e is not fixed for the same input DA and model parameters. This helps the model learn to quickly adapt to a new domain (see Table 1-(a) and Table 3, sec. 3).
Variational Neural Decoder Given a DA d and the latent variable z, the decoder calculates the probability of the generated utterance y as a product of ordered conditionals:

p(y|z, d) = ∏_{t=1}^{T_Y} p(y_t|y_{<t}, z, d)   (9)

where p(y_t|y_{<t}, z, d) = g′(RNN(y_t, h_{t−1}, d_t)). In this paper, we borrow the d_t calculation and the computational RNN cell from (Tran and Nguyen, 2017a), where RNN(.) = RALSTM(.), with a slight modification in order to integrate the representation of the latent variable, i.e., h_e, into the RALSTM cell, which is denoted by the bold dashed orange arrow in Figure 1-(iii). We modify the cell calculation as follows:

(i_t, f_t, o_t, c_t) = (σ, σ, σ, tanh)(W_{4d_h,4d_h} [h_e; d_t; h_{t−1}; y_t])   (10)

where i_t, f_t, o_t are the input, forget and output gates respectively, d_h is the hidden layer size, and W_{4d_h,4d_h} is a model parameter. The resulting Variational RALSTM (VRALSTM) model is illustrated in Figure 1-(i), (ii), (iii), in which the latent variable can affect the hidden representation through the gates. This allows the model to indirectly take advantage of the underlying semantic information from the latent variable z. In addition, when the model learns to adapt to a new domain with unseen dialogue acts, the semantic representation h_e can help to guide the generation process (see sec. 6.3 for details).
Critics In this section, we introduce a text-similarity critic and a domain critic to guarantee, as much as possible, that the generated sentences resemble the sentences drawn from the target domain. Text similarity critic To check the relevance between a sentence pair from two domains, and to encourage the model to generate sentences in a style highly similar to those in the target domain, we propose a Text Similarity Critic (SC) that classifies a pair (y^(1), y^(2)) as 1-similar or 0-unsimilar in text style. The SC model consists of two parts: a BiLSTM shared with the Variational Neural Encoder to represent the y^(1) sentence, and a second BiLSTM to encode the y^(2) sentence. The SC model takes as input a pair (y^(1), y^(2)) of ([target], source), ([target], generated), or ([generated], source). Note that we give priority to encoding the y^(1) sentence in [.] using the shared BiLSTM, which guides the model to learn the sentence style from the target domain and also contributes target-domain information to the global latent variables.
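The following is a rough PyTorch sketch of such a two-branch similarity critic; it is our own simplification (class name, pooling, and the way the two sentence representations are combined are assumptions), not the authors' implementation, and the first BiLSTM would in practice be the one shared with the variational encoder.

import torch
import torch.nn as nn

class TextSimilarityCritic(nn.Module):
    def __init__(self, d_emb=64, d_h=128, shared_bilstm=None):
        super().__init__()
        # enc1 would be shared with the Variational Neural Encoder in the full model
        self.enc1 = shared_bilstm or nn.LSTM(d_emb, d_h, bidirectional=True, batch_first=True)
        self.enc2 = nn.LSTM(d_emb, d_h, bidirectional=True, batch_first=True)
        self.clf = nn.Linear(4 * d_h, 2)      # two labels: 0-unsimilar vs 1-similar

    def encode(self, lstm, x):
        h, _ = lstm(x)                        # (B, T, 2*d_h)
        return h.mean(dim=1)                  # mean-pooling over time steps

    def forward(self, y1, y2):                # y1, y2: (B, T, d_emb) embedded sentences
        r1 = self.encode(self.enc1, y1)       # representation of y^(1) (target side)
        r2 = self.encode(self.enc2, y2)       # representation of y^(2)
        return self.clf(torch.cat([r1, r2], dim=-1))

critic = TextSimilarityCritic()
logits = critic(torch.randn(4, 10, 64), torch.randn(4, 12, 64))   # (4, 2) pair logits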
We further utilize Siamese recurrent architectures (Neculoiu et al., 2016) for learning sentence similarity, which allows us to learn useful representations with limited supervision. Domain critic In consideration of the shift between domains, we introduce a Domain Critic (DC) to classify a sentence as belonging to the source, target, or generated domain. Drawing inspiration from the work of (Ganin et al., 2016), we model the DC with a gradient reversal layer and two standard feed-forward layers. It is important to note that our DC model shares parameters with the Variational Neural Encoder and the Variational Neural Inferer. The DC model takes as input a pair of a given DA and the corresponding utterance, concatenates their representation with the latent variable in the output space, and passes the result through a feed-forward layer and a 3-label classifier. In addition, the gradient reversal layer, which multiplies the gradient by a negative value during back-propagation, makes the feature distributions over the two domains as similar and indistinguishable as possible for the domain critic, hence resulting in domain-invariant features.
Training Domain Adaptation Model Given a training instance represented by a pair of DA and sentence (d^(i), y^(i)) from the rich source domain S and the limited target domain T, the task is to find a set of parameters Θ_T that performs acceptably well on the target domain. Training Critics We give the training objectives of SC and DC below. For SC, the goal is to classify a sentence pair as having 1-similar or 0-unsimilar textual style. This can be formulated as a supervised classification objective:

L_s(ψ) = −∑_{n=1}^{N} log C_s(l_s^n | y_(1)^n, y_(2)^n, ψ),
with l_s^n = 1-similar if (y_(1)^n, y_(2)^n) ∈ P_sim and 0-unsimilar if (y_(1)^n, y_(2)^n) ∈ P_unsim,
where Y_G = {y | y ∼ G(·|d_T, ·)}, P_sim = {y_T^n, y_{Y_G}^n}, P_unsim = ({y_T^n, y_S^n}, {y_{Y_G}^n, y_S^n})   (11)

where N is the number of sentences, ψ denotes the model parameters of SC, and Y_G denotes sentences generated by the current generator G given a target-domain dialogue act d_T. The scalar probability C_s(1|y_T^n, y_{Y_G}^n) indicates how relevant a generated sentence y_{Y_G}^n is to a target sentence y_T^n. The DC critic aims at classifying a DA-utterance pair into the source, target, or generated domain. This can also be formulated as a supervised classification objective:

L_d(ϕ) = −∑_{n=1}^{N} log C_d(l_d^n | d^n, y^n, ϕ),
with l_d^n = source if (d^n, y^n) ∈ (D_S, Y_S), target if (d^n, y^n) ∈ (D_T, Y_T), generated if (d^n, y^n) ∈ (D_T, Y_G)   (12)

where ϕ denotes the model parameters of DC, and (D_S, Y_S), (D_T, Y_T) are the DA-utterance pairs from the source and target domain, respectively. Note also that the scalar probability C_d(target|d^n, y^n) indicates how likely the DA-utterance pair (d^n, y^n) is to come from the target domain. Training Variational Generator We utilize the Monte Carlo method to approximate the expectation over the posterior in Eq. 2, i.e., E_{q_φ(z|d,y)}[·] ≈ (1/M) ∑_{m=1}^{M} log p_θ(y|d, h_z^(m)), where M is the number of samples. In this study, the joint training objective for a training instance (d, y) is formulated as follows:

L(θ, φ) ≈ −KL(q_φ(z|d, y) || p_θ(z|d)) + (1/M) ∑_{m=1}^{M} ∑_{t=1}^{T_y} log p_θ(y_t | y_{<t}, d, h_z^(m))   (13)

where h_z^(m) = μ + σ ⊙ ε^(m), and ε^(m) ∼ N(0, I). A small sketch of this objective is given below.
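The following minimal PyTorch sketch shows one way to compute the objective of Eq. 13 with M = 1: the analytic KL between the diagonal-Gaussian posterior and prior plus the token-level reconstruction log-likelihood. The names, shapes, and the kl_weight argument (used later for KL annealing) are our own illustrative choices, not the released code.

import torch
import torch.nn.functional as F

def kl_two_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, diag(sigma_q^2)) || N(mu_p, diag(sigma_p^2)) ), summed over latent dims
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0,
        dim=-1)

def joint_objective(logits, targets, mu_q, logvar_q, mu_p, logvar_p, kl_weight=1.0):
    # logits: (batch, T_y, vocab), targets: (batch, T_y); returns the negative of Eq. 13
    recon = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none").sum(dim=-1)
    kl = kl_two_gaussians(mu_q, logvar_q, mu_p, logvar_p)
    return (recon + kl_weight * kl).mean()

logits = torch.randn(4, 12, 300, requires_grad=True)
targets = torch.randint(0, 300, (4, 12))
mu_q, logvar_q = torch.randn(4, 64), torch.zeros(4, 64)
mu_p, logvar_p = torch.zeros(4, 64), torch.zeros(4, 64)
loss = joint_objective(logits, targets, mu_q, logvar_q, mu_p, logvar_p, kl_weight=0.5)
loss.backward()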
The first term of Eq. 13 is the KL divergence between two Gaussian distributions, and the second term is the approximated expectation. We simply set M = 1, which reduces the second term to the objective of a conventional generator. Since the objective function in Eq. 13 is differentiable, we can jointly optimize the model parameters θ and the variational parameters φ using standard gradient ascent techniques.
Adversarial Training Our domain adaptation architecture is shown in Figure 1, in which the generator G and the critics C_s and C_d are trained jointly by pursuing competing goals, as follows. Given a dialogue act d_T in the target domain, the generator generates K sentences y. It would prefer a "good" generated sentence y for which the values of C_d(target|d_T, y) and C_s(1|y_T, y) are large. In contrast, the critics would prefer large values of C_d(generated|d_T, y) and C_s(1|y, y_S), which imply small values of C_d(target|d_T, y) and C_s(1|y_T, y). We propose a domain-adversarial training procedure that iteratively updates the generator and critics, as described in Algorithm 1. While the parameters of the generator are optimized to minimize their loss on the training set, the parameters of the critics are optimized to minimize the error of text similarity and to maximize the loss of the domain classifier. The excerpted steps of Algorithm 1 read: (7) (G1) compute g_G = {∇_θ L(θ, φ), ∇_φ L(θ, φ)} using Eq. 13; (8) (G2) Adam update of θ, φ for G using g_G; (9) (S1) compute g_s = ∇_ψ L_s(ψ) using Eq. 11 for (y_T, y_S); (10) (S2) Adam update of ψ for C_s using g_s; (11) Y_G ← {y_k}_{k=1}^K, where y_k ∼ G(·|d_T, ·). Generally, at each training iteration i the current generator G takes a target dialogue act d_T^(i) as input and over-generates a set Y_G of K candidate sentences (step 11). We then choose the top-k best sentences in Y_G after re-ranking (step 12) and use the critics to measure how "good" the generated sentences are (steps 14-15). These "good" signals from the critics guide the generator, step by step, to generate outputs which resemble the sentences drawn from the target domain. Note that the re-ranking step is important for separating the "correct" sentences from the current generated outputs Y_G by penalizing generated sentences which have redundant or missing slots.
Experiments We conducted experiments on the proposed models in different scenarios (Adaptation, Scratch, and All) using several model architectures, evaluation metrics, datasets (Wen et al., 2016a), and configurations (see Appendix A). The KL cost annealing strategy (Bowman et al., 2015) encourages the model to encode meaningful representations into the latent vector z; we gradually anneal the KL term from 0 to 1, which helps our model reach solutions with a non-zero KL term. The gradient reversal layer (Ganin et al., 2016) leaves the input unchanged during forward propagation and reverses the gradient by multiplying it with a negative scalar −λ_p during back-propagation-based training. We set the domain adaptation parameter λ_p so that it gradually increases from 0 to 1 using the following schedule for each training step i: p = i / num_steps and λ_p = 2 / (1 + exp(−10 p)) − 1, where num_steps is a constant set to 8600 and p is the training progress. This strategy allows the Domain Critic to be less sensitive to noisy signals at the early training stages.
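As an illustration of the two heuristics just described, the following PyTorch sketch implements a gradient reversal layer and the λ_p schedule; the class and function names are ours, and the snippet is a simplified stand-in for the actual training code.

import math
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)                   # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None # reversed, scaled gradient in the backward pass

def lambda_p(step, num_steps=8600):
    p = float(step) / num_steps               # training progress in [0, 1]
    return 2.0 / (1.0 + math.exp(-10.0 * p)) - 1.0

x = torch.randn(4, 16, requires_grad=True)
y = GradReverse.apply(x, lambda_p(step=1000))
y.sum().backward()                            # x.grad now equals -lambda_p * ones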
6 Results and Analysis Integrating Variational Inference We compare the original RALSTM model with its modification that integrates variational inference (VRALSTM), as shown in Table 2 and Table 1-(a). The VRALSTM not only preserves the power of the original RALSTM on the generation task, since its performance is very competitive with that of RALSTM, but also provides compelling evidence of its ability to adapt to a new, unseen domain when the target domain data is scarce, i.e., from 1% to 7%. Table 3, sec. 3 further shows the necessity of this integration, since the VRALSTM achieves a significant improvement over the RALSTM in the Scratch scenario, and of the adversarial domain adaptation algorithm: although both the RALSTM and VRALSTM models perform well when sufficient in-domain training data is available (Table 2), their performance is severely impaired when training from Scratch with only limited data. This indicates that the proposed variational method can learn the underlying semantics of DA-utterance pairs in the source domain via the representation of the latent variable z, so that, when adapting to another domain, the models can leverage the existing knowledge to guide the generation process.
Ablation Studies The ablation studies (Table 3, sec. 1, 2) demonstrate the contribution of the two Critics; the models were assessed with no Critics, with both, or with only one. Combining both Critics makes a substantial contribution to increasing the BLEU score and decreasing the slot error rate by a large margin in every dataset pair. A comparison between VRALSTM without Critics and VDANLG, both adapting from the source Laptop domain and evaluated on the target Hotel domain, shows that the VDANLG not only performs better, with a much higher BLEU score of 82.18 compared to 78.70, but also significantly reduces the ERR, from 15.17% down to 2.89%. The trend is consistent across all the other domain pairs. This establishes that the Critics are necessary for effectively learning to adapt to a new domain. Table 3, sec. 4 further demonstrates that using the DC only brings the benefit of effectively utilizing similar slot-value pairs seen in the training data for closer domain pairs, such as Hotel→Restaurant (68.23 BLEU, 4.97 ERR), Restaurant→Hotel (80.31 BLEU, 6.71 ERR), Laptop→Tv (51.14 BLEU, 10.07 ERR), and Tv→Laptop (50.01 BLEU, 15.40 ERR). However, it is inefficient for more distant domain pairs, where the performance is worse than without Critics, or in some cases even worse than the VRALSTM in the scr10 scenario, such as Restaurant→Tv (41.69 BLEU, 34.74 ERR) and the cases where Laptop is the target domain. On the other hand, using only the SC (sec. 5) helps the models achieve better results since it is aware of the sentence style when adapting to the target domain.
Distance of Dataset Pairs To better understand the effectiveness of the methods, we analyze the learning behavior of the proposed model across different dataset pairs. The datasets' order of difficulty was, from easiest to hardest: Hotel↔Restaurant↔Tv↔Laptop. On the one hand, the greater the distance between datasets, the more difficult the domain adaptation task becomes. This is clear in Table 3, sec. 1, in the Hotel column, where the adaptation ability worsens, with decreasing BLEU and increasing ERR, in the order Restaurant→Tv→Laptop.
On the other hand, the closer the dataset pair, the faster the model can adapt. One would expect the model to adapt better to the target Tv/Laptop domain from the source Laptop/Tv domain than from the source Restaurant or Hotel domains, and, conversely, to adapt more easily to the target Restaurant/Hotel domain from the source Hotel/Restaurant domain than from Laptop or Tv. However, this does not always hold: the proposed method can still perform acceptably well when adapting from the easy source domains (Hotel, Restaurant) to the more difficult target domains (Tv, Laptop), and vice versa (Table 3, sec. 1, 2). Table 3, sec. 2 further shows that the proposed method is able to leverage out-of-domain knowledge, since adaptation models trained on a union source dataset, such as [R+H] or [L+T], perform better than those trained on an individual source domain. A specific example is given in Table 3.
Adaptation vs. All Training Scenario It is interesting to compare the Adaptation scenario (Table 3, sec. 2) with the All training scenario (Table 2). The VDANLG model shows a considerable ability to shift to another domain with a limited number of in-domain labels, with results that are competitive with, or in some cases better than, previous models trained on the full labels of the target domain. A specific comparison on the Tv domain shows that the VDANLG model trained on the source Laptop domain achieved better performance, at 52.43 BLEU and 1.52 ERR, than HLSTM (52.40, 2.65) and SCLSTM (52.35, 2.41). The VDANLG models, in many cases, also have lower slot error rate (ERR) scores than the Enc-Dec model. This indicates the stable strength of the VDANLG models in adapting to a new domain when the target domain data is scarce.
Unsupervised Domain Adaptation We further examine the effectiveness of the proposed methods by training the VDANLG models on target Counterfeit datasets (Wen et al., 2016a). The promising results are shown in Table 1-(b), despite the fact that the models were adaptation-trained on the Counterfeit datasets, or in other words, only indirectly trained on the (Test) domains. The proposed models still showed positive signs in remarkably reducing the slot error rate ERR in the cases where Hotel and Tv are the (Test) domains. Surprisingly, even though the source domains (Hotel/Restaurant) are far from the (Test) domain Tv, and the target Counterfeit L2T domain is also very different from the source domains, the model can still adapt acceptably well: its BLEU scores on the (Test) Tv domain reach (41.83/42.11), and it also produces very low ERR scores (2.38/2.74). This phenomenon will be further investigated in an unsupervised scenario in future work.
Comparison on Generated Outputs On the one hand, the VRALSTM models (trained from Scratch, or adaptation-trained from source domains) produce outputs with a diverse range of error types, including missing, misplaced, redundant, or wrong slots, and even misspelled slot information, leading to very high slot error rate (ERR) scores. Specifically, the VRALSTM trained from Scratch tends to produce repeated slots and many missing slots in the generated outputs (for example, the generated output "the tecra erebus 20 is for business computing , has a 2 gb of memory. the satellite heracles 45 has 4 gb of memory , is not for business computing. which one do you want"), since the training data may be inadequate for the model to handle the unseen dialogue acts in general. In contrast, the VRALSTM models without Critics that are adaptation-trained from source domains (denoted in Table 4 and Appendix B,
Table 5) tend to generate outputs with fewer error types than the model trained from Scratch, because the VRALSTM models may capture the slots that overlap between the source and target domains during adaptation training. On the other hand, under the guidance of the Critics (SC and DC) in an adversarial training procedure, the VDANLG model can effectively leverage the existing knowledge of the source domains to better adapt to the target domains. The VDANLG models generate outputs in the style of the target domain with far fewer error types compared with the two models above. Moreover, the VDANLG models tend to produce satisfactory utterances with more correctly generated slots. For example, a sample produced by the [R+H] model in Table 4-example 1 contains all the required slots, with only the information of the two slots 2 gb and 4 gb misplaced, while the output produced by the Hotel model is a fully successful generation. Other samples in Appendix B, Table 5, generated by the Hotel, Tv, [R+H] (DA 2) and Laptop (DA 3) models, are all complete responses. An analysis of the generated responses in Table 5-example 2 illustrates that the VDANLG models tend to generate concise responses, showing a tendency to merge some potential slots into a compact phrase, i.e., "SLOT NAME SLOT TYPE". For example, the VDANLG models tend to respond concisely with "the portege phosphorus 43 laptop ..." instead of "the portege phosphorus 43 is a laptop ...". All of the above demonstrates that the VDANLG models are able to produce better results with a much lower slot error rate (ERR).
Conclusion and Future Work We have presented the integration of a variational generator and two Critics in an adversarial training algorithm and examined the model's ability on the domain adaptation task. Experiments show that the proposed models can perform acceptably well in a new, unseen domain using a limited amount of in-domain data. The ablation studies also demonstrate that the variational generator contributes to effectively learning the underlying semantics of DA-utterance pairs, while the Critics play an important role in guiding the model to adapt to a new domain. The proposed models further show a positive sign for unsupervised domain adaptation, which would be a worthwhile study in future work.
4,611
1808.02632
2949402865
In this paper, we propose a novel Question-Guided Hybrid Convolution (QGHC) network for Visual Question Answering (VQA). Most state-of-the-art VQA methods fuse the high-level textual and visual features from the neural network and abandon the visual spatial information when learning multi-modal features. To address these problems, question-guided kernels generated from the input question are designed to convolve with visual features for capturing the textual and visual relationship in the early stage. The question-guided convolution can tightly couple the textual and visual information but also introduces more parameters when learning kernels. We apply group convolution, which consists of question-independent kernels and question-dependent kernels, to reduce the parameter size and alleviate over-fitting. The hybrid convolution can generate discriminative multi-modal features with fewer parameters. The proposed approach is also complementary to existing bilinear pooling fusion and attention based VQA methods. By integrating with them, our method could further boost the performance. Extensive experiments on public VQA datasets validate the effectiveness of QGHC.
Attention mechanisms @cite_13 @cite_34 were originally proposed for solving language-related tasks @cite_5 . Xu et al. @cite_13 introduce an attention mechanism for image captioning, showing that attention maps can be adaptively generated for predicting captioning words. Based on @cite_13 , Yang et al. @cite_1 propose to stack multiple attention layers so that each layer can focus on different regions adaptively. In @cite_32 , a co-attention mechanism is proposed: the model generates question attention and spatial attention masks so that salient words and regions can be jointly selected for more effective feature fusion. Similarly, Lu et al. @cite_10 employ a co-attention mechanism to simultaneously learn free-form and detection-based image regions related to the input question. In MCB @cite_33 , MLB @cite_28 , and MUTAN @cite_19 , attention mechanisms are adopted to partially recover the spatial information from the input image. Question-guided attention methods @cite_29 @cite_13 are proposed to generate attention maps from the question.
{ "abstract": [ "Modeling textual or visual information with vector representations trained from large language or visual datasets has been successfully explored in recent years. However, tasks such as visual question answering require combining these vector representations with each other. Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations. We hypothesize that these methods are not as expressive as an outer product of the visual and textual vectors. As the outer product is typically infeasible due to its high dimensionality, we instead propose utilizing Multimodal Compact Bilinear pooling (MCB) to efficiently and expressively combine multimodal features. We extensively evaluate MCB on the visual question answering and grounding tasks. We consistently show the benefit of MCB over ablations without MCB. For visual question answering, we present an architecture which uses MCB twice, once for predicting attention over spatial features and again to combine the attended representation with the question representation. This model outperforms the state-of-the-art on the Visual7W dataset and the VQA challenge.", "Bilinear models provide rich representations compared with linear models. They have been applied in various visual tasks, such as object recognition, segmentation, and visual question-answering, to get state-of-the-art performances taking advantage of the expanded representations. However, bilinear representations tend to be high-dimensional, limiting the applicability to computationally complex tasks. We propose low-rank bilinear pooling using Hadamard product for an efficient attention mechanism of multimodal learning. We show that our model outperforms compact bilinear pooling in visual question-answering tasks with the state-of-the-art results on the VQA dataset, having a better parsimonious property.", "Recently, the Visual Question Answering (VQA) task has gained increasing attention in artificial intelligence. Existing VQA methods mainly adopt the visual attention mechanism to associate the input question with corresponding image regions for effective question answering. The free-form region based and the detection-based visual attention mechanisms are mostly investigated, with the former ones attending free-form image regions and the latter ones attending pre-specified detection-box regions. We argue that the two attention mechanisms are able to provide complementary information and should be effectively integrated to better solve the VQA problem. In this paper, we propose a novel deep neural network for VQA that integrates both attention mechanisms. Our proposed framework effectively fuses features from free-form image regions, detection boxes, and question representations via a multi-modal multiplicative feature embedding scheme to jointly attend question-related free-form image regions and detection boxes for more accurate question answering. The proposed method is extensively evaluated on two publicly available datasets, COCO-QA and VQA, and outperforms state-of-the-art approaches. Source code is available at this https URL", "We propose a novel attention based deep learning architecture for visual question answering task (VQA). Given an image and an image related natural language question, VQA generates the natural language answer for the question. 
Generating the correct answers requires the model's attention to focus on the regions corresponding to the question, because different questions inquire about the attributes of different image regions. We introduce an attention based configurable convolutional neural network (ABC-CNN) to learn such question-guided attention. ABC-CNN determines an attention map for an image-question pair by convolving the image feature map with configurable convolutional kernels derived from the question's semantics. We evaluate the ABC-CNN architecture on three benchmark VQA datasets: Toronto COCO-QA, DAQUAR, and VQA dataset. ABC-CNN model achieves significant improvements over state-of-the-art methods on these datasets. The question-guided attention generated by ABC-CNN is also shown to reflect the regions that are highly relevant to the questions.", "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.", "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling \"where to look\" or visual attention, it is equally important to model \"what words to listen to\" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3 to 60.5 , and from 61.6 to 63.3 on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1 for VQA and 65.4 for COCO-QA.", "Bilinear models provide an appealing framework for mixing and merging information in Visual Question Answering (VQA) tasks. They help to learn high level associations between question meaning and visual concepts in the image, but they suffer from huge dimensionality issues. We introduce MUTAN, a multimodal tensor-based Tucker decomposition to efficiently parametrize bilinear interactions between visual and textual representations. Additionally to the Tucker framework, we design a low-rank matrix-based decomposition to explicitly constrain the interaction rank. With MUTAN, we control the complexity of the merging scheme while keeping nice interpretable fusion relations. We show how our MUTAN model generalizes some of the latest VQA architectures, providing state-of-the-art results.", "Neural machine translation is a recently proposed approach to machine translation. 
Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "", "We address the problem of Visual Question Answering (VQA), which requires joint image and language understanding to answer a question about a given photograph. Recent approaches have applied deep image captioning methods based on convolutional-recurrent networks to this problem, but have failed to model spatial inference. To remedy this, we propose a model we call the Spatial Memory Network and apply it to the VQA task. Memory networks are recurrent neural networks with an explicit attention mechanism that selects certain parts of the information stored in memory. Our Spatial Memory Network stores neuron activations from different spatial regions of the image in its memory, and uses the question to choose relevant regions for computing the answer, a process of which constitutes a single \"hop\" in the network. We propose a novel spatial attention architecture that aligns words with image patches in the first hop, and obtain improved results by adding a second attention hop which considers the whole question to choose visual evidence based on the results of the first hop. To better understand the inference process learned by the network, we design synthetic questions that specifically require spatial inference and visualize the attention weights. We evaluate our model on two published visual question answering datasets, DAQUAR [1] and VQA [2], and obtain improved results compared to a strong deep baseline model (iBOWIMG) which concatenates image and question features to predict the answer [3]." ], "cite_N": [ "@cite_33", "@cite_28", "@cite_10", "@cite_29", "@cite_1", "@cite_32", "@cite_19", "@cite_5", "@cite_34", "@cite_13" ], "mid": [ "2412400526", "2949197413", "2770883544", "2174492417", "2171810632", "2963668159", "2616125804", "2133564696", "", "2255577267" ] }
Question-Guided Hybrid Convolution for Visual Question Answering
Convolutional Neural Networks (CNN) [1] and Recurrent Neural Networks (RNN) [2] have shown great success in vision and language tasks. Recently, CNNs and RNNs have been jointly trained to learn feature representations for multi-modal tasks, including image captioning [3,4], text-to-image retrieval [5,33], and Visual Question Answering (VQA) [6,11,12,38]. Among the vision-language tasks, VQA is one of the most challenging problems. Instead of embedding images and their textual descriptions into the same feature subspace, as in the text-image matching problem [7,8,26], VQA requires algorithms to answer natural language questions about the visual contents. The methods are thus designed to understand both the questions and the image contents in order to reason about the underlying truth. To infer the answer from the input image and question, it is important to fuse the information from both modalities into joint representations. Answers can then be predicted by learning classifiers on the joint features. Early VQA methods [9] fuse textual and visual information by feature concatenation. State-of-the-art feature fusion methods, such as Multimodal Compact Bilinear pooling (MCB) [10], utilize bilinear pooling to learn multi-modal features. However, this type of method has two main limitations. First, the multi-modal features are fused in a late model stage, and the spatial information from visual features is lost before feature fusion: the visual features are usually obtained by averaging the output of the last pooling layer and represented as 1-d vectors, an operation that abandons the spatial information of the input images. Second, the textual and visual relationship is modeled only on the topmost layers and misses details from the low-level and mid-level layers. To solve these problems, we propose a feature fusion scheme that generates multi-modal features by applying question-guided convolutions on the visual features (see Figure 1). The mid-level visual features and language features are first learned independently using a CNN and an RNN, with the visual features designed to keep the spatial information. A series of kernels is then generated based on the language features and convolved with the visual features. Our model tightly couples the multi-modal features at an early stage to better capture the spatial information before feature fusion. One problem induced by the question-guided kernels is that the large number of parameters makes the model hard to train. Directly predicting "full" convolutional filters requires estimating thousands of parameters (e.g., 256 filters of size 3 × 3 convolving with a 256-channel input feature map). This is memory-inefficient and time-consuming, and does not result in satisfactory performance (as shown in our experiments). Motivated by group convolution [13,1,14], we decompose large convolution kernels into group kernels, each of which works on a small number of input feature maps. In addition, only a portion of these group convolution kernels (question-dependent kernels) are predicted from the question by the RNN, and the remaining kernels (question-independent kernels) are freely learned via back-propagation. Both question-dependent and question-independent kernels are shown to be important, and we name the proposed operation Question-guided Hybrid Convolution (QGHC). The visual and language features are deeply fused to generate discriminative multi-modal features, and the spatial relations between the input image and question can be well captured by the question-guided convolution.
Our experiments on VQA datasets validate the effectiveness of our approach and show the advantages of the proposed feature fusion over the state of the art. Our contributions can be summarized as threefold. 1) We propose a novel multi-modal feature fusion method based on question-guided convolution kernels. The relevant visual regions respond strongly to the input question, and spatial information can be well captured by encoding this connection in the QGHC model. QGHC explores deep multi-modal relationships, which benefits visual question reasoning. 2) To achieve memory efficiency and robust performance in the question-guided convolution, we propose group convolution to learn the kernel parameters. The question-dependent kernels model the relationship between visual and textual information, while the question-independent kernels reduce the parameter size and alleviate over-fitting. 3) Extensive experiments and ablation studies on the public datasets show the effectiveness of the proposed QGHC and each individual component. Our approach outperforms state-of-the-art methods using far fewer parameters.
Problem formulation Most state-of-the-art VQA methods rely on deep neural networks for learning discriminative features of the input image I and question q. Usually, Convolutional Neural Networks (CNN) are adopted for learning visual features, while Recurrent Neural Networks (RNN) (e.g., Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU)) encode the input question, i.e.,

f_v = CNN(I; θ_v),   (1)
f_q = RNN(q; θ_q),   (2)

where f_v and f_q represent the visual features and question features, respectively. Conventional ImageQA systems focus on designing robust feature fusion functions to generate multi-modal image-question features for answer prediction. Most state-of-the-art feature fusion methods fuse 1-d visual and language feature vectors in a symmetric way to generate the multi-modal representations. The 1-d visual features are usually generated by deep neural networks (e.g., GoogleNet and ResNet) with a global average pooling layer. Such visual features f_v, and the textual-visual features later fused from them, abandon the spatial information of the input image and are thus less robust to spatial variations.
Question-guided Hybrid Convolution (QGHC) for multi-modal feature fusion To fully utilize the spatial information of the input image, we propose language-guided hybrid convolution for feature fusion. Unlike bilinear pooling methods that treat visual and textual features in a symmetric way, our approach performs convolution on visual feature maps with convolution kernels predicted from the question features, which can be formulated as:

f_{v+q} = CNN_p(I; θ̂_v(f_q)),   (3)

where CNN_p denotes the output before the last pooling layer, θ̂_v(f_q) denotes the convolutional kernels predicted from the question feature f_q ∈ R^d, and convolving the visual feature maps with the predicted kernels θ̂_v(f_q) results in the multi-modal feature maps f_{v+q}. However, the naive solution of directly predicting "full" convolutional kernels is memory-inefficient and time-consuming: mapping the question features to full CNN kernels involves a huge number of learnable parameters. In our model, we use a fully-connected layer to learn the question-guided convolutional kernels.
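To make Eq. 3 concrete, the following PyTorch sketch predicts a full kernel bank from the question feature with a single fully-connected layer and convolves it with the visual feature map. All names and sizes are illustrative, not the paper's code; we use small channel counts so the sketch actually runs, and the size of the fc layer already hints at the parameter problem discussed next.

import torch
import torch.nn as nn
import torch.nn.functional as F

d_q, c_in, c_out, k = 512, 64, 64, 3     # small illustrative sizes; at paper scale
                                         # (2000-d question, 256 channels) this single
                                         # FC layer becomes enormous
fc = nn.Linear(d_q, c_out * c_in * k * k)    # maps f_q to a full kernel bank

def question_guided_conv(feat_map, f_q):
    # feat_map: (1, c_in, H, W) for a single image; f_q: (d_q,) question feature
    kernels = fc(f_q).view(c_out, c_in, k, k)   # question-dependent kernels theta_v(f_q)
    return F.conv2d(feat_map, kernels, padding=k // 2)

fused = question_guided_conv(torch.randn(1, c_in, 14, 14), torch.randn(d_q))
print(fused.shape, sum(p.numel() for p in fc.parameters()))   # FC weight count grows fast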
To predict a commonly used 3 × 3 × 256 × 256 kernel from a 2000-d question feature vector, the FC layer that learns this mapping contains 117 million parameters, which is hard to learn and causes over-fitting on existing VQA datasets. In our experiments, we validate that the performance of this naive solution is even worse than simple feature concatenation.

To mitigate the problem, we propose to predict the parameters of group convolution kernels. Group convolution divides the input feature maps into several groups along the channel dimension, so that each group has a reduced number of channels for convolution. The outputs of the convolutions over all groups are then concatenated along the channel dimension to produce the output feature maps.

In addition, we classify the convolution kernels into dynamically-predicted kernels and freely-updated kernels. The dynamic kernels are question-dependent and are predicted from the question feature vector f_q. The freely-updated kernels are question-independent; they are trained as conventional convolution kernels via back-propagation. The dynamically-predicted kernels fuse the textual and visual information at an early model stage and better capture the multi-modal relationships. The freely-updated kernels reduce the parameter size and ensure that the model can be trained efficiently. By shuffling parameters among these two kinds of kernels, our model achieves both accuracy and efficiency. During the testing phase, the dynamic kernels are decided by the questions, while the freely-updated kernels are fixed for all input image-question pairs. Formally, we substitute Eqn. (3) with the proposed QGHC for VQA,

f_{v+q} = CNN_g(I; θ̂_v(f_q), θ_v),    (4)
a = MLP(f_{v+q}),    (5)

where CNN_g denotes a group convolution network with dynamically-predicted kernels θ̂_v(f_q) and freely-updated kernels θ_v. The output f_{v+q} of the CNN fuses the textual and visual information and is used to infer the final answer. MLP is a multi-layer perceptron module and a is the predicted answer. The freely-updated kernels capture pre-trained image patterns, and we fix them during the testing stage. The dynamically-predicted kernels depend on the input questions and capture the question-image relationships. Our model fuses the textual and visual information at an early model stage via the convolution operation. The spatial information between the two modalities is well preserved, which leads to more accurate results than previous feature concatenation strategies. The combination of the dynamic and freely-updated kernels is crucial for keeping both accuracy and efficiency, and shows promising results in our experiments.

QGHC module

We stack multiple QGHC modules to better capture the interactions between the input image and question. Inspired by ResNet [27] and ResNeXt [14], our QGHC module consists of a series of 1 × 1, 3 × 3, and 1 × 1 convolutions. As shown in Figure 2, the module is designed similarly to the ShuffleNet [25] module, with group convolution and identity shortcuts. The C_i-channel input feature maps are first equally divided into N groups (paths). Each of the N groups then goes through three stages of convolutions and outputs C_o/N-channel feature maps. For each group, the first convolution is a 1 × 1 convolution that outputs C_i/2N-channel feature maps, the second 3 × 3 convolution outputs C_i/2N-channel feature maps, and the final 1 × 1 convolution outputs C_o/N-channel feature maps.
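The hybrid group convolution of Eqn. (4) can be sketched as follows; the class name, channel sizes, and the 4/8 split between dynamic and free groups are assumptions for illustration, and the group shuffling described next is omitted here. Of the N kernel groups, n are predicted from the question feature while the rest are ordinary parameters updated by back-propagation.

```python
import torch
import torch.nn.functional as F
from torch import nn

class HybridGroupConv3x3(nn.Module):
    """Sketch of the question-guided hybrid 3x3 group convolution (Eqn. 4).
    Of the `groups` kernel groups, `dyn_groups` use question-predicted kernels
    and the remaining groups use freely-updated kernels. Sizes are illustrative."""
    def __init__(self, q_dim=2000, channels=256, groups=8, dyn_groups=4, k=3):
        super().__init__()
        self.g, self.n, self.k = groups, dyn_groups, k
        self.cpg = channels // groups                      # channels per group
        # Question-dependent kernels: predicted by an FC layer from f_q.
        self.predict = nn.Linear(q_dim, dyn_groups * self.cpg * self.cpg * k * k)
        # Question-independent kernels: ordinary parameters trained by back-prop.
        self.free = nn.Parameter(
            torch.randn((groups - dyn_groups) * self.cpg, self.cpg, k, k) * 0.01)

    def forward(self, f_v, f_q):
        # f_v: (1, channels, H, W); f_q: (1, q_dim). Batch size 1 for clarity.
        dyn = self.predict(f_q).view(self.n * self.cpg, self.cpg, self.k, self.k)
        # Stack dynamic and free kernels; with groups=self.g, the first n output
        # groups read only the first n input-channel groups, the rest stay free.
        w = torch.cat([dyn, self.free], dim=0)
        return F.conv2d(f_v, w, padding=self.k // 2, groups=self.g)
```

In the full model, a layer of this kind would play the role of the 3 × 3 stage inside each QGHC module, with the question-dependent groups re-predicted for every input question.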
We add a group shuffling layer after the 3 × 3 convolution layer to make features from different groups interact with each other, which keeps the advantages of both the dynamically-predicted kernels and the freely-updated kernels. The C_o/N-channel output feature maps of the N groups are then concatenated together along the channel dimension. For the shortcut connection, a 1 × 1 convolution transforms the input feature maps to C_o-channel features, which are added to the output feature maps. Batch Normalization and ReLU are performed after each convolution operation except for the last one, where ReLU is applied after the addition with the shortcut. The 3 × 3 group convolution is guided by the input questions: we randomly select n group kernels, and their parameters are predicted based on the question.

QGHC network for visual question answering

The structure of our QGHC network is illustrated in Figure 3. A ResNet [27] is first pre-trained on ImageNet to extract mid-level visual features. The question features are generated by a language RNN model. The visual feature maps are then sent to three QGHC modules with N = 8 groups and C_o = 512. The output f_{v+q} of the QGHC modules has the same spatial size as the input feature maps. A global average pooling is applied to the final feature maps to generate the final multi-modal feature representation for predicting the most likely answer a. To learn the dynamic convolution kernels in the QGHC modules, the question feature f_q is transformed by two FC layers with a ReLU activation in between. The two FC layers project the question to a 9216-d vector, and the 3 × 3 question-dependent kernel weights of the three QGHC modules are obtained by reshaping the learned parameters into 3 × 3 × 32 × 32. However, directly training the proposed network with both dynamically-predicted kernels and freely-updated kernels is non-trivial: the dynamic kernel parameters are outputs of a ReLU non-linearity and have different magnitudes from the freely-updated kernel parameters. We adopt Weight Normalization [28] to balance the weights between the two types of 3 × 3 kernels, which stabilizes the training of the network.

QGHC network with bilinear pooling and attention

Our proposed QGHC network is also complementary to existing bilinear pooling fusion methods and the attention mechanism. To combine with the MLB fusion scheme [11], the multi-modal features extracted from the global average pooling layer can be fused with the RNN question features again using an MLB. The fused features are then used to predict the final answers. This second-stage fusion of textual and visual features brings a further improvement in answering accuracy in our experiments. We also apply an attention model to better capture the spatial information. The original global average pooling layer is replaced by an attention map. To put more weight on locations of interest, a weighting map is learned by the attention mechanism: a 1 × 1 convolution followed by a spatial Softmax function generates the attention weighting map, and the final multi-modal feature is the weighted summation of the features at all locations. The output feature maps from the last QGHC module are added with the linearly transformed question features. The attention mechanism is shown as the green rectangles in Figure 3.

Experiments

We test our proposed approach and compare it with the state of the art on two public datasets, the CLEVR dataset [29] and the VQA dataset [6].
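Before turning to the experiments, the attention-based pooling of Section 3.5 can be sketched as follows. The layer sizes are assumptions, and the snippet only illustrates the 1 × 1 convolution, spatial Softmax, and weighted summation described above; the preceding addition of the linearly transformed question features is omitted.

```python
import torch
import torch.nn.functional as F
from torch import nn

class AttentionPool(nn.Module):
    """Sketch of the attention pooling in Section 3.5: a 1x1 convolution followed
    by a spatial softmax produces a weighting map, and the multi-modal feature is
    the weighted sum of features over all locations. Sizes are assumptions."""
    def __init__(self, channels=512):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, f):
        # f: (B, C, H, W) feature maps from the last QGHC module.
        b, c, h, w = f.shape
        att = F.softmax(self.score(f).view(b, -1), dim=1).view(b, 1, h, w)
        # Weighted summation over all spatial locations -> (B, C).
        return (f * att).sum(dim=(2, 3))
```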
VQA Dataset

Data and experimental setup. The VQA dataset is built from 204,721 MS-COCO images with human-annotated questions and answers. On average, each image has 3 questions, and each question has 10 answers. The dataset is divided into three splits: training (82,783 images), validation (40,504 images), and testing (81,434 images). A testing subset named test-dev with 25% of the samples can be evaluated multiple times a day. We follow the setup of previous methods and perform ablation studies on this testing subset. Our experiments focus on the open-ended task, which predicts the correct answer as a free-form language expression. If the predicted answer appears more than 3 times in the ground-truth answers, it is considered correct. Our models share the same settings when compared with the state-of-the-art methods, while the compared methods follow their original setups. For the proposed approach, images are resized to 448 × 448. The 14 × 14 × 2048 visual features are learned by an ImageNet pre-trained ResNet-152, and the question is encoded into a 2400-d feature vector by skip-thought [30] using a GRU. The candidate answers are selected as the most frequent 2,000 answers in the training and validation sets. The model is trained using the ADAM optimizer with an initial learning rate of 10^-4. For results on the validation set, only the training set is used for training. For results on test-dev, we follow the setup of previous methods and use both the training and validation data for training.

Ablation studies on the VQA dataset. We conduct ablation studies to investigate the factors that influence the final performance of our proposed QGHC network. The results are shown in Table 1. Our default QGHC network (denoted as QGHC) has a visual ResNet-152 followed by three consecutive QGHC modules. Each QGHC module has a 1 × 1 stage-1 convolution with freely-updated kernels, a 3 × 3 stage-2 convolution with both dynamically-predicted kernels and freely-updated kernels, and another 1 × 1 convolution stage with freely-updated kernels (see Figure 2). Each of these three convolution stages has 8 groups, with 32, 32, and 64 output channels respectively. We first investigate the influence of the number of QGHC modules and the number of convolution channels. We list the results for different numbers of QGHC modules in Table 1, where QGHC-1, QGHC-2, and QGHC-4 denote 1, 2, and 4 QGHC modules respectively. As shown in Table 1, the parameter size grows as the number of QGHC modules increases, but there is no further improvement when stacking more than 3 QGHC modules. We therefore keep 3 QGHC modules in our model. We also test halving the numbers of output channels of the three group convolutions to 16, 16, and 32 (denoted as QGHC-1/2). The results show that halving the number of channels only slightly decreases the final accuracy. We then test different group numbers, changing the group number from 8 to 4 (QGHC-group 4) and 16 (QGHC-group 16). Our proposed method is not sensitive to the group number of the convolutions, and the model with 8 groups achieves the best performance. We also investigate the influence of the group shuffling layer. Removing it (denoted as QGHC w/o shuffle) decreases the accuracy by 0.32% compared with our full model. The shuffling layer makes features from different groups interact with each other and is helpful to the final results. For different QGHC module structures, we first test a naive solution.
The QGHC module is implemented as a single 3 × 3 "full" convolution without groups, whose parameters are all dynamically predicted by the question features (denoted as QGHC-1-naive). We then convert the single 3 × 3 full convolution into a series of 1 × 1, 3 × 3, and 1 × 1 full convolutions with a residual connection between the input and output feature maps (denoted as QGHC-1-full), where the 3 × 3 convolution kernels are all dynamically predicted by the question features. The improvement of QGHC-1-full over QGHC-1-naive demonstrates the advantages of the residual structure. Based on QGHC-1-full, we convert all the full convolutions to group convolutions with 8 groups (denoted as QGHC-1-group). The results outperform QGHC-1-full, which shows the effectiveness of the group convolution. However, the accuracy is still inferior to our proposed QGHC-1 with hybrid convolution. The results demonstrate that the question-guided kernels help better fuse the textual and visual features and achieve robust answering performance.

Finally, we test the combination of our method with different additional components. 1) The multi-modal features are concatenated with the question features and then fed into the FC layer for answer prediction (denoted as QGHC+concat); this results in a marginal improvement in the final accuracy. 2) We use MUTAN [12] to fuse our QGHC-generated multi-modal features with the question features again for answer prediction (denoted as QGHC+MUTAN); this gives better results than QGHC+concat. 3) The attention is also added to QGHC following the description in Section 3.5 (denoted as QGHC+att.).

Comparison with state-of-the-art methods. QGHC fuses multi-modal features in an efficient way. The output feature maps of our QGHC module utilize the textual information to guide the learning of visual features and outperform state-of-the-art feature fusion methods. In this section, we compare our proposed approach (without the attention module) with the state of the art. The results on the VQA dataset are shown in Table 2 (question answering accuracy of the proposed approach and the state-of-the-art methods on the VQA dataset without using the attention mechanism). We compare our proposed approach with multi-modal feature fusion methods including MCB [10], MLB [11], and MUTAN [12]. Our feature fusion is performed before the spatial pooling and can better capture the spatial information than previous methods. MUTAN can also be combined with MLB (denoted as MUTAN+MLB) to further improve the overall performance.

The attention mechanism is widely utilized in VQA algorithms for associating words with image regions. Our method can be combined with attention models for predicting more accurate answers. In Section 3.5, we adopt a simple attention implementation; more complex attention mechanisms, such as hierarchical attention [19] and stacked attention [18], can also be combined with our approach. Table 3 lists the answering accuracies on the VQA dataset of different state-of-the-art methods with the attention mechanism. We also compare our method with dynamic parameter prediction methods: DPPNet [22] (Table 2) and MODERN [23] (Table 3) are two state-of-the-art dynamic learning methods.
Compared with DPPNet (VGG) and MODERN (ResNet-152), QGHC improves the performance by 6.78% and 3.73% respectively on the test-dev subset, which demonstrates the effectiveness of our QGHC model.

CLEVR dataset

The CLEVR dataset [29] is proposed to test the reasoning abilities required by VQA tasks, such as counting, comparing, and logical reasoning. Questions and images in CLEVR are generated by a simulation engine that randomly combines 3D objects. The dataset contains 699,989 training questions, 149,991 validation questions, and 149,988 test questions.

Experimental setting. In our proposed model, the image is resized to 224 × 224. The question is first embedded into a 300-d vector through an FC layer followed by a ReLU non-linearity, and then fed into a 2-layer LSTM with 256 hidden states to generate the textual features. Our QGHC network contains three QGHC modules for fusing the multi-modal information. All parameters are learned from scratch and trained in an end-to-end manner. The network is trained using the ADAM optimizer with a learning rate of 5 × 10^-4 and a batch size of 64. All results are reported on the validation subset.

N2NMN [41] learns to parse the question and predicts the answer distribution using a dynamic network structure. The results of different methods on the CLEVR dataset are shown in Table 4. The multi-modal concatenation (CNN-LSTM) does not perform well, since it cannot model the complex interactions between images and questions. Stacked Attention (+SA) improves the results since it utilizes the spatial information from the input images. Our QGHC model still outperforms +SA by 17.40%. N2NMN parses the input question to dynamically predict the network structure; our proposed method outperforms it by 2.20%.

[Table 4. Comparisons of question answering accuracy of the proposed approach and the state-of-the-art methods on the CLEVR dataset. Columns: Overall, Exist, Count, Compare integers (equal, less, more), Query attribute (size, color, material, shape), Compare attribute (size, color, material, shape). The numerical entries are not recoverable from this extraction apart from a fragment of the Human [42] row (92).]

Visualization of question-guided convolution

Motivated by class activation mapping (CAM) [9], we visualize the activation maps of the output feature maps generated by the QGHC modules. The weighted summation of the topmost feature maps can localize answer regions. Convolution activation maps for our last QGHC module are shown in Figure 4. We observe that the activation regions relate to the questions, and the answers are predicted correctly for different types of questions, including shape, color, and number. In addition, we also visualize the activation maps of different QGHC modules by training an answer prediction FC layer for each of them. As shown in the examples in Figure 1, the QGHC modules gradually focus on the correct regions.

Conclusion

In this paper, we propose a question-guided hybrid convolution for learning discriminative multi-modal feature representations. Our approach fully utilizes the spatial information and is able to capture complex relations between the image and question. By introducing question-guided group convolution kernels with both dynamically-predicted and freely-updated kernels, the proposed QGHC network shows a strong capability for solving the visual question answering problem. The proposed approach is complementary to existing feature fusion methods and attention mechanisms. Extensive experiments demonstrate the effectiveness of our QGHC network and its individual components.
3,746
1808.02632
2949402865
In this paper, we propose a novel Question-Guided Hybrid Convolution (QGHC) network for Visual Question Answering (VQA). Most state-of-the-art VQA methods fuse the high-level textual and visual features from the neural network and abandon the visual spatial information when learning multi-modal features. To address these problems, question-guided kernels generated from the input question are designed to convolve with the visual features, capturing the textual and visual relationship at an early stage. The question-guided convolution tightly couples the textual and visual information but also introduces more parameters when learning the kernels. We apply group convolution, which consists of question-independent kernels and question-dependent kernels, to reduce the parameter size and alleviate over-fitting. The hybrid convolution can generate discriminative multi-modal features with fewer parameters. The proposed approach is also complementary to existing bilinear pooling fusion and attention based VQA methods. By integrating with them, our method can further boost the performance. Extensive experiments on public VQA datasets validate the effectiveness of QGHC.
Network parameters can be dynamically predicted across modalities, and our approach is most related to methods in this direction. In @cite_31 , the language features are used to predict the parameters of a fully-connected (FC) layer for learning visual features. However, the predicted fully-connected layer cannot capture the spatial information of the image. To avoid introducing too many parameters, they predict only a small portion of the parameters using a hashing function; this strategy, however, introduces redundancy, because the full set of FC parameters is generated from only a small number of underlying training parameters. In @cite_23 , language is used to modulate the parameters of the Batch Normalization layers in the visual CNN. However, learning the interactions between the two modalities by predicting only the BN parameters has limited capacity. We conduct comparisons with @cite_31 and @cite_23 , and our proposed method shows favorable performance. We also notice that @cite_6 use language-guided convolution for object tracking; however, they predict all the convolutional parameters, which makes the model difficult to train.
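For concreteness, below is a minimal sketch of the language-conditioned batch-normalization idea used in @cite_23 , in which the question embedding predicts residual changes to the normalization scale and bias. The class name, dimensions, and the residual formulation are our assumptions, not the authors' code.

```python
import torch
from torch import nn

class ConditionalBatchNorm2d(nn.Module):
    """Sketch of language-conditioned batch normalization: the question feature
    predicts changes to the BN scale and bias. Sizes are illustrative."""
    def __init__(self, num_features, q_dim=2400):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gamma = nn.Linear(q_dim, num_features)   # delta for the scale
        self.beta = nn.Linear(q_dim, num_features)    # delta for the bias

    def forward(self, x, f_q):
        # x: (B, C, H, W) visual features; f_q: (B, q_dim) question feature.
        out = self.bn(x)
        g = (1.0 + self.gamma(f_q)).unsqueeze(-1).unsqueeze(-1)  # modulated scale
        b = self.beta(f_q).unsqueeze(-1).unsqueeze(-1)           # modulated bias
        return out * g + b
```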
{ "abstract": [ "We tackle image question answering (ImageQA) problem by learning a convolutional neural network (CNN) with a dynamic parameter layer whose weights are determined adaptively based on questions. For the adaptive parameter prediction, we employ a separate parameter prediction network, which consists of gated recurrent unit (GRU) taking a question as its input and a fully-connected layer generating a set of candidate weights as its output. However, it is challenging to construct a parameter prediction network for a large number of parameters in the fully-connected dynamic parameter layer of the CNN. We reduce the complexity of this problem by incorporating a hashing technique, where the candidate weights given by the parameter prediction network are selected using a predefined hash function to determine individual weights in the dynamic parameter layer. The proposed network---joint network with the CNN for ImageQA and the parameter prediction network---is trained end-to-end through back-propagation, where its weights are initialized using a pre-trained CNN and GRU. The proposed algorithm illustrates the state-of-the-art performance on all available public ImageQA benchmarks.", "This paper strives to track a target object in a video. Rather than specifying the target in the first frame of a video by a bounding box, we propose to track the object based on a natural language specification of the target, which provides a more natural human-machine interaction as well as a means to improve tracking results. We define three variants of tracking by language specification: one relying on lingual target specification only, one relying on visual target specification based on language, and one leveraging their joint capacity. To show the potential of tracking by natural language specification we extend two popular tracking datasets with lingual descriptions and report experiments. Finally, we also sketch new tracking scenarios in surveillance and other live video streams that become feasible with a lingual specification of the target.", "It is commonly assumed that language refers to high-level visual concepts while leaving low-level visual processing unaffected. This view dominates the current literature in computational models for language-vision tasks, where visual and linguistic input are mostly processed independently before being fused into a single representation. In this paper, we deviate from this classic pipeline and propose to modulate the by linguistic input. Specifically, we condition the batch normalization parameters of a pretrained residual network (ResNet) on a language embedding. This approach, which we call MOdulated RESnet ( ), significantly improves strong baselines on two visual question answering tasks. Our ablation study shows that modulating from the early stages of the visual processing is beneficial." ], "cite_N": [ "@cite_31", "@cite_6", "@cite_23" ], "mid": [ "2175714310", "2747053578", "2727849499" ] }
Question-Guided Hybrid Convolution for Visual Question Answering
Recent research has found that combining depth-wise convolution and channel shuffle with group convolution can reduce the number of parameters of a CNN without hurting the final performance. Motivated by Xception @cite_39 , ResNeXt @cite_26 , and ShuffleNet @cite_22 , we decompose the visual CNN kernels into several groups. By shuffling parameters among different groups, our model can reduce the number of predicted parameters and improve the answering accuracy simultaneously. Note that in existing CNN methods with group convolution, the convolutional parameters are learned solely via back-propagation. In contrast, our QGHC consists of question-dependent kernels that are predicted from the language features and question-independent kernels that are freely updated.
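As a small sketch of the ShuffleNet-style channel shuffle referred to above, the operation is just a reshape-transpose-flatten over the channel dimension; the function name and signature are our own, shown only for illustration.

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Shuffle channels between groups: reshape channels to (groups, C // groups),
    transpose the two group axes, and flatten back to C channels."""
    b, c, h, w = x.shape
    x = x.view(b, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(b, c, h, w)
```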
{ "abstract": [ "We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call \"cardinality\" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.", "We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8 ) than recent MobileNet on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves 13x actual speedup over AlexNet while maintaining comparable accuracy.", "We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters." ], "cite_N": [ "@cite_26", "@cite_22", "@cite_39" ], "mid": [ "2953328958", "2724359148", "2951583185" ] }
Question-Guided Hybrid Convolution for Visual Question Answering
Convolution Neural Networks (CNN) [1] and Recurrent Neural Networks (RNN) [2] have shown great success in vision and language tasks. Recently, CNN and RNN are jointly trained for learning feature representations for multi-modal tasks, including image captioning [3,4], text-to-image retrieval [5,33], and Visual Question Answering (VQA) [6,11,12,38]. Among the vision-language tasks, VQA is one of the most challenging problems. Instead of embedding images and their textual descriptions into the same feature subspace as in the text-image matching problem [7,8,26], VQA requires algorithms to answer natural language questions about the visual contents. The methods are thus designed to understand both the questions and the image contents to reason the underlying truth. To infer the answer based on the input image and question, it is important to fuse the information from both modalities to create joint representations. Answers could be predicted by learning classifiers on the joint features. Early VQA methods [9] fuse textual and visual information by feature concatenation. State-of-the-art feature fusion methods, such as Multimodal Compact Bilinear pooling (MCB) [10], utilize bilinear pooling to learn multi-model features. However, the above type of methods have main limitations. The multi-modal features are fused in the latter model stage and the spatial information from visual features gets lost before feature fusion. The visual features are usually obtained by averaging the output of the last pooling layer and represented as 1d vectors. But such operation abandons the spatial information of input images. In addition, the textual and visual relationship is modeled only on the topmost layers and misses details from the low-level and mid-level layers. To solve these problems, we propose a feature fusion scheme that generates multi-modal features by applying question-guided convolutions on the visual features (see Figure 1). The mid-level visual features and language features are first learned independently using CNN and RNN. The visual features are designed to keep the spatial information. And then a series of kernels are generated based on the language features to convolve with the visual features. Our model tightly couples the multi-modal features in an early stage to better capture the spatial information before feature fusion. One problem induced by the question-guided kernels is that the large number of parameters make it hard to train the model. Directly predicting "full" convolutional filters requires estimating thousands of parameters (e.g. 256 number of 3 × 3 filters convolve with the 256-channel input feature map). This is memory-inefficient and time-consuming, and does not result in satisfactory performances (as shown in our experiments). Motivated by the group convolution [13,1,14], we decompose large convolution kernels into group kernels, each of which works on a small number of input feature maps. In addition, only a portion of such group convolution kernels (question-dependent kernels) are predicted by RNN and the remaining kernels (question-independent kernels) are freely learned via back-propagation. Both question-dependent and question-independent kernels are shown to be important, and we name the proposed operation as Question-guided Hybrid Convolution (QGHC). The visual and language features are deeply fused to generate discriminative multi-modal features. The spatial relations between the input image and question could be well captured by the question-guided convolution. 
Our experiments on VQA datasets validate the effectiveness of our approach and show advantages of the proposed feature fusion over the state-of-the-arts. Our contributions can be summarized in threefold. 1) We propose a novel multi-modal feature fusion method based on question-guided convolution kernels. The relative visual regions have high response to the input question and spatial information could be well captured by encoding such connection in the QGHC model. The QGHC explores deep multi-modal relationships which benefits the visual question reasoning. 2) To achieve memory efficiency and robust performance in the question-guided convolution, we propose the group convolution to learn kernel parameters. The question-dependent kernels model the relationship of visual and textual information while the question-independent kernels reduce parameter size and alleviate over-fitting. 3) Extensive experiments and ablation studies on the public datasets show the effectiveness of the proposed QGHC and each individual component. Our approach outperforms the state-of-the-art methods using much fewer parameters. Problem formulation Most state-of-the-art VQA methods rely on deep neural networks for learning discriminative features of the input image I and question q. Usually, Convolutional Neural Networks (CNN) are adopted for learning visual features, while Recurrent Neural Networks (RNN) (e.g., Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU)) encode the input question, i.e., f v = CNN(I; θ v ),(1)f q = RNN(q; θ q ),(2) where f v and f q represent visual features and question features respectively. Conventional ImageQA systems focus on designing robust feature fusion functions to generate multi-modal image-question features for answer prediction. Most state-of-the-art feature fusion methods fuse 1-d visual and language feature vectors in a symmetric way to generate the multi-modal representations. The 1-d visual features are usually generated by the deep neural networks (e.g., GoogleNet and ResNet) with a global average pooling layer. Such visual features f v and the later fused textual-visual features abandon spatial information of the input image and thus less robust to spatial variations. Question-guided Hybrid Convolution (QGHC) for multi-modal feature fusion To fully utilize the spatial information of the input image, we propose Languageguided Hybrid Convolution for feature fusion. Unlike bilinear pooling methods that treat visual and textual features in a symmetric way, our approach performs the convolution on visual feature maps and the convolution kernels are predicted based on the question features which can be formulated as: f v+q = CNN p (I;θ v (f q )),(3) where CNN p is the output before the last pooling layer,θ v (f q ) denotes the convolutional kernels predicted based on the question feature f q ∈ R d , and the convolution on visual feature maps with the predicted kernelsθ v (q) results in the multi-modal feature maps f v+q . However, the naive solution of directly predicting "full" convolutional kernels is memory-inefficient and time-consuming. Mapping the question features to generate full CNN kernels contains a huge number of learnable parameters. In our model, we use the fully-connected layer to learn the question-guided convolutional kernels. 
To predict a commonly used 3 × 3 × 256 × 256 kernel from a 2000-d question feature vector, the FC layer for learning the mapping generates 117 million parameters, which is hard to learn and causes over-fitting on existing VQA datasets. In our experiments, we validate that the performance of the naive solution is even worse than the simple feature concatenation. To mitigate the problem, we propose to predict parameters of group convolution kernels. The group convolution divides the input feature maps into several groups along the channel dimension, and thus each group has a reduced number of channels for convolution. Outputs of convolution with each group are then concatenated in the channel dimension to produce the output feature maps. In addition, we classify the convolution kernels into dynamically-predicted kernels and freely-updated kernels. The dynamic kernels are question-dependent, which are predicted based on the question feature vector f q . The freely-updated kernels are question-independent. They are trained as conventional convolution kernels via back-propagation. The dynamically-predicted kernels fuse the textual and visual information in early model stage which better capture the multi-model relationships. The freely-updated kernels reduce the parameter size and ensure the model can be trained efficiently. By shuffling parameters among these two kinds of kernels, our model can achieve both the accuracy and efficiency. During the testing phase, the dynamic kernels are decided by the questions while the freely updated kernels are fixed for all input image-question pairs. Formally, we substitute Eqn. (3) with the proposed QGHC for VQA, f v+q = CNN g I;θ v (f q ), θ v ,(4)a = MLP(f v+q ),(5) where CNN g denotes a group convolution network with dynamically-predicted kernelsθ v (f q ) and freely-updated kernels θ v . The output of the CNN f v+q fuses the textual and visual information and infers the final answers. MLP is a multilayer perception module and a is the predicted answers. The freely-updated kernels can capture pre-trained image patterns and we fix them during the testing stage. The dynamically-predicted kernels are dependent on the input questions and capture the question-image relationships. Our model fuses the textual and visual information in early model stage by the convolution operation. The spatial information between two modalities is well preserved which leads to more accurate results than previous feature concatenation strategies. The combination of the dynamic and freely-updated kernels is crucial important in keeping both the accuracy and efficiency and shows promising results in our experiments. QGHC module We stack multiple QGHC modules to better capture the interactions between the input image and question. Inspired by ResNet [27] and ResNeXt [14], our QGHC module consists of a series of 1 × 1, 3 × 3, and 1 × 1 convolutions. As shown in Figure 2, the module is designed similarly to the ShffuleNet [25] module with group convolution and identity shortcuts. The C i -channel input feature maps are first equally divided into N groups (paths). Each of the N groups then goes through 3 stages of convolutions and outputs C o /N -d feature maps. For each group, the first convolution is a 1 × 1 convolution that outputs C i /2N -channel feature maps. The second 3 × 3 convolution outputs C i /2Nchannel feature maps, and the final 1 × 1 convolution outputs C o /N -channel feature maps. 
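A compact sketch of the hybrid 3 × 3 group convolution just described (assumed shapes and group split, not the released implementation): the channels are split into N groups, a subset of the group kernels is predicted from the question feature, and the remaining kernels are ordinary, freely learned parameters.

```python
# Illustrative sketch of a hybrid group convolution in the spirit of QGHC.
# Of the N groups, the first n_dyn use kernels predicted from the question
# feature; the remaining groups use freely learned kernels updated by
# back-propagation. Shapes and the group split are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridGroupConv3x3(nn.Module):
    def __init__(self, q_dim=2400, channels=256, groups=8, n_dyn=4):
        super().__init__()
        assert channels % groups == 0
        self.g, self.n_dyn, self.gc = groups, n_dyn, channels // groups
        # Question-independent kernels, updated by back-propagation.
        self.free_w = nn.Parameter(
            0.01 * torch.randn(groups - n_dyn, self.gc, self.gc, 3, 3))
        # FC layer predicting the question-dependent group kernels.
        self.dyn_fc = nn.Linear(q_dim, n_dyn * self.gc * self.gc * 3 * 3)

    def forward(self, x, q_feat):
        # x: (B, channels, H, W) visual features; q_feat: (B, q_dim) question feature.
        B = x.size(0)
        xs = torch.chunk(x, self.g, dim=1)   # split channels into N groups
        dyn_w = self.dyn_fc(q_feat).view(B, self.n_dyn, self.gc, self.gc, 3, 3)
        outs = []
        for b in range(B):                   # dynamic kernels differ per question
            groups_out = []
            for g in range(self.g):
                w = dyn_w[b, g] if g < self.n_dyn else self.free_w[g - self.n_dyn]
                groups_out.append(F.conv2d(xs[g][b:b + 1], w, padding=1))
            outs.append(torch.cat(groups_out, dim=1))
        return torch.cat(outs, dim=0)        # (B, channels, H, W)
```

The 1 × 1 convolutions, the group shuffling layer and the residual shortcut described next wrap around this 3 × 3 stage; in practice the per-sample loop can also be folded into a single grouped convolution over a reshaped batch for efficiency.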
We add a group shuffling layer after the 3 × 3 convolution layer to make features between different groups interact with each other and keep the advantages of both the dynamically-predicted kernels and freely-updated kernels. The output of C o /N -channel feature maps for the N groups are then concatenated together along the channel dimension. For the shortcut connection, a 1 × 1 convolution transforms the input feature maps to C o -d features, which are added with the output feature maps. Batch Normalization and ReLU are performed after each convolutional operation except for the last one, where ReLU is performed after the addition with the shortcut. The 3 × 3 group convolution is guided by the input questions. We randomly select n group kernels. Their parameters are predicted based on the question QGHC network for visual question answering The network structure for our QGHC network is illustrated in Figure 3. The ResNet [27] is first pre-trained on the ImageNet to extract mid-level visual features. The question features are generated by a language RNN model. The visual feature maps are then send to three QGHC modules with N = 8 groups and C o = 512. The output of the QGHC modules f v+q has the same spatial sizes with the input feature maps. A global average pooling is applied to the final feature maps to generate the final multi-modal feature representation for predicting the most likely answer a. To learn the dynamic convolution kernels in the QGHC modules, the question feature f q is transformed by two FC layers with a ReLU activation in between. The two FC layers first project the question to a 9216-d vector. The 3 × 3 question-dependent kernel weights of the three QGHC modules are obtained by reshaping the learned parameters into 3 × 3 × 32 × 32. However, directly training the proposed network with both dynamically-predicted kernels and freelyupdated kernels is non-trivial. The dynamic kernel parameters are the output of the ReLU non-linear function with different magnitudes compared with the freely-updated kernel parameters. We adopt the Weight Normalization [28] to balance the weights between the two types of 3 × 3 kernels, which stabilizes the training of the network. QGHC network with bilinear pooling and attention Our proposed QGHC network is also complementary with the existing bilinear pooling fusion methods and the attention mechanism. To combine with the MLB fusion scheme [11], the multi-modal features extracted from the global average pooling layer could be fused with the RNN question features again using a MLB. The fused features could be used to predict the final answers. The second stage fusion of textual and visual features brings a further improvement on the answering accuracy in our experiments. We also apply an attention model to better capture the spatial information. The original global average pooling layer is thus replaced by the the attention map. To weight more on locations of interest, a weighting map is learned by attention mechanism. A 1 × 1 convolution following a spatial Softmax function generates the attention weighting map. The final multi-modal features is the weighted summation of features at all the locations. The output feature maps from the last QGHC module are added with the linearly transformed question features. The attention mechanism is shown as the green rectangles in Figure 3. Experiments We test our proposed approach and compare it with the state-of-the-arts on two public datasets, the CLEVR dataset [29] and VQA dataset [6]. 
VQA Dataset Data and experimental setup. The VQA dataset is built from 204,721 MS-COCO images with human annotated questions and answers. On average, each image has 3 questions and 10 answers for each question. The dataset is divided into three splits: training (82,783 images), validation (40,504 images) and testing (81,434 images). A testing subset named test-dev with 25% samples can be evaluated multiple times a day. We follow the setup of previous methods and perform ablation studies on the testing subset. Our experiments focus on the open-ended task, which predict the correct answer in the free-form language expressions. If the predicted answer appears more than 3 times in the ground truth answers, the predicted answer would be considered as correct. Our models have the same setting when comparing with the state-of-the-art methods. The compared methods follow their original setup. For the proposed approach, images are resized to 448 × 448. The 14 × 14 × 2048 visual features are learned by an ImageNet pre-trained ResNet-152, and the question is encoded to a 2400-d feature vector by the skip-thought [30] using GRU. The candidate questions are selected as the most frequent 2,000 answers in the training and validation sets. The model is trained using the ADAM optimizer with an initial learning rate of 10 −4 . For results on the validation set, only the training set is used for training. For results on test-dev, we follow the setup of previous methods, both the training and validation data are used for training. Ablation studies on the VQA dataset. We conduct ablation studies to investigate factors that influence the final performance of our proposed QGHC network. The results are shown in Table 1. Our default QGHC network (denoted as QGHC ) has a visual ResNet-152 followed by three consecutive QGHC modules. Each QGHC module has a 1 × 1 stage-1 convolution with freely-updated kernels, a 3 × 3 stage-2 convolution with both dynamically-predicted kernels and freely-updated kernels, and another 1 × 1 convolution stage with freely-updated kernels (see Figure 2). Each of these three stage convolutions has 8 groups. They have 32, 32, and 64 output channels respectively. We first investigate the influence of the number of QGHC modules and the number of convolution channels. We list the results of different number of QGHC modules in Table 1. QGHC-1, QGHC-2, QGHC-4 represent 1, 2, and 4 QGHC modules respectively. As shown in Table 1, the parameter size improves as the number of QGHC increases but there is no further improvement when stacking more than 3 QGHC modules. We therefore keep 3 QGHC modules in our model. We also test halving the numbers of output channels of the three group convolutions to 16, 16, and 32 (denoted as QGHC-1/2 ). The results show that halving the number of channels only slightly decreases the final accuracy. We then test different group numbers. We change the group number from 8 to 4 (QGHC-group 4 ) and 16 (QGHC-group 16 ). Our proposed method is not sensitive to the group number of the convolutions and the model with 8 groups achieves the best performance. We also investigate the influence of the group shuffling layer. Removing the group shuffling layer (denoted as QGHCw/o shuffle) decreases the accuracy by 0.32% compared with our model. The shuffling layer makes features between different groups interact with each other and is helpful to the final results. For different QGHC module structures, we first test a naive solution. 
The QGHC module is implemented as a single 3×3 "full" convolution without groups. Its parameters are all dynamically predicted by question features (denoted as QGHC-1-naive). We then convert the single 3 × 3 full convolution to a series of 1 × 1, 3 × 3, 1 × 1 full convolutions with residual connection between the input and output feature maps (denoted as QGHC-1-full ), where the 3 × 3 convolution Table 2. Comparisons of question answering accuracy of the proposed approach and the state-of-the-art methods on the VQA dataset without using the attention mechanism. kernels are all dynamically predicted by the question features. The improvement of QGHC-1-full over QGHC-1-naive demonstrates the advantages of the residual structure. Based on QGHC-1-full, we convert all the full convolutions to group convolutions with 8 groups (denoted as QGHC-1-group). The results outperforms QGHC-1-full, which show the effectiveness of the group convolution. However, the accuracy is still inferior to our proposed QGHC-1 with hybrid convolution. The results demonstrate that the question-guided kernels can help better fuse the textual and visual features and achieve robust answering performance. Finally, we test the combination of our method with different additional components. 1) The multi-modal features are concatenated with the question features, and then fed into the FC layer for answer prediction. (denoted as QGHC+concat). It results in a marginal improvement in the final accuracy. 2) We use MUTAN [12] to fuse our QGHC-generated multi-modal features with question features again for answer prediction (denoted as QGHC+MUTAN ). It has better results than QGHC+concat. 3) The attention is also added to QGHC following the descriptions in Section 3.5 (denoted as QGHC+att.). Comparison with state-of-the-art methods. QGHC fuses multi-modal features in an efficient way. The output feature maps of our QGHC module utilize the textual information to guide the learning of visual features and outperform state-of-the-art feature fusion methods. In this section, we compare our proposed approach (without using the attention module) with state-of-the-arts. The results on the VQA dataset are shown in Table 2. We compare our proposed approach with multi-modal feature concatenation methods including MCB [10], MLB [11], and MUTAN [12]. Our feature fusion is performed before the spatial pooling and can better capture the spatial information than previous methods. Since MUTAN can be combined with MLB (denoted as MUTAN+MLB) to further improve the overall performance. Attention mechanism is widely utilized in VQA algorithms for associating words with image regions. Our method can be combined with attention models Table 3. Comparisons of question answering accuracy of the proposed approach and the state-of-the-art methods on the VQA dataset with the attention mechanism. for predicting more accurate answers. In Section 3.5, we adopt a simple attention implementation. More complex attention mechanisms, such as hierachical attention [19] and stacked attention [18] can also be combined with our approach. The results in Table 3 list the answering accuracies on the VQA dataset of different state-of-the-art methods with attention mechanism. We also compare our method with dynamic parameter prediction methods. DPPNet [22] (Table 2) and MODERN [23] (Table 3) are two state-of-the-art dynamic learning methods. 
Compared with DPPNet(VGG) and MODERN(ResNet-152), QGHC improves the performance by 6.78% and 3.73% respectively on the test-dev subset, which demonstrates the effectiveness of our QGHC model. CLEVR dataset The CLEVR dataset [29] is proposed to test the reasoning ability of VQA tasks, such as counting, comparing, and logical reasoning. Questions and images from CLEVR are generated by a simulation engine that randomly combines 3D objects. This dataset contains 699,989 training questions, 149,991 validation questions, and 149,988 test questions. Experimental setting. In our proposed model, the image is resized to 224 × 224. The question is first embedded to a 300-d vector through a FC layer followed by a ReLU non-linear function, and then input into a 2-layer LSTM with 256 hidden states to generate textual features. Our QGHC network contains three QGHC modules for fusing multi-modal information. All parameters are learned from scratch and trained in an end-to-end manner. The network is trained using the ADAM optimizer with the learning rate 5 × 10 −4 and batch size 64. All the results are reported on the validation subset. [41] learns to parse the question and predicts the answer distribution using dynamic network structure. The results of different methods on the CLEVR dataset are shown in Table 4. The multi-modal concatenation (CNN-LSTM) does not perform well, since it cannot model the complex interactions between images and questions. Stacked Attention (+SA) can improve the results since it utilizes the spatial information from input images. Our QGHC model still outperforms +SA by 17.40%. For the N2NMN, it parses the input question to dynamically predict the network structure. Our proposed method outperforms it by 2.20%. Compare integers Query attribute Compare attribute Model Overall Exist Count equal less more size color material shape size color material shape Human [42] 92 Table 4. Comparisons of question answering accuracy of the proposed approach and the state-of-the-art methods on the CLVER dataset. Visualization of question-guided convolution Motivated by the class activation mapping (CAM) [9], we visualize the activation maps of the output feature maps generated by the QGHC modules. The weighted summation of the topmost feature maps can localize answer regions. Convolution activation maps for our last QGHC module are shown in Figure 4. We can observe that the activation regions relate to the questions and the answers are predicted correctly for different types of questions, including shape, color, and number. In addition, we also visualize the activation maps of different QGHC modules by training an answer prediction FC layer for each of them. As examples shown in Figure 1, the QGHC gradually focus on the correct regions. Conclusion In this paper, we propose a question-guided hybrid convolution for learning discriminative multi-modal feature representations. Our approach fully utilizes the spatial information and is able to capture complex relations between the image and question. By introducing the question-guided group convolution kernels with both dynamically-predicted and freely-updated kernels, the proposed QGHC network shows strong capability on solving the visual question answering problem. The proposed approach is complementary with existing feature fusion methods and attention mechanisms. Extensive experiments demonstrate the effectiveness of our QGHC network and its individual components.
3,746
1808.02474
2887575628
Zero-shot learning transfers knowledge from seen classes to novel unseen classes, reducing the human labor of labelling data for building new classifiers. Much of the effort on zero-shot learning, however, has focused on the standard multi-class setting; the more challenging multi-label zero-shot problem has received limited attention. In this paper we propose a transfer-aware embedding projection approach to tackle multi-label zero-shot learning. The approach projects the label embedding vectors into a low-dimensional space to induce better inter-label relationships and explicitly facilitate information transfer from seen labels to unseen labels, while simultaneously learning a max-margin multi-label classifier with the projected label embeddings. Auxiliary information can be conveniently incorporated to guide the label embedding projection and further improve the label relation structures for zero-shot knowledge transfer. We conduct experiments on zero-shot multi-label image classification, and the results demonstrate the efficacy of the proposed approach.
Multi-label Classification Multi-label classification is relevant in many application domains where each data instance can be assigned to multiple classes. Many multi-label learning methods in the literature center on exploiting the correlation and interdependency between the multiple labels, including max-margin learning methods with the pairwise ranking loss @cite_10 , the weighted approximate pairwise ranking loss (WARP) @cite_11 , and the calibrated separation ranking loss (CSRL) @cite_1 . Moreover, incomplete labels are frequently encountered in many multi-label applications due to noise or crowd-sourcing, where only a subset of the true labels is provided on some training instances. Multi-label learning methods with missing labels have largely depended on observed label correlations to overcome the label incompleteness of the training data @cite_30 @cite_28 @cite_23 . These methods, however, assume that every label is observed on at least a subset of the training data; they cannot handle the more challenging zero-shot setting where some labels are completely missing from the training instances.
{ "abstract": [ "We consider a special type of multi-label learning where class assignments of training examples are incomplete. As an example, an instance whose true class assignment is (c 1 , c 2 , c 3 ) is only assigned to class c 1 when it is used as a training sample. We refer to this problem as multi-label learning with incomplete class assignment. Incompletely labeled data is frequently encountered when the number of classes is very large (hundreds as in MIR Flickr dataset) or when there is a large ambiguity between classes (e.g., jet vs plane). In both cases, it is difficult for users to provide complete class assignments for objects. We propose a ranking based multi-label learning framework that explicitly addresses the challenge of learning from incompletely labeled data by exploiting the group lasso technique to combine the ranking errors. We present a learning algorithm that is empirically shown to be efficient for solving the related optimization problem. Our empirical study shows that the proposed framework is more effective than the state-of-the-art algorithms for multi-label learning in dealing with incompletely labeled data.", "", "Multilabel classification is a central problem in many areas of data analysis, including text and multimedia categorization, where individual data objects need to be assigned multiple labels. A key challenge in these tasks is to learn a classifier that can properly exploit label correlations without requiring exponential enumeration of label subsets during training or testing. We investigate novel loss functions for multilabel training within a large margin framework—identifying a simple alternative that yields improved generalization while still allowing efficient training. We furthermore show how co-variances between the label models can be learned simultaneously with the classification model itself, in a jointly convex formulation, without compromising scalability. The resulting combination yields state of the art accuracy in multilabel webpage classification.", "Multi-label learning has attracted significant interests in computer vision recently, finding applications in many vision tasks such as multiple object recognition and automatic image annotation. Associating multiple labels to a complex image is very difficult, not only due to the intricacy of describing the image, but also because of the incompleteness nature of the observed labels. Existing works on the problem either ignore the label-label and instance-instance correlations or just assume these correlations are linear and unstructured. Considering that semantic correlations between images are actually structured, in this paper we propose to incorporate structured semantic correlations to solve the missing label problem of multi-label learning. Specifically, we project images to the semantic space with an effective semantic descriptor. A semantic graph is then constructed on these images to capture the structured correlations between them. We utilize the semantic graph Laplacian as a smooth term in the multi-label learning formulation to incorporate the structured semantic correlations. Experimental results demonstrate the effectiveness of the proposed semantic descriptor and the usefulness of incorporating the structured semantic correlations. 
We achieve better results than state-of-the-art multi-label learning methods on four benchmark datasets.", "", "Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at the top of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method, called WSABIE, both outperforms several baseline methods and is faster and consumes less memory." ], "cite_N": [ "@cite_30", "@cite_28", "@cite_1", "@cite_23", "@cite_10", "@cite_11" ], "mid": [ "2019899889", "", "2214660060", "2949111601", "", "21006490" ] }
Multi-Label Zero-Shot Learning with Transfer-Aware Label Embedding Projection
Despite the advances in the development of supervised learning techniques such as deep neural network models, the conventional supervised learning setting requires a large number of labelled instances for each single class to perform training, and hence induce substantial annotation costs. It is important to develop algorithms that enable the reduction of annotation cost for training classification models. Zero-shot learning (ZSL) which transfers knowledge from annotated seen classes to predict unseen classes that have no labeled data, hence has received a lot of attention [Lampert et al., 2009;Akata et al., 2015;Romera-Paredes and Torr, 2015;Zhang and Saligrama, 2015;Changpinyo et al., 2017]. One primary source deployed in zero-shot learning for bridging the gap between seen and unseen classes is the attribute description of the class labels [Lampert et al., 2009;Lampert et al., 2014;Romera-Paredes and Torr, 2015;Fu et al., 2015]. The attributes are typically defined by domain experts who are familiar with the common and specific characteristics of different category concepts, and hence are able to carry transferable information across classes. Nevertheless human labor is still involved in defining the attributebased class representations. This propels the research community to exploit more easily accessible free information sources from the Internet, including textual descriptions from Wikipedia articles [Qiao et al., 2016;Akata et al., 2015], word embedding vectors trained from large text corpus using natural language processing (NLP) techniques [Akata et al., 2015;Xian et al., 2016;Zhang and Saligrama, 2015;Al-Halah et al., 2016], co-occurrence statistics of hit-counts from search engine [Rohrbach et al., 2010;Mensink et al., 2014], and WordNet hierarchy information of the labels [Rohrbach et al., 2010;Rohrbach et al., 2011;Li et al., 2015b]. These works demonstrated impressive results on several standard zero-shot datasets. However, majority research effort has concentrated on multi-class zeroshot classifications, while the more challenging multi-label zero-shot learning problem has received very limited attention [Mensink et al., 2014;Zhang et al., 2016;Lee et al., 2017]. In this work we propose a novel transfer-aware label embedding projection method to tackle multi-label zero-shot learning, as shown in Figure 1. Label embeddings have been exploited in standard multi-label classification to capture label relationships. We exploit the word embeddings [Pennington et al., 2014] produced from large corpus with NLP techniques as the initial semantic label embedding vectors. These semantic embedding vectors have the nice property of catching general similarities between any pair of label phrases/words, but may not be optimal for multi-label classification and information transfer across classes. Hence we project the label embedding vectors into a low-dimensional semantic space in a transfer-aware manner to gain transferable label relationships by enforcing similarity between seen and unseen class labels and separability across unseen labels. We then simultaneously co-project the labeled seen class instances into the same semantic space under a max-margin multi-label classification framework to ensure the predictability of the embeddings. Moreover, we further incorporate auxiliary information to guide the label embedding projection for suitable inter-label relationships. 
To investigate the proposed approach, we conduct ZSL experiments on two standard multi-label image classification datasets, the PASCAL VOC2007 and VOC2012, and demonstrate the effectiveness of the proposed approach by comparing to a number of related ZSL methods. Figure 1: Illustration of the proposed multi-label ZSL framework. Red dots represent images in their visual feature space R^d. They are mapped into a semantic space R^r by a visual projection matrix W. Yellow dots represent labels in the word embedding space R^m and they are mapped into the same R^r by a semantic projection matrix U. The projection matrices are learnt under a max-margin multi-label learning framework based on the matching scores of the images and labels in the projected semantic space. Embedding regularization and auxiliary information are leveraged to facilitate the knowledge transfer from seen classes to unseen classes on the projected common semantic space. Proposed Approach Problem Definition and Notations We consider multi-label zero-shot learning in the following setting. Assume we have a set of n labeled training images D = (X, Y), where X ∈ R^{n×d} denotes the d-dim visual features extracted using CNNs for the n images, and Y ∈ {0, 1}^{n×L_s} denotes the corresponding label indicator matrix over a set of seen classes S = {1, 2, ..., L_s}: "1" indicates the presence of the corresponding label (i.e., a positive label) and "0" indicates its absence (i.e., a negative label). For multi-label classification, each row of Y can have multiple "1" values. Moreover, we also assume there is a set of L_u unseen classes U = {L_s + 1, ..., L} such that L = L_s + L_u, and the labels for the unseen classes are completely missing in our labeled training data. In addition, we assume the word embeddings of the seen classes and unseen classes are both given: M = [M_s; M_u] ∈ R^{L×m}, where M_s ∈ R^{L_s×m} are the seen class embeddings, M_u ∈ R^{L_u×m} are the unseen class embeddings, and their concatenation M covers all the classes. We aim to learn a multi-label prediction model from the training data that allows us to perform multi-label classification on the unseen classes. We use the following general notations in the presentation below. For any matrix, e.g., X, we use X_i to denote its i-th row vector. We use ‖·‖_F to denote the Frobenius norm of a matrix and tr(·) to denote the trace of a matrix. For Y_i, we use Ȳ_i to denote its complement such that Ȳ_i = 1 − Y_i. We also reuse the notation Y_i to denote the set of indices of its non-zero values when the context is clear. We use ‖·‖ to denote the Euclidean norm and denote the rectified operator as [·]_+ = max(·, 0). We use 1 to denote a column vector of all 1s, assuming its size can be determined from the context, and I to denote an identity matrix. We use 0_{a,b} to denote an a × b matrix of all 0s and 1_{a,b} to denote an a × b matrix of all 1s. Max-margin Multi-label Learning with Semantic Embedding Projection Instead of relying entirely on the pre-given label embeddings in M obtained from word embeddings to facilitate cross-class information adaptation, we propose to co-project the input image visual features and the label embeddings into a more suitable common low-dimensional semantic space, such that the similarity matching score of each image with its positive labels in this semantic space will be higher than that with its negative labels.
Specifically, we want to learn a projection function θ : R d → R r that maps an instance X i from the visual feature space R d into a semantic space R r ; assuming a linear projection we have θ(X i ) = X i W , where W is a d × r projection matrix. Simultaneously, we learn another linear projection function φ : R m → R r such that φ(M c ) = M c U , where U is a m × r projection matrix, which maps a class c from the original word embedding space R m into the same semantic space R r . Then the similarity matching score between an instance X i and the c-th class label can be computed as the inner product of their project representations in the common semantic space: F (i, c) = θ(X i )φ(M c ) = X i W U M c(1) To encode the assumption that the similarity score F (i, c) between an instance X i and any of its positive label c ∈ Y i should be higher than the similarity score F (i,c) between instance X i and any of its negative labelc ∈Ȳ i , i.e., F (i, c) F (i,c), we formulate the projection learning problem within a max-margin multi-label learning framework: min W,U :U U =I n i=1 L(W, U ; X i , Y i ) + R(W )(2) where L(·) denotes a max-margin ranking loss and R(W ) is a model regularization term. In this work we adopt a calibrated separation ranking loss: L(W, U ; X i , Y i ) = max c∈Yi 1+F (i, 0)−F (i, c) + +maxc ∈Ȳi 1+F (i,c)−F (i, 0) +(3) where F (i, 0) = X i W 0 can be considered as the matching score for an auxiliary class 0, which produces a separation threshold score on the i-th instance such that the scores for positive labels should be higher than it and the scores for negative labels should be lower than it, i.e., F (i, c) F (i, 0) F (i,c), to minimize the loss. We assume the project matrix U has orthogonal columns to maintain a succinct label embedding projection. For the regularization term over W , we consider a Frobenius norm regularizer, R(W ) = β 2 W 2 F + W 0 2 , where W 0 can be considered as an auxiliary column to W , and β is a tradeoff weight parameter. Transfer-Aware Label Embedding Projection Employing the ranking loss to minimize classification error on seen classes can ensure the predictability of the projected label embedding. However for ZSL our goal is to predict labels from the unseen classes. This requires a label embedding representation that can encode suitable inter-class label relations to facilitate information transfer from seen to the unseen classes such that the similarity score F (i, c) can well reflect the relative prediction scores on an unseen class c under the learned model parameters W and U . Our intuition is that classification or ranking on the target unseen class labels would be easier if they are well separated in the projected embedding space and knowledge transfer would be easier if unseen classes and seen classes have high similarities in the projected label embedding space. 
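To make the scoring function and the calibrated separation ranking loss above concrete, here is a minimal sketch (our illustration; the dimensions and the assumption that each training image has at least one positive and one negative label are ours):

```python
# Sketch of the bilinear matching score F(i, c) = X_i W U^T M_c^T and the
# calibrated separation ranking loss of Eq. (3). Dimensions are illustrative:
# d visual dims, m word-embedding dims, r projected dims.
import numpy as np

def matching_scores(X, M, W, U, W0):
    # X: (n, d) visual features, M: (L, m) label embeddings,
    # W: (d, r), U: (m, r) projections, W0: (d,) auxiliary "class 0" direction.
    F = X @ W @ (M @ U).T          # (n, L) score of every instance vs. every label
    F0 = X @ W0                    # (n,) per-instance separation threshold score
    return F, F0

def calibrated_separation_loss(F, F0, Y):
    # Y: (n, L) binary indicator of positive labels; assumes each row has at
    # least one positive and one negative entry.
    loss = 0.0
    for i in range(F.shape[0]):
        pos = np.where(Y[i] == 1)[0]
        neg = np.where(Y[i] == 0)[0]
        # the hardest (lowest-scoring) positive should beat the threshold by margin 1 ...
        loss += max(0.0, 1.0 + F0[i] - F[i, pos].min())
        # ... and the hardest (highest-scoring) negative should stay below it by margin 1
        loss += max(0.0, 1.0 + F[i, neg].max() - F0[i])
    return loss
```

In the sketch, the hardest positive and hardest negative of each instance determine the two hinge terms, which matches the max operators in Eq. (3).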
We hence propose to guide the label embedding projection learning by encoding this intuition through a transfer-aware regularization objective H(U ) such that: H(U ) = γ 2L u (L u − 1) i,j∈U ,i =j M i U U M j − γ 2L s L u i∈S,j∈U M i U U M j which can be equivalently expressed in a more compact form: H(U ) = γ 2 tr U M QM U(4) where γ is a balance parameter for H(·), and Q = 0 L s ,L s −1 2L s L u 1 L s ,L u −1 2L s L u 1 L u ,L s 1 L u (L u −1) (1 L u ,L u − I L u )(5) Here we use the inner product of a pair projected label embedding vectors as the similarity value for the corresponding pair of classes, and aim to maximize the similarities across seen and unseen classes and minimize the similarities between unseen classes. By incorporating this regularization objective into the framework in Eq. (2), we obtain the following Transfer-Aware max-margin Embedding Projection (TAEP) learning problem: min W,W0,ξ,η, U : U U =I 1 ξ + 1 η + β 2 ( W 2 F + W 0 2 ) + H(U ) (6) s.t. F (i, c) − F (i, 0) ≥ 1 − ξ i , ∀c ∈ Y i , ∀i; ξ ≥ 0; F (i, 0) − F (i,c) ≥ 1 − η i , ∀c ∈Ȳ i , ∀i; η ≥ 0 The objective learns W and U by enforcing positive labels to rank higher than negative labels, while incorporating the regularization term H(U ) to refine the label embedding structure in the semantic space. H(U ) can help produce better interclass relationship structure for cross-class knowledge transfer. The regularization form H(U ) also has a nice property -it allows a closed-form solution for U to be derived and hence simplifies the training procedure. Note after learning the projection matrices W and U , it will be straightforward to rank all unseen labels for instance i based on the prediction scores F (i, c) for all c ∈ U. Integration of Auxiliary Information In addition to explicit word embeddings, similarity information about the class labels can be derived from some external resources. We propose to leverage such auxiliary information to further improve label embedding projection. In general, we can assume there is some auxiliary source in terms of a similarity matrix R over the seen and unseen labels; i.e., R ij , i, j ∈ {1, 2, ..., L} defines the similarity between a label pair (i, j). Then Q A = I − D −1/2 RD −1/2 , where D = diag(R1), is the normalized Lapalacian matrix of R. We use a manifold regularization term to enforce the projected label embeddings to be better aligned with the inter-class affinity R: A(U ) = λ 2 tr U M Q A M U(7) where λ is a balance parameter for A(·). This regularization form has the following advantages. First, it can be conveniently integrated into the learning framework in Eq.(6) by simply updating the regularization function H(U ) to: H(U ) = γ 2 tr U M (Q + λ γ Q A )M U(8) Second, it is convenient to exploit different auxiliary resources by simply replacing R (or Q A ) with the one computed from the specific resource. In this work we study two different auxiliary information resources, WordNet [Miller, 1995] hierarchy and web co-occurrence statistics. WordNet: WordNet [Miller, 1995] is a large lexical database of English. Words are grouped into a hierarchical tree structure based on their semantic meanings. Since words are organized based on ontology, their semantic relationships can be reflected by their connection paths. We find the shortest path between any two words based on "is-a" taxonomy, and then define the similarity between two labels i and j as the reciprocal of the path length between the corresponding words, i.e., R ij = 1 path len(i,j)+1 . 
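The transfer-aware regularizer and its auxiliary-information variant reduce to a few matrix constructions. Below is a sketch (our illustration, with small illustrative sizes) of the Q matrix from Eq. (5), the normalized Laplacian Q_A of Eq. (7) built from a label-similarity matrix R such as the WordNet-based one just described, and their combination as in Eq. (8); the co-occurrence similarity described next can be plugged in as R in exactly the same way.

```python
# Sketch of the transfer-aware regularizer pieces: the matrix Q of Eq. (5),
# the normalized Laplacian Q_A of Eq. (7) built from a label-similarity
# matrix R, and their combination as in Eq. (8). Ls/Lu sizes are illustrative.
import numpy as np

def build_Q(Ls, Lu):
    # Block matrix: zero on seen-seen pairs, negative weight on seen-unseen
    # pairs (rewards similarity when minimized), positive weight on
    # unseen-unseen pairs (encourages separation).
    Q = np.zeros((Ls + Lu, Ls + Lu))
    Q[:Ls, Ls:] = -1.0 / (2 * Ls * Lu)
    Q[Ls:, :Ls] = -1.0 / (2 * Ls * Lu)
    off_diag = np.ones((Lu, Lu)) - np.eye(Lu)
    Q[Ls:, Ls:] = off_diag / (Lu * (Lu - 1))
    return Q

def build_QA(R):
    # Normalized Laplacian Q_A = I - D^{-1/2} R D^{-1/2}; assumes R has
    # strictly positive row sums.
    d = R.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.eye(R.shape[0]) - D_inv_sqrt @ R @ D_inv_sqrt

def regularizer(U, M, Q, QA, gamma, lam):
    # H(U) = (gamma / 2) * tr(U^T M^T (Q + (lam/gamma) Q_A) M U), as in Eq. (8).
    A = M.T @ (Q + (lam / gamma) * QA) @ M
    return 0.5 * gamma * np.trace(U.T @ A @ U)
```

Because Q and Q_A enter the objective only through M^T (Q + (λ/γ) Q_A) M, different auxiliary sources can be swapped in without changing the rest of the optimization.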
Co-occurrence statistics: Many researchers have exploited the usage of online data, for example Hit-Count, to compute similarity between labels [Rohrbach et al., 2010;Mensink et al., 2014]. The Hit-Count HC(i, j) denotes how many times in total i and j appear together in the auxiliary source -for example, the number of records returned by a search engine. It is the co-occurrence statistics of i and j in the scale of the entire World Wide Web. Following previous works, we use the Flickr Image Hit Count to compute the dice-coefficient as similarity between two labels, i.e., R ij = HC(i,j) HC(i)+HC(j) . Dual Formulation and Learning Algorithm With the orthogonal constraint on U and the appearance of U in both the objective function and the linear inequality constraints, it is difficult to perform learning directly on Eq.(6). We hence deploy the standard Lagrangian dual formulation of the max-margin learning problem for fixed U . This leads to the following equivalent dual formulation of Eq.(6): min U :U U =I max Ψ tr Ψ (2Y −11 ) + γ 2 (U M QM U ) − 1 2β tr Ψ XX Ψ M s U U M s +11 (9) s.t. Ψ i diag(Y i ) ≥ 0, Ψ i Y i ≤ 1, ∀i; Ψ i diag(Y i − 1) ≥ 0, Ψ i (Y i − 1) ≤ 1, ∀i where the primal W and W 0 can be recovered from the dual variables Ψ by W = 1 β X ΨM s U and W 0 = −1 β X Ψ1. One nice property about the dual formulation in Eq.(9) is that it allows a convenient closed-form solution for U . To solve this min-max optimization problem, we develop an iterative alternating optimization algorithm to perform training. We start from an infeasible initialization point by setting both U and Ψ as zeros. Then in each iteration, we perform the following two steps, which will quickly move into the feasible region after one iteration. Step 1: Given the current fixed U , the inner maximization over Ψ is a linear constrained convex quadratic programming. Though we can solve it directly using a quadratic solver, it subjects to a scalability problem-the Hessian matrix over Ψ will be very large whenever the data size n or the label size L s is large. Hence we adopt a coordinate descent method to iteratively update each row of Ψ given other rows fixed, since the constraints over each row of Ψ can be separated. The maximization over the i-th row Ψ i can be equivalently written as the following simple quadratic minimization problem: z * = arg min z 1 2 z Hz + f z (10) s.t. diag(Y i )z ≥ 0, diag(Y i − 1)z ≥ 0, Y i z ≤ 1, (Y i − 1)z ≤ 1 where H = 1 β X i X i (M s U U M s + 11 ) and f = 1 − 2Y i + 1 β (M s U U M s + 11 )Ψ XX i . After obtaining the optimal solution z * , we can update Ψ with Ψ ← Ψ + 1 i z * , where 1 i denote a one-hot vector with a single 1 in its i-th entry and 0s in all other entries. Step 2: After updating each row in Ψ, we fix the value Ψ and perform minimization over U . By taking a negative sign from Eq.(9), we have the following maximization problem: max U :U U =I tr U 1 2β M s Ψ XX ΨM s − γ 2 M QM U (11) which has a closed-form solution. Let S = 1 2β M s Ψ XX ΨM s − γ 2 M QM . Then the solution for U is the top-r eigenvectors of S. Experiments To investigate the empirical performance of the proposed method, we conducted experiments on two standard multilabel image classification datasets to test its performance on multi-label zero-shot classification and generalized multilabel zero-shot classification. Experimental Setting Datasets In our experiments we used two standard multilabel datasets: The PASCAL VOC2007 dataset and VOC2012 dataset. 
The PASCAL VOC2007 dataset contains 20 visual object classes. There are 9963 images in total, 5011 for training and 4952 for testing. The VOC2012 dataset contains 5717 and 5823 images from 20 classes for training and validation. We used the validation set for test evaluation. Detailed settings For each image, we used VGG19 [Simonyan and Zisserman, 2014] pre-trained on ImageNet to extract the 4096-dim visual features. For the label embeddings, we used the 300-dim word embedding vectors pre-trained by GloVe [Pennington et al., 2014]. All image feature vectors and word embedding vectors are l 2 normalized. To determine the hyper-parameters, we further split the seen classes into two disjoint subsets with equal number of classes for training and validation. We train the model on the training set and choose hyper-parameters based on the test performance on the validation set. For the proposed model, we choose β, γ and λ from β ∈ {1, 2, ..., 10} and γ, λ ∈ {0.01, 0.1, 1, 10} respectively. After parameter selection, the training and validation data are put back together to train the model for the final evaluation on unseen test data. Evaluation metric We used four different multi-label evaluation metrics: MiAP, micro-F1, macro-F1 and Hamming loss. The Mean image Average Precision (MiAP) [Li et al., 2016] measures how well are the labels ranked on a given image based on the prediction scores. The other three standard evaluation metrics for multi-label classification measure how well the predicted labels match with the ground truth labels on the test data. Multi-label Zero-shot Learning Results Comparison methods We compared the proposed method with four related multi-label ZSL methods, ConSE, LatEm-M, DMP and Fast0Tag, which also adopted the visual-semantic projection strategy. The first two methods are the multi-label adaptations of two standard ZSL approaches, the convex combination of semantic embedding (ConSE) [Norouzi et al., 2013] and the latent embedding (LatEm) method [Xian et al., 2016]. For LatEm, we adopted a multi-label ranking objective to replace the original one of LatEm and denote this variant as Latent Embedding Multi-label method (LatEm-M). The direct multi-label zero-shot prediction method (DMP) [Fu et al., 2014] and the fast tagging method (Fast0Tag) [Zhang et al., 2016] are specifically developed for mulit-label zero-shot learning. For our proposed transfer-aware max-margin embedding projection (TAEP) method, we also provide comparisons for two TAEP variants with different types of auxiliary information: TAEP-H uses WordNet Hierarchy as auxiliary information, and TAEP-C uses Flickr Image Hit-Count as auxiliary information. Zero-shot multi-label learning results. We divided the datasets into two subsets of equal number of classes, and then use them as seen and unseen classes respectively. All methods use seen class instances in the training set to train their models and make predictions on the unseen class instances in test set. We selected the hyper-parameters for the comparison methods based on grid search. With selected fixed parameters, for each approach we repeated 5 runs and reported its mean performance in Table 1. We can see the direct multi-label prediction method, DMP, outperforms both ConSE and LatEm-M on the two datasets in terms of almost all measures. This shows that the specialized multi-label ZSL method, DMP, does have advantage over extended multi-class ZSL methods. Fast0Tag is a bit less effective than DMP, but still consistently outperforms ConSE. 
The proposed TAEP on the other hand consistently outperforms all the four comparison methods across all measures and with notable improvements on both datasets. By integrating auxiliary information, the proposed TAEP-C and TAEP-H further improve the performance of the proposed model TAEP, while TAEP-C achieves the best results in terms of all measures. These results verified the efficacy of the proposed model. They also demonstrated the usefulness of auxiliary information and validated the effective information integration mechanism of our proposed model. Generalized multi-label zero-shot learning results. Although zero-shot learning has often been evaluated only on the unseen classes in the literature, it is natural to evaluate multi-label zero-shot learning on all the classes, which is referred to as generalized multi-label zero-shot learning. Hence we conducted experiments to test the generalized zero-shot classification performance of the comparison methods. Each method is still trained on the same seen classes S, but the test set now contains all the seen and unseen labels, i.e., S ∪ U. The average comparison results on the two datasets are reported in Table 2. We can see that the two specialized multilabel zero-shot learning methods, DMP and Fast0Tag, outperform the adapted methods ConSE and LatEm-M in terms of most measures on both VOC2007 and VOC2012, while TAEP achieves competitive performances with them. By further incorporating the auxiliary information, the proposed methods, TAEP-C and TAEP-H, not only consistently outperform all the three comparison methods on both datasets in terms of all the evaluation metrics, they also consistently outperform the base model TAEP. TAEP-C again produced the best results in most cases. These results suggest our proposed model provides an effective framework on learning transferaware label embeddings for generalized multi-label zero-shot learning, and it also provides the effective mechanism on incorporating free auxiliary information. Impact of Label Embedding Regularization In this section we study the impact of label embedding projection regularization term H(U ), i.e., the transfer-aware part of the proposed model. For TAEP, we firstly set the parameters to the same values, γ 0 , as those that generate Table 1, and then reduce γ by a factor of 10 each time to repeat the experiments. That is, we try γ=γ 0 ×{10 0 , 10 −1 , 10 −2 , 10 −3 }. Since γ is the weight for the regularization term H(U ), by doing this we are actually reducing the contribution of the embedding projection regularization term. The results in terms of MiAP are presented in Figure 2. Similarly, we also tested the impact of auxiliary information through the regularization term H(U ) for TAEP-H and TAEP-C by reducing λ by factors of {10 0 , 10 −1 , 10 −2 , 10 −3 }. From Figure 2 we can see that, as γ decreases, the performance of TAEP decreases on both datasets. This suggests that the label embedding projection regularization term H(U ) is a necessary and useful component. By regularizing the label embeddings to induce better inter-label relationships, the cross-class information transfer can be facilitated in zero-shot learning. Similarly, we also observe that when λ decreases, the performance of TAEP-C and TAEP-H decreases as well on both datasets. This again verifies the usefulness of auxiliary information and the effectiveness of auxiliary integration mechanism of the proposed transfer-aware embedding projection method. 
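For completeness, the multi-label metrics used in the tables above can be computed as follows. This is a sketch using scikit-learn for the standard metrics; the MiAP implementation is a plain per-image average precision, which is our reading of the definition rather than the exact evaluation script.

```python
# Sketch of the evaluation metrics: MiAP (mean per-image Average Precision of
# the ranked labels), micro/macro-F1 and Hamming loss.
import numpy as np
from sklearn.metrics import average_precision_score, f1_score, hamming_loss

def mi_ap(Y_true, scores):
    # Y_true: (n, L) binary ground truth; scores: (n, L) predicted label scores.
    aps = [average_precision_score(Y_true[i], scores[i])
           for i in range(Y_true.shape[0]) if Y_true[i].sum() > 0]
    return float(np.mean(aps))

def multilabel_report(Y_true, scores, threshold=0.0):
    # The threshold for turning scores into hard label predictions is an
    # illustrative choice.
    Y_pred = (scores > threshold).astype(int)
    return {
        "MiAP": mi_ap(Y_true, scores),
        "micro-F1": f1_score(Y_true, Y_pred, average="micro"),
        "macro-F1": f1_score(Y_true, Y_pred, average="macro"),
        "Hamming loss": hamming_loss(Y_true, Y_pred),
    }
```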
Conclusion In this paper we proposed a transfer-aware label embedding approach for multi-label zero-shot image classification. The approach projects both images and labels into the same semantic space and ranks the similarity scores of the images with their positive and negative labels under a max-margin learning framework, while guiding the label embedding projection with a transfer-aware regularization objective to achieve suitable inter-label relations for information adaptation. The regularization framework also allows convenient incorporation of auxiliary information. We conducted experiments comparing our approach with several related ZSL methods on multi-label image classification tasks, and the results demonstrated the efficacy of the proposed approach.
4,251
1808.02455
2886572283
Data augmentation for deep neural networks is the process of generating artificial data in order to reduce the variance of the classifier, with the goal of reducing the number of errors. This idea has been shown to improve deep neural networks' generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike in image recognition problems, data augmentation techniques have not yet been investigated thoroughly for the TSC task. This is surprising, as the accuracy of deep learning models for TSC could potentially be improved when a data augmentation method is adopted, especially for small datasets that exhibit overfitting. In this paper, we fill this gap by investigating the application of a recently proposed data augmentation technique based on the Dynamic Time Warping distance to a deep learning model for TSC. To evaluate the potential of augmenting the training set, we performed extensive experiments using the UCR TSC benchmark. Our preliminary experiments reveal that data augmentation can drastically increase a deep CNN's accuracy on some datasets, and significantly improves the deep model's accuracy when used in an ensemble approach.
The most commonly used data augmentation method for TSC is the slicing window technique, originally introduced for deep CNNs in @cite_2 . The method was inspired by the image cropping technique for data augmentation in computer vision tasks @cite_10 . This data transformation can, to a certain degree, guarantee that the cropped image still holds the same information as the original image. For time series data, on the other hand, one cannot be sure that the discriminative information has not been lost when a certain region of the time series is cropped. Nevertheless, this method was used in several TSC problems, such as in @cite_7 , where it improved the accuracy of Support Vector Machines for classifying electroencephalographic time series. In @cite_15 , the slicing window technique was also adopted to improve CNNs' mortgage delinquency prediction using customers' historical transactional data. In addition to the slicing window technique, jittering, scaling, warping and permutation were proposed in @cite_9 as generic time series data augmentation approaches. The authors in @cite_9 also proposed a novel data augmentation method specific to wearable sensor time series data that rotates the trajectory of a person's arm around an axis (e.g. the @math axis).
{ "abstract": [ "On image data, data augmentation is becoming less relevant due to the large amount of available training data and regularization techniques. Common approaches are moving windows (cropping), scaling, affine distortions, random noise, and elastic deformations. For electroencephalographic data, the lack of sufficient training data is still a major issue. We suggest and evaluate different approaches to generate augmented data using temporal and spatial rotational distortions. Our results on the perception of rare stimuli (P300 data) and movement prediction (MRCP data) show that these approaches are feasible and can significantly increase the performance of signal processing chains for brain-computer interfaces by 1 to 6 .", "While convolutional neural networks (CNNs) have been successfully applied to many challenging classification applications, they typically require large datasets for training. When the availability of labeled data is limited, data augmentation is a critical preprocessing step for CNNs. However, data augmentation for wearable sensor data has not been deeply investigated yet. In this paper, various data augmentation methods for wearable sensor data are proposed. The proposed methods and CNNs are applied to the classification of the motor state of Parkinson’s Disease patients, which is challenging due to small dataset size, noisy labels, and large intra-class variability. Appropriate augmentation improves the classification performance from 77.54 to 86.88 .", "", "Abstract We predict mortgage default by applying convolutional neural networks to consumer transaction data. For each consumer we have the balances of the checking account, savings account, and the credit card, in addition to the daily number of transactions on the checking account, and amount transferred into the checking account. With no other information about each consumer we are able to achieve a ROC AUC of 0.918 for the networks, and 0.926 for the networks in combination with a random forests classifier.", "Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models." ], "cite_N": [ "@cite_7", "@cite_9", "@cite_2", "@cite_15", "@cite_10" ], "mid": [ "2783546442", "2620664872", "", "2790611518", "2950220847" ] }
Data augmentation using synthetic data for time series classification with deep residual networks
Deep learning usually benefits from large training sets [22]. However, for many applications only relatively small training data exist. In Time Series Classification (TSC), this phenomenon can be observed by analyzing the UCR archive's datasets [3], where 20 datasets have 50 or fewer training instances. These numbers are relatively small compared to the billions of labeled images in computer vision, where deep learning has seen its most successful applications [16]. Although the recently proposed deep Convolutional Neural Networks (CNNs) reached state of the art performance in TSC on the UCR archive [21], they still show low generalization capabilities on some small datasets such as the Diatom-SizeReduction dataset. This is surprising since the nearest neighbor approach (1-NN) coupled with the Dynamic Time Warping (DTW) performs exceptionally well on this dataset which shows the relative easiness of this classification task. Thus, inter-time series similarities in such small datasets cannot be captured by the CNNs due to the lack of labeled instances, which pushes the network's optimization algorithm to be stuck in local minimums [22]. Fig. 1 illustrates on an example that the lack of labeled data can sometimes be compensated by the addition of synthetic data. This phenomenon, also known as overfitting in the machine learning community, can be solved using different techniques such as regularization or simply collecting more labeled data [22] (which in some domains are hard to obtain). Another well-known technique is data augmentation, where synthetic data are generated using a specific method. For example, images containing street numbers on houses can be slightly rotated without changing what number they actually are [13]. For deep learning models, these methods are usually proposed for image data and do not generalize well to time series [20]. This is probably due to the fact that for images, a visual comparison can confirm if the transformation (such as rotation) did not alter the image's class, while for time series data, one cannot easily confirm the effect of such ad-hoc transformations on the nature of a time series. This is the main reason why data augmentation for TSC have been limited to mainly two relatively simple techniques: slicing and manual warping, which are further discussed in Section 2. In this paper, we propose to leverage from a DTW based data augmentation technique specifically developed for time series, in order to boost the performance of a deep Residual Network (ResNet) for TSC. Our preliminary experiments reveal that data augmentation can drastically increase the accuracy for CNNs on some datasets while having a small negative impact on other datasets. We finally propose to combine the decision of the two trained models and show how it can reduce significantly the rare negative effect of data augmentation while maintaining its high gain in accuracy on other datasets. Method Architecture We have chosen to improve the generalization capability of the deep ResNet proposed in [21] for two main reasons, whose corresponding architecture is illustrated in Fig. 2. First, by adopting an already validated architecture, we can attribute any improvement in the network's performance solely to the data augmentation technique. 
The second reason is that ResNet [21], to the best of our knowledge, is the deepest neural network validated on large number of TSC tasks (such as the UCR archive [3]), which according to the deep learning literature will benefit the most from the data augmentation techniques as opposed to shallow architectures [2]. Deep ResNets were first proposed by He et al. [9] for computer vision tasks. They are mainly composed of convolutions, with one important characteristic: the residual connections which acts like shortcuts that enable the flow of the gradient directly through these connections. The input of this network is a univariate time series with a varying length l. The output consists of a probability distribution over the C classes in the dataset. The network's core contains three residual blocks followed by a Global Average Pooling layer and a final softmax classifier with C neurons. Each residual block contains three 1-D convolutions of respectively 8, 5 and 3 filter lengths. Each convolution is followed by a batch normalization [10] and a Rectified Linear Unit (ReLU) as the activation function. The residual connection consists in linking the input of a residual block to the input of its consecutive layer with the simple addition operation. The number of filters in the first residual blocks is set to 64 filters, while the second and third blocks contains 128 filters. All network's parameters were initialized using Glorot's Uniform initialization method [8]. These parameters were learned using Adam [11] as the optimization algorithm. Following [21], without any fine-tuning, the learning rate was set to 0.001 and the exponential decay rates of the first and second moment estimates were set to 0.9 and 0.999 respectively. Finally, the categorical cross-entropy was used as the objective cost function during the optimization process. Data augmentation The data augmentation method we have chosen to test with this deep architecture, was first proposed in [6] to augment the training set for a 1-NN coupled with the DTW distance in a cold start simulation problem. In addition, the 1-NN was shown to sometimes benefit from augmenting the size of the train set even when the whole dataset is available for training. Thus, we hypothesize that this synthetic time series generation method should improve deep neural network's performance, especially that the generated examples in [6] were shown to closely follow the distribution from which the original dataset was sampled. The method is mainly based on a weighted form of DTW Barycentric Averaging (DBA) technique [19,18,17]. The latter algorithm averages a set of time series in a DTW induced space and by leveraging a weighted version of DBA, the method can thus create an infinite number of new time series from a given set of time series by simply varying these weights. Three techniques were proposed to select these weights, from which we chose only one in our approach for the sake of simplicity, although we consider evaluating other techniques in our future work. The weighting method is called Average Selected which consists of selecting a subset of close time series and fill their bounding boxes. We start by describing in details how the weights are assigned, which constitutes the main difference between an original version of DBA and the weighted version originally proposed in [6]. Starting with a random initial time series chosen from the training set, we assign it a weight equal to 0.5. 
The latter randomly selected time series will act as the initialization of DBA. Then, we search for its 5 nearest neighbors using the DTW distance. We then randomly select 2 out these 5 neighbors and assign them a weight value equal to 0.15 each, making thus the total sum of assigned weights till now equal to 0.5 + 2 × 0.15 = 0.8. Therefore, in order to have a normalized sum of weights (equal to 1), the rest of the time series in the subset will share the rest of the weight 0.2. We should note that the process of generating synthetic time series leveraged only the training set thus eliminating any bias due to having seen the test set's distribution. As for computing the average sequence, we adopted the DBA algorithm in our data augmentation framework. Although other time series averaging methods exist in the literature, we chose the weighted version of DBA since it was already proposed as a data augmentation technique to solve the cold start problem when using a nearest neighbor classifier [6]. Therefore we emphasize that other weighted averaging methods such as soft-DTW [5] could be used instead of DBA in our framework, but we leave such exploration for our future work. We did not test the effect of imbalanced classes in the training set and how it could affect the model's generalization capabilities. Note that imbalanced time series classification is a recent active area of research that merits an empirical study of its own [7]. At last, we should add that the number of generated time series in our framework was chosen to be equal to double the amount of time series in the most represented class (which is a hyper-parameter of our approach that we aim to further investigate in our future work). Results Experimental Setup We evaluated the data augmentation method for ResNet on the UCR archive [3], which is the largest publicly available TSC benchmark. The archive is composed of datasets from different real world applications with varying characteristics such the number of classes and the size of the training set. Finally, for training the deep learning models, we leveraged the high computational power of more than 60 GPUs in one huge cluster 1 We should also note that the same parameters' initial values were used for all compared approaches, thus eliminating any bias due to the random initialization of the network's weights. Effect of data augmentation Our results show that data augmentation can drastically improve the accuracy of a deep learning model while having a small negative impact on some datasets in the worst case scenario. Fig. 3a shows the difference in accuracy between ResNet with and without data augmentation, it shows that the data augmentation technique does not lead a significant decrease in accuracy. Additionally, we observe a huge increase of accuracy for the DiatomSizeReduction dataset (the accuracy increases from 30% to 96% when using data augmentation). This result is very interesting for two main reasons. First, DiatomSizeReduction has the smallest training set in the UCR archive [3] (with 16 training instances), which shows the benefit of increasing the number of training instances by generating synthetic time series. Secondly, the DiatomSizeReduction dataset is the one where ResNet yield the worst accuracy without augmentation. 
On the other hand, the 1-NN coupled with DTW (or the Euclidean distance) gives an accuracy of 97% which shows the relative easiness of this dataset where time series exhibit similarities that can be captured by the simple Euclidean distance, but missed by the deep ResNet due to the lack of training data (which is compensated by our data augmentation technique). The results for the Wine dataset (57 training instances) also show an important improvement when using data augmentation. While we did show that deep ResNet can benefit from synthetic time series on some datasets, we did not manage to show any significant improvement over the whole UCR archive (p-value > 0.41 for the Wilcoxon signed rank test). Therefore, we decided to leverage an ensemble technique where we take into consideration the decisions of two ResNets (trained with and without data augmentation). In fact, we average the a posteriori probability for each class over both classifier outputs, then assign for each time series the label for which the averaged probability is maximum, thus giving a more robust approach to out-ofsample generated time series. The results in Fig. 3b show that the datasets which benefited the most from data augmentation exhibit almost no change to their accuracy improvement. While on the other hand the number of datasets where data augmentation harmed the model's accuracy decreased from 30 to 21. The Wilcoxon signed rank test shows a significant difference (p-value < 0.0005). The ensemble's results are in compliance with the recent consensus in the TSC community, where ensembles tend to improve the individual classifiers' accuracy [1]. Conclusion In this paper, we showed how overfitting small time series datasets can be mitigated using a recent data augmentation technique that is based on DTW and a weighted version of the DBA algorithm. These findings are very interesting since no previous observation made a link between the space induced by the classic DTW and the features learned by the CNNs, whereas our experiments showed that by providing enough time series, CNNs are able to learn time invariant features that are useful for classification. In our future work, we aim to further test other variant weighting schemes for the DTW-based data augmentation technique, while providing a method that predicts when and for which datasets, data augmentation would be beneficial.
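The residual architecture described in the Method section above can be sketched as follows. This is a minimal illustration only, assuming a TensorFlow/Keras environment; the exact placement of activations and shortcut projections in the authors' implementation may differ, and the helper names (residual_block, build_resnet) are ours, not taken from the paper.

    import tensorflow as tf
    from tensorflow.keras import layers

    def residual_block(x, n_filters):
        # Shortcut projection so the residual addition matches channel counts.
        shortcut = layers.Conv1D(n_filters, 1, padding="same")(x)
        shortcut = layers.BatchNormalization()(shortcut)
        # Three 1-D convolutions with kernel lengths 8, 5 and 3, each followed by BN + ReLU.
        for k in (8, 5, 3):
            x = layers.Conv1D(n_filters, k, padding="same")(x)
            x = layers.BatchNormalization()(x)
            x = layers.Activation("relu")(x)
        return layers.Activation("relu")(layers.add([shortcut, x]))

    def build_resnet(input_length, n_classes):
        inp = layers.Input(shape=(input_length, 1))      # univariate time series
        x = residual_block(inp, 64)                       # first block: 64 filters
        x = residual_block(x, 128)                        # second and third blocks: 128 filters
        x = residual_block(x, 128)
        x = layers.GlobalAveragePooling1D()(x)
        out = layers.Dense(n_classes, activation="softmax")(x)
        model = tf.keras.Model(inp, out)
        # Adam with learning rate 0.001 (default betas 0.9 / 0.999) and categorical cross-entropy,
        # as reported in the text.
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                      loss="categorical_crossentropy", metrics=["accuracy"])
        return model

A model built this way follows the reported settings (three residual blocks, 64-128-128 filters, global average pooling, softmax output), but it should be read as a sketch of the description above rather than the authors' exact code.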
2,000
1808.02455
2886572283
Data augmentation in deep neural networks is the process of generating artificial data in order to reduce the variance of the classifier, with the goal of reducing the number of errors. This idea has been shown to improve deep neural networks' generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike in image recognition problems, data augmentation techniques have not yet been thoroughly investigated for the TSC task. This is surprising, as the accuracy of deep learning models for TSC could potentially be improved when a data augmentation method is adopted, especially for small datasets that exhibit overfitting. In this paper, we fill this gap by investigating the application of a recently proposed data augmentation technique based on the Dynamic Time Warping distance to a deep learning model for TSC. To evaluate the potential of augmenting the training set, we performed extensive experiments using the UCR TSC benchmark. Our preliminary experiments reveal that data augmentation can drastically increase a deep CNN's accuracy on some datasets, and significantly improves the deep model's accuracy when the method is used in an ensemble approach.
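The ensemble approach mentioned above (and detailed in the paper text) averages the class-probability outputs of two ResNets, one trained with and one without augmentation, and picks the label with the largest averaged probability. A minimal sketch, assuming Keras-style models whose predict method returns softmax probabilities (the function name is ours):

    import numpy as np

    def ensemble_predict(model_plain, model_augmented, X):
        # Average the a posteriori class probabilities of the two classifiers,
        # then assign each series the label with the maximum averaged probability.
        p = (model_plain.predict(X) + model_augmented.predict(X)) / 2.0
        return p.argmax(axis=1)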
In @cite_13 , the authors proposed to extend the window slicing technique with a warping window that generates synthetic time series by warping the data through time. This method was used to improve the classification performance of their deep CNN for TSC; it was also shown to significantly decrease the accuracy of an NN-DTW classifier when compared to the data augmentation algorithm we adopt @cite_3 . We should note that using a window slicing technique means the model must classify each subsequence alone and then classify the whole time series using a majority voting approach. In contrast, our method does not crop time series into shorter subsequences, which enables the network to learn discriminative properties from the whole time series in an end-to-end manner.
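For illustration, the window-slicing decision rule described above (classify each subsequence, then label the whole series by majority vote) could be sketched as follows; this is an assumption-laden sketch, with a Keras-style classifier over fixed-length windows and a function name of our choosing:

    import numpy as np

    def classify_with_window_slicing(model, series, window, step):
        # Slice the series into overlapping fixed-length windows.
        starts = range(0, len(series) - window + 1, step)
        windows = np.stack([series[s:s + window] for s in starts])
        # Classify each window independently, then majority-vote the window labels.
        probs = model.predict(windows[..., np.newaxis])   # shape (n_windows, n_classes)
        votes = probs.argmax(axis=1)
        return np.bincount(votes).argmax()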
{ "abstract": [ "Time series classification has been around for decades in the data-mining and machine learning communities. In this paper, we investigate the use of convolutional neural networks (CNN) for time series classification. Such networks have been widely used in many domains like computer vision and speech recognition, but only a little for time series classification. We design a convolu-tional neural network that consists of two convolutional layers. One drawback with CNN is that they need a lot of training data to be efficient. We propose two ways to circumvent this problem: designing data-augmentation techniques and learning the network in a semi-supervised way using training time series from different datasets. These techniques are experimentally evaluated on a benchmark of time series datasets.", "In machine learning, data augmentation is the process of creating synthetic examples in order to augment a dataset used to learn a model. One motivation for data augmentation is to reduce the variance of a classifier, thereby reducing error. In this paper, we propose new data augmentation techniques specifically designed for time series classification, where the space in which they are embedded is induced by Dynamic Time Warping (DTW). The main idea of our approach is to average a set of time series and use the average time series as a new synthetic example. The proposed methods rely on an extension of DTW Barycentric Averaging (DBA), the averaging technique that is specifically developed for DTW. In this paper, we extend DBA to be able to calculate a weighted average of time series under DTW. In this case, instead of each time series contributing equally to the final average, some can contribute more than others. This extension allows us to generate an infinite number of new examples from any set of given time series. To this end, we propose three methods that choose the weights associated to the time series of the dataset. We carry out experiments on the 85 datasets of the UCR archive and demonstrate that our method is particularly useful when the number of available examples is limited (e.g. 2 to 6 examples per class) using a 1-NN DTW classifier. Furthermore, we show that augmenting full datasets is beneficial in most cases, as we observed an increase of accuracy on 56 datasets, no effect on 7 and a slight decrease on only 22." ], "cite_N": [ "@cite_13", "@cite_3" ], "mid": [ "2515503816", "2773662615" ] }
Data augmentation using synthetic data for time series classification with deep residual networks
Deep learning usually benefits from large training sets [22]. However, for many applications only relatively small training data exist. In Time Series Classification (TSC), this phenomenon can be observed by analyzing the UCR archive's datasets [3], where 20 datasets have 50 or fewer training instances. These numbers are relatively small compared to the billions of labeled images in computer vision, where deep learning has seen its most successful applications [16]. Although the recently proposed deep Convolutional Neural Networks (CNNs) reached state of the art performance in TSC on the UCR archive [21], they still show low generalization capabilities on some small datasets such as the Diatom-SizeReduction dataset. This is surprising since the nearest neighbor approach (1-NN) coupled with the Dynamic Time Warping (DTW) performs exceptionally well on this dataset which shows the relative easiness of this classification task. Thus, inter-time series similarities in such small datasets cannot be captured by the CNNs due to the lack of labeled instances, which pushes the network's optimization algorithm to be stuck in local minimums [22]. Fig. 1 illustrates on an example that the lack of labeled data can sometimes be compensated by the addition of synthetic data. This phenomenon, also known as overfitting in the machine learning community, can be solved using different techniques such as regularization or simply collecting more labeled data [22] (which in some domains are hard to obtain). Another well-known technique is data augmentation, where synthetic data are generated using a specific method. For example, images containing street numbers on houses can be slightly rotated without changing what number they actually are [13]. For deep learning models, these methods are usually proposed for image data and do not generalize well to time series [20]. This is probably due to the fact that for images, a visual comparison can confirm if the transformation (such as rotation) did not alter the image's class, while for time series data, one cannot easily confirm the effect of such ad-hoc transformations on the nature of a time series. This is the main reason why data augmentation for TSC have been limited to mainly two relatively simple techniques: slicing and manual warping, which are further discussed in Section 2. In this paper, we propose to leverage from a DTW based data augmentation technique specifically developed for time series, in order to boost the performance of a deep Residual Network (ResNet) for TSC. Our preliminary experiments reveal that data augmentation can drastically increase the accuracy for CNNs on some datasets while having a small negative impact on other datasets. We finally propose to combine the decision of the two trained models and show how it can reduce significantly the rare negative effect of data augmentation while maintaining its high gain in accuracy on other datasets. Method Architecture We have chosen to improve the generalization capability of the deep ResNet proposed in [21] for two main reasons, whose corresponding architecture is illustrated in Fig. 2. First, by adopting an already validated architecture, we can attribute any improvement in the network's performance solely to the data augmentation technique. 
The second reason is that ResNet [21], to the best of our knowledge, is the deepest neural network validated on large number of TSC tasks (such as the UCR archive [3]), which according to the deep learning literature will benefit the most from the data augmentation techniques as opposed to shallow architectures [2]. Deep ResNets were first proposed by He et al. [9] for computer vision tasks. They are mainly composed of convolutions, with one important characteristic: the residual connections which acts like shortcuts that enable the flow of the gradient directly through these connections. The input of this network is a univariate time series with a varying length l. The output consists of a probability distribution over the C classes in the dataset. The network's core contains three residual blocks followed by a Global Average Pooling layer and a final softmax classifier with C neurons. Each residual block contains three 1-D convolutions of respectively 8, 5 and 3 filter lengths. Each convolution is followed by a batch normalization [10] and a Rectified Linear Unit (ReLU) as the activation function. The residual connection consists in linking the input of a residual block to the input of its consecutive layer with the simple addition operation. The number of filters in the first residual blocks is set to 64 filters, while the second and third blocks contains 128 filters. All network's parameters were initialized using Glorot's Uniform initialization method [8]. These parameters were learned using Adam [11] as the optimization algorithm. Following [21], without any fine-tuning, the learning rate was set to 0.001 and the exponential decay rates of the first and second moment estimates were set to 0.9 and 0.999 respectively. Finally, the categorical cross-entropy was used as the objective cost function during the optimization process. Data augmentation The data augmentation method we have chosen to test with this deep architecture, was first proposed in [6] to augment the training set for a 1-NN coupled with the DTW distance in a cold start simulation problem. In addition, the 1-NN was shown to sometimes benefit from augmenting the size of the train set even when the whole dataset is available for training. Thus, we hypothesize that this synthetic time series generation method should improve deep neural network's performance, especially that the generated examples in [6] were shown to closely follow the distribution from which the original dataset was sampled. The method is mainly based on a weighted form of DTW Barycentric Averaging (DBA) technique [19,18,17]. The latter algorithm averages a set of time series in a DTW induced space and by leveraging a weighted version of DBA, the method can thus create an infinite number of new time series from a given set of time series by simply varying these weights. Three techniques were proposed to select these weights, from which we chose only one in our approach for the sake of simplicity, although we consider evaluating other techniques in our future work. The weighting method is called Average Selected which consists of selecting a subset of close time series and fill their bounding boxes. We start by describing in details how the weights are assigned, which constitutes the main difference between an original version of DBA and the weighted version originally proposed in [6]. Starting with a random initial time series chosen from the training set, we assign it a weight equal to 0.5. 
The latter randomly selected time series will act as the initialization of DBA. Then, we search for its 5 nearest neighbors using the DTW distance. We then randomly select 2 out these 5 neighbors and assign them a weight value equal to 0.15 each, making thus the total sum of assigned weights till now equal to 0.5 + 2 × 0.15 = 0.8. Therefore, in order to have a normalized sum of weights (equal to 1), the rest of the time series in the subset will share the rest of the weight 0.2. We should note that the process of generating synthetic time series leveraged only the training set thus eliminating any bias due to having seen the test set's distribution. As for computing the average sequence, we adopted the DBA algorithm in our data augmentation framework. Although other time series averaging methods exist in the literature, we chose the weighted version of DBA since it was already proposed as a data augmentation technique to solve the cold start problem when using a nearest neighbor classifier [6]. Therefore we emphasize that other weighted averaging methods such as soft-DTW [5] could be used instead of DBA in our framework, but we leave such exploration for our future work. We did not test the effect of imbalanced classes in the training set and how it could affect the model's generalization capabilities. Note that imbalanced time series classification is a recent active area of research that merits an empirical study of its own [7]. At last, we should add that the number of generated time series in our framework was chosen to be equal to double the amount of time series in the most represented class (which is a hyper-parameter of our approach that we aim to further investigate in our future work). Results Experimental Setup We evaluated the data augmentation method for ResNet on the UCR archive [3], which is the largest publicly available TSC benchmark. The archive is composed of datasets from different real world applications with varying characteristics such the number of classes and the size of the training set. Finally, for training the deep learning models, we leveraged the high computational power of more than 60 GPUs in one huge cluster 1 We should also note that the same parameters' initial values were used for all compared approaches, thus eliminating any bias due to the random initialization of the network's weights. Effect of data augmentation Our results show that data augmentation can drastically improve the accuracy of a deep learning model while having a small negative impact on some datasets in the worst case scenario. Fig. 3a shows the difference in accuracy between ResNet with and without data augmentation, it shows that the data augmentation technique does not lead a significant decrease in accuracy. Additionally, we observe a huge increase of accuracy for the DiatomSizeReduction dataset (the accuracy increases from 30% to 96% when using data augmentation). This result is very interesting for two main reasons. First, DiatomSizeReduction has the smallest training set in the UCR archive [3] (with 16 training instances), which shows the benefit of increasing the number of training instances by generating synthetic time series. Secondly, the DiatomSizeReduction dataset is the one where ResNet yield the worst accuracy without augmentation. 
On the other hand, the 1-NN coupled with DTW (or the Euclidean distance) gives an accuracy of 97% which shows the relative easiness of this dataset where time series exhibit similarities that can be captured by the simple Euclidean distance, but missed by the deep ResNet due to the lack of training data (which is compensated by our data augmentation technique). The results for the Wine dataset (57 training instances) also show an important improvement when using data augmentation. While we did show that deep ResNet can benefit from synthetic time series on some datasets, we did not manage to show any significant improvement over the whole UCR archive (p-value > 0.41 for the Wilcoxon signed rank test). Therefore, we decided to leverage an ensemble technique where we take into consideration the decisions of two ResNets (trained with and without data augmentation). In fact, we average the a posteriori probability for each class over both classifier outputs, then assign for each time series the label for which the averaged probability is maximum, thus giving a more robust approach to out-ofsample generated time series. The results in Fig. 3b show that the datasets which benefited the most from data augmentation exhibit almost no change to their accuracy improvement. While on the other hand the number of datasets where data augmentation harmed the model's accuracy decreased from 30 to 21. The Wilcoxon signed rank test shows a significant difference (p-value < 0.0005). The ensemble's results are in compliance with the recent consensus in the TSC community, where ensembles tend to improve the individual classifiers' accuracy [1]. Conclusion In this paper, we showed how overfitting small time series datasets can be mitigated using a recent data augmentation technique that is based on DTW and a weighted version of the DBA algorithm. These findings are very interesting since no previous observation made a link between the space induced by the classic DTW and the features learned by the CNNs, whereas our experiments showed that by providing enough time series, CNNs are able to learn time invariant features that are useful for classification. In our future work, we aim to further test other variant weighting schemes for the DTW-based data augmentation technique, while providing a method that predicts when and for which datasets, data augmentation would be beneficial.
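The "Average Selected" weighting scheme described in the Data augmentation subsection above can be sketched as follows. This is an illustrative sketch under stated assumptions: dtw_dist is any DTW distance implementation (e.g., tslearn.metrics.dtw), the returned weights are meant to be passed, together with the selected series, to a weighted DBA routine (tslearn's dtw_barycenter_averaging is assumed here to accept per-series weights), and the function name is ours.

    import random
    import numpy as np

    def average_selected_weights(X_train, dtw_dist, k=5, n_picked=2):
        # Pick a random seed series and give it weight 0.5; it also initializes DBA.
        seed = random.randrange(len(X_train))
        d = [dtw_dist(X_train[seed], X_train[j]) if j != seed else np.inf
             for j in range(len(X_train))]
        neighbours = list(np.argsort(d)[:k])          # 5 DTW nearest neighbours of the seed
        picked = random.sample(neighbours, n_picked)  # 2 of them get weight 0.15 each
        rest = [j for j in neighbours if j not in picked]
        weights = {seed: 0.5}
        for j in picked:
            weights[j] = 0.15
        for j in rest:                                # remaining neighbours share the last 0.2
            weights[j] = 0.2 / len(rest)
        # All other training series implicitly get weight 0 (only the subset is averaged).
        return weights

Varying the random choices yields a different weight vector each call, which is how an arbitrary number of synthetic series can be generated from the same training set.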
2,000
1808.01756
2952978061
We consider practical hardware implementation of Polar decoders. To reduce latency due to the serial nature of successive cancellation (SC), existing optimizations improve parallelism with two approaches, i.e., multi-bit decision or reduced path splitting. In this paper, we combine the two procedures into one with an error-pattern-based architecture. It simultaneously generates a set of candidate paths for multiple bits with pre-stored patterns. For rate-1 (R1) or single parity-check (SPC) nodes, we prove that a small number of deterministic patterns are required to guarantee performance preservation. For general nodes, low-weight error patterns are indexed by syndrome in a look-up table and retrieved in O(1) time. The proposed flip-syndrome-list (FSL) decoder fully parallelizes all constituent code blocks without sacrificing performance, thus is suitable for ultra-low-latency applications. Meanwhile, two code construction optimizations are presented to further reduce complexity and improve performance, respectively.
Polar codes @cite_8 @cite_16 have been selected for the fifth generation (5G) wireless standard. With state-of-the-art code construction techniques @cite_19 @cite_0 @cite_5 and SC-List (SCL) decoding algorithm @cite_10 @cite_9 @cite_11 @cite_12 @cite_1 @cite_14 @cite_3 @cite_7 , Polar codes demonstrate competitive performance over LDPC and Turbo codes in terms of block error rate (BLER). Beyond 5G, ultra-low decoding latency emerges as a key requirement for applications such as autonomous driving and virtual reality. The latency of practical Polar decoders, e.g., an SC-list decoder with list size @math , is relatively long due to the serial processing nature.
{ "abstract": [ "In this paper, we propose a decision-aided scheme for parallel SC-List decoding of polar codes. At the parallel SC-List decoder, each survival path is extended based on multiple information bits, therefore the number of split paths becomes very large and the sorting to find the top L paths becomes very complex. We propose a decision-aided scheme to reduce the number of split paths and thus reduce the sorting complexity.", "", "", "", "Polar codes provably achieve the symmetric capacity of a memoryless channel while having an explicit construction. The adoption of polar codes however, has been hampered by the low throughput of their decoding algorithm. This work aims to increase the throughput of polar decoding hardware by an order of magnitude relative to successive-cancellation decoders and is more than 8 times faster than the current fastest polar decoder. We present an algorithm, architecture, and FPGA implementation of a flexible, gigabit-per-second polar decoder.", "This paper focuses on low complexity successive cancellation list (SCL) decoding of polar codes. In particular, using the fact that splitting may be unnecessary when the reliability of decoding the unfrozen bit is sufficiently high, a novel splitting rule is proposed. Based on this rule, it is conjectured that, if the correct path survives at some stage, it tends to survive till termination without splitting with high probability. On the other hand, the incorrect paths are more likely to split at the following stages. Motivated by these observations, a simple counter that counts the successive number of stages without splitting is introduced for each decoding path to facilitate the identification of correct and incorrect paths. Specifically, any path with counter value larger than a predefined threshold @math is deemed to be the correct path, which will survive at the decoding stage, while other paths with counter value smaller than the threshold will be pruned, thereby reducing the decoding complexity. Furthermore, it is proved that there exists a unique unfrozen bit @math , after which the successive cancellation decoder achieves the same error performance as the maximum likelihood decoder if all the prior unfrozen bits are correctly decoded, which enables further complexity reduction. Simulation results demonstrate that the proposed low complexity SCL decoder attains performance similar to that of the conventional SCL decoder, while achieving substantial complexity reduction.", "In this work, we introduce @math -expansion, a notion borrowed from number theory, as a theoretical framework to study fast construction of polar codes based on a recursive structure of universal partial order (UPO) and polarization weight (PW) algorithm. We show that polar codes can be recursively constructed from UPO by continuously solving several polynomial equations at each recursive step. From these polynomial equations, we can extract an interval for @math , such that ranking the synthetic channels through a closed-form @math -expansion preserves the property of nested frozen sets, which is a desired feature for low-complex construction. In an example of AWGN channels, we show that this interval for @math converges to a constant close to @math when the code block-length trends to infinity. 
Both asymptotic analysis and simulation results validate our theoretical claims.", "", "We describe a successive-cancellation list decoder for polar codes, which is a generalization of the classic successive-cancellation decoder of Arikan. In the proposed list decoder, @math decoding paths are considered concurrently at each decoding stage, where @math is an integer parameter. At the end of the decoding process, the most likely among the @math paths is selected as the single codeword at the decoder output. Simulations show that the resulting performance is very close to that of maximum-likelihood decoding, even for moderate values of @math . Alternatively, if a genie is allowed to pick the transmitted codeword from the list, the results are comparable with the performance of current state-of-the-art LDPC codes. We show that such a genie can be easily implemented using simple CRC precoding. The specific list-decoding algorithm that achieves this performance doubles the number of decoding paths for each information bit, and then uses a pruning procedure to discard all but the @math most likely paths. However, straightforward implementation of this algorithm requires @math time, which is in stark contrast with the @math complexity of the original successive-cancellation decoder. In this paper, we utilize the structure of polar codes along with certain algorithmic transformations in order to overcome this problem: we devise an efficient, numerically stable, implementation of the proposed list decoder that takes only @math time and @math space.", "", "", "Polar codes, as the first provable capacity-achieving error-correcting codes, have received much attention in recent years. However, the decoding performance of polar codes with traditional successive-cancellation (SC) algorithm cannot match that of the low-density parity-check or Turbo codes. Because SC list (SCL) decoding algorithm can significantly improve the error-correcting performance of polar codes, design of SCL decoders is important for polar codes to be deployed in practical applications. However, because the prior latency reduction approaches for SC decoders are not applicable for SCL decoders, these list decoders suffer from the long-latency bottleneck. In this paper, we propose a multibit-decision approach that can significantly reduce latency of SCL decoders. First, we present a reformulated SCL algorithm that can perform intermediate decoding of 2 b together. The proposed approach, referred as 2-bit reformulated SCL ( 2b-rSCL ) algorithm , can reduce the latency of SCL decoder from ( @math ) to ( @math ) clock cycles without any performance loss. Then, we extend the idea of 2-b-decision to general case, and propose a general decoding scheme that can perform intermediate decoding of any @math bits simultaneously. This general approach, referred as @math -bit reformulated SCL ( @math b-rSCL ) algorithm , can reduce the overall decoding latency to as short as @math cycles. Furthermore, on the basis of the proposed algorithms, very large-scale integration architectures for 2b-rSCL and 4b-rSCL decoders are synthesized. 
Compared with a prior SCL decoder, the proposed (1024, 512) 2b-rSCL and 4b-rSCL decoders can achieve 21 and 60 reduction in latency, 1.66 and 2.77 times increase in coded throughput with list size 2, and 2.11 and 3.23 times increase in coded throughput with list size 4, respectively.", "Polar codes have gained significant amount of attention during the past few years and have been selected as a coding scheme for the next generation of mobile broadband standard. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable tradeoff between the error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate one and single parity check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of path splitting required to decode rate one and single parity check codes. Thus, the number of splitting can be limited while guaranteeing exactly the same error-correction performance as if the paths were forked at each bit estimation. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of path forks in a practical application can be tuned to achieve desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: It is shown that our design can achieve @math Gb s throughput, higher than the best state-of-the-art decoders." ], "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_9", "@cite_1", "@cite_3", "@cite_0", "@cite_19", "@cite_5", "@cite_16", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2240900403", "", "", "", "2003204237", "2228116382", "2950110977", "", "2106045225", "", "", "2101207956", "2601496744" ] }
A Flip-Syndrome-List Polar Decoder Architecture for Ultra-Low-Latency Communications
B. Motivation and our contributions It is well known that an SC decoder requires 2N − 2 time steps for a length-N code [1]. The SC decoding factor graph reveals that, the main source of latency is the left hand side (LHS, or information bit side) of the graph. In contrast, the right hand side (RHS, or codeword side) of the graph consists of independent code blocks and already supports parallel decoding. With the above observations, the key to low-latency decoding is to parallelize LHS processing. Existing hardware decoder designs are pioneered by [7]- [11], which view SC decoding as binary tree search, i.e., a length-N code (a parent node) is recursively decomposed into two length-N/2 codes (child nodes). Upon reaching certain special nodes, their child nodes are not traversed [7] and the corresponding path metrics are directly updated at the parent node [8]. Even though, there is still room for further optimizations: • The processing of an R1/SPC node is not fully parallel (e.g., a number of sequential path extension & pruning are still required [9]). A higher degree of parallelism can be exploited to further reduce latency. • Optimizations (e.g., parallel processing) are applied to some special nodes (e.g., R0/Rep/SPC/R1), and the length of such blocks, denoted by B, is often short due to insufficient polarization. According to our measurement under typical code lengths, the main source of latency is now incurred by the general nodes whose constituent code rates are between 2 B and B−2 B . Motivated by [7]- [11], and thanks to the recent advances in efficient list pruning [14], [15], we find it profitable to further improve parallelism for ultra-low-latency applications. Our contributions are summarized below: 1) We propose to fully parallelize the processing of R1/SPC nodes via multi-bit hard estimation and flipping at intermediate stages. Only one-time path extension/pruning per node is required by applying a small number of flipping patterns on the raw hard estimation. Such simplification is proven to preserve performance. 2) For general nodes, we apply flip-syndrome-list (FSL) decoding to constituent code blocks. Specifically, a small set of low-weight error patterns are pre-stored in a table indexed by syndrome. During decoding, syndrome is calculated per constituent code block. Its associated error patterns are retrieved from the syndrome table, and used for bit-flip-based sub-path generation. Similar to R1/SPC nodes, the FSL decoder narrows down the candidates for path extension, and enjoys the simplicity of a hard-input decoder. The proposed optimization is shown to incur negligible performance loss. 3) The complexity of an FSL decoder is mainly incurred by constituent code blocks with medium rates. We propose to re-adjust the distribution of information bits in order to avoid certain constituent code rates, such that decoder complexity can be significantly reduced. We show that the performance loss can be negligible. 4) With the FSL decoder's capability to decode arbitrary linear outer constituent codes, not necessarily Polar codes, we propose to adopt hybrid outer codes with optimized distance spectrum. The hybrid-Polar codes demonstrate better performance than the original Polar codes. 
The paper is organized as follows: Section II introduces the fundamentals of Polar SCL decoding; Section III provides the details of the FSL decoder, including R1/SPC nodes, general nodes, latency analysis and BLER performance; Section IV proposes two improved code construction methods that benefit from the FSL decoder architecture; Section V concludes the paper.

II. POLAR CODES AND SCL DECODING

A binary Polar code of mother code length $N = 2^n$ can be defined by $c = uG$ and a set of information sub-channel indices $\mathcal{I}$. The information bits are assigned to the sub-channels with indices in $\mathcal{I}$, i.e., $u_{\mathcal{I}}$, and the frozen bits (zero-valued by default) are assigned to the remaining sub-channels. The Polar kernel matrix is $G = F^{\otimes n}$, where $F = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$ is the kernel, $\otimes$ denotes the Kronecker power, and $c$ is the codeword. The transmitted BPSK symbols are $x_0^{N-1} = 1 - 2 \cdot c_0^{N-1}$ and the received vector is $y_0^{N-1}$. For completeness, the original SCL decoder [6] is briefly revisited. The SC decoding factor graph of a length-$N$ Polar code consists of $N \times (\log_2 N + 1)$ nodes. The row indices $i = \{0, 1, \cdots, N-1\}$ denote the $N$ bit indices. The column indices $s = 0, 1, \cdots, \log_2 N$ denote decoding stages, with $s = 0$ labeling the information bit side and $s = \log_2 N$ labeling the input LLR side (or codeword side). Each node in the factor graph is indexed by an $(s, i)$ pair, and is associated with a soft LLR value $\alpha_{s,i}$, initialized by $\alpha_{\log_2 N, i} = y_i$, and a hard estimate $\beta_{s,i}$. For all $s$ and $i$ satisfying $i \bmod 2^{s+1} < 2^s$, a hardware-friendly right-to-left updating rule for $\alpha$ is $\alpha_{s,i} = \mathrm{sgn}(\alpha_{s+1,i})\,\mathrm{sgn}(\alpha_{s+1,i+2^s}) \min(|\alpha_{s+1,i}|, |\alpha_{s+1,i+2^s}|)$ and $\alpha_{s,i+2^s} = (1 - 2\beta_{s,i})\alpha_{s+1,i} + \alpha_{s+1,i+2^s}$. The hard estimate of the $i$-th bit is $\beta_{0,i} = \frac{1 - \mathrm{sgn}(\alpha_{0,i})}{2}$. The corresponding left-to-right updating rule for $\beta$ is $\beta_{s,i} = \beta_{s-1,i} \oplus \beta_{s-1,i+2^{s-1}}$ and $\beta_{s,i+2^{s-1}} = \beta_{s-1,i+2^{s-1}}$. An SCL decoder with list size $L$ splits the path upon each information bit and preserves the $L$ paths with the smallest path metrics (PM). Given the $l$-th path with $\hat{u}_i^l$ as its $i$-th hard output bit, a hardware-friendly PM updating rule [16] is $\mathrm{PM}_i^l = \mathrm{PM}_{i-1}^l$ if $\hat{u}_i^l = \beta_{0,i}^l$, and $\mathrm{PM}_i^l = \mathrm{PM}_{i-1}^l + |\alpha_{0,i}^l|$ otherwise, where $\mathrm{PM}_i^l$ denotes the path metric of the $l$-th path at bit index $i$, and $\alpha_{0,i}^l$ and $\beta_{0,i}^l$ denote its corresponding soft LLR and hard estimate, respectively. After decoding the last bit, the first path is selected as the decoding output.

III. FLIP-SYNDROME-LIST (FSL) DECODING

SC-based decoding of length-$N$ Polar codes requires $\log_2 N + 1$ stages to propagate the received signal ($s = \log_2 N$) to the information bits ($s = 0$). The degree of parallelism is $2^s$, i.e., it reduces by half after each decoding stage. To increase parallelism, we propose to terminate the LLR propagation at the intermediate stage $s = \log_2 B$, and process all length-$B$ constituent code blocks with a hard-input decoder. The design is detailed throughout this section, where the differences to existing works mainly include (i) fully parallelized processing for $B$ bits and $L$ paths, and (ii) support for arbitrary-rate blocks rather than special ones (e.g., R0/Rep/SPC/R1).

A. Multi-bit hard decision at intermediate stage

The indices of a constituent code block are denoted by $\mathcal{B} \triangleq \{i, i+1, \cdots, i+B-1\}$.
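Before turning to the multi-bit hard decision, a small numerical sketch of the hardware-friendly recursions summarized in Section II above may help. This is written in Python/NumPy purely for illustration; the function names are ours and are not part of the decoder described in this paper.

    import numpy as np

    def f_update(a, b):
        # Left-branch (min-sum) LLR update: sgn(a) * sgn(b) * min(|a|, |b|).
        return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

    def g_update(a, b, beta):
        # Right-branch LLR update using the partial-sum bit(s) beta of the left branch.
        return (1 - 2 * beta) * a + b

    def pm_update(pm, llr, u_hat):
        # Path-metric penalty: add |llr| only where the decision u_hat disagrees
        # with the sign-based hard estimate of the LLR.
        hard = (1 - np.sign(llr)) / 2
        return pm + (u_hat != hard) * np.abs(llr)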
Once the soft LLRs at the $s$-th stage are obtained, where $s = \log_2 B$, a raw hard estimation is immediately obtained by

$\beta_{s,\mathcal{B}} = \frac{1 - \mathrm{sgn}(\alpha_{s,\mathcal{B}})}{2}. \quad (1)$

In contrast to SCL, which uses the soft LLRs $\alpha_{s,\mathcal{B}}$, a constituent block decoder takes $\beta_{s,\mathcal{B}}$ as its hard input and directly generates a hard codeword $\hat{\beta}_{s,\mathcal{B}}$ as decoded output. The hard-input decoders for R1, SPC and general nodes will be described next in Sections III-B and III-C. For now, we assume such a decoder outputs a hard codeword $\hat{\beta}_{s,\mathcal{B}}$ for each candidate path, and recover the corresponding information vector by

$\hat{u}_{\mathcal{B}} = \hat{\beta}_{s,\mathcal{B}} F^{\otimes s}. \quad (2)$

Given the soft LLRs $\alpha_{s,\mathcal{B}}$ and the recovered codeword $\hat{\beta}_{s,\mathcal{B}}$, the multi-bit version of the PM updating rule [8] is

$\mathrm{PM}_{i+B-1}^{l} = \mathrm{PM}_{i-1}^{l} + \sum_{j \in \mathcal{B}} \big|\hat{\beta}_{s,j}^{l} - \beta_{s,j}^{l}\big| \cdot \big|\alpha_{s,j}^{l}\big|. \quad (3)$

The remaining updating of $\alpha$ and $\beta$ is based on the hard decision $\hat{\beta}_{s,\mathcal{B}}$ rather than the raw estimation $\beta_{s,\mathcal{B}}$.

B. Parallelized path extension via bit flipping

1) Rate-1 nodes: For an R1 node, the state-of-the-art decoding method [9] requires $\min(L-1, B)$ path extensions. First, the input soft LLRs $\alpha_{s,\mathcal{B}}^{l}$ of each list path are sorted. Then, path extensions are performed only on the $\min(L-1, B)$ smallest-LLR positions to reduce complexity. Such simplification incurs no performance loss since the additional path extensions are proven to be redundant [9]. The search space becomes $L \times 2^{\min(L-1,B)}$, much smaller than the $L \times 2^{B}$ of conventional SCL [6] and SSCL [8]. Another work [17] also proposes to reduce the search space for R1 nodes, but its candidate path generation is LLR-dependent and is therefore more suitable for software implementation, as suggested in [17]. In this paper, we focus on hardware implementation and propose a parallel path extension based on pre-stored error patterns. As shown in Fig. 1, only a one-time path extension and pruning is required for a constituent block. The optimization exploits the deterministic partial ordering of incremental path metrics within a block. Accordingly, the search for surviving paths can be narrowed down to a limited set, pre-stored in the form of error patterns in a look-up table (LUT). The LUT is shown to be very small for a practical list size $L = 8$. As such, the advantages are:
• $B$ bits are decoded in parallel.
• Sub-paths are generated in parallel.
• The above two procedures are combined into one.

Notation 1 (soft/hard vectors): The soft LLR input of a constituent block is indexed in ascending reliability order, i.e., $\alpha_{s,\mathcal{B}}^{l}$ such that $|\alpha_{s,0}^{l}| < |\alpha_{s,1}^{l}| < \cdots < |\alpha_{s,B-1}^{l}|$ for each list path. The corresponding raw hard estimation is denoted by $\beta_{s,\mathcal{B}}^{l} \triangleq \big(\beta_{s,0}^{l}, \beta_{s,1}^{l}, \cdots, \beta_{s,B-1}^{l}\big)$.

Notation 2 (sub-path extension): For a constituent block with indices $\mathcal{B}$, a sub-path that extends from the $i$-th bit to the $(i+B-1)$-th bit is well defined by the blockwise decoding output. For example, the $t$-th sub-path of the $l$-th path is denoted by the vector $\hat{\beta}_{s,\mathcal{B}}^{l,t}$.

Notation 3 (bit flipping): Each vector $\hat{\beta}_{s,\mathcal{B}}^{l,t}$ is generated by flipping $\beta_{s,\mathcal{B}}^{l}$ according to an error pattern $e$. A single-bit-error pattern is denoted by $e_p$ if it has a one at the $p$-th bit position ($p = 0, 1, \cdots$) and zeros otherwise.

For $L = 8$, we narrow down the search space per list path from $2^{\min(L-1,B)}$ to 13 by the following proposition.

Proposition 1: For each path in an SCL decoder with $L = 8$, its $L$ maximum-likelihood sub-paths (i.e., those with minimum incremental path metrics) fall into a deterministic set of size 13.
These sub-paths can be obtained by bit flipping the original hard estimation of each list path according to the following error patterns:

$\hat{\beta}_{s,\mathcal{B}}^{l,t} = \begin{cases} \beta_{s,\mathcal{B}}^{l}, & t = 0; \\ \beta_{s,\mathcal{B}}^{l} \oplus e_{t-1}, & 1 \le t \le 7; \\ \beta_{s,\mathcal{B}}^{l} \oplus e_0 \oplus e_1, & t = 8; \\ \beta_{s,\mathcal{B}}^{l} \oplus e_0 \oplus e_2, & t = 9; \\ \beta_{s,\mathcal{B}}^{l} \oplus e_0 \oplus e_3, & t = 10; \\ \beta_{s,\mathcal{B}}^{l} \oplus e_1 \oplus e_2, & t = 11; \\ \beta_{s,\mathcal{B}}^{l} \oplus e_0 \oplus e_1 \oplus e_2, & t = 12. \end{cases} \quad (4)$

Proof: To survive among the sub-paths of all $L$ paths, a sub-path must first survive among the sub-paths of its own parent path. That means that, for each parent path, we only need to consider its $L$ maximum-likelihood sub-paths; altogether, there are at most $L^2$ sub-paths to be considered. According to (3), the path metric penalty is received only on the flipped positions. For each sub-path and its associated error pattern, the incremental path metric is computed by

$\Delta\mathrm{PM}_{i+B-1}^{l,t} \triangleq \mathrm{PM}_{i+B-1}^{l,t} - \mathrm{PM}_{i-1}^{l} = \begin{cases} 0, & t = 0; \\ |\alpha_{s,t-1}^{l}|, & 1 \le t \le 7; \\ |\alpha_{s,0}^{l}| + |\alpha_{s,1}^{l}|, & t = 8; \\ |\alpha_{s,0}^{l}| + |\alpha_{s,2}^{l}|, & t = 9; \\ |\alpha_{s,0}^{l}| + |\alpha_{s,3}^{l}|, & t = 10; \\ |\alpha_{s,1}^{l}| + |\alpha_{s,2}^{l}|, & t = 11; \\ |\alpha_{s,0}^{l}| + |\alpha_{s,1}^{l}| + |\alpha_{s,2}^{l}|, & t = 12. \end{cases} \quad (5)$

We prove Proposition 1 with a directed graph over error patterns, rooted at the all-zero pattern "0", in which an edge from one pattern to another indicates that the incremental path metric of the former is never larger than that of the latter. Any node whose minimum distance to the root node "0" is larger than $L = 8$ cannot survive path pruning. First, if the 8-th smallest incremental path metric is caused by a single bit error, then it cannot be $|\alpha_{s,7}^{l}|$ or larger; otherwise there would be more than 8 sub-paths with incremental path metrics smaller than the 8-th one, which contradicts the assumption. The argument is immediate since there are already 8 nodes upstream of $|\alpha_{s,7}^{l}|$ in the directed graph. Similarly, the 8-th smallest incremental path metric caused by two bit errors cannot be equal to or larger than $|\alpha_{s,1}^{l}| + |\alpha_{s,3}^{l}|$, because there are already more than 8 sub-paths with smaller path metrics in its upstream. Finally, the sub-path with incremental path metric $|\alpha_{s,0}^{l}| + |\alpha_{s,1}^{l}| + |\alpha_{s,2}^{l}|$ also has 8 nodes in its upstream (including itself), and any error pattern with a larger incremental path metric (including the 4-bit patterns) would lead to a contradiction if it were included in the surviving set. Thus, we can reduce the tested error patterns per path to 13 with only a one-time path extension and without any performance loss.

Remark 1: The bit-flipping-based path extension consists mainly of binary/LUT operations. The 13 error patterns are pre-stored. The resulting path metrics for all error patterns can be computed in parallel according to (3) or (5). The path extension and pruning are summarized by "(13 → 8 → 64 → 8) × 1", explained as follows. For each path, the 13 error patterns lead to 13 sub-paths, among which the 8 with the smallest path metrics are pre-selected (13 → 8). Altogether, there will be $8 \times L = 64$ extended paths (8 → 64) for the case of $L = 8$. The 64 extended paths are then pruned back to 8 (64 → 8). The above procedure is executed only once. In contrast, the fast-SSCL decoder [9] requires $L - 1 = 7$ rounds of path extension and pruning, i.e., (8 → 16 → 8) × 7. According to Section III-D, the minimum number of "cycles" reduces from 49 to 14 in the case of a length-16 R1 block. To avoid any misunderstanding, the "cycles" here capture implementation details of our fabricated ASIC [19], and should thus be distinguished from the "time steps" concept in [9].

Remark 2: The proposition addresses list size $L = 8$, but its idea naturally extends to all list sizes as long as the corresponding error patterns are identified. Among them, decoders with list size $L = 8$ are particularly important since they are widely accepted by the industry during the 5G standardization process [18].
1808.01756
2952978061
We consider practical hardware implementation of Polar decoders. To reduce latency due to the serial nature of successive cancellation (SC), existing optimizations improve parallelism with two approaches, i.e., multi-bit decision or reduced path splitting. In this paper, we combine the two procedures into one with an error-pattern-based architecture. It simultaneously generates a set of candidate paths for multiple bits with pre-stored patterns. For rate-1 (R1) or single parity-check (SPC) nodes, we prove that a small number of deterministic patterns are required to guarantee performance preservation. For general nodes, low-weight error patterns are indexed by syndrome in a look-up table and retrieved in O(1) time. The proposed flip-syndrome-list (FSL) decoder fully parallelizes all constituent code blocks without sacrificing performance, thus is suitable for ultra-low-latency applications. Meanwhile, two code construction optimizations are presented to further reduce complexity and improve performance, respectively.
Continuous efforts @cite_10 @cite_9 @cite_11 @cite_12 @cite_1 @cite_14 @cite_3 have been made to significantly reduce decoding latency. Among them, we are particularly interested in hardware implementations, which dominate real-world products due to their better power and area efficiency. According to our cross-validation, three approaches are cost-effective yet incur no or negligible performance loss compared to the original SCL decoder: (i) pruning the SC decoding tree @cite_10 , i.e., parallelizing constituent code blocks with multi-bit decision, applied to rate-0 (R0) and repetition (Rep) nodes @cite_9 @cite_11 as well as to general (Gen) nodes comprised of consecutive bits @cite_12 @cite_1 ; (ii) reducing the number of path splittings, for rate-1 (R1) and single parity-check (SPC) nodes @cite_11 , and by not splitting upon the most reliable (good) bits @cite_14 @cite_3 ; (iii) reducing the latency of list pruning, e.g., by adopting bitonic sort for efficient pruning @cite_6 or quick list pruning @cite_4 .
{ "abstract": [ "In this paper, we propose a decision-aided scheme for parallel SC-List decoding of polar codes. At the parallel SC-List decoder, each survival path is extended based on multiple information bits, therefore the number of split paths becomes very large and the sorting to find the top L paths becomes very complex. We propose a decision-aided scheme to reduce the number of split paths and thus reduce the sorting complexity.", "For polar codes with short-to-medium code length, list successive cancellation decoding is used to achieve a good error-correcting performance. However, list pruning in the current list decoding is based on the sorting strategy and its timing complexity is high. This results in a long decoding latency for large list size. In this work, aiming at a low-latency list decoding implementation, a double thresholding algorithm is proposed for a fast list pruning. As a result, with a negligible performance degradation, the list pruning delay is greatly reduced. Based on the double thresholding, a low-latency list decoding architecture is proposed and implemented using a UMC 90nm CMOS technology. Synthesis results show that, even for a large list size of 16, the proposed low-latency architecture achieves a decoding throughput of 220 Mbps at a frequency of 641 MHz.", "", "Polar codes provably achieve the symmetric capacity of a memoryless channel while having an explicit construction. The adoption of polar codes however, has been hampered by the low throughput of their decoding algorithm. This work aims to increase the throughput of polar decoding hardware by an order of magnitude relative to successive-cancellation decoders and is more than 8 times faster than the current fastest polar decoder. We present an algorithm, architecture, and FPGA implementation of a flexible, gigabit-per-second polar decoder.", "This paper focuses on low complexity successive cancellation list (SCL) decoding of polar codes. In particular, using the fact that splitting may be unnecessary when the reliability of decoding the unfrozen bit is sufficiently high, a novel splitting rule is proposed. Based on this rule, it is conjectured that, if the correct path survives at some stage, it tends to survive till termination without splitting with high probability. On the other hand, the incorrect paths are more likely to split at the following stages. Motivated by these observations, a simple counter that counts the successive number of stages without splitting is introduced for each decoding path to facilitate the identification of correct and incorrect paths. Specifically, any path with counter value larger than a predefined threshold @math is deemed to be the correct path, which will survive at the decoding stage, while other paths with counter value smaller than the threshold will be pruned, thereby reducing the decoding complexity. Furthermore, it is proved that there exists a unique unfrozen bit @math , after which the successive cancellation decoder achieves the same error performance as the maximum likelihood decoder if all the prior unfrozen bits are correctly decoded, which enables further complexity reduction. Simulation results demonstrate that the proposed low complexity SCL decoder attains performance similar to that of the conventional SCL decoder, while achieving substantial complexity reduction.", "Long polar codes can achieve the symmetric capacity of arbitrary binary-input discrete memoryless channels under a low-complexity successive cancelation (SC) decoding algorithm. 
However, for polar codes with short and moderate code lengths, the decoding performance of the SC algorithm is inferior. The cyclic-redundancy-check (CRC)-aided SC-list (SCL)-decoding algorithm has better error performance than the SC algorithm for short or moderate polar codes. In this paper, we propose an efficient list decoder architecture for the CRC-aided SCL algorithm, based on both algorithmic reformulations and architectural techniques. In particular, an area efficient message memory architecture is proposed to reduce the area of the proposed decoder architecture. An efficient path pruning unit suitable for large list size is also proposed. For a polar code of length 1024 and rate 1 2, when list size @math and 4, the proposed list decoder architecture is implemented under a Taiwan Semiconductor Manufacturing Company (TSMC) 90-nm CMOS technology. Compared with the list decoders in the literature, our decoder achieves 1.24–1.83 times the area efficiency.", "", "Polar codes, as the first provable capacity-achieving error-correcting codes, have received much attention in recent years. However, the decoding performance of polar codes with traditional successive-cancellation (SC) algorithm cannot match that of the low-density parity-check or Turbo codes. Because SC list (SCL) decoding algorithm can significantly improve the error-correcting performance of polar codes, design of SCL decoders is important for polar codes to be deployed in practical applications. However, because the prior latency reduction approaches for SC decoders are not applicable for SCL decoders, these list decoders suffer from the long-latency bottleneck. In this paper, we propose a multibit-decision approach that can significantly reduce latency of SCL decoders. First, we present a reformulated SCL algorithm that can perform intermediate decoding of 2 b together. The proposed approach, referred as 2-bit reformulated SCL ( 2b-rSCL ) algorithm , can reduce the latency of SCL decoder from ( @math ) to ( @math ) clock cycles without any performance loss. Then, we extend the idea of 2-b-decision to general case, and propose a general decoding scheme that can perform intermediate decoding of any @math bits simultaneously. This general approach, referred as @math -bit reformulated SCL ( @math b-rSCL ) algorithm , can reduce the overall decoding latency to as short as @math cycles. Furthermore, on the basis of the proposed algorithms, very large-scale integration architectures for 2b-rSCL and 4b-rSCL decoders are synthesized. Compared with a prior SCL decoder, the proposed (1024, 512) 2b-rSCL and 4b-rSCL decoders can achieve 21 and 60 reduction in latency, 1.66 and 2.77 times increase in coded throughput with list size 2, and 2.11 and 3.23 times increase in coded throughput with list size 4, respectively.", "Polar codes have gained significant amount of attention during the past few years and have been selected as a coding scheme for the next generation of mobile broadband standard. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable tradeoff between the error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate one and single parity check codes), while keeping the error-correction performance unaltered. 
In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of path splitting required to decode rate one and single parity check codes. Thus, the number of splitting can be limited while guaranteeing exactly the same error-correction performance as if the paths were forked at each bit estimation. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of path forks in a practical application can be tuned to achieve desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: It is shown that our design can achieve @math Gb s throughput, higher than the best state-of-the-art decoders." ], "cite_N": [ "@cite_14", "@cite_4", "@cite_9", "@cite_1", "@cite_3", "@cite_6", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2240900403", "1547113325", "", "2003204237", "2228116382", "2963678405", "", "2101207956", "2601496744" ] }
A Flip-Syndrome-List Polar Decoder Architecture for Ultra-Low-Latency Communications
B. Motivation and our contributions It is well known that an SC decoder requires 2N − 2 time steps for a length-N code [1]. The SC decoding factor graph reveals that, the main source of latency is the left hand side (LHS, or information bit side) of the graph. In contrast, the right hand side (RHS, or codeword side) of the graph consists of independent code blocks and already supports parallel decoding. With the above observations, the key to low-latency decoding is to parallelize LHS processing. Existing hardware decoder designs are pioneered by [7]- [11], which view SC decoding as binary tree search, i.e., a length-N code (a parent node) is recursively decomposed into two length-N/2 codes (child nodes). Upon reaching certain special nodes, their child nodes are not traversed [7] and the corresponding path metrics are directly updated at the parent node [8]. Even though, there is still room for further optimizations: • The processing of an R1/SPC node is not fully parallel (e.g., a number of sequential path extension & pruning are still required [9]). A higher degree of parallelism can be exploited to further reduce latency. • Optimizations (e.g., parallel processing) are applied to some special nodes (e.g., R0/Rep/SPC/R1), and the length of such blocks, denoted by B, is often short due to insufficient polarization. According to our measurement under typical code lengths, the main source of latency is now incurred by the general nodes whose constituent code rates are between 2 B and B−2 B . Motivated by [7]- [11], and thanks to the recent advances in efficient list pruning [14], [15], we find it profitable to further improve parallelism for ultra-low-latency applications. Our contributions are summarized below: 1) We propose to fully parallelize the processing of R1/SPC nodes via multi-bit hard estimation and flipping at intermediate stages. Only one-time path extension/pruning per node is required by applying a small number of flipping patterns on the raw hard estimation. Such simplification is proven to preserve performance. 2) For general nodes, we apply flip-syndrome-list (FSL) decoding to constituent code blocks. Specifically, a small set of low-weight error patterns are pre-stored in a table indexed by syndrome. During decoding, syndrome is calculated per constituent code block. Its associated error patterns are retrieved from the syndrome table, and used for bit-flip-based sub-path generation. Similar to R1/SPC nodes, the FSL decoder narrows down the candidates for path extension, and enjoys the simplicity of a hard-input decoder. The proposed optimization is shown to incur negligible performance loss. 3) The complexity of an FSL decoder is mainly incurred by constituent code blocks with medium rates. We propose to re-adjust the distribution of information bits in order to avoid certain constituent code rates, such that decoder complexity can be significantly reduced. We show that the performance loss can be negligible. 4) With the FSL decoder's capability to decode arbitrary linear outer constituent codes, not necessarily Polar codes, we propose to adopt hybrid outer codes with optimized distance spectrum. The hybrid-Polar codes demonstrate better performance than the original Polar codes. 
The paper is organized as follows: Section II introduces the fundamentals of Polar SCL decoding; Section III provides the details of the FSL decoder, including R1/SPC nodes, general nodes, latency analysis and BLER performance; Section IV proposes two improved code construction methods that benefit from the FSL decoder architecture; Section V concludes the paper. II. POLAR CODES AND SCL DECODING A binary Polar code of mother code length $N = 2^n$ can be defined by $c = uG$ and a set of information sub-channel indices $\mathcal{I}$. The information bits are assigned to sub-channels with indices in $\mathcal{I}$, i.e., $u_{\mathcal{I}}$, and the frozen bits (zero-valued by default) are assigned to the remaining sub-channels. The Polar kernel matrix is $G = F^{\otimes n}$, where $F = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$ is the kernel, $\otimes$ denotes the Kronecker power, and $c$ is the code word. The transmitted BPSK symbols are $x_0^{N-1} = 1 - 2 \cdot c_0^{N-1}$ and the received vector is $y_0^{N-1}$. For completeness, the original SCL decoder [6] is briefly revisited. The SC decoding factor graph of a length-$N$ Polar code consists of $N \times (\log_2 N + 1)$ nodes. The row indices $i = \{0, 1, \cdots, N-1\}$ denote the $N$ bit indices. The column indices $s = 0, 1, \cdots, \log_2 N$ denote decoding stages, with $s = 0$ labeling the information bit side and $s = \log_2 N$ labeling the input LLR side (or codeword side). Each node in the factor graph can be indexed by an $(s, i)$ pair, and is associated with a soft LLR value $\alpha_{s,i}$, initialized by $\alpha_{\log_2 N, i} = y_i$, and a hard estimate $\beta_{s,i}$. For all $s$ and $i$ satisfying $i \bmod 2^{s+1} < 2^s$, a hardware-friendly right-to-left updating rule for $\alpha$ is: $\alpha_{s,i} = \mathrm{sgn}(\alpha_{s+1,i})\,\mathrm{sgn}(\alpha_{s+1,i+2^s}) \min(|\alpha_{s+1,i}|, |\alpha_{s+1,i+2^s}|)$ and $\alpha_{s,i+2^s} = (1 - 2\beta_{s,i})\alpha_{s+1,i} + \alpha_{s+1,i+2^s}$. The hard estimate of the $i$-th bit is $\beta_{0,i} = \frac{1 - \mathrm{sgn}(\alpha_{0,i})}{2}$. The corresponding left-to-right updating rule for $\beta$ is: $\beta_{s,i} = \beta_{s-1,i} \oplus \beta_{s-1,i+2^{s-1}}$ and $\beta_{s,i+2^{s-1}} = \beta_{s-1,i+2^{s-1}}$. An SCL decoder with list size $L$ splits the path upon each information bit and preserves the $L$ paths with the smallest path metrics (PM). Given the $l$-th path with $\hat{u}_i^l$ as the $i$-th hard output bit, a hardware-friendly PM updating rule [16] is $\mathrm{PM}_i^l = \mathrm{PM}_{i-1}^l$ if $\hat{u}_i^l = \beta_{0,i}^l$, and $\mathrm{PM}_i^l = \mathrm{PM}_{i-1}^l + |\alpha_{0,i}^l|$ otherwise, where $\mathrm{PM}_i^l$ denotes the path metric of the $l$-th path at bit index $i$, and $\alpha_{0,i}^l$ and $\beta_{0,i}^l$ denote its corresponding soft LLR and hard estimate, respectively. After decoding the last bit, the first path is selected as the decoding output. III. FLIP-SYNDROME-LIST (FSL) DECODING SC-based decoding of length-$N$ Polar codes requires $\log_2 N + 1$ stages to propagate the received signal ($s = \log_2 N$) to the information bits ($s = 0$). The degree of parallelism is $2^s$, i.e., it reduces by half after each decoding stage. To increase parallelism, we propose to terminate the LLR propagation at intermediate stage $s = \log_2 B$, and process all length-$B$ constituent code blocks with a hard-input decoder. The design is detailed throughout this section, where the differences to existing works mainly include (i) fully parallelized processing for $B$ bits and $L$ paths, and (ii) supporting arbitrary-rate blocks rather than special ones (e.g., R0/Rep/SPC/R1). A. Multi-bit hard decision at intermediate stage The indices of a constituent code block are denoted by $\mathcal{B} \triangleq \{i, i+1, \cdots, i+B-1\}$.
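Before proceeding, the SC/SCL update rules above can be summarized in a short software sketch; the floating-point LLR representation and the function names are our own choices, not part of the hardware design.

```python
import numpy as np

def f_update(a, b):
    """Right-to-left 'f' update: alpha_{s,i} from the two child LLRs (min-sum form)."""
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g_update(a, b, beta):
    """Right-to-left 'g' update: alpha_{s,i+2^s}, given the partial-sum bit beta."""
    return (1 - 2 * beta) * a + b

def pm_update(pm_prev, u_hat, alpha0):
    """Path-metric update: penalize only when the decision u_hat disagrees with the
    sign-based hard estimate of alpha_{0,i} (ties at exactly zero LLR ignored)."""
    beta = (1 - np.sign(alpha0)) / 2
    return pm_prev if u_hat == beta else pm_prev + abs(alpha0)
```

In hardware these operations are fixed-point and fully unrolled; the sketch only mirrors the arithmetic.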
Once the soft LLRs at the $s$-th stage are obtained, where $s = \log_2 B$, a raw hard estimation is immediately obtained by $\boldsymbol{\beta}_{s,\mathcal{B}} = \frac{1 - \mathrm{sgn}(\boldsymbol{\alpha}_{s,\mathcal{B}})}{2}$. (1) In contrast to SCL, which uses the soft LLRs $\boldsymbol{\alpha}_{s,\mathcal{B}}$, a constituent block decoder takes $\boldsymbol{\beta}_{s,\mathcal{B}}$ as its hard input and directly generates a hard code word $\hat{\boldsymbol{\beta}}_{s,\mathcal{B}}$ as decoded output. The hard-input decoders for R1, SPC and general nodes will be described next in Sections III-B and III-C. For now, we assume such a decoder outputs a hard code word $\hat{\boldsymbol{\beta}}_{s,\mathcal{B}}$ for each candidate path, and recover the corresponding information vector by $\hat{u}_{\mathcal{B}} = \hat{\boldsymbol{\beta}}_{s,\mathcal{B}} F^{\otimes s}$. (2) Given the soft LLRs $\boldsymbol{\alpha}_{s,\mathcal{B}}$ and the recovered codeword $\hat{\boldsymbol{\beta}}_{s,\mathcal{B}}$, the multi-bit version of the PM updating rule [8] is $\mathrm{PM}_i^l = \mathrm{PM}_{i-B}^l + \sum_{j \in \mathcal{B}} |\hat{\beta}_{s,j}^l - \beta_{s,j}^l| \cdot |\alpha_{s,j}^l|$. (3) The remaining updating of $\alpha$ and $\beta$ is based on the hard decision $\hat{\boldsymbol{\beta}}_{s,\mathcal{B}}$ rather than the raw estimation $\boldsymbol{\beta}_{s,\mathcal{B}}$. B. Parallelized path extension via bit flipping 1) Rate-1 nodes: For an R1 node, the state-of-the-art decoding method [9] requires min(L − 1, B) path extensions. First, the input soft LLRs $\boldsymbol{\alpha}_{s,\mathcal{B}}^l$ of each list path are sorted. Then, path extensions are performed only on the min(L − 1, B) least reliable LLR positions to reduce complexity. Such simplification incurs no performance loss since the additional path extensions are proven to be redundant [9]. The search space becomes $L \times 2^{\min(L-1,B)}$, much smaller than $L \times 2^B$ for conventional SCL [6] and SSCL [8]. Another work [17] also proposes to reduce the search space for R1 nodes, but its candidate path generation is LLR-dependent and thus more suitable for software implementation, as suggested in [17]. In this paper, we focus on hardware implementation and propose a parallel path extension based on pre-stored error patterns. As shown in Fig. 1, only one-time path extension and pruning is required per constituent block. The optimization exploits the deterministic partial ordering of incremental path metrics within a block. Accordingly, the search for surviving paths can be narrowed down to a limited set, pre-stored in the form of error patterns in a look-up table (LUT). The LUT is shown to be very small for a practical list size L = 8. As such, the advantages are: • B bits are decoded in parallel. • Sub-paths are generated in parallel. • The above two procedures are combined into one. Notation 1 (soft/hard vectors): The soft LLR input of a constituent block is indexed in ascending reliability order, i.e., $\boldsymbol{\alpha}_{s,\mathcal{B}}^l$ such that $|\alpha_{s,0}^l| < |\alpha_{s,1}^l| < \cdots < |\alpha_{s,B-1}^l|$ for each list path. The corresponding raw hard estimation is denoted by $\boldsymbol{\beta}_{s,\mathcal{B}}^l \triangleq (\beta_{s,0}^l, \beta_{s,1}^l, \cdots, \beta_{s,B-1}^l)$. Notation 2 (sub-path extension): For a constituent block with indices $\mathcal{B}$, a sub-path that extends from the $i$-th bit to the $(i + B - 1)$-th bit is well defined by the blockwise decoding output. For example, the $t$-th sub-path of the $l$-th path is denoted by the vector $\hat{\boldsymbol{\beta}}_{s,\mathcal{B}}^{l,t}$. Notation 3 (bit-flipping): Each vector $\hat{\boldsymbol{\beta}}_{s,\mathcal{B}}^{l,t}$ is generated by flipping $\boldsymbol{\beta}_{s,\mathcal{B}}^l$ based on an error pattern $\boldsymbol{e}$. A single-bit-error pattern is denoted by $\boldsymbol{e}_p$ if it has a one at the $p$-th bit position ($p = 0, 1, \cdots$) and zeros otherwise. For L = 8, we narrow down the search space per list path from $2^{\min(L-1,B)}$ to 13 by the following proposition. Proposition 1: For each path in an SCL with L = 8, its L maximum-likelihood sub-paths (i.e., those with minimum incremental path metrics) fall into a deterministic set of size 13.
These sub-paths can be obtained by bit flipping the original hard estimation of each list path based on the following error patterns: β β β l,t s,B =                      β β β l s,B , t = 0; β β β l s,B ⊕ e e e t−1 , 1 ≤ t ≤ 7, β β β l s,B ⊕ e (4) Proof: To survive from the sub-paths of all L paths, a sub-path must first survive from the sub-paths of its own parent path. That means for each parent path, we only need to consider its L maximum-likelihood sub-paths. Altogether, there are at most L 2 sub-path to be considered. According to (3), the path metric penalty is received only on the flipped positions. For each sub-path and its associated error patterns, the incremental path metric is computed by ∆PM l,t i+B−1 PM l,t i+B−1 − PM l i (5) =                    0, t = 0; |α l s,t−1 |, We prove Proposition 1 with the above directed graph. Any node with a minimum distance to the root node "0" larger than L = 8 cannot survive path pruning. First, if the 8-th smallest incremental path metric is caused by a single bit error, then it cannot be |α l s,7 | or larger, otherwise there will be more than 8 sub-paths with incremental path metrics smaller than the 8-th one, which contradicts the assumption. The argument is obvious since there are already 8 nodes upstream of |α l s,7 | in the directed graph. Similarly, the 8-th smallest incremental path metric caused by two bit errors cannot be equal to or larger than |α l s,1 | + |α l s,3 |, because there are already more than 8 sub-paths with smaller path metrics in its upstream. Finally, the sub-paths with incremental path metric |α l s,0 |+ |α l s,1 | + |α l s,2 | also has 8 nodes in its upstream (including itself), and any error patterns with larger incremental path metric (including the 4-bit patterns) will lead to contradiction if they are included in the surviving set. Thus, we can reduce the tested error patterns per path to 13 with only one-time path extension without any performance loss. Remark 1: The bit-flipping-based path extension is mainly constituted of binary/LUT operations. The 13 error patterns are pre-stored. The resulting path metrics for all error patterns can be computed in parallel according to (3) or (5). The path extension and pruning are as summarized by "(13 → 8 → 64 → 8) × 1", explained as follows. For each path, the 13 error patterns lead to 13 sub-paths, among which the 8 with smallest path metrics are pre-selected (13 → 8). Altogether, there will be 8 × L = 64 extended paths (8 → 64) for the case of L = 8. The 64 extended paths are pruned back to 8 (64 → 8). The above procedures are executed only one time. In contrast, the fast-SSCL decoder [9] requires L − 1 = 7 times path extension and pruning, i.e., (8 → 16 → 8) × 7. According to Section III-D, the minimum number of "cycles" reduces from 49 to 14 in the case of a length-16 R1 block. To avoid any misunderstanding, the "cycles" here captures implementation details in our fabricated ASIC [19], thus should be distinguished from the "time steps" concept in [9]. Remark 2: The proposition addresses list size L = 8, but its idea naturally extends to all list sizes as long as the corresponding error patterns are identified. Among them, decoders with list size L = 8 are particularly important since they are widely accepted by the industry during the 5G standardization process [18]. 
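To make this one-shot extension concrete at the software level, the following sketch applies a supplied list of flip patterns, given as index sets into the reliability-sorted positions (e.g., the 13 patterns of Proposition 1, which we do not re-derive here), and prunes all candidates in a single sort; the names and the single global sort are our simplifications of the hardware design.

```python
import numpy as np

def extend_r1_block(alpha, pm, flip_patterns, L=8):
    """One-shot path extension for a rate-1 block.
    alpha: (L, B) soft LLRs, one row per surviving parent path.
    pm:    (L,)   current path metrics.
    flip_patterns: index tuples into the reliability-sorted positions, e.g. (), (0,), (1,), ...
    Returns the hard decisions and metrics of the L surviving paths."""
    cand = []
    for l in range(L):
        order = np.argsort(np.abs(alpha[l]))       # positions sorted by ascending reliability
        beta = (alpha[l] < 0).astype(int)          # raw hard estimation, eq. (1)
        for pat in flip_patterns:
            flipped = beta.copy()
            pos = order[list(pat)]                 # map sorted indices to actual bit positions
            flipped[pos] ^= 1
            penalty = np.abs(alpha[l])[pos].sum()  # eq. (3): pay |alpha| only on flipped bits
            cand.append((pm[l] + penalty, l, flipped))
    cand.sort(key=lambda c: c[0])                  # prune L x 13 candidates down to L at once
    best = cand[:L]
    return [c[2] for c in best], [c[0] for c in best]
```

Pre-selecting the best 8 sub-paths per parent before a global prune, as the hardware does, yields the same surviving set as the single sort above, since a globally surviving sub-path must also survive within its own parent.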
This conclusion was drawn after extensive evaluations of the tradeoff among BLER, latency, throughput and power consumption, in which decoders with L = 8 achieve the best overall efficiency. The tradeoff in real hardware is further verified in our implemented decoder ASIC in [19]. 2) SPC nodes: For an SPC node, the state-of-the-art decoding method [9] requires min(L, B) path extensions. In this work, we propose only one-time path extension and reduce the search space from $2^{\min(L,B)}$ to 13 as follows. (Table I lists, in partial order, the candidate incremental path metrics for the even-checksum case, i.e., sums of an even number of the smallest $|\alpha_{s,j}^l|$, and for the odd-checksum case, i.e., sums of an odd number of the smallest $|\alpha_{s,j}^l|$; entries to the right and below are always larger than those to the left and above.) Proposition 2: For SCL with L = 8, following Notation 1, if the checksum of $\boldsymbol{\beta}_{s,\mathcal{B}}^l$ is even, i.e., $\bigoplus_{j \in \mathcal{B}} \beta_{s,j}^l = 0$, then the L surviving paths can be obtained by bit flipping each list path with 13 error patterns of even weight, $\hat{\boldsymbol{\beta}}_{s,\mathcal{B}}^{l,t} = \boldsymbol{\beta}_{s,\mathcal{B}}^l$ for $t = 0$ and $\hat{\boldsymbol{\beta}}_{s,\mathcal{B}}^{l,t} = \boldsymbol{\beta}_{s,\mathcal{B}}^l \oplus \boldsymbol{e}_t$ otherwise, whose incremental path metrics are those listed in the even-checksum part of Table I. Otherwise, if the checksum of $\boldsymbol{\beta}_{s,\mathcal{B}}^l$ is odd, i.e., $\bigoplus_{j \in \mathcal{B}} \beta_{s,j}^l = 1$, then the L surviving paths can be obtained by bit flipping each list path with 13 error patterns of odd weight, $\hat{\boldsymbol{\beta}}_{s,\mathcal{B}}^{l,t} = \boldsymbol{\beta}_{s,\mathcal{B}}^l \oplus \boldsymbol{e}_t$, whose incremental path metrics are those listed in the odd-checksum part of Table I. Proof: The proof follows that of Proposition 1. For simplicity, we change the directed graph to Table I, where the right and lower cells are always larger than the left and upper ones. As seen, any error patterns other than those given in Proposition 2 will lead to more than 8 surviving paths with path metrics smaller than the 8-th path, which contradicts the assumption. Remark 3: According to Section III-D, the latency (cycles) reduction from [9] is 56 → 15 under L = 8. C. Error pattern identification via syndrome decoding Existing optimizations operate on special rates, e.g., R0/R1/SPC/Rep nodes. In this work, we suggest a parallelization method for arbitrary nodes with larger sizes (e.g., B = 8, 16, · · · ). (Table II, the syndrome table for a general node with B = 8, $K_B = 6$ and $L_{sd} = 4$, maps each 2-bit syndrome to its four lowest-weight error patterns, written in hexadecimal: 00 → {00, 05, 11, 41}; 01 → {01, 04, 10, 40}; 10 → {03, 09, 21, 81}; 11 → {02, 08, 20, 80}.) For general nodes, it is not easy to identify all possible error patterns as in R1/SPC nodes. However, it is possible to quickly narrow down to a subset of highly likely error patterns for parallelized path extension. Syndrome decoding is particularly suitable here for two reasons: (i) blockwise syndrome calculation is simple and reuses the Kronecker product module, and (ii) multiple error patterns (a coset) can be pre-stored and retrieved in parallel. 1) General nodes: As shown in Fig. 2, we first obtain a set of input vectors via multi-bit hard decision and bit flipping. Given the flipping pattern, the syndrome-decoding-based parallel path extension is illustrated in Fig. 2. The key steps, i.e., syndrome calculation and error pattern retrieval, are hardware-friendly binary operations and LUT accesses. Denoting by $G_B \triangleq F^{\otimes \log_2 B}$ the kernel of a general node and by $\mathcal{F}_B$ its frozen set, the parity-check matrix $H_B$ is obtained by extracting from $G_B$ the columns with indices in $\mathcal{F}_B$.
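A small sketch of this parity-check construction and of the blockwise syndrome it feeds into is given below; the frozen set is an arbitrary illustrative example, not a code-design recommendation.

```python
import numpy as np

def kron_power(F, n):
    """n-fold Kronecker power of the 2x2 polar kernel."""
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F) % 2
    return G

F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
B = 8
G_B = kron_power(F, 3)            # kernel of the node, F^{x log2 B}
frozen = [0, 1]                   # illustrative frozen indices within the block (K_B = 6)
H_B = G_B[:, frozen].T            # columns of G_B indexed by the frozen set,
                                  # transposed so that H_B @ beta yields B - K_B check bits

def syndrome(H, beta):
    """Blockwise syndrome d = H * beta (mod 2), used to index the error-pattern LUT."""
    return tuple((H @ beta) % 2)
```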
Thus, the syndrome of vector β β β l,t s,B contains B − K B bits and is calculated by d d d l,t s,B = H B × β β β l,t s,B . For each syndrome, its associated error patterns are computed offline [20] and pre-stored by ascending weight order in LUT. Since a low-weight error pattern is more likely than a high-weight one, we only need to store a small number of lowest-weight patterns to reduce memory. There are 2 B−KB different syndromes for a (B, K B ) constituent code block, where K B is the number of information bits within the block. As a result, the size of a syndrome table is (2 B−KB ) × L sd , where L sd is a constant number of error patterns pre-stored for each syndrome. For example, the syndrome table for a general node with B = 8, K B = 6 and L sd = 4 has size 4 × 4 and is given in Table II. The error patterns retrieved from LUT are used to simultaneously generate a set of candidate sub-paths, denoted by where t and l sd are the flipping pattern index and syndromewise error pattern index, respectively. For each list path, we have 2 T × L sd extended sub-paths. The path metrics are updated according to (3) except that, the T smallest LLRs are modified to a large value, i.e., α l,t s,j → (−1)β l,t s,j × ∞, ∀j ∈ T , whereβ l,t s,j is the j-th hard bit after flipping. This procedure ensures at most one flip for each bit position and therefore no duplicate paths will survive, which is crucial to the overall performance. Similar to R1/SPC nodes, the path extension and pruning is performed only one time for each block to keep L surviving paths, i.e., (L → L × 2 T × L sd → L). Remark 4: For small K B , an exhaustive-search-based path extension is more convenient since it generates 2 KB paths [11]. For K B > T + log 2 L sd , it is more efficient to extend paths by the proposed flip-syndrome method. Therefore, we recommend to switch between exhaustive-search-based and syndrome-based path extension depending on the constituent code rate. As such, the maximum path extension is min 2 KB , 2 T × L sd . Remark 5: For a practical list size L = 8, we can set B = 8, T ≤ 2, L sd ≤ 4 for 8-bit parallel decoding, or B = 16, T ≤ 3, L sd ≤ 8 for 16-bit parallel decoding to achieve a good tradeoff between complexity and latency, yet with negligible performance loss. D. Latency analysis The minimum number of cycles is analyzed with the assumption that independent operations can be executed in parallel. In reality, the latency will be different depending on the number of processing elements available per implementation. However, the minimum cycle analysis represents the number of logical steps and provides a hardware-independent latency evaluation. For an R1 node, the 13 error patterns in (4) are retrieved from a pre-stored table, among which 8 are pre-selected according to path metric. The 13 → 8 path sorting and pruning logic is shown in Fig. 3. For simplicity, |α l s,t | is abbreviated by α t . All relevant LLR pairs are compared in cycle 1. Among them, the first 3 pre-selected paths are β β β l s,B , β β β l s,B ⊕ e e e 0 and β β β l s,B ⊕ e e e 1 . The remaining paths are sequentially selected according to the comparison results and their preceding selection choices. Finally, the 8 candidate paths are pre-selected and sorted by ascending order. The process only requires 5 cycles. Combining all sub-paths in an L = 8 list decoder, there will be 8 × 8 = 64 paths for another round of pruning. 
Since the 8 sub-paths for each list are already ordered, the pruning requires an additional 9 cycles to identify the 8 survival paths [14]. The number of cycles are 14 and 15 for an R1 and SPC node, respectively. For comparison, fast-SSCL [9] requires 7 and 8 rounds of path extension and pruning for a Rate-1 and an SPC node, respectively. Each round takes a minimum of 7 cycles with bitonic sort [14]. Overall, a minimum of 7 × 7 = 49 and 7 × 8 = 56 cycles are required. For general nodes, the proposed FSL decoder also has lower latency since more bits are processed in parallel. The Step 1: all comparisons Step 2: select Step 3: select Step 4: select Step 5: select overall latency is influenced by two factors (i) the number of leaf nodes in an SC decoding tree, (ii) the degree of parallelism within a leaf node. For a rough estimation, the number of leaf nodes of a N = 1024, K = 512 Polar code is summarized in Table III. The code is constructed by Polarization Weight (PW) [4]. For all schemes, the frozen bits before the first information bit are skipped. For R0/Rep/SPC/R1 nodes, the maximum length of a parallel processing block is B max = 32. For general nodes, the parallel processing length is 8-bit or 16-bit, denoted by 8b and 16b FSL, respectively. As seen, 16b FSL only requires to visit a half of nodes to traverse the SC decoding tree. To determine real latency, we synthesized the proposed decoders in TSMC 16nm CMOS with a frequency of 1GHz. The maximum supported code length is N max = 16384, with LLRs and path metrics quantized to 6 bits. The number of processing elements is 128. The decoding latency of 4b multibit [11], Fast-SSCL [9], 8b FSL and 16b FSL decoders is measured at a code rate of 1/3. For N = 1024, the latency is 1258ns, 1079ns, 870ns and 697ns, respectively. For N = 4096, the latency is 5134ns, 4239ns, 3640ns and 3003ns, respectively. The latency reduction from [9], [11] is 35% ∼ 45% and 29% ∼ 42%, respectively. As seen, even compared with the most advanced SCL decoders [9], [11] in literature, the proposed 8b and 16b FSL decoders can further reduce latency. A detailed latency comparison is given in Table IV. E. BLER performance The BLER performance of an FSL decoder is simulated and compared with its SCL decoder counterpart. For FSL, we adopt 16-bit parallel processing with B = 16, T ≤ 3, L sd ≤ 8. We simulated a wide range of code rates and lengths, and observe negligible performance loss. In the interest of space, only code rates {1/2, 1/3} and lengths {1024, 4096, 16384} are plotted in Fig. 4. Throughout the paper, 16 CRC bits are appended to, but not included in, the K payload bits. The code rate is calculated by K/N . IV. IMPROVED CODE CONSTRUCTION Based on the proposed FSL decoder, we propose two code construction methods to further (i) reduce complexity and (ii) improve performance. The first method re-adjusts the information bit positions to avoid certain high-complexity constituent code blocks. The second one replaces outer constituent codes with optimized block codes to improve BLER performance. A. Adjusted Polar codes The complexity of an FSL decoder mainly arises from the size of syndrome tables. According to Section III-C, the size of a syndrome table is (2 B−KB ) × L sd for a constituent code block with K B information bits and L sd error patterns per syndrome. According to Remark 4, a rate-dependent path extension is adopted, where the maximum path extension is min 2 KB , 2 T × L sd . 
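To make these table sizes concrete, here is a rough sketch of how such a syndrome table could be built offline; the brute-force enumeration and the names are illustrative, and a real implementation would exploit the code structure.

```python
import numpy as np
from itertools import combinations

def build_syndrome_table(H, L_sd):
    """Pre-compute, for every syndrome, the L_sd lowest-weight error patterns.
    H: ((B - K_B) x B) binary parity-check matrix of the constituent block.
    Returns a dict: syndrome tuple -> list of length-B binary error patterns."""
    n_checks, B = H.shape
    table = {}
    # enumerate patterns by ascending Hamming weight, so the stored patterns are the likeliest
    for w in range(B + 1):
        for pos in combinations(range(B), w):
            e = np.zeros(B, dtype=np.uint8)
            e[list(pos)] = 1
            s = tuple((H @ e) % 2)
            table.setdefault(s, [])
            if len(table[s]) < L_sd:
                table[s].append(e)
        if len(table) == 2 ** n_checks and all(len(v) >= L_sd for v in table.values()):
            break
    return table   # 2^(B - K_B) syndromes, L_sd patterns each
```

For the example above (B = 8, K_B = 6, L_sd = 4), the table has 2^2 = 4 rows of 4 patterns each, matching Table II.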
In other words, high-complexity operations are incurred by medium-rate blocks, while high-rate or low-rate blocks can be processed with low complexity. Thanks to the polarization effect, most blocks will diverge to high or low rates as the code length increases, which is helpful. In the following, we show that, even for finite-length codes with insufficient polarization, we can deliberately eliminate some of the medium-rate blocks by re-adjusting their information bit positions. For example, a 16-bit parallel FSL decoder with B = 16, T = 3 and $L_{sd}$ = 8 is used to decode a N = 2048, K = 1024, CRC16 Polar code. The original block rate distribution is shown on the left side of Fig. 5. As seen, many code blocks have already polarized to either high rate or low rate. Among the medium-rate blocks, those with $K_B$ = 6 are responsible for the majority of the decoding complexity (syndrome table size 1024). However, there are only 3 such blocks. On the right side of Fig. 5, we eliminate blocks with $K_B$ = 6, 7 and 8 by re-allocating their information bits to blocks with lower and higher rates. Although the adjusted Polar codes deviate from the actual polarization, which implies performance loss, they demand much lower decoding complexity. In particular, the largest syndrome table size reduces from 1024 to 128 with the information re-adjustment in Fig. 5. The re-adjustment procedure, whose output is the adjusted information set $\mathcal{I}_{adj}$, is as follows. 1) Re-adjust to eliminate medium-rate blocks: for each block with $K_B^{low} < K_B < K_B^{high}$, if $K_B - K_B^{low} < K_B^{high} - K_B$ then reduce $K_B$ to $K'_B = K_B^{low}$, else increase $K_B$ to $K'_B = K_B^{high}$. 2) Balance the overall rate when necessary: for each block with $K'_B = K_B^{low}$ (or $K'_B = K_B^{high}$), while the total number of information bits $\sum K'_B > K$ (or $< K$), reduce $K'_B$ to $K_B^{low} - 1$ (or increase $K'_B$ to $K_B^{high} + 1$). 3) Select information bits: for each constituent code block, select the $K'_B$ most reliable bit positions into $\mathcal{I}_{adj}$. The BLER loss due to information bit re-adjustment is only 0.02 dB at BLER 1%. The same experiment is conducted for N = 8192, K = 4096, where the performance loss becomes negligible as shown in Fig. 7. This can be well explained: medium-rate blocks become fewer as polarization increases with code length, thus requiring less re-adjustment and incurring less performance loss. The proposed construction allows us to trade some performance for a significant complexity reduction, and thus bears practical importance. B. Optimized outer codes Observe that the proposed hard-input decoder for outer block codes is no longer an SC decoder, but similar to an ML decoder. However, the default Polar outer codes have poor minimum distance and may not be suitable for the proposed decoder. To obtain better performance, a straightforward idea is to adopt outer codes with an optimized distance spectrum. Note that the error-pattern-based decoders do not need to change at all. As long as the generator/parity-check matrices are defined, the outer decoders only need to update the error patterns according to that specific code. That means any linear block code fits well into the FSL decoding framework, offering full freedom to optimize the outer codes. For B = 16, we present a specific outer code design for each $K_B$. For example, $K_B$ = 2 simplex codes repeated to length 16 have a minimum distance of 10, which is larger than the 8 of (16, 2) Polar codes. Following this idea, we individually optimize each $(B, K_B)$ outer code with respect to code distance.
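At these short block lengths, a candidate outer code can be screened by brute-force weight enumeration. The sketch below does this for an arbitrary generator matrix; the specific (16, 2) matrices are our reading of the repeated-simplex construction described next and should be treated as illustrative.

```python
import numpy as np
from itertools import product

def weight_distribution(G):
    """Enumerate all 2^K codewords of a binary K x n generator matrix G
    and return {codeword weight: count}; feasible for K_B <= 16."""
    K, n = G.shape
    dist = {}
    for u in product([0, 1], repeat=K):
        w = int((np.array(u) @ G % 2).sum())
        dist[w] = dist.get(w, 0) + 1
    return dist

# Illustrative check of the (16, 2) repeated-simplex example:
S2 = np.array([[1, 1, 0], [1, 0, 1]])                   # assumed (3, 2) simplex generator
G2 = np.hstack([S2] * 5 + [np.array([[1], [1]])])       # five repetitions plus one extra column
print(weight_distribution(G2))   # e.g. {0: 1, 11: 2, 10: 1} -> minimum distance 10
```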
For K B = 2, 3, 4, repetition over simplex codes always yields higher code distance than the corresponding Polar codes. Their respective generator matrices G KB are For K B = 6, 7, extended BCH (eBCH) codes also yields better distance spectrum than the corresponding Polar codes. Their respective generator matrices G KB are 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 G 2 = S 2 S 2 S 2 S 2 S 2 1 1 , S 2 = 1 1 0 1 0 1 ; G 0 G 0 G 16 G 7 ... ...G 6 =        1           . For K B = 8, 9, the dual of eBCH codes are adopted; for K B = 12, 13, 14, the dual of simplex codes are adopted. For the remaining rates, the original Polar codes are adopted. Depending on K B , the outer codes are combination of different codes, or hybrid outer codes. The resulting concatenated codes are thus called hybrid-Polar codes. Note that the lengths of the outer codes are not necessarily power of 2, making the concatenated codes length compatible. The encoding steps are shown in Fig. 8, and explained as follows: 1) First, an original (N, K) Polar code is constructed, in order to determine the rate of each (B, K B ) outer code. 2) Second, each block is individually encoded, i.e., multiplying a length-K B information vector by the corresponding generator matrix. 3) Thirdly, the outer code words are concatenated into a long intermediate vector, upon which inner polarization is performed to obtain a single code word. The proposed outer codes have better distance spectrum than Polar codes. The code weights {w} are enumerated in Table V. The numbers of code words having a specific weight are displayed, and those of minimum weight are highlighted in boldface. As seen, the distance spectrum of the hybrid codes improves upon Polar codes with the same K B in two ways: • The minimum distance increases, e.g., K B = 2, 6, 7; • The minimum distance remains the same, but the number of minimum-weight codewords reduces, e.g., K B = 3, 4, 9, 10, 12, 13, 14. Fig. 9 and Fig. 10 show the performance of N = 256 and N = 1024 Polar codes, respectively, along with hybrid-Polar codes of the same length and rate. As seen, a performance gain between 0.1 ∼ 0.2 dB is demonstrated. Since such BLER improvement comes with no additional cost within the FSL decoder architecture, the Hybrid-Polar codes is considered worthwhile in practical implementations. V. CONCLUSIONS In this work, we propose the hardware architecture of a flip-syndrome-list decoder to reduce decoding latency with improved parallelism. A limited number of error patterns are pre-stored, and simultaneously retrieved for bit-flippingbased path extension. For R1 and SPC nodes, only 13 error patterns are pre-stored with no performance loss under list size L = 8; for general nodes, we may further reduce latency with a syndrome table to quickly identify a set of highly likely error patterns. Based on the decoder, two code construction optimizations are proposed to either further reduce complexity or improve performance. The proposed decoder architecture and code construction are designed particularly for applications with low-latency requirements.
5,785
1808.01752
2949534134
The electroencephalography (EEG) classifier is the most important component of brain-computer-interface-based systems. Two major problems hinder its improvement. First, traditional methods do not fully exploit multimodal information. Second, large-scale annotated EEG datasets are almost impossible to acquire, because biological data acquisition is challenging and quality annotation is costly. Herein, we propose a novel deep transfer learning approach to solve these two problems. First, we model cognitive events based on EEG data by characterizing the data using EEG optical flow, which is designed to preserve multimodal EEG information in a uniform representation. Second, we design a deep transfer learning framework that is suitable for transferring knowledge by joint training and that contains an adversarial network and a special loss function. The experiments demonstrate that our approach, when applied to EEG classification tasks, has many advantages, such as robustness and accuracy.
Transfer learning enables the use of different domains, tasks, and distributions for training and testing @cite_0 . @cite_12 reviewed the current state-of-the-art transfer learning approaches in BCI. @cite_10 transferred general features via a convolutional network across subjects and experiments. @cite_19 applied kernel principle analysis and transductive parameter transfer to identify the relationships between classifier parameter vectors across subjects. @cite_9 evaluated the transferability between subjects by calculating distance and transferred knowledge in comparable feature spaces to improve accuracy. @cite_15 proposed an approach which can simultaneously transfer knowledge across domains and tasks. @cite_1 and @cite_13 attempted the learning of transferable features by embedding task-specific layers in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched.
{ "abstract": [ "Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. Adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments testify that our model yields state of the art results on standard datasets.", "", "Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.", "", "Individual differences across subjects and nonstationary characteristic of electroencephalography (EEG) limit the generalization of affective brain-computer interfaces in real-world applications. On the other hand, it is very time consuming and expensive to acquire a large number of subject-specific labeled data for learning subject-specific models. In this paper, we propose to build personalized EEG-based affective models without labeled target data using transfer learning techniques. We mainly explore two types of subject-to-subject transfer approaches. One is to exploit shared structure underlying source domain (source subject) and target domain (target subject). The other is to train multiple individual classifiers on source subjects and transfer knowledge about classifier parameters to target subjects, and its aim is to learn a regression function that maps the relationship between feature distribution and classifier parameters. We compare the performance of five different approaches on an EEG dataset for constructing an affective model with three affective states: positive, neutral, and negative. The experimental results demonstrate that our proposed subject transfer framework achieves the mean accuracy of 76.31 in comparison with a conventional generic classifier with 56.73 in average.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. 
Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.", "Transfer learning (TL) has gained significant interests recently in brain computer interface (BCI) as a key approach to design robust predictors for cross-subject and cross-experiment prediction of the brain activities in response to cognitive events. We carried out in this.aper the first comprehensive investigation of the transferability of deep convolutional neural network (CNN) for cross-subject and cross-experiment prediction of image Rapid Serial Visual Presentation (RSVP) events. We show that for both cross-subject and cross-experiment predictions, all convolutional layers and fully connected layers contain both general and subject experiment-specific features and transfer learning with weights fine-tuning can improve the prediction performance over that without transfer. However, for cross-subject prediction, the convolutional layers capture more subject-specific features, whereas for cross-experiment prediction, the convolutional layers capture more general features across experiment. Our study provides important information that will guide the design of more sophisticated deep transfer learning algorithms for EEG based classifications in BCI applications.", "" ], "cite_N": [ "@cite_13", "@cite_9", "@cite_1", "@cite_0", "@cite_19", "@cite_15", "@cite_10", "@cite_12" ], "mid": [ "2408201877", "", "2951670162", "", "2578674746", "2953226914", "2614288205", "" ] }
DEEP TRANSFER LEARNING FOR EEG-BASED BRAIN COMPUTER INTERFACE
For patients suffering from stroke, it is meaningful to provide a communication method apart from the normal nerve-muscle output pathway to deliver brain messages and commands to the external world. Due to their natural and non-intrusive characteristics, most brain-computer interface (BCI) systems select electroencephalography (EEG) signals as the input [1]. The biggest challenge and most important component in a BCI system is the EEG classifier, which translates a raw EEG signal into the commands of the human brain. Currently, two major problems hinder the improvement of EEG classification. First, traditional EEG classification methods focus on frequency-domain information and cannot fully exploit multimodal information. Second, high-quality, large-scale annotated EEG datasets are extremely difficult to construct because biological data acquisition is challenging and quality annotation is costly. (This work was supported by the China National Natural Fund: 91420302 and 91520201.) We solved these problems in the following ways. First, we modeled cognitive events based on EEG data by characterizing the data using EEG optical flow, which is designed to preserve multimodal EEG information. In this way, the EEG classification problem is reduced to a video classification problem and can be solved using advanced computer vision technology. Second, we designed a deep transfer learning framework suitable for transferring knowledge by joint training, which contains an adversarial network and a special loss function. The architecture of the transfer network was borrowed from computer vision, and the parameters of the transfer network were jointly trained on ImageNet and EEG optical flow. In order to achieve more efficient transfer learning, a special loss function which considers the performance of the adversarial network was used in the joint training. After that, a classification network was trained on EEG optical flow to obtain the final category label. Although natural images and EEG signals are significantly different, EEG optical flow provides a similar representation of the two different domains, and our joint training algorithm brings the two domains closer together. The contributions of this paper are as follows: (1) We propose EEG optical flow, which is designed to preserve multimodal EEG information in a uniform representation and reduces the EEG classification problem to a video classification problem. (2) We construct a deep transfer learning framework to transfer knowledge from computer vision in a sophisticated way, which solves the problem of insufficient biological training data. (3) We perform experiments on a public dataset, and the results show that our approach has many advantages over traditional methods. METHODS EEG Optical Flow Traditional methods do not fully exploit multimodal information. For example, they ignore the locations of the electrodes and the inherent information in the spatial dimension. In our approach, we convert raw EEG signals into EEG optical flow to represent the multimodal information of EEG. EEG video is converted from the raw EEG signal. First, filtering is performed using five stereotyped frequency filters (α: 8-13 Hz, β: 14-30 Hz, γ: 31-51 Hz, δ: 0.5-3 Hz, θ: 4-7 Hz) to characterize different EEG signal rhythms. Second, EEG video frames are generated from each raw EEG signal frame in the time dimension. 
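As a concrete illustration of the filtering step, the sketch below splits a raw EEG array into the five rhythms listed above using Butterworth band-pass filters; the sampling rate, filter order and array layout are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Frequency bands listed in the text (Hz); delta's lower edge is kept above 0 for a valid band-pass.
BANDS = {"alpha": (8, 13), "beta": (14, 30), "gamma": (31, 51), "delta": (0.5, 3), "theta": (4, 7)}

def filter_rhythms(raw_eeg, fs=256.0, order=4):
    """Split a (channels, samples) EEG array into the five stereotyped rhythms.

    Returns a dict mapping band name -> filtered array of the same shape.
    fs and order are illustrative defaults, not values from the paper.
    """
    filtered = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered[name] = filtfilt(b, a, raw_eeg, axis=-1)  # zero-phase filtering along time
    return filtered
```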
We project the 3D locations of the electrodes to 2D points via azimuthal equidistant projection (AEP), which borrows from mapping applications, and interpolate them to a gray-scale image using the Clough-Tocher algorithm. The processes are shown in Figure 1. AEP can maintain the distances between electrodes more proportionately to represent more useful information in the spatial dimension. EEG optical flow is extracted from the converted EEG video. Optical flow [15] is introduced in our approach to describe the time-varying information of the EEG signal. Optical flow is widely used in video classification approaches because it can describe the apparent motion of objects in a visual scene by calculating the motion between two neighboring image frames taken at times t and t + ∆t at every pixel position. We store the optical flow as an image and rescale it to [0, 255]. A visualization is shown in Figure 2, obtained by mapping flow direction and magnitude to an HSV image. Many benefits can be gained from using the EEG optical flow. Uniform representation of multimodal information: The spatial structure of the electrodes is preserved by the AEP, and the spectral information extracted via the five stereotyped frequency filters and the temporal information are represented by the optical flow. Suitable for CNN: Due to the inherent structure of CNNs, the EEG optical flow is more compatible with image and video data structures. A CNN can discover regional information in the EEG optical flow, which reflects information about the corresponding brain regions. Transfer learning ability: By reducing the EEG classification problem to a video classification problem, we gain the ability to transfer knowledge from computer vision, which has large-scale annotated datasets, such as ImageNet, and many excellent networks. The entire dataset can be divided into independent epochs, which are the responses to stimulus events, according to the label of the stimulus channel. We employ a resampling algorithm to extend the training data, which plays a positive role in improving the generalization performance of the classifier, which we discuss in the following section. Network Architecture and Transfer Learning Insufficient training data is a serious problem in all domains related to bioinformatics. Our answer to this problem is transfer learning, which transfers knowledge from computer vision. We construct a deep transfer learning framework that contains two steps to obtain the final EEG category labels. The architecture of our network is shown in Figure 3. [Fig. 3: Architecture of our deep transfer learning framework, using AlexNet as an example of the transfer network.] Joint training aims to learn a better representation for natural images and EEG optical flow. Many studies have demonstrated that the front layers of a CNN can extract the general features of images, such as edges and corners (shown in the red box). But a general feature extractor trained on natural images does not fully match the EEG optical flow. Many previous works have shown that, in order to achieve a better transfer effect, the marginal distributions of features from the source domain and target domain should be as similar as possible. Inspired by generative adversarial nets (GAN), we apply an adversarial network (shown in the green box) to train a better general feature extractor. We use features extracted from natural images and EEG optical flow as the inputs for the adversarial network and train it to identify their origins. 
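The projection and flow-extraction steps could look roughly like the following sketch, assuming 3D electrode positions on a unit sphere centred below the vertex and one scalar value per electrode per frame; the grid size, the exact AEP formulation and the Farneback parameters are illustrative assumptions rather than values from the paper.

```python
import numpy as np
import cv2
from scipy.interpolate import CloughTocher2DInterpolator

def aep_project(xyz):
    """Azimuthal equidistant projection of 3D electrode positions (unit sphere) to 2D points."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    rho = np.arccos(np.clip(z, -1.0, 1.0))        # angular distance from the vertex (preserves distances)
    az = np.arctan2(y, x)
    return np.stack([rho * np.cos(az), rho * np.sin(az)], axis=1)

def frame_from_values(points_2d, values, size=32):
    """Interpolate per-electrode values onto a size x size gray image with Clough-Tocher interpolation."""
    lo, hi = points_2d.min(0), points_2d.max(0)
    gx, gy = np.meshgrid(np.linspace(lo[0], hi[0], size), np.linspace(lo[1], hi[1], size))
    interp = CloughTocher2DInterpolator(points_2d, values, fill_value=0.0)
    return interp(gx, gy).astype(np.float32)

def optical_flow(frames):
    """Dense Farneback optical flow between consecutive frames (list of 2D arrays)."""
    flows = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        p = cv2.normalize(prev, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        n = cv2.normalize(nxt, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        flow = cv2.calcOpticalFlowFarneback(p, n, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)  # (H, W, 2): per-pixel dx, dy; direction/magnitude can then be mapped to HSV
    return flows
```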
If the adversarial network performs poorly, the difference between the two types of features is small and transferability is better, and vice versa. Suppose we have ImageNet {X_img, Y_img} and EEG optical flow {X_of, Y_of}, where Y denotes the data labels. Our goal of joint training is to produce a general feature extractor with parameters θ_extr that can represent natural images and EEG optical flow in a suitable way and can support correct classification later. θ_img and θ_adver denote the parameters of the ImageNet classification network and the adversarial network. In order to extract more transferable general features, we use a special loss function during ImageNet classification training, which takes into account the performance of the adversarial network. It can be defined as follows: $L_{img} = -\sum_{k} \mathbb{I}[y = k] \log p_k + \alpha L_{adver}$ (1), $L_{adver} = -\sum_{d} \mathbb{I}[d = D] \log p_d$ (2), where k indexes the categories, p_k is the softmax of the classifier activations, d ranges over the two domains with D the true domain label, L_adver is the cross entropy of the adversarial network, and α is the hyper-parameter controlling how strongly L_adver influences the optimization. This loss function minimizes the performance of the adversarial network. That is, the two types of features are pushed towards marginal distributions as similar as possible, so that they are not easily distinguished by the adversarial network. While training the ImageNet classification network, we reduce the performance of the adversarial network by optimizing Equation (3); while training the adversarial network, we improve its performance by optimizing Equation (4). These two goals stand in direct opposition to one another, and we overcome this by iteratively optimizing the following two objectives while fixing the other parameters: $\arg\min_{\theta_{extr}, \theta_{img}} L_{img}(X_{img}, X_{of}, \theta_{adver}; \theta_{extr}, \theta_{img})$ (3) and $\arg\min_{\theta_{extr}, \theta_{adver}} L_{adver}(X_{img}, X_{of}, \theta_{img}; \theta_{extr}, \theta_{adver})$ (4). Joint training forces the transfer network to discover general features with more transferability, which is important to obtain useful knowledge from natural images and transfer it to EEG optical flow. EEG classification aims to obtain the final EEG label. General features are extracted by the transfer network with the pretrained parameters (identified by the red arrow). The features are then fed to a classification network (shown in the purple box) with two RNN layers and two fully connected layers. We use long short-term memory (LSTM) to prevent vanishing gradient problems in the time dimension when training. Two fully connected layers are applied at the end of the classification network, with the last layer applying a softmax activation function to obtain the final EEG label. If a fine-tuning strategy is used, the last one or two layers of the transfer network are updated simultaneously during EEG classification. EXPERIMENTS We apply our approach to a well-known public dataset in the BCI field, named Open Music Imagery Information Retrieval (OpenMIIR), published by the Brain and Mind Institute at the University of Western Ontario [16,17]. The parameters used in our approach are described as follows. We convert raw EEG signals into EEG videos with thirteen frames and a 32×32 resolution. These frames are resampled 50 times and converted to EEG optical flow with twelve frames. In the classification network, the recurrent layers contain 128 nodes. After the LSTM layers are applied, a dropout layer with a 0.25 ratio is applied to disable a portion of the neurons. 
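A minimal PyTorch-style sketch of the joint loss in Eqs. (1)-(4) above, assuming a shared feature extractor, an ImageNet classifier head and a domain (adversarial) classifier as separate modules; module definitions, batch handling and the value of α are placeholders, and the sign of the αL_adver term follows Eq. (1) as written.

```python
import torch
import torch.nn as nn

ce = nn.CrossEntropyLoss()

def joint_losses(extractor, img_head, adv_head, x_img, y_img, x_of, alpha=0.1):
    """Return (L_img, L_adver) as in Eqs. (1)-(2).

    extractor: shared general-feature extractor (theta_extr)
    img_head:  ImageNet classifier head (theta_img)
    adv_head:  adversarial domain classifier (theta_adver), assumed to accept extractor features
    """
    f_img, f_of = extractor(x_img), extractor(x_of)

    # Domain labels: 0 = natural image, 1 = EEG optical flow.
    dom_logits = torch.cat([adv_head(f_img), adv_head(f_of)], dim=0)
    dom_labels = torch.cat([torch.zeros(len(f_img)), torch.ones(len(f_of))]).long().to(dom_logits.device)
    l_adver = ce(dom_logits, dom_labels)                      # Eq. (2): domain cross entropy

    l_img = ce(img_head(f_img), y_img) + alpha * l_adver      # Eq. (1), sign as written in the text
    return l_img, l_adver

# Alternating updates corresponding to Eqs. (3) and (4): one optimizer steps
# (theta_extr, theta_img) on L_img, the other steps (theta_extr, theta_adver) on L_adver.
```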
The fully connected layers in the EEG classification network contain 64 nodes. We employ several popular, high-performing networks as the targets of transfer learning. These networks, including AlexNet, VGG16, VGG19 and ResNet, are winners of past ImageNet Large Scale Visual Recognition Challenge (ILSVRC) competitions [18]. Results According to the approach described in the previous sections, we carry out classification experiments on the OpenMIIR dataset. The OpenMIIR dataset does not distinguish between training and test sets; therefore, we randomly selected 10% of the dataset to be used as the test dataset. The experimental results show that our approach is superior to other current state-of-the-art methods with better classification accuracy. Figure 4 shows the 12-class confusion matrix of the experimental results when different transfer networks are used for our deep transfer learning framework. One important goal of our approach is to solve the problem of insufficient training data. Due to the design of our approach, it is possible to train a large-scale deep neural network using a limited EEG training dataset by transferring knowledge from computer vision. To determine the utility of our solution, we test our approach while further reducing the training dataset. As the baseline, we tested three recently proposed methods: the support vector machine classifier (SVC) described in [19], the deep neural network (DNN) described in [17] and the CNN described in [20]. Experiments on the OpenMIIR dataset are carried out to compare the performance of our approach and the baseline methods while further reducing the training dataset, and the results are shown in Table 1. In addition, we tested our approach without joint training. Discussion We can draw the following conclusions from the experimental results presented in the previous section: (1) The experimental results shown in Figure 4 and Table 1 demonstrate that our proposed approach achieves accuracy that is clearly superior to that of the traditional methods; (2) VGG16 and VGG19 are good choices of transfer network; (3) Table 1 shows that our approach can achieve acceptable results while further reducing the size of the training set; (4) Joint training plays an important and positive role in the final results. Due to the benefits of transfer learning, we can train these large neural networks with a limited training dataset. However, compared to the traditional EEG classification approaches, our network requires more time for prediction due to the network complexity, which may present obstacles when our approach is applied to real-time BCI systems. CONCLUSIONS In this paper, we propose a novel EEG signal classification approach with EEG optical flow and deep transfer learning in response to two major problems in EEG classification: (1) the inability of traditional methods to fully exploit multimodal information and (2) insufficient training data. Our approach is superior to other state-of-the-art methods, which is important for building better BCI systems, and provides a new perspective for solving the problem of EEG classification. In the future, we plan to develop an improved network based on state-of-the-art methods in computer vision and other domains. In addition, our approach can be viewed as a general bio-signal classification framework that is suitable for other biomedical signals, such as functional magnetic resonance imaging (fMRI).
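For concreteness, the classification network described above (two LSTM layers with 128 units, dropout 0.25, fully connected layers with 64 units and a softmax over the 12 OpenMIIR classes), fed by a pretrained transfer network, might be assembled roughly as follows; the torchvision backbone choice, the feature dimension and the 224x224 input size are assumptions, since the paper's 32x32 flow images would need to be resized or otherwise adapted.

```python
import torch
import torch.nn as nn
from torchvision import models

class EEGClassifier(nn.Module):
    """Pretrained transfer network + 2-layer LSTM (128 units) + FC head, per the description above."""
    def __init__(self, num_classes=12, feat_dim=4096):
        super().__init__()
        backbone = models.vgg16(pretrained=True)                       # VGG16 used here as one possible transfer network
        self.features = nn.Sequential(backbone.features, nn.Flatten(),
                                      *list(backbone.classifier.children())[:-1])  # -> feat_dim general features
        self.lstm = nn.LSTM(feat_dim, 128, num_layers=2, batch_first=True)
        self.drop = nn.Dropout(0.25)
        self.fc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, num_classes))

    def forward(self, clips):                      # clips: (B, T, 3, 224, 224) optical-flow frames (assumed resized)
        b, t = clips.shape[:2]
        f = self.features(clips.flatten(0, 1))     # per-frame general features from the transfer network
        f = f.view(b, t, -1)
        out, _ = self.lstm(f)
        return self.fc(self.drop(out[:, -1]))      # logits for the final EEG label (softmax applied in the loss)
```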
2,184
1808.01050
2952888378
With multiple crowd gatherings of millions of people every year in events ranging from pilgrimages to protests, concerts to marathons, and festivals to funerals; visual crowd analysis is emerging as a new frontier in computer vision. In particular, counting in highly dense crowds is a challenging problem with far-reaching applicability in crowd safety and management, as well as gauging political significance of protests and demonstrations. In this paper, we propose a novel approach that simultaneously solves the problems of counting, density map estimation and localization of people in a given dense crowd image. Our formulation is based on an important observation that the three problems are inherently related to each other making the loss function for optimizing a deep CNN decomposable. Since localization requires high-quality images and annotations, we introduce UCF-QNRF dataset that overcomes the shortcomings of previous datasets, and contains 1.25 million humans manually marked with dot annotations. Finally, we present evaluation measures and comparison with recent deep CNN networks, including those developed specifically for crowd counting. Our approach significantly outperforms state-of-the-art on the new dataset, which is the most challenging dataset with the largest number of crowd annotations in the most diverse set of scenes.
Crowd counting is an active area of research, with works tackling the three aspects of the problem: counting-by-regression @cite_28 , @cite_11 , @cite_16 , @cite_22 , @cite_2 , density map estimation @cite_11 , @cite_15 , @cite_8 , @cite_29 , @cite_1 and localization @cite_20 , @cite_17 .
{ "abstract": [ "We present a privacy-preserving system for estimating the size of inhomogeneous crowds, composed of pedestrians that travel in different directions, without using explicit object segmentation or tracking. First, the crowd is segmented into components of homogeneous motion, using the mixture of dynamic textures motion model. Second, a set of simple holistic features is extracted from each segmented region, and the correspondence between features and the number of people per segment is learned with Gaussian process regression. We validate both the crowd segmentation algorithm, and the crowd counting system, on a large pedestrian dataset (2000 frames of video, containing 49,885 total pedestrian instances). Finally, we present results of the system running on a full hour of video.", "Cross-scene crowd counting is a challenging task where no laborious data annotation is required for counting people in new target surveillance crowd scenes unseen in the training set. The performance of most existing crowd counting methods drops significantly when they are applied to an unseen scene. To address this problem, we propose a deep convolutional neural network (CNN) for crowd counting, and it is trained alternatively with two related learning objectives, crowd density and crowd count. This proposed switchable learning approach is able to obtain better local optimum for both objectives. To handle an unseen target crowd scene, we present a data-driven method to finetune the trained CNN model for the target scene. A new dataset including 108 crowd scenes with nearly 200,000 head annotations is introduced to better evaluate the accuracy of cross-scene crowd counting methods. Extensive experiments on the proposed and another two existing datasets demonstrate the effectiveness and reliability of our approach.", "In public venues, crowd size is a key indicator of crowd safety and stability. Crowding levels can be detected using holistic image features, however this requires a large amount of training data to capture the wide variations in crowd distribution. If a crowd counting algorithm is to be deployed across a large number of cameras, such a large and burdensome training requirement is far from ideal. In this paper we propose an approach that uses local features to count the number of people in each foreground blob segment, so that the total crowd estimate is the sum of the group sizes. This results in an approach that is scalable to crowd volumes not seen in the training data, and can be trained on a very small data set. As a local approach is used, the proposed algorithm can easily be used to estimate crowd density throughout different regions of the scene and be used in a multi-camera environment. A unique localised approach to ground truth annotation reduces the required training data is also presented, as a localised approach to crowd counting has different training requirements to a holistic one. Testing on a large pedestrian database compares the proposed technique to existing holistic techniques and demonstrates improved accuracy, and superior performance when test conditions are unseen in the training set, or a minimal training set is used.", "This paper presents a patch-based approach for crowd density estimation in public scenes. We formulate the problem of estimating density in a structured learning framework applied to random decision forests. 
Our approach learns the mapping between patch features and relative locations of all objects inside each patch, which contribute to generate the patch density map through Gaussian kernel density estimation. We build the forest in a coarse-to-fine manner with two split node layers, and further propose a crowdedness prior and an effective forest reduction method to improve the estimation accuracy and speed. Moreover, we introduce a semi-automatic training method to learn the estimator for a specific scene. We achieved state-of-the-art results on the public Mall dataset and UCSD dataset, and also proposed two potential applications in traffic counts and scene understanding with promising results.", "", "", "People counting in extremely dense crowds is an important step for video surveillance and anomaly warning. The problem becomes especially more challenging due to the lack of training samples, severe occlusions, cluttered scenes and variation of perspective. Existing methods either resort to auxiliary human and face detectors or surrogate by estimating the density of crowds. Most of them rely on hand-crafted features, such as SIFT, HOG etc, and thus are prone to fail when density grows or the training sample is scarce. In this paper we propose an end-to-end deep convolutional neural networks (CNN) regression model for counting people of images in extremely dense crowds. Our method has following characteristics. Firstly, it is a deep model built on CNN to automatically learn effective features for counting. Besides, to weaken influence of background like buildings and trees, we purposely enrich the training data with expanded negative samples whose ground truth counting is set as zero. With these negative samples, the robustness can be enhanced. Extensive experimental results show that our method achieves superior performance than the state-of-the-arts in term of the mean and variance of absolute difference.", "Following [Lempitsky and Zisserman, 2010], we seek to count objects by integrating over an object density map that is predicted from an input image. In contrast to that work, we propose to estimate the object density map by averaging over structured, namely patch-wise, predictions. Using an ensemble of randomized regression trees that use dense features as input, we obtain results that are of similar quality, at a fraction of the training time, and with low implementation effort. An open source implementation will be provided in the framework of http: ilastik.org.", "We propose to leverage multiple sources of information to compute an estimate of the number of individuals present in an extremely dense crowd visible in a single image. Due to problems including perspective, occlusion, clutter, and few pixels per person, counting by human detection in such images is almost impossible. Instead, our approach relies on multiple sources such as low confidence head detections, repetition of texture elements (using SIFT), and frequency-domain analysis to estimate counts, along with confidence associated with observing individuals, in an image region. Secondly, we employ a global consistency constraint on counts using Markov Random Field. This caters for disparity in counts in local neighborhoods and across scales. We tested our approach on a new dataset of fifty crowd images containing 64K annotated humans, with the head counts ranging from 94 to 4543. This is in stark contrast to datasets used for existing methods which contain not more than tens of individuals. 
We experimentally demonstrate the efficacy and reliability of the proposed approach by quantifying the counting performance.", "We propose a novel object detection framework for partially-occluded small instances, such as pedestrians in low resolution surveillance video, cells under a microscope, flocks of small animals (e.g. birds, fishes), or even tiny insects like honeybees and flies. These scenarios are very challenging for traditional detectors, which are typically trained on individual instances. In our approach, we first estimate the object density map of the input image, and then divide it into local regions. For each region, a sliding window (ROI) is passed over the density map to calculate the instance count within each ROI. 2D integer programming is used to recover the locations of object instances from the set of ROI counts, and the global count estimate of the density map is used as a constraint to regularize the detection performance. Finally, the bounding box for each instance is estimated using the local density map. Compared with current small-instance detection methods, our proposed approach achieves state-of-the-art performance on several challenging datasets including fluorescence microscopy cell images, UCSD pedestrians, small animals and insects.", "We propose a new supervised learning framework for visual object counting tasks, such as estimating the number of cells in a microscopic image or the number of humans in surveillance video frames. We focus on the practically-attractive case when the training images are annotated with dots (one dot per object). Our goal is to accurately estimate the count. However, we evade the hard task of learning to detect and localize individual object instances. Instead, we cast the problem as that of estimating an image density whose integral over any image region gives the count of objects within that region. Learning to infer such density can be formulated as a minimization of a regularized risk quadratic cost function. We introduce a new loss function, which is well-suited for such learning, and at the same time can be computed efficiently via a maximum subarray algorithm. The learning can then be posed as a convex quadratic program solvable with cutting-plane optimization. The proposed framework is very flexible as it can accept any domain-specific visual features. Once trained, our system provides accurate object counts and requires a very small time overhead over the feature extraction step, making it a good candidate for applications involving real-time processing or dealing with huge amount of visual data." ], "cite_N": [ "@cite_22", "@cite_8", "@cite_28", "@cite_29", "@cite_1", "@cite_17", "@cite_2", "@cite_15", "@cite_16", "@cite_20", "@cite_11" ], "mid": [ "2123175289", "1910776219", "2155916750", "2207893099", "", "", "1978232622", "1542079534", "2072232009", "1908321067", "2145983039" ] }
Composition Loss for Counting, Density Map Estimation and Localization in Dense Crowds
Counting dense crowds is significant from both a socio-political and a safety perspective. At one end of the spectrum, there are large ritual gatherings such as during pilgrimages that typically have large crowds occurring in known and pre-defined locations. Although they generally have passive crowds coming together for peaceful purposes, disasters have been known to occur, for instance, during Love Parade [9] and Hajj [1]. For active crowds, such as expressive mobs in demonstrations and protests, counting is important from both a political and a safety standpoint. It is very common for different sides to claim divergent numbers for a crowd gathering, inclined towards their political standing on the concerned issue. Beyond subjectivity and preference for certain political or social outcomes, the disparate counting estimates from opposing parties have a basis in numerical cognition as well. In humans, the results on subitizing [21] suggest that once the number of observed objects increases beyond four, the brain switches from the exact Parallel Individuation System (PIS) to the inaccurate but scalable Approximate Number System (ANS) to count objects [11]. Thus, computer vision based crowd counting offers an alternative, fast and objective estimation of the number of people in such events. Furthermore, crowd counting is extendable to other domains, for instance, counting cells or bacteria from microscopic images [17,27], animal crowd estimates in wildlife sanctuaries [2], or estimating the number of vehicles at transportation hubs or traffic jams [19]. In this paper, we propose a novel approach to crowd counting, density map estimation and localization of people in a given crowd image. Our approach stems from the observation that these three problems are very interrelated - in fact, they can be decomposed with respect to each other. Counting provides an estimate of the number of people / objects without any information about their location. Density maps, which can be computed at multiple levels, provide weak information about the location of each person. Localization does provide accurate location information; nevertheless, it is extremely difficult to estimate directly due to its very sparse nature. Therefore, we propose to estimate all three tasks simultaneously, while employing the fact that each is a special case of another. Density maps can be 'sharpened' till they approximate the localization map, whose integral should equal the true count. Furthermore, we introduce a new dataset, the largest to date, for training and evaluating dense crowd counting, density map estimation and localization methods, particularly suitable for training very deep Convolutional Neural Networks (CNNs). Though counting has traditionally been considered the primary focus of research, density map estimation and localization have significance and utility beyond counting. In particular, two applications are noteworthy: initialization / detection of people for tracking in dense crowds [13]; and rectifying counting errors from an automated computer vision algorithm. That is, for a real user or analyst who desires to estimate the exact count for a real image without any error, the results of counting alone are insufficient. The single number for an entire image makes it difficult to assess the error or the source of the error. However, localization can provide an initial set of dot locations of the individuals; the user can then quickly go through the image and remove the false positives and add the false negatives. 
The count using such an approach will be much more accurate and the user can get a 100% precise count for the query image. This is particularly important when the number of image samples is small, and reliable counts are desired. Prior to 2013, much of the work in crowd counting focused on low-density scenarios. For instance, the UCSD dataset [4] contains 2,000 video frames with 49,885 annotated persons. The dataset is low density and low resolution compared to many recent datasets, and its train and test splits belong to a single scene. The WorldExpo'10 dataset [29] contains 108 low-to-medium density scenes and overcomes the issue of diversity to some extent. The UCF dataset [12] contains 50 different images with counts ranging between 96 and 4,633 per image. Each image has a different resolution, camera angle, and crowd density. Although it was the first dataset for dense crowd images, it has problems with annotations (Figure 1) due to limited availability of high-resolution crowd images at the time. The ShanghaiTech crowd dataset [30] contains 1,198 annotated images with a total of 330,165 annotations. This dataset is divided into two parts: Part A contains 482 images and Part B contains 716 images. The numbers of training images are 300 and 400 in the two parts, respectively. Only the images in Part A contain high-density crowds, with 482 images and 250K annotations. Table 1 summarizes the statistics of the multi-scene datasets for dense crowd counting. The proposed UCF-QNRF dataset has the largest number of high-count crowd images and annotations, and a wider variety of scenes containing the most diverse set of viewpoints, densities and lighting variations. The resolution is large compared to WorldExpo'10 [29] and ShanghaiTech [30], as can be seen in Fig. 2(b). The average density, i.e., the number of people per pixel over all images, is also the lowest, signifying high-quality large images. The lower per-pixel density is partly due to the inclusion of background regions, so the images contain many high-density regions as well as zero-density regions. Part A of the ShanghaiTech dataset has high-count crowd images as well; however, they are severely cropped to contain crowds only. On the other hand, the new UCF-QNRF dataset contains buildings, vegetation, sky and roads as they are present in realistic scenarios captured in the wild. This makes the dataset more realistic as well as more difficult. Similarly, Figure 2(a) shows the diversity in counts among the datasets. The distribution of the proposed dataset is similar to that of UCF CC 50 [12]; however, the new dataset is 30 and 20 times larger in terms of the number of images and annotations, respectively. We hope the new dataset will significantly increase research activity in visual crowd analysis and will pave the way for building deployable practical counting and localization systems for dense crowds. The rest of the paper is organized as follows. In Sec. 2 we review related work, and present the proposed approach for simultaneous crowd counting, density map estimation and localization in Sec. 3. The process for collection and annotation of the UCF-QNRF dataset is covered in Sec. 4, while the three tasks and evaluation measures are motivated in Sec. 5. The experimental evaluation and comparison are presented in Sec. 6. We conclude with suggestions for future work in Sec. 7. 
Deep CNN with Composition Loss In this section, we present the motivation for decomposing the loss of the three interrelated problems of counting, density map estimation and localization, followed by details about the deep Convolutional Neural Network which can enable training and estimation of the three tasks simultaneously. Composition Loss Let $\mathbf{x} = [x, y]$ denote a pixel location in a given image, and N be the number of people annotated with $\{\mathbf{x}_i : i = 1, 2, \ldots, N\}$ as their respective locations. Dense crowds typically depict heads of people as they are the only parts least occluded and mostly visible. In localization maps, only a single pixel is activated, i.e., set to 1 per head, while all other pixels are set to 0. This makes localization maps extremely sparse and therefore difficult to train and estimate. We observe that successive computation of 'sharper' density maps, which are relatively easier to train, can aid in localization as well. Moreover, all three tasks should influence the count, which is the integral over the density or localization map. We use the Gaussian kernel and adapt it for our problem of simultaneously solving the three tasks. Due to the perspective effect and possibly variable density of the crowd, a single value of bandwidth, σ, cannot be used for the Gaussian kernel, as it might lead to well-defined separation between people close to the camera or in regions of low density, while causing excess blurring in other regions. [Figure caption: The proposed Composition Loss is implemented through multiple dense blocks after branching off the base network. We also test the effect of an additional constraint on the density and localization maps (shown with amber and orange blocks) such that the count after integration of each should also be consistent with the groundtruth count.] Many images of dense crowds depict crowds in their entirety, making automatic perspective rectification difficult. Thus, we propose to define $\sigma_i$ for each person i as the minimum of the $\ell_2$ distance to its nearest neighbor in the spatial domain of the image or some maximum threshold, τ. This ensures that the location information of each person is preserved precisely irrespective of the default kernel bandwidth, τ. Thus, the adaptive Gaussian kernel is given by $D(\mathbf{x}, f(\cdot)) = \sum_{i=1}^{N} \frac{1}{\sqrt{2\pi}\, f(\sigma_i)} \exp\!\left(-\frac{(x - x_i)^2 + (y - y_i)^2}{2 f(\sigma_i)^2}\right)$, (1) where the function f is used to produce a successive set of 'sharper' density maps. We define $f_k(\sigma) = \sigma^{1/k}$. Thus, $D_k = D(\mathbf{x}, f_k(\cdot))$. As can be seen, when k = 1, $D_k$ is a very smoothed-out density map using the nearest-neighbor dependent bandwidth and τ, whereas as $k \to \infty$, $D_k$ approaches the binary localization map with a Dirac delta function placed at each annotated pixel. Since each pixel has a unit area, the localization map assumes a unit value at the annotated location. For our experiments we used three density levels, with the last one being the localization map. It is also interesting to note that the various connections between the density levels and the base CNN also serve to provide intermediate supervision, which aids in training the filters of the base CNN towards counting and density estimation early on in the network. Hypothetically, since the integral over each estimated $\hat{D}_k$ yields a count for that density level, the final count can be obtained by taking the mean of the counts from the density and localization maps as well as the regression output from the base CNN. This has two potential advantages: 1) the final count relies on multiple sources - each capturing the count at a different scale. 
2) During training, the mean of the four counts should equal the true count, which implicitly enforces an additional constraint that $\hat{D}_k$ should not only capture the density and localization information, but that each of their counts should also sum to the groundtruth count. For training, the loss function of the density and localization maps is the mean square error between the predicted and ground truth maps, i.e., $L_k = \mathrm{MSE}(\hat{D}_k, D_k)$, where k = 1, 2, and ∞, and the regression loss, $L_c$, is the Euclidean loss between the predicted and groundtruth counts, while the final loss is defined as the weighted mean of all four losses. For density map estimation and localization, we branch out from DenseBlock2 and feed it to our Density Network (see Table 2). The Density Network introduces two new dense blocks and three 1 × 1 convolutional layers. Each dense block has the features computed at the previous step, concatenated with all the density levels predicted thus far, as input, and learns features aimed at computing the current density / localization map. We used 1 × 1 convolutions to get the output density map from these features. Density Level 1 is computed directly from DenseBlock2 features. DenseNet with Composition Loss We used the Adam solver with a step learning rate in all our experiments. We used 0.001 as the initial learning rate and reduced it by a factor of 2 after every 20 epochs. We trained the entire network for 70 epochs with a batch size of 16. The UCF-QNRF Dataset Dataset Collection. The images for the dataset were collected from three sources: Flickr, Web Search and the Hajj footage. The Hajj images were carefully selected so that there are multiple images that capture different locations, viewpoints, perspective effects and times of the day. For Flickr and Web Search, we manually generated the following queries: CROWD, HAJJ, SPECTATOR CROWD, PILGRIMAGE, PROTEST CROWD and CONCERT CROWD. These queries were then passed onto the Flickr and Google Image Search APIs. We set the desired number of images for each query to 2000 for Flickr and 200 for Google Image Search. The search sorted all the results by RELEVANCE incorporating both titles and tags, and for Flickr we also ensured that only those images were downloaded for which original resolutions were permitted to be downloaded (through the URL O specifier). The static links to all the images were extracted and saved for all the query terms, which were then downloaded using the respective APIs. The images were also checked for duplicates by computing image similarities followed by manual verification and discarding of duplicates. Initial Pruning. The initial set of images was then manually checked for desirability. Many of the images were pruned due to one or more of the following reasons: - Scenes that did not depict crowds at all or depicted only low-density crowds - Objects or visualizations of objects other than humans - Motion blur or low resolution - Very high perspective effect, i.e., camera height similar to average human height - Images with watermarks or those where text occupied more than 10% of the image. In high-density crowd images, it is mostly the heads that are visible. However, people who appear far away from the camera become indistinguishable beyond a certain distance, which depends on crowd density, lighting as well as the resolution of the camera sensor. During pruning, we kept those images where the heads were separable visually. 
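To make Eq. (1) and the density levels $f_k(\sigma) = \sigma^{1/k}$ from Sec. 3 concrete, here is a small numpy sketch of generating one density level from dot annotations; the image size, τ, the single-annotation fallback and the normalization (which follows Eq. (1) as written) are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def density_map(points, shape, k_level=1, tau=8.0):
    """Adaptive-kernel density map D_k of Eq. (1).

    points:  (N, 2) array of annotated (x, y) head locations
    shape:   (H, W) of the output map
    k_level: sharpness level; sigma_i is raised to 1/k_level (large k approaches the localization map)
    tau:     maximum kernel bandwidth in pixels
    """
    H, W = shape
    D = np.zeros((H, W), dtype=np.float64)
    if len(points) == 0:
        return D
    # sigma_i = min(distance to nearest neighbour, tau); for a single annotation fall back to tau.
    if len(points) > 1:
        nn_dist = cKDTree(points).query(points, k=2)[0][:, 1]
    else:
        nn_dist = np.full(1, tau)
    sigmas = np.minimum(nn_dist, tau) ** (1.0 / k_level)      # f_k(sigma) = sigma^(1/k)

    yy, xx = np.mgrid[0:H, 0:W]
    for (px, py), s in zip(points, sigmas):
        # Kernel normalization follows Eq. (1) as written: 1 / (sqrt(2*pi) * f(sigma_i)).
        D += np.exp(-((xx - px) ** 2 + (yy - py) ** 2) / (2 * s ** 2)) / (np.sqrt(2 * np.pi) * s)
    return D
```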
Images whose heads were visually separable were annotated with the others; however, they were cropped afterwards to ensure that regions with problematic annotations, or with none at all due to the difficulty of recognizing human heads, were discarded. We performed the entire annotation process in two stages. In the first stage, unannotated images were given to the annotators, while in the second stage, the images were given to verifiers who corrected any mistakes or errors in the annotations. There were 14 annotators and 4 verifiers, who clocked 1,300 and 200 hours respectively. In total, the entire procedure involved 2,000 human-hours spent through to its completion. Statistics. The dataset has 1,535 jpeg images with 1,251,642 annotations. The train and test sets were created by sorting the images with respect to absolute counts, and selecting every 5th image into the test set. Thus, the training and test sets consist of 1201 and 334 images, respectively. In the dataset, the minimum and maximum counts are 49 and 12,865, respectively, whereas the median and mean counts are 425 and 815.4, respectively. Definition and Quantification of Tasks In this section, we define the three tasks and the associated quantification measures. Counting: The first task involves estimation of the count for a crowd image i, given by $c_i$. Although this measure does not give any information about the location or distribution of people in the image, it is still very useful for many applications, for instance, estimating the size of an entire crowd spanning several square kilometers or miles. For the application of counting large crowds, Jacob's Method [14], due to Herbert Jacob, is typically employed, which involves dividing the area A into smaller sections, finding the average number of people or density d in each section, computing the mean density $\bar{d}$ and extrapolating the results to the entire region. However, with automated crowd counting, it is now possible to obtain counts and density for multiple images at different locations, thereby permitting a more accurate integration of density over the entire area covered by the crowd. Moreover, counting through multiple aerial images requires cartographic tools to map the images onto the earth to compute ground areas. The density here is defined as the number of people in the image divided by the ground area covered by the image. We propose to use the same evaluation measures as used in the literature for this task: the Mean Absolute Error (C-MAE) and Mean Squared Error (C-MSE), with the addition of the Normalized Absolute Error (C-NAE). Density Map Estimation amounts to computing per-pixel density at each location in the image, thus preserving spatial information about the distribution of people. This is particularly relevant for safety and surveillance, since very high density at a particular location in the scene can be catastrophic [1]. This is different from counting since an image can have counts within safe limits, while containing regions that have very high density. This can happen due to the presence of empty regions in the image, such as walls and sky for mounted cameras; and roads, vehicles, buildings and forestation in aerial cameras. The metrics for evaluating density map estimation are similar to counting, except that they are per-pixel, i.e., the per-pixel Mean Absolute Error (DM-MAE) and Mean Squared Error (DM-MSE). Finally, we also propose to compute the 2D Histogram Intersection (DM-HI) distance after normalizing both the groundtruth and estimated density maps. 
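A small numpy sketch of the counting and density-map measures defined above (C-MAE, C-MSE, C-NAE, DM-MAE, DM-MSE and the histogram-intersection DM-HI); the exact normalization and averaging conventions are assumptions, since the text does not spell them out.

```python
import numpy as np

def counting_metrics(pred_counts, gt_counts):
    """C-MAE, C-MSE and C-NAE over a set of images."""
    p, g = np.asarray(pred_counts, float), np.asarray(gt_counts, float)
    err = np.abs(p - g)
    return {"C-MAE": err.mean(),
            "C-MSE": ((p - g) ** 2).mean(),
            "C-NAE": (err / g).mean()}               # absolute error normalized by the true count

def density_metrics(pred_map, gt_map):
    """Per-pixel DM-MAE / DM-MSE and histogram intersection (DM-HI) on normalized maps."""
    diff = pred_map - gt_map
    p = pred_map / (pred_map.sum() + 1e-12)          # normalize so each map sums to 1
    g = gt_map / (gt_map.sum() + 1e-12)
    return {"DM-MAE": np.abs(diff).mean(),
            "DM-MSE": (diff ** 2).mean(),
            "DM-HI": np.minimum(p, g).sum()}         # intersection of the two normalized distributions
```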
Normalizing the maps discards the effect of absolute counts and emphasizes the error in the distribution of density compared to the groundtruth. Localization: The ideal approach to crowd counting would be to detect all the people in an image and then count the number of detections. But since dense crowd images contain severe occlusions among individuals and fewer pixels per person for those away from the camera, this is not a feasible solution. This is why most approaches to crowd counting bypass explicit detection and perform direct regression on input images. However, for many applications, the precise location of individuals is needed, for instance, to initialize a tracking algorithm in very high-density crowd videos. To quantify the localization error, estimated locations are associated with the ground truth locations through 1-1 matching using greedy association, followed by computation of Precision and Recall at various distance thresholds (1, 2, 3, . . . , 100 pixels). The overall performance of the localization task is then computed through the area under the Precision-Recall curve, L-AUC; a short code sketch of this evaluation is given further below. Experiments Next, we present the results of experiments for the three tasks defined in Section 5. [Table 3: Counting results obtained using state-of-the-art methods in comparison with the proposed approach. Methods with '*' regress counts without computing density maps.] For counting, we evaluated the new UCF-QNRF dataset using the proposed method, which estimates counts, density maps and locations of people simultaneously, as well as several state-of-the-art deep neural networks [3], [8], [10] and networks specifically developed for crowd counting [30], [25], [24]. To train the networks, we extracted patches of sizes 448, 224 and 112 pixels at random locations from each training image. While deciding on image locations to extract patches from, we assigned a higher probability of selection to image regions with higher count. We used the mean square error of counts as the loss function. At test time, we divide the image into a grid of 224 × 224 pixel cells - zero-padding the image for dimensions not divisible by 224 - and evaluate each cell using the trained network. The final image count is given by aggregating the counts in all cells. Table 3 summarizes the results, which show that the proposed network significantly outperforms the competing deep CNNs and crowd counting approaches. In Figure 4, we show the images with the lowest and highest error in the test set, for counts obtained through different components of the Composition Loss. Density Map Estimation [Table 4: density map estimation results in terms of DM-MAE, DM-MSE and DM-HI; only the MCNN [30] row (0.006670, 0.0223, 0.5354) is recoverable here.] For density map estimation, we describe and compare the proposed approach with several methods that directly regress crowd density during training. Among the deep learning methods, MCNN [30] consists of three columns of convolution networks with different filter sizes to capture different head sizes and combines the output of all the columns to make a final density estimate. SwitchCNN [24] uses a similar three-column network; however, it also employs a switching network that decides which column should exclusively handle the input patch. CMTL [25] employs a multi-task network that computes a high level prior over the image patch (crowd count classification) and density estimation. These networks are specifically designed for crowd density estimation and their results are reported in the first three rows of Table 4. The results of the proposed approach are shown in the bottom row of Table 4. 
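The localization measure described earlier in this section (greedy 1-1 association at pixel-distance thresholds 1..100, precision/recall, and the area under the precision-recall curve) might be sketched as follows; the association order and the trapezoidal L-AUC approximation are assumptions rather than details taken from the paper.

```python
import numpy as np

def greedy_match(pred, gt, thr):
    """Greedy 1-1 association of predicted and groundtruth points within thr pixels; returns #true positives."""
    if len(pred) == 0 or len(gt) == 0:
        return 0
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=2)     # (P, G) pairwise distances
    tp, used_p, used_g = 0, set(), set()
    for i, j in sorted(np.ndindex(*d.shape), key=lambda ij: d[ij]):   # closest pairs first
        if d[i, j] > thr:
            break
        if i not in used_p and j not in used_g:
            used_p.add(i); used_g.add(j); tp += 1
    return tp

def localization_scores(pred, gt, thresholds=range(1, 101)):
    """Precision and recall at each distance threshold, plus a trapezoidal L-AUC estimate."""
    prec, rec = [], []
    for t in thresholds:
        tp = greedy_match(pred, gt, t)
        prec.append(tp / max(len(pred), 1))
        rec.append(tp / max(len(gt), 1))
    order = np.argsort(rec)                                           # integrate P over R (assumed AUC definition)
    auc = np.trapz(np.asarray(prec)[order], np.asarray(rec)[order])
    return np.asarray(prec), np.asarray(rec), float(auc)
```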
As Table 4 shows, the proposed approach outperforms the existing approaches by an order of magnitude. Localization For the localization task, we adopt the same network configurations used for density map estimation to perform localization. To get accurate head locations, we post-process the outputs by finding local peaks / maxima based on a threshold, also known as non-maximal suppression. Once the peaks are found, we match the predicted locations with the ground truth locations using 1-1 matching, and compute precision and recall. We use different pixel-distance thresholds, i.e., if a detection is within a particular distance threshold of the groundtruth, it is treated as a True Positive; otherwise it is a False Positive. Similarly, if there is no detection within the threshold of a groundtruth location, it becomes a False Negative. The results of localization are reported in Table 5. [Table 5: Localization results (Av. Precision / Av. Recall / L-AUC): MCNN [30] 59.93% / 63.50% / 0.591; ResNet74 [8] 61.60% / 66.90% / 0.612; DenseNet63 [10] 70.19% / 58.10% / 0.637; Encoder-Decoder [3] 71.80% / 62.98% / 0.670; Proposed 75.8% / 59.75% / 0.714.] This table shows that DenseNet [10] and Encoder-Decoder [3] outperform ResNet [8] and MCNN [30], while the proposed approach is superior to all the compared methods. The performance on the localization task is dependent on post-processing, which can alter results. Therefore, finding an optimal strategy for localization from the neural network output, or incorporating the post-processing into the network, is an important direction for future research. We also show some qualitative results of localization in Figure 5. The red dots represent the groundtruth while yellow circles are the locations estimated by our approach. Ablation Study We performed an ablation study to validate the efficacy of the composition loss introduced in this paper, as well as various choices in designing the network. These results are shown in Table 6. Next, we describe and provide details for the experiment corresponding to each row in the table. BaseNetwork: This row shows the results with the base network of our choice, which is DenseNet201. A fully-connected layer is appended to the last layer of the network followed by a single neuron which outputs the count. The input patch size is 224 × 224. DenseBlock4: This experiment studies the effect of connecting the Density Network (Table 2) containing the different density levels with DenseBlock4 of the base DenseNet instead of DenseBlock2. Since DenseBlock4 outputs feature maps of size 7 × 7, we used a deconvolution layer with stride 4 to upsample the features before feeding them into our Density Network. DenseBlock3: This experiment is similar to DenseBlock4, except that we connect our Density Network to DenseBlock3 of the base network. DenseBlock3 outputs feature maps which are 14 × 14 in spatial dimensions, whereas we intend to predict density maps of spatial dimension 28 × 28, so we upsample the feature maps using a deconvolution layer before feeding them to the proposed Density Network. Concatenate: Here, we take the sum of the two density maps and the one localization map to obtain 3 counts. We then concatenate these counts with the output of the fully-connected layer of the base network to predict the count from the single neuron. Thus, we leave it to the optimization algorithm to find appropriate weights for these 3 values along with the remaining 1920 features of the fully-connected layer. Mean: We also tested the effect of using equal weights for the counts obtained from the base network and the three density levels. 
We take the sum of each density / localization map and take the mean of the 4 values (2 density map sums, one localization sum, and one count from the base network). We treat this mean value as the final count output - both during training and testing. Thus, this imposes the constraint that the density and localization maps should not only correctly predict the locations of people, but their counts should also be consistent with the groundtruth counts irrespective of the predicted locations. Proposed: In this experiment, the Density Network is connected to DenseBlock2 of the base network; however, the Density Network simply outputs two density maps and one localization map, none of which is connected to the count output (see Figure 3). In summary, these results show that the Density Network contributes significantly to performance on the three tasks. It is better to branch out from the middle layers of the base network; nevertheless, the idea of multiple connections back and forth between the base network and the Density Network is an interesting direction for further research. Furthermore, enforcing the counts from all sources to be equal to the groundtruth count slightly worsens the counting performance. Nevertheless, it does help in estimating better density and localization maps. Finally, the decrease in error rates from right to left in Table 6 highlights the positive influence of the proposed Composition Loss. Conclusion This paper introduced a novel method to estimate counts, density maps and localization in dense crowd images. We showed that these three problems are interrelated, and can be decomposed with respect to each other through the Composition Loss, which can then be used to train a neural network. We solved the three tasks simultaneously, with the counting performance benefiting from the density map estimation and localization as well. We also proposed the large-scale UCF-QNRF dataset for dense crowds, suitable for the three tasks described in the paper. We provided details of the process of dataset collection and annotation, where we ensured that only high-resolution images were curated for the dataset. Finally, we presented an extensive set of experiments using several recent deep architectures, and showed through a detailed ablation study how the proposed approach is able to achieve good performance. We hope the new dataset will prove useful for this type of research, with applications in safety and surveillance, design and expansion of public infrastructures, and gauging political significance of various crowd events.
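The local-peak post-processing mentioned for the localization experiments (thresholded non-maximal suppression on the predicted localization map) could be sketched as follows; the window size and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def peaks_from_map(loc_map, threshold=0.25, window=3):
    """Return (x, y) coordinates of thresholded local maxima of a predicted localization map."""
    local_max = maximum_filter(loc_map, size=window) == loc_map       # non-maximal suppression
    ys, xs = np.nonzero(local_max & (loc_map > threshold))
    return np.stack([xs, ys], axis=1)                                 # (K, 2) predicted head locations
```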
4,322
1808.01050
2952888378
With multiple crowd gatherings of millions of people every year in events ranging from pilgrimages to protests, concerts to marathons, and festivals to funerals; visual crowd analysis is emerging as a new frontier in computer vision. In particular, counting in highly dense crowds is a challenging problem with far-reaching applicability in crowd safety and management, as well as gauging political significance of protests and demonstrations. In this paper, we propose a novel approach that simultaneously solves the problems of counting, density map estimation and localization of people in a given dense crowd image. Our formulation is based on an important observation that the three problems are inherently related to each other making the loss function for optimizing a deep CNN decomposable. Since localization requires high-quality images and annotations, we introduce UCF-QNRF dataset that overcomes the shortcomings of previous datasets, and contains 1.25 million humans manually marked with dot annotations. Finally, we present evaluation measures and comparison with recent deep CNN networks, including those developed specifically for crowd counting. Our approach significantly outperforms state-of-the-art on the new dataset, which is the most challenging dataset with the largest number of crowd annotations in the most diverse set of scenes.
Earlier regression-based approaches mapped global image features or a combination of local patch features to obtain counts @cite_25 , @cite_4 , @cite_16 , @cite_5 . Since these methods only produce counts, they cannot be used for density map estimation or localization. The features were hand-crafted and in some cases multiple features were used @cite_22 , @cite_16 to handle low resolution, perspective distortion and severe occlusion. On the other hand, CNNs inherently learn multiple feature maps automatically, and therefore are now being extensively used for crowd counting and density map estimation.
{ "abstract": [ "", "We present a privacy-preserving system for estimating the size of inhomogeneous crowds, composed of pedestrians that travel in different directions, without using explicit object segmentation or tracking. First, the crowd is segmented into components of homogeneous motion, using the mixture of dynamic textures motion model. Second, a set of simple holistic features is extracted from each segmented region, and the correspondence between features and the number of people per segment is learned with Gaussian process regression. We validate both the crowd segmentation algorithm, and the crowd counting system, on a large pedestrian dataset (2000 frames of video, containing 49,885 total pedestrian instances). Finally, we present results of the system running on a full hour of video.", "A number of computer vision problems such as human age estimation, crowd density estimation and body face pose (view angle) estimation can be formulated as a regression problem by learning a mapping function between a high dimensional vector-formed feature input and a scalar-valued output. Such a learning problem is made difficult due to sparse and imbalanced training data and large feature variations caused by both uncertain viewing conditions and intrinsic ambiguities between observable visual features and the scalar values to be estimated. Encouraged by the recent success in using attributes for solving classification problems with sparse training data, this paper introduces a novel cumulative attribute concept for learning a regression model when only sparse and imbalanced data are available. More precisely, low-level visual features extracted from sparse and imbalanced image samples are mapped onto a cumulative attribute space where each dimension has clearly defined semantic interpretation (a label) that captures how the scalar output value (e.g. age, people count) changes continuously and cumulatively. Extensive experiments show that our cumulative attribute framework gains notable advantage on accuracy for both age estimation and crowd counting when compared against conventional regression models, especially when the labelled training data is sparse with imbalanced sampling.", "We propose to leverage multiple sources of information to compute an estimate of the number of individuals present in an extremely dense crowd visible in a single image. Due to problems including perspective, occlusion, clutter, and few pixels per person, counting by human detection in such images is almost impossible. Instead, our approach relies on multiple sources such as low confidence head detections, repetition of texture elements (using SIFT), and frequency-domain analysis to estimate counts, along with confidence associated with observing individuals, in an image region. Secondly, we employ a global consistency constraint on counts using Markov Random Field. This caters for disparity in counts in local neighborhoods and across scales. We tested our approach on a new dataset of fifty crowd images containing 64K annotated humans, with the head counts ranging from 94 to 4543. This is in stark contrast to datasets used for existing methods which contain not more than tens of individuals. We experimentally demonstrate the efficacy and reliability of the proposed approach by quantifying the counting performance.", "This paper describes a viewpoint invariant learning-based method for counting people in crowds from a single camera. 
Our method takes into account feature normalization to deal with perspective projection and different camera orientation. The training features include edge orientation and blob size histograms resulted from edge detection and background subtraction. A density map that measures the relative size of individuals and a global scale measuring camera orientation are estimated and used for feature normalization. The relationship between the feature histograms and the number of pedestrians in the crowds is learned from labeled training data. Experimental results from different sites with different camera orientation demonstrate the performance and the potential of our method" ], "cite_N": [ "@cite_4", "@cite_22", "@cite_5", "@cite_16", "@cite_25" ], "mid": [ "", "2123175289", "2075875861", "2072232009", "2121864252" ] }
Composition Loss for Counting, Density Map Estimation and Localization in Dense Crowds
Counting dense crowds is significant both from socio-political and safety perspective. At one end of the spectrum, there are large ritual gatherings such as during pilgrimages that typically have large crowds occurring in known and pre-defined locations. Although they generally have passive crowds coming together for peaceful purposes, disasters have known to occur, for instance, during Love Parade [9] and Hajj [1]. For active crowds, such as expressive mobs in demonstrations and protests, counting is important both from political and safety standpoint. It is very common for different sides to claim divergent numbers for crowd gathering, inclined towards their political standing on the concerned issue. Beyond subjectivity and preference for certain political or social outcomes, the disparate counting estimates from opposing parties have a basis in numerical cognition as well. In humans, the results on subitizing [21] suggest that once the number of observed objects increases beyond four, the brain switches from the exact Parallel Individuation System (PIS) to the inaccurate but scalable Approximate Number System (ANS) to count objects [11]. Thus, computer vision based crowd counting offers alternative fast and objective estimation of the number of people in such events. Furthermore, crowd counting is extendable to other domains, for instance, counting cells or bacteria from microscopic images [17,27], animal crowd estimates in wildlife sanctuaries [2], or estimating the number of vehicles at transportation hubs or traffic jams [19]. In this paper, we propose a novel approach to crowd counting, density map estimation and localization of people in a given crowd image. Our approach stems from the observation that these three problems are very interrelated -in fact, they can be decomposed with respect to each other. Counting provides an estimate of the number of people / objects without any information about their location. Density maps, which can be computed at multiple levels, provide weak information about location of each person. Localization does provide accurate location information, nevertheless, it is extremely difficult to estimate directly due to its very sparse nature. Therefore, we propose to estimate all three tasks simultaneously, while employing the fact that each is special case of another one. Density maps can be 'sharpened' till they approximate the localization map, whose integral should equal to the true count. Furthermore, we introduce a new and the largest dataset to-date for training and evaluating dense crowd counting, density map estimation and localization methods, particularly suitable for training very deep Convolutional Neural Networks (CNNs). Though counting has traditionally been considered the primary focus of research, density map estimation and localization have significance and utility beyond counting. In particular, two applications are noteworthy: initialization / detection of people for tracking in dense crowds [13]; and rectifying counting errors from an automated computer vision algorithm. That is, a real user or analyst who desires to estimate the exact count for a real image without any error, the results of counting alone are insufficient. The single number for an entire image makes it difficult to assess the error or the source of the error. However, the localization can provide an initial set of dot locations of the individuals, the user then can quickly go through the image and remove the false positives and add the false negatives. 
The count using such an approach will be much more accurate and the user can get 100% precise count for the query image. This is particularly important when the number of image samples are few, and reliable counts are desired. Prior to 2013, much of the work in crowd counting focused on low-density scenarios. For instance, UCSD dataset [4] contains 2, 000 video frames with 49, 885 annotated persons. The dataset is low density and low resolution compared to many recent datasets, where train and test splits belong to a single scene. WorldExpo'10 dataset [29], contains 108 low-to-medium density scenes and overcomes the issue of diversity to some extent. UCF dataset [12] contains 50 different images with counts ranging between 96 and 4, 633 per image. Each image has a different resolution, camera angle, and crowd density. Although it was the first dataset for dense crowd images, it has problems with annotations ( Figure 1) due to limited availability of high-resolution crowd images at the time. The ShanghaiTech crowd dataset [30] contains 1, 198 annotated images with a total of 330, 165 annotations. This dataset is divided into two parts: Part A contains 482 images and Part B with 716 images. The number of training images are 300 and 400 in both parts, respectively. Only the images in Part A contain high-density crowds, with 482 images and 250K annotations. Table 1 summarizes the statistics of the multi-scene datasets for dense crowd counting. The proposed UCF-QNRF dataset has the most number of high-count crowd images and annotations, and a wider variety of scenes containing the most diverse set of viewpoints, densities and lighting variations. The resolution is large compared to WorldExpo'10 [29] and ShanghaiTech [30], as can be seen in Fig. 2(b). The average density, i.e., the number of people per pixel over all images is also the lowest, signifying high-quality large images. Lower per-pixel density is partly due to inclusion of background regions, where there are many high-density regions as well as zero-density regions. Part A of Shanghai dataset has high-count crowd images as well, however, they are severely cropped to contain crowds only. On the other hand, the new UCF-QNRF dataset contains buildings, vegetation, sky and roads as they are present in realistic scenarios captured in the wild. This makes this dataset more realistic as well as difficult. Similarly, Figure 2(a) shows the diversity in counts among the datasets. The distribution of proposed dataset is similar to UCF CC 50 [12], however, the new dataset is 30 and 20 times larger in terms of number of images and annotations, respectively, compared to UCF CC 50 [12]. We hope the new dataset will significantly increase research activity in visual crowd analysis and will pave way for building deployable practical counting and localization systems for dense crowds. The rest of the paper is organized as follows. In Sec. 2 we review related work, and present the proposed approach for simultaneous crowd counting, density map estimation and localization in Sec. 3. The process for collection and annotation of the UCF-QNRF dataset is covered in Sec. 4, while the three tasks and evaluation measures are motivated in Sec. 5. The experimental evaluation and comparison are presented in Sec. 6. We conclude with suggestions for future work in Sec. 7. 
Deep CNN with Composition Loss In this section, we present the motivation for decomposing the loss of three interrelated problems of counting, density map estimation and localization, followed by details about the deep Convolutional Neural Network which can enable training and estimation of the three tasks simultaneously. Composition Loss Let x = [x, y] denote a pixel location in a given image, and N be the number of people annotated with {x i : i = 1, 2, . . . N } as their respective locations. Dense crowds typically depict heads of people as they are the only parts least occluded and mostly visible. In localization maps, only a single pixel is activated, i.e., set to 1 per head, while all other pixels are set to 0. This makes localization maps extremely sparse and therefore difficult to train and estimate. We observe that successive computation of 'sharper' density maps which are relatively easier to train can aid in localization as well. Moreover, all three tasks should influence count, which is the integral over density or localization map. We use the Gaussian Kernel and adapt it for our problem of simultaneous solution for the three tasks. Due to perspective effect and possibly variable density of the crowd, a single value of bandwidth, σ, cannot be used for the Gaussian kernel, as it might lead to well-defined separation between people close to the camera or in regions of low density, while excess The proposed Composition Loss is implemented through multiple dense blocks after branching off the base network. We also test the effect of additional constraint on the density and localization maps (shown with amber and orange blocks) such that the count after integral in each should also be consistent with the groundtruth count. blurring in other regions. Many images of dense crowds depict crowds in their entirety, making automatic perspective rectification difficult. Thus, we propose to define σ i for each person i as the minimum of the 2 distance to its nearest neighbor in spatial domain of the image or some maximum threshold, τ . This ensures that the location information of each person is preserved precisely irrespective of default kernel bandwidth, τ . Thus, the adaptive Gaussian kernel is given by, D(x, f(·)) = N i=1 1 √ 2πf(σ i ) exp − (x − x i ) 2 + (y − y i ) 2 2f(σ i ) 2 ,(1) where the function f is used to produce a successive set of 'sharper' density maps. We define f k (σ) = σ 1/k . Thus, D k = D(x, f k (·) ). As can be seen when k = 1, D k is a very smoothed-out density map using nearest-neighbor dependent bandwidth and τ , whereas as k −→ ∞, D k approaches the binary localization map with a Dirac Delta function placed at each annotated pixel. Since each pixel has a unit area, the localization map assumes a unit value at the annotated location. For our experiments we used three density levels with last one being the localization map. It is also interesting to note that the various connections between density levels and base CNN also serve to provide intermediate supervision which aid in training the filters of base CNN towards counting and density estimation early on in the network. Hypothetically, since integral over each estimatedD k yields a count for that density level, the final count can be obtained by taking the mean of counts from the density and localization maps as well as regression output from base CNN. This has two potential advantages: 1) the final count relies on multiple sources -each capturing count at a different scale. 
2) During training the mean of four counts should equal the true count, which implicitly enforces an additional constraint thatD k should not only capture the density and localization information, but that each of their counts should also sum to the groundtruth count. For training, the loss function of density and localization maps is the mean square error between the predicted and ground truth maps, i.e. L k = MSE(D k , D k ), where k = 1, 2, and ∞, and regression loss, L c , is Euclidean loss between predicted and groundtruth counts, while the final loss is defined as the weighted mean all four losses. For density map estimation and localization, we branch out from DenseBlock2 and feed it to our Density Network (see Table 2). The density network introduces 2 new dense blocks and three 1 × 1 convolutional layers. Each dense block has features computed at the previous step, concatenated with all the density levels predicted thus far as input, and learns features aimed at computing the current density / localization map. We used 1 × 1 convolutions to get the output density map from these features. Density Level 1 is computed directly from DenseBlock2 features. DenseNet with Composition Loss We used Adam solver with a step learning rate in all our experiments. We used 0.001 as initial learning rate and reduce the learning rate by a factor of 2 after every 20 epochs. We trained the entire network for 70 epoch with a batch size of 16. The UCF-QNRF Dataset Dataset Collection. The images for the dataset were collected from three sources: Flickr, Web Search and the Hajj footage. The Hajj images were carefully selected so that there are multiple images that capture different locations, viewpoints, perspective effects and times of the day. For Flickr and Web Search, we manually generated the following queries: CROWD, HAJJ, SPECTATOR CROWD, PILGRIMAGE, PROTEST CROWD and CONCERT CROWD. These queries were then passed onto the Flickr and Google Image Search APIs. We selected desired number of images for each query to be 2000 for Flickr and 200 for Google Image Search. The search sorted all the results by RELE-VANCE incorporating both titles and tags, and for Flickr we also ensured that only those images were downloaded for which original resolutions were permitted to be downloaded (through the URL O specifier). The static links to all the images were extracted and saved for all the query terms, which were then downloaded using the respective APIs. The images were also checked for duplicates by computing image similarities followed by manual verification and discarding of duplicates. Initial Pruning. The initial set of images were then manually checked for desirability. Many of the images were pruned due to one or more of the following reasons: -Scenes that did not depict crowds at all or low-density crowds -Objects or visualizations of objects other than humans -Motion blur or low resolution -Very high perspective effect that is camera height is similar to average human height -Images with watermarks or those where text occupied more than 10% of the image In high-density crowd images, it is mostly the heads that are visible. However, people who appear far away from the camera become indistinguishable beyond a certain distance, which depends on crowd density, lighting as well as resolution of the camera sensor. During pruning, we kept those images where the heads were separable visually. 
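To make the construction above concrete, below is a minimal NumPy/PyTorch sketch of how the level-k target maps of Eq. (1) and the composition loss just described could be implemented. The function and argument names, the default value of τ, and the per-term loss weights are our own assumptions; the text only specifies the form f_k(σ) = σ^{1/k}, the MSE/Euclidean losses, and the Adam schedule (initial learning rate 0.001, halved every 20 epochs).

```python
import numpy as np
import torch
import torch.nn.functional as F

def adaptive_density_map(points, shape, k=1, tau=8.0):
    """Level-k target map D_k of Eq. (1); tau is a hypothetical default bandwidth cap."""
    H, W = shape
    pts = np.asarray(points, dtype=np.float64)
    if len(pts) == 0:
        return np.zeros((H, W))
    # sigma_i = min(distance to nearest annotated neighbour, tau)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    sigmas = np.minimum(d.min(axis=1), tau) if len(pts) > 1 else np.full(len(pts), tau)
    sigmas = sigmas ** (1.0 / k)                       # f_k(sigma) = sigma^(1/k)
    ys, xs = np.mgrid[0:H, 0:W]
    D = np.zeros((H, W))
    for (x_i, y_i), s in zip(pts, sigmas):             # sum of Gaussians, following Eq. (1) as written
        D += np.exp(-((xs - x_i) ** 2 + (ys - y_i) ** 2) / (2 * s ** 2)) / (np.sqrt(2 * np.pi) * s)
    return D

def composition_loss(pred_maps, gt_maps, pred_count, gt_count, weights=(1.0, 1.0, 1.0, 1.0)):
    """Weighted mean of L_1, L_2, L_inf (MSE on maps) and L_c (squared error on scalar count tensors)."""
    losses = [F.mse_loss(p, g) for p, g in zip(pred_maps, gt_maps)]
    losses.append(F.mse_loss(pred_count, gt_count))
    w = torch.tensor(weights, dtype=losses[0].dtype, device=losses[0].device)
    return (w * torch.stack(losses)).sum() / w.sum()

# Training schedule stated in the text (model / data loader omitted):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)  # halve every 20 epochs
```

The weighted-mean form also makes it easy to reproduce the "Mean" ablation variant discussed later, by simply averaging the integrals of the predicted maps with the regressed count.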
Such images were annotated with the others, however, they were cropped afterwards to ensure that regions with problematic annotations or those with none at all due to difficulty in recognizing human heads were discarded. We performed the entire annotation process in two stages. In the first stage, unannotated images were given to the annotators, while in the second stage, the images were given to verifiers who corrected any mistakes or errors in annotations. There were 14 annotators and 4 verifiers, who clocked 1, 300 and 200 hours respectively. In total, the entire procedure involved 2, 000 human-hours spent through to its completion. Statistics. The dataset has 1, 535 jpeg images with 1, 251, 642 annotations. The train and test sets were created by sorting the images with respect to absolute counts, and selecting every 5th image into the test set. Thus, the training and test set consist of 1201 and 334 images, respectively. 21,7], respectively. In the dataset, the minimum and maximum counts are 49 and 12, 865, respectively, whereas the median and mean counts are 425 and 815.4, respectively. Definition and Quantification of Tasks In this section, we define the three tasks and the associated quantification measures. Counting: The first task involves estimation of count for a crowd image i, given by c i . Although this measure does not give any information about location or distribution of people in the image, this is still very useful for many applications, for instance, estimating size of an entire crowd spanning several square kilometers or miles. For the application of counting large crowds, Jacob's Method [14] due to Herbert Jacob is typically employed which involves dividing the area A into smaller sections, finding the average number of people or density d in each section, computing the mean densityd and extrapolating the results to entire region. However, with automated crowd counting, it is now possible to obtain counts and density for multiple images at different locations, thereby, permitting the more accurate integration of density over entire area covered by crowd. Moreover, counting through multiple aerial images requires cartographic tools to map the images onto the earth to compute ground areas. The density here is defined as the number of people in the image divided by ground area covered by the image. We propose to use the same evaluation measures as used in literature for this task: the Mean Absolute Error (C-MAE), Mean Squared Error (C-MSE) with the addition of Normalized Absolute Error (C-NAE). Density Map Estimation amounts to computing per-pixel density at each location in the image, thus preserving spatial information about distribution of people. This is particularly relevant for safety and surveillance, since very high density at a particular location in the scene can be catastrophic [1]. This is different from counting since an image can have counts within safe limits, while containing regions that have very high density. This can happen due to the presence of empty regions in the image, such as walls and sky for mounted cameras; and roads, vehicles, buildings and forestation in aerial cameras. The metrics for evaluating density map estimation are similar to counting, except that they are per-pixel, i.e., the per-pixel Mean Absolute Error (DM-MAE) and Mean Squared Error (DM-MSE). Finally, we also propose to compute the 2D Histogram Intersection (DM-HI) distance after normalizing both the groundtruth and estimated density maps. 
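A small NumPy sketch of the counting and density-map measures listed above. The exact normalizations are not spelled out in the text, so the per-image normalization in C-NAE and the summed-minimum form of histogram intersection on the normalized maps are our assumptions.

```python
import numpy as np

def counting_metrics(pred_counts, gt_counts):
    """C-MAE, C-MSE and C-NAE over a set of test images."""
    pred = np.asarray(pred_counts, dtype=float)
    gt = np.asarray(gt_counts, dtype=float)
    err = np.abs(pred - gt)
    return err.mean(), (err ** 2).mean(), (err / gt).mean()

def density_map_metrics(pred_map, gt_map):
    """Per-pixel DM-MAE and DM-MSE, plus DM-HI on the normalized maps."""
    err = np.abs(pred_map - gt_map)
    dm_mae, dm_mse = err.mean(), (err ** 2).mean()
    p = pred_map / max(pred_map.sum(), 1e-12)      # normalize so each map sums to 1
    q = gt_map / max(gt_map.sum(), 1e-12)
    dm_hi = np.minimum(p, q).sum()                 # histogram intersection of the two distributions
    return dm_mae, dm_mse, dm_hi
```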
This discards the effect of absolute counts and emphasizes the error in distribution of density compared to the groundtruth. Localization: The ideal approach to crowd counting would be to detect all the people in an image and then count the number of detections. But since dense crowd images contain severe occlusions among individuals and fewer pixels per person for those away from the camera, this is not a feasible solution. This is why, most approaches to crowd counting bypass explicit detection and perform direct regression on input images. However, for many applications, the precise location of individuals is needed, for instance, to initialize a tracking algorithm in very high-density crowd videos. To quantify the localization error, estimated locations are associated with the ground truth locations through 1-1 matching using greedy association, followed by computation of Precision and Recall at various distance thresholds (1, 2, 3, . . . , 100 pixels). The overall performance of the localization task is then computed through area under the Precision-Recall curve, L-AUC. Experiments Next, we present the results of experiments for the three tasks defined in Section 5. Table 3: We show counting results obtained using state-of-the-art methods in comparison with the proposed approach. Methods with '*' regress counts without computing density maps. For counting, we evaluated the new UCF-QNRF dataset using the proposed method which estimates counts, density maps and locations of people simultaneously with several state-of-the-art deep neural networks [3], [8], [10] as well as those specifically developed for crowd counting [30], [25], [24]. To train the networks, we extracted patches of sizes 448, 224 and 112 pixels at random locations from each training image. While deciding on image locations to extract patch from, we assigned higher probability of selection to image regions with higher count. We used mean square error of counts as the loss function. At test time, we divide the image into a grid of 224 × 224 pixel cells -zero-padding the image for dimensions not divisible by 224 -and evaluate each cell using the trained network. Final image count is given by aggregating the counts in all cells. Table 3 summarizes the results which shows the proposed network significantly outperforms the competing deep CNNs and crowd counting approaches. In Figure 4, we show the images with the lowest and highest error in the test set, for counts obtained through different components of the Composition Loss. Density Map Estimation Method DM-MAE DM-MSE DM-HI MCNN [30] 0.006670 0.0223 0.5354 SwitchCNN [24] For density map estimation, we describe and compare the proposed approach with several methods that directly regress crowd density during training. Among the deep learning methods, MCNN [30] consists of three columns of convolution networks with different filter sizes to capture different head sizes and combines the output of all the columns to make a final density estimate. SwitchCNN [24] uses a similar three column network; however, it also employs a switching network that decides which column should exclusively handle the input patch. CMTL [25] employs a multi-task network that computes a high level prior over the image patch (crowd count classification) and density estimation. These networks are specifically designed for crowd density estimation and their results are reported in first three rows of Table 4. The results of proposed approach are shown in the bottom row of Table 4. 
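The test-time counting protocol described above (evaluating the image as a grid of 224 × 224 cells, zero-padded as needed, and summing the per-cell counts) can be sketched as follows. `model` is a placeholder for whichever trained network is used, and we assume it returns a scalar count per patch.

```python
import torch
import torch.nn.functional as F

def count_image(model, image, cell=224):
    """Zero-pad the image to a multiple of `cell`, run the model on each
    cell, and aggregate the per-cell counts into the final image count."""
    _, h, w = image.shape                                   # image: (C, H, W) tensor
    pad_h, pad_w = (-h) % cell, (-w) % cell
    image = F.pad(image, (0, pad_w, 0, pad_h))              # pad right and bottom with zeros
    total = 0.0
    for y in range(0, h + pad_h, cell):
        for x in range(0, w + pad_w, cell):
            patch = image[:, y:y + cell, x:x + cell].unsqueeze(0)
            with torch.no_grad():
                total += model(patch).item()                # assumes the model outputs a scalar count
    return total
```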
The proposed approach outperforms existing approaches by an order of magnitude. Localization For the localization task, we adopt the same network configurations used for density map estimation to perform localization. To get the accurate head locations, we postprocess the outputs by finding the local peaks / maximums based on a threshold, also known as non-maximal suppression. Once the peaks are found, we match the predicted location with the ground truth location using 1-1 matching, and compute precision and recall. We use different distance thresholds as the pixel distance, i.e., if the detection is within the a particular distance threshold of the groundtruth, it is treated as True Positive, otherwise it is a False Positive. Similarly, if there is no detection within a groundtruth location, it becomes a False Negative. The results of localization are reported in Table 5. This table shows that DenseNet [10] and Encoder-Decoder [3] outperform ResNet [8] and MCNN [30], while the proposed approach is superior to all the compared methods. The performance on the localization task is dependent on post-processing, which can alter results. Therefore, finding optimal strategy for localization from neural network output or incorporating the post-processing into the network is an important direction for future research. We also show some qualitative results of localization in Figure 5. The red dots represent the groundtruth while yellow circles are the locations estimated by the our approach. Ablation Study We performed an ablation study to validate the efficacy of composition loss introduced in this paper, as well as various choices in designing the network. These results are Method Av. Precision Av. Recall L-AUC MCNN [30] 59.93% 63.50% 0.591 ResNet74 [8] 61.60% 66.90% 0.612 DenseNet63 [10] 70.19% 58.10% 0.637 Encoder-Decoder [3] 71.80% 62.98% 0.670 Proposed 75.8% 59.75% 0.714 shown in Table 6. Next, we describe and provide details for the experiment corresponding to each row in the table. BaseNetwork: This row shows the results with base network of our choice, which is DenseNet201. A fully-connected layer is appended to the last layer of the network followed by a single neuron which outputs the count. The input patch size is 224 × 224. DenseBlock4: This experiment studies the effect of connecting the Density Network (Table 2) containing the different density levels with DenseBlock4 of the base DenseNet instead of DenseBlock2. Since DenseBlock4 outputs feature maps of size 7 × 7, we therefore used deconvolution layer with stride 4 to upsample the features before feeding in to our Density Network. DenseBlock3: This experiment is similar to DenseBlock4, except that we connect our Density Network to Denseblock3 of the base network. DenseBlock3 outputs feature maps which are 14 × 14 in spatial dimensions, whereas we intend to predict density maps of spatial dimension 28 × 28, so we upsample the feature maps by using deconvolution layer before feeding them to the proposed Density Network. Concatenate: Here, we take the sum of the two density and one localization map to obtain 3 counts. We then concatenate these counts to the output of fully-connected layer of the base network to predict count from the single neuron. Thus, we leave to the optimization algorithm to find appropriate weights for these 3 values along with the rest of 1920 features of the fully-connected layer. Mean: We also tested the effect of using equal weights for counts obtained from the base network and three density levels. 
We take sum of each density / localization map and take the mean of 4 values (2 density map sums, one localization sum, and one count from base network). We treat this mean value as final count output -both during training and testing. Thus, this imposes the constraint that not only the density and localization map correctly predict the location of people, but also their counts should be consistent with groundtruth counts irrespective of predicted locations. Proposed: In this experiment, the Density Network is connected with the DenseBlock2 of base network, however, the Density Network simply outputs two density and one localization maps, none of which are connected to count output (see Figure 3). In summary, these results show that the Density Network contributes significantly to performance on the three tasks. It is better to branch out from the middle layers of the base network, nevertheless the idea of multiple connections back and forth from the base network and Density Network is an interesting direction for further research. Furthermore, enforcing counts from all sources to be equal to the groundtruth count slightly worsens the counting performance. Nevertheless, it does help in estimating better density and localization maps. Finally, the decrease in error rates from the right to left in Table 6 highlights the positive influence of the proposed Composition Loss. Conclusion This paper introduced a novel method to estimate counts, density maps and localization in dense crowd images. We showed that these three problems are interrelated, and can be decomposed with respect to each other through Composition Loss which can then be used to train a neural network. We solved the three tasks simultaneously with the counting performance benefiting from the density map estimation and localization as well. We also proposed the large-scale UCF-QNRF dataset for dense crowds suitable for the three tasks described in the paper. We provided details of the process of dataset collection and annotation, where we ensured that only high-resolution images were curated for the dataset. Finally, we presented extensive set of experiments using several recent deep architectures, and show how the proposed approach is able to achieve good performance through detailed ablation study. We hope the new dataset will prove useful for this type of research, with applications in safety and surveillance, design and expansion of public infrastructures, and gauging political significance of various crowd events.
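For completeness, here is a sketch of the localization evaluation used in the experiments above: local peaks are extracted from the predicted localization map and then greedily matched one-to-one to the groundtruth locations to compute precision and recall at each distance threshold. The peak threshold, the window size, and the exact greedy order (globally closest pair first) are our assumptions; only the 1-1 matching, the 1-100 pixel thresholds, and the area under the precision-recall curve (L-AUC) are specified in the text.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def extract_peaks(loc_map, threshold=0.5, window=3):
    """Keep pixels that are local maxima within a window and exceed the threshold."""
    peaks = (maximum_filter(loc_map, size=window) == loc_map) & (loc_map > threshold)
    ys, xs = np.nonzero(peaks)
    return np.stack([xs, ys], axis=1)          # (M, 2) array of (x, y) locations

def localization_pr(pred_pts, gt_pts, thresholds=range(1, 101)):
    """Greedy 1-1 matching; returns one (precision, recall) pair per threshold."""
    pred_pts = np.asarray(pred_pts, dtype=float).reshape(-1, 2)
    gt_pts = np.asarray(gt_pts, dtype=float).reshape(-1, 2)
    dists = np.sqrt(((pred_pts[:, None, :] - gt_pts[None, :, :]) ** 2).sum(-1))
    curve = []
    for t in thresholds:
        d = dists.copy()
        tp = 0
        while d.size and d.min() <= t:         # repeatedly match the closest remaining pair
            i, j = np.unravel_index(d.argmin(), d.shape)
            tp += 1
            d[i, :] = np.inf                   # each prediction used at most once
            d[:, j] = np.inf                   # each groundtruth matched at most once
        precision = tp / max(len(pred_pts), 1)
        recall = tp / max(len(gt_pts), 1)
        curve.append((precision, recall))
    return curve                               # L-AUC can be approximated by integrating this curve
```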
4,322
1808.01050
2952888378
With multiple crowd gatherings of millions of people every year in events ranging from pilgrimages to protests, concerts to marathons, and festivals to funerals; visual crowd analysis is emerging as a new frontier in computer vision. In particular, counting in highly dense crowds is a challenging problem with far-reaching applicability in crowd safety and management, as well as gauging political significance of protests and demonstrations. In this paper, we propose a novel approach that simultaneously solves the problems of counting, density map estimation and localization of people in a given dense crowd image. Our formulation is based on an important observation that the three problems are inherently related to each other making the loss function for optimizing a deep CNN decomposable. Since localization requires high-quality images and annotations, we introduce UCF-QNRF dataset that overcomes the shortcomings of previous datasets, and contains 1.25 million humans manually marked with dot annotations. Finally, we present evaluation measures and comparison with recent deep CNN networks, including those developed specifically for crowd counting. Our approach significantly outperforms state-of-the-art on the new dataset, which is the most challenging dataset with the largest number of crowd annotations in the most diverse set of scenes.
For localization in crowded scenes, Rodriguez et al. @cite_17 use a density map as a regularizer during detection. They optimize an objective function that encourages the density map generated from the detected locations to be similar to the predicted density map @cite_11 , which improves both precision and recall. The density map is generated by placing a Gaussian kernel at the location of each detection. Zheng et al. @cite_20 first obtain a density map with a sliding window over the image using @cite_11 , and then use integer programming to localize objects on the density map. Similarly, in the domain of medical imaging, Sirinukunwattana et al. @cite_21 introduced spatially-constrained CNNs for the detection and classification of cancer nuclei. In this paper, we present results and analysis for simultaneous crowd counting, density map estimation, and localization using Composition Loss on the proposed UCF-QNRF dataset.
{ "abstract": [ "We propose a new supervised learning framework for visual object counting tasks, such as estimating the number of cells in a microscopic image or the number of humans in surveillance video frames. We focus on the practically-attractive case when the training images are annotated with dots (one dot per object). Our goal is to accurately estimate the count. However, we evade the hard task of learning to detect and localize individual object instances. Instead, we cast the problem as that of estimating an image density whose integral over any image region gives the count of objects within that region. Learning to infer such density can be formulated as a minimization of a regularized risk quadratic cost function. We introduce a new loss function, which is well-suited for such learning, and at the same time can be computed efficiently via a maximum subarray algorithm. The learning can then be posed as a convex quadratic program solvable with cutting-plane optimization. The proposed framework is very flexible as it can accept any domain-specific visual features. Once trained, our system provides accurate object counts and requires a very small time overhead over the feature extraction step, making it a good candidate for applications involving real-time processing or dealing with huge amount of visual data.", "Detection and classification of cell nuclei in histopathology images of cancerous tissue stained with the standard hematoxylin and eosin stain is a challenging task due to cellular heterogeneity. Deep learning approaches have been shown to produce encouraging results on histopathology images in various studies. In this paper, we propose a Spatially Constrained Convolutional Neural Network (SC-CNN) to perform nucleus detection. SC-CNN regresses the likelihood of a pixel being the center of a nucleus, where high probability values are spatially constrained to locate in the vicinity of the centers of nuclei. For classification of nuclei, we propose a novel Neighboring Ensemble Predictor (NEP) coupled with CNN to more accurately predict the class label of detected cell nuclei. The proposed approaches for detection and classification do not require segmentation of nuclei. We have evaluated them on a large dataset of colorectal adenocarcinoma images, consisting of more than 20,000 annotated nuclei belonging to four different classes. Our results show that the joint detection and classification of the proposed SC-CNN and NEP produces the highest average F1 score as compared to other recently published approaches. Prospectively, the proposed methods could offer benefit to pathology practice in terms of quantitative analysis of tissue constituents in whole-slide images, and potentially lead to a better understanding of cancer.", "We propose a novel object detection framework for partially-occluded small instances, such as pedestrians in low resolution surveillance video, cells under a microscope, flocks of small animals (e.g. birds, fishes), or even tiny insects like honeybees and flies. These scenarios are very challenging for traditional detectors, which are typically trained on individual instances. In our approach, we first estimate the object density map of the input image, and then divide it into local regions. For each region, a sliding window (ROI) is passed over the density map to calculate the instance count within each ROI. 
2D integer programming is used to recover the locations of object instances from the set of ROI counts, and the global count estimate of the density map is used as a constraint to regularize the detection performance. Finally, the bounding box for each instance is estimated using the local density map. Compared with current small-instance detection methods, our proposed approach achieves state-of-the-art performance on several challenging datasets including fluorescence microscopy cell images, UCSD pedestrians, small animals and insects.", "" ], "cite_N": [ "@cite_11", "@cite_21", "@cite_20", "@cite_17" ], "mid": [ "2145983039", "2312404985", "1908321067", "" ] }
Composition Loss for Counting, Density Map Estimation and Localization in Dense Crowds
Counting dense crowds is significant both from socio-political and safety perspective. At one end of the spectrum, there are large ritual gatherings such as during pilgrimages that typically have large crowds occurring in known and pre-defined locations. Although they generally have passive crowds coming together for peaceful purposes, disasters have known to occur, for instance, during Love Parade [9] and Hajj [1]. For active crowds, such as expressive mobs in demonstrations and protests, counting is important both from political and safety standpoint. It is very common for different sides to claim divergent numbers for crowd gathering, inclined towards their political standing on the concerned issue. Beyond subjectivity and preference for certain political or social outcomes, the disparate counting estimates from opposing parties have a basis in numerical cognition as well. In humans, the results on subitizing [21] suggest that once the number of observed objects increases beyond four, the brain switches from the exact Parallel Individuation System (PIS) to the inaccurate but scalable Approximate Number System (ANS) to count objects [11]. Thus, computer vision based crowd counting offers alternative fast and objective estimation of the number of people in such events. Furthermore, crowd counting is extendable to other domains, for instance, counting cells or bacteria from microscopic images [17,27], animal crowd estimates in wildlife sanctuaries [2], or estimating the number of vehicles at transportation hubs or traffic jams [19]. In this paper, we propose a novel approach to crowd counting, density map estimation and localization of people in a given crowd image. Our approach stems from the observation that these three problems are very interrelated -in fact, they can be decomposed with respect to each other. Counting provides an estimate of the number of people / objects without any information about their location. Density maps, which can be computed at multiple levels, provide weak information about location of each person. Localization does provide accurate location information, nevertheless, it is extremely difficult to estimate directly due to its very sparse nature. Therefore, we propose to estimate all three tasks simultaneously, while employing the fact that each is special case of another one. Density maps can be 'sharpened' till they approximate the localization map, whose integral should equal to the true count. Furthermore, we introduce a new and the largest dataset to-date for training and evaluating dense crowd counting, density map estimation and localization methods, particularly suitable for training very deep Convolutional Neural Networks (CNNs). Though counting has traditionally been considered the primary focus of research, density map estimation and localization have significance and utility beyond counting. In particular, two applications are noteworthy: initialization / detection of people for tracking in dense crowds [13]; and rectifying counting errors from an automated computer vision algorithm. That is, a real user or analyst who desires to estimate the exact count for a real image without any error, the results of counting alone are insufficient. The single number for an entire image makes it difficult to assess the error or the source of the error. However, the localization can provide an initial set of dot locations of the individuals, the user then can quickly go through the image and remove the false positives and add the false negatives. 
The count using such an approach will be much more accurate and the user can get 100% precise count for the query image. This is particularly important when the number of image samples are few, and reliable counts are desired. Prior to 2013, much of the work in crowd counting focused on low-density scenarios. For instance, UCSD dataset [4] contains 2, 000 video frames with 49, 885 annotated persons. The dataset is low density and low resolution compared to many recent datasets, where train and test splits belong to a single scene. WorldExpo'10 dataset [29], contains 108 low-to-medium density scenes and overcomes the issue of diversity to some extent. UCF dataset [12] contains 50 different images with counts ranging between 96 and 4, 633 per image. Each image has a different resolution, camera angle, and crowd density. Although it was the first dataset for dense crowd images, it has problems with annotations ( Figure 1) due to limited availability of high-resolution crowd images at the time. The ShanghaiTech crowd dataset [30] contains 1, 198 annotated images with a total of 330, 165 annotations. This dataset is divided into two parts: Part A contains 482 images and Part B with 716 images. The number of training images are 300 and 400 in both parts, respectively. Only the images in Part A contain high-density crowds, with 482 images and 250K annotations. Table 1 summarizes the statistics of the multi-scene datasets for dense crowd counting. The proposed UCF-QNRF dataset has the most number of high-count crowd images and annotations, and a wider variety of scenes containing the most diverse set of viewpoints, densities and lighting variations. The resolution is large compared to WorldExpo'10 [29] and ShanghaiTech [30], as can be seen in Fig. 2(b). The average density, i.e., the number of people per pixel over all images is also the lowest, signifying high-quality large images. Lower per-pixel density is partly due to inclusion of background regions, where there are many high-density regions as well as zero-density regions. Part A of Shanghai dataset has high-count crowd images as well, however, they are severely cropped to contain crowds only. On the other hand, the new UCF-QNRF dataset contains buildings, vegetation, sky and roads as they are present in realistic scenarios captured in the wild. This makes this dataset more realistic as well as difficult. Similarly, Figure 2(a) shows the diversity in counts among the datasets. The distribution of proposed dataset is similar to UCF CC 50 [12], however, the new dataset is 30 and 20 times larger in terms of number of images and annotations, respectively, compared to UCF CC 50 [12]. We hope the new dataset will significantly increase research activity in visual crowd analysis and will pave way for building deployable practical counting and localization systems for dense crowds. The rest of the paper is organized as follows. In Sec. 2 we review related work, and present the proposed approach for simultaneous crowd counting, density map estimation and localization in Sec. 3. The process for collection and annotation of the UCF-QNRF dataset is covered in Sec. 4, while the three tasks and evaluation measures are motivated in Sec. 5. The experimental evaluation and comparison are presented in Sec. 6. We conclude with suggestions for future work in Sec. 7. 
Deep CNN with Composition Loss In this section, we present the motivation for decomposing the loss of three interrelated problems of counting, density map estimation and localization, followed by details about the deep Convolutional Neural Network which can enable training and estimation of the three tasks simultaneously. Composition Loss Let x = [x, y] denote a pixel location in a given image, and N be the number of people annotated with {x i : i = 1, 2, . . . N } as their respective locations. Dense crowds typically depict heads of people as they are the only parts least occluded and mostly visible. In localization maps, only a single pixel is activated, i.e., set to 1 per head, while all other pixels are set to 0. This makes localization maps extremely sparse and therefore difficult to train and estimate. We observe that successive computation of 'sharper' density maps which are relatively easier to train can aid in localization as well. Moreover, all three tasks should influence count, which is the integral over density or localization map. We use the Gaussian Kernel and adapt it for our problem of simultaneous solution for the three tasks. Due to perspective effect and possibly variable density of the crowd, a single value of bandwidth, σ, cannot be used for the Gaussian kernel, as it might lead to well-defined separation between people close to the camera or in regions of low density, while excess The proposed Composition Loss is implemented through multiple dense blocks after branching off the base network. We also test the effect of additional constraint on the density and localization maps (shown with amber and orange blocks) such that the count after integral in each should also be consistent with the groundtruth count. blurring in other regions. Many images of dense crowds depict crowds in their entirety, making automatic perspective rectification difficult. Thus, we propose to define σ i for each person i as the minimum of the 2 distance to its nearest neighbor in spatial domain of the image or some maximum threshold, τ . This ensures that the location information of each person is preserved precisely irrespective of default kernel bandwidth, τ . Thus, the adaptive Gaussian kernel is given by, D(x, f(·)) = N i=1 1 √ 2πf(σ i ) exp − (x − x i ) 2 + (y − y i ) 2 2f(σ i ) 2 ,(1) where the function f is used to produce a successive set of 'sharper' density maps. We define f k (σ) = σ 1/k . Thus, D k = D(x, f k (·) ). As can be seen when k = 1, D k is a very smoothed-out density map using nearest-neighbor dependent bandwidth and τ , whereas as k −→ ∞, D k approaches the binary localization map with a Dirac Delta function placed at each annotated pixel. Since each pixel has a unit area, the localization map assumes a unit value at the annotated location. For our experiments we used three density levels with last one being the localization map. It is also interesting to note that the various connections between density levels and base CNN also serve to provide intermediate supervision which aid in training the filters of base CNN towards counting and density estimation early on in the network. Hypothetically, since integral over each estimatedD k yields a count for that density level, the final count can be obtained by taking the mean of counts from the density and localization maps as well as regression output from base CNN. This has two potential advantages: 1) the final count relies on multiple sources -each capturing count at a different scale. 
2) During training the mean of four counts should equal the true count, which implicitly enforces an additional constraint thatD k should not only capture the density and localization information, but that each of their counts should also sum to the groundtruth count. For training, the loss function of density and localization maps is the mean square error between the predicted and ground truth maps, i.e. L k = MSE(D k , D k ), where k = 1, 2, and ∞, and regression loss, L c , is Euclidean loss between predicted and groundtruth counts, while the final loss is defined as the weighted mean all four losses. For density map estimation and localization, we branch out from DenseBlock2 and feed it to our Density Network (see Table 2). The density network introduces 2 new dense blocks and three 1 × 1 convolutional layers. Each dense block has features computed at the previous step, concatenated with all the density levels predicted thus far as input, and learns features aimed at computing the current density / localization map. We used 1 × 1 convolutions to get the output density map from these features. Density Level 1 is computed directly from DenseBlock2 features. DenseNet with Composition Loss We used Adam solver with a step learning rate in all our experiments. We used 0.001 as initial learning rate and reduce the learning rate by a factor of 2 after every 20 epochs. We trained the entire network for 70 epoch with a batch size of 16. The UCF-QNRF Dataset Dataset Collection. The images for the dataset were collected from three sources: Flickr, Web Search and the Hajj footage. The Hajj images were carefully selected so that there are multiple images that capture different locations, viewpoints, perspective effects and times of the day. For Flickr and Web Search, we manually generated the following queries: CROWD, HAJJ, SPECTATOR CROWD, PILGRIMAGE, PROTEST CROWD and CONCERT CROWD. These queries were then passed onto the Flickr and Google Image Search APIs. We selected desired number of images for each query to be 2000 for Flickr and 200 for Google Image Search. The search sorted all the results by RELE-VANCE incorporating both titles and tags, and for Flickr we also ensured that only those images were downloaded for which original resolutions were permitted to be downloaded (through the URL O specifier). The static links to all the images were extracted and saved for all the query terms, which were then downloaded using the respective APIs. The images were also checked for duplicates by computing image similarities followed by manual verification and discarding of duplicates. Initial Pruning. The initial set of images were then manually checked for desirability. Many of the images were pruned due to one or more of the following reasons: -Scenes that did not depict crowds at all or low-density crowds -Objects or visualizations of objects other than humans -Motion blur or low resolution -Very high perspective effect that is camera height is similar to average human height -Images with watermarks or those where text occupied more than 10% of the image In high-density crowd images, it is mostly the heads that are visible. However, people who appear far away from the camera become indistinguishable beyond a certain distance, which depends on crowd density, lighting as well as resolution of the camera sensor. During pruning, we kept those images where the heads were separable visually. 
Such images were annotated with the others, however, they were cropped afterwards to ensure that regions with problematic annotations or those with none at all due to difficulty in recognizing human heads were discarded. We performed the entire annotation process in two stages. In the first stage, unannotated images were given to the annotators, while in the second stage, the images were given to verifiers who corrected any mistakes or errors in annotations. There were 14 annotators and 4 verifiers, who clocked 1, 300 and 200 hours respectively. In total, the entire procedure involved 2, 000 human-hours spent through to its completion. Statistics. The dataset has 1, 535 jpeg images with 1, 251, 642 annotations. The train and test sets were created by sorting the images with respect to absolute counts, and selecting every 5th image into the test set. Thus, the training and test set consist of 1201 and 334 images, respectively. 21,7], respectively. In the dataset, the minimum and maximum counts are 49 and 12, 865, respectively, whereas the median and mean counts are 425 and 815.4, respectively. Definition and Quantification of Tasks In this section, we define the three tasks and the associated quantification measures. Counting: The first task involves estimation of count for a crowd image i, given by c i . Although this measure does not give any information about location or distribution of people in the image, this is still very useful for many applications, for instance, estimating size of an entire crowd spanning several square kilometers or miles. For the application of counting large crowds, Jacob's Method [14] due to Herbert Jacob is typically employed which involves dividing the area A into smaller sections, finding the average number of people or density d in each section, computing the mean densityd and extrapolating the results to entire region. However, with automated crowd counting, it is now possible to obtain counts and density for multiple images at different locations, thereby, permitting the more accurate integration of density over entire area covered by crowd. Moreover, counting through multiple aerial images requires cartographic tools to map the images onto the earth to compute ground areas. The density here is defined as the number of people in the image divided by ground area covered by the image. We propose to use the same evaluation measures as used in literature for this task: the Mean Absolute Error (C-MAE), Mean Squared Error (C-MSE) with the addition of Normalized Absolute Error (C-NAE). Density Map Estimation amounts to computing per-pixel density at each location in the image, thus preserving spatial information about distribution of people. This is particularly relevant for safety and surveillance, since very high density at a particular location in the scene can be catastrophic [1]. This is different from counting since an image can have counts within safe limits, while containing regions that have very high density. This can happen due to the presence of empty regions in the image, such as walls and sky for mounted cameras; and roads, vehicles, buildings and forestation in aerial cameras. The metrics for evaluating density map estimation are similar to counting, except that they are per-pixel, i.e., the per-pixel Mean Absolute Error (DM-MAE) and Mean Squared Error (DM-MSE). Finally, we also propose to compute the 2D Histogram Intersection (DM-HI) distance after normalizing both the groundtruth and estimated density maps. 
This discards the effect of absolute counts and emphasizes the error in distribution of density compared to the groundtruth. Localization: The ideal approach to crowd counting would be to detect all the people in an image and then count the number of detections. But since dense crowd images contain severe occlusions among individuals and fewer pixels per person for those away from the camera, this is not a feasible solution. This is why, most approaches to crowd counting bypass explicit detection and perform direct regression on input images. However, for many applications, the precise location of individuals is needed, for instance, to initialize a tracking algorithm in very high-density crowd videos. To quantify the localization error, estimated locations are associated with the ground truth locations through 1-1 matching using greedy association, followed by computation of Precision and Recall at various distance thresholds (1, 2, 3, . . . , 100 pixels). The overall performance of the localization task is then computed through area under the Precision-Recall curve, L-AUC. Experiments Next, we present the results of experiments for the three tasks defined in Section 5. Table 3: We show counting results obtained using state-of-the-art methods in comparison with the proposed approach. Methods with '*' regress counts without computing density maps. For counting, we evaluated the new UCF-QNRF dataset using the proposed method which estimates counts, density maps and locations of people simultaneously with several state-of-the-art deep neural networks [3], [8], [10] as well as those specifically developed for crowd counting [30], [25], [24]. To train the networks, we extracted patches of sizes 448, 224 and 112 pixels at random locations from each training image. While deciding on image locations to extract patch from, we assigned higher probability of selection to image regions with higher count. We used mean square error of counts as the loss function. At test time, we divide the image into a grid of 224 × 224 pixel cells -zero-padding the image for dimensions not divisible by 224 -and evaluate each cell using the trained network. Final image count is given by aggregating the counts in all cells. Table 3 summarizes the results which shows the proposed network significantly outperforms the competing deep CNNs and crowd counting approaches. In Figure 4, we show the images with the lowest and highest error in the test set, for counts obtained through different components of the Composition Loss. Density Map Estimation Method DM-MAE DM-MSE DM-HI MCNN [30] 0.006670 0.0223 0.5354 SwitchCNN [24] For density map estimation, we describe and compare the proposed approach with several methods that directly regress crowd density during training. Among the deep learning methods, MCNN [30] consists of three columns of convolution networks with different filter sizes to capture different head sizes and combines the output of all the columns to make a final density estimate. SwitchCNN [24] uses a similar three column network; however, it also employs a switching network that decides which column should exclusively handle the input patch. CMTL [25] employs a multi-task network that computes a high level prior over the image patch (crowd count classification) and density estimation. These networks are specifically designed for crowd density estimation and their results are reported in first three rows of Table 4. The results of proposed approach are shown in the bottom row of Table 4. 
The proposed approach outperforms existing approaches by an order of magnitude. Localization For the localization task, we adopt the same network configurations used for density map estimation to perform localization. To get the accurate head locations, we postprocess the outputs by finding the local peaks / maximums based on a threshold, also known as non-maximal suppression. Once the peaks are found, we match the predicted location with the ground truth location using 1-1 matching, and compute precision and recall. We use different distance thresholds as the pixel distance, i.e., if the detection is within the a particular distance threshold of the groundtruth, it is treated as True Positive, otherwise it is a False Positive. Similarly, if there is no detection within a groundtruth location, it becomes a False Negative. The results of localization are reported in Table 5. This table shows that DenseNet [10] and Encoder-Decoder [3] outperform ResNet [8] and MCNN [30], while the proposed approach is superior to all the compared methods. The performance on the localization task is dependent on post-processing, which can alter results. Therefore, finding optimal strategy for localization from neural network output or incorporating the post-processing into the network is an important direction for future research. We also show some qualitative results of localization in Figure 5. The red dots represent the groundtruth while yellow circles are the locations estimated by the our approach. Ablation Study We performed an ablation study to validate the efficacy of composition loss introduced in this paper, as well as various choices in designing the network. These results are Method Av. Precision Av. Recall L-AUC MCNN [30] 59.93% 63.50% 0.591 ResNet74 [8] 61.60% 66.90% 0.612 DenseNet63 [10] 70.19% 58.10% 0.637 Encoder-Decoder [3] 71.80% 62.98% 0.670 Proposed 75.8% 59.75% 0.714 shown in Table 6. Next, we describe and provide details for the experiment corresponding to each row in the table. BaseNetwork: This row shows the results with base network of our choice, which is DenseNet201. A fully-connected layer is appended to the last layer of the network followed by a single neuron which outputs the count. The input patch size is 224 × 224. DenseBlock4: This experiment studies the effect of connecting the Density Network (Table 2) containing the different density levels with DenseBlock4 of the base DenseNet instead of DenseBlock2. Since DenseBlock4 outputs feature maps of size 7 × 7, we therefore used deconvolution layer with stride 4 to upsample the features before feeding in to our Density Network. DenseBlock3: This experiment is similar to DenseBlock4, except that we connect our Density Network to Denseblock3 of the base network. DenseBlock3 outputs feature maps which are 14 × 14 in spatial dimensions, whereas we intend to predict density maps of spatial dimension 28 × 28, so we upsample the feature maps by using deconvolution layer before feeding them to the proposed Density Network. Concatenate: Here, we take the sum of the two density and one localization map to obtain 3 counts. We then concatenate these counts to the output of fully-connected layer of the base network to predict count from the single neuron. Thus, we leave to the optimization algorithm to find appropriate weights for these 3 values along with the rest of 1920 features of the fully-connected layer. Mean: We also tested the effect of using equal weights for counts obtained from the base network and three density levels. 
We take the sum of each density/localization map and then take the mean of the 4 values (two density map sums, one localization map sum, and one count from the base network). We treat this mean value as the final count output, both during training and testing. This imposes the constraint that the density and localization maps must not only correctly predict the locations of people, but also produce counts consistent with the ground-truth counts irrespective of the predicted locations.
Proposed: In this experiment, the Density Network is connected to DenseBlock2 of the base network; however, the Density Network simply outputs the two density maps and one localization map, none of which are connected to the count output (see Figure 3).
In summary, these results show that the Density Network contributes significantly to performance on the three tasks. It is better to branch out from the middle layers of the base network; nevertheless, the idea of multiple connections back and forth between the base network and the Density Network is an interesting direction for further research. Furthermore, enforcing the counts from all sources to be equal to the ground-truth count slightly worsens the counting performance. Nevertheless, it does help in estimating better density and localization maps. Finally, the decrease in error rates from right to left in Table 6 highlights the positive influence of the proposed Composition Loss.
Conclusion
This paper introduced a novel method to estimate counts, density maps and locations of people in dense crowd images. We showed that these three problems are interrelated and can be decomposed with respect to each other through the Composition Loss, which can then be used to train a neural network. We solved the three tasks simultaneously, with the counting performance benefiting from the density map estimation and localization as well. We also proposed the large-scale UCF-QNRF dataset for dense crowds, suitable for the three tasks described in the paper. We provided details of the process of dataset collection and annotation, where we ensured that only high-resolution images were curated for the dataset. Finally, we presented an extensive set of experiments using several recent deep architectures, and showed through a detailed ablation study how the proposed approach is able to achieve good performance. We hope the new dataset will prove useful for this type of research, with applications in safety and surveillance, design and expansion of public infrastructures, and gauging the political significance of various crowd events.
4,322
1906.04281
2950055289
Collaborative filtering is widely used in modern recommender systems. Recent research shows that variational autoencoders (VAEs) yield state-of-the-art performance by integrating flexible representations from deep neural networks into latent variable models, mitigating limitations of traditional linear factor models. VAEs are typically trained by maximizing the likelihood (MLE) of users interacting with ground-truth items. While simple and often effective, MLE-based training does not directly maximize the recommendation-quality metrics one typically cares about, such as top-N ranking. In this paper we investigate new methods for training collaborative filtering models based on actor-critic reinforcement learning, to directly optimize the non-differentiable quality metrics of interest. Specifically, we train a critic network to approximate ranking-based metrics, and then update the actor network (represented here by a VAE) to directly optimize against the learned metrics. In contrast to traditional learning-to-rank methods that require re-running the optimization procedure for new lists, our critic-based method amortizes the scoring process with a neural network, and can directly provide the (approximate) ranking scores for new lists. Empirically, we show that the proposed methods outperform several state-of-the-art baselines, including recently-proposed deep learning approaches, on three large-scale real-world datasets. The code to reproduce the experimental results and figure plots is on GitHub: https://github.com/samlobel/RaCT_CF
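To make the actor-critic idea above concrete, the following is a heavily simplified single training step in PyTorch. The names `actor`, `critic`, `ndcg_at_k`, the two-argument critic signature, and the optimizers are illustrative assumptions, not the authors' actual architecture; the point is only that the critic regresses a non-differentiable ranking metric, and the actor then ascends the critic's differentiable output.

```python
import torch
import torch.nn.functional as F

def train_step(actor, critic, actor_opt, critic_opt, x_in, x_heldout, ndcg_at_k):
    """One hedged actor-critic step: the critic learns to predict a ranking
    metric computed from the actor's scores, and the actor is updated to
    maximize the critic's (differentiable) prediction of that metric."""
    scores = actor(x_in)                               # predicted preference scores

    # Critic update: regress the true, non-differentiable ranking metric.
    with torch.no_grad():
        target = ndcg_at_k(scores, x_heldout)          # e.g. NDCG@k per user
    critic_loss = F.mse_loss(critic(scores.detach(), x_in), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor update: ascend the learned, differentiable surrogate metric.
    actor_loss = -critic(actor(x_in), x_in).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    return critic_loss.item(), actor_loss.item()
```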
Deep Learning for Collaborative Filtering. To take advantage of the expressiveness of DNNs, many recent efforts have focused on developing deep learning models for collaborative filtering @cite_34 @cite_55 @cite_8 @cite_51 @cite_15 @cite_3 . Early work on DNNs focused on explicit feedback settings @cite_54 @cite_10 @cite_26 , such as rating prediction. Recent research gradually recognized the importance of implicit feedback @cite_58 @cite_57 @cite_11 , where the user's preference is not explicitly presented @cite_0 . This setting is more practical but challenging, and is the focus of our work. Our method is closely related to three papers: on VAEs @cite_11 , the collaborative denoising autoencoder (CDAE) @cite_58 and neural collaborative filtering (NCF) @cite_57 . CDAE and NCF may suffer from scalability issues: the model size grows linearly with both the number of users and the number of items. The VAE @cite_11 alleviates this problem via amortized inference. Our work builds on top of the VAE, and improves it by optimizing directly toward the ranking-based metric.
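For reference, the VAE-based model mentioned above ( @cite_11 ) is trained with a multinomial-likelihood ELBO; a sketch of its commonly used form is below. The variable names and the beta value are illustrative, not taken from the cited implementation.

```python
import torch
import torch.nn.functional as F

def multi_vae_loss(logits, x, mu, logvar, beta=0.2):
    """Multinomial-likelihood VAE objective for implicit feedback:
    x is a user's binary interaction vector, logits are decoder outputs,
    (mu, logvar) parameterize the approximate posterior, and beta weights
    the KL regularizer (typically annealed during training)."""
    log_softmax = F.log_softmax(logits, dim=-1)
    neg_ll = -(log_softmax * x).sum(dim=-1).mean()      # multinomial log-likelihood
    kl = -0.5 * torch.mean(
        torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
    )
    return neg_ll + beta * kl
```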
{ "abstract": [ "This paper proposes CF-NADE, a neural autoregressive architecture for collaborative filtering (CF) tasks, which is inspired by the Restricted Boltzmann Machine (RBM) based CF model and the Neural Autoregressive Distribution Estimator (NADE). We first describe the basic CF-NADE model for CF tasks. Then we propose to improve the model by sharing parameters between different ratings. A factored version of CF-NADE is also proposed for better scalability. Furthermore, we take the ordinal nature of the preferences into consideration and propose an ordinal cost to optimize CF-NADE, which shows superior performance. Finally, CF-NADE can be extended to a deep model, with only moderately increased computational complexity. Experimental results show that CF-NADE with a single hidden layer beats all previous state-of-the-art methods on MovieLens 1M, MovieLens 10M, and Netflix datasets, and adding more hidden layers can further improve the performance.", "In this work, we contribute a new multi-layer neural network architecture named ONCF to perform collaborative filtering. The idea is to use an outer product to explicitly model the pairwise correlations between the dimensions of the embedding space. In contrast to existing neural recommender models that combine user embedding and item embedding via a simple concatenation or element-wise product, our proposal of using outer product above the embedding layer results in a two-dimensional interaction map that is more expressive and semantically plausible. Above the interaction map obtained by outer product, we propose to employ a convolutional neural network to learn high-order correlations among embedding dimensions. Extensive experiments on two public implicit feedback data demonstrate the effectiveness of our proposed ONCF framework, in particular, the positive effect of using outer product to model the correlations between embedding dimensions in the low level of multi-layer neural recommender model. The experiment codes are available at: this https URL", "Most of the existing approaches to collaborative filtering cannot handle very large data sets. In this paper we show how a class of two-layer undirected graphical models, called Restricted Boltzmann Machines (RBM's), can be used to model tabular data, such as user's ratings of movies. We present efficient learning and inference procedures for this class of models and demonstrate that RBM's can be successfully applied to the Netflix data set, containing over 100 million user movie ratings. We also show that RBM's slightly outperform carefully-tuned SVD models. When the predictions of multiple RBM models and multiple SVD models are linearly combined, we achieve an error rate that is well over 6 better than the score of Netflix's own system.", "Recommender systems usually make personalized recommendation with user-item interaction ratings, implicit feedback and auxiliary information. Matrix factorization is the basic idea to predict a personalized ranking over a set of items for an individual user with the similarities among users and items. In this paper, we propose a novel matrix factorization model with neural network architecture. Firstly, we construct a user-item matrix with explicit ratings and non-preference implicit feedback. With this matrix as the input, we present a deep structure learning architecture to learn a common low dimensional space for the representations of users and items. 
Secondly, we design a new loss function based on binary cross entropy, in which we consider both explicit ratings and implicit feedback for a better optimization. The experimental results show the effectiveness of both our proposed model and the loss function. On several benchmark datasets, our model outperformed other state-of-the-art methods. We also conduct extensive experiments to evaluate the performance within different experimental settings.", "We propose a framework for collaborative filtering based on Restricted Boltzmann Machines (RBM), which extends previous RBM-based approaches in several important directions. First, while previous RBM research has focused on modeling the correlation between item ratings, we model both user-user and item-item correlations in a unified hybrid non-IID framework. We further use real values in the visible layer as opposed to multinomial variables, thus taking advantage of the natural order between user-item ratings. Finally, we explore the potential of combining the original training data with data generated by the RBM-based model itself in a bootstrapping fashion. The evaluation on two MovieLens datasets (with 100K and 1M user-item ratings, respectively), shows that our RBM model rivals the best previously-proposed approaches.", "Most real-world recommender services measure their performance based on the top-N results shown to the end users. Thus, advances in top-N recommendation have far-ranging consequences in practical applications. In this paper, we present a novel method, called Collaborative Denoising Auto-Encoder (CDAE), for top-N recommendation that utilizes the idea of Denoising Auto-Encoders. We demonstrate that the proposed model is a generalization of several well-known collaborative filtering models but with more flexible components. Thorough experiments are conducted to understand the performance of CDAE under various component settings. Furthermore, experimental results on several public datasets demonstrate that CDAE consistently outperforms state-of-the-art top-N recommendation methods on a variety of common evaluation metrics.", "Multimedia content is dominating today's Web information. The nature of multimedia user-item interactions is 1 0 binary implicit feedback (e.g., photo likes, video views, song downloads, etc.), which can be collected at a larger scale with a much lower cost than explicit feedback (e.g., product ratings). However, the majority of existing collaborative filtering (CF) systems are not well-designed for multimedia recommendation, since they ignore the implicitness in users' interactions with multimedia content. We argue that, in multimedia recommendation, there exists item- and component-level implicitness which blurs the underlying users' preferences. The item-level implicitness means that users' preferences on items (e.g. photos, videos, songs, etc.) are unknown, while the component-level implicitness means that inside each item users' preferences on different components (e.g. regions in an image, frames of a video, etc.) are unknown. For example, a 'view'' on a video does not provide any specific information about how the user likes the video (i.e.item-level) and which parts of the video the user is interested in (i.e.component-level). In this paper, we introduce a novel attention mechanism in CF to address the challenging item- and component-level implicit feedback in multimedia recommendation, dubbed Attentive Collaborative Filtering (ACF). 
Specifically, our attention model is a neural network that consists of two attention modules: the component-level attention module, starting from any content feature extraction network (e.g. CNN for images videos), which learns to select informative components of multimedia items, and the item-level attention module, which learns to score the item preferences. ACF can be seamlessly incorporated into classic CF models with implicit feedback, such as BPR and SVD++, and efficiently trained using SGD. Through extensive experiments on two real-world multimedia Web services: Vine and Pinterest, we show that ACF significantly outperforms state-of-the-art CF methods.", "", "A common task of recommender systems is to improve customer experience through personalized recommendations based on prior implicit feedback. These systems passively track different sorts of user behavior, such as purchase history, watching habits and browsing activity, in order to model user preferences. Unlike the much more extensively researched explicit feedback, we do not have any direct input from the users regarding their preferences. In particular, we lack substantial evidence on which products consumer dislike. In this work we identify unique properties of implicit feedback datasets. We propose treating the data as indication of positive and negative preference associated with vastly varying confidence levels. This leads to a factor model which is especially tailored for implicit feedback recommenders. We also suggest a scalable optimization procedure, which scales linearly with the data size. The algorithm is used successfully within a recommender system for television shows. It compares favorably with well tuned implementations of other known methods. In addition, we offer a novel way to give explanations to recommendations given by this factor model.", "With the growing volume of online information, recommender systems have been an effective strategy to overcome information overload. The utility of recommender systems cannot be overstated, given their widespread adoption in many web applications, along with their potential impact to ameliorate many problems related to over-choice. In recent years, deep learning has garnered considerable interest in many research fields such as computer vision and natural language processing, owing not only to stellar performance but also to the attractive property of learning feature representations from scratch. The influence of deep learning is also pervasive, recently demonstrating its effectiveness when applied to information retrieval and recommender systems research. The field of deep learning in recommender system is flourishing. This article aims to provide a comprehensive review of recent research efforts on deep learning-based recommender systems. More concretely, we provide and devise a taxonomy of deep learning-based recommendation models, along with a comprehensive summary of the state of the art. Finally, we expand on current trends and provide new perspectives pertaining to this new and exciting development of the field.", "This paper proposes AutoRec, a novel autoencoder framework for collaborative filtering (CF). Empirically, AutoRec's compact and efficiently trainable model outperforms state-of-the-art CF techniques (biased matrix factorization, RBM-CF and LLORMA) on the Movielens and Netflix datasets.", "Item recommendation is a personalized ranking task. 
To this end, many recommender systems optimize models with pairwise ranking objectives, such as the Bayesian Personalized Ranking (BPR). Using matrix Factorization (MF) - the most widely used model in recommendation - as a demonstration, we show that optimizing it with BPR leads to a recommender model that is not robust. In particular, we find that the resultant model is highly vulnerable to adversarial perturbations on its model parameters, which implies the possibly large error in generalization. To enhance the robustness of a recommender model and thus improve its generalization performance, we propose a new optimization framework, namely Adversarial Personalized Ranking (APR). In short, our APR enhances the pairwise ranking method BPR by performing adversarial training. It can be interpreted as playing a minimax game, where the minimization of the BPR objective function meanwhile defends an adversary, which adds adversarial perturbations on model parameters to maximize the BPR objective function. To illustrate how it works, we implement APR on MF by adding adversarial perturbations on the embedding vectors of users and items. Extensive experiments on three public real-world datasets demonstrate the effectiveness of APR - by optimizing MF with APR, it outperforms BPR with a relative improvement of 11.2 on average and achieves state-of-the-art performance for item recommendation. Our implementation is available at: : github.com hexiangnan adversarial_personalized_ranking.", "We extend variational autoencoders (VAEs) to collaborative filtering for implicit feedback. This non-linear probabilistic model enables us to go beyond the limited modeling capacity of linear factor models which still largely dominate collaborative filtering research.We introduce a generative model with multinomial likelihood and use Bayesian inference for parameter estimation. Despite widespread use in language modeling and economics, the multinomial likelihood receives less attention in the recommender systems literature. We introduce a different regularization parameter for the learning objective, which proves to be crucial for achieving competitive performance. Remarkably, there is an efficient way to tune the parameter using annealing. The resulting model and learning algorithm has information-theoretic connections to maximum entropy discrimination and the information bottleneck principle. Empirically, we show that the proposed approach significantly outperforms several state-of-the-art baselines, including two recently-proposed neural network approaches, on several real-world datasets. We also provide extended experiments comparing the multinomial likelihood with other commonly used likelihood functions in the latent factor collaborative filtering literature and show favorable results. Finally, we identify the pros and cons of employing a principled Bayesian inference approach and characterize settings where it provides the most significant improvements." ], "cite_N": [ "@cite_26", "@cite_8", "@cite_10", "@cite_55", "@cite_54", "@cite_58", "@cite_3", "@cite_57", "@cite_0", "@cite_15", "@cite_34", "@cite_51", "@cite_11" ], "mid": [ "2409498980", "2807899908", "2099866409", "2740920897", "2116802659", "2253995343", "2741249238", "", "2101409192", "2739273093", "1720514416", "2798868970", "2787512446" ] }
0
Learned Metrics in Vision & Language. Recent research in computer vision and natural language processing has generated excellent results by using learned metrics instead of hand-crafted metrics. Among the rich literature on generating realistic images via generative adversarial networks (GANs) @cite_19 @cite_47 @cite_45 @cite_21 , our work is most similar to @cite_29 , where the VAE objective @cite_49 @cite_36 is augmented with the learned representations in the GAN discriminator @cite_19 to better measure image similarities. For language generation, the discrepancy between word-level MLE training and sequence-level semantic evaluation has been alleviated with GAN or RL techniques @cite_22 @cite_56 @cite_50 @cite_41 @cite_42 . The RL approach directly optimizes the metric used at test time, and has shown improvements on various applications, including dialogue @cite_4 , image captioning @cite_35 and translation @cite_24 . Despite the significant successes in vision and language analysis, there has been little, if any, research reported on directly learning the metrics with deep neural networks for collaborative filtering. Our work fills this gap, and we hope it inspires more research in this direction.
{ "abstract": [ "Recently it has been shown that policy-gradient methods for reinforcement learning can be utilized to train deep end-to-end systems directly on non-differentiable metrics for the task at hand. In this paper we consider the problem of optimizing image captioning systems using reinforcement learning, and show that by carefully optimizing our systems using the test metrics of the MSCOCO task, significant gains in performance can be realized. Our systems are built using a new optimization approach that we call self-critical sequence training (SCST). SCST is a form of the popular REINFORCE algorithm that, rather than estimating a baseline to normalize the rewards and reduce variance, utilizes the output of its own test-time inference algorithm to normalize the rewards it experiences. Using this approach, estimating the reward signal (as actor-critic methods must do) and estimating normalization (as REINFORCE algorithms typically do) is avoided, while at the same time harmonizing the model with respect to its test-time inference procedure. Empirically we find that directly optimizing the CIDEr metric with SCST and greedy decoding at test-time is highly effective. Our results on the MSCOCO evaluation sever establish a new state-of-the-art on the task, improving the best result in terms of CIDEr from 104.9 to 114.7.", "Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be shortsighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity, length as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.", "In this paper, drawing intuition from the Turing test, we propose using adversarial training for open-domain dialogue generation: the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. We cast the task as a reinforcement learning (RL) problem where we jointly train two systems, a generative model to produce response sequences, and a discriminator---analagous to the human evaluator in the Turing test--- to distinguish between the human-generated dialogues and the machine-generated ones. The outputs from the discriminator are then used as rewards for the generative model, pushing the system to generate dialogues that mostly resemble human dialogues. In addition to adversarial training we describe a model for adversarial evaluation that uses success in fooling an adversary as a dialogue evaluation metric, while avoiding a number of potential pitfalls. 
Experimental results on several metrics, including adversarial evaluation, demonstrate that the adversarially-trained system generates higher-quality responses than previous baselines.", "We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent approximate posterior distributions, and that acts as a stochastic encoder of the data. We develop stochastic back-propagation -- rules for back-propagation through stochastic variables -- and use this to develop an algorithm that allows for joint optimisation of the parameters of both the generative and recognition model. We demonstrate on several real-world data sets that the model generates realistic samples, provides accurate imputations of missing data and is a useful tool for high-dimensional data visualisation.", "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.", "We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder (VAE) with a generative adversarial network (GAN) we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.", "Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. 
We find that applying orthogonal regularization to the generator renders it amenable to a simple \"truncation trick,\" allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator's input. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128x128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.5 and Frechet Inception Distance (FID) of 7.4, improving over the previous best IS of 52.52 and FID of 18.6.", "Generative adversarial networks (GANs) have great successes on synthesizing data. However, the existing GANs restrict the discriminator to be a binary classifier, and thus limit their learning capacity for tasks that need to synthesize output with rich structures such as natural language descriptions. In this paper, we propose a novel generative adversarial network, RankGAN, for generating high-quality language descriptions. Rather than training the discriminator to learn and assign absolute binary predicate for individual data sample, the proposed RankGAN is able to analyze and rank a collection of human-written and machine-written sentences by giving a reference group. By viewing a set of data samples collectively and evaluating their quality through relative ranking scores, the discriminator is able to make better assessment which in turn helps to learn a better generator. The proposed RankGAN is optimized through the policy gradient technique. Experimental results on multiple public datasets clearly demonstrate the effectiveness of the proposed approach.", "We present an approach to training neural networks to generate sequences using actor-critic methods from reinforcement learning (RL). Current log-likelihood training methods are limited by the discrepancy between their training and testing modes, as models must generate tokens conditioned on their previous guesses rather than the ground-truth tokens. We address this problem by introducing a network that is trained to predict the value of an output token, given the policy of an network. This results in a training procedure that is much closer to the test phase, and allows us to directly optimize for a task-specific score such as BLEU. Crucially, since we leverage these techniques in the supervised learning setting rather than the traditional RL setting, we condition the critic network on the ground-truth output. We show that our method leads to improved performance on both a synthetic task, and for German-English machine translation. Our analysis paves the way for such methods to be applied in natural language generation tasks, such as machine translation, caption generation, and dialogue modelling.", "Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. 
The method is also competitive when these baselines employ beam search, while being several times faster.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.", "", "Image captioning is a challenging problem owing to the complexity in understanding the image content and diverse ways of describing it in natural language. Recent advances in deep neural networks have substantially improved the performance of this task. Most state-of-the-art approaches follow an encoder-decoder framework, which generates captions using a sequential recurrent prediction model. However, in this paper, we introduce a novel decision-making framework for image captioning. We utilize a \"policy network\" and a \"value network\" to collaboratively generate captions. The policy network serves as a local guidance by providing the confidence of predicting the next word according to the current state. Additionally, the value network serves as a global and lookahead guidance by evaluating all possible extensions of the current state. In essence, it adjusts the goal of predicting the correct words towards the goal of generating captions similar to the ground truth captions. We train both networks using an actor-critic reinforcement learning model, with a novel reward defined by visual-semantic embedding. Extensive experiments and analyses on the Microsoft COCO dataset show that the proposed framework outperforms state-of-the-art approaches across different evaluation metrics.", "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. 
In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations." ], "cite_N": [ "@cite_35", "@cite_4", "@cite_22", "@cite_36", "@cite_41", "@cite_29", "@cite_21", "@cite_42", "@cite_56", "@cite_24", "@cite_19", "@cite_45", "@cite_49", "@cite_50", "@cite_47" ], "mid": [ "2963084599", "2410983263", "2951520714", "1909320841", "2964268978", "2964167449", "2952716587", "2616969219", "2487501366", "2176263492", "2099471712", "2766527293", "", "2952591111", "2173520492" ] }
0
Learning to Rank (L2R). The idea of L2R has existed for two decades in the information-retrieval community. The goal is to directly optimize against ranking-based evaluation metrics @cite_32 @cite_43 . Previous work on L2R employs objective relaxations @cite_33 . Some techniques can be extended to recommendation settings @cite_46 @cite_6 @cite_44 @cite_9 @cite_13 . Many L2R methods in recommendation are essentially trained by optimizing a classification function, such as the popular pairwise L2R methods BPR @cite_46 and WARP @cite_12 described in Section 2.1. One limitation is that they are computationally expensive when the number of items is large. To accelerate these approaches, cheap approximations are made in each training step, which results in degraded performance. In contrast, the proposed RaCT is efficient and scalable. In fact, the traditional L2R methods can be integrated into our actor-critic framework, yielding improved performance as shown in our experiments.
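As a concrete example of the pairwise L2R objectives mentioned above, a commonly used form of the BPR criterion for a single (user, observed item, sampled unobserved item) triple is sketched below; the matrix-factorization scoring model and regularization weight are illustrative assumptions.

```python
import numpy as np

def bpr_loss(user_vec, item_pos_vec, item_neg_vec, reg=0.01):
    """Pairwise BPR-style objective: push the score of the observed item
    above the score of a sampled unobserved item, with L2 regularization.
    All vectors are latent factors of a matrix-factorization model."""
    x_uij = user_vec @ item_pos_vec - user_vec @ item_neg_vec   # score difference
    log_sigmoid = -np.log1p(np.exp(-x_uij))                     # ln sigma(x_uij)
    penalty = reg * (user_vec @ user_vec
                     + item_pos_vec @ item_pos_vec
                     + item_neg_vec @ item_neg_vec)
    return -(log_sigmoid - penalty)                             # minimize negative criterion
```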
{ "abstract": [ "In this paper, we consider collaborative filtering as a ranking problem. We present a method which uses Maximum Margin Matrix Factorization and optimizes ranking instead of rating. We employ structured output prediction to optimize directly for ranking scores. Experimental results show that our method gives very good ranking scores and scales well on collaborative filtering tasks.", "In this paper, we tackle the problem of top-N context-aware recommendation for implicit feedback scenarios. We frame this challenge as a ranking problem in collaborative filtering (CF). Much of the past work on CF has not focused on evaluation metrics that lead to good top-N recommendation lists in designing recommendation models. In addition, previous work on context-aware recommendation has mainly focused on explicit feedback data, i.e., ratings. We propose TFMAP, a model that directly maximizes Mean Average Precision with the aim of creating an optimally ranked list of items for individual users under a given context. TFMAP uses tensor factorization to model implicit feedback data (e.g., purchases, clicks) with contextual information. The optimization of MAP in a large data collection is computationally too complex to be tractable in practice. To address this computational bottleneck, we present a fast learning algorithm that exploits several intrinsic properties of average precision to improve the learning efficiency of TFMAP, and to ensure its scalability. We experimentally verify the effectiveness of the proposed fast learning algorithm, and demonstrate that TFMAP significantly outperforms state-of-the-art recommendation approaches.", "Learning to rank for Information Retrieval (IR) is a task to automatically construct a ranking model using training data, such that the model can sort new objects according to their degrees of relevance, preference, or importance. Many IR problems are by nature ranking problems, and many IR technologies can be potentially enhanced by using learning-to-rank techniques. The objective of this tutorial is to give an introduction to this research direction. Specifically, the existing learning-to-rank algorithms are reviewed and categorized into three approaches: the pointwise, pairwise, and listwise approaches. The advantages and disadvantages with each approach are analyzed, and the relationships between the loss functions used in these approaches and IR evaluation measures are discussed. Then the empirical evaluations on typical learning-to-rank methods are shown, with the LETOR collection as a benchmark dataset, which seems to suggest that the listwise approach be the most effective one among all the approaches. After that, a statistical ranking theory is introduced, which can describe different learning-to-rank algorithms, and be used to analyze their query-level generalization abilities. At the end of the tutorial, we provide a summary and discuss potential future work on learning to rank.", "A ranking approach, ListRank-MF, is proposed for collaborative filtering that combines a list-wise learning-to-rank algorithm with matrix factorization (MF). A ranked list of items is obtained by minimizing a loss function that represents the uncertainty between training lists and output lists produced by a MF ranking model. ListRank-MF enjoys the advantage of low complexity and is analytically shown to be linear with the number of observed ratings for a given user-item matrix. 
We also experimentally demonstrate the effectiveness of ListRank-MF by comparing its performance with that of item-based collaborative recommendation and a related state-of-the-art collaborative ranking approach (CoFiRank).", "Making recommendations by learning to rank is becoming an increasingly studied area. Approaches that use stochastic gradient descent scale well to large collaborative filtering datasets, and it has been shown how to approximately optimize the mean rank, or more recently the top of the ranked list. In this work we present a family of loss functions, the k-order statistic loss, that includes these previous approaches as special cases, and also derives new ones that we show to be useful. In particular, we present (i) a new variant that more accurately optimizes precision at k, and (ii) a novel procedure of optimizing the mean maximum rank, which we hypothesize is useful to more accurately cover all of the user's tastes. The general approach works by sampling N positive items, ordering them by the score assigned by the model, and then weighting the example as a function of this ordered set. Our approach is studied in two real-world systems, Google Music and YouTube video recommendations, where we obtain improvements for computable metrics, and in the YouTube case, increased user click through and watch duration when deployed live on www.youtube.com.", "Learning to rank refers to machine learning techniques for training the model in a ranking task. Learning to rank is useful for many applications in information retrieval, natural language processing, and data mining. Intensive studies have been conducted on the problem recently and significant progress has been made. This lecture gives an introduction to the area including the fundamental problems, existing approaches, theories, applications, and future work. The author begins by showing that various ranking problems in information retrieval and natural language processing can be formalized as two basic ranking tasks, namely ranking creation (or simply ranking) and ranking aggregation. In ranking creation, given a request, one wants to generate a ranking list of offerings based on the features derived from the request and the offerings. In ranking aggregation, given a request, as well as a number of ranking lists of offerings, one wants to generate a new ranking list of the offerings. Ranking creation (or ranking) is the major problem in learning to rank. It is usually formalized as a supervised learning task. The author gives detailed explanations on learning for ranking creation and ranking aggregation, including training and testing, evaluation, feature creation, and major approaches. Many methods have been proposed for ranking creation. The methods can be categorized as the pointwise, pairwise, and listwise approaches according to the loss functions they employ. They can also be categorized according to the techniques they employ, such as the SVM based, Boosting SVM, Neural Network based approaches. The author also introduces some popular learning to rank methods in details. These include PRank, OC SVM, Ranking SVM, IR SVM, GBRank, RankNet, LambdaRank, ListNet & ListMLE, AdaRank, SVM MAP, SoftRank, Borda Count, Markov Chain, and CRanking. The author explains several example applications of learning to rank including web search, collaborative filtering, definition search, keyphrase extraction, query dependent summarization, and re-ranking in machine translation. 
A formulation of learning for ranking creation is given in the statistical learning framework. Ongoing and future research directions for learning to rank are also discussed. Table of Contents: Introduction Learning for Ranking Creation Learning for Ranking Aggregation Methods of Learning to Rank Applications of Learning to Rank Theory of Learning to Rank Ongoing and Future Work", "Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion.", "RNNs have been shown to be excellent models for sequential data and in particular for data that is generated by users in an session-based manner. The use of RNNs provides impressive performance benefits over classical methods in session-based recommendations. In this work we introduce novel ranking loss functions tailored to RNNs in the recommendation setting. The improved performance of these losses over alternatives, along with further tricks and refinements described in this work, allow for an overall improvement of up to 35 in terms of MRR and Recall@20 over previous session-based RNN solutions and up to 53 over classical collaborative filtering approaches. Unlike data augmentation-based improvements, our method does not increase training times significantly. We further demonstrate the performance gain of the RNN over baselines in an online A B test.", "Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at the top of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method, called WSABIE, both outperforms several baseline methods and is faster and consumes less memory." ], "cite_N": [ "@cite_33", "@cite_9", "@cite_32", "@cite_6", "@cite_44", "@cite_43", "@cite_46", "@cite_13", "@cite_12" ], "mid": [ "2119384858", "1999956270", "2149427297", "2071111773", "1968598835", "2009077327", "2140310134", "2626454364", "21006490" ] }
0
1808.00587
2885930799
In this paper, we study parametric analysis of semidefinite optimization problems with respect to perturbation of the objective function. We investigate the behavior of the optimal partition and the optimal set mapping in a so-called nonlinearity interval. Furthermore, we investigate the sensitivity of the approximation of the optimal partition, which has recently been studied by Mohammad-Nezhad and Terlaky. The approximation of the optimal partition was obtained from a bounded sequence of interior solutions on, or in a neighborhood of, the central path. We derive an upper bound on the distance between the invariant subspaces spanned by the approximation of the optimal partition.
Adler and Monteiro @cite_34 studied the parametric analysis of LO problems using the concept of the optimal partition. Another treatment of sensitivity analysis for LO based on the optimal partition approach was given by @cite_15 and Greenberg @cite_3 . @cite_12 @cite_33 extended the optimal partition approach to linearly constrained quadratic optimization (LCQO) with perturbation in the right-hand side vector and showed that the optimal value function is convex and piecewise quadratic. There have been further studies on the optimal partition and parametric analysis of conic optimization problems. In contrast to LO, the optimal partition of SDO is defined as a 3-tuple of mutually orthogonal subspaces of @math . Goldfarb and Scheinberg @cite_22 considered a parametric SDO problem, where the objective is perturbed along a fixed direction. They derived auxiliary problems to compute the directional derivatives of the optimal value function and the so-called invariancy set of the optimal partition. Yildirim @cite_18 extended the concept of the optimal partition and the auxiliary problems in @cite_22 to linear conic optimization problems.
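The parametric setting studied by Goldfarb and Scheinberg @cite_22 and Yildirim @cite_18 can be written, in one common standard form, roughly as follows; the notation here is chosen for illustration rather than taken from the cited papers.

```latex
% Objective perturbed along a fixed direction \bar{C}; \phi is the optimal
% value function whose directional derivatives and invariancy sets are
% characterized through the optimal partition.
\begin{equation*}
  \phi(\epsilon) \;=\; \min_{X \succeq 0}
  \Big\{ \langle C + \epsilon \bar{C},\, X \rangle
         \;:\; \langle A_i, X \rangle = b_i,\ \ i = 1, \dots, m \Big\}.
\end{equation*}
```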
{ "abstract": [ "We study convex conic optimization problems in which the right-hand side and the cost vectors vary linearly as functions of a scalar parameter. We present a unifying geometric framework that subsumes the concept of the optimal partition in linear programming (LP) and semidefinite programming (SDP) and extends it to conic optimization. Similar to the optimal partition approach to sensitivity analysis in LP and SDP, the range of perturbations for which the optimal partition remains constant can be computed by solving two conic optimization problems. Under a weaker notion of nondegeneracy, this range is simply given by a minimum ratio test. We discuss briefly the properties of the optimal value function under such perturbations.", "In this chapter we describe the optimal set approach for sensitivity analysis for LP. We show that optimal partitions and optimal sets remain constant between two consecutive transition-points of the optimal value function. The advantage of using this approach instead of the classical approach (using optimal bases) is shown. Moreover, we present an algorithm to compute the partitions, optimal sets and the optimal value function. This is a new algorithm and uses primal and dual optimal solutions. We also extend some of the results to parametric quadratic programming, and discuss differences and resemblances with the linear programming case.", "In this paper we consider a semidefinite programming (SDP) problem in which the objective function depends linearly on a scalar parameter. We study the properties of the optimal objective function value as a function of that parameter and extend the concept of the optimal partition and its range in linear programming to SDP. We also consider an approach to sensitivity analysis in SDP and the extension of our results to an SDP problem with a parametric right-hand side.", "Over the years we have learned to use an optimal basic solution to perform sensitivity analysis. Recently, the importance of an optimal partition, induced by a strictly complementary solution, has surfaced in connection with the interior point method. This paper gives examples where the partition is what is needed or desired to perform the analysis.", "", "We present a new definition of optimality intervals for the parametric right-hand side linear programming (parametric RHS LP) Problem ź(ź) = min c t x¦Ax =b + ź¯b,x ź 0 . We then show that an optimality interval consists either of a breakpoint or the open interval between two consecutive breakpoints of the continuous piecewise linear convex function ź(ź). As a consequence, the optimality intervals form a partition of the closed interval ź; ¦ź(ź)¦ < ź . Based on these optimality intervals, we also introduce an algorithm for solving the parametric RHS LP problem which requires an LP solver as a subroutine. If a polynomial-time LP solver is used to implement this subroutine, we obtain a substantial improvement on the complexity of those parametric RHS LP instances which exhibit degeneracy. When the number of breakpoints of ź(ź) is polynomial in terms of the size of the parametric problem, we show that the latter can be solved in polynomial time.", "In this paper we deal with sensitivity analysis in convex quadratic programming, without making assumptions on nondegeneracy, strict convexity of the objective function, and the existence of a strictly complementary solution. 
We show that the optimal value as a function of a right--hand side element (or an element of the linear part of the objective) is piecewise quadratic, where the pieces can be characterized by maximal complementary solutions and tripartitions. Further, we investigate differentiability of this function. A new algorithm to compute the optimal value function is proposed. Finally, we discuss the advantages of this approach when applied to mean--variance portfolio models." ], "cite_N": [ "@cite_18", "@cite_33", "@cite_22", "@cite_3", "@cite_15", "@cite_34", "@cite_12" ], "mid": [ "2144600542", "1490853808", "1983800681", "2055716186", "", "2062716676", "1496393375" ] }
0
1807.11632
2950710378
Most neural-network based speaker-adaptive acoustic models for speech synthesis can be categorized into either layer-based or input-code approaches. Although both approaches have their own pros and cons, most existing works on speaker adaptation focus on improving one or the other. In this paper, after we first systematically overview the common principles of neural-network based speaker-adaptive models, we show that these approaches can be represented in a unified framework and can be generalized further. More specifically, we introduce the use of scaling and bias codes as generalized means for speaker-adaptive transformation. By utilizing these codes, we can create a more efficient factorized speaker-adaptive model and capture advantages of both approaches while reducing their disadvantages. The experiments show that the proposed method can improve the performance of speaker adaptation compared with speaker adaptation based on the conventional input code.
Constrained Maximum Likelihood Linear Regression (CMLLR) @cite_30 @cite_31 , also known as feature-space MLLR (fMLLR), is a widely used speaker adaptation technique for hidden Markov model (HMM)-based speech processing systems, in which a speaker-dependent affine transformation is applied to the source acoustic features to better explain the target data. In the case of automatic speech recognition (ASR), the transformation acts as a method of normalization, whereas in the case of speech synthesis, the purpose of the transformation is to steer the acoustic output toward each target speaker @cite_24 . The fMLLR method can be described using the following equation: where @math denotes the source acoustic features, @math represents the approximated acoustic features of the target speaker @math , @math is a full linear matrix and @math is the bias vector. @math and @math are transformation parameters specific to each speaker.
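A plausible rendering of this speaker-dependent affine transform, with symbols chosen here to match the prose (the original notation may differ), is:

```latex
% Source features o_t are mapped to speaker-specific features by a full
% matrix A^{(s)} and a bias vector b^{(s)} for speaker s.
\begin{equation*}
  \hat{o}^{(s)}_t \;=\; A^{(s)} o_t + b^{(s)}
\end{equation*}
```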
{ "abstract": [ "A trend in automatic speech recognition systems is the use of continuous mixture-density hidden Markov models (HMMs). Despite the good recognition performance that these systems achieve on average in large vocabulary applications, there is a large variability in performance across speakers. Performance degrades dramatically when the user is radically different from the training population. A popular technique that can improve the performance and robustness of a speech recognition system is adapting speech models to the speaker, and more generally to the channel and the task. In continuous mixture-density HMMs the number of component densities is typically very large, and it may not be feasible to acquire a sufficient amount of adaptation data for robust maximum-likelihood estimates. To solve this problem, the authors propose a constrained estimation technique for Gaussian mixture densities. The algorithm is evaluated on the large-vocabulary Wall Street Journal corpus for both native and nonnative speakers of American English. For nonnative speakers, the recognition error rate is approximately halved with only a small amount of adaptation data, and it approaches the speaker-independent accuracy achieved for native speakers. For native speakers, the recognition performance after adaptation improves to the accuracy of speaker-dependent systems that use six times as much training data. >", "Abstract This paper examines the application of linear transformations for speaker and environmental adaptation in an HMM-based speech recognition system. In particular, transformations that are trained in a maximum likelihood sense on adaptation data are investigated. Only model-based linear transforms are considered, since, for linear transforms, they subsume the appropriate feature–space transforms. The paper compares the two possible forms of model-based transforms: (i) unconstrained, where any combination of mean and variance transform may be used, and (ii) constrained, which requires the variance transform to have the same form as the mean transform. Re-estimation formulae for all appropriate cases of transform are given. This includes a new and efficient full variance transform and the extension of the constrained model–space transform from the simple diagonal case to the full or block–diagonal case. The constrained and unconstrained transforms are evaluated in terms of computational cost, recognition time efficiency, and use for speaker adaptive training. The recognition performance of the two model–space transforms on a large vocabulary speech recognition task using incremental adaptation is investigated. In addition, initial experiments using the constrained model–space transform for speaker adaptive training are detailed.", "" ], "cite_N": [ "@cite_30", "@cite_31", "@cite_24" ], "mid": [ "2121981798", "2002342963", "" ] }
SCALING AND BIAS CODES FOR MODELING SPEAKER-ADAPTIVE DNN-BASED SPEECH SYNTHESIS SYSTEMS
Recent speaker-dependent speech synthesis systems can generate high-quality reading speech indistinguishable from natural human speech when their training data is recorded in a quality-controlled condition and have sufficient amount of data [1]. The speech synthesis community is currently trying to solve more challenging problems. A good example is multi-speaker speech synthesis and its adaptation [2,3,4,5]. Here multi-speaker synthesis means generating synthetic speech of multiple known speakers included in a training dataset using a common model, and adaptation means adapting the speaker-independent common model to unseen speakers and generating their speech. This speakeradaptive speech synthesis systems are expected to opens possibilities for a wide range of new applications for speech synthesis such as a customizable, user-specific voice interface and voice preservation for people with medical conditions involving voice losses. However, training the multi-speaker This work was partially supported by MEXT KAKENHI Grants (16H06302, 17H04687, 18H04120, and 18H04112). synthesis models and adapting them to unseen speakers are still challenging problems, and resulting models are far from perfect, especially when less than ideal datasets are used [6]. Most adaptation methods for neural network models can be described as either (a) fine-tuning a set of or all of parameters of speaker-independent network so it explains unseen speaker's data better or (b) factorizing a neural network into speaker-specific and common parts and estimating the speaker-specific components for the unseen speaker's data. The speaker-specific components may be composed by input codes (e.g. one-hot vector) [7], embedding vectors obtained externally (e.g. i-vector) [8], or latent variables (e.g. variational auto-encoder) [3,9,10]. Of course any of those speaker-specific components may be jointly optimized with the common parts (e.g. [7,10,11]). Although there are a lot of variants on multi-speaker modeling and adaptation, most approaches for augmenting the speaker-specific components into a neural network are equivalent to adapting a bias term of each hidden layer and this bias term is typically constant across all frames of all utterances. Although Wu et al. [12] and Nachmachi et al. [13] proposed frame-dependent components, these components are still bias adaptation and their underlying frameworks and concepts have mathematical similarities. In this paper we first systematically overview the common concepts of neural-network based speaker-adaptive models and show that these approaches can be represented in a unified framework. Further, we introduce a scaling code as an extended speaker-adaptive transformation. As its name indicates, this code introduces an additional scaling operation as an approximation to adaptation of weight matrices unlike the conventional deep neural network (DNN) adaptation approaches. Section 2 details relevant work. Section 3 describes our factorized speaker adaptation based on scaling and bias codes. Section 4 explains our experiments and shows both objective and subjective results. We conclude our work and describe the future direction for this method in Section 5. FACTORIZED SPEAKER TRANSFORMATION BASED ON SCALING AND BIAS CODES Scaling and bias codes The above approaches are obviously complementary. 
Our proposal, illustrated in Figure 1, is therefore the design of a new speaker transformation by combining the above two types of approaches and further factorizing its essential components on the basic of "scaling" and "bias" codes. The main idea is to explicitly transform both the weight matrix and the bias vector as: h l = f (A (k) l W l h l−1 + c l + b (k) l )(11)A (k) l = diag(W A l s A,(k) ) (12) b (k) l = W b l s b,(k)(13)W A W b W s A,(k) s b,(k) speaker-embeded table where A (k) l ∈ R m×m is a diagonal matrix for the scaling operation at the l-th layer. The matrix is further factorized into a speaker-independent projection matrix W A l ∈ R m×p and a scaling code vector s A,(k) ∈ R p×1 . diag is an operation to change a m × 1 vector into a diagonal m × m matrix. The speaker-specific bias term b (k) l is also factorized in the same way using W b l ∈ R m×q and s b,(k) ∈ R q×1 . As described previously, s b,(k) is basically equivalent to the conventional speaker code, but we call it as bias code here to better outline its property. These codes may have arbitrary lengths, but, p and q are usually chosen to be much smaller than m to reduce the number of free parameters further. Factorizing models explicitly and using lower-dimensional subspaces is a powerful concept used in various models (e.g. Heteroscedastic Linear Discriminant Analysis (HLDA) [26], subspace Gaussian mixture model [27]). The proposed factorization is somewhat similar to Factorize Hidden Layer (FHL) introduced by Samrakoon and Sim [20], but we focus on performing the scaling and bias adaptation simultaneously using lower dimensional vectors. A concept similar to scaling and bias codes was also investigated for ASR in [28,29], but instead of mapping the scaling and bias transformation from a common vector we use separated vectors as scaling and bias codes to give ourselves more degrees of freedom to design a speaker-adaptive architecture. If necessary, we may directly adapt A (k) l and b (k) l when the amount of adaptation data is sufficient. Extensions of the proposed method In this paper, we investigate two more strategies as extensions of the proposed method. The first strategy is to separately use the scaling and bias codes at different layers and to explicitly perform either scaling or bias operations only as illustrated by Figure 2-a. This is a special case of the proposed method. The second strategy is to combine the proposed method with other type of matrix decomposition. For example, in the work of Xue et al. [30], a weight matrix is decomposed into three linearly connected matrices using singular value decomposition (SVD). Therefore, instead of multiplying a scaling matrix to a weight matrix, we may first decompose the weight matrix into the three linearly connected matrices and use the proposed scaling matrix to approximate one of the decomposed matrices further as follows: h l = f (W (k) l h l−1 + c l + b (k) l + h l−1 )(14)W (k) l = U l A (k) l V l(15)A (k) l = diag(W A l s A,(k) )(16)b (k) l = W b l s b,(k)(17) where U l ∈ R m×n , V l ∈ R n×m and A (k) l ∈ R n×n with n m 1 . Note that residual connections are also added here. When we use this model for time-series speech data, the input varies at each time and the residual part becomes a timevariant bias term as h l,t = f (W We also investigate to which layers we should inject the proposed transformation and what kinds of activation functions should be used after the speaker transformation. 
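As a reading aid for the factorization in Eqs. (11)-(13) above, the following minimal numpy sketch shows how a low-dimensional scaling code and bias code could be projected to a per-speaker diagonal scaling matrix and bias vector and then applied to one hidden layer. All names, sizes, and random values are illustrative assumptions and not the authors' implementation; the SVD-bottleneck variant of Eqs. (14)-(17) would additionally replace W_l by U_l A^(k)_l V_l and add a residual connection.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def scale_bias_layer(h_prev, W_l, c_l, W_A, s_A, W_b, s_b):
    """One hidden layer with factorized speaker-adaptive scaling and bias:
       A_k = diag(W_A @ s_A)   # per-speaker diagonal scaling matrix
       b_k = W_b @ s_b         # per-speaker bias vector
       h   = f(A_k @ W_l @ h_prev + c_l + b_k)
    """
    A_k = np.diag(W_A @ s_A)
    b_k = W_b @ s_b
    return sigmoid(A_k @ (W_l @ h_prev) + c_l + b_k)

# Illustrative sizes: m hidden units, code lengths p and q with p, q << m.
m, p, q = 1024, 8, 8
rng = np.random.default_rng(1)
h_prev = rng.standard_normal(m)
W_l = 0.01 * rng.standard_normal((m, m))   # shared weight matrix
c_l = np.zeros(m)                          # shared bias
W_A = 0.01 * rng.standard_normal((m, p))   # shared projection for scaling codes
W_b = 0.01 * rng.standard_normal((m, q))   # shared projection for bias codes
s_A = rng.standard_normal(p)               # speaker k's scaling code
s_b = rng.standard_normal(q)               # speaker k's bias code
h = scale_bias_layer(h_prev, W_l, c_l, W_A, s_A, W_b, s_b)
```

In this view, using the bias code alone recovers the conventional input-code adaptation, while the scaling code approximates weight-matrix adaptation with only p additional per-speaker parameters per layer.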
More specifically, we investigate whether the proposed transformation should be used at intermediate hidden layers with nonlinear activation functions as shown in Figure 3-a or at a specific layer where all remaining operations are linear as shown in Figure 3-b. By analyzing this, we can understand whether the relationship between the proposed speaker transformation functions and generated acoustic features should be represented in a non-linear way like the former case, or in a linear one like the latter case. 2 EXPERIMENTS Experimental condition We use two speech corpora to evaluate our proposal: an English corpus containing 80 speakers, which is a subset of the VCTK [32,33], and an in-house Japanese speech corpus with over 250 speakers. The English corpus was used to objectively evaluate various aspects of our proposal while the Japanese corpus is used to reproduce the results and evaluate subjectively with native Japanese listeners. We split each corpora into the base and target sets as shown in Table 1 and conducted two tasks (multi-speaker and adaptation) 2 For the combination of the linear case with the strategy in Figure 2-a, which has operations at two different layers, we first used speaker transformation based on the bias code at a hidden layer with the non-linear activation functions and further used speaker transformation based on the scaling code at the next linear layer. This is technically a mix of linear and non-linear speaker transformations, but we included this in "the linear setup" in our experiments. as follows. In the multi-speaker task, we used en.base and one of en.target.{10, 40, 160, or 320} for training a multispeaker neural network common to all speakers per strategy. In the adaptation task, we used en.base for training a multispeaker neural network per strategy and adapted it to each target speaker included in en.target.*. In both the tasks, the evaluation was performed using target speakers included in en.target.*. This increased the number of models needed to be constructed but reduced the mismatch between the multispeaker and adaptation tasks so we could directly compare them. For the DNN-based acoustic model, we used a conventional multi-task learning neural network similar to our previous works [7,34]. The neural network maps linguistic features (depending on languages) to several acoustic features including 60-dimensional mel-cepstral coefficients, 25-dimensional band-limited aperiodicities, interpolated logarithm fundamental frequencies, and their dynamic counterpart. A voiced/unvoiced binary flag is also included. The neural network model has five feedforward layers each with 1024 neurons, followed by a linear layer to map to the desired dimensional output. All layers have the sigmoid activation function unless stated otherwise. We experimented with five strategies utilizing either scaling code, bias code, or both as shown in Table 2. Further, to investigate the impacts of different waveform generation methods, we used both a speaker-independent Wavenet vocoder [35,36] and the WORLD vocoder [37] for speech waveform generation . However, our Wavenet model is still under development and we experienced the collapse of generated speech problems, which is described in [38]. Objective evaluation We first evaluated the scaling code by itself in a nonlinear setup since, at the time of writing, using scaling code for multi-speaker speech synthesis has not been investigated. 
We changed the size of scaling codes from 1 to 128 to see how they impact the objective performance of the multi-speaker task in a similar way to experiments that we did on bias codes previously [7]. The multi-speaker models were trained using en.base and en.target.320 together. The objective evaluation results, including mel-cepstral distortion (MCD) in dB and F 0 root mean square error (F 0 RMSE) in Hz, are illustrated in Figure 4. We can see that both the distortions decrease when we increase the size of the scaling code. Next we evaluated multiple strategies described in Table 2 for the multi-speaker task in either nonlinear or linear setups. Again the multi-speaker models were trained using the en.base and en.target.320 data together. Figure 5 shows objective evaluation results of the strategies. If we look at the non-linear setups, we see that there are no obvious differences between these strategies. However, at least we can determine that the proposed scaling code can be used by itself without decreasing the performance. If we look at the linear setups, we can clearly see that the using the bias code by itself is a poor strategy for multi-speaker modeling. It resulted in much worse MCD even though its F 0 RMSE is comparable to other systems. In [39], Wang found out that the model structures required for mel-cepstrum and fundamental frequency are different. Our results also support this finding. Figure 6 shows objective evaluation results of the strategies in the adaptation task using different amounts of data. The first block indicated bias m corresponds to reference results in the multi-speaker task (i.e., systems where multi- speaker neural networks were trained using en.base and one of en.target.{10, 40, 160, or 320} and synthetic speech was generated using text of the test set of target speakers ) using the bias code in the nonlinear setup. All other results are adaptation results for the unseen speaker task. The amounts of adaptation data vary from 10 to 320. From this figure, we see that adaptation to the unseen speakers is more difficult than multi-speaker modeling. Moreover, while the results of multi-speaker modeling are improved significantly when we increase the amount of data, the adaptation results for the unseen speakers show marginal improvements when more data is available. This suggests that the proposed adaptation transformation needs to be generalized better. Another important pattern that we can see from the figure is that in terms of F 0 RMSE, all strategies in the linear setup outperform their nonlinear counterparts. Subjective evaluations Next we reproduced several selected strategies using the Japanese dataset. We doubled the size of speaker codes shown in Table 2 and chose strategies that showed reasonable improvements in the objective evaluation using the English dataset. The objective evaluation results using the Japanese corpus are shown in Figure 7, from which we can see the same trend as the result using the English one 3 . We used the Japanese systems and conducted a subjective listening test to see how participants perceived these differences. The listening test contained two sets of questions. In the first part, participants were asked to judge the naturalness of the presented speech sample using a five-point scale Fig. 7. Objective evaluation results of selected strategies in adaptation task using Japanese corpus. Like the English test, bias m shows reference results in the multi-speaker task using the bias code in the nonlinear setup. 
All other results are adaptation results. ranged from 1 (very unnatural) to 5 (very natural). In the second part, participants were asked to compare a speech sample of a system with recorded speech of the same speaker and judge if they are the same speaker or not using a four-point scale ranged from 1 (different, sure) to 4 (same, sure). This evaluation methodology is similar to our previous study [34]. In addition to synthetic speech generated from the proposed speech synthesis systems using the above selected strategies, we also evaluated recorded speech, WOLRD vocoded speech, and Wavenet vocoded speech for comparison. A large-scale listening test was done with 289 subjects. The statistical analysis was conducted using pairwise t-tests with a 95% confidence margin and Holm-Bonferroni compensation for multiple comparisons. Subjective evaluation results are presented in Figure 8. In the quality test, we can first see that participants judged all systems using our speaker-independent Wavenet vocoder samples to be worse than counterparts using the WORLD vocoder. This is inconsistent with other publication results and indicates that our Wavenet is not properly trained. For the future works, we could further fine-tune a part of the speaker-independent Wavenet model to stabilize the neuralnet vocoder [40,41]. However, unlike the quality test, the subjects judged synthetic speech using the Wavenet vocoder to be closer to the target speakers in the speaker similarity test although there are still large gaps between vocoded speech and synthetic speech. We can also see that a reference multi-speaker system marked as bias m using 100 utterances has the highest similarity score among the other systems, and this is consistent with the objective evaluation results. Regarding the adaptation to the unseen speakers, we could see that the proposed method using both the scaling and bias codes and its bottle- neck variant (in the linear setting) have better results than the adaptation method using the bias code in the nonlinear setting (which is our previous work) for both WORLD and Wavenet vocoders. This would be because of improved F0 adaptation, as we can see objectively in Figure 7. Regarding the quantity of the adaptation data, more data seems to slightly improve speaker similarity of synthetic speech in general but does not improve the perception of quality. In some cases, it makes the quality of synthetic speech slightly worse. CONCLUSIONS In this paper, we have explained several major existing adaptation frameworks for DNN speech synthesis and showed one generalized speaker-adaptive transformation. Further, we have factorized the proposed transformation on the basic of scaling and bias codes and investigated its variants such as bottleneck. From objective and subjective experiments, we showed that the proposed method, specifically the ones using both the scaling and bias codes in the linear setting, can reduce acoustic errors and improve subjective speaker similarity in the adaptation of unseen speakers . Moreover, our results clearly indicate that there are still large gaps between vocoded speech and synthetic speech in terms of speaker similarity and this clearly indicates that there is room for improving multispeaker modeling and speaker adaptation. Our future work includes comparing our method with other adaptation methods such as LHUC and SVD bottleneck speaker adaptation with low-rank approximation. 
Another interesting experiment we would like to see is the use of i-vector or d-vector [24] as a scaling code.
2,767
1807.11632
2950710378
Most neural-network based speaker-adaptive acoustic models for speech synthesis can be categorized into either layer-based or input-code approaches. Although both approaches have their own pros and cons, most existing works on speaker adaptation focus on improving one or the other. In this paper, after we first systematically overview the common principles of neural-network based speaker-adaptive models, we show that these approaches can be represented in a unified framework and can be generalized further. More specifically, we introduce the use of scaling and bias codes as generalized means for speaker-adaptive transformation. By utilizing these codes, we can create a more efficient factorized speaker-adaptive model and capture advantages of both approaches while reducing their disadvantages. The experiments show that the proposed method can improve the performance of speaker adaptation compared with speaker adaptation based on the conventional input code.
Next we describe the existing DNN-based speaker adaptation methods, namely speaker-dependent layers and speaker-dependent input codes, using notation similar to that of fMLLR above. In the speaker-dependent layers approach @cite_13 @cite_27 , the weight matrices and bias vectors of specific layers are fine-tuned on adaptation data; we can therefore rewrite the equation as: where @math and @math are now specific to a target speaker @math and @math represents the adapted hidden layer. The method has the advantage of modeling both a full matrix @math and the bias vector @math , which usually yields favorable results when the adaptation data is sufficient @cite_29 @cite_27 . However, when the amount of adaptation data is limited, the results are unstable because the number of parameters to be estimated is very large @cite_22 . This is also why this method typically involves reducing the number of estimated parameters @cite_11 @cite_7 @cite_27 in order to retain adaptation performance.
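The following minimal numpy sketch illustrates the speaker-dependent-layer idea described above: only the parameters of one chosen layer (here the output layer) are updated on the target speaker's adaptation data while the rest of the network stays frozen. The network sizes, the random stand-in data, and the plain SGD update on a squared-error loss are illustrative assumptions, not the cited systems' training recipe.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A tiny speaker-independent model: one hidden layer plus a linear output layer.
rng = np.random.default_rng(2)
d_in, d_hid, d_out = 300, 1024, 187                  # illustrative sizes
W1 = 0.01 * rng.standard_normal((d_hid, d_in)); b1 = np.zeros(d_hid)
W2 = 0.01 * rng.standard_normal((d_out, d_hid)); b2 = np.zeros(d_out)

# Adaptation data of target speaker k (random stand-ins for real features).
X = rng.standard_normal((50, d_in))                  # input (linguistic) features
Y = rng.standard_normal((50, d_out))                 # target acoustic features

# Fine-tune only the chosen speaker-dependent layer (here W2, b2);
# all other parameters stay frozen.
lr = 1e-3
for x, y in zip(X, Y):
    h = sigmoid(W1 @ x + b1)                         # frozen hidden layer
    y_hat = W2 @ h + b2                              # adapted output layer
    err = y_hat - y                                  # gradient of 0.5 * ||y_hat - y||^2
    W2 -= lr * np.outer(err, h)
    b2 -= lr * err
```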
{ "abstract": [ "", "Recently, the low-rank plus diagonal (LRPD) adaptation was proposed for speaker adaptation of deep neural network (DNN) models. The LRPD restructures the adaptation matrix as a superposition of a diagonal matrix and a product of two low-rank matrices. In this paper, we extend the LRPD adaptation into the subspace-based approach to further reduce the speaker-dependent (SD) footprint. We apply the extended LRPD (eLRPD) adaptation for the DNN and LSTM models with emphasis placed on the applicability of the adaptation to large-scale speech recognition systems. To speed up the adaptation in test time, we propose the bottleneck (BN) caching approach to eliminate the redundant computations during multiple sweeps of development data. Experimental results on the short message dictation (SMD) task show that the eLRPD adaptation can reduce the SD footprints by 82 for the SVD DNN and 96 for the LSTM-RNN over the linear adaptation, while maintaining the comparable accuracy. The BN caching achieves up to 3.5 times speedup in adaptation at no loss of recognition accuracy.", "A major advantage of statistical parametric speech synthesis (SPSS) over unit-selection speech synthesis is its adaptability and controllability in changing speaker characteristics and speaking style. Recently, several studies using deep neural networks (DNNs) as acoustic models for SPSS have shown promising results. However, the adaptability of DNNs in SPSS has not been systematically studied. In this paper, we conduct an experimental analysis of speaker adaptation for DNN-based speech synthesis at different levels. In particular, we augment a low-dimensional speaker-specific vector with linguistic features as input to represent speaker identity, perform model adaptation to scale the hidden activation weights, and perform a feature space transformation at the output layer to modify generated acoustic features. We systematically analyse the performance of each individual adaptation technique and that of their combinations. Experimental results confirm the adaptability of the DNN, and listening tests demonstrate that the DNN can achieve significantly better adaptation performance than the hidden Markov model (HMM) baseline in terms of naturalness and speaker similarity.", "Speaker adaptation methods aim to create fair quality synthesis speech voice font for target speakers while only limited resources available. Recently, as deep neural networks based statistical parametric speech synthesis (SPSS) methods become dominant in SPSS TTS back-end modeling, speaker adaptation under the neural network based SPSS framework has also became an important task. In this paper, linear networks (LN) is inserted in multiple neural network layers and fine-tuned together with output layer for best speaker adaptation performance. When adaptation data is extremely small, the low-rank plus diagonal(LRPD) decomposition for LN is employed to make the adapted voice more stable. Speaker adaptation experiments are conducted under a range of adaptation utterances numbers. Moreover, speaker adaptation from 1) female to female, 2) male to female and 3) female to male are investigated. 
Objective measurement and subjective tests show that LN with LRPD decomposition performs most stable when adaptation data is extremely limited, and our best speaker adaptation (SA) model with only 200 adaptation utterances achieves comparable quality with speaker dependent (SD) model trained with 1000 utterances, in both naturalness and similarity to target speaker.", "In DNN-based TTS synthesis, DNNs hidden layers can be viewed as deep transformation for linguistic features and the output layers as representation of acoustic space to regress the transformed linguistic features to acoustic parameters. The deep-layered architectures of DNN can not only represent highly-complex transformation compactly, but also take advantage of huge amount of training data. In this paper, we propose an approach to model multiple speakers TTS with a general DNN, where the same hidden layers are shared among different speakers while the output layers are composed of speaker-dependent nodes explaining the target of each speaker. The experimental results show that our approach can significantly improve the quality of synthesized speech objectively and subjectively, comparing with speech synthesized from the individual, speaker-dependent DNN-based TTS. We further transfer the hidden layers for a new speaker with limited training data and the resultant synthesized speech of the new speaker can also achieve a good quality in term of naturalness and speaker similarity.", "" ], "cite_N": [ "@cite_22", "@cite_7", "@cite_29", "@cite_27", "@cite_13", "@cite_11" ], "mid": [ "", "2695252763", "2605320104", "2964169091", "1492383498", "" ] }
SCALING AND BIAS CODES FOR MODELING SPEAKER-ADAPTIVE DNN-BASED SPEECH SYNTHESIS SYSTEMS
Recent speaker-dependent speech synthesis systems can generate high-quality reading speech indistinguishable from natural human speech when their training data is recorded in a quality-controlled condition and have sufficient amount of data [1]. The speech synthesis community is currently trying to solve more challenging problems. A good example is multi-speaker speech synthesis and its adaptation [2,3,4,5]. Here multi-speaker synthesis means generating synthetic speech of multiple known speakers included in a training dataset using a common model, and adaptation means adapting the speaker-independent common model to unseen speakers and generating their speech. This speakeradaptive speech synthesis systems are expected to opens possibilities for a wide range of new applications for speech synthesis such as a customizable, user-specific voice interface and voice preservation for people with medical conditions involving voice losses. However, training the multi-speaker This work was partially supported by MEXT KAKENHI Grants (16H06302, 17H04687, 18H04120, and 18H04112). synthesis models and adapting them to unseen speakers are still challenging problems, and resulting models are far from perfect, especially when less than ideal datasets are used [6]. Most adaptation methods for neural network models can be described as either (a) fine-tuning a set of or all of parameters of speaker-independent network so it explains unseen speaker's data better or (b) factorizing a neural network into speaker-specific and common parts and estimating the speaker-specific components for the unseen speaker's data. The speaker-specific components may be composed by input codes (e.g. one-hot vector) [7], embedding vectors obtained externally (e.g. i-vector) [8], or latent variables (e.g. variational auto-encoder) [3,9,10]. Of course any of those speaker-specific components may be jointly optimized with the common parts (e.g. [7,10,11]). Although there are a lot of variants on multi-speaker modeling and adaptation, most approaches for augmenting the speaker-specific components into a neural network are equivalent to adapting a bias term of each hidden layer and this bias term is typically constant across all frames of all utterances. Although Wu et al. [12] and Nachmachi et al. [13] proposed frame-dependent components, these components are still bias adaptation and their underlying frameworks and concepts have mathematical similarities. In this paper we first systematically overview the common concepts of neural-network based speaker-adaptive models and show that these approaches can be represented in a unified framework. Further, we introduce a scaling code as an extended speaker-adaptive transformation. As its name indicates, this code introduces an additional scaling operation as an approximation to adaptation of weight matrices unlike the conventional deep neural network (DNN) adaptation approaches. Section 2 details relevant work. Section 3 describes our factorized speaker adaptation based on scaling and bias codes. Section 4 explains our experiments and shows both objective and subjective results. We conclude our work and describe the future direction for this method in Section 5. FACTORIZED SPEAKER TRANSFORMATION BASED ON SCALING AND BIAS CODES Scaling and bias codes The above approaches are obviously complementary. 
Our proposal, illustrated in Figure 1, is therefore the design of a new speaker transformation by combining the above two types of approaches and further factorizing its essential components on the basic of "scaling" and "bias" codes. The main idea is to explicitly transform both the weight matrix and the bias vector as: h l = f (A (k) l W l h l−1 + c l + b (k) l )(11)A (k) l = diag(W A l s A,(k) ) (12) b (k) l = W b l s b,(k)(13)W A W b W s A,(k) s b,(k) speaker-embeded table where A (k) l ∈ R m×m is a diagonal matrix for the scaling operation at the l-th layer. The matrix is further factorized into a speaker-independent projection matrix W A l ∈ R m×p and a scaling code vector s A,(k) ∈ R p×1 . diag is an operation to change a m × 1 vector into a diagonal m × m matrix. The speaker-specific bias term b (k) l is also factorized in the same way using W b l ∈ R m×q and s b,(k) ∈ R q×1 . As described previously, s b,(k) is basically equivalent to the conventional speaker code, but we call it as bias code here to better outline its property. These codes may have arbitrary lengths, but, p and q are usually chosen to be much smaller than m to reduce the number of free parameters further. Factorizing models explicitly and using lower-dimensional subspaces is a powerful concept used in various models (e.g. Heteroscedastic Linear Discriminant Analysis (HLDA) [26], subspace Gaussian mixture model [27]). The proposed factorization is somewhat similar to Factorize Hidden Layer (FHL) introduced by Samrakoon and Sim [20], but we focus on performing the scaling and bias adaptation simultaneously using lower dimensional vectors. A concept similar to scaling and bias codes was also investigated for ASR in [28,29], but instead of mapping the scaling and bias transformation from a common vector we use separated vectors as scaling and bias codes to give ourselves more degrees of freedom to design a speaker-adaptive architecture. If necessary, we may directly adapt A (k) l and b (k) l when the amount of adaptation data is sufficient. Extensions of the proposed method In this paper, we investigate two more strategies as extensions of the proposed method. The first strategy is to separately use the scaling and bias codes at different layers and to explicitly perform either scaling or bias operations only as illustrated by Figure 2-a. This is a special case of the proposed method. The second strategy is to combine the proposed method with other type of matrix decomposition. For example, in the work of Xue et al. [30], a weight matrix is decomposed into three linearly connected matrices using singular value decomposition (SVD). Therefore, instead of multiplying a scaling matrix to a weight matrix, we may first decompose the weight matrix into the three linearly connected matrices and use the proposed scaling matrix to approximate one of the decomposed matrices further as follows: h l = f (W (k) l h l−1 + c l + b (k) l + h l−1 )(14)W (k) l = U l A (k) l V l(15)A (k) l = diag(W A l s A,(k) )(16)b (k) l = W b l s b,(k)(17) where U l ∈ R m×n , V l ∈ R n×m and A (k) l ∈ R n×n with n m 1 . Note that residual connections are also added here. When we use this model for time-series speech data, the input varies at each time and the residual part becomes a timevariant bias term as h l,t = f (W We also investigate to which layers we should inject the proposed transformation and what kinds of activation functions should be used after the speaker transformation. 
More specifically, we investigate whether the proposed transformation should be used at intermediate hidden layers with nonlinear activation functions as shown in Figure 3-a or at a specific layer where all remaining operations are linear as shown in Figure 3-b. By analyzing this, we can understand whether the relationship between the proposed speaker transformation functions and generated acoustic features should be represented in a non-linear way like the former case, or in a linear one like the latter case. 2 EXPERIMENTS Experimental condition We use two speech corpora to evaluate our proposal: an English corpus containing 80 speakers, which is a subset of the VCTK [32,33], and an in-house Japanese speech corpus with over 250 speakers. The English corpus was used to objectively evaluate various aspects of our proposal while the Japanese corpus is used to reproduce the results and evaluate subjectively with native Japanese listeners. We split each corpora into the base and target sets as shown in Table 1 and conducted two tasks (multi-speaker and adaptation) 2 For the combination of the linear case with the strategy in Figure 2-a, which has operations at two different layers, we first used speaker transformation based on the bias code at a hidden layer with the non-linear activation functions and further used speaker transformation based on the scaling code at the next linear layer. This is technically a mix of linear and non-linear speaker transformations, but we included this in "the linear setup" in our experiments. as follows. In the multi-speaker task, we used en.base and one of en.target.{10, 40, 160, or 320} for training a multispeaker neural network common to all speakers per strategy. In the adaptation task, we used en.base for training a multispeaker neural network per strategy and adapted it to each target speaker included in en.target.*. In both the tasks, the evaluation was performed using target speakers included in en.target.*. This increased the number of models needed to be constructed but reduced the mismatch between the multispeaker and adaptation tasks so we could directly compare them. For the DNN-based acoustic model, we used a conventional multi-task learning neural network similar to our previous works [7,34]. The neural network maps linguistic features (depending on languages) to several acoustic features including 60-dimensional mel-cepstral coefficients, 25-dimensional band-limited aperiodicities, interpolated logarithm fundamental frequencies, and their dynamic counterpart. A voiced/unvoiced binary flag is also included. The neural network model has five feedforward layers each with 1024 neurons, followed by a linear layer to map to the desired dimensional output. All layers have the sigmoid activation function unless stated otherwise. We experimented with five strategies utilizing either scaling code, bias code, or both as shown in Table 2. Further, to investigate the impacts of different waveform generation methods, we used both a speaker-independent Wavenet vocoder [35,36] and the WORLD vocoder [37] for speech waveform generation . However, our Wavenet model is still under development and we experienced the collapse of generated speech problems, which is described in [38]. Objective evaluation We first evaluated the scaling code by itself in a nonlinear setup since, at the time of writing, using scaling code for multi-speaker speech synthesis has not been investigated. 
We changed the size of scaling codes from 1 to 128 to see how they impact the objective performance of the multi-speaker task in a similar way to experiments that we did on bias codes previously [7]. The multi-speaker models were trained using en.base and en.target.320 together. The objective evaluation results, including mel-cepstral distortion (MCD) in dB and F 0 root mean square error (F 0 RMSE) in Hz, are illustrated in Figure 4. We can see that both the distortions decrease when we increase the size of the scaling code. Next we evaluated multiple strategies described in Table 2 for the multi-speaker task in either nonlinear or linear setups. Again the multi-speaker models were trained using the en.base and en.target.320 data together. Figure 5 shows objective evaluation results of the strategies. If we look at the non-linear setups, we see that there are no obvious differences between these strategies. However, at least we can determine that the proposed scaling code can be used by itself without decreasing the performance. If we look at the linear setups, we can clearly see that the using the bias code by itself is a poor strategy for multi-speaker modeling. It resulted in much worse MCD even though its F 0 RMSE is comparable to other systems. In [39], Wang found out that the model structures required for mel-cepstrum and fundamental frequency are different. Our results also support this finding. Figure 6 shows objective evaluation results of the strategies in the adaptation task using different amounts of data. The first block indicated bias m corresponds to reference results in the multi-speaker task (i.e., systems where multi- speaker neural networks were trained using en.base and one of en.target.{10, 40, 160, or 320} and synthetic speech was generated using text of the test set of target speakers ) using the bias code in the nonlinear setup. All other results are adaptation results for the unseen speaker task. The amounts of adaptation data vary from 10 to 320. From this figure, we see that adaptation to the unseen speakers is more difficult than multi-speaker modeling. Moreover, while the results of multi-speaker modeling are improved significantly when we increase the amount of data, the adaptation results for the unseen speakers show marginal improvements when more data is available. This suggests that the proposed adaptation transformation needs to be generalized better. Another important pattern that we can see from the figure is that in terms of F 0 RMSE, all strategies in the linear setup outperform their nonlinear counterparts. Subjective evaluations Next we reproduced several selected strategies using the Japanese dataset. We doubled the size of speaker codes shown in Table 2 and chose strategies that showed reasonable improvements in the objective evaluation using the English dataset. The objective evaluation results using the Japanese corpus are shown in Figure 7, from which we can see the same trend as the result using the English one 3 . We used the Japanese systems and conducted a subjective listening test to see how participants perceived these differences. The listening test contained two sets of questions. In the first part, participants were asked to judge the naturalness of the presented speech sample using a five-point scale Fig. 7. Objective evaluation results of selected strategies in adaptation task using Japanese corpus. Like the English test, bias m shows reference results in the multi-speaker task using the bias code in the nonlinear setup. 
All other results are adaptation results. ranged from 1 (very unnatural) to 5 (very natural). In the second part, participants were asked to compare a speech sample of a system with recorded speech of the same speaker and judge if they are the same speaker or not using a four-point scale ranged from 1 (different, sure) to 4 (same, sure). This evaluation methodology is similar to our previous study [34]. In addition to synthetic speech generated from the proposed speech synthesis systems using the above selected strategies, we also evaluated recorded speech, WOLRD vocoded speech, and Wavenet vocoded speech for comparison. A large-scale listening test was done with 289 subjects. The statistical analysis was conducted using pairwise t-tests with a 95% confidence margin and Holm-Bonferroni compensation for multiple comparisons. Subjective evaluation results are presented in Figure 8. In the quality test, we can first see that participants judged all systems using our speaker-independent Wavenet vocoder samples to be worse than counterparts using the WORLD vocoder. This is inconsistent with other publication results and indicates that our Wavenet is not properly trained. For the future works, we could further fine-tune a part of the speaker-independent Wavenet model to stabilize the neuralnet vocoder [40,41]. However, unlike the quality test, the subjects judged synthetic speech using the Wavenet vocoder to be closer to the target speakers in the speaker similarity test although there are still large gaps between vocoded speech and synthetic speech. We can also see that a reference multi-speaker system marked as bias m using 100 utterances has the highest similarity score among the other systems, and this is consistent with the objective evaluation results. Regarding the adaptation to the unseen speakers, we could see that the proposed method using both the scaling and bias codes and its bottle- neck variant (in the linear setting) have better results than the adaptation method using the bias code in the nonlinear setting (which is our previous work) for both WORLD and Wavenet vocoders. This would be because of improved F0 adaptation, as we can see objectively in Figure 7. Regarding the quantity of the adaptation data, more data seems to slightly improve speaker similarity of synthetic speech in general but does not improve the perception of quality. In some cases, it makes the quality of synthetic speech slightly worse. CONCLUSIONS In this paper, we have explained several major existing adaptation frameworks for DNN speech synthesis and showed one generalized speaker-adaptive transformation. Further, we have factorized the proposed transformation on the basic of scaling and bias codes and investigated its variants such as bottleneck. From objective and subjective experiments, we showed that the proposed method, specifically the ones using both the scaling and bias codes in the linear setting, can reduce acoustic errors and improve subjective speaker similarity in the adaptation of unseen speakers . Moreover, our results clearly indicate that there are still large gaps between vocoded speech and synthetic speech in terms of speaker similarity and this clearly indicates that there is room for improving multispeaker modeling and speaker adaptation. Our future work includes comparing our method with other adaptation methods such as LHUC and SVD bottleneck speaker adaptation with low-rank approximation. 
Another interesting experiment we would like to see is the use of i-vector or d-vector [24] as a scaling code.
2,767
1807.11632
2950710378
Most neural-network based speaker-adaptive acoustic models for speech synthesis can be categorized into either layer-based or input-code approaches. Although both approaches have their own pros and cons, most existing works on speaker adaptation focus on improving one or the other. In this paper, after we first systematically overview the common principles of neural-network based speaker-adaptive models, we show that these approaches can be represented in a unified framework and can be generalized further. More specifically, we introduce the use of scaling and bias codes as generalized means for speaker-adaptive transformation. By utilizing these codes, we can create a more efficient factorized speaker-adaptive model and capture advantages of both approaches while reducing their disadvantages. The experiments show that the proposed method can improve the performance of speaker adaptation compared with speaker adaptation based on the conventional input code.
Learning Hidden Unit Contributions (LHUC) @cite_12 is an adaptation method that transforms the outputs of the activation function using a speaker-dependent diagonal transformation matrix, which significantly reduces the number of adapted parameters: where @math is a diagonal matrix for speaker @math , @math is an operation that extracts the diagonal elements of a @math matrix as a @math vector, and @math denotes element-wise multiplication of vectors. Since LHUC applies the transformation after the activation function of the current layer, the LHUC operation can equivalently be written at the next hidden layer as follows: From these equations, we see that the speaker-specific weight matrix @math is factorized as @math .
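Below is a minimal numpy sketch of the LHUC-style element-wise rescaling described above; the layer size, the random stand-in values, and the 2*sigmoid amplitude re-parameterization (a common choice in the LHUC literature) are assumptions for illustration rather than a reproduction of the cited implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lhuc_layer(h_prev, W_l, c_l, r_k):
    """LHUC-style adaptation: hidden activations are rescaled element-wise
    by speaker-specific amplitudes r_k, i.e. multiplied by diag(r_k)."""
    h = sigmoid(W_l @ h_prev + c_l)        # speaker-independent hidden layer
    return r_k * h                         # element-wise speaker-dependent scaling

# Illustrative usage with random stand-in values.
m = 1024
rng = np.random.default_rng(3)
h_prev = rng.standard_normal(m)
W_l = 0.01 * rng.standard_normal((m, m))
c_l = np.zeros(m)
# Amplitudes constrained to (0, 2) via 2*sigmoid, a common LHUC
# re-parameterization (assumption for illustration).
r_k = 2.0 * sigmoid(rng.standard_normal(m))
h_adapted = lhuc_layer(h_prev, W_l, c_l, r_k)
```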
{ "abstract": [ "This paper proposes a simple yet effective model-based neural network speaker adaptation technique that learns speaker-specific hidden unit contributions given adaptation data, without requiring any form of speaker-adaptive training, or labelled adaptation data. An additional amplitude parameter is defined for each hidden unit; the amplitude parameters are tied for each speaker, and are learned using unsupervised adaptation. We conducted experiments on the TED talks data, as used in the International Workshop on Spoken Language Translation (IWSLT) evaluations. Our results indicate that the approach can reduce word error rates on standard IWSLT test sets by about 8–15 relative compared to unadapted systems, with a further reduction of 4–6 relative when combined with feature-space maximum likelihood linear regression (fMLLR). The approach can be employed in most existing feed-forward neural network architectures, and we report results using various hidden unit activation functions: sigmoid, maxout, and rectifying linear units (ReLU)." ], "cite_N": [ "@cite_12" ], "mid": [ "2094147890" ] }
SCALING AND BIAS CODES FOR MODELING SPEAKER-ADAPTIVE DNN-BASED SPEECH SYNTHESIS SYSTEMS
Recent speaker-dependent speech synthesis systems can generate high-quality reading speech indistinguishable from natural human speech when their training data is recorded in a quality-controlled condition and have sufficient amount of data [1]. The speech synthesis community is currently trying to solve more challenging problems. A good example is multi-speaker speech synthesis and its adaptation [2,3,4,5]. Here multi-speaker synthesis means generating synthetic speech of multiple known speakers included in a training dataset using a common model, and adaptation means adapting the speaker-independent common model to unseen speakers and generating their speech. This speakeradaptive speech synthesis systems are expected to opens possibilities for a wide range of new applications for speech synthesis such as a customizable, user-specific voice interface and voice preservation for people with medical conditions involving voice losses. However, training the multi-speaker This work was partially supported by MEXT KAKENHI Grants (16H06302, 17H04687, 18H04120, and 18H04112). synthesis models and adapting them to unseen speakers are still challenging problems, and resulting models are far from perfect, especially when less than ideal datasets are used [6]. Most adaptation methods for neural network models can be described as either (a) fine-tuning a set of or all of parameters of speaker-independent network so it explains unseen speaker's data better or (b) factorizing a neural network into speaker-specific and common parts and estimating the speaker-specific components for the unseen speaker's data. The speaker-specific components may be composed by input codes (e.g. one-hot vector) [7], embedding vectors obtained externally (e.g. i-vector) [8], or latent variables (e.g. variational auto-encoder) [3,9,10]. Of course any of those speaker-specific components may be jointly optimized with the common parts (e.g. [7,10,11]). Although there are a lot of variants on multi-speaker modeling and adaptation, most approaches for augmenting the speaker-specific components into a neural network are equivalent to adapting a bias term of each hidden layer and this bias term is typically constant across all frames of all utterances. Although Wu et al. [12] and Nachmachi et al. [13] proposed frame-dependent components, these components are still bias adaptation and their underlying frameworks and concepts have mathematical similarities. In this paper we first systematically overview the common concepts of neural-network based speaker-adaptive models and show that these approaches can be represented in a unified framework. Further, we introduce a scaling code as an extended speaker-adaptive transformation. As its name indicates, this code introduces an additional scaling operation as an approximation to adaptation of weight matrices unlike the conventional deep neural network (DNN) adaptation approaches. Section 2 details relevant work. Section 3 describes our factorized speaker adaptation based on scaling and bias codes. Section 4 explains our experiments and shows both objective and subjective results. We conclude our work and describe the future direction for this method in Section 5. FACTORIZED SPEAKER TRANSFORMATION BASED ON SCALING AND BIAS CODES Scaling and bias codes The above approaches are obviously complementary. 
Our proposal, illustrated in Figure 1, is therefore the design of a new speaker transformation by combining the above two types of approaches and further factorizing its essential components on the basic of "scaling" and "bias" codes. The main idea is to explicitly transform both the weight matrix and the bias vector as: h l = f (A (k) l W l h l−1 + c l + b (k) l )(11)A (k) l = diag(W A l s A,(k) ) (12) b (k) l = W b l s b,(k)(13)W A W b W s A,(k) s b,(k) speaker-embeded table where A (k) l ∈ R m×m is a diagonal matrix for the scaling operation at the l-th layer. The matrix is further factorized into a speaker-independent projection matrix W A l ∈ R m×p and a scaling code vector s A,(k) ∈ R p×1 . diag is an operation to change a m × 1 vector into a diagonal m × m matrix. The speaker-specific bias term b (k) l is also factorized in the same way using W b l ∈ R m×q and s b,(k) ∈ R q×1 . As described previously, s b,(k) is basically equivalent to the conventional speaker code, but we call it as bias code here to better outline its property. These codes may have arbitrary lengths, but, p and q are usually chosen to be much smaller than m to reduce the number of free parameters further. Factorizing models explicitly and using lower-dimensional subspaces is a powerful concept used in various models (e.g. Heteroscedastic Linear Discriminant Analysis (HLDA) [26], subspace Gaussian mixture model [27]). The proposed factorization is somewhat similar to Factorize Hidden Layer (FHL) introduced by Samrakoon and Sim [20], but we focus on performing the scaling and bias adaptation simultaneously using lower dimensional vectors. A concept similar to scaling and bias codes was also investigated for ASR in [28,29], but instead of mapping the scaling and bias transformation from a common vector we use separated vectors as scaling and bias codes to give ourselves more degrees of freedom to design a speaker-adaptive architecture. If necessary, we may directly adapt A (k) l and b (k) l when the amount of adaptation data is sufficient. Extensions of the proposed method In this paper, we investigate two more strategies as extensions of the proposed method. The first strategy is to separately use the scaling and bias codes at different layers and to explicitly perform either scaling or bias operations only as illustrated by Figure 2-a. This is a special case of the proposed method. The second strategy is to combine the proposed method with other type of matrix decomposition. For example, in the work of Xue et al. [30], a weight matrix is decomposed into three linearly connected matrices using singular value decomposition (SVD). Therefore, instead of multiplying a scaling matrix to a weight matrix, we may first decompose the weight matrix into the three linearly connected matrices and use the proposed scaling matrix to approximate one of the decomposed matrices further as follows: h l = f (W (k) l h l−1 + c l + b (k) l + h l−1 )(14)W (k) l = U l A (k) l V l(15)A (k) l = diag(W A l s A,(k) )(16)b (k) l = W b l s b,(k)(17) where U l ∈ R m×n , V l ∈ R n×m and A (k) l ∈ R n×n with n m 1 . Note that residual connections are also added here. When we use this model for time-series speech data, the input varies at each time and the residual part becomes a timevariant bias term as h l,t = f (W We also investigate to which layers we should inject the proposed transformation and what kinds of activation functions should be used after the speaker transformation. 
More specifically, we investigate whether the proposed transformation should be used at intermediate hidden layers with nonlinear activation functions as shown in Figure 3-a or at a specific layer where all remaining operations are linear as shown in Figure 3-b. By analyzing this, we can understand whether the relationship between the proposed speaker transformation functions and generated acoustic features should be represented in a non-linear way like the former case, or in a linear one like the latter case. 2 EXPERIMENTS Experimental condition We use two speech corpora to evaluate our proposal: an English corpus containing 80 speakers, which is a subset of the VCTK [32,33], and an in-house Japanese speech corpus with over 250 speakers. The English corpus was used to objectively evaluate various aspects of our proposal while the Japanese corpus is used to reproduce the results and evaluate subjectively with native Japanese listeners. We split each corpora into the base and target sets as shown in Table 1 and conducted two tasks (multi-speaker and adaptation) 2 For the combination of the linear case with the strategy in Figure 2-a, which has operations at two different layers, we first used speaker transformation based on the bias code at a hidden layer with the non-linear activation functions and further used speaker transformation based on the scaling code at the next linear layer. This is technically a mix of linear and non-linear speaker transformations, but we included this in "the linear setup" in our experiments. as follows. In the multi-speaker task, we used en.base and one of en.target.{10, 40, 160, or 320} for training a multispeaker neural network common to all speakers per strategy. In the adaptation task, we used en.base for training a multispeaker neural network per strategy and adapted it to each target speaker included in en.target.*. In both the tasks, the evaluation was performed using target speakers included in en.target.*. This increased the number of models needed to be constructed but reduced the mismatch between the multispeaker and adaptation tasks so we could directly compare them. For the DNN-based acoustic model, we used a conventional multi-task learning neural network similar to our previous works [7,34]. The neural network maps linguistic features (depending on languages) to several acoustic features including 60-dimensional mel-cepstral coefficients, 25-dimensional band-limited aperiodicities, interpolated logarithm fundamental frequencies, and their dynamic counterpart. A voiced/unvoiced binary flag is also included. The neural network model has five feedforward layers each with 1024 neurons, followed by a linear layer to map to the desired dimensional output. All layers have the sigmoid activation function unless stated otherwise. We experimented with five strategies utilizing either scaling code, bias code, or both as shown in Table 2. Further, to investigate the impacts of different waveform generation methods, we used both a speaker-independent Wavenet vocoder [35,36] and the WORLD vocoder [37] for speech waveform generation . However, our Wavenet model is still under development and we experienced the collapse of generated speech problems, which is described in [38]. Objective evaluation We first evaluated the scaling code by itself in a nonlinear setup since, at the time of writing, using scaling code for multi-speaker speech synthesis has not been investigated. 
We changed the size of scaling codes from 1 to 128 to see how they impact the objective performance of the multi-speaker task in a similar way to experiments that we did on bias codes previously [7]. The multi-speaker models were trained using en.base and en.target.320 together. The objective evaluation results, including mel-cepstral distortion (MCD) in dB and F 0 root mean square error (F 0 RMSE) in Hz, are illustrated in Figure 4. We can see that both the distortions decrease when we increase the size of the scaling code. Next we evaluated multiple strategies described in Table 2 for the multi-speaker task in either nonlinear or linear setups. Again the multi-speaker models were trained using the en.base and en.target.320 data together. Figure 5 shows objective evaluation results of the strategies. If we look at the non-linear setups, we see that there are no obvious differences between these strategies. However, at least we can determine that the proposed scaling code can be used by itself without decreasing the performance. If we look at the linear setups, we can clearly see that the using the bias code by itself is a poor strategy for multi-speaker modeling. It resulted in much worse MCD even though its F 0 RMSE is comparable to other systems. In [39], Wang found out that the model structures required for mel-cepstrum and fundamental frequency are different. Our results also support this finding. Figure 6 shows objective evaluation results of the strategies in the adaptation task using different amounts of data. The first block indicated bias m corresponds to reference results in the multi-speaker task (i.e., systems where multi- speaker neural networks were trained using en.base and one of en.target.{10, 40, 160, or 320} and synthetic speech was generated using text of the test set of target speakers ) using the bias code in the nonlinear setup. All other results are adaptation results for the unseen speaker task. The amounts of adaptation data vary from 10 to 320. From this figure, we see that adaptation to the unseen speakers is more difficult than multi-speaker modeling. Moreover, while the results of multi-speaker modeling are improved significantly when we increase the amount of data, the adaptation results for the unseen speakers show marginal improvements when more data is available. This suggests that the proposed adaptation transformation needs to be generalized better. Another important pattern that we can see from the figure is that in terms of F 0 RMSE, all strategies in the linear setup outperform their nonlinear counterparts. Subjective evaluations Next we reproduced several selected strategies using the Japanese dataset. We doubled the size of speaker codes shown in Table 2 and chose strategies that showed reasonable improvements in the objective evaluation using the English dataset. The objective evaluation results using the Japanese corpus are shown in Figure 7, from which we can see the same trend as the result using the English one 3 . We used the Japanese systems and conducted a subjective listening test to see how participants perceived these differences. The listening test contained two sets of questions. In the first part, participants were asked to judge the naturalness of the presented speech sample using a five-point scale Fig. 7. Objective evaluation results of selected strategies in adaptation task using Japanese corpus. Like the English test, bias m shows reference results in the multi-speaker task using the bias code in the nonlinear setup. 
All other results are adaptation results. ranged from 1 (very unnatural) to 5 (very natural). In the second part, participants were asked to compare a speech sample of a system with recorded speech of the same speaker and judge if they are the same speaker or not using a four-point scale ranged from 1 (different, sure) to 4 (same, sure). This evaluation methodology is similar to our previous study [34]. In addition to synthetic speech generated from the proposed speech synthesis systems using the above selected strategies, we also evaluated recorded speech, WOLRD vocoded speech, and Wavenet vocoded speech for comparison. A large-scale listening test was done with 289 subjects. The statistical analysis was conducted using pairwise t-tests with a 95% confidence margin and Holm-Bonferroni compensation for multiple comparisons. Subjective evaluation results are presented in Figure 8. In the quality test, we can first see that participants judged all systems using our speaker-independent Wavenet vocoder samples to be worse than counterparts using the WORLD vocoder. This is inconsistent with other publication results and indicates that our Wavenet is not properly trained. For the future works, we could further fine-tune a part of the speaker-independent Wavenet model to stabilize the neuralnet vocoder [40,41]. However, unlike the quality test, the subjects judged synthetic speech using the Wavenet vocoder to be closer to the target speakers in the speaker similarity test although there are still large gaps between vocoded speech and synthetic speech. We can also see that a reference multi-speaker system marked as bias m using 100 utterances has the highest similarity score among the other systems, and this is consistent with the objective evaluation results. Regarding the adaptation to the unseen speakers, we could see that the proposed method using both the scaling and bias codes and its bottle- neck variant (in the linear setting) have better results than the adaptation method using the bias code in the nonlinear setting (which is our previous work) for both WORLD and Wavenet vocoders. This would be because of improved F0 adaptation, as we can see objectively in Figure 7. Regarding the quantity of the adaptation data, more data seems to slightly improve speaker similarity of synthetic speech in general but does not improve the perception of quality. In some cases, it makes the quality of synthetic speech slightly worse. CONCLUSIONS In this paper, we have explained several major existing adaptation frameworks for DNN speech synthesis and showed one generalized speaker-adaptive transformation. Further, we have factorized the proposed transformation on the basic of scaling and bias codes and investigated its variants such as bottleneck. From objective and subjective experiments, we showed that the proposed method, specifically the ones using both the scaling and bias codes in the linear setting, can reduce acoustic errors and improve subjective speaker similarity in the adaptation of unseen speakers . Moreover, our results clearly indicate that there are still large gaps between vocoded speech and synthetic speech in terms of speaker similarity and this clearly indicates that there is room for improving multispeaker modeling and speaker adaptation. Our future work includes comparing our method with other adaptation methods such as LHUC and SVD bottleneck speaker adaptation with low-rank approximation. 
Another interesting experiment we would like to see is the use of i-vector or d-vector [24] as a scaling code.
2,767
1807.11061
2883912069
Original and learnt clauses in Conflict-Driven Clause Learning (CDCL) SAT solvers often contain redundant literals. This may have a negative impact on performance because redundant literals may deteriorate both the effectiveness of Boolean constraint propagation and the quality of subsequent learnt clauses. To overcome this drawback, we propose a clause vivification approach that eliminates redundant literals by applying unit propagation. The proposed clause vivification is activated before the SAT solver triggers some selected restarts, and only affects a subset of original and learnt clauses, which are considered to be more relevant according to metrics like the literal block distance (LBD). Moreover, we conducted an empirical investigation with instances coming from the hard combinatorial and application categories of recent SAT competitions. The results show that a remarkable number of additional instances are solved when the proposed approach is incorporated into five of the best performing CDCL SAT solvers (Glucose, TC_Glucose, COMiniSatPS, MapleCOMSPS and MapleCOMSPS_LRB). More importantly, the empirical investigation includes an in-depth analysis of the effectiveness of clause vivification. It is worth mentioning that one of the SAT solvers described here was ranked first in the main track of SAT Competition 2017 thanks to the incorporation of the proposed clause vivification. That solver was further improved in this paper and won the bronze medal in the main track of SAT Competition 2018.
Eliminating redundant literals in clauses (see e.g. @cite_11 @cite_19 @cite_30 @cite_33 @cite_13 @cite_7 @cite_8 @cite_16 @cite_17 ) before and during the search is crucial for the performance of CDCL SAT solvers for several reasons: (i) shorter clauses need less memory; (ii) shorter clauses are easier to become unit, and thus increase the power of unit propagation; and (iii) shorter clauses can lead to shorter learnt clauses.
{ "abstract": [ "In this paper, we present a new way to preprocess Boolean formulae in Conjunctive Normal Form (CNF). In contrast to most of the current pre-processing techniques, our approach aims at improving the filtering power of the original clauses while producing a small number of additional and relevant clauses. More precisely, an incomplete redundancy check is performed on each original clauses through unit propagation, leading to either a sub-clause or to a new relevant one generated by the clause learning scheme. This preprocessor is empirically compared to the best existing one in terms of size reduction and the ability to improve a state-of-the-art satisfiability solver.", "Minimizing learned clauses is an effective technique to reduce memory usage and also speed up solving time. It has been implemented in MiniSat since 2005 and is now adopted by most modern SAT solvers in academia, even though it has not been described in the literature properly yet. With this paper we intend to close this gap and also provide a thorough experimental analysis of it's effectiveness for the first time.", "", "", "This work presents a novel strategy for improving SAT solver performance by using concurrency. Rather than aiming to parallelize search, we use concurrency to aid a conventional CDCL search procedure. More concretely, our work extends a conventional CDCL SAT solver with a second computation thread, which is solely used to strengthen the clauses learned by the solver. This provides a simple and natural way to exploit the availability of multi-core hardware. We have employed our technique to extend two well established solvers, MiniSAT and Glucose. Despite its conceptual simplicity the technique yields a significant improvement of those solvers' performances, in particular for unsatisfiable benchmarks. For such benchmarks an extensive empirical evaluation revealed a remarkably consistent reduction of the wall clock time required to determine unsatisfiability, as well as an ability to solve more benchmarks in the same CPU time. The proposed technique can be applied in combination with existing parallel SAT solving techniques, including both portfolio and search space splitting approaches. The approach presented here can thus be seen as orthogonal to those existing techniques.", "", "", "Applying pre- and inprocessing techniques to simplify CNF formulas both before and during search can considerably improve the performance of modern SAT solvers. These algorithms mostly aim at reducing the number of clauses, literals, and variables in the formula. However, to be worthwhile, it is necessary that their additional runtime does not exceed the runtime saved during the subsequent SAT solver execution. In this paper we investigate the efficiency and the practicability of selected simplification algorithms for CDCL-based SAT solving. We first analyze them by means of their expected impact on the CNF formula and SAT solving at all. While testing them on real-world and combinatorial SAT instances, we show which techniques and combinations of them yield a desirable speedup and which ones should be avoided.", "Preprocessing SAT instances can reduce their size considerably. We combine variable elimination with subsumption and self-subsuming resolution, and show that these techniques not only shrink the formula further than previous preprocessing efforts based on variable elimination, but also decrease runtime of SAT solvers substantially for typical industrial SAT problems. 
We discuss critical implementation details that make the reduction procedure fast enough to be practical." ], "cite_N": [ "@cite_30", "@cite_33", "@cite_7", "@cite_8", "@cite_17", "@cite_19", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "1681144494", "1480459634", "", "", "97284975", "", "", "1830554976", "1950282396" ] }
Clause Vivification by Unit Propagation in CDCL SAT Solvers
In propositional logic, a variable x may take the truth value 0 (false) or 1 (true). A literal l is a variable x or its negation ¬x, a clause is a disjunction of literals, a CNF formula φ is a conjunction of clauses, and the size of a clause is the number of literals in it. An assignment of truth values to the propositional variables satisfies a literal x if x takes the value 1 and satisfies a literal ¬x if x takes the value 0, satisfies a clause if it satisfies at least one of its literals, and satisfies a CNF formula if it satisfies all of its clauses. The empty clause, denoted by 2, contains no literals and is unsatisfiable; it represents a conflict. A unit clause contains exactly one literal and is satisfied by assigning the appropriate truth value to the variable. An assignment for a CNF formula φ is complete if each variable in φ has been assigned a value; otherwise, it is considered partial. The SAT problem for a CNF formula φ is to find an assignment to the variables that satisfies all clauses of φ. One of the interests of SAT is that many combinatorial problems can easily be encoded as a SAT problem. For example, Cook showed that any NP problem can be encoded to SAT in polynomial time [1], founding the theory of NP-completeness. Because of the high expressive power of SAT and the progress made on SAT solving techniques, modern Conflict-Driven Clause Learning (CDCL) SAT solvers are routinely used as core solving engines in many real-world applications. Their ability to solve challenging problems comes from the combination of different components: variable selection heuristics, unit clause propagation, clause learning, restarts, clause database management, data structures, pre-and inprocessing. Formula simplification techniques applied during preprocessing have proven useful in enabling efficient SAT solving for real-world application domains (e.g. [2,3,4]). The most successful preprocessing techniques include variants of bounded variable elimination, addition or elimination of redundant clauses, detection of subsumed clauses and suitable combinations of them. They aim mostly at reducing the number of clauses, literals and variables in the input formula. More recently, interleaving formula simplification techniques with CDCL search has provided significant performance improvements. Among such inprocessing techniques [5], we mention local and recursive clause minimization [6,7], which remove redundant literals from learnt clauses immediately after their creation; clause vivification in a concurrent context [8,9]; and on-the-fly clause subsumption [10,11], which efficiently removes clauses subsumed by the resolvents derived during clause learning. In this paper, we focus on clause vivification, which consists in eliminating redundant literals from clauses. A clause is a logical consequence of a CNF φ if every solution of φ satisfies it. Let C = l 1 ∨ l 2 ∨ · · · ∨ l k be a clause of φ, l i be a literal of C and C \ l i be the clause obtained from C after removing l i . If C \ l i is a logical consequence of φ, then l i is said to be redundant in C and should be eliminated from C. Indeed, from a problem solving pespective, C \ l i is always better than C when l i is redundant in C. Nevertheless, identifying a redundant literal in a clause C of φ is NP-hard in general. So, in this paper, we restrict clause vivification to the elimination of redundant literals that can be identified by applying unit clause propagation (or simply unit propagation), as described below. 
Solving a SAT problem φ amounts to satisfy every clause of φ. We pay special attention to the unit clauses in φ (if any). Satisfying a unit clause l implies to falsify the literal ¬l, which should be removed from each clause C containing it because C cannot satisfied by ¬l. If C becomes unit (C = l ), then l is satisfied and ¬l is removed from the remaining clauses, and so on. This process is called unit propagation and is denoted by UP(φ), and continues until there is no unit clause in φ or a clause becomes empty. In the latter case, φ is proved to be unsatisfiable. For any literal l i in C, if UP(φ ∪ {¬l 1 , . . . , ¬l i−1 , ¬l i+1 , . . . , ¬l k }) results in the empty clause, then C \ l i is a logical consequence of φ and l i is a redundant literal. Thus, C should replaced by C \ l i in φ. Clause vivification was independently proposed in [12] and [4] to preprocess the input formula in SAT solvers 1 . Unfortunately, it is not easy to make clause vivification effective because of the cost of unit propagation. This can be illustrated by the evolution of Lingeling, a frequently awarded solver in SAT competitions 2 : Clause vivification was implemented in Lingeling 271 [13] in 2010 but was removed in 2012 because it did not have any observable impact on the runtime of the tested benchmarks [14]. In the descriptions of Lingeling in subsequent SAT competitions [15,16,17,18], clause vivification is not even mentioned. Although clause vivification was proposed as a preprocessing technique in [4], the authors mentioned inprocessing clause vivification as future work. Actually, making inprocessing clause vivification effective is much harder. We quote a statement from three leading experts in the field to partly appreciate this hardness [5]: "However, developing and implementing sound inprocessing solvers in the presence of a wide range of different simplification techniques is highly non-trivial. It requires in-depth understanding on how different techniques can be combined together and interleaved with the CDCL algorithm in a satisfiabilitypreserving way." A question that should be addressed in inprocessing clause vivification is to determine whether or not to apply vivification to learnt clauses, because each learnt clauses is a logical consequence of the input formula and can be removed when the clause database is reduced. To the best of our knowledge, none of the awarded CDCL SAT solvers in SAT Competition 2016 used clause vivification to eliminate redundant literals in learnt clauses by applying unit propagation. In particular, the solver Riss6, the silver medal winner of the main track, disabled the learnt clause vivification used in Riss 5.05, because it turned out to be ineffective for formulas of more recent years [19]. The main purpose of this paper is to design an effective and efficient approach to vivifying both original clauses and learnt clauses in pre-and inprocessing. Because of the difficulty stated above, we designed the proposed approach in an incremental way. Firstly, we limited vivification to inprocessing and only vivified a subset of learnt clauses selected using a heuristic at some carefully defined moments during the search process. Moreover, each learnt clause was vivified at most once. This preliminary approach was presented in [20] and implemented in five of the best performing CDCL SAT solvers: Glucose [21], COMiniSatPS [22], MapleCOMSPS [23], MapleCOMSPS LRB [24] and TC Glucose [25]. 
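To make the UP-based redundancy test described above concrete, here is a small self-contained Python sketch (our own illustration, not code from any of the solvers mentioned; clauses are plain lists of DIMACS-style integer literals and the propagation is deliberately naive rather than watched-literal based):

```python
def unit_propagate(clauses, assumptions):
    """Repeatedly propagate unit clauses under the given assumptions.

    clauses     : list of clauses, each a list of non-zero ints (DIMACS literals)
    assumptions : iterable of literals assumed to be true
    Returns the set of asserted literals, or None if the empty clause is derived.
    """
    asserted = set(assumptions)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in asserted for lit in clause):
                continue                              # clause already satisfied
            remaining = [lit for lit in clause if -lit not in asserted]
            if not remaining:
                return None                           # empty clause: conflict
            if len(remaining) == 1:                   # unit clause: assert its literal
                asserted.add(remaining[0])
                changed = True
    return asserted

def is_redundant(clauses, clause, lit):
    """lit is redundant in clause if UP on the negations of the other literals conflicts."""
    others = [l for l in clause if l != lit]
    return unit_propagate(clauses, {-l for l in others}) is None

# (x1 v x2), (-x1 v x2), (x2 v x3): x3 is redundant in (x2 v x3), since UP on {-x2} conflicts.
phi = [[1, 2], [-1, 2], [2, 3]]
print(is_redundant(phi, [2, 3], 3))   # True
```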
The experimental results show that the proposed vivification allows these solvers to solve a remarkable number of additional instances coming from the hard combinatorial and application categories of the SAT Competition 2014 and 2016. More importantly, we submitted four solvers based on our approach to SAT competition 2017 -Maple LCM and three variants with some other techniques -and won the gold medal of the main track. The four solvers solved 20, 18, 16 and 10 instances more than MapleCOMSPS LRB VSIDS 2, the first solver without our approach, in the main track consisting of 350 industrial and hard combinatorial instances. In addition, the best two solvers in the main track of SAT competition 2018, Maple LCM Dist ChronoBT [26] and Maple LCM Scavel [27], are developed from Maple LCM by integrating some other techniques. Furthermore, the best 13 solvers of the main track of SAT Competition 2018, developed by different author sets, all use our learnt clause vivification approach [28]. Secondly, in this paper, we extend the above preliminary approach after conducting a more extensive investigation that includes an in-depth analysis of the most important steps of the vivification process. In fact, our investigation indicates that the most crucial point is to determine which clauses must be vivified rather than to decide when vivification should be activated. So, we define a new clause selection heuristic that extends the vivification approach to original clauses and allows some clauses to be vivified more than once. Implemented in Maple LCM, the extended approach allows it to solve 34 more instances among the 1450 instances in the main track of the SAT Competition 2014, 2016 and 2017 (of which 20 more instances among the 350 instances in the main track of the SAT Competition 2017). The new solver, called Maple CV+ in this paper, participated in SAT competition 2018 under the name Maple CM [29] and won the bronze medal of the main track. Note that the benchmarks used in SAT competition were not known before the solvers were submitted. The main contribution of this paper is that it definitely shows the benefits of vivifying original and learnt clauses during the search process. Furthermore, the empirical results indicate that the proposed approach is robust and is not necessary to carefully adjust parameters to obtain significant gains, even with an implementation that is not fully optimized. We quote a statement of two leading SAT solver developers [30] to better appreciate the significance of our contributions: "We must also say, as a preliminary, that improving SAT solvers is often a cruel world. To give an idea, improving a solver by solving at least ten more instances (on a fixed set of benchmarks of a competition) is generally showing a critical new feature. In general, the winner of a competition is decided based on a couple of additional solved benchmarks." The above statement was also quoted in [31] to appreciate the advances brought about by the CHB branching heuristic. The paper is organized as follows: Section 2 gives some basic concepts about propositional satisfiability and CDCL SAT solvers. Section 3 presents some related works on elimination of redundant literals. Section 4 describes our learnt clause vivification approach, as well as how it is implemented in different CDCL SAT solvers. Section 5 reports on the in-depth empirical investigation of the proposed clause vivification approach. 
Section 6 extends clause vivification to original clauses during the search process, defines the conditions to re-vivify learnt and original clauses, and assesses the performance of these extensions. Section 7 contains the concluding remarks. Preliminaries Consider two clauses C and C. If every literal of C is also a literal of C, then C is a sub-clause of C. A clause C subsumes a clause C iff C is a sub-clause of C, because when C is satisfied, C is also necessarily satisfied. For any literal l, it holds that l ≡ ¬¬l. A CNF formula is also represented as a set of clauses. A clause C is redundant in a CNF formula φ if C is a logical consequence of φ \ {C}. Two CNF formulas φ and φ are equivalent if they are satisfied by the same assignments. If C is a logical consequence of φ and C is a sub-clause of a clause C of φ, then replacing C by C in φ results in a formula equivalent to φ that is easier to solve, because a solver considers fewer possibilities to satisfy C . Given a CNF formula φ and a literal l, we define φ| l to be the CNF formula resulting from φ after removing the clauses containing an occurrence of l and all the occurrences of ¬l. Unit Propagation (UP) can be recursively defined using the following two rules: (R1) U P (φ) = φ if φ does not contain any unit clause; and (R2) U P (φ) = U P (φ| l ) if there is a unit clause l in φ. Literal l is asserted by U P (φ) if U P (φ| l ) is computed. In the sequel, U P (φ) denotes the CNF formula obtained from φ after repeatedly applying R2 until there is no unit clause or the empty clause is derived. Concretely, U P (φ) = 2 if the empty clause is derived. Otherwise, U P (φ) denotes the CNF formula in which all unit clauses have been propagated using R2. When U P (φ ∪ {l}) is computed, we say that l is propagated in φ. The key idea behind vivifying a clause C in a CNF formula φ is to identify a sub-clause C = l 1 ∨ l 2 ∨ · · · ∨ l i of C such that UP(φ ∪ {¬l 1 , ¬l 2 , . . . , ¬l i }) = 2. In this case, C is a logical consequence of φ, because UP(φ ∪ {¬l 1 , ¬l 2 , . . . , ¬l i }) = 2 means that C cannot be falsified when φ is satisfied. Hence, we can replace C by C in φ. In this way, we eliminate redundant literals from C and obtain an easier formula to solve. A CDCL SAT solver [32,33] performs a non-chronological backtrack search in the space of partial truth assignments. Concretely, the solver repeatedly picks a decision literal l i (for i = 1, 2, . . .) and applies unit propagation in φ ∪ {l 1 , l 2 , . . . , l i } (i.e., it computes U P (φ ∪ {l 1 , l 2 , . . . , l i })) until the empty formula or the empty clause are derived. If the empty clause is derived, the reasons of the conflict are analyzed and a clause (nogood) is learnt using a particular method, usually the First UIP (Unique Implication Point) scheme [34]. The learnt clause is then added to the clause database. Since too many learnt clauses slow down the solver and may overflow the available memory, the solver periodically removes a subset of learnt clauses using a particular clause deletion strategy. Let φ be a CNF formula, let l i be the i th decision literal picked by a CDCL SAT solver, and let φ i = U P (φ ∪ {l 1 , l 2 , . . . , l i }). We say that l i and all the literals asserted when computing U P (φ i−1 ∪ {l i }) belong to level i. Literals asserted in U P (φ) do not depend on any decision literal and form level 0. 
When UP derives the empty clause and a learnt clause is extracted from the conflict analysis, the solver cancels the literal assertions in reverse order until the level where the learnt clause contains one non-asserted literal, and continues the search from that level after propagating the non-asserted literal. Under certain conditions, the solver cancels the literal assertions until level 0 and restarts the search from level 0. Algorithm 1 in Section 4 depicts a generic CDCL SAT solver. The literals of a learnt clause C are partitioned w.r.t. their assertion level. The number of sets in the partition is called the Literal Block Distance (LBD) of C [21]. As shown in [21], LBD measures the quality of learnt clauses. Clauses with small LBD values are considered to be more relevant. The best performing CDCL SAT solvers of the last SAT competitions use LBD as a measure of the quality of learnt clauses to determine which clauses must be removed or retained. Moreover, solvers like Glucose and its descendants use the LBD of recent learnt clauses to decide when a restart must be triggered. A CDCL SAT solver essentially constructs a (directed acyclic) implication graph G as follows. Given a CNF φ without unit clauses, a decision literal is a vertex of G. If there is a clause ¬l_1 ∨ ¬l_2 ∨ · · · ∨ ¬l_{k−1} ∨ l_k that becomes unit because l_1, l_2, . . . , l_{k−1} are satisfied (i.e., l_1, l_2, . . . , l_{k−1} are in G), then the vertex l_k and the arrows (l_1, l_k), (l_2, l_k), . . . , (l_{k−1}, l_k) are added to the graph. Figure 1 shows an example of an implication graph. Following the terminology of [35], the vertex name a@b, where a and b are positive integers, means that literal l_a is asserted at decision level b. A variable x_i is associated with two literals: ¬x_i = l_{2i−1} and x_i = l_{2i}. Thus, an even number a in the graph represents a positive literal and an odd number a represents a negative literal. Therefore, l_a = x_{a/2} if a is even and l_a = ¬x_{(a+1)/2} if a is odd. A vertex in an implication graph represents a satisfied literal l and also identifies a clause which is the reason for the satisfaction of l. A vertex without any predecessor identifies a unit clause (i.e., a decision literal). For example, vertex 30@4 in Figure 1 identifies the unit clause l_30. A vertex with incoming arrows identifies a non-unit clause. For example, vertex 140@4 identifies, together with its incoming arrows, the clause ¬l_8 ∨ ¬l_58 ∨ ¬l_83 ∨ ¬l_100 ∨ l_140, which is the reason for satisfying l_140. Note that the predecessors are negated in the clause, because ¬l_8 ∨ ¬l_58 ∨ ¬l_83 ∨ ¬l_100 ∨ l_140 ≡ l_8 ∧ l_58 ∧ l_83 ∧ l_100 → l_140. A clause vivification procedure executing UP(φ ∪ {¬l_1, ¬l_2, . . . , ¬l_i}) also constructs an implication graph, where the literals in {¬l_1, ¬l_2, . . . , ¬l_i} are considered as successive decision literals. The length of a path between two vertices in an implication graph is the number of arrows in the path, and the distance between two vertices is the length of the shortest path between them. A UIP in an implication graph deriving a conflict is a vertex through which all paths from the last decision literal to the conflict go.

[Figure 1: A complete implication graph. The number in parentheses below a vertex represents the distance from the vertex to 2 (the conflict). The arrows involving at least one vertex of a lower level are dotted.]

For
example, the implication graph in Figure 1 has four levels, the last decision literal is l 30 , and there are two UIPs: l 30 and l 45 , of which the first UIP (counting from the conflict) is l 45 . Using the first UIP scheme, the learnt clause ¬l 45 ∨ ¬l 8 ∨ ¬l 5 ∨ ¬l 16 ∨ ¬l 11 is derived from the implication graph, consisting of the negation of all literals of a lower level whose distance to a literal of the highest level is 1, together with the negation of the first UIP. The learnt clause is recorded to avoid the re-construction of the implication graph when the literals in the learnt clause are falsified, because the learnt clause is already falsified in this case. A clause plays a role in a search process only when it becomes unit. This fact is exploited in the two-literal watching technique [33] to speed up UP. Using this technique, a solver watches only two literals of each clause C, and does nothing on C when other literals of C are satisfied or falsified, because when C becomes unit, one of the two watched literals is necessarily falsified. Concretely, when one of the two watched literals is falsified, the technique inspects C to see if it becomes unit or falsified. If not, the solver replaces the falsified literal by another non-falsified literal and watches it instead. In this way, backtracking just needs to cancel the variable assignments. The Proposed Clause Vivification Approach We first describe the general principle of the proposed approach, and then present its implementation in the solvers Glucose, TC Glucose, COMiniSatPS, MapleCOMSPS and MapleCOMSPS LRB. General Principles In order to make inprocessing clause vivification effective and efficient, we address the following questions: 1. When should we activate clause vivification? Clause vivification should be activated at level 0 to ensure that the vivification is independent of any branching decision. In other words, it should be activated upon a restart. Hence, the relevant questions are: should we activate clause vivification in every restart? If not, how do we determine the restarts upon which we activate clause vivification? 2. Should we vivify each clause? If not, which clauses should be vivified? 3. Clause vivification depends on the ordering selected to propagate the negation of the literals of a clause; different orderings may derive vivified clauses of different lengths. What is the best order to propagate the literals of a clause when we apply vivification? The most satisfactory answers to these questions may depend on other techniques implemented in the solver. Nevertheless, we want to argue the following general principles for guiding the implementation of inprocessing clause vivification: 1. It is not necessary to activate clause vivification at each restart. In fact, the number of new learnt clauses per restart may not be sufficient to easily deduce a conflict by unit propagation. Roughly, we can say that it depends on the number of clauses which were learnt since the last learnt clause vivification (nbN ewLearnts), and the number of clause vivifications performed so far (σ). We define later function liveRestart(nbN ewLearnts, σ) to determine whether or not clause vivification has to be activated at the beginning of a restart. 2. Chanseok Oh ([22]) demonstrated empirically that learnt clauses with high LBD values are not very useful to solve practical SAT instances. 
Indeed, the LBD of a learnt clause is correlated with the number of decisions needed to falsify the clause, meaning that it is much harder to reduce the LBD of a learnt clause than to reduce its size. Moreover, clauses with high LBD values are generally long and need more unit propagations to be vivified. In Section 5, we will provide empirical evidence that vivifying clauses with high LBD values is very costly and useless. Therefore, we propose to use LBD values as a measure to determine whether or not a clause should be vivified, and the proposal is to vivify clauses with small LBD values. In practice, we define the function liveClause(C) to determine whether or not the clause C has to be vivified.

3. We should propagate the negation of the literals of a clause C in their current ordering to vivify the clause. In the next subsection, we will explain why this is probably the best ordering, which is also confirmed by the reported experimental investigation. Anyway, we assume the existence of the function sort(C) for convenience, which sorts the literals of clause C before applying clause vivification to C.

The above general principles can be summarized by saying that clause vivification should be applied to selected relevant clauses when a restart is triggered, if a considerable number of conflicts have been detected since the last clause vivification. However, we will provide evidence that the most crucial aspect is to determine which clauses should be vivified or re-vivified rather than when they should be vivified or re-vivified. Algorithm 1 is a generic CDCL SAT solver; at each decision it picks a non-assigned variable x using some heuristic and adds the unit clause x or ¬x to φ according to a polarity heuristic such as phase saving. It calls the function vivifyIfPromising(φ, nbNewLearnts, σ) (i.e., Algorithm 2) at the beginning of each restart to apply clause vivification if the function liveRestart(nbNewLearnts, σ) returns true. Function liveRestart(nbNewLearnts, σ) will be defined when Algorithm 2 is implemented in a real CDCL SAT solver. Function vivify(φ), which realizes clause vivification in φ, is defined in Algorithm 3. In Algorithm 3, function liveClause(C) will be defined when Algorithm 3 is implemented in a real CDCL SAT solver, and function sort(C) will be defined in the next subsection. Given a clause C = l_1 ∨ l_2 ∨ · · · ∨ l_k such that liveClause(C) is true, Algorithm 3 (i.e., function vivify(φ)) applies the following simplification rules, using the same data structures and unit propagation algorithm as the CDCL SAT solver:

1. Rule 1: If UP(φ ∪ {¬l_1, . . . , ¬l_{i−1}}) deduces ¬l_i for some i ≤ k, then the clause l_1 ∨ · · · ∨ l_{i−1} ∨ ¬l_i is a logical consequence of φ. The algorithm removes l_i from C and tries to further vivify l_1 ∨ · · · ∨ l_{i−1} ∨ l_{i+1} ∨ · · · ∨ l_k.
2. Rule 2: If UP(φ ∪ {¬l_1, . . . , ¬l_{i−1}}) deduces l_i for some i ≤ k, then the clause l_1 ∨ · · · ∨ l_{i−1} ∨ l_i is a logical consequence of φ. Hence, C could be replaced with the clause l_1 ∨ · · · ∨ l_{i−1} ∨ l_i because it subsumes C. However, the algorithm replaces C with a clause obtained after analyzing the implication graph that allowed l_i to be derived. Let R be the set of literals of the reason clause of l_i. Note that all literals in R but l_i are false. The algorithm replaces l_i with ¬l_i in R to obtain a set R′ in which all literals are false. Then, the algorithm executes the function conflAnalysis(φ, ¬C′ ∪ {¬l_i}, R′), where ¬C′ = {¬l_1, ¬l_2, . . . , ¬l_{i−1}} ⊆ {¬l_1, ¬l_2, . . . , ¬l_k} is the set of negated literals of C already propagated, which retraces the implication graph from the literals in R′ back to the literals of ¬C′ ∪ {¬l_i}, in order to collect the literals of ¬C′ ∪ {¬l_i} from which there is a path to a literal of R′ in the implication graph. The function returns the disjunction of the negations of the collected literals. Algorithm 4 implements the function conflAnalysis(φ, ¬C′ ∪ {¬l_i}, R′). Note that literals in ¬C′ ∪ {¬l_i} do not have any reason clause, but other asserted literals do. A literal l of ¬C′ ∪ {¬l_i} such that seen[l] == 0 in line 5 of Algorithm 4 is a literal from which no path exists to the conflict represented by R′ and will not be collected in D′. For example, in Figure 1, ¬C′ ∪ {¬l_i} = {l_1, l_8, l_20, l_30}, there is no path from l_20 to 2, and the function conflAnalysis(φ, ¬C′ ∪ {¬l_i}, R′) returns ¬l_1 ∨ ¬l_8 ∨ ¬l_30.

3. If UP(φ ∪ {¬l_1, . . . , ¬l_{i−1}}) deduces neither l_i nor ¬l_i, we distinguish two cases:

(a) Rule 3: If UP(φ ∪ {¬l_1, . . . , ¬l_i}) = 2, then φ ∪ {¬l_1, . . . , ¬l_i} is unsatisfiable and the clause l_1 ∨ · · · ∨ l_i is a logical consequence of φ and could replace C. However, as before, letting R be the set of literals of the falsified clause, the algorithm replaces C with the disjunction of a subset of the negated literals of ¬C′ ∪ {¬l_i} returned by the function conflAnalysis(φ, ¬C′ ∪ {¬l_i}, R), which is a sub-clause of l_1 ∨ · · · ∨ l_i.

(b) Rule 4: If UP(φ ∪ {¬l_1, . . . , ¬l_i}) ≠ 2, then the literal l_i is added to the working clause C′, and the algorithm continues with the literal l_{i+1}.

Observe that Rule 2 and Rule 3 cannot both be applied to a clause C during the execution of Algorithm 3, whereas Rule 1 and Rule 4 can be combined with Rule 2 and Rule 3. Algorithm 3 puts these rules together:

Algorithm 3: vivify(φ): vivifying clauses in φ
Input: φ: a CNF formula
Output: φ with some simplified clauses
begin
  foreach C = l_1 ∨ · · · ∨ l_k ∈ φ do
    if !liveClause(C) then continue;
    C ← sort(C); C′ ← ∅;                                 /* C′ will be the vivified clause */
    for i := 1 to k do
      if l_i is false then continue;                      /* Rule 1 */
      if l_i is true then
        copy the literals of the reason clause of l_i into R;
        R′ ← (R \ {l_i}) ∪ {¬l_i};
        C′ ← conflAnalysis(φ, ¬C′ ∪ {¬l_i}, R′);          /* Rule 2: call Algorithm 4 */
        break;
      else if (R ← UP(φ ∪ ¬C′ ∪ {¬l_i})) is a falsified clause then
        C′ ← conflAnalysis(φ, ¬C′ ∪ {¬l_i}, R);           /* Rule 3: call Algorithm 4 */
        break;
      else
        C′ ← C′ ∨ l_i;                                    /* Rule 4: add l_i to the clause C′ */
    φ ← (φ \ {C}) ∪ {C′};
  return φ;
end

The vivification function vivify(φ) proposed in this paper is a variant of the vivification function of Piette et al. implemented in the ReVivAl preprocessor [4]. The differences between the two functions mainly come from the fact that the vivification function of Piette et al.
is proposed for preprocessing while our function is designed for In fact, re-checking previously vivified clauses can be done in preprocessing but would be too costly during inprocessing. • The approach of Piette et al. orders the literals of clauses using a MOMS-style heuristic, while our approach will use the existing order of literals in the clauses. In fact, as we will explain in the next subsection, the two-literal watching technique in modern CDCL SAT solvers continuously changes the order of the literals in a clause during the search, possibly favouring the success of the vivification function, so that inprocessing vivification does not need a MOMS-style heuristic to re-order the literals of a clause. • The third rule of Piette et al. uses the first UIP scheme to extract a nogood, while our Rule 2 and Rule 3 derive a sub-clause by inspecting the complete implication graph. In preprocessing, one can choose between using the first UIP scheme or inspect the complete implication graph. Piette et al. mention the idea of inspecting the complete implication graph but did not apply it for efficiency reason. However, during inprocessing, one must inspect the complete implication graph to generate a sub-clause of a clause C to substitute C. In fact, the purpose of inprocessing vivification is to simplify the formula, but the first UIP scheme gives a nogood which often is not a sub-clause of C and has to be added as a redundant clause, increasing the size of the formula. • The first rule of Piette et al. does not use any conflict analysis to extract an even smaller sub-clause, which may not be a problem in preprocessing because C can be re-vivified. In the same situation, our Rule 2 inspects the complete implication graph to generate a sub-clause as small as possible. This avoids to re-vivify the same clause during inprocessing, where the cost is significant. The implementation of Algorithm 3 in a particular solver needs the definition of the functions liveClause(C) and sort(C) for that solver. We will define functions liveRestart(nbN ewLearnts, σ) and liveClause(C) for five of the best performing state-of-the-art solvers. A constraint we impose for now is that liveClause(C) can be true only if C is a learnt clause that was never vivified before. This constraint will be later removed in Section 6, where it will be allowed to vivify both original and learnt clauses, as well as re-vivify clauses under some conditions. As for function sort(C), it simply returns C in its current literal order. The next subsection analyzes the reason of this choice. The Current Literal Order in a Clause When a SAT solver derives a conflict in the current level, it derives a learnt clause by retracing the implication graph from the conflict until a literal of a lower level or the first UIP in each path. In state-of-the-art solvers such as MiniSat and its descendants, the implication graph is retraced from the conflict in a breadth-first manner, so that the literals are put in the learnt clause in increasing order of their distance to the conflict, except the two first literals that will be watched after backtracking: the first literal should be the negation of the first UIP and the second literal should be a literal of the second highest level in the learnt clause. For example, the learnt clause derived from the implication graph in Figure 1 is ¬l 45 ∨ ¬l 8 ∨ ¬l 5 ∨ ¬l 16 ∨ ¬l 11 (in this order). 
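Before returning to the question of literal order, the following Python sketch (again our own, deliberately simplified illustration, reusing the naive unit_propagate helper from the earlier snippet) mirrors the loop of Algorithm 3 over the literals of a clause in their current order; for Rules 2 and 3 it simply returns the truncated clause instead of calling the conflict analysis of Algorithm 4, and it re-runs propagation from scratch at every step, which a real solver would never do:

```python
def vivify_clause(clauses, clause):
    """Simplified vivification of one clause, applying Rules 1-4 of Algorithm 3.

    Returns a sub-clause of `clause` that is a logical consequence of `clauses`.
    Rules 2 and 3 are approximated by truncation instead of conflict analysis.
    """
    vivified = []                     # the working clause C'
    assumed = set()                   # negations of the literals kept so far
    for lit in clause:
        asserted = unit_propagate(clauses, assumed)
        if asserted is None:          # the formula already entails the kept literals
            return vivified
        if -lit in asserted:          # Rule 1: lit is falsified, drop it
            continue
        if lit in asserted:           # Rule 2: lit is implied, stop here
            return vivified + [lit]
        if unit_propagate(clauses, assumed | {-lit}) is None:
            return vivified + [lit]   # Rule 3: assuming -lit yields a conflict
        vivified.append(lit)          # Rule 4: keep lit and continue
        assumed.add(-lit)
    return vivified

phi = [[1, 2], [-1, 2], [2, 3]]
print(vivify_clause(phi, [2, 3]))     # [2]: the clause (x2 v x3) is vivified to (x2)
```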
Detecting a redundant literal in a clause C using Algorithm 3 means that Algorithm 3 can derive a conflict without needing to propagate the negation of all literals of C. This happens when propagating the negation of a subset of literals of C already imply the literals in the paths from the negation of the other literals of C to the conflict. For example, in the learnt clause ¬l 45 ∨ ¬l 8 ∨ ¬l 5 ∨ ¬l 16 ∨ ¬l 11 derived from Figure 1, if propagating l 45 and l 8 already asserts l 140 or l 100 , which are in the path from l 16 to the conflict, then clause vivification detects that ¬l 16 is redundant in the learnt clause. Consequently, a reasonable hypothesis is that propagating in priority literals closer to the conflict allows to detect more easily redundant literals. In fact, other things being equal, the longer is a path, the higher is the probability that the path contains a literal that can be asserted by propagating other literals. Therefore, the original literal order of a learnt clause C, which roughly is in the increasing order of their distance to the conflict, is suitable for Algorithm 3 to detect redundant literals. Nevertheless, the literal order of a (learnt or original) clause is continuously changed during the search, if the solver uses the two-literal watching technique and always watches the first two literals of the clause. We argue that these changes increase the probability that Algorithm 3 detects a redundant literal. To see this, we present below how the changes are made in a clause C during the search when one of the first two literals in C is falsified (no change is made in C in any other case). 1. If the other watched literal is satisfied, no change is made in C; 2. Otherwise, let C = l 1 ∨ l 2 ∨ l 3 ∨ · · · ∨ l k . The falsified literal is placed to be l 2 (by exchanging with l 1 if necessary). If C becomes unit or empty, no further change is made on C. Otherwise, assume that l i (i > 2) is the first non-falsified literal of C. Literals l 2 and l i are exchanged so that l i is watched in the place of l 2 . After the exchange, C becomes l 1 ∨ l i ∨ l 3 ∨ · · · ∨ l i−1 ∨ l 2 ∨ l i+1 ∨ · · · ∨ l k . We can see that literals l 3 , l 4 , . . . , l i−1 , l 2 are falsified. If the falsification of l 3 , l 4 , . . . , l i−1 , l 2 does not allow to derive a conflict, the search continues and the next non-falsified literals of C can be pushed ahead. The solver stops changing the literal order of C when a conflict is derived or one of the two first literals is satisfied. In summary, the two-literal watching technique, applied during the search and clause vivification, changes the literal order of a clause C by pushing the falsified literals not allowing to derive a conflict to the end of C and by placing more promising literals at the beginning of C. Recall that clause vivification is based on successively falsifying the literals of C to derive a conflict. The current literal order obtained after the search presumably increases the success probability of vivifying C. That is why function sort(C) just returns C in its current literal order in our approach. In Section 5, we will present experimental results to show that the current literal order is indeed better than several other orders. Vivifying Learnt Clauses in Glucose and TC Glucose Glucose is a very efficient CDCL SAT solver developed from MiniSat [44]. It was habitually awarded in the SAT Competition between 2009 and 2014, and is the base solver of many other awarded solvers. 
Glucose was the first solver which incorporated the LBD measure in the clause learning mechanism and adopted an aggressive strategy for clause database reduction. We used Glucose 3.0, in which the reduction process is fired once the number of clauses learnt since the last reduction reaches f irst + 2 × inc × σ, where f irst = 2000 and inc = 300 are parameters (note that 2 × inc can be considered as a single parameter in Glucose 3.0), and σ is the number of database reductions performed so far. The learnt clauses are first sorted in decreasing order of their LBD values, and then the first half of learnt clauses are removed except for the binary clauses, the clauses whose LBD value is 2 and the clauses that are reasons of the current partial assignment. Note that the reduction process is not necessarily fired at level 0. Glucose 3.0 also features a fast restart mechanism which is independent of the clause database reduction. Roughly speaking, Glucose restarts the search from level 0 when the average LBD value in recent learnt clauses is high compared with the average LBD value of all the learnt clauses. Solver TC Glucose is like Glucose 3.0 but it uses a tie-breaking technique for VSIDS, and the CHB branching heuristic [31] instead of the VSIDS branching heuristic for small instances. It was the best solver of the hard combinatorial category in SAT Competition 2016. Learnt clause vivification in Glucose and TC Glucose is implemented by defining the three functions in Algorithm 2 and 3 as follows: Function liveRestart(nbN ewLearnts, σ) returns true iff the learnt clause reduction process was fired in the preceding restart. In other words, clause vivification in Glucose and TC Glucose follows their learnt clause database reduction. Function liveClause(C) returns true iff C is a learnt clause that has not yet been vivified and belongs to the second half of learnt clauses after sorting the clauses in decreasing order of their LBD values. Function sort(C) returns C without changing the order of its literals. The intuition behind the definition of liveRestart(nbN ewLearnts, σ) and liveClause(C) for Glucose can be stated as follows. Just after the learnt clause database reduction, about a half of learnt clauses remain there. Among these remaining learnt clauses, the half with smaller LBD will probably survive the next learnt clause database reduction. The vivification of this half of remaining learnt clauses (i.e., a quarter of all learnt clauses) is useful and has a moderate cost. Vivifying Learnt Clauses in COMiniSatPS, MapleCOMSPS and MapleCOMSPS LRB COMiniSatPS is a SAT solver created by applying a series of small diff patches to MiniSat 2.2.0. Its initial prototypes (SWDiA5BY and MiniSat HACK xxxED) won six medals in SAT Competition 2014 and Configurable SAT Solver Challenge 2014. MapleCOMPS and MapleCOMSPS LRB are based on COMiniSatPS, and were the winners of the main track and the application category in SAT Competition 2016, respectively. The clause database reduction policy of COMiniSatPS, MapleCOMSPS and MapleCOMSPS LRB is quite different from that of Glucose. In these solvers, the learnt clauses are divided into three subsets: (1) clauses whose LBD value is smaller than or equal to a threshold t 1 are stored in a subset called CORE; (2) clauses whose LBD value is greater than t 1 and smaller than or equal to another threshold t 2 are stored in a subset called T IER2; and (3) the remaining clauses are stored in a subset called LOCAL. 
If a clause in T IER2 is not involved in any conflict for a long time, it is moved to LOCAL. Periodically, the clauses of LOCAL are sorted in increasing order of their activity in recent conflicts, and the learnt clauses in the first half are removed (except for the clauses that are reasons of the current partial assignment). The three solvers interleave Glucose-style restart phases with phases without restarts and Luby restart phases. In a Glucose-style restart phase, search is restarted from level 0 if the average LBD value of recent learnt clauses is high. In a Luby restart phase, search is restarted after reaching a number of conflicts. This number can be small (in this case, the restart is fast), and can also be high (in this case, the restart is long). Clause vivification in COMiniSatPS, MapleCOMSPS and MapleCOMSPS LRB is implemented by defining the three functions in Algorithm 2 and Algorithm 3 as follows (recall that Algorithm 2 is executed before each restart): Function liveRestart(nbN ewLearnts, σ) returns true iff nbN ewLearnts, the number of clauses learnt since the last learnt clause vivification, is greater than or equal to α + β × σ. We empirically fixed α=1000 and β=2000 for the three solvers. Note that function liveRestart(nbN ewLearnts, σ) does not follow the clause database reduction in any of the three solvers. Function liveClause(C) returns true iff C is a learnt clause that has not yet been vivified and belongs to CORE or T IER2. Function sort(C) returns C without changing the order of its literals. The definition of liveRestart(nbN ewLearnts, σ) in COMiniSatPS, MapleCOMSPS and MapleCOMSPS LRB is inspired by the learnt clause database reduction of Glucose, because Glucose is the first solver in which we have made effective our inprocessing clause vivification approach by following its learnt clause database reduction strategy. In the definition of liveRestart(nbN ewLearnts, σ), α+β×σ essentially imposes an interval between two successive clause vivifications, and the length of this interval is growing by a constant β: Let k be the length of the interval between the (i − 1) th and the i th clause vivifications, then the length of the interval between the i th and the (i + 1) th clause vivifications is k + β. The original intention of this definition of liveRestart(nbN ewLearnts, σ) is to vivify learnt clauses more frequently at the beginning of the search and to reduce the vivification frequency gradually as the search proceeds, because the quality of learnt clauses is generally lower at the beginning of the search. Experimental Investigation of Learnt Clause Vivification We implemented the learnt clause vivification approach described in Section 4 in the solvers Glucose 3.0, TC Glucose, COMiniSatPS (COMSPS for short), MapleCOMSPS (Maple for short) and MapleCOMSPS LRB (MapleLRB for short). The resulting solvers 3 are named Glucose+, TC Glucose+, COMSPS+, Maple+ and MapleLRB+, respectively. Besides, we created the solvers MapleLRB/noSp and MapleLRB+/noSp by disabling in MapleLBR and MapleLRB+, respectively, the inprocessing technique of [38] that implements function Stamp and is based on binary implication graphs (noSp means that function Stamp is removed in MapleLRB/noSp and MapleLRB+/noSp). 
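As a compact summary of the scheduling described above for COMiniSatPS, MapleCOMSPS and MapleCOMSPS LRB, the following Python sketch (our own paraphrase of the liveRestart and liveClause definitions, not actual solver code; the LearntClause class and the level_of mapping are assumptions made for illustration) uses the reported constants α = 1000 and β = 2000:

```python
from dataclasses import dataclass

@dataclass
class LearntClause:
    lbd: int                        # literal block distance of the clause
    already_vivified: bool = False  # set to True after the first vivification

ALPHA, BETA = 1000, 2000            # constants reported above for the three solvers

def live_restart(nb_new_learnts, sigma):
    """Activate vivification at this restart if enough clauses were learnt since
    the last vivification; sigma is the number of vivifications performed so far."""
    return nb_new_learnts >= ALPHA + BETA * sigma

def live_clause(clause, t2):
    """Vivify a learnt clause only if it was never vivified before and its LBD
    places it in CORE or TIER2 (i.e., lbd <= t2); LOCAL clauses are skipped."""
    return not clause.already_vivified and clause.lbd <= t2

def lbd(literals, level_of):
    """Literal Block Distance: number of distinct decision levels among the literals;
    level_of maps each variable to the decision level of its current assignment."""
    return len({level_of[abs(lit)] for lit in literals})
```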
The test instances include the application and hard combinatorial tracks of SAT Competition 2014 and 2016 and the instances from the main track of SAT Competition 2017 (no distinction is made between the application and hard combinatorial instances in this track in the 2017 edition). The experiments reported in this section were performed on a computer with Intel Westmere Xeon E7-8837 processors at 2.66 GHz and 10 GB of memory under Linux unless otherwise stated. The cutoff time is 5000 seconds for each instance and solver. Table 1 The performance of MapleLRB+ and MapleLRB is similar on other benchmark families of the application category of SAT Competition 2016. The difference in the number of solved instances within the cutoff time between the two solvers is smaller than or equal to 1. Table 1 also compares the proposed inprocessing vivification approach with the inprocessing of [38] using four solvers: MapleLRB, MapleLRB+, MapleLRB/noSp and MapleLRB+/noSp. The differences among these four solvers are specified in their names: Effectiveness of the learnt clause vivification approach MapleLRB+: with the proposed vivification and the inprocessing of [38] (implemented in function Stamp). MapleLRB+/noSp: with the proposed vivification but without Stamp. MapleLRB: without the proposed vivification but with Stamp. MapleLRB/noSp: without the proposed vivification and without Stamp. The inprocessing of [38] implemented in MapleLRB allows MapleLRB to solve 3 (2) instances of 2014 (2016) more than MapleLRB/noSp, which has disabled that inprocessing. However, our approach is more effective, allowing MapleLRB+/noSp to solve 16 (14) instances of 2014 (2016) more than MapleLRB/noSp, and MapleLRB+ to solve 14 (15) instances of 2014 (2016) more than MapleLRB. Figure 3 shows the cactus plots of the four solvers on the application instances of SAT Competition 2014 (top) and 2016 (bottom). The two solvers using the proposed vivification approach perform clearly better than the two solvers using the inprocessing of [38]. Note that the inprocessing of [38] is also used in Lingeling and subsumes (at least partly) several inprocessing techniques. Table 2 Note that all the solvers in Table 1 and 2 implement the following inprocessing techniques: the clause minimization based on binary clause resolution of Glucose and the recursive learnt clause minimization of MiniSat. In addition, MapleLRB and MapleLRB+ also include the inprocessing of [38]. The previous results clearly indicate that the proposed vivification approach is compatible with all these inprocessing techniques. Table 3 compares the percentage of clauses simplified by the simplification rules of Section 4.1 implemented in Algorithm 3. In the table, noSimp refers to the percentage of clauses C such that liveClause(C) is true but no redundant literal is removed from C (i.e., Algorithm 3 propagated the negation of the literals of C but did not find redundant literals in C), rule n, where n = 1, 2, 3, refers to the percentage of clauses in which only the simplification Rule n was applied, and rule 1n where n = 2, 3, refers to the percentage of clauses in which both the simplification Rule 1 and the simplification Rule n were applied. Observe that rule 1, rule 2, rule 3, rule 12 and rule 13 cover all the simplifications of Algorithm 3. Rule 2 and Rule 3 cannot be both applied to a clause C during the execution of Algorithm 3, whereas Rule 1 can be combined with Rule 2 and Rule 3. 
Table 3 also gives results for rule n+rule 1n, which refers to the sum of rule n and rule 1n and shows the percentage of clauses in which Rule n is applied with or without Rule 1, and gives results for rule 1+rule 12+rule 13, which shows the percentage of clauses in which Rule 1 is applied with or without Rule 2 and Rule 3. All results are averaged among the solved instances in each group. We do not give results for Rule 4 because it does not remove any literal from C. Glucose+/α-β: it is like Glucose+ but does not follow the learnt clause database reduction of Glucose any more. Analysis of the rules of vivification Robustess of the learnt clause vivification approach Instead, liveRestart(nbN ewLearnts, σ) returns true iff nbN ewLearnts ≥ α + 2 × β × σ. Glucose+H lbd: it is like Glucose+ but liveClause(C) is true iff C is a learnt clause that was never vivified before and is in the first half of learnt clauses when these clauses are sorted in decreasing order of their LBD value. In other words, the learnt clauses with higher LBD are vivified in Glucose+H lbd. Glucose+∆: it is like Glucose+ but liveClause(C) is true iff C is a learnt clause that was never vivified before and is in the last ∆ fraction of learnt clauses when these clauses are sorted in decreasing order of their LBD value. Glucose+ is in fact Glucose+1/2. MapleLRB+/Core: it is like MapleLRB+ but it vivifies every not-yet-vivified clause in CORE upon every restart. It vivifies the clauses in T IER2 as in MapleLRB+. Low2highLevel, High2lowLevel, Low2highActivity, High2lowActivity, Random and Reverse: all these solvers are like MapleLRB+, except that function sort(C) is different. In the solver Low2highLevel (Low2highActivity), literals in C are ordered from small level (activity) to high level (activity). In the solver High2lowLevel (High2lowActivity), literals in C are ordered from high level (activity) to small level (activity). The level of a literal in C refers to the level of its last assertion. In solver Random, literals in C are randomly ordered. In solver Reverse, literals in C are reversed. Table 4 shows the number of (Total, Sat, Unsat) instances solved by each solver within 5000s, and some statistics about the runtime behavior of each solver. Column 5 (Impact) gives the clause size reduction measured as (a − b)/a × 100, where a (b) is the total number of literals in the vivified clauses before (after) vivification. Column 6 (Cost) gives the cost of learnt clause vivification measured as the ratio of the total number of unit propagations performed by Algorithm 3 to the total number of other propagations performed during the search. Column 7 (LiveC) gives the ratio of the number of clauses vivified to the total number of learnt clauses. All data are averaged over the solved instances among the 300 application instances of SAT Competition 2014. Several observations can be made from • All the described implementations of learnt clause vivification, except for the solvers Reverse and Glu-cose+H lbd, improve their original solver. This provides evidence of the robustness of the proposed approach. It is not necessary to fine-tune different parameters to achieve significant gains. • Both MapleLRB+ and MapleLRB+/Core vivify all the clauses in CORE but at different times. The superior performance of MapleLRB+ over MapleLRB+/Core might be explained as follows: When MapleLRB+ vivifies a clause in CORE, there are usually more learnt clauses in the clause database, allowing unit propagation to deduce more easily the empty clause. 
• Propagating the literals of C in their current order in C is the best option, whereas propagating the literals in the reverse order is the worst option. Recall that the literals of a learnt clause are originally roughly in increasing order of their distance to the conflict and propagating literals in that order means to propagate in priority literals closer to the conflict. Moreover, that order can be changed during the search to favour the success of vivification. See Section 4.2 for a detailed explanation. • Vivifying clauses with high LBD is very costly and useless, because Glucose+H lbd is significantly worse than Glucose. • The cost of the approach in Glucose+∆ is relatively small, because the solver just removed half of the learnt clauses when learnt clause vivification was activated, which is not the case for other solvers. • Glucose+ and MapleLRB+ offer the best trade-off between cost and impact, explaining their superior performance. Table 5 shows the results of an experiment conducted to analyze how sensitive is the proposed vivification approach to the time at which vivification is activated. The table compares Maple+ with variants of Maple+ that implement different strategies for activating learnt clause vivification: Maple+eR activates vivification at every restart, Maple+eReduceDB activates vivification if the clause database reduction was fired in the previous restart, and Maple+0.5kConflict, Maple+1kConflict and Maple+1.5kConflict activate vivification at a restart if the number of clauses learnt since the last vivification is greater than 500, 1000 and 1500, respectively. Note that the strategy in Maple+eR is different from the strategy in MapleLRB+/Core evaluated in Table 4 in that MapleLRB+/Core activates vivification for clauses in CORE at every restart but does not do it for the clauses in TIER2, while Maple+eR does it for clauses in both CORE and TIER2. The experiment was performed on Intel Xeon E5-2680 v4 processors at 2.40GHz and 20GB of memory under Linux, which is faster than the machine used to obtain the results in Tables 1, 2 and 4, and in Figures 2 and 3. So, the different values of number of instances solved by Maple+ are due to the use of different processors. The first column of Table 5 contains the name of the solver, the second and third columns contain the results for the application and crafted instances of SAT Competition 2014, the fourth and fifth columns contain the results for the application and crafted instances of SAT Competition 2016, the sixth column contains the results for the instances of the main track of SAT Competition 2017, and the seventh column totalizes the results for all the instances. For each solver and group of instances, the results are displayed in terms of the total number of solved (satisfiable and unsatisfiable) instances within the cutoff time of 5000 seconds and the mean time needed to solve these instances, as well as the clause size reduction ratio measured as (a − b)/a × 100 and averaged among the solved instances in each group, where a is the total number of literals in all the clauses C such that liveClause(C) is true before applying clause vivification and b is the total number of literals in all those clauses after applying clause vivification. The different values among the clause vivification strategies in Table 5 are due to the fact that the number of learnt clauses between two consecutive clause vivifications is different in the tested solvers. 
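The activation strategies compared in Table 5, together with the clause size reduction ratio used throughout these tables, can be written down compactly as below. This is a sketch under stated assumptions: the counters and the reduceDB flag are stand-ins for solver state, not actual solver fields.

```python
# Sketch of the activation strategies of Table 5, expressed as
# liveRestart-style predicates, plus the reduction-ratio statistic
# (a - b) / a * 100 used in Tables 4, 5 and 8.

def every_restart(_nb_new_learnts: int) -> bool:
    return True                               # Maple+eR

def after_reduce_db(reduce_db_fired: bool) -> bool:
    return reduce_db_fired                    # Maple+eReduceDB

def every_k_conflicts(nb_new_learnts: int, k: int) -> bool:
    return nb_new_learnts > k                 # Maple+0.5k/1k/1.5kConflict

def reduction_ratio(lits_before: int, lits_after: int) -> float:
    """Clause size reduction of the vivified clauses, in percent."""
    return (lits_before - lits_after) / lits_before * 100.0

# Example: 1200 clauses learnt since the last vivification, reduceDB idle.
nb = 1200
print(every_restart(nb), after_reduce_db(False),
      [every_k_conflicts(nb, k) for k in (500, 1000, 1500)])
print(f"{reduction_ratio(120_000, 96_000):.1f}%")   # 20.0% size reduction
```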
We observe that the reduction ratio of the number of literals is about 20% in most cases and the performance of the solver is significantly improved with this reduction. The small reduction ratio on the crafted instances of SAT competition 2016 explains why the performance of learnt clause vivification is not so good for these instances (see Table 2). The results for Maple+ and its variants, except for Maple+eR, are similar. The number of solved instances ranges from 930 to 938, and different solvers have slight different performances in different categories. Maple+eR is not so competitive because the reduction ratio of the number of literals is smaller. These results indicate that our approach is robust, because varying the activation strategy does not change so much the results. This means that the proposed vivification approach does not require to fine-tune the activation strategy. It is only necessary to select an strategy that leads to a reduction of the ratio of the number of literals close to 20%. Note, for example, that using the strategy for reducing the clause database implemented in Maple allows Maple+eReduceDB to solve 935 instances without requiring to fine-tune any parameter. The most crucial aspect is to determine which clauses to vivify rather than to determine when to vivify such clauses. Improvements in Function liveClause(C) The proposed clause vivification is implemented in a SAT solver by defining three functions: liveRestart(nbN ewLearnts, σ), liveClause(C) and sort(C). The results in Table 4 and 5 appear to indicate that the performance of the proposed vivification is not very sensitive to the different definitions of liveRestart(nbN ewLearnts, σ), and that keeping the current literal order in a clause that has to be vivified is the best option. In this section, we focus on function liveClause(C) in order to improve the clause vivification of Maple+. In particular, we are interested in analyzing if it is worth re-vivifying clauses and vivifying original clauses during pre-and inprocessing. We consider Maple+ because it was ranked first in the main track of SAT Competition 2017 4 . Thus, our goal is to further improve the results of SAT Competition 2017. Vivifying clauses more than once A common feature of the previous definitions of liveClause(C) is that the function returns false if clause C was already vivified in a previous call of Algorithm 3. However, as the search proceeds, and especially as more learnt clauses are added to the clause database, more redundant literals can be detected in C by unit propagation so that C can be vivified again. Since clause vivification is time-consuming, we have to define relevant and precise conditions under which a solver can re-vivify C. The LBD value of C can be considered to be an estimation of the number of decisions that are needed to falsify C. We use the decrease of the LBD value of C to measure the probability that unit propagation detects more redundant literals in C: If the LBD value of C is decreased, then fewer decisions can be needed to falsify C and unit propagation can detect more redundant literals in C. Concretely, in Maple+, as in Glucose, every time C is used to derive a new learnt clause, a new LBD value of C is computed. Let d 0 be the LBD value of C at time t 0 . We say that the LBD value of C is decreased λ times since t 0 if C is used to derive a new learnt clause at times t 1 , t 2 , . . . 
, t λ (t 0 < t 1 < · · · < t λ ) and d 0 > d 1 > · · · > d λ , where d i (1 ≤ i ≤ λ) is the LBD value of C computed at time t i . Note that the actual computed LBD value depends on the quality and ordering of the decisions, which are continuously improved as the search proceeds. So, when the computed LBD value is decreased one time, it may be only due to an improvement of the quality and/or ordering of the decisions. So, before allowing to vivify C again, we require that the LBD of C is decreased 2 times since the last time C was vivified. Function liveClause+(C) described in Algorithm 5 implements this strategy. The incorporation of function liveClause+(C) into Maple+ results in a new solver called Maple++. Vivifying original clauses Another common feature of the previous definitions of liveClause(C) is that the function returns true only if C is a learnt clause. However, the original clauses of a SAT instance can also contain redundant literals. The question is whether to vivify all original clauses during preprocessing, and whether to vivify or re-vivify all original clauses during inprocessing. Vivifying each original clause, even limited to preprocessing, is a time-consuming task in large instances. This task could be notably speeded up by using trie data structures as in [12]. We conducted preliminary experiments to compare two preprocessing vivifications (see Section 6.3.2): The first one stops the vivification when a fixed number of propagated literals is reached, and the second one vivifies all original clauses without any limit. Without considering the time for preprocessing and with a cutoff time of 5000 seconds for the search, the solver with the two preprocessing vivifications solves almost the same number of instances, meaning that it is not worth vivifying every original clause when an instance is very large. So, we empirically limit the number of propagated literals in preprocessing vivification to 10 8 . To present the selection of original clauses that will be vivified during the search process, we define the notions of useful conflict and useful clause. A conflict during the search process is said to be useful if the LBD of the learnt clause derived from that conflict is smaller than or equal to γ, where γ is an integer parameter that is fixed empirically to 20 (see Section 6.3.3). Intuitively, if the LBD is greater than 20, the conflict probably needs more than 20 decisions to be produced, so that its reproduction is improbable during the search process. We believe that only useful conflicts contribute to solving an instance. Thus, the solver only needs to focus on useful conflicts. Consequently, we say that a clause is useful if it was used to derive a learnt clause in a useful conflict. Our plan is to vivify every useful original clause and re-vivify it under conditions similar to those applied to learnt clauses. Algorithm 6 defines the function LiveClause++(C) that we will use to improve Maple+. if C was never before vivified, or the LBD of C is decreased 3 times since its last vivification by Algorithm 3, or the LBD of C is decreased to 1 since its last vivification by Algorithm 3 then 10 return true; 11 return f alse; The LBD of an original clause is initialized to the number of literals it contains. Then, as in learnt clauses, every time the original clause is used to derive a new learnt clause, a new LBD value is computed. 
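The re-vivification test of Algorithms 5 and 6 can be summarized as in the sketch below. This is an illustrative reading of the conditions given above, not the solver code: the Clause fields are simplified stand-ins, lbd_decreases counts strict LBD decreases since the clause was last vivified, the useful flag marks original clauses that helped derive a learnt clause with LBD at most 20, and the CORE/TIER2 tier check applied to learnt clauses is omitted.

```python
# Illustrative reading of liveClause+ / liveClause++; the fields are
# stand-ins for solver state and the CORE/TIER2 check is omitted.
from dataclasses import dataclass

@dataclass
class Clause:
    learnt: bool
    lbd: int                     # last computed LBD value
    lbd_decreases: int = 0       # strict LBD decreases since last vivification
    vivified: bool = False
    useful: bool = True          # original clause used in a conflict whose
                                 # learnt clause has LBD <= 20 (gamma)

def live_clause_pp(c: Clause) -> bool:
    if not c.learnt and not c.useful:
        return False             # only useful original clauses are considered
    if not c.vivified:
        return True
    required = 2 if c.learnt else 3      # originals need one more LBD decrease
    return c.lbd_decreases >= required or c.lbd == 1

# A learnt clause whose LBD dropped twice since its last vivification
# qualifies again; an original clause with the same history does not yet.
print(live_clause_pp(Clause(learnt=True,  lbd=5, lbd_decreases=2, vivified=True)))   # True
print(live_clause_pp(Clause(learnt=False, lbd=5, lbd_decreases=2, vivified=True)))   # False
```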
Since an original clause presumably contains fewer redundant literals than a learnt clause, liveClause++(C) requires that the LBD of an original clause is decreased one more time (3) than a learnt clause since its last vivification to vivify it again. A particular case is when the LBD of a learnt or original clause becomes 1 since its last vivification. Obviously, the LBD value cannot be decreased below 1. Moreover, a clause with LBD 1 is likely very powerful in unit propagation, because all its literals are asserted within one decision level. Then, when the LBD of a clause is decreased to 1 since its last vivification, the clause will be re-vivified regardless of how many times the LBD value is decreased. Since both learnt and original clauses are vivified, it is necessary to establish the order in which such clauses are considered. In our approach, Algorithm 3 vivifies learnt clauses before original clauses, because learnt clauses presumably contain more redundant literals. The incorporation of function liveClause++(C) in Maple+ results in a new solver called Maple CV, standing for MapleSAT with clause vivification. Thus, Maple CV is Maple+ but uses liveClause++(C) to select learnt and original clauses to vivify or re-vivify. We add a preprocessing in Maple CV that vivifies every original clause using Algorithm 3 but vivification is stopped when the number of propagated literals reaches 10 8 . We then obtain our final solver called Maple CV+ in this paper. Empirical evaluation In this subsection, we first show the performance of our final solver Maple CV+, providing empirical evidence for the effectiveness of our approach. Then, we analyze the behaviour of Maple CV+. Finally, we show the robustness of our approach by comparing Maple CV+ with several variants and Cadical (version 2018) [43]. Note that Cadical is a non MiniSat-based solver and the used version implements learnt clause vivification in the spirit of the ideas presented in [20]. As in Table 5, the test suite includes the 1450 instances from the main track (application + crafted) of the SAT Competition 2014, 2016 and 2017. The experiments were performed on Intel Xeon E5-2680 v4 processors at 2.40GHz and 20GB of memory under Linux. The cutoff time is 5000 seconds for each solver and instance, including the preprocessing time and the search time, unless otherwise stated. Effectiveness of the proposed approach We compare Maple+, Maple++ and Maple CV to evaluate the improvements of Algorithm 5 (Function liveClause+(C)) and Algorithm 6 (Function liveClause++(C)). Maple++ and Maple CV are like Maple+ but use a different definition of liveClause(C) to select the clauses that will be vivified by Algorithm 3. The final solver Maple CV+ is also included in the comparison. Table 6 shows the results of Maple+, Maple++, Maple CV and Maple CV+. Several observations can be made from these results: • Re-vivifying a learnt clause when its LBD value is decreased two times as done in Maple++ is effective. In fact, Maple++ solves 11 instances more than Maple+. • Vivifying original clauses during the search, using function liveClause++(C) to select them, is effective in terms of the total number of instances solved within the cutoff time. In fact, Maple CV solves 11 instances more than Maple++ and 22 instances more than Maple+. 
• The incorporation of a limited preprocessing to vivify original clauses by unit propagation further improves the performance of the solvers: Maple CV+ solves 23 instances more than Maple++ and 34 instances more than Maple+. These results clearly indicate that vivification of both learnt and original clauses during pre-and inprocessing is a decisive solving technique for boosting the performance of modern CDCL SAT solvers. It is important to highlight that Maple+ was ranked first in the main track of SAT Competition 2017, and that Maple CV+ solves 8, 6 and 20 instances more than Maple+ in the main track of SAT Competition 2014, 2016 and 2017, respectively. To assess the significance of this result, we would like to mention that the number of additional solved instances in the main track of consecutive SAT competitions is typically less than 5 instances. Analysis of the behaviour of Maple CV+ Recall that Maple CV+ vivifies each original clause during preprocessing but stops preprocessing vivification when the number of propagated literals reaches 10 8 . Table 7 Maple CV+ solves almost the same number of instances as Maple CV+all with similar search time and substantially shorter preprocessing time. Note that the two solvers vivify the original clauses one by one in their natural order during preprocessing. When vivifying all original clauses during preprocessing is too time-consuming, it might be more profitable to devise a clever heuristic to select and order a subset of original clauses to vivify, than to invest in implementing a more efficient vivification algorithm. Table 8 compares the reduction ratios of original clauses and learnt clauses in the solver Maple CV+, measured as (a − b)/a × 100, where a is the total number of literals in all the original (learnt) clauses C such that liveClause++(C) is true before applying clause vivification and b is the total number of literals in the vivified original (learnt) clauses. The displayed data are averaged among the solved instances in each group. We observe that the reduction ratio of original clauses is substantially smaller than the reduction ratio of learnt clauses, because original clauses presumably contain fewer redundant literals. However, even with these small reduction ratios, the results of Table 6 indicate that vivifying the original clauses selected by the function liveClause++(C) is effective, except for the very special crafted instances of SAT Competition 2016 for which the reduction ratio of original clauses is too low. When a clause is involved to derive a learnt clause from a conflict, its LBD is re-computed. Table 9 shows the percentage of original (learnt) clauses whose LBD is decreased among the original (learnt) clauses whose LBD is re-computed, as well as the percentage of these clauses with decreased LBD that are re-vivified in Maple CV+. The displayed data are averaged among the solved instances in each group. The percentage of the re-vivified original clauses is high, partly because the LBD value of many original clauses is reduced to 1 after their last vivification. The From Table 3, Table 5, Table 8 and Table 9, we can get an explanation why our approach is not so effective for the 200 crafted instances in SAT competition 2016: these instances have the highest percentage (51.3%) of clauses that are checked by clause vivification but not simplified, i.e., no redundant literal is detected in these clauses (Table 3), and the lowest reduction ratio ( Table 5 and Table 8). 
More importantly, only 0.79% of original clauses in these instances have their LBD value reduced during the search (Table 9). On the contrary, these tables also explain why our approach is effective for all other groups of instances. Robustness of the clause vivification approach Recall that Maple CV+ vivifies each original clause used to derive a learnt clause with LBD smaller than or equal to 20. This constant 20 comes from the study of the cumulative distribution of the LBD of learnt clauses generated by Maple+ during the search process, showed in Figure 4. Each point (x, y) of the figure shows the percentage y% of learnt clauses with the LBD value smaller than or equal to x, averaged among the solved instances in each group from the main (industrial + hard combinatorial) track of SAT Competition 2014, 2016 and 2017. Our purpose is to devise a global constant effective enough for all the instances, so that one does not need to adjust the value for any special family of instances. It is interesting to note that the cumulative distribution function has a similar form in all the groups of instances, and the LBD values around 20 roughly corresponds to a transition zone such that, for each LBD value smaller than the values of this zone, there are many learnt clauses and, for each LBD value greater than the values of this zone, there are fewer learnt clauses, meaning that a conflict with LBD greater than 20 has little chance to be reproduced. Table 10 compares the value 20 with 15 and 25. In the table Maple CV+ with value λ ∈ {15, 20, 25} means that Maple CV+ vivifies each original clause used to derived a learnt clause with LBD smaller than or equal to λ. The results confirm that it is relevant to distinguish useful and non-useful conflicts and 20 is a good value for that, because Maple CV+ with value 20 is significantly better than Maple CV+ with value 15 or 25, and the three variants of Maple CV+ perform better than the base solvers Maple+ and Maple++ from which Maple CV+ is developped, showing the robustness of our approach. All the solvers considered so far have been derived from MiniSat by incorporating a number of improvements that allow them to outperform MiniSat. To show that the proposed vivification is also suitable for other non MiniSat-based SAT solvers, we conducted an experiment with three different variants of the SAT solver Cadical-2018 [43], which was created by Armin Biere and incorporates inprocessing techniques such as probing, subsumption and bounded variable elimination. Cadical-2018 has been submitted to SAT Competition 2018. A previous version of Cadical (Cadical-2017 [41]) performed already better than the solver Lingeling by the same author in which clause vivification is missing. In fact, in SAT competition 2017, Cadical-2017 solved 31 instances more than Lingeling among the 350 instances of the main track. Cadical-2017 included inprocessing vivification restricted to irredundant clauses. Cadical-2018 includes, in addition, inprocessing vivification applied to redundant clauses implemented in the spirit of the ideas presented in our IJCAI paper [20]. Here, redundant clauses roughly correspond to the learnt clauses that do not subsume any original clause. We compared Cadical-2018 with the following variants: • Cadical-2017. • Cadical-2018 without any clause vivification (i.e., clause vivification is disabled). • Cadical-2018 with clause vivification only applied to irredundant clauses. • Cadical-2018 with clause vivification only applied to redundant clauses. 
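As an aside before turning to these results, the cumulative distribution behind the choice γ = 20 discussed above is straightforward to compute from the LBD values of the learnt clauses of a run; the following is a minimal sketch with made-up values.

```python
# Sketch of the cumulative LBD distribution underlying Figure 4: for each
# LBD value x, the percentage of learnt clauses with LBD <= x.
from collections import Counter

def cumulative_lbd(lbds):
    counts, total, acc, cum = Counter(lbds), len(lbds), 0, {}
    for lbd in sorted(counts):
        acc += counts[lbd]
        cum[lbd] = 100.0 * acc / total
    return cum

# Made-up LBD values; the real curves are averaged over the solved instances.
print(cumulative_lbd([2, 3, 3, 5, 8, 12, 18, 21, 35, 60]))
```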
Table 11 shows the experimental results. We observe that Cadical-2018 solves 23 instances more than the variant without any clause vivification, 19 instances more than the variant only applying clause vivification to irredundant clauses and 11 instances more than the variant only applying clause vivification to redundant clauses. These results provide another evidence that it is important to vivify both original and learnt clauses, as well as that the proposed vivification is also well suited for non MiniSat-based SAT solvers. Conclusions We defined an original inprocessing clause vivification approach that eliminates redundant literals of original and learnt clauses by applying unit propagation, and that can also be applied during preprocessing to vivify original clauses. We also performed an in-depth empirical analysis that shows that the proposed clause vivification is robust and allows to solve a remarkable number of additional instances from recent SAT competitions. The first part of the experimentation is devoted to show that learnt clause vivification by itself is a powerful solving technique that boosts the performance of the best state-of-the-art SAT solvers. The second part of the experimentation is devoted to show that the combination of learnt clause vivification and original clause vivification leads to a yet more powerful solving technique. The clause vivification approach proposed in this paper allowed us to win the gold medal of the main track in SAT competition 2017 and the bronze medal of the main track in SAT competition 2018. Furthermore, the best 13 solvers of the main track of SAT Competition 2018, developed by different author sets, all use our learnt clause vivification approach.
12,224
1807.10174
2950691724
Superpixels provide an efficient low mid-level representation of image data, which greatly reduces the number of image primitives for subsequent vision tasks. Existing superpixel algorithms are not differentiable, making them difficult to integrate into otherwise end-to-end trainable deep neural networks. We develop a new differentiable model for superpixel sampling that leverages deep networks for learning superpixel segmentation. The resulting "Superpixel Sampling Network" (SSN) is end-to-end trainable, which allows learning task-specific superpixels with flexible loss functions and has fast runtime. Extensive experimental analysis indicates that SSNs not only outperform existing superpixel algorithms on traditional segmentation benchmarks, but can also learn superpixels for other tasks. In addition, SSNs can be easily integrated into downstream deep networks resulting in performance improvements.
Clustering-based approaches, on the other hand, leverage traditional clustering techniques such as @math -means for superpixel segmentation. Widely-used algorithms in this category include SLIC @cite_25 , LSC @cite_24 , and Manifold-SLIC @cite_28 . These methods mainly perform @math -means clustering but differ in their feature representation. While SLIC @cite_25 represents each pixel by @math -dimensional positional and color features ( @math features), the LSC @cite_24 method projects these @math -dimensional features onto a @math -dimensional space and performs clustering in the projected space. Manifold-SLIC @cite_28 , in turn, uses a @math -dimensional manifold feature space for superpixel clustering. While these clustering algorithms require iterative updates, the SNIC method @cite_1 proposes a non-iterative clustering scheme for superpixel segmentation. The approach proposed here is also clustering-based; however, unlike existing techniques, we leverage deep networks to learn features for superpixel clustering via an end-to-end training framework.
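To make the shared skeleton of these methods concrete, the following is a minimal sketch of @math -means superpixel clustering on 5-dimensional position-and-color features. It is only a sketch: real implementations seed the centers on a regular grid, weight position against color, and restrict the assignment step to a local window around each center, all of which are omitted here.

```python
# Minimal k-means skeleton shared by SLIC-style superpixel methods: pixels
# are 5-D (x, y, L, a, b) points, centers are refined by alternating hard
# assignment and mean updates. Grid seeding, feature scaling and the local
# search window of the real algorithms are omitted for brevity.
import numpy as np

def naive_kmeans_superpixels(xylab, n_superpixels, n_iters=5, seed=0):
    """xylab: (n_pixels, 5) array of position+color features."""
    rng = np.random.default_rng(seed)
    centers = xylab[rng.choice(len(xylab), n_superpixels, replace=False)]
    for _ in range(n_iters):
        dists = ((xylab[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)                              # hard assignment
        for k in range(n_superpixels):
            if (labels == k).any():
                centers[k] = xylab[labels == k].mean(0)       # mean update
    return labels

# Toy usage on a random 20x20 "image".
h, w = 20, 20
rng = np.random.default_rng(0)
xy = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1).reshape(-1, 2)
features = np.hstack([xy, 10 * rng.random((h * w, 3))]).astype(float)
print(naive_kmeans_superpixels(features, n_superpixels=9).shape)   # (400,)
```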
{ "abstract": [ "We present in this paper a superpixel segmentation algorithm called Linear Spectral Clustering (LSC), which produces compact and uniform superpixels with low computational costs. Basically, a normalized cuts formulation of the superpixel segmentation is adopted based on a similarity metric that measures the color similarity and space proximity between image pixels. However, instead of using the traditional eigen-based algorithm, we approximate the similarity metric using a kernel function leading to an explicitly mapping of pixel values and coordinates into a high dimensional feature space. We revisit the conclusion that by appropriately weighting each point in this feature space, the objective functions of weighted K-means and normalized cuts share the same optimum point. As such, it is possible to optimize the cost function of normalized cuts by iteratively applying simple K-means clustering in the proposed feature space. LSC is of linear computational complexity and high memory efficiency and is able to preserve global properties of images. Experimental results show that LSC performs equally well or better than state of the art superpixel segmentation algorithms in terms of several commonly used evaluation metrics in image segmentation.", "Superpixels are perceptually meaningful atomic regions that can effectively capture image features. Among various methods for computing uniform superpixels, simple linear iterative clustering (SLIC) is popular due to its simplicity and high performance. In this paper, we extend SLIC to compute content-sensitive superpixels, i.e., small superpixels in content-dense regions (e.g., with high intensity or color variation) and large superpixels in content-sparse regions. Rather than the conventional SLIC method that clusters pixels in R5, we map the image I to a 2-dimensional manifold M ⊂ R5, whose area elements are a good measure of the content density in I. We propose an efficient method to compute restricted centroidal Voronoi tessellation (RCVT) — a uniform tessellation — on M, which induces the content-sensitive superpixels in I. Unlike other algorithms that characterize content-sensitivity by geodesic distances, manifold SLIC tackles the problem by measuring areas of Voronoi cells on M, which can be computed at a very low cost. As a result, it runs 10 times faster than the state-of-the-art content-sensitive superpixels algorithm. We evaluate manifold SLIC and seven representative methods on the BSDS500 benchmark and observe that our method outperforms the existing methods.", "We present an improved version of the Simple Linear Iterative Clustering (SLIC) superpixel segmentation. Unlike SLIC, our algorithm is non-iterative, enforces connectivity from the start, requires lesser memory, and is faster. Relying on the superpixel boundaries obtained using our algorithm, we also present a polygonal partitioning algorithm. We demonstrate that our superpixels as well as the polygonal partitioning are superior to the respective state-of-the-art algorithms on quantitative benchmarks.", "" ], "cite_N": [ "@cite_24", "@cite_28", "@cite_1", "@cite_25" ], "mid": [ "1938929646", "2471353525", "2612271937", "" ] }
Superpixel Sampling Networks
Superpixels are an over-segmentation of an image that is formed by grouping image pixels [33] based on low-level image properties. They provide a perceptually meaningful tessellation of image content, thereby reducing the number of image primitives for subsequent image processing. Owing to their representational and computational efficiency, superpixels have become an established low/midlevel image representation and are widely-used in computer vision algorithms such as object detection [35,42], semantic segmentation [15,34,13], saliency estimation [18,30,43,46], optical flow estimation [20,28,37,41], depth estimation [6], tracking [44] to name a few. Superpixels are especially widely-used in traditional energy minimization frameworks, where a low number of image primitives greatly reduce the optimization complexity. The recent years have witnessed a dramatic increase in the adoption of deep learning for a wide range of computer vision problems. With the exception of a few methods (e.g., [13,18,34]), superpixels are scarcely used in conjunction with modern deep networks. There are two main reasons for this. First, the standard convolution operation, which forms the basis of most deep architectures, is usually defined over regular grid lattices and becomes inefficient when operating over irregular superpixel lattices. Second, existing superpixel algorithms are arXiv:1807.10174v1 [cs.CV] 26 Jul 2018 -End-to-end trainable: SSNs are end-to-end trainable and can be easily integrated into other deep network architectures. To the best of our knowledge, this is the first end-to-end trainable superpixel algorithm. -Flexible and task-specific: SSN allows for learning with flexible loss functions resulting in the learning of task-specific superpixels. -State-of-the-art performance: Experiments on a wide range of benchmark datasets show that SSN outperforms existing superpixel algorithms. -Favorable runtime: SSN also performs favorably against prominent superpixel algorithms in terms of runtime, making it amenable to learn on large datasets and also effective for practical applications. Preliminaries At the core of SSN is a differentiable clustering technique that is inspired by the SLIC [1] superpixel algorithm. Here, we briefly review the SLIC before describing our SSN technique in the next section. SLIC is one of the simplest and also one of the most widely-used superpixel algorithms. It is easy to implement, has fast runtime and also produces compact and uniform superpixels. Although there are several different variants [25,27] of SLIC algorithm, in the original form, SLIC is a k-means clustering performed on image pixels in a five dimensional position and color space (usually scaled XY Lab space). Formally, given an image I ∈ R n×5 , with 5-dimensional XY Lab features at n pixels, the task of superpixel computation is to assign each pixel to one of the m superpixels i.e., to compute the pixel-superpixel association map H ∈ {0, 1, · · · , m − 1} n×1 . The SLIC algorithm operates as follows. First, we sample initial cluster (superpixel) centers S 0 ∈ R m×5 in the 5-dimensional space. This sampling is usually done uniformly across the pixel grid with some local perturbations based on image gradients. Given these initial superpixel centers S 0 , the SLIC algorithm proceeds in an iterative manner with the following two steps in each iteration t: 1. 
Pixel-Superpixel association: Associate each pixel to the nearest superpixel center in the five-dimensional space, i.e., compute the new superpixel assignment at each pixel p, H t p = arg min i∈{0,...,m−1} D(I p , S t−1 i ),(1) where D denotes the distance computation D(a, b) = ||a − b|| 2 . 2. Superpixel center update: Average pixel features (XY Lab) inside each superpixel cluster to obtain new superpixel cluster centers S t . For each super-Superpixel Sampling Networks 5 pixel i, we compute the centroid of that cluster, S t i = 1 Z t i p|H t p =i I p ,(2) where Z t i denotes the number of pixels in the superpixel cluster i. These two steps form the core of the SLIC algorithm and are repeated until either convergence or for a fixed number of iterations. Since computing the distance D in Eq. 1 between all the pixels and superpixels is time-consuming, this computation is usually constrained to a fixed neighborhood around each superpixel center. At the end, depending on the application, there is an optional step of enforcing spatial connectivity across pixels in each superpixel cluster. More details regarding the SLIC algorithm can be found in Achanta et. al. [1]. In the next section, we elucidate how we modify the SLIC algorithm to develop SSN. Superpixel Sampling Networks As illustrated in Fig. 1, SSN is composed of two parts: A deep network that generates pixel features, which are then passed on to differentiable SLIC. Here, we first describe the differentiable SLIC followed by the SSN architecture. Differentiable SLIC Why is SLIC not differentiable? A closer look at all the computations in SLIC shows that the non-differentiability arises because of the computation of pixelsuperpixel associations, which involves a non-differentiable nearest neighbor operation. This nearest neighbor computation also forms the core of the SLIC superpixel clustering and thus we cannot avoid this operation. A key to our approach is to convert the nearest-neighbor operation into a differentiable one. Instead of computing hard pixel-superpixel associations H ∈ {0, 1, · · · , m − 1} n×1 (in Eq. 1), we propose to compute soft-associations Q ∈ R n×m between pixels and superpixels. Specifically, for a pixel p and superpixel i at iteration t, we replace the nearest-neighbor computation (Eq. 1) in SLIC with the following pixel-superpixel association. Q t pi = e −D(Ip,S t−1 i ) = e −||Ip−S t−1 i || 2(3) Correspondingly, the computation of new superpixels cluster centers (Eq. 2) is modified as the weighted sum of pixel features, S t i = 1 Z t i n p=1 Q t pi I p ,(4) where Z t i = p Q t pi is the normalization constant. For convenience, we refer to the column normalized Q t asQ t and thus we can write the above superpixel 3: for each iteration t in 1 to v do 4: Compute association between each pixel p and the surrounding superpixel i, Q t pi = e −||Fp−S t−1 i || 2 . 5: Compute new superpixel centers, S t i = 1 Z t i n p=1 Q t pi Fp; Z t i = p Q t pi . 6: end for 7: (Optional ) Compute hard-associations H v n×1 ; H v p = arg max i∈{0,...,m−1} Q v pi . 8: (Optional ) Enforce spatial connectivity. center update as S t =Q t I. The size of Q is n × m and even for a small number of superpixels m, it is prohibitively expensive to compute Q pi between all the pixels and superpixels. Therefore, we constrain the distance computations from each pixel to only 9 surrounding superpixels as illustrated using the red and green boxes in Fig. 2. 
For each pixel in the green box, only the surrounding superpixels in the red box are considered for computing the association. This brings down the size of Q from n × m to n × 9, making it efficient in terms of both computation and memory. This approximation in the Q computation is similar in spirit to the approximate nearest-neighbor search in SLIC. Now, both the computations in each SLIC iteration are completely differentiable and we refer to this modified algorithm as differentiable SLIC. Empirically, we observe that replacing the hard pixel-superpixel associations in SLIC with the soft ones in differentiable SLIC does not result in any performance degradations. Since this new superpixel algorithm is differentiable, it can be easily integrated into any deep network architecture. Instead of using manually designed pixel features I p , we can leverage deep feature extractors and train the whole network end-to-end. In other words, we replace the image features I p in the above computations (Eq. 3 and 4) with k dimensional pixel features F p ∈ R n×k computed using a deep network. We refer to this coupling of deep networks with the differentiable SLIC as Superpixel Sampling Network (SSN). Im ag e (X Y L ab ) Su p er pi xe ls Algorithm 1 outlines all the computation steps in SSN. The algorithm starts with deep image feature extraction using a CNN (line 1). We initialize the superpixel cluster centers (line 2) with the average pixels features in an initial regular superpixel grid (Fig. 2). Then, for v iterations, we iteratively update pixel-superpixel associations and superpixel centers, using the above-mentioned computations (lines 3-6). Although one could directly use soft pixel-superpixel associations Q for several downstream tasks, there is an optional step of converting soft associations to hard ones (line 7), depending on the application needs. In addition, like in the original SLIC algorithm, we can optionally enforce spatial connectivity across pixels inside each superpixel cluster. This is accomplished by merging the superpixels, smaller than certain threshold, with the surrounding ones and then assigning a unique cluster ID for each spatially-connected component. Note that these two optional steps (lines 7, 8) are not differentiable. Conv-BN-ReLU Conv-BN-ReLU Pool-Conv-BN-ReLU Conv-BN-ReLU Pool-Conv-BN-ReLU Conv-BN-ReLU Concat-Conv-ReLU Deep Network Compute Pixel-Superpixel Association Compute Superpixel Centers v iterations Differentiable SLIC Mapping between pixel and superpixel representations. For some downstream applications that use superpixels, pixel representations are mapped onto superpixel representations and vice versa. With the traditional superpixel algorithms, which provide hard clusters, this mapping from pixel to superpixel representations is done via averaging inside each cluster (Eq. 2). The inverse mapping from superpixel to pixel representations is done by assigning the same superpixel feature to all the pixels belonging to that superpixel. We can use the same pixel-superpixel mappings with SSN superpixels as well, using the hard clusters (line 7 in Algorithm 1) obtained from SSN. However, since this computation of hard-associations is not differentiable, it may not be desirable to use hard clusters when integrating into an end-to-end trainable system. It is worth noting that the soft pixel-superpixel associations generated by SSN can also be easily used for mapping between pixel and superpixel representations. Eq. 
4 already describes the mapping from a pixel to superpixel representation which is a simple matrix multiplication with the transpose of column-normalized Q matrix: S =Q F , where F and S denote pixel and superpixel representations respectively. The inverse mapping from superpixel to pixel representation is done by multiplying the row-normalized Q, denoted asQ, with the superpixel represen-tations, F =QS. Thus the pixel-superpixel feature mappings are given as simple matrix multiplications with the association matrix and are differentiable. Later, we will make use of these mappings in designing the loss functions to train SSN. Architecture Fig. 3 shows the SSN network architecture. The CNN for feature extraction is composed of a series of convolution layers interleaved with batch normalization [21] (BN) and ReLU activations. We use max-pooling, which downsamples the input by a factor of 2, after the 2 nd and 4 th convolution layers to increase the receptive field. We bilinearly upsample the 4 th and 6 th convolution layer outputs and then concatenate with the 2 nd convolution layer output to pass onto the final convolution layer. We use 3 × 3 convolution filters with the number of output channels set to 64 in each layer, except the last CNN layer which outputs k − 5 channels. We concatenate this k − 5 channel output with the XY Lab of the given image resulting in k-dimensional pixel features. We choose this CNN architecture for its simplicity and efficiency. Other network architectures are conceivable. The resulting k dimensional features are passed onto the two modules of differentiable SLIC that iteratively updates pixel-superpixel associations and superpixel centers for v iterations. The entire network is end-to-end trainable. Network Learning Task-Specific Superpixels One of the main advantages of end-to-end trainable SSN is the flexibility in terms of loss functions, which we can use to learn task-specific superpixels. Like in any CNN, we can couple SSN with any task-specific loss function resulting in the learning of superpixels that are optimized for downstream computer vision tasks. In this work, we focus on optimizing the representational efficiency of superpixels i.e., learning superpixels that can efficiently represent a scene characteristic such as semantic labels, optical flow, depth etc. As an example, if we want to learn superpixels that are going to be used for downstream semantic segmentation task, it is desirable to produce superpixels that adhere to semantic boundaries. To optimize for representational efficiency, we find that the combination of a task-specific reconstruction loss and a compactness loss performs well. Task-specific reconstruction loss. We denote the pixel properties that we want to represent efficiently with superpixels as R ∈ R n×l . For instance, R can be semantic label (as one-hot encoding) or optical flow maps. It is important to note that we do not have access to R during the test time, i.e., SSN predicts superpixels only using image data. We only use R during training so that SSN can learn to predict superpixels suitable to represent R. As mentioned previously in Section 4.1, we can map the pixel properties onto superpixels using the columnnormalized association matrixQ,Ȓ =Q R, whereȒ ∈ R m×l . The resulting superpixel representationȒ is then mapped back onto pixel representation R * using row-normalized association matrixQ, R * =QS, where R * ∈ R n×l . Then the reconstruction loss is given as L recon = L(R, R * ) = L(R,QQ R)(5) where L(., .) 
denotes a task-specific loss-function. In this work, for segmentation tasks, we used cross-entropy loss for L and used L1-norm for learning superpixels for optical flow. Here Q denotes the association matrix Q v after the final iteration of differentiable SLIC. We omit v for convenience. Compactness loss. In addition to the above loss, we also use a compactness loss to encourage superpixels to be spatially compact i.e., to have lower spatial variance inside each superpixel cluster. Let I xy denote positional pixel features. We first map these positional features into our superpixel representation, S xy = Q I xy . Then, we do the inverse mapping onto the pixel representation using the hard associations H, instead of soft associations Q, by assigning the same superpixel positional feature to all the pixels belonging to that superpixel,Ī xy p = S xy i |H p = i. The compactness loss is defined as the following L2 norm: L compact = ||I xy −Ī xy || 2 .(6) This loss encourages superpixels to have lower spatial variance. The flexibility of SSN allows using many other loss functions, which makes for interesting future research. The overall loss we use in this work is a combination of these two loss functions, L = L recon +λL compact , where we set λ to 10 −5 in all our experiments. Implementation and Experiment Protocols We implement the differentiable SLIC as neural network layers using CUDA in the Caffe neural network framework [22]. All the experiments are performed using Caffe with the Python interface. We use scaled XY Lab features as input to the SSN, with position and color feature scales represented as γ pos and γ color respectively. The value of γ color is independent of the number of superpixels and is set to 0.26 with color values ranging between 0 and 255. The value of γ pos depends on the number of superpixels, γ pos = η max (m w /n w , m h /n h ), where m w , n w and m h , n h denotes the number of superpixels and pixels along the image width and height respectively. In practice, we observe that η = 2.5 performs well. For training, we use image patches of size 201 × 201 and 100 superpixels. In terms of data augmentation, we use left-right flips and for the small BSDS500 dataset [4], we use an additional data augmentation of random scaling of image patches. For all the experiments, we use Adam stochastic optimization [23] with a batch size of 8 and a learning rate of 0.0001. Unless otherwise mentioned, we trained the models for 500K iterations and choose the final trained models based on validation accuracy. For the ablation studies, we trained models with varying parameters for 200K iterations. It is important to note that we use a single trained SSN model for estimating varying number of superpixels by scaling the input positional features as described above. We use 5 iterations (v = 5) of differentiable SLIC for training and used 10 iterations while testing as we observed only marginal performance gains with more iterations. Refer to https://varunjampani.github.io/ssn/ for the code and trained models. Experiments We conduct experiments on 4 different benchmark datasets. We first demonstrate the use of learned superpixels with experiments on the prominent superpixel benchmark BSDS500 [4] (Section 5.1). We then demonstrate the use of task-specific superpixels on the Cityscapes [10] and PascalVOC [11] datasets for semantic segmentation (Section 5.2), and on MPI-Sintel [7] dataset for optical flow (Section 5.3). 
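Before presenting these results, the core computations of Sections 4.1 and 4.3 are condensed below into a short NumPy sketch for reference: the soft associations and center updates of Eqs. (3)-(4), the column/row-normalized mappings between pixel and superpixel representations, and the two losses of Eqs. (5)-(6). This is a sketch of the math only, under simplifying assumptions: the 9-neighbour restriction on Q, the CNN feature extractor and the Caffe/CUDA implementation are omitted, and F can be any pixel feature matrix (e.g. scaled XYLab features).

```python
# NumPy sketch of the soft SSN computations; the 9-neighbour restriction on Q
# and the deep feature extractor are omitted. F: (n, k) pixel features.
import numpy as np

def differentiable_slic(F, S0, n_iters=5):
    S = S0.copy()                                              # (m, k) centers
    for _ in range(n_iters):
        d = ((F[:, None, :] - S[None, :, :]) ** 2).sum(-1)     # (n, m) squared distances
        Q = np.exp(-d)                                         # soft associations, Eq. (3)
        S = (Q.T @ F) / Q.sum(0)[:, None]                      # weighted centers, Eq. (4)
    return Q, S

def to_superpixels(Q, R):       # column-normalized mapping: R_sp = Q_hat^T R
    return (Q / Q.sum(0, keepdims=True)).T @ R

def to_pixels(Q, R_sp):         # row-normalized mapping: R* = Q_tilde R_sp
    return (Q / Q.sum(1, keepdims=True)) @ R_sp

def ssn_losses(Q, R, I_xy, task_loss):
    R_star = to_pixels(Q, to_superpixels(Q, R))                # Eq. (5) reconstruction
    S_xy = to_superpixels(Q, I_xy)
    I_xy_bar = S_xy[Q.argmax(1)]                               # hard association H
    return task_loss(R, R_star), np.linalg.norm(I_xy - I_xy_bar)   # Eq. (6)

# Toy usage: 100 random pixels with 5-D features, 4 superpixels.
rng = np.random.default_rng(0)
F = rng.random((100, 5))
Q, S = differentiable_slic(F, S0=F[:4].copy())
I_xy, R = F[:, :2], rng.random((100, 3))       # positions and e.g. one-hot labels
l1 = lambda a, b: np.abs(a - b).mean()
print(ssn_losses(Q, R, I_xy, l1))
```

In SSN itself these operations are implemented as Caffe layers in CUDA and F is produced by the CNN of Fig. 3; the sketch only mirrors the arithmetic.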
In addition, we demonstrate the use of SSN superpixels in a downstream semantic segmentation network that uses superpixels (Section 5.2). Learned Superpixels We perform ablation studies and evaluate against other superpixel techniques on the BSDS500 benchmark dataset [4]. BSDS500 consists of 200 train, 100 validation, and 200 test images. Each image is annotated with ground-truth (GT) segments from multiple annotators. We treat each annotation as as a separate sample resulting in 1633 training/validation pairs and 1063 testing pairs. In order to learn superpixels that adhere to GT segments, we use GT segment labels in the reconstruction loss (Eq. 5). Specifically, we represent GT segments in each image as one-hot encoding vectors and use that as pixel properties R in the reconstruction loss. We use the cross-entropy loss for L in Eq. 5. Note that, unlike in the semantic segmentation task where the GT labels have meaning, GT segments in this dataset do not carry any semantic meaning. This does not pose any issue to our learning setup as both the SSN and reconstruction loss are agnostic to the meaning of pixel properties R. The reconstruction loss generates a loss value using the given input signal R and its reconstructed version R * and does not consider whether the meaning of R is preserved across images. Evaluation metrics. Superpixels are useful in a wide range of vision tasks and several metrics exist for evaluating superpixels. In this work, we consider Achievable Segmentation Accuracy (ASA) as our primary metric while also reporting boundary metrics such as Boundary Recall (BR) and Boundary Precision (BP) metrics. ASA score represents the upper bound on the accuracy achievable by any segmentation step performed on the superpixels. Boundary precision and recall on the other hand measures how well the superpixel boundaries align with the GT boundaries. We explain these metrics in more detail in the supplementary material. The higher these scores, the better is the segmentation result. We report the average ASA and boundary metrics by varying the average number of generated superpixels. A fair evaluation of boundary precision and recall expects superpixels to be spatially connected. Thus, for the sake of unbiased comparisons, we follow the optional post-processing of computing hard clusters and enforcing spatial connectivity (lines 7-8 in Algorithm 1) on SSN superpixels. Ablation studies. We refer to our main model illustrated in Fig. 3, with 7 convolution layers in deep network, as SSN deep . As a baseline model, we evalute the superpixels generated with differentiable SLIC that takes pixel XY Lab features as input. This is similar to standard SLIC algorithm, which we refer to as SSN pix and has no trainable parameters. As an another baseline model, we replaced the deep network with a single convolution layer that learns to linearly transform input XY Lab features, which we refer to as SSN linear . Fig. 4 shows the average ASA and BR scores for these different models with varying feature dimensionality k and the number of iterations v in differentiable SLIC. The ASA and BR of SSN linear is already reliably higher than the baseline SSN pix showing the importance of our loss functions and back-propagating the loss signal through the superpixel algorithm. SSN deep further improves ASA and BR scores by a large margin. We observe slightly better scores with higher feature dimensionality k and also more iterations v. 
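For reference, the ASA metric used in these comparisons can be sketched as follows: each superpixel is assigned the ground-truth segment it overlaps most, and ASA is the fraction of pixels that end up correctly labelled under this best assignment. The exact benchmark scripts may differ in details such as the handling of ignore labels.

```python
# Sketch of Achievable Segmentation Accuracy (ASA): label every superpixel
# with its best-overlapping ground-truth segment and count correct pixels.
import numpy as np

def asa(superpixel_labels, gt_labels):
    sp, gt = np.ravel(superpixel_labels), np.ravel(gt_labels)
    correct = 0
    for s in np.unique(sp):
        _, overlaps = np.unique(gt[sp == s], return_counts=True)
        correct += overlaps.max()        # best achievable choice for this superpixel
    return correct / gt.size

# Toy 4x4 example: two vertical superpixels against a ground-truth segmentation.
sp = np.array([[0, 0, 1, 1]] * 4)
gt = np.array([[0, 0, 0, 1]] * 4)
print(asa(sp, gt))   # 0.75: the right superpixel unavoidably mislabels 4 pixels
```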
1807.10174
2950691724
Superpixels provide an efficient low/mid-level representation of image data, which greatly reduces the number of image primitives for subsequent vision tasks. Existing superpixel algorithms are not differentiable, making them difficult to integrate into otherwise end-to-end trainable deep neural networks. We develop a new differentiable model for superpixel sampling that leverages deep networks for learning superpixel segmentation. The resulting "Superpixel Sampling Network" (SSN) is end-to-end trainable, which allows learning task-specific superpixels with flexible loss functions and has fast runtime. Extensive experimental analysis indicates that SSNs not only outperform existing superpixel algorithms on traditional segmentation benchmarks, but can also learn superpixels for other tasks. In addition, SSNs can be easily integrated into downstream deep networks resulting in performance improvements.
Inspired by the success of deep learning for supervised tasks, several methods investigate the use of deep networks for unsupervised data clustering. Recently, Greff et al. @cite_35 propose the neural expectation maximization framework, where they model the posterior distribution of cluster labels using deep networks and unroll the iterative steps in the EM procedure for end-to-end training. In another work @cite_21 , the Ladder network @cite_7 is used to model a hierarchical latent variable model for clustering. Hershey et al. @cite_17 propose a deep learning-based clustering framework for separating and segmenting audio signals. Xie et al. @cite_43 propose a deep embedded clustering framework for simultaneously learning feature representations and cluster assignments. In a recent survey paper, Aljalbout et al. @cite_42 give a taxonomy of deep learning-based clustering methods. In this paper, we also propose a deep learning-based clustering algorithm. Different from the prior work, our algorithm is tailored for the superpixel segmentation task, where we use image-specific constraints. Moreover, our framework can easily incorporate other vision objective functions for learning task-specific superpixel representations.
{ "abstract": [ "Many real world tasks such as reasoning and physical interaction require identification and manipulation of conceptual entities. A first step towards solving these tasks is the automated discovery of distributed symbol-like representations. In this paper, we explicitly formalize this problem as inference in a spatial mixture model where each component is parametrized by a neural network. Based on the Expectation Maximization framework we then derive a differentiable clustering method that simultaneously learns how to group and represent individual entities. We evaluate our method on the (sequential) perceptual grouping task and find that it is able to accurately recover the constituent objects. We demonstrate that the learned representations are useful for next-step prediction.", "We combine supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pre-training. Our work builds on the Ladder network proposed by Valpola (2015), which we extend by combining the model with supervision. We show that the resulting model reaches state-of-the-art performance in semi-supervised MNIST and CIFAR-10 classification, in addition to permutation-invariant MNIST classification with all labels.", "We present a framework for efficient perceptual inference that explicitly reasons about the segmentation of its inputs and features. Rather than being trained for any specific segmentation, our framework learns the grouping process in an unsupervised manner or alongside any supervised task. By enriching the representations of a neural network, we enable it to group the representations of different objects in an iterative manner. By allowing the system to amortize the iterative inference of the groupings, we achieve very fast convergence. In contrast to many other recently proposed methods for addressing multi-object scenes, our system does not assume the inputs to be images and can therefore directly handle other modalities. For multi-digit classification of very cluttered images that require texture segmentation, our method offers improved classification performance over convolutional networks despite being fully connected. Furthermore, we observe that our system greatly improves on the semi-supervised result of a baseline Ladder network on our dataset, indicating that segmentation can also improve sample efficiency.", "Clustering is a fundamental machine learning method. The quality of its results is dependent on the data distribution. For this reason, deep neural networks can be used for learning better representations of the data. In this paper, we propose a systematic taxonomy for clustering with deep learning, in addition to a review of methods from the field. Based on our taxonomy, creating new methods is more straightforward. We also propose a new approach which is built on the taxonomy and surpasses some of the limitations of some previous work. Our experimental evaluation on image datasets shows that the method approaches state-of-the-art clustering quality, and performs better in some cases.", "", "We address the problem of acoustic source separation in a deep learning framework we call \"deep clustering.\" Rather than directly estimating signals or masking functions, we train a deep network to produce spectrogram embeddings that are discriminative for partition labels given in training data. 
Previous deep network approaches provide great advantages in terms of learning power and speed, but previously it has been unclear how to use them to separate signals in a class-independent way. In contrast, spectral clustering approaches are flexible with respect to the classes and number of items to be segmented, but it has been unclear how to leverage the learning power and speed of deep networks. To obtain the best of both worlds, we use an objective function that to train embeddings that yield a low-rank approximation to an ideal pairwise affinity matrix, in a class-independent way. This avoids the high cost of spectral factorization and instead produces compact clusters that are amenable to simple clustering methods. The segmentations are therefore implicitly encoded in the embeddings, and can be \"decoded\" by clustering. Preliminary experiments show that the proposed method can separate speech: when trained on spectrogram features containing mixtures of two speakers, and tested on mixtures of a held-out set of speakers, it can infer masking functions that improve signal quality by around 6dB. We show that the model can generalize to three-speaker mixtures despite training only on two-speaker mixtures. The framework can be used without class labels, and therefore has the potential to be trained on a diverse set of sound types, and to generalize to novel sources. We hope that future work will lead to segmentation of arbitrary sounds, with extensions to microphone array methods as well as image segmentation and other domains." ], "cite_N": [ "@cite_35", "@cite_7", "@cite_21", "@cite_42", "@cite_43", "@cite_17" ], "mid": [ "2747264013", "2952229419", "2463993963", "2784962210", "", "2950354455" ] }
Superpixel Sampling Networks
Superpixels are an over-segmentation of an image that is formed by grouping image pixels [33] based on low-level image properties. They provide a perceptually meaningful tessellation of image content, thereby reducing the number of image primitives for subsequent image processing. Owing to their representational and computational efficiency, superpixels have become an established low/mid-level image representation and are widely used in computer vision algorithms such as object detection [35,42], semantic segmentation [15,34,13], saliency estimation [18,30,43,46], optical flow estimation [20,28,37,41], depth estimation [6], and tracking [44], to name a few. Superpixels are especially widely used in traditional energy minimization frameworks, where a low number of image primitives greatly reduces the optimization complexity. The recent years have witnessed a dramatic increase in the adoption of deep learning for a wide range of computer vision problems. With the exception of a few methods (e.g., [13,18,34]), superpixels are scarcely used in conjunction with modern deep networks. There are two main reasons for this. First, the standard convolution operation, which forms the basis of most deep architectures, is usually defined over regular grid lattices and becomes inefficient when operating over irregular superpixel lattices. Second, existing superpixel algorithms are not differentiable, which makes them hard to integrate into otherwise end-to-end trainable deep networks. The main contributions of this work are:
- End-to-end trainable: SSNs are end-to-end trainable and can be easily integrated into other deep network architectures. To the best of our knowledge, this is the first end-to-end trainable superpixel algorithm.
- Flexible and task-specific: SSN allows for learning with flexible loss functions resulting in the learning of task-specific superpixels.
- State-of-the-art performance: Experiments on a wide range of benchmark datasets show that SSN outperforms existing superpixel algorithms.
- Favorable runtime: SSN also performs favorably against prominent superpixel algorithms in terms of runtime, making it amenable to learning on large datasets and also effective for practical applications.
Preliminaries
At the core of SSN is a differentiable clustering technique that is inspired by the SLIC [1] superpixel algorithm. Here, we briefly review SLIC before describing our SSN technique in the next section. SLIC is one of the simplest and also one of the most widely-used superpixel algorithms. It is easy to implement, has fast runtime and also produces compact and uniform superpixels. Although there are several different variants [25,27] of the SLIC algorithm, in its original form, SLIC is a k-means clustering performed on image pixels in a five-dimensional position and color space (usually a scaled XY Lab space). Formally, given an image I ∈ R^{n×5} with 5-dimensional XY Lab features at n pixels, the task of superpixel computation is to assign each pixel to one of the m superpixels, i.e., to compute the pixel-superpixel association map H ∈ {0, 1, ..., m−1}^{n×1}. The SLIC algorithm operates as follows. First, we sample initial cluster (superpixel) centers S^0 ∈ R^{m×5} in the 5-dimensional space. This sampling is usually done uniformly across the pixel grid with some local perturbations based on image gradients. Given these initial superpixel centers S^0, the SLIC algorithm proceeds in an iterative manner with the following two steps in each iteration t:
1. Pixel-superpixel association: Associate each pixel with the nearest superpixel center in the five-dimensional space, i.e., compute the new superpixel assignment at each pixel p,

$H_p^t = \arg\min_{i \in \{0,\dots,m-1\}} D(I_p, S_i^{t-1})$,   (1)

where D denotes the distance computation $D(a, b) = \|a - b\|^2$.
2. Superpixel center update: Average the pixel features (XY Lab) inside each superpixel cluster to obtain the new superpixel cluster centers S^t. For each superpixel i, we compute the centroid of that cluster,

$S_i^t = \frac{1}{Z_i^t} \sum_{p \,|\, H_p^t = i} I_p$,   (2)

where $Z_i^t$ denotes the number of pixels in the superpixel cluster i.
These two steps form the core of the SLIC algorithm and are repeated until either convergence or for a fixed number of iterations. Since computing the distance D in Eq. 1 between all the pixels and superpixels is time-consuming, this computation is usually constrained to a fixed neighborhood around each superpixel center. At the end, depending on the application, there is an optional step of enforcing spatial connectivity across pixels in each superpixel cluster. More details regarding the SLIC algorithm can be found in Achanta et al. [1]. In the next section, we elucidate how we modify the SLIC algorithm to develop SSN.
Superpixel Sampling Networks
As illustrated in Fig. 1, SSN is composed of two parts: a deep network that generates pixel features, which are then passed on to differentiable SLIC. Here, we first describe the differentiable SLIC followed by the SSN architecture.
Differentiable SLIC
Why is SLIC not differentiable? A closer look at all the computations in SLIC shows that the non-differentiability arises because of the computation of pixel-superpixel associations, which involves a non-differentiable nearest-neighbor operation. This nearest-neighbor computation also forms the core of the SLIC superpixel clustering, and thus we cannot avoid this operation. A key to our approach is to convert the nearest-neighbor operation into a differentiable one. Instead of computing hard pixel-superpixel associations H ∈ {0, 1, ..., m−1}^{n×1} (in Eq. 1), we propose to compute soft associations Q ∈ R^{n×m} between pixels and superpixels. Specifically, for a pixel p and superpixel i at iteration t, we replace the nearest-neighbor computation (Eq. 1) in SLIC with the following pixel-superpixel association:

$Q_{pi}^t = e^{-D(I_p, S_i^{t-1})} = e^{-\|I_p - S_i^{t-1}\|^2}$.   (3)

Correspondingly, the computation of the new superpixel cluster centers (Eq. 2) is modified into a weighted sum of pixel features,

$S_i^t = \frac{1}{Z_i^t} \sum_{p=1}^{n} Q_{pi}^t I_p$,   (4)

where $Z_i^t = \sum_p Q_{pi}^t$ is the normalization constant. For convenience, we refer to the column-normalized $Q^t$ as $\hat{Q}^t$, and thus we can write the above superpixel center update as $S^t = \hat{Q}^{t\top} I$. These computations correspond to the core loop of Algorithm 1:
3: for each iteration t in 1 to v do
4:   Compute the association between each pixel p and the surrounding superpixels i, $Q_{pi}^t = e^{-\|F_p - S_i^{t-1}\|^2}$.
5:   Compute the new superpixel centers, $S_i^t = \frac{1}{Z_i^t} \sum_{p=1}^{n} Q_{pi}^t F_p$; $Z_i^t = \sum_p Q_{pi}^t$.
6: end for
7: (Optional) Compute hard associations $H^v \in \{0,\dots,m-1\}^{n\times 1}$; $H_p^v = \arg\max_{i \in \{0,\dots,m-1\}} Q_{pi}^v$.
8: (Optional) Enforce spatial connectivity.
The size of Q is n × m, and even for a small number of superpixels m it is prohibitively expensive to compute $Q_{pi}$ between all the pixels and superpixels. Therefore, we constrain the distance computations from each pixel to only the 9 surrounding superpixels, as illustrated using the red and green boxes in Fig. 2.
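As a concrete illustration of the soft iteration above, here is a minimal NumPy sketch of Eqs. (3)-(4). It computes dense associations between all pixels and all superpixels, so the restriction to the 9 surrounding superpixels discussed around Fig. 2 is omitted for clarity; the array names are illustrative, and this is not the authors' CUDA/Caffe implementation.

import numpy as np

def soft_slic_iteration(features, centers):
    """One dense differentiable-SLIC update (Eqs. 3-4).

    features: (n, k) pixel features (scaled XY Lab or CNN features F)
    centers:  (m, k) current superpixel centers S^{t-1}
    returns:  soft associations Q of shape (n, m) and the new centers S^t
    """
    # Squared Euclidean distance D(I_p, S_i) between every pixel and every center.
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    Q = np.exp(-d2)                      # Eq. (3): Q_pi = exp(-||I_p - S_i||^2)
    Z = Q.sum(axis=0, keepdims=True)     # Z_i = sum_p Q_pi
    new_centers = (Q / Z).T @ features   # Eq. (4): S^t = Q_hat^T F
    return Q, new_centers

In the full model this update is repeated for v iterations, and restricting each pixel to its 9 surrounding superpixels reduces Q from n × m to n × 9, as described next.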
For each pixel in the green box, only the surrounding superpixels in the red box are considered for computing the association. This brings the size of Q down from n × m to n × 9, making it efficient in terms of both computation and memory. This approximation in the Q computation is similar in spirit to the approximate nearest-neighbor search in SLIC. Now, both computations in each SLIC iteration are completely differentiable, and we refer to this modified algorithm as differentiable SLIC. Empirically, we observe that replacing the hard pixel-superpixel associations in SLIC with the soft ones in differentiable SLIC does not result in any performance degradation. Since this new superpixel algorithm is differentiable, it can be easily integrated into any deep network architecture. Instead of using manually designed pixel features I_p, we can leverage deep feature extractors and train the whole network end-to-end. In other words, we replace the image features I_p in the above computations (Eqs. 3 and 4) with k-dimensional pixel features F_p ∈ R^{n×k} computed using a deep network. We refer to this coupling of deep networks with the differentiable SLIC as the Superpixel Sampling Network (SSN).
Algorithm 1 outlines all the computation steps in SSN. The algorithm starts with deep image feature extraction using a CNN (line 1). We initialize the superpixel cluster centers (line 2) with the average pixel features in an initial regular superpixel grid (Fig. 2). Then, for v iterations, we iteratively update pixel-superpixel associations and superpixel centers using the above-mentioned computations (lines 3-6). Although one could directly use the soft pixel-superpixel associations Q for several downstream tasks, there is an optional step of converting soft associations to hard ones (line 7), depending on the application needs. In addition, like in the original SLIC algorithm, we can optionally enforce spatial connectivity across pixels inside each superpixel cluster. This is accomplished by merging the superpixels smaller than a certain threshold with the surrounding ones and then assigning a unique cluster ID to each spatially-connected component. Note that these two optional steps (lines 7, 8) are not differentiable.
[Fig. 3: the deep network, a stack of Conv-BN-ReLU blocks with two pooling stages and a final concatenation-convolution layer, feeds into v iterations of differentiable SLIC, which alternates between computing pixel-superpixel associations and computing superpixel centers.]
Mapping between pixel and superpixel representations. For some downstream applications that use superpixels, pixel representations are mapped onto superpixel representations and vice versa. With the traditional superpixel algorithms, which provide hard clusters, the mapping from pixel to superpixel representations is done via averaging inside each cluster (Eq. 2). The inverse mapping from superpixel to pixel representations is done by assigning the same superpixel feature to all the pixels belonging to that superpixel. We can use the same pixel-superpixel mappings with SSN superpixels as well, using the hard clusters (line 7 in Algorithm 1) obtained from SSN. However, since this computation of hard associations is not differentiable, it may not be desirable to use hard clusters when integrating into an end-to-end trainable system. It is worth noting that the soft pixel-superpixel associations generated by SSN can also be easily used for mapping between pixel and superpixel representations.
Eq. 4 already describes the mapping from a pixel to a superpixel representation, which is a simple matrix multiplication with the transpose of the column-normalized Q matrix: $S = \hat{Q}^\top F$, where F and S denote pixel and superpixel representations respectively. The inverse mapping from superpixel to pixel representations is done by multiplying the row-normalized Q, denoted as $\tilde{Q}$, with the superpixel representations: $F = \tilde{Q} S$. Thus the pixel-superpixel feature mappings are given as simple matrix multiplications with the association matrix and are differentiable. Later, we will make use of these mappings in designing the loss functions to train SSN.
Architecture
Fig. 3 shows the SSN network architecture. The CNN for feature extraction is composed of a series of convolution layers interleaved with batch normalization [21] (BN) and ReLU activations. We use max-pooling, which downsamples the input by a factor of 2, after the 2nd and 4th convolution layers to increase the receptive field. We bilinearly upsample the 4th and 6th convolution layer outputs and then concatenate them with the 2nd convolution layer output to pass onto the final convolution layer. We use 3 × 3 convolution filters with the number of output channels set to 64 in each layer, except the last CNN layer, which outputs k − 5 channels. We concatenate this (k − 5)-channel output with the XY Lab of the given image, resulting in k-dimensional pixel features. We choose this CNN architecture for its simplicity and efficiency; other network architectures are conceivable. The resulting k-dimensional features are passed onto the two modules of differentiable SLIC that iteratively update pixel-superpixel associations and superpixel centers for v iterations. The entire network is end-to-end trainable.
Network Learning Task-Specific Superpixels
One of the main advantages of an end-to-end trainable SSN is the flexibility in terms of loss functions, which we can use to learn task-specific superpixels. Like in any CNN, we can couple SSN with any task-specific loss function, resulting in the learning of superpixels that are optimized for downstream computer vision tasks. In this work, we focus on optimizing the representational efficiency of superpixels, i.e., learning superpixels that can efficiently represent a scene characteristic such as semantic labels, optical flow, depth, etc. As an example, if we want to learn superpixels that are going to be used for a downstream semantic segmentation task, it is desirable to produce superpixels that adhere to semantic boundaries. To optimize for representational efficiency, we find that the combination of a task-specific reconstruction loss and a compactness loss performs well.
Task-specific reconstruction loss. We denote the pixel properties that we want to represent efficiently with superpixels as R ∈ R^{n×l}. For instance, R can be semantic labels (as one-hot encodings) or optical flow maps. It is important to note that we do not have access to R at test time, i.e., SSN predicts superpixels using image data only. We only use R during training so that SSN can learn to predict superpixels suitable to represent R. As mentioned previously in Section 4.1, we can map the pixel properties onto superpixels using the column-normalized association matrix $\hat{Q}$: $\hat{R} = \hat{Q}^\top R$, where $\hat{R} \in R^{m \times l}$. The resulting superpixel representation $\hat{R}$ is then mapped back onto the pixel representation $R^*$ using the row-normalized association matrix $\tilde{Q}$: $R^* = \tilde{Q}\hat{R}$, where $R^* \in R^{n \times l}$. Then the reconstruction loss is given as

$L_{recon} = L(R, R^*) = L(R, \tilde{Q}\hat{Q}^\top R)$,   (5)

where L(·, ·) denotes a task-specific loss function. In this work, for segmentation tasks, we use the cross-entropy loss for L, and we use the L1 norm for learning superpixels for optical flow. Here Q denotes the association matrix Q^v after the final iteration of differentiable SLIC; we omit v for convenience.
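A minimal NumPy sketch of this reconstruction loss, assuming soft associations Q from the final iteration and one-hot pixel labels R with the cross-entropy choice of L; the function is illustrative rather than the authors' implementation.

import numpy as np

def reconstruction_loss(Q, R, eps=1e-12):
    """Task-specific reconstruction loss of Eq. (5) for one-hot labels R.

    Q: (n, m) soft pixel-superpixel associations
    R: (n, l) one-hot pixel properties (e.g. segment labels)
    """
    Q_hat = Q / (Q.sum(axis=0, keepdims=True) + eps)    # column-normalized Q
    Q_tilde = Q / (Q.sum(axis=1, keepdims=True) + eps)  # row-normalized Q
    R_sp = Q_hat.T @ R        # pixel properties mapped onto superpixels, (m, l)
    R_rec = Q_tilde @ R_sp    # mapped back to pixels: R* = Q_tilde Q_hat^T R
    # Cross-entropy between the ground-truth labels and their reconstruction.
    return -np.mean(np.sum(R * np.log(R_rec + eps), axis=1))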
Compactness loss. In addition to the above loss, we also use a compactness loss to encourage superpixels to be spatially compact, i.e., to have lower spatial variance inside each superpixel cluster. Let $I^{xy}$ denote the positional pixel features. We first map these positional features into our superpixel representation, $S^{xy} = \hat{Q}^\top I^{xy}$. Then, we do the inverse mapping onto the pixel representation using the hard associations H, instead of the soft associations Q, by assigning the same superpixel positional feature to all the pixels belonging to that superpixel: $\bar{I}^{xy}_p = S^{xy}_i$, where $i = H_p$. The compactness loss is defined as the following L2 norm:

$L_{compact} = \|I^{xy} - \bar{I}^{xy}\|_2$.   (6)

This loss encourages superpixels to have lower spatial variance. The flexibility of SSN allows using many other loss functions, which makes for interesting future research. The overall loss we use in this work is a combination of these two loss functions, $L = L_{recon} + \lambda L_{compact}$, where we set λ to 10^{-5} in all our experiments.
Implementation and Experiment Protocols
We implement the differentiable SLIC as neural network layers using CUDA in the Caffe neural network framework [22]. All the experiments are performed using Caffe with the Python interface. We use scaled XY Lab features as input to the SSN, with the position and color feature scales represented as $\gamma_{pos}$ and $\gamma_{color}$ respectively. The value of $\gamma_{color}$ is independent of the number of superpixels and is set to 0.26, with color values ranging between 0 and 255. The value of $\gamma_{pos}$ depends on the number of superpixels, $\gamma_{pos} = \eta \max(m_w/n_w, m_h/n_h)$, where $m_w, n_w$ and $m_h, n_h$ denote the number of superpixels and pixels along the image width and height respectively. In practice, we observe that η = 2.5 performs well. For training, we use image patches of size 201 × 201 and 100 superpixels. In terms of data augmentation, we use left-right flips, and for the small BSDS500 dataset [4] we use an additional augmentation of random scaling of image patches. For all the experiments, we use Adam stochastic optimization [23] with a batch size of 8 and a learning rate of 0.0001. Unless otherwise mentioned, we trained the models for 500K iterations and chose the final trained models based on validation accuracy. For the ablation studies, we trained models with varying parameters for 200K iterations. It is important to note that we use a single trained SSN model for estimating varying numbers of superpixels, by scaling the input positional features as described above. We use 5 iterations (v = 5) of differentiable SLIC for training and 10 iterations at test time, as we observed only marginal performance gains with more iterations. Refer to https://varunjampani.github.io/ssn/ for the code and trained models.
Experiments
We conduct experiments on 4 different benchmark datasets. We first demonstrate the use of learned superpixels with experiments on the prominent superpixel benchmark BSDS500 [4] (Section 5.1). We then demonstrate the use of task-specific superpixels on the Cityscapes [10] and PascalVOC [11] datasets for semantic segmentation (Section 5.2), and on the MPI-Sintel [7] dataset for optical flow (Section 5.3).
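Referring back to the feature-scaling protocol above, a small helper can make the γ_color and γ_pos scaling explicit; the function and argument names are hypothetical, and only the constants (γ_color = 0.26, η = 2.5) come from the text.

import numpy as np

def scale_xylab(xylab, n_w, n_h, m_w, m_h, eta=2.5, gamma_color=0.26):
    """Scale XY Lab pixel features before feeding them to SSN.

    xylab: (n, 5) array with columns [x, y, L, a, b], color values in [0, 255]
    n_w, n_h: image width and height in pixels
    m_w, m_h: number of superpixels along the image width and height
    """
    gamma_pos = eta * max(m_w / n_w, m_h / n_h)
    scaled = xylab.astype(np.float64).copy()
    scaled[:, :2] *= gamma_pos      # positional scale depends on the superpixel grid
    scaled[:, 2:] *= gamma_color    # color scale is fixed
    return scaled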
In addition, we demonstrate the use of SSN superpixels in a downstream semantic segmentation network that uses superpixels (Section 5.2). Learned Superpixels We perform ablation studies and evaluate against other superpixel techniques on the BSDS500 benchmark dataset [4]. BSDS500 consists of 200 train, 100 validation, and 200 test images. Each image is annotated with ground-truth (GT) segments from multiple annotators. We treat each annotation as as a separate sample resulting in 1633 training/validation pairs and 1063 testing pairs. In order to learn superpixels that adhere to GT segments, we use GT segment labels in the reconstruction loss (Eq. 5). Specifically, we represent GT segments in each image as one-hot encoding vectors and use that as pixel properties R in the reconstruction loss. We use the cross-entropy loss for L in Eq. 5. Note that, unlike in the semantic segmentation task where the GT labels have meaning, GT segments in this dataset do not carry any semantic meaning. This does not pose any issue to our learning setup as both the SSN and reconstruction loss are agnostic to the meaning of pixel properties R. The reconstruction loss generates a loss value using the given input signal R and its reconstructed version R * and does not consider whether the meaning of R is preserved across images. Evaluation metrics. Superpixels are useful in a wide range of vision tasks and several metrics exist for evaluating superpixels. In this work, we consider Achievable Segmentation Accuracy (ASA) as our primary metric while also reporting boundary metrics such as Boundary Recall (BR) and Boundary Precision (BP) metrics. ASA score represents the upper bound on the accuracy achievable by any segmentation step performed on the superpixels. Boundary precision and recall on the other hand measures how well the superpixel boundaries align with the GT boundaries. We explain these metrics in more detail in the supplementary material. The higher these scores, the better is the segmentation result. We report the average ASA and boundary metrics by varying the average number of generated superpixels. A fair evaluation of boundary precision and recall expects superpixels to be spatially connected. Thus, for the sake of unbiased comparisons, we follow the optional post-processing of computing hard clusters and enforcing spatial connectivity (lines 7-8 in Algorithm 1) on SSN superpixels. Ablation studies. We refer to our main model illustrated in Fig. 3, with 7 convolution layers in deep network, as SSN deep . As a baseline model, we evalute the superpixels generated with differentiable SLIC that takes pixel XY Lab features as input. This is similar to standard SLIC algorithm, which we refer to as SSN pix and has no trainable parameters. As an another baseline model, we replaced the deep network with a single convolution layer that learns to linearly transform input XY Lab features, which we refer to as SSN linear . Fig. 4 shows the average ASA and BR scores for these different models with varying feature dimensionality k and the number of iterations v in differentiable SLIC. The ASA and BR of SSN linear is already reliably higher than the baseline SSN pix showing the importance of our loss functions and back-propagating the loss signal through the superpixel algorithm. SSN deep further improves ASA and BR scores by a large margin. We observe slightly better scores with higher feature dimensionality k and also more iterations v. 
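Since ASA is the primary metric in these comparisons, the following rough sketch shows how it is commonly computed; the paper defers its exact definition to the supplementary material, so this follows the usual convention rather than the authors' evaluation code.

import numpy as np

def achievable_segmentation_accuracy(sp_labels, gt_labels):
    """ASA: fraction of pixels explained when each superpixel is assigned
    to its best-overlapping ground-truth segment.

    sp_labels, gt_labels: integer label maps of identical shape
    (assumes non-negative integer labels).
    """
    sp = sp_labels.ravel()
    gt = gt_labels.ravel()
    correct = 0
    for s in np.unique(sp):
        seg = gt[sp == s]
        correct += np.bincount(seg).max()   # pixels of the dominant GT segment
    return correct / gt.size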
For computational reasons, we choose k = 20 and v = 10, and from here on we refer to this model as SSN deep .
Comparison with the state of the art. Fig. 10 shows the ASA and precision-recall comparison of SSN with state-of-the-art superpixel algorithms. We compare with the following prominent algorithms: SLIC [1], SNIC [2], SEEDS [5], LSC [25], ERS [26], ETPS [45] and SCALP [14]. The plots indicate that SSN pix performs similarly to SLIC superpixels, showing that the performance of SLIC does not drop when relaxing the nearest-neighbor constraints. Comparison with other techniques indicates that SSN performs considerably better in terms of both ASA score and precision-recall. Fig. 2 shows a visual result comparing SSN pix and SSN deep , and Fig. 7 shows visual results comparing SSN deep with the state of the art. Notice that SSN deep superpixels smoothly follow object boundaries and are also more concentrated near the object boundaries.
Superpixels for Semantic Segmentation
In this section, we present results on the semantic segmentation benchmarks of Cityscapes [10] and PascalVOC [11]. The experimental settings are quite similar to those of the previous section, with the only difference being the use of semantic labels as the pixel properties R in the reconstruction loss. Thus, we encourage SSN to learn superpixels that adhere to semantic segments.
Cityscapes. Cityscapes is a large-scale urban scene understanding benchmark with pixel-accurate semantic annotations. We train SSN with the 2975 train images and evaluate on the 500 validation images. For the ease of experimentation, we experiment with half-resolution (512 × 1024) images. The plots in Fig. 6 show that SSN deep performs on par with SEAL [38] superpixels in terms of ASA while being better in terms of precision-recall. We show a visual result in Fig. 7, with more in the supplementary.
Runtime analysis. We report the approximate runtimes of different techniques for computing 1000 superpixels on a 512 × 1024 Cityscapes image in Table 1. We compute GPU runtimes using an NVIDIA Tesla V100 GPU. The runtime comparison between SSN pix and SSN deep indicates that a significant portion of the SSN computation time is due to the differentiable SLIC. The runtimes indicate that SSN is considerably faster than the implementations of several superpixel algorithms.
PascalVOC. PascalVOC2012 [11] is another widely-used semantic segmentation benchmark, where we train SSN with 1464 train images and validate on 1449 validation images. Fig. 8(a) shows the ASA scores for different techniques. We do not analyze boundary scores on this dataset as the GT semantic boundaries are dilated with an ignore label. The ASA scores indicate that SSN deep outperforms other techniques. We also evaluated the BSDS-trained model on this dataset and observed only a marginal drop in accuracy ('SSN deep -BSDS' in Fig. 8(a)). This shows the generalization and robustness of SSN to different datasets. An example visual result is shown in Fig. 7, with more in the supplementary.
We perform an additional experiment where we plug SSN into the downstream semantic segmentation network of [13]. The network in [13] has bilateral inception layers that make use of superpixels for long-range data-adaptive information propagation across intermediate CNN representations. Table 2 shows the Intersection over Union (IoU) score for this joint model evaluated on the test data.
Table 2. IoU of the joint model on the test data:
Method             IoU
DeepLab [8]        68.9
+ CRF [8]          72.7
+ BI (SLIC) [13]   74.1
+ BI (SSN deep)    75.3
The improvements in IoU with respect to the original SLIC superpixels used in [13] show that SSN can also bring performance improvements to downstream task networks that use superpixels.
Superpixels for Optical Flow
To demonstrate the applicability of SSN to regression tasks as well, we conduct a proof-of-concept experiment where we learn superpixels that adhere to optical flow boundaries. To this end, we experiment on the MPI-Sintel dataset [7] and use SSN to predict superpixels given a pair of input frames. We use the GT optical flow as the pixel properties R in the reconstruction loss (Eq. 5) and use the L1 loss for L, encouraging SSN to generate superpixels that can effectively represent flow. The MPI-Sintel dataset consists of 23 video sequences, which we split into disjoint sets of 18 (836 frames) training and 5 (205 frames) validation sequences. To evaluate the superpixels, we follow a similar strategy as for computing ASA. That is, for each pixel inside a superpixel, we assign the average GT optical flow, resulting in a segmented flow. Fig. 9 shows sample segmented flows obtained using different types of superpixels. We then compute the Euclidean distance between the GT flow and the segmented flow, which is referred to as the end-point error (EPE). The lower the EPE value, the better the superpixels are for representing flow. A sample result in Fig. 9 shows that SSN deep superpixels are better aligned with the changes in the GT flow than other superpixels. Fig. 8(b) shows the average EPE values for different techniques, where SSN deep performs favourably against existing superpixel techniques. This shows the usefulness of SSN in learning task-specific superpixels.
Conclusion
We propose a novel superpixel sampling network (SSN) that leverages deep features learned via end-to-end training for estimating task-specific superpixels. To our knowledge, this is the first deep superpixel prediction technique that is end-to-end trainable. Experiments on several benchmarks show that SSN consistently performs favorably against state-of-the-art superpixel techniques, while also being faster. Integration of SSN into a semantic segmentation network [13] also results in performance improvements, showing the usefulness of SSN in downstream computer vision tasks. SSN is fast, easy to implement, can be easily integrated into other deep networks, and has good empirical performance. SSN has addressed one of the main hurdles for incorporating superpixels into deep networks, which is the non-differentiable nature of existing superpixel algorithms. The use of superpixels inside deep networks can have several advantages. Superpixels can reduce the computational complexity, especially when processing high-resolution images. Superpixels can also be used to enforce piece-wise constant assumptions and also help in long-range information propagation [13]. We believe this work opens up new avenues in leveraging superpixels inside deep networks and also inspires new deep learning techniques that use superpixels.
In addition, we also plot F-measure scores in Fig. 10(b). In summary, SSN deep also outperforms other techniques in terms of F-measure while maintaining compactness similar to that of SSN pix. This shows the robustness of SSN with respect to different superpixel aspects.
Additional visual results. In this section, we present additional visual results of different techniques and on different datasets. Figs.
11, 12 and 13 show superpixel visual results on three segmentation benchmarks of BSDS500 [4], Cityscapes [10] and PascalVOC [11] respectively. For comparisons, we show the superpixels obtained with
4,476
1807.10002
2884915206
Estimating human gaze from natural eye images only is a challenging task. Gaze direction can be defined by the pupil- and the eyeball center where the latter is unobservable in 2D images. Hence, achieving highly accurate gaze estimates is an ill-posed problem. In this paper, we introduce a novel deep neural network architecture specifically designed for the task of gaze estimation from single eye input. Instead of directly regressing two angles for the pitch and yaw of the eyeball, we regress to an intermediate pictorial representation which in turn simplifies the task of 3D gaze direction estimation. Our quantitative and qualitative results show that our approach achieves higher accuracies than the state-of-the-art and is robust to variation in gaze, head pose and image quality.
Traditional approaches to image-based gaze estimation are typically categorized as feature-based or model-based. Feature-based approaches reduce an eye image down to a set of features based on hand-crafted rules @cite_39 @cite_1 @cite_26 @cite_0 and then feed these features into simple, often linear machine learning models to regress the final gaze estimate. Model-based methods instead attempt to fit a known 3D model to the eye image @cite_6 @cite_7 @cite_10 @cite_21 by minimizing a suitable energy.
{ "abstract": [ "Despite the widespread use of mobile phones and tablets, hand-held portable devices have only recently been identified as a promising platform for gaze-aware applications. Estimating gaze on portable devices is challenging given their limited computational resources, low quality integrated front-facing RGB cameras, and small screens to which gaze is mapped. In this paper we present EyeTab, a model-based approach for binocular gaze estimation that runs entirely on an unmodified tablet. EyeTab builds on set of established image processing and computer vision algorithms and adapts them for robust and near-realtime gaze estimation. A technical prototype evaluation with eight participants in a normal indoors office setting shows that EyeTab achieves an average gaze estimation accuracy of 6.88° of visual angle at 12 frames per second.", "Simple-setup real-time gaze estimation system using only a consumer depth camera.3D model based gaze estimation method allowing free head movement.Iris center localization method able to handle relatively low-quality eye images.Simple system calibration easy to be carried out for nonprofessional users. Existing eye-gaze-tracking systems typically require multiple infrared (IR) lights and high-quality cameras to achieve good performance and robustness against head movement. This requirement limits the systems' potential for broader applications. In this paper, we present a low-cost, non-intrusive, simple-setup gaze estimation system that can estimate the gaze direction under free head movement. In particular, the proposed system only uses a consumer depth camera (Kinect sensor) positioned at a distance from the subject. We develop a simple procedure to calibrate the geometric relationship between the screen and the camera, and subject-specific parameters. A parameterized iris model is then used to locate the center of the iris for gaze feature extraction, which can handle low-quality eye images. Finally, the gaze direction is determined based on a 3D geometric eye model, where the head movement and deviation of the visual axis from the optical axis are taken into consideration. Experimental results indicate that the system can estimate gaze with an accuracy of 1.4-2.7? and is robust against large head movements. Two real-time human-computer interaction (HCI) applications are presented to demonstrate the potential of the proposed system for wide applications.", "3D model-based gaze estimation methods are widely explored because of their good accuracy and ability to handle free head movement. Traditional methods with complex hardware systems (Eg. infrared lights, 3D sensors, etc.) are restricted to controlled environments, which significantly limit their practical utilities. In this paper, we propose a 3D model-based gaze estimation method with a single web-camera, which enables instant and portable eye gaze tracking. The key idea is to leverage on the proposed 3D eye-face model, from which we can estimate 3D eye gaze from observed 2D facial landmarks. The proposed system includes a 3D deformable eye-face model that is learned offline from multiple training subjects. Given the deformable model, individual 3D eye-face models and personal eye parameters can be recovered through the unified calibration algorithm. Experimental results show that the proposed method outperforms state-of-the-art methods while allowing convenient system setup and free head movement. 
A real time eye tracking system running at 30 FPS also validates the effectiveness and efficiency of the proposed method.", "Most eye gaze estimation systems rely on explicit calibration, which is inconvenient to the user, limits the amount of possible training data and consequently the performance. Since there is likely a strong correlation between gaze and interaction cues, such as cursor and caret locations, a supervised learning algorithm can learn the complex mapping between gaze features and the gaze point by training on incremental data collected implicitly from normal computer interactions. We develop a set of robust geometric gaze features and a corresponding data validation mechanism that identifies good training data from noisy interaction-informed data collected in real-use scenarios. Based on a study of gaze movement patterns, we apply behavior-informed validation to extract gaze features that correspond with the interaction cue, and data-driven validation provides another level of crosschecking using previous good data. Experimental evaluation shows that the proposed method achieves an average error of 4.06o, and demonstrates the effectiveness of the proposed gaze estimation method and corresponding validation mechanism.", "Most commercial eye gaze tracking systems are based on the use of infrared lights. However, such systems may not work outdoor or may have a very limited head box for them to work. This paper proposes a non-infrared based approach to track one's eye gaze with an RGBD camera (in our case, Kinect). The proposed method adopts a personalized 3D face model constructed off-line. To detect the eye gaze, our system tracks the iris center and a set of 2D facial landmarks whose 3D locations are provided by the RGBD camera. A simple onetime calibration procedure is used to obtain the parameters of the personalized eye gaze model. We compare the performance of the proposed method against the 2D approach using only RGB input on the same images, and find that the use of depth information directly from Kinect achieves more accurate tracking. As expected, the results from the proposed method are not as accurate as the ones from infrared-based approaches. However, this method has the potential for practical use with upcoming better and cheaper depth cameras.", "Low cost eye tracking is an actual challenging research topic for the eye tracking community. Gaze tracking based on a web cam and without infrared light is a searched goal to broaden the applications of eye tracking systems. Web cam based eye tracking results in new challenges to solve such as a wider field of view and a lower image quality. In addition, no infrared light implies that glints cannot be used anymore as a tracking feature. In this paper, a thorough study has been carried out to evaluate pupil (iris) center-eye corner (PC-EC) vector as feature for gaze estimation based on interpolation methods in low cost eye tracking, as it is considered to be partially equivalent to the pupil center-corneal reflection (PC-CR) vector. The analysis is carried out both based on simulated and real data. The experiments show that eye corner positions in the image move slightly when the user is looking at different points of the screen, even with a static head position. 
This lowers the possible accuracy of the gaze estimation, significantly reducing the accuracy of the system under standard working conditions to 2--3 degrees.", "We study gaze estimation on tablets; our key design goal is uncalibrated gaze estimation using the front-facing camera during natural use of tablets, where the posture and method of holding the tablet are not constrained. We collected a large unconstrained gaze dataset of tablet users, labeled Rice TabletGaze dataset. The dataset consists of 51 subjects, each with 4 different postures and 35 gaze locations. Subjects vary in race, gender and in their need for prescription glasses, all of which might impact gaze estimation accuracy. We made three major observations on the collected data and employed a baseline algorithm for analyzing the impact of several factors on gaze estimation accuracy. The baseline algorithm is based on multilevel HoG feature and Random Forests regressor, which achieves a mean error of 3.17 cm. We perform extensive evaluation on the impact of various practical factors such as person dependency, dataset size, race, wearing glasses and user posture on the gaze estimation accuracy.", "Morphable face models are a powerful tool, but have previously failed to model the eye accurately due to complexities in its material and motion. We present a new multi-part model of the eye that includes a morphable model of the facial eye region, as well as an anatomy-based eyeball model. It is the first morphable model that accurately captures eye region shape, since it was built from high-quality head scans. It is also the first to allow independent eyeball movement, since we treat it as a separate part. To showcase our model we present a new method for illumination- and head-pose–invariant gaze estimation from a single RGB image. We fit our model to an image through analysis-by-synthesis, solving for eye region shape, texture, eyeball pose, and illumination simultaneously. The fitted eyeball pose parameters are then used to estimate gaze direction. Through evaluation on two standard datasets we show that our method generalizes to both webcam and high-quality camera images, and outperforms a state-of-the-art CNN method achieving a gaze estimation accuracy of (9.44^ ) in a challenging user-independent scenario." ], "cite_N": [ "@cite_26", "@cite_7", "@cite_21", "@cite_1", "@cite_6", "@cite_39", "@cite_0", "@cite_10" ], "mid": [ "2087862817", "1971652879", "2778474385", "2004128235", "2097087373", "2048968870", "2598992495", "2519247488" ] }
Deep Pictorial Gaze Estimation
Accurately estimating human gaze direction has many applications in assistive technologies for users with motor disabilities [4], gaze-based human-computer interaction [20], visual attention analysis [17], consumer behavior research [36], AR, VR and more. Traditionally this has been done via specialized hardware, shining infrared illumination into the user's eyes and via specialized cameras, sometimes requiring use of a headrest. Recently deep learning based approaches have made first steps towards fully unconstrained gaze estimation under free head motion, in environments with uncontrolled illumination conditions, and using only a single commodity (and potentially low quality) camera. However, this remains a challenging task due to inter-subject variance in eye appearance, self-occlusions, and head pose and rotation variations. In consequence, current approaches attain accuracies in the order of 6 • only and are still far from the requirements of many application scenarios. While demonstrating the feasibility of purely image based gaze estimation and introducing large datasets, these learning-based approaches [14,45,46] have leveraged convolutional neural network (CNN) architectures, originally designed for the task of image classification, with minor modifications. For example, [45,47] simply append head pose orientation to the first fully connected layer of either LeNet-5 or VGG-16, while [14] proposes to merge multiple input modalities by replicating convolutional layers from AlexNet. In [46] the AlexNet architecture is modified to learn socalled spatial-weights to emphasize important activations by region when full face images are provided as input. Typically, the proposed architectures are only supervised via a mean-squared error loss on the gaze direction output, represented as either a 3-dimensional unit vector or pitch and yaw angles in radians. In this work we propose a network architecture that has been specifically designed with the task of gaze estimation in mind. An important insight is that regressing first to an abstract but gaze specific representation helps the network to more accurately predict the final output of 3D gaze direction. Furthermore, introducing this gaze representation also allows for intermediate supervision which we experimentally show to further improve accuracy. Our work is loosely inspired by recent progress in the field of human pose estimation. Here, earlier work directly regressed joint coordinates [34]. More recently the need for a more task specific form of supervision has led to the use of confidence maps or heatmaps, where the position of a joint is depicted as a 2-dimensional Gaussian [21,33,37]. This representation allows for a simpler mapping between input image and joint position, allows for intermediate supervision, and hence for deeper networks. However, applying this concept of heatmaps to regularize training is not directly applicable to the case of gaze estimation since the crucial eyeball center is not observable in 2D image data. We propose a conceptually similar representation for gaze estimation, called gazemaps. Such a gazemap is an abstract, pictorial representation of the eyeball, the iris and the pupil at it's center (see Figure 1). The simplest depiction of an eyeball's rotation can be made via a circle and an ellipse, the former representing the eyeball, and the latter the iris. The gaze direction is then defined by the vector connecting the larger circle's center and the ellipse. 
Thus 3D gaze direction can be (pictorially) represented in the form of an image, where a spherical eyeball and circular iris are projected onto the image plane, resulting in a circle and ellipse. Hence, changes in gaze direction result in changes in ellipse positioning (cf. Figure 2a). This pictorial representation can be easily generated from existing training data, given known gaze direction annotations. At inference time recovering gaze direction from such a pictorial representation is a much simpler task than regressing directly from raw pixel values. However, adapting the input image to fit our pictorial representation is non-trivial. For a given eye image, a circular eyeball and an ellipse must be fitted, then centered and rescaled to be in the expected shape. We experimentally observed that this task can be performed well using a fully convolutional architecture. Furthermore, we show that our approach outperforms prior work on the final task of gaze estimation significantly. Our main contribution consists of a novel architecture for appearance-based gaze estimation. At the core of the proposed architecture lies the pictorial representation of 3D gaze direction to which the network fits the raw input images and from which additional convolutional layers estimate the final gaze direction. In addition, we perform: (a) an in-depth analysis of the effect of intermediate supervision using our pictorial representation, (b) quantitative evaluation and comparison against state-of-the-art gaze estimation methods on three challenging datasets (MPIIGaze, EYEDIAP, Columbia) in the person independent setting, and a (c) detailed evaluation of the robustness of a model trained using our architecture in terms of gaze direction and head pose as well as image quality. Finally, we show that our method reduces gaze error by 18% compared to the state-of-the-art [47] on MPIIGaze. Appearance-based Gaze Estimation with CNNs Traditional approaches to image-based gaze estimation are typically categorized as feature-based or model-based. Feature-based approaches reduce an eye image down to a set of features based on hand-crafted rules [11,12,25,41] and then feed these features into simple, often linear machine learning models to regress the final gaze estimate. Model-based methods instead attempt to fit a known 3D model to the eye image [30,35,39,42] by minimizing a suitable energy. Appearance-based methods learn a direct mapping from raw eye images to gaze direction. Learning this direct mapping can be very challenging due to changes in illumination, (partial) occlusions, head motion and eye decorations. Due to these challenges, appearance-based gaze estimation methods required the introduction of large, diverse training datasets and typically leverage some form of convolutional neural network architecture. Early works in appearance-based methods were restricted to laboratory settings with fixed head pose [1,32]. These initial constraints have become progressively relaxed, notably by the introduction of new datasets collected in everyday settings [14,45] or in simulated environments [29,38,40]. The increasing scale and complexity of training data has given rise to a wide variety of learning-based methods including variations of linear regression [7,18,19], random forests [29], k-nearest neighbours [29,40], and CNNs [14,26,38,[45][46][47]. 
CNNs have proven to be more robust to visual appearance variations, and are capable of personindependent gaze estimation when provided with sufficient scale and diversity of training data. Person-independent gaze estimation can be performed without a user calibration step, and can directly be applied to areas such as visual attention analysis on unmodified devices [22], interaction on public displays [48], and identification of gaze targets [44], albeit at the cost of increased need for training data and computational cost. Several CNN architectures have been proposed for person-independent gaze estimation in unconstrained settings, mostly differing in terms of possible input data modalities. Zhang et al. [45,46] adapt the LeNet-5 and VGG-16 architectures such that head pose angles (pitch and yaw) are concatenated to the first fully-connected layers. Despite its simplicity this approach yields the current best gaze estimation error of 5.5 • when evaluating for the within-dataset crossperson case on MPIIGaze with single eye image and head pose input. In [14] separate convolutional streams are used for left/right eye images, a face image, and a 25 × 25 grid indicating the location and scale of the detected face in the image frame. Their experiments demonstrate that this approach yields improvements compared to [45]. In [46] a single face image is used as input and so-called spatial-weights are learned. These emphasize important features based on the input image, yielding considerable improvements in gaze estimation accuracy. We introduce a novel pictorial representation of eye gaze and incorporate this into a deep neural network architecture via intermediate supervision. To the best of our knowledge we are the first to apply fully convolutional architecture to the task of appearance-based gaze estimation. We show that together these contribution lead to a significant performance improvement of 18% even when using a single eye image as sole input. Deep Learning with Auxiliary Supervision It has been shown [16,31] that by applying a loss function on intermediate outputs of a network, better performance can be yielded in different tasks. This technique was introduced to address the vanishing gradients problem during the training of deeper networks. In addition, such intermediate supervision allows for the network to quickly learn an estimate for the final output then learn to refine the predicted features -simplifying the mappings which need to be learned at every layer. Subsequent works have adopted intermediate supervision [21,37] to good effect for human pose estimation, by replicating the final output loss. Another technique for improving neural network performance is the use of auxiliary data through multi-task learning. In [24,49], the architectures are formed of a single shared convolutional stream which is split into separate fullyconnected layers or regression functions for the auxiliary tasks of gender classification, face visibility, and head pose. Both works show marked improvements to state-of-the-art results in facial landmarks localization. In these approaches through the introduction of multiple learning objectives, an implicit prior is forced upon the network to learn a representation that is informative to both tasks. On the contrary, we explicitly introduce a gaze-specific prior into the network architecture via gazemaps. 
Most similar to our contribution is the work in [9] where facial landmark localization performance is improved by applying an auxiliary emotion classification loss. A key aspect to note is that their network is sequential, that is, the emotion recognition network takes only facial landmarks as input. The detected facial landmarks thus act as a manually defined representation for emotion classification, and creates a bottleneck in the full data flow. It is shown experimentally that applying such an auxiliary loss (for a different task) yields improvement over state-of-the-art results on the AFLW dataset. In our work, we learn to regress an intermediate and minimal representation for gaze direction, forming a bottleneck before the main task of regressing two angle values. Thus, an important distinction to [9] is that while we employ an auxiliary loss term, it directly contributes to the task of gaze direction estimation. Furthermore, the auxiliary loss is applied as an intermediate task. We detail this further in Sec. 3.1. Recent work in multi-person human pose estimation [3] learns to estimate joint location heatmaps alongside so-called "part affinity fields". When combined, the two outputs then enable the detection of multiple peoples' joints with reduced ambiguity in terms of which person a joint belongs to. In addition, at the end of every image scale, the architecture concatenates feature maps from each separate stream such that information can flow between the "part confidence" and "part affinity" maps. Thus, they operate on the image representation space, taking advantage of the strengths of convolutional neural networks. Our work is similar in spirit in that it introduces a novel image-based representation. Method A key contribution of our work is a pictorial representation of 3D gaze direction -which we call gazemaps. This representation is formed of two boolean maps, which can be regressed by a fully convolutional neural network. In this section, we describe our representation (Sec. 3.1) then explain how we constructed our architecture to use the representation as reference for intermediate supervision during training of the network (Sec. 3.2). Pictorial Representation of 3D Gaze In the task of appearance-based gaze estimation, an input eye image is processed to yield gaze direction in 3D. This direction is often represented as a 3-element unit vector v [6,26,46], or as two angles representing eyeball pitch and yaw g = (θ, φ) [29,38,45,47]. In this section, we propose an alternative to previous direct mappings to v or g. If we state the input eye images as x and regard regressing the values g, a conventional gaze estimation model estimates f : x → g. The mapping f can be complex, as reflected by the improvement in accuracies that have been attained by simple adoption of newer CNN architectures ranging from LeNet-5 [26,45], AlexNet [14,46], to VGG-16 [47], the current state-of-the-art CNN architecture for appearance-based gaze estimation. We hypothesize that it is possible to learn an intermediate image representation of the eye, m. That is, we define our model as g = k • j(x) where j : x → m and k : m → g. It is conceivable that the complexity of learning j and k should be significantly lower than directly learning f , allowing for neural network architectures with significantly lower model complexity to be applied to the same task of gaze estimation with higher or equivalent performance. Thus, we propose to estimate so-called gazemaps (m) and from that the 3D gaze direction (g). 
We reformulate the task of gaze estimation into two concrete tasks: (a) reduction of the input image to a minimal normalized form (gazemaps), and (b) gaze estimation from gazemaps. The gazemaps for a given input eye image should be visually similar to the input, yet distill only the information necessary for gaze estimation, to ensure that the mapping k : m → g is simple. To do this, we consider that an average human eyeball has a diameter of ≈ 24mm [2] while an average human iris has a diameter of ≈ 12mm [5]. We then assume a simple model of the human eyeball and iris, where the eyeball is a perfect sphere and the iris is a perfect circle. For an output image of dimensions m × n, we assume a projected eyeball diameter of 2r = 1.2n and calculate the iris centre coordinates (u_i, v_i) to be: $u_i = \frac{m}{2} - r' \sin\phi \cos\theta$ (1) and $v_i = \frac{n}{2} - r' \sin\theta$ (2), where $r' = r \cos\left(\sin^{-1}\frac{1}{2}\right)$ and the gaze direction is g = (θ, φ). The iris is drawn as an ellipse with a major-axis diameter of $r$ and a minor-axis diameter of $r\,\lvert\cos\theta\cos\phi\rvert$. Examples of our gazemaps are shown in Fig. 2b, where two separate boolean maps are produced for one gaze direction g. Learning how to predict gazemaps from a single eye image alone is not a trivial task. Not only do extraneous factors such as image artifacts and partial occlusion need to be accounted for, but a simplified eyeball must also be fit to the given image based on iris and eyelid appearance. The detected regions must then be scaled and centered to produce the gazemaps. Thus the mapping j : x → m requires a more complex neural network architecture than the mapping k : m → g. Neural Network Architecture Our neural network consists of two parts: (a) regression from eye image to gazemap, and (b) regression from gazemap to gaze direction g. While any CNN architecture can be used for (b), (a) requires a fully convolutional architecture such as those used in human pose estimation. We adapt the stacked hourglass architecture from Newell et al. [21] for this task. The hourglass architecture has proven to be effective in tasks such as human pose estimation and facial landmark detection [43], where complex spatial relations need to be modeled at various scales to estimate the location of occluded joints or key points. The architecture performs repeated multi-scale refinement of feature maps, from which the desired output confidence maps can be extracted via 1 × 1 convolution layers. We exploit this fact to have our network predict gazemaps instead of classical confidence maps or heatmaps for joint positions. In Sec. 5, we demonstrate that this works well in practice. In our gazemap-regression network, we use 3 hourglass modules with intermediate supervision applied on the gazemap outputs of the last module only. The minimized intermediate loss is: $\mathcal{L}_{\mathrm{gazemap}} = -\alpha \sum_{p \in \mathcal{P}} m(p) \log \hat{m}(p)$ (3), where we calculate a cross-entropy between the predicted gazemap $\hat{m}$ and the ground-truth gazemap $m$ over the set of all pixels $\mathcal{P}$. In our evaluations, we set the weight coefficient α to $10^{-5}$. For the regression to g, we select DenseNet, which has recently been shown to perform well on image classification tasks [10] while using fewer parameters compared to previous architectures such as ResNet [8]. The loss term for gaze direction regression (per input) is: $\mathcal{L}_{\mathrm{gaze}} = \lVert g - \hat{g} \rVert_2^2$ (4), where $\hat{g}$ is the gaze direction predicted by our neural network. Implementation In this section, we describe the fully convolutional (Hourglass) and regressive (DenseNet) parts of our architecture in more detail.
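To make the construction above concrete, the following is a minimal NumPy sketch of how a gazemap pair could be rasterized from a gaze direction g = (θ, φ) using Eqs. (1) and (2); the image size, the axis conventions (m as width, n as height), and the axis-aligned ellipse drawing are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def make_gazemaps(theta, phi, m=150, n=90):
    """Rasterize a (2, n, m) pair of boolean gazemaps for gaze g = (theta, phi).

    Follows Eqs. (1)-(2): projected eyeball diameter 2r = 1.2 n, iris drawn as an
    ellipse with major-axis diameter r and minor-axis diameter r|cos(theta)cos(phi)|.
    Assumptions for illustration: m is the image width, n the height, and the iris
    ellipse is drawn axis-aligned (its orientation is omitted in this sketch).
    """
    r = 0.6 * n                                    # eyeball radius, from 2r = 1.2 n
    r_prime = r * np.cos(np.arcsin(0.5))           # r' = r cos(arcsin(1/2))
    u_i = m / 2.0 - r_prime * np.sin(phi) * np.cos(theta)   # iris centre, Eq. (1)
    v_i = n / 2.0 - r_prime * np.sin(theta)                 # iris centre, Eq. (2)

    vv, uu = np.mgrid[0:n, 0:m].astype(np.float64)

    # Map 1: the projected eyeball as a filled circle centred in the image.
    eyeball = (uu - m / 2.0) ** 2 + (vv - n / 2.0) ** 2 <= r ** 2

    # Map 2: the iris as a filled (axis-aligned) ellipse centred at (u_i, v_i).
    a = r / 2.0                                              # semi-major axis
    b = max(r * abs(np.cos(theta) * np.cos(phi)) / 2.0, 1e-6)  # semi-minor axis
    iris = ((uu - u_i) / a) ** 2 + ((vv - v_i) / b) ** 2 <= 1.0

    return np.stack([eyeball, iris]).astype(np.float32)

# Example: gazemaps for a gaze of 10 degrees of pitch and 20 degrees of yaw.
maps = make_gazemaps(np.radians(10.0), np.radians(20.0))
```

In practice such maps would be generated once from the ground-truth gaze annotations and used as targets for the intermediate loss of Eq. (3).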
Hourglass Network In our implementation of the Stacked Hourglass Network [21], we provide images of size 150×90 as input and refine 64 feature maps of size 75×45 throughout the network. The half-scale feature maps are produced by an initial convolutional layer with filter size 7 and stride 2, as done in the original paper [21]. This is followed by batch normalization, ReLU activation, and two residual modules before being passed as input to the first hourglass module. There are 3 hourglass modules in our architecture, as visualized in Figure 1. In human pose estimation, the commonly used outputs are 2-dimensional confidence maps, which are pixel-aligned to the input image. Our task differs, and thus we do not apply intermediate supervision to the output of every hourglass module. This is to allow the input image to be processed at multiple scales over many layers, with the necessary features becoming aligned to the final output gazemap representation. Instead, we apply 1 × 1 convolutions to the output of the last hourglass module and apply the gazemap loss term (Eq. 3). DenseNet As described in Section 3.1, our pictorial representation allows for a simpler function to be learnt for the actual task of gaze estimation. To demonstrate this, we employ a very lightweight DenseNet architecture [10]. Our gaze regression network consists of 5 dense blocks (5 layers per block) with a growth rate of 8, bottleneck layers, and a compression factor of 0.5. This results in just 62 feature maps at the end of the DenseNet, and subsequently 62 features through global average pooling. Finally, a single linear layer maps these features to g. The resulting network is lightweight and consists of just 66k trainable parameters. Training Details We train our neural network with a batch size of 32, a learning rate of 0.0002, and an $L_2$ weight regularization coefficient of $10^{-4}$. The optimization method used is Adam [13]. Training occurs for 20 epochs on a desktop PC with an Intel Core i7 CPU and an Nvidia Titan Xp GPU, taking just over 2 hours for one fold (out of 15) of a leave-one-person-out evaluation on the MPIIGaze dataset. During training, slight data augmentation is applied in terms of image translation and scaling, and the learning rate is multiplied by 0.1 after every 5k gradient update steps to address over-fitting and to stabilize the final error. Evaluations We perform our evaluations primarily on the MPIIGaze dataset, which consists of images taken of 15 laptop users in everyday settings. The dataset has been used as the standard benchmark dataset for unconstrained appearance-based gaze estimation in recent years [26,38,40,45-47]. Our focus is on cross-person single-eye evaluations where 15 models are trained per configuration or architecture in a leave-one-person-out fashion. That is, a neural network is trained on 14 people's data (1500 entries each from left and right eyes), then tested on the test set of the left-out person (1000 entries). The mean over 15 such evaluations is used as the final error metric representing cross-person performance. As MPIIGaze is a dataset which well represents real-world settings, cross-person evaluations on it are indicative of the real-world person-independence of a given model. To further test the generalization capabilities of our method, we also perform evaluations on two additional datasets in this section: Columbia [28] and EYEDIAP [7], where we perform 5-fold cross-validation.
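The training configuration described above (batch size 32, Adam with a learning rate of 2 × 10⁻⁴, weight regularization of 10⁻⁴, and a learning-rate decay of 0.1 every 5k update steps) could be wired up roughly as in the following sketch; `model`, `train_loader`, `gazemap_loss`, and `gaze_loss` are hypothetical placeholders, and Adam's `weight_decay` is used here as a stand-in for the L2 regularization term.

```python
import torch

# Placeholders: `model` combines the hourglass and DenseNet parts and returns
# (gazemaps, gaze); `train_loader` yields (eye_image, gazemap_gt, gaze_gt) batches;
# `gazemap_loss` and `gaze_loss` implement Eqs. (3) and (4).
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5000, gamma=0.1)

for epoch in range(20):
    for eye_image, gazemap_gt, gaze_gt in train_loader:
        optimizer.zero_grad()
        gazemap_pred, gaze_pred = model(eye_image)
        loss = gazemap_loss(gazemap_gt, gazemap_pred) + gaze_loss(gaze_gt, gaze_pred)
        loss.backward()
        optimizer.step()
        scheduler.step()   # decays the learning rate by 0.1 every 5,000 updates
```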
While Columbia displays large diversity between its 55 participants, the images are of high quality, having been taken using a DSLR. EYEDIAP, on the other hand, suffers from the low resolution of the VGA camera used, as well as the large distance between camera and participant. We select screen target (CS/DS) and static head pose sequences (S) from the EYEDIAP dataset, sampling every 15 seconds from its VGA video streams (V). Training on moving head sequences (M) with just single eye input proved infeasible, with all models experiencing diverging test error during training. Performance improvements on MPIIGaze, Columbia, and EYEDIAP would indicate that our model is robust to cross-person appearance variations and the challenges caused by low eye image resolution and quality. In this section, we first evaluate the effect of our gazemap loss (Sec. 5.1), then compare the performance (Sec. 5.2) and robustness (Sec. 5.3) of our approach against state-of-the-art architectures. We postulated in Sec. 3.1 that by providing a pictorial representation of 3D gaze direction that is visually similar to the input image, we could achieve improvements in appearance-based gaze estimation. In our experiments we find that applying the gazemap loss term generally offers performance improvements compared to the case where the loss term is not applied. This improvement is particularly emphasized when the DenseNet growth rate is high (e.g., k = 32), as shown in Table 1. Pictorial Representation (Gazemaps) By observing the output of the last hourglass module and comparing against the input images (Figure 4), we can confirm that even without intermediate supervision, our network learns to isolate the iris region, yielding a similar image representation of gaze direction across participants. Note that this representation is learned only with the final gaze direction loss, $\mathcal{L}_{\mathrm{gaze}}$, and that blobs representing iris locations are not necessarily aligned with the actual iris locations in the input images. Without intermediate supervision, the learned minimal image representation may incorporate visual factors such as occlusion due to hair and eyeglasses, as shown in Figure 4a. This supports our hypothesis that an intermediate representation consisting of an iris and eyeball contains the information required to regress gaze direction. However, due to the nature of learning, the network may also learn irrelevant details such as the edges of the glasses. Yet, by explicitly providing an intermediate representation in the form of gazemaps, we enforce a prior that helps the network learn the desired representation, without incorporating the previously mentioned unhelpful details. Cross-Person Gaze Estimation We compare the cross-person performance of our model by conducting a leave-one-person-out evaluation on MPIIGaze and 5-fold evaluations on Columbia and EYEDIAP. In Section 3.1 we discussed that the mapping k from gazemap to gaze direction should not require a complex architecture to model. Thus, our DenseNet is configured with a low growth rate (k = 8). To allow a fair comparison, we re-implement 2 architectures for single-eye image inputs (of size 150 × 90): AlexNet and VGG-16. The AlexNet and VGG-16 architectures have been used in recent works in appearance-based gaze estimation and are thus suitable baselines [46,47]. (Table 2: Mean gaze estimation error in degrees for within-dataset cross-person k-fold evaluation, evaluated on (a) MPIIGaze, (b) Columbia, and (c) EYEDIAP.)
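The leave-one-person-out protocol can be summarized by the following minimal sketch; `train_model` and `predict` are hypothetical helpers, and the pitch/yaw-to-vector conversion assumes one common sign convention, so it should be adapted to the normalization actually used.

```python
import numpy as np

def to_vectors(g):
    """Pitch/yaw angles (radians), shape (N, 2) -> unit gaze vectors, shape (N, 3)."""
    theta, phi = g[:, 0], g[:, 1]
    return np.stack([-np.cos(theta) * np.sin(phi),
                     -np.sin(theta),
                     -np.cos(theta) * np.cos(phi)], axis=1)

def mean_angular_error_deg(g_true, g_pred):
    """Mean angle, in degrees, between true and predicted gaze directions."""
    cos = np.clip(np.sum(to_vectors(g_true) * to_vectors(g_pred), axis=1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())

def leave_one_person_out(data, train_model, predict):
    """`data` maps person id -> (images, gaze labels); helpers are hypothetical."""
    errors = []
    for test_person in data:
        train_x = np.concatenate([data[p][0] for p in data if p != test_person])
        train_y = np.concatenate([data[p][1] for p in data if p != test_person])
        model = train_model(train_x, train_y)
        test_x, test_y = data[test_person]
        errors.append(mean_angular_error_deg(test_y, predict(model, test_x)))
    return float(np.mean(errors))   # mean over the per-person folds
```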
Implementation and training procedure details of these architectures are provided in the supplementary materials. In the MPIIGaze evaluations (Table 2a), our proposed approach outperforms the current state-of-the-art approach by a large margin, yielding an improvement of 1.0° (5.5° → 4.5°, i.e. 18.2%). This significant improvement is achieved in spite of the reduced number of trainable parameters used in our architecture (0.7M for ours vs. 90M). Our performance compares favorably to that reported in [46] (4.8°), where full-face input is used in contrast to our single-eye input. While our results cannot directly be compared with those of [46] due to the different definition of gaze direction (face-centred as opposed to eye-centred), the similar performance suggests that eye images may be sufficient as input to the task of gaze direction estimation. Our approach attains comparable performance to models taking face input, and uses considerably fewer parameters than recently introduced architectures (129× fewer than GazeNet). We additionally evaluate our model on the Columbia Gaze and EYEDIAP datasets in Table 2b and Table 2c, respectively. While high image quality results in all three methods performing comparably on Columbia Gaze, our approach still prevails with an improvement of 0.4° over AlexNet. On EYEDIAP, the mean error is very high due to the low resolution and low quality of the input. Note that no head pose estimation is performed, with only single eye input being relied on for gaze estimation. Our gazemap-based architecture shows its strengths in this case, performing 0.9° better than VGG-16, an 8% improvement. Sample gazemap and gaze direction predictions are shown in Figure 5, where it is evident that despite the lack of visual detail, it is possible to fit gazemaps that yield improved gaze estimation error. By evaluating our architecture on 3 different datasets with different properties in the cross-person setting, we can conclude that our approach provides significantly higher generalization capabilities compared to previous approaches. Thus, we bring gaze estimation closer to direct real-world applications. Robustness Analysis In order to shed more light on our model's performance, we perform an additional robustness analysis. More concretely, we aim to analyze how our approach performs under difficult and challenging situations, such as extreme head pose and gaze direction. To do so, we evaluate a moving average on the output of our within-MPIIGaze evaluations, where the y-values correspond to the mean angular error and the x-values take one of the following factors of variation: head pose (pitch & yaw) or gaze direction (pitch & yaw). Additionally, we also consider image quality (contrast & sharpness) as a qualitative factor. In order to isolate each factor of variation from the rest, we evaluate the moving average only on the points whose remaining factors are close to their median values. Intuitively, this corresponds to data points where the person moves only in one specific direction, while staying at rest in all of the remaining directions. This is not the case for the image quality analysis, where all data points are used. Figure 6 plots the mean angular error as a function of different movement variations and image qualities. The top row corresponds to variation along head pose, the middle to variation along gaze direction, and the bottom to varying image quality.
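A minimal sketch of the factor-isolation procedure just described: for a chosen factor (e.g. head-pose yaw), only samples whose remaining factors lie close to their median values are kept, and a moving average of the angular error is computed along the chosen factor. The tolerance and window size are illustrative assumptions.

```python
import numpy as np

def isolated_moving_average(errors, factors, target, tol=0.1, window=200):
    """errors: (N,) angular errors; factors: dict of name -> (N,) factor values.

    Keeps samples whose non-target factors are within `tol` (in each factor's own
    units) of that factor's median, then returns the target-factor values and a
    moving average of the error along them.
    """
    mask = np.ones(len(errors), dtype=bool)
    for name, values in factors.items():
        if name == target:
            continue
        mask &= np.abs(values - np.median(values)) <= tol

    x = factors[target][mask]
    y = errors[mask]
    order = np.argsort(x)
    x, y = x[order], y[order]

    kernel = np.ones(window) / window
    return x, np.convolve(y, kernel, mode="same")   # moving average of the error
```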
In order to calculate image contrast, we use the RMS contrast metric, whereas to compute sharpness, we employ a Laplacian-based formula as outlined in [23]. Both metrics are explained in the supplementary materials. The figure shows that we consistently outperform competing architectures for extreme head and gaze angles. Notably, we show more consistent performance in particular over large ranges of head pitch and gaze yaw angles. In addition, we surpass prior works on images of varying quality, as shown in Figures 6e and 6f. Conclusion Our work is a first attempt at proposing an explicit prior designed for the task of gaze estimation with a neural network architecture. We do so by introducing a novel pictorial representation which we call gazemaps. An accompanying architecture and training scheme using intermediate supervision naturally arises as a consequence, with a fully convolutional architecture being employed for the first time for appearance-based eye gaze estimation. Our gazemaps are anatomically inspired, and are experimentally shown to outperform approaches which consist of significantly more model parameters and, at times, more input modalities. We report improvements of up to 18% on MPIIGaze along with improvements on two additional datasets against competitive baselines. In addition, we demonstrate that our final model is more robust to various factors such as extreme head poses and gaze directions, as well as poor image quality, compared to prior work. Future work can look into alternative pictorial representations for gaze estimation, and an alternative architecture for gazemap prediction. Additionally, there is potential in using synthesized gaze directions (and corresponding gazemaps) for unsupervised training of the gaze regression function, to further improve performance. A Baseline Architectures The state-of-the-art CNN architecture for appearance-based gaze estimation is based on a lightly modified VGG-16 architecture [47], with a mean cross-person gaze estimation error of 5.5° on the MPIIGaze dataset [45]. We compare against a standard VGG-16 architecture [27] and an AlexNet architecture [15], which has been the standard architecture for gaze estimation in many works [14,46]. The specific architectures used as baselines are described in Table 3. Both models are trained with a batch size of 32, a learning rate of $5 \times 10^{-5}$, and an $L_2$ weight regularization coefficient of $10^{-4}$, using the Adam optimizer [13]. The learning rate is multiplied by 0.1 every 5,000 training steps, and slight data augmentation is performed in image translation and scale. B Image metrics In this section we describe the image metrics used for the robustness plots concerning image quality (Figures 6e and 6f in the paper). B.1 Image contrast The root mean square contrast is defined as the standard deviation of the pixel intensities: $\mathrm{RMC} = \sqrt{\frac{1}{MN}\sum_{i,j}\left(I_{ij}-\bar{I}\right)^{2}}$, where $I_{ij}$ is the value of the image $I \in \mathbb{R}^{M \times N}$ at location $(i, j)$ and $\bar{I}$ is the average intensity of all pixel values in the image. B.2 Image sharpness In order to have a sharpness-based metric, we calculate the variance of the image I after having convolved it with a Laplacian, similar to [23]. This corresponds to an approximation of the second derivative, computed by convolving the image with a 3 × 3 Laplacian mask. Table 3. Configuration of CNNs used as baselines for gaze estimation. The style of [27] is followed where possible.
s represents stride length, p dropout probability, and conv9-96 represents a convolutional layer with kernel size 9 and 96 output feature maps. maxpool3 represents a max-pooling layer with kernel size 3.
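The two image-quality metrics of Appendix B can be computed as in the following sketch; SciPy's ndimage.laplace is used here as one possible Laplacian implementation, which may differ slightly from the exact mask used in the paper.

```python
import numpy as np
from scipy import ndimage

def rms_contrast(image):
    """Root-mean-square contrast: the standard deviation of the pixel intensities."""
    image = np.asarray(image, dtype=np.float64)
    return float(np.sqrt(np.mean((image - image.mean()) ** 2)))

def laplacian_sharpness(image):
    """Variance of the Laplacian-filtered image, as in the sharpness metric of B.2."""
    image = np.asarray(image, dtype=np.float64)
    return float(ndimage.laplace(image).var())
```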
4,914
1807.10002
2884915206
Estimating human gaze from natural eye images only is a challenging task. Gaze direction can be defined by the pupil- and the eyeball center where the latter is unobservable in 2D images. Hence, achieving highly accurate gaze estimates is an ill-posed problem. In this paper, we introduce a novel deep neural network architecture specifically designed for the task of gaze estimation from single eye input. Instead of directly regressing two angles for the pitch and yaw of the eyeball, we regress to an intermediate pictorial representation which in turn simplifies the task of 3D gaze direction estimation. Our quantitative and qualitative results show that our approach achieves higher accuracies than the state-of-the-art and is robust to variation in gaze, head pose and image quality.
Early works in appearance-based methods were restricted to laboratory settings with fixed head pose @cite_25 @cite_2 . These initial constraints have become progressively relaxed, notably by the introduction of new datasets collected in everyday settings @cite_36 @cite_13 or in simulated environments @cite_3 @cite_40 @cite_46 . The increasing scale and complexity of training data has given rise to a wide variety of learning-based methods including variations of linear regression @cite_41 @cite_38 @cite_23 , random forests @cite_3 , @math -nearest neighbours @cite_3 @cite_46 , and CNNs @cite_36 @cite_13 @cite_9 @cite_33 @cite_40 @cite_14 . CNNs have proven to be more robust to visual appearance variations, and are capable of person-independent gaze estimation when provided with sufficient scale and diversity of training data. Person-independent gaze estimation can be performed without a user calibration step, and can directly be applied to areas such as visual attention analysis on unmodified devices @cite_47 , interaction on public displays @cite_16 , and identification of gaze targets @cite_27 , albeit at the cost of increased need for training data and computational cost.
{ "abstract": [ "The problem of estimating human gaze from eye appearance is regarded as mapping high-dimensional features to low-dimensional target space. Conventional methods require densely obtained training samples on the eye appearance manifold, which results in a tedious calibration stage. In this paper, we introduce an adaptive linear regression (ALR) method for accurate mapping via sparsely collected training samples. The key idea is to adaptively find the subset of training samples where the test sample is most linearly representable. We solve the problem via l1-optimization and thoroughly study the key issues to seek for the best solution for regression. The proposed gaze estimation approach based on ALR is naturally sparse and low-dimensional, giving the ability to infer human gaze from variant resolution eye images using much fewer training samples than existing methods. Especially, the optimization procedure in ALR is extended to solve the subpixel alignment problem simultaneously for low resolution test eye images. Performance of the proposed method is evaluated by extensive experiments against various factors such as number of training samples, feature dimensionality and eye image resolution to verify its effectiveness.", "We introduce WebGazer, an online eye tracker that uses common webcams already present in laptops and mobile devices to infer the eye-gaze locations of web visitors on a page in real time. The eye tracking model self-calibrates by watching web visitors interact with the web page and trains a mapping between features of the eye and positions on the screen. This approach aims to provide a natural experience to everyday users that is not restricted to laboratories and highly controlled user studies. WebGazer has two key components: a pupil detector that can be combined with any eye detection library, and a gaze estimator using regression analysis informed by user interactions. We perform a large remote online study and a small in-person study to evaluate WebGazer. The findings show that WebGazer can learn from user interactions and that its accuracy is sufficient for approximating the user's gaze. As part of this paper, we release the first eye tracking library that can be easily integrated in any website for real-time gaze interactions, usability studies, or web research.", "With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. 
We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.", "Eye gaze is an important non-verbal cue for human affect analysis. Recent gaze estimation work indicated that information from the full face region can benefit performance. Pushing this idea further, we propose an appearance-based method that, in contrast to a long-standing line of work in computer vision, only takes the full face image as input. Our method encodes the face image using a convolutional neural network with spatial weights applied on the feature maps to flexibly suppress or enhance information in different facial regions. Through extensive evaluation, we show that our full-face method significantly outperforms the state of the art for both 2D and 3D gaze estimation, achieving improvements of up to 14.3 on MPIIGaze and 27.7 on EYEDIAP for person-independent 3D gaze estimation. We further show that this improvement is consistent across different illumination conditions and gaze directions and particularly pronounced for the most challenging extreme head poses.", "Appearance-based gaze estimation is believed to work well in real-world settings, but existing datasets have been collected under controlled laboratory conditions and methods have been not evaluated across multiple datasets. In this work we study appearance-based gaze estimation in the wild. We present the MPIIGaze dataset that contains 213,659 images we collected from 15 participants during natural everyday laptop use over more than three months. Our dataset is significantly more variable than existing ones with respect to appearance and illumination. We also present a method for in-the-wild appearance-based gaze estimation using multimodal convolutional neural networks that significantly outperforms state-of-the art methods in the most challenging cross-dataset evaluation. We present an extensive evaluation of several state-of-the-art image-based gaze estimation algorithms on three current datasets, including our own. This evaluation provides clear insights and allows us to identify key research challenges of gaze estimation in the wild.", "T o infer human gaze from eye appearance, various methods have been proposed. However, most of them assume a fixed head pose because allowing free head motion adds 6 degrees of freedom to the problem and requires a prohibitively large number of training samples. In this paper, we aim at solving the appearance-based gaze estimation problem under free head motion without significantly increasing the cost of training. The idea is to decompose the problem into subproblems, including initial estimation under fixed head pose and subsequent compensations for estimation biases caused by head rotation and eye appearance distortion. Then each subproblem is solved by either learning-based method or geometric-based calculation. Specifically, the gaze estimation bias caused by eye appearance distortion is learnt effectively from a 5-seconds video clip. Extensive experiments were conducted to verify the effectiveness of the proposed approach.", "Learning-based methods are believed to work well for unconstrained gaze estimation, i.e. gaze estimation from a monocular RGB camera without assumptions regarding user, environment, or camera. 
However, current gaze datasets were collected under laboratory conditions and methods were not evaluated across multiple datasets. Our work makes three contributions towards addressing these limitations. First, we present the MPIIGaze dataset, which contains 213,659 full face images and corresponding ground-truth gaze positions collected from 15 users during everyday laptop use over several months. An experience sampling approach ensured continuous gaze and head poses and realistic variation in eye appearance and illumination. To facilitate cross-dataset evaluations, 37,667 images were manually annotated with eye corners, mouth corners, and pupil centres. Second, we present an extensive evaluation of state-of-the-art gaze estimation methods on three current datasets, including MPIIGaze. We study key challenges including target gaze range, illumination conditions, and facial appearance variation. We show that image resolution and the use of both eyes affect gaze estimation performance, while head pose and pupil centre information are less informative. Finally, we propose GazeNet, the first deep appearance-based gaze estimation method. GazeNet improves on the state of the art by 22 percent (from a mean error of 13.9 degrees to 10.8 degrees) for the most challenging cross-dataset evaluation.", "Inferring human gaze from low-resolution eye images is still a challenging task despite its practical importance in many application scenarios. This paper presents a learning-by-synthesis approach to accurate image-based gaze estimation that is person- and head pose-independent. Unlike existing appearance-based methods that assume person-specific training data, we use a large amount of cross-subject training data to train a 3D gaze estimator. We collect the largest and fully calibrated multi-view gaze dataset and perform a 3D reconstruction in order to generate dense training data of eye images. By using the synthesized dataset to learn a random regression forest, we show that our method outperforms existing methods that use low-resolution eye images.", "Images of the eye are key in several computer vision problems, such as shape registration and gaze estimation. Recent large-scale supervised methods for these problems require time-consuming data collection and manual annotation, which can be unreliable. We propose synthesizing perfectly labelled photo-realistic training data in a fraction of the time. We used computer graphics techniques to build a collection of dynamic eye-region models from head scan geometry. These were randomly posed to synthesize close-up eye images for a wide range of head poses, gaze directions, and illumination conditions. We used our model's controllability to verify the importance of realistic illumination and shape variations in eye-region training data. Finally, we demonstrate the benefits of our synthesized training data (SynthesEyes) by out-performing state-of-the-art methods for eye-shape registration as well as cross-dataset appearance-based gaze estimation in the wild.", "Eye contact is an important non-verbal cue in social signal processing and promising as a measure of overt attention in human-object interactions and attentive user interfaces. However, robust detection of eye contact across different users, gaze targets, camera positions, and illumination conditions is notoriously challenging. 
We present a novel method for eye contact detection that combines a state-of-the-art appearance-based gaze estimator with a novel approach for unsupervised gaze target discovery, i.e. without the need for tedious and time-consuming manual data annotation. We evaluate our method in two real-world scenarios: detecting eye contact at the workplace, including on the main work display, from cameras mounted to target objects, as well as during everyday social interactions with the wearer of a head-mounted egocentric camera. We empirically evaluate the performance of our method in both scenarios and demonstrate its effectiveness for detecting eye contact independent of target object type and size, camera position, and user and recording environment.", "The lack of a common benchmark for the evaluation of the gaze estimation task from RGB and RGB-D data is a serious limitation for distinguishing the advantages and disadvantages of the many proposed algorithms found in the literature. This paper intends to overcome this limitation by introducing a novel database along with a common framework for the training and evaluation of gaze estimation approaches. In particular, we have designed this database to enable the evaluation of the robustness of algorithms with respect to the main challenges associated to this task: i) Head pose variations; ii) Person variation; iii) Changes in ambient and sensing conditions and iv) Types of target: screen or 3D object.", "", "Learning-based methods for appearance-based gaze estimation achieve state-of-the-art performance in challenging real-world settings but require large amounts of labelled training data. Learning-by-synthesis was proposed as a promising solution to this problem but current methods are limited with respect to speed, appearance variability, and the head pose and gaze angle distribution they can synthesize. We present UnityEyes, a novel method to rapidly synthesize large amounts of variable eye region images as training data. Our method combines a novel generative 3D model of the human eye region with a real-time rendering framework. The model is based on high-resolution 3D face scans and uses real-time approximations for complex eyeball materials and structures as well as anatomically inspired procedural geometry methods for eyelid animation. We show that these synthesized images can be used to estimate gaze in difficult in-the-wild scenarios, even for extreme gaze angles or in cases in which the pupil is fully occluded. We also demonstrate competitive gaze estimation results on a benchmark in-the-wild dataset, despite only using a light-weight nearest-neighbor algorithm. We are making our UnityEyes synthesis framework available online for the benefit of the research community.", "Eye gaze is compelling for interaction with situated displays as we naturally use our eyes to engage with them. In this work we present SideWays, a novel person-independent eye gaze interface that supports spontaneous interaction with displays: users can just walk up to a display and immediately interact using their eyes, without any prior user calibration or training. Requiring only a single off-the-shelf camera and lightweight image processing, SideWays robustly detects whether users attend to the centre of the display or cast glances to the left or right. The system supports an interaction model in which attention to the central display is the default state, while \"sidelong glances\" trigger input or actions. 
The robustness of the system and usability of the interaction model are validated in a study with 14 participants. Analysis of the participants' strategies in performing different tasks provides insights on gaze control strategies for design of SideWays applications.", "From scientific research to commercial applications, eye tracking is an important tool across many domains. Despite its range of applications, eye tracking has yet to become a pervasive technology. We believe that we can put the power of eye tracking in everyone's palm by building eye tracking software that works on commodity hardware such as mobile phones and tablets, without the need for additional sensors or devices. We tackle this problem by introducing GazeCapture, the first large-scale dataset for eye tracking, containing data from over 1450 people consisting of almost 2.5M frames. Using GazeCapture, we train iTracker, a convolutional neural network for eye tracking, which achieves a significant reduction in error over previous approaches while running in real time (10-15fps) on a modern mobile device. Our model achieves a prediction error of 1.71cm and 2.53cm without calibration on mobile phones and tablets respectively. With calibration, this is reduced to 1.34cm and 2.12cm. Further, we demonstrate that the features learned by iTracker generalize well to other datasets, achieving state-of-the-art results. The code, data, and models are available at this http URL", "We have developed an artificial neural network based gaze tracking system which can be customized to individual users. A three layer feed forward network, trained with standard error back propagation, is used to determine the position of a user''s gaze from the appearance of the user''s eye. Unlike other gaze trackers, which normally require the user to wear cumbersome headgear, or to use a chin rest to ensure head immobility, our system is entirely non-intrusive. Currently, the best intrusive gaze tracking systems are accurate to approximately 0.75 degrees. In our experiments, we have been able to achieve an accuracy of 1.5 degrees, while allowing head mobility. In its current implementation, our system works at 15 hz. In this paper we present an empirical analysis of the performance of a large number of artificial neural network architectures for this task. Suggestions for further explorations for neurally based gaze trackers are presented, and are related to other similar artificial neural network applications such as autonomous road following." ], "cite_N": [ "@cite_38", "@cite_47", "@cite_14", "@cite_33", "@cite_36", "@cite_41", "@cite_9", "@cite_3", "@cite_40", "@cite_27", "@cite_23", "@cite_2", "@cite_46", "@cite_16", "@cite_13", "@cite_25" ], "mid": [ "2139196511", "2574742418", "2567101557", "2557669140", "2027879843", "2160495187", "2765629358", "1995694455", "2950402190", "2758436589", "2042906110", "", "2299591120", "2099788879", "2952055246", "2103289133" ] }
Deep Pictorial Gaze Estimation
Accurately estimating human gaze direction has many applications in assistive technologies for users with motor disabilities [4], gaze-based human-computer interaction [20], visual attention analysis [17], consumer behavior research [36], AR, VR, and more. Traditionally this has been done via specialized hardware that shines infrared illumination into the user's eyes and via specialized cameras, sometimes requiring the use of a headrest. Recently, deep-learning-based approaches have made first steps towards fully unconstrained gaze estimation under free head motion, in environments with uncontrolled illumination conditions, and using only a single commodity (and potentially low-quality) camera. However, this remains a challenging task due to inter-subject variance in eye appearance, self-occlusions, and head pose and rotation variations. As a consequence, current approaches only attain accuracies on the order of 6° and are still far from the requirements of many application scenarios. While demonstrating the feasibility of purely image-based gaze estimation and introducing large datasets, these learning-based approaches [14,45,46] have leveraged convolutional neural network (CNN) architectures, originally designed for the task of image classification, with minor modifications. For example, [45,47] simply append head pose orientation to the first fully connected layer of either LeNet-5 or VGG-16, while [14] proposes to merge multiple input modalities by replicating convolutional layers from AlexNet. In [46] the AlexNet architecture is modified to learn so-called spatial weights to emphasize important activations by region when full-face images are provided as input. Typically, the proposed architectures are only supervised via a mean-squared error loss on the gaze direction output, represented as either a 3-dimensional unit vector or pitch and yaw angles in radians. In this work we propose a network architecture that has been specifically designed with the task of gaze estimation in mind. An important insight is that regressing first to an abstract but gaze-specific representation helps the network to more accurately predict the final output of 3D gaze direction. Furthermore, introducing this gaze representation also allows for intermediate supervision, which we experimentally show to further improve accuracy. Our work is loosely inspired by recent progress in the field of human pose estimation. Here, earlier work directly regressed joint coordinates [34]. More recently, the need for a more task-specific form of supervision has led to the use of confidence maps or heatmaps, where the position of a joint is depicted as a 2-dimensional Gaussian [21,33,37]. This representation allows for a simpler mapping between input image and joint position, allows for intermediate supervision, and hence for deeper networks. However, this concept of heatmaps for regularizing training is not directly applicable to the case of gaze estimation, since the crucial eyeball center is not observable in 2D image data. We propose a conceptually similar representation for gaze estimation, called gazemaps. Such a gazemap is an abstract, pictorial representation of the eyeball, the iris, and the pupil at its center (see Figure 1). The simplest depiction of an eyeball's rotation can be made via a circle and an ellipse, the former representing the eyeball and the latter the iris. The gaze direction is then defined by the vector connecting the center of the larger circle with the center of the ellipse.
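Since gaze direction is reported interchangeably as a 3D unit vector or as pitch and yaw angles, the following minimal sketch shows one common convention for converting between the two representations; the sign convention (x right, y down, z away from the camera) is an assumption for illustration and may differ from the normalization used by a specific dataset.

```python
import numpy as np

def pitchyaw_to_vector(g):
    """(theta, phi) = (pitch, yaw) in radians -> 3D unit gaze vector.

    Assumed convention (illustrative only): x right, y down, z away from the
    camera, with the vector pointing from the eye towards the gaze target.
    """
    theta, phi = g[..., 0], g[..., 1]
    return np.stack([-np.cos(theta) * np.sin(phi),
                     -np.sin(theta),
                     -np.cos(theta) * np.cos(phi)], axis=-1)

def vector_to_pitchyaw(v):
    """Inverse mapping under the same assumed convention."""
    v = v / np.linalg.norm(v, axis=-1, keepdims=True)
    theta = np.arcsin(-v[..., 1])
    phi = np.arctan2(-v[..., 0], -v[..., 2])
    return np.stack([theta, phi], axis=-1)
```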
Thus 3D gaze direction can be (pictorially) represented in the form of an image, where a spherical eyeball and circular iris are projected onto the image plane, resulting in a circle and an ellipse. Hence, changes in gaze direction result in changes in ellipse positioning (cf. Figure 2a). This pictorial representation can be easily generated from existing training data, given known gaze direction annotations. At inference time, recovering gaze direction from such a pictorial representation is a much simpler task than regressing directly from raw pixel values. However, adapting the input image to fit our pictorial representation is non-trivial. For a given eye image, a circular eyeball and an ellipse must be fitted, then centered and rescaled to be in the expected shape. We experimentally observed that this task can be performed well using a fully convolutional architecture. Furthermore, we show that our approach outperforms prior work on the final task of gaze estimation significantly. Our main contribution consists of a novel architecture for appearance-based gaze estimation. At the core of the proposed architecture lies the pictorial representation of 3D gaze direction to which the network fits the raw input images and from which additional convolutional layers estimate the final gaze direction. In addition, we perform: (a) an in-depth analysis of the effect of intermediate supervision using our pictorial representation, (b) a quantitative evaluation and comparison against state-of-the-art gaze estimation methods on three challenging datasets (MPIIGaze, EYEDIAP, Columbia) in the person-independent setting, and (c) a detailed evaluation of the robustness of a model trained using our architecture in terms of gaze direction and head pose as well as image quality. Finally, we show that our method reduces gaze error by 18% compared to the state-of-the-art [47] on MPIIGaze. Appearance-based Gaze Estimation with CNNs Traditional approaches to image-based gaze estimation are typically categorized as feature-based or model-based. Feature-based approaches reduce an eye image down to a set of features based on hand-crafted rules [11,12,25,41] and then feed these features into simple, often linear machine learning models to regress the final gaze estimate. Model-based methods instead attempt to fit a known 3D model to the eye image [30,35,39,42] by minimizing a suitable energy. Appearance-based methods learn a direct mapping from raw eye images to gaze direction. Learning this direct mapping can be very challenging due to changes in illumination, (partial) occlusions, head motion, and eye decorations. Due to these challenges, appearance-based gaze estimation methods have required the introduction of large, diverse training datasets and typically leverage some form of convolutional neural network architecture. Early works in appearance-based methods were restricted to laboratory settings with fixed head pose [1,32]. These initial constraints have become progressively relaxed, notably by the introduction of new datasets collected in everyday settings [14,45] or in simulated environments [29,38,40]. The increasing scale and complexity of training data has given rise to a wide variety of learning-based methods including variations of linear regression [7,18,19], random forests [29], k-nearest neighbours [29,40], and CNNs [14,26,38,45-47].
CNNs have proven to be more robust to visual appearance variations, and are capable of person-independent gaze estimation when provided with sufficient scale and diversity of training data. Person-independent gaze estimation can be performed without a user calibration step, and can directly be applied to areas such as visual attention analysis on unmodified devices [22], interaction on public displays [48], and identification of gaze targets [44], albeit at the cost of an increased need for training data and computational cost. Several CNN architectures have been proposed for person-independent gaze estimation in unconstrained settings, mostly differing in terms of possible input data modalities. Zhang et al. [45,46] adapt the LeNet-5 and VGG-16 architectures such that head pose angles (pitch and yaw) are concatenated to the first fully-connected layers. Despite its simplicity, this approach yields the current best gaze estimation error of 5.5° when evaluating the within-dataset cross-person case on MPIIGaze with single eye image and head pose input. In [14] separate convolutional streams are used for left/right eye images, a face image, and a 25 × 25 grid indicating the location and scale of the detected face in the image frame. Their experiments demonstrate that this approach yields improvements compared to [45]. In [46] a single face image is used as input and so-called spatial weights are learned. These emphasize important features based on the input image, yielding considerable improvements in gaze estimation accuracy. We introduce a novel pictorial representation of eye gaze and incorporate this into a deep neural network architecture via intermediate supervision. To the best of our knowledge, we are the first to apply a fully convolutional architecture to the task of appearance-based gaze estimation. We show that together these contributions lead to a significant performance improvement of 18% even when using a single eye image as sole input. Deep Learning with Auxiliary Supervision It has been shown [16,31] that by applying a loss function on intermediate outputs of a network, better performance can be obtained in different tasks. This technique was introduced to address the vanishing gradients problem during the training of deeper networks. In addition, such intermediate supervision allows the network to quickly learn an estimate for the final output and then learn to refine the predicted features, simplifying the mappings which need to be learned at every layer. Subsequent works have adopted intermediate supervision [21,37] to good effect for human pose estimation, by replicating the final output loss. Another technique for improving neural network performance is the use of auxiliary data through multi-task learning. In [24,49], the architectures are formed of a single shared convolutional stream which is split into separate fully-connected layers or regression functions for the auxiliary tasks of gender classification, face visibility, and head pose. Both works show marked improvements over state-of-the-art results in facial landmark localization. In these approaches, through the introduction of multiple learning objectives, an implicit prior is forced upon the network to learn a representation that is informative to both tasks. In contrast, we explicitly introduce a gaze-specific prior into the network architecture via gazemaps.
Most similar to our contribution is the work in [9], where facial landmark localization performance is improved by applying an auxiliary emotion classification loss. A key aspect to note is that their network is sequential, that is, the emotion recognition network takes only facial landmarks as input. The detected facial landmarks thus act as a manually defined representation for emotion classification, and create a bottleneck in the full data flow. It is shown experimentally that applying such an auxiliary loss (for a different task) yields improvement over state-of-the-art results on the AFLW dataset. In our work, we learn to regress an intermediate and minimal representation for gaze direction, forming a bottleneck before the main task of regressing two angle values. Thus, an important distinction from [9] is that while we employ an auxiliary loss term, it directly contributes to the task of gaze direction estimation. Furthermore, the auxiliary loss is applied as an intermediate task. We detail this further in Sec. 3.1. Recent work in multi-person human pose estimation [3] learns to estimate joint location heatmaps alongside so-called "part affinity fields". When combined, the two outputs enable the detection of multiple people's joints with reduced ambiguity in terms of which person a joint belongs to. In addition, at the end of every image scale, the architecture concatenates feature maps from each separate stream such that information can flow between the "part confidence" and "part affinity" maps. Thus, they operate in the image representation space, taking advantage of the strengths of convolutional neural networks. Our work is similar in spirit in that it introduces a novel image-based representation. Method A key contribution of our work is a pictorial representation of 3D gaze direction, which we call gazemaps. This representation is formed of two boolean maps, which can be regressed by a fully convolutional neural network. In this section, we describe our representation (Sec. 3.1) and then explain how we constructed our architecture to use the representation as a reference for intermediate supervision during training of the network (Sec. 3.2). Pictorial Representation of 3D Gaze In the task of appearance-based gaze estimation, an input eye image is processed to yield gaze direction in 3D. This direction is often represented as a 3-element unit vector v [6,26,46], or as two angles representing eyeball pitch and yaw, g = (θ, φ) [29,38,45,47]. In this section, we propose an alternative to previous direct mappings to v or g. If we denote the input eye image as x and the regression target as g, a conventional gaze estimation model estimates f : x → g. The mapping f can be complex, as reflected by the improvements in accuracy that have been attained by the simple adoption of newer CNN architectures ranging from LeNet-5 [26,45] and AlexNet [14,46] to VGG-16 [47], the current state-of-the-art CNN architecture for appearance-based gaze estimation. We hypothesize that it is possible to learn an intermediate image representation of the eye, m. That is, we define our model as g = k ∘ j(x), where j : x → m and k : m → g. It is conceivable that the complexity of learning j and k should be significantly lower than that of directly learning f, allowing neural network architectures with significantly lower model complexity to be applied to the same task of gaze estimation with higher or equivalent performance. Thus, we propose to estimate so-called gazemaps (m) and, from them, the 3D gaze direction (g).
We reformulate the task of gaze estimation into two concrete tasks: (a) reduction of the input image to a minimal normalized form (gazemaps), and (b) gaze estimation from gazemaps. The gazemaps for a given input eye image should be visually similar to the input, yet distill only the information necessary for gaze estimation, to ensure that the mapping k : m → g is simple. To do this, we consider that an average human eyeball has a diameter of ≈ 24mm [2] while an average human iris has a diameter of ≈ 12mm [5]. We then assume a simple model of the human eyeball and iris, where the eyeball is a perfect sphere and the iris is a perfect circle. For an output image of dimensions m × n, we assume a projected eyeball diameter of 2r = 1.2n and calculate the iris centre coordinates (u_i, v_i) to be: $u_i = \frac{m}{2} - r' \sin\phi \cos\theta$ (1) and $v_i = \frac{n}{2} - r' \sin\theta$ (2), where $r' = r \cos\left(\sin^{-1}\frac{1}{2}\right)$ and the gaze direction is g = (θ, φ). The iris is drawn as an ellipse with a major-axis diameter of $r$ and a minor-axis diameter of $r\,\lvert\cos\theta\cos\phi\rvert$. Examples of our gazemaps are shown in Fig. 2b, where two separate boolean maps are produced for one gaze direction g. Learning how to predict gazemaps from a single eye image alone is not a trivial task. Not only do extraneous factors such as image artifacts and partial occlusion need to be accounted for, but a simplified eyeball must also be fit to the given image based on iris and eyelid appearance. The detected regions must then be scaled and centered to produce the gazemaps. Thus the mapping j : x → m requires a more complex neural network architecture than the mapping k : m → g. Neural Network Architecture Our neural network consists of two parts: (a) regression from eye image to gazemap, and (b) regression from gazemap to gaze direction g. While any CNN architecture can be used for (b), (a) requires a fully convolutional architecture such as those used in human pose estimation. We adapt the stacked hourglass architecture from Newell et al. [21] for this task. The hourglass architecture has proven to be effective in tasks such as human pose estimation and facial landmark detection [43], where complex spatial relations need to be modeled at various scales to estimate the location of occluded joints or key points. The architecture performs repeated multi-scale refinement of feature maps, from which the desired output confidence maps can be extracted via 1 × 1 convolution layers. We exploit this fact to have our network predict gazemaps instead of classical confidence maps or heatmaps for joint positions. In Sec. 5, we demonstrate that this works well in practice. In our gazemap-regression network, we use 3 hourglass modules with intermediate supervision applied on the gazemap outputs of the last module only. The minimized intermediate loss is: $\mathcal{L}_{\mathrm{gazemap}} = -\alpha \sum_{p \in \mathcal{P}} m(p) \log \hat{m}(p)$ (3), where we calculate a cross-entropy between the predicted gazemap $\hat{m}$ and the ground-truth gazemap $m$ over the set of all pixels $\mathcal{P}$. In our evaluations, we set the weight coefficient α to $10^{-5}$. For the regression to g, we select DenseNet, which has recently been shown to perform well on image classification tasks [10] while using fewer parameters compared to previous architectures such as ResNet [8]. The loss term for gaze direction regression (per input) is: $\mathcal{L}_{\mathrm{gaze}} = \lVert g - \hat{g} \rVert_2^2$ (4), where $\hat{g}$ is the gaze direction predicted by our neural network. Implementation In this section, we describe the fully convolutional (Hourglass) and regressive (DenseNet) parts of our architecture in more detail.
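Eqs. (3) and (4) can be written compactly as in the sketch below, assuming the predicted gazemaps have already been normalized (e.g. via a spatial softmax or sigmoid) so that the logarithm is well defined; averaging the gaze loss over the batch is an additional assumption made here for training convenience.

```python
import torch

ALPHA = 1e-5  # weight of the gazemap loss, as stated in the text

def gazemap_loss(m_true, m_pred, eps=1e-8):
    """Eq. (3): cross-entropy between ground-truth and predicted gazemaps.

    m_true, m_pred: tensors of shape (batch, 2, H, W); m_pred is assumed to be
    already normalized to (0, 1], e.g. via a spatial softmax or a sigmoid.
    """
    return -ALPHA * torch.sum(m_true * torch.log(m_pred + eps))

def gaze_loss(g_true, g_pred):
    """Eq. (4): squared L2 distance between true and predicted pitch/yaw,
    averaged over the batch (the averaging is an assumption of this sketch)."""
    return torch.sum((g_true - g_pred) ** 2, dim=-1).mean()
```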
Hourglass Network In our implementation of the Stacked Hourglass Network [21], we provide images of size 150×90 as input and refine 64 feature maps of size 75×45 throughout the network. The half-scale feature maps are produced by an initial convolutional layer with filter size 7 and stride 2, as done in the original paper [21]. This is followed by batch normalization, ReLU activation, and two residual modules before being passed as input to the first hourglass module. There are 3 hourglass modules in our architecture, as visualized in Figure 1. In human pose estimation, the commonly used outputs are 2-dimensional confidence maps, which are pixel-aligned to the input image. Our task differs, and thus we do not apply intermediate supervision to the output of every hourglass module. This is to allow the input image to be processed at multiple scales over many layers, with the necessary features becoming aligned to the final output gazemap representation. Instead, we apply 1 × 1 convolutions to the output of the last hourglass module and apply the gazemap loss term (Eq. 3). DenseNet As described in Section 3.1, our pictorial representation allows for a simpler function to be learnt for the actual task of gaze estimation. To demonstrate this, we employ a very lightweight DenseNet architecture [10]. Our gaze regression network consists of 5 dense blocks (5 layers per block) with a growth rate of 8, bottleneck layers, and a compression factor of 0.5. This results in just 62 feature maps at the end of the DenseNet, and subsequently 62 features through global average pooling. Finally, a single linear layer maps these features to g. The resulting network is lightweight and consists of just 66k trainable parameters. Training Details We train our neural network with a batch size of 32, a learning rate of 0.0002, and an $L_2$ weight regularization coefficient of $10^{-4}$. The optimization method used is Adam [13]. Training occurs for 20 epochs on a desktop PC with an Intel Core i7 CPU and an Nvidia Titan Xp GPU, taking just over 2 hours for one fold (out of 15) of a leave-one-person-out evaluation on the MPIIGaze dataset. During training, slight data augmentation is applied in terms of image translation and scaling, and the learning rate is multiplied by 0.1 after every 5k gradient update steps to address over-fitting and to stabilize the final error. Evaluations We perform our evaluations primarily on the MPIIGaze dataset, which consists of images taken of 15 laptop users in everyday settings. The dataset has been used as the standard benchmark dataset for unconstrained appearance-based gaze estimation in recent years [26,38,40,45-47]. Our focus is on cross-person single-eye evaluations where 15 models are trained per configuration or architecture in a leave-one-person-out fashion. That is, a neural network is trained on 14 people's data (1500 entries each from left and right eyes), then tested on the test set of the left-out person (1000 entries). The mean over 15 such evaluations is used as the final error metric representing cross-person performance. As MPIIGaze is a dataset which well represents real-world settings, cross-person evaluations on it are indicative of the real-world person-independence of a given model. To further test the generalization capabilities of our method, we also perform evaluations on two additional datasets in this section: Columbia [28] and EYEDIAP [7], where we perform 5-fold cross-validation.
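The two output heads described above can be sketched as a simplified PyTorch module: a 1 × 1 convolution maps the last hourglass module's 64 feature maps to the two gazemaps, and the regression head global-average-pools the DenseNet's 62 final feature maps before a single linear layer produces g. This is a stand-in for illustration, not the authors' exact code.

```python
import torch
import torch.nn as nn

class GazemapAndGazeHeads(nn.Module):
    """Simplified output heads: 1x1 conv -> 2 gazemaps; GAP + linear -> (pitch, yaw)."""

    def __init__(self, hourglass_channels=64, densenet_channels=62):
        super().__init__()
        # 1x1 convolution applied to the last hourglass module's feature maps.
        self.to_gazemaps = nn.Conv2d(hourglass_channels, 2, kernel_size=1)
        # Final regression layer following the (not shown) DenseNet part.
        self.to_gaze = nn.Linear(densenet_channels, 2)

    def forward(self, hourglass_features, densenet_features):
        gazemaps = self.to_gazemaps(hourglass_features)   # (B, 2, H, W)
        pooled = densenet_features.mean(dim=(2, 3))       # global average pooling
        gaze = self.to_gaze(pooled)                       # (B, 2) pitch/yaw
        return gazemaps, gaze
```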
While Columbia displays large diversity between its 55 participants, the images are of high quality, having been taken with a DSLR. EYEDIAP, on the other hand, suffers from the low resolution of the VGA camera used, as well as the large distance between camera and participant. We select screen target (CS/DS) and static head pose sequences (S) from the EYEDIAP dataset, sampling every 15 seconds from its VGA video streams (V). Training on moving head sequences (M) with just single eye input proved infeasible, with all models experiencing diverging test error during training. Performance improvements on MPIIGaze, Columbia, and EYEDIAP would indicate that our model is robust to cross-person appearance variations and the challenges caused by low eye image resolution and quality. In this section, we first evaluate the effect of our gazemap loss (Sec. 5.1), then compare the performance (Sec. 5.2) and robustness (Sec. 5.3) of our approach against state-of-the-art architectures. We postulated in Sec. 3.1 that by providing a pictorial representation of 3D gaze direction that is visually similar to the input image, we could achieve improvements in appearance-based gaze estimation. In our experiments we find that applying the gazemap loss term generally offers performance improvements compared to the case where the loss term is not applied. This improvement is particularly emphasized when the DenseNet growth rate is high (e.g. k = 32), as shown in Table 1. Pictorial Representation (Gazemaps) By observing the output of the last hourglass module and comparing against the input images (Figure 4), we can confirm that even without intermediate supervision, our network learns to isolate the iris region, yielding a similar image representation of gaze direction across participants. Note that this representation is learned only with the final gaze direction loss, L_gaze, and that the blobs representing iris locations are not necessarily aligned with the actual iris locations in the input images. Without intermediate supervision, the learned minimal image representation may incorporate visual factors such as occlusion due to hair and eyeglasses, as shown in Figure 4a. This supports our hypothesis that an intermediate representation consisting of an iris and eyeball contains the information required to regress gaze direction. However, due to the nature of learning, the network may also learn irrelevant details such as the edges of the glasses. Yet, by explicitly providing an intermediate representation in the form of gazemaps, we enforce a prior that helps the network learn the desired representation without incorporating the previously mentioned unhelpful details. Cross-Person Gaze Estimation We compare the cross-person performance of our model by conducting a leave-one-person-out evaluation on MPIIGaze and 5-fold evaluations on Columbia and EYEDIAP. In Section 3.1 we discussed that the mapping k from gazemap to gaze direction should not require a complex architecture to model. Thus, our DenseNet is configured with a low growth rate (k = 8). To allow a fair comparison, we re-implement 2 architectures for single-eye image inputs (of size 150 × 90): AlexNet and VGG-16. The AlexNet and VGG-16 architectures have been used in recent works in appearance-based gaze estimation and are thus suitable baselines [46,47]. Table 2. Mean gaze estimation error in degrees for within-dataset cross-person k-fold evaluation, evaluated on the (a) MPIIGaze, (b) Columbia, and (c) EYEDIAP datasets.
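The error figures in Table 2 are mean angular errors. As a reference, the sketch below shows one way such an error could be computed from pitch/yaw predictions with NumPy; the pitch/yaw-to-vector convention and the synthetic data are our own assumptions rather than details taken from the paper.

```python
import numpy as np

def pitchyaw_to_vector(g):
    """Convert (pitch, yaw) angles in radians to 3D gaze vectors (one common convention)."""
    theta, phi = g[..., 0], g[..., 1]
    return np.stack([-np.cos(theta) * np.sin(phi),
                     -np.sin(theta),
                     -np.cos(theta) * np.cos(phi)], axis=-1)

def mean_angular_error(g_true, g_pred):
    """Mean angle in degrees between ground-truth and predicted gaze directions."""
    a = pitchyaw_to_vector(np.asarray(g_true, dtype=np.float64))
    b = pitchyaw_to_vector(np.asarray(g_pred, dtype=np.float64))
    cos = np.sum(a * b, axis=-1) / (
        np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())

# Example with random data standing in for one fold's ground truth and predictions.
gt = np.random.uniform(-0.5, 0.5, size=(1000, 2))
pred = gt + np.random.normal(scale=0.05, size=gt.shape)
print(mean_angular_error(gt, pred))
```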
Implementation and training procedure details of these architectures are provided in the supplementary materials. In the MPIIGaze evaluations (Table 2a), our proposed approach outperforms the current state-of-the-art approach by a large margin, yielding an improvement of 1.0° (5.5° → 4.5°, i.e. 18.2%). This significant improvement is achieved in spite of the reduced number of trainable parameters used in our architecture (90M vs 0.7M). Our performance compares favorably to that reported in [46] (4.8°), where full-face input is used in contrast to our single-eye input. While our results cannot directly be compared with those of [46] due to the different definition of gaze direction (face-centred as opposed to eye-centred), the similar performance suggests that eye images may be sufficient as input to the task of gaze direction estimation. Our approach attains comparable performance to models taking face input, and uses considerably fewer parameters than recently introduced architectures (129x fewer than GazeNet). We additionally evaluate our model on the Columbia Gaze and EYEDIAP datasets in Table 2b and Table 2c respectively. While the high image quality results in all three methods performing comparably on Columbia Gaze, our approach still prevails with an improvement of 0.4° over AlexNet. On EYEDIAP, the mean error is very high due to the low resolution and low quality of the input. Note that no head pose estimation is performed, with only single eye input being relied on for gaze estimation. Our gazemap-based architecture shows its strengths in this case, performing 0.9° better than VGG-16 (an 8% improvement). Sample gazemap and gaze direction predictions are shown in Figure 5, where it is evident that despite the lack of visual detail, it is possible to fit gazemaps that yield improved gaze estimation error. By evaluating our architecture on 3 different datasets with different properties in the cross-person setting, we can conclude that our approach provides significantly higher generalization capabilities compared to previous approaches. Thus, we bring gaze estimation closer to direct real-world applications. Robustness Analysis In order to shed more light on our models' performance, we perform an additional robustness analysis. More concretely, we aim to analyze how our approach performs in difficult and challenging situations, such as extreme head pose and gaze direction. To do so, we evaluate a moving average on the output of our within-MPIIGaze evaluations, where the y-values correspond to the mean angular error and the x-values take one of the following factors of variation: head pose (pitch & yaw) or gaze direction (pitch & yaw). Additionally, we also consider image quality (contrast & sharpness) as a qualitative factor. In order to isolate each factor of variation from the rest, we evaluate the moving average only on the points whose remaining factors are close to their median values. Intuitively, this corresponds to data points where the person moves only in one specific direction, while staying at rest in all of the remaining directions. This is not the case for the image quality analysis, where all data points are used. Figure 6 plots the mean angular error as a function of the different movement variations and image qualities. The top row corresponds to variation in head pose, the middle to gaze direction and the bottom to varying image quality.
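A sketch of the factor-isolation procedure described above is given below. The tolerance around the median and the moving-average window are illustrative choices of ours, since the paper does not state them, and the synthetic data only exercises the function.

```python
import numpy as np

def conditioned_moving_average(errors, factors, key, window=51, tol=0.1):
    """Moving average of `errors` against factor `key`, keeping only samples whose
    *other* factors lie within `tol` of their median (factor isolation)."""
    errors = np.asarray(errors, dtype=np.float64)
    mask = np.ones(len(errors), dtype=bool)
    for name, values in factors.items():
        if name == key:
            continue
        values = np.asarray(values, dtype=np.float64)
        mask &= np.abs(values - np.median(values)) <= tol
    x = np.asarray(factors[key], dtype=np.float64)[mask]
    y = errors[mask]
    order = np.argsort(x)
    x, y = x[order], y[order]
    # Simple centred moving average over the sorted, isolated samples.
    window = max(1, min(window, len(y)))
    kernel = np.ones(window) / window
    return x, np.convolve(y, kernel, mode="same")

# Synthetic example: error vs. head yaw, with the other factors held near their medians.
n = 5000
factors = {"head_pitch": np.random.normal(0, 0.2, n),
           "head_yaw": np.random.normal(0, 0.3, n),
           "gaze_pitch": np.random.normal(0, 0.2, n),
           "gaze_yaw": np.random.normal(0, 0.3, n)}
errors = 4.5 + 2.0 * np.abs(factors["head_yaw"]) + np.random.normal(0, 0.5, n)
x, y_smooth = conditioned_moving_average(errors, factors, key="head_yaw")
```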
In order to calculate the image contrast, we use the RMS contrast metric, whereas to compute the sharpness, we employ a Laplacian-based formula as outlined in [23]. Both metrics are explained in the supplementary materials. The figure shows that we consistently outperform competing architectures for extreme head and gaze angles. Notably, we show more consistent performance in particular over large ranges of head pitch and gaze yaw angles. In addition, we surpass prior works on images of varying quality, as shown in Figures 6e and 6f. Conclusion Our work is a first attempt at proposing an explicit prior designed for the task of gaze estimation with a neural network architecture. We do so by introducing a novel pictorial representation which we call gazemaps. An accompanying architecture and training scheme using intermediate supervision arises naturally as a consequence, with a fully convolutional architecture being employed for the first time for appearance-based eye gaze estimation. Our gazemaps are anatomically inspired, and are experimentally shown to outperform approaches which use significantly more model parameters and, at times, more input modalities. We report improvements of up to 18% on MPIIGaze along with improvements on two additional datasets against competitive baselines. In addition, we demonstrate that our final model is more robust to various factors such as extreme head poses and gaze directions, as well as poor image quality, compared to prior work. Future work can look into alternative pictorial representations for gaze estimation, and alternative architectures for gazemap prediction. Additionally, there is potential in using synthesized gaze directions (and corresponding gazemaps) for unsupervised training of the gaze regression function, to further improve performance. A Baseline Architectures The state-of-the-art CNN architecture for appearance-based gaze estimation is based on a lightly modified VGG-16 architecture [47], with a mean cross-person gaze estimation error of 5.5° on the MPIIGaze dataset [45]. We compare against a standard VGG-16 architecture [27] and an AlexNet architecture [15], which has been the standard architecture for gaze estimation in many works [14,46]. The specific architectures used as baselines are described in Table 3. Both models are trained with a batch size of 32, a learning rate of 5 × 10⁻⁵ and an L2 weight regularization coefficient of 10⁻⁴, using the Adam optimizer [13]. The learning rate is multiplied by 0.1 every 5,000 training steps, and slight data augmentation is performed in image translation and scale. B Image metrics In this section we describe the image metrics used for the robustness plots concerning image quality (Figures 6e and 6f in the paper). B.1 Image contrast The root mean square contrast is defined as the standard deviation of the pixel intensities:

$$\mathrm{RMC} = \sqrt{\frac{1}{MN}\sum_{i=0}^{N-1}\sum_{j=0}^{M-1}\left(I_{ij} - \bar{I}\right)^{2}},$$

where $I_{ij}$ is the value of the image $I$ of size $M \times N$ at location $(i, j)$ and $\bar{I}$ is the average intensity of all pixel values in the image. B.2 Image sharpness In order to obtain a sharpness-based metric, we calculate the variance of the image I after convolving it with a Laplacian, similar to [23]. This corresponds to an approximation of the second derivative, which is computed with the help of a discrete Laplacian mask. Table 3. Configuration of CNNs used as baselines for gaze estimation. The style of [27] is followed where possible.
s represents stride length, p dropout probability, and conv9-96 represents a convolutional layer with kernel size 9 and 96 output feature maps. maxpool3 represents a max-pooling layer with kernel size 3.
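Returning to the image-quality metrics of Appendix B, the following is a small sketch of how the RMS contrast and Laplacian-variance sharpness could be computed. The use of OpenCV's default 3 × 3 Laplacian kernel, the intensity normalisation, and the file name are our assumptions rather than details taken from the paper.

```python
import cv2
import numpy as np

def rms_contrast(gray):
    """Root-mean-square contrast: standard deviation of (normalised) pixel intensities."""
    gray = gray.astype(np.float64) / 255.0
    return float(gray.std())

def laplacian_sharpness(gray):
    """Variance of the Laplacian-filtered image, in the spirit of Appendix B.2 and [23]."""
    lap = cv2.Laplacian(gray, cv2.CV_64F)  # default 3x3 discrete Laplacian aperture
    return float(lap.var())

img = cv2.imread("eye_patch.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
if img is not None:
    print(rms_contrast(img), laplacian_sharpness(img))
```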
4,914
1807.10002
2884915206
Estimating human gaze from natural eye images only is a challenging task. Gaze direction can be defined by the pupil- and the eyeball center where the latter is unobservable in 2D images. Hence, achieving highly accurate gaze estimates is an ill-posed problem. In this paper, we introduce a novel deep neural network architecture specifically designed for the task of gaze estimation from single eye input. Instead of directly regressing two angles for the pitch and yaw of the eyeball, we regress to an intermediate pictorial representation which in turn simplifies the task of 3D gaze direction estimation. Our quantitative and qualitative results show that our approach achieves higher accuracies than the state-of-the-art and is robust to variation in gaze, head pose and image quality.
Several CNN architectures have been proposed for person-independent gaze estimation in unconstrained settings, mostly differing in terms of possible input data modalities. Zhang et al. @cite_36 @cite_33 adapt the LeNet-5 and VGG-16 architectures such that head pose angles (pitch and yaw) are concatenated to the first fully-connected layers. Despite its simplicity, this approach yields the current best gaze estimation error of @math when evaluating the within-dataset cross-person case on MPIIGaze with single eye image and head pose input. In @cite_13, separate convolutional streams are used for left/right eye images, a face image, and a @math grid indicating the location and scale of the detected face in the image frame. Their experiments demonstrate that this approach yields improvements compared to @cite_36 . In @cite_33, a single face image is used as input and so-called spatial-weights are learned. These emphasize important features based on the input image, yielding considerable improvements in gaze estimation accuracy.
{ "abstract": [ "Appearance-based gaze estimation is believed to work well in real-world settings, but existing datasets have been collected under controlled laboratory conditions and methods have been not evaluated across multiple datasets. In this work we study appearance-based gaze estimation in the wild. We present the MPIIGaze dataset that contains 213,659 images we collected from 15 participants during natural everyday laptop use over more than three months. Our dataset is significantly more variable than existing ones with respect to appearance and illumination. We also present a method for in-the-wild appearance-based gaze estimation using multimodal convolutional neural networks that significantly outperforms state-of-the art methods in the most challenging cross-dataset evaluation. We present an extensive evaluation of several state-of-the-art image-based gaze estimation algorithms on three current datasets, including our own. This evaluation provides clear insights and allows us to identify key research challenges of gaze estimation in the wild.", "From scientific research to commercial applications, eye tracking is an important tool across many domains. Despite its range of applications, eye tracking has yet to become a pervasive technology. We believe that we can put the power of eye tracking in everyone's palm by building eye tracking software that works on commodity hardware such as mobile phones and tablets, without the need for additional sensors or devices. We tackle this problem by introducing GazeCapture, the first large-scale dataset for eye tracking, containing data from over 1450 people consisting of almost 2.5M frames. Using GazeCapture, we train iTracker, a convolutional neural network for eye tracking, which achieves a significant reduction in error over previous approaches while running in real time (10-15fps) on a modern mobile device. Our model achieves a prediction error of 1.71cm and 2.53cm without calibration on mobile phones and tablets respectively. With calibration, this is reduced to 1.34cm and 2.12cm. Further, we demonstrate that the features learned by iTracker generalize well to other datasets, achieving state-of-the-art results. The code, data, and models are available at this http URL", "Eye gaze is an important non-verbal cue for human affect analysis. Recent gaze estimation work indicated that information from the full face region can benefit performance. Pushing this idea further, we propose an appearance-based method that, in contrast to a long-standing line of work in computer vision, only takes the full face image as input. Our method encodes the face image using a convolutional neural network with spatial weights applied on the feature maps to flexibly suppress or enhance information in different facial regions. Through extensive evaluation, we show that our full-face method significantly outperforms the state of the art for both 2D and 3D gaze estimation, achieving improvements of up to 14.3 on MPIIGaze and 27.7 on EYEDIAP for person-independent 3D gaze estimation. We further show that this improvement is consistent across different illumination conditions and gaze directions and particularly pronounced for the most challenging extreme head poses." ], "cite_N": [ "@cite_36", "@cite_13", "@cite_33" ], "mid": [ "2027879843", "2952055246", "2557669140" ] }
Deep Pictorial Gaze Estimation
Accurately estimating human gaze direction has many applications in assistive technologies for users with motor disabilities [4], gaze-based human-computer interaction [20], visual attention analysis [17], consumer behavior research [36], AR, VR and more. Traditionally this has been done via specialized hardware, shining infrared illumination into the user's eyes and using specialized cameras, sometimes requiring the use of a headrest. Recently, deep learning based approaches have made first steps towards fully unconstrained gaze estimation under free head motion, in environments with uncontrolled illumination conditions, and using only a single commodity (and potentially low quality) camera. However, this remains a challenging task due to inter-subject variance in eye appearance, self-occlusions, and head pose and rotation variations. In consequence, current approaches attain accuracies on the order of only 6° and are still far from the requirements of many application scenarios. While demonstrating the feasibility of purely image-based gaze estimation and introducing large datasets, these learning-based approaches [14,45,46] have leveraged convolutional neural network (CNN) architectures, originally designed for the task of image classification, with minor modifications. For example, [45,47] simply append head pose orientation to the first fully connected layer of either LeNet-5 or VGG-16, while [14] proposes to merge multiple input modalities by replicating convolutional layers from AlexNet. In [46] the AlexNet architecture is modified to learn so-called spatial-weights to emphasize important activations by region when full face images are provided as input. Typically, the proposed architectures are only supervised via a mean-squared error loss on the gaze direction output, represented as either a 3-dimensional unit vector or pitch and yaw angles in radians. In this work we propose a network architecture that has been specifically designed with the task of gaze estimation in mind. An important insight is that regressing first to an abstract but gaze-specific representation helps the network to more accurately predict the final output of 3D gaze direction. Furthermore, introducing this gaze representation also allows for intermediate supervision, which we experimentally show to further improve accuracy. Our work is loosely inspired by recent progress in the field of human pose estimation. Here, earlier work directly regressed joint coordinates [34]. More recently, the need for a more task-specific form of supervision has led to the use of confidence maps or heatmaps, where the position of a joint is depicted as a 2-dimensional Gaussian [21,33,37]. This representation allows for a simpler mapping between input image and joint position, allows for intermediate supervision, and hence for deeper networks. However, applying this concept of heatmaps to regularize training is not directly applicable to the case of gaze estimation since the crucial eyeball center is not observable in 2D image data. We propose a conceptually similar representation for gaze estimation, called gazemaps. Such a gazemap is an abstract, pictorial representation of the eyeball, the iris and the pupil at its center (see Figure 1). The simplest depiction of an eyeball's rotation can be made via a circle and an ellipse, the former representing the eyeball, and the latter the iris. The gaze direction is then defined by the vector connecting the larger circle's center and the ellipse.
Thus 3D gaze direction can be (pictorially) represented in the form of an image, where a spherical eyeball and circular iris are projected onto the image plane, resulting in a circle and ellipse. Hence, changes in gaze direction result in changes in ellipse positioning (cf. Figure 2a). This pictorial representation can be easily generated from existing training data, given known gaze direction annotations. At inference time, recovering gaze direction from such a pictorial representation is a much simpler task than regressing directly from raw pixel values. However, adapting the input image to fit our pictorial representation is non-trivial. For a given eye image, a circular eyeball and an ellipse must be fitted, then centered and rescaled to be in the expected shape. We experimentally observed that this task can be performed well using a fully convolutional architecture. Furthermore, we show that our approach outperforms prior work on the final task of gaze estimation significantly. Our main contribution consists of a novel architecture for appearance-based gaze estimation. At the core of the proposed architecture lies the pictorial representation of 3D gaze direction, to which the network fits the raw input images and from which additional convolutional layers estimate the final gaze direction. In addition, we perform: (a) an in-depth analysis of the effect of intermediate supervision using our pictorial representation, (b) quantitative evaluation and comparison against state-of-the-art gaze estimation methods on three challenging datasets (MPIIGaze, EYEDIAP, Columbia) in the person-independent setting, and (c) a detailed evaluation of the robustness of a model trained using our architecture in terms of gaze direction and head pose as well as image quality. Finally, we show that our method reduces gaze error by 18% compared to the state-of-the-art [47] on MPIIGaze. Appearance-based Gaze Estimation with CNNs Traditional approaches to image-based gaze estimation are typically categorized as feature-based or model-based. Feature-based approaches reduce an eye image down to a set of features based on hand-crafted rules [11,12,25,41] and then feed these features into simple, often linear machine learning models to regress the final gaze estimate. Model-based methods instead attempt to fit a known 3D model to the eye image [30,35,39,42] by minimizing a suitable energy. Appearance-based methods learn a direct mapping from raw eye images to gaze direction. Learning this direct mapping can be very challenging due to changes in illumination, (partial) occlusions, head motion and eye decorations. Due to these challenges, appearance-based gaze estimation methods required the introduction of large, diverse training datasets and typically leverage some form of convolutional neural network architecture. Early works in appearance-based methods were restricted to laboratory settings with fixed head pose [1,32]. These initial constraints have become progressively relaxed, notably by the introduction of new datasets collected in everyday settings [14,45] or in simulated environments [29,38,40]. The increasing scale and complexity of training data has given rise to a wide variety of learning-based methods including variations of linear regression [7,18,19], random forests [29], k-nearest neighbours [29,40], and CNNs [14, 26, 38, 45-47].
CNNs have proven to be more robust to visual appearance variations, and are capable of person-independent gaze estimation when provided with sufficient scale and diversity of training data. Person-independent gaze estimation can be performed without a user calibration step, and can directly be applied to areas such as visual attention analysis on unmodified devices [22], interaction on public displays [48], and identification of gaze targets [44], albeit at the cost of an increased need for training data and computational cost. Several CNN architectures have been proposed for person-independent gaze estimation in unconstrained settings, mostly differing in terms of possible input data modalities. Zhang et al. [45,46] adapt the LeNet-5 and VGG-16 architectures such that head pose angles (pitch and yaw) are concatenated to the first fully-connected layers. Despite its simplicity, this approach yields the current best gaze estimation error of 5.5° when evaluating the within-dataset cross-person case on MPIIGaze with single eye image and head pose input. In [14] separate convolutional streams are used for left/right eye images, a face image, and a 25 × 25 grid indicating the location and scale of the detected face in the image frame. Their experiments demonstrate that this approach yields improvements compared to [45]. In [46] a single face image is used as input and so-called spatial-weights are learned. These emphasize important features based on the input image, yielding considerable improvements in gaze estimation accuracy. We introduce a novel pictorial representation of eye gaze and incorporate this into a deep neural network architecture via intermediate supervision. To the best of our knowledge, we are the first to apply a fully convolutional architecture to the task of appearance-based gaze estimation. We show that together these contributions lead to a significant performance improvement of 18%, even when using a single eye image as sole input. Deep Learning with Auxiliary Supervision It has been shown [16,31] that by applying a loss function on intermediate outputs of a network, better performance can be yielded in different tasks. This technique was introduced to address the vanishing gradients problem during the training of deeper networks. In addition, such intermediate supervision allows the network to quickly learn an estimate for the final output and then learn to refine the predicted features, simplifying the mappings which need to be learned at every layer. Subsequent works have adopted intermediate supervision [21,37] to good effect for human pose estimation, by replicating the final output loss. Another technique for improving neural network performance is the use of auxiliary data through multi-task learning. In [24,49], the architectures are formed of a single shared convolutional stream which is split into separate fully-connected layers or regression functions for the auxiliary tasks of gender classification, face visibility, and head pose. Both works show marked improvements to state-of-the-art results in facial landmarks localization. In these approaches, through the introduction of multiple learning objectives, an implicit prior is forced upon the network to learn a representation that is informative to both tasks. On the contrary, we explicitly introduce a gaze-specific prior into the network architecture via gazemaps.
Most similar to our contribution is the work in [9], where facial landmark localization performance is improved by applying an auxiliary emotion classification loss. A key aspect to note is that their network is sequential, that is, the emotion recognition network takes only facial landmarks as input. The detected facial landmarks thus act as a manually defined representation for emotion classification, and create a bottleneck in the full data flow. It is shown experimentally that applying such an auxiliary loss (for a different task) yields improvement over state-of-the-art results on the AFLW dataset. In our work, we learn to regress an intermediate and minimal representation for gaze direction, forming a bottleneck before the main task of regressing two angle values. Thus, an important distinction to [9] is that while we employ an auxiliary loss term, it directly contributes to the task of gaze direction estimation. Furthermore, the auxiliary loss is applied as an intermediate task. We detail this further in Sec. 3.1. Recent work in multi-person human pose estimation [3] learns to estimate joint location heatmaps alongside so-called "part affinity fields". When combined, the two outputs then enable the detection of multiple people's joints with reduced ambiguity in terms of which person a joint belongs to. In addition, at the end of every image scale, the architecture concatenates feature maps from each separate stream such that information can flow between the "part confidence" and "part affinity" maps. Thus, they operate on the image representation space, taking advantage of the strengths of convolutional neural networks. Our work is similar in spirit in that it introduces a novel image-based representation. Method A key contribution of our work is a pictorial representation of 3D gaze direction, which we call gazemaps. This representation is formed of two boolean maps, which can be regressed by a fully convolutional neural network. In this section, we describe our representation (Sec. 3.1) and then explain how we constructed our architecture to use the representation as reference for intermediate supervision during training of the network (Sec. 3.2). Pictorial Representation of 3D Gaze In the task of appearance-based gaze estimation, an input eye image is processed to yield gaze direction in 3D. This direction is often represented as a 3-element unit vector v [6,26,46], or as two angles representing eyeball pitch and yaw g = (θ, φ) [29,38,45,47]. In this section, we propose an alternative to previous direct mappings to v or g. If we denote the input eye image as x and the gaze values to be regressed as g, a conventional gaze estimation model estimates f : x → g. The mapping f can be complex, as reflected by the improvement in accuracies that has been attained by the simple adoption of newer CNN architectures ranging from LeNet-5 [26,45] and AlexNet [14,46] to VGG-16 [47], the current state-of-the-art CNN architecture for appearance-based gaze estimation. We hypothesize that it is possible to learn an intermediate image representation of the eye, m. That is, we define our model as g = k ∘ j(x), where j : x → m and k : m → g. It is conceivable that the complexity of learning j and k should be significantly lower than directly learning f, allowing neural network architectures with significantly lower model complexity to be applied to the same task of gaze estimation with higher or equivalent performance. Thus, we propose to estimate so-called gazemaps (m) and from that the 3D gaze direction (g).
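The decomposition g = k ∘ j(x) amounts to composing two sub-networks behind a single interface. The sketch below illustrates only this wiring; the concrete hourglass (j) and DenseNet (k) modules described in Sec. 3.2 are left as constructor arguments and are not reproduced here.

```python
import torch.nn as nn

class PictorialGazeEstimator(nn.Module):
    """Composes j: x -> m (gazemap regression) with k: m -> g (gaze regression)."""
    def __init__(self, gazemap_net: nn.Module, gaze_net: nn.Module):
        super().__init__()
        self.gazemap_net = gazemap_net   # e.g. a stacked hourglass network (j)
        self.gaze_net = gaze_net         # e.g. a lightweight DenseNet (k)

    def forward(self, x):
        m = self.gazemap_net(x)          # intermediate pictorial representation
        g = self.gaze_net(m)             # final gaze direction (pitch, yaw)
        return m, g                      # m can also receive intermediate supervision
```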
4,914
1807.10002
2884915206
Estimating human gaze from natural eye images only is a challenging task. Gaze direction can be defined by the pupil- and the eyeball center where the latter is unobservable in 2D images. Hence, achieving highly accurate gaze estimates is an ill-posed problem. In this paper, we introduce a novel deep neural network architecture specifically designed for the task of gaze estimation from single eye input. Instead of directly regressing two angles for the pitch and yaw of the eyeball, we regress to an intermediate pictorial representation which in turn simplifies the task of 3D gaze direction estimation. Our quantitative and qualitative results show that our approach achieves higher accuracies than the state-of-the-art and is robust to variation in gaze, head pose and image quality.
It has been shown @cite_4 @cite_28 that by applying a loss function on intermediate outputs of a network, better performance can be yielded in different tasks. This technique was introduced to address the vanishing gradients problem during the training of deeper networks. In addition, such intermediate supervision allows for the network to quickly learn an estimate for the final output then learn to refine the predicted features - simplifying the mappings which need to be learned at every layer. Subsequent works have adopted intermediate supervision @cite_15 @cite_18 to good effect for human pose estimation, by replicating the final output loss.
{ "abstract": [ "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.", "", "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods." ], "cite_N": [ "@cite_28", "@cite_15", "@cite_4", "@cite_18" ], "mid": [ "2950179405", "2255781698", "", "2307770531" ] }
Deep Pictorial Gaze Estimation
Accurately estimating human gaze direction has many applications in assistive technologies for users with motor disabilities [4], gaze-based human-computer interaction [20], visual attention analysis [17], consumer behavior research [36], AR, VR and more. Traditionally this has been done via specialized hardware, shining infrared illumination into the user's eyes and via specialized cameras, sometimes requiring use of a headrest. Recently deep learning based approaches have made first steps towards fully unconstrained gaze estimation under free head motion, in environments with uncontrolled illumination conditions, and using only a single commodity (and potentially low quality) camera. However, this remains a challenging task due to inter-subject variance in eye appearance, self-occlusions, and head pose and rotation variations. In consequence, current approaches attain accuracies in the order of 6 • only and are still far from the requirements of many application scenarios. While demonstrating the feasibility of purely image based gaze estimation and introducing large datasets, these learning-based approaches [14,45,46] have leveraged convolutional neural network (CNN) architectures, originally designed for the task of image classification, with minor modifications. For example, [45,47] simply append head pose orientation to the first fully connected layer of either LeNet-5 or VGG-16, while [14] proposes to merge multiple input modalities by replicating convolutional layers from AlexNet. In [46] the AlexNet architecture is modified to learn socalled spatial-weights to emphasize important activations by region when full face images are provided as input. Typically, the proposed architectures are only supervised via a mean-squared error loss on the gaze direction output, represented as either a 3-dimensional unit vector or pitch and yaw angles in radians. In this work we propose a network architecture that has been specifically designed with the task of gaze estimation in mind. An important insight is that regressing first to an abstract but gaze specific representation helps the network to more accurately predict the final output of 3D gaze direction. Furthermore, introducing this gaze representation also allows for intermediate supervision which we experimentally show to further improve accuracy. Our work is loosely inspired by recent progress in the field of human pose estimation. Here, earlier work directly regressed joint coordinates [34]. More recently the need for a more task specific form of supervision has led to the use of confidence maps or heatmaps, where the position of a joint is depicted as a 2-dimensional Gaussian [21,33,37]. This representation allows for a simpler mapping between input image and joint position, allows for intermediate supervision, and hence for deeper networks. However, applying this concept of heatmaps to regularize training is not directly applicable to the case of gaze estimation since the crucial eyeball center is not observable in 2D image data. We propose a conceptually similar representation for gaze estimation, called gazemaps. Such a gazemap is an abstract, pictorial representation of the eyeball, the iris and the pupil at it's center (see Figure 1). The simplest depiction of an eyeball's rotation can be made via a circle and an ellipse, the former representing the eyeball, and the latter the iris. The gaze direction is then defined by the vector connecting the larger circle's center and the ellipse. 
Thus 3D gaze direction can be (pictorially) represented in the form of an image, where a spherical eyeball and circular iris are projected onto the image plane, resulting in a circle and ellipse. Hence, changes in gaze direction result in changes in ellipse positioning (cf. Figure 2a). This pictorial representation can be easily generated from existing training data, given known gaze direction annotations. At inference time recovering gaze direction from such a pictorial representation is a much simpler task than regressing directly from raw pixel values. However, adapting the input image to fit our pictorial representation is non-trivial. For a given eye image, a circular eyeball and an ellipse must be fitted, then centered and rescaled to be in the expected shape. We experimentally observed that this task can be performed well using a fully convolutional architecture. Furthermore, we show that our approach outperforms prior work on the final task of gaze estimation significantly. Our main contribution consists of a novel architecture for appearance-based gaze estimation. At the core of the proposed architecture lies the pictorial representation of 3D gaze direction to which the network fits the raw input images and from which additional convolutional layers estimate the final gaze direction. In addition, we perform: (a) an in-depth analysis of the effect of intermediate supervision using our pictorial representation, (b) quantitative evaluation and comparison against state-of-the-art gaze estimation methods on three challenging datasets (MPIIGaze, EYEDIAP, Columbia) in the person independent setting, and a (c) detailed evaluation of the robustness of a model trained using our architecture in terms of gaze direction and head pose as well as image quality. Finally, we show that our method reduces gaze error by 18% compared to the state-of-the-art [47] on MPIIGaze. Appearance-based Gaze Estimation with CNNs Traditional approaches to image-based gaze estimation are typically categorized as feature-based or model-based. Feature-based approaches reduce an eye image down to a set of features based on hand-crafted rules [11,12,25,41] and then feed these features into simple, often linear machine learning models to regress the final gaze estimate. Model-based methods instead attempt to fit a known 3D model to the eye image [30,35,39,42] by minimizing a suitable energy. Appearance-based methods learn a direct mapping from raw eye images to gaze direction. Learning this direct mapping can be very challenging due to changes in illumination, (partial) occlusions, head motion and eye decorations. Due to these challenges, appearance-based gaze estimation methods required the introduction of large, diverse training datasets and typically leverage some form of convolutional neural network architecture. Early works in appearance-based methods were restricted to laboratory settings with fixed head pose [1,32]. These initial constraints have become progressively relaxed, notably by the introduction of new datasets collected in everyday settings [14,45] or in simulated environments [29,38,40]. The increasing scale and complexity of training data has given rise to a wide variety of learning-based methods including variations of linear regression [7,18,19], random forests [29], k-nearest neighbours [29,40], and CNNs [14,26,38,[45][46][47]. 
CNNs have proven to be more robust to visual appearance variations, and are capable of person-independent gaze estimation when provided with sufficient scale and diversity of training data. Person-independent gaze estimation can be performed without a user calibration step, and can directly be applied to areas such as visual attention analysis on unmodified devices [22], interaction on public displays [48], and identification of gaze targets [44], albeit at the cost of an increased need for training data and computational cost. Several CNN architectures have been proposed for person-independent gaze estimation in unconstrained settings, differing mostly in terms of possible input data modalities. Zhang et al. [45,46] adapt the LeNet-5 and VGG-16 architectures such that head pose angles (pitch and yaw) are concatenated to the first fully-connected layers. Despite its simplicity, this approach yields the current best gaze estimation error of 5.5° when evaluating the within-dataset cross-person case on MPIIGaze with single eye image and head pose input. In [14] separate convolutional streams are used for left/right eye images, a face image, and a 25 × 25 grid indicating the location and scale of the detected face in the image frame. Their experiments demonstrate that this approach yields improvements compared to [45]. In [46] a single face image is used as input and so-called spatial weights are learned. These emphasize important features based on the input image, yielding considerable improvements in gaze estimation accuracy.

We introduce a novel pictorial representation of eye gaze and incorporate it into a deep neural network architecture via intermediate supervision. To the best of our knowledge, we are the first to apply a fully convolutional architecture to the task of appearance-based gaze estimation. We show that together these contributions lead to a significant performance improvement of 18%, even when using a single eye image as sole input.

Deep Learning with Auxiliary Supervision

It has been shown [16,31] that applying a loss function on intermediate outputs of a network can yield better performance on different tasks. This technique was introduced to address the vanishing gradients problem during the training of deeper networks. In addition, such intermediate supervision allows the network to quickly learn an estimate of the final output and then learn to refine the predicted features, simplifying the mappings which need to be learned at every layer. Subsequent works have adopted intermediate supervision [21,37] to good effect for human pose estimation, by replicating the final output loss. Another technique for improving neural network performance is the use of auxiliary data through multi-task learning. In [24,49], the architectures are formed of a single shared convolutional stream which is split into separate fully-connected layers or regression functions for the auxiliary tasks of gender classification, face visibility, and head pose. Both works show marked improvements over state-of-the-art results in facial landmark localization. In these approaches, the introduction of multiple learning objectives forces an implicit prior on the network, which must learn a representation that is informative to all tasks. In contrast, we explicitly introduce a gaze-specific prior into the network architecture via gazemaps.
Most similar to our contribution is the work in [9], where facial landmark localization performance is improved by applying an auxiliary emotion classification loss. A key aspect to note is that their network is sequential, that is, the emotion recognition network takes only facial landmarks as input. The detected facial landmarks thus act as a manually defined representation for emotion classification, and create a bottleneck in the full data flow. It is shown experimentally that applying such an auxiliary loss (for a different task) yields improvement over state-of-the-art results on the AFLW dataset. In our work, we learn to regress an intermediate and minimal representation for gaze direction, forming a bottleneck before the main task of regressing two angle values. Thus, an important distinction to [9] is that while we employ an auxiliary loss term, it directly contributes to the task of gaze direction estimation. Furthermore, the auxiliary loss is applied as an intermediate task. We detail this further in Sec. 3.1.

Recent work in multi-person human pose estimation [3] learns to estimate joint location heatmaps alongside so-called "part affinity fields". When combined, the two outputs enable the detection of multiple people's joints with reduced ambiguity in terms of which person a joint belongs to. In addition, at the end of every image scale, the architecture concatenates feature maps from each separate stream such that information can flow between the "part confidence" and "part affinity" maps. Thus, they operate on the image representation space, taking advantage of the strengths of convolutional neural networks. Our work is similar in spirit in that it introduces a novel image-based representation.

Method

A key contribution of our work is a pictorial representation of 3D gaze direction, which we call gazemaps. This representation is formed of two boolean maps, which can be regressed by a fully convolutional neural network. In this section, we describe our representation (Sec. 3.1) and then explain how we constructed our architecture to use the representation as a reference for intermediate supervision during training of the network (Sec. 3.2).

Pictorial Representation of 3D Gaze

In the task of appearance-based gaze estimation, an input eye image is processed to yield gaze direction in 3D. This direction is often represented as a 3-element unit vector v [6,26,46], or as two angles representing eyeball pitch and yaw, g = (θ, φ) [29,38,45,47]. In this section, we propose an alternative to previous direct mappings to v or g. If we denote the input eye image as x and the regression target as g, a conventional gaze estimation model estimates f : x → g. The mapping f can be complex, as reflected by the improvements in accuracy that have been attained by the simple adoption of newer CNN architectures, ranging from LeNet-5 [26,45] and AlexNet [14,46] to VGG-16 [47], the current state-of-the-art CNN architecture for appearance-based gaze estimation. We hypothesize that it is possible to learn an intermediate image representation of the eye, m. That is, we define our model as g = k ∘ j(x), where j : x → m and k : m → g. It is conceivable that the complexity of learning j and k should be significantly lower than that of directly learning f, allowing neural network architectures with significantly lower model complexity to be applied to the same task of gaze estimation with higher or equivalent performance. Thus, we propose to estimate so-called gazemaps (m) and from them the 3D gaze direction (g).
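To make the two gaze representations and the evaluation metric used later concrete, the following is a minimal NumPy sketch of converting pitch/yaw angles to a unit gaze vector and of the angular error between two gaze directions. The sign and axis conventions are assumptions and may differ from the authors' implementation.

```python
import numpy as np

def pitchyaw_to_vector(g):
    """Convert gaze angles g = (theta, phi) = (pitch, yaw) to a 3D unit vector.
    Axis convention assumed here: x right, y down, z into the scene."""
    theta, phi = g
    return np.array([
        -np.cos(theta) * np.sin(phi),   # horizontal component
        -np.sin(theta),                 # vertical component
        -np.cos(theta) * np.cos(phi),   # depth component
    ])

def angular_error(g_true, g_pred):
    """Angular error in degrees between two gaze directions given as (pitch, yaw)."""
    v1 = pitchyaw_to_vector(g_true)
    v2 = pitchyaw_to_vector(g_pred)
    cos_sim = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

# Example: a prediction off by 0.05 rad in yaw is roughly 2.9 degrees of error.
print(angular_error((0.1, 0.2), (0.1, 0.25)))
```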
We reformulate the task of gaze estimation into two concrete tasks: (a) reduction of the input image to a minimal normalized form (gazemaps), and (b) gaze estimation from gazemaps. The gazemaps for a given input eye image should be visually similar to the input yet distill only the information necessary for gaze estimation, to ensure that the mapping k : m → g is simple. To do this, we consider that an average human eyeball has a diameter of ≈ 24mm [2] while an average human iris has a diameter of ≈ 12mm [5]. We then assume a simple model of the human eyeball and iris, where the eyeball is a perfect sphere and the iris is a perfect circle. For an output image of dimensions m × n, we assume a projected eyeball diameter of 2r = 1.2n and calculate the iris centre coordinates (u_i, v_i) to be:

$u_i = m/2 - r' \sin\phi \cos\theta$   (1)
$v_i = n/2 - r' \sin\theta$   (2)

where $r' = r \cos\left(\sin^{-1}\tfrac{1}{2}\right)$ and the gaze direction is $g = (\theta, \phi)$. The iris is drawn as an ellipse with major-axis diameter $r$ and minor-axis diameter $r\,|\cos\theta \cos\phi|$. Examples of our gazemaps are shown in Fig. 2b, where two separate boolean maps are produced for one gaze direction g.

Learning to predict gazemaps from a single eye image alone is not a trivial task. Not only do extraneous factors such as image artifacts and partial occlusion need to be accounted for, a simplified eyeball must also be fitted to the given image based on iris and eyelid appearance. The detected regions must then be scaled and centered to produce the gazemaps. Thus the mapping j : x → m requires a more complex neural network architecture than the mapping k : m → g.

Neural Network Architecture

Our neural network consists of two parts: (a) regression from eye image to gazemap, and (b) regression from gazemap to gaze direction g. While any CNN architecture can be implemented for (b), regressing (a) requires a fully convolutional architecture such as those used in human pose estimation. We adapt the stacked hourglass architecture from Newell et al. [21] for this task. The hourglass architecture has proven to be effective in tasks such as human pose estimation and facial landmark detection [43], where complex spatial relations need to be modeled at various scales to estimate the locations of occluded joints or key points. The architecture performs repeated multi-scale refinement of feature maps, from which the desired output confidence maps can be extracted via 1 × 1 convolution layers. We exploit this fact to have our network predict gazemaps instead of classical confidence or heatmaps of joint positions. In Sec. 5, we demonstrate that this works well in practice. In our gazemap-regression network, we use 3 hourglass modules with intermediate supervision applied on the gazemap outputs of the last module only. The minimized intermediate loss is:

$\mathcal{L}_{\text{gazemap}} = -\alpha \sum_{p \in P} m(p) \log \hat{m}(p)$   (3)

where we compute a cross-entropy between the predicted gazemap $\hat{m}$ and the ground-truth gazemap $m$ over the pixels $p$ in the set of all pixels $P$. In our evaluations, we set the weight coefficient $\alpha$ to $10^{-5}$. For the regression to g, we select DenseNet, which has recently been shown to perform well on image classification tasks [10] while using fewer parameters compared to previous architectures such as ResNet [8]. The loss term for gaze direction regression (per input) is:

$\mathcal{L}_{\text{gaze}} = \lVert g - \hat{g} \rVert_2^2$   (4)

where $\hat{g}$ is the gaze direction predicted by our neural network.

Implementation

In this section, we describe the fully convolutional (Hourglass) and regressive (DenseNet) parts of our architecture in more detail.
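Before describing the two parts in more detail, note that the ground-truth gazemaps defined by Eqs. (1)-(2) can be rendered with a few lines of code. The following NumPy/OpenCV sketch does so under stated assumptions: m is taken as the image width and n as the height, and the iris ellipse is drawn axis-aligned, since the text specifies only its axis lengths.

```python
import numpy as np
import cv2

def make_gazemaps(theta, phi, height=90, width=150):
    """Render the two boolean gazemaps (eyeball, iris) for gaze g = (theta, phi).

    Follows Eqs. (1)-(2): eyeball diameter 2r = 1.2 * n and iris centre (u_i, v_i),
    where m is interpreted as the image width and n as the height (an assumption).
    """
    r = 0.6 * height                       # eyeball radius, from 2r = 1.2 * n
    r_prime = r * np.cos(np.arcsin(0.5))   # r' = r * cos(arcsin(1/2))

    # Iris centre, Eqs. (1)-(2); u runs along the width, v along the height.
    u_i = width / 2.0 - r_prime * np.sin(phi) * np.cos(theta)
    v_i = height / 2.0 - r_prime * np.sin(theta)

    # Map 1: projected eyeball as a filled circle at the image centre.
    eyeball = np.zeros((height, width), dtype=np.uint8)
    cv2.circle(eyeball, (width // 2, height // 2), int(round(r)), 255, thickness=-1)

    # Map 2: iris as a filled ellipse with major-axis diameter r and
    # minor-axis diameter r * |cos(theta) * cos(phi)|. The ellipse is kept
    # axis-aligned here; the text specifies only the axis lengths.
    iris = np.zeros((height, width), dtype=np.uint8)
    semi_major = r / 2.0
    semi_minor = semi_major * abs(np.cos(theta) * np.cos(phi))
    cv2.ellipse(iris, (int(round(u_i)), int(round(v_i))),
                (int(round(semi_major)), int(round(semi_minor))),
                0, 0, 360, 255, thickness=-1)

    return eyeball > 0, iris > 0   # the two boolean maps

# Example: gaze slightly upwards and to the side.
eyeball_map, iris_map = make_gazemaps(theta=0.2, phi=-0.3)
```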
Hourglass Network

In our implementation of the Stacked Hourglass Network [21], we provide images of size 150×90 as input, and refine 64 feature maps of size 75×45 throughout the network. The half-scale feature maps are produced by an initial convolutional layer with filter size 7 and stride 2, as done in the original paper [21]. This is followed by batch normalization, ReLU activation, and two residual modules before being passed as input to the first hourglass module. There are 3 hourglass modules in our architecture, as visualized in Figure 1. In human pose estimation, the commonly used outputs are 2-dimensional confidence maps, which are pixel-aligned to the input image. Our task differs, and thus we do not apply intermediate supervision to the output of every hourglass module. This allows the input image to be processed at multiple scales over many layers, with the necessary features becoming aligned to the final output gazemap representation. Instead, we apply 1 × 1 convolutions to the output of the last hourglass module, and apply the gazemap loss term (Eq. 3).

DenseNet

As described in Section 3.1, our pictorial representation allows a simpler function to be learnt for the actual task of gaze estimation. To demonstrate this, we employ a very lightweight DenseNet architecture [10]. Our gaze regression network consists of 5 dense blocks (5 layers per block) with a growth rate of 8, bottleneck layers, and a compression factor of 0.5. This results in just 62 feature maps at the end of the DenseNet, and subsequently 62 features through global average pooling. Finally, a single linear layer maps these features to g. The resulting network is lightweight and consists of just 66k trainable parameters.

Training Details

We train our neural network with a batch size of 32, a learning rate of 0.0002, and an L2 weight regularization coefficient of $10^{-4}$. The optimization method used is Adam [13]. Training occurs for 20 epochs on a desktop PC with an Intel Core i7 CPU and an Nvidia Titan Xp GPU, taking just over 2 hours for one fold (out of 15) of a leave-one-person-out evaluation on the MPIIGaze dataset. During training, slight data augmentation is applied in terms of image translation and scaling, and the learning rate is multiplied by 0.1 after every 5k gradient update steps, to address over-fitting and to stabilize the final error.

Evaluations

We perform our evaluations primarily on the MPIIGaze dataset, which consists of images taken of 15 laptop users in everyday settings. The dataset has been used as the standard benchmark for unconstrained appearance-based gaze estimation in recent years [26,38,40,45-47]. Our focus is on cross-person single-eye evaluations, where 15 models are trained per configuration or architecture in a leave-one-person-out fashion. That is, a neural network is trained on 14 people's data (1500 entries each from left and right eyes), then tested on the test set of the left-out person (1000 entries). The mean over 15 such evaluations is used as the final error metric representing cross-person performance. As MPIIGaze is a dataset which represents real-world settings well, cross-person evaluations on it are indicative of the real-world person-independence of a given model. To further test the generalization capabilities of our method, we also perform evaluations on two additional datasets in this section: Columbia [28] and EYEDIAP [7], on which we perform 5-fold cross validation.
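Before moving on to the results, the training setup just described (the two loss terms of Eqs. (3)-(4), Adam, and the step-wise learning-rate decay) can be sketched as follows. PyTorch is assumed; the stand-in modules `gazemap_net` and `gaze_head` are placeholders for the hourglass stack and the DenseNet head rather than the authors' code, and a standard binary cross-entropy is used as a stand-in for the one-sided cross-entropy of Eq. (3).

```python
import torch
import torch.nn.functional as F

# Stand-ins for the two sub-networks: a fully convolutional gazemap regressor
# (3 stacked hourglass modules in the paper) and a lightweight DenseNet head.
gazemap_net = torch.nn.Conv2d(1, 2, kernel_size=3, padding=1)
gaze_head = torch.nn.Sequential(
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(2, 2))

params = list(gazemap_net.parameters()) + list(gaze_head.parameters())
optimizer = torch.optim.Adam(params, lr=2e-4, weight_decay=1e-4)   # L2 coeff. 1e-4
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5000, gamma=0.1)

alpha = 1e-5   # weight of the gazemap term (Eq. 3)

def train_step(eye_images, gazemaps_gt, gaze_gt):
    """One gradient update combining the intermediate gazemap loss and the gaze loss."""
    optimizer.zero_grad()
    gazemap_logits = gazemap_net(eye_images)               # (B, 2, H, W)
    gaze_pred = gaze_head(torch.sigmoid(gazemap_logits))   # (B, 2) pitch/yaw

    # Eq. (3)-style term: binary cross-entropy between predicted and ground-truth
    # gazemaps (a standard stand-in for the one-sided cross-entropy in the text).
    loss_gazemap = alpha * F.binary_cross_entropy_with_logits(
        gazemap_logits, gazemaps_gt, reduction='sum')
    # Eq. (4)-style term: mean squared error on the pitch/yaw angles.
    loss_gaze = F.mse_loss(gaze_pred, gaze_gt)

    loss = loss_gaze + loss_gazemap
    loss.backward()
    optimizer.step()
    scheduler.step()   # learning rate multiplied by 0.1 every 5k steps
    return float(loss)

# Example with random tensors standing in for a training batch:
imgs = torch.randn(32, 1, 90, 150)
maps = torch.randint(0, 2, (32, 2, 90, 150)).float()
gaze = torch.randn(32, 2)
print(train_step(imgs, maps, gaze))
```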
While Columbia displays large diversity between its 55 participants, the images are of high quality, having been taken with a DSLR. EYEDIAP, on the other hand, suffers from the low resolution of the VGA camera used, as well as the large distance between camera and participant. We select screen target (CS/DS) and static head pose sequences (S) from the EYEDIAP dataset, sampling every 15 seconds from its VGA video streams (V). Training on moving head sequences (M) with just single eye input proved infeasible, with all models experiencing diverging test error during training. Performance improvements on MPIIGaze, Columbia, and EYEDIAP would indicate that our model is robust to cross-person appearance variations and to the challenges caused by low eye image resolution and quality.

In this section, we first evaluate the effect of our gazemap loss (Sec. 5.1), then compare the performance (Sec. 5.2) and robustness (Sec. 5.3) of our approach against state-of-the-art architectures. We postulated in Sec. 3.1 that by providing a pictorial representation of 3D gaze direction that is visually similar to the input image, we could achieve improvements in appearance-based gaze estimation. In our experiments we find that applying the gazemap loss term generally offers performance improvements compared to the case where the loss term is not applied. This improvement is particularly pronounced when the DenseNet growth rate is high (e.g. k = 32), as shown in Table 1.

Pictorial Representation (Gazemaps)

By observing the output of the last hourglass module and comparing it against the input images (Figure 4), we can confirm that even without intermediate supervision, our network learns to isolate the iris region, yielding a similar image representation of gaze direction across participants. Note that this representation is learned only with the final gaze direction loss, L_gaze, and that the blobs representing iris locations are not necessarily aligned with the actual iris locations in the input images. Without intermediate supervision, the learned minimal image representation may incorporate visual factors such as occlusion due to hair and eyeglasses, as shown in Figure 4a. This supports our hypothesis that an intermediate representation consisting of an iris and eyeball contains the information required to regress gaze direction. However, due to the nature of learning, the network may also pick up irrelevant details such as the edges of glasses. By explicitly providing an intermediate representation in the form of gazemaps, we enforce a prior that helps the network learn the desired representation without incorporating these unhelpful details.

Cross-Person Gaze Estimation

We compare the cross-person performance of our model by conducting a leave-one-person-out evaluation on MPIIGaze and 5-fold evaluations on Columbia and EYEDIAP. In Section 3.1 we discussed that the mapping k from gazemap to gaze direction should not require a complex architecture to model. Thus, our DenseNet is configured with a low growth rate (k = 8). To allow a fair comparison, we re-implement 2 architectures for single-eye image inputs (of size 150 × 90): AlexNet and VGG-16. The AlexNet and VGG-16 architectures have been used in recent works in appearance-based gaze estimation and are thus suitable baselines [46,47].

Table 2. Mean gaze estimation error in degrees for within-dataset cross-person k-fold evaluation. Evaluated on (a) MPIIGaze, (b) Columbia, and (c) EYEDIAP datasets.
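The cross-person numbers reported in Table 2 are means over per-person folds. The following is a minimal sketch of the leave-one-person-out protocol described above, with `train_and_eval` standing in for training and evaluating one model (its signature is an assumption, not the authors' code).

```python
import numpy as np

def leave_one_person_out(train_and_eval, data_by_person):
    """Cross-person evaluation: train on all but one person, test on the left-out
    person, and average the mean angular errors over the folds (15 on MPIIGaze).

    `train_and_eval(train_data, test_data)` is a placeholder that trains a fresh
    model and returns its mean angular error in degrees on `test_data`;
    `data_by_person` maps a person id to that person's samples.
    """
    fold_errors = []
    for test_person in sorted(data_by_person):
        train_data = [s for person, samples in data_by_person.items()
                      if person != test_person for s in samples]
        test_data = data_by_person[test_person]
        fold_errors.append(train_and_eval(train_data, test_data))
    return float(np.mean(fold_errors)), fold_errors

# Example with a dummy evaluation function returning a constant error:
dummy_data = {f"p{i:02d}": [None] * 10 for i in range(15)}
mean_error, _ = leave_one_person_out(lambda tr, te: 4.5, dummy_data)
print(mean_error)   # 4.5
```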
Implementation and training procedure details of these architectures are provided in the supplementary materials. In the MPIIGaze evaluations (Table 2a), our proposed approach outperforms the current state-of-the-art approach by a large margin, yielding an improvement of 1.0° (5.5° → 4.5°, i.e. 18.2%). This significant improvement comes despite the greatly reduced number of trainable parameters in our architecture (0.7M vs. 90M for the state of the art). Our performance compares favorably to that reported in [46] (4.8°), where full-face input is used in contrast to our single-eye input. While our results cannot directly be compared with those of [46] due to the different definition of gaze direction (face-centred as opposed to eye-centred), the similar performance suggests that eye images may be sufficient as input to the task of gaze direction estimation. Our approach attains performance comparable to models taking face input, and uses considerably fewer parameters than recently introduced architectures (129x fewer than GazeNet). We additionally evaluate our model on the Columbia Gaze and EYEDIAP datasets in Table 2b and Table 2c, respectively. While high image quality results in all three methods performing comparably on Columbia Gaze, our approach still prevails with an improvement of 0.4° over AlexNet. On EYEDIAP, the mean error is very high due to the low resolution and low quality of the input. Note that no head pose estimation is performed, with only single eye input being relied on for gaze estimation. Our gazemap-based architecture shows its strengths in this case, performing 0.9° better than VGG-16, an 8% improvement. Sample gazemap and gaze direction predictions are shown in Figure 5, where it is evident that despite the lack of visual detail, it is possible to fit gazemaps and thereby improve gaze estimation error. By evaluating our architecture on 3 different datasets with different properties in the cross-person setting, we can conclude that our approach generalizes significantly better than previous approaches. Thus, we bring gaze estimation closer to direct real-world applications.

Robustness Analysis

In order to shed more light on our models' performance, we perform an additional robustness analysis. More concretely, we aim to analyze how our approach performs under difficult and challenging situations, such as extreme head pose and gaze direction. To do so, we evaluate a moving average on the output of our within-MPIIGaze evaluations, where the y-values correspond to the mean angular error and the x-values take one of the following factors of variation: head pose (pitch & yaw) or gaze direction (pitch & yaw). Additionally, we also consider image quality (contrast & sharpness) as a qualitative factor. In order to isolate each factor of variation from the rest, we evaluate the moving average only on the points whose remaining factors are close to their median values. Intuitively, this corresponds to data points where the person moves only in one specific direction, while staying at rest in all of the remaining directions. This is not the case for the image quality analysis, where all data points are used. Figure 6 plots the mean angular error as a function of different movement variations and image qualities. The top row corresponds to variation along head pose, the middle to variation along gaze direction, and the bottom to varying image quality.
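The factor-isolation procedure described above can be summarized in a few lines of NumPy; the moving-average window size and the "close to median" tolerance are not given in the text and are therefore placeholders.

```python
import numpy as np

def robustness_curve(errors, factors, target, window=50, tol=0.15):
    """Moving average of angular error against one factor of variation.

    errors:  (N,) per-sample angular errors in degrees
    factors: dict name -> (N,) array (head pitch/yaw, gaze pitch/yaw, ...)
    target:  factor to vary; samples are kept only if every *other* factor lies
             within `tol` of its median, isolating the target factor.
    """
    keep = np.ones(len(errors), dtype=bool)
    for name, values in factors.items():
        if name != target:
            keep &= np.abs(values - np.median(values)) < tol

    x, y = factors[target][keep], errors[keep]
    order = np.argsort(x)
    x, y = x[order], y[order]

    kernel = np.ones(window) / window                  # simple moving average
    y_smooth = np.convolve(y, kernel, mode='valid')
    x_smooth = x[window // 2 : window // 2 + len(y_smooth)]
    return x_smooth, y_smooth

# Example with synthetic data: error grows with absolute gaze yaw.
rng = np.random.default_rng(0)
n = 10000
factors = {k: rng.normal(0.0, 0.3, n)
           for k in ["head_pitch", "head_yaw", "gaze_pitch", "gaze_yaw"]}
errors = 4.5 + 2.0 * np.abs(factors["gaze_yaw"]) + rng.normal(0.0, 0.5, n)
x, y = robustness_curve(errors, factors, target="gaze_yaw")
```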
To calculate the image contrast we use the RMS contrast metric, whereas to compute the sharpness we employ a Laplacian-based formula as outlined in [23]. Both metrics are explained in the supplementary materials. The figure shows that we consistently outperform competing architectures for extreme head and gaze angles. Notably, we show more consistent performance in particular over large ranges of head pitch and gaze yaw angles. In addition, we surpass prior works on images of varying quality, as shown in Figures 6e and 6f.

Conclusion

Our work is a first attempt at proposing an explicit prior designed for the task of gaze estimation with a neural network architecture. We do so by introducing a novel pictorial representation which we call gazemaps. An accompanying architecture and training scheme using intermediate supervision naturally arises as a consequence, with a fully convolutional architecture being employed for the first time for appearance-based eye gaze estimation. Our gazemaps are anatomically inspired, and are experimentally shown to outperform approaches which use significantly more model parameters and, at times, more input modalities. We report improvements of up to 18% on MPIIGaze along with improvements on two additional datasets against competitive baselines. In addition, we demonstrate that our final model is more robust to various factors such as extreme head poses and gaze directions, as well as poor image quality, compared to prior work. Future work can look into alternative pictorial representations for gaze estimation, and alternative architectures for gazemap prediction. Additionally, there is potential in using synthesized gaze directions (and corresponding gazemaps) for unsupervised training of the gaze regression function, to further improve performance.

A Baseline Architectures

The state-of-the-art CNN architecture for appearance-based gaze estimation is based on a lightly modified VGG-16 architecture [47], with a mean cross-person gaze estimation error of 5.5° on the MPIIGaze dataset [45]. We compare against a standard VGG-16 architecture [27] and an AlexNet architecture [15], which has been the standard architecture for gaze estimation in many works [14,46]. The specific architectures used as baselines are described in Table 3. Both models are trained with a batch size of 32, a learning rate of $5 \times 10^{-5}$ and an L2 weight regularization coefficient of $10^{-4}$, using the Adam optimizer [13]. The learning rate is multiplied by 0.1 every 5,000 training steps, and slight data augmentation is performed in image translation and scale.

B Image metrics

In this section we describe the image metrics used for the robustness plots concerning image quality (Figures 6e and 6f in the paper).

B.1 Image contrast

The root mean contrast is defined as the standard deviation of the pixel intensities:

$\mathrm{RMC} = \sqrt{\frac{1}{MN} \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} \left(I_{ij} - \bar{I}\right)^2}$

where $I_{ij}$ is the value of the image $I \in \mathbb{R}^{M \times N}$ at location $(i, j)$ and $\bar{I}$ is the average intensity of all pixel values in the image.

B.2 Image sharpness

To obtain a sharpness metric, we calculate the variance of the image $I$ after convolving it with a Laplacian, similar to [23]. This corresponds to an approximation of the second derivative, computed with a discrete Laplacian mask. A short code sketch of both metrics follows the table caption below.

Table 3. Configuration of CNNs used as baseline for gaze estimation. The style of [27] is followed where possible.
s represents stride length, p dropout probability, and conv9-96 represents a convolutional layer with kernel size 9 and 96 output feature maps. maxpool3 represents a max-pooling layer with kernel size 3.
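Both image-quality metrics are straightforward to reproduce; the sketch below uses NumPy and OpenCV, where the discrete Laplacian kernel applied by cv2.Laplacian is the library default rather than necessarily the exact mask used in [23].

```python
import numpy as np
import cv2

def rms_contrast(gray):
    """Root mean contrast: standard deviation of the pixel intensities."""
    gray = gray.astype(np.float64)
    return float(np.sqrt(np.mean((gray - gray.mean()) ** 2)))  # equals gray.std()

def laplacian_sharpness(gray):
    """Sharpness as the variance of the Laplacian-filtered image (cf. [23])."""
    lap = cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)
    return float(lap.var())

# Example on a synthetic image: the blurred copy scores lower on both metrics.
img = np.zeros((90, 150), dtype=np.uint8)
cv2.circle(img, (75, 45), 30, 255, -1)
blurred = cv2.GaussianBlur(img, (11, 11), 5)
print(rms_contrast(img), laplacian_sharpness(img))
print(rms_contrast(blurred), laplacian_sharpness(blurred))
```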
4,914
1807.10002
2884915206
Estimating human gaze from natural eye images only is a challenging task. Gaze direction can be defined by the pupil center and the eyeball center, where the latter is unobservable in 2D images. Hence, achieving highly accurate gaze estimates is an ill-posed problem. In this paper, we introduce a novel deep neural network architecture specifically designed for the task of gaze estimation from single eye input. Instead of directly regressing two angles for the pitch and yaw of the eyeball, we regress to an intermediate pictorial representation which in turn simplifies the task of 3D gaze direction estimation. Our quantitative and qualitative results show that our approach achieves higher accuracies than the state-of-the-art and is robust to variation in gaze, head pose and image quality.
Most similar to our contribution is the work in @cite_24 where facial landmark localization performance is improved by applying an auxiliary emotion classification loss. A key aspect to note is that their network is sequential, that is, the emotion recognition network takes only facial landmarks as input. The detected facial landmarks thus act as a manually defined representation for emotion classification, and create a bottleneck in the full data flow. It is shown experimentally that applying such an auxiliary loss (for a different task) yields improvement over state-of-the-art results on the AFLW dataset. In our work, we learn to regress an intermediate and minimal representation for gaze direction, forming a bottleneck before the main task of regressing two angle values. Thus, an important distinction to @cite_24 is that while we employ an auxiliary loss term, it directly contributes to the task of gaze direction estimation. Furthermore, the auxiliary loss is applied as an intermediate task. We detail this further in Sec. .
{ "abstract": [ "We present two techniques to improve landmark localization in images from partially annotated datasets. Our primary goal is to leverage the common situation where precise landmark locations are only provided for a small data subset, but where class labels for classification or regression tasks related to the landmarks are more abundantly available. First, we propose the framework of sequential multitasking and explore it here through an architecture for landmark localization where training with class labels acts as an auxiliary signal to guide the landmark localization on unlabeled data. A key aspect of our approach is that errors can be backpropagated through a complete landmark localization model. Second, we propose and explore an unsupervised learning technique for landmark localization based on having a model predict equivariant landmarks with respect to transformations applied to the image. We show that these techniques, improve landmark prediction considerably and can learn effective detectors even when only a small fraction of the dataset has landmark labels. We present results on two toy datasets and four real datasets, with hands and faces, and report new state-of-the-art on two datasets in the wild, e.g. with only 5 of labeled images we outperform previous state-of-the-art trained on the AFLW dataset." ], "cite_N": [ "@cite_24" ], "mid": [ "2753578462" ] }
Deep Pictorial Gaze Estimation
4,914
1807.10002
2884915206
Estimating human gaze from natural eye images alone is a challenging task. Gaze direction is defined by the pupil center and the eyeball center, where the latter is unobservable in 2D images. Hence, achieving highly accurate gaze estimates is an ill-posed problem. In this paper, we introduce a novel deep neural network architecture specifically designed for the task of gaze estimation from single eye input. Instead of directly regressing two angles for the pitch and yaw of the eyeball, we regress to an intermediate pictorial representation which in turn simplifies the task of 3D gaze direction estimation. Our quantitative and qualitative results show that our approach achieves higher accuracy than the state of the art and is robust to variation in gaze, head pose and image quality.
Recent work in multi-person human pose estimation @cite_17 learns to estimate joint location heatmaps alongside so-called "part affinity fields". When combined, the two outputs enable the detection of multiple people's joints with reduced ambiguity in terms of which person a joint belongs to. In addition, at the end of every image scale, the architecture concatenates feature maps from each separate stream so that information can flow between the "part confidence" and "part affinity" maps. The architecture thus operates in the image representation space, taking advantage of the strengths of convolutional neural networks. Our work is similar in spirit in that it introduces a novel image-based representation.
{ "abstract": [ "We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency." ], "cite_N": [ "@cite_17" ], "mid": [ "2951856387" ] }
Deep Pictorial Gaze Estimation
Accurately estimating human gaze direction has many applications in assistive technologies for users with motor disabilities [4], gaze-based human-computer interaction [20], visual attention analysis [17], consumer behavior research [36], AR, VR and more. Traditionally this has been done via specialized hardware, shining infrared illumination into the user's eyes and via specialized cameras, sometimes requiring use of a headrest. Recently deep learning based approaches have made first steps towards fully unconstrained gaze estimation under free head motion, in environments with uncontrolled illumination conditions, and using only a single commodity (and potentially low quality) camera. However, this remains a challenging task due to inter-subject variance in eye appearance, self-occlusions, and head pose and rotation variations. In consequence, current approaches attain accuracies in the order of 6 • only and are still far from the requirements of many application scenarios. While demonstrating the feasibility of purely image based gaze estimation and introducing large datasets, these learning-based approaches [14,45,46] have leveraged convolutional neural network (CNN) architectures, originally designed for the task of image classification, with minor modifications. For example, [45,47] simply append head pose orientation to the first fully connected layer of either LeNet-5 or VGG-16, while [14] proposes to merge multiple input modalities by replicating convolutional layers from AlexNet. In [46] the AlexNet architecture is modified to learn socalled spatial-weights to emphasize important activations by region when full face images are provided as input. Typically, the proposed architectures are only supervised via a mean-squared error loss on the gaze direction output, represented as either a 3-dimensional unit vector or pitch and yaw angles in radians. In this work we propose a network architecture that has been specifically designed with the task of gaze estimation in mind. An important insight is that regressing first to an abstract but gaze specific representation helps the network to more accurately predict the final output of 3D gaze direction. Furthermore, introducing this gaze representation also allows for intermediate supervision which we experimentally show to further improve accuracy. Our work is loosely inspired by recent progress in the field of human pose estimation. Here, earlier work directly regressed joint coordinates [34]. More recently the need for a more task specific form of supervision has led to the use of confidence maps or heatmaps, where the position of a joint is depicted as a 2-dimensional Gaussian [21,33,37]. This representation allows for a simpler mapping between input image and joint position, allows for intermediate supervision, and hence for deeper networks. However, applying this concept of heatmaps to regularize training is not directly applicable to the case of gaze estimation since the crucial eyeball center is not observable in 2D image data. We propose a conceptually similar representation for gaze estimation, called gazemaps. Such a gazemap is an abstract, pictorial representation of the eyeball, the iris and the pupil at it's center (see Figure 1). The simplest depiction of an eyeball's rotation can be made via a circle and an ellipse, the former representing the eyeball, and the latter the iris. The gaze direction is then defined by the vector connecting the larger circle's center and the ellipse. 
Thus 3D gaze direction can be (pictorially) represented in the form of an image, where a spherical eyeball and circular iris are projected onto the image plane, resulting in a circle and ellipse. Hence, changes in gaze direction result in changes in ellipse positioning (cf. Figure 2a). This pictorial representation can be easily generated from existing training data, given known gaze direction annotations. At inference time recovering gaze direction from such a pictorial representation is a much simpler task than regressing directly from raw pixel values. However, adapting the input image to fit our pictorial representation is non-trivial. For a given eye image, a circular eyeball and an ellipse must be fitted, then centered and rescaled to be in the expected shape. We experimentally observed that this task can be performed well using a fully convolutional architecture. Furthermore, we show that our approach outperforms prior work on the final task of gaze estimation significantly. Our main contribution consists of a novel architecture for appearance-based gaze estimation. At the core of the proposed architecture lies the pictorial representation of 3D gaze direction to which the network fits the raw input images and from which additional convolutional layers estimate the final gaze direction. In addition, we perform: (a) an in-depth analysis of the effect of intermediate supervision using our pictorial representation, (b) quantitative evaluation and comparison against state-of-the-art gaze estimation methods on three challenging datasets (MPIIGaze, EYEDIAP, Columbia) in the person independent setting, and a (c) detailed evaluation of the robustness of a model trained using our architecture in terms of gaze direction and head pose as well as image quality. Finally, we show that our method reduces gaze error by 18% compared to the state-of-the-art [47] on MPIIGaze. Appearance-based Gaze Estimation with CNNs Traditional approaches to image-based gaze estimation are typically categorized as feature-based or model-based. Feature-based approaches reduce an eye image down to a set of features based on hand-crafted rules [11,12,25,41] and then feed these features into simple, often linear machine learning models to regress the final gaze estimate. Model-based methods instead attempt to fit a known 3D model to the eye image [30,35,39,42] by minimizing a suitable energy. Appearance-based methods learn a direct mapping from raw eye images to gaze direction. Learning this direct mapping can be very challenging due to changes in illumination, (partial) occlusions, head motion and eye decorations. Due to these challenges, appearance-based gaze estimation methods required the introduction of large, diverse training datasets and typically leverage some form of convolutional neural network architecture. Early works in appearance-based methods were restricted to laboratory settings with fixed head pose [1,32]. These initial constraints have become progressively relaxed, notably by the introduction of new datasets collected in everyday settings [14,45] or in simulated environments [29,38,40]. The increasing scale and complexity of training data has given rise to a wide variety of learning-based methods including variations of linear regression [7,18,19], random forests [29], k-nearest neighbours [29,40], and CNNs [14,26,38,[45][46][47]. 
CNNs have proven to be more robust to visual appearance variations, and are capable of personindependent gaze estimation when provided with sufficient scale and diversity of training data. Person-independent gaze estimation can be performed without a user calibration step, and can directly be applied to areas such as visual attention analysis on unmodified devices [22], interaction on public displays [48], and identification of gaze targets [44], albeit at the cost of increased need for training data and computational cost. Several CNN architectures have been proposed for person-independent gaze estimation in unconstrained settings, mostly differing in terms of possible input data modalities. Zhang et al. [45,46] adapt the LeNet-5 and VGG-16 architectures such that head pose angles (pitch and yaw) are concatenated to the first fully-connected layers. Despite its simplicity this approach yields the current best gaze estimation error of 5.5 • when evaluating for the within-dataset crossperson case on MPIIGaze with single eye image and head pose input. In [14] separate convolutional streams are used for left/right eye images, a face image, and a 25 × 25 grid indicating the location and scale of the detected face in the image frame. Their experiments demonstrate that this approach yields improvements compared to [45]. In [46] a single face image is used as input and so-called spatial-weights are learned. These emphasize important features based on the input image, yielding considerable improvements in gaze estimation accuracy. We introduce a novel pictorial representation of eye gaze and incorporate this into a deep neural network architecture via intermediate supervision. To the best of our knowledge we are the first to apply fully convolutional architecture to the task of appearance-based gaze estimation. We show that together these contribution lead to a significant performance improvement of 18% even when using a single eye image as sole input. Deep Learning with Auxiliary Supervision It has been shown [16,31] that by applying a loss function on intermediate outputs of a network, better performance can be yielded in different tasks. This technique was introduced to address the vanishing gradients problem during the training of deeper networks. In addition, such intermediate supervision allows for the network to quickly learn an estimate for the final output then learn to refine the predicted features -simplifying the mappings which need to be learned at every layer. Subsequent works have adopted intermediate supervision [21,37] to good effect for human pose estimation, by replicating the final output loss. Another technique for improving neural network performance is the use of auxiliary data through multi-task learning. In [24,49], the architectures are formed of a single shared convolutional stream which is split into separate fullyconnected layers or regression functions for the auxiliary tasks of gender classification, face visibility, and head pose. Both works show marked improvements to state-of-the-art results in facial landmarks localization. In these approaches through the introduction of multiple learning objectives, an implicit prior is forced upon the network to learn a representation that is informative to both tasks. On the contrary, we explicitly introduce a gaze-specific prior into the network architecture via gazemaps. 
Most similar to our contribution is the work in [9] where facial landmark localization performance is improved by applying an auxiliary emotion classification loss. A key aspect to note is that their network is sequential, that is, the emotion recognition network takes only facial landmarks as input. The detected facial landmarks thus act as a manually defined representation for emotion classification, and creates a bottleneck in the full data flow. It is shown experimentally that applying such an auxiliary loss (for a different task) yields improvement over state-of-the-art results on the AFLW dataset. In our work, we learn to regress an intermediate and minimal representation for gaze direction, forming a bottleneck before the main task of regressing two angle values. Thus, an important distinction to [9] is that while we employ an auxiliary loss term, it directly contributes to the task of gaze direction estimation. Furthermore, the auxiliary loss is applied as an intermediate task. We detail this further in Sec. 3.1. Recent work in multi-person human pose estimation [3] learns to estimate joint location heatmaps alongside so-called "part affinity fields". When combined, the two outputs then enable the detection of multiple peoples' joints with reduced ambiguity in terms of which person a joint belongs to. In addition, at the end of every image scale, the architecture concatenates feature maps from each separate stream such that information can flow between the "part confidence" and "part affinity" maps. Thus, they operate on the image representation space, taking advantage of the strengths of convolutional neural networks. Our work is similar in spirit in that it introduces a novel image-based representation. Method A key contribution of our work is a pictorial representation of 3D gaze direction -which we call gazemaps. This representation is formed of two boolean maps, which can be regressed by a fully convolutional neural network. In this section, we describe our representation (Sec. 3.1) then explain how we constructed our architecture to use the representation as reference for intermediate supervision during training of the network (Sec. 3.2). Pictorial Representation of 3D Gaze In the task of appearance-based gaze estimation, an input eye image is processed to yield gaze direction in 3D. This direction is often represented as a 3-element unit vector v [6,26,46], or as two angles representing eyeball pitch and yaw g = (θ, φ) [29,38,45,47]. In this section, we propose an alternative to previous direct mappings to v or g. If we state the input eye images as x and regard regressing the values g, a conventional gaze estimation model estimates f : x → g. The mapping f can be complex, as reflected by the improvement in accuracies that have been attained by simple adoption of newer CNN architectures ranging from LeNet-5 [26,45], AlexNet [14,46], to VGG-16 [47], the current state-of-the-art CNN architecture for appearance-based gaze estimation. We hypothesize that it is possible to learn an intermediate image representation of the eye, m. That is, we define our model as g = k • j(x) where j : x → m and k : m → g. It is conceivable that the complexity of learning j and k should be significantly lower than directly learning f , allowing for neural network architectures with significantly lower model complexity to be applied to the same task of gaze estimation with higher or equivalent performance. Thus, we propose to estimate so-called gazemaps (m) and from that the 3D gaze direction (g). 
We reformulate the task of gaze estimation into two concrete tasks: (a) reduction of input image to minimal normalized form (gazemaps), and (b) gaze estimation from gazemaps. The gazemaps for a given input eye image should be visually similar to the input yet distill only the necessary information for gaze estimation to ensure that the mapping k : m → g is simple. To do this, we consider that an average human eyeball has a diameter of ≈ 24mm [2] while an average human iris has a diameter of ≈ 12mm [5]. We then assume a simple model of the human eyeball and iris, where the eyeball is a perfect sphere, and the iris is a perfect circle. For an output image dimension of m × n, we assume the projected eyeball diameter 2r = 1.2n and calculate the iris centre coordinates (u i , v i ) to be: u i = m 2 − r sin φ cos θ (1) v i = n 2 − r sin θ(2) where r = r cos sin −1 1 2 , and gaze direction g = (θ, φ). The iris is drawn as an ellipse with major-axis diameter of r and minor-axis diameter of r |cos θ cos φ|. Examples of our gazemaps are shown in Fig. 2b where two separate boolean maps are produced for one gaze direction g. Learning how to predict gazemaps only from a single eye image is not a trivial task. Not only do extraneous factors such as image artifacts and partial occlusion need to be accounted for, a simplified eyeball must be fit to the given image based on iris and eyelid appearance. The detected regions must then be scaled and centered to produce the gazemaps. Thus the mapping j : x → m requires a more complex neural network architecture than the mapping k : m → g. Neural Network Architecture Our neural network consists of two parts: (a) regression from eye image to gazemap, and (b) regression from gazemap to gaze direction g. While any CNN architecture can be implemented for (b), regressing (a) requires a fully convolutional architecture such as those used in human pose estimation. We adapt the stacked hourglass architecture from Newell et al. [21] for this task. The hourglass architecture has been proven to be effective in tasks such as human pose estimation and facial landmarks detection [43] where complex spatial relations need to be modeled at various scales to estimate the location of occluded joints or key points. The architecture performs repeated multi-scale refinement of feature maps, from which desired output confidence maps can be extracted via 1 × 1 convolution layers. We exploit this fact to have our network predict gazemaps instead of classical confidence or heatmaps for joint positions. In Sec. 5, we demonstrate that this works well in practice. In our gazemap-regression network, we use 3 hourglass modules with intermediate supervision applied on the gazemap outputs of the last module only. The minimized intermediate loss is: L gazemap = −α p∈P m(p) logm(p),(3) where we calculate a cross-entropy between predictedm and ground-truth gazemap m for pixels p in set of all pixels P. In our evaluations, we set the weight coefficient α to 10 −5 . For the regression to g, we select DenseNet which has recently been shown to perform well on image classification tasks [10] while using fewer parameters compared to previous architectures such as ResNet [8]. The loss term for gaze direction regression (per input) is: L gaze = ||g −ĝ|| 2 2 ,(4) whereg is the gaze direction predicted by our neural network. Implementation In this section, we describe the fully convolutional (Hourglass) and regressive (DenseNet) parts of our architecture in more detail. 
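Before the architecture details, the gazemap construction of Eqs. (1)-(2) can be made concrete with a short sketch. The output map size, the axis convention and the axis-aligned iris ellipse below are illustrative assumptions rather than details taken from the implementation.

import numpy as np

def render_gazemaps(theta, phi, m=90, n=150):
    """Sketch of the gazemap construction in Eqs. (1)-(2).

    theta, phi : eyeball pitch and yaw in radians.
    m, n       : output map height and width (illustrative values; the axis
                 convention and map size are assumptions).
    Returns two boolean maps: the projected eyeball and the iris.
    """
    u, v = np.mgrid[0:m, 0:n].astype(np.float64)  # u indexes rows, v columns

    r = 0.6 * n                          # projected eyeball radius, from 2r = 1.2n
    r_p = r * np.cos(np.arcsin(0.5))     # r' = r cos(sin^-1 1/2)

    # iris centre from Eqs. (1)-(2)
    u_i = m / 2.0 - r_p * np.sin(phi) * np.cos(theta)
    v_i = n / 2.0 - r_p * np.sin(theta)

    # map 1: the eyeball as a disc of radius r centred in the image
    eyeball = (u - m / 2.0) ** 2 + (v - n / 2.0) ** 2 <= r ** 2

    # map 2: the iris as an ellipse with major-axis diameter r' and
    # minor-axis diameter r'|cos(theta) cos(phi)| (orientation ignored here)
    a = r_p / 2.0
    b = max(r_p * abs(np.cos(theta) * np.cos(phi)) / 2.0, 1e-6)
    iris = ((u - u_i) / a) ** 2 + ((v - v_i) / b) ** 2 <= 1.0

    return np.stack([eyeball, iris])

During training, maps of this kind serve as the ground truth for the cross-entropy term of Eq. (3), while the final gaze output is supervised by the squared L2 loss of Eq. (4).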
Hourglass Network In our implementation of the Stacked Hourglass Network [21], we provide images of size 150×90 as input, and refine 64 feature maps of size 75×45 throughout the network. The half-scale feature maps are produced by an initial convolutional layer with filter size 7 and stride 2 as done in the original paper [21]. This is followed by batch normalization, ReLU activation, and two residual modules before being passed as input to the first hourglass module. There exist 3 hourglass modules in our architecture, as visualized in Figure 1. In human pose estimation, the commonly used outputs are 2-dimensional confidence maps, which are pixel-aligned to the input image. Our task differs, and thus we do not apply intermediate supervision to the output of every hourglass module. This is to allow for the input image to be processed at multiple scales over many layers, with the necessary features becoming aligned to the final output gazemap representation. Instead, we apply 1 × 1 convolutions to the output of the last hourglass module, and apply the gazemap loss term (Eq. 3). DenseNet As described in Section 3.1, our pictorial representation allows for a simpler function to be learnt for the actual task of gaze estimation. To demonstrate this, we employ a very lightweight DenseNet architecture [10]. Our gaze regression network consists of 5 dense blocks (5 layers per block) with a growth-rate of 8, bottleneck layers, and a compression factor of 0.5. This results in just 62 feature maps at the end of the DenseNet, and subsequently 62 features through global average pooling. Finally, a single linear layer maps these features to g. The resulting network is light-weight and consists of just 66k trainable parameters. Training Details We train our neural network with a batch size of 32, learning rate of 0.0002 and L 2 weights regularization coefficient of 10 −4 . The optimization method used is Adam [13]. Training occurs for 20 epochs on a desktop PC with an Intel Core i7 CPU and Nvidia Titan Xp GPU, taking just over 2 hours for one fold (out of 15) of a leave-one-person-out evaluation on the MPIIGaze dataset. During training, slight data augmentation is applied in terms of image translation and scaling, and learning rate is multiplied by 0.1 after every 5k gradient update steps, to address over-fitting and to stabilize the final error. Evaluations We perform our evaluations primarily on the MPIIGaze dataset, which consists of images taken of 15 laptop users in everyday settings. The dataset has been used as the standard benchmark dataset for unconstrained appearance-based gaze estimation in recent years [26,38,40,[45][46][47]. Our focus is on cross-person singleeye evaluations where 15 models are trained per configuration or architecture in a leave-one-person-out fashion. That is, a neural network is trained on 14 peoples' data (1500 entries each from left and right eyes), then tested on the test set of the left-out person (1000 entries). The mean over 15 such evaluations is used as the final error metric representing cross-person performance. As MPIIGaze is a dataset which well represents real-world settings, cross-person evaluations on the dataset is indicative of the real-world person-independence of a given model. To further test the generalization capabilities of our method, we also perform evaluations on two additional datasets in this section: Columbia [28] and EYE-DIAP [7], where we perform 5-fold cross validation. 
While Columbia displays large diversity between its 55 participants, the images are of high quality, having been taken using a DSLR. EYEDIAP on the other hand suffers from the low resolution of the VGA camera used, as well as large distance between camera and participant. We select screen target (CS/DS) and static head pose sequences (S) from the EYEDIAP dataset, sampling every 15 seconds from its VGA video streams (V). Training on moving head sequences (M) with just single eye input proved infeasible, with all models experiencing diverging test error during train-ing. Performance improvements on MPIIGaze, Columbia, and EYEDIAP would indicate that our model is robust to cross-person appearance variations and the challenges caused by low eye image resolution and quality. In this section, we first evaluate the effect of our gazemap loss (Sec. 5.1), then compare the performance (Sec. 5.2) and robustness (Sec. 5.3) of our approach against state-of-the-art architectures. We postulated in Sec. 3.1 that by providing a pictorial representation of 3D gaze direction that is visually similar to the input image, we could achieve improvements in appearancebased gaze estimation. In our experiments we find that applying the gazemaps loss term generally offers performance improvements compared to the case where the loss term is not applied. This improvement is particularly emphasized when DenseNet growth rate is high (eg. k = 32), as shown in Table 1. Pictorial Representation (Gazemaps) By observing the output of the last hourglass module and comparing against the input images (Figure 4), we can confirm that even without intermediate supervision, our network learns to isolate the iris region, yielding a similar image representation of gaze direction across participants. Note that this representation is learned only with the final gaze direction loss, L gaze , and that blobs representing iris locations are not necessarily aligned with actual iris locations on the input images. Without intermediate supervision, the learned minimal image representation may incorporate visual factors such as occlusion due to hair and eyeglases, as shown in Figure 4a. This supports our hypothesis that an intermediate representation consisting of an iris and eyeball contains the required information to regress gaze direction. However, due to the nature of learning, the network may also learn irrelevant details such as the edges of the glasses. Yet, by explicitly providing an intermediate representation in the form of gazemaps, we enforce a prior that helps the network learn the desired representation, without incorporating the previously mentioned unhelpful details. Cross-Person Gaze Estimation We compare the cross-person performance of our model by conducting a leaveone-person-out evaluation on MPIIGaze and 5-fold evaluations on Columbia and EYEDIAP. In Section 3.1 we discussed that the mapping k from gazemap to gaze direction should not require a complex architecture to model. Thus, our DenseNet is configured with a low growth rate (k = 8). To allow fair comparison, we re-implement 2 architectures for single-eye image inputs (of size 150 × 90): AlexNet and VGG-16. The AlexNet and VGG-16 architectures have been used in Table 2. Mean gaze estimation error in degrees for within-dataset cross-person k-fold evaluation. Evaluated on (a) MPIIGaze, (b) Columbia, and (c) EYEDIAP datasets. (a) MPIIGaze (15- recent works in appearance-based gaze estimation and are thus suitable baselines [46,47]. 
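In outline, the cross-person protocol and error metric used in these comparisons can be summarised as follows. Here train_fn and predict_fn are placeholders for training and inference, and the pitch/yaw-to-vector conversion uses one common convention; both are assumptions rather than code from the paper.

import numpy as np

def angular_error_deg(g_pred, g_true):
    """Mean angular error in degrees between gaze angles given as (pitch, yaw).
    The angle-to-vector convention below is a common one and is an assumption."""
    def to_vec(g):
        g = np.asarray(g, dtype=np.float64)
        pitch, yaw = g[:, 0], g[:, 1]
        return np.stack([-np.cos(pitch) * np.sin(yaw),
                         -np.sin(pitch),
                         -np.cos(pitch) * np.cos(yaw)], axis=1)
    a, b = to_vec(g_pred), to_vec(g_true)
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())

def leave_one_person_out(data_by_person, train_fn, predict_fn):
    """Sketch of the cross-person protocol: train on all but one person,
    test on the held-out person, and average the error over all folds."""
    errors = []
    for test_id in data_by_person:
        train = {p: d for p, d in data_by_person.items() if p != test_id}
        model = train_fn(train)
        preds = predict_fn(model, data_by_person[test_id]["images"])
        errors.append(angular_error_deg(preds, data_by_person[test_id]["gaze"]))
    return float(np.mean(errors))

The same error function applies unchanged to the 5-fold splits used for Columbia and EYEDIAP; only the fold construction differs.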
Implementation and training procedure details of these architectures are provided in supplementary materials. In MPIIGaze evaluations (Table 2a), our proposed approach outperforms the current state-of-the-art approach by a large margin, yielding an improvement of 1.0 • (5.5 • → 4.5 • = 18.2%). This significant improvement is in spite of the reduced number of trainable parameters used in our architecture (90M vs 0.7M). Our performance compares favorably to that reported in [46] (4.8 • ) where full-face input is used in contrast to our single-eye input. While our results cannot directly be compared with those of [46] due to the different definition of gaze direction (face-centred as opposed to eye centred), the similar performance suggests that eye images may be sufficient as input to the task of gaze direction estimation. Our approach attains comparable performance to models taking face input, and uses considerably less parameters than recently introduced architectures (129x less than GazeNet). We additionally evaluate our model on the Columbia Gaze and EYEDIAP datasets in Table 2b and Table 2c respectively. While high image quality results in all three methods performing comparably for Columbia Gaze, our approach still prevails with an improvement of 0.4 • over AlexNet. On EYEDIAP, the mean error is very high due to the low resolution and low quality input. Note that there is no head pose estimation performed, with only single eye input being relied on for gaze estimation. Our gazemap-based architecture shows its strengths in this case, performing 0.9 • better than VGG-16 -a 8% improvement. Sample gazemap and gaze direction predictions are shown in Figure 5 where it is evident that despite the lack of visual detail, it is possible to fit gazemaps to yield improved gaze estimation error. By evaluating our architecture on 3 different datasets with different properties in the cross-person setting, we can conclude that our approach provides significantly higher generalization capabilities compared to previous approaches. Thus, we bring gaze estimation closer to direct real-world applications. Robustness Analysis In order to shed more light onto our models' performance, we perform an additional robustness analysis. More concretely, we aim to analyze how our approach performs under difficult and challenging situations, such as extreme head pose and gaze direction. In order to do so, we evaluate a moving average on the output of our within-MPIIGaze evaluations, where the y-values correspond to the mean angular error and the x-values take one of the following factor of variations: head pose (pitch & yaw), gaze direction (pitch & yaw). Additionally, we also consider image quality (contrast & sharpness) as a qualitative factor. In order to isolate each factor of variation from the rest, we evaluate the moving average only on the points whose remaining factors are close to its median value. Intuitively, this corresponds to data points where the person moves only in one specific direction, while staying at rest in all of the remaining directions. This is not the case for image quality analysis, where all data points are used. Figure 6 plots the mean angular error as a function of different movement variations and image qualities. The top row corresponds to variation along the head pose, the middle along gaze direction and the bottom to varying image quality. 
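In outline, each curve of this analysis can be produced by restricting the remaining factors to lie near their median values and smoothing the error along the factor of interest (for the image-quality factors, all samples are used). The tolerance and window size below are illustrative assumptions, as the text does not specify them.

import numpy as np

def robustness_curve(errors, factors, target, tol=0.1, window=50):
    """Sketch of the per-factor robustness analysis.

    errors  : per-sample angular errors, shape (N,).
    factors : dict of per-sample factor values (head pitch/yaw, gaze pitch/yaw).
    target  : the factor to vary; all other factors are kept within `tol` of
              their range around the median (an assumption).
    Returns x positions and the moving average of the error along `target`.
    """
    errors = np.asarray(errors, dtype=np.float64)
    keep = np.ones(len(errors), dtype=bool)
    for name, vals in factors.items():
        if name == target:
            continue
        vals = np.asarray(vals, dtype=np.float64)
        keep &= np.abs(vals - np.median(vals)) <= tol * np.ptp(vals)

    x = np.asarray(factors[target], dtype=np.float64)[keep]
    y = errors[keep]
    order = np.argsort(x)
    x, y = x[order], y[order]

    # moving average over a sliding window of the sorted samples
    w = min(window, len(y)) or 1
    kernel = np.ones(w) / w
    return x[w - 1:], np.convolve(y, kernel, mode="valid")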
In order to calculate the image contrast, we used the RMS contrast metric whereas to compute the sharpness, we employ a Laplacian-based formula as outlined in [23]. Both metrics are explained in supplementary materials. The figure shows that we consistently outperform competing architectures for extreme head and gaze angles. Notably, we show more consistent performance in particular over large ranges of head pitch and gaze yaw angles. In addition, we surpass prior works on images of varying quality, as shown in Figures 6e and 6f. Conclusion Our work is a first attempt at proposing an explicit prior designed for the task of gaze estimation with a neural network architecture. We do so by introducing a novel pictorial representation which we call gazemaps. An accompanying architecture and training scheme using intermediate supervision naturally arises as a consequence, with a fully convolutional architecture being employed for the first time for appearance-based eye gaze estimation. Our gazemaps are anatomically inspired, and are experimentally shown to outperform approaches which consist of significantly more model parameters and at times, more input modalities. We report improvements of up to 18% on MPIIGaze along with improvements on additional two different datasets against competitive baselines. In addition, we demonstrate that our final model is more robust to various factors such as extreme head poses and gaze directions, as well as poor image quality compared to prior work. Future work can look into alternative pictorial representations for gaze estimation, and an alternative architecture for gazemap prediction. Additionally, there is potential in using synthesized gaze directions (and corresponding gazemaps) for unsupervised training of the gaze regression function, to further improve performance. A Baseline Architectures The state-of-the-art CNN architecture for appearance-based gaze estimation is based on a lightly modified VGG-16 architecture [47], with mean cross-person gaze estimation error of 5.5 • on the MPIIGaze dataset [45]. We compare against a standard VGG-16 architecture [27] and an AlexNet architecture [15] which has been the standard architecture for gaze estimation in many works [14,46]. The specific architectures used as baseline are described in Table 3. Both models are trained with a batch size of 32, learning rate of 5 × 10 −5 and L 2 weights regularization coefficient of 10 −4 , using the Adam optimizer [13]. Learning rate is multiplied by 0.1 every 5, 000 training steps, and slight data augmentation is performed in image translation and scale. B Image metrics In this section we describe the image metrics used for the robustness plots concerning image quality (Figures 6e and 6f in paper). B.1 Image contrast The root mean contrast is defined as the standard deviation of the pixel intensities: RMC = 1 M N N −1 i=1 M −1 j=1 (I ij −Ī) 2 where I ij is the value of the image I ∈ M × N at location (i, j) andĪ is the average intensity of all pixel values in the image. B.2 Image sharpness In order to have a sharpness-based metric, we calculate the variance of the image I after having convolved it with a Laplacian, similar to [23]. This corresponds to an approximation of the second derivative, which is computed with the help of the following mask: Table 3. Configuration of CNNs used as baseline for gaze estimation. The style of [27] is followed where possible. 
s represents stride length, p dropout probability, and conv9-96 represents a convolutional layer with kernel size 9 and 96 output feature maps. maxpool3 represents a max-pooling layer with kernel size 3.
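For completeness, the two image-quality metrics of Appendix B can be written in a few lines. The 3x3 Laplacian kernel below is the standard second-derivative approximation; since the exact mask is not reproduced above, treating it as equivalent is an assumption made in the spirit of [23].

import numpy as np
from scipy.ndimage import convolve

def rms_contrast(img):
    """RMS contrast (Sec. B.1): the standard deviation of pixel intensities."""
    return float(np.asarray(img, dtype=np.float64).std())

def laplacian_sharpness(img):
    """Sharpness (Sec. B.2): variance of the Laplacian-filtered image.
    The kernel is the common 3x3 Laplacian; the paper's exact mask is assumed
    to be equivalent."""
    kernel = np.array([[0.0,  1.0, 0.0],
                       [1.0, -4.0, 1.0],
                       [0.0,  1.0, 0.0]])
    img = np.asarray(img, dtype=np.float64)
    return float(convolve(img, kernel, mode="nearest").var())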
4,914
1807.09951
2950503707
We consider the problem of image-to-video translation, where an input image is translated into an output video containing motions of a single object. Recent methods for such problems typically train transformation networks to generate future frames conditioned on the structure sequence. Parallel work has shown that short high-quality motions can be generated by spatiotemporal generative networks that leverage temporal knowledge from the training data. We combine the benefits of both approaches and propose a two-stage generation framework where videos are generated from structures and then refined by temporal signals. To model motions more efficiently, we train networks to learn residual motion between the current and future frames, which avoids learning motion-irrelevant details. We conduct extensive experiments on two image-to-video translation tasks: facial expression retargeting and human pose forecasting. Superior results over the state-of-the-art methods on both tasks demonstrate the effectiveness of our approach.
Deep learning techniques have improved the accuracy of various vision systems @cite_12 @cite_9 @cite_10 @cite_30 . In particular, many generative problems @cite_43 @cite_17 @cite_4 @cite_40 have been addressed with GANs @cite_18 . However, traditional frameworks struggle with more demanding tasks, such as generating fine-grained images or videos with large motion changes. Recent approaches @cite_6 @cite_33 @cite_25 show that a coarse-to-fine strategy can handle these cases. Our model also employs this strategy for video generation.
{ "abstract": [ "We design a new connectivity pattern for the U-Net architecture. Given several stacked U-Nets, we couple each U-Net pair through the connections of their semantic blocks, resulting in the coupled U-Nets (CU-Net). The coupling connections could make the information flow more efficiently across U-Nets. The feature reuse across U-Nets makes each U-Net very parameter efficient. We evaluate the coupled U-Nets on two benchmark datasets of human pose estimation. Both the accuracy and model parameter number are compared. The CU-Net obtains comparable accuracy as state-of-the-art methods. However, it only has at least 60 fewer parameters than other approaches.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "", "Taking a photo outside, can we predict the immediate future, e.g., how would the cloud move in the sky? We address this problem by presenting a generative adversarial network (GAN) based two-stage approach to generating realistic time-lapse videos of high resolution. Given the first frame, our model learns to generate long-term future frames. The first stage generates videos of realistic contents for each frame. The second stage refines the generated video from the first stage by enforcing it to be closer to real videos with regard to motion dynamics. To further encourage vivid motion in the final generated video, Gram matrix is employed to model the motion more precisely. We build a large scale time-lapse dataset, and test our approach on this new dataset. Using our model, we are able to generate realistic videos of up to 128 A— 128 resolution for 32 frames. Quantitative and qualitative experiment results demonstrate the superiority of our model over the state-of-the-art models.", "Coarse-to-fine framework for 3-dimensional head pose estimation.Parameterize instance factors in a generative mannar.Uniform embedding in a novel direction alleviates manifold degradation.Outperform state-of-the-arts on multiple challenging databases. Three-dimensional head pose estimation from a single 2D image is a challenging task with extensive applications. Existing approaches lack the capability to deal with multiple pose-related and -unrelated factors in a uniform way. Most of them can provide only one-dimensional yaw estimation and suffer from limited representation ability for out-of-sample testing inputs. These drawbacks lead to limited performance when extensive variations exist on faces in-the-wild. 
To address these problems, we propose a coarse-to-fine pose estimation framework, where the unit circle and 3-sphere are employed to model the manifold topology on the coarse and fine layer respectively. It can uniformly factorize multiple factors in an instance parametric subspace, where novel inputs can be synthesized under a generative framework. Moreover, our approach can effectively avoid the manifold degradation problem when 3D pose estimation is performed. The results on both experimental and in-the-wild databases demonstrate the validity and superior performance of our approach compared with the state-of-the-arts.", "This paper proposes the novel Pose Guided Person Generation Network (PG @math ) that allows to synthesize person images in arbitrary poses, based on an image of that person and a novel pose. Our generation framework PG @math utilizes the pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person with the target pose. The second stage then refines the initial and blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128 @math 64 re-identification images and 256 @math 256 fashion photos show that our model generates high-quality person images with convincing details.", "Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.", "Generating multi-view images from a single-view input is an essential yet challenging problem. It has broad applications in vision, graphics, and robotics. Our study indicates that the widely-used generative adversarial network (GAN) may learn \"incomplete\" representations due to the single-pathway framework: an encoder-decoder network followed by a discriminator network. We propose CR-GAN to address this problem. In addition to the single reconstruction path, we introduce a generation sideway to maintain the completeness of the learned embedding space. The two learning pathways collaborate and compete in a parameter-sharing manner, yielding considerably improved generalization ability to \"unseen\" dataset. More importantly, the two-pathway framework makes it possible to combine both labeled and unlabeled data for self-supervised learning, which further enriches the embedding space for realistic generations. The experimental results prove that CR-GAN significantly outperforms state-of-the-art methods, especially when generating from \"unseen\" inputs in wild conditions.", "", "Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. 
Samples generated by existing textto- image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256.256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions.", "We propose a novel recurrent encoder-decoder network model for real-time video-based face alignment. Our proposed model predicts 2D facial point maps regularized by a regression loss, while uniquely exploiting recurrent learning at both spatial and temporal dimensions. At the spatial level, we add a feedback loop connection between the combined output response map and the input, in order to enable iterative coarse-to-fine face alignment using a single network model. At the temporal level, we first decouple the features in the bottleneck of the network into temporal-variant factors, such as pose and expression, and temporal-invariant factors, such as identity information. Temporal recurrent learning is then applied to the decoupled temporal-variant features, yielding better generalization and significantly more accurate results at test time. We perform a comprehensive experimental analysis, showing the importance of each component of our proposed model, as well as superior results over the state-of-the-art in standard datasets.", "Random data augmentation is a critical technique to avoid overfitting in training deep neural network models. However, data augmentation and network training are usually treated as two isolated processes, limiting the effectiveness of network training. Why not jointly optimize the two? We propose adversarial data augmentation to address this limitation. The main idea is to design an augmentation network (generator) that competes against a target network (discriminator) by generating hard' augmentation operations online. The augmentation network explores the weaknesses of the target network, while the latter learns from hard' augmentations to achieve better performance. We also design a reward penalty strategy for effective joint training. We demonstrate our approach on the problem of human pose estimation and carry out a comprehensive experimental analysis, showing that our method can significantly improve state-of-the-art models without additional data efforts." 
], "cite_N": [ "@cite_30", "@cite_18", "@cite_4", "@cite_33", "@cite_9", "@cite_6", "@cite_43", "@cite_40", "@cite_10", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2885369556", "2099471712", "", "2964318715", "302237248", "2617297150", "2605135824", "2808283260", "", "2964024144", "2963544488", "2798409409" ] }
Learning to Forecast and Refine Residual Motion for Image-to-Video Generation
Recently, Generative Adversarial Networks (GANs) [7] have attracted a lot of research interests, as they can be utilized to synthesize realistic-looking images or videos for various vision applications [15,16,39,44,46,47]. Compared with image generation, synthesizing videos is more challenging, since the networks need to learn the appearance of objects as well as their motion models. In this paper, we study a form of classic problems in video generation that can be framed as image-to-video translation tasks, where a system receives one or more images as the input and translates it into a video containing realistic motions of a single object. Examples include facial expression retargeting [13,21,34], future prediction [37,38,40], and human pose forecasting [5,6,39]. One approach for the long-term future generation [39,41] is to train a transformation network that translates the input image into each future frame separately 3 The project website is publicly available at https://garyzhao.github.io/FRGAN. Our framework consists of three components: a condition generator, motion forecasting networks and refinement networks. Each part is explained in the corresponding section. conditioned by a sequence of structures. It suggests that it is beneficial to incorporate high-level structures during the generative process. In parallel, recent work [11,31,36,37,40] has shown that temporal visual features are important for video modeling. Such an approach produces temporally coherent motions with the help of spatiotemporal generative networks but is poor at long-term conditional motion generation since no high-level guidance is provided. In this paper, we combine the benefits of these two methods. Our approach includes two motion transformation networks as shown in Figure 1, where the entire video is synthesized in a generation and then refinement manner. In the generation stage, the motion forecasting networks observe a single frame from the input and generate all future frames individually, which are conditioned by the structure sequence predicted by a motion condition generator. This stage aims to generate a coarse video where the spatial structures of the motions are preserved. In the refinement stage, spatiotemporal motion refinement networks are used for refining the output from the previous stage. It performs the generation guided by temporal signals, which targets at producing temporally coherent motions. For more effective motion modeling, two transformation networks are trained in the residual space. Rather than learning the mapping from the structural conditions to motions directly, we force the networks to learn the differences between motions occurring in the current and future frames. The intuition is that learning only the residual motion avoids the redundant motion-irrelevant information, such as static backgrounds, which remains unchanged during the transformation. Moreover, we introduce a novel network architecture using dense connections for decoders. It encourages reusing spatially different features and thus yields realistic-looking results. We experiment on two tasks: facial expression retargeting and human pose forecasting. Success in either task requires reasoning realistic spatial structures as well as temporal semantics of the motions. Strong performances on both tasks demonstrate the effectiveness of our approach. 
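As a high-level illustration of the two-stage pipeline just described, inference can be summarised as below. G_M and G_R are placeholder callables standing in for the trained forecasting and refinement networks; the call signatures are assumptions, not the authors' API.

def generate_video(input_frame, input_map, future_maps, G_M, G_R):
    """Sketch of the coarse-to-refined generation pipeline.

    input_frame : the observed frame I_t.
    input_map   : its structure (motion) map M_t from the condition generator.
    future_maps : predicted structure maps M_{t+1..t+K}.
    G_M, G_R    : placeholders for the forecasting / refinement networks.
    """
    # generation stage: each future frame is forecast individually,
    # conditioned on the structure predicted for that time step
    coarse = [G_M(input_frame, input_map, m_k) for m_k in future_maps]
    # refinement stage: the concatenated clip is refined with temporal signals
    return G_R(coarse, future_maps)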
In summary, our work makes the following contributions: -We devise a novel two-stage generation framework for image-to-video translation, where the future frames are generated according to the spatial structure sequence and then refined with temporal signals; -We investigate learning residual motion for video generation, which focuses on the motion-specific knowledge and avoids learning redundant or irrelevant details from the inputs; -Dense connections between layers of decoders are introduced to encourage spatially different feature reuse during the generation process, which yields more realistic-looking results; -We conduct extensive experimental validation on standard datasets which both quantitatively and subjectively compares our method with the stateof-the-arts to demonstrate the effectiveness of the proposed algorithm. Method As shown in Figure 1, our framework consists of three components: a motion condition generator G C , an image-to-image transformation network G M for forecasting motion conditioned by G C to each future frame individually, and a video-to-video transformation network G R which aims to refine the video clips concatenated from the output of G M . G C is a task-specific generator that produces a sequence of structures to condition the motion of each future frame. Two discriminators are utilized for adversarial learning, where D I differentiates real frames from generated ones and D V is employed for video clips. In the following sections, we explain how each component is designed respectively. Motion Condition Generators In this section, we illustrate how the motion condition generators G C are implemented for two image-to-video translation tasks: facial expression retargeting and human pose forecasting. One superiority of G C is that domain-specific knowledge can be leveraged to help the prediction of motion structures. Fig. 3. Illustration of our residual formulation. We disentangle the motion differences between the input and future frames into a residual motion map m t+k and a residual content map c t+k . Compared with the difference map directly computed from them, our formulation makes the learning task much easier. Figure 2, we utilize 3D Morphable Model (3DMM) [4] to model the sequence of expression motions. Given a video containing expression changes of an actor x, it can be parameterized with α x and (β t , β t+1 , . . . , β t+k ) using 3DMM, where α x represents the facial identity and β t is the expression coefficients in the frame t. In order to retarget the sequence of expressions to another actorx, we compute the facial identity vector αx and combine it with (β t , β t+1 , . . . , β t+k ) to reconstruct a new sequence of 3D face models with corresponding facial expressions. The conditional motion maps are the normal maps calculated from the 3D models respectively. "#$ "#$ Facial Expression Retargeting. As shown in Human Pose Forecasting. We follow [39] to implement an LSTM architecture [6] as the human pose predictor. The human pose of each frame is represented by the 2D coordinate positions of joints. The LSTM observes consecutive pose inputs to identify the type of motion, and then predicts the pose for the next period of time. An example is shown in Figure 2. Note that the motion map is calculated by mapping the output 2D coordinates from the LSTM to heatmaps and concatenating them on depth. 
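The last step of the pose condition generator, turning the LSTM's predicted joint coordinates into a motion map, can be sketched as follows. The Gaussian rendering and sigma are assumptions; the text only states that the 2D coordinates are mapped to heatmaps and concatenated on depth.

import numpy as np

def joints_to_motion_map(joints, h=64, w=64, sigma=1.5):
    """Sketch of converting predicted 2D joints into a conditional motion map:
    one heatmap per joint, concatenated on depth.

    joints : array of shape (13, 2) with (x, y) pixel coordinates (an assumed
             convention).
    Returns an array of shape (h, w, 13).
    """
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    maps = []
    for (x, y) in joints:
        maps.append(np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2)))
    return np.stack(maps, axis=-1)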
Motion Forecasting Networks Starting from the frame I t at time t, our network synthesizes the future frame I t+k by predicting the residual motion between them. Previous work [16,28] implemented this idea by letting the network estimate a difference map between the input and output, which can be denoted as: I t+k = I t + G M (I t |M t , M t+k ),(1) where M t is the motion map which conditions I t . However, this straightforward formulation easily fails when employed to handle videos including large motions, since learning to generate a combination of residual changes from both dynamic and static contents in a single map is quite difficult. Therefore, we introduce an enhanced formulation where the transformation is disentangled into a residual I t+k = m t+k c t+k residual motion + (1 − m t+k ) I t static content ,(2) where both m t+k and c t+k are predicted by G M , and is element-wise multiplication. Intuitively, m t+k ∈ [0, 1] can be viewed as a spatial mask that highlights where the motion occurs. c t+k is the content of the residual motions. By summing the residual motion with the static content, we can obtain the final result. Note that as visualized in Figure 3, m t+k forces G M to reuse the static part from the input and concentrate on inferring dynamic motions. Architecture. Figure 4 shows the architecture of G M , which is inspired by the visual-structure analogy learning [27]. The future frame I t+k can be generated by transferring the structure differences from M t to M t+k to the input frame I t . We use a motion encoder f M , an image encoder f I and a residual content decoder f D to model this concept. And the residual motion is learned by: ∆(I t+k , I t ) = f D (f M (M t+k ) − f M (M t ) + f I (I t )).(3) Intuitively, f M aims to identify key motion features from the motion map containing high-level structural information; f I learns to map the appearance model of the input into an embedding space, where the motion feature transformations can be easily imposed to generate the residual motion; f D learns to decode the embedding. Note that we add skip connections [20] between f I and f D , which makes it easier for f D to reuse features of static objects learned from f I . Huang et al. [9,10] introduce dense connections to enhance feature propagation and reuse in the network. We argue that this is an appealing property for motion transformation networks as well, since in most cases the output frame shares similar high-level structure with the input frame. Especially, dense connections make it easy for the network to reuse features of different spatial positions when large motions are involved in the image. The decoder of our network thus consists of multiple dense connections, each of which connects different dense blocks. A dense block contains two 3 × 3 convolutional layers. The output of a dense block is connected to the first convolutional layers located in all subsequent blocks in the network. As dense blocks have different feature resolutions, we upsample feature maps with lower resolutions when we use them as inputs into higher resolution layers. Training Details. Given a video clip, we train our network to perform random jumps in time to learn forecasting motion changes. To be specific, for every iteration at training time, we sample a frame I t and its corresponding motion map M t given by G C at time t, and then force it to generate frame I t+k given motion map M t+k . 
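As an aside, the residual composition of Eq. (2) used during this sampling is simple to state in code; the shapes and value ranges below are illustrative assumptions.

import numpy as np

def compose_future_frame(i_t, mask, content):
    """Sketch of Eq. (2): the predicted future frame reuses the static content
    of the input frame and overwrites only the locations where motion occurs.

    i_t     : input frame, e.g. shape (H, W, 3), values in [0, 1].
    mask    : residual motion map m_{t+k} in [0, 1], shape (H, W, 1).
    content : residual content map c_{t+k}, same shape and range as i_t.
    """
    return mask * content + (1.0 - mask) * i_t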
Note that in order to let our network perform learning in the entire residual motion space, k is also randomly defined for each iteration. On the other hand, learning with jumps in time can prevent the network from falling into suboptimal parameters as well [39]. Our network is trained to minimize the following objective function: L G M = L rec (I t+k ,Ĩ t+k ) + L r (m t+k ) + L gen .(4) L rec is the reconstruction loss defined in the image space which measures the pixel-wise differences between the predicted and target frames: L rec (I t+k ,Ĩ t+k ) = I t+k −Ĩ t+k 1 ,(5) whereĨ t+k denotes the frame predicted by G M . The reconstruction loss intuitively offers guidance for our network in making a rough prediction that preserves most content information of the target image. More importantly, it leads the result to share similar structure information with the input image. L r is an L-1 norm regularization term defined as: L r (m t+k ) = m t+k 1 ,(6) where m t+k is the residual motion map predicted by G M . It forces the predicted motion changes to be sparse, since dynamic motions always occur in local positions of each frame while the static parts (e.g., background objects) should be unchanged. L gen is the adversarial loss that enables our model to generate realistic frames and reduce blurs, and it is defined as: L gen = −D I ([Ĩ t+k , M t+k ]),(7) where D I is the discriminator for images in adversarial learning. We concatenate the output of G M and motion map M t+k as the input of D I and make the discriminator conditioned on the motion [18]. Note that we follow WGAN [3,8] to train D I to measure the Wasserstein distance between distributions of the real images and results generated from G M . During the optimization of D I , the following loss function is minimized: 5. Architecture of our motion refinement network GR. The network receives temporally concatenated frames generated by GM together with their corresponding conditional motion map as the input and aims to refine the video clip to be more temporally coherent. It performs learning in the residual motion space as well. L D I = D I ([Ĩ t+k , M t+k ]) − D I ([I t+k , M t+k ]) + λ · L gp ,(8)L gp = ( ∇ [Î t+k ,M t+k ] D I ([Î t+k , M t+k ]) 2 − 1) 2 ,(9) where λ is experimentally set to 10. L gp is the gradient penalty term proposed by [8] whereÎ t+k is sampled from the interpolation of I t+k andĨ t+k , and we extend it to be conditioned on the motion M t+k as well. The adversarial loss in combination with the rest of loss terms allows our network to generate highquality frames given the motion conditions. Motion Refinement Networks LetṼ t = [Ĩ t+1 ,Ĩ t+2 , . . . ,Ĩ t+K ] be the video clip with length K temporally concatenated from the outputs of G M . The goal of the motion refinement network G R is to refineṼ t to be more temporally coherent, which is achieved by performing pixel-level refinement with the help of spatiotemporal generative networks. We extends Equation 2 by adding one additional temporal dimension to let G R estimate the residual between the real video clip V t andṼ t , which is defined as: V t = m t c t + (1 − m t ) Ṽ t ,(10) where m t is a spatiotemporal mask which selects either to be refined for each pixel location and timestep, while c t produces a spatiotemporal cuboid which stands for the refined motion content masked by m t . Architecture. Our motion refinement network roughly follows the architectural guidelines of [40]. 
Architecture. Our motion refinement network roughly follows the architectural guidelines of [40]. As shown in Figure 5, we do not use pooling layers; instead, strided and fractionally strided convolutions are utilized for in-network downsampling and upsampling. We also add skip connections to encourage feature reuse. Note that we concatenate the frames with their corresponding conditional motion maps as the inputs to guide the refinement.

Training Details. The key requirement for $G_R$ is that the refined video should be temporally coherent in motion while preserving the annotation information from the input. To this end, we propose to train the network by minimizing a combination of three losses, similar to Equation 4:

$L_{G_R} = L_{rec}(V_t, \bar{V}_t) + L_r(m_t) + \bar{L}_{gen}$, (11)

where $\bar{V}_t$ is the output of $G_R$. $L_{rec}$ and $L_r$ share the same definitions as Equations 5 and 6, respectively. $L_{rec}$ is the reconstruction loss that aims at refining the synthesized video towards the ground truth with minimal error. Compared with the self-regularization loss proposed by [29], we argue that the sparse regularization term $L_r$ is also effective in preserving the annotation information (e.g., the facial identity and the type of pose) during the refinement, since it forces the network to modify only the essential pixels. $\bar{L}_{gen}$ is the adversarial loss:

$\bar{L}_{gen} = -D_V([\bar{V}_t, M_t]) - \frac{1}{K} \sum_{i=1}^{K} D_I([\bar{I}_{t+i}, M_{t+i}])$, (12)

where $M_t = [M_{t+1}, M_{t+2}, \ldots, M_{t+K}]$ is the temporally concatenated condition motion maps, and $\bar{I}_{t+i}$ is the $i$-th frame of $\bar{V}_t$. In the adversarial learning term $\bar{L}_{gen}$, both $D_I$ and $D_V$ play the role of judging whether the input is real or not, providing criticisms to $G_R$. The image discriminator $D_I$ criticizes $G_R$ based on individual frames; it is trained to determine if each frame is sampled from a real video clip. At the same time, $D_V$ provides criticisms to $G_R$ based on the whole video clip; it takes a fixed-length video clip as the input and judges whether the clip is sampled from a real video as well as evaluates the motions it contains. As suggested by [37], although $D_V$ alone should be sufficient, $D_I$ significantly improves the convergence and the final results of $G_R$. We follow the same strategy as introduced in Equation 8 to optimize $D_I$. Note that in each iteration, one pair of real and generated frames is randomly sampled from $V_t$ and $\bar{V}_t$ to train $D_I$. On the other hand, training $D_V$ is also based on the WGAN framework, which we extend to spatiotemporal inputs. Therefore, $D_V$ is optimized by minimizing the following loss function:

$L_{D_V} = D_V([\bar{V}_t, M_t]) - D_V([V_t, M_t]) + \lambda \cdot L_{gp}$, (13)

$L_{gp} = \big(\| \nabla_{[\hat{V}_t, M_t]} D_V([\hat{V}_t, M_t]) \|_2 - 1\big)^2$, (14)

where $\hat{V}_t$ is sampled from the interpolation of $V_t$ and $\bar{V}_t$. Note that $G_R$, $D_I$ and $D_V$ are trained alternately. To be specific, we update $D_I$ and $D_V$ in one step while fixing $G_R$; in the alternating step, we fix $D_I$ and $D_V$ while updating $G_R$.

Experiments

We perform experiments on two image-to-video translation tasks: facial expression retargeting and human pose forecasting. For facial expression retargeting, we demonstrate that our method is able to combine domain-specific knowledge, such as 3DMM, to generate realistic-looking results. For human pose forecasting, experimental results show that our method yields high-quality videos when applied to video generation tasks containing complex motion changes.

Settings and Databases. To train our networks, we use Adam [12] for optimization with a learning rate of 0.0001 and momentum terms of 0.0 and 0.9.
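To make these settings concrete, a minimal sketch of the optimizer setup might look as follows. The helper name is ours, reading "momentum terms of 0.0 and 0.9" as Adam's (beta1, beta2) is an interpretation, and grouping both discriminators under a single optimizer is an assumption made for brevity.

```python
import torch

def build_optimizers(G_M, G_R, D_I, D_V, lr=1e-4, betas=(0.0, 0.9)):
    """Illustrative optimizer setup matching the reported hyper-parameters;
    all four networks are assumed to be standard nn.Module instances."""
    opt_GM = torch.optim.Adam(G_M.parameters(), lr=lr, betas=betas)
    opt_GR = torch.optim.Adam(G_R.parameters(), lr=lr, betas=betas)
    opt_D = torch.optim.Adam(
        list(D_I.parameters()) + list(D_V.parameters()), lr=lr, betas=betas)
    return opt_GM, opt_GR, opt_D
```

The networks themselves are then trained in the two phases described next.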
We first train the forecasting networks, and then train the refinement networks using the generated coarse frames. The batch size is set to 32 for all networks. Due to space constraints, we ask the reader to refer to the project website for the details of the network designs.

We use the MUG Facial Expression Database [1] to evaluate our approach on facial expression retargeting. This dataset is composed of 86 subjects (35 women and 51 men). We crop the face regions with regard to the landmark ground truth and scale them to 96 × 96. To train our networks, we use only the sequences representing one of the six facial expressions: anger, fear, disgust, happiness, sadness, and surprise. We evenly split the database into three groups according to the subjects. Two groups are used for training $G_M$ and $G_R$ respectively, and the results are evaluated on the last one. The 3D Basel Face Model [22] serves as the morphable model to fit the facial identities and expressions for the condition generator $G_C$. We use [48] to compute the 3DMM parameters for each frame. Note that we train $G_R$ to refine the video clips every 32 frames.

The Penn Action Dataset [45] consists of 2326 video sequences of 15 different human actions, which is used for evaluating our method on human pose forecasting. For each action sequence in the dataset, 13 human joint annotations are provided as the ground truth. To remove very noisy joint ground truth in the dataset, we follow the setting of [39] to sub-sample the actions. Therefore, 8 actions including baseball pitch, baseball swing, clean and jerk, golf swing, jumping jacks, jump rope, tennis forehand, and tennis serve are used for training our networks. We crop video frames based on temporal tubes to remove as much background as possible while ensuring the human actions are in all frames, and then scale each cropped frame to 64 × 64. We evenly split the standard dataset into three sets. $G_M$ and $G_R$ are trained on the first two sets respectively, while we evaluate our models on the last set. We employ the same strategy as [39] to train the LSTM pose generator. It is trained to observe 10 inputs and predict 32 steps. Note that $G_R$ is trained to refine video clips with a length of 16.

Evaluation on Facial Expression Retargeting

We compare our method to MCNet [38], MoCoGAN [37] and Villegas et al. [39] on the MUG Database. For each facial expression, we randomly select one video as the reference, and retarget it to all the subjects in the testing set with different methods. Each method only observes the input frame of the target subject, and performs the generation based on it. Our method and [39] share the same 3DMM-based condition generator as introduced in Section 3.1.

Quantitative Comparison. The quality of a generated video is measured by the Average Content Distance (ACD) as introduced in [37]. For each generated video, we make use of OpenFace [2], which outperforms human performance in the face recognition task, to measure the video quality. OpenFace produces a feature vector for each frame, and then the ACD is calculated by measuring the L2 distance of these vectors. We introduce two variants of the ACD in this experiment. The ACD-I is the average distance between each generated frame and the original input frame. It aims to judge if the facial identity is well-preserved in the generated video. The ACD-C is the average pairwise distance of the per-frame feature vectors in the generated video. It measures the content consistency of the generated video.

[Fig. 6. Examples of facial expression retargeting using our algorithm on the MUG Database [1]. We show two expressions as an illustration: (a) happiness and (b) surprise. The reference video and the input target images are highlighted in green, while the generated frames are highlighted in red. The results are sampled every 8 frames.]

[Table 1. Video generation quality comparison on the MUG Dataset [1] (ACD-I / ACD-C); we also compute the ACD-* score for the training set, which is the reference. Only a fragment of the table survives extraction: the MCNet [38] row (0.545 / 0.322) and a stray user-preference figure (62.5 / 37.5, vs. [37]) belonging to Table 2.]

Table 1 summarizes the comparison results. From the table, we find that our method achieves ACD-* scores both lower than 0.2, which is substantially better than the baselines. One interesting observation is that [39] has the worst ACD-I but its ACD-C is the second best. We argue that this is due to the high-level information offered by our 3DMM-based condition generator, which plays a vital role in producing content-consistent results. Our method outperforms the other state-of-the-art methods, since we utilize both domain knowledge (3DMM) and temporal signals for video generation. We show that it is greatly beneficial to incorporate both factors into the generative process.

We also conduct a user study to quantitatively compare these methods. For each method, we randomly select 10 videos for each expression. We then randomly pair the videos generated by ours with the videos from one of the competing methods to form 54 questions. For each question, 3 users are asked to select the video which is more realistic. To be fair, the videos from different methods are shown in random orders. We report the average user preference scores (the average number of times a user prefers our result to the competing one) in Table 2. We find that the users consider the videos generated by ours more realistic most of the time. This is consistent with the ACD results in Table 1, in which our method substantially outperforms the baselines.

Visual Results. In Figure 6, we show the visual results (the expressions of happiness and surprise) generated by our method. We observe that our method is able to generate realistic motions while the facial identities are well-preserved. We hypothesize that the domain knowledge (3DMM) employed serves as a good prior which improves the generation. More visual results of different expressions and subjects are given on the project website.

Evaluation on Human Pose Forecasting

We compare our approach with VGAN [40], Mathieu et al. [17] and Villegas et al. [39] on the Penn Action Dataset. We produce the results of their models according to their papers or reference codes. For fair comparison, we generate videos with 32 generated frames using each method, and evaluate them starting from the first frame. Note that we train an individual VGAN for each action category with randomly picked video clips from the dataset, while a single network is trained over all categories for every other method. Both [39] and our method perform the generation based on the pre-trained LSTM provided by [39], and we train [39] through the same strategy as our motion forecasting network $G_M$.
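For reference, the two ACD variants used in the facial-expression comparison above can be computed from per-frame identity embeddings as in the sketch below. The function name is ours, and the OpenFace embedding step itself is assumed to happen upstream.

```python
import numpy as np

def acd_scores(frame_feats, input_feat):
    """ACD-I and ACD-C as described above, given per-frame identity
    embeddings (e.g., from OpenFace). frame_feats: (T, D); input_feat: (D,)."""
    frame_feats = np.asarray(frame_feats, dtype=np.float64)
    input_feat = np.asarray(input_feat, dtype=np.float64)

    # ACD-I: average L2 distance between each generated frame and the input frame.
    acd_i = np.linalg.norm(frame_feats - input_feat, axis=1).mean()

    # ACD-C: average pairwise L2 distance among the per-frame feature vectors.
    diffs = frame_feats[:, None, :] - frame_feats[None, :, :]
    pair_d = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(frame_feats.shape[0], k=1)
    acd_c = pair_d[iu].mean()

    return float(acd_i), float(acd_c)
```

Lower values indicate better identity preservation (ACD-I) and higher content consistency (ACD-C).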
Implementation. Following the settings of [39], we engage the feature similarity loss term $L_{feat}$ for our motion forecasting network $G_M$ to capture the appearance ($C_1$) and structure ($C_2$) of the human action. This loss term is added to Equation 4 and is defined as:

$L_{feat} = \| C_1(I_{t+k}) - C_1(\tilde{I}_{t+k}) \|_2^2 + \| C_2(I_{t+k}) - C_2(\tilde{I}_{t+k}) \|_2^2$, (15)

where we use the last convolutional layer of the VGG16 network [30] as $C_1$, and the last layer of the Hourglass network [19] as $C_2$. Note that we compute the bounding box according to the ground truth to crop the human of interest in each frame, and then scale it to 224 × 224 as the input of the VGG16.

Results. We evaluate the predictions using Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE). Both metrics perform pixel-level analysis between the ground truth frames and the generated videos. We also report the results of our method and [39] using the condition motion maps computed from the ground truth joints (GT). The results are shown in Figure 7 and Table 3, respectively. From these two scores, we find that the proposed method achieves better quantitative results, which demonstrates the effectiveness of our algorithm. Figure 8 shows a visual comparison of our method with [39]. We can see that the predicted future of our method is closer to the ground-truth future. To be specific, our method yields more consistent motions and keeps human appearances as well. Due to space constraints, we ask the reader to refer to the project website for more side-by-side visual results.

Ablation Study

Our method consists of three main modules: residual learning, dense connections for the decoder, and the two-stage generation schema. Without residual learning, our network decays to [39]. As shown in Sections 4.2 and 4.3, ours outperforms [39], which demonstrates the effectiveness of residual learning. To verify the rest of the modules, we train one partial variant of $G_M$, where the dense connections are not employed in the decoder $f_D$. Then we evaluate three different settings of our method on both tasks: $G_M$ without dense connections, using only $G_M$ for generation, and our full model. Note that in order to get rid of the influence from the LSTM, we report the results using the conditional motion maps calculated from the ground truth on the Penn Action Dataset.

[Table 4. Quantitative results of the ablation study. We report the ACD-* scores on the MUG Database [1] and MSE scores on the Penn Action Dataset [45].]

Results are shown in Table 4. Our approach with more modules performs better than those with fewer components, which suggests the effectiveness of each part of our algorithm.

Conclusions

In this paper, we combine the benefits of high-level structural conditions and spatiotemporal generative networks for image-to-video translation by synthesizing videos in a generation and then refinement manner. We have applied this method to facial expression retargeting, where we show that our method is able to engage domain knowledge for realistic video generation, and to human pose forecasting, where we demonstrate that our method achieves higher performance than the state-of-the-art when generating videos involving large motion changes. We also incorporate residual learning and dense connections to produce high-quality results. In the future, we plan to further explore the use of our framework for other image or video generation tasks.
4,706
1807.09951
2950503707
We consider the problem of image-to-video translation, where an input image is translated into an output video containing motions of a single object. Recent methods for such problems typically train transformation networks to generate future frames conditioned on the structure sequence. Parallel work has shown that short high-quality motions can be generated by spatiotemporal generative networks that leverage temporal knowledge from the training data. We combine the benefits of both approaches and propose a two-stage generation framework where videos are generated from structures and then refined by temporal signals. To model motions more efficiently, we train networks to learn residual motion between the current and future frames, which avoids learning motion-irrelevant details. We conduct extensive experiments on two image-to-video translation tasks: facial expression retargeting and human pose forecasting. Superior results over the state-of-the-art methods on both tasks demonstrate the effectiveness of our approach.
@cite_33 proposed an algorithm to generate videos in two stages, but there are important differences between their work and ours. First, @cite_33 is designed for time-lapse videos, while we can generate general videos. Second, we make use of structure conditions to guide the generation in the first stage, whereas @cite_33 models this stage with 3D convolutional networks. Finally, we can make long-term predictions, while @cite_33 only generates videos of fixed length.
{ "abstract": [ "Taking a photo outside, can we predict the immediate future, e.g., how would the cloud move in the sky? We address this problem by presenting a generative adversarial network (GAN) based two-stage approach to generating realistic time-lapse videos of high resolution. Given the first frame, our model learns to generate long-term future frames. The first stage generates videos of realistic contents for each frame. The second stage refines the generated video from the first stage by enforcing it to be closer to real videos with regard to motion dynamics. To further encourage vivid motion in the final generated video, Gram matrix is employed to model the motion more precisely. We build a large scale time-lapse dataset, and test our approach on this new dataset. Using our model, we are able to generate realistic videos of up to 128 A— 128 resolution for 32 frames. Quantitative and qualitative experiment results demonstrate the superiority of our model over the state-of-the-art models." ], "cite_N": [ "@cite_33" ], "mid": [ "2964318715" ] }
Learning to Forecast and Refine Residual Motion for Image-to-Video Generation
Recently, Generative Adversarial Networks (GANs) [7] have attracted a lot of research interest, as they can be utilized to synthesize realistic-looking images or videos for various vision applications [15,16,39,44,46,47]. Compared with image generation, synthesizing videos is more challenging, since the networks need to learn the appearance of objects as well as their motion models. In this paper, we study a form of classic problems in video generation that can be framed as image-to-video translation tasks, where a system receives one or more images as the input and translates them into a video containing realistic motions of a single object. Examples include facial expression retargeting [13,21,34], future prediction [37,38,40], and human pose forecasting [5,6,39].

One approach for long-term future generation [39,41] is to train a transformation network that translates the input image into each future frame separately, conditioned by a sequence of structures. This suggests that it is beneficial to incorporate high-level structures during the generative process. In parallel, recent work [11,31,36,37,40] has shown that temporal visual features are important for video modeling. Such an approach produces temporally coherent motions with the help of spatiotemporal generative networks, but is poor at long-term conditional motion generation since no high-level guidance is provided.

[Footnote 3: The project website is publicly available at https://garyzhao.github.io/FRGAN.]

[Fig. 1. Our framework consists of three components: a condition generator, motion forecasting networks and refinement networks. Each part is explained in the corresponding section.]

In this paper, we combine the benefits of these two methods. Our approach includes two motion transformation networks as shown in Figure 1, where the entire video is synthesized in a generation and then refinement manner. In the generation stage, the motion forecasting networks observe a single frame from the input and generate all future frames individually, conditioned by the structure sequence predicted by a motion condition generator. This stage aims to generate a coarse video where the spatial structures of the motions are preserved. In the refinement stage, spatiotemporal motion refinement networks are used for refining the output from the previous stage. The refinement is guided by temporal signals and aims at producing temporally coherent motions.

For more effective motion modeling, the two transformation networks are trained in the residual space. Rather than learning the mapping from the structural conditions to motions directly, we force the networks to learn the differences between the motions occurring in the current and future frames. The intuition is that learning only the residual motion avoids the redundant motion-irrelevant information, such as static backgrounds, which remains unchanged during the transformation. Moreover, we introduce a novel network architecture using dense connections for decoders. It encourages reusing spatially different features and thus yields realistic-looking results.

We experiment on two tasks: facial expression retargeting and human pose forecasting. Success in either task requires reasoning about realistic spatial structures as well as the temporal semantics of the motions. Strong performances on both tasks demonstrate the effectiveness of our approach.
In summary, our work makes the following contributions:
- We devise a novel two-stage generation framework for image-to-video translation, where the future frames are generated according to the spatial structure sequence and then refined with temporal signals;
- We investigate learning residual motion for video generation, which focuses on the motion-specific knowledge and avoids learning redundant or irrelevant details from the inputs;
- Dense connections between layers of decoders are introduced to encourage spatially different feature reuse during the generation process, which yields more realistic-looking results;
- We conduct extensive experimental validation on standard datasets, which both quantitatively and subjectively compares our method with the state-of-the-art to demonstrate the effectiveness of the proposed algorithm.

Method

As shown in Figure 1, our framework consists of three components: a motion condition generator $G_C$, an image-to-image transformation network $G_M$ that forecasts the motion conditioned by $G_C$ for each future frame individually, and a video-to-video transformation network $G_R$ which aims to refine the video clips concatenated from the outputs of $G_M$. $G_C$ is a task-specific generator that produces a sequence of structures to condition the motion of each future frame. Two discriminators are utilized for adversarial learning, where $D_I$ differentiates real frames from generated ones and $D_V$ is employed for video clips. In the following sections, we explain how each component is designed.

Motion Condition Generators

In this section, we illustrate how the motion condition generators $G_C$ are implemented for two image-to-video translation tasks: facial expression retargeting and human pose forecasting. One advantage of $G_C$ is that domain-specific knowledge can be leveraged to help the prediction of motion structures.

[Fig. 3. Illustration of our residual formulation. We disentangle the motion differences between the input and future frames into a residual motion map $m_{t+k}$ and a residual content map $c_{t+k}$. Compared with the difference map directly computed from them, our formulation makes the learning task much easier.]

Facial Expression Retargeting. As shown in Figure 2, we utilize the 3D Morphable Model (3DMM) [4] to model the sequence of expression motions. Given a video containing expression changes of an actor $x$, it can be parameterized with $\alpha_x$ and $(\beta_t, \beta_{t+1}, \ldots, \beta_{t+k})$ using 3DMM, where $\alpha_x$ represents the facial identity and $\beta_t$ is the expression coefficients in frame $t$. In order to retarget the sequence of expressions to another actor $\hat{x}$, we compute the facial identity vector $\alpha_{\hat{x}}$ and combine it with $(\beta_t, \beta_{t+1}, \ldots, \beta_{t+k})$ to reconstruct a new sequence of 3D face models with the corresponding facial expressions. The conditional motion maps are the normal maps calculated from these 3D models.

Human Pose Forecasting. We follow [39] to implement an LSTM architecture [6] as the human pose predictor. The human pose of each frame is represented by the 2D coordinate positions of the joints. The LSTM observes consecutive pose inputs to identify the type of motion, and then predicts the poses for the next period of time. An example is shown in Figure 2. Note that the motion map is calculated by mapping the output 2D coordinates from the LSTM to heatmaps and concatenating them on the depth axis.
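As an illustration of the pose-conditioning step just described, the sketch below converts predicted 2-D joint coordinates into per-joint heatmaps stacked along the depth (channel) axis. Using Gaussian bumps, and the particular sigma value, are our assumptions; the text only states that the coordinates are mapped to heatmaps.

```python
import torch

def joints_to_heatmaps(joints, H, W, sigma=1.5):
    """Map a (J, 2) tensor of (x, y) joint coordinates (in pixels) to J
    heatmaps of size (H, W), stacked on the channel (depth) axis."""
    ys = torch.arange(H, dtype=torch.float32).view(H, 1)
    xs = torch.arange(W, dtype=torch.float32).view(1, W)
    maps = []
    for x, y in joints.tolist():
        d2 = (xs - x) ** 2 + (ys - y) ** 2      # squared distance to the joint
        maps.append(torch.exp(-d2 / (2.0 * sigma ** 2)))
    return torch.stack(maps, dim=0)             # (J, H, W) condition motion map
```

The per-frame maps produced this way play the role of the condition motion maps $M_t$ consumed by $G_M$ and $G_R$.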
Video Generation. Recent methods @cite_27 @cite_29 @cite_26 @cite_31 solve the image-to-video generation problem by training transformation networks that translate the input image into each future frame separately, together with a generator predicting the structure sequence which conditions the future frames. However, due to the absence of pixel-level temporal knowledge during the training process, motion artifacts can be observed in the results of these methods.
{ "abstract": [ "Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectories. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset", "We propose a deep neural network for the prediction of future frames in natural video sequences. To effectively handle complex evolution of pixels in videos, we propose to decompose the motion and content, two key components generating dynamics in videos. Our model is built upon the Encoder-Decoder Convolutional Neural Network and Convolutional LSTM for pixel-level prediction, which independently capture the spatial layout of an image and the corresponding temporal dynamics. By independently modeling motion and content, predicting the next frame reduces to converting the extracted content features into the next frame content by the identified motion features, which simplifies the task of prediction. Our model is end-to-end trainable over multiple time steps, and naturally learns to decompose motion and content without separate training. We evaluate the proposed network architecture on human activity videos using KTH, Weizmann action, and UCF-101 datasets. We show state-of-the-art performance in comparison to recent approaches. To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatiotemporal dynamics for pixel-level future prediction in natural videos.", "The goal of precipitation nowcasting is to predict the future rainfall intensity in a local region over a relatively short period of time. Very few previous studies have examined this crucial and challenging weather forecasting problem from the machine learning perspective. In this paper, we formulate precipitation nowcasting as a spatiotemporal sequence forecasting problem in which both the input and the prediction target are spatiotemporal sequences. By extending the fully connected LSTM (FC-LSTM) to have convolutional structures in both the input-to-state and state-to-state transitions, we propose the convolutional LSTM (ConvLSTM) and use it to build an end-to-end trainable model for the precipitation nowcasting problem. Experiments show that our ConvLSTM network captures spatiotemporal correlations better and consistently outperforms FC-LSTM and the state-of-the-art operational ROVER algorithm for precipitation nowcasting.", "We propose a hierarchical approach for making long-term predictions of future frames. 
To avoid inherent compounding errors in recursive pixel-level prediction, we propose to first estimate high-level structure in the input frames, then predict how that structure evolves in the future, and finally by observing a single frame from the past and the predicted high-level structure, we construct the future frames without having to observe any of the pixel-level predictions. Long-term video prediction is difficult to perform by recurrently observing the predicted frames because the small errors in pixel space exponentially amplify as predictions are made deeper into the future. Our approach prevents pixel-level error propagation from happening by removing the need to observe the predicted frames. Our model is built with a combination of LSTM and analogy-based encoder-decoder convolutional neural networks, which independently predict the video structure and generate the future frames, respectively. In experiments, our model is evaluated on the Human 3.6M and Penn Action datasets on the task of long-term pixel-level video prediction of humans performing actions and demonstrate significantly better results than the state-of-the-art." ], "cite_N": [ "@cite_27", "@cite_29", "@cite_31", "@cite_26" ], "mid": [ "2248556341", "2615413256", "1485009520", "2963253230" ] }
Learning to Forecast and Refine Residual Motion for Image-to-Video Generation
Recently, Generative Adversarial Networks (GANs) [7] have attracted a lot of research interests, as they can be utilized to synthesize realistic-looking images or videos for various vision applications [15,16,39,44,46,47]. Compared with image generation, synthesizing videos is more challenging, since the networks need to learn the appearance of objects as well as their motion models. In this paper, we study a form of classic problems in video generation that can be framed as image-to-video translation tasks, where a system receives one or more images as the input and translates it into a video containing realistic motions of a single object. Examples include facial expression retargeting [13,21,34], future prediction [37,38,40], and human pose forecasting [5,6,39]. One approach for the long-term future generation [39,41] is to train a transformation network that translates the input image into each future frame separately 3 The project website is publicly available at https://garyzhao.github.io/FRGAN. Our framework consists of three components: a condition generator, motion forecasting networks and refinement networks. Each part is explained in the corresponding section. conditioned by a sequence of structures. It suggests that it is beneficial to incorporate high-level structures during the generative process. In parallel, recent work [11,31,36,37,40] has shown that temporal visual features are important for video modeling. Such an approach produces temporally coherent motions with the help of spatiotemporal generative networks but is poor at long-term conditional motion generation since no high-level guidance is provided. In this paper, we combine the benefits of these two methods. Our approach includes two motion transformation networks as shown in Figure 1, where the entire video is synthesized in a generation and then refinement manner. In the generation stage, the motion forecasting networks observe a single frame from the input and generate all future frames individually, which are conditioned by the structure sequence predicted by a motion condition generator. This stage aims to generate a coarse video where the spatial structures of the motions are preserved. In the refinement stage, spatiotemporal motion refinement networks are used for refining the output from the previous stage. It performs the generation guided by temporal signals, which targets at producing temporally coherent motions. For more effective motion modeling, two transformation networks are trained in the residual space. Rather than learning the mapping from the structural conditions to motions directly, we force the networks to learn the differences between motions occurring in the current and future frames. The intuition is that learning only the residual motion avoids the redundant motion-irrelevant information, such as static backgrounds, which remains unchanged during the transformation. Moreover, we introduce a novel network architecture using dense connections for decoders. It encourages reusing spatially different features and thus yields realistic-looking results. We experiment on two tasks: facial expression retargeting and human pose forecasting. Success in either task requires reasoning realistic spatial structures as well as temporal semantics of the motions. Strong performances on both tasks demonstrate the effectiveness of our approach. 
In summary, our work makes the following contributions: -We devise a novel two-stage generation framework for image-to-video translation, where the future frames are generated according to the spatial structure sequence and then refined with temporal signals; -We investigate learning residual motion for video generation, which focuses on the motion-specific knowledge and avoids learning redundant or irrelevant details from the inputs; -Dense connections between layers of decoders are introduced to encourage spatially different feature reuse during the generation process, which yields more realistic-looking results; -We conduct extensive experimental validation on standard datasets which both quantitatively and subjectively compares our method with the stateof-the-arts to demonstrate the effectiveness of the proposed algorithm. Method As shown in Figure 1, our framework consists of three components: a motion condition generator G C , an image-to-image transformation network G M for forecasting motion conditioned by G C to each future frame individually, and a video-to-video transformation network G R which aims to refine the video clips concatenated from the output of G M . G C is a task-specific generator that produces a sequence of structures to condition the motion of each future frame. Two discriminators are utilized for adversarial learning, where D I differentiates real frames from generated ones and D V is employed for video clips. In the following sections, we explain how each component is designed respectively. Motion Condition Generators In this section, we illustrate how the motion condition generators G C are implemented for two image-to-video translation tasks: facial expression retargeting and human pose forecasting. One superiority of G C is that domain-specific knowledge can be leveraged to help the prediction of motion structures. Fig. 3. Illustration of our residual formulation. We disentangle the motion differences between the input and future frames into a residual motion map m t+k and a residual content map c t+k . Compared with the difference map directly computed from them, our formulation makes the learning task much easier. Figure 2, we utilize 3D Morphable Model (3DMM) [4] to model the sequence of expression motions. Given a video containing expression changes of an actor x, it can be parameterized with α x and (β t , β t+1 , . . . , β t+k ) using 3DMM, where α x represents the facial identity and β t is the expression coefficients in the frame t. In order to retarget the sequence of expressions to another actorx, we compute the facial identity vector αx and combine it with (β t , β t+1 , . . . , β t+k ) to reconstruct a new sequence of 3D face models with corresponding facial expressions. The conditional motion maps are the normal maps calculated from the 3D models respectively. "#$ "#$ Facial Expression Retargeting. As shown in Human Pose Forecasting. We follow [39] to implement an LSTM architecture [6] as the human pose predictor. The human pose of each frame is represented by the 2D coordinate positions of joints. The LSTM observes consecutive pose inputs to identify the type of motion, and then predicts the pose for the next period of time. An example is shown in Figure 2. Note that the motion map is calculated by mapping the output 2D coordinates from the LSTM to heatmaps and concatenating them on depth. 
Motion Forecasting Networks Starting from the frame I t at time t, our network synthesizes the future frame I t+k by predicting the residual motion between them. Previous work [16,28] implemented this idea by letting the network estimate a difference map between the input and output, which can be denoted as: I t+k = I t + G M (I t |M t , M t+k ),(1) where M t is the motion map which conditions I t . However, this straightforward formulation easily fails when employed to handle videos including large motions, since learning to generate a combination of residual changes from both dynamic and static contents in a single map is quite difficult. Therefore, we introduce an enhanced formulation where the transformation is disentangled into a residual I t+k = m t+k c t+k residual motion + (1 − m t+k ) I t static content ,(2) where both m t+k and c t+k are predicted by G M , and is element-wise multiplication. Intuitively, m t+k ∈ [0, 1] can be viewed as a spatial mask that highlights where the motion occurs. c t+k is the content of the residual motions. By summing the residual motion with the static content, we can obtain the final result. Note that as visualized in Figure 3, m t+k forces G M to reuse the static part from the input and concentrate on inferring dynamic motions. Architecture. Figure 4 shows the architecture of G M , which is inspired by the visual-structure analogy learning [27]. The future frame I t+k can be generated by transferring the structure differences from M t to M t+k to the input frame I t . We use a motion encoder f M , an image encoder f I and a residual content decoder f D to model this concept. And the residual motion is learned by: ∆(I t+k , I t ) = f D (f M (M t+k ) − f M (M t ) + f I (I t )).(3) Intuitively, f M aims to identify key motion features from the motion map containing high-level structural information; f I learns to map the appearance model of the input into an embedding space, where the motion feature transformations can be easily imposed to generate the residual motion; f D learns to decode the embedding. Note that we add skip connections [20] between f I and f D , which makes it easier for f D to reuse features of static objects learned from f I . Huang et al. [9,10] introduce dense connections to enhance feature propagation and reuse in the network. We argue that this is an appealing property for motion transformation networks as well, since in most cases the output frame shares similar high-level structure with the input frame. Especially, dense connections make it easy for the network to reuse features of different spatial positions when large motions are involved in the image. The decoder of our network thus consists of multiple dense connections, each of which connects different dense blocks. A dense block contains two 3 × 3 convolutional layers. The output of a dense block is connected to the first convolutional layers located in all subsequent blocks in the network. As dense blocks have different feature resolutions, we upsample feature maps with lower resolutions when we use them as inputs into higher resolution layers. Training Details. Given a video clip, we train our network to perform random jumps in time to learn forecasting motion changes. To be specific, for every iteration at training time, we sample a frame I t and its corresponding motion map M t given by G C at time t, and then force it to generate frame I t+k given motion map M t+k . 
Note that in order to let our network perform learning in the entire residual motion space, k is also randomly defined for each iteration. On the other hand, learning with jumps in time can prevent the network from falling into suboptimal parameters as well [39]. Our network is trained to minimize the following objective function: L G M = L rec (I t+k ,Ĩ t+k ) + L r (m t+k ) + L gen .(4) L rec is the reconstruction loss defined in the image space which measures the pixel-wise differences between the predicted and target frames: L rec (I t+k ,Ĩ t+k ) = I t+k −Ĩ t+k 1 ,(5) whereĨ t+k denotes the frame predicted by G M . The reconstruction loss intuitively offers guidance for our network in making a rough prediction that preserves most content information of the target image. More importantly, it leads the result to share similar structure information with the input image. L r is an L-1 norm regularization term defined as: L r (m t+k ) = m t+k 1 ,(6) where m t+k is the residual motion map predicted by G M . It forces the predicted motion changes to be sparse, since dynamic motions always occur in local positions of each frame while the static parts (e.g., background objects) should be unchanged. L gen is the adversarial loss that enables our model to generate realistic frames and reduce blurs, and it is defined as: L gen = −D I ([Ĩ t+k , M t+k ]),(7) where D I is the discriminator for images in adversarial learning. We concatenate the output of G M and motion map M t+k as the input of D I and make the discriminator conditioned on the motion [18]. Note that we follow WGAN [3,8] to train D I to measure the Wasserstein distance between distributions of the real images and results generated from G M . During the optimization of D I , the following loss function is minimized: 5. Architecture of our motion refinement network GR. The network receives temporally concatenated frames generated by GM together with their corresponding conditional motion map as the input and aims to refine the video clip to be more temporally coherent. It performs learning in the residual motion space as well. L D I = D I ([Ĩ t+k , M t+k ]) − D I ([I t+k , M t+k ]) + λ · L gp ,(8)L gp = ( ∇ [Î t+k ,M t+k ] D I ([Î t+k , M t+k ]) 2 − 1) 2 ,(9) where λ is experimentally set to 10. L gp is the gradient penalty term proposed by [8] whereÎ t+k is sampled from the interpolation of I t+k andĨ t+k , and we extend it to be conditioned on the motion M t+k as well. The adversarial loss in combination with the rest of loss terms allows our network to generate highquality frames given the motion conditions. Motion Refinement Networks LetṼ t = [Ĩ t+1 ,Ĩ t+2 , . . . ,Ĩ t+K ] be the video clip with length K temporally concatenated from the outputs of G M . The goal of the motion refinement network G R is to refineṼ t to be more temporally coherent, which is achieved by performing pixel-level refinement with the help of spatiotemporal generative networks. We extends Equation 2 by adding one additional temporal dimension to let G R estimate the residual between the real video clip V t andṼ t , which is defined as: V t = m t c t + (1 − m t ) Ṽ t ,(10) where m t is a spatiotemporal mask which selects either to be refined for each pixel location and timestep, while c t produces a spatiotemporal cuboid which stands for the refined motion content masked by m t . Architecture. Our motion refinement network roughly follows the architectural guidelines of [40]. 
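The discriminator objective in Eq. (8)-(9) is the WGAN critic loss with a gradient penalty, conditioned on the motion map by channel-wise concatenation. The sketch below shows one way to compute it; the critic network is an arbitrary placeholder, and only the penalty weight of 10 is taken from the text.

```python
import torch
import torch.nn as nn

def critic_loss_with_gp(D_I, real, fake, M, lam=10.0):
    """Eq. (8)-(9): WGAN critic loss with gradient penalty, conditioned on the motion map M."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    inter = eps * real + (1.0 - eps) * fake.detach()
    x_hat = torch.cat([inter, M], dim=1).requires_grad_(True)   # [I_hat, M] as in Eq. (9)
    grads = torch.autograd.grad(D_I(x_hat).sum(), x_hat, create_graph=True)[0]
    gp = ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
    d_fake = D_I(torch.cat([fake, M], dim=1)).mean()
    d_real = D_I(torch.cat([real, M], dim=1)).mean()
    return d_fake - d_real + lam * gp    # minimized with respect to the critic

# Placeholder critic: 3 image channels + 13 motion-map channels in, one score out.
D_I = nn.Sequential(nn.Conv2d(16, 32, 4, 2, 1), nn.LeakyReLU(0.2),
                    nn.Conv2d(32, 1, 4, 2, 1), nn.Flatten(), nn.Linear(16 * 16, 1))
real = torch.randn(4, 3, 64, 64)
fake = torch.randn(4, 3, 64, 64)
M = torch.randn(4, 13, 64, 64)
print(critic_loss_with_gp(D_I, real, fake, M).item())
```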
As shown in Figure 5, we do not use pooling layers, instead strided and fractionally strided convolutions are utilized for in-network downsampling and upsampling. We also add skip connections to encourage feature reuse. Note that we concatenate the frames with their corresponding conditional motion maps as the inputs to guide the refinement. Training Details. The key requirement for G R is that the refined video should be temporal coherent in motion while preserving the annotation information from the input. To this end, we propose to train the network by minimizing a combination of three losses which is similar to Equation 4: L G R = L rec (V t ,V t ) + L r (m t ) +L gen ,(11) whereV t is the output of G R . L rec and L r share the same definition with Equation 5 and 6 respectively. L rec is the reconstruction loss that aims at refining the synthesized video towards the ground truth with minimal error. Compared with the self-regularization loss proposed by [29], we argue that the sparse regularization term L r is also efficient to preserve the annotation information (e.g., the facial identity and the type of pose) during the refinement, since it force the network to only modify the essential pixels.L gen is the adversarial loss: L gen = −D V ([V t , M t ]) − 1 K K i=1 D I ([Ī t+i , M t+i ]),(12) where M t = [M t+1 , M t+2 , . . . , M t+K ] is the temporally concatenated condition motion maps, andĪ t+i is the i-th frame ofV t . In the adversarial learning term L gen , both D I and D V play the role to judge whether the input is a real video clip or not, providing criticisms to G R . The image discriminator D I criticizes G R based on individual frames, which is trained to determine if each frame is sampled from a real video clip. At the same time, D V provides criticisms to G R based on the whole video clip, which takes a fixed length video clip as the input and judges if a video clip is sampled from a real video as well as evaluates the motions contained. As suggested by [37], although D V alone should be sufficient, D I significantly improves the convergence and the final results of G R . We follow the same strategy as introduced in Equation 8 to optimize D I . Note that in each iteration, one pair of real and generated frames is randomly sampled from V t andV t to train D I . On the other hand, training D V is also based on the WGAN framework, where we extend it to spatiotemporal inputs. Therefore, D V is optimized by minimizing the following loss function: L D V = D V ([V t , M t ]) − D V ([V t , M t ]) + λ · L gp ,(13)L gp = ( ∇ [Vt,Mt] D V ([V t , M t ]) 2 − 1) 2 ,(14) whereV t is sampled from the interpolation of V t andV t . Note that G R , D I and D V are trained alternatively. To be specific, we update D I and D V in one step while fixing G R ; in the alternating step, we fix D I and D V while updating G R . Experiments We perform experiments on two image-to-video translation tasks: facial expression retargeting and human pose forecasting. For facial expression retargeting, we demonstrate that our method is able to combine domain-specific knowledge, such as 3DMM, to generate realistic-looking results. For human pose forecasting, experimental results show that our method yields high-quality videos when applied for video generation tasks containing complex motion changes. Settings and Databases To train our networks, we use Adam [12] for optimization with a learning rate of 0.0001 and momentums of 0.0 and 0.9. 
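A minimal sketch of the alternating optimization schedule and optimizer setup described above: the two critics are updated in one step with the generator frozen, then the generator in the next. The Adam hyper-parameters (learning rate 1e-4, betas (0.0, 0.9)) follow the text; the models, batch and loss functions are placeholders whose exact signatures are assumptions.

```python
import torch

def make_optimizers(G_R, D_I, D_V, lr=1e-4, betas=(0.0, 0.9)):
    adam = lambda m: torch.optim.Adam(m.parameters(), lr=lr, betas=betas)
    return adam(G_R), adam(D_I), adam(D_V)

def train_step(batch, G_R, D_I, D_V, opt_g, opt_di, opt_dv, critic_loss_fn, gen_loss_fn):
    # Step 1: update the two critics while G_R is held fixed
    # (its outputs are detached inside critic_loss_fn).
    opt_di.zero_grad(); opt_dv.zero_grad()
    loss_di, loss_dv = critic_loss_fn(batch, G_R, D_I, D_V)
    (loss_di + loss_dv).backward()
    opt_di.step(); opt_dv.step()
    # Step 2: update G_R while the critics are held fixed.
    opt_g.zero_grad()
    gen_loss_fn(batch, G_R, D_I, D_V).backward()
    opt_g.step()
```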
We first train the forecasting networks, and then train the refinement networks using the generated coarse frames. The batch size is set to 32 for all networks. Due to space constraints, we ask the reader to refer to the project website for the details of the network designs. We use the MUG Facial Expression Database [1] to evaluate our approach on facial expression retargeting. This dataset is composed of 86 subjects (35 women and 51 men). We crop the face regions with regards to the landmark ground truth and scale them to 96 × 96. To train our networks, we use only the sequences representing one of the six facial expressions: anger, fear, disgust, happiness, sadness, and surprise. We evenly split the database into three groups according to the subjects. Two groups are used for training G M and G R respectively, and the results are evaluated on the last one. The 3D Basel Face Model [22] serves as the morphable model to fit the facial identities and expressions for the condition generator G C . We use [48] to compute the 3DMM parameters for each frame. Note that we train G R to refine the video clips every 32 frames. The Penn Action Dataset [45] consists of 2326 video sequences of 15 different human actions, which is used for evaluating our method on human pose forecasting. For each action sequence in the dataset, 13 human joint annotations are provided as the ground truth. To remove very noisy joint ground-truth in the dataset, we follow the setting of [39] to sub-sample the actions. Therefore, 8 actions including baseball pitch, baseball swing, clean and jerk, golf swing, jumping jacks, jump rope, tennis forehand, and tennis serve are used for training our networks. We crop video frames based on temporal tubes to remove as much background as possible while ensuring the human actions are in all frames, and then scale each cropped frame to 64 × 64. We evenly split the standard dataset into three sets. G M and G R are trained in the first two sets respectively, while we evaluate our models in the last set. We employ the same strategy as [39] to train the LSTM pose generator. It is trained to observe 10 inputs and predict 32 steps. Note that G R is trained to refine the video clips with the length of 16. Evaluation on Facial Expression Retargeting We compare our method to MCNet [38], MoCoGAN [37] and Villegas et al. [39] on the MUG Database. For each facial expression, we randomly select one video as the reference, and retarget it to all the subjects in the testing set with different methods. Each method only observes the input frame of the target subject, and performs the generation based on it. Our method and [39] share the same 3DMMbased condition generator as introduced in Section 3.1. Quantitative Comparison. The quality of a generated video are measured by the Average Content Distance (ACD) as introduced in [37]. For each generated video, we make use of OpenFace [2], which outperforms human performance in the face recognition task, to measure the video quality. OpenFace produces a Fig. 6. Examples of facial expression retargeting using our algorithm on the MUG Database [1]. We show two expressions as an illustration: (a) happiness and (b) surprise. The reference video and the input target images are highlighted in green, while the generated frames are highlighted in red. The results are sampled every 8 frames. Table 1. Video generation quality comparison on the MUG Dataset [1]. We also compute the ACD-* score for the training set, which is the reference. 
Methods ACD-I ACD-C MCNet [38] 0.545 0.322 Villegas et al. [39] [37] 62.5 / 37.5 feature vector for each frame, and then the ACD is calculated by measuring the L-2 distance of these vectors. We introduce two variants of the ACD in this experiment. The ACD-I is the average distance between each generated frame and the original input frame. It aims to judge if the facial identity is wellpreserved in the generated video. The ACD-C is the average pairwise distance of the per-frame feature vectors in the generated video. It measures the content consistency of the generated video. Table 1 summarizes the comparison results. From the table, we find that our method achieves ACD-* scores both lower than 0.2, which is substantially better than the baselines. One interesting observation is that [39] has the worst ACD-I but its ACD-C is the second best. We argue that this is due to the high-level information offered by our 3DMM-based condition generator, which plays a vital role for producing content consistency results. Our method outperforms other state-of-the-arts, since we utilize both domain knowledge (3DMM) and temporal signals for video generation. We show that it is greatly beneficial to incorporate both factors into the generative process. We also conduct a user study to quantitatively compare these methods. For each method, we randomly select 10 videos for each expression. We then randomly pair the videos generated by ours with the videos from one of the competing methods to form 54 questions. For each question, 3 users are asked to select the video which is more realistic. To be fair, the videos from different methods are shown in random orders. We report the average user preference scores (the average number of times, a user prefers our result to the competing one) in Table 2. We find that the users consider the videos generated by ours more realistic most of the time. This is consistent with the ACD results in Table 1, in which our method substantially outperforms the baselines. Visual Results. In Figure 6, we show the visual results (the expressions of happiness and surprise) generated by our method. We observe that our method is able to generate realistic motions while the facial identities are well-preserved. We hypothesize that the domain knowledge (3DMM) employed serves as a good prior which improves the generation. More visual results of different expressions and subjects are given on the project website. Evaluation on Human Pose Forecasting We compare our approach with VGAN [40], Mathieu et al. [17] and Villegas et al. [39] on the Penn Action Dataset. We produce the results of their models according to their papers or reference codes. For fair comparison, we generate videos with 32 generated frames using each method, and evaluate them starting from the first frame. Note that we train an individual VGAN for different action categories with randomly picked video clips from the dataset, while one network among all categories are trained for every other method. Both [39] and our method perform the generation based on the pre-trained LSTM provided by [39], and we train [39] through the same strategy of our motion forecasting network G M . Implementation. Following the settings of [39], we engage the feature similarity loss term L f eat for our motion forecasting network G M to capture the appearance (C 1 ) and structure (C 2 ) of the human action. 
This loss term is added to Equation 4, which is defined as: L f eat = C 1 (I t+k ) − C 1 (Ĩ t+k ) 2 2 + C 2 (I t+k ) − C 2 (Ĩ t+k ) 2 2 ,(15) where we use the last convolutional layer of the VGG16 Network [30] as C 1 , and the last layer of the Hourglass Network [19] as C 2 . Note that we compute the bounding box according to the group truth to crop the human of interest for each frame, and then scale it to 224 × 224 as the input of the VGG16. Results. We evaluate the predictions using Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE). Both metrics perform pixel-level analysis between the ground truth frames and the generated videos. We also report the Table 4. Quantitative results of ablation study. We report the ACD-* scores on the MUG Database [1] and MSE scores on the Penn Action Dataset [45]. results of our method and [39] using the condition motion maps computed from the ground truth joints (GT). The results are shown in Figure 7 and Table 3 respectively. From these two scores, we discover that the proposed method achieves better quantitative results which demonstrates the effectiveness of our algorithm. Figure 8 shows visual comparison of our method with [39]. We can find that the predicted future of our method is closer to the ground-true future. To be speclfic, our method yields more consistent motions and keeps human appearances as well. Due to space constraints, we ask the reader to refer to the project website for more side by side visual results. Ablation Study Our method consists of three main modules: residual learning, dense connections for the decoder and the two-stage generation schema. Without residual learning, our network decays to [39]. As shown in Section 4.2 and 4.3, ours outperforms [39] which demonstrates the effectiveness of residual learning. To verify the rest modules, we train one partial variant of G M , where the dense connections are not employed in the decoder f D . Then we evaluate three different settings of our method on both tasks: G M without dense connections, using only G M for gener- ation and our full model. Note that in order to get rid of the influence from the LSTM, we report the results using the conditional motion maps calculated from the ground truth on the Penn Action Dataset. Results are shown in Table 4. Our approach with more modules performs better than those with less components, which suggests the effectiveness of each part of our algorithm. Conclusions In this paper, we combine the benefits of high-level structural conditions and spatiotemporal generative networks for image-to-video translation by synthesizing videos in a generation and then refinement manner. We have applied this method to facial expression retargeting where we show that our method is able to engage domain knowledge for realistic video generation, and to human pose forecasting where we demonstrate that our method achieves higher performance than state-of-the-arts when generating videos involving large motion changes. We also incorporate residual learning and dense connections to produce highquality results. In the future, we plan to further explore the use of our framework for other image or video generation tasks.
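For reference, the two ACD variants used in the evaluation above can be computed directly from per-frame face embeddings. The sketch below assumes the embeddings have already been extracted (e.g., with OpenFace) as a (T, d) array for the generated video plus a single (d,) vector for the input frame; the embedding extraction itself is not shown.

```python
import numpy as np

def acd_i(frame_embeddings, input_embedding):
    """ACD-I: average L2 distance between each generated frame and the input frame."""
    return float(np.mean(np.linalg.norm(frame_embeddings - input_embedding, axis=1)))

def acd_c(frame_embeddings):
    """ACD-C: average pairwise L2 distance of the per-frame feature vectors."""
    diffs = frame_embeddings[:, None, :] - frame_embeddings[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    t = len(frame_embeddings)
    return float(dists.sum() / (t * (t - 1)))   # exclude the zero diagonal

emb = np.random.randn(32, 128)   # 32 generated frames, 128-d face features
ref = np.random.randn(128)       # input (target subject) frame
print(acd_i(emb, ref), acd_c(emb))
```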
4,706
1807.09951
2950503707
We consider the problem of image-to-video translation, where an input image is translated into an output video containing motions of a single object. Recent methods for such problems typically train transformation networks to generate future frames conditioned on the structure sequence. Parallel work has shown that short high-quality motions can be generated by spatiotemporal generative networks that leverage temporal knowledge from the training data. We combine the benefits of both approaches and propose a two-stage generation framework where videos are generated from structures and then refined by temporal signals. To model motions more efficiently, we train networks to learn residual motion between the current and future frames, which avoids learning motion-irrelevant details. We conduct extensive experiments on two image-to-video translation tasks: facial expression retargeting and human pose forecasting. Superior results over the state-of-the-art methods on both tasks demonstrate the effectiveness of our approach.
Other approaches explore learning temporal visual features from video with spatiotemporal networks. @cite_39 showed how 3D convolutional networks could be applied to human action recognition. @cite_13 employed spatiotemporal 3D convolutions to model features encoded in videos. @cite_35 built a model to generate scene dynamics with 3D generative adversarial networks. Our method differs from the two-stream model of @cite_35 in two aspects. First, our residual motion map disentangles motion from the input: the generated frame is conditioned on the current and future motion structures. Second, we can control object motions in future frames efficiently by using structure conditions. Therefore, our method can be applied to motion manipulation problems.
{ "abstract": [ "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.", "We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.", "We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods." ], "cite_N": [ "@cite_35", "@cite_13", "@cite_39" ], "mid": [ "2520707650", "2952633803", "1983364832" ] }
Learning to Forecast and Refine Residual Motion for Image-to-Video Generation
Recently, Generative Adversarial Networks (GANs) [7] have attracted a lot of research interests, as they can be utilized to synthesize realistic-looking images or videos for various vision applications [15,16,39,44,46,47]. Compared with image generation, synthesizing videos is more challenging, since the networks need to learn the appearance of objects as well as their motion models. In this paper, we study a form of classic problems in video generation that can be framed as image-to-video translation tasks, where a system receives one or more images as the input and translates it into a video containing realistic motions of a single object. Examples include facial expression retargeting [13,21,34], future prediction [37,38,40], and human pose forecasting [5,6,39]. One approach for the long-term future generation [39,41] is to train a transformation network that translates the input image into each future frame separately 3 The project website is publicly available at https://garyzhao.github.io/FRGAN. Our framework consists of three components: a condition generator, motion forecasting networks and refinement networks. Each part is explained in the corresponding section. conditioned by a sequence of structures. It suggests that it is beneficial to incorporate high-level structures during the generative process. In parallel, recent work [11,31,36,37,40] has shown that temporal visual features are important for video modeling. Such an approach produces temporally coherent motions with the help of spatiotemporal generative networks but is poor at long-term conditional motion generation since no high-level guidance is provided. In this paper, we combine the benefits of these two methods. Our approach includes two motion transformation networks as shown in Figure 1, where the entire video is synthesized in a generation and then refinement manner. In the generation stage, the motion forecasting networks observe a single frame from the input and generate all future frames individually, which are conditioned by the structure sequence predicted by a motion condition generator. This stage aims to generate a coarse video where the spatial structures of the motions are preserved. In the refinement stage, spatiotemporal motion refinement networks are used for refining the output from the previous stage. It performs the generation guided by temporal signals, which targets at producing temporally coherent motions. For more effective motion modeling, two transformation networks are trained in the residual space. Rather than learning the mapping from the structural conditions to motions directly, we force the networks to learn the differences between motions occurring in the current and future frames. The intuition is that learning only the residual motion avoids the redundant motion-irrelevant information, such as static backgrounds, which remains unchanged during the transformation. Moreover, we introduce a novel network architecture using dense connections for decoders. It encourages reusing spatially different features and thus yields realistic-looking results. We experiment on two tasks: facial expression retargeting and human pose forecasting. Success in either task requires reasoning realistic spatial structures as well as temporal semantics of the motions. Strong performances on both tasks demonstrate the effectiveness of our approach. 
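The two-stage pipeline outlined above can be summarized as: forecast each future frame independently from its structure condition, concatenate the coarse frames into a clip, and refine the clip with a spatiotemporal network. The following sketch reflects that flow; all three modules and the condition generator are placeholder callables, not the actual architectures.

```python
import torch

def generate_video(I_t, condition_generator, G_M, G_R, num_future=32):
    """Two-stage image-to-video generation: per-frame forecasting, then clip refinement."""
    maps = condition_generator(I_t, num_future)          # [M_t, M_{t+1}, ..., M_{t+K}]
    M_t = maps[0]
    # Stage 1: forecast every future frame individually from its structure condition.
    coarse = [G_M(I_t, M_t, M_tk) for M_tk in maps[1:]]
    coarse_clip = torch.stack(coarse, dim=2)             # (N, C, K, H, W)
    map_clip = torch.stack(list(maps[1:]), dim=2)
    # Stage 2: refine the concatenated clip with temporal signals.
    return G_R(coarse_clip, map_clip)
```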
In summary, our work makes the following contributions: -We devise a novel two-stage generation framework for image-to-video translation, where the future frames are generated according to the spatial structure sequence and then refined with temporal signals; -We investigate learning residual motion for video generation, which focuses on the motion-specific knowledge and avoids learning redundant or irrelevant details from the inputs; -Dense connections between layers of decoders are introduced to encourage spatially different feature reuse during the generation process, which yields more realistic-looking results; -We conduct extensive experimental validation on standard datasets which both quantitatively and subjectively compares our method with the stateof-the-arts to demonstrate the effectiveness of the proposed algorithm. Method As shown in Figure 1, our framework consists of three components: a motion condition generator G C , an image-to-image transformation network G M for forecasting motion conditioned by G C to each future frame individually, and a video-to-video transformation network G R which aims to refine the video clips concatenated from the output of G M . G C is a task-specific generator that produces a sequence of structures to condition the motion of each future frame. Two discriminators are utilized for adversarial learning, where D I differentiates real frames from generated ones and D V is employed for video clips. In the following sections, we explain how each component is designed respectively. Motion Condition Generators In this section, we illustrate how the motion condition generators G C are implemented for two image-to-video translation tasks: facial expression retargeting and human pose forecasting. One superiority of G C is that domain-specific knowledge can be leveraged to help the prediction of motion structures. Fig. 3. Illustration of our residual formulation. We disentangle the motion differences between the input and future frames into a residual motion map m t+k and a residual content map c t+k . Compared with the difference map directly computed from them, our formulation makes the learning task much easier. Figure 2, we utilize 3D Morphable Model (3DMM) [4] to model the sequence of expression motions. Given a video containing expression changes of an actor x, it can be parameterized with α x and (β t , β t+1 , . . . , β t+k ) using 3DMM, where α x represents the facial identity and β t is the expression coefficients in the frame t. In order to retarget the sequence of expressions to another actorx, we compute the facial identity vector αx and combine it with (β t , β t+1 , . . . , β t+k ) to reconstruct a new sequence of 3D face models with corresponding facial expressions. The conditional motion maps are the normal maps calculated from the 3D models respectively. "#$ "#$ Facial Expression Retargeting. As shown in Human Pose Forecasting. We follow [39] to implement an LSTM architecture [6] as the human pose predictor. The human pose of each frame is represented by the 2D coordinate positions of joints. The LSTM observes consecutive pose inputs to identify the type of motion, and then predicts the pose for the next period of time. An example is shown in Figure 2. Note that the motion map is calculated by mapping the output 2D coordinates from the LSTM to heatmaps and concatenating them on depth. 
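For the expression-retargeting condition generator described above, the identity coefficients fitted for the target actor are combined with the per-frame expression coefficients of the reference actor under the usual linear 3DMM formulation. The sketch below shows that recombination only; the basis matrices and the normal-map rendering step are placeholders, since those details depend on the particular morphable model.

```python
import numpy as np

def retarget_expression(mean_shape, id_basis, exp_basis, alpha_target, betas_source):
    """Rebuild per-frame 3D face shapes: target identity + source expression coefficients."""
    shapes = []
    for beta_t in betas_source:                      # expression coefficients per frame
        shape = mean_shape + id_basis @ alpha_target + exp_basis @ beta_t
        shapes.append(shape.reshape(-1, 3))          # (num_vertices, 3)
    return shapes                                    # normal maps are rendered from these

# Toy dimensions: 100 identity and 29 expression coefficients, 5000 vertices.
n_vert, n_id, n_exp = 5000, 100, 29
mean = np.zeros(3 * n_vert)
U_id = np.random.randn(3 * n_vert, n_id) * 0.01
U_exp = np.random.randn(3 * n_vert, n_exp) * 0.01
alpha = np.random.randn(n_id)                        # fitted identity of the target actor
betas = [np.random.randn(n_exp) for _ in range(32)]  # expressions from the reference video
frames = retarget_expression(mean, U_id, U_exp, alpha, betas)
```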
Motion Forecasting Networks Starting from the frame I t at time t, our network synthesizes the future frame I t+k by predicting the residual motion between them. Previous work [16,28] implemented this idea by letting the network estimate a difference map between the input and output, which can be denoted as: I t+k = I t + G M (I t |M t , M t+k ),(1) where M t is the motion map which conditions I t . However, this straightforward formulation easily fails when employed to handle videos including large motions, since learning to generate a combination of residual changes from both dynamic and static contents in a single map is quite difficult. Therefore, we introduce an enhanced formulation where the transformation is disentangled into a residual I t+k = m t+k c t+k residual motion + (1 − m t+k ) I t static content ,(2) where both m t+k and c t+k are predicted by G M , and is element-wise multiplication. Intuitively, m t+k ∈ [0, 1] can be viewed as a spatial mask that highlights where the motion occurs. c t+k is the content of the residual motions. By summing the residual motion with the static content, we can obtain the final result. Note that as visualized in Figure 3, m t+k forces G M to reuse the static part from the input and concentrate on inferring dynamic motions. Architecture. Figure 4 shows the architecture of G M , which is inspired by the visual-structure analogy learning [27]. The future frame I t+k can be generated by transferring the structure differences from M t to M t+k to the input frame I t . We use a motion encoder f M , an image encoder f I and a residual content decoder f D to model this concept. And the residual motion is learned by: ∆(I t+k , I t ) = f D (f M (M t+k ) − f M (M t ) + f I (I t )).(3) Intuitively, f M aims to identify key motion features from the motion map containing high-level structural information; f I learns to map the appearance model of the input into an embedding space, where the motion feature transformations can be easily imposed to generate the residual motion; f D learns to decode the embedding. Note that we add skip connections [20] between f I and f D , which makes it easier for f D to reuse features of static objects learned from f I . Huang et al. [9,10] introduce dense connections to enhance feature propagation and reuse in the network. We argue that this is an appealing property for motion transformation networks as well, since in most cases the output frame shares similar high-level structure with the input frame. Especially, dense connections make it easy for the network to reuse features of different spatial positions when large motions are involved in the image. The decoder of our network thus consists of multiple dense connections, each of which connects different dense blocks. A dense block contains two 3 × 3 convolutional layers. The output of a dense block is connected to the first convolutional layers located in all subsequent blocks in the network. As dense blocks have different feature resolutions, we upsample feature maps with lower resolutions when we use them as inputs into higher resolution layers. Training Details. Given a video clip, we train our network to perform random jumps in time to learn forecasting motion changes. To be specific, for every iteration at training time, we sample a frame I t and its corresponding motion map M t given by G C at time t, and then force it to generate frame I t+k given motion map M t+k . 
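The densely connected decoder described above can be sketched as follows: each dense block receives its direct input plus the (upsampled, when resolutions differ) outputs of all earlier blocks. The block count and channel widths below are illustrative assumptions only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseBlock(nn.Module):
    """Two 3x3 convolutional layers, as described for the decoder's dense blocks."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class DenselyConnectedDecoder(nn.Module):
    def __init__(self, in_ch=64, ch=32, num_blocks=3):
        super().__init__()
        # Block i receives its direct input plus the outputs of all i previous blocks.
        chans = [(in_ch if i == 0 else ch) + i * ch for i in range(num_blocks)]
        self.blocks = nn.ModuleList([DenseBlock(c, ch) for c in chans])

    def forward(self, x):
        feats = []                                    # outputs of earlier blocks
        for block in self.blocks:
            # Reuse all earlier outputs, upsampled to the current resolution.
            reused = [F.interpolate(f, size=x.shape[-2:]) for f in feats]
            out = block(torch.cat([x] + reused, dim=1))
            feats.append(out)
            x = F.interpolate(out, scale_factor=2)    # next block works at 2x resolution
        return out

dec = DenselyConnectedDecoder()
print(dec(torch.randn(1, 64, 8, 8)).shape)            # torch.Size([1, 32, 32, 32])
```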
Note that in order to let our network perform learning in the entire residual motion space, k is also randomly defined for each iteration. On the other hand, learning with jumps in time can prevent the network from falling into suboptimal parameters as well [39]. Our network is trained to minimize the following objective function: L G M = L rec (I t+k ,Ĩ t+k ) + L r (m t+k ) + L gen .(4) L rec is the reconstruction loss defined in the image space which measures the pixel-wise differences between the predicted and target frames: L rec (I t+k ,Ĩ t+k ) = I t+k −Ĩ t+k 1 ,(5) whereĨ t+k denotes the frame predicted by G M . The reconstruction loss intuitively offers guidance for our network in making a rough prediction that preserves most content information of the target image. More importantly, it leads the result to share similar structure information with the input image. L r is an L-1 norm regularization term defined as: L r (m t+k ) = m t+k 1 ,(6) where m t+k is the residual motion map predicted by G M . It forces the predicted motion changes to be sparse, since dynamic motions always occur in local positions of each frame while the static parts (e.g., background objects) should be unchanged. L gen is the adversarial loss that enables our model to generate realistic frames and reduce blurs, and it is defined as: L gen = −D I ([Ĩ t+k , M t+k ]),(7) where D I is the discriminator for images in adversarial learning. We concatenate the output of G M and motion map M t+k as the input of D I and make the discriminator conditioned on the motion [18]. Note that we follow WGAN [3,8] to train D I to measure the Wasserstein distance between distributions of the real images and results generated from G M . During the optimization of D I , the following loss function is minimized: 5. Architecture of our motion refinement network GR. The network receives temporally concatenated frames generated by GM together with their corresponding conditional motion map as the input and aims to refine the video clip to be more temporally coherent. It performs learning in the residual motion space as well. L D I = D I ([Ĩ t+k , M t+k ]) − D I ([I t+k , M t+k ]) + λ · L gp ,(8)L gp = ( ∇ [Î t+k ,M t+k ] D I ([Î t+k , M t+k ]) 2 − 1) 2 ,(9) where λ is experimentally set to 10. L gp is the gradient penalty term proposed by [8] whereÎ t+k is sampled from the interpolation of I t+k andĨ t+k , and we extend it to be conditioned on the motion M t+k as well. The adversarial loss in combination with the rest of loss terms allows our network to generate highquality frames given the motion conditions. Motion Refinement Networks LetṼ t = [Ĩ t+1 ,Ĩ t+2 , . . . ,Ĩ t+K ] be the video clip with length K temporally concatenated from the outputs of G M . The goal of the motion refinement network G R is to refineṼ t to be more temporally coherent, which is achieved by performing pixel-level refinement with the help of spatiotemporal generative networks. We extends Equation 2 by adding one additional temporal dimension to let G R estimate the residual between the real video clip V t andṼ t , which is defined as: V t = m t c t + (1 − m t ) Ṽ t ,(10) where m t is a spatiotemporal mask which selects either to be refined for each pixel location and timestep, while c t produces a spatiotemporal cuboid which stands for the refined motion content masked by m t . Architecture. Our motion refinement network roughly follows the architectural guidelines of [40]. 
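Putting the three generator-side terms of Eq. (4)-(7) together gives a compact training objective for G_M: an L1 reconstruction term, an L1 sparsity penalty on the residual mask, and the conditional WGAN generator term. The sketch below averages rather than sums the norms and weights all terms equally, which is an assumption since the text does not state relative weights.

```python
import torch

def forecaster_loss(I_true, I_pred, mask, M_cond, D_I):
    """Eq. (4): L_rec + L_r + L_gen for the motion forecasting network G_M."""
    l_rec = (I_true - I_pred).abs().mean()                    # Eq. (5): pixel-wise L1
    l_r = mask.abs().mean()                                   # Eq. (6): sparse residual mask
    l_gen = -D_I(torch.cat([I_pred, M_cond], dim=1)).mean()   # Eq. (7): WGAN generator term
    return l_rec + l_r + l_gen
```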
As shown in Figure 5, we do not use pooling layers, instead strided and fractionally strided convolutions are utilized for in-network downsampling and upsampling. We also add skip connections to encourage feature reuse. Note that we concatenate the frames with their corresponding conditional motion maps as the inputs to guide the refinement. Training Details. The key requirement for G R is that the refined video should be temporal coherent in motion while preserving the annotation information from the input. To this end, we propose to train the network by minimizing a combination of three losses which is similar to Equation 4: L G R = L rec (V t ,V t ) + L r (m t ) +L gen ,(11) whereV t is the output of G R . L rec and L r share the same definition with Equation 5 and 6 respectively. L rec is the reconstruction loss that aims at refining the synthesized video towards the ground truth with minimal error. Compared with the self-regularization loss proposed by [29], we argue that the sparse regularization term L r is also efficient to preserve the annotation information (e.g., the facial identity and the type of pose) during the refinement, since it force the network to only modify the essential pixels.L gen is the adversarial loss: L gen = −D V ([V t , M t ]) − 1 K K i=1 D I ([Ī t+i , M t+i ]),(12) where M t = [M t+1 , M t+2 , . . . , M t+K ] is the temporally concatenated condition motion maps, andĪ t+i is the i-th frame ofV t . In the adversarial learning term L gen , both D I and D V play the role to judge whether the input is a real video clip or not, providing criticisms to G R . The image discriminator D I criticizes G R based on individual frames, which is trained to determine if each frame is sampled from a real video clip. At the same time, D V provides criticisms to G R based on the whole video clip, which takes a fixed length video clip as the input and judges if a video clip is sampled from a real video as well as evaluates the motions contained. As suggested by [37], although D V alone should be sufficient, D I significantly improves the convergence and the final results of G R . We follow the same strategy as introduced in Equation 8 to optimize D I . Note that in each iteration, one pair of real and generated frames is randomly sampled from V t andV t to train D I . On the other hand, training D V is also based on the WGAN framework, where we extend it to spatiotemporal inputs. Therefore, D V is optimized by minimizing the following loss function: L D V = D V ([V t , M t ]) − D V ([V t , M t ]) + λ · L gp ,(13)L gp = ( ∇ [Vt,Mt] D V ([V t , M t ]) 2 − 1) 2 ,(14) whereV t is sampled from the interpolation of V t andV t . Note that G R , D I and D V are trained alternatively. To be specific, we update D I and D V in one step while fixing G R ; in the alternating step, we fix D I and D V while updating G R . Experiments We perform experiments on two image-to-video translation tasks: facial expression retargeting and human pose forecasting. For facial expression retargeting, we demonstrate that our method is able to combine domain-specific knowledge, such as 3DMM, to generate realistic-looking results. For human pose forecasting, experimental results show that our method yields high-quality videos when applied for video generation tasks containing complex motion changes. Settings and Databases To train our networks, we use Adam [12] for optimization with a learning rate of 0.0001 and momentums of 0.0 and 0.9. 
We first train the forecasting networks, and then train the refinement networks using the generated coarse frames. The batch size is set to 32 for all networks. Due to space constraints, we ask the reader to refer to the project website for the details of the network designs. We use the MUG Facial Expression Database [1] to evaluate our approach on facial expression retargeting. This dataset is composed of 86 subjects (35 women and 51 men). We crop the face regions with regards to the landmark ground truth and scale them to 96 × 96. To train our networks, we use only the sequences representing one of the six facial expressions: anger, fear, disgust, happiness, sadness, and surprise. We evenly split the database into three groups according to the subjects. Two groups are used for training G M and G R respectively, and the results are evaluated on the last one. The 3D Basel Face Model [22] serves as the morphable model to fit the facial identities and expressions for the condition generator G C . We use [48] to compute the 3DMM parameters for each frame. Note that we train G R to refine the video clips every 32 frames. The Penn Action Dataset [45] consists of 2326 video sequences of 15 different human actions, which is used for evaluating our method on human pose forecasting. For each action sequence in the dataset, 13 human joint annotations are provided as the ground truth. To remove very noisy joint ground-truth in the dataset, we follow the setting of [39] to sub-sample the actions. Therefore, 8 actions including baseball pitch, baseball swing, clean and jerk, golf swing, jumping jacks, jump rope, tennis forehand, and tennis serve are used for training our networks. We crop video frames based on temporal tubes to remove as much background as possible while ensuring the human actions are in all frames, and then scale each cropped frame to 64 × 64. We evenly split the standard dataset into three sets. G M and G R are trained in the first two sets respectively, while we evaluate our models in the last set. We employ the same strategy as [39] to train the LSTM pose generator. It is trained to observe 10 inputs and predict 32 steps. Note that G R is trained to refine the video clips with the length of 16. Evaluation on Facial Expression Retargeting We compare our method to MCNet [38], MoCoGAN [37] and Villegas et al. [39] on the MUG Database. For each facial expression, we randomly select one video as the reference, and retarget it to all the subjects in the testing set with different methods. Each method only observes the input frame of the target subject, and performs the generation based on it. Our method and [39] share the same 3DMMbased condition generator as introduced in Section 3.1. Quantitative Comparison. The quality of a generated video are measured by the Average Content Distance (ACD) as introduced in [37]. For each generated video, we make use of OpenFace [2], which outperforms human performance in the face recognition task, to measure the video quality. OpenFace produces a Fig. 6. Examples of facial expression retargeting using our algorithm on the MUG Database [1]. We show two expressions as an illustration: (a) happiness and (b) surprise. The reference video and the input target images are highlighted in green, while the generated frames are highlighted in red. The results are sampled every 8 frames. Table 1. Video generation quality comparison on the MUG Dataset [1]. We also compute the ACD-* score for the training set, which is the reference. 
Methods ACD-I ACD-C MCNet [38] 0.545 0.322 Villegas et al. [39] [37] 62.5 / 37.5 feature vector for each frame, and then the ACD is calculated by measuring the L-2 distance of these vectors. We introduce two variants of the ACD in this experiment. The ACD-I is the average distance between each generated frame and the original input frame. It aims to judge if the facial identity is wellpreserved in the generated video. The ACD-C is the average pairwise distance of the per-frame feature vectors in the generated video. It measures the content consistency of the generated video. Table 1 summarizes the comparison results. From the table, we find that our method achieves ACD-* scores both lower than 0.2, which is substantially better than the baselines. One interesting observation is that [39] has the worst ACD-I but its ACD-C is the second best. We argue that this is due to the high-level information offered by our 3DMM-based condition generator, which plays a vital role for producing content consistency results. Our method outperforms other state-of-the-arts, since we utilize both domain knowledge (3DMM) and temporal signals for video generation. We show that it is greatly beneficial to incorporate both factors into the generative process. We also conduct a user study to quantitatively compare these methods. For each method, we randomly select 10 videos for each expression. We then randomly pair the videos generated by ours with the videos from one of the competing methods to form 54 questions. For each question, 3 users are asked to select the video which is more realistic. To be fair, the videos from different methods are shown in random orders. We report the average user preference scores (the average number of times, a user prefers our result to the competing one) in Table 2. We find that the users consider the videos generated by ours more realistic most of the time. This is consistent with the ACD results in Table 1, in which our method substantially outperforms the baselines. Visual Results. In Figure 6, we show the visual results (the expressions of happiness and surprise) generated by our method. We observe that our method is able to generate realistic motions while the facial identities are well-preserved. We hypothesize that the domain knowledge (3DMM) employed serves as a good prior which improves the generation. More visual results of different expressions and subjects are given on the project website. Evaluation on Human Pose Forecasting We compare our approach with VGAN [40], Mathieu et al. [17] and Villegas et al. [39] on the Penn Action Dataset. We produce the results of their models according to their papers or reference codes. For fair comparison, we generate videos with 32 generated frames using each method, and evaluate them starting from the first frame. Note that we train an individual VGAN for different action categories with randomly picked video clips from the dataset, while one network among all categories are trained for every other method. Both [39] and our method perform the generation based on the pre-trained LSTM provided by [39], and we train [39] through the same strategy of our motion forecasting network G M . Implementation. Following the settings of [39], we engage the feature similarity loss term L f eat for our motion forecasting network G M to capture the appearance (C 1 ) and structure (C 2 ) of the human action. 
This loss term is added to Equation 4, which is defined as: L f eat = C 1 (I t+k ) − C 1 (Ĩ t+k ) 2 2 + C 2 (I t+k ) − C 2 (Ĩ t+k ) 2 2 ,(15) where we use the last convolutional layer of the VGG16 Network [30] as C 1 , and the last layer of the Hourglass Network [19] as C 2 . Note that we compute the bounding box according to the group truth to crop the human of interest for each frame, and then scale it to 224 × 224 as the input of the VGG16. Results. We evaluate the predictions using Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE). Both metrics perform pixel-level analysis between the ground truth frames and the generated videos. We also report the Table 4. Quantitative results of ablation study. We report the ACD-* scores on the MUG Database [1] and MSE scores on the Penn Action Dataset [45]. results of our method and [39] using the condition motion maps computed from the ground truth joints (GT). The results are shown in Figure 7 and Table 3 respectively. From these two scores, we discover that the proposed method achieves better quantitative results which demonstrates the effectiveness of our algorithm. Figure 8 shows visual comparison of our method with [39]. We can find that the predicted future of our method is closer to the ground-true future. To be speclfic, our method yields more consistent motions and keeps human appearances as well. Due to space constraints, we ask the reader to refer to the project website for more side by side visual results. Ablation Study Our method consists of three main modules: residual learning, dense connections for the decoder and the two-stage generation schema. Without residual learning, our network decays to [39]. As shown in Section 4.2 and 4.3, ours outperforms [39] which demonstrates the effectiveness of residual learning. To verify the rest modules, we train one partial variant of G M , where the dense connections are not employed in the decoder f D . Then we evaluate three different settings of our method on both tasks: G M without dense connections, using only G M for gener- ation and our full model. Note that in order to get rid of the influence from the LSTM, we report the results using the conditional motion maps calculated from the ground truth on the Penn Action Dataset. Results are shown in Table 4. Our approach with more modules performs better than those with less components, which suggests the effectiveness of each part of our algorithm. Conclusions In this paper, we combine the benefits of high-level structural conditions and spatiotemporal generative networks for image-to-video translation by synthesizing videos in a generation and then refinement manner. We have applied this method to facial expression retargeting where we show that our method is able to engage domain knowledge for realistic video generation, and to human pose forecasting where we demonstrate that our method achieves higher performance than state-of-the-arts when generating videos involving large motion changes. We also incorporate residual learning and dense connections to produce highquality results. In the future, we plan to further explore the use of our framework for other image or video generation tasks.
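The feature similarity term of Eq. (15) above compares deep features of the predicted and target frames. The sketch below uses torchvision's VGG16 feature stack as C1 and a placeholder callable for the Hourglass-based C2; pretrained weight loading and the exact layer selection are omitted, so the configuration shown is an assumption rather than the authors' setup.

```python
import torch
import torch.nn as nn
import torchvision

class FeatureSimilarityLoss(nn.Module):
    def __init__(self, c2=None):
        super().__init__()
        # C1: VGG16 convolutional features (pretrained weights would be loaded in practice).
        self.c1 = torchvision.models.vgg16().features.eval()
        for p in self.c1.parameters():
            p.requires_grad_(False)
        # C2: structure network (Hourglass in the text); identity placeholder here.
        self.c2 = c2 if c2 is not None else nn.Identity()

    def forward(self, target, pred):
        l_c1 = ((self.c1(target) - self.c1(pred)) ** 2).mean()
        l_c2 = ((self.c2(target) - self.c2(pred)) ** 2).mean()
        return l_c1 + l_c2

loss_fn = FeatureSimilarityLoss()
x, y = torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224)
print(loss_fn(x, y).item())
```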
4,706
1807.09161
2964281094
In recent years, with the popularization of deep learning frameworks and large datasets, researchers have started parallelizing their models in order to train faster. This is crucially important, because they typically explore many hyperparameters in order to find the best ones for their applications. This process is time-consuming and, consequently, speeding up training improves productivity. One approach to parallelizing deep learning models, followed by many researchers, is based on weak scaling: the minibatches increase in size as new GPUs are added to the system. In addition, new learning rate schedules have been proposed to fix optimization issues that occur with large minibatch sizes. In this paper, however, we show that the recommendations provided by recent work do not apply to models that lack large datasets. In fact, we argue in favor of using strong scaling to achieve reliable performance in such cases. We evaluated our approach with up to 32 GPUs and show that weak scaling not only fails to match the accuracy of the sequential model, it also fails to converge most of the time. Meanwhile, strong scaling has good scalability while having exactly the same accuracy as a sequential implementation.
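To make the weak-/strong-scaling distinction above concrete, the snippet below shows how the per-GPU and global minibatch sizes behave in the two regimes; the base size of 256 is only an illustrative value.

```python
def batch_sizes(num_gpus, base_batch=256):
    weak = {"per_gpu": base_batch, "global": base_batch * num_gpus}     # minibatch grows
    strong = {"per_gpu": base_batch // num_gpus, "global": base_batch}  # minibatch fixed
    return weak, strong

for n in (1, 4, 32):
    print(n, batch_sizes(n))
```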
Goyal et al. @cite_9 present a strategy to train ResNet-50 @cite_0 in one hour using the ImageNet dataset. They employed a distributed synchronous Stochastic Gradient Descent (SGD) approach with up to 256 GPUs. They used a large minibatch---8192 examples---and reached an accuracy similar to that obtained with a much smaller minibatch of 256 examples.
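The large-minibatch recipe summarized above relies on a linear learning-rate scaling rule with a gradual warmup. A sketch of such a schedule is given below; the base learning rate, reference batch size and warmup length are illustrative values, not necessarily those of the cited work.

```python
def scaled_learning_rate(global_batch, epoch, base_lr=0.1, base_batch=256, warmup_epochs=5):
    """Linear scaling rule: the learning rate grows with the minibatch, after a warmup."""
    target_lr = base_lr * global_batch / base_batch
    if epoch < warmup_epochs:
        # Ramp linearly from base_lr to the scaled target during warmup.
        return base_lr + (target_lr - base_lr) * (epoch + 1) / warmup_epochs
    return target_lr

for e in range(8):
    print(e, round(scaled_learning_rate(8192, e), 3))
```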
{ "abstract": [ "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "Deep learning thrives with large neural networks and large datasets. However, larger networks and larger datasets result in longer training times that impede research and development progress. Distributed synchronous SGD offers a potential solution to this problem by dividing SGD minibatches over a pool of parallel workers. Yet to make this scheme efficient, the per-worker workload must be large, which implies nontrivial growth in the SGD minibatch size. In this paper, we empirically show that on the ImageNet dataset large minibatches cause optimization difficulties, but when these are addressed the trained networks exhibit good generalization. Specifically, we show no loss of accuracy when training with large minibatch sizes up to 8192 images. To achieve this result, we adopt a hyper-parameter-free linear scaling rule for adjusting learning rates as a function of minibatch size and develop a new warmup scheme that overcomes optimization challenges early in training. With these simple techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of 8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using commodity hardware, our implementation achieves 90 scaling efficiency when moving from 8 to 256 GPUs. Our findings enable training visual recognition models on internet-scale data with high efficiency." ], "cite_N": [ "@cite_0", "@cite_9" ], "mid": [ "2949650786", "2622263826" ] }
0
1807.09161
2964281094
In recent years, with the popularization of deep learning frameworks and large datasets, researchers have started parallelizing their models in order to train faster. This is crucially important, because they typically explore many hyperparameters in order to find the best ones for their applications. This process is time-consuming and, consequently, speeding up training improves productivity. One approach to parallelizing deep learning models, followed by many researchers, is based on weak scaling: the minibatches increase in size as new GPUs are added to the system. In addition, new learning rate schedules have been proposed to fix optimization issues that occur with large minibatch sizes. In this paper, however, we show that the recommendations provided by recent work do not apply to models that lack large datasets. In fact, we argue in favor of using strong scaling to achieve reliable performance in such cases. We evaluated our approach with up to 32 GPUs and show that weak scaling not only fails to match the accuracy of the sequential model, it also fails to converge most of the time. Meanwhile, strong scaling has good scalability while having exactly the same accuracy as a sequential implementation.
TensorFlow also has built-in parallel execution capabilities @cite_1 . In this framework, the programmer specifies a set of operations to be placed on the available devices. These devices can be processors and accelerators (GPUs and TPUs) distributed across a cluster of computers. A common strategy is to split the data and processing units into parameter servers and worker nodes. The parameter servers are responsible for holding the network weights and other parameters, while the workers are responsible for computing the forward and backward passes of the model.
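The parameter-server pattern described above can be illustrated with a small, framework-free sketch: a server object holds the weights, and each worker pulls them, computes a gradient on its data shard, and pushes an update back. This is a conceptual illustration only, not the TensorFlow API.

```python
import numpy as np

class ParameterServer:
    def __init__(self, dim, lr=0.1):
        self.w, self.lr = np.zeros(dim), lr
    def pull(self):
        return self.w.copy()
    def push(self, grad):
        self.w -= self.lr * grad          # apply a worker's gradient update

def worker_step(server, x_shard, y_shard):
    w = server.pull()                      # fetch the current parameters
    pred = x_shard @ w
    grad = 2 * x_shard.T @ (pred - y_shard) / len(y_shard)   # linear-regression gradient
    server.push(grad)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(400, 5)), rng.normal(size=400)
ps = ParameterServer(dim=5)
for shard in np.array_split(np.arange(400), 4):   # four "workers"
    worker_step(ps, X[shard], y[shard])
print(ps.w)
```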
{ "abstract": [ "TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. Tensor-Flow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom-designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexibility to the application developer: whereas in previous \"parameter server\" designs the management of shared state is built into the system, TensorFlow enables developers to experiment with novel optimizations and training algorithms. TensorFlow supports a variety of applications, with a focus on training and inference on deep neural networks. Several Google services use TensorFlow in production, we have released it as an open-source project, and it has become widely used for machine learning research. In this paper, we describe the TensorFlow dataflow model and demonstrate the compelling performance that TensorFlow achieves for several real-world applications." ], "cite_N": [ "@cite_1" ], "mid": [ "2402144811" ] }
0
1807.09161
2964281094
In recent years, with the popularization of deep learning frameworks and large datasets, researchers have started parallelizing their models in order to train faster. This is crucially important, because they typically explore many hyperparameters in order to find the best ones for their applications. This process is time-consuming and, consequently, speeding up training improves productivity. One approach to parallelizing deep learning models, followed by many researchers, is based on weak scaling: the minibatches increase in size as new GPUs are added to the system. In addition, new learning rate schedules have been proposed to fix optimization issues that occur with large minibatch sizes. In this paper, however, we show that the recommendations provided by recent work do not apply to models that lack large datasets. In fact, we argue in favor of using strong scaling to achieve reliable performance in such cases. We evaluated our approach with up to 32 GPUs and show that weak scaling not only fails to match the accuracy of the sequential model, it also fails to converge most of the time. Meanwhile, strong scaling has good scalability while having exactly the same accuracy as a sequential implementation.
The TensorFlow framework has grown greatly in popularity. However, its parallelization capabilities have been criticized for their unnecessary complexity. One alternative was proposed by Sergeev and Balso @cite_10 . They used a simplified MPI-based strategy to parallelize and run distributed SGD models. A key feature of the approach is that it combines small messages so as to make better use of the network.
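The sketch below shows the commonly documented Horovod usage pattern (the model and dataset are placeholders); the wrapped optimizer averages gradients across workers with ring all-reduce, and Horovod's tensor fusion combines small messages before communication, as noted above. The linear learning-rate scaling shown is part of the weak-scaling recipe that the abstract above questions for small datasets.

```python
# Minimal Horovod sketch (model/dataset are placeholders).
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Pin each process to one GPU based on its local rank.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
# DistributedOptimizer averages gradients across ranks with ring all-reduce.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
model.compile(optimizer=opt, loss="sparse_categorical_crossentropy")

callbacks = [
    # Broadcast initial weights from rank 0 so all workers start identically.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]
# model.fit(dataset, callbacks=callbacks, verbose=1 if hvd.rank() == 0 else 0)
```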
{ "abstract": [ "Training modern deep learning models requires large amounts of computation, often provided by GPUs. Scaling computation from one GPU to many can enable much faster training and research progress but entails two complications. First, the training library must support inter-GPU communication. Depending on the particular methods employed, this communication may entail anywhere from negligible to significant overhead. Second, the user must modify his or her training code to take advantage of inter-GPU communication. Depending on the training library's API, the modification required may be either significant or minimal. Existing methods for enabling multi-GPU training under the TensorFlow library entail non-negligible communication overhead and require users to heavily modify their model-building code, leading many researchers to avoid the whole mess and stick with slower single-GPU training. In this paper we introduce Horovod, an open source library that improves on both obstructions to scaling: it employs efficient inter-GPU communication via ring reduction and requires only a few lines of modification to user code, enabling faster, easier distributed training in TensorFlow. Horovod is available under the Apache 2.0 license at this https URL" ], "cite_N": [ "@cite_10" ], "mid": [ "2787998955" ] }
0
1807.09161
2964281094
In recent years, with the popularization of deep learning frameworks and large datasets, researchers have started parallelizing their models in order to train faster. This is crucially important, because they typically explore many hyperparameters in order to find the best ones for their applications. This process is time consuming and, consequently, speeding up training improves productivity. One approach to parallelizing deep learning models, followed by many researchers, is based on weak scaling: the minibatches grow in size as new GPUs are added to the system. In addition, new learning rate schedules have been proposed to fix optimization issues that occur with large minibatch sizes. In this paper, however, we show that the recommendations provided by recent work do not apply to models that lack large datasets. In fact, we argue in favor of using strong scaling to achieve reliable performance in such cases. We evaluated our approach with up to 32 GPUs and show that weak scaling not only fails to match the accuracy of the sequential model, it also fails to converge most of the time. Meanwhile, strong scaling delivers good scalability while achieving exactly the same accuracy as a sequential implementation.
In medical imaging, the current state of the art for the classification and segmentation of 3D exams is 3D deep learning models @cite_15 . However, the well-known computational burden of such models, due to 3D convolutions, often hinders the development of real-time systems for aiding in the diagnosis of 3D exams. This creates opportunities for methods that aim to provide time-efficient support for processing 3D models in parallel.
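To give a sense of the overhead mentioned above, the following back-of-the-envelope sketch (our illustration, not from the cited work) compares the parameter count and multiply-accumulate cost of a single 3D convolution against a 2D counterpart.

```python
# Back-of-the-envelope cost of a 2D vs. 3D convolution layer (illustration only).

def conv_cost(in_ch, out_ch, kernel, out_spatial):
    """Parameters and multiply-accumulates for a dense convolution.
    kernel and out_spatial are tuples, e.g. (3, 3) or (3, 3, 3)."""
    k_elems = 1
    for k in kernel:
        k_elems *= k
    out_elems = 1
    for s in out_spatial:
        out_elems *= s
    params = in_ch * out_ch * k_elems
    macs = params * out_elems
    return params, macs

# A 3x3 conv on a 128x128 slice vs. a 3x3x3 conv on a 64-slice volume.
print(conv_cost(32, 64, (3, 3), (128, 128)))         # ~18K params, ~302M MACs
print(conv_cost(32, 64, (3, 3, 3), (64, 128, 128)))  # ~55K params, ~58G MACs
```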
{ "abstract": [ "This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement." ], "cite_N": [ "@cite_15" ], "mid": [ "2533800772" ] }
0
1807.08920
2884188791
Residual networks, which use a residual unit to supplement the identity mappings, enable very deep convolutional architectures to operate well; however, the residual architecture has been shown to be diverse and redundant, which may lead to inefficient modeling. In this work, we propose a competitive squeeze-excitation (SE) mechanism for the residual network. In this structure, the re-scaling value for each channel is determined jointly by the residual and identity mappings, a design that expands the meaning of channel relationship modeling in residual blocks. Modeling the competition between residual and identity mappings causes the identity flow to control the complement that the residual feature maps provide for it. Furthermore, we design a novel inner-imaging competitive SE block to reduce the parameter consumption and re-image the global features of the intermediate network structure; by using the inner-imaging mechanism, we can model the channel-wise relations with convolution in the spatial dimension. We carry out experiments on the CIFAR, SVHN, and ImageNet datasets, and the proposed method achieves results competitive with the state of the art.
ResNet @cite_46 has become popular by virtue of how it eases the training of deep models. Numerous works built on it improve performance by expanding its structure @cite_18 @cite_43 @cite_39 @cite_19 or use its interpretation as an ordinary differential equation to explore reversible forms @cite_23 @cite_36 . Because ResNet is internally diverse even without operations such as "drop-path" @cite_33 , and has been shown to be structurally redundant @cite_14 , destructive approaches can improve its efficiency and enrich its structural representation by means of policy learning @cite_29 @cite_15 or dynamic exit strategies @cite_10 @cite_17 .
{ "abstract": [ "", "In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.", "We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer.", "Increasing depth and complexity in convolutional neural networks has enabled significant progress in visual perception tasks. However, incremental improvements in accuracy are often accompanied by exponentially deeper models that push the computational limits of modern hardware. These incremental improvements in accuracy imply that only a small fraction of the inputs require the additional model complexity. As a consequence, for any given image it is possible to bypass multiple stages of computation to reduce the cost of forward inference without affecting accuracy. We exploit this simple observation by learning to dynamically route computation through a convolutional network. We introduce dynamically routed networks (SkipNets) by adding gating layers that route images through existing convolutional networks and formulate the routing problem in the context of sequential decision making. We propose a hybrid learning algorithm which combines supervised learning and reinforcement learning to address the challenges of inherently non-differentiable routing decisions. 
We show SkipNet reduces computation by 30 - 90 while preserving the accuracy of the original model on four benchmark datasets. We compare SkipNet with SACT and ACT to show SkipNet achieves better accuracy with lower computation.", "We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.", "Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20 on average, going as high as 36 for some images, while maintaining the same 76.4 top-1 accuracy on ImageNet.", "We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call \"cardinality\" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. 
The code and models are publicly available online.", "A number of studies have shown that increasing the depth or width of convolutional networks is a rewarding approach to improve the performance of image recognition. In our study, however, we observed difficulties along both directions. On one hand, the pursuit for very deep networks is met with a diminishing return and increased training difficulty, on the other hand, widening a network would result in a quadratic growth in both computational cost and memory demand. These difficulties motivate us to explore structural diversity in designing deep networks, a new dimension beyond just depth and width. Specifically, we present a new family of modules, namely the PolyInception, which can be flexibly inserted in isolation or in a composition as replacements of different parts of a network. Choosing PolyInception modules with the guidance of architectural efficiency can improve the expressive power while preserving comparable computational cost. The Very Deep PolyNet, designed following this direction, demonstrates substantial improvements over the state-of-the-art on the ILSVRC 2012 benchmark. Compared to Inception-ResNet-v2, it reduces the top-5 validation error on single crops from 4.9 to 4.25 , and that on multi-crops from 3.7 to 3.45 .", "Deep convolutional neural networks (DCNNs) have shown remarkable performance in image classification tasks in recent years. Generally, deep neural network architectures are stacks consisting of a large number of convolutional layers, and they perform downsampling along the spatial dimension via pooling to reduce memory usage. Concurrently, the feature map dimension (i.e., the number of channels) is sharply increased at downsampling locations, which is essential to ensure effective performance because it increases the diversity of high-level attributes. This also applies to residual networks and is very closely related to their performance. In this research, instead of sharply increasing the feature map dimension at units that perform downsampling, we gradually increase the feature map dimension at all units to involve as many locations as possible. This design, which is discussed in depth together with our new insights, has proven to be an effective means of improving generalization ability. Furthermore, we propose a novel residual unit capable of further improving the classification accuracy with our new network architecture. Experiments on benchmark CIFAR-10, CIFAR-100, and ImageNet datasets have shown that our network architecture has superior generalization ability compared to the original residual networks. Code is available at https: github.com jhkim89 PyramidNet.", "Recently, deep residual networks have been successfully applied in many computer vision and natural language processing tasks, pushing the state-of-the-art performance with deeper and wider architectures. In this work, we interpret deep residual networks as ordinary differential equations (ODEs), which have long been studied in mathematics and physics with rich theoretical and empirical success. From this interpretation, we develop a theoretical framework on stability and reversibility of deep neural networks, and derive three reversible neural network architectures that can go arbitrarily deep in theory. The reversibility property allows a memory-efficient implementation, which does not need to store the activations for most hidden layers. 
Together with the stability of our architectures, this enables training deeper networks using only modest computational resources. We provide both theoretical analyses and empirical results. Experimental results demonstrate the efficacy of our architectures against several strong baselines on CIFAR-10, CIFAR-100 and STL-10 with superior or on-par state-of-the-art performance. Furthermore, we show our architectures yield superior results when trained using fewer training data.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "This paper proposes a deep learning architecture based on Residual Network that dynamically adjusts the number of executed layers for the regions of the image. This architecture is end-to-end trainable, deterministic and problem-agnostic. It is therefore applicable without any modifications to a wide range of computer vision problems such as image classification, object detection and image segmentation. We present experimental results showing that this model improves the computational efficiency of Residual Networks on the challenging ImageNet classification and COCO object detection datasets. Additionally, we evaluate the computation time maps on the visual saliency dataset cat2000 and find that they correlate surprisingly well with human eye fixation positions.", "" ], "cite_N": [ "@cite_18", "@cite_14", "@cite_33", "@cite_15", "@cite_36", "@cite_29", "@cite_39", "@cite_19", "@cite_43", "@cite_23", "@cite_46", "@cite_10", "@cite_17" ], "mid": [ "2964137095", "2963410064", "2963975324", "2769858569", "2808746463", "2962944050", "2953328958", "2962737770", "2962971773", "2756433771", "2194775991", "2562731582", "2598097916" ] }
Competitive Inner-Imaging Squeeze and Excitation for Residual Network
Deep convolutional neural networks (CNNs) have exhibited significant effectiveness in tackling and modeling image data [19,30,31,27]. The presentation of the residual network (ResNet) enables the network structure go far deeper and achieve superior performance [10]. Moreover, attention has also been paid to the modeling of implicit relationships in CNNs [4,33]. The "squeeze-excitation" (SE-Net) architecture [12] captures the channel relationships with a low cost, and can be used directly in all CNN types. However, when a SE-block is applied in ResNet, the identity mapping does not take into account the input of the channel-wise attention of the residual flow. For analysis of ResNet, the residual * corresponding author † equal contribution with Guihua Wen mapping can be regarded as a supplement to the identical mapping [11], and with the increase in depth, the residual network exhibits a certain amount of redundancy [15,32]; thus, identity mappings should also consider channel attention, thereby making the supplement for itself more dynamic and precise, under the known condition that the residual network has extremely high redundancy. In this work, we design a new, competitive squeeze and excitation architecture based on the SE-block, known as the competitive SE (CMPE-SE) network. We aim to expand the factors considered in the channel re-weighting of residual mappings and use the CMPE-SE design to model the implicit competitive relationship between identity and residual feature maps. Furthermore, we attempt to presents a novel strategy to alleviate the redundancy of ResNets with the CMPE-SE mechanism, it makes residual mappings tend to provide more efficient supplementary for identity mappings. Compared to the typical SE building block, the composition of the CMPE-SE block is illustrated in Fig. 1. The basic mode of the CMPE-SE module absorbs the compressed signals for identity mappings X ∈ R W ×H ×C and residual mappings U ∈ R W ×H×C , and with the same squeeze operation as in reference [12], concatenates and embeds these jointly and multiplies the excitation value back to each channel. Moreover, the global distributions from residual and identity feature maps can be stitched into new relational maps, we call this operation as "Inner-Imaging". Through "Inner-Imaging", we can use convolution filters to model the relationships between channels in spatial location, and various filters can be tested on the inner-imaged maps. As the design of the CMPE-SE module considers residual and identity flow jointly, based on the original SE block for ResNet, it expands the task and meaning of "squeeze and excitation", recalibrating the channel-wise features. The modeling object of the CMPE-SE unit is not limited to the relationship of the residual channels, but the relationship between all residual and identity feature maps, as well as the competition between residual and identity flows. In this manner, the network can dynamically adjust the complementary weights of residual channels to the identity mapping by using Figure 1: Competitive Squeeze-Excitation Architecture for Residual block. the competitive relations in each residual block. Furthermore, "Inner-Imaging" enable us to encode the channel-wise relationship with convolution filters, at the same time, it also provide diversified and spatial internal representation for the architecture of ResNet. 
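The basic mode just described — squeeze both the identity and residual maps, concatenate their embeddings, and multiply the excitation back onto the residual channels (formalized later in Eqs. (6)–(9)) — can be sketched roughly as follows in PyTorch. This is our simplified reading, not the authors' implementation; the module and variable names are ours, and the reduction ratio t = 16 is the default mentioned later in the text.

```python
# Rough PyTorch sketch of the basic CMPE-SE re-weighting path (Eqs. 6-9);
# a simplified reading of the text, not the authors' code.
import torch
import torch.nn as nn

class CompetitiveSE(nn.Module):
    def __init__(self, channels, t=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                   # squeeze: global average pool
        self.fc_res = nn.Linear(channels, channels // t)      # w_1^r
        self.fc_id = nn.Linear(channels, channels // t)       # w_1^id
        self.fc_ex = nn.Linear(2 * (channels // t), channels) # w_2^ex
        self.relu = nn.ReLU(inplace=True)

    def forward(self, residual, identity):
        b, c, _, _ = residual.shape
        u_r = self.pool(residual).view(b, c)                  # squeezed residual descriptor
        x_id = self.pool(identity).view(b, c)                 # squeezed identity descriptor
        z = torch.cat([self.relu(self.fc_res(u_r)),
                       self.relu(self.fc_id(x_id))], dim=1)   # joint embedding
        s = torch.sigmoid(self.fc_ex(z)).view(b, c, 1, 1)     # excitation
        return residual * s                                   # re-scaled residual channels

# Inside a residual block: y = CMPE-SE(F(x), x) + x
# se = CompetitiveSE(channels=64)
# y = se(residual_branch_output, x) + x
```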
The exploration of convolutional network architectures and the modeling of internal network representations is a meaningful and challenging task [38,47], typically with high complexity [44,35]. In comparison, the layout of the CMPE-SE module outlined above is easy to implement and can be cheaply applied to the residual network and all of its variants. The contributions of this study can be listed as follows. • We present a new strategy to alleviate the redundancy of the residual network and enhance its modeling efficiency, using a novel competitive "squeeze and excitation" unit that jointly models the relationship of the residual and identity channels, so that the identity mapping can participate in the re-weighting of the residual channels. • We propose an inner-imaging design for representing intermediate structure in CNNs, in order to re-scan the channel relation features with convolutional filters. Furthermore, we fold the re-imaged channel relation maps to explore more possibilities for the convolutional channel relationship encoder. • We conduct experiments on several datasets, including CIFAR-10, CIFAR-100, SVHN, and ImageNet, to validate the performance of the presented models. Moreover, we find that our approach can stimulate the potential of smaller networks.

Competitive Squeeze Excitation Blocks

The residual block is routinely defined as the amalgamation of the identity mapping $X \in \mathbb{R}^{W \times H \times C}$ and the residual mapping $U \in \mathbb{R}^{W \times H \times C}$, as follows:
$$y = F_{res}(x, w_r) + x. \tag{1}$$
We denote the output of the residual mapping by $U_r = F_{res}(x, w_r) = [u_r^1, u_r^2, \ldots, u_r^C]$. As described in the design of SE-Net, the "squeeze-excitation" module [12] controls the re-weighting of the convolution feature maps, including the residual mappings, as follows:
$$\hat{u}_r^c = F_{sq}(u_r^c) = \frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} u_r^c(i,j), \tag{2}$$
$$s = F_{ex}(\hat{u}_r, w_{ex}) = \sigma\big(w_2 \cdot \mathrm{ReLU}(w_1 \cdot \hat{u}_r)\big), \tag{3}$$
$$\tilde{x}^c = F_{scale}(s^c, u_r^c) = F_{se}(u_r)[c] \times F_{res}(x, w_r)[c] = s^c \cdot u_r^c, \tag{4}$$
where $\hat{u}_r^c$ refers to the global pooling result of the squeeze operation, $\sigma(\cdot)$ denotes the sigmoid activation, and the operators $\times$ and $\cdot$ are element-wise multiplication. The excitation contains two fully connected (FC) layers; the weights $w_1 \in \mathbb{R}^{\frac{C}{t} \times C}$ perform dimensionality reduction with ratio $t$ (set to 16 by default) and $w_2 \in \mathbb{R}^{C \times \frac{C}{t}}$, so the variable $s$ is the rescaling tensor for the residual channels. We can summarize the flow of the residual block in SE-ResNet as:
$$y = F_{se}(u_r) \cdot F_{res}(x, w_r) + x. \tag{5}$$
Stated thus, the conventional SE operation models the relationship of the convolution channels and feeds back recalibration values that are calculated only from the feature maps of the residual flow in ResNet.

Competition between Residual and Identity Flows

The architecture of the current SE-ResNet shows that the re-weighting values are not the product of a joint decision of the identity and residual mappings. From an intuitive point of view, we introduce the identity flow into the "squeeze-excitation" process. Corresponding to the residual mapping $U_r$, the global information embedding from the identity mapping $X_{id} = [x_{id}^1, x_{id}^2, \ldots, x_{id}^C]$ can also be obtained as:
$$\hat{x}_{id}^c = F_{sq}(x_{id}^c) = \frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} x_{id}^c(i,j), \tag{6}$$
and, as with $\hat{u}_r^c$, $\hat{x}_{id}^c$ is the global average pooling of the identity features and is used as part of the joint input for the residual channel recalibration, together with $\hat{u}_r^c$:
$$s = F_{ex}(\hat{u}_r, \hat{x}_{id}, w_{ex}) = \sigma\Big(w_2^{ex} \cdot \big[\mathrm{ReLU}(w_1^{r} \cdot \hat{u}_r),\ \mathrm{ReLU}(w_1^{id} \cdot \hat{x}_{id})\big]\Big), \tag{7}$$
$$\tilde{x}^c = F_{se}(u_r, x_{id})[c] \times F_{res}(x_{id}, w_r)[c] = s^c \cdot u_r^c, \tag{8}$$
where the parameters $w_1^{r} \in \mathbb{R}^{\frac{C}{t} \times C}$ and $w_1^{id} \in \mathbb{R}^{\frac{C}{t} \times C}$ encode the squeezed signals from the residual and identity mappings, and are followed by another FC layer parameterized by $w_2^{ex} \in \mathbb{R}^{C \times \frac{2C}{t}}$, with $C$ neurons. The competition between the residual and identity mappings is modeled by the CMPE-SE module introduced above, and acts on each residual channel. Implicitly, when the identity channels win this competition, the residual channels receive smaller weights; when they lose, the residual weights increase. Finally, the CMPE-SE residual block is reformulated as:
$$y = F_{se}(u_r, x_{id}) \cdot F_{res}(x_{id}, w_r) + x_{id}. \tag{9}$$
Figures 2(a) and (b) illustrate the difference between the typical SE and CMPE-SE residual modules. The embeddings of the squeezed signals $\hat{u}_r = [\hat{u}_r^1, \hat{u}_r^2, \ldots, \hat{u}_r^C]$ and $\hat{x}_{id} = [\hat{x}_{id}^1, \hat{x}_{id}^2, \ldots, \hat{x}_{id}^C]$ are simply concatenated prior to excitation. Here, the back-propagation algorithm optimizes two intertwined modeling processes: (1) the relationships of all channels in the residual block; and (2) the competition between the residual and identity channels. Moreover, $w_1^{id}$ is the only additional parameter cost. In the basic mode of the CMPE-SE residual block, one additional FC encoder is required for jointly modeling the competition between the residual and identity channels. We also design pair-view strategies for the competitive "squeeze-excitation" to save parameters and capture the channel relation features from a novel angle. Figure 2(c) illustrates their structures. First, the stacked squeezed feature map is generated as:
$$v_s^{Pair\text{-}View} = \begin{bmatrix} \hat{u}_r \\ \hat{x}_{id} \end{bmatrix} = \begin{bmatrix} \hat{u}_r^1, \hat{u}_r^2, \cdots, \hat{u}_r^C \\ \hat{x}_{id}^1, \hat{x}_{id}^2, \cdots, \hat{x}_{id}^C \end{bmatrix}, \tag{10}$$
where the inner-imaging encoder receives feature maps of channel relations rather than the original picture input. We use $\varepsilon$ filters $\{w_{(2\times 1)}^1, \ldots, w_{(2\times 1)}^{\varepsilon}\}$, $w_{(2\times 1)}^i = [w_{11}^i, w_{21}^i]$, to scan the stacked tensor of squeezed features from the residual and identity channels, and then average the pair-view outputs,
$$v_c = \frac{1}{\varepsilon} \sum_{i=1}^{\varepsilon} v_s * w_{(2\times 1)}^i, \tag{11}$$
where $*$ denotes convolution and $v_c$ is the re-imaged feature map. Batch normalization (BN) [16] is performed directly after the convolution. Next, re-imaged signal encoding and excitation take place, as follows:
$$s = \sigma\big(w_2^{ex} \cdot \mathrm{ReLU}(w_1 \cdot v_c)\big), \tag{12}$$
where the squeeze encoder is parameterized by $w_1 \in \mathbb{R}^{\frac{C}{t} \times C}$ and the excitation parameters also shrink to $w_2^{ex} \in \mathbb{R}^{C \times \frac{C}{t}}$. Figure 3(a) illustrates the detailed structure of the "Conv (2 × 1)" pair-view CMPE-SE unit. The "Conv (2 × 1)" pair-view strategy models the competition between the residual and identity channels based on strict upper and lower positions, which ignores the fact that any feature signal in the re-imaged tensor could be associated with any other signal, not only those in the vertical direction. Based on this consideration, we use a 1 × 1 convolution kernel $w_{(1\times 1)}^i = [w_{11}^i]$ to replace the above $w_{(2\times 1)}^i = [w_{11}^i, w_{21}^i]$.
Furthermore, a flattened layer is used to reshape the output of the 1 × 1 convolution: v c = 1 ε ε i=1 v s * w i (1×1) ,(13)s = σ w ex 2 · ReLU w 1 · (F f latten (v c )) ,(14) where v c = (F f latten (v c )) corresponds to Eq. 11, the parameter size of the encoder will return to w 1 ∈ R C t ×2C , and the excitation remains w ex 2 ∈ R C× C t . Figure 3(b) depicts the "Conv (1 × 1)" pair-view CMPE-SE unit. In fact, this mode can be regarded as a simple linear transformation for combined squeezed signals prior to embedding. The number of pair-view convolution kernels ε mentioned previously is set as the block width divided by the dimensionality-reduction ratio t. Exploration of Folded Shape for Pair-View Inner-imaging The inner-imaging design provide two shapes of convolutional kernel: "conv (2×1)" and "conv (1×1)" can be regard as a simple linear transformation for combined squeezed signals prior to embedding. However, too flat inner-imaged maps obstruct the diversity of filter shapes, and it is impossible to model location relationships of squeezed signals in larger fields. In order to expand the shape of inner-imaging convolution, and provide more robust and precise channel relation modeling, we fold the pair-view re-imaged maps into more square matrices with shape of (n × m) while maintaining the alternating arrangement of squeezed signals from residual and identity channels, as follows: v f = T (v s ) =   v 11 s , · · ·v 1m s . . . . . . . . . v n1 s , · · ·v nm s    = T û r x id =           û 1 r ,û 2 r · · ·û m r x 1 id ,x 2 id · · ·x m id u m+1 r ,û m+2 r · · ·û 2·m r x m+1 id ,x m+2 id · · ·x 2·m id . . . . . . . . . . . . u C−m+1 r ,û C−m+2 r · · ·û C r x C−m+1 id ,x C−m+2 id · · ·x C id            ,(15) where T (·) is the reshape function to fold the basic innerimaged maps and we receive the folded matrixv f . Then, we can freely expand the shape of inner-imaging convolution kernel to 3 × 3 as w i 3×3 , and use it scan the folded pair-view maps as follows, the structure details of folded pair-view are also shown in Figures 2(d) and 4. v c = 1 ε ε i=1 v f * w i (3×3)(16) Acquiescently, in folded mode of pair-view encoders, the flatten layer is used to reshape the convolution results for subsequent FC layers, as v c = (F f latten (v c )) . To sum up, the proposed CMPE-SE mechanism can technically improve the efficiency of residual network modeling through the following two characteristics: 1. Directly participating by identity flow, in the re-weighting of residual channels, makes the complementary modeling more efficient; 2. The mechanism of inner-imaging and its folded mode explore the richer forms of channel relationship modeling. Experiments We evaluate our approach on the CIFAR-10, CIFAR-100, SVHN and ImageNet datasets. We train several basic ResNets and compare their performances with/without the CMPE-SE module. Thereafter, we challenge the state-ofthe-art results. Datasets and Settings CIFAR. The CIFAR-10 and CIFAR-100 datasets consist of 32 × 32 colored images [18]. Both datasets contain 60,000 images belonging to 10 and 100 classes, with 50,000 images for training and 10,000 images for testing. We subtract the mean and divide by the standard deviation for data normalization, and standard data augmentation (translation/mirroring) is adopted for the training sets. SVHN. 
The Street View House Number (SVHN) dataset [23] contains 32 × 32 colored images of 73,257 samples in the training set and 26,032 for testing, with 531,131 digits for additional training. We divide the images by 255 and use all training data without data augmentation. ImageNet. The ILSVRC 2012 dataset [7] contains 1.2 million training images, 50,000 validation images, and 100,000 for testing, with 1,000 classes. Standard data augmentation is adopted for the training set and the 224 × 224 crop is randomly sampled. All images are normalized into [0, 1], with mean values and standard deviations. Settings. We test the effectiveness of the CMPE-SE modules on two classical models: pre-act ResNet [11] and the Wide Residual Network [41] with CIFAR-10 and CIFAR-100, and we also re-implement the typical SE block [12] based on these. For fair comparison, we follow the basic structures and hyper-parameter turning in the original papers; further implementation details are available on the open source 1 . We train our models by means of optimizer stochastic gradient descent with 0.9 Nesterov momentum, and use a batch size of 128 for 200 epochs. The learning rate is initialized to 0.1 and divided by 10 at the 100th and 150th epochs for the pre-act ResNet, and divided by 5 at epochs 60, 120, and 160 for WRN. The mixup is an advanced training strategy on convex combinations of sample pairs and their labels [42]. We apply this to the aforementioned evaluations and add 20 epochs with the traditional strategy following the formal training process of mixup. On the SVHN, our models are trained for 160 epochs; the initial learning rate is 0.01, and is divided by 10 at the 80th and 120th epochs. On ImageNet, we train our models for 100 epochs with a batch size of 64. The initial learning rate is 0.1 and it is reduced by 10 times at epochs 30, 60, and 90. Based on experimental experience, the shape of folded re-imaging maps is set as: (n = 2C/16, m = 16) for preact ResNet and (n = 20, m = C/10) for WRN. In fact, Results on CIFAR and SVHN The results of the contrast experiments for ResNets with/without the CMPE-SE module are illustrated in Tables 1 and 2. We use the pre-act ResNet [11] by default, where the numbers of parameters are recorded in brackets and the optimal records are marked in bold. By analyzing these results, we can draw the following conclusions: The CMPE-SE block can achieve superior performance over the SE block for both the classical and wide ResNets. It reduces the error rate of SE-ResNet by 0.226% on average and 0.312% for WRN, and does not consume excessive extra parameters (0.2% ∼ 5% over the SE residual network). The pair-view mode of the CMPE-SE units with 1 × 1 convolution can achieve superior results over the basic mode and use less parameters, which means that hybrid modeling of squeezed signals is more effective than merging them after embedding. Another phenomenon is that the CMPE-SE module can reduce the error rate more efficaciously on the WRN model than on the traditional ResNet; therefore, the fewer number of layers and wider residual in the "dumpy" wide ResNet can better reflect the role of identity mapping in the residual channel-wise attention. By observing the performances of ResNets under different scales, the CMPE-SE unit enables smaller networks to achieve or even exceed the same structure with additional parameters. 
For WRN, the classification results of the CMPE-SE-WRN-16-8 are the same as or exceed those of WRN-28-10, and the results of the CMPE-SE-WRN-22-10 are superior to those of the SE-WRN-28-10. The folded mode of CMPE-SE unit with 3 × 3 filters can achieve fairly or even better results than 1 × 1 pair-view CMPE-SE, with less parameters. The mixup [42] can be considered as an advanced approach to data augmentation, which can improve the generalization ability of models. In the case of using the mixup, the CMPE-SE block can further improve the performance of the residual networks until achieving state-of-the-art results. Table 3 lists the challenge results of the CMPE-SE-WRN-28-10 with state-of-the-art results. The compared networks include: original ResNet [10], pre-act ResNet [11], ResNet with stochastic depth [15], FractalNet [20], DenseNet [14], ResNeXt [39], PyramidNet [9], and CliqueNet [40]. We observe that our models based on wide residual networks can achieve comparable or superior performance to the compared models. Moreover, we know that although the parameter size taken is large, the training speed of the WRN is significantly faster than DenseNets, and even faster than ResNets [41]. Considering the high extensibility of proposed CMPE-SE mechanism on all ResNet variants, it is reasonable to believe that the CMPE-SE module can achieve better results on some more complex residual achitectures. Results on ImageNet Owing to the limitation of computational resources (GTX 1080Ti × 2), we only test the performance of the pre-act ResNet-50 (ImageNet mode) after being equipped with CMPE-SE blocks, and we use the smaller mini-batch with a size of 64, instead of 256 as in most studies. Although a smaller batch size would impair the performance training for the same epochs [40], the results of the CMPE-SE-ResNet-50 (both double FC and 1 × 1 pair-view modes) are slightly superior to those of other models at the same level, such as SE-ResNet-50 [12]. Compared to the SE-ResNet-50, the CMPE-SE-ResNet-50 with 3 × 3 folded inner-imaging can reduce the top-1 error rate by 0.5% and the top-5 error rate by 0.27%. The other compared models contain the pre-act ResNet-18, 34, and 50 [11], DenseNet-121 [14], CliqueNet, and SE-CliqueNet [40], where "SE-CliqueNet" means CliqueNet uses channel-wise attentional transition. Discussion Compared to the promotion of the CMPE-SE module on the same level networks, another fact is highly noteworthy: our CMPE-SE unit can greatly stimulate the potential of smaller networks with fewer parameters, enabling them to achieve or even exceed the performance of larger models. This improvement proves that the refined modeling for the inner features in convolutional networks is necessary. Regarding the refined modeling of intermediate convolutional features, DenseNet [14] is a type of robust repetitive refinement for inner feature maps, while the "squeeze and excitation" [12] can also be considered as a type of refined modeling for channel features, and its refining task is learning the relationships of convolution channels. Furthermore, the CMPE-SE module extends the task of refined modeling for intermediate features. So that we can make the modeling process of ResNets more efficient. In addition to modeling the competitive relationship be-tween the residual and identity mappings, the CMPE-SE module also provides the fundamental environment for reimaging the intermediate residual and identity features. In order to facilitate the display, Fig. 
5 illustrates several examples of fragmented inner-imaging parts and the corresponding excitation outputs. These re-imaged maps come from different layers in depths 4, 13, 22, and 28, and we can observe that the average pooled signal maps of different samples are largely identical with only minor differences at first, then become more diversified after multi-times attentional re-scaling. The attentional outputs show great diversity and tend to suppress the redundant residual modeling at deeper layers, until in the last layers, when the network feature maps themselves are more expressive and sparse, the attention values become stable, only with very few jumps. Although the folded inner-imaging mechanism does not show very significant superiority over the ordinary pair-view CMPE-SE module, such a design still provides more possibilities for channel-wise squeezed signal organization and encoding, it has a strong enlightenment. In order to reduce the parameter cost generated by the subsequent FC layers, we average the outputs of the pairview convolution kernels. When attempting not to do so, we find that the former can save numerous parameters without sacrificing too much performance. This indicates that the inner-imaging of the channel features is parameter efficient, and we can even use a tiny and fixed number of filters to complete pair-view re-imaging. In the study of this paper, we have only applied 2 × 1 and 1×1 two types of pair-view filters, and 3×3 kernels on folded pair-view encoder, which can achieve the aforementioned results. More forms of convolutional channel relation encoders can be easily added into the CMPSE-SE framework, it shows that the CMPE-SE module has high extensibility and wide application value. Also, we have reason to believe that branch competitive modeling and inner-imaging can result in more capacious re-imaged feature maps and a diverse refined modeling structure on multi-branch networks. Conclusion In this paper, we have presented a competitive squeeze and excitation block for ResNets, which models the competitive relation from both the residual and identity channels, and expand the task of channel-wise attentional modeling. Furthermore, we introduce the inner-imaging strategy to explore the channel relationships by convolution on re-imaged feature maps, then we fold the inner-imaged maps to enrich the channel relation encoding strategies. The proposed design uses several additional parameters and can easily be applied to any type of residual network. We evaluated our models on three publicly available datasets against the state-of-theart results. Our approach can improve the performance of ResNets and stimulate the potential of smaller networks. Table 3: Error rates (%) of different methods on CIFAR-10, CIFAR-100, and SVHN datasets. The best records of our models are in bold and the best results are highlighted in red. Table 4: Single crop error rates (%) on ImageNet. Moreover, the presented method is extremely scalable and offers the potential to play a greater role in multi-branch architectures. Appendices A. Evaluation for Different Shapes of Folded Inner-Imaging Table 5 lists the test error with Conv 3×3 encoders in different shapes of folded inner-imaging. We can find out that the classification ability of out models is not very sensitive to the folded shape. 
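As a rough illustration of the folded pair-view variant evaluated here (Eqs. (15)–(16)), the sketch below interleaves the two squeezed vectors row by row, folds them into an n × m map, and scans the map with 3 × 3 filters whose outputs are averaged. It is our simplified reading of the text; the padding, the number of filters, and the default m = 16 columns are assumptions, not the authors' exact configuration.

```python
# Rough sketch of the folded inner-imaging step (Eqs. 15-16); shapes are illustrative.
import torch
import torch.nn as nn

class FoldedInnerImaging(nn.Module):
    def __init__(self, channels, m=16, eps_filters=8):
        super().__init__()
        assert channels % m == 0
        self.m = m                       # columns of the folded map
        self.n = 2 * channels // m       # rows; residual and identity rows alternate
        # epsilon 3x3 filters scan the folded map; BN follows the convolution.
        self.conv = nn.Conv2d(1, eps_filters, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(eps_filters)

    def forward(self, u_r, x_id):
        # u_r, x_id: (batch, C) squeezed descriptors of residual/identity channels.
        b, c = u_r.shape
        u = u_r.reshape(b, c // self.m, self.m)
        x = x_id.reshape(b, c // self.m, self.m)
        # Interleave chunks so rows alternate: u[0:m], x[0:m], u[m:2m], x[m:2m], ...
        folded = torch.stack([u, x], dim=2).reshape(b, 1, self.n, self.m)
        v = self.bn(self.conv(folded)).mean(dim=1)   # average the epsilon filter outputs
        return v.flatten(1)                          # flattened input to the FC encoder
```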
Although the more square folded shapes can get better results, the worst results in Table 5 can still achieve at the same level as the basic CMPE-SE module or better, so, some bad settings of the folded shape will not cause the performance of our models to drop drastically. B. Folded Inner-Imaging Examples and The Corresponding Excitation Outputs From these diagrams, we can observe different phenomena for different scale models. Firstly, after the re-weighting with channel-wise attention, all inner-imaged maps have changed compared with the previous ones, and the degree of change depends on the attentional values shown in the diagrams, high fluctuation of attention values can lead to dramatic changes in inner-imaging maps, on the contrary, it will lead to smaller changes or just the difference in color depth. In most cases of model folded inner-imaging CMPE-SE-WRN-22-10, CMPE-SE-WRN-16-8 and CMPE-SE-ResNet164, with the deepening of layers, channel-wise attention values show more and more strong diversity, and the fluctuation range is more and more intense. In case of CMPE-SE-WRN-28-10, the attention outputs of last layer tend to be stable at near 0.5, with only a few jumps. We infer that in the deeper layer of the high-parameter networks, the feature maps have a strong diversity and high representation ability, so our CMPE-SE module is more inclined to maintain their original information, before that, the CMPE-SE mechanism uses less severe shake for the original features from lower layers, and for the deeper layers of abstract features, more violent shake is automatically applied to enhance the diversity of representation. For the two-stage inner-imaging samples, we can find that the similarity of inner-imaged maps is very high before being processed by CMPE-SE, for different samples in the same layers, and after channel weight re-scaling, they show a certain degree of differentiation, even if only half of the signals are likely to be re-weighted (signals from identity mappings will not change). For model WRN-28-10, WRN-22-10 and WRN-16-8, in deeper layers, attentional excitation values by inner-imaging CMPE-SE modules obviously lower than the outputs of ordinary SE-block, which is represented by the red lines. This phenomenon confirms the following characteristics of CMPE-SE module (especially in modes of inner-imaging): in the deeper layers, the network has modeled some complete features, so the CMPE-SE module will play a role in suppressing redundant residual modeling. C. Examples of Excitation Outputs for All Models For some networks with more number of layers, like ResNet-164 and ResNet-110, with the deepening of layers, the excitation outputs of basic SE module gradually tend to be flat, while that of CMPE-SE becomes more active. On the whole, the inner-imaging modes (especially with Conv 3×3 pair-view encoder) of CMPE-SE module are the most active on all networks. In some ways, this also indicates that the inner-imaging CMPE-SE module works well. D. Statistical Analysis of Sample Excitation Outputs Furthermore, we exhibit some statistical results of the attentional outputs of each model, in Fig 15 and 16. we can also observe some interesting phenomena from them. The right part of these diagrams show the average attention values of different blocks, firstly, we need note that the number of blocks in networks: WRN-28-10, WRN-22-10, WRN-16-8, ResNet-164 and ResNet-110 are 12,9,6,54 and 54 respectively. 
In almost all networks, the CMPE-SE module shows an obvious inhibition of the residual mappings in the middle and very deep layers through its attentional excitation values; compared with the basic SE mechanism, this indicates that the CMPE-SE mechanism does encourage identity mappings at the deeper layers while reducing the redundancy of residual mapping modeling. The left parts of Figs. 15 and 16 show the variance distributions of the attentional outputs for the different kinds of SE blocks, which reflect the diversity of the channel-wise attention values at each layer. We notice that the variance distributions of
Table 5: Test error (%) comparison for folded inner-imaging with different shapes on CIFAR-10 and CIFAR-100; the folded inner-imaging mechanism uses the Conv 3×3 encoder, and the italics with underline indicate the shape we chose.
4,759
1807.08920
2884188791
Residual networks, which use a residual unit to supplement the identity mappings, enable very deep convolutional architectures to operate well; however, the residual architecture has been shown to be diverse and redundant, which may lead to inefficient modeling. In this work, we propose a competitive squeeze-excitation (SE) mechanism for the residual network. In this structure, the re-scaling value for each channel is determined jointly by the residual and identity mappings, a design that expands the meaning of channel relationship modeling in residual blocks. Modeling the competition between residual and identity mappings causes the identity flow to control the complement that the residual feature maps provide for it. Furthermore, we design a novel inner-imaging competitive SE block to reduce the parameter consumption and re-image the global features of the intermediate network structure; by using the inner-imaging mechanism, we can model the channel-wise relations with convolution in the spatial dimension. We carry out experiments on the CIFAR, SVHN, and ImageNet datasets, and the proposed method achieves results competitive with the state of the art.
A parallel line of research holds that intermediate feature maps should be modeled repeatedly @cite_41 . This compact architecture enables intermediate features to be refined and expanded, thereby enhancing representation ability with a concentrated parameter budget. @cite_16 proposed a more compact model by circulating the dense block. Furthermore, dual-path networks (DPNs) @cite_2 combine the advantages of ResNet and DenseNet, and make the residual units additionally model the relationship between the identity and densely connected flows. A trend in compact architectures is to expand the mission of network subassemblies while refining the intermediate features. Based on the SE block @cite_25 , our proposed CMPE-SE design also refines the intermediate features and extends the role of the SE unit. The difference is that our model focuses on the self-control of components in ResNet, rather than simple feature reuse. Moreover, the re-imaging of channel signals presents a novel view of modeling intermediate features.
{ "abstract": [ "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections&#x2014;one between each layer and its subsequent layer&#x2014;our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https: github.com liuzhuang13 DenseNet.", "Improving information flow in deep networks helps to ease the training difficulties and utilize parameters more efficiently. Here we propose a new convolutional neural network architecture with alternately updated clique (CliqueNet). In contrast to prior networks, there are both forward and backward connections between any two layers in the same block. The layers are constructed as a loop and are updated alternately. The CliqueNet has some unique properties. For each layer, it is both the input and output of any other layer in the same block, so that the information flow among layers is maximized. During propagation, the newly updated layers are concatenated to re-update previously updated layer, and parameters are reused for multiple times. This recurrent feedback structure is able to bring higher level visual information back to refine low-level filters and achieve spatial attention. We analyze the features generated at different stages and observe that using refined features leads to a better result. We adopt a multiscale feature strategy that effectively avoids the progressive growth of parameters. Experiments on image recognition datasets including CIFAR-10, CIFAR-100, SVHN and ImageNet show that our proposed models achieve the state-of-the-art performance with fewer parameters1.", "", "In this work, we present a simple, highly efficient and modularized Dual Path Network (DPN) for image classification which presents a new topology of connection paths internally. By revealing the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Convolutional Network (DenseNet) within the HORNN framework, we find that ResNet enables feature re-usage while DenseNet enables new features exploration which are both important for learning good representations. To enjoy the benefits from both path topologies, our proposed Dual Path Network shares common features while maintaining the flexibility to explore new features through dual path architectures. Extensive experiments on three benchmark datasets, ImagNet-1k, Places365 and PASCAL VOC, clearly demonstrate superior performance of the proposed DPN over state-of-the-arts. 
In particular, on the ImagNet-1k dataset, a shallow DPN surpasses the best ResNeXt-101(64x4d) with 26 smaller model size, 25 less computational cost and 8 lower memory consumption, and a deeper DPN (DPN-131) further pushes the state-of-the-art single model performance with about 2 times faster training speed. Experiments on the Places365 large-scale scene dataset, PASCAL VOC detection dataset, and PASCAL VOC segmentation dataset also demonstrate its consistently better performance than DenseNet, ResNet and the latest ResNeXt model over various applications." ], "cite_N": [ "@cite_41", "@cite_16", "@cite_25", "@cite_2" ], "mid": [ "2963446712", "2963977677", "", "2964166828" ] }
Competitive Inner-Imaging Squeeze and Excitation for Residual Network
Deep convolutional neural networks (CNNs) have exhibited significant effectiveness in tackling and modeling image data [19,30,31,27]. The presentation of the residual network (ResNet) enables the network structure go far deeper and achieve superior performance [10]. Moreover, attention has also been paid to the modeling of implicit relationships in CNNs [4,33]. The "squeeze-excitation" (SE-Net) architecture [12] captures the channel relationships with a low cost, and can be used directly in all CNN types. However, when a SE-block is applied in ResNet, the identity mapping does not take into account the input of the channel-wise attention of the residual flow. For analysis of ResNet, the residual * corresponding author † equal contribution with Guihua Wen mapping can be regarded as a supplement to the identical mapping [11], and with the increase in depth, the residual network exhibits a certain amount of redundancy [15,32]; thus, identity mappings should also consider channel attention, thereby making the supplement for itself more dynamic and precise, under the known condition that the residual network has extremely high redundancy. In this work, we design a new, competitive squeeze and excitation architecture based on the SE-block, known as the competitive SE (CMPE-SE) network. We aim to expand the factors considered in the channel re-weighting of residual mappings and use the CMPE-SE design to model the implicit competitive relationship between identity and residual feature maps. Furthermore, we attempt to presents a novel strategy to alleviate the redundancy of ResNets with the CMPE-SE mechanism, it makes residual mappings tend to provide more efficient supplementary for identity mappings. Compared to the typical SE building block, the composition of the CMPE-SE block is illustrated in Fig. 1. The basic mode of the CMPE-SE module absorbs the compressed signals for identity mappings X ∈ R W ×H ×C and residual mappings U ∈ R W ×H×C , and with the same squeeze operation as in reference [12], concatenates and embeds these jointly and multiplies the excitation value back to each channel. Moreover, the global distributions from residual and identity feature maps can be stitched into new relational maps, we call this operation as "Inner-Imaging". Through "Inner-Imaging", we can use convolution filters to model the relationships between channels in spatial location, and various filters can be tested on the inner-imaged maps. As the design of the CMPE-SE module considers residual and identity flow jointly, based on the original SE block for ResNet, it expands the task and meaning of "squeeze and excitation", recalibrating the channel-wise features. The modeling object of the CMPE-SE unit is not limited to the relationship of the residual channels, but the relationship between all residual and identity feature maps, as well as the competition between residual and identity flows. In this manner, the network can dynamically adjust the complementary weights of residual channels to the identity mapping by using Figure 1: Competitive Squeeze-Excitation Architecture for Residual block. the competitive relations in each residual block. Furthermore, "Inner-Imaging" enable us to encode the channel-wise relationship with convolution filters, at the same time, it also provide diversified and spatial internal representation for the architecture of ResNet. 
The exploration of convolutional network architectures and the modeling of the network's internal representation is a meaningful and challenging task [38,47], typically of high complexity [44,35]. In comparison, the CMPE-SE module outlined above is easy to implement and can be cheaply applied to the residual network and all of its variants. The contributions of this study are as follows.

• We present a new strategy to alleviate the redundancy of the residual network and enhance its modeling efficiency, using a novel competitive "squeeze and excitation" unit that jointly models the relationships of the residual and identity channels, so that the identity mapping can participate in the re-weighting of the residual channels.

• We propose an inner-imaging design for representing intermediate structure in CNNs, in order to re-scan the channel relation features with convolutional filters. Furthermore, we fold the re-imaged channel relation maps and explore more possibilities for convolutional channel-relationship encoders.

• We conduct experiments on several datasets, including CIFAR-10, CIFAR-100, SVHN, and ImageNet, to validate the performance of the presented models. Moreover, we find that our approach can stimulate the potential of smaller networks.

Competitive Squeeze Excitation Blocks

The residual block is routinely defined as the combination of the identity mapping X ∈ R^{W×H×C} and the residual mapping U ∈ R^{W×H×C}:

y = F_{res}(x, w_r) + x.    (1)

We denote the output of the residual mapping as U_r = F_{res}(x, w_r) = [u_r^1, u_r^2, ..., u_r^C]. As described in the design of SE-Net, the "squeeze-excitation" module [12] controls the re-weighting of the convolution feature maps, including the residual mappings, as follows:

\hat{u}_r^c = F_{sq}(u_r^c) = \frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} u_r^c(i, j),    (2)

s = F_{ex}(\hat{u}_r, w_{ex}) = \sigma(\mathrm{ReLU}(\hat{u}_r, w_1), w_2),    (3)

x^c = F_{scale}(s^c, u_r^c) = F_{se}(u_r)[\cdot] \times F_{res}(x, w_r)[\cdot] = s^c \cdot u_r^c,    (4)

where \hat{u}_r^c refers to the global pooling result of the squeeze operation, σ(·) denotes the sigmoid activation, and the operators × and · denote element-wise multiplication. The excitation contains two fully connected (FC) layers: the weights w_1 ∈ R^{(C/t)×C} perform dimensionality reduction with ratio t (set to 16 by default), and w_2 ∈ R^{C×(C/t)}, so the variable s is the rescaling tensor for the residual channels. We can summarize the flow of the residual block in SE-ResNet as:

y = F_{se}(u_r) \cdot F_{res}(x, w_r) + x.    (5)

Stated thus, the conventional SE operation models the relationships of the convolution channels and feeds back recalibration values that are computed using only the feature maps of the residual flow in ResNet.
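As a concrete illustration of the squeeze-and-excitation recalibration in Eqs. (2)-(5), the following is a minimal PyTorch sketch. The class name SEBlock, the reduction ratio t, and the helper se_residual_block are illustrative assumptions for this sketch, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Minimal squeeze-and-excitation re-weighting of a residual branch (Eqs. 2-5)."""
    def __init__(self, channels: int, t: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)      # Eq. (2): global average pooling per channel
        self.excite = nn.Sequential(                # Eq. (3): FC -> ReLU -> FC -> sigmoid
            nn.Linear(channels, channels // t),
            nn.ReLU(inplace=True),
            nn.Linear(channels // t, channels),
            nn.Sigmoid(),
        )

    def forward(self, u_r: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = u_r.shape
        s = self.excite(self.squeeze(u_r).view(b, c))   # channel weights s in (0, 1)
        return u_r * s.view(b, c, 1, 1)                 # Eq. (4): channel-wise rescaling

def se_residual_block(x: torch.Tensor, f_res: nn.Module, se: SEBlock) -> torch.Tensor:
    """Eq. (5): y = SE(F_res(x)) + x, for an arbitrary residual mapping f_res."""
    return se(f_res(x)) + x
```

Note that the recalibration uses only the residual branch F_res(x); this is exactly the limitation that the competitive design below addresses.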
Competition between Residual and Identity Flows

The architecture of the current SE-ResNet shows that the recalibration weights are not the product of a joint decision by the identity and residual mappings. From an intuitive point of view, we therefore introduce the identity flow into the "squeeze-excitation" process. Corresponding to the residual mapping U_r, the global information embedding of the identity mapping X_{id} = [x_{id}^1, x_{id}^2, ..., x_{id}^C] can also be obtained as:

\hat{x}_{id}^c = F_{sq}(x_{id}^c) = \frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} x_{id}^c(i, j),    (6)

and, as with \hat{u}_r^c, \hat{x}_{id}^c is the global average pooling of the identity features and is used as part of the joint input for the residual channel recalibration, together with \hat{u}_r^c:

s = F_{ex}(\hat{u}_r, \hat{x}_{id}, w_{ex}) = \sigma\big(\big[\mathrm{ReLU}(\hat{u}_r, w_1^r), \mathrm{ReLU}(\hat{x}_{id}, w_1^{id})\big], w_2^{ex}\big),    (7)

x^c = F_{se}(u_r, x_{id})[\cdot] \times F_{res}(x_{id}, w_r)[\cdot] = s^c \cdot u_r^c,    (8)

where the parameters w_1^r ∈ R^{(C/t)×C} and w_1^{id} ∈ R^{(C/t)×C} encode the squeezed signals from the residual and identity mappings, respectively, and are followed by another FC layer parameterized by w_2^{ex} ∈ R^{C×(2C/t)}, with C output neurons. The competition between the residual and identity mappings is modeled by the CMPE-SE module introduced above and acts on each residual channel. Implicitly, if the identity channels win this competition, the weights of the residual channels decrease; otherwise, the weights of the residual channels increase. Finally, the CMPE-SE residual block is reformulated as:

y = F_{se}(u_r, x_{id}) \cdot F_{res}(x_{id}, w_r) + x_{id}.    (9)

Figures 2(a) and (b) illustrate the difference between the typical SE and CMPE-SE residual modules. The squeezed signals \hat{u}_r = [\hat{u}_r^1, \hat{u}_r^2, ..., \hat{u}_r^C] and \hat{x}_{id} = [\hat{x}_{id}^1, \hat{x}_{id}^2, ..., \hat{x}_{id}^C] are simply concatenated prior to the excitation. Here, the back-propagation algorithm optimizes two intertwined modeling processes: (1) the relationships of all channels in the residual block; and (2) the competition between the residual and identity channels. Moreover, w_1^{id} is the only additional parameter cost.

In the basic mode of the CMPE-SE residual block, one additional FC encoder is required for the joint modeling of the competition between the residual and identity channels. We also design pair-view strategies for the competitive "squeeze-excitation" to save parameters and capture the channel relation features from a novel angle; Figure 2(c) illustrates their structures. Firstly, the stacked (pair-view) squeezed feature map is generated as:

v_s = \begin{bmatrix} \hat{u}_r \\ \hat{x}_{id} \end{bmatrix} = \begin{bmatrix} \hat{u}_r^1, \hat{u}_r^2, \cdots, \hat{u}_r^C \\ \hat{x}_{id}^1, \hat{x}_{id}^2, \cdots, \hat{x}_{id}^C \end{bmatrix},    (10)

where the inner-imaging encoder takes the feature map of the channel relations, rather than the original picture, as its input. We use ε filters {w_{(2×1)}^1, ..., w_{(2×1)}^ε}, with w_{(2×1)}^i = [w_{11}^i, w_{21}^i], to scan the stacked tensor of squeezed features from the residual and identity channels, and then average the pair-view outputs:

v_c = \frac{1}{\varepsilon} \sum_{i=1}^{\varepsilon} v_s * w_{(2\times 1)}^i,    (11)

where * denotes convolution and v_c is the re-imaged feature map. Batch normalization (BN) [16] is applied directly after the convolution. Next, the re-imaged signal encoding and excitation take place as follows:

s = \sigma\big(w_2^{ex} \cdot \mathrm{ReLU}(w_1 \cdot v_c)\big),    (12)

where the squeeze encoder is parameterized by w_1 ∈ R^{(C/t)×C} and the excitation parameters are also shrunk to w_2^{ex} ∈ R^{C×(C/t)}. Figure 3(a) illustrates the detailed structure of the "Conv (2 × 1)" pair-view CMPE-SE unit.
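To make the competitive re-weighting of Eqs. (6)-(12) concrete, here is a rough PyTorch sketch of a pair-view CMPE-SE unit with a 2 × 1 inner-imaging kernel. This is a sketch under assumptions: the class name PairViewCMPESE, the argument eps (the number of pair-view filters ε), and the exact tensor layout are our own choices, not taken from the authors' code.

```python
import torch
import torch.nn as nn

class PairViewCMPESE(nn.Module):
    """Pair-view CMPE-SE re-weighting (Eqs. 6-12) with a 2x1 inner-imaging kernel."""
    def __init__(self, channels: int, t: int = 16, eps: int = 8):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)                   # Eqs. (2) and (6): channel-wise pooling
        self.pair_conv = nn.Conv2d(1, eps, kernel_size=(2, 1))   # eps filters over the 2 x C map, Eq. (11)
        self.bn = nn.BatchNorm2d(eps)
        self.encode = nn.Linear(channels, channels // t)         # w_1 in Eq. (12)
        self.excite = nn.Linear(channels // t, channels)         # w_2^ex in Eq. (12)

    def forward(self, u_r: torch.Tensor, x_id: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = u_r.shape
        u_hat = self.squeeze(u_r).view(b, 1, 1, c)               # squeezed residual signals
        x_hat = self.squeeze(x_id).view(b, 1, 1, c)              # squeezed identity signals
        v_s = torch.cat([u_hat, x_hat], dim=2)                   # 2 x C inner-imaged map, Eq. (10)
        v_c = self.bn(self.pair_conv(v_s)).mean(dim=1).view(b, c)     # average the eps filter outputs, Eq. (11)
        s = torch.sigmoid(self.excite(torch.relu(self.encode(v_c))))  # Eq. (12)
        return u_r * s.view(b, c, 1, 1) + x_id                   # Eq. (9): re-weighted residual plus identity
```

The basic (double-FC) mode described above would instead encode \hat{u}_r and \hat{x}_{id} with two separate linear layers, as in Eq. (7), before the shared excitation layer.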
The "Conv (2 × 1)" pair-view strategy models the competition between the residual and identity channels based on strict upper and lower positions, which ignores the fact that any feature signal in the re-imaged tensor could be associated with any other signal, not only the one in the vertical direction. Based on this consideration, we use a 1 × 1 convolution kernel w_{(1×1)}^i = [w_{11}^i] to replace the above w_{(2×1)}^i = [w_{11}^i, w_{21}^i]. Furthermore, a flatten layer is used to reshape the output of the 1 × 1 convolution:

v_c = \frac{1}{\varepsilon} \sum_{i=1}^{\varepsilon} v_s * w_{(1\times 1)}^i,    (13)

s = \sigma\big(w_2^{ex} \cdot \mathrm{ReLU}\big(w_1 \cdot F_{flatten}(v_c)\big)\big),    (14)

where the flattened F_{flatten}(v_c) corresponds to the v_c of Eq. (11); the parameter size of the encoder returns to w_1 ∈ R^{(C/t)×2C}, and the excitation remains w_2^{ex} ∈ R^{C×(C/t)}. Figure 3(b) depicts the "Conv (1 × 1)" pair-view CMPE-SE unit. In fact, this mode can be regarded as a simple linear transformation of the combined squeezed signals prior to embedding. The number of pair-view convolution kernels ε mentioned previously is set to the block width divided by the dimensionality-reduction ratio t.

Exploration of Folded Shape for Pair-View Inner-Imaging

The inner-imaging design above provides two shapes of convolutional kernels, "conv (2 × 1)" and "conv (1 × 1)", which can be regarded as simple linear transformations of the combined squeezed signals prior to embedding. However, such flat inner-imaged maps restrict the diversity of filter shapes and make it impossible to model the positional relationships of the squeezed signals over larger fields. In order to expand the shape of the inner-imaging convolution and to provide more robust and precise channel relation modeling, we fold the pair-view re-imaged maps into more square matrices of shape (n × m), while maintaining the alternating arrangement of squeezed signals from the residual and identity channels:

v_f = T(v_s) = \begin{bmatrix} \hat{v}_s^{11} & \cdots & \hat{v}_s^{1m} \\ \vdots & \ddots & \vdots \\ \hat{v}_s^{n1} & \cdots & \hat{v}_s^{nm} \end{bmatrix} = T\left(\begin{bmatrix} \hat{u}_r \\ \hat{x}_{id} \end{bmatrix}\right) = \begin{bmatrix} \hat{u}_r^1 & \hat{u}_r^2 & \cdots & \hat{u}_r^m \\ \hat{x}_{id}^1 & \hat{x}_{id}^2 & \cdots & \hat{x}_{id}^m \\ \hat{u}_r^{m+1} & \hat{u}_r^{m+2} & \cdots & \hat{u}_r^{2m} \\ \hat{x}_{id}^{m+1} & \hat{x}_{id}^{m+2} & \cdots & \hat{x}_{id}^{2m} \\ \vdots & \vdots & \ddots & \vdots \\ \hat{u}_r^{C-m+1} & \hat{u}_r^{C-m+2} & \cdots & \hat{u}_r^{C} \\ \hat{x}_{id}^{C-m+1} & \hat{x}_{id}^{C-m+2} & \cdots & \hat{x}_{id}^{C} \end{bmatrix},    (15)

where T(·) is the reshape function that folds the basic inner-imaged map and yields the folded matrix v_f. We can then freely expand the shape of the inner-imaging convolution kernel, for example to 3 × 3 as w_{(3×3)}^i, and use it to scan the folded pair-view maps as follows (the structural details of the folded pair-view are also shown in Figures 2(d) and 4):

v_c = \frac{1}{\varepsilon} \sum_{i=1}^{\varepsilon} v_f * w_{(3\times 3)}^i.    (16)

By default, in the folded mode of the pair-view encoders, the flatten layer is used to reshape the convolution results for the subsequent FC layers, as in F_{flatten}(v_c). To sum up, the proposed CMPE-SE mechanism can improve the efficiency of residual network modeling through two characteristics: 1. the direct participation of the identity flow in the re-weighting of the residual channels makes the complementary modeling more efficient; 2. the inner-imaging mechanism and its folded mode explore richer forms of channel relationship modeling. A sketch of the folded pair-view encoder is given below.
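The following PyTorch sketch illustrates one way to realize the folding of Eq. (15) and the 3 × 3 scan of Eq. (16). The interleaving helper fold_pair_view, the class name FoldedCMPESE, the padding choice, and the number of filters eps are assumptions made for this sketch; the paper's actual implementation may differ.

```python
import torch
import torch.nn as nn

def fold_pair_view(u_hat: torch.Tensor, x_hat: torch.Tensor, m: int) -> torch.Tensor:
    """Eq. (15): interleave rows of squeezed residual/identity signals into an (n x m) map.
    Assumes m divides the channel count C."""
    b, c = u_hat.shape
    u_rows = u_hat.view(b, c // m, m)            # residual signals, one row per m channels
    x_rows = x_hat.view(b, c // m, m)            # identity signals, one row per m channels
    v_f = torch.stack([u_rows, x_rows], dim=2)   # alternate residual / identity rows
    return v_f.view(b, 1, 2 * (c // m), m)       # single-channel folded inner-image

class FoldedCMPESE(nn.Module):
    """Folded pair-view CMPE-SE with a 3x3 inner-imaging kernel (Eq. 16)."""
    def __init__(self, channels: int, m: int, t: int = 16, eps: int = 8):
        super().__init__()
        self.m = m
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv2d(1, eps, kernel_size=3, padding=1)   # 3x3 scan of the folded map
        self.bn = nn.BatchNorm2d(eps)
        self.fc = nn.Sequential(                                  # flatten -> encode -> excite
            nn.Linear(2 * channels, channels // t), nn.ReLU(inplace=True),
            nn.Linear(channels // t, channels), nn.Sigmoid())

    def forward(self, u_r: torch.Tensor, x_id: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = u_r.shape
        u_hat = self.squeeze(u_r).view(b, c)
        x_hat = self.squeeze(x_id).view(b, c)
        v_f = fold_pair_view(u_hat, x_hat, self.m)                # Eq. (15)
        v_c = self.bn(self.conv(v_f)).mean(dim=1).flatten(1)      # average eps filters, then flatten
        s = self.fc(v_c)                                          # channel weights in (0, 1)
        return u_r * s.view(b, c, 1, 1) + x_id
```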
Experiments

We evaluate our approach on the CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets. We train several basic ResNets and compare their performance with and without the CMPE-SE module. Thereafter, we challenge the state-of-the-art results.

Datasets and Settings

CIFAR. The CIFAR-10 and CIFAR-100 datasets consist of 32 × 32 colored images [18]. Both datasets contain 60,000 images belonging to 10 and 100 classes, respectively, with 50,000 images for training and 10,000 images for testing. We subtract the mean and divide by the standard deviation for data normalization, and standard data augmentation (translation/mirroring) is adopted for the training sets.

SVHN. The Street View House Number (SVHN) dataset [23] contains 32 × 32 colored images, with 73,257 samples in the training set, 26,032 for testing, and 531,131 additional digits for training. We divide the images by 255 and use all the training data without data augmentation.

ImageNet. The ILSVRC 2012 dataset [7] contains 1.2 million training images, 50,000 validation images, and 100,000 test images in 1,000 classes. Standard data augmentation is adopted for the training set, and 224 × 224 crops are randomly sampled. All images are normalized into [0, 1] with the mean values and standard deviations.

Settings. We test the effectiveness of the CMPE-SE modules on two classical models, the pre-act ResNet [11] and the Wide Residual Network (WRN) [41], on CIFAR-10 and CIFAR-100, and we also re-implement the typical SE block [12] on top of them. For a fair comparison, we follow the basic structures and hyper-parameter tuning of the original papers; further implementation details are available in the open-source code 1. We train our models with stochastic gradient descent with 0.9 Nesterov momentum, using a batch size of 128 for 200 epochs. The learning rate is initialized to 0.1 and divided by 10 at the 100th and 150th epochs for the pre-act ResNet, and divided by 5 at epochs 60, 120, and 160 for WRN. Mixup is an advanced training strategy based on convex combinations of sample pairs and their labels [42]. We apply it to the aforementioned evaluations and add 20 epochs with the traditional strategy after the formal mixup training process. On SVHN, our models are trained for 160 epochs; the initial learning rate is 0.01 and is divided by 10 at the 80th and 120th epochs. On ImageNet, we train our models for 100 epochs with a batch size of 64; the initial learning rate is 0.1 and is reduced by a factor of 10 at epochs 30, 60, and 90. Based on experimental experience, the shape of the folded re-imaging maps is set to (n = 2C/16, m = 16) for the pre-act ResNet and (n = 20, m = C/10) for WRN. In fact, the results are not very sensitive to this choice (see Appendix A).
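Since mixup is only described in words above, the following is a brief, generic PyTorch sketch of mixup training (a convex combination of sample pairs and of their labels). The function names and the Beta(α, α) sampling with α = 1.0 are standard mixup conventions and our own assumptions; the paper does not state its mixup hyper-parameters here.

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(x: torch.Tensor, y: torch.Tensor, alpha: float = 1.0):
    """Convexly combine a batch with a randomly permuted copy of itself."""
    lam = float(np.random.beta(alpha, alpha))    # mixing coefficient in [0, 1]
    perm = torch.randperm(x.size(0))             # random pairing of samples
    x_mix = lam * x + (1.0 - lam) * x[perm]
    return x_mix, y, y[perm], lam

def mixup_loss(logits: torch.Tensor, y_a: torch.Tensor, y_b: torch.Tensor, lam: float) -> torch.Tensor:
    """The loss is the same convex combination of the two labels' cross-entropies."""
    return lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)
```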
Results on CIFAR and SVHN

The results of the contrast experiments for ResNets with and without the CMPE-SE module are shown in Tables 1 and 2. We use the pre-act ResNet [11] by default; the numbers of parameters are given in brackets and the best records are marked in bold. From these results we can draw the following conclusions. The CMPE-SE block achieves superior performance over the SE block for both the classical and the wide ResNets: it reduces the error rate of SE-ResNet by 0.226% on average and by 0.312% for WRN, and does not consume excessive extra parameters (0.2%-5% over the SE residual network). The pair-view mode of the CMPE-SE unit with 1 × 1 convolution achieves better results than the basic mode while using fewer parameters, which means that hybrid modeling of the squeezed signals is more effective than merging them after embedding. Another observation is that the CMPE-SE module reduces the error rate more effectively on WRN than on the traditional ResNet; the fewer layers and wider residual blocks of the "dumpy" wide ResNet better reflect the role of the identity mapping in the residual channel-wise attention. By observing the performance of ResNets at different scales, we see that the CMPE-SE unit enables smaller networks to match or even exceed the same structure with more parameters. For WRN, the classification results of CMPE-SE-WRN-16-8 match or exceed those of WRN-28-10, and the results of CMPE-SE-WRN-22-10 are superior to those of SE-WRN-28-10. The folded mode of the CMPE-SE unit with 3 × 3 filters achieves comparable or even better results than the 1 × 1 pair-view CMPE-SE, with fewer parameters.

Mixup [42] can be considered an advanced approach to data augmentation that improves the generalization ability of models. When mixup is used, the CMPE-SE block further improves the performance of the residual networks, up to state-of-the-art results. Table 3 compares CMPE-SE-WRN-28-10 with state-of-the-art results. The compared networks include the original ResNet [10], pre-act ResNet [11], ResNet with stochastic depth [15], FractalNet [20], DenseNet [14], ResNeXt [39], PyramidNet [9], and CliqueNet [40]. We observe that our models based on wide residual networks achieve comparable or superior performance to the compared models. Moreover, although the parameter count is large, the training speed of WRN is significantly faster than that of DenseNets, and even faster than that of ResNets [41]. Considering the high extensibility of the proposed CMPE-SE mechanism to all ResNet variants, it is reasonable to believe that the CMPE-SE module can achieve better results on more complex residual architectures.

Results on ImageNet

Owing to limited computational resources (GTX 1080Ti × 2), we only test the performance of the pre-act ResNet-50 (ImageNet mode) equipped with CMPE-SE blocks, and we use a smaller mini-batch size of 64 instead of the 256 used in most studies. Although a smaller batch size impairs performance when training for the same number of epochs [40], the results of CMPE-SE-ResNet-50 (in both the double-FC and the 1 × 1 pair-view modes) are slightly superior to those of other models at the same level, such as SE-ResNet-50 [12]. Compared with SE-ResNet-50, the CMPE-SE-ResNet-50 with 3 × 3 folded inner-imaging reduces the top-1 error rate by 0.5% and the top-5 error rate by 0.27%. The other compared models include the pre-act ResNet-18, 34, and 50 [11], DenseNet-121 [14], CliqueNet, and SE-CliqueNet [40], where "SE-CliqueNet" denotes CliqueNet with channel-wise attentional transition.

Discussion

Beyond the improvement that the CMPE-SE module brings to networks of the same size, another fact is highly noteworthy: our CMPE-SE unit can greatly stimulate the potential of smaller networks with fewer parameters, enabling them to match or even exceed the performance of larger models. This improvement shows that refined modeling of the inner features of convolutional networks is necessary. Regarding the refined modeling of intermediate convolutional features, DenseNet [14] is a type of robust, repetitive refinement of the inner feature maps, while "squeeze and excitation" [12] can also be considered a type of refined modeling of the channel features, whose refining task is to learn the relationships of the convolution channels. The CMPE-SE module extends this task of refined modeling of intermediate features, so that the modeling process of ResNets becomes more efficient. In addition to modeling the competitive relationship between the residual and identity mappings, the CMPE-SE module also provides the fundamental environment for re-imaging the intermediate residual and identity features.
For ease of display, Fig. 5 illustrates several examples of fragmented inner-imaging parts and the corresponding excitation outputs. These re-imaged maps come from layers at depths 4, 13, 22, and 28. We can observe that the average-pooled signal maps of different samples are at first largely identical, with only minor differences, and become more diversified after repeated attentional re-scaling. The attentional outputs show great diversity and tend to suppress redundant residual modeling at deeper layers; in the last layers, when the network's feature maps themselves are more expressive and sparse, the attention values become stable, with only very few jumps. Although the folded inner-imaging mechanism does not show a very significant advantage over the ordinary pair-view CMPE-SE module, the design still provides more possibilities for organizing and encoding the channel-wise squeezed signals, and it is instructive for further work.

To reduce the parameter cost of the subsequent FC layers, we average the outputs of the pair-view convolution kernels. Comparing against the variant that does not average, we find that averaging saves numerous parameters without sacrificing much performance. This indicates that the inner-imaging of the channel features is parameter-efficient, and we can even use a tiny, fixed number of filters for the pair-view re-imaging. In this study we have applied only two types of pair-view filters, 2 × 1 and 1 × 1, plus 3 × 3 kernels for the folded pair-view encoder, which achieve the results reported above. More forms of convolutional channel-relation encoders can easily be added to the CMPE-SE framework, which shows that the CMPE-SE module has high extensibility and wide applicability. We also have reason to believe that branch-competitive modeling and inner-imaging can lead to more capacious re-imaged feature maps and a diverse, refined modeling structure in multi-branch networks.

Conclusion

In this paper, we have presented a competitive squeeze-and-excitation block for ResNets, which models the competitive relation between the residual and identity channels and expands the task of channel-wise attentional modeling. Furthermore, we introduce the inner-imaging strategy to explore the channel relationships by convolution on re-imaged feature maps, and we fold the inner-imaged maps to enrich the channel-relation encoding strategies. The proposed design adds few parameters and can easily be applied to any type of residual network. We evaluated our models on three publicly available datasets against state-of-the-art results. Our approach improves the performance of ResNets and stimulates the potential of smaller networks. Moreover, the presented method is highly scalable and offers the potential to play a greater role in multi-branch architectures.

Table 3: Error rates (%) of different methods on the CIFAR-10, CIFAR-100, and SVHN datasets. The best records of our models are in bold and the best results are highlighted in red.

Table 4: Single-crop error rates (%) on ImageNet.

Appendices

A. Evaluation for Different Shapes of Folded Inner-Imaging

Table 5 lists the test errors of the Conv 3 × 3 encoders for different shapes of folded inner-imaging. We find that the classification ability of our models is not very sensitive to the folded shape.
Although more square folded shapes obtain better results, the worst results in Table 5 still reach the same level as the basic CMPE-SE module or better; therefore, a poor setting of the folded shape will not cause the performance of our models to drop drastically.

B. Folded Inner-Imaging Examples and the Corresponding Excitation Outputs

From these diagrams, we can observe different phenomena for models of different scales. Firstly, after re-weighting with channel-wise attention, all inner-imaged maps change compared with the previous ones, and the degree of change depends on the attention values shown in the diagrams: high fluctuation of the attention values leads to dramatic changes in the inner-imaged maps, whereas low fluctuation leads to smaller changes or merely a difference in color depth. In most cases for the folded inner-imaging models CMPE-SE-WRN-22-10, CMPE-SE-WRN-16-8, and CMPE-SE-ResNet-164, the channel-wise attention values show increasingly strong diversity as the layers deepen, and their range of fluctuation becomes increasingly intense. In the case of CMPE-SE-WRN-28-10, the attention outputs of the last layers tend to stabilize near 0.5, with only a few jumps. We infer that, in the deeper layers of high-parameter networks, the feature maps already have strong diversity and high representational ability, so our CMPE-SE module is more inclined to preserve their original information; before that, the CMPE-SE mechanism applies milder perturbation to the original features of the lower layers and automatically applies stronger perturbation to the deeper, more abstract features to enhance the diversity of the representation.

For the two-stage inner-imaging samples, the inner-imaged maps of different samples in the same layers are very similar before being processed by CMPE-SE; after the channel-weight re-scaling they show a certain degree of differentiation, even though only half of the signals can be re-weighted (the signals from the identity mappings do not change). For WRN-28-10, WRN-22-10, and WRN-16-8, the attentional excitation values of the inner-imaging CMPE-SE modules in the deeper layers are clearly lower than the outputs of the ordinary SE block, which are represented by the red lines. This phenomenon confirms the following characteristic of the CMPE-SE module (especially in the inner-imaging modes): in the deeper layers, the network has already modeled fairly complete features, so the CMPE-SE module suppresses redundant residual modeling.

C. Examples of Excitation Outputs for All Models

For networks with more layers, such as ResNet-164 and ResNet-110, the excitation outputs of the basic SE module gradually flatten as the layers deepen, while those of CMPE-SE become more active. Overall, the inner-imaging modes of the CMPE-SE module (especially with the Conv 3 × 3 pair-view encoder) are the most active on all networks. In some ways, this also indicates that the inner-imaging CMPE-SE module works well.

D. Statistical Analysis of Sample Excitation Outputs

Furthermore, we show some statistical results of the attentional outputs of each model in Figs. 15 and 16, from which we can also observe some interesting phenomena. The right parts of these diagrams show the average attention values of the different blocks; first, note that the numbers of blocks in WRN-28-10, WRN-22-10, WRN-16-8, ResNet-164, and ResNet-110 are 12, 9, 6, 54, and 54, respectively.
In almost all networks, the attentional excitation values show that the CMPE-SE module clearly inhibits the residual mappings in the middle and very deep layers; compared with the basic SE mechanism, this indicates that the CMPE-SE mechanism does encourage the identity mappings at the deeper layers while reducing the redundancy of residual-mapping modeling. The left parts of Figs. 15 and 16 show the variance distributions of the attentional outputs for the different kinds of SE blocks, which reflect the diversity of the channel-wise attention values at each layer. We notice that the variance distributions of

Table 5: Test error (%) comparison for folded inner-imaging with different shapes on CIFAR-10 and CIFAR-100. The folded inner-imaging mechanism uses the Conv 3 × 3 encoder, and italics with underline indicate the shape we chose.
4,759
1807.08920
2884188791
Residual networks, which use a residual unit to supplement the identity mappings, enable very deep convolutional architectures to work well; however, the residual architecture has been shown to be diverse and redundant, which may lead to inefficient modeling. In this work, we propose a competitive squeeze-excitation (SE) mechanism for the residual network. The re-scaling value for each channel in this structure is determined jointly by the residual and identity mappings, and this design enables us to expand the meaning of channel-relationship modeling in residual blocks. Modeling the competition between the residual and identity mappings causes the identity flow to control the complement that the residual feature maps provide for it. Furthermore, we design a novel inner-imaging competitive SE block to reduce the consumption and to re-image the global features of the intermediate network structure; using the inner-imaging mechanism, we can model the channel-wise relations spatially with convolution. We carry out experiments on the CIFAR, SVHN, and ImageNet datasets, and the proposed method can challenge state-of-the-art results.
Attention is widely applied in the modeling process of CNNs @cite_37 and is typically used to re-weight spatial image signals @cite_24 @cite_5 @cite_11 @cite_22 , including multi-scale @cite_1 @cite_35 and multi-shape @cite_38 features. As a tool for biasing the allocation of resources @cite_25 , attention is also used to regulate the internal features of CNNs @cite_3 @cite_42 . Unlike channel switching and combination @cite_32 @cite_40 , or reinforcement learning that reorganizes the network paths @cite_30 , channel-wise attention, typified by @cite_25 , provides an end-to-end trainable solution for re-weighting the intermediate channel features. Moreover, certain models combine spatial and channel-wise attention @cite_0 @cite_8 @cite_20 , but their modeling scope is still limited to the attended elements themselves. In contrast, our proposed CMPE-SE block considers additional related factors (the identity mappings) apart from the objects of attention (the residual mappings). Furthermore, we test the effects of various convolutional filters in channel-wise attention with channel-signal inner-imaging, which can mine the spatial channel-wise relations.
{ "abstract": [ "While much of the work in the design of convolutional networks over the last five years has revolved around the empirical investigation of the importance of depth, filter sizes, and number of feature channels, recent studies have shown that branching, i.e., splitting the computation along parallel but distinct threads and then aggregating their outputs, represents a new promising dimension for significant improvements in performance. To combat the complexity of design choices in multi-branch architectures, prior work has adopted simple strategies, such as a fixed branching factor, the same input being fed to all parallel branches, and an additive combination of the outputs produced by all branches at aggregation points. In this work we remove these predefined choices and propose an algorithm to learn the connections between branches in the network. Instead of being chosen a priori by the human designer, the multi-branch connectivity is learned simultaneously with the weights of the network by optimizing a single loss function defined with respect to the end task. We demonstrate our approach on the problem of multi-class image classification using four different datasets where it yields consistently higher accuracy compared to the state-of-the-art ResNeXt'' multi-branch network given the same learning capacity.", "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "Visual saliency analysis detects salient regions objects that attract human attention in natural scenes. It has attracted intensive research in different fields such as computer vision, computer graphics, and multimedia. While many such computational models exist, the focused study of what and how applications can be beneficial is still lacking. In this article, our ultimate goal is thus to provide a comprehensive review of the applications using saliency cues, the so-called attentive systems. We would like to provide a broad vision about saliency applications and what visual saliency can do. We categorize the vast amount of applications into different areas such as computer vision, computer graphics, and multimedia. Intensively covering 200+ publications we survey (1) key application trends, (2) the role of visual saliency, and (3) the usability of saliency into different tasks.", "Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. 
This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.", "Attention-based learning for fine-grained image recognition remains a challenging task, where most of the existing methods treat each object part in isolation, while neglecting the correlations among them. In addition, the multi-stage or multi-scale mechanisms involved make the existing methods less efficient and hard to be trained end-to-end. In this paper, we propose a novel attention-based convolutional neural network (CNN) which regulates multiple object parts among different input images. Our method first learns multiple attention region features of each input image through the one-squeeze multi-excitation (OSME) module, and then apply the multi-attention multi-class constraint (MAMC) in a metric learning framework. For each anchor feature, the MAMC functions by pulling same-attention same-class features closer, while pushing different-attention or different-class features away. Our method can be easily trained end-to-end, and is highly efficient which requires only one training stage. Moreover, we introduce Dogs-in-the-Wild, a comprehensive dog species dataset that surpasses similar existing datasets by category coverage, data volume and annotation quality. This dataset will be released upon acceptance to facilitate the research of fine-grained image recognition. Extensive experiments are conducted to show the substantial improvements of our method on four benchmark datasets.", "State-of-the-art deep convolutional networks (DCNs) such as squeeze-and- excitation (SE) residual networks implement a form of attention, also known as contextual guidance, which is derived from global image features. Here, we explore a complementary form of attention, known as visual saliency, which is derived from local image features. We extend the SE module with a novel global-and-local attention (GALA) module which combines both forms of attention -- resulting in state-of-the-art accuracy on ILSVRC. We further describe ClickMe.ai, a large-scale online experiment designed for human participants to identify diagnostic image regions to co-train a GALA network. Adding humans-in-the-loop is shown to significantly improve network accuracy, while also yielding visual features that are more interpretable and more similar to those used by human observers.", "Traditional convolutional neural networks (CNN) are stationary and feedforward. They neither change their parameters during evaluation nor use feedback from higher to lower layers. Real brains, however, do. So does our Deep Attention Selective Network (dasNet) architecture. DasNet's feedback structure can dynamically alter its convolutional filter sensitivities during classification. It harnesses the power of sequential processing to improve classification performance, by allowing the network to iteratively focus its internal attention on some of its convolutional filters. Feedback is trained through direct policy search in a huge million-dimensional parameter space, through scalable natural evolution strategies (SNES). 
On the CIFAR-10 and CIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model on unaugmented datasets.", "Incorporating multi-scale features in fully convolutional neural networks (FCNs) has been a key element to achieving state-of-the-art performance on semantic image segmentation. One common way to extract multi-scale features is to feed multiple resized input images to a shared deep network and then merge the resulting features for pixelwise classification. In this work, we propose an attention mechanism that learns to softly weight the multi-scale features at each pixel location. We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model. The proposed attention model not only outperforms averageand max-pooling, but allows us to diagnostically visualize the importance of features at different positions and scales. Moreover, we show that adding extra supervision to the output at each scale is essential to achieving excellent performance when merging multi-scale features. We demonstrate the effectiveness of our model with extensive experiments on three challenging datasets, including PASCAL-Person-Part, PASCAL VOC 2012 and a subset of MS-COCO 2014.", "In this paper, we present a simple and modularized neural network architecture, named interleaved group convolutional neural networks (IGCNets). The main point lies in a novel building block, a pair of two successive interleaved group convolutions: primary group convolution and secondary group convolution. The two group convolutions are complementary: (i) the convolution on each partition in primary group convolution is a spatial convolution, while on each partition in secondary group convolution, the convolution is a point-wise convolution; (ii) the channels in the same secondary partition come from different primary partitions. We discuss one representative advantage: Wider than a regular convolution with the number of parameters and the computation complexity preserved. We also show that regular convolutions, group convolution with summation fusion, and the Xception block are special cases of interleaved group convolutions. Empirical results over standard benchmarks, CIFAR- @math , CIFAR- @math , SVHN and ImageNet demonstrate that our networks are more efficient in using parameters and computation complexity with similar or higher accuracy.", "We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning - answering image-related questions which require a multi-step, high-level process - a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot.", "", "Visual attention has been successfully applied in structural prediction tasks such as visual captioning and question answering. 
Existing visual attention models are generally spatial, i.e., the attention is modeled as spatial probabilities that re-weight the last conv-layer feature map of a CNN encoding an input image. However, we argue that such spatial attention does not necessarily conform to the attention mechanism – a dynamic feature extractor that combines contextual fixations over time, as CNN features are naturally spatial, channel-wise and multi-layer. In this paper, we introduce a novel convolutional neural network dubbed SCA-CNN that incorporates Spatial and Channel-wise Attentions in a CNN. In the task of image captioning, SCA-CNN dynamically modulates the sentence generation context in multi-layer feature maps, encoding where (i.e., attentive spatial locations at multiple layers) and what (i.e., attentive channels) the visual attention is. We evaluate the proposed SCA-CNN architecture on three benchmark image captioning datasets: Flickr8K, Flickr30K, and MSCOCO. It is consistently observed that SCA-CNN significantly outperforms state-of-the-art visual attention-based image captioning methods.", "", "", "", "We propose Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks. Given an intermediate feature map, our module sequentially infers attention maps along two separate dimensions, channel and spatial, then the attention maps are multiplied to the input feature map for adaptive feature refinement. Because CBAM is a lightweight and general module, it can be integrated into any CNN architectures seamlessly with negligible overheads and is end-to-end trainable along with base CNNs. We validate our CBAM through extensive experiments on ImageNet-1K, MS COCO detection, and VOC 2007 detection datasets. Our experiments show consistent improvements in classification and detection performances with various models, demonstrating the wide applicability of CBAM. The code and models will be publicly available.", "" ], "cite_N": [ "@cite_30", "@cite_35", "@cite_37", "@cite_38", "@cite_22", "@cite_8", "@cite_42", "@cite_1", "@cite_32", "@cite_3", "@cite_24", "@cite_0", "@cite_40", "@cite_5", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "2757338536", "2307770531", "2756464018", "603908379", "2952011421", "2803137490", "2962970253", "2962891704", "2748788739", "2760103357", "", "2550553598", "", "", "", "2884585870", "" ] }
Competitive Inner-Imaging Squeeze and Excitation for Residual Network
In almost all networks, the CMPE-SE module clearly inhibits the residual mappings in the middle and very deep layers through its attentional excitation values; this indicates that, compared with the basic SE mechanism, the CMPE-SE mechanism does encourage identity mappings at the deeper layers while reducing the redundancy of residual mapping modeling. The left parts of Figs. 15 and 16 show the variance distributions of the attentional outputs for the different kinds of SE blocks, which reflect the diversity of the channel-wise attention values at each layer. Table 5: Test error (%) comparison for folded inner-imaging with different shapes on CIFAR-10 and CIFAR-100; the folded inner-imaging mechanism uses the Conv 3×3 encoder, and italics with underline indicate the shape we chose.
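Pulling together the pair-view and folded inner-imaging descriptions above, the following is a hedged PyTorch sketch of a CMPE-SE-style unit; the layer sizes, the number of encoding kernels, the FC reduction ratio and the way the re-scaled residual is recombined with the identity branch are all our assumptions. Only the overall flow (squeeze both branches, re-image the squeezed signals, encode them with small convolutions, average the kernel outputs, then excite the residual channels) follows the text.

```python
import torch
from torch import nn
import torch.nn.functional as F

class PairViewCMPESE(nn.Module):
    """Hedged sketch of a pair-view CMPE-SE unit: both the residual branch and the
    identity shortcut are squeezed by global average pooling, stacked into a small
    "inner image", encoded by small convolutions, and mapped to channel-wise
    excitation weights for the residual branch."""

    def __init__(self, channels, reduction=16, num_kernels=4, folded=False):
        super().__init__()
        self.folded = folded
        if folded:
            # Folded inner-imaging: reshape the 2C squeezed signals into an (n, m) map
            # (the text uses n = 2C/16, m = 16 for pre-act ResNet) and encode with 3x3 filters.
            self.n, self.m = 2 * channels // 16, 16   # requires channels % 8 == 0
            self.encode = nn.Conv2d(1, num_kernels, kernel_size=3, padding=1)
        else:
            # Pair-view: 1x1 (or 2x1) filters over the 2 x C map of squeezed signals.
            self.encode = nn.Conv2d(1, num_kernels, kernel_size=1)
        self.fc = nn.Sequential(
            nn.Linear(2 * channels, max(channels // reduction, 1)), nn.ReLU(inplace=True),
            nn.Linear(max(channels // reduction, 1), channels), nn.Sigmoid(),
        )

    def forward(self, residual, identity):
        b, c, _, _ = residual.shape
        sq_res = F.adaptive_avg_pool2d(residual, 1).view(b, c)
        sq_id = F.adaptive_avg_pool2d(identity, 1).view(b, c)
        if self.folded:
            inner = torch.cat([sq_res, sq_id], dim=1).view(b, 1, self.n, self.m)
        else:
            inner = torch.stack([sq_res, sq_id], dim=1).view(b, 1, 2, c)
        # Average the outputs of the encoding kernels to keep the FC input small.
        encoded = self.encode(inner).mean(dim=1).view(b, -1)
        weights = self.fc(encoded).view(b, c, 1, 1)
        # Re-scale only the residual branch; combining with the identity branch by
        # plain addition is our assumption.
        return identity + residual * weights
```

Inside a pre-act residual block such a unit would stand in for the plain residual-plus-identity addition, e.g. `out = PairViewCMPESE(c)(residual_branch(x), x)`.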
4,759
1906.04135
2759245808
Named Entity Recognition for social media data is challenging because of its inherent noisiness. In addition to improper grammatical structures, it contains spelling inconsistencies and numerous informal abbreviations. We propose a novel multi-task approach by employing a more general secondary task of Named Entity (NE) segmentation together with the primary task of fine-grained NE categorization. The multi-task neural network architecture learns higher order feature representations from word and character sequences along with basic Part-of-Speech tags and gazetteer information. This neural network acts as a feature extractor to feed a Conditional Random Fields classifier. We were able to obtain the first position in the 3rd Workshop on Noisy User-generated Text (WNUT-2017) with a 41.86 entity F1-score and a 40.24 surface F1-score.
Traditional NER systems use hand-crafted features, gazetteers and other external resources to perform well @cite_12 . obtain state-of-the-art results by relying on heavily hand-crafted features, which are expensive to develop and maintain. Recently, many studies have outperformed traditional NER systems by applying neural network architectures. For instance, use a bidirectional LSTM-CRF architecture. They obtain a state-of-the-art performance without relying on hand-crafted features. , who achieved the first place on WNUT-2016 shared task, use a BLSTM neural network to leverage orthographic features. We use a similar approach but we employ CNN and BLSTM in parallel instead of forwarding the CNN output to the BLSTM. Nevertheless, our main contribution resides on Multi-Task Learning (MTL) and a combination of POS tags and gazetteers representation to feed the network. Recently, MTL has gained significant attention. Researchers have tried to correlate the success of MTL with label entropy, regularizers, training data size, and other aspects @cite_13 @cite_17 . For instance, use a multi-task network for different NLP tasks and show that the multi-task setting improves generality among shared tasks. In this paper, we take advantage of the multi-task setting by adding a more general secondary task, NE segmentation, along with the primary NE categorization task.
{ "abstract": [ "Multitask learning has been applied successfully to a range of tasks, mostly morphosyntactic. However, little is known on when MTL works and whether there are data characteristics that help to determine its success. In this paper we evaluate a range of semantic sequence labeling tasks in a MTL setup. We examine different auxiliary tasks, amongst which a novel setup, and correlate their impact to data-dependent conditions. Our results show that MTL is not always effective, significant improvements are obtained only for 1 out of 5 tasks. When successful, auxiliary tasks with compact and more uniform label distributions are preferable.", "We analyze some of the fundamental design challenges and misconceptions that underlie the development of an efficient and robust NER system. In particular, we address issues such as the representation of text chunks, the inference approach needed to combine local NER decisions, the sources of prior knowledge and how to use them within an NER system. In the process of comparing several solutions to these challenges we reach some surprising conclusions, as well as develop an NER system that achieves 90.8 F1 score on the CoNLL-2003 NER shared task, the best reported result for this dataset.", "" ], "cite_N": [ "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2590925815", "2004763266", "2592170186" ] }
A Multi-task Approach for Named Entity Recognition in Social Media Data
Named Entity Recognition (NER) aims at identifying different types of entities, such as people names, companies, location, etc., within a given text. This information is useful for higher-level Natural Language Processing (NLP) applications such as information extraction, summarization, and data mining (Chen et al., 2004;Banko et al., 2007;Aramaki et al., 2009). Learning Named Entities (NEs) from social media is a challenging task mainly because (i) entities usually represent a small part of limited annotated data which makes the task hard to generalize, and (ii) they do not follow strict rules (Ritter et al., 2011;Li et al., 2012). This paper describes a multi-task neural network that aims at generalizing the underneath rules of emerging NEs in user-generated text. In addition to the main category classification task, we employ an auxiliary but related secondary task called NE segmentation (i.e. a binary classification of whether a given token is a NE or not). We use both tasks to jointly train the network. More specifically, the model captures word shapes and some orthographic features at the character level by using a Convolutional Neural Network (CNN). For contextual and syntactical information at the word level, such as word and Partof-Speech (POS) embeddings, the model implements a Bidirectional Long-Short Term Memory (BLSTM) architecture. Finally, to cover wellknown entities, the model uses a dense representation of gazetteers. Once the network is trained, we use it as a feature extractor to feed a Conditional Random Fields (CRF) classifier. The CRF classifier jointly predicts the most likely sequence of labels giving better results than the network itself. With respect to the participants of the shared task, our approach achieved the best results in both categories: 41.86% F1-score for entities, and 40.24% F1-score for surface forms. The data for this shared task is provided by Derczynski et al. (2017). Methodology This section describes our system 1 in three parts: feature representation, model description 2 , and sequential inference. Feature Representation We select features to represent the most relevant aspects of the data for the task. The features are divided into three categories: character, word, and lexicons. Character representation: we use an orthographic encoder similar to that of Limsopatham and Collier (2016) to encapsulate capitalization, punctuation, word shape, and other orthographic features. The only difference is that we handle non-ASCII characters. For instance, the sentence "3rd Workshop !" becomes "ncc Cccccccc p" as we map numbers to 'n', letters to 'c' (or 'C' if capitalized), and punctuation marks to 'p'. Non-ASCII characters are mapped to 'x'. This encoded representation reduces the sparsity of character features and allows us to focus on word shapes 1 https://github.com/tavo91/NER-WNUT17 2 The neural network is implemented using Keras (https://github.com/fchollet/keras) and Theano as backend (http://deeplearning.net/ software/theano/). and punctuation patterns. Once we have an encoded word, we represent each character with a 30-dimensional vector (Ma and Hovy, 2016). We account for a maximum length of 20 characters 3 per word, applying post padding on shorter words and truncating longer words. Word representation: we have two different representations at the word level. The first one uses pre-trained word embeddings trained on 400 million tweets representing each word with 400 dimensions (Godin et al., 2015) 4 . 
The second one uses Part-of-Speech tags generated by the CMU Twitter POS tagger (Owoputi et al., 2013). The POS tag embeddings are represented by 100dimensional vectors. In order to capture contextual information, we account for a context window of 3 tokens on both words and POS tags, where the target token is in the middle of the window. We randomly initialize both the character features and the POS tag vectors using a uniform dis- tribution in the range − 3 dim , + 3 dim , where dim is the dimension of the vectors from each feature representation (He et al., 2015). Lexical representation: we use gazetteers provided by Mishra and Diesner (2016) to help the model improve its precision for well-known entities. For each word we create a binary vector of 6 dimensions (one dimension per class). Each of the vector dimensions is set to one if the word appears in the gazetteers of the related class. Model Description Character level CNN: we use a CNN architecture to learn word shapes and some orthographic features at the character level representation (see Figure 1). The characters are embedded into a R d×l dimensional space, where d is the dimension of the features per character and l is the maximum length of characters per word. Then, we take the character embeddings and apply 2-stacked convolutional layers. Following Zhou et al. (2015), we perform a global average pooling 5 instead of the widely used max pooling operation. Finally, the result is passed to a fully-connected layer using a Rectifier Linear Unit (ReLU) activation function, which yields the character-based representation of Figure 1: Orthographic character-based representation of a word (green) using a CNN with 2-stacked convolutional layers. The first layer takes the input from embeddings (red) while the second layer (blue) takes the input from the first convolutional layer. Global Average Pooling is applied after the second convolutional layer. a word. The resulting vector is used as input for the rest of the network. Word level BLSTM: we use a Bidirectional LSTM (Dyer et al., 2015) to learn the contextual information of a sequence of words as described in Figure 2. Word embeddings are initialized with pre-trained Twitter word embeddings from a Skipgram model (Godin et al., 2015) using word2vec (Mikolov et al., 2013). Additionally, we use POS tag embeddings, which are randomly initialized using a uniform distribution. The model receives the concatenation of both POS tags and Twitter word embeddings. The BLSTM layer extracts the features from both forward and backward directions and concatenates the resulting vectors from each direction ([ h; h]). Following Ma and Hovy (2016), we use 100 neurons per direction. The resulting vector is used as input for the rest of the network. Lexicon network: we take the lexical representation vectors of the input words and feed them into a fully-connected layer. We use 32 neurons on this layer and a ReLU activation function. Then, the resulting vector is used as input for the rest of the network. Multi-task network: we create a unified model to predict the NE segmentation and NE categorization tasks simultaneously. Typically, the additional task acts as a regularizer to generalize the model (Goodfellow et al., 2016;Collobert and Weston, 2008). The concatenation of character, word and lexical vectors is fed into the NE segmentation and categorization tasks. 
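Before moving on to the task-specific output layers below, here is a hedged Keras sketch of the character-level CNN branch just described: 30-dimensional character embeddings over at most 20 encoded characters, two stacked convolutions with global average pooling, then a ReLU dense layer. The filter count of 64 and kernel size of 3 come from the hyper-parameter discussion later in the paper, while the vocabulary size, the ReLU activations inside the convolutions and the 32-unit output width are our assumptions.

```python
from keras.layers import Input, Embedding, Conv1D, GlobalAveragePooling1D, Dense
from keras.models import Model

MAX_CHARS = 20     # maximum encoded characters per word
CHAR_VOCAB = 40    # assumed size of the orthographic alphabet ('n', 'c', 'C', 'p', 'x', ...)

char_input = Input(shape=(MAX_CHARS,), name='char_input')
x = Embedding(input_dim=CHAR_VOCAB, output_dim=30)(char_input)            # 30-dim char embeddings
x = Conv1D(filters=64, kernel_size=3, padding='same', activation='relu')(x)
x = Conv1D(filters=64, kernel_size=3, padding='same', activation='relu')(x)
x = GlobalAveragePooling1D()(x)                                           # instead of max pooling
char_vector = Dense(32, activation='relu')(x)                             # character-based word representation

char_cnn = Model(char_input, char_vector, name='char_cnn')
```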
We use a single-neuron layer with a sigmoid activation function for the secondary NE segmentation task, whereas for the primary NE categorization task, we employ a 13neuron 6 layer with a softmax activation function. Finally, we add the losses from both tasks and feed the total loss backward during training. Sequential Inference The multi-task network predicts probabilities for each token in the input sentence individually. Thus, those individual probabilities do not account for sequential information. We exploit the sequential information by using a Conditional Random Fields 7 classifier over those probabilities. This allows us to jointly predict the most likely sequence of labels for a given sentence instead of performing a word-by-word prediction. More specifically, we take the weights learned by the multi-task neural network and use them as features for the CRF classifier (see Figure 3). Taking weights from the common dense layer captures both of the segmentation and categorization features. Experimental Settings We preprocess all the datasets by replacing the URLs with the token <URL> before performing any experiment. Additionally, we use half of development set as validation and the other half as evaluation. Figure 3: Overall system design. First, the system embeds a sentence into a high-dimensional space and uses CNN, BLSTM, and dense encoders to extract features. Then, it concatenates the resulting vectors of each encoder and performs multi-task. The top left single-node layer represents segmentation (red) while the top right three-node layer represents categorization (blue). Finally, a CRF classifier uses the weights of the common dense layer to perform a sequential classification. Regarding the network hyper-parameters, in the case of the CNN, we set the kernel size to 3 on both convolutional layers. We also use the same number of filters on both layers: 64. Increasing the number of filters and the number of convolutional layers yields worse results, and it takes significantly more time. In the case of the BLSTM architecture, we add dropout layers before and after the Bidirectional LSTM layers with dropout rates of 0.5. The dropout layers allow the network to reduce overfitting (Srivastava et al., 2014). We also tried using a batch normalization layer instead of dropouts, but the experiment yielded worse results. The training of the whole neural network is conducted using a batch size of 500 samples, and 150 epochs. Additionally, we compile the model using the AdaMax optimizer (Kingma and Ba, 2014). Accuracy and F1-score are used as evaluation metrics. For sequential inference, the CRF classifier uses L-BFGS as a training algorithm with L1 and L2 regularization. The penalties for L1 and L2 are 1.0 and 1.0e −3 , respectively. Results and Discussion We compare the results of the multi-task neural network itself and the CRF classifier on each of our experiments. The latter one always shows the best results, which emphasizes the importance of sequential information. The results of the CRF, using the development set, are in Table 1. Moreover, the addition of a secondary task allows the CRF to use more relevant features from the network improving its results from a F1-score of 52.42% to 54.12%. Our finding that a multitask architecture is generally preferable over the single task architecture is consistent with prior research (Søgaard and Goldberg, 2016;Collobert and Weston, 2008;Attia et al., 2016;Maharjan et al., 2017). 
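As a rough illustration of the multi-task output just described (a single sigmoid neuron for segmentation, a 13-way softmax for categorization, the two losses summed, AdaMax as optimizer) and of the stated CRF configuration (L-BFGS with L1 = 1.0 and L2 = 1e-3), here is a hedged sketch. The branch widths and the shared 100-unit dense layer are our assumptions, and sklearn-crfsuite is only one possible CRF implementation matching that configuration, not necessarily the one used by the authors.

```python
from keras.layers import Input, Dense, concatenate
from keras.models import Model
import sklearn_crfsuite

# Placeholder outputs of the CNN, BLSTM and lexicon branches (sizes assumed).
char_vec = Input(shape=(32,), name='char_vec')
word_vec = Input(shape=(200,), name='word_vec')
lex_vec = Input(shape=(32,), name='lex_vec')
shared = Dense(100, activation='relu')(concatenate([char_vec, word_vec, lex_vec]))

segmentation = Dense(1, activation='sigmoid', name='segmentation')(shared)      # NE vs. not-NE
categorization = Dense(13, activation='softmax', name='categorization')(shared) # likely B/I for 6 classes plus O

model = Model(inputs=[char_vec, word_vec, lex_vec],
              outputs=[segmentation, categorization])
# Summing both losses corresponds to equal loss weights.
model.compile(optimizer='adamax',
              loss={'segmentation': 'binary_crossentropy',
                    'categorization': 'categorical_crossentropy'})

# Sequential inference: a CRF trained with L-BFGS and the stated L1/L2 penalties.
crf = sklearn_crfsuite.CRF(algorithm='lbfgs', c1=1.0, c2=1e-3)
```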
We also study the relevance of our features by performing multiple experiments with the same architecture and different combinations of features. For instance, removing gazetteers from the model drops the results from 54.12% to 52.69%. Similarly, removing POS tags gives worse results (51.12%). Among many combinations, the feature set presented in Section 3.1 yields the best results. The final results of our submission to the WNUT-2017 shared task are shown in Table 2. Our approach obtains the best results for the person and location categories. It is less effective for corporation, and the most difficult categories for our system are creative-work and product. Our intuition is that the latter two classes are the most difficult to predict for because they grow faster and have less restrictive patterns than the rest. For instance, products can have any type of letters or numbers in their names, or in the case of creative works, as many words as their titles can hold (e.g. name of movies, books, songs, etc.). Regarding the shared-task metrics, our approach achieves a 41.86% F1-score for entities and 40.24% for surface forms. Table 3 shows that our system yields similar results to the other participants on both metrics. In general, the final scores are low which states the difficulty of the task and that the problem is far from being solved. Error Analysis By evaluating the errors made by the CRF classifier, we find that the NE boundaries are a problem. For instance, when a NE is preceded by an article starting with a capitalized letter, the model includes the article as if it were part of the NE. This behavior may be caused by the capitalization features captured by the CNN network. Similarly, if a NE is followed by a conjunction and another NE, the classifier tends to join both NEs as if the conjunction were part of a single unified entity. Another common problem shown by the classifier is that fully-capitalized NEs are disregarded most of the time. This pattern may be related to the switch of domains in the training and testing phases. For instance, some Twitter informal abbreviations 8 may appear fully-capitalized but they do not represent NEs, whereas in Reddit and Stack Overflow fully-capitalized words are more likely to describe NEs. Conclusion We show that our multi-task neural network is capable of extracting relevant features from noisy user-generated text. We also show that a CRF classifier can boost the neural network results because it uses the whole sentence to predict the most likely set of labels. Additionally, our approach emphasizes the importance of POS tags in conjunction with gazetteers for NER tasks. Twitter word embeddings and orthographic character embeddings are also relevant for the task. Finally, our ongoing work aims at improving these results by getting a better understanding of the strengths and weaknesses of our model. We also plan to evaluate the current system in related tasks where noise and emerging NEs are prevalent.
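Looking back at the character representation described in the methodology, the orthographic encoding rule is simple enough to state as code; this is a minimal sketch (the function name is ours, not from the released implementation).

```python
def orthographic_encode(text: str) -> str:
    """Map text to its orthographic shape as described above:
    digits -> 'n', lowercase letters -> 'c', uppercase letters -> 'C',
    other ASCII (punctuation) -> 'p', non-ASCII characters -> 'x'.
    Whitespace is kept so that word boundaries are preserved."""
    out = []
    for ch in text:
        if ch.isspace():
            out.append(ch)
        elif not ch.isascii():
            out.append('x')
        elif ch.isdigit():
            out.append('n')
        elif ch.isalpha():
            out.append('C' if ch.isupper() else 'c')
        else:
            out.append('p')
    return ''.join(out)

# Reproduces the example from the paper.
assert orthographic_encode("3rd Workshop !") == "ncc Cccccccc p"
```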
2,114
1807.08571
2884493853
Nowadays, there are plenty of works introducing convolutional neural networks (CNNs) into steganalysis and exceeding conventional steganalysis algorithms. These works have shown the potential of deep learning in the information hiding domain. There are also several deep-learning-based works on image steganography, but they still have problems in capacity, invisibility and security. In this paper, we propose a novel CNN architecture, named ISGAN, to conceal a secret gray image into a color cover image on the sender side and exactly extract the secret image on the receiver side. There are three contributions in our work: (i) we improve the invisibility by hiding the secret image only in the Y channel of the cover image; (ii) we introduce generative adversarial networks to strengthen the security by minimizing the divergence between the empirical probability distributions of stego images and natural images; (iii) in order to fit the human visual system better, we construct a mixed loss function which is more appropriate for steganography, generating more realistic stego images and revealing better secret images. Experiment results show that ISGAN can achieve state-of-the-art performance on the LFW, Pascal VOC2012 and ImageNet datasets.
There have been plenty of works applying deep learning to image steganalysis and achieving excellent performance. @cite_20 proposed a CNN-based steganalysis model, GNCNN, which introduced the hand-crafted KV filter to extract residual noise and used a Gaussian activation function to obtain more useful features. The performance of GNCNN is slightly inferior to the state-of-the-art hand-crafted feature set, the spatial rich model (SRM) @cite_3 . Building on GNCNN, @cite_0 introduced Batch Normalization @cite_29 into XuNet to prevent the network from falling into local minima. XuNet was equipped with TanH activations, @math convolutions and global average pooling, and achieved performance comparable to SRM @cite_3 . @cite_16 put forward YeNet, which surpassed SRM and several of its variants. YeNet used 30 hand-crafted filters from SRM to preprocess images, and applied a well-designed activation function named TLU and a selection-channel module to strengthen features from rich-texture regions, which are more suitable for hiding information. @cite_13 proposed a JPEG steganalysis model with fewer parameters than XuNet and better performance. These works have applied deep learning to steganalysis successfully, but there is still space for improvement.
{ "abstract": [ "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters.", "We describe a novel general strategy for building steganography detectors for digital images. The process starts with assembling a rich model of the noise component as a union of many diverse submodels formed by joint distributions of neighboring samples from quantized image noise residuals obtained using linear and nonlinear high-pass filters. In contrast to previous approaches, we make the model assembly a part of the training process driven by samples drawn from the corresponding cover- and stego-sources. Ensemble classifiers are used to assemble the model as well as the final steganalyzer due to their low computational complexity and ability to efficiently work with high-dimensional feature spaces and large training sets. We demonstrate the proposed framework on three steganographic algorithms designed to hide messages in images represented in the spatial domain: HUGO, edge-adaptive algorithm by Luo , and optimally coded ternary ±1 embedding. For each algorithm, we apply a simple submodel-selection technique to increase the detection accuracy per model dimensionality and show how the detection saturates with increasing complexity of the rich model. By observing the differences between how different submodels engage in detection, an interesting interplay between the embedding and detection is revealed. Steganalysis built around rich image models combined with ensemble classifiers is a promising direction towards automatizing steganalysis for a wide spectrum of steganographic schemes.", "", "Nowadays, the prevailing detectors of steganographic communication in digital images mainly consist of three steps, i.e., residual computation, feature extraction, and binary classification. In this paper, we present an alternative approach to steganalysis of digital images based on convolutional neural network (CNN), which is shown to be able to well replicate and optimize these key steps in a unified framework and learn hierarchical representations directly from raw images. The proposed CNN has a quite different structure from the ones used in conventional computer vision tasks. 
Rather than a random strategy, the weights in the first layer of the proposed CNN are initialized with the basic high-pass filter set used in the calculation of residual maps in a spatial rich model (SRM), which acts as a regularizer to suppress the image content effectively. To better capture the structure of embedding signals, which usually have extremely low SNR (stego signal to image content), a new activation function called a truncated linear unit is adopted in our CNN model. Finally, we further boost the performance of the proposed CNN-based steganalyzer by incorporating the knowledge of selection channel. Three state-of-the-art steganographic algorithms in spatial domain, e.g., WOW, S-UNIWARD, and HILL, are used to evaluate the effectiveness of our model. Compared to SRM and its selection-channel-aware variant maxSRMd2, our model achieves superior performance across all tested algorithms for a wide variety of payloads.", "Adoption of deep learning in image steganalysis is still in its initial stage. In this paper, we propose a generic hybrid deep-learning framework for JPEG steganalysis incorporating the domain knowledge behind rich steganalytic models. Our proposed framework involves two main stages. The first stage is hand-crafted, corresponding to the convolution phase and the quantization and truncation phase of the rich models. The second stage is a compound deep-neural network containing multiple deep subnets, in which the model parameters are learned in the training procedure. We provided experimental evidence and theoretical reflections to argue that the introduction of threshold quantizers, though disabling the gradient-descent-based learning of the bottom convolution phase, is indeed cost-effective. We have conducted extensive experiments on a large-scale data set extracted from ImageNet. The primary data set used in our experiments contains 500 000 cover images, while our largest data set contains five million cover images. Our experiments show that the integration of quantization and truncation into deep-learning steganalyzers do boost the detection performance by a clear margin. Furthermore, we demonstrate that our framework is insensitive to JPEG blocking artifact alterations, and the learned model can be easily transferred to a different attacking target and even a different data set. These properties are of critical importance in practical applications.", "Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis to learn features automatically via deep learning models. We novelly propose a customized Convolutional Neural Network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-theart spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and the realistic and large ImageNet database. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only." 
], "cite_N": [ "@cite_29", "@cite_3", "@cite_0", "@cite_16", "@cite_13", "@cite_20" ], "mid": [ "2949117887", "2009130368", "", "2621048556", "2565257220", "2046180645" ] }
Invisible Steganography via Generative Adversarial Networks
Image steganography is the main content of information hiding. The sender conceal a secret message into a cover image, then get the container image called stego, and finish the secret message's transmission on the public channel by transferring the stego image. Then the receiver part of the transmission can reveal the secret message out. Steganalysis is an attack to the steganography algorithm. The listener on the public channel intercept the image and analyze whether the image contains secret information. Since their proposed, steganography and steganalysis promote each other's progress. Image steganography can be used into the transmission of secret information, watermark, copyright certification and many other applications. In general, we can measure a steganography algorithm by capacity, invisibility and security. The capacity is measured by bits-per-pixel (bpp) which means the average number of bits concealed into each pixel of the cover image. With the capacity becomes larger, the security and the invisibility become worse. The invisibility is measured by the similarity of the stego image and its corresponding cover image. The invisibility becomes better as the similarity going higher. The security is measured by whether the stego image can be recognized out from natural images by steganalysis algorithms. Correspondingly, there are two focused challenges constraining the steganography performance. The amount of hidden message alters the quality of stego images. The more message in it, the easier the stego image can be checked out. Another keypoint is the cover image itself. Concealing message into noisy, rich semantic region of the cover image yields less detectable perturbations than hiding into smooth region. Nowadays, traditional steganography algorithms, such as S-UNIWARD [1], J-UNIWARD [1], conceal the secret information into cover images' spatial domain or transform domains by hand-crafted embedding algorithms successfully and get excellent invisibility and security. With the rise of deep learning in recent years, deep learning has become the hottest research method in computer vision and has been introduced into information hiding domain. Volkhonskiy et al. [7] proposed a steganography enhancement algorithm based on GAN, they concealed secret message into generated images with conventional algorithms and enhanced the security. But their generated images are warping in semantic, which will be drawn attention easily. Tang et al. [9] proposed an automatic steganographic distortion learning framework, their generator can find pixels which are suitable for embedding and conceal message into them, their discriminator is trained as a steganalyzer. With the adversarial training, the model can finish the steganography process. But this kind of method has low capacity and is less secure than conventional algorithms. Baluja [12] proposed a convolutional neural network based on the structure of encoder-decoder. The encoder network can conceal a secret image into a same size cover image successfully and the decoder network can reveal out the secret image completely. This method is different from other deep learning based models and conventional steganography algorithms, it has large capacity and strong invisibility. But stego images generated by this model is distorted in color and its security is bad. Inspired by Baluja's work, we proposed an invisible steganography via generative adversarial network named ISGAN. 
Our model can conceal a gray secret image into a color cover image of the same size, and it has large capacity, strong invisibility and high security. Compared with previous works, the main contributions of our work are as below: 1. In order to suppress the distortion of stego images, we select a new steganography position. We only embed and extract secret information in the Y channel of the cover image. The color information lies entirely in the Cr and Cb channels of the cover image and can be saved completely into the stego image, so the invisibility is strengthened. 2. From the mathematical point of view, the difference between the empirical probability distributions of stego images and natural images can be measured by the divergence, so we introduce generative adversarial networks to increase the security by minimizing the divergence. In addition, we introduce several architectures from classic computer vision tasks to fuse the cover image and the secret image together better and to obtain faster training. 3. In order to fit the human visual system (HVS) better, we introduce the structure similarity index (SSIM) [17] and its variant to construct a mixed loss function. The mixed loss function helps to generate more realistic stego images and to reveal better secret images. This point has not been considered by any previous deep-learning-based work in the information hiding domain. The rest of the paper is organized as follows. Sec. 2 discusses related works, Sec. 3 introduces architecture details of ISGAN and the mixed loss function. Sec. 4 gives details of the different datasets, parameter settings, our experiment processes and results. Finally, Sec. 5 concludes the paper with relevant discussion. Our Approach The complete architecture of our model is shown in Fig. 1. In this section, the new steganography position is introduced first. Then we discuss our design considerations on the basic model and show specific details of the encoder and the decoder. Thirdly, we present why generative adversarial networks can improve the security, along with details of the discriminator. Finally, we explain the motivation for constructing the mixed loss function. New Steganography Position Works of Baluja [12] and Atique [13] have implemented the entire hiding and revealing procedure, while their stego images' color is distorted, as shown in Fig. 4. To address this weakness, we select a new steganography position. Figure 1: The overall architecture. The encoder network conceals a gray secret image into the Y channel of a same-size cover image; the Y channel output by the encoder net and the U/V channels then constitute the stego image. The decoder network reveals the secret image from the Y channel of the stego image. The steganalyzer network tries to distinguish stego images from cover images, thus improving the overall architecture's security. As shown in Fig. 2, a color image in the RGB color space can be divided into R, G and B channels, and each channel contains both semantic information and color information. When converted to the YCrCb color space, a color image can be divided into Y, Cr and Cb channels. The Y channel contains only part of the semantic information and the luminance information, and no color information; the Cr and Cb channels contain part of the semantic information and all of the color information. To guarantee no color distortion, we conceal the secret image only in the Y channel and all color information is saved into the stego image.
In addition, we select gray images as our secret images thus decreasing the secret information by 2 3 . When embedding, the color image is converted to the YCrCb color space, then the Y channel and the gray secret image are concatenated together and then are input to the encoder network. After hiding, the encoder's output and the cover image's CrCb channels constitute the color stego image. When revealing, we get the revealed secret image through decoding the Y channel of the stego image. Besides, the transformation between the RGB color space and the YCrCb color space is just the weighted computation of three channels and doesn't affect the backprop-agation. So we can finish this tranformation during the entire hiding and revealing process. The encoder-decoder architecture can be trained end-toend, which is called as the basic model. Basic Model Conventional or classic image stegnography are usually designed in a heuristic way. Generally, these algorithms decide whether to conceal information into a pixel of the cover image and how to conceal 1 bit information into a pixel. So the key of the classic steganography methods is well hand-crafted algorithms, but all of these algorithms need lots of expertise and this is very difficult for us. The best solution is to mix the secret image with the cover image very well without too much expertise. Deep learning, represented by convolutional neural networks, is a good way to achieve this exactly. What we need to do is to design the structure of the encoder and the decoder as described below. Based on such a starting point, we introduce the inception module [21] in our encoder network. The inception module has excellent performance on the Im-ageNet classification task, which contains several convolution kernels with different kernel sizes as shown Figure 2: Three images in the first column are original RGB color images. Three images in the right of the first row are R channel, G channel and B channel of the original image respectively saved as gray images, three channels all constitutes the luminance information and color information. Three images in the right of the second row are Y channel, Cr channel and Cb channel respectively saved as gray images, and three images in the right of the third row are also Y channel, Cr channel and Cb channel respectively from Wikipedia. We can see that, the Y channel constitutes only the luminance information and semantic information, and the color information about chrominance and chroma are all in the Cr channel and the Cb channel. in Fig. 3. Such a model structure can fuse feature maps with different receptive field sizes very well. As shown in both residual networks [20] and batch normalization [19], a model with these modifications can achieve the performance with significantly fewer training steps comparing to its original version. So we introduce both residual module and batch normalization into the encoder network to speed up the training procedure. The detail structure of the encoder is described in Tab. 1. When using MSE as the metric on LFW dataset, we use our model to train for 30 epochs to get the performance Atique's model can achieve while training for 50 epochs. On the other hand, we need a structure to reveal the secret image out automatically. So we use a fully convolutional network as the decoder network. Feature maps output by each convolutional Figure 3: The inception module with residual shortcut we use in our work. 
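Since hiding happens only in the Y channel, the color-space conversion mentioned above must remain differentiable, and indeed it is just a fixed linear combination of the R, G and B channels. A minimal sketch follows, assuming the common ITU-R BT.601 coefficients (the paper does not state which variant it uses).

```python
import torch

# Fixed BT.601 conversion matrix (assumed variant); rows give Y, Cb, Cr.
RGB_TO_YCBCR = torch.tensor([
    [ 0.299,     0.587,     0.114   ],
    [-0.168736, -0.331264,  0.5     ],
    [ 0.5,      -0.418688, -0.081312],
])

def rgb_to_ycbcr(rgb: torch.Tensor) -> torch.Tensor:
    """rgb: (B, 3, H, W) in [0, 1] -> (B, 3, H, W) with channels (Y, Cb, Cr).
    A constant linear map plus an offset on the chroma channels, hence fully
    differentiable and safe to place inside the hiding/revealing pipeline."""
    offset = torch.tensor([0.0, 0.5, 0.5]).view(1, 3, 1, 1)
    return torch.einsum('ij,bjhw->bihw', RGB_TO_YCBCR, rgb) + offset
```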
Our steganalyzer. The works of Baluja and Atique didn't consider the security problem, while security is the key point in steganography. In our work, we want to take steganalysis into account automatically while training the basic model. Denoting C as the set of all cover images c, the selection of cover images from C can be described by a random variable c on C with probability distribution function (pdf) P. Assuming the cover images are selected with pdf P and embedded with a secret image chosen from its corresponding set, the set of all stego images is again a random variable s on C with pdf Q. The statistical detectability can be measured by the Kullback-Leibler divergence [15] shown in (1) or the Jensen-Shannon divergence shown in (2): $KL(P\|Q) = \sum_{c \in C} P(c) \log \frac{P(c)}{Q(c)}$ (1), $JS(P\|Q) = \frac{1}{2} KL\left(P \,\middle\|\, \frac{P+Q}{2}\right) + \frac{1}{2} KL\left(Q \,\middle\|\, \frac{P+Q}{2}\right)$ (2). The KL divergence or the JS divergence is a very fundamental quantity because it provides bounds on the best possible steganalyzer one can build [16]. So the key point for us is how to decrease the divergence. Generative adversarial networks (GANs) are well designed in theory to achieve exactly this. The objective of the original GAN is to minimize the JS divergence (2); a variant of the GAN minimizes the KL divergence (1). The generator network G, whose input is a noise z, tries to transform the input into a data sample similar to the real samples. The discriminator network D, whose input is real data or fake data generated by the generator network, determines the difference between the real and fake samples. D and G play a two-player minimax game with the value function (3): $\min_G \max_D V(D, G) = E_{x \sim p_{data}(x)}[\log D(x)] + E_{z \sim p_z(z)}[\log(1 - D(G(z)))]$ (3). Now we introduce generative adversarial networks into our architecture. The basic model can finish the entire hiding and revealing process, so we use the basic model as the generator and introduce a CNN-based steganalysis model as the discriminator, i.e. the steganalyzer. The value function in our work thus becomes (4), where D represents the steganalyzer network, G represents the basic model, and x, s and G(x, s) represent the cover image, the secret image and the generated stego image respectively: $\min_G \max_D V(D, G) = E_{x \sim P(x)}[\log D(x)] + E_{x \sim P(x), s \sim P(s)}[\log(1 - D(G(x, s)))]$ (4). Xu et al. [4] studied the design of CNN structures specific to image steganalysis applications and proposed XuNet. XuNet embeds an absolute activation (ABS) in the first convolutional layer to improve the statistical modeling, applies the TanH activation function in early stages of the network to prevent overfitting, and adds batch normalization (BN) before each nonlinear activation layer. This well-designed CNN provides excellent detection performance in steganalysis, so we design our steganalyzer based on XuNet and adapt it to fit our stego images. In addition, we use the spatial pyramid pooling (SPP) module [22] to replace the global average pooling layer. Mixed Loss Function In previous works, Baluja [12] used the mean square error (MSE) between the pixels of the original images and the pixels of the reconstructed images as the metric (5), where c and s are the cover and secret images respectively, c′ and s′ are the stego and revealed secret images respectively, and β weights their reconstruction errors. In particular, we should note that the error term $\|c - c'\|$ doesn't apply to the weights of the decoder network.
On the other hand, both the encoder network and the decoder network receive the error signal $\beta\|s - s'\|$ for reconstructing the secret image: $L(c, c', s, s') = \|c - c'\| + \beta \|s - s'\|$ (5). However, the MSE just penalizes large errors between corresponding pixels of the two images but disregards the underlying structure in the images. The human visual system (HVS) is more sensitive to luminance and color variations in texture-less regions. Zhao et al. [14] analyzed the importance of perceptually-motivated losses when the resulting image of an image restoration task is evaluated by a human observer. They compared the performance of several losses and proposed a novel, differentiable error function. Inspired by their work, we introduce the structure similarity index (SSIM) [17] and its variant, the multi-scale structure similarity index (MS-SSIM) [18], into our metric. The SSIM index separates the task of similarity measurement into three comparisons: luminance, contrast and structure. The luminance, contrast and structure similarity of two images are measured by (6), (7) and (8) respectively, where $\mu_x$ and $\mu_y$ are the pixel means of images x and y, $\theta_x$ and $\theta_y$ are the pixel standard deviations of images x and y, and $\theta_{xy}$ is the covariance of x and y. In addition, $C_1$, $C_2$ and $C_3$ are constants included to avoid instability when the denominators are close to zero. The total calculation of SSIM is shown in (9), where $l > 0$, $m > 0$, $n > 0$ are parameters used to adjust the relative importance of the three components. A more detailed introduction to SSIM can be found in [17]. The value range of the SSIM index is [0, 1]; the higher the index, the more similar the two images. So we use $1 - SSIM(x, y)$ in our loss function to measure the difference between two images. MS-SSIM [18] is an enhanced variant of the SSIM index, so we also introduce it into our loss function (we write MSSIM in formulas for MS-SSIM). $L(x, y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}$ (6), $C(x, y) = \frac{2\theta_x \theta_y + C_2}{\theta_x^2 + \theta_y^2 + C_2}$ (7), $S(x, y) = \frac{\theta_{xy} + C_3}{\theta_x \theta_y + C_3}$ (8), $SSIM(x, y) = [L(x, y)]^l \cdot [C(x, y)]^m \cdot [S(x, y)]^n$ (9). Considering pixel value differences and structure differences simultaneously, we put MSE, SSIM and MS-SSIM together, so the metric for the basic steganography network in our framework is as below: $L(c, c') = \alpha(1 - SSIM(c, c')) + (1 - \alpha)(1 - MSSIM(c, c')) + \beta\, MSE(c, c')$ (10), where α and β are hyperparameters to weigh the influences of the three metrics and γ is a hyperparameter to trade off the quality of stego images and revealed secret images. Experiment results in Sec. 4 will compare the performance of different loss functions. Experiments and Results In this section, we'll introduce our experiment details and results. Firstly, the datasets we used are LFW [31], Pascal VOC 2012 [32] and ImageNet [30]. The Labeled Faces in the Wild (LFW) dataset contains more than 13,000 face images belonging to 1,680 people collected from the web. 10k images were selected from LFW to constitute 5k cover-secret image pairs as our training set, and the rest of LFW was used as our validation set. Pascal VOC 2012 is a dataset designed for object detection and semantic segmentation; we randomly selected 16k images to constitute 8k cover-secret image pairs as our training set and selected 5k images from the remaining part as our validation set. To further verify our model's performance on a large dataset, we did similar experiments on a subset of ImageNet.
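Returning to the mixed loss in (6)-(10) above, here is a minimal NumPy sketch. It computes the three SSIM components from global image statistics (real SSIM implementations use local sliding windows), sets l = m = n = 1, uses the usual small stability constants, and substitutes plain SSIM for the MS-SSIM term; all of these simplifications are ours, while α = 0.5 and β = 0.3 follow the settings reported below.

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Simplified SSIM from Eqs. (6)-(9), computed from global statistics with
    l = m = n = 1. x, y are float arrays scaled to [0, 1]."""
    mu_x, mu_y = x.mean(), y.mean()
    th_x, th_y = x.std(), y.std()
    th_xy = ((x - mu_x) * (y - mu_y)).mean()   # covariance
    c3 = c2 / 2.0                              # common convention for C3
    lum = (2 * mu_x * mu_y + c1) / (mu_x**2 + mu_y**2 + c1)
    con = (2 * th_x * th_y + c2) / (th_x**2 + th_y**2 + c2)
    stru = (th_xy + c3) / (th_x * th_y + c3)
    return lum * con * stru

def mixed_loss(cover, stego, msssim_fn=ssim_global, alpha=0.5, beta=0.3):
    """Cover/stego part of the mixed metric (10). `msssim_fn` should be a real
    MS-SSIM implementation; plain SSIM stands in for it here purely for brevity."""
    mse = np.mean((cover - stego) ** 2)
    return (alpha * (1 - ssim_global(cover, stego))
            + (1 - alpha) * (1 - msssim_fn(cover, stego))
            + beta * mse)
```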
Limited by the computing power, we only used the validation set of ImageNet as our training set which contains 50k images, these images constituted 25k cover-secret image pairs randomly. Then we selected 30k images from the test set of ImageNet as our validation set. We used SSIM [17], Peak Signal to Noise Ration (PSNR) as metrics to measure our model's performance. It is widely accepted that the PSNR doesn't correlate well with the human's perception of image quality [24], so we just used it as a reference. In addition, we designed a CNN-based steganalyzer specially to measure our model's security. All settings of our model on three datasets were the same. All parameters of our model were initialized by the Xavier initialization [23] and the initial learning rate was set as 1e-4 and was descended during training after 20 epochs. The batch size was set as 4 limited by the computing power, and we used Adam to optimize our basic model. After several attempts, we set α, β and γ of the loss function as 0.5, 0.3 and 0.85 respectively, which can trade off the quality of stego images and revealed secret images very well. Because our secret message is an image, so we don't need to reveal out the secret image completely. Certainly, you can set γ higher if you want better revealed secret images. The size of all images we used is 256 × 256, and the capacity of our model is 8bpp (it is equivalent to that we hide a pixel (8 bits) in a pixel). As shown in Tab. 4, we do several experiments with different loss functions on the LFW, the result demonstrates that our proposed mixed loss function is superior to others. Tab. 5 describes final results of our model on three datasets, we can see that the invisibility of our model get a little improvement, while our model's performance is superior to Atique's work intuitively as shown in Fig. 4, 5 and 6. Stego images generated by our model are complete similar to cor- Figure 4: Two examples on LFW. We can see that our stego images are almost same as cover images, while Atique's stego images are yellowing. By analyzing residuals between stego images and cover images, we can see that our stego images are more similar to cover images than Atique's results. responding cover images in semantic and color, this is not reflected by SSIM. On the training set, the average SSIM index between stego images generated by our model and their corresponding cover images is more than 0.985, and the average SSIM index between revealed images and their corresponding secret images is more than 0.97. In practice, we can use several cover images to conceal one secret image and choose the best stego image to transfer on the Internet. On the other hand, by analyzing the detail difference between cover images and stego images, we can see that our residual images are darker than Atiques, Figure 5: Two examples on Pascal VOC12. We can see that our stego images are almost same as cover images, while Atique's stego images are yellowing. By analyzing residuals between stego images and cover images, we can even distinguish the outline of secret images from Atique's residual images, while our residual images are blurrier. which means that our stego images are more similar to cover images and ISGAN has stronger invisibility. Additionally, from Atiques residual images we can even distinguish secret images outline, while our residual images are blurrier. So these residual images can also prove that our ISGAN is securer. When training ISGAN, we referred some tricks from previous works [29]. 
We flipped labels when training our basic model, replaced the ReLU activation function by the LeakyReLU function, optimized the generator by Adam, optimized the steganalyzer by SGD and applied the L2 normalization to inhibit overfitting. These tricks helped us to speed up training and get better results. To prove the improvement of the security produced by generative adversarial networks, we designed a new experiment. We used a well-trained basic model to generate 5000 stego images on LFW. These 5000 stego images and their corresponding cover images constituted a tiny dataset. We designed a new CNNbased model as a binary classifier to train on the tiny dataset. After training, we used this model to recognize stego images out from another tiny dataset which contains 2000 stego images generated by ISGAN and their corresponding cover images. Similar experi- Figure 6: Two examples on ImageNet. We can see that our stego images are almost same as cover images, while Atique's stego images are yellowing. Residual images between stego images and cover images show that our stego images are more similar to cover images than Atique's results. ments were done on the other two datasets. The results can be seen from Tab. 6. ISGAN strengthens indeed the security of our basic model. And with the training going, we can see that the security of ISGAN is improving slowly. Fig. 7 shows the difference between revealed images and their corresponding secret images. It shows that this kind of model cannot reveal out secret images completely. This is accepted as the information in the secret image is very redundant. However, it is unsuitable for tasks which need to reveal the secret Figure 7: Secret images' residual image on three datasets. We can see that there are differences between original secret images and our revealed secret images, which means that ISGAN is a lossy steganography. Discussion and Conclusion information out completely. As we described before, ISGAN can conceal a gray secret image into a color cover image with the same size excellently and generate stego images which are almost the same as cover images in semantic and color. By means of the adversarial training, the security is improved. In addition, experiment results demonstrate that our mixed loss function based on SSIM can achieve the state-of-art performance on the steganography task. In addition, our steganography is done in the spatial domain and stego images must be lossless, otherwise some parts of the secret image will be lost. There may be methods to address this problem. It doesn't matter if the stego image is sightly lossy since the secret image is inherently redundant. Some noise can be added into the stego images to simulate the image loss caused by the transmission during training. Then our decoder network should be modified to fit both the revealing process and the image enhancement process together. In our future work, we'll try to figure out this problem and improve our model's robustness.
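As a recap of the adversarial setup from the approach section, the sketch below shows one hedged training step of the basic model (generator) against the steganalyzer (discriminator). The encoder_decoder and steganalyzer modules are placeholders, binary cross-entropy is only one way to realize the value function (4), and the exact way γ enters the total generator loss is our assumption.

```python
import torch
from torch import nn

bce = nn.BCELoss()

def adversarial_step(encoder_decoder, steganalyzer, g_opt, d_opt,
                     cover, secret, mixed_loss_fn, gamma=0.85):
    """One hedged training step: the steganalyzer learns to separate cover from
    stego images, while the encoder-decoder learns to hide and reveal well and to
    fool the steganalyzer. `steganalyzer` is assumed to output a probability."""
    stego, revealed = encoder_decoder(cover, secret)

    # --- update the steganalyzer (discriminator) ---
    d_opt.zero_grad()
    d_real = steganalyzer(cover)
    d_fake = steganalyzer(stego.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    d_opt.step()

    # --- update the basic model (generator) ---
    g_opt.zero_grad()
    d_fake = steganalyzer(stego)
    g_loss = (mixed_loss_fn(cover, stego)
              + gamma * mixed_loss_fn(secret, revealed)   # gamma weighting is assumed
              + bce(d_fake, torch.ones_like(d_fake)))     # adversarial term, weight assumed to be 1
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```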
3,992
1807.08571
2884493853
Many recent works have introduced convolutional neural networks (CNNs) to steganalysis and now exceed conventional steganalysis algorithms, demonstrating the potential of deep learning in the information hiding domain. Several deep-learning-based image steganography methods also exist, but they still have problems with capacity, invisibility and security. In this paper, we propose a novel CNN architecture, named ISGAN, to conceal a secret gray image in a color cover image on the sender side and to extract the secret image exactly on the receiver side. Our work makes three contributions: (i) we improve invisibility by hiding the secret image only in the Y channel of the cover image; (ii) we introduce generative adversarial networks to strengthen security by minimizing the divergence between the empirical probability distributions of stego images and natural images; (iii) to align better with the human visual system, we construct a mixed loss function that is more appropriate for steganography, generating more realistic stego images and revealing better secret images. Experimental results show that ISGAN achieves state-of-the-art performance on the LFW, Pascal VOC2012 and ImageNet datasets.
Baluja @cite_22 designed a CNN model that conceals a color secret image in a color cover image, yielding state-of-the-art performance. @cite_4 proposed another encoder-decoder based model for the same steganography task (with gray secret images). This is a novel steganography approach that dispenses with hand-crafted algorithms: the network learns automatically how to merge the cover image and the secret image. However, the stego images generated by these models are distorted in color. As shown in Fig. , Atique's stego images appear yellowish compared with the corresponding cover images. Moreover, because of the large capacity, their stego images are easily recognized by a well-trained CNN-based steganalyzer @cite_22 . Inspired by the works of Baluja and Atique, we address each of these shortcomings and obtain ISGAN.
{ "abstract": [ "All the existing image steganography methods use manually crafted features to hide binary payloads into cover images. This leads to small payload capacity and image distortion. Here we propose a convolutional neural network based encoder-decoder architecture for embedding of images as payload. To this end, we make following three major contributions: (i) we propose a deep learning based generic encoder-decoder architecture for image steganography; (ii) we introduce a new loss function that ensures joint end-to-end training of encoder-decoder networks; (iii) we perform extensive empirical evaluation of proposed architecture on a range of challenging publicly available datasets (MNIST, CIFAR10, PASCAL-VOC12, ImageNet, LFW) and report state-of-the-art payload capacity at high PSNR and SSIM values.", "Steganography is the practice of concealing a secret message within another, ordinary, message. Commonly, steganography is used to unobtrusively hide a small message within the noisy regions of a larger image. In this study, we attempt to place a full size color image within another image of the same size. Deep neural networks are simultaneously trained to create the hiding and revealing processes and are designed to specifically work as a pair. The system is trained on images drawn randomly from the ImageNet database, and works well on natural images from a wide variety of sources. Beyond demonstrating the successful application of deep learning to hiding images, we carefully examine how the result is achieved and explore extensions. Unlike many popular steganographic methods that encode the secret message within the least significant bits of the carrier image, our approach compresses and distributes the secret image's representation across all of the available bits." ], "cite_N": [ "@cite_4", "@cite_22" ], "mid": [ "2768430110", "2771036112" ] }
Invisible Steganography via Generative Adversarial Networks *
Image steganography is the main content of information hiding. The sender conceal a secret message into a cover image, then get the container image called stego, and finish the secret message's transmission on the public channel by transferring the stego image. Then the receiver part of the transmission can reveal the secret message out. Steganalysis is an attack to the steganography algorithm. The listener on the public channel intercept the image and analyze whether the image contains secret information. Since their proposed, steganography and steganalysis promote each other's progress. Image steganography can be used into the transmission of secret information, watermark, copyright certification and many other applications. In general, we can measure a steganography algorithm by capacity, invisibility and security. The capacity is measured by bits-per-pixel (bpp) which means the average number of bits concealed into each pixel of the cover image. With the capacity becomes larger, the security and the invisibility become worse. The invisibility is measured by the similarity of the stego image and its corresponding cover image. The invisibility becomes better as the similarity going higher. The security is measured by whether the stego image can be recognized out from natural images by steganalysis algorithms. Correspondingly, there are two focused challenges constraining the steganography performance. The amount of hidden message alters the quality of stego images. The more message in it, the easier the stego image can be checked out. Another keypoint is the cover image itself. Concealing message into noisy, rich semantic region of the cover image yields less detectable perturbations than hiding into smooth region. Nowadays, traditional steganography algorithms, such as S-UNIWARD [1], J-UNIWARD [1], conceal the secret information into cover images' spatial domain or transform domains by hand-crafted embedding algorithms successfully and get excellent invisibility and security. With the rise of deep learning in recent years, deep learning has become the hottest research method in computer vision and has been introduced into information hiding domain. Volkhonskiy et al. [7] proposed a steganography enhancement algorithm based on GAN, they concealed secret message into generated images with conventional algorithms and enhanced the security. But their generated images are warping in semantic, which will be drawn attention easily. Tang et al. [9] proposed an automatic steganographic distortion learning framework, their generator can find pixels which are suitable for embedding and conceal message into them, their discriminator is trained as a steganalyzer. With the adversarial training, the model can finish the steganography process. But this kind of method has low capacity and is less secure than conventional algorithms. Baluja [12] proposed a convolutional neural network based on the structure of encoder-decoder. The encoder network can conceal a secret image into a same size cover image successfully and the decoder network can reveal out the secret image completely. This method is different from other deep learning based models and conventional steganography algorithms, it has large capacity and strong invisibility. But stego images generated by this model is distorted in color and its security is bad. Inspired by Baluja's work, we proposed an invisible steganography via generative adversarial network named ISGAN. 
Our model can conceal a gray secret image into a color cover image with the same size, and our model has large capacity, strong invisibility and high security. Comparing with previous works, the main contributions of our work are as below: 1. In order to suppress the distortion of stego im-ages, we select a new steganography position. We only embed and extract secret information in the Y channel of the cover image. The color information is all in Cr and Cb channels of the cover image and can be saved completely into stego images, so the invisibility is strengthened. 2. From the aspect of mathematics, the difference between the empirical probability distributions of stego images and natural images can be measured by the divergence. So we introduce the generative adverasial networks to increase the security throughout minimizing the divergence. In addition, we introduce several architectures from classic computer vision tasks to fuse the cover image and the secret image together better and get faster training speed. 3. In order to fit the human visual system (HVS) better, we introduce the structure similarity index (SSIM) [17] and its variant to construct a mixed loss function. The mixed loss function helps to generate more realistic stego images and reveal out better secret images. This point is never considered by any previous deep-learning-based works in information hiding domain. The rest of the paper is organized as follows. Sec. 2 discusses related works, Sec. 3 introduces architecture details of ISGAN and the mixed loss function. Sec. 4 gives details of different datasets, parameter settings, our experiment processes and results. Finally, Sec. 5 concludes the paper with relevant discussion. Our Approach The complete architecture of our model is shown in Fig. 1. In this section, the new steganography position is introduced firstly. Then we discuss about our design considerations on the basic model and show specfic details of the encoder and the decoder. Thirdly, we present why the generative adversarial networks can improve the security and details of the discriminator. Finally, we explain the motivation to construct the mixed loss function. New Steganography Position Works of Baluja [12] and Atique [13] have implemented the entire hiding and revealing procedure, while their stego images' color is distorted as shown in Fig. 4. To against this weakness, we select a new Figure 1: The overall architecture. The encoder network conceals a gray secret image into the Y channel of a same size cover image, then the Y channel output by the encoder net and the U/V channels constitute the stego image. The decoder network reveals the secret image from the Y channel of the stego image. The steganalyzer network tries to distinguish stego images from cover images thus improving the overall architecture's security. steganography position. As shown in Fig. 2, a color image in the RGB color space can be divided into R, G and B channels, and each channel contains both semantic information and color information. When converted to the YCrCb color space, a color image can be divided into Y, Cr and Cb channels. The Y channel only contains part of semantic information, luminance information and no color information, Cr and Cb channels contain part of semantic information and all color information. To guarantee no color distortion, we conceal the secret image only in the Y channel and all color information are saved into the stego image. 
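To make the channel handling above concrete, the following is a minimal sketch (not the authors' code) of the hiding-side plumbing, assuming the standard ITU-R BT.601 full-range RGB/YCrCb conversion; the exact conversion constants used in the paper are not stated, and `encoder` is a hypothetical stand-in for the encoder network.

```python
import numpy as np

def rgb_to_ycrcb(rgb):
    """rgb: float array in [0, 1], shape (H, W, 3) -> (Y, Cr, Cb), each (H, W)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cr = 0.5 + 0.5 * (r - y) / (1.0 - 0.299)      # scaled so Cr, Cb stay in [0, 1]
    cb = 0.5 + 0.5 * (b - y) / (1.0 - 0.114)
    return y, cr, cb

def ycrcb_to_rgb(y, cr, cb):
    r = y + (cr - 0.5) * (1.0 - 0.299) * 2.0
    b = y + (cb - 0.5) * (1.0 - 0.114) * 2.0
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

def make_stego(cover_rgb, secret_gray, encoder):
    """Hide secret_gray (H, W) in the Y channel of cover_rgb (H, W, 3).
    `encoder` is a hypothetical function mapping the stacked (H, W, 2) input
    [cover Y, secret gray] to a new Y channel in [0, 1]."""
    y, cr, cb = rgb_to_ycrcb(cover_rgb)
    stego_y = encoder(np.stack([y, secret_gray], axis=-1))  # only Y is modified
    return ycrcb_to_rgb(stego_y, cr, cb)                    # color planes untouched
```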
In addition, we select gray images as our secret images, which decreases the amount of secret information by 2/3. When embedding, the color cover image is converted to the YCrCb color space; its Y channel and the gray secret image are then concatenated and fed into the encoder network. After hiding, the encoder's output together with the cover image's Cr and Cb channels constitute the color stego image. When revealing, we obtain the revealed secret image by decoding the Y channel of the stego image. The transformation between the RGB and YCrCb color spaces is simply a weighted combination of the three channels and does not affect backpropagation, so it can be performed within the end-to-end hiding and revealing process. The resulting encoder-decoder architecture can be trained end-to-end and is called the basic model. Basic Model Conventional image steganography is usually designed in a heuristic way. Generally, these algorithms decide whether to conceal information in a pixel of the cover image and how to conceal one bit of information in that pixel. The key to classic steganography methods is therefore a well hand-crafted embedding algorithm, which requires considerable expertise. A better solution is to mix the secret image into the cover image well without such expertise, and deep learning, represented by convolutional neural networks, is well suited to achieve exactly this. What we need to do is design the structure of the encoder and the decoder, as described below. Based on this starting point, we introduce the inception module [21] into our encoder network. The inception module performs excellently on the ImageNet classification task; it contains several convolution kernels with different kernel sizes, as shown in Fig. 3, and such a structure can fuse feature maps with different receptive field sizes very well. (Figure 2: Three images in the first column are original RGB color images. The three images to the right in the first row are the R, G and B channels of the original image saved as gray images; all three channels carry both luminance and color information. The three images to the right in the second and third rows are the Y, Cr and Cb channels respectively, saved as gray images (the third row is from Wikipedia). We can see that the Y channel carries only luminance and semantic information, while the chrominance and chroma information is entirely in the Cr and Cb channels.) As shown for both residual networks [20] and batch normalization [19], a model with these modifications can reach the same performance with significantly fewer training steps than its original version, so we introduce both the residual module and batch normalization into the encoder network to speed up training. The detailed structure of the encoder is described in Tab. 1. When using MSE as the metric on the LFW dataset, our model reaches after 30 training epochs the performance that Atique's model achieves after 50 epochs. On the other hand, we need a structure that reveals the secret image automatically, so we use a fully convolutional network as the decoder network. Feature maps output by each convolutional … (Figure 3: The inception module with residual shortcut we use in our work.)
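As an illustration of the encoder building block described above, here is a hedged PyTorch sketch of an inception module with a residual shortcut and batch normalization; the branch kernel sizes and channel splits are assumptions, since the actual per-layer configuration is given in the paper's Tab. 1 (not reproduced here).

```python
import torch
import torch.nn as nn

class InceptionResBlock(nn.Module):
    """Sketch of an inception module with residual shortcut (cf. Fig. 3).
    Branch kernel sizes (1x1, 3x3, 5x5, pool) and channel splits are assumptions."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        b = out_ch // 4
        self.branch1 = nn.Sequential(nn.Conv2d(in_ch, b, 1),
                                     nn.BatchNorm2d(b), nn.ReLU(inplace=True))
        self.branch3 = nn.Sequential(nn.Conv2d(in_ch, b, 3, padding=1),
                                     nn.BatchNorm2d(b), nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(nn.Conv2d(in_ch, b, 5, padding=2),
                                     nn.BatchNorm2d(b), nn.ReLU(inplace=True))
        self.branchp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                     nn.Conv2d(in_ch, out_ch - 3 * b, 1),
                                     nn.BatchNorm2d(out_ch - 3 * b), nn.ReLU(inplace=True))
        # 1x1 projection so the residual shortcut matches the concatenated channels
        self.project = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.bn_out = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        out = torch.cat([self.branch1(x), self.branch3(x),
                         self.branch5(x), self.branchp(x)], dim=1)
        return torch.relu(self.bn_out(out) + self.project(x))
```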
Our steganalyzer The works of Baluja and Atique did not consider the security problem, yet security is the key point in steganography. In our work, we want to take steganalysis into account automatically while training the basic model. Denoting by C the set of all cover images c, the selection of cover images from C can be described by a random variable c on C with probability distribution function (pdf) P. Assuming the cover images are selected with pdf P and each is embedded with a secret image chosen from its corresponding set, the set of all stego images is again a random variable s on C with pdf Q. The statistical detectability can be measured by the Kullback-Leibler divergence [15] shown in (1) or the Jensen-Shannon divergence shown in (2): $KL(P\|Q) = \sum_{c \in C} P(c) \log \frac{P(c)}{Q(c)}$ (1), $JS(P\|Q) = \frac{1}{2} KL\left(P \,\big\|\, \frac{P+Q}{2}\right) + \frac{1}{2} KL\left(Q \,\big\|\, \frac{P+Q}{2}\right)$ (2). The KL divergence and the JS divergence are fundamental quantities because they provide bounds on the best possible steganalyzer one can build [16]. The key point for us is therefore how to decrease the divergence. Generative adversarial networks (GANs) are designed in theory to achieve exactly this: the objective of the original GAN is to minimize the JS divergence (2), and a variant of the GAN minimizes the KL divergence (1). The generator network G, whose input is a noise vector z, tries to transform the input into a data sample similar to the real samples. The discriminator network D, whose input is either real data or fake data generated by the generator, tries to tell the real and fake samples apart. D and G play a two-player minimax game with the value function (3): $\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$ (3). We now introduce generative adversarial networks into our architecture. The basic model performs the entire hiding and revealing process, so we use the basic model as the generator and introduce a CNN-based steganalysis model as the discriminator, i.e. the steganalyzer. The value function in our work thus becomes (4), where D represents the steganalyzer network, G represents the basic model, and x, s and G(x, s) represent the cover image, the secret image and the generated stego image respectively: $\min_G \max_D V(D,G) = \mathbb{E}_{x \sim P(x)}[\log D(x)] + \mathbb{E}_{x \sim P(x), s \sim P(s)}[\log(1 - D(G(x, s)))]$ (4). Xu et al. [4] studied the design of CNN structures specific to image steganalysis and proposed XuNet. XuNet embeds an absolute activation (ABS) in the first convolutional layer to improve statistical modeling, applies the TanH activation function in the early stages of the network to prevent overfitting, and adds batch normalization (BN) before each nonlinear activation layer. This well-designed CNN provides excellent detection performance in steganalysis, so we design our steganalyzer based on XuNet and adapt it to our stego images. In addition, we replace the global average pooling layer with a spatial pyramid pooling (SPP) module. The spatial pyramid pooling (SPP) module [22] and its variants … Mixed Loss Function In previous works, Baluja [12] used the mean square error (MSE) between the pixels of the original images and the pixels of the reconstructed images as the metric (5), where c and s are the cover and secret images respectively, c' and s' are the stego and revealed secret images respectively, and β weights their reconstruction errors. In particular, we should note that the error term $\|c - c'\|$ does not apply to the weights of the decoder network.
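The adversarial setup of (4) can be sketched as an alternating update of the steganalyzer and the basic model. The sketch below is an assumed PyTorch rendering, not the authors' implementation: the way the reconstruction terms and the adversarial term are combined (and their weights) is an assumption, the steganalyzer is assumed to end in a sigmoid, and for simplicity it is shown operating on the stego Y channel only.

```python
import torch
import torch.nn.functional as F

def adversarial_step(cover_y, secret, encoder, decoder, steganalyzer,
                     opt_g, opt_d, recon_loss, gamma=0.85):
    """One alternating optimization step for the minimax game in (4).
    encoder/decoder form the basic model (generator); recon_loss is the image
    reconstruction metric (e.g. the mixed loss described below)."""
    # --- update the steganalyzer D: cover images -> label 1, stego -> label 0 ---
    stego_y = encoder(torch.cat([cover_y, secret], dim=1))
    d_real = steganalyzer(cover_y)
    d_fake = steganalyzer(stego_y.detach())            # do not backprop into G here
    loss_d = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- update the generator G: fool D and reconstruct cover/secret well ---
    revealed = decoder(stego_y)
    d_fake = steganalyzer(stego_y)
    loss_adv = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    loss_g = recon_loss(stego_y, cover_y) + gamma * recon_loss(revealed, secret) + loss_adv
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```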
On the other hand, both the encoder network and the decoder network receive the error signal $\beta\|s - s'\|$ for reconstructing the secret image: $L(c, c', s, s') = \|c - c'\| + \beta \|s - s'\|$ (5). However, the MSE only penalizes large errors between corresponding pixels of the two images and disregards their underlying structure. The human visual system (HVS) is more sensitive to luminance and color variations in texture-less regions. Zhao et al. [14] analyzed the importance of perceptually-motivated losses when the result of an image restoration task is evaluated by a human observer; they compared the performance of several losses and proposed a novel, differentiable error function. Inspired by their work, we introduce the structural similarity index (SSIM) [17] and its variant, the multi-scale structural similarity index (MS-SSIM) [18], into our metric. The SSIM index separates the task of similarity measurement into three comparisons: luminance, contrast and structure, measured by (6), (7) and (8) respectively: $L(x, y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}$ (6), $C(x, y) = \frac{2\theta_x \theta_y + C_2}{\theta_x^2 + \theta_y^2 + C_2}$ (7), $S(x, y) = \frac{\theta_{xy} + C_3}{\theta_x \theta_y + C_3}$ (8), where $\mu_x$ and $\mu_y$ are the pixel means of images x and y, $\theta_x$ and $\theta_y$ are their pixel standard deviations, and $\theta_{xy}$ is their covariance; $C_1$, $C_2$ and $C_3$ are constants included to avoid instability when the denominators are close to zero. The overall SSIM is given by (9): $SSIM(x, y) = [L(x, y)]^l \cdot [C(x, y)]^m \cdot [S(x, y)]^n$ (9), where $l > 0$, $m > 0$, $n > 0$ adjust the relative importance of the three components; a more detailed introduction to SSIM can be found in [17]. The SSIM index takes values in [0, 1], and the higher the index, the more similar the two images, so we use $1 - SSIM(x, y)$ in our loss function to measure the difference between two images. MS-SSIM [18] is an enhanced variant of the SSIM index, so we also introduce it into our loss function (we write MSSIM in formulas to denote MS-SSIM). Considering pixel value differences and structural differences simultaneously, we combine MSE, SSIM and MS-SSIM; the metric for the basic steganography network in our framework is $L(c, c') = \alpha(1 - SSIM(c, c')) + (1 - \alpha)(1 - MSSIM(c, c')) + \beta\, MSE(c, c')$, where α and β are hyperparameters weighing the influence of the three metrics and γ is a hyperparameter that trades off the quality of the stego images against that of the revealed secret images. Experimental results in Sec. 4 compare the performance of different loss functions. Experiments and Results In this section, we introduce our experimental details and results. The datasets we use are LFW [31], Pascal VOC 2012 [32] and ImageNet [30]. Labeled Faces in the Wild (LFW) contains more than 13,000 face images of 1,680 people collected from the web. We selected 10k images from LFW to form 5k cover-secret image pairs as our training set, and used the rest of LFW as our validation set. Pascal VOC 2012 is a dataset designed for object detection and semantic segmentation; we randomly selected 16k images to form 8k cover-secret image pairs as our training set and selected 5k images from the remaining part as our validation set. To further verify our model's performance on a large dataset, we ran similar experiments on a subset of ImageNet.
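A compact sketch of the mixed loss follows. It is not the authors' code: for brevity SSIM is computed with a uniform window rather than the usual Gaussian window, and MS-SSIM is approximated by averaging SSIM over downscaled copies, so it only illustrates the structure of the loss in the formula above.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, C1=0.01 ** 2, C2=0.03 ** 2, win=11):
    """Simplified SSIM with a uniform win x win window (Gaussian window omitted)."""
    pad = win // 2
    mu_x, mu_y = (F.avg_pool2d(t, win, 1, pad) for t in (x, y))
    var_x = F.avg_pool2d(x * x, win, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, pad) - mu_y ** 2
    cov   = F.avg_pool2d(x * y, win, 1, pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * cov + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return (num / den).mean()

def ms_ssim(x, y, scales=3):
    """Crude multi-scale approximation: average SSIM over repeatedly downscaled copies."""
    vals = []
    for _ in range(scales):
        vals.append(ssim(x, y))
        x, y = F.avg_pool2d(x, 2), F.avg_pool2d(y, 2)
    return torch.stack(vals).mean()

def mixed_loss(pred, target, alpha=0.5, beta=0.3):
    """alpha*(1-SSIM) + (1-alpha)*(1-MS-SSIM) + beta*MSE, as in the formula above."""
    return alpha * (1 - ssim(pred, target)) + \
           (1 - alpha) * (1 - ms_ssim(pred, target)) + \
           beta * F.mse_loss(pred, target)
```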
Limited by computing power, we only used the validation set of ImageNet as our training set; it contains 50k images, which randomly formed 25k cover-secret image pairs. We then selected 30k images from the test set of ImageNet as our validation set. We used SSIM [17] and Peak Signal to Noise Ratio (PSNR) as metrics to measure our model's performance. It is widely accepted that PSNR does not correlate well with human perception of image quality [24], so we use it only as a reference. In addition, we designed a CNN-based steganalyzer specifically to measure our model's security. All settings of our model were the same on the three datasets. All parameters were initialized with Xavier initialization [23], and the initial learning rate was set to 1e-4 and decayed during training after 20 epochs. The batch size was set to 4, limited by computing power, and we used Adam to optimize our basic model. After several attempts, we set α, β and γ of the loss function to 0.5, 0.3 and 0.85 respectively, which trades off the quality of stego images and revealed secret images very well. Because our secret message is an image, we do not need to reveal the secret image perfectly; γ can be set higher if better revealed secret images are desired. The size of all images we used is 256 × 256, and the capacity of our model is 8 bpp (equivalent to hiding one pixel, i.e. 8 bits, in each pixel). As shown in Tab. 4, we ran several experiments with different loss functions on LFW; the results demonstrate that our proposed mixed loss function is superior to the others. Tab. 5 reports the final results of our model on the three datasets: the invisibility of our model improves slightly in terms of these metrics, while visually our model's performance is clearly superior to Atique's work, as shown in Fig. 4, 5 and 6. (Figure 4: Two examples on LFW. Our stego images are almost the same as the cover images, while Atique's stego images are yellowish. By analyzing the residuals between stego images and cover images, we can see that our stego images are more similar to the cover images than Atique's results.) Stego images generated by our model are completely similar to the corresponding cover images in semantics and color, which is not reflected by SSIM. On the training set, the average SSIM index between stego images generated by our model and their corresponding cover images is above 0.985, and the average SSIM index between revealed images and their corresponding secret images is above 0.97. In practice, we can use several cover images to conceal one secret image and choose the best stego image to transfer over the Internet. On the other hand, by analyzing the detailed differences between cover images and stego images, we can see that our residual images are darker than Atique's, which means that our stego images are more similar to the cover images and ISGAN has stronger invisibility. (Figure 5: Two examples on Pascal VOC12. Our stego images are almost the same as the cover images, while Atique's stego images are yellowish. By analyzing the residuals between stego images and cover images, we can even distinguish the outline of the secret images from Atique's residual images, while our residual images are blurrier.) Additionally, from Atique's residual images we can even distinguish the secret images' outlines, while our residual images are blurrier, so these residual images also show that ISGAN is more secure. When training ISGAN, we borrowed some tricks from previous works [29].
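For reference, the PSNR metric used above follows the standard definition; a minimal helper (with images as float arrays in [0, 1]) could look as follows, with SSIM computed as in the earlier sketch.

```python
import numpy as np

def psnr(img_a, img_b, max_val=1.0):
    """Standard PSNR in dB between two images with values in [0, max_val]."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```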
We flipped labels when training our basic model, replaced the ReLU activation function with the LeakyReLU function, optimized the generator with Adam, optimized the steganalyzer with SGD, and applied L2 regularization to inhibit overfitting. These tricks helped us speed up training and obtain better results. To demonstrate the improvement in security produced by the generative adversarial training, we designed a new experiment. We used a well-trained basic model to generate 5000 stego images on LFW; these 5000 stego images and their corresponding cover images constituted a small dataset. We designed a new CNN-based model as a binary classifier and trained it on this dataset. After training, we used this model to recognize stego images in another small dataset containing 2000 stego images generated by ISGAN and their corresponding cover images. Similar experiments were done on the other two datasets. (Figure 6: Two examples on ImageNet. Our stego images are almost the same as the cover images, while Atique's stego images are yellowish. The residual images between stego images and cover images show that our stego images are more similar to the cover images than Atique's results.) The results can be seen in Tab. 6: ISGAN indeed strengthens the security of our basic model, and as training proceeds, the security of ISGAN keeps improving slowly. Fig. 7 shows the difference between revealed images and their corresponding secret images. It shows that this kind of model cannot reveal secret images perfectly. This is acceptable because the information in the secret image is highly redundant; however, it is unsuitable for tasks that need to recover the secret information exactly. (Figure 7: Secret images' residual images on the three datasets. There are differences between the original secret images and our revealed secret images, which means that ISGAN is a lossy steganography.) Discussion and Conclusion As described above, ISGAN can conceal a gray secret image in a color cover image of the same size and generate stego images that are almost the same as the cover images in semantics and color. By means of adversarial training, the security is improved. In addition, the experimental results demonstrate that our mixed loss function based on SSIM achieves state-of-the-art performance on the steganography task. Furthermore, our steganography operates in the spatial domain, and stego images must be transmitted losslessly, otherwise parts of the secret image will be lost. There may be ways to address this problem: since the secret image is inherently redundant, it does not matter if the stego image is slightly lossy, so some noise could be added to the stego images during training to simulate the image loss caused by transmission, and our decoder network could be modified to handle both the revealing process and the image enhancement process together. In future work, we will try to solve this problem and improve our model's robustness.
3,992
1807.07696
2884959301
We introduce a new system for automatic image content removal and inpainting. Unlike traditional inpainting algorithms, which require advance knowledge of the region to be filled in, our system automatically detects the area to be removed and infilled. Region segmentation and inpainting are performed jointly in a single pass. In this way, potential segmentation errors are more naturally alleviated by the inpainting module. The system is implemented as an encoder-decoder architecture, with two decoder branches, one tasked with segmentation of the foreground region, the other with inpainting. The encoder and the two decoder branches are linked via neglect nodes, which guide the inpainting process in selecting which areas need reconstruction. The whole model is trained using a conditional GAN strategy. Comparative experiments show that our algorithm outperforms state-of-the-art inpainting techniques (which, unlike our system, do not segment the input image and thus must be aided by an external segmentation module.)
Pixel-level semantic image segmentation has received considerable attention over the past few years. Most recently published techniques are based on fully convolutional networks (FCN) @cite_22 , possibly combined with fully connected CRF @cite_10 @cite_13 @cite_28 @cite_33 . The general architecture of a FCN includes a sequence of convolution and downsampling layers (encoder), which extract multi-scale features, followed by a sequence of deconvolution layers (decoder), which produce a high-resolution segmentation (or "prediction"). Skip layers are often added, providing shortcut links from an encoder layer to its corresponding decoder layer. The role of skip layers is to provide well-localized information to a decoder layer, in addition to the semantic-rich but poorly resolved information from the prior decoder layer. In this way, skip layers enable good pixel-level localization while facilitating gradient flow during training. Similar architectures have been used in various applications such as text segmentation @cite_9 @cite_34 @cite_33 @cite_29 , edge detection @cite_12 , face segmentation @cite_25 , and scene parsing @cite_18 . Although these algorithms could be used for the foreground segmentation component of a content removal system, unavoidable inaccuracies are liable to dramatically decrease the quality of the recovered background region.
{ "abstract": [ "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.", "", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. 
Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy.", "We introduce an algorithm for word-level text spotting that is able to accurately and reliably determine the bounding regions of individual words of text \"in the wild\". Our system is formed by the cascade of two convolutional neural networks. The first network is fully convolutional and is in charge of detecting areas containing text. This results in a very reliable but possibly inaccurate segmentation of the input image. The second network (inspired by the popular YOLO architecture) analyzes each segment produced in the first stage, and predicts oriented rectangular regions containing individual words. No post-processing (e.g. text line grouping) is necessary. With execution time of 450 ms for a 1000-by-560 image on a Titan X GPU, our system achieves the highest score to date among published algorithms on the ICDAR 2015 Incidental Scene Text dataset benchmark.", "Recently, scene text detection has become an active research topic in computer vision and document analysis, because of its great importance and significant challenge. However, vast majority of the existing methods detect text within local regions, typically through extracting character, word or line level candidates followed by candidate aggregation and false positive elimination, which potentially exclude the effect of wide-scope and long-range contextual cues in the scene. To take full advantage of the rich information available in the whole natural image, we propose to localize text in a holistic manner, by casting scene text detection as a semantic segmentation problem. The proposed algorithm directly runs on full images and produces global, pixel-wise prediction maps, in which detections are subsequently formed. To better make use of the properties of text, three types of information regarding text region, individual characters and their relationship are estimated, with a single Fully Convolutional Network (FCN) model. With such predictions of text properties, the proposed algorithm can simultaneously handle horizontal, multi-oriented and curved text in real-world natural images. The experiments on standard benchmarks, including ICDAR 2013, ICDAR 2015 and MSRA-TD500, demonstrate that the proposed algorithm substantially outperforms previous state-of-the-art approaches. 
Moreover, we report the first baseline result on the recently-released, large-scale dataset COCO-Text.", "In this paper, we propose a novel approach for text detec- tion in natural images. Both local and global cues are taken into account for localizing text lines in a coarse-to-fine pro- cedure. First, a Fully Convolutional Network (FCN) model is trained to predict the salient map of text regions in a holistic manner. Then, text line hypotheses are estimated by combining the salient map and character components. Fi- nally, another FCN classifier is used to predict the centroid of each character, in order to remove the false hypotheses. The framework is general for handling text in multiple ori- entations, languages and fonts. The proposed method con- sistently achieves the state-of-the-art performance on three text detection benchmarks: MSRA-TD500, ICDAR2015 and ICDAR2013.", "", "Selfies have become commonplace. More and more people take pictures of themselves, and enjoy enhancing these pictures using a variety of image processing techniques. One specific functionality of interest is automatic skin and hair segmentation, as this allows for processing one's skin and hair separately. Traditional approaches require user input in the form of fully specified trimaps, or at least of “scribbles” indicating foreground and background areas, with high-quality masks then generated via matting. Manual input, however, can be difficult or tedious, especially on a smartphone's small screen. In this paper, we propose the use of fully convolutional networks (FCN) and fully-connected CRF to perform pixel-level semantic segmentation into skin, hair and background. The trimap thus generated is given as input to a standard matting algorithm, resulting in accurate skin and hair alpha masks. Our method achieves state-of-the-art performance on the LFW Parts dataset [1]. The effectiveness of our method is also demonstrated with a specific application case.", "" ], "cite_N": [ "@cite_18", "@cite_22", "@cite_33", "@cite_28", "@cite_10", "@cite_9", "@cite_29", "@cite_34", "@cite_13", "@cite_25", "@cite_12" ], "mid": [ "2952596663", "2952632681", "", "2412782625", "2161236525", "2950274039", "2464918637", "2952365771", "", "2605339450", "" ] }
Automatic Semantic Content Removal by Learning to Neglect
Automatic removal of specific content in an image is a task of practical interest, as well as of intellectual appeal. There are many situations in which a part of an image needs to be erased and replaced. This may include text (whether present in the scene or superimposed on the image), which may have to be removed, for example to protect personal information; people, such as undesired passersby in the scene; or other objects that, for any reason, one may want to wipe off the picture. While these operations are typically performed manually by skilled Photoshop editors, substantial cost reduction could be achieved by automating the workflow. Automatic content removal, though, is difficult, and an as yet unsolved problem. Removing content and inpainting the image in a way that looks "natural" entails the ability to capture, represent, and synthesize high-level ("semantic") image content. This is particularly true of infilling large image areas, an operation that has been accomplished with some success only recently [14,27,29,30]. Image inpainting algorithms described in the literature normally require that a binary "mask" indicating the location of the area to be synthesized be provided, typically via manual input. In contrast, an automatic content removal system must be able to accomplish two tasks. First, the pattern or object of interest must be segmented out, creating a binary mask; then, the image content within the mask must be synthesized. The work described in this paper stems from the realization that, for optimal results, these two tasks (segmentation and inpainting) should not be carried out independently. In fact, only in a few specific situations can the portion of the image to be removed be represented by a binary mask. Edge smoothing effects are almost always present, either due to the camera's point spread function, or due to blending, if the pattern (e.g. text) is superimposed on the image. Although one could potentially recover an alpha mask for the foreground content to be removed, we believe that a more appropriate strategy is to simultaneously detect the foreground and synthesize the background image. By doing so, we do not need to resort to hand-made tricks, such as expanding the binary mask to account for inaccurate localization. Our algorithm for content removal and inpainting relies on conditional generative adversarial networks (cGANs) [8], which have become the tool of choice for image synthesis. Our network architecture is based on an encoder-decoder scheme with skip layers (see Fig. 1). Rather than a single decoder branch, however, our network contains two parallel and interconnected branches: one (dec-seg) designed to extract the foreground area to be removed; the other (dec-fill) tasked with synthesizing the missing background. The dec-seg branch interacts with the dec-fill branch via multiple neglect nodes (see Fig. 2). The concept of neglect nodes is germane (but in reverse) to that of the mixing nodes normally found in attention networks [22,26]. Mixing nodes highlight a portion of the image that needs to be attended to, or, in our case, neglected. Neglect nodes appear in all layers of the architecture; they ensure that the dec-fill branch is aware of which portions of the image are to be synthesized, without ever committing to a binary mask.
A remarkable feature of the proposed system is that the multiple components of the network (encoder and two decoder branches, along with the neglect nodes) are all trained at the same time. Training seeks to minimize a global cost function that includes a conditional GAN component, as well as L 1 distance components for both foreground segmentation and background image. This optimization requires ground-truth availability of foreground segmentation (the component to be removed), background images (the original image without the foreground), and composite images (foreground over background). By jointly optimizing the multiple network components (rather than, say, optimizing for the dec-seg independently on foreground segmentation, then using it to condition optimization of dec-fill via the neglect nodes), we are able to accurately reconstruct the background inpainted image. The algorithm also produces the foreground segmentation as a byproduct. We should emphasize that this foreground mask is not used by the dec-fill synthesizing layer, which only communicates with the dec-seg layer via the neglect nodes. To summarize, this paper has two main contributions. First, we present the first (to the best of our knowledge) truly automatic semantic content removal system with promising results on realistic images. The proposed algorithm is able to recover high-quality background without any knowledge of the foreground segmentation mask. Unlike most previous GAN-based inpainting methods that assume a rectangular foreground region to be removed [14,27,29], our system produces good result with any foreground shape, even when it extends to the image boundary. Second, we introduce a novel encoder-decoder network structure with two parallel and interconnected branches (dec-seg and dec-fill), linked at multiple levels by mixing (neglect) nodes that determine which information from the encoder should be used for synthesis, and which should be neglected. Foreground region segmentation and background inpainting is produced in one single forward pass. Proposed Algorithm Our system for automatic semantic content removal comprises an encoder-decoder network with two decoder branches, tasked with predicting a segmentation mask (dec-seg) and a background image (dec-fill) in a single forward pass. Neglect nodes (an original feature of this architecture) link the two decoder branches and the encoder at various levels. The network is trained along with a discriminator network in an adversarial scheme, in order to foster realistic background image synthesis. We assume in this work that the foreground region to be removed occupies a large portion of the image (or, equivalently, that the image is cropped such that the foreground region takes most of the cropped region). In practice, this can be obtained using a standard object detector. Note that high accuracy of the (rectangular) detector is not required. In our experiments, the margin between the contour of the foreground region and the edges of the image was let to vary between 0 (foreground touching the image edge) to half the size of the foreground mask. Loss Function Following the terminology of GANs, the output z p of dec-seg and y p of dec-fill for an input image x are taken to represent the output of a generator G(x). The generator is trained with a dual task: ensuring that z p and y p are similar to the ground-truth (z g and y g ), and deceiving the discriminator D(x, y), which is concurrently trained to separate y p from y g given x. 
The cost function $L_G$ for the generator combines the conditional GAN loss with a linear combination of the $L_1$ distances between prediction and ground-truth for the segmentation and the inpainted background: $L_G = \mathbb{E}[\log(1 - D(x, y_p))] + \lambda_f \mathbb{E}[\|y_g - y_p\|_1] + \lambda_s \mathbb{E}[\|z_g - z_p\|_1]$ (1). The discriminator D is trained to minimize the following discriminator loss $L_D$: $L_D = -\left(\mathbb{E}[\log D(x, y_g)] + \mathbb{E}[\log(1 - D(x, y_p))]\right)$ (2). Network Architecture Generator: Segmentation and background infilling are generated in a single pass by an encoder-decoder network architecture, with multiple encoder layers generating multi-scale features at decreasing resolution, and two parallel decoder branches (dec-seg and dec-fill) producing high-resolution output starting from low-resolution features and higher-resolution data from skip layers. Each encoder stage consists of a convolution layer with a 4×4 kernel and stride 2, followed by instance normalization [21] and ReLU. Each stage of dec-seg contains a deconvolution (transpose convolution) layer (4×4 kernel, stride 2), followed by instance normalization and ReLU. dec-fill replaces each deconvolution layer with a nearest-neighbor upsampling layer followed by a convolution layer (3×3 kernel, stride 1). This strategy, originally proposed by Odena et al. [13] to reduce checkerboard artifacts, was found to be very useful in our experiments (see Sec. 4.3). The number of convolution kernels at the i-th encoder layer is $\min(2^{i-1} \times 64, 512)$. The number of deconvolution kernels at the i-th dec-seg layer and of convolution kernels at the i-th dec-fill layer are the same as the number of kernels at the (i−1)-th encoder layer. The output of the first dec-seg layer has one channel (foreground segmentation), and the output of the first dec-fill layer has three channels (recovered background image). All ReLU layers in the encoder are leaky, with slope 0.2. In the dec-seg branch, standard skip layers are added. More precisely, following the layer indexing in Fig. 1, the input of the i-th layer of dec-seg is a concatenation of the output of the (i+1)-th layer in the same branch and of the output of the i-th encoder layer (except for the 7-th decoder layer, which only receives input from the 7-th encoder layer). As mentioned earlier, skip layers ensure good segmentation localization. The layers of dec-fill also receive information from equi-scale encoder layers, but this information is modulated by neglect masks generated by neglect nodes. Specifically, the i-th neglect node receives as input data from the i-th encoder layer, concatenated with data from the (i+1)-th dec-seg layer (note that this is the same as the input to the i-th dec-seg layer). A 1×1 convolution, followed by a sigmoid, produces a neglect mask (an image with values between 0 and 1). The neglect mask modulates (by pixel-wise multiplication) the content of the i-th encoder layer before it is concatenated with the output of the (i+1)-th dec-fill layer and fed to the i-th dec-fill layer. The process is shown in Fig. 2 (a). In practice, neglect nodes provide dec-fill with information about which areas of the image should be erased and infilled, while preserving content elsewhere. Visual inspection of the neglect masks shows that they faithfully identify the portion of the image to be removed at various scales (see e.g. Fig. 2). Discriminator: The input to the discriminator is the concatenation of the input image x and of the predicted background $y_p$ or the background ground-truth $y_g$.
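The neglect-node mechanism described above can be sketched as a small gating module; the snippet below is an assumed PyTorch rendering (the paper's implementation is in TensorFlow), with placeholder channel counts.

```python
import torch
import torch.nn as nn

class NeglectNode(nn.Module):
    """Sketch of a neglect node: it sees the i-th encoder features and the
    (i+1)-th dec-seg output, produces a soft mask via 1x1 conv + sigmoid, and
    uses it to gate the encoder features passed to dec-fill."""
    def __init__(self, enc_ch, seg_ch):
        super().__init__()
        self.mask_conv = nn.Conv2d(enc_ch + seg_ch, 1, kernel_size=1)

    def forward(self, enc_feat, seg_feat):
        mask = torch.sigmoid(self.mask_conv(torch.cat([enc_feat, seg_feat], dim=1)))
        return enc_feat * mask          # gated skip connection for the dec-fill branch

# Hypothetical usage inside the generator (shapes assumed compatible):
# gated        = neglect_i(enc_i, decseg_out_iplus1)
# decfill_in_i = torch.cat([gated, decfill_out_iplus1], dim=1)
```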
The structure of the discriminator is the same as the first 5 encoder layers of the generator, but its output undergoes a 1 × 1 convolution layer followed by a sigmoid function. In the case of 128 × 128 input dimension, the output size is 4 × 4, with values between 0 and 1, representing the decoder's confidence that the corresponding region is realistic. The average of these 16 values forms the final discriminator output. Experiments Datasets Training our model requires, for each image, three pieces of information: the original background image (for dec-fill); the foreground region (mask) to be removed (for dec-seg); and the composite image. Given that this type of rich information is not available in existing datasets, we built our own training and test sets. Specifically, we considered two different datasets for our experiments: one with synthetic text overimposed on regular images, and one with real images of pedestrians. Text dataset: Images in this dataset are generated by pasting text (generated synthetically) onto real background images. In this way, we have direct access to all necessary data (fore- ground, background, and composite image). Text images come from two resources: (1) the word synthesis engine of [17], which was used to generate 50K word images, along with the ground-truth associated segmentation masks; (2) the ICDAR 2013 dataset [1], which provides pixel-level text stroke labels, allowing us to extract 1850 real text regions. Random geometry transformations and color jittering were used to augment the real text, obtaining 50K more word images. Given a sample from the 100K word image pool, a similarly sized background image patch was cropped at random positions from images randomly picked from the MS COCO dataset [10]. More specifically, the background images for our training and validation set come from the training portion of the MS COCO dataset, while the background images for our test set come from the testing portion of the MS COCO dataset. This ensures that the training and testing sets do not share background images. In total, the training, validation and testing portions of our synthetic text dataset contain 100K, 15K and 15K images, respectively. Pedestrian dataset: This is built from the LASIESTA dataset [5], which contains several video sequences taken from a fixed camera with moving persons in the scene. LASIESTA provides ground-truth pedestrian segmentation for each frame in the videos. In this case, ground-truth background (which is occluded by a person at a given frame) can be found from neighboring frames, after the person has moved away. The foreground map is set to be equal to the segmentation mask provided with the dataset. We randomly selected 15 out of 17 video sequences for training, leaving the rest for testing. A sample of 1821 training images was augmented to 45K images via random cropping and color jittering. The test data set contains 198 images. Implementation Details Our system was implemented using Tensorflow and runs on a desktop (3.3Ghz 6-core CPU, 32G RAM, Titan XP GPU). The model was trained with input images resized to 384 × 128 (for the text dataset images) or 128 × 128 (for the pedestrian dataset images.) Adam solver was used during training, with learning rate set to 0.0001, momentum terms set to β 1 = 0.5 and β 2 = 0.999, and batch size equal to 8. 
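A hedged sketch of such a discriminator follows, again as an assumed PyTorch rendering rather than the authors' TensorFlow code; whether normalization is applied in the very first stage is an assumption.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Sketch of the discriminator described above: five stride-2 conv stages
    mirroring the generator's encoder, then a 1x1 conv + sigmoid giving a grid
    of per-region realism scores that is averaged. Input: concatenation of the
    input image x (3 ch) and a background image (3 ch)."""
    def __init__(self, in_ch=6):
        super().__init__()
        layers, ch = [], in_ch
        for i in range(5):
            out = min(2 ** i * 64, 512)
            layers += [nn.Conv2d(ch, out, 4, stride=2, padding=1),
                       nn.InstanceNorm2d(out),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch = out
        layers += [nn.Conv2d(ch, 1, kernel_size=1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, x, y):
        scores = self.net(torch.cat([x, y], dim=1))   # e.g. 4x4 map for 128x128 input
        return scores.mean(dim=(1, 2, 3))             # scalar confidence per sample
```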
We set $\lambda_f = \lambda_s = 100$ in the generator loss (1), and followed the standard GAN training strategy [7]. (Table 1: Quantitative comparison of our method against other state-of-the-art image inpainting algorithms (Exemplar [4], Contextual [30], EPLL [35], and IRCNN [31]). Competing inpainting algorithms are fed with a segmentation mask, either predicted by our algorithm ($z_p$) or ground-truth ($z_g$). The difference between the original and reconstructed background image is measured using $L_1$ distance, PSNR (in dB, higher is better) and SSIM (higher is better) [23]. Time measurements refer to a 128 × 128 input image.) Training alternates between the discriminator (D) and the generator (G). Note that the adversarial term of the cost $L_G$ in (1) was changed to $-\log(D(x, y_p))$ (rather than $\log(1 - D(x, y_p))$) for better numerical stability, as suggested by Goodfellow et al. [7]. When training D, the learning rate was divided by 2. Ablation Study Baseline: In order to validate the effectiveness of the two-branch decoder architecture and of the neglect layers, we compared our result against a simple baseline structure. This baseline consists of the encoder and the dec-fill decoding branch, without input from the neglect nodes but with skip layers from the encoder; it is very similar to the architecture proposed by Isola et al. [8]. Tab. 1 shows that our method consistently outperforms the baseline structure on both datasets and under all three evaluation metrics considered ($L_1$ residual, PSNR, SSIM [23]). This shows that explicit estimation of the segmentation mask, along with bypass input from the encoder modulated by the neglect mask, facilitates realistic background image synthesis. An example comparing the results of text removal and inpainting using the full system and the baseline is shown in the first row of Fig. 3. Deconvolution vs. upsampling + convolution: Deconvolution (or transpose convolution) is a standard approach for generating higher-resolution images from coarse-level features [6,8,18]. A problem with this technique is that it may produce visible checkerboard artifacts, which are due to "uneven overlapping" during the deconvolution process, especially when the kernel size is not divisible by the stride. Researchers [13] have found that replacing deconvolution with nearest-neighbor upsampling followed by convolution significantly reduces these artifacts. In our experiments, we compared the results of the two techniques (see Fig. 3, second row). Specifically, upsampling + convolution was implemented with a 3 × 3 kernel and stride 1 (as described earlier in Sec. 3.2), while deconvolution was implemented with a 4 × 4 kernel and stride 2. Even though the deconvolution kernel size is divisible by the stride, checkerboard artifacts are still visible in most cases when using deconvolution. These artifacts do not appear with upsampling + convolution, which also achieves better quantitative results, as shown in Tab. 1. (Figure 3 panel labels: Input, Ground-truth, Baseline, Ours; Input, Ground-truth, Ours (deconv), Ours.) Comparative Results Due to the lack of directly comparable methods for automatic content removal, we contrasted our technique with other state-of-the-art image inpainting algorithms, which were provided with a foreground mask. More specifically, we considered two settings for the foreground mask fed to these algorithms: (1) the segmentation mask obtained as a byproduct of our algorithm ($z_p$), and (2) the ground-truth mask ($z_g$).
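The alternating optimization of (1) and (2) with the non-saturating generator term can be sketched as below; this is an assumed PyTorch rendering, with `generator` and `discriminator` as placeholders for the networks described in Sec. 3.2, and $\lambda_f = \lambda_s = 100$ as stated above.

```python
import torch

def train_step(x, y_g, z_g, generator, discriminator, opt_g, opt_d,
               lambda_f=100.0, lambda_s=100.0, eps=1e-8):
    """One alternating D/G step for Eqs. (1)-(2); `generator` returns (y_p, z_p)."""
    y_p, z_p = generator(x)

    # -- discriminator update: real pair (x, y_g) vs fake pair (x, y_p) --
    d_real = discriminator(x, y_g)
    d_fake = discriminator(x, y_p.detach())
    loss_d = -(torch.log(d_real + eps) + torch.log(1.0 - d_fake + eps)).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # -- generator update: non-saturating adversarial term + L1 terms --
    d_fake = discriminator(x, y_p)
    loss_g = -torch.log(d_fake + eps).mean() \
             + lambda_f * torch.abs(y_g - y_p).mean() \
             + lambda_s * torch.abs(z_g - z_p).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```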
Note that the latter is a best-case scenario for the competing algorithms: our system never accessed this mask. In both cases, the masks were slightly dilated to ensure that the whole foreground region was covered. Tab. 1 shows comparative results with two legacy (but still widely used) inpainting techniques (Exemplar [4] and EPLL [35]), as well as with two more recent CNN-based algorithms (IRCNN [31] and Contextual [30]). When fed with the z p mask (setting (1)), all competing algorithms produced substantially inferior results with respect to ours under all metrics considered. Even when fed with the (unobservable) ground-truth mask z g (setting (2)), these algorithms generally performed worse than our system (except for IRCNN, which gave better results than ours, under some of the metrics). We should stress that, unlike the competing techniques, our system does not receive an externally produced foreground map. Note also that our algorithm is faster (often by several orders of magnitude) than the competing techniques. Fig. 4 shows comparative examples of results using our system, IRCNN, and Contextual (where the last two were fed with the ground-truth foreground mask, z g ). Note that, even when provided with the "ideal" mask, the visual quality of the results using these competing methods is generally inferior to that obtained with our content removal technique. The result of IRCNN, which is very similar to the result of EPLL, is clearly oversmoothed. This makes the object boundary visible due to the lack of high frequency details in the filled-in region. We also noted that this algorithms cannot cope well with large foreground masks, as can be Figure 4: Sample inpainting results using Contextual [30] (third row), IRCNN [31] (fourth row), and our system (last row). Contextual and IRCNN were fed with the ground-truth segmentation mask, while our system automatically extracted and inpainted the foreground. Top row: input image. Second row: ground-truth background image. seen in the last two columns of Fig. 4 (pedestrian dataset). Contextual [30] does a better job at recovering texture, thanks to its ability to explicitly utilize surrounding image features in its generative model. Yet, we found that our method is often better at completely removing foreground objects. Part of the foreground's boundary is still visible in Contextual's reconstructed background region. Furthermore, the quality Contextual's reconstruction drops significantly when the foreground region reaches the border of the image. This problem is not observed with our method. Fig. 4 also reveals an interesting (and unexpected) feature of our system. As can be noted in the last two columns, the shadow cast by the person was removed along with the image of the person. Note that the system was not trained to detect shadows: the foreground mask only outlined the contour of the person. The most likely reason why the algorithm removed the shadow region is that the background images in the training set data (which, as mentioned in Sec. 4.1, were obtained from frames that did not contain the person) did not contain cast shadows of this type. The system thus decided to synthesize a shadowless image, doubling up as a shadow remover. Conclusion We have presented the first automatic content removal and impainting system that can work with widely different types and sizes of the foreground to be removed and infilled. 
Com-parison with other state-of-the-art inpainting algorithms (which, unlike are system, need an externally provided foreground mask), along with the ablation study, show that our strategy of joint segmentation and inpainting provides superior results in most cases, at a lower computational cost. Future work will extend this technique to more complex scenarios such as wider ranges of foreground region sizes and transparent foreground.
3,414
1807.07696
2884959301
We introduce a new system for automatic image content removal and inpainting. Unlike traditional inpainting algorithms, which require advance knowledge of the region to be filled in, our system automatically detects the area to be removed and infilled. Region segmentation and inpainting are performed jointly in a single pass. In this way, potential segmentation errors are more naturally alleviated by the inpainting module. The system is implemented as an encoder-decoder architecture, with two decoder branches, one tasked with segmentation of the foreground region, the other with inpainting. The encoder and the two decoder branches are linked via neglect nodes, which guide the inpainting process in selecting which areas need reconstruction. The whole model is trained using a conditional GAN strategy. Comparative experiments show that our algorithm outperforms state-of-the-art inpainting techniques (which, unlike our system, do not segment the input image and thus must be aided by an external segmentation module.)
Image inpainting @cite_31 has a long history. The goal of inpainting is to fill in a missing region with realistic image content. A variety of inpainting methods have been proposed, including those based on prior image statistics @cite_23 @cite_0 , and those based on CNNs @cite_21 @cite_4 . More recently, outstanding results have been demonstrated with the use of GANs to inpaint even large missing areas @cite_8 @cite_2 @cite_15 @cite_35 . While appropriate for certain domains (e.g. face inpainting), methods in this category often suffer from serious limitations, including the requirement that the missing region have fixed size and shape. All of the inpainting methods mentioned above assume that the corrupted region mask is known (typically as provided by the user). This limits their scope of application, as in most cases this mask is unavailable. This shortcoming is addressed by blind inpainting algorithms @cite_17 @cite_30 , which do not need access to the foreground mask. However, prior blind inpainting work has been demonstrated only for very simple cases, with a constant-valued foreground occupying a small area of the image.
{ "abstract": [ "", "Recent deep learning based approaches have shown promising results on image inpainting for the challenging task of filling in large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feed-forward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces, textures and natural images demonstrate that the proposed approach generates higher-quality inpainting results than existing ones. Code and trained models will be released.", "", "Semantic image inpainting is a challenging task where large missing regions have to be filled based on the available visual data. Existing methods which extract information from only a single image generally produce unsatisfactory results due to the lack of high level context. In this paper, we propose a novel method for semantic image inpainting, which generates the missing content by conditioning on the available data. Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content. In our method, inference is possible irrespective of how the missing content is structured, while the state-of-the-art learning based method requires specific information about the holes in the training phase. Experiments on three datasets show that our method successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.", "", "", "Learning good image priors is of utmost importance for the study of vision, computer vision and image processing applications. Learning priors and optimizing over whole images can lead to tremendous computational challenges. In contrast, when we work with small image patches, it is possible to learn priors and perform patch restoration very efficiently. This raises three questions - do priors that give high likelihood to the data also lead to good performance in restoration? Can we use such patch based priors to restore a full image? Can we learn better patch priors? In this work we answer these questions. We compare the likelihood of several patch models and show that priors that give high likelihood to data perform better in patch restoration. Motivated by this result, we propose a generic framework which allows for whole image restoration using any patch based prior for which a MAP (or approximate MAP) estimate can be calculated. We show how to derive an appropriate cost function, how to optimize it and how to use it to restore whole images. 
Finally, we present a generic, surprisingly simple Gaussian Mixture prior, learned from a set of natural images. When used with the proposed framework, this Gaussian Mixture Model outperforms all other generic prior methods for image denoising, deblurring and inpainting.", "We develop a framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks. The approach extends traditional Markov random field (MRF) models by learning potential functions over extended pixel neighborhoods. Field potentials are modeled using a Products-of-Experts framework that exploits nonlinear functions of many linear filter responses. In contrast to previous MRF approaches all parameters, including the linear filters themselves, are learned from training data. We demonstrate the capabilities of this Field of Experts model with two example applications, image denoising and image inpainting, which are implemented using a simple, approximate inference scheme. While the model is trained on a generic image database and is not tuned toward a specific application, we obtain results that compete with and even outperform specialized techniques.", "", "", "We present a novel approach to low-level vision problems that combines sparse coding and deep networks pre-trained with denoising auto-encoder (DA). We propose an alternative training scheme that successfully adapts DA, originally designed for unsupervised feature learning, to the tasks of image denoising and blind inpainting. Our method's performance in the image denoising task is comparable to that of KSVD which is a widely used sparse coding technique. More importantly, in blind image inpainting task, the proposed method provides solutions to some complex problems that have not been tackled before. Specifically, we can automatically remove complex patterns like superimposed text from an image, rather than simple patterns like pixels missing at random. Moreover, the proposed method does not need the information regarding the region that requires inpainting to be given a priori. Experimental results demonstrate the effectiveness of the proposed method in the tasks of image denoising and blind inpainting. We also show that our new training scheme for DA is more effective and can improve the performance of unsupervised feature learning." ], "cite_N": [ "@cite_30", "@cite_35", "@cite_4", "@cite_8", "@cite_15", "@cite_21", "@cite_0", "@cite_23", "@cite_2", "@cite_31", "@cite_17" ], "mid": [ "", "2784790939", "", "2735970878", "", "", "2172275395", "2131686571", "", "", "2146337213" ] }
Automatic Semantic Content Removal by Learning to Neglect
Automatic removal of specific content in an image is a task of practical interest, as well as of intellectual appeal. There are many situations in which a part of an image needs to be erased and replaced. This may include text (whether present in the scene or superimposed on the image), which may have to be removed, for example to protect personal information; people, such as undesired passersby in the scene; or other objects that, for any reason, one may want to wipe off the picture. While these operations are typically performed manually by skilled Photoshop editors, substantial cost reduction could be achieved by automating the workflow. Automatic content removal, though, is difficult, and an as yet unsolved problem. Removing content and inpainting the image in a way that looks "natural" entails the ability to capture, represent, and synthesize high-level ("semantic") image content. This is particularly true of infilling large image areas, an operation that only recently has been accomplished with some success [14,27,29,30]. Image inpainting algorithms described in the literature normally require that a binary "mask" indicating the location of the area to be synthesized be provided, typically via manual input. In contrast, an automatic content removal system must be able to accomplish two tasks. First, the pattern or object of interest must be segmented out, creating a binary mask; then, the image content within the mask must be synthesized. The work described in this paper is born from the realization that, for optimal results, these two tasks (segmentation and inpainting) should not be carried out independently. In fact, only in a few specific situations can the portion of the image to be removed be represented by a binary mask. Edge smoothing effects are almost always present, either due to the camera's point spread function, or due to blending, if the pattern (e.g. text) is superimposed on the image. Although one could potentially recover an alpha mask for the foreground content to be removed, we believe that a more appropriate strategy is to simultaneously detect the foreground and synthesize the background image. By doing so, we do not need to resort to hand-made tricks, such as expanding the binary mask to account for inaccurate localization. Our algorithm for content removal and inpainting relies on conditional generative adversarial networks (cGANs) [8], which have become the tool of choice for image synthesis. Our network architecture is based on an encoder-decoder scheme with skip layers (see Fig. 1). Rather than a single decoder branch, however, our network contains two parallel and interconnected branches: one (dec-seg) designed to extract the foreground area to be removed; the other (dec-fill) tasked with synthesizing the missing background. The dec-seg branch interacts with the dec-fill branch via multiple neglect nodes (see Fig. 2). The concept of neglect nodes is germane (but in reverse) to that of mixing nodes normally found in attention networks [22,26]. Mixing nodes highlight a portion of the image that needs to be attended to, or, in our case, neglected. Neglect nodes appear in all layers of the architecture; they ensure that the dec-fill branch is aware of which portions of the image are to be synthesized, without ever committing to a binary mask.
A remarkable feature of the proposed system is that the multiple components of the network (encoder and two decoder branches, along with the neglect nodes) are all trained at the same time. Training seeks to minimize a global cost function that includes a conditional GAN component, as well as L 1 distance components for both foreground segmentation and background image. This optimization requires ground-truth availability of foreground segmentation (the component to be removed), background images (the original image without the foreground), and composite images (foreground over background). By jointly optimizing the multiple network components (rather than, say, optimizing for the dec-seg independently on foreground segmentation, then using it to condition optimization of dec-fill via the neglect nodes), we are able to accurately reconstruct the background inpainted image. The algorithm also produces the foreground segmentation as a byproduct. We should emphasize that this foreground mask is not used by the dec-fill synthesizing layer, which only communicates with the dec-seg layer via the neglect nodes. To summarize, this paper has two main contributions. First, we present the first (to the best of our knowledge) truly automatic semantic content removal system with promising results on realistic images. The proposed algorithm is able to recover high-quality background without any knowledge of the foreground segmentation mask. Unlike most previous GAN-based inpainting methods that assume a rectangular foreground region to be removed [14,27,29], our system produces good result with any foreground shape, even when it extends to the image boundary. Second, we introduce a novel encoder-decoder network structure with two parallel and interconnected branches (dec-seg and dec-fill), linked at multiple levels by mixing (neglect) nodes that determine which information from the encoder should be used for synthesis, and which should be neglected. Foreground region segmentation and background inpainting is produced in one single forward pass. Proposed Algorithm Our system for automatic semantic content removal comprises an encoder-decoder network with two decoder branches, tasked with predicting a segmentation mask (dec-seg) and a background image (dec-fill) in a single forward pass. Neglect nodes (an original feature of this architecture) link the two decoder branches and the encoder at various levels. The network is trained along with a discriminator network in an adversarial scheme, in order to foster realistic background image synthesis. We assume in this work that the foreground region to be removed occupies a large portion of the image (or, equivalently, that the image is cropped such that the foreground region takes most of the cropped region). In practice, this can be obtained using a standard object detector. Note that high accuracy of the (rectangular) detector is not required. In our experiments, the margin between the contour of the foreground region and the edges of the image was let to vary between 0 (foreground touching the image edge) to half the size of the foreground mask. Loss Function Following the terminology of GANs, the output z p of dec-seg and y p of dec-fill for an input image x are taken to represent the output of a generator G(x). The generator is trained with a dual task: ensuring that z p and y p are similar to the ground-truth (z g and y g ), and deceiving the discriminator D(x, y), which is concurrently trained to separate y p from y g given x. 
The cost function L_G for the generator combines the conditional GAN loss with a linear combination of the L_1 distances between prediction and ground-truth for the segmentation and the inpainted background:

L_G = E[log(1 − D(x, y_p))] + λ_f E[||y_g − y_p||_1] + λ_s E[||z_g − z_p||_1]    (1)

The discriminator D is trained to minimize the following discriminator loss L_D:

L_D = −(E[log D(x, y_g)] + E[log(1 − D(x, y_p))])    (2)

Network Architecture Generator: Segmentation and background infilling are generated in a single pass by an encoder-decoder network architecture, with multiple encoder layers generating multi-scale features at decreasing resolution, and two parallel decoder branches (dec-seg and dec-fill) producing high-resolution output starting from low-resolution features and higher-resolution data from skip layers. Each encoder stage consists of a convolution layer with a kernel of size 4 × 4 and stride 2, followed by instance normalization [21] and ReLU. Each stage of dec-seg contains a deconvolution (transpose convolution) layer (kernel of 4 × 4, stride 2), followed by instance normalization and ReLU. dec-fill replaces each deconvolution layer with a nearest-neighbor upsampling layer followed by a convolution layer (kernel of size 3 × 3, stride 1). This strategy, originally proposed by Odena et al. [13] to reduce checkerboard artifacts, was found to be very useful in our experiments (see Sec. 4.3). The total number of convolution kernels at the i-th encoder layer is min(2^(i−1) × 64, 512). The number of deconvolution kernels at the i-th dec-seg layer or convolution kernels at the i-th dec-fill layer is the same as the number of kernels at the (i − 1)-th encoder layer. The output of the first dec-seg layer and dec-fill layer has one channel (foreground segmentation) and three channels (recovered background image), respectively. All ReLU layers in the encoder are leaky, with a slope of 0.2. In the dec-seg branch, standard skip layers are added. More precisely, following the layer indexing in Fig. 1, the input of the i-th layer of dec-seg is a concatenation of the output of the (i + 1)-th layer in the same branch and of the output of the i-th encoder layer (except for the 7-th decoder layer, which only receives input from the 7-th encoder layer). As mentioned earlier, skip layers ensure good segmentation localization. The layers of dec-fill also receive information from equi-scale encoder layers, but this information is modulated by neglect masks generated by neglect nodes. Specifically, the i-th neglect node receives as input the data from the i-th encoder layer, concatenated with the data from the (i + 1)-th dec-seg layer (note that this is the same as the input to the i-th dec-seg layer). A 1 × 1 convolution, followed by a sigmoid, produces a neglect mask (an image with values between 0 and 1). The neglect mask modulates (by pixel-wise multiplication) the content of the i-th encoder layer, before it is concatenated with the output of the (i + 1)-th dec-fill layer and fed to the i-th dec-fill layer. The process is shown in Fig. 2 (a). In practice, neglect nodes provide dec-fill with information about which areas of the image should be erased and infilled, while preserving content elsewhere. Visual inspection of the neglect masks shows that they faithfully identify the portion of the image to be removed at various scales (see e.g. Fig. 2). Discriminator: The input to the discriminator is the concatenation of the input image x and of the predicted background y_p or the background ground-truth y_g.
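Before moving on to the discriminator details, here is a minimal PyTorch sketch of the neglect-node gating just described (the paper's implementation is in TensorFlow; the NeglectNode class, channel counts and tensor names are illustrative assumptions, not the authors' code):

```python
import torch
import torch.nn as nn

class NeglectNode(nn.Module):
    """Gates the i-th encoder features before they reach the dec-fill branch.

    enc_i      -- features from the i-th encoder layer
    dec_seg_up -- output of the (i+1)-th dec-seg layer, at the same spatial size
    Returns the gated encoder features and the neglect mask itself.
    """
    def __init__(self, enc_channels, seg_channels):
        super().__init__()
        # A 1x1 convolution followed by a sigmoid produces the neglect mask in [0, 1].
        self.mask_conv = nn.Conv2d(enc_channels + seg_channels, 1, kernel_size=1)

    def forward(self, enc_i, dec_seg_up):
        mask = torch.sigmoid(self.mask_conv(torch.cat([enc_i, dec_seg_up], dim=1)))
        # Pixel-wise modulation: regions to be removed are suppressed (mask -> 0),
        # everything else passes through to dec-fill almost unchanged.
        return enc_i * mask, mask

# Hypothetical use inside the i-th dec-fill stage:
# gated, mask = neglect_node(enc_i, dec_seg_up)
# dec_fill_in = torch.cat([gated, dec_fill_up], dim=1)   # input to the i-th dec-fill layer
```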
The structure of the discriminator is the same as the first 5 encoder layers of the generator, but its output undergoes a 1 × 1 convolution layer followed by a sigmoid function. In the case of 128 × 128 input dimension, the output size is 4 × 4, with values between 0 and 1, representing the decoder's confidence that the corresponding region is realistic. The average of these 16 values forms the final discriminator output. Experiments Datasets Training our model requires, for each image, three pieces of information: the original background image (for dec-fill); the foreground region (mask) to be removed (for dec-seg); and the composite image. Given that this type of rich information is not available in existing datasets, we built our own training and test sets. Specifically, we considered two different datasets for our experiments: one with synthetic text overimposed on regular images, and one with real images of pedestrians. Text dataset: Images in this dataset are generated by pasting text (generated synthetically) onto real background images. In this way, we have direct access to all necessary data (fore- ground, background, and composite image). Text images come from two resources: (1) the word synthesis engine of [17], which was used to generate 50K word images, along with the ground-truth associated segmentation masks; (2) the ICDAR 2013 dataset [1], which provides pixel-level text stroke labels, allowing us to extract 1850 real text regions. Random geometry transformations and color jittering were used to augment the real text, obtaining 50K more word images. Given a sample from the 100K word image pool, a similarly sized background image patch was cropped at random positions from images randomly picked from the MS COCO dataset [10]. More specifically, the background images for our training and validation set come from the training portion of the MS COCO dataset, while the background images for our test set come from the testing portion of the MS COCO dataset. This ensures that the training and testing sets do not share background images. In total, the training, validation and testing portions of our synthetic text dataset contain 100K, 15K and 15K images, respectively. Pedestrian dataset: This is built from the LASIESTA dataset [5], which contains several video sequences taken from a fixed camera with moving persons in the scene. LASIESTA provides ground-truth pedestrian segmentation for each frame in the videos. In this case, ground-truth background (which is occluded by a person at a given frame) can be found from neighboring frames, after the person has moved away. The foreground map is set to be equal to the segmentation mask provided with the dataset. We randomly selected 15 out of 17 video sequences for training, leaving the rest for testing. A sample of 1821 training images was augmented to 45K images via random cropping and color jittering. The test data set contains 198 images. Implementation Details Our system was implemented using Tensorflow and runs on a desktop (3.3Ghz 6-core CPU, 32G RAM, Titan XP GPU). The model was trained with input images resized to 384 × 128 (for the text dataset images) or 128 × 128 (for the pedestrian dataset images.) Adam solver was used during training, with learning rate set to 0.0001, momentum terms set to β 1 = 0.5 and β 2 = 0.999, and batch size equal to 8. 
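To illustrate how these optimization settings combine with the losses in Eqs. (1)-(2), the following is a hedged PyTorch-style sketch of one alternating training iteration (the authors' implementation is in TensorFlow; the generator, discriminator and data pipeline are assumed to exist, and the loss weights, the halved discriminator learning rate and the non-saturating generator term anticipate the details given in the next paragraph):

```python
import torch
import torch.nn.functional as F

# Hypothetical modules: generator maps x -> (y_pred, z_pred),
# discriminator maps (x, y) -> average patch score in (0, 1).
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=0.5e-4, betas=(0.5, 0.999))  # lr divided by 2 for D
lam_f, lam_s = 100.0, 100.0
eps = 1e-8  # numerical safety inside the logs

def train_step(x, y_gt, z_gt):
    # --- Discriminator update, Eq. (2) ---
    with torch.no_grad():
        y_pred, _ = generator(x)
    d_real = discriminator(x, y_gt)
    d_fake = discriminator(x, y_pred)
    loss_d = -(torch.log(d_real + eps) + torch.log(1.0 - d_fake + eps)).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- Generator update, Eq. (1) with the non-saturating -log D(x, y_p) term ---
    y_pred, z_pred = generator(x)
    adv = -torch.log(discriminator(x, y_pred) + eps).mean()
    loss_g = adv + lam_f * F.l1_loss(y_pred, y_gt) + lam_s * F.l1_loss(z_pred, z_gt)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```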
We set λ_f = λ_s = 100 in the generator loss (1), and followed the standard GAN training strategy [7]. Training is alternated between the discriminator (D) and the generator (G). Note that the adversarial term of the cost L_G in (1) was changed to −log(D(x, y_p)) (rather than log(1 − D(x, y_p))) for better numerical stability, as suggested by Goodfellow et al. [7]. When training D, the learning rate was divided by 2. Table 1: Quantitative comparison of our method against other state-of-the-art image inpainting algorithms (Exemplar [4], Contextual [30], EPLL [35], and IRCNN [31]). Competing inpainting algorithms are fed with a segmentation mask, either predicted by our algorithm (z_p) or ground-truth (z_g). The difference between the original and reconstructed background image is measured using L_1 distance, PSNR (in dB, higher is better) and SSIM (higher is better) [23]. Time measurements refer to a 128 × 128 input image. Ablation Study Baseline: In order to validate the effectiveness of the two-branch decoder architecture and of the neglect layers, we compared our result against a simple baseline structure. This baseline structure is made up of the encoder and the dec-fill decoding branch, without input from the neglect nodes, but with skip layers from the encoder. This is very similar to the architecture proposed by Isola et al. [8]. Tab. 1 shows that our method consistently outperforms the baseline structure with both datasets and under all three evaluation metrics considered (L_1 residual, PSNR, SSIM [23]). This shows that explicit estimation of the segmentation mask, along with bypass input from the encoder modulated by the neglect mask, facilitates realistic background image synthesis. An example comparing the result of text removal and of inpainting using the full system and the baseline is shown in the first row of Fig. 3. Deconvolution vs. upsampling + convolution: Deconvolution (or transpose convolution) is a standard approach for generating higher-resolution images from coarse-level features [6,8,18]. A problem with this technique is that it may produce visible checkerboard artifacts, which are due to "uneven overlapping" during the deconvolution process, especially when the kernel size is not divisible by the stride. Researchers [13] have found that by replacing deconvolution with nearest-neighbor upsampling followed by convolution, these artifacts can be significantly reduced. In our experiments, we compared the results using these two techniques (see Fig. 3, second row). Specifically, upsampling + convolution was implemented using a kernel of size 3 × 3 with stride 1 (as described earlier in Sec. 3.2), while deconvolution was implemented by a kernel of size 4 × 4 and stride 2. Even though the deconvolution kernel size is divisible by the stride, checkerboard artifacts are still visible in most cases using deconvolution. These artifacts do not appear using upsampling + convolution, which also achieves better quantitative results as shown in Tab. 1. Comparative Results Due to the lack of directly comparable methods for automatic content removal, we contrasted our technique with other state-of-the-art image inpainting algorithms, which were provided with a foreground mask. More specifically, we considered two settings for the foreground mask fed to these algorithms: (1) the segmentation mask obtained as a byproduct of our algorithm (z_p), and (2) the ground-truth mask (z_g).
Note that the latter is a best-case scenario for the competing algorithms: our system never accessed this mask. In both cases, the masks were slightly dilated to ensure that the whole foreground region was covered. Tab. 1 shows comparative results with two legacy (but still widely used) inpainting techniques (Exemplar [4] and EPLL [35]), as well as with two more recent CNN-based algorithms (IRCNN [31] and Contextual [30]). When fed with the z_p mask (setting (1)), all competing algorithms produced substantially inferior results with respect to ours under all metrics considered. Even when fed with the (unobservable) ground-truth mask z_g (setting (2)), these algorithms generally performed worse than our system (except for IRCNN, which gave better results than ours under some of the metrics). We should stress that, unlike the competing techniques, our system does not receive an externally produced foreground map. Note also that our algorithm is faster (often by several orders of magnitude) than the competing techniques. Fig. 4 shows comparative examples of results using our system, IRCNN, and Contextual (where the last two were fed with the ground-truth foreground mask, z_g). Note that, even when provided with the "ideal" mask, the visual quality of the results using these competing methods is generally inferior to that obtained with our content removal technique. The result of IRCNN, which is very similar to the result of EPLL, is clearly oversmoothed. This makes the object boundary visible due to the lack of high-frequency details in the filled-in region. We also noted that these algorithms cannot cope well with large foreground masks, as can be seen in the last two columns of Fig. 4 (pedestrian dataset). Figure 4: Sample inpainting results using Contextual [30] (third row), IRCNN [31] (fourth row), and our system (last row). Contextual and IRCNN were fed with the ground-truth segmentation mask, while our system automatically extracted and inpainted the foreground. Top row: input image. Second row: ground-truth background image. Contextual [30] does a better job at recovering texture, thanks to its ability to explicitly utilize surrounding image features in its generative model. Yet, we found that our method is often better at completely removing foreground objects. Part of the foreground's boundary is still visible in Contextual's reconstructed background region. Furthermore, the quality of Contextual's reconstruction drops significantly when the foreground region reaches the border of the image. This problem is not observed with our method. Fig. 4 also reveals an interesting (and unexpected) feature of our system. As can be noted in the last two columns, the shadow cast by the person was removed along with the image of the person. Note that the system was not trained to detect shadows: the foreground mask only outlined the contour of the person. The most likely reason why the algorithm removed the shadow region is that the background images in the training set (which, as mentioned in Sec. 4.1, were obtained from frames that did not contain the person) did not contain cast shadows of this type. The system thus learned to synthesize a shadowless image, doubling up as a shadow remover. Conclusion We have presented the first automatic content removal and inpainting system that can work with widely different types and sizes of the foreground to be removed and infilled.
Comparison with other state-of-the-art inpainting algorithms (which, unlike our system, need an externally provided foreground mask), along with the ablation study, shows that our strategy of joint segmentation and inpainting provides superior results in most cases, at a lower computational cost. Future work will extend this technique to more complex scenarios such as wider ranges of foreground region sizes and transparent foregrounds.
3,414
1807.07930
2942641522
With the advent of perceptual loss functions, new possibilities in super-resolution have emerged, and we currently have models that successfully generate near-photorealistic high-resolution images from their low-resolution observations. Up to now, however, such approaches have been exclusively limited to single image super-resolution. The application of perceptual loss functions on video processing still entails several challenges, mostly related to the lack of temporal consistency of the generated images, i.e., flickering artifacts. In this work, we present a novel adversarial recurrent network for video upscaling that is able to produce realistic textures in a temporally consistent way. The proposed architecture naturally leverages information from previous frames due to its recurrent architecture, i.e. the input to the generator is composed of the low-resolution image and, additionally, the warped output of the network at the previous step. Together with a video discriminator, we also propose additional loss functions to further reinforce temporal consistency in the generated sequences. The experimental validation of our algorithm shows the effectiveness of our approach which obtains images with high perceptual quality and improved temporal consistency.
Single image SR is one of the most relevant inverse problems in the field of generative image processing tasks @cite_21 @cite_42 . Since the initial work by @cite_12 which applied small convolutional neural networks to the task of single image SR, several better neural network architectures have been proposed that have achieved a significantly higher PSNR across various datasets @cite_19 @cite_4 @cite_35 @cite_13 @cite_37 @cite_7 @cite_1 . Generally, advances in network architectures for image detection tasks have also helped in SR, e.g. adding residual connections @cite_28 enables the use of much deeper networks and speeds up training @cite_38 . We refer the reader to Agustsson and Timofte @cite_10 for a survey of the state of the art in single image SR.
{ "abstract": [ "As a successful deep model applied in image super-resolution (SR), the Super-Resolution Convolutional Neural Network (SRCNN) [1, 2] has demonstrated superior performance to the previous hand-crafted models either in speed and restoration quality. However, the high computational cost still hinders it from practical usage that demands real-time performance (24 fps). In this paper, we aim at accelerating the current SRCNN, and propose a compact hourglass-shape CNN structure for faster and better SR. We re-design the SRCNN structure mainly in three aspects. First, we introduce a deconvolution layer at the end of the network, then the mapping is learned directly from the original low-resolution image (without interpolation) to the high-resolution one. Second, we reformulate the mapping layer by shrinking the input feature dimension before mapping and expanding back afterwards. Third, we adopt smaller filter sizes but more mapping layers. The proposed model achieves a speed up of more than 40 times with even superior restoration quality. Further, we present the parameter settings that can achieve real-time performance on a generic CPU while still maintaining good performance. A corresponding transfer strategy is also proposed for fast training and testing across different upscaling factors.", "Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image super-resolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. Our method does not require the bicubic interpolation as the pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. Furthermore, our network generates multi-scale predictions in one feed-forward pass through the progressive reconstruction, thereby facilitates resource-aware applications. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of speed and accuracy.", "", "", "", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. 
Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "This paper introduces a novel large dataset for example-based single image super-resolution and studies the state-of-the-art as emerged from the NTIRE 2017 challenge. The challenge is the first challenge of its kind, with 6 competitions, hundreds of participants and tens of proposed solutions. Our newly collected DIVerse 2K resolution image dataset (DIV2K) was employed by the challenge. In our study we compare the solutions from the challenge to a set of representative methods from the literature and evaluate them using diverse measures on our proposed DIV2K dataset. Moreover, we conduct a number of experiments and draw conclusions on several topics of interest. We conclude that the NTIRE 2017 challenge pushes the state-of-the-art in single-image super-resolution, reaching the best results to date on the popular Set5, Set14, B100, Urban100 datasets and on our newly proposed DIV2K.", "Image Super-Resolution: Historical Overview and Future Challenges, J. Yang and T. Huang Introduction to Super-Resolution Notations Techniques for Super-Resolution Challenge issues for Super-Resolution Super-Resolution Using Adaptive Wiener Filters, R.C. Hardie Introduction Observation Model AWF SR Algorithms Experimental Results Conclusions Acknowledgments Locally Adaptive Kernel Regression for Space-Time Super-Resolution, H. Takeda and P. Milanfar Introduction Adaptive Kernel Regression Examples Conclusion AppendiX Super-Resolution With Probabilistic Motion Estimation, M. Protter and M. Elad Introduction Classic Super-Resolution: Background The Proposed Algorithm Experimental Validation Summary Spatially Adaptive Filtering as Regularization in Inverse Imaging, A. Danielyan, A. Foi, V. Katkovnik, and K. Egiazarian Introduction Iterative filtering as regularization Compressed sensing Super-resolution Conclusions Registration for Super-Resolution, P. Vandewalle, L. Sbaiz, and M. Vetterli Camera Model What Is Resolution? Super-Resolution as a Multichannel Sampling Problem Registration of Totally Aliased Signals Registration of Partially Aliased Signals Conclusions Towards Super-Resolution in the Presence of Spatially Varying Blur, M. Sorel, F. Sroubek and J. Flusser Introduction Defocus and Optical Aberrations Camera Motion Blur Scene Motion Algorithms Conclusion Acknowledgments Toward Robust Reconstruction-Based Super-Resolution, M. Tanaka and M. Okutomi Introduction Overviews Robust SR Reconstruction with Pixel Selection Robust Super-Resolution Using MPEG Motion Vectors Robust Registration for Super-Resolution Conclusions Multi-Frame Super-Resolution from a Bayesian Perspective, L. Pickup, S. Roberts, A. Zisserman and D. Capel The Generative Model Where Super-Resolution Algorithms Go Wrong Simultaneous Super-Resolution Bayesian Marginalization Concluding Remarks Variational Bayesian Super Resolution Reconstruction, S. Derin Babacan, R. Molina, and A.K. Katsaggelos Introduction Problem Formulation Bayesian Framework for Super Resolution Bayesian Inference Variational Bayesian Inference Using TV Image Priors Experiments Estimation of Motion and Blur Conclusions Acknowledgements Pattern Recognition Techniques for Image Super-Resolution, K. Ni and T.Q. 
Nguyen Introduction Nearest Neighbor Super-Resolution Markov Random Fields and Approximations Kernel Machines for Image Super-Resolution Multiple Learners and Multiple Regressions Design Considerations and Examples Remarks Glossary Super-Resolution Reconstruction of Multi-Channel Images, O.G. Sezer and Y. Altunbasak Introduction Notation Image Acquisition Model Subspace Representation Reconstruction Algorithm Experiments & Discussions Conclusion New Applications of Super-Resolution in Medical Imaging, M.D.Robinson, S.J. Chiu, C.A. Toth, J.A. Izatt, J.Y. Lo, and S. Farsiu Introduction The Super-Resolution Framework New Medical Imaging Applications Conclusion Acknowledgment Practicing Super-Resolution: What Have We Learned? N. Bozinovic Abstract Introduction MotionDSP: History and Concepts Markets and Applications Technology Results Lessons Learned Conclusions", "Super-resolution, the process of obtaining one or more high-resolution images from one or more low-resolution observations, has been a very attractive research topic over the last two decades. It has found practical applications in many real-world problems in different fields, from satellite and aerial imaging to medical image processing, to facial image analysis, text image analysis, sign and number plates reading, and biometrics recognition, to name a few. This has resulted in many research papers, each developing a new super-resolution algorithm for a specific purpose. The current comprehensive survey provides an overview of most of these published works by grouping them in a broad taxonomy. For each of the groups in the taxonomy, the basic concepts of the algorithms are first explained and then the paths through which each of these groups have evolved are given in detail, by mentioning the contributions of different authors to the basic concepts of each group. Furthermore, common issues in super-resolution algorithms, such as imaging models and registration algorithms, optimization of the cost functions employed, dealing with color information, improvement factors, assessment of super-resolution algorithms, and the most commonly employed databases are discussed.", "", "Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. 
We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.", "Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https: github.com tyshiwo DRRN_CVPR17.", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage." ], "cite_N": [ "@cite_35", "@cite_37", "@cite_38", "@cite_4", "@cite_7", "@cite_28", "@cite_10", "@cite_21", "@cite_42", "@cite_1", "@cite_19", "@cite_13", "@cite_12" ], "mid": [ "2503339013", "2607041014", "", "", "", "2194775991", "2741137940", "2914829730", "1973788353", "", "2476548250", "2747898905", "54257720" ] }
Photorealistic Video Super Resolution
Advances in convolutional neural networks have revolutionized computer vision, and the popular field of super-resolution has been no exception to this rule, as in recent years numerous publications have made great strides towards better reconstructions of high-resolution pictures. A most promising new trend in super-resolution has emerged as the application of perceptual loss functions rather than the previously ubiquitous optimization of the mean squared error. This paradigm shift has enabled the leap from images with blurred textures to near-photorealistic results in terms of perceived image quality using deep neural networks. Notwithstanding the recent success in single-image super-resolution, perceptual losses have not yet been successfully utilized in the video super-resolution domain, as perceptual losses typically introduce artifacts that, while being unobtrusive in the spatial domain, emerge as spurious flickering artifacts in videos. In this paper we propose a neural network model that is able to produce sharp videos with fine details while improving its behaviour in terms of temporal consistency. The contributions of the paper are: (1) we propose a recurrent generative adversarial model that uses optical flow in order to exploit temporal cues across frames, and (2) we introduce a temporal-consistency loss term that reinforces coherent consecutive frames in the temporal domain. Proposed method Notation and problem statement Video super-resolution aims at upscaling a given LR image sequence {Y_t} by a factor of s, so that the estimated sequence {X̂_t} resembles the original sequence {X_t} by some metric. We denote images in the low-resolution domain by Y ∈ [0, 1]^(h×w×3), and ground-truth images in the high-resolution domain by X ∈ [0, 1]^(sh×sw×3) for a given magnification factor s. An estimate of a high-resolution image X is denoted by X̂. We index frames within a temporal sequence by a subscript to the image variable, e.g., Y_{t−1}, Y_t. We use a superscript w, e.g. X̂^w_{t−1}, to denote an image X̂ that has been warped from its time step t − 1 to the following frame X_t. The proposed architecture is summarized in Figure 1 and will be explained in detail in the following sections. We define an architecture that naturally leverages not only single-image but also inter-frame details present in video sequences by using a recurrent neural network architecture. Fig. 1. Network architectures for generator and discriminator. The previous output frame is warped onto the current frame and mapped to LR with the space-to-depth transformation before being concatenated to the current LR input frame. The generator follows a ResNet architecture with skip connections around the residual blocks and around the whole network. The discriminator follows the common design pattern of decreasing the spatial dimension of the images while increasing the number of channels after each block. The previous output frame is warped according to the optical flow estimate given by FlowNet 2.0 [30]. By including a discriminator that is only needed at the training stage, we further enable adversarial training, which has proved to be a powerful tool for generating sharper and more realistic images [14,17]. To the best of our knowledge, the use of perceptual loss functions (i.e. adversarial training) in recurrent architectures for video super-resolution is novel. In a recently published work, Sajjadi et al.
[25] propose a similar recurrent architecture for video super-resolution; however, they do not utilize a perceptual objective and, in contrast to our method, they do not apply an explicit loss term that enforces temporal consistency. Recurrent generator and discriminator Following recent state-of-the-art super-resolution methods for both classical and perceptual loss functions [9,14,17,31], we use deep convolutional neural networks with residual connections. This class of networks facilitates learning the identity mapping and leads to better gradient flow through deep networks. Specifically, we adopt a ResNet architecture for our recurrent generator that is similar to the ones introduced by [14,17], with some modifications. Each of the residual blocks is composed of a convolution, a Rectified Linear Unit (ReLU) activation and another convolutional layer following the activation. Previous approaches have applied batch normalization layers in the residual blocks [17], but we choose not to add batch normalization to the generator due to the comparatively small batch size, to avoid potential color shift problems, and also taking into account recent evidence hinting that they might be problematic for generative image models [32]. In order to further accelerate and stabilize training, we create an additional skip connection over the whole generator. This means that the network only needs to learn the residual between the bicubic interpolation of the input and the high-resolution ground-truth image, rather than having to pass through all low frequencies as well [7,14]. We perform most of our convolutions in low-resolution space for a higher receptive field and higher efficiency. Since the input image has a lower dimension than the output image, the generator needs to have a module that increases the resolution towards the end. There are several ways to do so within a neural network, e.g., transposed convolution layers, interpolation, or depth-to-space units (pixel shuffle). Following Shi et al. [4], we reshuffle the activations from the channel dimension to the height and width dimensions so that the effective spatial resolution is increased (and, consequently, this operation decreases the number of channels). The upscaling unit is divided into two stages with an intermediate magnification step r (e.g. two times ×2 for a magnification factor of ×4). Each of the upscaling stages is composed of a convolutional layer that increments the number of channels by a factor of r^2, a depth-to-space operation and a ReLU activation. Our discriminator follows common design choices and is composed of strided convolutions, batch normalization and leaky ReLU activations that progressively decrease the spatial resolution of the activations and increase the channel count [14,17,33]. The last stage of the discriminator is composed of two dense layers and a sigmoid activation function. In contrast to general generative adversarial networks, the input to the proposed generative network is not a random variable but is composed of the low-resolution image Y_t (corresponding to the current frame t) and, additionally, the warped output of the network at the previous step, X̂^w_{t−1}. The difference in resolution of these two images is adapted through a space-to-channel layer, which decreases the spatial resolution of X̂^w_{t−1} without loss of data. For warping the previous image, a dense optical flow field is estimated with a flow estimation network as described in the following section.
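As an illustration of the data flow just described, here is a hedged PyTorch sketch of one recurrent generator step (warp the previous output, map it to LR with space-to-depth, concatenate with the current LR frame, run the residual body, and upscale in two ×2 stages); all module names, the flow convention and the block count are assumptions rather than the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(img, flow):
    """Warp img (N,C,H,W) with a dense flow field (N,2,H,W) given in pixels, x-displacement first."""
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=img.device),
                            torch.arange(w, device=img.device), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float()               # (2,H,W), x first
    coords = base.unsqueeze(0) + flow                         # absolute sampling positions
    grid = torch.stack((2.0 * coords[:, 0] / (w - 1) - 1.0,   # normalize to [-1,1] for grid_sample
                        2.0 * coords[:, 1] / (h - 1) - 1.0), dim=-1)
    return F.grid_sample(img, grid, mode="bilinear", align_corners=True)

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))          # conv -> ReLU -> conv, no batch norm

class RecurrentSRGenerator(nn.Module):
    def __init__(self, scale=4, ch=64, n_blocks=10):
        super().__init__()
        in_ch = 3 + 3 * scale * scale                          # LR frame + space-to-depth of previous output
        self.head = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.body = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        # two x2 stages: conv raising channels by r^2, depth-to-space (pixel shuffle), ReLU
        self.up1 = nn.Sequential(nn.Conv2d(ch, ch * 4, 3, padding=1), nn.PixelShuffle(2), nn.ReLU())
        self.up2 = nn.Sequential(nn.Conv2d(ch, ch * 4, 3, padding=1), nn.PixelShuffle(2), nn.ReLU())
        self.tail = nn.Conv2d(ch, 3, 3, padding=1)
        self.scale = scale

    def forward(self, lr_t, prev_hr, flow_hr):
        prev_warped = warp(prev_hr, flow_hr)                   # align previous output to the current frame
        prev_lr = F.pixel_unshuffle(prev_warped, self.scale)   # space-to-depth: HR grid -> LR grid
        x = self.head(torch.cat([lr_t, prev_lr], dim=1))
        x = self.up2(self.up1(self.body(x)))
        residual = self.tail(x)
        # global skip connection: predict only the residual over the bicubic upsample
        return residual + F.interpolate(lr_t, scale_factor=self.scale,
                                        mode="bicubic", align_corners=False)
```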
As described in Section 3.4, the warped frames are also used in an additional loss term that enforces higher temporal consistency in the results. Flow estimation Accurate dense flow estimation is crucial to the success of the proposed architecture. For this reason, we opt to use one of the best available flow estimation methods, FlowNet 2.0 [30]. We use the pre-trained model supplied by the authors [34] and run optical flow estimation on our whole dataset. FlowNet 2.0 is the successor of the original FlowNet architecture, which was the first approach that used convolutional neural networks to predict dense optical flow maps. The authors show that it is both faster and more accurate than its predecessor. Besides a more sophisticated training procedure, FlowNet 2.0 relies on an arrangement of stacked networks that capture large displacements in coarse flow estimates, which are then refined by the next network in a subsequent step. In a final step, the estimates are fused by a shallow fusion network. For details, we refer the reader to the original publication [30]. Fig. 2. Behavior of the proposed temporal-consistency loss. The sequence depicts a checkerboard pattern which moves to the right. In the first group of GT images, the warped images are exactly similar, and thus the loss is 0. In (a), the results are not temporally consistent, so the warped image is different from the current frame, which leads to a loss higher than 0. In the example in (b), the estimated patterns are temporally consistent despite not being the same as the GT images, so the loss is 0. Losses We train our model with three different loss terms, namely a pixel-wise mean squared error (MSE), an adversarial loss and a temporal-consistency loss. Mean Squared Error MSE is by far the most common loss in the super-resolution literature, as it is well-understood and easy to compute. It accurately captures sharp edges and contours, but it leads to over-smoothed and flat textures, as the reconstruction of high-frequency areas falls to the local mean rather than a realistic mode [14]. The pixel-wise MSE is defined as the squared Frobenius norm of the difference of two images:

L_E = ||X̂_t − X_t||_2^2    (1)

where X̂_t denotes the estimated image of the generator for frame t and X_t denotes the ground-truth HR frame t. Fig. 3. Unfolded recurrent generator G and discriminator D during training for 3 temporal steps. The output of the previous time step is fed into the generator for the next iteration. Note that the weights of G and D are shared across different time steps. Gradients of all losses during training pass through the whole unrolled configuration of network instances. The discriminator is applied independently for each time step. Adversarial Loss Generative Adversarial Networks (GANs) [35] and their characteristic adversarial training scheme have been a very active research field in recent years, defining a wide landscape of applications. In GANs, a generative model is obtained by simultaneously training an additional network. A generative model G (i.e. generator) that learns to produce samples close to the data distribution of the training set is trained along with a discriminative model D (i.e. discriminator) that estimates the probability that a given sample belongs to the training set rather than being generated by G.
The objective of G is to maximize the errors committed by D, whereas D is trained to minimize its own errors, leading to a two-player minimax game. As in previous single-image super-resolution work [14,17], the input to the generator G is not a random vector but an LR image (in our case, with an additional recurrent input), and thus the generator minimizes the following loss: $\mathcal{L}_A = -\log\!\left(D\!\left(G(Y_t \,\|\, \hat{X}^w_{t-1})\right)\right)$, (2) where the operator $\|$ denotes concatenation. The discriminator minimizes: $\mathcal{L}_D = -\log\!\left(D(X_t)\right) - \log\!\left(1 - D\!\left(G(Y_t \,\|\, \hat{X}^w_{t-1})\right)\right)$. (3) Temporal-consistency Loss Upscaling video sequences has the additional challenge of respecting the temporal consistency between adjacent frames, so that the estimated video does not exhibit unpleasant flickering artifacts. When minimizing only $\mathcal{L}_E$, such artifacts are less noticeable for two main reasons: (1) MSE minimization often converges to the mean in textured regions, which reduces flickering, and (2) the pixel-wise MSE with respect to the GT enforces, up to a certain point, the inter-frame consistency present in the training images. However, when an adversarial loss term is added, maintaining temporal consistency becomes more difficult. Adversarial training aims at generating samples that lie on the manifold of natural images, and thus it generates high-frequency content that will hardly be pixel-wise accurate to any ground-truth image. Generating video frames separately therefore introduces unpleasant flickering artifacts. In order to tackle this limitation, we introduce the temporal-consistency loss, which has already been used successfully in the style-transfer community [27,28,29]. The temporal-consistency loss is computed from two adjacent estimated frames (without the need of ground truth), by warping frame t−1 to t and computing the MSE between them. We show an example of the behavior of our proposed temporal-consistency loss in Figure 2. Let $W(X, O)$ denote an image warping operation and $O$ an optical flow field mapping t−1 to t. Our proposed loss reads: $\mathcal{L}_T = \lVert \hat{X}^w_{t-1} - \hat{X}_t \rVert_2^2$, for $\hat{X}^w_{t-1} = W(\hat{X}_{t-1}, O)$. (4) Results Training and parameters Our model falls into the category of recurrent neural networks, and thus must be trained via Backpropagation Through Time (BPTT) [36], which is a finite approximation of the infinite recurrent loop created in the model. In practice, BPTT unfolds the network into several temporal steps, where each of those steps is a copy of the network sharing the same parameters. The backpropagation algorithm is then used to obtain gradients of the loss with respect to the parameters. An example of the unfolded recurrent generator and discriminator can be seen in Figure 3. We select 10 temporal steps for our training approximation. Note that our discriminator classifies each image independently and is not recurrent; thus the different images produced by G can be stacked in the batch dimension (i.e., the discriminator does not have any connection between adjacent frames). Our training set is composed of 4k videos downloaded from youtube.com and downscaled to 720 × 1280, from which we extract around 300,000 HR crops of size 128 × 128 that serve as ground-truth images; these are further downsampled by a factor of s = 4 to obtain the LR input of size 32 × 32. The training dataset is thus composed of around 30,000 sequences of 10 frames each (i.e., around 30,000 data points for the recurrent network).
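For concreteness, a direct transcription of Eqs. (1)–(4) into PyTorch for a single time step might look as follows. The discriminator is assumed to output probabilities (it ends in a sigmoid), the loss weights are the ones reported for the $\mathcal{L}_{EAT}$ configuration in the next section, and the small epsilon is only added for numerical stability and is not part of the formulation.

```python
import torch
import torch.nn.functional as F

def generator_loss(x_hat_t, x_t, x_hat_prev_warped, disc,
                   w_adv=3e-3, w_temp=1e-2, eps=1e-8):
    """L_E + w_adv * L_A + w_temp * L_T for frame t (Eqs. 1, 2 and 4).

    x_hat_t           : generator output for frame t          (B, 3, H, W)
    x_t               : ground-truth HR frame t               (B, 3, H, W)
    x_hat_prev_warped : previous output warped onto frame t   (B, 3, H, W)
    disc              : discriminator D, returning probabilities in (0, 1)
    """
    l_e = F.mse_loss(x_hat_t, x_t)                      # Eq. (1), pixel-wise MSE
    l_a = -torch.log(disc(x_hat_t) + eps).mean()        # Eq. (2), adversarial term
    l_t = F.mse_loss(x_hat_prev_warped, x_hat_t)        # Eq. (4), temporal consistency
    return l_e + w_adv * l_a + w_temp * l_t

def discriminator_loss(x_t, x_hat_t, disc, eps=1e-8):
    """Eq. (3): real frames should be scored as real, generated frames as fake."""
    real = disc(x_t)
    fake = disc(x_hat_t.detach())   # standard practice: no generator gradients here
    return (-torch.log(real + eps) - torch.log(1.0 - fake + eps)).mean()
```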
We precompute the optical flows with FlowNet 2.0 and load them both during training and testing, as GPU memory becomes scarce, especially when unfolding the generator and discriminator. We compile a testing set, larger than previous testing sets in the literature, also downloaded from youtube.com, favoring sharp 4k content that is further downsampled to 720 × 1280 for the GT and 180 × 320 for the LR input. This dataset contains 12 sequences of diverse scenes (e.g., landscapes, wildlife, urban scenes) ranging from very little to fast motion. Each sequence is composed of roughly 100 to 150 frames, for a total of 1281 frames. We use a batch size of 8 sequences, i.e., in total each batch contains 8 × 10 = 80 training images. All models are pre-trained with $\mathcal{L}_E$ for about 100k training iterations and then trained with the adversarial and temporal losses for about 1.5M iterations. Training was performed on Nvidia Tesla P100 and V100 GPUs, both of which have 16 GB of memory. Evaluation Table 1. LPIPS scores (AlexNet architecture with linear calibration layer) for the 12 test sequences. Best performers in bold font, runners-up in blue. The best performer on average is our proposed model trained with $\mathcal{L}_{EA}$, followed closely by ENet. Models We include three loss configurations in our validation: (1) $\mathcal{L}_E$ is trained only with the MSE loss as a baseline, (2) $\mathcal{L}_{EA}$ is our adversarial model trained with $\mathcal{L}_E + 3 \times 10^{-3}\,\mathcal{L}_A$, and (3) $\mathcal{L}_{EAT}$ is our adversarial model with the temporal-consistency loss, $\mathcal{L}_E + 3 \times 10^{-3}\,\mathcal{L}_A + 10^{-2}\,\mathcal{L}_T$. We also include two other state-of-the-art models in our benchmarking: EnhanceNet (ENet) as a perceptual single-image super-resolution baseline [14] (code and pre-trained network weights obtained from the authors' website), which minimizes an additional loss term based on the Gram matrix of VGG activations; and our model without flow estimation or recurrent connections, which we denote in the tables by $\mathcal{L}^{SI}_A$. This last model is very similar to the network used in SRGAN by Ledig et al. [17]. Intra-frame quality Evaluating images produced by models trained with perceptual losses is still an open problem. Even though it is trivial for humans to judge the perceived similarity between two images, the underlying principles of human perception are still not well understood. Traditional metrics such as PSNR (based on MSE), Structural Similarity (SSIM) or the Information Fidelity Criterion (IFC) still rely on well-aligned, more or less pixel-wise accurate estimates, and minor artifacts in the images can cause large perturbations in these scores. In order to evaluate image samples from models that deviate from the MSE minimization scheme, other metrics need to be considered. Table 2. Temporal-consistency loss for adjacent frames (the initial frame is not included in the computation). Best performer in bold and runner-up in blue (omitting bicubic). The best performer on average is our proposed method trained with $\mathcal{L}_{EAT}$, followed by $\mathcal{L}_E$. The recent work of Zhang et al. [37] explores the capability of deep architectures to capture perceptual features that are meaningful for similarity assessment. In their exhaustive evaluation they show that deep features of different architectures outperform previous metrics by substantial margins and correlate very well with subjective human scores. They conclude that deep networks, regardless of the specific architecture, capture important perceptual features that are well aligned with those of the human visual system.
Consequently, they propose the Learned Perceptual Image Patch Similarity (LPIPS) metric. We evaluate our testing set with LPIPS using the AlexNet architecture with an additional linear calibration layer, as the authors propose in their manuscript. We show our LPIPS scores in Table 1. These scores are in line with the qualitative visual inspection in Figure 4: the samples obtained by ENet are, together with $\mathcal{L}_{EA}$, the most similar to the GT, with ENet producing slightly sharper images than our proposed method. $\mathcal{L}_{EAT}$ comes afterwards, as a more conservative generative network (i.e., closer to the one obtained with the MSE loss). Temporal Consistency To evaluate the temporal consistency of the estimated videos, we compute the temporal-consistency loss as described in Equation 4 and Fig. 2 between adjacent frames for all the methods in the benchmark. We show the results in Table 2. We note that all the recurrent configurations perform well in this metric, even when we do not minimize $\mathcal{L}_T$ directly (e.g., $\mathcal{L}_E$, $\mathcal{L}_{EA}$). In contrast, models that are not aware of the temporal dimension (such as ENet or $\mathcal{L}^{SI}_A$) obtain higher errors, validating that the recurrent network learns inter-frame relationships. Not considering the bicubic interpolation, which is very blurry, the best performer is $\mathcal{L}_{EAT}$, followed closely by $\mathcal{L}_E$. Our model $\mathcal{L}_{EA}$ also performs reasonably well, especially taking into consideration that it is the best performer in the quality scores shown in Table 1. Evaluating the temporal consistency over adjacent frames in a sequence where the ground-truth optical flow is not known poses several problems, as errors in the flow estimation directly affect this metric. Additionally, the bilinear resampling performed for the image warping is, when analyzed in the frequency domain, a low-pass filter that can blur high frequencies and thus increase the uncertainty of the measured error. In order to ensure the reliability of the temporal-consistency validation, we perform further testing on the MPI Sintel synthetic training sequences (which include ground-truth optical flow). This enables us to assess the impact of using estimated flows in the temporal-consistency metric. We show in Table 3 the temporal-consistency results on the 23 MPI Sintel training sequences, using both the ground-truth and the FlowNet 2.0 estimated optical flows for the warping in the metric. The difference between estimated and ground-truth flows is not significant, and the ranking of methods is similar: $\mathcal{L}_{EA}$ improves greatly over the non-recurrent SRGAN ($\mathcal{L}^{SI}_A$) and EnhanceNet, and $\mathcal{L}_{EAT}$ (with the temporal-consistency loss) further improves over $\mathcal{L}_{EA}$. Table 3. Temporal-consistency loss for adjacent frames on the MPI Sintel dataset, using ground-truth and estimated optical flows. Following the example of [21], we also show in Figure 5 temporal profiles for a qualitative evaluation of temporal consistency. A temporal profile is an image in which each row is a fixed line taken from consecutive frames, creating a two-dimensional visualization of the temporal evolution of that line. In this figure we can corroborate the objective assessment of Table 1 and Table 2. ENet produces very appealing sharp textures, to the point that it hallucinates frequencies not present in the GT image (i.e., over-sharpening). This is not necessarily unpleasant in static images, but temporal consistency then becomes very challenging, and some of those textures resemble noise in the temporal profile.
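As a pointer for reproducing the two quantitative measures, the sketch below computes LPIPS with the reference `lpips` package released by its authors (the paper does not state which implementation was used, so this tooling choice is an assumption) and extracts a temporal profile as used in Figure 5; the choice of scan line is arbitrary.

```python
# pip install lpips   (reference implementation released by the LPIPS authors)
import lpips
import torch

lpips_alex = lpips.LPIPS(net='alex')   # AlexNet features + linear calibration layer

def sequence_lpips(sr, gt):
    """Mean LPIPS over a sequence; `sr` and `gt` are (T, 3, H, W) tensors in [0, 1]."""
    with torch.no_grad():
        d = lpips_alex(sr * 2 - 1, gt * 2 - 1)   # lpips expects inputs in [-1, 1]
    return d.mean().item()

def temporal_profile(frames, row):
    """Stack one fixed scan line across time, as in the temporal profiles of Figure 5.

    frames: (T, 3, H, W) tensor; row: index of the tracked line.
    Returns a (T, 3, W) tensor; flickering shows up as vertical noise when visualized.
    """
    return frames[:, :, row, :]
```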
Our model $\mathcal{L}_{EA}$ is hardly distinguishable from the GT image, generating high-quality, plausible textures while showing a very clean temporal profile. $\mathcal{L}_{EAT}$ hallucinates fewer textures than $\mathcal{L}_{EA}$, but in exchange shows an even better temporal behavior (i.e., less flickering). Conclusions We presented a novel generative adversarial model for video upscaling. In contrast to previous approaches to video super-resolution based on MSE minimization, we used adversarial loss functions in order to recover videos with photorealistic textures. To the best of our knowledge, this is the first work that applies perceptual loss functions to the task of video super-resolution. In order to tackle the lack of temporal consistency caused by perceptual loss functions, we propose two synergistic contributions: (1) a recurrent generator and discriminator model in which the output of frame t−1 is passed on to the next iteration in combination with the input of frame t, enabling temporal cues during learning and inference; models trained with adversarial and MSE losses show improved temporal consistency and competitive quality compared to SISR models. (2) Additionally, we introduce a temporal-consistency loss for video super-resolution, in which deviations from the previous warped frame are penalized when estimating a given frame. We conducted an evaluation by means of the LPIPS and temporal-consistency metrics on a testing dataset of more than a thousand 4k video frames, obtaining promising results that open new possibilities within video upscaling.
3,638
1807.07930
2942641522
With the advent of perceptual loss functions, new possibilities in super-resolution have emerged, and we currently have models that successfully generate near-photorealistic high-resolution images from their low-resolution observations. Up to now, however, such approaches have been exclusively limited to single image super-resolution. The application of perceptual loss functions to video processing still entails several challenges, mostly related to the lack of temporal consistency of the generated images, i.e., flickering artifacts. In this work, we present a novel adversarial recurrent network for video upscaling that is able to produce realistic textures in a temporally consistent way. The proposed architecture naturally leverages information from previous frames due to its recurrent architecture, i.e., the input to the generator is composed of the low-resolution image and, additionally, the warped output of the network at the previous step. Together with a video discriminator, we also propose additional loss functions to further reinforce temporal consistency in the generated sequences. The experimental validation of our algorithm shows the effectiveness of our approach, which obtains images with high perceptual quality and improved temporal consistency.
Since maximizing PSNR generally leads to blurry images @cite_31, another line of research has investigated alternative loss functions. @cite_16 and Dosovitskiy and Brox @cite_24 replace the mean squared error (MSE) in image space with an MSE measurement in the feature space of large pre-trained image recognition networks. @cite_27 extend this idea by adding an adversarial loss, and @cite_31 combine perceptual, adversarial and texture-synthesis loss terms to produce sharper images with hallucinated details. Although these methods produce detailed images, they typically contain small artifacts that are visible upon close inspection. While such artifacts are bearable in images, they lead to flickering in super-resolved videos. For this reason, applying these perceptual loss functions to the problem of video SR is more involved.
{ "abstract": [ "We propose a class of loss functions, which we call deep perceptual similarity metrics (DeePSiM), allowing to generate sharp high resolution images from compressed abstract representations. Instead of computing distances in the image space, we compute distances between image features extracted by deep neural networks. This metric reflects perceptual similarity of images much better and, thus, leads to better results. We demonstrate two examples of use cases of the proposed loss: (1) networks that invert the AlexNet convolutional network; (2) a modified version of a variational autoencoder that generates realistic high-resolution random images.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack highfrequency textures and do not look natural despite yielding high PSNR values.,,We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixelaccurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. 
Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results." ], "cite_N": [ "@cite_24", "@cite_27", "@cite_31", "@cite_16" ], "mid": [ "2963174698", "2963470893", "2963037581", "2331128040" ] }
Photorealistic Video Super Resolution
Amongst classical video SR methods, @cite_20 have achieved notable image quality using Bayesian optimization methods, but the computational complexity of the approach prohibits its use in real-time applications. Neural-network-based approaches include @cite_25, who use a bidirectional recurrent architecture with comparatively shallow networks and without explicit motion compensation. More recent neural-network-based methods operate on a sliding window of input frames. The main idea of @cite_34 is to align and warp neighboring frames to the current frame before all images are fed into an SR network which combines details from all frames into a single image. Inspired by this idea, @cite_29 take a similar approach but employ a flow estimation network for the frame alignment. Similarly, @cite_14 use a sliding-window approach but combine the frame alignment and SR steps. @cite_30 also propose a method which operates on a stack of video frames: they estimate the motion in the frames and subsequently map them into high-resolution space before another SR network combines the information from all frames. @cite_41 operate on varying numbers of frames at the same time to generate different high-resolution images and then condense the results into a single image in a final step.
{ "abstract": [ "", "Learning approaches have shown great success in the task of super-resolving an image given a low resolution input. Video super-resolution aims for exploiting additionally the information from multiple images. Typically, the images are related via optical flow and consecutive image warping. In this paper, we provide an end-to-end video super-resolution network that, in contrast to previous works, includes the estimation of optical flow in the overall network architecture. We analyze the usage of optical flow for video super-resolution and find that common off-the-shelf image warping does not allow video super-resolution to benefit much from optical flow. We rather propose an operation for motion compensation that performs warping from low to high resolution directly. We show that with this network configuration, video super-resolution can benefit from optical flow and we obtain state-of-the-art results on the popular test sets. We also show that the processing of whole images rather than independent patches is responsible for a large increase in accuracy.", "Video super-resolution (SR) aims to generate a highresolution (HR) frame from multiple low-resolution (LR) frames in a local temporal window. The inter-frame temporal relation is as crucial as the intra-frame spatial relation for tackling this problem. However, how to utilize temporal information efficiently and effectively remains challenging since complex motion is difficult to model and can introduce adverse effects if not handled properly. We address this problem from two aspects. First, we propose a temporal adaptive neural network that can adaptively determine the optimal scale of temporal dependency. Filters on various temporal scales are applied to the input LR sequence before their responses are adaptively aggregated. Second, we reduce the complexity of motion between neighboring frames using a spatial alignment network which is much more robust and efficient than competing alignment methods and can be jointly trained with the temporal adaptive network in an end-to-end manner. Our proposed models with learned temporal dynamics are systematically evaluated on public video datasets and achieve state-of-the-art SR results compared with other recent video SR approaches. Both of the temporal adaptation and the spatial alignment modules are demonstrated to considerably improve SR quality over their plain counterparts.", "Convolutional neural networks have enabled accurate image super-resolution in real-time. However, recent attempts to benefit from temporal correlations in video super-resolution have been limited to naive or inefficient architectures. In this paper, we introduce spatio-temporal sub-pixel convolution networks that effectively exploit temporal redundancies and improve reconstruction accuracy while maintaining real-time speed. Specifically, we discuss the use of early fusion, slow fusion and 3D convolutions for the joint processing of multiple consecutive video frames. We also propose a novel joint motion compensation and video super-resolution algorithm that is orders of magnitude more efficient than competing methods, relying on a fast multi-resolution spatial transformer module that is end-to-end trainable. These contributions provide both higher accuracy and temporally more consistent videos, which we confirm qualitatively and quantitatively. 
Relative to single-frame models, spatio-temporal networks can either reduce the computational cost by 30 whilst maintaining the same quality or provide a 0.2dB gain for a similar computational cost. Results on publicly available datasets demonstrate that the proposed algorithms surpass current state-of-the-art performance in both accuracy and efficiency.", "Convolutional neural networks (CNN) are a special type of deep neural networks (DNN). They have so far been successfully applied to image super-resolution (SR) as well as other image restoration tasks. In this paper, we consider the problem of video super-resolution. We propose a CNN that is trained on both the spatial and the temporal dimensions of videos to enhance their spatial resolution. Consecutive frames are motion compensated and used as input to a CNN that provides super-resolved video frames as output. We investigate different options of combining the video frames within one CNN architecture. While large image databases are available to train deep neural networks, it is more challenging to create a large video database of sufficient quality to train neural nets for video restoration. We show that by using images to pretrain our model, a relatively small video database is sufficient for the training of our model to achieve and even improve upon the current state-of-the-art. We compare our proposed approach to current video as well as image SR algorithms.", "", "Although multi-frame super resolution has been extensively studied in past decades, super resolving real-world video sequences still remains challenging. In existing systems, either the motion models are oversimplified, or important factors such as blur kernel and noise level are assumed to be known. Such models cannot deal with the scene and imaging conditions that vary from one sequence to another. In this paper, we propose a Bayesian approach to adaptive video super resolution via simultaneously estimating underlying motion, blur kernel and noise level while reconstructing the original high-res frames. As a result, our system not only produces very promising super resolution results that outperform the state of the art, but also adapts to a variety of noise levels and blur kernels. Theoretical analysis of the relationship between blur kernel, noise level and frequency-wise reconstruction rate is also provided, consistent with our experimental results." ], "cite_N": [ "@cite_30", "@cite_14", "@cite_41", "@cite_29", "@cite_34", "@cite_25", "@cite_20" ], "mid": [ "", "2726582817", "2781335552", "2951570301", "2320725294", "", "1981990039" ] }
Photorealistic Video Super Resolution
Advances in convolutional neural networks have revolutionized computer vision and the popular field of super-resolution has been no exception to this rule, as in recent years numerous publications have made great strides towards better reconstructions of high-resolution pictures. A most promising new trend in super-resolution has emerged as the application of perceptual loss functions rather than the previously ubiquitous optimization of the mean squared error. This paradigm shift has enabled the leap from images with blurred textures to near-photorealistic results in terms of perceived image quality using deep neural networks. Notwithstanding the recent success in single image super-resolution, perceptual losses have not yet been successfully utilized in the video superresolution domain as perceptual losses typically introduce artifacts that, while being undisturbing in the spatial domain, emerge as spurious flickering artifacts in videos. In this paper we propose a neural network model that is able to produce sharp videos with fine details while improving its behaviour in terms of temporal consistency. The contributions of the paper are: (1) We propose a recurrent arXiv:1807.07930v1 [cs.CV] 20 Jul 2018 generative adversarial model that uses optical flow in order to exploit temporal cues across frames, and (2) we introduce a temporal-consistency loss term that reinforces coherent consecutive frames in the temporal domain. Proposed method Notation and problem statement Video super-resolution aims at upscaling a given LR image sequence {Y t } by a factor of s, so that the estimated sequence X t resembles the original sequence {X t } by some metric. We denote images in the low-resolution domain by Y ∈ [0, 1] h×w×3 , and ground-truth images in the high-resolution domain by X ∈ [0, 1] sh×sw×3 for a given magnification factor s. An estimate of a high-resolution image X is denoted byX. We discern within a temporal sequence by a subindex to the image variable, e.g., Y t−1 , Y t . We use a superscript w, e.g.X w t−1 , to denote an imageX that has been warped from its time step t − 1 to the following frame X t . The proposed architecture is summarized in Figure 1 and will be explained in detail in the following sections. We define an architecture that naturally leverages not only single image but also inter-frame details present in video sequences by Generator Estimated images Discriminator ? Fig. 1. Network architectures for generator and discriminator. The previous output frame is warped onto the current frame and mapped to LR with the space to depth transformation before being concatenated to the current LR input frame. The generator follows a ResNet architecture with skip connections around the residual blocks and around the whole network. The discriminator follows the common design pattern of decreasing the spatial dimension of the images while increasing the number of channels after each block. using a recurrent neural network architecture. The previous output frame is warped according to the optical flow estimate given by FlowNet 2.0 [30]. By including a discriminator that is only needed at the training stage, we further enable adversarial training which has proved to be a powerful tool for generating sharper and more realistic images [14,17]. To the best of our knowledge, the use of perceptual loss functions (i.e. adversarial training in recurrent architectures) for video super-resolution is novel. In a recently published work, Sajjadi et al. 
[25] propose a similar recurrent architecture for video super-resolution, however, they do not utilize a perceptual objective and in contrast to our method, they do not apply an explicit loss term that enforces temporal consistency. Recurrent generator and discriminator Following recent super-resolution state of the art methods for both classical and perceptual loss functions [9,14,17,31], we use deep convolutional neural networks with residual connections. This class of networks facilitates learning the identity mapping and leads to better gradient flow through deep networks. Specifically, we adopt a ResNet architecture for our recurrent generator that is similar to the ones introduced by [14,17] with some modifications. Each of the residual blocks is composed by a convolution, a Rectified Linear Unit (ReLU) activation and another convolutional layer following the activation. Previous approaches have applied batch normalization layers in the residual blocks [17], but we choose not to add batch normalization to the generator due to the comparably small batch size, to avoid potential color shift problems, and also taking into account recent evidence hinting that they might be problematic for generative image models [32]. In order to further accelerate and stabilize training, we create an additional skip connection over the whole generator. This means that the network only needs to learn the residual between the bicubic interpolation of the input and the high-resolution ground-truth image rather than having to pass through all low frequencies as well [7,14]. We perform most of our convolutions in low-resolution space for a higher receptive field and higher efficiency. Since the input image has a lower dimension than the output image, the generator needs to have a module that increases the resolution towards the end. There are several ways to do so within a neural network, e.g., transposed convolution layers, interpolation, or depth to space units (pixelshuffle). Following Shi et al. [4], we reshuffle the activations from the channel dimension to the height and width dimensions so that the effective spatial resolution is increased (and, consequently, this operation decreases the number of channels). The upscaling unit is divided into two stages with an intermediate magnification step r (e.g. two times ×2 for a magnification factor of ×4). Each of the upscaling stages is composed of a convolutional layer that increments the number of channels by a factor of r 2 , a depth to space operation and a ReLU activation. Our discriminator follows common design choices and is composed by strided convolutions, batch normalization and leaky ReLU activations that progressively decrease the spatial resolution of the activations and increase the channel count [14,17,33]. The last stage of the discriminator is composed of two dense layers and a sigmoid activation function. In contract to general generative adversarial networks, the input to the proposed generative network is not a random variable but it is composed of the low-resolution image Y t (corresponding to the current frame t) and, additionally, the warped output of the network at the previous stepX w t−1 . The difference in resolution of these two images is adapted through a space to channel layer which decreases the spatial resolution ofX w t−1 without loss of data. For warping the previous image, a dense optical flow field is estimated with a flow estimation network as described in the following section. 
Flow estimation

Accurate dense flow estimation is crucial to the success of the proposed architecture. For this reason, we opt to use one of the best available flow estimation methods, FlowNet 2.0 [30]. We use the pre-trained model supplied by the authors [34] and run optical flow estimation on our whole dataset. FlowNet 2.0 is the successor of the original FlowNet architecture, which was the first approach that used convolutional neural networks to predict dense optical flow maps. The authors show that it is both faster and more accurate than its predecessor. Besides a more sophisticated training procedure, FlowNet 2.0 relies on an arrangement of stacked networks that capture large displacements in coarse flow estimates, which are then refined by the next network in a subsequent step. In a final step, the estimates are fused by a shallow fusion network. For details, we refer the reader to the original publication [30].

Fig. 2. Behavior of the proposed temporal-consistency loss. The sequence depicts a checkerboard pattern which moves to the right. In the first group of GT images, the warped images are identical, and thus the loss is 0. In (a), the results are not temporally consistent, so the warped image is different from the current frame, which leads to a loss higher than 0. In the example in (b), the estimated patterns are temporally consistent despite not being the same as the GT images, so the loss is 0.

Losses

We train our model with three different loss terms, namely a pixel-wise mean squared error (MSE), an adversarial loss and a temporal-consistency loss.

Mean Squared Error. MSE is by far the most common loss in the super-resolution literature, as it is well understood and easy to compute. It accurately captures sharp edges and contours, but it leads to over-smooth and flat textures, as the reconstruction of high-frequency areas falls to the local mean rather than a realistic mode [14]. The pixel-wise MSE is defined as the squared Frobenius norm of the difference of two images:

$$\mathcal{L}_E = \|\hat{X}_t - X_t\|_2^2, \tag{1}$$

where $\hat{X}_t$ denotes the estimated image of the generator for frame $t$ and $X_t$ denotes the ground-truth HR frame $t$.

Fig. 3. Unfolded recurrent generator G and discriminator D during training for 3 temporal steps. The output of the previous time step is fed into the generator for the next iteration. Note that the weights of G and D are shared across different time steps. Gradients of all losses during training pass through the whole unrolled configuration of network instances. The discriminator is applied independently for each time step.
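For completeness, a small sketch of Eq. (1). Whether the squared Frobenius norm is summed or averaged over pixels is an implementation choice not stated here (the library `mse_loss` averages), so both reductions are shown.

```python
import torch
import torch.nn.functional as F

def mse_loss_eq1(x_hat, x, reduction="mean"):
    """Pixel-wise MSE of Eq. (1). reduction='sum' is the literal squared Frobenius
    norm; the mean keeps the loss magnitude independent of the crop size."""
    diff = (x_hat - x) ** 2
    return diff.sum() if reduction == "sum" else diff.mean()

# Sanity check against the library implementation (illustrative shapes only).
x_hat = torch.rand(1, 3, 128, 128)
x = torch.rand(1, 3, 128, 128)
assert torch.allclose(mse_loss_eq1(x_hat, x), F.mse_loss(x_hat, x))
```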
Adversarial Loss. Generative Adversarial Networks (GANs) [35] and their characteristic adversarial training scheme have been a very active research field in recent years, defining a wide landscape of applications. In GANs, a generative model is obtained by simultaneously training an additional network. A generative model G (i.e., the generator), which learns to produce samples close to the data distribution of the training set, is trained along with a discriminative model D (i.e., the discriminator), which estimates the probability of a given sample belonging to the training set or being generated by G. The objective of G is to maximize the errors committed by D, whereas the training of D should minimize its own errors, leading to a two-player minimax game. Similar to previous single-image super-resolution work [14,17], the input to the generator G is not a random vector but an LR image (in our case, with an additional recurrent input), and thus the generator minimizes the following loss:

$$\mathcal{L}_A = -\log\big(D(G(Y_t \,\|\, \hat{X}^w_{t-1}))\big), \tag{2}$$

where the operator $\|$ denotes concatenation. The discriminator minimizes:

$$\mathcal{L}_D = -\log\big(D(X_t)\big) - \log\big(1 - D(G(Y_t \,\|\, \hat{X}^w_{t-1}))\big). \tag{3}$$

Temporal-consistency Loss. Upscaling video sequences has the additional challenge of respecting the original temporal consistency between adjacent frames so that the estimated video does not present unpleasing flickering artifacts. When minimizing only $\mathcal{L}_E$, such artifacts are less noticeable for two main reasons: (1) MSE minimization often converges to the mean in textured regions, and thus flickering is reduced, and (2) the pixel-wise MSE with respect to the GT is, up to a certain point, enforcing the inter-frame consistency present in the training images. However, when adding an adversarial loss term, the difficulty of maintaining temporal consistency increases. Adversarial training aims at generating samples that lie in the manifold of natural images, and thus it generates high-frequency content that will hardly be pixel-wise accurate to any ground-truth image. Generating video frames separately thus introduces unpleasing flickering artifacts.

In order to tackle the aforementioned limitation, we introduce the temporal-consistency loss, which has already been successfully used in the style transfer community [27,28,29]. The temporal-consistency loss is computed from two adjacent estimated frames (without need of ground-truth), by warping the frame $t-1$ to $t$ and computing the MSE between them. We show an example of the behavior of our proposed temporal-consistency loss in Figure 2. Let $W(X, O)$ denote an image warping operation and $O$ an optical flow field mapping $t-1$ to $t$. Our proposed loss reads:

$$\mathcal{L}_T = \|\hat{X}^w_{t-1} - \hat{X}_t\|_2^2, \quad \text{for } \hat{X}^w_{t-1} = W(\hat{X}_{t-1}, O). \tag{4}$$

Results

Training and parameters

Our model falls in the category of recurrent neural networks, and thus must be trained via Backpropagation Through Time (BPTT) [36], which is a finite approximation of the infinite recurrent loop created in the model. In practice, BPTT unfolds the network into several temporal steps, where each of those steps is a copy of the network sharing the same parameters. The backpropagation algorithm is then used to obtain gradients of the loss with respect to the parameters. An example of an unfolded recurrent generator and discriminator can be visualized in Figure 3. We select 10 temporal steps for our training approximation. Note that our discriminator classifies each image independently and is not recurrent; thus the different images produced by G can be stacked in the batch dimension (i.e., the discriminator does not have any connection between adjacent frames).

Our training set is composed of 4k videos downloaded from youtube.com and downscaled to 720 × 1280, from which we extract around 300,000 HR crops of size 128 × 128 that serve as ground-truth images, and then further downsample them by a factor of s = 4 to obtain the LR input of size 32 × 32. The training dataset is thus composed of around 30,000 sequences of 10 frames each (i.e., around 30,000 data points for the recurrent network).
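Putting Eqs. (1)-(4) together, the following is a hedged sketch of one unrolled training step (BPTT over a 10-frame sequence), reusing the `warp` helper sketched earlier. The loss weights $3 \times 10^{-3}$ and $10^{-2}$ are the ones reported for the $\mathcal{L}_{EA}$/$\mathcal{L}_{EAT}$ configurations in the evaluation section; the bicubic bootstrap of the first step, the small epsilon for numerical stability and the generic `G`/`D` call signatures are our own assumptions.

```python
import torch
import torch.nn.functional as F

def training_step(G, D, lr_seq, gt_seq, flows, opt_g, opt_d,
                  lambda_a=3e-3, lambda_t=1e-2, s=4, eps=1e-8):
    """One unrolled BPTT step. lr_seq: list of LR frames (N,3,h,w); gt_seq: list of
    HR frames (N,3,sh,sw); flows[t]: HR flow aligning frame t-1 to t (flows[0] may
    be a zero field). `warp` is the helper sketched after Section 3.1."""
    estimates, loss_g = [], 0.0
    # Bootstrap the recurrence with the bicubic upscaling of the first LR frame.
    x_prev = F.interpolate(lr_seq[0], scale_factor=s, mode="bicubic",
                           align_corners=False)
    for t, (y_t, x_t) in enumerate(zip(lr_seq, gt_seq)):
        x_prev_w = warp(x_prev, flows[t])                          # align t-1 estimate to t
        gen_in = torch.cat((y_t, F.pixel_unshuffle(x_prev_w, s)), dim=1)
        x_hat = G(gen_in, y_t)
        loss_e = F.mse_loss(x_hat, x_t)                            # Eq. (1), mean-reduced
        loss_a = -torch.log(D(x_hat) + eps).mean()                 # Eq. (2)
        loss_g = loss_g + loss_e + lambda_a * loss_a
        if t > 0:                                                  # skip the initial frame
            loss_g = loss_g + lambda_t * F.mse_loss(x_prev_w, x_hat)  # Eq. (4)
        estimates.append(x_hat)
        x_prev = x_hat                                             # recurrence, kept in the graph
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    # Discriminator update, Eq. (3), stacking frames in the batch dimension.
    real = torch.cat(gt_seq, dim=0)
    fake = torch.cat([e.detach() for e in estimates], dim=0)
    loss_d = -(torch.log(D(real) + eps).mean()
               + torch.log(1.0 - D(fake) + eps).mean())
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return loss_g.item(), loss_d.item()
```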
We precompute the optical flows with FlowNet 2.0 and load them during both training and testing, as GPU memory becomes scarce, especially when unfolding the generator and discriminator. We compile a testing set, larger than previous testing sets in the literature, also downloaded from youtube.com, favoring sharp 4k content that is further downsampled to 720 × 1280 for the GT and 180 × 320 for the LR input. This dataset contains 12 sequences of diverse nature scenes (e.g., landscapes, natural life, urban scenes) ranging from very little motion to fast motion. Each sequence is composed of roughly 100 to 150 frames, which totals 1281 frames. We use a batch size of 8 sequences, i.e., in total each batch contains 8 × 10 = 80 training images. All models are pre-trained with $\mathcal{L}_E$ for about 100k training iterations and then trained with the adversarial and temporal losses for about 1.5M iterations. Training was performed on Nvidia Tesla P100 and V100 GPUs, both of which have 16 GB of memory.

Evaluation

Table 1. LPIPS scores (AlexNet architecture with linear calibration layer) for the 12 sequences. Best performers in bold font, and runners-up in blue color. The best performer on average is our proposed model trained with $\mathcal{L}_{EA}$, followed closely by ENet.

Models. We include three loss configurations in our validation: (1) $\mathcal{L}_E$ is trained only with the MSE loss as a baseline, (2) $\mathcal{L}_{EA}$ is our adversarial model trained with $\mathcal{L}_E + 3 \times 10^{-3}\,\mathcal{L}_A$, and (3) $\mathcal{L}_{EAT}$ is our adversarial model with temporal-consistency loss, $\mathcal{L}_E + 3 \times 10^{-3}\,\mathcal{L}_A + 10^{-2}\,\mathcal{L}_T$. We also include two other state-of-the-art models in our benchmarking: EnhanceNet (ENet) as a perceptual single-image super-resolution baseline [14] (code and pre-trained network weights obtained from the authors' website), which minimizes an additional loss term based on the Gram matrix of VGG activations; and lastly our model without flow estimation or recurrent connections, which we denote in the tables by $\mathcal{L}^{SI}_A$. This last model is very similar to the network used in SRGAN by Ledig et al. [17].

Intra-frame quality. Evaluating images produced by models trained with perceptual losses is still an open problem. Even though it is trivial for humans to evaluate the perceived similarity between two images, the underlying principles of human perception are still not well understood. Traditional metrics such as PSNR (based on MSE), Structural Similarity (SSIM) or the Information Fidelity Criterion (IFC) still rely on well-aligned, more or less pixel-wise accurate estimates, and minor artifacts in the images can cause great perturbations in them. In order to evaluate image samples from models that deviate from the MSE minimization scheme, other metrics need to be considered.

Table 2. Temporal-consistency loss for adjacent frames (the initial frame is not included in the computation). Best performer in bold and runner-up in blue color (omitting bicubic). The best performer on average is our proposed method trained with $\mathcal{L}_{EAT}$, followed by $\mathcal{L}_E$.

The recent work of Zhang et al. [37] explores the capability of deep architectures to capture perceptual features that are meaningful for similarity assessment. In their exhaustive evaluation they show how deep features of different architectures outperform previous metrics by substantial margins and correlate very well with subjective human scores. They conclude that deep networks, regardless of the specific architecture, capture important perceptual features that are well-aligned with those of the human visual system. Consequently, they propose the Learned Perceptual Image Patch Similarity metric (LPIPS). We evaluate our testing set with LPIPS using the AlexNet architecture with an additional linear calibration layer, as the authors propose in their manuscript.
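A sketch of how such an evaluation could be scripted with the reference `lpips` package released by Zhang et al. [37] is given below; the per-sequence averaging and the [0, 1] to [-1, 1] rescaling convention are our assumptions about the evaluation protocol, not statements from the paper.

```python
import torch
import lpips  # pip install lpips -- reference implementation of Zhang et al. [37]

# net='alex' selects AlexNet features with the learned linear calibration layer,
# matching the configuration reported for Table 1.
lpips_fn = lpips.LPIPS(net='alex')

def sequence_lpips(estimates, ground_truths):
    """Average LPIPS over one sequence; frames are (3,H,W) RGB tensors in [0, 1]."""
    scores = []
    for x_hat, x in zip(estimates, ground_truths):
        # LPIPS expects inputs scaled to [-1, 1].
        d = lpips_fn(x_hat.unsqueeze(0) * 2 - 1, x.unsqueeze(0) * 2 - 1)
        scores.append(d.item())
    return sum(scores) / len(scores)
```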
We show our LPIPS scores in Table 1. These scores are in line with the qualitative visual inspection in Figure 4: the samples obtained by ENet are, together with $\mathcal{L}_{EA}$, the most similar to the GT, with ENet producing slightly sharper images than our proposed method. $\mathcal{L}_{EAT}$ comes afterwards, as a more conservative generative network (i.e., closer to the one obtained with the MSE loss).

Temporal consistency. To evaluate the temporal consistency of the estimated videos, we compute the temporal-consistency loss as described in Equation 4 and Fig. 2 between adjacent frames for all the methods in the benchmark. We show the results in Table 2. We note that all the recurrent configurations perform well in this metric, even when we do not minimize $\mathcal{L}_T$ directly (e.g., $\mathcal{L}_E$, $\mathcal{L}_{EA}$). In contrast, models that are not aware of the temporal dimension (such as ENet or $\mathcal{L}^{SI}_A$) obtain higher errors, validating that the recurrent network learns inter-frame relationships. Not considering the bicubic interpolation, which is very blurry, the best performer is $\mathcal{L}_{EAT}$, followed closely by $\mathcal{L}_E$. Our model $\mathcal{L}_{EA}$ indeed performs reasonably well, especially taking into consideration that it is the best performer in the quality scores shown in Table 1.

Evaluating the temporal consistency over adjacent frames in a sequence where the ground-truth optical flow is not known poses several problems, as errors present in the flow estimation directly affect this metric. Additionally, the bilinear resampling performed for the image warping is, when analyzed in the frequency domain, a low-pass filter that can potentially blur high frequencies and thus increase the uncertainty in the measured error. In order to ensure the reliability of the temporal-consistency validation, we perform further testing with the MPI Sintel synthetic training sequences (which include ground-truth optical flow). This enables us to assess the impact of using estimated flows in the temporal-consistency metric. We show in Table 3 the results in terms of temporal consistency on the 23 MPI Sintel training sequences, using both the GT and the FlowNet 2.0 estimated optical flows for the warping in the metric. The difference between estimated and GT flows is not significant, and the relationship among methods is similar: $\mathcal{L}_{EA}$ improves greatly over the non-recurrent SRGAN $\mathcal{L}^{SI}_A$ or EnhanceNet, and $\mathcal{L}_{EAT}$ (with temporal-consistency loss) further improves over $\mathcal{L}_{EA}$.

Table 3. Temporal-consistency loss for adjacent frames on the MPI Sintel dataset using ground-truth and estimated optical flows.

Following the example of [21], we also show in Figure 5 temporal profiles for a qualitative evaluation of temporal consistency. A temporal profile shows an image where each row is a fixed line over a set of consecutive time frames, creating a 2-dimensional visualization of the temporal evolution of that line. In this figure, we can corroborate the objective assessment performed in Table 1 and Table 2. ENet produces very appealing sharp textures, to the point that it hallucinates frequencies not present in the GT image (i.e., over-sharpening). This is not necessarily unpleasing in static images, but temporal consistency then becomes very challenging, and some of those textures resemble noise in the temporal profile.
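Both evaluation tools of this section are straightforward to reproduce; below is a small NumPy sketch of a temporal profile (a fixed image row stacked over time, as in Figure 5) and of the temporal-consistency metric of Eq. (4) computed from already-warped previous estimates. The function names and list conventions are ours.

```python
import numpy as np

def temporal_profile(frames, row):
    """Stack a fixed image row across consecutive frames into a (T, W, 3) image
    whose vertical axis is time. Flickering shows up as noisy vertical streaks;
    temporally consistent results give smooth traces. frames: list of HxWx3 arrays."""
    return np.stack([f[row] for f in frames], axis=0)

def temporal_consistency(estimates, warped_previous):
    """Average per-frame MSE between each estimate and the previous estimate warped
    onto it (Eq. 4). Assumes warped_previous[t] is estimate t-1 aligned to frame t;
    the initial frame is skipped, as in Table 2."""
    errors = [np.mean((e - w) ** 2)
              for e, w in zip(estimates[1:], warped_previous[1:])]
    return float(np.mean(errors))
```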
Our model $\mathcal{L}_{EA}$ is hardly distinguishable from the GT image, generating high-quality plausible textures while showing a very clean temporal profile. $\mathcal{L}_{EAT}$ has fewer hallucinated textures than $\mathcal{L}_{EA}$, but on the other hand we also see an improved temporal behavior (i.e., less flickering).

Conclusions

We presented a novel generative adversarial model for video upscaling. In contrast to previous approaches to video super-resolution based on MSE minimization, we use adversarial loss functions in order to recover videos with photorealistic textures. To the best of our knowledge, this is the first work that applies perceptual loss functions to the task of video super-resolution. In order to tackle the lack of temporal consistency caused by perceptual loss functions, we propose two synergetic contributions: (1) a recurrent generator and discriminator model where the output of frame t-1 is passed on to the next iteration in combination with the input of frame t, enabling temporal cues during learning and inference; models trained with adversarial and MSE losses show improved behavior in terms of temporal consistency and competitive quality when compared to SISR models. (2) Additionally, we introduce the temporal-consistency loss to video super-resolution, in which deviations from the previous warped frame are penalized when estimating a given frame. We conducted an evaluation by means of LPIPS and the temporal-consistency loss on a testing dataset of more than a thousand 4k video frames, obtaining promising results that open new possibilities within video upscaling.
3,638
1807.07930
2942641522
With the advent of perceptual loss functions, new possibilities in super-resolution have emerged, and we currently have models that successfully generate near-photorealistic high-resolution images from their low-resolution observations. Up to now, however, such approaches have been exclusively limited to single image super-resolution. The application of perceptual loss functions on video processing still entails several challenges, mostly related to the lack of temporal consistency of the generated images, i.e., flickering artifacts. In this work, we present a novel adversarial recurrent network for video upscaling that is able to produce realistic textures in a temporally consistent way. The proposed architecture naturally leverages information from previous frames due to its recurrent architecture, i.e. the input to the generator is composed of the low-resolution image and, additionally, the warped output of the network at the previous step. Together with a video discriminator, we also propose additional loss functions to further reinforce temporal consistency in the generated sequences. The experimental validation of our algorithm shows the effectiveness of our approach which obtains images with high perceptual quality and improved temporal consistency.
For generative video processing methods, temporal consistency of the output is crucial. Since most recent methods operate on a sliding window @cite_29 @cite_30 @cite_41 @cite_14 , it is hard to optimize the networks to produce temporally consistent results as no information of the previously super-resolved frame is directly included in the next step. To accommodate for this, @cite_32 use a frame-recurrent approach where the estimated high-resolution frame of the previous step is fed into the network for the following step. This encourages more temporally consistent results, however the authors do not explicitly employ a loss term for the temporal consistency of the output.
{ "abstract": [ "", "Learning approaches have shown great success in the task of super-resolving an image given a low resolution input. Video super-resolution aims for exploiting additionally the information from multiple images. Typically, the images are related via optical flow and consecutive image warping. In this paper, we provide an end-to-end video super-resolution network that, in contrast to previous works, includes the estimation of optical flow in the overall network architecture. We analyze the usage of optical flow for video super-resolution and find that common off-the-shelf image warping does not allow video super-resolution to benefit much from optical flow. We rather propose an operation for motion compensation that performs warping from low to high resolution directly. We show that with this network configuration, video super-resolution can benefit from optical flow and we obtain state-of-the-art results on the popular test sets. We also show that the processing of whole images rather than independent patches is responsible for a large increase in accuracy.", "Video super-resolution (SR) aims to generate a highresolution (HR) frame from multiple low-resolution (LR) frames in a local temporal window. The inter-frame temporal relation is as crucial as the intra-frame spatial relation for tackling this problem. However, how to utilize temporal information efficiently and effectively remains challenging since complex motion is difficult to model and can introduce adverse effects if not handled properly. We address this problem from two aspects. First, we propose a temporal adaptive neural network that can adaptively determine the optimal scale of temporal dependency. Filters on various temporal scales are applied to the input LR sequence before their responses are adaptively aggregated. Second, we reduce the complexity of motion between neighboring frames using a spatial alignment network which is much more robust and efficient than competing alignment methods and can be jointly trained with the temporal adaptive network in an end-to-end manner. Our proposed models with learned temporal dynamics are systematically evaluated on public video datasets and achieve state-of-the-art SR results compared with other recent video SR approaches. Both of the temporal adaptation and the spatial alignment modules are demonstrated to considerably improve SR quality over their plain counterparts.", "Convolutional neural networks have enabled accurate image super-resolution in real-time. However, recent attempts to benefit from temporal correlations in video super-resolution have been limited to naive or inefficient architectures. In this paper, we introduce spatio-temporal sub-pixel convolution networks that effectively exploit temporal redundancies and improve reconstruction accuracy while maintaining real-time speed. Specifically, we discuss the use of early fusion, slow fusion and 3D convolutions for the joint processing of multiple consecutive video frames. We also propose a novel joint motion compensation and video super-resolution algorithm that is orders of magnitude more efficient than competing methods, relying on a fast multi-resolution spatial transformer module that is end-to-end trainable. These contributions provide both higher accuracy and temporally more consistent videos, which we confirm qualitatively and quantitatively. 
Relative to single-frame models, spatio-temporal networks can either reduce the computational cost by 30 whilst maintaining the same quality or provide a 0.2dB gain for a similar computational cost. Results on publicly available datasets demonstrate that the proposed algorithms surpass current state-of-the-art performance in both accuracy and efficiency.", "Recent advances in video super-resolution have shown that convolutional neural networks combined with motion compensation are able to merge information from multiple low-resolution (LR) frames to generate high-quality images. Current state-of-the-art methods process a batch of LR frames to generate a single high-resolution (HR) frame and run this scheme in a sliding window fashion over the entire video, effectively treating the problem as a large number of separate multi-frame super-resolution tasks. This approach has two main weaknesses: 1) Each input frame is processed and warped multiple times, increasing the computational cost, and 2) each output frame is estimated independently conditioned on the input frames, limiting the system's ability to produce temporally consistent results. In this work, we propose an end-to-end trainable frame-recurrent video super-resolution framework that uses the previously inferred HR estimate to super-resolve the subsequent frame. This naturally encourages temporally consistent results and reduces the computational cost by warping only one image in each step. Furthermore, due to its recurrent nature, the proposed method has the ability to assimilate a large number of previous frames without increased computational demands. Extensive evaluations and comparisons with previous methods validate the strengths of our approach and demonstrate that the proposed framework is able to significantly outperform the current state of the art." ], "cite_N": [ "@cite_30", "@cite_14", "@cite_41", "@cite_29", "@cite_32" ], "mid": [ "", "2726582817", "2781335552", "2951570301", "2962927175" ] }
Photorealistic Video Super Resolution
To the best of our knowledge, VSR methods have so far been restricted to MSE optimization methods and recent advancements in perceptual image quality in single image SR have not yet been successfully transferred to VSR. A possible explanation is that perceptual losses lead to sharper images which makes temporal inconsistencies significantly more evident in the results, leading to unpleasing flickering in the high-resolution videos @cite_31 .
{ "abstract": [ "Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack highfrequency textures and do not look natural despite yielding high PSNR values.,,We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixelaccurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks." ], "cite_N": [ "@cite_31" ], "mid": [ "2963037581" ] }
Consequently, they propose the Learned Perceptual Image Patch Similarity metric (LPIPS). We evaluate our testing set with LPIPS using the AlexNet architecture with an additional linear calibration layer as the authors propose in their manuscript. We show our LPIPS scores in Table 1. These scores are in line with what we show in the quantitative visual inspection in Figure 4: The samples obtained by ENet are together with L EA the most similar to the GT, with ENet producing slightly sharper images than our proposed method. L EAT comes afterwards, as a more conservative generative network (i.e. closer to the one obtained with the MSE loss). Temporal Consistency To evaluate the temporal consistency of the estimated videos, we compute the temporal-consistency loss as described in Equation 4 and Fig. 2 between adjacent frames for all the methods in the benchmark. We show the results in Table 2. We note that all the configurations that are recurrent perform well in this metric, even when we do not minimize L T directly (e.g. L E , L EA ). In contrast, models that are not aware of the temporal dimensions (such as ENet or L SI A ) obtain higher errors, validating that the recurrent network learns inter-frame relationships. Not considering the bicubic interpolation which is very blurry, the best performer is L EAT followed closely by L E . Our model L EA indeed performs reasonably well, especially taking into consideration that it is the best performer in the quality scores shown in Table 1. Evaluating the temporal consistency over adjacent frames in a sequence where the ground-truth optical flow is not known poses several problems, as errors present in the flow estimation will directly affect this metric. Additionally, the bilinear resampling performed for the image warping is, when analyzed in the frequency domain, a low-pass filter that can potentially blur high-frequencies and thus, result in an increase of uncertainty in the measured error. In order to ensure the reliability of the temporal consistency validation, we perform further testing with the MPI Sintel synthetic training sequence (which includes groundtruth optical flow). This enables us to asses the impact of using estimated flows in the temporal consistency metric. We show in Table 3 the results in terms of temporal consistency of the MPI Sintel training 23 sequences using the GT and also FlowNet2 estimated optical flows for the warping in the metric. The error from estimated flows to GT is not significant, and the relationship among methods is similar: L EA improves greatly over the non-recurrent SRGAN L SI A or EnhanceNet, and L EAT (with temporal consistency loss) further improves over L EA . Table 3. Temporal Consistency Loss for adjacent frames for MPI Sintel Dataset using ground-truth and estimated optical flows. Following the example of [21], we also show in Figure 5 temporal profiles for qualitative evaluation of temporal consistency. A temporal profile shows an image where each row is a fixed line over a set of consecutive time frames, creating a 2-dimensional visualization of the temporal evolution of the line. In this figure, we can corroborate the objective assessment performed in Table 1 and Table 2. ENet produces very appealing sharp textures, to the point that it is hallucinating frequencies not present in the GT image (i.e. over-sharpening). This is not necessarily unpleasing with static images, but temporal consistency then becomes very challenging, and some of those textures resemble noise in the temporal profile. 
Our model L A is hardly distinguishable from the GT image, generating high-quality plausible textures, while showing a very clean temporal profile. L EAT has fewer hallucinated textures than L A , but on the other hand we also see an improved temporal behavior (i.e. less flickering). Conclusions We presented a novel generative adversarial model for video upscaling. Differently from previous approaches to video super-resolution based on MSE minimization, we used adversarial loss functions in order to recover videos with photorealistic textures. To the best of our knowledge, this is the first work that applies perceptual loss functions to the task of video super-resolution. In order to tackle the problem of lacking temporal consistency due to perceptual loss functions, we propose two synergetic contributions: (1) A recurrent generator and discriminator model where the output of frame t − 1 is passed on to the next iteration in combination with the input of frame t, enabling temporal cues during learning and inference. Models trained with adversarial and MSE losses show improved behavior in terms of temporal consistency and a competitive quality when compared to SISR models. (2) Additionally, we introduce the temporal-consistency loss to video super-resolution, in which deviations from the previous warped frame are punished when estimating a given frame. We conducted evaluation by means of the LPIPS and temporal-consistency loss on a testing dataset of more than a thousand 4k video frames, obtaining promising results that open new possibilities within video upscaling.
3,638
1807.07930
2942641522
With the advent of perceptual loss functions, new possibilities in super-resolution have emerged, and we currently have models that successfully generate near-photorealistic high-resolution images from their low-resolution observations. Up to now, however, such approaches have been exclusively limited to single image super-resolution. The application of perceptual loss functions on video processing still entails several challenges, mostly related to the lack of temporal consistency of the generated images, i.e., flickering artifacts. In this work, we present a novel adversarial recurrent network for video upscaling that is able to produce realistic textures in a temporally consistent way. The proposed architecture naturally leverages information from previous frames due to its recurrent architecture, i.e. the input to the generator is composed of the low-resolution image and, additionally, the warped output of the network at the previous step. Together with a video discriminator, we also propose additional loss functions to further reinforce temporal consistency in the generated sequences. The experimental validation of our algorithm shows the effectiveness of our approach which obtains images with high perceptual quality and improved temporal consistency.
The style transfer community has faced similar problems in their transition from single-image to video processing. Single-image style-transfer networks might produce very distant images for adjacent frames @cite_2 , creating very strong transitions from frame to frame. Several recent works have overcome this problem by including a temporal-consistency loss that ensures that the stylized consecutive frames are similar to each other when warped with the optical flow of the scene @cite_8 @cite_5 @cite_23 .
{ "abstract": [ "Recent progress in style transfer on images has focused on improving the quality of stylized images and speed of methods. However, real-time methods are highly unstable resulting in visible flickering when applied to videos. In this work we characterize the instability of these methods by examining the solution set of the style transfer objective. We show that the trace of the Gram matrix representing style is inversely related to the stability of the method. Then, we present a recurrent convolutional network for real-time video style transfer which incorporates a temporal consistency loss and overcomes the instability of prior methods. Our networks can be applied at any resolution, do not require optical flow at test time, and produce high quality, temporally consistent stylized videos in real-time.", "", "Recent research endeavors have shown the potential of using feed-forward convolutional neural networks to accomplish fast style transfer for images. In this work, we take one step further to explore the possibility of exploiting a feed-forward network to perform style transfer for videos and simultaneously maintain temporal consistency among stylized video frames. Our feed-forward network is trained by enforcing the outputs of consecutive frames to be both well stylized and temporally consistent. More specifically, a hybrid loss is proposed to capitalize on the content information of input frames, the style information of a given style image, and the temporal information of consecutive frames. To calculate the temporal loss during the training stage, a novel two-frame synergic training mechanism is proposed. Compared with directly applying an existing image style transfer method to videos, our proposed method employs the trained network to yield temporally consistent stylized videos which are much more visually pleasant. In contrast to the prior video style transfer method which relies on time-consuming optimization on the fly, our method runs in real time while generating competitive visual results.", "Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation." ], "cite_N": [ "@cite_8", "@cite_5", "@cite_23", "@cite_2" ], "mid": [ "2612034718", "", "2740546229", "2475287302" ] }
Photorealistic Video Super Resolution
Advances in convolutional neural networks have revolutionized computer vision and the popular field of super-resolution has been no exception to this rule, as in recent years numerous publications have made great strides towards better reconstructions of high-resolution pictures. A most promising new trend in super-resolution has emerged as the application of perceptual loss functions rather than the previously ubiquitous optimization of the mean squared error. This paradigm shift has enabled the leap from images with blurred textures to near-photorealistic results in terms of perceived image quality using deep neural networks. Notwithstanding the recent success in single image super-resolution, perceptual losses have not yet been successfully utilized in the video super-resolution domain, as perceptual losses typically introduce artifacts that, while being undisturbing in the spatial domain, emerge as spurious flickering artifacts in videos. In this paper we propose a neural network model that is able to produce sharp videos with fine details while improving its behaviour in terms of temporal consistency. The contributions of the paper are: (1) we propose a recurrent generative adversarial model that uses optical flow in order to exploit temporal cues across frames, and (2) we introduce a temporal-consistency loss term that reinforces coherent consecutive frames in the temporal domain. Proposed method. Notation and problem statement. Video super-resolution aims at upscaling a given LR image sequence $\{Y_t\}$ by a factor of $s$, so that the estimated sequence $\{\hat{X}_t\}$ resembles the original sequence $\{X_t\}$ by some metric. We denote images in the low-resolution domain by $Y \in [0,1]^{h \times w \times 3}$, and ground-truth images in the high-resolution domain by $X \in [0,1]^{sh \times sw \times 3}$ for a given magnification factor $s$. An estimate of a high-resolution image $X$ is denoted by $\hat{X}$. We discern between frames within a temporal sequence by a subindex to the image variable, e.g., $Y_{t-1}$, $Y_t$. We use a superscript $w$, e.g. $\hat{X}^w_{t-1}$, to denote an image $\hat{X}$ that has been warped from its time step $t-1$ to the following frame $X_t$. The proposed architecture is summarized in Figure 1 and will be explained in detail in the following sections. Fig. 1. Network architectures for generator and discriminator. The previous output frame is warped onto the current frame and mapped to LR with the space to depth transformation before being concatenated to the current LR input frame. The generator follows a ResNet architecture with skip connections around the residual blocks and around the whole network. The discriminator follows the common design pattern of decreasing the spatial dimension of the images while increasing the number of channels after each block. We define an architecture that naturally leverages not only single image but also inter-frame details present in video sequences by using a recurrent neural network architecture. The previous output frame is warped according to the optical flow estimate given by FlowNet 2.0 [30]. By including a discriminator that is only needed at the training stage, we further enable adversarial training, which has proved to be a powerful tool for generating sharper and more realistic images [14,17]. To the best of our knowledge, the use of perceptual loss functions (i.e. adversarial training in recurrent architectures) for video super-resolution is novel.
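As a concrete illustration of the notation above, the following minimal sketch (our own, not the authors' code) forms a training pair by bicubically downsampling an HR frame $X_t$ by the magnification factor $s = 4$ to obtain the LR observation $Y_t$; the tensor names and shapes are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def make_lr_hr_pair(hr_frame: torch.Tensor, s: int = 4):
    """Given an HR frame X_t of shape (1, 3, s*h, s*w) with values in [0, 1],
    return the bicubically downsampled LR observation Y_t of shape (1, 3, h, w)."""
    lr_frame = F.interpolate(hr_frame, scale_factor=1.0 / s,
                             mode="bicubic", align_corners=False).clamp(0.0, 1.0)
    return lr_frame, hr_frame

# Example: a 128x128 HR crop maps to a 32x32 LR input, matching the training setup.
x_t = torch.rand(1, 3, 128, 128)
y_t, _ = make_lr_hr_pair(x_t, s=4)
print(y_t.shape)  # torch.Size([1, 3, 32, 32])
```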
In a recently published work, Sajjadi et al. [25] propose a similar recurrent architecture for video super-resolution; however, they do not utilize a perceptual objective and, in contrast to our method, they do not apply an explicit loss term that enforces temporal consistency. Recurrent generator and discriminator. Following recent state-of-the-art super-resolution methods for both classical and perceptual loss functions [9,14,17,31], we use deep convolutional neural networks with residual connections. This class of networks facilitates learning the identity mapping and leads to better gradient flow through deep networks. Specifically, we adopt a ResNet architecture for our recurrent generator that is similar to the ones introduced by [14,17] with some modifications. Each of the residual blocks is composed of a convolution, a Rectified Linear Unit (ReLU) activation and another convolutional layer following the activation. Previous approaches have applied batch normalization layers in the residual blocks [17], but we choose not to add batch normalization to the generator due to the comparably small batch size, to avoid potential color shift problems, and also taking into account recent evidence hinting that it might be problematic for generative image models [32]. In order to further accelerate and stabilize training, we create an additional skip connection over the whole generator. This means that the network only needs to learn the residual between the bicubic interpolation of the input and the high-resolution ground-truth image rather than having to pass through all low frequencies as well [7,14]. We perform most of our convolutions in low-resolution space for a higher receptive field and higher efficiency. Since the input image has a lower dimension than the output image, the generator needs to have a module that increases the resolution towards the end. There are several ways to do so within a neural network, e.g., transposed convolution layers, interpolation, or depth to space units (pixelshuffle). Following Shi et al. [4], we reshuffle the activations from the channel dimension to the height and width dimensions so that the effective spatial resolution is increased (and, consequently, this operation decreases the number of channels). The upscaling unit is divided into two stages with an intermediate magnification step $r$ (e.g. two ×2 stages for a magnification factor of ×4). Each of the upscaling stages is composed of a convolutional layer that increments the number of channels by a factor of $r^2$, a depth to space operation and a ReLU activation. Our discriminator follows common design choices and is composed of strided convolutions, batch normalization and leaky ReLU activations that progressively decrease the spatial resolution of the activations and increase the channel count [14,17,33]. The last stage of the discriminator is composed of two dense layers and a sigmoid activation function. In contrast to general generative adversarial networks, the input to the proposed generative network is not a random variable but is composed of the low-resolution image $Y_t$ (corresponding to the current frame $t$) and, additionally, the warped output of the network at the previous step, $\hat{X}^w_{t-1}$. The difference in resolution of these two images is adapted through a space to channel layer which decreases the spatial resolution of $\hat{X}^w_{t-1}$ without loss of data. For warping the previous image, a dense optical flow field is estimated with a flow estimation network as described in the following section.
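To make the generator design more concrete, the sketch below (our own illustration under stated assumptions, not the released model) shows a residual block without batch normalization, one ×2 depth-to-space upscaling stage, and the assembly of the recurrent generator input from $Y_t$ and the warped previous output; the channel counts are assumptions.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Conv -> ReLU -> Conv with an identity skip; no batch normalization."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return x + self.body(x)

class UpStage(nn.Module):
    """One x2 upscaling stage: conv expanding channels by r^2, depth-to-space, ReLU."""
    def __init__(self, ch: int = 64, r: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch * r * r, 3, padding=1)
        self.shuffle = nn.PixelShuffle(r)
        self.act = nn.ReLU(inplace=True)
    def forward(self, x):
        return self.act(self.shuffle(self.conv(x)))

def recurrent_input(y_t, x_prev_warped, s: int = 4):
    """Concatenate the LR frame Y_t with the warped previous HR output,
    mapped to LR resolution by space-to-depth so that no data is discarded."""
    x_prev_lr = nn.functional.pixel_unshuffle(x_prev_warped, s)  # (B, 3*s*s, h, w)
    return torch.cat([y_t, x_prev_lr], dim=1)                    # (B, 3 + 3*s*s, h, w)

# Shape check with hypothetical sizes: a 32x32 LR input and a 128x128 previous output.
y_t = torch.rand(1, 3, 32, 32)
x_prev = torch.rand(1, 3, 128, 128)
inp = recurrent_input(y_t, x_prev)                  # (1, 51, 32, 32)
feat = ResBlock(64)(nn.Conv2d(51, 64, 3, padding=1)(inp))
hr_feat = UpStage()(UpStage()(feat))                # two x2 stages -> (1, 64, 128, 128)
print(inp.shape, hr_feat.shape)
```

A final convolution mapping the 64 feature channels back to 3 RGB channels, plus the global skip over the bicubic upscaling, would complete the generator.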
As described in Section 3.4, the warped frames are also used in an additional loss term that enforces higher temporal consistency in the results. Flow estimation. Accurate dense flow estimation is crucial to the success of the proposed architecture. For this reason, we opt to use one of the best available flow estimation methods, FlowNet 2.0 [30]. We use the pre-trained model supplied by the authors [34] and run optical flow estimation on our whole dataset. Fig. 2. Behavior of the proposed temporal-consistency loss. The sequence depicts a checkerboard pattern which moves to the right. In the first group of GT images, the warped images are identical, and thus the loss is 0. In (a), the results are not temporally consistent, so the warped image is different from the current frame, which leads to a loss that is higher than 0. In the example in (b), the estimated patterns are temporally consistent despite not being the same as the GT images, so the loss is 0. FlowNet 2.0 is the successor of the original FlowNet architecture, which was the first approach that used convolutional neural networks to predict dense optical flow maps. The authors show that it is both faster and more accurate than its predecessor. Besides a more sophisticated training procedure, FlowNet 2.0 relies on an arrangement of stacked networks that capture large displacements in coarse flow estimates which are then refined by the next network in a subsequent step. In a final step, the estimates are fused by a shallow fusion network. For details, we refer the reader to the original publication [30]. Losses. We train our model with three different loss terms, namely: pixel-wise mean squared error (MSE), adversarial loss and a temporal-consistency loss. Mean Squared Error. MSE is by far the most common loss in the super-resolution literature as it is well-understood and easy to compute. It accurately captures sharp edges and contours, but it leads to over-smooth and flat textures as the reconstruction of high-frequency areas falls to the local mean rather than a realistic mode [14]. The pixel-wise MSE is defined as the squared Frobenius norm of the difference of two images: $L_E = \|\hat{X}_t - X_t\|_2^2$, (1) where $\hat{X}_t$ denotes the estimated image of the generator for frame $t$ and $X_t$ denotes the ground-truth HR frame $t$. Fig. 3. Unfolded recurrent generator G and discriminator D during training for 3 temporal steps. The output of the previous time step is fed into the generator for the next iteration. Note that the weights of G and D are shared across different time steps. Gradients of all losses during training pass through the whole unrolled configuration of network instances. The discriminator is applied independently for each time step. Adversarial Loss. Generative Adversarial Networks (GANs) [35] and their characteristic adversarial training scheme have been a very active research field in recent years, defining a wide landscape of applications. In GANs, a generative model is obtained by simultaneously training an additional network. A generative model G (i.e. generator) that learns to produce samples close to the data distribution of the training set is trained along with a discriminative model D (i.e. discriminator) that estimates the probability that a given sample belongs to the training set rather than being generated by G.
The objective of G is to maximize the errors committed by D, whereas the training of D should minimize its own errors, leading to a two-player minimax game. Similar to previous single-image super-resolution work [14,17], the input to the generator G is not a random vector but an LR image (in our case, with an additional recurrent input), and thus the generator minimizes the following loss: $L_A = -\log\big(D(G(Y_t \,\|\, \hat{X}^w_{t-1}))\big)$, (2) where the operator $\|$ denotes concatenation. The discriminator minimizes: $L_D = -\log(D(X_t)) - \log\big(1 - D(G(Y_t \,\|\, \hat{X}^w_{t-1}))\big)$. (3) Temporal-consistency Loss. Upscaling video sequences has the additional challenge of respecting the original temporal consistency between adjacent frames so that the estimated video does not present unpleasing flickering artifacts. When minimizing only $L_E$ such artifacts are less noticeable for two main reasons: (1) MSE minimization often converges to the mean in textured regions, and thus flickering is reduced, and (2) the pixel-wise MSE with respect to the GT is, up to a certain point, enforcing the inter-frame consistency present in the training images. However, when adding an adversarial loss term, maintaining temporal consistency becomes more difficult. Adversarial training aims at generating samples that lie in the manifold of images, and thus it generates high-frequency content that will hardly be pixel-wise accurate to any ground-truth image. Generating video frames separately thus introduces unpleasing flickering artifacts. In order to tackle the aforementioned limitation, we introduce the temporal-consistency loss, which has already been successfully used in the style transfer community [27,28,29]. The temporal-consistency loss is computed from two adjacent estimated frames (without need of ground-truth), by warping the frame $t-1$ to $t$ and computing the MSE between them. We show an example of the behavior of our proposed temporal-consistency loss in Figure 2. Let $W(X, O)$ denote an image warping operation and $O$ an optical flow field mapping $t-1$ to $t$. Our proposed loss reads: $L_T = \|\hat{X}^w_{t-1} - \hat{X}_t\|_2^2$, for $\hat{X}^w_{t-1} = W(\hat{X}_{t-1}, O)$. (4) Results. Training and parameters. Our model falls in the category of recurrent neural networks, and thus must be trained via Backpropagation Through Time (BPTT) [36], which is a finite approximation of the infinite recurrent loop created in the model. In practice, BPTT unfolds the network into several temporal steps where each of those steps is a copy of the network sharing the same parameters. The backpropagation algorithm is then used to obtain gradients of the loss with respect to the parameters. An example of an unfolded recurrent generator and discriminator can be visualized in Figure 3. We select 10 temporal steps for our training approximation. Note that our discriminator classifies each image independently and is not recurrent; thus the different images produced by G can be stacked in the batch dimension (i.e. the discriminator does not have any connection between adjacent frames). Our training set is composed of 4k videos downloaded from youtube.com and downscaled to 720 × 1280, from which we extract around 300,000 128 × 128 HR crops that serve as ground-truth images, and then further downsample them by a factor of s = 4 to obtain the LR input of size 32 × 32. The training dataset is thus composed of around 30,000 sequences of 10 frames each (i.e. around 30,000 data points for the recurrent network).
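The following sketch (ours, simplified and with hypothetical shapes) illustrates how the warping operator $W$ and the loss terms above can be combined for a single temporal step of the unrolled training; the discriminator update of Eq. (3) is omitted, and the loss weights follow the configurations reported in the evaluation below.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Backward-warp img from t-1 to t with a dense flow field (B, 2, H, W) in pixels."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(img.device)     # (2, H, W) pixel grid
    coords = base.unsqueeze(0) + flow                               # sampling positions
    # normalize to [-1, 1]; grid_sample expects a (B, H, W, 2) grid ordered as (x, y)
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)
    return F.grid_sample(img, grid, mode="bilinear", align_corners=True)

def step_losses(d, x_hat_t, x_hat_prev, x_t, flow, w_adv=3e-3, w_tmp=1e-2):
    """Pixel-wise MSE (Eq. 1), generator adversarial loss (Eq. 2) and
    temporal-consistency loss (Eq. 4) for one temporal step."""
    l_e = F.mse_loss(x_hat_t, x_t)
    l_a = -torch.log(d(x_hat_t) + 1e-8).mean()
    l_t = F.mse_loss(warp(x_hat_prev, flow), x_hat_t)
    return l_e + w_adv * l_a + w_tmp * l_t

# Toy usage with a dummy discriminator that outputs probabilities in (0, 1).
d = lambda x: torch.sigmoid(x.mean(dim=(1, 2, 3))).unsqueeze(1)
x_hat_t, x_hat_prev, x_t = (torch.rand(2, 3, 64, 64) for _ in range(3))
flow = torch.zeros(2, 2, 64, 64)
print(step_losses(d, x_hat_t, x_hat_prev, x_t, flow))
```

During BPTT, this per-step loss would be accumulated over the 10 unrolled temporal steps before backpropagating through the whole unrolled configuration.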
We precompute the optical flows with FlowNet 2.0 and load them both during training and testing, as GPU memory becomes scarce especially when unfolding the generator and discriminator. We compile a testing set, larger than previous testing sets in the literature, also downloaded from youtube.com, favoring sharp 4k content that is further downsampled to 720 × 1280 for GT and 180 × 320 for the LR input. In this dataset there are 12 sequences of diverse nature scenes (e.g. landscapes, natural life, urban scenes) ranging from very little to fast motion. Each sequence is composed of roughly 100 to 150 frames, which totals 1281 frames. We use a batch size of 8 sequences, i.e., in total each batch contains 8×10 = 80 training images. All models are pre-trained with $L_E$ for about 100k training iterations and then trained with the adversarial and temporal loss for about 1.5M iterations. Training was performed on Nvidia Tesla P100 and V100 GPUs, both of which have 16 GB of memory. Evaluation. Table 1. LPIPS scores (AlexNet architecture with linear calibration layer) for 12 sequences. Best performers in bold font, and runners-up in blue color. The best performer on average is our proposed model trained with $L_{EA}$, followed closely by ENet. Models. We include in our validation three loss configurations: (1) $L_E$ is trained only with the MSE loss as a baseline, (2) $L_{EA}$ is our adversarial model trained with $L_E + 3\times10^{-3} L_A$ and (3) $L_{EAT}$ is our adversarial model with temporal-consistency loss, $L_E + 3\times10^{-3} L_A + 10^{-2} L_T$. We also include two other state-of-the-art models in our benchmarking: EnhanceNet (ENet) as a perceptual single image super-resolution baseline [14] (code and pre-trained network weights obtained from the authors' website), which minimizes an additional loss term based on the Gram matrix of VGG activations; and lastly our model without flow estimation or recurrent connections, which we denote in the tables by $L_A^{SI}$. This last model is very similar to the network used in SRGAN from Ledig et al. [17]. Intra-frame quality. Evaluation of images trained with perceptual losses is still an open problem. Even though it is trivial for humans to evaluate the perceived similarity between two images, the underlying principles of human perception are still not well-understood. Traditional metrics such as PSNR (based on MSE), Structural Similarity (SSIM) or the Information Fidelity Criterion (IFC) still rely on well-aligned, more or less pixel-wise accuracy estimates, and minor artifacts in the images can cause great perturbations in them. In order to evaluate image samples from models that deviate from the MSE minimization scheme, other metrics need to be considered. Table 2. Temporal Consistency Loss for adjacent frames (the initial frame has not been included in the computation). Best performer in bold and runner-up in blue color (omitting bicubic). The best performer on average is our proposed method trained with $L_{EAT}$, followed by $L_E$.
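For reference, the three configurations evaluated in Tables 1 and 2 correspond to the following weighted combinations of the loss terms; this is a minimal sketch of ours using the weights quoted above, with the helper name and toy inputs being hypothetical.

```python
def total_loss(l_e, l_a, l_t, config="L_EAT"):
    """Combine the loss terms according to the evaluated configurations:
    L_E  : MSE only,
    L_EA : MSE + 3e-3 * adversarial,
    L_EAT: MSE + 3e-3 * adversarial + 1e-2 * temporal consistency."""
    weights = {
        "L_E":   (1.0, 0.0,  0.0),
        "L_EA":  (1.0, 3e-3, 0.0),
        "L_EAT": (1.0, 3e-3, 1e-2),
    }
    w_e, w_a, w_t = weights[config]
    return w_e * l_e + w_a * l_a + w_t * l_t

print(total_loss(0.01, 0.7, 0.02, "L_EAT"))  # toy numbers
```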
The recent work of Zhang et al. [37] explores the capabilities of deep architectures to capture perceptual features that are meaningful for similarity assessment. In their exhaustive evaluation they show how deep features of different architectures outperform previous metrics by substantial margins and correlate very well with subjective human scores. They conclude that deep networks, regardless of the specific architecture, capture important perceptual features that are well-aligned with those of the human visual system. Consequently, they propose the Learned Perceptual Image Patch Similarity metric (LPIPS). We evaluate our testing set with LPIPS using the AlexNet architecture with an additional linear calibration layer, as the authors propose in their manuscript. We show our LPIPS scores in Table 1. These scores are in line with what we show in the qualitative visual inspection in Figure 4: the samples obtained by ENet are, together with $L_{EA}$, the most similar to the GT, with ENet producing slightly sharper images than our proposed method. $L_{EAT}$ comes afterwards, as a more conservative generative network (i.e. closer to the one obtained with the MSE loss). Temporal Consistency. To evaluate the temporal consistency of the estimated videos, we compute the temporal-consistency loss as described in Equation 4 and Fig. 2 between adjacent frames for all the methods in the benchmark. We show the results in Table 2. We note that all the configurations that are recurrent perform well in this metric, even when we do not minimize $L_T$ directly (e.g. $L_E$, $L_{EA}$). In contrast, models that are not aware of the temporal dimension (such as ENet or $L_A^{SI}$) obtain higher errors, validating that the recurrent network learns inter-frame relationships. Not considering the bicubic interpolation, which is very blurry, the best performer is $L_{EAT}$ followed closely by $L_E$. Our model $L_{EA}$ indeed performs reasonably well, especially taking into consideration that it is the best performer in the quality scores shown in Table 1. Evaluating the temporal consistency over adjacent frames in a sequence where the ground-truth optical flow is not known poses several problems, as errors present in the flow estimation will directly affect this metric. Additionally, the bilinear resampling performed for the image warping is, when analyzed in the frequency domain, a low-pass filter that can potentially blur high frequencies and thus result in an increase of uncertainty in the measured error. In order to ensure the reliability of the temporal consistency validation, we perform further testing with the MPI Sintel synthetic training sequences (which include ground-truth optical flow). This enables us to assess the impact of using estimated flows in the temporal consistency metric. We show in Table 3 the results in terms of temporal consistency of the 23 MPI Sintel training sequences using the GT and also FlowNet2-estimated optical flows for the warping in the metric. The error induced by using estimated instead of GT flows is not significant, and the relationship among methods is similar: $L_{EA}$ improves greatly over the non-recurrent SRGAN $L_A^{SI}$ or EnhanceNet, and $L_{EAT}$ (with temporal consistency loss) further improves over $L_{EA}$. Table 3. Temporal Consistency Loss for adjacent frames for the MPI Sintel dataset using ground-truth and estimated optical flows. Following the example of [21], we also show in Figure 5 temporal profiles for qualitative evaluation of temporal consistency. A temporal profile shows an image where each row is a fixed line over a set of consecutive time frames, creating a 2-dimensional visualization of the temporal evolution of the line. In this figure, we can corroborate the objective assessment performed in Table 1 and Table 2. ENet produces very appealing sharp textures, to the point that it is hallucinating frequencies not present in the GT image (i.e. over-sharpening). This is not necessarily unpleasing with static images, but temporal consistency then becomes very challenging, and some of those textures resemble noise in the temporal profile.
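A temporal profile of the kind shown in Figure 5 can be assembled by stacking the same image row across consecutive frames; the short sketch below (ours, with hypothetical array names and sizes) illustrates the construction.

```python
import numpy as np

def temporal_profile(frames: np.ndarray, row: int) -> np.ndarray:
    """frames: (T, H, W, 3) video. Returns a (T, W, 3) image whose t-th row is the
    fixed image row `row` taken from frame t, visualizing its temporal evolution."""
    return frames[:, row, :, :]

# Toy example: 100 frames of 720x1280 RGB; inspect line 360 over time.
video = np.random.rand(100, 720, 1280, 3).astype(np.float32)
profile = temporal_profile(video, row=360)
print(profile.shape)  # (100, 1280, 3)
```

Flickering then appears as vertical noise in the profile, while temporally consistent results produce smooth diagonal structures.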
Our model $L_{EA}$ is hardly distinguishable from the GT image, generating high-quality plausible textures, while showing a very clean temporal profile. $L_{EAT}$ has fewer hallucinated textures than $L_{EA}$, but in exchange shows an even better temporal behavior (i.e. less flickering). Conclusions. We presented a novel generative adversarial model for video upscaling. Unlike previous approaches to video super-resolution based on MSE minimization, we used adversarial loss functions in order to recover videos with photorealistic textures. To the best of our knowledge, this is the first work that applies perceptual loss functions to the task of video super-resolution. In order to tackle the lack of temporal consistency caused by perceptual loss functions, we propose two synergetic contributions: (1) a recurrent generator and discriminator model where the output of frame $t-1$ is passed on to the next iteration in combination with the input of frame $t$, enabling temporal cues during learning and inference; models trained with adversarial and MSE losses show improved behavior in terms of temporal consistency and a competitive quality when compared to SISR models. (2) Additionally, we introduce the temporal-consistency loss to video super-resolution, in which deviations from the previous warped frame are penalized when estimating a given frame. We conducted an evaluation by means of the LPIPS metric and the temporal-consistency loss on a testing dataset of more than a thousand 4k video frames, obtaining promising results that open new possibilities within video upscaling.
3,638
1906.03787
2948558869
Adversarial training is one of the main defenses against adversarial attacks. In this paper, we provide the first rigorous study on diagnosing elements of adversarial training, which reveals two intriguing properties. First, we study the role of normalization. Batch normalization (BN) is a crucial element for achieving state-of-the-art performance on many vision tasks, but we show it may prevent networks from obtaining strong robustness in adversarial training. One unexpected observation is that, for models trained with BN, simply removing clean images from training data largely boosts adversarial robustness, i.e., by 18.3%. We relate this phenomenon to the hypothesis that clean images and adversarial images are drawn from two different domains. This two-domain hypothesis may explain the issue of BN when training with a mixture of clean and adversarial images, as estimating normalization statistics of this mixture distribution is challenging. Guided by this two-domain hypothesis, we show disentangling the mixture distribution for normalization, i.e., applying separate BNs to clean and adversarial images for statistics estimation, achieves much stronger robustness. Additionally, we find that enforcing BNs to behave consistently at training and testing can further enhance robustness. Second, we study the role of network capacity. We find our so-called "deep" networks are still shallow for the task of adversarial learning. Unlike traditional classification tasks where accuracy is only marginally improved by adding more layers to "deep" networks (e.g., ResNet-152), adversarial training exhibits a much stronger demand on deeper networks to achieve higher adversarial robustness. This robustness improvement can be observed substantially and consistently even by pushing the network capacity to an unprecedented scale, i.e., ResNet-638.
Adversarial training constitutes the current foundation of state-of-the-art defenses against adversarial attacks. It was first developed in @cite_19 , where both clean images and adversarial images are used for training. Kannan et al. @cite_21 propose to improve robustness further by encouraging the logits from the pairs of clean images and adversarial counterparts to be similar. Instead of using both clean and adversarial images for training, Madry et al. @cite_8 formulate adversarial training as a min-max optimization and train models exclusively on adversarial images. However, as these works mainly focus on demonstrating the effectiveness of their proposed mechanisms, a fair and detailed diagnosis of adversarial training strategies remains a missing piece. In this work, we provide a detailed diagnosis which reveals two intriguing properties of training adversarial defenders.
{ "abstract": [ "Abstract: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "In this paper, we develop improved techniques for defending against adversarial examples at scale. First, we implement the state of the art version of adversarial training at unprecedented scale on ImageNet and investigate whether it remains effective in this setting - an important open scientific question (, 2018). Next, we introduce enhanced defenses using a technique we call logit pairing, a method that encourages logits for pairs of examples to be similar. When applied to clean examples and their adversarial counterparts, logit pairing improves accuracy on adversarial examples over vanilla adversarial training; we also find that logit pairing on clean examples only is competitive with adversarial training in terms of accuracy on two datasets. Finally, we show that adversarial logit pairing achieves the state of the art defense on ImageNet against PGD white box attacks, with an accuracy improvement from 1.5 to 27.9 . Adversarial logit pairing also successfully damages the current state of the art defense against black box attacks on ImageNet (, 2018), dropping its accuracy from 66.6 to 47.1 . With this new accuracy drop, adversarial logit pairing ties with (2018) for the state of the art on black box attacks on ImageNet.", "Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against a well-defined class of adversaries. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest robustness against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models." ], "cite_N": [ "@cite_19", "@cite_21", "@cite_8" ], "mid": [ "2963207607", "2791953061", "2964253222" ] }
INTRIGUING PROPERTIES OF ADVERSARIAL TRAINING AT SCALE
Adversarial attacks (Szegedy et al., 2014) can mislead neural networks into making wrong predictions by adding human-imperceptible perturbations to input data. Adversarial training (Goodfellow et al., 2015), which trains neural networks on adversarial images that are generated on-the-fly during training, has been shown to be an effective method to defend against such attacks. Later works further improve the robustness of adversarially trained models by mitigating gradient masking (Tramèr et al., 2018), imposing logits pairing (Kannan et al., 2018), denoising at feature space (Xie et al., 2019b), etc. However, these works mainly focus on justifying the effectiveness of their proposed strategies and apply inconsistent pipelines for adversarial training, which leaves the identification of the elements that are important for training robust models as a missing piece in current adversarial research. In this paper, we provide the first rigorous diagnosis of different adversarial learning strategies, under a unified training and testing framework, on the large-scale ImageNet dataset (Russakovsky et al., 2015). We discover two intriguing properties of adversarial training, which are essential for training models with stronger robustness. First, though Batch Normalization (BN) (Ioffe & Szegedy, 2015) is known as a crucial component for achieving state-of-the-art performance on many vision tasks, it may become a major obstacle for securing robustness against strong attacks in the context of adversarial training. By training such networks adversarially with different strategies, e.g., imposing logits pairing (Kannan et al., 2018), we observe an unexpected phenomenon: removing clean images from training data is the most effective way for boosting model robustness. We relate this phenomenon to the conjecture that clean images and adversarial images are drawn from two different domains. This two-domain hypothesis may explain the limitation of BN when training with a mixture of clean and adversarial images, as estimating normalization statistics on this mixture distribution is challenging. We further show that adversarial training without removing clean images can also obtain strong robustness, if the mixture distribution is well disentangled at BN by constructing different mini-batches for clean images and adversarial images to estimate normalization statistics, i.e., one set of BNs exclusively for adversarial images and another set of BNs exclusively for clean images. An alternative solution to avoiding a mixture distribution for normalization is to simply replace all BNs with batch-unrelated normalization layers, e.g., group normalization (Wu & He, 2018), where normalization statistics are estimated on each image independently. These facts indicate that model robustness is highly related to normalization in adversarial training. Furthermore, additional performance gain is observed via enforcing consistent behavior of BN during training and testing. Second, we find that our so-called "deep" networks (e.g., ResNet-152) are still shallow for the task of adversarial learning, and simply going deeper can effectively boost model robustness. Experiments show that directly adding more layers to "deep" networks only marginally improves accuracy for traditional image classification tasks. In contrast, substantial and consistent robustness improvement is witnessed even by pushing the network capacity to an unprecedented scale, i.e., ResNet-638.
This phenomenon suggests that larger networks are encouraged for the task of adversarial learning, as the learning target, i.e., adversarial images, is a more complex distribution to fit than that of clean images. In summary, our paper reveals two intriguing properties of adversarial training: (1) properly handling normalization is essential for obtaining models with strong robustness; and (2) our so-called "deep" networks are still shallow for the task of adversarial learning. We hope these findings can benefit future research on understanding adversarial training and improving adversarial robustness. ADVERSARIAL TRAINING FRAMEWORK As inconsistent adversarial training pipelines were applied in previous works (Kannan et al., 2018; Xie et al., 2019b), it is hard to identify which elements are important for obtaining robust models. To this end, we provide a unified framework to train and to evaluate different models, for the sake of fair comparison. Training Parameters. We use the publicly available adversarial training pipeline to train all models with different strategies on ImageNet. We select ResNet-152 (He et al., 2016) as the baseline network, and apply projected gradient descent (PGD) as the adversarial attacker to generate adversarial examples during training. The hyper-parameters of the PGD attacker are: maximum perturbation of each pixel ε = 16, attack step size α = 1, number of attack iterations N = 30, and the targeted class is selected uniformly at random over the 1000 ImageNet categories. We initialize the adversarial image by the clean counterpart with probability = 0.2, or randomly within the allowed ε-cube with probability = 0.8. All models are trained for a total of 110 epochs, and we decrease the learning rate by 10× at the 35-th, 70-th, and 95-th epoch. Evaluation. For performance evaluation, we mainly study adversarial robustness (rather than clean image accuracy) in this paper. Specifically, we follow the setting in Kannan et al. (2018) and Xie et al. (2019b), where the targeted PGD attacker is chosen as the white-box attacker to evaluate robustness. The targeted class is selected uniformly at random. We constrain the maximum perturbation of each pixel ε = 16, set the attack step size α = 1, and measure the robustness by defending against a PGD attacker with 2000 attack iterations (i.e., PGD-2000). As in Kannan et al. (2018) and Xie et al. (2019b), we always initialize the adversarial perturbation from a random point within the allowed ε-cube. We apply these training and evaluation settings by default for all experiments, unless otherwise stated. EXPLORING NORMALIZATION TECHNIQUES IN ADVERSARIAL TRAINING ON THE EFFECTS OF CLEAN IMAGES IN ADVERSARIAL TRAINING In this part, we first elaborate on the effectiveness of different adversarial training strategies on model robustness. Adversarial training can be dated back to Goodfellow et al. (2015), where they mix clean images and the corresponding adversarial counterparts into each mini-batch for training. We choose this strategy as our starting point, and the corresponding loss function is: $J(\theta, x, y) = \alpha J(\theta, x_{\text{clean}}, y) + (1 - \alpha) J(\theta, x_{\text{adv}}, y)$, (1) where $J(\cdot)$ is the loss function, $\theta$ is the network parameter, $y$ is the ground-truth, and the training pairs $\{x_{\text{clean}}, x_{\text{adv}}\}$ are comprised of clean images and their adversarial counterparts, respectively. The parameter $\alpha$ balances the relative importance between the clean image loss and the adversarial image loss. We set α = 0.5 following Goodfellow et al. (2015).
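As an illustration of the framework and of Eq. (1), the sketch below (our simplification, not the authors' released pipeline) runs a targeted PGD attack with random initialization inside the ε-cube and mixes the clean and adversarial cross-entropy losses with α = 0.5. Pixel values are assumed to lie in [0, 1], so ε = 16 and the step size 1 (given on the 0-255 scale in the text) are rescaled accordingly; the toy classifier and class count at the end are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def targeted_pgd(model, x, y_target, eps=16/255, alpha=1/255, steps=30):
    """Targeted PGD: random start inside the eps-cube, then iteratively step
    towards the target class and project back onto the eps-cube and [0, 1]."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_target)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                    # descend towards the target
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project onto the eps-cube
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def mixed_training_loss(model, x_clean, y, num_classes=1000, alpha_mix=0.5):
    """Eq. (1): alpha * J(clean) + (1 - alpha) * J(adv), with a random target class."""
    y_target = torch.randint(0, num_classes, y.shape, device=y.device)
    x_adv = targeted_pgd(model, x_clean, y_target)
    j_clean = F.cross_entropy(model(x_clean), y)
    j_adv = F.cross_entropy(model(x_adv), y)
    return alpha_mix * j_clean + (1 - alpha_mix) * j_adv

# Toy usage with a tiny stand-in classifier (10 classes, 32x32 inputs).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
print(mixed_training_loss(model, x, y, num_classes=10))
```

Setting alpha_mix to 0 recovers the 100% adv + 0% clean strategy discussed next.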
With our adversarial training framework, this model can achieve 20.9% accuracy against the PGD-2000 attacker. Besides this baseline, we also study the effectiveness of two recently proposed adversarial training strategies (Madry et al., 2018; Kannan et al., 2018), and provide the results as follows. Ratio of clean images. Different from the canonical form in Goodfellow et al. (2015), Madry et al. (2018) apply the min-max formulation for adversarial training where no clean images are used. We note this min-max type of optimization can be dated as early as Wald (1945). We hereby investigate the relationship between model robustness and the ratio of clean images used for training. Specifically, for each training mini-batch, we keep the adversarial images unchanged, but remove 20%, 40%, 60%, 80% or 100% of their clean counterparts. We report the results in Figure 1. Interestingly, removing a portion of clean images from training data can significantly improve model robustness, and the strongest robustness can be obtained by completely removing clean images from the training set, i.e., it achieves an accuracy of 39.2% against the PGD-2000 attacker, outperforming the baseline model by a large margin of 18.3%. Adversarial logits pairing. For performance comparison, we also explore the effectiveness of an alternative training strategy, adversarial logits pairing (ALP) (Kannan et al., 2018). Compared with the canonical form in Goodfellow et al. (2015), ALP imposes an additional loss to encourage the logits from the pairs of clean images and adversarial counterparts to be similar. As shown in Figure 2, our re-implemented ALP obtains an accuracy of 23.0% against the PGD-2000 attacker, which outperforms the baseline model by 2.1%. Compared with the strategy of removing clean images, this improvement is much smaller. Discussion. Given the results above, we conclude that training exclusively on adversarial images is the most effective strategy for boosting model robustness. For example, by defending against the PGD-2000 attacker, the baseline strategy in Goodfellow et al. (2015) (referred to as 100% adv + 100% clean) obtains an accuracy of 20.9%. Adding a logits pairing loss (Kannan et al., 2018) (referred to as 100% adv + 100% clean, ALP) slightly improves the performance by 2.1%, while completely removing clean images (Madry et al., 2018; Xie et al., 2019b) (referred to as 100% adv + 0% clean) boosts the accuracy by 18.3%. We further plot a comprehensive evaluation curve of these three training strategies in Figure 2, by varying the number of PGD attack iterations from 10 to 2000. Surprisingly, only 100% adv + 0% clean can ensure model robustness against strong attacks, i.e., performance becomes asymptotic when allowing the PGD attacker to perform more attack iterations. Training strategies which involve clean images for training appear to result in worse robustness if PGD attackers are allowed to perform more attack iterations. In the next section, we will study how to make these training strategies, i.e., 100% adv + 100% clean and 100% adv + 100% clean, ALP, secure their robustness against strong attacks.
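A minimal sketch of the ALP objective as we understand it (the logits of each clean image and of its adversarial counterpart are paired via an L2 term added to the training loss); the pairing weight and the toy model are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def alp_loss(model, x_clean, x_adv, y, pair_weight=0.5):
    """Adversarial logits pairing: classification losses on clean and adversarial
    images plus an L2 penalty encouraging the two sets of logits to be similar."""
    logits_clean = model(x_clean)
    logits_adv = model(x_adv)
    ce = 0.5 * (F.cross_entropy(logits_clean, y) + F.cross_entropy(logits_adv, y))
    pairing = F.mse_loss(logits_adv, logits_clean)
    return ce + pair_weight * pairing

# Toy usage: x_adv would normally come from a PGD attack as sketched earlier.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x_clean = torch.rand(4, 3, 32, 32)
x_adv = (x_clean + 0.03 * torch.randn_like(x_clean)).clamp(0, 1)
y = torch.randint(0, 10, (4,))
print(alp_loss(model, x_clean, x_adv, y))
```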
THE DEVIL IS IN THE BATCH NORMALIZATION Two-domain hypothesis. Compared to feature maps of clean images, Xie et al. (2019b) show that feature maps of their adversarial counterparts tend to be more noisy. Meanwhile, several works (Li & Li, 2017; Metzen et al., 2018; Feinman et al., 2017; Pang et al., 2018) demonstrate it is possible to build classifiers to separate adversarial images from clean images. These studies suggest that clean images and adversarial images are drawn from two different domains. This two-domain hypothesis may provide an explanation for the unexpected observation (see Sec. 4.1) and leads us to ask: why does simply removing clean images from training data largely boost adversarial robustness? As a crucial element for achieving state-of-the-art performance on various vision tasks, BN (Ioffe & Szegedy, 2015) is widely adopted in many network architectures, e.g., Inception, ResNet (He et al., 2016) and DenseNet (Huang et al., 2017). The normalization statistics of BN are estimated across different images. However, exploiting batch-wise statistics is a challenging task if input images are drawn from different domains, and therefore networks fail to learn a unified representation on this mixture distribution. Given our two-domain hypothesis, when training with both clean and adversarial images, the usage of BN can be the key issue behind the weak adversarial robustness in Figure 2. Based on the analysis above, an intuitive solution arises: accurately estimating normalization statistics should enable models to train robustly even if clean images and adversarial images are mixed in each training mini-batch. To this end, we explore two ways of disentangling the mixture distribution at the normalization layers to validate this argument: (1) maintaining separate BNs for clean/adversarial images; or (2) replacing BNs with batch-unrelated normalization layers. Training with Mixture BN. Current network architectures estimate BN statistics using the mixed features from both clean and adversarial images, which leads to weak model robustness as shown in Figure 2. Xie et al. (2019a) propose that properly decoupling the normalization statistics for adversarial training can effectively boost image recognition. Here, to study model robustness, we apply Mixture BN (MBN) (Xie et al., 2019a), which disentangles the mixed distribution via constructing different mini-batches for clean and adversarial images for accurate BN statistics estimation (illustrated in Figure 4), i.e., one set of BNs exclusively for adversarial images (referred to as MBN adv), and another set of BNs exclusively for clean images (referred to as MBN clean). We do not change the structure of other layers. We verify the effectiveness of this new architecture with two (previously less robust) training strategies, i.e., 100% adv + 100% clean and 100% adv + 100% clean, ALP. At inference time, whether an image is adversarial or clean is unknown. We thereby measure the performance of networks by applying either MBN adv or MBN clean separately.
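A sketch of how the MBN idea can be realized (our own illustration of the described design, not the authors' code): two parallel BatchNorm2d layers share the surrounding convolutions, and a flag selects which set of statistics is used for a given mini-batch.

```python
import torch
import torch.nn as nn

class MixtureBatchNorm2d(nn.Module):
    """Keeps MBN_clean and MBN_adv as two separate BatchNorm2d layers; the caller
    routes each mini-batch (all-clean or all-adversarial) to the matching branch."""
    def __init__(self, num_features):
        super().__init__()
        self.bn_clean = nn.BatchNorm2d(num_features)
        self.bn_adv = nn.BatchNorm2d(num_features)

    def forward(self, x, adv: bool):
        return self.bn_adv(x) if adv else self.bn_clean(x)

# Toy usage: the same convolution feeds both branches; only the BN statistics differ.
conv = nn.Conv2d(3, 8, 3, padding=1)
mbn = MixtureBatchNorm2d(8)
x_clean = torch.rand(4, 3, 32, 32)
x_adv = torch.rand(4, 3, 32, 32)
out_clean = mbn(conv(x_clean), adv=False)
out_adv = mbn(conv(x_adv), adv=True)
print(out_clean.shape, out_adv.shape)
```

At test time one simply fixes the flag, i.e., evaluates the network with either MBN adv or MBN clean throughout.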
Other factors, like whether ALP is applied for training, only cause subtle differences in performance. We further plot an extensive robustness evaluation curve of different training strategies in Figure 3. Unlike Figure 2, we observe that networks using MBN adv now can secure their robustness against strong attacks, e.g., the robustness is asymptotic when increasing attack iterations from 500 to 2000. The results in Table 1 suggest that BN statistics characterize different model performance. For a better understanding, we randomly sample 20 channels in a residual block and plot the corresponding running statistics of MBN clean and MBN adv in Figure 5. We observe that clean images and adversarial images induce significantly different running statistics, though these images share the same set of convolutional filters for feature extraction. This observation further supports that (1) clean images and adversarial images come from two different domains; and (2) current networks fail to learn a unified representation on these two domains. Interestingly, we also find that adversarial images lead to larger running mean and variance than clean images. This phenomenon is also consistent with the observation that adversarial images produce noisy-patterns/outliers at the feature space (Xie et al., 2019b). As a side note, this MBN structure is also used as a practical trick for training better generative adversarial networks (GAN) . Chintala et al. (2016) suggest to construct each mini-batch with only real or generated images when training discriminators, as generated images and real images belong to different domains at an early training stage. However, unlike our situation where BN statistics estimated on different domains remain divergent after training, a successful training of GAN, i.e., able to generate natural images with high quality, usually learns a unified set of BN statistics on real and generated images. Training with batch-unrelated normalization layers. Instead of applying MBN structure to disentangle the mixture distribution, we can also train networks with batch-unrelated normalization layers, which avoids exploiting the batch dimension to calculate statistics, for the same purpose. We choose Group Normalization (GN) for this experiment, as GN can reach a comparable performance to BN on various vision tasks (Wu & He, 2018 -2000). Exploring other batch-unrelated normalization in adversarial training remains as future work. Exceptional cases. There are some situations where models directly trained with BN can also ensure their robustness against strong attacks, even if clean images are included for adversarial training. Our experiments show constraining the maximum perturbation of each pixel to be a smaller value, e.g., = 8, is one of these exceptional cases. Kannan et al. (2018) and Mosbach et al. (2018) also show that adversarial training with clean images can secure robustness on small datasets, i.e., MNIST, CIFAR-10 and Tiny ImageNet. Intuitively, generating adversarial images on these much simpler datasets or under a smaller perturbation constraint induces a smaller gap between these two domains, therefore making it easier for networks to learn a unified representation on clean and adversarial images. Nonetheless, in this paper, we stick to the standard protocol in Kannan et al. (2018) and Xie et al. (2019b) where adversarial robustness is evaluated on ImageNet with the perturbation constraint = 16. REVISITING STATISTICS ESTIMATION OF BN Inconsistent behavior of BN. 
As the concept of "batch" is not legitimate at inference time, BN behaves differently at training and testing (Ioffe & Szegedy, 2015): during training, the mean and variance are computed on each mini-batch, referred to as batch statistics; during testing, there is no actual normalization performed -BN uses the mean and variance pre-computed on the training set (often by running average) to normalize data, referred to as running statistics. For traditional classification tasks, batch statistics usually converge to running statistics by the end of training, thus (practically) making the impact of this inconsistent behavior negligible. Nonetheless, this empirical assumption may not hold in the context of adversarial training. We check this statistics matching of models trained with the strategy 100% adv + 0% clean, where the robustness against strong attacks is secured. We randomly sample 20 channels in a residual block, and plot the batch statistics computed on two randomly sampled mini-batches, together with the pre-computed running statistics. In Figure 6, interestingly, we observe that batch mean is almost equivalent to running mean, while batch variance does not converge to running variance yet on certain channels. Given this fact, we then study if this inconsistent behavior of BN affects model robustness in adversarial training. A heuristic approach. Instead of developing a new training strategy to make batch statistics converge to running statistics by the end of training, we explore a more heuristic solution: applying pre-computed running statistics for model training during the last 10 epochs. We report the performance comparison in Table 2. By enabling BNs to behave consistently at training and testing, this approach can further boost the model robustness by 3.0% with the training strategy 100% adv + 0% clean. We also successfully validate the generality of this approach on other two robust training strategies. More specifically, it can improve the model robustness under the training strategies MBN adv , 100% adv + 100% clean and MBN adv , 100% adv + 100% clean, ALP by 1.6% and 2.8%, respectively. These results suggest that model robustness can be benefited from a consistent behavior of BN at training and testing. Moreover, we note this approach does not incur any additional training budgets. BEYOND ADVERSARIAL ROBUSTNESS On the importance of training convolutional filters adversarially. In Section 4.2, we study the performance of models where the mixture distribution is disentangled for normalization -by applying either MBN clean or MBN adv , the trained models achieve strong performance on either clean images or adversarial images. This result suggests that clean and adversarial images share the same convolutional filters to effectively extract features. We further explore whether the filters learned exclusively on adversarial images can extract features effectively on clean images, and vice versa. We first take a model trained with the strategy 100% adv + 0% clean, and then finetune BNs using only clean images for a few epochs. Interestingly, we find the accuracy on clean images can be significantly boosted from 62.3% to 73%, which is only 5.9% worse than the standard training setting, i.e., 78.9%. These result indicates that convolutional filters learned solely on adversarial images can also be effectively applied to clean images. 
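A minimal sketch of the BN-finetuning experiment just described is given below, assuming PyTorch: all convolutional and fully connected weights of the 100% adv + 0% clean model are frozen, and only the BN layers are re-estimated and finetuned on clean images. The function name, the choice to reset the running statistics, and the optimizer settings are our own assumptions rather than the authors' exact recipe.

```python
import torch
import torch.nn as nn

def finetune_bn_on_clean(model, clean_loader, epochs=3, lr=0.01, device='cuda'):
    """Freeze conv/fc weights; retrain only the BN layers on clean images."""
    model.to(device)
    for p in model.parameters():
        p.requires_grad = False

    bn_params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.reset_running_stats()          # re-estimate statistics on clean data
            for p in m.parameters():         # gamma and beta become trainable again
                p.requires_grad = True
                bn_params.append(p)

    opt = torch.optim.SGD(bn_params, lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    model.train()                            # BN uses batch stats and updates running stats
    for _ in range(epochs):
        for images, labels in clean_loader:
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            opt.step()
    return model
```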
However, we find the opposite direction does not work -convolutional filters learned on clean images cannot extract features robustly on adversarial images (e.g., 0% accuracy against PGD-2000 after finetuning BNs with adversarial images). This phenomenon indicates the importance of training convolutional filters adversarially, as such learned filters can also extract features from clean images effectively. The findings here also are related to the discussion of robust/non-robustness features in Ilyas et al. (2019). Readers with interests are recommended to refer to this concurrent work for more details. Limitation of adversarial training. We note our adversarially trained models exhibit a performance tradeoff between clean accuracy and robustness -the training strategies that achieve strong model robustness usually result in relatively low accuracy on clean images. For example, 100% adv + 0% clean, MBN adv , 100% adv + 100% clean and MBN adv , 100% adv + 100% clean, ALP only report 62.3%, 64.4% and 65.9% of clean image accuracy. By replacing BNs with GNs, 100% adv + 100% clean achieves much better clean image accuracy, i.e., 67.5%, as well maintaining strong robustness. We note that this tradeoff is also observed in the prior work . Besides, Balaji et al. (2019) show it is possible to make adversarially trained models to exhibit a better tradeoff between clean accuracy and robustness. Future attentions are deserved on this direction. GOING DEEPER IN ADVERSARIAL TRAINING As discussed in Section 4.2, current networks are not capable of learning a unified representation on clean and adversarial images. It may suggest that the "deep" network we used, i.e., ResNet-152, still underfits the complex distribution of adversarial images, which motivates us to apply larger networks for adversarial training. We simply instantiate the concept of larger networks by going deeper, i.e., adding more residual blocks. For traditional image classification tasks, the benefits brought by adding more layers to "deep" networks is diminishing, e.g., the blue curve in Figure 7 shows that the improvement of clean image accuracy becomes saturated once the network depth goes beyond ResNet-200. For a better illustration, we train deeper models exclusively on adversarial images and observe a possible underfitting phenomenon as shown in Figure 7. In particular, we apply the heuristic policy in Section 4.3 to mitigate the possible effects brought by BN. We observe that adversarial learning task exhibits a strong "thirst" on deeper networks to obtain stronger robustness. For example, increasing depth from ResNet-152 to ResNet-338 significantly improves the model robustness by 2.4%, while the corresponding improvement in the "clean" training setting (referred to as 0% adv + 100% clean) is only 0.5%. Moreover, this observation still holds even by pushing the network capacity to an unprecedented scale, i.e., ResNet-638. These results indicate that our so-called deep networks (e.g., ResNet-152) are still shallow for the task of adversarial learning, and larger networks should be used for fitting this complex distribution. Besides our findings on network depth, show increase network width also substantially improve network robustness. These empirical observations also corroborate with the recent theoretical studies (Nakkiran, 2019;Gao et al., 2019) which argues that robust adversarial learning needs much more complex classifiers. 
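As a rough illustration of "going deeper by adding residual blocks", the torchvision ResNet constructor can simply be given longer block configurations; the exact block layouts of ResNet-338 and ResNet-638 are not specified in the text, so the counts below are hypothetical examples that only match the nominal depth formula.

```python
from torchvision.models.resnet import ResNet, Bottleneck

# Depth of a bottleneck ResNet = 3 * sum(blocks) + 2 (stem conv + final fc).
resnet152 = ResNet(Bottleneck, [3, 8, 36, 3])    # standard: 3 * 50 + 2 = 152
resnet338 = ResNet(Bottleneck, [3, 30, 73, 6])   # hypothetical: 3 * 112 + 2 = 338
resnet638 = ResNet(Bottleneck, [3, 60, 143, 6])  # hypothetical: 3 * 212 + 2 = 638
```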
Besides adversarial robustness, we also observe a consistent performance gain on clean image accuracy by increasing network depth (as shown in Table 7). Our deepest network, ResNet-638, achieves an accuracy of 68.7% on clean images, outperforming the relatively shallow network ResNet-152 by 6.1%. CONCLUSION In this paper, we reveal two intriguing properties of adversarial training at scale: (1) conducting normalization in the right manner is essential for training robust models on large-scale datasets like ImageNet; and (2) our so-called "deep" networks are still shallow for the task of adversarial learning. Our discoveries may also be inherently related to our two-domain hypothesis -clean images and adversarial images are drawn from different distributions. We hope these findings can facilitate fellow researchers for better understanding of adversarial training as well as further improvement of adversarial robustness. A DIAGNOSIS ON ALP TRAINING PARAMETERS In the main paper, we note that our reproduced ALP significantly outperforms the results reported in Kannan et al. (2018), as well in an independent study . The main differences between our version and the original ALP implementation lie in parameter settings, and are detailed as follows: • learning rate decay: the original ALP decays the learning rate every two epochs at an exponential rate of 0.94, while ours decays the learning rate by 10× at the 35-th, 70-th and 95-th epoch. To ensure these two policies reach similar learning rates by the end of training, the total number of training epochs of the exponential decay setting and the step-wise decay setting are set as 220 and 110 respectively. • initial learning rate: the original ALP sets the initial learning rate as 0.045 whereas we set it as 0.1 in our implementation. • training optimizer: the original ALP uses RMSProp as the optimizer while we use Momentum SGD (M-SGD). • PGD initialization during training: the original ALP initializes the adversarial perturbation from a random point within the allowed cube; while we initialize the adversarial image by its clean counterpart with probability = 0.2, or randomly within the allowed the cube with probability = 0.8. Table 3: The results of ALP re-implementations under different parameter settings. We show that applying stronger attackers for training, e.g., change from PGD-10 to PGD-30, is the most important factor for achieving strong robustness. Other parameters, like optimizer, do not lead to significant robustness changes. By following the parameter settings listed in the ALP paper 4 , we can train a ResNet-101 with an accuracy of 38.1% against PGD-10. The ResNet-101 performance reported in the ALP paper is 30.2% accuracy against an attack suite 5 . This ∼8% performance gap is possibly due to different attacker settings in evaluation. However, by evaluating this model against PGD-2000, we are able to obtain a similar result that reported in , i.e., reports ALP obtains 0% accuracy, and in our implementation the accuracy is 2.1%. Given these different settings, we change them one by one to train corresponding models adversarially. The results are summarized in Table 3. Surprisingly, we find the most important factor for the performance gap between original ALP paper and ours is the attacker strength used for trainingby changing the attacker from PGD-10 to PGD-30 for training, the robustness against PGD-2000 can be increased by 19.7%. 
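Because the attacker strength used for training turns out to be the decisive factor, a minimal PGD sketch is included here for reference. The epsilon = 16/255 budget and the 0.2/0.8 clean-versus-random initialization follow the settings described above, while the function name, its signature, and the step size alpha are our own illustrative choices.

```python
import torch

def pgd_attack(model, images, labels, eps=16/255, alpha=1/255,
               iters=30, clean_init_prob=0.2):
    """L_inf PGD. With probability clean_init_prob start from the clean image,
    otherwise from a uniformly random point inside the eps-ball."""
    images = images.clone().detach()
    if torch.rand(1).item() < clean_init_prob:
        x = images.clone()
    else:
        x = images + torch.empty_like(images).uniform_(-eps, eps)
    x = x.clamp(0, 1)

    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(iters):
        x.requires_grad_(True)
        loss = loss_fn(model(x), labels)
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x = x + alpha * grad.sign()                  # ascend the loss
            x = images + (x - images).clamp(-eps, eps)   # project back to the eps-ball
            x = x.clamp(0, 1)
    return x.detach()
```

Increasing iters (e.g., from 10 to 30 for training, or up to 2000 for evaluation) is how the attacker strength discussed above is varied.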
Other parameters, like network backbones or the GPU number, do not lead to significant performance changes. B EXPLORING THE IMPACT OF PARAMETER SETTINGS IN ADVERSARIAL TRAINING In this section, we explore the impact of different training parameters on model performance. B.1 PGD-N FOR TRAINING As suggested in Table 3, the number of attack iteration used for training is an important factor for model robustness. We hereby provide a detailed diagnosis of model performance trained with PGD-{5, 10, 20} 6 for different training strategies. We report the performance in Table 4, and observe that decreasing the number of PGD attack iteration used for training usually leads to weaker robustness. Nonetheless, we note the amount of this performance change is strongly related to training strategies. For strategies that cannot lead to models with strong robustness, i.e., 100% adv + 100% clean and 100% adv + 100% clean, ALP, this robustness degradation is extremely severe (which is similar to the observation in Table 3). For example, by training with PGD-5, these two strategies obtains nearly no robustness, i.e., ∼0% accuracy against PGD-2000. While for strategies that can secure model robustness against strong attacks, changing from PGD-30 to PGD-5 for training only lead to a marginal robustness drop. Table 4: Robustness evaluation of models adversarially trained with PGD-{30, 20, 10, 5} attackers. We observe that decreasing the number of PGD attack iteration for training usually leads to weaker robustness, while the amount of degraded robustness is strongly related to training strategies. B.2 APPLYING RUNNING STATISTICS IN TRAINING In Section 4.3 (of the main paper), we study the effectiveness of applying running statistics in training. We hereby test this heuristic policy under more different settings. Specifically, we consider 3 strategies, each trained with 4 different attackers (i.e., PGD-{5, 10, 20, 30}), which results in 12 different settings. We report the result in Table 5. We observe this heuristic policy can boost robustness on all settings, which further supports the importance of enforcing BN to behave consistently at training and testing. we study the training strategy 100% adv + 0% clean. The heuristic policy in Section 4.3 (of the main paper) is applied to achieve stronger robustness. Compared to the default setting (i.e., 4096 images/batch), training with smaller batch size leads to better robustness. For example, changing batch size from 4096 to 1024 or 2048 can improve the model robustness by ∼1%. While training with much smaller (i.e., 512 images/batch) or much larger (i.e., 8192 images/batch) batch size results in a slight performance degradation. C PERFORMANCE OF ADVERSARIALLY TRAINED MODELS In the main paper, our study is driven by improving adversarial robustness (measured by the accuracy against PGD-2000), while leaving the performance on clean images ignored. For the completeness of performance evaluation, we list the clean image performance of these adversarially trained models in Table 7. Moreover, to facilitate performance comparison in future works, we list the corresponding accuracy against PGD-{10, 20, 100, 500} in this
4,685
1906.03657
2948290717
Group convolution works well with many deep convolutional neural networks (CNNs) that can effectively compress the model by reducing the number of parameters and computational cost. Using this operation, feature maps of different group cannot communicate, which restricts their representation capability. To address this issue, in this work, we propose a novel operation named Hierarchical Group Convolution (HGC) for creating computationally efficient neural networks. Different from standard group convolution which blocks the inter-group information exchange and induces the severe performance degradation, HGC can hierarchically fuse the feature maps from each group and leverage the inter-group information effectively. Taking advantage of the proposed method, we introduce a family of compact networks called HGCNets. Compared to networks using standard group convolution, HGCNets have a huge improvement in accuracy at the same model size and complexity level. Extensive experimental results on the CIFAR dataset demonstrate that HGCNets obtain significant reduction of parameters and computational cost to achieve comparable performance over the prior CNN architectures designed for mobile devices such as MobileNet and ShuffleNet.
Most works that apply this approach improve the efficiency of CNNs via weight pruning @cite_7 @cite_6 @cite_0 and quantization @cite_13 . These approaches are effective because deep neural networks often have a substantial number of redundant weights that can be pruned or quantized without sacrificing much accuracy. For convolutional neural networks, different pruning techniques may lead to different levels of granularity. Fine-grained pruning, e.g., independent weight pruning @cite_7 , generally achieves a high degree of sparsity. However, it requires storing a large number of indices, and relies on special hardware/software accelerators. In contrast, coarse-grained pruning methods such as filter-level pruning @cite_0 achieve a lower degree of sparsity, but the resulting networks are much more regular, which facilitates efficient implementations. These approaches are simple and intuitive; however, they commonly rely on an iterative optimization strategy, which slows down the training procedure.
{ "abstract": [ "The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations.", "We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32 ( ) memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58 ( ) faster convolutional operations (in terms of number of the high precision operations) and 32 ( ) memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than (16 , ) in top-1 accuracy. Our code is available at: http: allenai.org plato xnornet.", "In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhance the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5× speed-up along with only 0.3 increase of error. More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4 , 1.0 accuracy loss under 2× speedup respectively, which is significant.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. 
To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy." ], "cite_N": [ "@cite_0", "@cite_13", "@cite_6", "@cite_7" ], "mid": [ "2962851801", "2300242332", "2963363373", "2963674932" ] }
HGC: Hierarchical Group Convolution for Highly Efficient Neural Network
Deep convolutional neural networks (CNNs) have shown remarkable performance in many computer vision tasks in recent years. In order to achieve higher accuracy for major tasks such as image classification, building deeper and wider CNNs [22,24,4,10] is the primary trend. However, deeper and wider CNNs usually have hundreds of layers and thousands of channels, which come with an increasing amount of parameters and computational cost. For example, one of the classic networks, VGG16 [22] with 130 million parameters needs more than 30 billion floating-point operations (FLOPs) to classify a single image, it fails to achieve real-time classification even with a powerful GPU. And many real-world applications often need to be performed on limited resource in real-time, e.g., mobile devices. Thereby, the model should be compact to reduce computational cost and achieve better trade-off between efficiency and accuracy. Recently, many research work focus on the field of model compression [12,25,7,28,21]. These works can be separated into two main kinds of approaches: compression for per-trained network and efficient architecture design. The compressing approach usually bases on traditional compression techniques such as pruning and quantization which removes connections to eliminate redundancy or reduce the number of bits to represent the parameters. These approaches are simple and intuitive, but always needs multiple steps, i.e., pretraining and compressing, thus cannot do an end-to-end training at one time. The second approach trains model from scratch in a fully end-to-end manner. It usually utilizes a sequence of sparsely-connected convolutions rather than the standard fully-connected convolution to design new efficient architectures. For instance, in the ShuffleNet [28], the original 3 × 3 convolution is replaced with a 3 × 3 depthwise convolution, while the 1 × 1 convolution is substituted with a pointwise group convolution. The application of group convolution significantly Figure 1: An illustration of group convolutions. (a) is a standard group convolution, which severely blocks the information flow between channels of each group. (b) is a group convolution with shuffle operation to facilitate inter-group information exchange, but still suffers from much loss of inter-group information. Red lines show that each output only relates to three input channels. reduces amount of parameters and the computational cost. However, the group convolution blocks the information flow between channels of each group, as shown in Figure 1(a), the G groups are computed independently from completely separate groups of input feature maps, thus there is no interaction between each group and leads to severe performance degradation. Although ShuffleNet introduces a channel shuffle operation to facilitate inter-group information exchange, it still suffers from the loss of inter-group information. As shown in Figure 1(b), even with a shuffle operation, a large portion of the inter-group information cannot be leveraged. This problem is aggravated when number of channel groups increases. To solve the above issue, we propose a novel operation named Hierarchical Group Convolution (HGC) to effectively facilitate the interaction of information between different groups. In contrast to common group convolution, HGC can hierarchically fuse the feature maps from each group and leverage the inter-group information effectively. 
Specifically, we split the input feature maps of a layer into multiple groups, in which the first group features are extracted by a group of filters; output feature maps of the previous group are concatenated with the next group of input feature maps, and then feed to the next group of filters. This process repeats until all input feature maps are included. By exploiting the HGC operation and depthwise seperable convolution, we introduce the HGC module, a powerful and effective unit to build a highly efficient architecture called HGCNets. A series of controlled experiments show the effectiveness of our design. Compared to other structures, HGCNets perform better in alleviating the loss of inter-group information, and thus achieve substantial improvement as the group number increases. Our work brings following contributions and benefits: First, a new hierarchical group convolution operation is proposed to facilitate the interaction of information between different groups of feature maps and leverage the inter-group information effectively. Second, our proposed HGCNets achieve higher classification accuracy than prior compact CNNs at the same or even lower complexity. The rest of this paper is organized as follow: Section 2 provides an overview of the related work on model compression. The details of the proposed Hierarchical Group Convolution operation is introduced in Section 3. In Section 4, we describe the structure of the HGC module and HGCNets architecture. The performance evaluation of the proposed method is described in Section 5. Finally, we conclude this paper in Section 6. Compression for pre-trained networks Most of works applied this approach improve the efficiency of CNNs via weight pruning [3,5,18] and quantization [20]. These approaches are effective because deep neural networks often have a substantial number of redundant weights that can be pruned or quantized without sacrificing much accuracy. For convolutional neural networks, different pruning techniques may lead to different levels of granularity. Fine-grained pruning, e.g., independent weight pruning [3], generally achieves a high degree of sparsity. However, it requires storing a large number of indices, and relies on special hardware/software accelerators. In contrast, coarse-grained pruning methods such as filter-level pruning [18] achieve a lower degree of sparsity, but the resulting networks are much more regular, which facilitates efficient implementations. These approaches are simple and intuitive, however, iterative optimization strategy is commonly utilized in these approaches, which slows down the training procedure. Designing efficient architectures Considering the above-mentioned limitations, some researchers go other way to directly design efficient network architectures [7,28,27] that can be trained end-to-end using smaller filters, such as depthwise separable convolution, group convolution, and etc. Two well-known applicants of this kind of approach that are sufficiently efficient to be deployed on mobile devices are MobileNet [7] and ShuffleNet [28]. MobileNet exploited depthwise separable convolution as its building unit, which decompose a standard convolution into a combination of a depthwise convolution and a pointwise convolution. ShuffleNet utilize depthwise convolution and pointwise group convolution into the bottleneck unit, and proposed the channel shuffle operation to enable inter-group information exchange. 
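For reference, the channel shuffle operation just described can be written in a few lines; this is a generic sketch of the ShuffleNet-style permutation, not code taken from the cited papers.

```python
import torch

def channel_shuffle(x, groups):
    """Permute channels so that each output group mixes channels
    coming from every input group (ShuffleNet-style)."""
    n, c, h, w = x.size()
    assert c % groups == 0
    x = x.view(n, groups, c // groups, h, w)   # split channels into groups
    x = x.transpose(1, 2).contiguous()         # interleave the groups
    return x.view(n, c, h, w)
```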
Compact networks can be trained from the scratch, so the training procedure is very fast. Moreover, the model can be further compressed combined with the aforementioned model compression methods which are orthogonal to this approach, e.g., Huang [9] combined the channel pruning and group convolution to sparsify networks, however, this channel pruning methods obtain a sparse network based on a complex training procedure that requires significant cost of offline training and directly removing the input feature maps typically has limited compression and speedup with significant accuracy drop.In addition to the methods described above, some other approaches such as low-rank factorization [14] and knowledge distillation [6] can also efficiently accelerate deep neural network. Group Convolution Group convolution is a special case of a sparsely connected convolution. It was first used in the AlexNet [16] architecture, and has more recently been popularized by their successful application in ResNeXt [26]. Standard convolutional layers generate O output feature maps by applying convolutional filters over all I input feature maps, leading to a computational cost of I × O. In comparison, group convolution reduces this computational cost by partitioning the input features into G mutually exclusive groups and each group produces its own outputs-reducing the computational cost by a factor G to I×O G . However, the grouping operation usually compromises performance because there is no interaction among groups. As a result, information of feature maps in different groups is not combined, as opposed to the original convolution that combines information of all input channels, which restricts their representation capability. To solve this problem, in ShuffleNet [28], a channel shuffle operation is proposed to permute the output channels of group convolution and makes the output better related to the input. But any output group still only accesses I G input feature maps and thus collects partial information. Due to this reason, ShuffleNet has to employ a deeper architecture than MobileNet to achieve competitive results. Hierarchical Group Convolution Motivation In modern deep neural networks, the size of convolutional filters is mostly 3 × 3 or 1 × 1, and the main computational cost is from the convolutional layer, that the fully connected layer can be considered as a special case of the 1 × 1 convolutional layer. To reduce the parameters in convolution operation, an extremely efficient scheme is to replace standard 3 × 3 convolution by a 3 × 3 depth-wise separable Figure 2: The proposed Hierarchical Group Convolution operation. Input feature mapsI concatenate Output 1 X 2 X 3 X 4 X 1 Y 2 Y 3 Y 4 Y 1 1  1 1  1 1  1 1  convolution [1] followed by interleaved 1 × 1 group convolution [27,28]. This scheme significantly reduces the model size and therefore attracts increasing attention. Since the 1 × 1 filters are non-seperable, group convolution becomes a hopeful and feasible solution and works well with many deep neural network architectures. However, preliminary experiments show that a naive adaptation of group convolution in the 1 × 1 convolutional layer leads to drastic reductions in accuracy especially in dense architectures. As analyzed in CondenseNet [9], this is caused by the fact that the inputs to the 1 × 1 convolutional layer are concatenations of feature maps generated by preceding layers and they have an intrinsic order or they are far more diverse. 
The hard assignment of these features to disjoint groups hinders effective feature reuse in the network. More specifically, as investigated in work on network explanation, individual feature maps across different layers play different roles in the network, e.g., features from shallow layers usually encode low-level spatial visual information like edges, corners, circles, etc., and features from deep layers encode high-level semantic information. Group convolution severely blocks the inter-group information exchange and induces severe performance degradation. In order to facilitate the fusion of feature maps from each group and leverage the inter-group information effectively, we develop a novel approach, named the hierarchical group convolution operation, that efficiently overcomes the side effects brought by group convolution. Details of Hierarchical Group Convolution Details of the proposed Hierarchical Group Convolution are shown in Figure 2. Generally, a 1 × 1 standard convolutional layer transforms the input feature maps $X \in \mathbb{R}^{I \times H_{in} \times W_{in}}$ into the output feature maps $Y \in \mathbb{R}^{O \times H_{out} \times W_{out}}$ by using the filters $W \in \mathbb{R}^{O \times I \times 1 \times 1}$. Here, $I$ and $O$ are the numbers of input and output feature maps, respectively. In the HGC operation, the input channels and filters are divided into $G$ groups respectively, i.e., $I/G$ input channels and $O/G$ filters in each group, denoted as $X = \{X_1, X_2, \cdots, X_G\}$, where each $X_i \in \mathbb{R}^{\frac{I}{G} \times H_{in} \times W_{in}}$, and $W = \{W_1, W_2, \cdots, W_G\}$, where $W_i \in \mathbb{R}^{\frac{O}{G} \times \frac{I}{G} \times 1 \times 1}$ when $i = 1$, and $W_i \in \mathbb{R}^{\frac{O}{G} \times (\frac{O}{G} + \frac{I}{G}) \times 1 \times 1}$ when $i \in \{2, 3, \cdots, G\}$. Except for the first group of feature maps, which directly goes through $W_1$, the feature group $X_i$ is concatenated with the output $Y_{i-1}$ on the channel dimension, and then fed into $W_i$. Thus, $Y_i$ can be formulated as follows: $$Y_i = \begin{cases} X_i * W_i, & i = 1 \\ \mathrm{concatenate}(X_i, Y_{i-1}) * W_i, & 1 < i \le G \end{cases} \qquad (1)$$ where $*$ represents the 1 × 1 convolutional operation. For simplicity, the biases are omitted for easy presentation. After all input feature maps are processed, we finally concatenate each $Y_i$ as the output of HGC. Notice that each 1 × 1 convolutional operator could potentially receive information from all feature subsets $\{X_k, k \le i\}$ of the previous layer. Each time a feature group $X_k$ goes through a 1 × 1 convolutional operator, the output result can carry more information from the input feature maps. The split and concatenation strategy can effectively process feature maps with fewer parameters. The number of parameters of HGC is calculated as below: $$\frac{O}{G} \times \frac{I}{G} \times 1 \times 1 + (G - 1) \times \frac{O}{G} \times \left(\frac{O}{G} + \frac{I}{G}\right) \times 1 \times 1 \qquad (2)$$ Compared with the parameters of a standard convolution, the compression ratio $r$ of each layer is: $$r = \left(\frac{O}{I \times G} + \frac{1}{G}\right) \times \left(1 - \frac{1}{G}\right) + \frac{1}{G^2} \approx \frac{2}{G} \times \left(1 - \frac{1}{G}\right) + \frac{1}{G^2} \qquad (3)$$ As can be observed in Eq. (3), HGC contains only about a fraction $2/G$ of the parameters of a standard convolution. Although it introduces only a negligible parameter increase over standard group convolution, HGC has a stronger feature representation ability. As will be shown in Section 5.1, HGC yields a substantial improvement in accuracy, especially for a large number of groups. HGCNet HGC module Taking advantage of the proposed HGC operation, we propose a novel HGC module specially designed for efficient neural networks. The HGC module is shown in Figure 3(b). The typical bottleneck structure shown in Figure 3(a) is a basic building block in many modern backbone CNN architectures, e.g., DenseNet [10].
Instead of directly extracting features using a group of 1 × 1 convolutional filters as in the bottleneck, we use HGC operation with stronger inter-group information exchange ability, while maintaining similar computational load. A channel shuffle operation before the HGC allows for more inter-group information exchange. Finally, feature maps from all groups are concatenated and sent to a computational economical 3 × 3 depthwise seperable convolution to capture spatial information. The usage of batch normalization [13] and nonlinearity [19] is similar to Xception [1], that we do not use ReLU before depthwise convolution. As discussed in Section 3.1, the information contained in each output group gradually increase, which results in each channel has different contribution to latter layers. Thus, we can integrate the SE [8] block to the HGC module to adaptively re-calibrates channel-wise feature responses by explicitly modeling importance of each channel. Our HGC module can benefit from the integration of the SE block, which we have experimentally demonstrated in Section 5.3. HGCNet Architecture Combined with the efficient HGC module and dense connectivity, we propose HGCNets, a new family of compact neural networks. Similar to CondenseNet [9], we exponentially increasing the growth rate as the depth grows to increase the proportion of features coming from later layers relative to those from earlier layers due to the fact that deeper layers in DenseNet tend to rely on high-level features more than on low-level features. For simplicity, we multiply the growth rate by a power of 2. The overall architecture of HGCNets for CIFAR classification is group into three stages. The number of HGC module output channels is kept the same to the growth-rate in each stage, and doubled in the next stage. Experiments In this section, we evaluate the effectiveness of our proposed HGCNets on the CIFAR-10, CIFAR-100 [15] image classification datasets. We implement all the proposed models using the Pytorch framework [2]. Datasets. The CIFAR-10 and CIFAR-100 datasets consist of RGB images of size 32 × 32 pixels, corresponding to 10 and 100 classes, respectively. Both datasets contain 50,000 training images and 10,000 test images. We use a standard data-augmentation scheme [11,17,23], in which the images are zero-padded with 4 pixels on each side, randomly cropped to produce 32 × 32 images, and horizontally mirrored with probability 0.5. Ablation study on CIFAR We first perform a set of experiments on CIFAR-10 to validate the effectiveness of the efficient HGC operation and the proposed HGCNets. Training details. We train all models with stochastic gradient descent (SGD) using similar optimization hyperparameters as in [4,10], Specifically, we adopt Nesterov momentum with a momentum weight decay of 10 −4 . All models are trained with mini-bath size 128 for 300 epochs, unless otherwise specified. We use a cosine shape learning rate which starts from 0.1 and gradually reduces to 0. Ablation Study. For better contrast with standard group convolution (SGC), we replace the hierarchical group convolution with SGC in the HGC module which is formed the SGCNets. We first explore the accuracy of them with respect to different number of groups, the results are shown in Table 1 and Figure 4(a). When the group number is kept the same, HGCNets surpass SGCNets by a large margin. 
As can be seen, the accuracy drops dramatically when the standard group convolution is applied to the 1 × 1 convolution, mainly due to the loss of representation capability from hard assignment. Differently, our HGC successfully generates more discriminative features and maintains the accuracy even with large number of groups. More importantly, HGCNets gain substantial improvements as the group number increases. Figure 4(b) shows the computational efficiency gains brought by the HGC. Compared to SGCNets, HGCNets require 30% fewer parameters to achieve comparable performance. As discussed above, increasing G makes more inter-group connections lost, which aggravates the loss of inter-group information and harms the representation capability. However, the hierarchical group convolution fuses the features from all channels hierarchically and generates more discriminative features than ShuffleNet. As shown in Figure 5, HGCNet overcomes the performance degradation and has a better convergence than the network which uses standard group convolution. These improvements are consistent with our initial motivation to design HGC module. Comparison to state-of-the-art compact CNNs In Table 2, we show the results of experiments comparing HGCNets with alternative state-of-the-art compact CNN architectures. Following [10], our models were trained for 300 epochs, and set G to 4 for better tradeoff between the compression and accuracy. From the results, we can observe that HGCNets require fewer parameters and FLOPs to achieve a better accuracy than MobileNets and ShuffleNets. Comparison to state-of-the-art large CNNs In this subsection, we experimentally demonstrate that the proposed HGCNets, as a lightweight architecture, can still outperform state-of-the-art large models, e.g., ResNet [4]. We can also integrate the SE-block [8] to the HGC module to adaptively recalibrate channel-wise feature responses by explicitly modeling importance of each channel. As shown in Table 3, the original HGCNets can already outperform 110-layer ResNet using 6x fewer parameters. When we insert SE block into HGC module, the top-1 error of HGCNet on CIFAR-10 further decreases to 5.81%, with negligible increase in the number of parameters. Conclusion In this paper, we propose a novel hierarchical group convolution operation to perform model compression by replacing standard group convolution in deep neural networks. Different from standard group convolution which blocks the inter-group information exchange and induce the severe performance degradation, HGC can effectively leverage the inter-group information and generate more discriminative features even with a large number of groups. Based on the proposed HGC, we propose HGCNets, a new family of compact neural networks. Extensive experiments show that HGCNets achieve higher classification accuracy than the prior CNNs designed for mobile devices at the same or even lower complexity.
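To make the HGC operation of Eq. (1) concrete, below is a minimal PyTorch sketch of a 1 × 1 hierarchical group convolution layer; the class name, the divisibility assumption on the channel counts, and the omission of batch normalization and activations are our own simplifications, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class HierarchicalGroupConv1x1(nn.Module):
    """1x1 hierarchical group convolution (Eq. 1): group i processes its own
    input split concatenated with the output of group i-1."""

    def __init__(self, in_channels, out_channels, groups):
        super().__init__()
        assert in_channels % groups == 0 and out_channels % groups == 0
        self.groups = groups
        cin, cout = in_channels // groups, out_channels // groups
        convs = [nn.Conv2d(cin, cout, kernel_size=1, bias=False)]
        for _ in range(groups - 1):
            convs.append(nn.Conv2d(cin + cout, cout, kernel_size=1, bias=False))
        self.convs = nn.ModuleList(convs)

    def forward(self, x):
        splits = torch.chunk(x, self.groups, dim=1)
        prev = self.convs[0](splits[0])
        outputs = [prev]
        for i in range(1, self.groups):
            prev = self.convs[i](torch.cat([splits[i], prev], dim=1))
            outputs.append(prev)
        return torch.cat(outputs, dim=1)
```

The first group uses a filter of shape (O/G, I/G, 1, 1) and every later group a filter of shape (O/G, O/G + I/G, 1, 1), matching the parameter count in Eq. (2).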
3,166
1906.03861
2948563793
Augmenting transformation knowledge onto a convolutional neural network's weights has often yielded significant improvements in performance. For rotational transformation augmentation, an important element of recent approaches has been the use of a steerable basis, i.e. the circular harmonics. Here, we propose a scale-steerable filter basis for the locally scale-invariant CNN, denoted as log-radial harmonics. By replacing the kernels in the locally scale-invariant CNN with scale-steered kernels, significant improvements in performance can be observed on the MNIST-Scale and FMNIST-Scale datasets. Training with a scale-steerable basis results in filters which show meaningful structure, and feature maps which demonstrate visibly higher spatial-structure preservation of the input. Furthermore, the proposed scale-steerable CNN shows on-par generalization to global affine transformation estimation methods such as Spatial Transformers, in response to test-time data distortions.
Scale-transformed weights were proposed in @cite_9 , where they were observed to improve performance over the normal baseline CNN on MNIST-Scale. On the same dataset (with a 10k, 2k and 50k split), better performance was observed in @cite_3 , where in addition to forwarding the maximum filter response over a range of scales, the actual scale at which the response was obtained was also forwarded. In both works, weight scaling was only emulated indirectly, by scaling the input instead and then resizing the convolution response back to a fixed size for max-pooling across scales.
{ "abstract": [ "Convolutional Neural Networks (ConvNets) have shown excellent results on many visual classification tasks. With the exception of ImageNet, these datasets are carefully crafted such that objects are well-aligned at similar scales. Naturally, the feature learning problem gets more challenging as the amount of variation in the data increases, as the models have to learn to be invariant to certain changes in appearance. Recent results on the ImageNet dataset show that given enough data, ConvNets can learn such invariances producing very discriminative features [1]. But could we do more: use less parameters, less data, learn more discriminative features, if certain invariances were built into the learning process? In this paper we present a simple model that allows ConvNets to learn features in a locally scale-invariant manner without increasing the number of model parameters. We show on a modified MNIST dataset that when faced with scale variation, building in scale-invariance allows ConvNets to learn more discriminative features with reduced chances of over-fitting.", "We study the effect of injecting local scale equivariance into Convolutional Neural Networks. This is done by applying each convolutional filter at multiple scales. The output is a vector field encoding for the maximally activating scale and the scale itself, which is further processed by the following convolutional layers. This allows all the intermediate representations to be locally scale equivariant. We show that this improves the performance of the model by over 20 in the scale equivariant task of regressing the scaling factor applied to randomly scaled MNIST digits. Furthermore, we find it also useful for scale invariant tasks, such as the actual classification of randomly scaled digits. This highlights the usefulness of allowing for a compact representation that can also learn relationships between different local scales by keeping internal scale equivariance." ], "cite_N": [ "@cite_9", "@cite_3" ], "mid": [ "252252322", "2887520328" ] }
Scale Steerable Filters for Locally Scale-Invariant Convolutional Neural Networks
Convolutional Neural Networks rise to success on large datasets like ImageNet in [2], has prompted a myriad of work in their direction, which build on their key depthpreserved transformation equivariance property to achieve better classifiers [3,4,5]. Equivariance to transformations has been thus recognized as an important pre-requisite to any classifier, and CNNs which are by definition translation equivariant have been recognized as a first important step in this direction. An underlying requirement to a transformation equivariant representation is the construction of transformed copies of filters, i.e. when the transformation is a translation, the 1 National University of Singapore, Singapore. Correspondence to: Rohan Ghosh <[email protected]>. operation becomes a convolution. A natural extension of this idea to general transformation groups led to the idea of Group-equivariant CNNs [3], where in the first layer, transformed copies of filter weights are generated. Subsequently, the application of group convolution ensures that the network stays equivariant to that transformation throughout. However, there are certain issues pertaining to the application of any (spatial) transformation on a filter: 1. There is no prior on the spatial complexity of a convolutional filter within a CNN, which means a considerable part of the filter space may contain filters which are not sensitive to the desired spatial transformation. Examples include rotation symmetric filters, high-frequency filters etc. 2. As noted in [4], most transformations are continuous in nature, necessitating interpolation for obtaining filter values at new locations. This usually leads to interpolation artifacts, which can have a greater disruptive effect when the filters are usually of small size. Steerable Filters To alleviate these issues, the use of a steerable filter basis for filter construction and learning was proposed in [6]. Steerable filters have the unique property, that allow them to be transformed by simply using linear combinations of an appropriate steerable filter basis. Importantly, the choice of the steerable basis allows one to control the transformation sensitivity of the final computed filter. Especially for a circular harmonic basis [7], we find that filters of order k are only sensitive to rotation shifts in the range (0, 2π/k). In this case, higher order filter responses show less sensitivity to input rotations, and simultaneously are of higher spatial frequency and complexity. Using a small basis of the first few filter orders enabled the authors of [4] to achieve state-of-the-art on MNIST-Rot classification (with small training data size). Contributions of this Work Log-Radial Harmonics: A scale steerable basis In this paper, we define filters which are steerable in their spatial scale using a complex filter basis we denote as log-radial harmonics. Each kernel of a CNN is represented as the real part of the linear combination of the proposed basis filters, arXiv:1906.03861v1 [cs.CV] 10 Jun 2019 which contains filters of various orders, analogous to circular harmonics. Furthermore, the scale steerable property permits exact representation of the filters in its scale simply through a linear combination of learnt complex coefficients on the log-radial harmonics. The filter form is conjugate to the circular harmonics, with the choice of filter order having a direct impact on the scale sensitivity of the resulting filters. 
Scale-Steered CNN (SS-CNN) Using the log-radial harmonics as a complex steerable basis, we construct a locally scale invariant CNN, where the filters in each convolution layer are a linear combination of the basis filters. For obtaining filter responses across scales, each filter is simultaneously steered in its scale and size, and the filter responses are eventually max-pooled. We demonstrate accuracy improvements with the scale-steered CNN on datasets containing global (MNIST-Scale, and FMNIST-Scale) and local (MNIST-Scale-Local; synthesized here) scale variations. Importantly, we find that on MNIST-Scale, the proposed SS-CNN achieves competitive accuracy to the Spatial Transformer Network [8], which due to its global affine re-sampling property has a natural advantage in this task. Related Work Previous work with Local Scale Invariant/Equivariant CNNs Scale-transformed weights were proposed in [1], where they were observed to improve performance over the normal baseline CNN on MNIST-Scale. On the same dataset (with a 10k, 2k and 50k split), better performance was observed in [9], where in addition to forwarding the maximum filter response over a range of scales, the actual scale at which the response was obtained was also forwarded. In both works, weight scaling was only emulated indirectly, by scaling the input instead and then resizing the convolution response back to a fixed size for max-pooling across scales. Methods Scale-steerable filters: Log-Radial Harmonics Similar to the rotation steerable circular harmonics, we can analogously construct a set of filters of the form $W(r, \varphi) = \Phi(\varphi) F(\log r)/r^m$. Since we wish to steer the scale of the filter, $\Phi$ is now of Gaussian form, whereas $F(\log r)$ is complex valued with unit norm, i.e. $e^{i(k \log r + \beta)}$. The proposed mathematical form of a scale steerable filter of order $k$ and centered on a particular $\varphi = \varphi_j$ is, $$S_{kj}(r, \varphi) = \frac{1}{r^m}\left(K(\varphi, \varphi_j) + K(\varphi, \varphi_j + \pi)\right) e^{i(k \log r + \beta)}, \qquad (1)$$ where $K(\varphi, \varphi_j) = e^{-d(\varphi, \varphi_j)^2 / 2\sigma_\varphi^2}$. Here $d(\varphi, \varphi_j)$ is the distance between the two angles $\varphi$ and $\varphi_j$. Example filters constructed using equation 1 are shown in Figure 5.1. When steering the above filter in scale, we find that a complex multiplication by $s^{m-2} e^{-i k \log s}$ suffices, where $s$ is the scale factor change. This we prove in the following theorem. Theorem 1. Given a circular input patch $I(a)$ within a larger image, which is defined within the $x, y$ range of $0 \le \sqrt{x^2 + y^2} \le a$. Let $I^s(a)$ denote the same patch when the image was scaled around the centre of the patch by a factor of $s$. We then have $$I^s(a) \star S_{kj}(a) = s^{m-2} e^{-i k \log s}\, I(as) \star S_{kj}(as), \qquad (2)$$ where $\star$ is the cross-correlation operator (in the continuous domain), used in the same context as in [7]. The proof of theorem 1 is shown in the appendix. (Figure: the scale-steerable complex basis, and scale-steering via phase manipulation.) An immediate consequence of the above theorem is that for $a = \infty$ the theorem assumes a simpler form, $I^s \star S_{kj} = s^{m-2} e^{-i k \log s}\, I \star S_{kj}$. Scale steerability A useful consequence of steerability is that any filter expressed as a linear combination (with complex coefficients) of the steerable basis is also steerable. However, we want the filters to be real valued, and hence we only take the real part, $W^s_{\mathrm{Re}}(as) = \Re(W^s(as))$. Note that the equality in equation (2) holds for both the real and the imaginary parts on both sides of the equation, and thus working with the real part of the filters does not change steerability.
The result in Theorem 1 includes an additional change of radius from a to as. This indicates that the pixel values of W s are sampled across a circular region of radius as, which depends on the scale factor s. Finally, as noted in [10,7], steerability and sampling are interchangeable, therefore the sampled version of the scaled basis filters are same as the scaled version of the sampled filter. Scale-Invariant CNNs with Scale Steered Weights Here we describe the Scale-Steered CNN (SS-CNN), which employs a scale steeerable filter basis in the computation of its filters. Figure 2 shows the proposed scale-invariant layer. Each filter within the scale-invariant layers is computed as a linear combination of the assigned scale steerable basis S kj . The network directly only learns the complex coefficients c kj . At each scale-invariant layer, the scaled and resized versions of the filters are directly computed from the complex coefficients using equation 3. Only the maximum responses across all scales are channeled to the next layer, by max-pooling the responses across scales. Experiments First, to validate the proposed approach, datasets such as MNIST-Scale and FMNIST-Scale were chosen which contain global scale variations. In addition, a dataset containing local scale variations was also synthesized from MNIST. Subsequently, the filters and the activation maps within the SS-CNN are visualized. All experiments were run on a NVIDIA Titan GPU processor. The code has been released at https://github.com/rghosh92/SS-CNN. Classification with SS-CNN MNIST AND FMNIST The data partitioning protocol for MNIST-Scale is a 10k, 2k, and 50k split of the scaled version of original MNIST, into training, validation and testing data respectively. 2 We use the same split ratio for creating FMNIST-Scale, with the same range of spatial scaling (0.3, 1). No additional data augmentation was performed for all the networks. Global scale variations: MNIST and FMNIST The results on MNIST-Scale and FMNIST-Scale are shown in Table 1 3 . The proposed method is compared with three other CNN variants: Locally scale invariant CNN [1], scale equivariant vector fields [9] and spatial transformer networks 4 [8]. For a fair comparison, all networks used have a total of 3 convolutional layers and 2 fully connected layers. The number of trainable parameters for all four networks were kept approximately the same. Mean and standard deviations of accuracies are reported after 6 splits. 5 Generalization to Distortions Here we test and compare method performance on MNIST with added elastic distortions. The networks are all trained on the undistorted MNIST-Scale, but tested on MNIST-Scale with added elastic deformations. Results are shown in Table 2. We only record the performance for a single network (best performing) for each method. Table 3. The results demonstrate the superior performance of local scale-invariance based methods over global transformation estimation architectures such as spatial transformers, in a scenario where the data contains local scale variations. Visualization Experiments In this section we visualize the network filters and feature map activations for two scale-invariant networks: our proposed SS-CNN and the LocScaleInv-CNN. Both networks were trained on MNIST-Scale. Figure 3 (a) shows a visual comparison of the layer 1 filters for these networks. Notice that the scale-steered filters show considerably higher structure, centrality, and interesting filter form: some of them resembling oriented bars. 
Figure 3 (b) compares the average feature map activation of Layer 1 in response to different inputs. Notice that spatial structure is far better preserved in the SS-CNN responses (bottom row), with the digit outlines clearly distinguishable. This is partly due to the ingrained centrality of the scale-steered basis (the $1/r^m$ term), which generates a more structure-preserving response. Discussions Based on the proposed SS-CNN framework in this work, we underline some of the important issues and considerations moving forward. We also provide detailed explanations for some of the design choices used in this work. • Input Resizing vs Filter Scaling: For locally scale-invariant CNNs, usually the input is reshaped to a range of sizes, both smaller and greater than the original size [1,9]. Feature maps are obtained by convolving each resized input with an unchanged filter. Lastly, all the feature maps are reshaped back to a common size, beyond which only the maximum responses across scales are channeled. This approach uses two rounds of reshaping, and is thus clearly prone to interpolation artifacts, especially if the filters are not smooth enough. The method proposed in this work only steers the filters in their scale and size, without having to rely on any interpolation operations. Note that a change of filter size just requires computing the filter values at the new locations using equations 3 and 1. • Filter Centrality: If the filters are not central, i.e. centered near their centre of mass (centre of mass here holds the same definition as in physics; the "mass" element can be considered as the absolute value of the filter at a certain location), then they pose the risk of entangling scale and translation information. This happens when the filter response to the input at a certain scale and location is the same as the response of the same filter at a different scale and a different location. This can be quite common for filters which have most of their "mass" away from their center. Such entanglement can often lead to feature maps with distorted and over-smoothed spatial structure, as observed in Figure 3 (b) (top). This issue can be tackled to a certain extent by using filters which show centrality (Figure 3 (a)). As seen in equation 1, one can control the centrality of the steerable basis filters with the radial term ($1/r^m$), and by ensuring radially symmetric filters with $(K(\phi, \phi_j) + K(\phi, \phi_j + \pi))$ as the angular term. Figure 5.1 shows the central nature of the steerable basis. Filter centrality is preserved for the subsequently generated filters, as seen in Figure 3 (a) (left), which shows the generated filters after training. • Transformation Sensitivity: As noted in Section 1, an important yet partly overlooked aspect of using a steerable basis from the family of circular harmonics (or log-radial harmonics) is the ability to control the transformation sensitivity of the filters. For instance, circular harmonics beyond a certain order have a much smaller sensitivity to changes in input rotation. This is simply because each circular harmonic filter is invariant to discrete rotations of $2\pi/k$, $k$ being the filter order. Similarly, it is easily seen that each log-radial harmonic filter is invariant to scaling by a factor of $e^{\pm 2\pi/k}$. Therefore, higher order filters show considerably less transformation sensitivity. It is perhaps noteworthy that the 2D Fourier transform (or the 2D DCT) basis functions can also be used as a steerable basis (e.g. [11]). In that case, higher frequency (analogous to filter order) filters are less sensitive to input translations, compared to low frequency filters.
Therefore, in a certain sense, the circular harmonic and log-radial harmonic filter bases are a natural extension of the Fourier basis (translations) to other transformations (rotation and scale). Conclusions and Future Work A scale-steerable filter basis is proposed, which along with the popular rotation-steerable circular harmonics, can help augment CNNs with a much higher degree of transformational weight-sharing. Experiments on multiple datasets showcasing global and local scale variations demonstrated the performance benefits of using scale-steered filters in a scale-invariant framework. Scale-steered filters are found to showcase heightened centrality and structure. A natural trajectory for this approach will be to incorporate the scale-steering paradigm into equivariant architectures such as GCNNs.
2,404
1807.07466
2884370868
Semantic segmentation architectures are mainly built upon an encoder-decoder structure. These models perform subsequent downsampling operations in the encoder. Since operations on high-resolution activation maps are computationally expensive, usually the decoder produces output segmentation maps by upsampling with parameter-free operators like bilinear or nearest-neighbor interpolation. We propose a Neural Network named Guided Upsampling Network which consists of a multiresolution architecture that jointly exploits high-resolution and large context information. Then we introduce a new module named Guided Upsampling Module (GUM) that enriches upsampling operators by introducing a learnable transformation for semantic maps. It can be plugged into any existing encoder-decoder architecture with few modifications and low additional computation cost. We show with quantitative and qualitative experiments how our network benefits from the use of the GUM module. A comprehensive set of experiments on the publicly available Cityscapes dataset demonstrates that Guided Upsampling Network can efficiently process high-resolution images in real time while attaining state-of-the-art performance.
@cite_15 @cite_20 @cite_24 represent the pioneer works that employed CNNs for semantic segmentation. FCN @cite_22 laid the foundations for modern architectures where CNNs are employed in a fully-convolutional way. The authors used a pre-trained encoder together with a simple decoder module that takes advantage of skip-connections from lower layers to exploit high-resolution feature maps. They obtained a significant improvement both in terms of accuracy and efficiency. DeepLab @cite_27 made use of Dilated Convolutions @cite_19 to increase the receptive field of inner layers without increasing the overall number of parameters. After the introduction of Residual Networks (Resnets) @cite_17 most methods employed a very deep Resnet as encoder, e.g. DeepLabv2 @cite_10 , Resnet38 @cite_7 , FRRN @cite_25 , pushing forward the performance boundary on the semantic segmentation task. PSPNet @cite_3 and DeepLabv3 @cite_21 introduced context layers in order to expand the theoretical receptive field of inner layers. All these methods attain high accuracy on different benchmarks but at high computational costs.
{ "abstract": [ "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.", "The trend towards increasingly deep neural networks has been driven by a general observation that increasing depth increases the performance of a network. Recently, however, evidence has been amassing that simply increasing depth may not be the best way to increase performance, particularly given other limitations. Investigations into deep residual networks have also suggested that they may not in fact be operating as a single deep network, but rather as an ensemble of many relatively shallow networks. We examine these issues, and in doing so arrive at a new interpretation of the unravelled view of deep residual networks which explains some of the behaviours that have been observed experimentally. As a result, we are able to derive a new, shallower, architecture of residual networks which significantly outperforms much deeper models such as ResNet-200 on the ImageNet classification dataset. We also show that this performance is transferable to other problem domains by developing a semantic segmentation approach which outperforms the state-of-the-art by a remarkable margin on datasets including PASCAL VOC, PASCAL Context, and Cityscapes. The architecture that we propose thus outperforms its comparators, including very deep ResNets, and yet is more efficient in memory use and sometimes also in training time. The code and models are available at this https URL", "Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. 
We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0 and 82.1 without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at this https URL .", "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.", "", "State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy.", "Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. 
In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "", "Semantic image segmentation is an essential component of modern autonomous driving systems, as an accurate understanding of the surrounding scene is crucial to navigation and action planning. Current state-of-the-art approaches in semantic image segmentation rely on pre-trained networks that were initially developed for classifying images as a whole. While these networks exhibit outstanding recognition performance (i.e., what is visible?), they lack localization accuracy (i.e., where precisely is something located?). Therefore, additional processing steps have to be performed in order to obtain pixel-accurate segmentation masks at the full image resolution. To alleviate this problem we propose a novel ResNet-like architecture that exhibits strong localization and recognition performance. We combine multi-scale context with pixel-level accuracy by using two processing streams within our network: One stream carries information at the full image resolution, enabling precise adherence to segment boundaries. The other stream undergoes a sequence of pooling operations to obtain robust features for recognition. The two streams are coupled at the full image resolution using residuals. Without additional processing steps and without pre-training, our approach achieves an intersection-over-union score of 71.8 on the Cityscapes dataset.", "", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation." 
], "cite_N": [ "@cite_22", "@cite_7", "@cite_21", "@cite_3", "@cite_24", "@cite_19", "@cite_27", "@cite_15", "@cite_10", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2952632681", "2952147788", "2787091153", "2952596663", "", "2286929393", "1923697677", "2102605133", "", "2950668883", "", "2949650786" ] }
Guided Upsampling Network for Real-Time Semantic Segmentation
Most of the current state-of-the-art architectures for image segmentation rely on an encoder-decoder structure to obtain high-resolution predictions and, at the same time, to exploit large context information. One way to increase network receptive fields is to perform downsampling operations like pooling or convolutions with large stride. Reduction of spatial resolution is doubly beneficial because it also lightens the computational burden. Even state-of-the-art architectures that make use of dilated convolutions [5,23,25] employ some downsampling operators in order to keep the computation feasible. Semantic maps are usually predicted at 1/8 or 1/16 of the target resolution and then they are upsampled using nearest-neighbor or bilinear interpolation. Our focus and contribution We focus on semantic segmentation of street scenes for automotive applications, where a model needs to be run continuously on vehicles to make fast decisions in response to environmental events. For this reason, our design choices are the result of a trade-off between processing speed and accuracy. Our work focuses on a fast architecture with a lightweight decoder that makes use of a more effective upsampling operator. Our contributions are the following: • We developed a novel multi-resolution network architecture named Guided Upsampling Network, presented in Section 3, that is able to achieve high-quality predictions without sacrificing speed. Our system can process a 512x1024 resolution image on a single GPU at 33 FPS while attaining 70.4% IoU on the Cityscapes test dataset. • We designed our network in an incremental way, outlining pros and cons of every choice, and we included all crucial implementation details in Section 3.1 to make our experiments easily repeatable. • We designed a novel module named GUM (Guided Upsampling Module, introduced in Section 4) to efficiently exploit high-resolution clues during upsampling. [7,8,17] represent the pioneer works that employed CNNs for semantic segmentation. FCN [15] laid the foundations for modern architectures where CNNs are employed in a fully-convolutional way. The authors used a pre-trained encoder together with a simple decoder module that takes advantage of skip-connections from lower layers to exploit high-resolution feature maps. They obtained a significant improvement both in terms of accuracy and efficiency. DeepLab [2] made use of Dilated Convolutions [22] to increase the receptive field of inner layers without increasing the overall number of parameters. After the introduction of Residual Networks (Resnets) [9] most methods employed a very deep Resnet as encoder, e.g. DeepLabv2 [4], Resnet38 [21], FRRN [18], pushing forward the performance boundary on the semantic segmentation task. PSPNet [25] and DeepLabv3 [5] introduced context layers in order to expand the theoretical receptive field of inner layers. All these methods attain high accuracy on different benchmarks but at high computational costs. Dataset and evaluation metrics All the experiments presented in this work have been performed on Cityscapes [6]. It is a dataset of urban scene images with semantic pixelwise annotations. It consists of 5000 finely annotated high-resolution images (2048x1024), of which 2975, 500, and 1525 belong to the train, validation and test sets respectively. Annotations include 30 different object classes, but only 19 are used to train and evaluate models.
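Evaluation on these 19 classes is typically summarized with the mean of class-wise Intersection over Union (the mIoU metric adopted in the next section). A minimal NumPy sketch of this computation from a confusion matrix is given below; the ignore-label value of 255 is an assumption of this example.

import numpy as np

def mean_iou(preds, labels, num_classes=19, ignore_index=255):
    # preds, labels: integer arrays of the same shape; pixels labelled ignore_index are skipped
    mask = labels != ignore_index
    p, l = preds[mask], labels[mask]
    conf = np.bincount(num_classes * l + p, minlength=num_classes ** 2)
    conf = conf.reshape(num_classes, num_classes)  # rows: ground truth, columns: prediction
    inter = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)  # guard against classes absent from the evaluated data
    return iou[union > 0].mean()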
The adopted evaluation metrics are the mean of class-wise Intersection over Union (mIoU) and Frames Per Second (FPS), defined as the inverse of the time needed for our network to perform a single forward pass. The FPS values reported in the following sections are estimated on a single Titan Xp GPU. Network design In this section we describe our network architecture in detail. Most works in the literature present the final model followed by an ablation study. This is motivated by an implicit inductive prior towards simpler models, i.e. simpler is better. Even though we agree with this line of thought, we designed our experiments following a different path: by incremental steps. We started from a baseline model and incrementally added single features, analyzing benefits and disadvantages. Our network architecture, based on a fully-convolutional encoder-decoder, is presented in detail in the following subsections. Input downsampling A naive way to speed up the inference process in real-time applications is to subsample the input image. This comes at a price. Loss of fine details hurts performance because borders between classes and fine texture information are lost. We investigated a trade-off between system speed and accuracy. We used a DRN-D-22 model [23] pre-trained on ImageNet as encoder and a simple bilinear upsampling as decoder. The first column of Table 1 shows the mIoU of the baseline model without any subsampling. In the second column the same model is trained and evaluated with input images subsampled by a factor of 4. Model speed increases from 6.7 FPS to 50.6, which is far beyond real time but, as expected, there is a big (8%) performance drop. Multiresolution encoder As a second experiment we designed a multi-resolution architecture as a good compromise to speed up the system without sacrificing its discriminative power. Our encoder consists of two branches: a low-resolution branch which is composed of all the layers of a Dilated Residual Network 22 type D (DRN-D-22) [23] with the exception of the last two, and a medium-resolution branch with only the first layers of the DRN-D-22 before the dilated convolutions. The main idea is to induce the first branch to extract large context features while inducing the second to extract more local features that will help to recover fine details during decoding. We experimented with three different encoder configurations. The first, named enc24 in Table 1, consists of two branches that process input images with sub-sampling factors 2 and 4 with the structure defined above. (Table 1: Performance on Cityscapes validation set and speed (FPS) of four encoder architectures. baseline is a full-resolution network. enc4 is trained and evaluated with downsampled input. enc24 and enc124 mean 2 and 3 branches with subsampling factors 2,4 and 1,2,4 respectively. shared means that weights are partially shared between branches. In bold, the configuration adopted in the final model.) The second configuration, named enc24shared, is similar to the first. The only difference is weight sharing between the two branches. Results in Table 1 show that the network with shared branches achieves better performance. We argue that, by reducing the number of network parameters, weight sharing between branches induces an implicit form of regularization. For this reason we used this configuration as the base encoder for the next experiments. In the third configuration, named enc124shared in Table 1, we added a further branch to elaborate the full-resolution image.
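The enc24shared idea, one stem applied to the input subsampled by factors 2 and 4 so that both branches reuse the same weights, can be sketched as follows in PyTorch; the split point inside DRN-D-22 and the interpolation settings are assumptions of this example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedTwoBranchEncoder(nn.Module):
    def __init__(self, stem: nn.Module, context: nn.Module):
        super().__init__()
        self.stem = stem        # early DRN-D-22 layers (before the dilated convolutions)
        self.context = context  # remaining DRN-D-22 layers (dilated, large receptive field)

    def forward(self, x):
        x2 = F.interpolate(x, scale_factor=0.5, mode='bilinear', align_corners=False)
        x4 = F.interpolate(x, scale_factor=0.25, mode='bilinear', align_corners=False)
        med = self.stem(x2)                # medium-resolution branch: local features
        low = self.context(self.stem(x4))  # low-resolution branch: large-context features
        return med, low                    # both branches reuse the same stem weights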
Adding this full-resolution branch indeed brought some performance improvements, but we decided to discard this configuration because operations at full resolution are computationally too heavy and the whole system would slow down below the real-time threshold (30 FPS). To train and evaluate the different encoder designs in Table 1 we fixed the decoder architecture to a configuration which is referred to in Subsection 3 as baseline. Figure 1 depicts the second encoder design, enc24shared. The others have been omitted for space reasons but can be intuitively deduced. Fusion module. It is the first part of our decoder. It joins the information flows coming from the two encoder branches, extracted at multiple resolutions. Input from the low-resolution branch is up-sampled to match the spatial size of the signal coming from the medium-resolution branch. Input coming from the medium-resolution branch is expanded from 128 to 512 channels to match the number of features of the first branch. Then the multi-resolution signals are merged and further processed. Table 2 reports experimental results of four different designs. We experimented with channel concatenation and addition as merge strategies for the signals coming from the two branches, named concat and sum respectively. We further investigated whether the network benefits from feeding the final classification layer directly with the signal after the merge operation (base in Table 2), or whether a dimensionality reduction brings improvements (postproc). The experimental results in Table 2 show that both mIoU and speed take advantage of the post-processing step. The model is empowered by adding more convolutions and nonlinearities, and the final upsampling operations are applied to a smaller feature space. Figure 2 depicts two different configurations: base sum and postproc sum, both with the addition merge strategy, without and with the post-processing step. Training recipes In this section we present our training recipes: some considerations about hyper-parameters and the values used to train our models, plus a small paragraph on synthetic data augmentation. For all experiments in this paper we trained the network with SGD plus momentum. Following [23] we set the learning rate to 0.001 and trained every model for at least 250 epochs. We adopted a step learning rate policy. The initial value is decreased every 100 epochs by an order of magnitude. We also tried different base learning rates and the poly learning rate policy from [2] but we obtained better results with our baseline configuration. We found out that batch size is a very sensitive parameter affecting the final accuracy. After experimenting with different values we set it to 8. In contrast to what is pointed out in [3], increasing the batch size, in our case, hurts performance. Batch size affects performance because of intra-batch dependencies introduced by Batch Normalization layers. We argue that, in our case, the higher stochasticity introduced by intra-batch dependencies acts as a regularizer, thus effectively improving the final network performance. Synthetic data augmentation. Considering the low amount of data used to train our network, i.e. 2970 finely annotated images from the Cityscapes dataset, we decided to investigate some well-known data augmentation techniques. (Table 3: mIoU on Cityscapes validation set with different data augmentation techniques used during training. In bold, the configuration adopted in the final model.)
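For concreteness, a hedged PyTorch sketch of the postproc sum fusion design described earlier in this section (low-resolution stream upsampled, medium-resolution stream expanded from 128 to 512 channels, summed, then reduced before classification) might look as follows; the exact layer hyper-parameters are assumptions, not the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionModule(nn.Module):
    def __init__(self, med_ch=128, low_ch=512, out_ch=128, num_classes=19):
        super().__init__()
        self.expand = nn.Sequential(  # bring the medium-resolution stream to 512 channels
            nn.Conv2d(med_ch, low_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(low_ch), nn.ReLU(inplace=True))
        self.postproc = nn.Sequential(  # dimensionality reduction before the classifier
            nn.Conv2d(low_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.classifier = nn.Conv2d(out_ch, num_classes, kernel_size=1)

    def forward(self, med, low):
        low_up = F.interpolate(low, size=med.shape[2:], mode='bilinear', align_corners=False)
        fused = self.expand(med) + low_up  # "sum" merge strategy
        return self.classifier(self.postproc(fused))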
The application of these data augmentation techniques is almost cost-free in terms of computational resources. They do not increase processing time during inference and they can be applied as a CPU pre-processing step during training. This is in line with the research direction of this work, whose goal is to push forward accuracy while maintaining a real-time inference speed. Since our system is supposed to work with outdoor scenes, and thus to deal with highly variable lighting conditions, we experimented with some lighting transformations. Color Jitter consists of modifying image brightness, saturation and contrast in random order. Lighting Jitter is a PCA-based noise jittering from [13]; we used σ = 0.1 as the standard deviation to generate random noise. We also experimented with a geometric transform: rescaling the image with a scale factor between 0.5 and 2, borrowing values from [23]. Table 3 shows the results of applying the data augmentation techniques described in this section. Only random scale brought some improvements, thus we decided to include it in our training pipeline for the next experiments. Guided upsampling module In this section we introduce the Guided Upsampling Module (GUM). It stems from the intuition that generating a semantic map by predicting every single pixel independently is quite inefficient. As a matter of fact, most algorithms that perform semantic segmentation do not predict full-resolution maps [5,15,23,24,25]. They produce a low-resolution map that is upsampled with a parameter-free operator. Usually Nearest Neighbor or Bilinear upsampling is employed. When upsampling a low-resolution map, pixels close to object boundaries are often assigned to the wrong class, see Figure 3 (a). The idea behind GUM is to guide the upsampling operator through a guidance table of offset vectors that steer sampling towards the correct semantic class. Figure 3 (b) depicts the Guided Upsampling Module. A Guidance Module predicts a high-resolution Guidance Offset Table. Then GUM performs a Nearest Neighbor upsampling by exploiting the Offset Table as a steering guide. Each bidimensional coordinate vector of the regular sampling grid is summed with its corresponding bidimensional vector from the Guidance Offset Table. In Figure 3 the GUM module is presented in conjunction with Nearest Neighbor for simplicity; however, with simple modifications, GUM can be employed along with the Bilinear operator. Nearest Neighbor and Bilinear operators perform upsampling by superimposing a regular grid on the input feature map. Given $G_i$ the regular input sampling grid, the output grid is produced by a linear transformation $T_\theta(G_i)$. For the specific case of upsampling, $T_\theta$ is simply defined as $\begin{pmatrix} x_i^s \\ y_i^s \end{pmatrix} = T_\theta(G_i) = \begin{bmatrix} \theta & 0 \\ 0 & \theta \end{bmatrix} \begin{pmatrix} x_i^t \\ y_i^t \end{pmatrix}$, with $\theta \ge 1$, (1) where $(x_i^s, y_i^s) \in G_i$ are source coordinates, $(x_i^t, y_i^t)$ are target coordinates and $\theta$ represents the upsampling factor. Given $V_i$ the output feature map and $U_{nm}$ the input feature map, GUM can be defined as follows: $V_i = \sum_n^H \sum_m^W U_{nm}\, \delta(\lfloor x_i^s + p_i + 0.5 \rfloor - m)\, \delta(\lfloor y_i^s + q_i + 0.5 \rfloor - n)$ (2) where $\lfloor x_i^s + 0.5 \rfloor$ rounds coordinates to the nearest integer location and $\delta$ is a Kronecker delta function. Equation 2 represents a sum over the whole sampling grid $U_{nm}$ where, through the Kronecker function, only a single specific location is selected and copied to the output. $p_i$ and $q_i$ represent the two offsets that shift the sampling coordinates of each grid element in the x and y dimensions respectively.
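A toy NumPy illustration of equations (1) and (2) follows: each output coordinate of the regular upsampling grid is shifted by an offset (p_i, q_i) before nearest-neighbour sampling of the low-resolution map. The coordinate convention and the border clipping are assumptions of this sketch.

import numpy as np

def guided_nn_upsample(U, offsets, factor):
    # U: (H, W) low-resolution map; offsets: (H*factor, W*factor, 2) offsets in low-res pixel units
    H, W = U.shape
    out_h, out_w = H * factor, W * factor
    ys, xs = np.meshgrid(np.arange(out_h), np.arange(out_w), indexing='ij')
    # output pixel centres mapped back to low-resolution coordinates, then shifted by the offsets
    src_y = (ys + 0.5) / factor - 0.5 + offsets[..., 1]
    src_x = (xs + 0.5) / factor - 0.5 + offsets[..., 0]
    src_y = np.clip(np.floor(src_y + 0.5), 0, H - 1).astype(int)  # round to nearest, as in Eq. (2)
    src_x = np.clip(np.floor(src_x + 0.5), 0, W - 1).astype(int)
    return U[src_y, src_x]

U = np.arange(16.0).reshape(4, 4)
zero_offsets = np.zeros((8, 8, 2))
print(guided_nn_upsample(U, zero_offsets, 2))  # zero offsets reduce this to plain nearest-neighbour upsampling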
The offsets $p_i$ and $q_i$ are the output of a function $\phi_i$ of $i$, the Guidance Module, defined as $\phi_i = \begin{pmatrix} p_i \\ q_i \end{pmatrix}$ (3). Notice that $V_i$ and $U_{nm}$ are defined as bi-dimensional feature maps. The upsampling transformation is supposed to be consistent between channels; therefore, the equations presented in this section generalize to multi-channel feature maps. In a similar way the bilinear sampling operator can be defined as $V_i = \sum_n^H \sum_m^W U_{nm} \max(0, 1 - |x_i^s + p_i - m|) \max(0, 1 - |y_i^s + q_i - n|)$ (4). The resulting operator is differentiable with respect to $U$ and $p_i$. We do not need the operator to be differentiable with respect to $x_i^s$ because $G_i$ is a fixed regular grid. The equations above follow the notation used by Jaderberg et al. in [10]. In the following paragraph we will briefly outline the connection between the Guided Upsampling Module and Spatial Transformer Networks. Connection with Spatial Transformer Networks (STN) [10] STNs introduce the ability for Convolutional Neural Networks to spatially warp the input signal with a learnable transformation. The authors of [10] separate an STN into three distinct modules: Localization Net, Grid Generator and Grid Sampler. The Localization Net can be any function that outputs the transformation parameters conditioned on a particular input. The Grid Generator takes as input the transformation parameters and warps a regular grid to match that specific transformation. Finally, the Grid Sampler samples the input signal accordingly. Our Guided Upsampling Module can be interpreted as a Spatial Transformer Network where the Guidance Module plays the role of the Localization Net and Grid Generator together. An STN explicitly outputs the parameters of a transformation defined a priori and then applies them to warp the regular sampling grid. GUM directly outputs offsets in the x and y directions to warp the regular sampling grid without explicitly modeling the transformation. The Grid Sampler plays the exact same role in both GUM and STN. Since the Grid Sampler module is already implemented in major Deep Learning frameworks, e.g. PyTorch, TensorFlow, Caffe etc., integration of GUM within existing CNN architectures is quite straightforward. Guidance module. The role of the Guidance Module is to predict the Guidance Offset Table: the bidimensional grid that guides the upsampling process. The Guidance Module is a function whose output is a tensor of dimensions H x W x C, where H and W represent the height and width of the high-resolution output semantic map and C = 2 is the dimension containing the two offset coordinates w.r.t. x and y. We implemented the Guidance Module as a branch of our Neural Network, thus its parameters are trainable end-to-end by backpropagation together with the whole network. We experimented with three different designs for our Guidance Module, named large-rf, high-res and fusion. • large-rf: it is composed of three upsampling layers interleaved with Conv-BatchNorm-ReLU blocks. It takes the output of the fusion module and gradually upsamples it. This design relies on deep network layer activations with large receptive fields but doesn't exploit high-resolution information. It is the most computationally demanding, due to the number of layers required. • high-res: it is composed of a single convolutional layer that takes as input a high-resolution activation map from the last convolution before downsampling in the medium-resolution branch (see Section 3). The convolutional layer uses a 1x1 kernel and maps the 32-dimensional feature space to a 2-dimensional feature space.
It is almost free in terms of computational cost because, with our architecture, it only requires 64 additional parameters and the same number of additional per-pixel operations. • fusion: it lies in the middle between the large-rf and high-res modules. It merges information coming from high-resolution and large-receptive-field activation maps using the base sum fusion module described in Section 3. It is a good compromise in terms of efficiency since it requires only two Conv-BatchNorm blocks and a single upsampling layer. Despite that, it is the design with the most impact on performance because it exploits the required semantic information while being at the same time faster than the iterative upsampling of the large-rf design. Table 4 reports mIoU on the Cityscapes validation set and the speed of the overall network in FPS. The best performance is achieved with the fusion Guidance Module. Boundaries analysis To assess the behavior of our Guided Upsampling Network near object boundaries we performed a trimap experiment inspired by [2,5,11,12]. The trimap experiment in [2,5] was run on Pascal VOC, where semantic annotations include a specific class to be ignored in training and evaluation in correspondence with object boundaries. The trimap experiment was carried out by gradually increasing annotation borders with a morphological structuring element and considering for the evaluation only pixels belonging to the expanded boundaries. To the best of our knowledge we are the first to perform the trimap experiment on the Cityscapes dataset. Since there is no boundary class to expand, we decided to implement the experiment in a different but equivalent way: for each object class independently we computed the distance transform on the ground-truth maps. Then we performed the trimap experiment by gradually increasing the threshold on our computed distance transform map to include pixels at different distances from object boundaries. Figure 4 shows a qualitative and quantitative comparison of the three Guidance Modules, i.e. large-rf, high-res and fusion, with respect to the baseline, where the baseline is the exact same network with bilinear upsampling instead of GUM. There is a clear advantage in using GUM versus the baseline. The type of Guidance Module does not drastically affect the results, even though GUM with fusion achieves slightly higher mIoU levels. Comparison with the state-of-the-art In Table 5 we report the performance of Guided Upsampling Network along with state-of-the-art methods on the Cityscapes test set. Segmentation quality has been evaluated by the Cityscapes evaluation server and is reported in the official leaderboard. The FPS values in Table 5 have been estimated on a single Titan Xp GPU. For fairness we only included algorithms that declare their running time on the Cityscapes leaderboard, even though DeepLabv3+ [5] has been listed in Table 5 as a reference for accuracy-oriented methods. Usually, methods that do not care about processing time are computationally heavy. Most of them, e.g. PSPNet and DeepLabv3 [5,25], achieve very high mIoU levels; DeepLabv3+ is the best published model to date, reaching 81.2%, but these methods adopt very time-consuming multi-scale testing to increase accuracy. Our Guided Upsampling Network achieves 70.4% mIoU on the Cityscapes test set without any post-processing. To the best of our knowledge this is the highest mIoU for a published method running at >30 FPS. It performs even better than some methods like Adelaide, Dilation10 etc. that do not care about speed.
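Since the Grid Sampler is available in common frameworks, a compact PyTorch sketch of GUM can be written around torch.nn.functional.grid_sample, here paired with a high-res-style guidance branch (a 1x1 convolution producing the 2-channel Guidance Offset Table). The scaling of the offsets into grid_sample's normalized coordinates is an assumption of this example, not the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GUM(nn.Module):
    def __init__(self, guide_ch=32, mode='bilinear'):
        super().__init__()
        self.guidance = nn.Conv2d(guide_ch, 2, kernel_size=1)  # predicts the 2-channel Guidance Offset Table
        self.mode = mode  # 'nearest' or 'bilinear', matching Eq. (2) or Eq. (4)

    def forward(self, low_res_logits, high_res_guide):
        n, _, h, w = high_res_guide.shape
        offsets = self.guidance(high_res_guide)  # (N, 2, H, W)
        # regular upsampling grid in grid_sample's normalized [-1, 1] coordinates
        ys = torch.linspace(-1, 1, h, device=offsets.device)
        xs = torch.linspace(-1, 1, w, device=offsets.device)
        gy, gx = torch.meshgrid(ys, xs, indexing='ij')
        grid = torch.stack((gx, gy), dim=-1).unsqueeze(0).expand(n, h, w, 2)
        grid = grid + offsets.permute(0, 2, 3, 1)  # offsets assumed to be expressed in normalized units here
        return F.grid_sample(low_res_logits, grid, mode=self.mode, align_corners=False)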
Conclusions We proposed a novel network architecture to perform real-time semantic segmentation of street scene images. It consists of a multiresolution architecture to jointly exploit high-resolution textures and large context information. We introduced a new module named Guided Upsampling Module to improve upsampling operators by learning a transformation conditioned on high-resolution details. We included GUM in our network architecture and we experimentally demonstrated performance improvements with low additional computational costs. We evaluated our network on the Cityscapes test dataset, showing that it is able to achieve 70.4% mIoU while running at 33.3 FPS on a single Titan Xp GPU. Further details and a demo video can be found on our project page: http://www.ivl.disco.unimib.it/activities/semantic-segmentation. (Table 5: Comparison with state-of-the-art methods on the Cityscapes test set, sorted by increasing mIoU. Our method in boldface.) (Figure 5: From top to bottom, respectively: input image, ground truth, and prediction obtained with our Guided Upsampling Net.)
3,479
1807.06136
2884999974
Software systems are not static; they have to undergo frequent changes to stay fit for purpose, and in the process of doing so, their complexity increases. It has been observed that this process often leads to the erosion of the system's design and architecture and, with it, the decline of many desirable quality attributes, such as maintainability. This process can be captured in terms of antipatterns, i.e., atomic violations of widely accepted design principles. We present a visualisation that exposes the design of evolving Java programs, highlighting instances of selected antipatterns including their emergence and cancerous growth. This visualisation assists software engineers and architects in assessing, tracing and therefore combating design erosion. We evaluated the effectiveness of the visualisation in four case studies with ten participants.
Empirical studies on larger corpora of real-world programs started in the early 2000s and revealed that, surprisingly, antipatterns are prevalent @cite_7 . This was first discovered for circular dependencies @cite_18 , and later confirmed to apply to other antipatterns as well @cite_30 . Antipatterns can be detected by means of static analysis before a system is deployed. The main issue here is the use of dynamic programming language features that create dependencies that may not be visible when the static analysis models are built. This area is generally under-researched, and we must assume that the models used only under-approximate the behaviour of the actual program. In particular, dependency graphs may not contain all edges showing actual program dependencies.
{ "abstract": [ "To deal with the challenges when building large and complex systems modularisation techniques such as component-based software engineering and aspect-oriented programming have been developed. In the Java space these include dependency injection frameworks and dynamic component models such as OSGi. The question arises as to how easy it will be to transform existing systems to take advantage of these new techniques. Anecdotal evidence from industry suggests that the presence of certain patterns presents barriers to refactoring of monolithic systems into a modular architecture. In this paper, we present such a set of patterns and analyse a large set of open-source systems for occurrences of these patterns. We use a novel, scalable static analyser that we have developed for this purpose. The key findings of this paper are that almost all programs investigated have a significant number of these patterns, implying that modularising will be therefore difficult and expensive.", "Advocates of the design principle avoid cyclic dependencies among modules have argued that cycles are detrimental to software quality attributes such as understandability, testability, reusability, buildability and maintainability, yet folklore suggests such cycles are common in real object-oriented systems. In this paper we present the first significant empirical study of cycles among the classes of 78 open- and closed-source Java applications. We find that, of the applications comprising enough classes to support such a cycle, about 45 have a cycle involving at least 100 classes and around 10 have a cycle involving at least 1,000 classes. We present further empirical evidence to support the contention these cycles are not due to intrinsic interdependencies between particular classes in a domain. Finally, we attempt to gauge the strength of connection among the classes in a cycle using the concept of a minimum edge feedback set.", "In order to increase our ability to use measurement to support software development practise we need to do more analysis of code. However, empirical studies of code are expensive and their results are difficult to compare. We describe the Qualitas Corpus, a large curated collection of open source Java systems. The corpus reduces the cost of performing large empirical studies of code and supports comparison of measurements of the same artifacts. We discuss its design, organisation, and issues associated with its development." ], "cite_N": [ "@cite_30", "@cite_18", "@cite_7" ], "mid": [ "1606125994", "2068521941", "2095938258" ] }
Visualizing Design Erosion: How Big Balls of Mud are Made
0
1807.06136
2884999974
Software systems are not static; they have to undergo frequent changes to stay fit for purpose, and in the process of doing so, their complexity increases. It has been observed that this process often leads to the erosion of the system's design and architecture and, with it, the decline of many desirable quality attributes, such as maintainability. This process can be captured in terms of antipatterns, i.e., atomic violations of widely accepted design principles. We present a visualisation that exposes the design of evolving Java programs, highlighting instances of selected antipatterns including their emergence and cancerous growth. This visualisation assists software engineers and architects in assessing, tracing and therefore combating design erosion. We evaluated the effectiveness of the visualisation in four case studies with ten participants.
There exist many approaches to visualise software evolution @cite_16 . Most visualisations aim to provide an improved understanding of the development activities by visualising structural changes, e.g. by using added and removed lines as metrics @cite_43 @cite_3 @cite_46 @cite_5 @cite_25 @cite_45 @cite_28 or by providing highly aggregated information @cite_13 @cite_32 @cite_4 . Our use case requires the visualisation of the structural evolution of the system and the antipattern instances at the same time. We are not aware of any evolution visualisation that supports this. There exist evolution visualisations of call graphs @cite_38 @cite_19 @cite_36 . However, they do not provide any structural information.
{ "abstract": [ "Large software systems have a rich development history. Mining certain aspects of this rich history can reveal interesting insights into the system and its structure. Previous approaches to visualize the evolution of software systems provide static views. These static views often do not fully capture the dynamic nature of evolution. We introduce the Evolution Storyboard, a visualization which provides dynamic views of the evolution of a software’s structure. Our tool implementation takes as input a series of software graphs, e.g., call graphs or co-change graphs, and automatically generates an evolution storyboard. To illustrate the concept, we present a storyboard for PostgreSQL, as a representative example for large open source systems. Evolution storyboards help to understand a system’s structure and to reveal its possible decay over time. The storyboard highlights important changes in the structure during the lifetime of a software system, and how artifacts changed their dependencies over time.", "C ode C rawler (in the remainder of the text CC) is a language independent, interactive, information visualization tool. It is mainly targeted at visualizing object-oriented software, and has been successfully validated in several industrial case studies over the past few years. CC adheres to lightweight principles: it implements and visualizes polymetric views, visualizations of software enriched with information such as software metrics and other source code semantics. CC is built on top of Moose, an extensible language independent reengineering environment that implements the FAMIX metamodel. In its last implementation, CC has become a general-purpose information visualization tool.", "", "", "Software evolution is one of the most important topics in modern software engineering research. It deals with complex information and large amounts of data. Software visualization can be helpful in this scenario, helping to summarize, analyze and understand software evolution data. This paper presents SourceMiner Evolution (SME), a software tool that uses an interactive differential and temporal approach to visualize software evolution. The tool is implemented as an Eclipse plug-in and has four views that are assembled directly from the IDE AST. The views portray the software from different perspectives. One view shows how metrics of a chosen software entity evolves over time. The other three views show differential comparisons of any two versions of a system structure, dependency and inheritance properties.", "Versioning systems such as CVS or Subversion exhibit a large potential to investigate the evolution of software systems. They are used to record the development steps of software systems as they make it possible to reconstruct the whole evolution of single files. However, they provide no good means to understand how much a certain file has been changed over time and by whom. In this paper we present an approach to visualize files using fractal figures, which: (1) convey the overall development effort; (2) illustrate the distribution of the effort among various developers; and (3) allow files to be categorized in terms of the distribution of the effort following gestah principles. Our approach allows us to discover files of high development efforts in terms of team size and effort intensity of individual developers. The visualizations allow an analyst or a project manager to get first insights into team structures and code ownership principles. 
We have analyzed Mozilla as a case study and we show some of the recovered team development patterns in this paper as a validation of our approach", "", "", "", "", "", "Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. During many years, visualization in 2D space has been actively studied, but in the last decade, researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects like: visual representations, interaction issues, evaluation methods and development tools. We also perform a survey of some representative tools to support different tasks, i.e., software maintenance and comprehension, requirements validation and algorithm animation for educational purposes, among others. Finally, we conclude identifying future research directions.", "", "" ], "cite_N": [ "@cite_38", "@cite_4", "@cite_28", "@cite_36", "@cite_32", "@cite_3", "@cite_43", "@cite_19", "@cite_45", "@cite_5", "@cite_46", "@cite_16", "@cite_13", "@cite_25" ], "mid": [ "2163308664", "2045240618", "", "", "2052955308", "2087053547", "", "", "", "", "", "2159601092", "", "" ] }
Visualizing Design Erosion: How Big Balls of Mud are Made
0
1807.05935
2949567128
Survival analysis in the presence of multiple possible adverse events, i.e., competing risks, is a pervasive problem in many industries (healthcare, finance, etc.). Since only one event is typically observed, the incidence of an event of interest is often obscured by other related competing events. This nonidentifiability, or inability to estimate true cause-specific survival curves from empirical data, further complicates competing risk survival analysis. We introduce Siamese Survival Prognosis Network (SSPN), a novel deep learning architecture for estimating personalized risk scores in the presence of competing risks. SSPN circumvents the nonidentifiability problem by avoiding the estimation of cause-specific survival curves and instead determines pairwise concordant time-dependent risks, where longer event times are assigned lower risks. Furthermore, SSPN is able to directly optimize an approximation to the C-discrimination index, rather than relying on well-known metrics which are unable to capture the unique requirements of survival analysis with competing risks.
Previous work on classical survival analysis has demonstrated the advantages of deep learning over statistical methods @cite_26 @cite_1 @cite_8 . The Cox proportional hazards model @cite_15 is the baseline statistical model for survival analysis, but it is limited since the dependent risk function is the product of a linear covariate function and a time-dependent function, which is insufficient for modeling complex non-linear medical data. @cite_1 replaced the linear covariate function with a feed-forward neural network as input for the Cox PH model and demonstrated improved performance. The current literature addresses competing risks based on statistical methods (the Fine-Gray model @cite_21 ), classical machine learning (Random Survival Forest @cite_7 @cite_22 ), multi-task learning ( @cite_4 ), etc., with limited success. These existing competing risk models are challenged by computational scalability issues for datasets with many patients and multiple covariates. To address this challenge, we propose a deep learning architecture for survival analysis with competing risks to optimize the time-dependent discrimination index. This is not trivial and will be elaborated in the next section.
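As an illustration of the idea attributed to @cite_1 above, the linear covariate term of the Cox model can be replaced by a small feed-forward network trained with the negative Cox partial log-likelihood. The sketch below is a simplified, assumption-laden example (no handling of tied event times, arbitrary layer sizes), not the cited implementation.

import torch
import torch.nn as nn

g = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))  # nonlinear risk score g(x)

def neg_cox_partial_loglik(scores, times, events):
    # scores: (N, 1) network outputs; times: (N,) survival times; events: (N,) 1 = observed, 0 = censored
    order = torch.argsort(times, descending=True)  # descending times so risk sets are cumulative prefixes
    s = scores[order].squeeze(-1)
    e = events[order].float()
    log_risk_set = torch.logcumsumexp(s, dim=0)  # log sum of exp(score) over each subject's risk set
    return -torch.sum((s - log_risk_set) * e) / e.sum().clamp(min=1.0)

x, t = torch.randn(64, 10), torch.rand(64)
d = (torch.rand(64) < 0.7).long()
loss = neg_cox_partial_loglik(g(x), t, d)
loss.backward()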
{ "abstract": [ "An accurate model of patient-specific kidney graft survival distributions can help to improve shared-decision making in the treatment and care of patients. In this paper, we propose a deep learning method that directly models the survival function instead of estimating the hazard function to predict survival times for graft patients based on the principle of multi-task learning. By learning to jointly predict the time of the event, and its rank in the cox partial log likelihood framework, our deep learning approach outperforms, in terms of survival time prediction quality and concordance index, other common methods for survival analysis, including the Cox Proportional Hazards model and a network trained on the cox partial log-likelihood.", "CONTINUOUS FAILURE TIMES AND THEIR CAUSES Basic Probability Functions Some Small Data Sets Hazard Functions Regression Models PARAMETRIC LIKELIHOOD INFERENCE The Likelihood for Competing Risks Model Checking Inference Some Examples Masked Systems LATENT FAILURE TIMES: PROBABILITY DISTRIBUTIONS Basic Probability Functions Some Examples Marginal vs. Sub-Distributions Independent Risks A Risk-Removal Model LIKELIHOOD FUNCTIONS FOR UNIVARIATE SURVIVAL DATA Discrete and Continuous Failure Times Discrete Failure Times: Estimation Continuous Failure Times: Random Samples Continuous Failure Times: Explanatory Variables Discrete Failure Times Again Time-Dependent Covariates DISCRETE FAILURE TIMES IN COMPETING RISKS Basic Probability Functions Latent Failure Times Some Examples Based on Bernoulli Trials Likelihood Functions HAZARD-BASED METHODS FOR CONTINUOUS FAILURE TIMES Latent Failure Times vs. Hazard Modelling Some Examples of Hazard Modelling Nonparametric Methods for Random Samples Proportional Hazards and Partial Likelihood LATENT FAILURE TIMES: IDENTIFIABILITY CRISES The Cox-Tsiatis Impasse More General Identifiability Results Specified Marginals Discrete Failure Times Regression Case Censoring of Survival Data Parametric Identifiability MARTINGALE COUNTING PROCESSESES IN SURVIVAL DATA Introduction Back to Basics: Probability Spaces and Conditional Expectation Filtrations Martingales Counting Processes Product Integrals Survival Data Non-parametric Estimation Non-parametric Testing Regression Models Epilogue APPENDIX 1: Numerical Maximisation of Likelihood Functions APPENDIX 2: Bayesian Computation Bibliography Index", "We introduce a new approach to competing risks using random forests. Our method is fully non-parametric and can be used for selecting event-specific variables and for estimating the cumulative incidence function. We show that the method is highly effective for both prediction and variable selection in high-dimensional problems and in settings such as HIV AIDS that involve many competing risks.", "", "", "Abstract With explanatory covariates, the standard analysis for competing risks data involves modeling the cause-specific hazard functions via a proportional hazards assumption. Unfortunately, the cause-specific hazard function does not have a direct interpretation in terms of survival probabilities for the particular failure type. In recent years many clinicians have begun using the cumulative incidence function, the marginal failure probabilities for a particular cause, which is intuitively appealing and more easily explained to the nonstatistician. The cumulative incidence is especially relevant in cost-effectiveness analyses in which the survival probabilities are needed to determine treatment utility. 
Previously, authors have considered methods for combining estimates of the cause-specific hazard functions under the proportional hazards formulation. However, these methods do not allow the analyst to directly assess the effect of a covariate on the marginal probability function. In this article we pro...", "Previous research has shown that neural networks can model survival data in situations in which some patients' death times are unknown, e.g. right-censored. However, neural networks have rarely been shown to outperform their linear counterparts such as the Cox proportional hazards model. In this paper, we run simulated experiments and use real survival data to build upon the risk-regression architecture proposed by Faraggi and Simon. We demonstrate that our model, DeepSurv, not only works as well as other survival models but actually outperforms in predictive ability on survival data with linear and nonlinear risk functions. We then show that the neural network can also serve as a recommender system by including a categorical variable representing a treatment group. This can be used to provide personalized treatment recommendations based on an individual's calculated risk. We provide an open source Python module that implements these methods in order to advance research on deep learning and survival analysis.", "" ], "cite_N": [ "@cite_26", "@cite_4", "@cite_22", "@cite_7", "@cite_8", "@cite_21", "@cite_1", "@cite_15" ], "mid": [ "2618421739", "562988351", "2020628257", "", "", "2038981426", "2415877456", "" ] }
Siamese Survival Analysis with Competing Risks
Survival analysis is a method for analyzing data where the outcome variable is the time to the occurrence of an event (death, disease, stock liquidation, mechanical failure, etc.) of interest. Competing risks are additional possible events or outcomes that "compete" with and may preclude or interfere with the desired event observation. Though survival analysis is practiced across many disciplines (epidemiology, econometrics, manufacturing, etc.), this paper focuses on healthcare applications, where competing risk analysis has recently emerged as an important analytical tool in medical prognosis [9,26,22]. With an increasing aging population, the presence of multiple coexisting chronic diseases (multimorbidities) is on the rise, with more than two-thirds of people aged over 65 considered multimorbid. Developing optimal treatment plans for these patients with multimorbidities is a challenging problem, where the best treatment or intervention for a patient may depend upon the existence and susceptibility to other competing risks. Consider oncology and cardiovascular medicine, where the risk of a cardiac disease may alter the decision on whether a cancer patient should undergo chemotherapy or surgery. Countless examples like this involving competing risks are pervasive throughout the healthcare industry and insufficiently addressed in it's current state. Contributions In both machine learning and statistics, predictive models are compared in terms of the area under the receiver operating characteristic (ROC) curve or the timedependent discrimination index (in the survival analysis literature). The equivalence of the two metrics was established in [11]. Numerous works on supervised learning [4,20,19,23] have shown that training the models to directly optimize the AUC improves out-of-sample (generalization) performance (in terms of AUC) rather than optimizing the error rate (or the accuracy). In this work, we adopt and apply this idea to survival analysis with competing risks. We develop a novel Siamese feed-forward neural network [3] designed to optimize concordance and account for competing risks by specifically targeting the time-dependent discrimination index [2]. This is achieved by estimating risks in a relative fashion so that the risk for the "true" event of a patient (i.e. the event which actually took place) must be higher than: all other risks for the same patient and the risks for the same true event of other patients that experienced it at a later time. Furthermore, the risks for all the causes are estimated jointly in an effort to generate a unified representation capturing the latent structure of the data and estimating cause-specific risks. Because our neural network issues a joint risk for all competing events, it compares different risks for the different events at different times and arranges them in a concordant fashion (earlier time means higher risk for any pair of patients). Unlike previous Siamese neural networks architectures [5,3,25] developed for purposes such as learning the pairwise similarity between different inputs, our architecture aims to maximize the distance between output risks for the different inputs. We overcome the discontinuity problem of the above metric by introducing a continuous approximation of the time-dependent discrimination function. This approximation is only evaluated at the survival times observed in the dataset. 
However, training a neural network only over the observed survival times will result in poor generalization and undesirable out-of-sample performance (in terms of the discrimination index computed at different times). In response to this, we add a loss term (to the loss function) which, for any pair of patients, penalizes cases where the longer event time does not receive lower risk. The nonidentifiability problem in competing risks arises from the inability to estimate the true cause-specific survival curves from empirical data [24]. We address this issue by bypassing the estimation of the individual cause-specific survival curves and utilizing concordant risks instead. Our implementation is agnostic to any underlying causal assumptions and therefore immune to nonidentifiability. We report statistically significant improvements over state-of-the-art competing risk survival analysis methods on both synthetic and real medical data. Problem Formulation We consider a dataset $\mathcal{H}$ comprising time-to-event information about $N$ subjects who are followed up for a finite amount of time. Each subject (patient) experiences an event $D \in \{0, 1, \dots, M\}$, where $D$ is the event type. $D = 0$ means the subject is censored (lost in follow-up or study ended). If $D \in \{1, \dots, M\}$, then the subject experiences one of the events of interest (for instance, the subject develops cardiac disease). We assume that a subject can only experience one of the above events and that the censorship times are independent of them [17,22,8,7,10,24]. $T$ is defined as the time-to-event, where we assume that time is discrete, $T \in \{t_1, \dots, t_K\}$ and $t_1 = 0$ ($t_i$ denotes the elapsed time since $t_1$). Let $\mathcal{H} = \{T_i, D_i, x_i\}_{i=1}^{N}$, where $T_i$ is the time-to-event for subject $i$, $D_i$ is the event experienced by subject $i$ and $x_i \in \mathbb{R}^S$ are the covariates of the subject (the covariates are measured at baseline and may include age, gender, genetic information etc.). The Cumulative Incidence Function (CIF) [8] computed at time $t$ for a certain event $D$ is the probability of occurrence of a particular event $D$ before time $t$ conditioned on the covariates of a subject $x$, and is given as $F(t, D \mid x) = \Pr(T \leq t, D \mid x)$. The cumulative incidence function evaluated at a certain point can be understood as the risk of experiencing a certain event before a specified time. In this work, our goal is to develop a neural network that can learn the complex interactions in the data, specifically addressing competing risks survival analysis. In determining our loss function, we consider that the time-dependent discrimination index is the most commonly used metric for evaluating models in survival analysis [2]. Multiple publications in the supervised learning literature demonstrate that approximating the area under the curve (AUC) directly and training a classifier leads to better generalization performance in terms of the AUC (see e.g. [4,20,19,23]). However, these ideas were not explored in the context of survival analysis with competing risks. We will follow the same principles to construct an approximation of the time-dependent discrimination index to train our neural network. We first describe the time-dependent discrimination index below. Consider an ordered pair of two subjects $(i, j)$ in the dataset. If subject $i$ experiences event $m$, i.e., $D_i = m$, and if subject $j$'s time-to-event exceeds the time-to-event of subject $i$, i.e., $T_j > T_i$, then the pair $(i, j)$ is a comparable pair.
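To make the comparable-pair construction concrete, here is a minimal sketch (ours, not the authors' code; array and function names are illustrative) that enumerates the comparable set for a given event from arrays of observed times and event labels:

```python
import numpy as np

def comparable_pairs(times, events, m):
    """Return ordered index pairs (i, j) where subject i experienced event m
    and subject j's observed time (event or censoring) is strictly later."""
    times, events = np.asarray(times), np.asarray(events)
    pairs = []
    for i in np.where(events == m)[0]:            # left subjects: experienced event m
        for j in np.where(times > times[i])[0]:   # right subjects: observed longer
            pairs.append((i, j))
    return pairs

# Toy example: 0 = censored, 1 and 2 are competing events.
print(comparable_pairs(times=[2, 5, 3, 7], events=[1, 0, 2, 1], m=1))
# -> [(0, 1), (0, 2), (0, 3)]
```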
The set of all such comparable pairs is defined as the comparable set for event $m$, and is denoted as $\mathcal{X}_m$. A model outputs the risk of the subject $x$ for experiencing the event $m$ before time $t$, which is given as $R_m(t, x) = F(t, D = m \mid x)$. The time-dependent discrimination index for a certain cause $m$ is the probability that a model accurately orders the risks of the comparable pairs of subjects in the comparable set for event $m$. The time-dependent discrimination index [2] for cause $m$ is defined as
$$C^t(m) = \frac{\sum_{k=1}^{K} \mathrm{AUC}_m(t_k)\, w_m(t_k)}{\sum_{k=1}^{K} w_m(t_k)}, \quad (1)$$
where
$$\mathrm{AUC}_m(t_k) = \Pr\{R_m(t_k, x_i) > R_m(t_k, x_j) \mid T_i = t_k, T_j > t_k, D_i = m\}, \quad (2)$$
$$w_m(t_k) = \Pr\{T_i = t_k, T_j > t_k, D_i = m\}. \quad (3)$$
The discrimination index in (1) cannot be computed exactly since the distribution that generates the data is unknown. However, the discrimination index can be estimated using a standard estimator, which takes as input the risk values associated with subjects in the dataset. [2] defines the estimator for (1) as
$$\hat{C}^t(m) = \frac{\sum_{i,j=1}^{N} \mathbf{1}\{R_m(T_i, x_i) > R_m(T_i, x_j)\} \cdot \mathbf{1}\{T_j > T_i, D_i = m\}}{\sum_{i,j=1}^{N} \mathbf{1}\{T_j > T_i, D_i = m\}}. \quad (4)$$
Note that in the above (4) only the numerator depends on the model. Henceforth, we will only consider the quantity in the numerator and we write it as
$$C^t(m) = \sum_{i,j=1}^{N} \mathbf{1}\{R_m(T_i, x_i) > R_m(T_i, x_j)\} \cdot \mathbf{1}\{T_j > T_i, D_i = m\}. \quad (5)$$
The above equation can be simplified as
$$C^t(m) = \sum_{i=1}^{|\mathcal{X}_m|} \mathbf{1}\{R_m(T_i(\mathrm{left}), \mathcal{X}_m^i(\mathrm{left})) > R_m(T_i(\mathrm{left}), \mathcal{X}_m^i(\mathrm{right}))\}, \quad (6)$$
where $\mathbf{1}(x)$ is the indicator function, $\mathcal{X}_m^i(\mathrm{left})$ ($\mathcal{X}_m^i(\mathrm{right})$) is the left (right) element of the $i$-th comparable pair in the set $\mathcal{X}_m$ and $T_i(\mathrm{left})$ ($T_i(\mathrm{right})$) is the respective time-to-event. In the next section, we will use the above simplification (6) to construct the loss function for the neural network. Siamese Survival Prognosis Network In this section, we will describe the architecture of the network and the loss functions that we propose to train the network. Denote $H$ as a feed-forward neural network, which is visualized in Fig. 1. It is composed of a sequence of $L$ fully connected hidden layers with "scaled exponential linear units" (SELU) activation. The last hidden layer is fed to $M$ layers of width $K$. Each neuron in the latter $M$ layers estimates the probability that a subject $x$ experiences cause $m$ in the time interval $t_k$, which is given as $\Pr_m(t_k, x)$. For an input covariate $x$ the output from all the neurons is a vector of probabilities $\big(\Pr_m(t_k, x)\big)_{k=1,\dots,K}^{m=1,\dots,M}$. The estimate of the cumulative incidence function computed for cause $m$ at time $t_k$ is given as $\hat{R}_m(t_k, x) = \sum_{i=1}^{k} \Pr_m(t_i, x)$. The final output of the neural network for input $x$ is the vector of estimates of the cumulative incidence function, $H(x) = \big(\hat{R}_m(t_k, x)\big)_{k=1,\dots,K}^{m=1,\dots,M}$. The loss function is composed of three terms: discrimination, accuracy, and a loss term. We cannot use the metric in (6) directly to train the network because it is a discontinuous function (composed of indicators), which can impede training. We overcome this problem by approximating the indicator function using a scaled sigmoid function $\sigma(\alpha x) = \frac{1}{1+\exp(-\alpha x)}$. The approximated discrimination index is given as
$$\hat{C}^t(m) = \sum_{i=1}^{|\mathcal{X}_m|} \sigma\Big(\alpha\,\big[\hat{R}_m(T_i(\mathrm{left}), \mathcal{X}_m^i(\mathrm{left})) - \hat{R}_m(T_i(\mathrm{left}), \mathcal{X}_m^i(\mathrm{right}))\big]\Big). \quad (7)$$
The scaling parameter $\alpha$ determines the sensitivity of the loss function to discrimination. If the value of $\alpha$ is high, then the penalty for error in discrimination is also very high.
Therefore, higher values of $\alpha$ guarantee that the subjects in a comparable pair are assigned concordant risk values. The discrimination part defined above captures a model's ability to discriminate subjects for each cause separately. We also need to ensure that the model can predict the cause accurately. We define the accuracy of a model in terms of a scaled sigmoid function with scaling parameter $\kappa$ as follows:
$$L_1 = \sum_{i=1}^{|\mathcal{X}_m|} \sigma\Big(\kappa\,\big[\hat{R}_{D(\mathrm{left})}(T_i(\mathrm{left}), \mathcal{X}_m^i(\mathrm{left})) - \sum_{m' \neq D(\mathrm{left})} \hat{R}_{m'}(T_i(\mathrm{left}), \mathcal{X}_m^i(\mathrm{left}))\big]\Big). \quad (8)$$
The accuracy term penalizes the risk functions only at the event times of the left subjects in comparable pairs. However, it is important that the neural network is optimized to produce risk values that interpolate well to other time intervals as well. Therefore, we introduce the loss term
$$L_2 = \beta \sum_{m=1}^{M} \sum_{i=1}^{|\mathcal{X}_m|} \sum_{t_k < T_i(\mathrm{left})} \hat{R}_m(t_k, \mathcal{X}_m^i(\mathrm{right}))^2. \quad (9)$$
The loss term ensures that the risk of each right subject is minimized for all times before the time-to-event of the left subject in the respective comparable pair. Intuitively, the loss term can be justified as follows. The right subjects do not experience an event before the time $T_i(\mathrm{left})$. Hence, the probability that they experience an event before $T_i(\mathrm{left})$ should take a small value. The final loss function is the sum of the discrimination terms (described above), the accuracy and the loss terms, and is given as
$$\sum_{m=1}^{M} \hat{C}^t(m) + L_1 + L_2. \quad (10)$$
Finally, we adjust for the event imbalance and the time interval imbalance caused by the unequal number of pairs for each event and time interval with inverse propensity weights. These weights are the frequencies of occurrence of the various events at the various times and multiply the loss functions of the corresponding comparable pairs. We train the feed-forward network using the above loss function (10) and regularize it using SELU dropout [16]. Since the loss function involves the discrimination term, each term in the loss function involves a pairwise comparison. This makes the network training similar to that of a Siamese network [3]. The backpropagation terms now depend on each comparable pair. Experiments This section includes a discussion of hyper-parameter optimization followed by competing risk and survival analysis experiments. We compare against the Fine-Gray model ("cmprsk" R package), Competing Random Forest (CRF) ("randomForestSRC" R package) and the cause-specific (cs) extension of two single-event (non-competing risks) methods, the Cox PH model and [14]. In the cause-specific extension of single-event models, we mark the occurrence of any event apart from the event of interest as censorship and decouple the problem into separate single-event problems (one for each cause); this is a standard way of extending single-event models to competing risk models. In the following results we refer to our method with the acronym SSPN. Hyper-parameter Optimization Optimization was performed using 5-fold cross-validation with fixed censorship rates in each fold. We chose a 60-20-20 division for the training, validation and testing sets. A standard grid search was used to determine the batch size, number of hidden layers, width of the hidden layers and the dropout rate. The optimal values of $\alpha$ and $\beta$ were consistently 500 and 0.01 for all datasets. As previously mentioned, the sets are composed of patient pairs.
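Before turning to the experiments, the following NumPy sketch shows how the three per-pair terms (7)-(9) could be assembled for one cause. It is our own illustration under assumed names and array shapes (R holds the estimated CIF per cause, time bin and subject), not the authors' implementation, and it only computes the forward value of the terms.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pair_loss_terms(R, pairs, t_bins, d_left, alpha=500.0, kappa=500.0, beta=0.01):
    """R: array of shape (M, K, N) with the estimated CIF R[m, k, i] for cause m,
    time bin k and subject i. pairs: comparable pairs (i, j) for cause d_left;
    t_bins: time-bin index of the left subject's event for each pair."""
    M = R.shape[0]
    disc = acc = l2 = 0.0
    for (i, j), k in zip(pairs, t_bins):
        # (7) discrimination: left risk should exceed right risk at T(left)
        disc += sigmoid(alpha * (R[d_left, k, i] - R[d_left, k, j]))
        # (8) accuracy: the true cause's risk should dominate the other causes
        others = sum(R[m, k, i] for m in range(M) if m != d_left)
        acc += sigmoid(kappa * (R[d_left, k, i] - others))
        # (9) this cause's contribution: right subject's risk kept small before T(left)
        l2 += beta * np.sum(R[d_left, :k, j] ** 2)
    return disc, acc, l2
```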
In each training iteration, a batch of pairs was sampled with replacement from the training set, which reduces convergence speed but does not lower performance relative to regular batches [21]. We note that the training sets commonly contain tens of millions of pairs, with patients appearing multiple times on both sides of the pairs. A standard definition of an epoch would consist of a single iteration over all patients. However, in our case, we not only learn patient-specific characteristics but also patient comparison relationships, which means an epoch with a number of iterations equal to the number of patients is not sufficient. On the other hand, defining an epoch as an iteration over all pairs is impractical. Our best empirical results were attained after 100K iterations with Tensorflow on an 8-core Xeon E3-1240, the Adam optimizer [15] and a decaying learning rate, $LR^{-1}(i) = 10^{-3} + i$. Table 1 summarizes the optimal hyper-parameters. SEER The Surveillance, Epidemiology, and End Results Program (SEER) dataset provides information on breast cancer patients during the years 1992-2007. A total of 72,809 patients experienced breast cancer, cardiovascular disease (CVD), other diseases, or were right-censored. The cohort consists of 23 features. Table 2 displays the results for this dataset. We notice that for the infrequent adverse event, CVD, the performance gain is negligible, while for the frequent breast cancer event the gain is significant. However, we wish to remind the reader that our focus is on healthcare, where even minor gains have the potential to save lives. Considering there are 72,809 patients, a performance improvement even as low as 0.1% has the potential to save multiple lives and should not be disregarded. Synthetic Data Due to the relative scarcity of competing risks datasets and methods, we have created an additional synthetic dataset to further validate the performance of our method. We have constructed two stochastic processes with features and event times generated as follows:
$$x_i^1, x_i^2, x_i^3 \sim \mathcal{N}(0, I), \quad T_i^1 \sim \exp\big((x_i^3)^2 + x_i^1\big), \quad T_i^2 \sim \exp\big((x_i^3)^2 + x_i^2\big), \quad (11)$$
where $(x_i^1, x_i^2, x_i^3)$ is the vector of features for patient $i$. For $k = 1, 2$, the features $x^k$ only have an effect on the event time for event $k$, while $x^3$ has an effect on the event times of both events. Note that we assume event times are exponentially distributed with a mean parameter depending on both linear and non-linear (quadratic) functions of the features. Given the parameters, we first produced 30,000 patients; among those, we randomly selected 15,000 patients (50%) to be right-censored at a time randomly drawn from the uniform distribution on the interval $[0, \min\{T_i^1, T_i^2\}]$. (This censoring fraction was chosen to be roughly the same censoring fraction as in the real datasets, and hence to present the same difficulty as found in those datasets.) Table 3 displays the results for the above dataset. We demonstrate the same consistent performance gain as in the previous case. Conclusion Competing risks settings are pervasive in healthcare. They are encountered in cardiovascular diseases, in cancer, and in the geriatric population suffering from multiple diseases. To solve the challenging problem of learning the model parameters from time-to-event data while handling right censoring, we have developed a novel deep learning architecture for estimating personalized risk scores in the presence of competing risks based on the well-known Siamese network architecture.
Our method is able to capture complex non-linear representations missed by classical machine learning and statistical models. Experimental results show that our method is able to outperform existing competing risk methods by successfully learning representations which flexibly describe non-proportional hazard rates with complex interactions between covariates and survival times that are common in many diseases with heterogeneous phenotypes.
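For completeness, the synthetic benchmark in (11) is straightforward to reproduce. The sketch below reflects one reading of the extracted equations, taking each event time as exponentially distributed with mean exp((x^3)^2 + x^k); the exact parameterisation is ambiguous in the text above, and the function name is ours.

```python
import numpy as np

def make_synthetic(n=30_000, censor_frac=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, 3))                       # x1, x2, x3 ~ N(0, I)
    t1 = rng.exponential(np.exp(x[:, 2] ** 2 + x[:, 0]))  # T1, mean exp((x3)^2 + x1) (assumed)
    t2 = rng.exponential(np.exp(x[:, 2] ** 2 + x[:, 1]))  # T2, mean exp((x3)^2 + x2) (assumed)
    time = np.minimum(t1, t2)                             # the earlier event is observed
    event = np.where(t1 <= t2, 1, 2)
    # Right-censor a random half uniformly on [0, min(T1, T2)].
    idx = rng.choice(n, size=int(censor_frac * n), replace=False)
    time[idx] = rng.uniform(0.0, np.minimum(t1, t2)[idx])
    event[idx] = 0
    return x, time, event

x, time, event = make_synthetic()
print(np.bincount(event))   # roughly [15000, ~7500, ~7500]
```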
3,092
1807.05597
2884491022
Deep learning has revolutionised many fields, but it is still challenging to transfer its success to small mobile robots with minimal hardware. Specifically, some work has been done to this effect in the RoboCup humanoid football domain, but results that are performant and efficient and still generally applicable outside of this domain are lacking. We propose an approach conceptually different from those taken previously. It is based on semantic segmentation and does achieve these desired properties. In detail, it is able to process full VGA images in real time on a low-power mobile processor. It can further handle multiple image dimensions without retraining, does not require specific domain knowledge to achieve a high frame rate, and runs on minimal mobile hardware.
Instead, the optimisations applied to our networks are very much motivated by MobileNets @cite_3 . Most notably, we utilise depthwise separable convolutions to significantly reduce the computational complexity of our segmentation networks. Such convolutions split a regular convolution into a filter and a combination step: first a separate 2D filter is applied to each input channel, after which a 1x1 convolution is applied to combine the results of these features. This can be seen as a factorisation of a full convolution that reduces the computational cost by a factor of @math , where @math is the number of output features and @math the kernel size. Not all convolutions can be factorised like this, so separable convolutions have less expressive power, but the results of the original MobileNets and those reported here show they can still perform at high accuracy.
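To illustrate the factorisation, a small sketch (ours) compares the multiply-accumulate count of a full convolution with the depthwise-plus-pointwise variant; under the usual MobileNets accounting the ratio comes out to roughly 1/N + 1/k^2 for N output features and kernel size k.

```python
def full_conv_cost(h, w, c_in, c_out, k):
    """Multiply-accumulates of a standard k x k convolution on an h x w x c_in input."""
    return h * w * c_in * c_out * k * k

def separable_conv_cost(h, w, c_in, c_out, k):
    """Depthwise k x k filter per input channel, then a 1x1 pointwise combination."""
    return h * w * c_in * k * k + h * w * c_in * c_out

h, w, c_in, c_out, k = 120, 160, 32, 64, 3
ratio = separable_conv_cost(h, w, c_in, c_out, k) / full_conv_cost(h, w, c_in, c_out, k)
print(ratio)                    # ~0.127
print(1 / c_out + 1 / k ** 2)   # ~0.127, the 1/N + 1/k^2 approximation
```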
{ "abstract": [ "We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization." ], "cite_N": [ "@cite_3" ], "mid": [ "2612445135" ] }
Deep Learning for Semantic Segmentation on Minimal Hardware
Deep learning (DL) has greatly accelerated progress in many areas of artificial intelligence (AI) and machine learning. Several breakthrough ideas and methods, combined with the availability of large amounts of data and computation power, have lifted classical artificial neural networks (ANNs) to new heights in natural language processing, time series modelling and advanced computer vision problems [11]. For computer vision in particular, networks using convolution operations, i.e., Convolutional Neural Networks (CNNs), have had great success. Many of these successful applications of DL rely on cutting edge computation hardware, specifically high-end GPU processors, sometimes in clusters of dozens to hundreds of machines [15]. Low-power robots, such as the robotic footballers participating in RoboCup, are not able to carry such hardware. It is not a surprise that the uptake of DL in the domain of humanoid robotic football has lagged behind. Some demonstrations of its use became available recently [16,1,5,8,14]. However, as we will discuss in the next section, these applications are so far rather limited; either in terms of performance or in terms of their generalisability for areas other than RoboCup. In this paper, we will address these issues and present a DL framework that achieves high accuracy, is more generally applicable and still runs at a usable frame rate on minimal hardware. The necessary conceptual switch and main driver behind these results is to apply DL to the direct semantic segmentation of camera images, in contrast to most previous work in the humanoid robotic football domain that has applied it to the final object detection or recognition problem. Semantic segmentation is the task of assigning a class label to each separate pixel in an image, in contrast to predicting a single output for an image as a whole, or some sub-region of interest. There are three primary reasons why this approach is attractive. Firstly, semantic segmentation networks can be significantly smaller in terms of learnable weights than other network types. The number of weights in a convolution layer is reduced significantly compared to the fully connected layers of classical ANNs, by 'reusing' filter weights as they slide over an image. However, most image classification or object detection networks still need to convert a 2D representation into a single output, for which they do use fully connected layers on top of the efficient convolution layers. The number of weights of fully connected layers is quadratic in their size, which means they can be responsible for a major part of the computational complexity of the network. Semantic segmentation networks on the other hand typically only have convolution layers-they are Fully Convolutional Networks (FCNs)-and so do away with fully connected ones, and the number of their weights only grows linearly with the number of layers used. Secondly, the fully convolutional nature also ensures that the network is independent of image resolution. The input resolution of a network with a fully connected output layer is fixed by the number of weights in that layer. Such a network, trained on data of those dimensions, cannot readily be reused on data of differing sizes; the user will have to crop or rescale the input data, or retrain new fully connected layers of the appropriate size. Convolution operations on the other hand are agnostic of input dimensions, so a fully convolutional network can be used at any input resolution 1 . 
This provides very useful opportunities. For example, if a known object is tracked, or an object is known to be close to the camera, the algorithm allows for an on-demand up and down scaling of vision effort. Instead of processing a complete camera frame when searching for such an aforementioned object, only an image subset or a downscaled version of a camera frame is processed. Finally, semantic segmentation fits in directly with many popular vision pipelines used currently in the RoboCup football domain. Historically, the domain consisted of clearly colour coded concepts: green field, orange ball, yellow/blue goalposts. Commonly a lookup-table based approach is used to label each pixel separately, after which fast specialised connected component, scanning, or integral image methods are applied to detect and localise all relevant objects. Over the years the scenario has developed to be more challenging (e.g., natural light, limited colours) and unstructured, making the lookup-table methods less feasible. Using a semantic segmentation CNN that can learn to use more complex features would allow the simple replacement of these methods and still reuse all highly optimised algorithms of the existing vision pipeline. Network Architecture As mentioned before, our approach is based on fully convolutional semantic segmentation networks. The main structure of our networks is similar to popular encoder-decoder networks, such as U-Net [13] and SegNet [3], mainly following the latter. In such networks, a first series of convolution layers encode the input into successively lower resolution but higher dimensional feature maps, after which a second series of layers decode these maps into a full-resolution pixelwise classification. This architecture is shown in Fig. 1. SegNet and U-Net both have the property that some information from the encoder layers are fed into the respective decoder layers of the same size, either in terms of maxpooling indices, or full feature maps. This helps overcoming the loss of fine detail caused by the resolution reduction along the way. As good performance is still possible without these connections, we do not use those here. They in fact introduce a significant increase in computation load on our hardware, due to having to combine tensors in possibly significantly different memory locations. Another modification is to use depthwise separable convolution, as introduced by MobileNets [9], as a much more efficient alternative to full 3D convolution. This is one of the major contributions to efficiency of our networks, without significantly decreasing their performance. To study the trade-off between network minimalism, efficiency and performance, we create, train and evaluate a number of varieties of the above network, with different combinations of the following parameter values: Larger and smaller values for these parameters have been tried out, but we only report those here that resulted in networks that were able to learn the provided task to some degree, but were light enough to be run on the minimal test hardware. Specific instantiations will be denoted with L x F y M z S w with parameter values filled into the place holders. Experiments The networks described in the previous section are trained to segment ball pixels in images taken by a Kid-Size RoboCup robot on a competition pitch. Specifically, the image set bitbots-set00-04 from the Bit-Bots' Imagetagger 2 was used. It contains 1000 images 3 with 1003 bounding box ball annotations. 
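Before turning to the training details, here is a rough Keras sketch of the kind of network described above: a fully convolutional encoder-decoder built from depthwise separable convolutions with SELU activations and a per-pixel softmax. Layer counts, widths and the pooling-based downsampling are placeholders, not the exact instantiations evaluated here.

```python
from tensorflow.keras import layers, models

def build_segmenter(num_classes=2, filters=(8, 16, 32)):
    """Fully convolutional encoder-decoder; the input size is left open so the
    same weights can be applied to different image resolutions."""
    inp = layers.Input(shape=(None, None, 3))
    x = inp
    for f in filters:                                   # encoder: halve resolution per stage
        x = layers.SeparableConv2D(f, 3, padding="same", activation="selu")(x)
        x = layers.MaxPooling2D(2)(x)
    for f in reversed(filters):                         # decoder: restore resolution
        x = layers.UpSampling2D(2)(x)
        x = layers.SeparableConv2D(f, 3, padding="same", activation="selu")(x)
    out = layers.Conv2D(num_classes, 1, activation="softmax")(x)  # per-pixel class probabilities
    return models.Model(inp, out)

model = build_segmenter()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```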
For deriving the target pixel label masks for training the networks, the rectangular annotations are converted to filled ellipsoids. Figure 2 shows an example of the input images and targets (panels: RGB input, target mask, and the outputs of L4F5M2S1, L3F5M2S2 and L3F4M1.5S2). We use the TensorFlow library to construct, train and run the networks. The networks are trained on an NVIDIA GeForce GTX 1080-ti GPU. For testing the performance of the networks we map the class probabilities from the softmax output to discrete class labels and use this to calculate the commonly used Intersection over Union (IoU) score as IoU = TP / (TP + FP + FN), where TP is the number of true positive ball pixels, FP the number of false positives and FN the number of false negatives. Due to the extreme class imbalance, the networks hardly ever predict the probability of a pixel being part of a ball, P(B), to be above 0.5. This means that if we use the most probable class as final output, the IoU score often is 0, even though the networks do learn to assign relatively higher probability at the right pixels. Instead we find the threshold θ* for P(B) that results in the best IoU score for each trained network. Finally, since the original goal is to develop networks that can run on minimal hardware, the networks are run and timed on such hardware, belonging to a Kid-Size humanoid football robot, namely an Odroid-XU4. This device is based on a Samsung Exynos 5422 octa-core CPU with Cortex-A15 cores at 2 GHz and Cortex-A7 cores, which is the same as used in some 2015 model flagship smartphones. Before running the networks, they are optimised using TensorFlow's Graph Transform tool, which is able to merge several operations, such as batch normalisation, into more efficient ones. The test program and TensorFlow library are compiled with all standard optimisation flags for the ARM Neon platform. We time the networks both on full 640 × 480 images and on 320 × 256 images. Results We firstly evaluate the performance of semantic segmentation networks trained for the official RoboCup ball. We compare the performance and runtime of the different network instantiations with each other, as well as to a baseline segmentation method. This method is based on a popular fast lookup table (LUT) approach, where the table directly maps pixel values to object classes. To create the table, we train a Support Vector Machine (SVM) on the same set as the CNNs to classify pixels. More complex and performant methods may be chosen, perhaps specifically for the robot football scenario; however, we selected this method to reflect the same workflow of training a single model on simple pixel data, without injecting domain-specific knowledge. We did improve performance by using input in HSV colour space and applying grid search to optimise its hyper-parameters. Secondly, we extend the binary case and train the networks for balls and goal posts, and compare the network performance with the binary segmentation. We first analyse the segmentation quality of the set of networks and the influence of their parameters on their performance. The best network is L4F5M2S1 with a best IoU of 0.804. As may be expected, this is the one with the most layers, most filters, highest multiplication factor and densest stride. The least performant network is one of the simplest in terms of layers and features: L3F3M1.25S1 with a best IoU of 0.085. Perhaps surprisingly, the version of that network with stride 2 manages to perform better, with a score of 0.39.
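The threshold search behind these IoU numbers can be written in a few lines; this sketch (ours) scans candidate thresholds on the predicted ball probability P(B) and keeps the one that maximises the IoU.

```python
import numpy as np

def best_iou_threshold(p_ball, target, thresholds=np.linspace(0.01, 0.99, 99)):
    """p_ball, target: flat arrays with the predicted P(B) and the 0/1 ball labels."""
    best_t, best_iou = 0.5, -1.0
    for t in thresholds:
        pred = p_ball >= t
        tp = np.sum(pred & (target == 1))
        fp = np.sum(pred & (target == 0))
        fn = np.sum(~pred & (target == 1))
        iou = tp / (tp + fp + fn) if (tp + fp + fn) > 0 else 0.0
        if iou > best_iou:
            best_t, best_iou = t, iou
    return best_t, best_iou
```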
Figure 3 shows the distributions of performance over networks with the same values for each parameter. One can see that overall more complex networks score higher, but that the median network with stride 2 performs better than the median with stride 1. The best performing network in terms of IoU is also the least efficient one, but the second best is a full 74 ms faster with a drop in IoU of only 0.003. The linear fits show that there is indeed a trend within each cluster of better performance given the runtime, but it is clear that this is not generally the case: networks with similar runtimes can vary greatly in achieved performance. Binary segmentation The SVM-based LUT method, though being significantly faster, scores well below most networks, with an IoU of 0.085. This is because such a pixel-by-pixel method does not consider the surrounding area and thus has no way to discern pixels with similar colours, resulting in many false positives for pixels that have colours that are also found in the ball. In contrast, the CNNs can perform much more informed classification by utilising the receptive field around each pixel. From this figure we can conclude the same as from Fig. 3, that introducing a stride of 2 does not significantly reduce performance, but with the addition that it does make the network run significantly faster. The best network with stride 2, L 3 F 5 M 2 S 2 , has an IoU of only 0.066 less than the best network (a drop of just 8%), but runs over twice as fast, at more than 5 frames per second. On the lower resolution of 320 × 256 the best stride 2 networks achieve frame rates of 15 to 20 frames per second. Table 1 lists the results for all networks. Multi-class segmentation Binary classification is too limited for robotic football, or other real world scenarios. To study the more general usability of our method, we extend the binaryclass segmentation case from Sect. 5.1. The same dataset as before is used, but with additionally goalposts annotated as a third class. We selected the best stride 1 and best stride 2 networks to train. These two networks are kept the same, except for an additional channel added to the last decoding layer and to the softmax layer. We found it to be difficult for the networks to successfully learn to segment the full goalpost, not being able to discern higher sections above the field edge from parts of the background. Combined with the fact that the robots typically use points on the ground, as they are able to better judge their distance, we select the bottom of the goalposts by labelling a circle with a radius of 20 pixels in the target output where the goalposts touch the field in the images. Because of the additional difficulty of learning features for an extra class, the IoU score for the ball class dropped slightly for both networks: to 0.754 The worse scores on the goalposts are mostly due to false positives, either marking too large an area or mislabelling other objects, especially in the case of the stride 2 network. Several reasons contribute to this. Firstly, the data used is typical for an image sequence of a robot playing a game of football, where it most of the time is focused on the ball. This results in goals being less often visible, and thus in the data being unbalanced: at least one ball is visible in all 1000 images, whereas at least one goal post is visible in only 408 images. Secondly, goal posts are less feature-rich, so more difficult to discern from other objects. 
Finally, our annotation method does not mark a well-distinguished area, making it harder for the networks to predict its exact shape. Further research is required to alleviate these issues and improve performance; however, the results obtained here provide evidence that our approach can handle multi-class segmentation with only little performance reduction. (Figure panels: RGB input, target mask, and the outputs of L4F5M2S1 and L3F5M2S2.) Conclusions We have developed a minimal deep learning semantic segmentation architecture to be run on minimal hardware. We have shown that such a network can achieve good performance in segmenting the ball, and show promising results for additionally detecting goalposts, in a RoboCup environment with a useful frame rate. Table 2 lists the resolutions and frame rates reported by other authors alongside ours. It must be noted that a direct comparison is difficult, because of the different outputs of the used CNNs and the different robot platforms, but our approach is the only one that has all of the following properties:
- Processes full VGA images at 5 fps and QVGA at 15 to 20 fps
- Can handle multiple image dimensions without retraining
- Does not require task-specific knowledge to achieve a high frame rate
- Achieves all this on minimal mobile hardware
For achieving full object localisation as given by the other solutions, additional steps are still required. However, because the output of the semantic segmentation is in the same form as that of a lookup-table based labelling approach, any already existing methods built on top of such a method can directly be reused. For instance, an efficient, and still task-agnostic, connected-component based method previously developed by us readily fits onto the architecture outlined here and performs the final object detection step within only 1 to 2 ms. Table 2 (approach; input size; frame rate; remarks):
- N/A; 11-22; task-dependent region proposal
- Cruz et al. [5]; 24 × 24; 440; task-dependent region proposal
- Javadi et al. [10]; N/A; 240; no loss: 6 fps; task dependent
- Da Silva et al. [6]; 110 × 110; 8; predict end-to-end desired action
- Hess et al. [8]; 32 × 32; 50; focus on generation of training data
- Schnekenburger et al. [14]; 640
By delaying the use of task-dependent methods, one actually has an opportunity to optimise the segmentation output for such methods, by varying the threshold used to determine the final class pixels. For specific use cases it may be desirable to choose a threshold that represents a preference for either a high true positive rate (recall), e.g. when a robot's vision system requires complete segmentation of the ball and/or it has good false-positive filtering algorithms, or for a low false positive rate (fall-out), e.g. when it can work well with only partly segmented balls, but struggles with too many false positives.
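As an illustration of such task-agnostic post-processing, here is a minimal sketch (ours, using scipy rather than the implementation referred to above) that turns a thresholded ball mask into detection candidates via connected components:

```python
import numpy as np
from scipy import ndimage

def ball_candidates(mask, min_pixels=20):
    """mask: boolean array of predicted ball pixels. Returns (x, y, size) of
    connected components large enough to be plausible ball detections."""
    labels, n = ndimage.label(mask)
    candidates = []
    for comp in range(1, n + 1):
        ys, xs = np.nonzero(labels == comp)
        if ys.size >= min_pixels:
            candidates.append((xs.mean(), ys.mean(), ys.size))
    return candidates
```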
2,765
1807.05151
2951039794
Investigative journalism in recent years is confronted with two major challenges: 1) vast amounts of unstructured data originating from large text collections such as leaks or answers to Freedom of Information requests, and 2) multi-lingual data due to intensified global cooperation and communication in politics, business and civil society. Faced with these challenges, journalists are increasingly cooperating in international networks. To support such collaborations, we present new/s/leak 2.0, the new version of our open-source software for content-based searching of leaks. It includes three novel main features: 1) automatic language detection and language-dependent information extraction for 40 languages, 2) entity and keyword visualization for efficient exploration, and 3) decentralized deployment for analysis of confidential data from various formats. We illustrate the new analysis capabilities with an exemplary case study.
@cite_10 is a more advanced open-source application developed by computer scientists in collaboration with journalists to support investigative journalism. The application supports import of PDF, MS Office and HTML documents, document clustering based on topic similarity, a simplistic location entity detection, full-text search, and document tagging. Since this tool is already mature and has successfully been used in a number of published news stories, we adapted some of its most useful features, such as document tagging and a keyword-in-context (KWIC) view for search hits. Furthermore, in new/s/leak we concentrate on intuitive and visually pleasing approaches to display extracted contents for a fast exploration of collections.
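The keyword-in-context view mentioned above boils down to slicing a window of text around each search hit; a minimal sketch (ours, not Overview's or new/s/leak's code):

```python
import re

def kwic(text, term, width=30):
    """Return keyword-in-context snippets: each hit with `width` characters
    of surrounding context on either side."""
    snippets = []
    for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
        left, right = max(0, m.start() - width), m.end() + width
        snippets.append(text[left:m.start()] + "[" + m.group(0) + "]" + text[m.end():right])
    return snippets

doc = "The informant reported to the agency before the informant file was closed."
for s in kwic(doc, "informant"):
    print(s)
```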
{ "abstract": [ "For an investigative journalist, a large collection of documents obtained from a Freedom of Information Act request or a leak is both a blessing and a curse: such material may contain multiple newsworthy stories, but it can be difficult and time consuming to find relevant documents. Standard text search is useful, but even if the search target is known it may not be possible to formulate an effective query. In addition, summarization is an important non-search task. We present Overview , an application for the systematic analysis of large document collections based on document clustering, visualization, and tagging. This work contributes to the small set of design studies which evaluate a visualization system “in the wild”, and we report on six case studies where Overview was voluntarily used by self-initiated journalists to produce published stories. We find that the frequently-used language of “exploring” a document collection is both too vague and too narrow to capture how journalists actually used our application. Our iterative process, including multiple rounds of deployment and observations of real world usage, led to a much more specific characterization of tasks. We analyze and justify the visual encoding and interaction techniques used in Overview 's design with respect to our final task abstractions, and propose generalizable lessons for visualization design methodology." ], "cite_N": [ "@cite_10" ], "mid": [ "2027855569" ] }
New/s/leak 2.0 -Multilingual Information Extraction and Visualization for Investigative Journalism
In the era of digitization, journalists are confronted with major changes drastically challenging the way news is produced for a mass audience. Not only do digital publishing and direct audience feedback through online media influence what is reported on, how and by whom, but digital data itself also becomes a source and subject of newsworthy stories, a development described by the term "data journalism". According to [12], 51% of all news organizations in 2017 already employed dedicated data journalists and most of them reported a growing demand. The systematic analysis of digital social trace data blurs the line between journalism and (computational) social science. However, social scientists and journalists differ distinctively in their goals. Confronted with a huge haystack of digital data related to some social or political phenomenon, social scientists usually are interested in quantitatively or qualitatively characterizing this haystack, while journalists actually look for the needle in it to tell a newsworthy story. This 'needle in the haystack' problem becomes especially vital for investigative stories confronted with large and heterogeneous datasets. Most of the information in such datasets is contained in written unstructured text form, for instance in scanned documents, letter correspondences, emails or protocols. Sources of such datasets typically range from 1) official disclosures of administrative and business documents, 2) court-ordered revelation of internal communication, 3) answers to requests based on Freedom of Information (FoI) acts, and 4) unofficial leaks of confidential information. In many cases, a public revelation of such confidential information benefits a democratic society, since it contributes to revealing corruption and abuse of power, thus strengthening the transparency of decisions in politics and the economy. To support this role of investigative journalism, we introduce the second, substantially re-engineered and improved version of our software tool new/s/leak ("network of searchable leaks") [16]. It is developed by experts in natural language processing and visualization in computer science in cooperation with journalists from "Der Spiegel", a large German news organization. New/s/leak 2.0 serves three central requirements that have not been addressed sufficiently by previously existing solutions for investigative and forensic text analysis:
1. Since journalists are primarily interested in stories around persons, organizations and locations, we adopt a visual exploration approach centered around named entities and keywords.
2. Many tools only work for English documents or a small number of other 'big languages'. To foster international collaboration, our tool allows for simultaneous analysis of documents from a set of currently 40 languages.
3. Work with confidential data such as unofficial leaks requires a decentralized analysis environment, which can be used potentially disconnected from the internet.
We distribute new/s/leak as a free, open-source server infrastructure via Docker containers, which can be easily deployed by both news organizations and single journalists. In the following sections, we introduce related work and discuss technical aspects of new/s/leak. We also illustrate its analysis capabilities in a brief case study. The most reputable application is DocumentCloud, an open platform designed for journalists to annotate and search in (scanned) text documents.
It is supported by popular media partners such as the New York Times, and PBS. Besides fulltext search it provides automatic named entity recognition (NER) based on OpenCalais [15] in English, Spanish and German. Data Wrangling Extracting text and metadata from various formats into a format readable by a specific analysis tool can be a tedious task. In an investigative journalism scenario it can even be a deal breaker since time is an especially valuable resource and file format conversion might not be a task journalists are well trained in. To offer easy access to as many file formats as possible in new/s/leak, we opted for a close integration with Hoover, 8 a set of open-source tools for text extraction and search in large text collections. Hoover is developed by the European Investigative Collaborations (EIC) network 9 with a special focus on large data leaks and heterogeneous data sets. It can extract data from various text file formats (txt, html, docx, pdf) but also extracts from archives (zip, tar, etc.) and email inbox formats (pst, mbox, eml). The text is extracted along with metadata from files (e.g. file name, creation date, file hash) and header information (e.g. subject, sender, receiver in case of emails). Extracted data is stored in an ElasticSearch index. Then, new/s/leak connects directly to Hoover's index to read full texts and metadata for its own information extraction pipeline. Multilingual Information Extraction The core functionality of new/s/leak to support investigative journalists is the automatic extraction of various kinds of entities from text to facilitate the exploration and the sense-making process from large collections. Since a lot of steps in this process involve language dependent resources, we put emphasis on supporting as many languages as possible. Preprocessing: Information extraction is implemented as a configurable UIMA pipeline [4]. Text documents and metadata from a Hoover collection (see Section 4) are read in parallelized manner and put through a chain of automatic annotators. In the final step of the chain, results from annotation processes are indexed in an ElasticSearch index for later retrieval and visualization. First, we identify the language of each document. Second, we separate sentences and tokens in each text. To guarantee compatibility with various Unicode scripts in different languages, we rely on the ICU4J library [8], which provides sentence and word boundary detection. Dictionary and pattern matching: In many cases, journalists follow some hypothesis to test for their investigative work. Such a proceeding can involve looking for mentions of already known terms or specific entities in the data. This can be realized by lists of dictionaries provided to the initial information extraction process. New/s/leak annotates every mention of a dictionary term with its respective list type. Dictionaries can be defined in a language-specific fashion, but also applied across all languages in the corpus. Extracted dictionary entities are visualized later on along with extracted named entities. In addition to self-defined dictionaries, we annotate email addresses, telephone numbers, and URLs with regular expression patterns. This is useful, especially for email leaks, to reveal communication networks of persons. Temporal expressions: Tracking documents across time of their creation can provide valuable information during investigative research. Unfortunately, many document sets (e.g. 
collections of scanned pages) do not come with a specific document creation date as structured metadata. To offer a temporal selection of contents to the user, we extract mentions of temporal expressions in documents. This is done by integrating the Heideltime temporal tagger [14] in our UIMA workflow. Heideltime provides automatically learned rules for temporal tagging in more than 200 languages. Extracted timestamps can be used to select and filter documents. Named Entity Recognition: We automatically extract person, organization and location names from all documents to allow for an entity-centric exploration of the data collection. Named entity recognition is done using the polyglot-NER library [1]. Polyglot-NER contains sequence classification for named entities based on weakly annotated training data automatically composed from Wikipedia 10 and Freebase 11 . Relying on the automatic composition of training data allows polyglot-NER to provide pre-trained models for 40 languages (see Appendix). Keyterm extraction: To further summarize document contents in addition to named entities, we automatically extract keyterms and phrases from documents. For this, we have implemented our own keyterm extraction library. The approach is based on statistical comparison of document contents with generic reference data. Reference data for each language is retrieved from the Leipzig Corpora Collection [5], which provides large representative corpora for language statistics. We included resources for the 40 languages also covered by the NER library. We employ log-likelihood significance as described in [11] to measure the overuse of terms (i.e. keyterms) in our target documents compared to the generic reference data. Ongoing sequences of keyterms in target documents are concatenated to key phrases if they occur regularly in that exact same order. Regularity is determined with the Dice statistic, which allows reliably to extract multiword units such as "stock market" or "machine learning" in the documents. Since the keyterm extraction method may also extract named entities there can be a substantial overlap between the two types. To allow for a separate display of entities and keywords in a later step, we ignore keyterms that already have been identified as named entities. The remaining top keyterms are used to provide a brief summary of each document for the user. Entity-and Keyword-Centric Visualization and Filtering: Access to unstructured text collections via named entities and keyterms is essential for journalistic investigations. To support this, we included two types of graph visualization, as it is shown in Figure 2. The first graph, called entity network, displays entities in a current document selection as nodes and their joint occurrence as edges between nodes. Different node colors represent different types such as per-son, organization or location names. Furthermore, mentions of entities that are annotated based on dictionary lists or annotated by a given regular expression are included in the entity network graph. The second graph, called keyword network, is build based on the set of keywords representing the current document selection. Besides to fulltext search, visualizations are the core concept to navigate in an unknown dataset. Nodes in networks as well as entities, metadata or document date ranges displayed as frequency bar charts in the user interface (see Appendix) can be added as filter to constrain the current set of documents. 
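The entity and keyword networks described above reduce to counting joint occurrences per analysis unit; a small sketch (ours) with networkx, where edge weights count how often two entities appear in the same document or page (the entity names below are purely illustrative):

```python
from itertools import combinations
import networkx as nx

def cooccurrence_network(doc_entities):
    """doc_entities: one set of extracted entity names per document (or page)."""
    g = nx.Graph()
    for entities in doc_entities:
        for a, b in combinations(sorted(entities), 2):
            weight = g.get_edge_data(a, b, {"weight": 0})["weight"]
            g.add_edge(a, b, weight=weight + 1)
    return g

g = cooccurrence_network([
    {"Alice Example", "Acme Corp"},
    {"Alice Example", "Acme Corp", "Berlin"},
    {"Bob Sample", "Berlin"},
])
print(g["Alice Example"]["Acme Corp"]["weight"])  # 2
```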
By this, journalists can easily drill down into interesting aspects of the dataset or zoom out again by removing filter conditions. From current sub-selections, user can easily switch to the fulltext view of a document, which also highlights extracted entities and allows for tagging of interesting parts. Exemplary Case Study: Parliamentary Investigations In the following, we present an exemplary case studies to illustrate the analysis capabilities of our tool. The scenario is centered around the parliamentary investigations of the "Nationalsozialistischer Untergrund" (NSU) case in Germany. Between 1998 and 2011, a terror network of neo-Nazis was responsible for murders of nine migrants and one policewoman. Although many informants of the police and the domestic intelligence service (Verfassungsschutz) were part of the larger network, the authorities officially did neither take notice of the group nor the racist motives behind the murder cases. Since 2012, this failure of domestic intelligence led to a number of parliamentary investigations. We collected 7 final reports from the Bundestag and federal parliaments investigating details of the NSU case. Altogether the reports comprise roughly 12,000 pages. In new/s/leak, long documents such as the reports can be split into more manageable units of certain length such as paragraphs or pages. Smaller units ensure that co-occurrence of extracted entities and keywords actually represents semantic coherence. By splitting into sections of an average page size as a minimum, we receive 12,021 analysis units. For our investigation scenario, we want to follow a certain hypothesis in the data. Since the NSU core group acted against a background of national socialist ideology, we try to answer the following question: To which extent did members of the neo-Nazi network associate themselves with protagonists of former Nazi Germany? To answer this question, we feed a list of prominent NSDAP party members collected from Wikipedia as additional dictionary into the information extraction process. The resulting entity network reveals mentioning of 17 former Nazis in the reports with Rudolf Heß as the most frequent one. He is celebrated as a martyr in the German neo-Nazi scene. In the third position, the list reveals Adolf Höh, a much less prominent NSDAP party member. Filtering down the collection to documents containing reference to this person reveals the context of the NSU murder case in Dortmund 2006. At that time an openly acting neo-Nazi group existed in Dortmund, which named itself after the former SA-member Höh who got killed by communists in 1930. The network also reveals connections to another SA-member, Walter Spangenberg, who got killed in Cologne during the 1930s as well. The fact that places of the two killings, Dortmund and Cologne, are near places of two later NSU attacks led to a specific theory about the patterns of the NSU murders in the parliamentary investigation. As the keyterm network reveals, the cases are discussed with reference to the term "Blutzeugentheorie" ('lit. trans: theory of blood witnesses') in the reports. Our combination of an external name list with the automatic extraction of location entities and keyterms quickly led us to this striking detail of the entire case. Discussion In this article, we introduced version 2.0 of new/s/leak, a software to support investigative journalism. 
It aims to support journalists throughout the entire process of analyzing large, unstructured and heterogeneous document collections: data cleaning and formatting, metadata extraction, information extraction, interactive filtering, close reading, and tagging interesting findings. We reported on technical aspects of the software based on the central idea of approaching an unknown collection via extraction and display of entity and keyterm networks. As a unique feature, our tool allows simultaneous analysis of multi-lingual collections in up to 40 languages. We further demonstrated in an exemplary use case that the software is able to support investigative journalists in the two major ways of conducting research (cf. [3]): 1. exploration and story finding in an unknown collection, and 2. testing hypotheses based on previous information. Cases such as the "Paradise Papers", "Dieselgate" or "Football leaks" made it especially clear that a decentralized collaboration tool for analysis across different languages is needed for effective journalistic investigation. We are convinced that our software will contribute to this goal in the work of journalists at the "Der Spiegel" news organization and, since it is released as open source, also for other news organizations and individual journalists.
2,273